
Online Talk

Show Me How: Build Streaming Data Pipelines from SQL Server to MongoDB

Available On-demand

Data pipelines do the heavy lifting of integrating, transforming, and preparing data for downstream systems in operational use cases. But as real-time data streaming becomes a business-critical technology, legacy databases and rigid batch-based pipelines hold organizations back. Streaming pipelines are now essential for businesses to serve modern consumers.

During this hands-on workshop, we'll guide you through the journey of an ecommerce company that started with siloed data spread across multiple environments. See how integrating and processing real-time customer order and clickstream data from various sources enabled the company to unlock a Customer 360 view and build hyper-personalized campaigns that give customers faster, better experiences.

You’ll learn how to:

  • Connect data sources and sinks to Confluent Cloud using Confluent’s fully managed SQL Server CDC (Change Data Capture) Source Connector and MongoDB Atlas Sink Connector (see the connector sketch after this list).
  • Process data streams using ksqlDB to join and enrich customer data, generating a unified 360 view (see the join sketch after this list).
  • Govern data using Schema Registry, Stream Lineage, and Stream Catalog.
  • Share ready-to-use data products securely in one click with teams and external organizations.
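
To preview the first step: the connector setup can be scripted from ksqlDB itself, which doubles as a control plane for Confluent’s fully managed connectors. The sketch below is illustrative only; the connector names, hostnames, credentials, topics, and tables are placeholders, and the exact config keys for the managed SQL Server CDC Source and MongoDB Atlas Sink connectors should be taken from Confluent’s connector documentation.

    -- Hypothetical connector names; placeholder hosts and credentials throughout.
    CREATE SOURCE CONNECTOR sqlserver_orders_source WITH (
      'connector.class'      = 'SqlServerCdcSource',
      'kafka.api.key'        = '<api-key>',
      'kafka.api.secret'     = '<api-secret>',
      'database.hostname'    = 'sqlserver.example.com',
      'database.port'        = '1433',
      'database.user'        = '<db-user>',
      'database.password'    = '<db-password>',
      'database.dbname'      = 'ecommerce',
      'database.server.name' = 'sqlserver',
      'table.include.list'   = 'dbo.orders, dbo.customers',
      'output.data.format'   = 'AVRO',
      'tasks.max'            = '1'
    );

    -- Sink the enriched results to MongoDB Atlas once the ksqlDB join below exists.
    CREATE SINK CONNECTOR mongodb_atlas_sink WITH (
      'connector.class'      = 'MongoDbAtlasSink',
      'kafka.api.key'        = '<api-key>',
      'kafka.api.secret'     = '<api-secret>',
      'topics'               = 'CUSTOMER_ORDERS_ENRICHED',
      'input.data.format'    = 'AVRO',
      'connection.host'      = 'cluster0.example.mongodb.net',
      'connection.user'      = '<atlas-user>',
      'connection.password'  = '<atlas-password>',
      'database'             = 'ecommerce',
      'collection'           = 'customer360',
      'tasks.max'            = '1'
    );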
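
For the processing step, a minimal ksqlDB sketch of the enrichment join might look like this. The stream, table, topic, and column names are assumptions for illustration; the workshop uses its own schemas.

    -- Order events captured by the CDC source (illustrative topic and schema;
    -- real CDC topics may carry a change-event envelope).
    CREATE STREAM orders (
      order_id    VARCHAR KEY,
      customer_id VARCHAR,
      total       DOUBLE
    ) WITH (KAFKA_TOPIC = 'sqlserver.dbo.orders', VALUE_FORMAT = 'AVRO');

    -- Latest state per customer, keyed by customer_id.
    CREATE TABLE customers (
      customer_id VARCHAR PRIMARY KEY,
      name        VARCHAR,
      email       VARCHAR
    ) WITH (KAFKA_TOPIC = 'sqlserver.dbo.customers', VALUE_FORMAT = 'AVRO');

    -- Enrich each order with customer attributes; the result lands on the
    -- CUSTOMER_ORDERS_ENRICHED topic that the MongoDB Atlas sink reads.
    CREATE STREAM customer_orders_enriched AS
      SELECT o.customer_id, o.order_id, o.total, c.name, c.email
      FROM orders o
      LEFT JOIN customers c ON o.customer_id = c.customer_id
      EMIT CHANGES;

Because both formats are AVRO, ksqlDB registers and versions the schemas in Schema Registry automatically, which is where the governance features in the last two bullets pick up.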

Don’t miss the Q&A! Register today to watch on demand and get started building your own streaming pipelines.


Presenter

Maygol Kananizadeh

Senior Developer Adoption Manager, Confluent

Watch Now
