Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Operating Kafka at scale can consume your cloud spend and engineering time, and everyday tasks like scaling or deploying new clusters can be complex and require dedicated engineers. This post focuses on how Confluent Cloud is 1) Resource Efficient, 2) Fully Managed, and 3) Complete.
In part 2 of our blog series on understanding and optimizing your Kafka costs, we dive into how to estimate costs stemming from the development and operations personnel needed to self-manage Kafka.
It's hard to properly calculate the cost of running Kafka. In part 1 of 4, learn to calculate your Kafka costs based on your infrastructure, networking, and cloud usage.
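To make that concrete, here is a toy back-of-the-envelope sketch of the kind of arithmetic such an estimate involves; every price and sizing figure below is a hypothetical placeholder, not a number from the series:

```python
# Back-of-the-envelope Kafka infrastructure cost sketch.
# All prices and sizing figures below are hypothetical placeholders.

BROKER_COUNT = 6                  # hypothetical cluster size
BROKER_HOURLY_RATE = 0.40         # hypothetical $/hour per broker instance
STORAGE_GB = 2_000                # hypothetical retained data across the cluster
STORAGE_RATE_GB_MONTH = 0.10      # hypothetical $/GB-month
EGRESS_GB_MONTH = 5_000           # hypothetical cross-zone/egress traffic
EGRESS_RATE_GB = 0.02             # hypothetical $/GB

HOURS_PER_MONTH = 730

compute = BROKER_COUNT * BROKER_HOURLY_RATE * HOURS_PER_MONTH
storage = STORAGE_GB * STORAGE_RATE_GB_MONTH
network = EGRESS_GB_MONTH * EGRESS_RATE_GB

print(f"compute: ${compute:,.2f}/month")
print(f"storage: ${storage:,.2f}/month")
print(f"network: ${network:,.2f}/month")
print(f"total:   ${compute + storage + network:,.2f}/month")
```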
If you’ve been working with Kafka Streams and have seen an “unknown magic byte” error, you might be wondering what a magic byte is in the first place, and how to resolve the error. This post answers both questions.
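For background, payloads produced with Confluent Schema Registry serializers follow a wire format that begins with a single "magic byte" (currently 0) followed by a 4-byte big-endian schema ID; the error usually means a deserializer expected that prefix and didn't find it. A minimal sketch of inspecting a raw record (the example bytes are made up):

```python
import struct

MAGIC_BYTE = 0  # Confluent Schema Registry wire-format magic byte

def inspect_payload(raw: bytes) -> int:
    """Return the schema ID if `raw` follows the Schema Registry
    wire format: [magic byte][4-byte big-endian schema ID][data]."""
    if len(raw) < 5 or raw[0] != MAGIC_BYTE:
        # This is the situation that surfaces as an
        # "unknown magic byte" deserialization error.
        raise ValueError("payload was not written with a Schema Registry serializer")
    (schema_id,) = struct.unpack(">I", raw[1:5])
    return schema_id

# Example: a payload with magic byte 0 and schema ID 42
print(inspect_payload(b"\x00\x00\x00\x00\x2a" + b"payload"))  # -> 42
```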
Get an introduction to why Python is becoming a popular language for developing Apache Kafka client applications, and learn about several benefits that Kafka developers gain by using Python.
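As a taste, here is a minimal producer sketch using the confluent-kafka Python package; the broker address and topic name are placeholders:

```python
from confluent_kafka import Producer

# Placeholder broker address; point this at your own cluster.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to report delivery success or failure.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}]")

producer.produce("demo-topic", key="user-1", value="hello", callback=delivery_report)
producer.flush()  # block until all outstanding messages are delivered
```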
Discover tools, practices, and patterns for planning geo-replicated Apache Kafka deployments to build reliable, scalable, secure, and globally distributed data pipelines that meet your business needs.
Using Apache Kafka to decouple microservices is a successful way to build a more resilient, flexible, and scalable architecture. However, it is very common for such microservices to be paired with a database. This blog walks through a real-world use case in which Kafka and ksqlDB replace a database.
This article summarizes dynamic versus static consumer group membership in Apache Kafka, shows how each approach affects rebalancing in applications with heavy state, and explains how to choose between the two.
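In brief, static membership gives each consumer a stable identity via the group.instance.id setting, so a quick restart within the session timeout doesn't trigger a rebalance. A minimal sketch with the confluent-kafka Python client, with placeholder broker, group, and instance names:

```python
from confluent_kafka import Consumer

# With group.instance.id set, this consumer joins the group as a
# static member: brief restarts within session.timeout.ms do not
# force a rebalance of its assigned partitions.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "orders-processor",          # placeholder group
    "group.instance.id": "orders-worker-1",  # stable per-instance ID
    "session.timeout.ms": 60000,             # grace period for restarts
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])               # placeholder topic
```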
Learn what windowing is in Kafka Streams and get comfortable with the differences between the main types.
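As a concrete illustration of the underlying math (not the Kafka Streams API itself): a tumbling window of size S assigns an event with timestamp t to the single window starting at t - (t mod S), while a hopping window of size S with advance A < S can assign it to several overlapping windows:

```python
def tumbling_window_start(ts_ms: int, size_ms: int) -> int:
    # Tumbling windows are fixed-size and non-overlapping:
    # each timestamp belongs to exactly one window.
    return ts_ms - (ts_ms % size_ms)

def hopping_window_starts(ts_ms: int, size_ms: int, advance_ms: int) -> list[int]:
    # Hopping windows overlap when advance < size, so a single
    # timestamp can fall into several windows.
    start = tumbling_window_start(ts_ms, advance_ms)
    starts = []
    while start + size_ms > ts_ms and start >= 0:
        starts.append(start)
        start -= advance_ms
    return sorted(starts)

print(tumbling_window_start(125_000, 60_000))          # -> 120000
print(hopping_window_starts(125_000, 60_000, 30_000))  # -> [90000, 120000]
```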
Apache Kafka 3.4 includes early access to ZooKeeper-to-KRaft migrations, enabling existing Kafka clusters to migrate to KRaft mode and gain scalability and resiliency benefits. Additionally, 3.4 includes several updates to Kafka Core, Streams, Connect, and more.
Learn what a Kafka consumer group ID is and how assigning one to Kafka consumers during configuration helps with detecting new data, work sharing, and data recovery.
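For illustration, here is a minimal consumer sketch with the confluent-kafka Python client; any consumers sharing the (placeholder) group ID below would split the topic's partitions between them, and offsets committed under that group ID let a restarted consumer resume where it left off:

```python
from confluent_kafka import Consumer

# Consumers sharing a group.id split the topic's partitions between
# them, and offsets are committed per group ID, so a restarted
# consumer resumes from its group's last committed position.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "clickstream-readers",      # placeholder; shared by all instances
    "auto.offset.reset": "earliest",        # used only when no offset is committed
})
consumer.subscribe(["clickstream"])         # placeholder topic

msg = consumer.poll(timeout=1.0)
if msg is not None and msg.error() is None:
    print(msg.value())
consumer.close()
```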
If you’ve used Kafka for any amount of time, you’ve likely heard about connections; the most common place they come up is in regard to clients. Sure, producer and consumer clients connect to the cluster to do their jobs, but it doesn’t stop there. Nearly all interactions across a cluster...
The call for papers for Kafka Summit London 2023 has opened, and we’re looking to hear about your experiences using and working with Kafka. Every great technical talk starts with an experience. If you’re stuck looking for ideas on what to talk about, write what you know...