It was another productive month in the Apache Kafka community. Many of the KIPs that were under active discussion in the last Log Compaction were implemented, reviewed, and merged into […]
A few months ago, we announced the major release of Apache Kafka 0.9, which added several new features such as security, Kafka Connect, and the new Java consumer, as well as critical bug […]
For a long time, a substantial portion of data processing that companies did ran as big batch jobs — CSV files dumped out of databases, log files collected at the […]
Welcome to the February 2016 edition of Log Compaction, a monthly digest of highlights in the Apache Kafka and stream processing community. Got a newsworthy item? Let us know.
TLS, Kerberos, SASL, and Authorizer in Apache Kafka 0.9 – Enabling New Encryption, Authorization, and Authentication Features
Apache Kafka is frequently used to store critical data, making it one of […]
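As a rough illustration of those new settings, a 0.9-era Java client could enable TLS encryption with a configuration along these lines; the broker address, truststore path, and password below are placeholders, not values from the post.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

public class SecureProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");   // placeholder TLS listener
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        // Encrypt traffic between the client and the brokers over TLS.
        props.put("security.protocol", "SSL");
        // Truststore holding the CA that signed the broker certificates;
        // the path and password are placeholders.
        props.put("ssl.truststore.location", "/var/private/ssl/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Every broker connection this producer opens now negotiates TLS.
            System.out.println("Secure producer created");
        }
    }
}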
When Apache Kafka® was originally created, it shipped with a Scala producer and consumer client. Over time we came to realize many of the limitations of these APIs. For example, […]
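Those limitations motivated the new Java clients shipped with 0.9. As a minimal sketch (the broker address, group id, and topic name are made up for illustration), the new consumer's subscribe-and-poll loop looks roughly like this:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "example-group");           // hypothetical consumer group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Group membership and partition assignment are coordinated broker-side,
            // so the client only subscribes and polls.
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                // 0.9-era poll(timeoutMs) API.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}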
When we released Apache Kafka 0.9.0.0, we talked about all of the big new features we added: the new consumer, Kafka Connect, security features, and much more. What we didn’t […]
Happy 2016! Wishing you a wonderful, highly scalable, and very reliable year. Log Compaction is a monthly digest of highlights in the Apache Kafka and stream processing community. Got a newsworthy item? Let us […]
Apache Kafka is a high-throughput distributed messaging system that is being adopted by hundreds of companies to manage their real-time data. Companies use Kafka for many applications (real-time stream […]
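To make the publish side of that concrete, a minimal Java producer sketch could look like the following; the broker address, topic name, and record contents are hypothetical.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HelloProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish a single event to a hypothetical topic; downstream consumers
            // can read it within moments of the send completing.
            producer.send(new ProducerRecord<>("page-views", "user-42", "clicked-home"));
            producer.flush();
        }
    }
}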
We are very excited to announce the general availability of Confluent Platform 2.0. For organizations that want to build a streaming data pipeline around Apache Kafka, Confluent Platform is the […]
I am pleased to announce the availability of the 0.9 release of Apache Kafka. This release has been in the works for several months with contributions from the community and […]
The Apache Kafka community just concluded its busiest month ever. While preparing for the upcoming release of Kafka 0.9.0.0, the community worked together to close a record number […]
Apache Kafka has a data structure called the “request purgatory”. The purgatory holds any request that hasn’t yet met its criteria to succeed but also hasn’t yet resulted in an […]
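Purely as a conceptual sketch, not Kafka's actual implementation (which the post explains in terms of watcher lists and hierarchical timing wheels), a purgatory can be pictured as a holding area that re-checks parked operations until they either complete or expire. The interface and class names below are invented for illustration.

import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Toy model of a "purgatory": operations wait here until they can complete
 *  or until their deadline passes. A simplification for illustration only. */
public class ToyPurgatory {

    /** A request that cannot succeed yet but has not timed out either. */
    public interface DelayedOperation {
        boolean tryComplete();   // returns true once the success criteria are met
        void onExpiration();     // called if the deadline passes first
        long deadlineMs();       // absolute expiry time
    }

    private final ConcurrentLinkedQueue<DelayedOperation> watching = new ConcurrentLinkedQueue<>();

    /** Park an operation that could not complete immediately. */
    public void watch(DelayedOperation op) {
        if (!op.tryComplete()) {
            watching.add(op);
        }
    }

    /** Periodically re-check every parked operation. */
    public void checkAndExpire(long nowMs) {
        Iterator<DelayedOperation> it = watching.iterator();
        while (it.hasNext()) {
            DelayedOperation op = it.next();
            if (op.tryComplete()) {
                it.remove();                 // criteria finally met
            } else if (nowMs >= op.deadlineMs()) {
                op.onExpiration();           // timed out while waiting
                it.remove();
            }
        }
    }
}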
The amount of work the community got done in the last month is truly impressive, especially considering how many conferences took place in September. Let’s take a look at the highlights: The […]