
Kafka Summit NYC is Almost Here – Don’t Miss the Streams Track!

Written By: Michael Noll

Ever wondered what it’s like to run Kafka in production? What about building and deploying microservices that process streams of data in real time and at large scale? Or maybe you just want to know the pitfalls to avoid in your own organization? The Streams Track at Kafka Summit NYC is for you. In just a few weeks, companies such as Google, Uber, and AltspaceVR will share the lessons they’ve learned building stream processing solutions and running them in production at scale.

We might be biased, but we think the Streams Track is one of the most exciting tracks at Kafka Summit, and we know our speakers are eager to share their thoughts and insights on the world of stream processing. In this track, we will explore new Kafka features that are exceptionally useful for building event-driven applications and microservices, and look at how companies use Kafka in their stream processing architectures to build great products and to understand and act on everything their customers are doing in real time.

Here are some of our can’t-miss sessions:


Hanging Out With Your Past Self In VR: Time-Shifted Avatar Replication Using Kafka Streams
Greg Fodor, Director of Engineering, AltspaceVR
In this talk, we’ll show how AltspaceVR developed a Kafka Streams-based solution to perform real-time mirroring, capture, and playback of networked avatars in a shared VR environment. For intermediate to advanced Kafka Streams users, we’ll cover common pitfalls, lessons learned, and design patterns that helped us create a streaming application that provides features our users call magic.
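If you’re curious what “time-shifted replication” can look like in code, here is a minimal, hypothetical Kafka Streams sketch of the general idea (not AltspaceVR’s actual implementation): buffer each incoming avatar event in a state store, and have a wall-clock punctuator re-emit it once a fixed delay has elapsed, so a downstream topic carries the same stream shifted into the past. The topic names, store name, and 30-second delay are illustrative assumptions.

```java
// Hypothetical sketch of time-shifted replay with Kafka Streams (not AltspaceVR's code).
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class TimeShiftedReplay {

  static final Duration DELAY = Duration.ofSeconds(30); // how far the "past self" lags behind

  static class DelayProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;
    private KeyValueStore<Long, String> buffer;
    private long sequence = 0; // disambiguates events that arrive in the same millisecond

    @Override
    public void init(ProcessorContext<String, String> context) {
      this.context = context;
      this.buffer = context.getStateStore("replay-buffer");
      // Once per second, re-emit and drop every buffered event older than DELAY.
      context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, now -> {
        List<Long> emitted = new ArrayList<>();
        try (KeyValueIterator<Long, String> it = buffer.all()) {
          while (it.hasNext()) {
            KeyValue<Long, String> entry = it.next();
            long arrivalMs = entry.key / 1_000_000L; // arrival time encoded in the key
            if (arrivalMs + DELAY.toMillis() <= now) {
              // A real implementation would preserve the original avatar/session key.
              context.forward(new Record<>("replay", entry.value, now));
              emitted.add(entry.key);
            }
          }
        }
        emitted.forEach(buffer::delete);
      });
    }

    @Override
    public void process(Record<String, String> record) {
      // Key = arrival time in millis plus a sequence number, so concurrent events don't collide.
      long key = System.currentTimeMillis() * 1_000_000L + (sequence++ % 1_000_000L);
      buffer.put(key, record.value());
    }
  }

  public static void main(String[] args) {
    Topology topology = new Topology();
    topology.addSource("avatar-source", "avatar-events");
    topology.addProcessor("delay", () -> new DelayProcessor(), "avatar-source");
    topology.addStateStore(
        Stores.keyValueStoreBuilder(
            Stores.inMemoryKeyValueStore("replay-buffer"),
            Serdes.Long(), Serdes.String()),
        "delay");
    topology.addSink("replay-sink", "avatar-events-replay", "delay");

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "time-shifted-replay");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    new KafkaStreams(topology, props).start();
  }
}
```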
The Data Dichotomy: Rethinking Data & Services with Streams
Ben Stopford, Engineer, Confluent
Typically when we build service-based apps, microservices, SOA, and the like, we use REST or some RPC framework. But building such applications becomes tricky as they get larger, more complex, and share more data. We can trace this trickiness back to a dichotomy that underlies the way systems interact: data systems are designed to expose data, to make it freely accessible, whereas services focus on encapsulation, restricting the data each service exposes. These two forces inevitably compete as such systems evolve. This talk will look at a different approach, one where a distributed log holds the data that is shared between services, and stateful stream processors are embedded right in each service, providing facilities for joining and reacting to the shared streams. The result is a very different way to architect and build service-based applications, but one with some unique benefits as we scale.
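To make the pattern concrete, here is a minimal sketch of the kind of service Ben describes, written with the Kafka Streams DSL. It assumes made-up topic names (“customers”, “orders”, “orders-enriched”) and String serdes, and is not code from the talk: the service materializes shared customer data from the log into a local table and joins its own order stream against it, instead of calling a separate customer service over REST.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class OrderEnrichmentService {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    // Shared data lives in the log; each service materializes the slice it needs locally.
    KTable<String, String> customers = builder.table("customers");

    // The service reacts to its own input stream of orders (keyed by customer id)...
    KStream<String, String> orders = builder.stream("orders");

    // ...and enriches it by joining against the locally held shared state.
    orders.join(customers, (order, customer) -> order + " placed by " + customer)
          .to("orders-enriched");

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enrichment-service");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    new KafkaStreams(builder.build(), props).start();
  }
}
```

Because the customer table is backed by a Kafka topic, the service can rebuild its local state from the log at any time, which is exactly the property that makes this approach attractive as the number of services grows.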
Scalable Real-time Complex Event Processing at Uber
Shuyi Chen, Senior Software Engineer, Uber
The Marketplace data team at Uber has built a scalable complex event processing platform to solve many challenging real-time data needs for various Uber products. The platform has been in production for more than a year and supports over 100 real-time data use cases with a team of three. In this talk, we will share the details of the design, our experience, and how we employ Kafka and Samza at scale.
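As a rough illustration of the kind of rule such a platform evaluates, consider “alert when a rider cancels three or more trips within ten minutes.” The sketch below expresses that rule with the Kafka Streams DSL purely for illustration; Uber’s platform is built on Samza, and the topic names and threshold here are made up.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.TimeWindows;

public class CancellationAlerts {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    builder.<String, String>stream("trip-cancellations")  // events keyed by rider id
           .groupByKey()
           .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(10)))
           .count()
           .toStream()
           // Each update to a window re-evaluates the rule, so an alert may fire repeatedly.
           .filter((windowedRiderId, count) -> count >= 3)
           .map((windowedRiderId, count) ->
               KeyValue.pair(windowedRiderId.key(),
                             count + " cancellations within 10 minutes"))
           .to("rider-alerts");

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cancellation-alerts");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    new KafkaStreams(builder.build(), props).start();
  }
}
```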


It’s not too late to register for Kafka Summit NYC! Also, you may want to participate in our Kafka Summit Hackathon, where we help the community learn how to build streaming applications with Apache Kafka. Hope to see you there!

  • Michael is a former principal technologist in the Office of the CTO at Confluent, the company founded by the original creators of Apache Kafka®. He focuses on longer-term product and technology strategy. Previously, Michael was the lead product manager for stream processing at Confluent, where his team created Kafka Streams and the streaming database ksqlDB. He is a well-known technology blogger in the big data community (www.michael-noll.com) and a committer/contributor to open source projects such as Apache Storm and Apache Kafka.
