Stream processing has emerged as a paradigm for applications that require low-latency evaluation of operators over unbounded sequences of data. Defining the semantics of stream processing is challenging in the presence of distributed data sources, because the physical and logical order of data in a stream may become inconsistent.
In this paper, we introduce the Dual Streaming Model. The model presents the result of an operator as a stream of successive updates, which induces a duality of results and streams. As such, it also handles inconsistencies between the physical and logical order of streaming data continuously, without explicit buffering or reordering.
We further discuss the trade-offs and challenges faced when implementing this model in terms of correctness, latency, and processing cost. A case study based on Apache Kafka illustrates the effectiveness of the model against real-world requirements.
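To make the duality concrete, the following minimal sketch uses the Kafka Streams DSL, the system underlying the case study: an aggregation result is a table, and the table's changelog is again a stream of successive updates. The topic names, application id, and broker address are illustrative assumptions, not details taken from the paper.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ViewCountExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Input stream: page-view events keyed by page id (hypothetical topic name).
        KStream<String, String> pageViews =
            builder.stream("page-views", Consumed.with(Serdes.String(), Serdes.String()));

        // The aggregation result is a table; a late or out-of-order event simply
        // produces one more update to the affected row instead of being reordered.
        KTable<String, Long> viewCounts = pageViews.groupByKey().count();

        // The table's changelog, a stream of successive updates, is itself a stream
        // and can be written back to a topic for downstream consumers.
        viewCounts.toStream().to("view-counts", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dual-streaming-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        new KafkaStreams(builder.build(), props).start();
    }
}
```

Reading the result either as the current table of counts or as the stream of updates to it is exactly the duality the model describes; no explicit buffering or reordering step appears in the program.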