Problem Description

In SAMZA-974, we built a mechanism to support batch jobs with bounded data sources. The feature provides the following functionality:

  • Samza shuts down the task if all SSPs in the task are at end of stream (EOS).

  • Samza provides a callback named EndOfStreamListenerTask to the task so that it can perform cleanups/commits once the task is at end of stream.

  • Samza shuts down the containers and then the job if all the tasks have been shut down.

This works for applications which do not have any data shuffling phase. With the introduction of partitionBy operators, the processors can send output to any partition of the intermediate streams, and the intermediate streams will be consumed again for further processing. Since the end-of-stream tokens are not carried over from the original input streams to the intermediate streams, the job won’t be able to shut down even if all the input streams reach the end. To address this problem, we need to extend the existing end-of-stream feature to support applications with intermediate streams.

The same problem exists for propagating the watermarks needed for event-time processing. After the shuffling phase, the downstream stage needs to compute the event time based on all the watermarks received from the upstream-stage producers. So any downstream task needs to be able to consume watermark messages from upstream tasks and emit watermarks based on the message timestamps.


The goals of this proposal:

  • Build general support for control messages through intermediate streams, with reconciliation on the consumers.

  • Use the control messages to propagate end-of-stream and watermark information from the upstream tasks to the downstream tasks connected by intermediate streams.

  • For end-of-stream messages, Samza will shut down the application once all the input streams reach end-of-stream.

  • For watermark messages, Samza will emit a watermark to the consumer tasks using the earliest (minimum) timestamp produced across all upstream tasks.

  • The solution should still work if we split the application into multiple jobs based on the partitionBy operators.

Proposed Design

We propose to use the in-band channel of the intermediate streams to propagate control messages. The design diagram is below: 

How it works:

  1. The upstream tasks send a control message to every partition of the downstream intermediate topic. The control messages are serialized and sent in the same stream as the user messages.

  2. The downstream Samza processors consume the intermediate streams and deserialize both user messages and control messages in SystemConsumers.

  3. The control messages are reconciled based on the count of all the producer tasks from the upstream. See below for the details of how each type of control message is reconciled.
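As a sketch of step 1, an upstream task can broadcast one control message to every partition of the intermediate stream so that each downstream consumer sees it. The class and method names below (ControlMessageBroadcast, PartitionedEnvelope) are illustrative, not actual Samza APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class ControlMessageBroadcast {
  // Minimal stand-in for an outgoing envelope: (stream, partition, payload).
  public static final class PartitionedEnvelope {
    public final String stream;
    public final int partition;
    public final Object payload;

    PartitionedEnvelope(String stream, int partition, Object payload) {
      this.stream = stream;
      this.partition = partition;
      this.payload = payload;
    }
  }

  // Build one envelope per partition of the intermediate stream, so the
  // control message reaches every downstream consumer of that stream.
  public static List<PartitionedEnvelope> broadcast(String stream, int partitionCount, Object controlMessage) {
    List<PartitionedEnvelope> out = new ArrayList<>();
    for (int p = 0; p < partitionCount; p++) {
      out.add(new PartitionedEnvelope(stream, p, controlMessage));
    }
    return out;
  }
}
```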

Intermediate Stream Message Format:

The format of the intermediate stream message:

IntermediateMessage => [MessageType MessageData]
  MessageType => byte: 0 (UserMessage), 1 (Watermark), 2 (EndOfStream)
  MessageData => byte[]: serialized UserMessage or ControlMessage

  ControlMessage => [EndOfStreamMessage/WatermarkMessage]
    Version   => int
    TaskName  => String
    TaskCount => int
    Other message data (depending on the type of control message)

For user messages, we use the user-provided serde (default is the system serde). For control messages, we use a JSON serde since JSON support is built into Samza and easy to parse.
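The framing above can be sketched as a one-byte type tag prepended to the serialized payload. The constants and helper names below are illustrative, not the actual Samza serde classes:

```java
import java.util.Arrays;

public class IntermediateMessageFraming {
  // Type tags from the IntermediateMessage format above.
  public static final byte USER_MESSAGE = 0;
  public static final byte WATERMARK = 1;
  public static final byte END_OF_STREAM = 2;

  // Prepend the one-byte type tag to the serialized message data.
  public static byte[] encode(byte type, byte[] messageData) {
    byte[] framed = new byte[messageData.length + 1];
    framed[0] = type;
    System.arraycopy(messageData, 0, framed, 1, messageData.length);
    return framed;
  }

  // The first byte tells the consumer which deserializer to apply:
  // the user serde for USER_MESSAGE, the JSON serde for control messages.
  public static byte messageType(byte[] framed) {
    return framed[0];
  }

  public static byte[] messageData(byte[] framed) {
    return Arrays.copyOfRange(framed, 1, framed.length);
  }
}
```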


The reconciliation of control messages happens inside TaskInstance after the message is delivered to it from the chooser. For the scope of this proposal, we support two kinds of control messages: end-of-stream and watermark.

  • End-of-stream message: indicates that the upstream task has finished producing to this stream.
  • Watermark message: contains the timestamp up to which the upstream task has processed so far. 

The reconciliation process works as follows:

  1. The downstream TaskInstance receives the control message and updates its internal bookkeeping. For end-of-stream, it keeps the set of upstream tasks that have reached end of stream for the intermediate stream. For watermark, it keeps the mapping from each upstream task to its latest timestamp.
  2. Once the task count in the bookkeeping matches the total count, the TaskInstance will emit a single IncomingMessageEnvelope containing the intermediate stream and partition, and the message itself. The timestamp in the emitted watermark message will be: 

    InputWatermark = min { OutputWatermark(task) for each task in upstream tasks }
  3. After reconciliation, the control message envelope will be sent to the task to process.

The TaskInstance uses the following maps to keep track of received end-of-stream and watermark messages:

EndOfStream Bookkeeping: Map( streamId -> { Set<TaskName>, totalTasks } )
Watermark Bookkeeping: Map( streamId -> { Map<TaskName, Timestamp>, totalTasks, timestampOfLastEmission } )
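The bookkeeping maps above can be sketched as follows. This is an illustrative stand-in for the TaskInstance logic, not the actual implementation: end-of-stream tracks which upstream tasks have finished, and watermark tracks the latest timestamp per upstream task, emitting the minimum once every task has reported.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Per-stream bookkeeping for one intermediate stream.
public class StreamBookkeeping {
  private final int totalTasks;
  private final Set<String> eosTasks = new HashSet<>();
  private final Map<String, Long> taskToTimestamp = new HashMap<>();

  public StreamBookkeeping(int totalTasks) {
    this.totalTasks = totalTasks;
  }

  // Returns true once every upstream task has sent end-of-stream.
  public boolean onEndOfStream(String taskName) {
    eosTasks.add(taskName);
    return eosTasks.size() == totalTasks;
  }

  // Records the latest timestamp from the task. Returns the input watermark,
  // InputWatermark = min over all upstream tasks' output watermarks,
  // once every task has reported at least once; null until then.
  public Long onWatermark(String taskName, long timestamp) {
    taskToTimestamp.put(taskName, timestamp);
    if (taskToTimestamp.size() < totalTasks) {
      return null;
    }
    return Collections.min(taskToTimestamp.values());
  }
}
```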

Checkpoint control messages

For failure scenarios, we need to persist the bookkeeping state so we can restore it during recovery. This can be done by checkpointing the bookkeeping state along with the input message offsets.

The checkpoint for EndOfStream:

EndOfStreamCheckpoint =>
 streamId => String
 totalTasks => int
 tasks => Set<String>

The checkpoint for Watermark:

WatermarkCheckpoint =>
 streamId => String
 totalTasks => int
 tasksToEventTime => Map<String, Long>

During failure recovery, the TaskInstance will restore the bookkeeping info from the checkpoint and continue to process future control messages. 
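A minimal sketch of snapshotting and restoring the watermark bookkeeping, following the WatermarkCheckpoint layout above. The class here is illustrative, not Samza's actual checkpoint API:

```java
import java.util.HashMap;
import java.util.Map;

public class WatermarkCheckpointSketch {
  public final String streamId;
  public final int totalTasks;
  public final Map<String, Long> tasksToEventTime;

  public WatermarkCheckpointSketch(String streamId, int totalTasks, Map<String, Long> tasksToEventTime) {
    this.streamId = streamId;
    this.totalTasks = totalTasks;
    // Copy so the checkpoint is an immutable snapshot of the live bookkeeping.
    this.tasksToEventTime = new HashMap<>(tasksToEventTime);
  }

  // On recovery, the restored map seeds the bookkeeping so reconciliation
  // continues from where it left off rather than starting empty.
  public Map<String, Long> restore() {
    return new HashMap<>(tasksToEventTime);
  }
}
```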

Detailed Design


We will support two types of ControlMessage: EndOfStreamMessage and WatermarkMessage.

public abstract class ControlMessage {
  private final String taskName;
  private final int taskCount;
  private int version = 1;

  public ControlMessage(String taskName, int taskCount) {
    this.taskName = taskName;
    this.taskCount = taskCount;
  }

  public String getTaskName() {
    return taskName;
  }

  public int getTaskCount() {
    return taskCount;
  }

  public void setVersion(int version) {
    this.version = version;
  }

  public int getVersion() {
    return version;
  }
}

public class EndOfStreamMessage extends ControlMessage {
  private final String streamId;

  public EndOfStreamMessage(String streamId, String taskName, int taskCount) {
    super(taskName, taskCount);
    this.streamId = streamId;
  }

  public String getStreamId() {
    return streamId;
  }
}

public class WatermarkMessage extends ControlMessage {
  private final long timestamp;

  public WatermarkMessage(long timestamp, String taskName, int taskCount) {
    super(taskName, taskCount);
    this.timestamp = timestamp;
  }

  public long getTimestamp() {
    return timestamp;
  }
}

Rejected Alternative:

Out-of-band control stream

In this approach the ApplicationRunner will create a separate control stream for propagating control messages. The control stream is a one-partition broadcast stream which will be consumed by each container in the application. The application runner manages the lifecycle of the control stream: it creates the stream on the first run and purges it at the start of future runs (same as output streams when consuming from Hadoop).

How it works for end-of-stream:
  1. When an input stream is consumed to the end, Samza sends an EOS message to the control channel, which includes the input topic and partition.

  2. Once the EOS messages are received from all the partitions of this input, we know the input is at end-of-stream. Then the ControlStreamConsumer inspects the stream graph to find the intermediate streams whose input streams have all reached end-of-stream, and marks them pending end-of-stream. After that, whenever a marked intermediate stream partition reaches its highest offset (the high watermark in Kafka), we can emit an end-of-stream message for that partition, since the partition is guaranteed to have reached end of stream.
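Step 2's stream-graph check can be sketched as follows, assuming a simple map from each intermediate stream to the input streams feeding it (the graph representation and names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class EosGraphCheck {
  // An intermediate stream can be marked pending end-of-stream once every
  // input stream feeding it has reached end-of-stream on all partitions.
  public static boolean pendingEndOfStream(
      String intermediateStream,
      Map<String, List<String>> intermediateToInputs,
      Set<String> inputsAtEos) {
    return inputsAtEos.containsAll(intermediateToInputs.get(intermediateStream));
  }
}
```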

Comparisons of the two approaches:

Out-of-band control stream

Pros:

- Intermediate streams are clean, containing only user data. This is convenient if users want to consume them elsewhere.

- Simple recovery from failure: just read the control stream from the beginning.

- Fewer messages. The number of control messages needed is the same as the input stream partition count (n partitions), so the total is n messages.

Cons:

- Need to correlate the out-of-band control messages with the source stream, which is complex to track and requires synchronization between the input streams and the control stream.

- Need to maintain a separate stream for control messages.

In-band control messages

Pros:

- No coordination needed between control messages and input messages. When a control message is received, it is a marker that the messages sent before it have been consumed completely. This is critical to support general event-time watermarks.

Cons:

- Complicated failure scenario. The consumer of control messages needs to checkpoint the control messages received, so that it can resume after recovering from a failure.

- More control messages required. For each intermediate stream (m partitions), each producer task (n tasks) must write into every partition, so the total is n*m messages.
Based on the pros and cons above, we propose to use the in-band approach to support control messages.


