Status


Discussion thread: https://lists.apache.org/thread.html/rf09dfeeaf35da5ee98afe559b5a6e955c9f03ade0262727f6b5c4c1e%40%3Cdev.flink.apache.org%3E
Vote thread: https://lists.apache.org/thread.html/r0f71d53810c7970430e5952ec69aa59ac54353de6233844421f753d6%40%3Cdev.flink.apache.org%3E
JIRA:
Release: 1.12


Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

As discussed in FLIP-131, Flink will deprecate the DataSet API in favor of the DataStream API and the Table API. Users should be able to use the DataStream API to write jobs that support both bounded and unbounded execution modes. However, Flink does not provide a sink API that guarantees exactly-once semantics in both bounded and unbounded scenarios, which blocks this unification.

So we want to introduce a new unified sink API which lets the user develop a sink once and run it everywhere. Specifically, Flink allows the user to:

  1. Choose a different SDK (SQL/Table/DataStream)
  2. Choose a different execution mode (batch/streaming) according to the scenario (bounded/unbounded)

We want these choices (SDK/execution mode) to be transparent to the sink API.
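
For illustration, here is a minimal usage sketch of what this transparency looks like from the user's side, assuming the runtime-mode switch from FLIP-134 and the hypothetical `SimpleFileSink` sketched later in this document:

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnifiedSinkExample {

	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// The user picks the execution mode; the sink implementation stays the same.
		env.setRuntimeMode(RuntimeExecutionMode.BATCH);

		env.fromElements("a", "b", "c")
			// `sinkTo` attaches a sink that implements the unified Sink interface.
			// `SimpleFileSink` is a hypothetical implementation used only for illustration.
			.sinkTo(new SimpleFileSink("/tmp/output"));

		env.execute("unified-sink-example");
	}
}

Switching `BATCH` to `STREAMING` changes only the execution mode; the sink code stays untouched.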

The document has three parts: the first part describes the semantics the unified sink API should support; based on that, the second part proposes a new unified sink API; the last part discusses two open questions related to the API.

Semantics

This section tries to figure out which parts the sink developer and which parts Flink should be responsible for in terms of ensuring consistency. From this we can derive what semantics the sink API needs in order to support exactly-once semantics.

From a high-level perspective, a job can always be abstracted as reading data from an external system and writing the processed data to another external system. The sink's responsibility is to write data to external systems. In general, writing data to an external system boils down to four questions: What, How, When, and Where to commit the data. Below we explain these four questions; after that, we can answer the question asked at the beginning of this section.

What to Commit. The data generated by the job goes through two stages before it can be committed to the external system. The first stage is preparation: normally the data produced by the job cannot be written/committed to the external system immediately because some conditions have not been met yet. For example, in the `StreamingFileSink` the data is first written to an in-progress file before it can be committed to the filesystem. The second stage is the commit, in which the data can be committed to the external system. The data handed over to the second stage is called What to commit.
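
As a concrete illustration of "What to commit", the following is a minimal sketch of a committable for a file sink, loosely modeled after the `StreamingFileSink` example above. The class and field names are hypothetical:

import java.io.Serializable;

/** A sketch of the data a file sink hands over for committing. */
public class FileCommittable implements Serializable {

	// The in-progress file that holds the staged data.
	private final String inProgressFilePath;

	// The final file name the staged data should be committed (renamed) to.
	private final String committedFilePath;

	// The number of bytes that are safe to publish.
	private final long validLength;

	public FileCommittable(String inProgressFilePath, String committedFilePath, long validLength) {
		this.inProgressFilePath = inProgressFilePath;
		this.committedFilePath = committedFilePath;
		this.validLength = validLength;
	}

	public String getInProgressFilePath() { return inProgressFilePath; }

	public String getCommittedFilePath() { return committedFilePath; }

	public long getValidLength() { return validLength; }
}

The later sketches in this document reuse this hypothetical `FileCommittable` as the `CommT` type.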

How to Commit. When data is ready to be committed, we need a system-specific way to commit it to the external system. Sometimes the sink commits the data by renaming a file; sometimes it publishes some metadata to make the data visible to end users.

When to Commit. From the users' perspective, one of the most important requirements is correctness: normally the user wants the data generated by the job to be committed to the external system once and only once. If we commit the data at the wrong time, there might be duplicated data. For example, Flink currently recovers from a failure by restarting the job, which always replays some elements.

Where to Commit. This is about how many resources we need to guarantee exactly-once semantics once the time to commit has come. For example, if we assume that the commit operation happens where the committable data is produced, then the lifecycle of the component responsible for producing the committable data has to last until the end of the commit operation.

In fact, When & Where are both about how to guarantee exactly-once semantics. Today Flink exposes its internal implementation to the sink developer through two interfaces, `CheckpointedFunction` & `CheckpointListener`. All exactly-once sinks are coupled to this internal exactly-once implementation by implementing these two interfaces.

For supporting the bounded scenario we find that the mechanism of `CheckpointedFunction` & `CheckpointListener` does not fit the bounded scenario. In a bounded scenario Flink should guarantee exactly-once semantics:

  1. Even if there is only one resource. (Where)
  2. Even if there is no normal checkpoint at all. (When)

This means the current streaming-style sink implementations are not suitable for the bounded scenario, and Flink must introduce some new mechanism for it. However, we should not couple the sink API to yet another internal mechanism that guarantees consistency in the bounded scenario; these internal mechanisms should be decoupled from the sink API.

In summary, the sink API should be responsible for producing What to commit and for providing How to commit. Flink should be responsible for guaranteeing exactly-once semantics: Flink can “optimize” When & Where to commit according to the execution mode, and these optimizations should be transparent to the sink developer.

Sink API


/**
 * This interface lets the sink developer build a simple sink topology, which can guarantee exactly-once
 * semantics in both batch and streaming execution mode if there is a {@link Committer} or {@link GlobalCommitter}.
 * 1. The {@link SinkWriter} is responsible for producing the committable.
 * 2. The {@link Committer} is responsible for committing a single committable.
 * 3. The {@link GlobalCommitter} is responsible for committing an aggregated committable, which we call the global
 *    committable. The {@link GlobalCommitter} is always executed with a parallelism of 1.
 * Note: Developers need to ensure the idempotence of {@link Committer} and {@link GlobalCommitter}.
 *
 * @param <InputT>        The type of the sink's input
 * @param <CommT>         The type of information needed to commit data staged by the sink
 * @param <WriterStateT>  The type of the sink writer's state
 * @param <GlobalCommT>   The type of the aggregated committable
 */
public interface Sink<InputT, CommT, WriterStateT, GlobalCommT> extends Serializable {

	/**
	 * Create a {@link SinkWriter}.
	 *
	 * @param context the runtime context.
	 * @param states the writer's state.
	 *
	 * @return A sink writer.
	 *
	 * @throws IOException if the writer cannot be created.
	 */
	SinkWriter<InputT, CommT, WriterStateT> createWriter(
			InitContext context,
			List<WriterStateT> states) throws IOException;

	/**
	 * Creates a {@link Committer}.
	 *
	 * @return A committer.
	 *
	 * @throws IOException if the committer cannot be created.
	 */
	Optional<Committer<CommT>> createCommitter() throws IOException;

	/**
	 * Creates a {@link GlobalCommitter}.
	 *
	 * @return A global committer.
	 *
	 * @throws IOException if the global committer cannot be created.
	 */
	Optional<GlobalCommitter<CommT, GlobalCommT>> createGlobalCommitter() throws IOException;

	/**
	 * Returns the serializer of the committable type.
	 */
	Optional<SimpleVersionedSerializer<CommT>> getCommittableSerializer();

	/**
	 * Returns the serializer of the aggregated committable type.
	 */
	Optional<SimpleVersionedSerializer<GlobalCommT>> getGlobalCommittableSerializer();

	/**
	 * Returns the serializer of the writer's state type.
	 */
	Optional<SimpleVersionedSerializer<WriterStateT>> getWriterStateSerializer();

	/**
	 * The interface exposes some runtime info for creating a {@link SinkWriter}.
	 */
	interface InitContext {

		/**
		 * Returns a {@link ProcessingTimeService} that can be used to
		 * get the current time and register timers.
		 */
		ProcessingTimeService getProcessingTimeService();

		/**
		 * @return The id of the subtask that executes the writer.
		 */
		int getSubtaskId();

		/**
		 * @return The metric group this writer belongs to.
		 */
		MetricGroup metricGroup();
	}

	/**
	 * A service that allows getting the current processing time and registering timers that
	 * will execute the given {@link ProcessingTimeCallback} when firing.
	 */
	interface ProcessingTimeService {

		/** Returns the current processing time. */
		long getCurrentProcessingTime();

		/**
		 * Invokes the given callback at the given timestamp.
		 *
		 * @param time The time at which the callback is invoked.
		 * @param processingTimerCallback The callback to be invoked.
		 */
		void registerProcessingTimer(long time, ProcessingTimeCallback processingTimerCallback);

		/**
		 * A callback that can be registered via {@link #registerProcessingTimer(long,
		 * ProcessingTimeCallback)}.
		 */
		interface ProcessingTimeCallback {

			/**
			 * This method is invoked with the time for which the callback was registered.
			 *
			 * @param time The time this callback was registered for.
			 */
			void onProcessingTime(long time) throws IOException;
		}
	}
}
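
To show how these factory methods fit together, here is a minimal sketch of a simple file sink built against this interface. `SimpleFileWriter` and `SimpleFileCommitter` are sketched after the corresponding interfaces below; `FileWriterState`, `FileCommittableSerializer`, and `FileWriterStateSerializer` are hypothetical helpers that are not spelled out here.

import java.io.IOException;
import java.util.List;
import java.util.Optional;

import org.apache.flink.core.io.SimpleVersionedSerializer;

public class SimpleFileSink implements Sink<String, FileCommittable, FileWriterState, Void> {

	private final String basePath;

	public SimpleFileSink(String basePath) {
		this.basePath = basePath;
	}

	@Override
	public SinkWriter<String, FileCommittable, FileWriterState> createWriter(
			InitContext context,
			List<FileWriterState> states) throws IOException {
		// The recovered states describe the in-progress files of a previous attempt.
		return new SimpleFileWriter(basePath, context.getSubtaskId(), states);
	}

	@Override
	public Optional<Committer<FileCommittable>> createCommitter() throws IOException {
		return Optional.of(new SimpleFileCommitter());
	}

	@Override
	public Optional<GlobalCommitter<FileCommittable, Void>> createGlobalCommitter() throws IOException {
		// A plain file sink does not need the parallelism-1 global commit step.
		return Optional.empty();
	}

	@Override
	public Optional<SimpleVersionedSerializer<FileCommittable>> getCommittableSerializer() {
		// FileCommittableSerializer is a hypothetical serializer, not shown here.
		return Optional.of(new FileCommittableSerializer());
	}

	@Override
	public Optional<SimpleVersionedSerializer<Void>> getGlobalCommittableSerializer() {
		return Optional.empty();
	}

	@Override
	public Optional<SimpleVersionedSerializer<FileWriterState>> getWriterStateSerializer() {
		// FileWriterStateSerializer is a hypothetical serializer, not shown here.
		return Optional.of(new FileWriterStateSerializer());
	}
}

Returning `Optional.empty()` from `createGlobalCommitter` signals that this sink does not need the global, parallelism-1 commit step.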



/**
 * The {@code SinkWriter} is responsible for writing data and handling any potential tmp area used to write yet un-staged
 * data, e.g. in-progress files. The data (or metadata pointing to where the actual data is staged) ready to commit is
 * returned to the system by {@link #prepareCommit(boolean)}.
 *
 * @param <InputT>         The type of the sink writer's input
 * @param <CommT>          The type of information needed to commit data staged by the sink
 * @param <WriterStateT>   The type of the writer's state
 */
public interface SinkWriter<InputT, CommT, WriterStateT> extends AutoCloseable {

	/**
	 * Add an element to the writer.
	 *
	 * @param element The input record
	 * @param context The additional information about the input record
	 *
	 * @throws IOException if the element cannot be added.
	 */
	void write(InputT element, Context context) throws IOException;

	/**
	 * Prepare for a commit.
	 *
	 * <p>This will be called before we checkpoint the Writer's state in Streaming execution mode.
	 *
	 * @param flush Whether to flush any un-staged data
	 * @return The data that is ready to be committed.
	 *
	 * @throws IOException if the commit preparation fails.
	 */
	List<CommT> prepareCommit(boolean flush) throws IOException;

	/**
	 * @return The writer's state.
	 *
	 * @throws IOException if the writer's state cannot be snapshotted.
	 */
	List<WriterStateT> snapshotState() throws IOException;


	/**
	 * Context that {@link #write} can use for getting additional data about an input record.
	 */
	interface Context {

		/**
		 * Returns the current event-time watermark.
		 */
		long currentWatermark();

		/**
		 * Returns the timestamp of the current input record or {@code null} if the element does not
		 * have an assigned timestamp.
		 */
		Long timestamp();
	}
}
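
The following is a minimal sketch of the corresponding `SinkWriter` for the hypothetical file sink: records are staged in an in-progress file and handed over as `FileCommittable`s in `prepareCommit`. `FileWriterState` is assumed to be a small holder of the in-progress file path; the base directory is assumed to exist.

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;

public class SimpleFileWriter implements SinkWriter<String, FileCommittable, FileWriterState> {

	private final String basePath;
	private final int subtaskId;

	private BufferedWriter inProgressWriter;
	private String inProgressPath;
	private long partCounter;

	public SimpleFileWriter(String basePath, int subtaskId, List<FileWriterState> recoveredStates) throws IOException {
		this.basePath = basePath;
		this.subtaskId = subtaskId;
		// A full implementation would resume or discard the recovered in-progress files here.
		openNewInProgressFile();
	}

	@Override
	public void write(String element, Context context) throws IOException {
		// Stage the record in the current in-progress file; nothing is visible to readers yet.
		inProgressWriter.write(element);
		inProgressWriter.newLine();
	}

	@Override
	public List<FileCommittable> prepareCommit(boolean flush) throws IOException {
		// Seal the current in-progress file and hand it to the committer as "what to commit".
		inProgressWriter.close();
		long validLength = Files.size(Paths.get(inProgressPath));
		FileCommittable committable = new FileCommittable(
				inProgressPath,
				basePath + "/part-" + subtaskId + "-" + partCounter,
				validLength);
		openNewInProgressFile();
		return Collections.singletonList(committable);
	}

	@Override
	public List<FileWriterState> snapshotState() throws IOException {
		// Remember the currently open in-progress file so it can be resumed after a failure.
		return Collections.singletonList(new FileWriterState(inProgressPath));
	}

	@Override
	public void close() throws Exception {
		inProgressWriter.close();
	}

	private void openNewInProgressFile() throws IOException {
		partCounter++;
		inProgressPath = basePath + "/.part-" + subtaskId + "-" + partCounter + ".inprogress";
		inProgressWriter = Files.newBufferedWriter(Paths.get(inProgressPath));
	}
}

In streaming execution mode `prepareCommit` is called before the writer's state is checkpointed; the `flush` flag indicates whether any remaining un-staged data should be flushed as well, for example at the end of a bounded input.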


/**
 * The {@code Committer} is responsible for committing the data staged by the sink.
 *
 * @param <CommT> The type of information needed to commit the staged data
 */
public interface Committer<CommT> extends AutoCloseable {

	/**
	 * Commit the given list of {@link CommT}.
	 * @param committables A list of information needed to commit data staged by the sink.
	 * @return A list of {@link CommT} that need to be re-committed, which is needed in case we implement a "commit-with-retry" pattern.
	 * @throws IOException if the commit operation fails and should not be retried.
	 */
	List<CommT> commit(List<CommT> committables) throws IOException;
}
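
A minimal sketch of the corresponding `Committer` for the hypothetical file sink: committing means atomically renaming the staged in-progress file to its final name. The rename is guarded so that the committer stays idempotent, because the framework may call `commit` again after a failover.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Collections;
import java.util.List;

public class SimpleFileCommitter implements Committer<FileCommittable> {

	@Override
	public List<FileCommittable> commit(List<FileCommittable> committables) throws IOException {
		for (FileCommittable committable : committables) {
			Path staged = Paths.get(committable.getInProgressFilePath());
			Path target = Paths.get(committable.getCommittedFilePath());
			// If the staged file is already gone, a previous attempt has succeeded and the
			// commit is skipped; this keeps the committer idempotent across failovers.
			if (Files.exists(staged)) {
				Files.move(staged, target, StandardCopyOption.ATOMIC_MOVE);
			}
		}
		// Nothing to retry in this sketch; a real implementation could return transiently
		// failed committables here to trigger the "commit-with-retry" pattern.
		return Collections.emptyList();
	}

	@Override
	public void close() throws Exception {
		// No resources to release in this sketch.
	}
}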



/**
 * The {@code GlobalCommitter} is responsible for creating and committing an aggregated committable,
 * which we call global committable (see {@link #combine}).
 *
 * <p>The {@code GlobalCommitter} runs with parallelism equal to 1.
 *
 * @param <CommT>         The type of information needed to commit data staged by the sink
 * @param <GlobalCommT>   The type of the aggregated committable
 */
public interface GlobalCommitter<CommT, GlobalCommT> extends AutoCloseable {

	/**
	 * Find out which global committables need to be retried when recovering from a failure.
	 * @param globalCommittables A list of {@link GlobalCommT} for which we want to verify
	 *                              which ones were successfully committed and which ones were not.
	 *
	 * @return A list of {@link GlobalCommT} that should be committed again.
	 *
	 * @throws IOException if the recovered committables cannot be filtered.
	 */
	List<GlobalCommT> filterRecoveredCommittables(List<GlobalCommT> globalCommittables) throws IOException;

	/**
	 * Compute an aggregated committable from a list of committables.
	 * @param committables A list of {@link CommT} to be combined into a {@link GlobalCommT}.
	 *
	 * @return an aggregated committable
	 *
	 * @throws IOException if the given committables cannot be combined.
	 */
	GlobalCommT combine(List<CommT> committables) throws IOException;

	/**
	 * Commit the given list of {@link GlobalCommT}.
	 *
	 * @param globalCommittables a list of {@link GlobalCommT}.
	 *
	 * @return A list of {@link GlobalCommT} that need to be re-committed, which is needed in case we implement a "commit-with-retry" pattern.
	 *
	 * @throws IOException if the commit operation fails and should not be retried.
	 */
	List<GlobalCommT> commit(List<GlobalCommT> globalCommittables) throws IOException;

	/**
	 * Signals that there are no more committables.
	 *
	 * @throws IOException if the notification cannot be handled.
	 */
	void endOfInput() throws IOException;
}
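
To illustrate the global, parallelism-1 commit step, here is a minimal sketch loosely modeled after a table-format sink such as Iceberg (mentioned in the open questions below): the per-subtask `FileCommittable`s of one checkpoint are combined into a single manifest, which is then published atomically. `Manifest` and `Catalog` are hypothetical stand-ins for a table format's metadata structures and are not part of the proposed API.

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.UUID;

public class ManifestGlobalCommitter implements GlobalCommitter<FileCommittable, Manifest> {

	// Hypothetical handle to the external table metadata; assumed to record the ids of
	// already committed manifests so that recovery can detect duplicates.
	private final Catalog catalog;

	public ManifestGlobalCommitter(Catalog catalog) {
		this.catalog = catalog;
	}

	@Override
	public List<Manifest> filterRecoveredCommittables(List<Manifest> globalCommittables) throws IOException {
		// Keep only the manifests the catalog has not seen yet; those must be re-committed.
		List<Manifest> pending = new ArrayList<>();
		for (Manifest manifest : globalCommittables) {
			if (!catalog.contains(manifest.getId())) {
				pending.add(manifest);
			}
		}
		return pending;
	}

	@Override
	public Manifest combine(List<FileCommittable> committables) throws IOException {
		// Aggregate all data files produced in one checkpoint into a single global committable.
		List<String> dataFiles = new ArrayList<>();
		for (FileCommittable committable : committables) {
			dataFiles.add(committable.getCommittedFilePath());
		}
		return new Manifest(UUID.randomUUID().toString(), dataFiles);
	}

	@Override
	public List<Manifest> commit(List<Manifest> globalCommittables) throws IOException {
		// Publish each manifest atomically; return nothing because this sketch does not retry.
		for (Manifest manifest : globalCommittables) {
			catalog.commit(manifest);
		}
		return Collections.emptyList();
	}

	@Override
	public void endOfInput() throws IOException {
		// The bounded input is exhausted; nothing extra to flush in this sketch.
	}

	@Override
	public void close() throws Exception {
		catalog.close();
	}
}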


Open Questions

There are still two open questions related to the unified sink API.

How does the sink API support writing to Hive?

In general the HiveSink needs three steps before committing the data to Hive:

  1. The first step is writing the data to the FileSystem.
  2. The second step is computing which directories can be committed.
  3. The third step is committing the directories to the HMS.

One of the special requirements of Hive is that the data partitioning key of the first two steps might be different. For example, the first step needs to partition by order.id while the second step needs to partition by order.created_at, because using the same key for both would introduce data skew. The unified sink API uses the `Writer` to produce the committable data, which implies that there is only one partitioning key. So this API does not cover the above scenario directly.

Based on the discussion, we will not support the HiveSink in the first version.

Is the sink an operator or a topology?

The scenario could be even more complicated. For example, some users want to merge the files in one bucket before committing them to the HMS. Where should this logic be placed? Do we want to put all this logic into one operator or into a sink topology?

Based on the discussion, in the long run we should give the sink developer the ability to build “arbitrary” topologies. But for Flink 1.12 we should focus on satisfying only the S3/HDFS/Iceberg sinks.

Compatibility, Deprecation, and Migration Plan

Rejected Alternatives

The difference between the rejected version and the accepted version is how the state is exposed to the user. The accepted version gives the framework a greater opportunity to optimize state handling.

/**
* The sink interface acts like a factory to create the {@link Writer} and {@link Committer}.
*
* @param <T>       The type of the sink's input.
* @param <CommT>   The type of the committable data.
*/
public interface Sink<T, CommT> extends Serializable {

    /**
	* Create a {@link Writer}.
	* @param context the runtime context
	* @return A new sink writer.
	*/
	Writer<T, CommT> createWriter(InitialContext context) throws Exception;

	/**
	* Create a {@link Committer}.
	* @return A new sink committer
	*/
	Committer<CommT> createCommitter();

	/**
	* @return a committable serializer
	*/
	SimpleVersionedSerializer<CommT> getCommittableSerializer();

    /**
	* Providing the runtime information.
	*/
	interface InitialContext {

		boolean isRestored();

		int getSubtaskIndex();

		OperatorStateStore getOperatorStateStore();

		MetricGroup metricGroup();

	}
}


/**
* The Writer is responsible for writing data and handling any potential tmp area used to write yet un-staged data, e.g. in-progress files.
* As soon as some data is ready to be committed, it (or metadata pointing to where the actual data is staged) is shipped to an operator that knows when to commit it.
*
* @param <T> 		The type of writer's input
* @param <CommT> 	The type of the committable data.
*/
public interface Writer<T, CommT> extends AutoCloseable {

	/**
	* Add an element to the writer.
	* @param t          The input record
	* @param context    The additional information about the input record
	* @param output     The committable data could be shipped to the committer by this
	* @throws Exception
	*/
	void write(T t, Context context, WriterOutput<CommT> output) throws Exception;

	/**
	* Snapshot the state of the writer.
	* @param output The committable data could be shipped to the committer by this.
	*/
	void snapshotState(WriterOutput<CommT> output) throws Exception;

	/**
	* Flush all un-staged data to the committer.
	*/
	void flush(WriterOutput<CommT> output) throws IOException;

    interface Context {

		long currentProcessingTime();

		long currentWatermark();
		
		Long timestamp();
	}
}