The Flink community has created and maintains multiple Flink connectors, which can be found in multiple locations.
The Flink community wants to improve the overall connector ecosystem. This includes moving existing connectors out of Flink's main repository, thereby decoupling Flink's release cycle from the release cycles of the connectors. This should result in:
- Faster releases of connectors: New features can be added more quickly, bugs can be fixed immediately, and we can have faster security patches in case of direct or indirect (through dependencies) security flaws.
- Adding newer connector features to older Flink versions: By having stable connector APIs, the same connector artifact may be used with different Flink versions. Thus, new features can also immediately be used with older Flink versions.
- More activity and contributions around connectors: By easing the contribution and development process around connectors, we will see faster development and also more connectors.
- Standardized documentation and user experience for the connectors, regardless of where they are maintained.
- A faster Flink CI: By not needing to build and test connectors, the Flink CI pipeline will be faster and Flink developers will experience fewer build instabilities (which mostly originate from connectors). That should speed up Flink development.
The following has been discussed and agreed by the Flink community with regards to connectors:
- All connector repositories will remain under the ASF, which means that code will still be hosted at https://github.com/apache and all ASF policies will be followed.
- Each connector will end up in its own connector repository. For example, https://github.com/apache/flink-connector-kafka for a Kafka connector, https://github.com/apache/flink-connector-elasticsearch for an Elasticsearch connector etc.
The following connectors will be moved out from Flink's main repository to an individual repository:
Only the following connectors will remain in Flink's main repository:
- PRs adding new connectors to Flink's main repository should not be merged, as new connectors should also be hosted outside of Flink's main repository. If you have a connector that you would like to build or maintain, please reach out to the Flink Dev mailing list (https://flink.apache.org/community.html) for more information on getting started with the external connector repository setup.
- A dedicated FLIP connector template exists to help you to come up with an initial proposal that can be presented on the mailing list.
- The discussion threads on these topics can be found in:
This document outlines common rules for connectors that are developed & released separately from Flink (otherwise known as "externalized").
This may imply releasing the exact same connector jar multiple times under different versions.
The default branch is called main and is used for the next major iteration. Remaining branches are called v<major>.<minor>, for example v1.0. Branches are not specific to a Flink version, i.e., there is no branch per supported Flink version.
The Flink versions supported by the project (at the time of writing the last 2 minor Flink versions) must be supported.
How this is achieved is left to the connector, as long as it conforms to the rest of the proposal.
Since branches may not be specific to a particular Flink version, supporting multiple Flink versions may require the creation of dedicated modules for each supported Flink version.
The architecture of such modules is up to the connector.
For example, there could be:
- a base connector with version-specific extension modules ("shims") that inject the version-specific behavior
- a common utility module that is used by version-specific connector modules.
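As a rough illustration of the first option, the base connector can program against a small compatibility interface while each version-specific module supplies an implementation. This is a hedged sketch only; all class and method names below are invented and not taken from any actual connector:

```java
// Hypothetical sketch of the "shim" pattern: a base connector defines an
// interface for version-specific behavior, and per-Flink-version modules
// provide implementations. All names here are invented for illustration.
public class ShimExample {

    /** Version-specific behavior injected into the base connector. */
    interface FlinkCompatibilityShim {
        String describeRuntime();
    }

    /** Would live in a module built against Flink 1.16. */
    static class Flink116Shim implements FlinkCompatibilityShim {
        @Override
        public String describeRuntime() {
            return "Flink 1.16 APIs";
        }
    }

    /** Would live in a module built against Flink 1.17. */
    static class Flink117Shim implements FlinkCompatibilityShim {
        @Override
        public String describeRuntime() {
            return "Flink 1.17 APIs";
        }
    }

    /** Version-agnostic base connector code that only sees the interface. */
    static String openConnector(FlinkCompatibilityShim shim) {
        return "connector running against " + shim.describeRuntime();
    }

    public static void main(String[] args) {
        System.out.println(openConnector(new Flink116Shim()));
        System.out.println(openConnector(new Flink117Shim()));
    }
}
```

The base module stays free of version-specific imports, so only the thin shim modules need to be rebuilt per Flink version.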
The last 2 major connector releases are supported with only the latter receiving additional features, with the following exceptions:
- If the older major connector version does not support any currently supported Flink version, then it is no longer supported.
- If the last 2 major versions do not cover all supported Flink versions, then the latest connector version that supports the older Flink version *additionally* gets patch support.
For a given major connector version only the latest minor version is supported.
This means that once 1.1.x is released, there will be no more 1.0.x releases.
| Change | Initial state | Final state |
| --- | --- | --- |
| New minor Connector version | | |
| New major Connector version | | |
| New major Connector version (the last 2 major versions do not cover all supported Flink versions) | | |
| New minor Flink version | | |
| An older connector does not support any supported Flink version | | |
https://github.com/apache/flink-connector-elasticsearch/ is the most complete example of an externalized connector.
When moving a connector out of the Flink repo the git history should be preserved.
Use the git-filter-repo tool to extract the relevant commits.
As an example, the externalization of the Cassandra connector required these commands to be run (in a fresh copy of the Flink repository!!!):
The result should be that only the desired modules related to the connector exist in your local branch.
Then rebase this branch on top of the bootstrapped externalized connector repository and apply whatever changes are needed to make things actually work.
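Since the exact Cassandra commands are not reproduced here, the following is only a rough sketch of how git-filter-repo can reduce a clone to one module's history; the module path is an example and must be adapted to the connector being externalized:

```shell
# Run this in a fresh, disposable clone of the Flink repository only --
# git filter-repo rewrites history destructively.
git clone https://github.com/apache/flink.git flink-filtered
cd flink-filtered
# Keep only commits that touched the connector module; everything else is dropped.
git filter-repo --path flink-connectors/flink-connector-cassandra
```

Afterwards the local branch should contain only the connector module and its history, ready to be rebased onto the new repository.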
We have a parent pom that connectors should use.
It handles various things, from setting up essential modules (like the compiler plugin) to QA (including license checks!), testing, and Java 11/17 support.
(Almost) everything is opt-in, requiring the project to put a plugin into the <plugins> section.
See the bottom of the <properties> section for properties that sub-projects should define.
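For illustration, pulling in the parent pom looks roughly like this; the artifact coordinates and the version are assumptions, so check Maven Central for the current release:

```xml
<!-- Sketch only: verify groupId/artifactId/version before use. -->
<parent>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-parent</artifactId>
  <version>1.0.0</version>
</parent>
```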
We have a collection of CI utilities that connectors should use.
The ci.yml workflow can be used like this:
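A sketch of what a calling GitHub Actions workflow might look like; the workflow ref (ci_utils) and the flink_version input are assumptions based on existing connector repositories, so verify them against flink-connector-shared-utils:

```yaml
# Sketch only: check flink-connector-shared-utils for the current interface.
name: CI
on: [push, pull_request]
jobs:
  compile_and_test:
    uses: apache/flink-connector-shared-utils/.github/workflows/ci.yml@ci_utils
    with:
      flink_version: 1.17.1
```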
We have a collection of release scripts that connectors should use.
See the contained README.md for details.
The documentation should follow this structure:
See https://github.com/apache/flink/tree/master/docs#include-externally-hosted-documentation for more information on how to integrate the docs into Flink.
For generating a Maven dependency pom snippet, use the connector_artifact shortcode instead of artifact. This allows the Flink docs to inject the Flink version suffix.
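For illustration, a shortcode invocation might look like the following; the exact argument list is an assumption, so check the Flink docs README for the authoritative syntax:

```
{{< connector_artifact flink-connector-elasticsearch7 3.0.0 >}}
```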
Common review issues
Lack of production architecture tests
Within Flink, the architecture tests for production code are centralized in flink-architecture-tests-production, while the test architecture tests are spread out into each module. When externalizing a connector, separate architecture tests for the production code must be added to the connector module(s).
Dependency convergence errors on transitive Flink dependencies
Flink transitively pulls in different versions of dependencies like Kryo or Objenesis, which must be converged in the connector.
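One common fix is to pin the offending transitive dependencies in dependencyManagement. The Kryo coordinates and version below match what Flink 1.x has historically used, but verify them against the Flink version you build against:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin the version pulled in transitively through Flink so the
         dependency-convergence check passes. Verify the version first. -->
    <dependency>
      <groupId>com.esotericsoftware.kryo</groupId>
      <artifactId>kryo</artifactId>
      <version>2.24.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```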
Excess test dependencies
Flink defines several default test dependencies, like JUnit 4 or Hamcrest. These may not be required by the connector if it has already migrated to JUnit 5/AssertJ.
The DockerImageVersions class is a central listing of Docker images used in Flink tests. Since connector-specific entries will be removed once the externalization is complete, connectors shouldn't rely on this class but handle this on their own (either by creating a trimmed-down copy, hard-coding the version, or deriving it from a Maven property).
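A minimal sketch of the connector-local alternatives; the class name, property name, and image tags below are invented placeholders, not values from any actual connector:

```java
// Hypothetical connector-local replacement for Flink's DockerImageVersions.
// All names and image tags here are placeholders for illustration.
public final class ConnectorDockerImages {

    private ConnectorDockerImages() {
    }

    // Option 1: hard-code the image the connector is tested against.
    public static final String ELASTICSEARCH = "elasticsearch:7.10.2";

    // Option 2: derive the image from a Maven property (e.g. passed to
    // surefire via -Dkafka.docker.version=...), with a fallback for IDE runs.
    public static String kafka() {
        return System.getProperty("kafka.docker.version", "confluentinc/cp-kafka:7.2.2");
    }
}
```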
Bundling of flink-connector-base
Connectors should not bundle the connector-base module from Flink and should instead set it to provided, as the contained classes may rely on internal Flink classes.
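In Maven terms this means declaring the dependency with provided scope, for example (using ${flink.version} as a placeholder property):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-base</artifactId>
  <version>${flink.version}</version>
  <!-- provided: compile against it, but do not bundle it; the classes
       may rely on Flink internals and should come from Flink at runtime. -->
  <scope>provided</scope>
</dependency>
```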