Version Scheme and API Compatibility


For the purposes of understanding our versioning and API compatibility commitments, we need to define what is considered part of the Apache NiFi API.  The following items are considered part of the NiFi API:

  • Any code in the nifi-api module not clearly documented as unstable.
  • Any part of the REST API not clearly documented as unstable.
  • Any extension such as Processor, Controller Service, Reporting Task.
  • Any specialized protocols or formats such as:
    • Site-to-site
    • Serialized Flow File
    • Flow File Repository
    • Content Repository
    • Provenance Repository
    • State management
  • Any configuration file necessary to operate NiFi such as:
    • All files found in the nifi/conf directory
    • Templates
    • The configuration parameters and intended behavior of a given component/extension point

Anything not listed above is not considered part of the API and is subject to change in a manner that may differ from the guidance below.  There are substantial portions of the codebase which are intentionally considered private from an API point of view.  If there are specific items not listed here which should be considered part of the API, please raise a discussion on dev@nifi.apache.org so they can be considered for inclusion.  We want to keep the public surface of the Apache NiFi API well defined and well scoped so that those consuming the application, whether as developers or users, know what is and is not safe to rely on, and so that our community can continue to rapidly evolve the codebase.

For the public API, the Apache NiFi project aims to follow the versioning principles described by Semantic Versioning 2.0.0.

Consider the following scenarios in the context of the most recent 'example' release being 0.0.1 and with the understanding that these are about the public API as defined above.

  • Releases comprised solely of bug fixes, or other changes that neither introduce nor enhance features, require only a 'patch' version bump (the Z part in X.Y.Z).  The next release would then be 0.0.2.
  • Releases that include backward-compatible changes introducing feature enhancements or new features require a 'minor' version bump, and the 'patch' version resets to '0' (the Y part in X.Y.0).  The next release would then be 0.1.0. A 'minor' version change is also required for any change that could cause an existing flow to become invalid, such as the addition of a required property with no default, the addition of a relationship, or the removal of a property or relationship. Note: it is NOT acceptable in a 'minor' version to change anything that can cause an existing flow to behave differently (other than a component becoming invalid). Doing so would fundamentally alter the way in which organizations process data without them realizing it.
  • Releases that include non-backward-compatible changes, or changes deemed so substantive by the community that they warrant a 'major' version bump, reset the minor and patch versions to '0' (the X part in X.0.0).  The next release would then be 1.0.0.
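The three bump rules above can be sketched as a small Java helper. The Version type and its method names are illustrative only, not part of NiFi:

```java
// Sketch of the X.Y.Z bump rules, assuming a plain three-part version scheme.
record Version(int major, int minor, int patch) {

    // Bug-fix-only release: bump Z.
    Version nextPatch() { return new Version(major, minor, patch + 1); }

    // Backward-compatible feature release: bump Y, reset Z.
    Version nextMinor() { return new Version(major, minor + 1, 0); }

    // Non-backward-compatible or substantive release: bump X, reset Y and Z.
    Version nextMajor() { return new Version(major + 1, 0, 0); }

    @Override
    public String toString() { return major + "." + minor + "." + patch; }
}
```

With the most recent example release being 0.0.1, `nextPatch()` yields 0.0.2, `nextMinor()` yields 0.1.0, and `nextMajor()` yields 1.0.0.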

...

mvn versions:set -DnewVersion=0.1.0-SNAPSHOT
mvn versions:commit

It is important that we also define what we mean by 'compatibility'.  The most obvious scenario is code compatibility of APIs, features, and so on.  However, we must also keep in mind compatibility of configuration items like properties files and existing dataflows, and of our runtime interfaces like the REST API used to interact with a running NiFi instance.  All of these are critically important to developers, clients that interface with NiFi, administrators that configure it, and users that interact with NiFi in operations.  Furthermore, beyond the API as defined above, we must be mindful of the effort required of users to move from one major version to the next.  We need to provide, to the extent possible, tooling to automatically migrate or take up old configuration approaches.  We need to ensure old protocols and new protocols can work together.  We need to ensure that upgrades are as smooth and automatic as possible, such that older versions' configurations, data storage structures, and formats are honored or have a clear conversion path.

Special consideration: you must also consider 'compatibility' of processors and extensions specifically.  For example, if a processor has two relationships and you add a third or remove one, take extra care to document that change for the release it goes into.  We should always provide a notice to folks upgrading from version to version so they know what to look out for.  Those cases are usually easily resolved: after upgrading, a user may simply need to connect the new relationship and restart the processor, or remove a no-longer-relevant relationship.  But the key is to follow through with effective notices so users understand what is happening and don't perceive it as a bug or an issue introduced by upgrading.
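The relationship scenario above can be illustrated with a hypothetical upgrade check. Nothing here is a real NiFi API; it just models the situation where a release adds a relationship and the upgrade notice must tell users what remains unhandled:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: release N shipped 'success' and 'failure';
// release N+1 adds 'retry'. A flow is not ready until every relationship
// the processor now exposes is connected or otherwise handled.
public class RelationshipNotice {
    static final Set<String> AFTER_UPGRADE = Set.of("success", "failure", "retry");

    // Given the relationships an existing flow already connects, report
    // which ones the user still needs to deal with after upgrading.
    static Set<String> unhandled(Set<String> connectedInFlow) {
        Set<String> remaining = new HashSet<>(AFTER_UPGRADE);
        remaining.removeAll(connectedInFlow);
        return remaining;
    }
}
```

A flow that connected only 'success' and 'failure' before the upgrade would be reported as still needing to handle 'retry', which is exactly the kind of detail a release notice should call out.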

Even within the concept of code compatibility, though, there can be some ambiguity. We should consider code to be 'backward compatible' (and therefore allowed in a 'minor' version change) if the change in X.Y.0 allows all extensions (Processors, Reporting Tasks, Controller Services, and Authority Providers) developed against the X.(Y-1).* API to still perform their tasks the same way. For example, it is not backward compatible to remove or change the signature of a method in the 'FlowFile' interface, because Processors may no longer function properly. It is, however, acceptable to add a new method to this interface, because only the Framework is expected to implement it; all extensions will therefore still function properly.

Also, starting with the Apache NiFi 1.x codebase we have adopted Apache Yetus 'Audience Annotations' to explicitly mark interfaces, classes, and methods with their intended audience and stability.  If not otherwise marked, please consider an interface, class, or method to be private/internal and unstable.  The vast majority of 'nifi-api' should be both public and stable, meaning we will treat compatibility-breaking changes to those items as things which would likely require a major release.  For the interfaces, classes, and methods that are not marked public and stable, the community can more easily navigate the necessary evolution and improvement of the codebase without being so strictly adherent to backward compatibility.  Even then, as mentioned previously, we must remain mindful of things like configuration, user experience, and the REST API, as these are also an important part of our 'interface' and compatibility which we must honor and account for.  These annotations also allow us to do things like warn users and developers about components which they should not extend from or which are unstable.
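The FlowFile example above can be sketched in a few lines. These types are illustrative only; this is not the real org.apache.nifi.flowfile.FlowFile, just a model of why adding a method to a framework-implemented interface is backward compatible while removing or changing one is not:

```java
// Sketch: only the framework implements this interface, so a method added
// in X.Y.0 is shipped together with the framework's updated implementation.
interface FlowFileLike {
    long getSize();       // existed in X.(Y-1)
    long getEntryDate();  // added in X.Y.0: safe, because no extension
                          // implements this interface, only the framework
}

// The framework's implementation is updated in step with the interface.
class FrameworkFlowFile implements FlowFileLike {
    public long getSize() { return 1024L; }
    public long getEntryDate() { return 0L; }
}

// An extension compiled against X.(Y-1) only *calls* the methods it knew
// about; it never implements the interface, so it keeps working unchanged.
class LegacyProcessor {
    long report(FlowFileLike ff) { return ff.getSize(); }
}
```

Removing `getSize()` or changing its signature, by contrast, would break `LegacyProcessor` at load time, which is exactly the kind of change reserved for a major release.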

Given that NiFi itself is designed for extension, and the growing community provides frequent contributions in the form of feature enhancements and new features, the most likely scenario for a given release is a 'minor' version bump.

...