The Apache NiFi community recognizes how important it is to provide reliable releases on a number of levels.  One of the most important aspects is how we handle changes that introduce new behavior, alter existing behavior, and so on.  We're committed to being a responsible community in which we can continue to evolve the capabilities and features of NiFi while users enjoy a well-understood and reliable upgrade path.  We're committed to ensuring that backward compatibility issues are rare, and that when they do occur their impact is clearly understood, minimized, and communicated.  You can read more about our approach to version management.  If you find that we've violated this commitment in any way, please send us an email at dev@nifi.apache.org and we'll work to resolve it for you and other users.

To summarize:
  • When moving between patch (also known as incremental) version changes such as 0.1.0 to 0.1.1 users should be safe to assume a clean upgrade can occur with no risk of behavior changes other than bug fixes and no compatibility issues.
  • When moving between minor changes such as 0.1.0 to 0.2.0 users can expect new behaviors and bug fixes but backward compatibility should be protected.
  • When moving between major changes such as 0.x.y to 1.0.0 there may be backward compatibility impacting changes largely focused on removal of deprecated items.

 

The following guidance is specific to the indicated version changes.  It will contain specific items that users should be aware of when moving between versions:
  • Migrating from 1.1.x to 1.2.0
    • With the introduction of component versioning, custom NARs will show up as "unversioned" until they are rebuilt with the latest NAR Maven Plugin (1.2.0). When deploying a rebuilt custom NAR, make sure to remove all previous versions of the NAR from the lib directory.
    • The nifi-documentation JAR is no longer directly in the lib directory and is now part of the framework NAR. Make sure there is no left-over version of an old nifi-documentation JAR in the lib directory after upgrading.
    • Jetty has been upgraded to version 9.4.2.  As a result, TLS v1/v1.1 is no longer supported.  Users or clients connecting to NiFi through the UI or API are now protected with TLS v1.2.  Any custom code that consumes the NiFi API needs to use TLS v1.2 or later.
  • Migrating from 1.1.x to 1.1.2
    • No known migration issues.
  • Migrating from 1.1.0 to 1.1.1
    • No known migration issues.
  • Migrating from 1.0.x to 1.1.0
    • NiFi now supports the concept of restricted components.  These are processors, controller services, and reporting tasks that allow an authorized user to execute unsanitized code, or to access and alter files accessible by the OS user NiFi is running as.  These components are therefore tagged by the developer as restricted, and when running NiFi in secure mode an administrator must grant each user access to the policy allowing restricted component access.  This is explained in greater detail in the admin, user, and developer guides.  When you upgrade, you will need to give your users access to this policy before they can use these components.
    • During cluster startup we have to determine which flow will be considered the correct flow to move forward with, based on which nodes attempt to join the cluster.  To help speed up this process there are two new nifi.properties keys whose values you should consider: "nifi.cluster.flow.election.max.wait.time" and "nifi.cluster.flow.election.max.candidates".  You can read more about this in the admin guide under "Flow Election".
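      As a sketch, the corresponding nifi.properties entries might look like the following (the values shown are illustrative, not necessarily the defaults):

      ```properties
      # Wait at most this long for nodes to join before electing a flow
      nifi.cluster.flow.election.max.wait.time=5 mins
      # If set, elect a flow as soon as this many nodes have voted
      nifi.cluster.flow.election.max.candidates=3
      ```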
  • Migrating from 1.0.0 to 1.0.1
    • No known migration issues.
  • Migrating from 0.7.x to 1.0.0
    • Java 8 is now the minimum JRE/JDK supported
      • Before NiFi 1.0 release we supported a minimum of Java 7.  We've now moved to Java 8.
    • Kerberos System Properties

      • SPNEGO and service principals for Kerberos are now established via separate system properties.
        • New SPNEGO properties
          • nifi.kerberos.spnego.principal
          • nifi.kerberos.spnego.keytab.location
          • nifi.kerberos.spnego.authentication.expiration
        • New service properties
          • nifi.kerberos.service.principal
          • nifi.kerberos.service.keytab.location
        • Removed properties
          • nifi.kerberos.keytab.location
          • nifi.kerberos.authentication.expiration
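      For example, a nifi.properties file using both the new SPNEGO and service properties might contain entries like these (the principal and keytab values are illustrative):

      ```properties
      # SPNEGO properties used to authenticate users of the UI/API via Kerberos
      nifi.kerberos.spnego.principal=HTTP/nifi.example.com@EXAMPLE.COM
      nifi.kerberos.spnego.keytab.location=/etc/security/keytabs/spnego.service.keytab
      nifi.kerberos.spnego.authentication.expiration=12 hours

      # Service properties used by NiFi itself when talking to Kerberized services
      nifi.kerberos.service.principal=nifi/nifi.example.com@EXAMPLE.COM
      nifi.kerberos.service.keytab.location=/etc/security/keytabs/nifi.service.keytab
      ```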
    • DBCPConnectionPool Service
      • The “Database Driver Jar Url” property has been replaced by the “Database Driver Location(s)” property which accepts a comma-separated list of URLs or local files/folders containing the driver JAR.
      • Existing processors that reference this service will be invalid until the new property is configured.
    • MonitorDiskUsage

      • This standard reporting task has been simplified to let the user specify a logical name, a directory and a threshold to monitor.  Previously it was tightly coupled to the internal flow file and content repositories in a manner that didn't align to the pluggable nature of those repositories.  The new approach gives the user total control over what they want it to monitor.
    • Connection/Relationship Default Back Pressure Settings
      • Previously, no back pressure settings were supplied by default, which too often meant people learned the value of back pressure the hard way.  New connections will now default to a back pressure object threshold of 10,000 FlowFiles and a data size threshold of 1 GB.
    • Multi-tenant Authorization Model

      • Authority Provider model has been replaced by a Multi-tenant Authorization model. Access privileges are now defined by policies that can be applied system-wide or to individual components. Details can be found in the ‘Admin Guide’ under ‘Multi-tenant Authorization’.

      • The system properties nifi.authority.provider.configuration.file and nifi.security.user.authority.provider have been replaced by nifi.authorizer.configuration.file and nifi.security.user.authorizer, respectively. Details on configuration can be found in the ‘Admin Guide’ under ‘Authorizer Configuration’.

      • 0.7.0 authorized users/roles can be converted to the new authorization model. An existing authorized-users.xml file can be referenced in the authorizers.xml ‘Legacy Authorized Users File’ property to automatically generate users and authorizations. Details on configuration can be found in the ‘Admin Guide’ under ‘Authorizers.xml Setup’.

      • Controller Services that will be used by Processors must be defined in the Operate Palette of the root process group or sub process group where they will be used.  Controller Services defined in the Global - Controller Settings window can only be used by Reporting Tasks, not by any Processors.
    • HTTP(S) Site-to-Site

      • HTTP(S) protocol is now supported in Site-to-Site as an underlying transport protocol.

      • HTTP(S) protocol is enabled by default (nifi.remote.input.http.enabled=true). Configuration details can be found in the ‘Site-to-Site Properties’ section of the ‘Admin Guide’. Of note:

        • With both socket and HTTP protocols supported, nifi.remote.input.socket.host has been renamed to nifi.remote.input.host

        • nifi.remote.input.secure is now set to false by default
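        Taken together, a minimal Site-to-Site configuration in nifi.properties might look like this (the hostname and port are illustrative):

        ```properties
        nifi.remote.input.host=nifi-node1.example.com
        nifi.remote.input.secure=false
        nifi.remote.input.socket.port=10443
        nifi.remote.input.http.enabled=true
        ```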

    • Zero-Master Clustering

      • Master/slave clustering model has been replaced by a Zero-Master Clustering paradigm.  Each node in a NiFi cluster performs the same tasks on the data, but each operates on a different set of data.  A DataFlow manager can now interact with the NiFi cluster through the UI of any node.

      • ZooKeeper elects a single node as the Cluster Coordinator and also handles failover. All cluster nodes report heartbeat and status information to the Cluster Coordinator, which is responsible for disconnecting and connecting nodes. Additionally, every cluster has one Primary Node, also elected by ZooKeeper.

      • Configuration details can be found in the ‘Clustering Configuration’ section of the ‘Admin Guide’ as well as the Cluster Common/Node and ZooKeeper Properties section of the ‘Admin Guide’.  Of note:

          • NiFi Cluster Manager (NCM) configuration and properties are no longer relevant and have been removed.

          • The following properties should be set on each node:

            • nifi.web.http.port=<node port>

            • nifi.cluster.is.node=true

            • nifi.cluster.node.address=<fully qualified hostname of the node>

            • nifi.cluster.node.protocol.port=<node protocol port>

            • nifi.state.management.embedded.zookeeper.start=true

            • nifi.state.management.provider.cluster=zk-provider

            • nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties

            • nifi.zookeeper.connect.string=<A comma-separated list of host:port pairs to connect to ZooKeeper. For example, my-zk-server1:2181,my-zk-server2:2181,my-zk-server3:2183>
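            Filling in the placeholders above, a single node's nifi.properties entries might look like this (hostnames and ports are illustrative):

            ```properties
            nifi.web.http.port=8080
            nifi.cluster.is.node=true
            nifi.cluster.node.address=nifi-node1.example.com
            nifi.cluster.node.protocol.port=11443
            nifi.state.management.embedded.zookeeper.start=true
            nifi.state.management.provider.cluster=zk-provider
            nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
            nifi.zookeeper.connect.string=nifi-node1.example.com:2181,nifi-node2.example.com:2181,nifi-node3.example.com:2181
            ```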
          • Embedded ZooKeeper setup

            • The zookeeper.properties file needs to be populated with a list of each node's embedded ZooKeeper server. The servers are specified in the form of server.1, server.2, to server.n. Each of these servers is configured as <hostname>:<quorum port>[:<leader election port>]. For example, server.1=nifi-node1-hostname:2888:3888.

            • The zookeeper.properties file has a property named dataDir which is set to ./state/zookeeper by default. For each node, create a file named myid and place it in this directory. The contents of this file should be the index of the server as specified by server.<number>.  Configuration details can be found in the ‘Admin Guide’ under ‘Embedded ZooKeeper Server’.
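            As a sketch, for a three-node cluster each node's zookeeper.properties might include entries like these (the hostnames are illustrative):

            ```properties
            # One server.N entry per embedded ZooKeeper server:
            # <hostname>:<quorum port>:<leader election port>
            server.1=nifi-node1.example.com:2888:3888
            server.2=nifi-node2.example.com:2888:3888
            server.3=nifi-node3.example.com:2888:3888
            ```

            On nifi-node2, for example, the file ./state/zookeeper/myid would then contain the single line: 2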
          • State Management

            • In the state-management.xml file, set the “Connect String” property to the same list of ZooKeeper host:port pairs used for the nifi.zookeeper.connect.string property value.
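            For example, the zk-provider entry in state-management.xml might look like this (the Connect String value is illustrative):

            ```xml
            <cluster-provider>
                <id>zk-provider</id>
                <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
                <property name="Connect String">nifi-node1.example.com:2181,nifi-node2.example.com:2181,nifi-node3.example.com:2181</property>
                <property name="Root Node">/nifi</property>
                <property name="Session Timeout">10 seconds</property>
                <property name="Access Control">Open</property>
            </cluster-provider>
            ```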

          • Secure Clustered Environment

            • The identities for each node must be specified in the authorizers.xml file. The authorization policies required for the nodes to communicate will then be created during startup.  Details on configuration can be found in the ‘Admin Guide’ under ‘Authorizers.xml Setup’.
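            A sketch of a file-based provider entry in authorizers.xml with node identities added (the DNs shown are illustrative):

            ```xml
            <authorizer>
                <identifier>file-provider</identifier>
                <class>org.apache.nifi.authorization.FileAuthorizer</class>
                <property name="Authorizations File">./conf/authorizations.xml</property>
                <property name="Users File">./conf/users.xml</property>
                <property name="Initial Admin Identity">CN=admin, OU=NIFI</property>
                <property name="Legacy Authorized Users File"></property>
                <property name="Node Identity 1">CN=nifi-node1.example.com, OU=NIFI</property>
                <property name="Node Identity 2">CN=nifi-node2.example.com, OU=NIFI</property>
            </authorizer>
            ```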
    • Secured Zookeeper

      • The username and password mechanism to provide ZooKeeper authentication is no longer supported.  As a result, the “Username” and “Password” properties in the state-management.xml file have been removed.

      • The “Access Control” property in the state-management.xml file is now set to “Open” by default.  It should be changed to “CreatorOnly” when ZooKeeper is secured via Kerberos.
    • QueryDatabaseTable Processor

      • The ‘SQL Pre-processing Strategy’ property has been replaced by the ‘Database Type’ property.  This property sets the type of database for generating database-specific code. Property values include ‘Generic’ (default) and ‘Oracle’ (for custom SQL clauses).
    • TailFile Processor

      • TailFile originally stored state in a local file, then state management was added in 0.5.0 to support reading in the local state and moving it into the state manager.  In 1.0.0, the auto-migration from the old state mechanism has been removed.

      • If upgrading from a pre-0.5.0 version of NiFi, it is suggested to upgrade to a version greater than or equal to 0.5.0 first, and then to 1.0.0, so as not to lose state on existing TailFile processors.
    • LDAP referral strategy ‘IGNORE’ bug fix

      • Errors occurred if the Referral Strategy was set to ‘IGNORE’ because a typo in the code accepted ‘INGORE’ instead.  The login-identity-providers.xml file should now be configured with the intended value of ‘IGNORE’ for the Referral Strategy property.
  • Migrating from 0.7.x to 0.7.3
    • No known migration issues.
  • Migrating from 0.7.x to 0.7.2
    • No known migration issues.
  • Migrating from 0.7.0 to 0.7.1
    • No known migration issues.
  • Migrating from 0.6.x to 0.7.0
    • No known migration issues.
  • Migrating from 0.5.x to 0.6.0
    • ListenUDP has been rewritten, modeled after ListenSyslog, ListenTCP, and ListenRELP.  It offers far superior performance and requires less configuration.  Older ListenUDP processor instances in existing flows will start up invalid.  Simply remove the old properties, verify the desired settings, and start it up.
    • NiFi can now be configured such that access to the REST API goes through Kerberos-enabled authentication.  Details on configuration can be found in the 'Admin Guide' under 'Kerberos Service'.
  • Migrating from 0.4.0 to 0.5.0
    • New framework managed state feature
      • The framework now offers a way to manage state which works both for single nodes and distributed across a cluster.  Numerous processors such as ListFile, FetchFile, GetHTTP, GetHBase, and others have been updated to use this new feature rather than their own hand-rolled approaches.  Each was set up to automatically transition its existing saved state files from the previous approach, so the change should be transparent, but it is worth noting in case any issues arise.
    • Improved encryption and decryption features with password and algorithm safety check
      • We moved to a much more recent version of the underlying BouncyCastle provided algorithms, added new key derivation functions and algorithms, and provided validation of which password and algorithm combinations are considered unsafe.  This validation can be overridden if the user prefers but flows using these will need to be manually updated to reflect this override.
    • PutS3Object supports uploading files greater than 5GB
      • The processor now supports using the Multipart Upload feature of the S3 API. Files which were too large previously failed, but now should work transparently. The processor will attempt to clean up partial uploads and can resume progress if an upload is interrupted (e.g. due to a connection failure or processor being stopped)
  • Migrating from 0.3.0 to 0.4.0
    • Better flow validation for connections
      • NiFi now supports stronger flow validation for processors which require incoming connections and those which do not support incoming connections.  Previously connections were allowed to be set but were ignored if they were not meaningful.  For example, in previous versions of NiFi you could have incoming connections to the GetFile processor which didn't make sense because that processor cannot utilize an incoming connection - it is simply the start of a flow.  With this release the flows utilizing these invalid connections will come up in an invalid state and simply deleting the connection will resolve it.
    • ReplaceText has been refactored to provide a better user experience. Changes include:
      • Rather than always searching with a Regular Expression, users can now choose from several different Replacement Strategies. Not only do these new strategies make configuration much easier but in many cases are much more efficient than using Regular Expressions. You should check any of the existing ReplaceText Processors on your graph to see if they should be updated to use a new strategy. The default strategy is Regex Replace, for backward compatibility purposes.
      • A bug was found and corrected that results in a Regular Expression that matches line endings being able to completely remove line endings. For example, if the Processor is configured to perform replacements Line-by-Line and the search regex is ".*" and the replacement is "hello", we would end up with hellohellohellohello... with no line endings. This should have resulted in the same line ending being maintained between each line. This was corrected.
      • If matching against the Regular Expression ".*" to ensure that the content is always replaced, the Processor should be updated to use the Always Replace replacement strategy.
      • When using the "Regex Replace" replacement strategy, back references may now be referenced from within the Expression Language by referring to them as if they were attributes. For example, if the search regex is ".*(hello|goodbye).*" we can now have a replacement such as "${ '$1':toUpper() }" which will return either HELLO or GOODBYE if either 'hello' or 'goodbye' was found. Note the quotes around the $1 - since $1 is not a typical attribute name, it must be quoted.
  • Migrating from 0.2.0 to 0.3.0
    • Added Reporting Tasks for integrating with Apache Ambari
    • Added support for interacting with Kerberos enabled Hadoop clusters
    • Added Processors that integrate with Amazon Web Services, process images, execute SQL commands, and run Apache Flume sources and sinks, and introduced additional Avro capabilities.
    • Archival of flow file content is now enabled by default with a target of 12 hours of retention or at 50% of total partition usage.
  • Migrating from 0.1.x to 0.2.0
    • For Windows users: the start-nifi.bat and stop-nifi.bat have been removed. Please use run-nifi.bat instead. This was done in order to ensure that all messages from NiFi are properly written to the log files.
  • Migrating from 0.0.x to 0.1.0
    • We have made management of controller services and reporting tasks a first-class feature that is manageable through the REST API and User Interface.  This means that the 'conf/controller-services.xml' and 'conf/reporting-tasks.xml' are no longer going to be read in on startup.  WARNING: You will have to recreate those services and tasks through the UI and you can delete those configuration files.  This change is a violation of our commitment to proper compatibility changes (and what motivated this page in the first place).
  • Migrating from 0.0.1 to 0.0.2
    • There is now a content viewer available allowing you to look at content as it existed in the flow at certain stages, as indexed with provenance.  To enable this you must edit your 'conf/nifi.properties' file.  Edit or add a line that says "nifi.content.viewer.url=/nifi-content-viewer/".