Migration Guidance
The Apache NiFi community recognizes how important it is to provide reliable releases on a number of levels.  One of the most important aspects is how we consider changes that create new behavior, change existing behavior, and so on.  We're committed to being a responsible community whereby we can continue to evolve the capabilities and features of NiFi while users have a well understood and reliable upgrade path.  We're committed to ensuring that backward compatibility issues are rare, or that their impact is clearly understood, minimized, and communicated.  You can read more about our approach to version management.  If you find that we've violated this commitment in any way, please send us an email and we'll work to resolve it for you and other users.


  • When moving between patch (also known as incremental) version changes such as 0.1.0 to 0.1.1 users should be safe to assume a clean upgrade can occur with no risk of behavior changes other than bug fixes and no compatibility issues.
  • When moving between minor changes such as 0.1.0 to 0.2.0 users can expect new behaviors and bug fixes but backward compatibility should be protected.
  • When moving between major changes such as 0.x.y to 1.0.0 there may be backward compatibility impacting changes, largely focused on removal of deprecated items.

Migrating to 1.17.0

  • This release deprecated support for RocksDBFlowFileRepository. The RocksDB repository is packaged in a separate NAR named nifi-rocksdb-nar, available for download from Maven Central
  • This release removed Hive 1.2 components from the standard release binary. The nifi-hive-nar and nifi-hive-services-api-nar modules can be downloaded from Maven Central

Migrating from 1.15.0 to 1.16.0

  • This release adds HTTP request logging and updates the default logback.xml configuration. Deployments with customized Logback configurations should add a new request appender and request logger configuration to avoid writing HTTP requests to nifi-app.log
  • This release removed support for Elasticsearch 2 components. Version 1.15.3 of nifi-elasticsearch-nar can be downloaded from Maven Central
  • This release removed support for Elasticsearch 5 components. Version 1.15.3 of nifi-elasticsearch-5-nar can be downloaded from Maven Central
  • This release removed support for Kite components. Version 1.15.3 of nifi-kite-nar can be downloaded from Maven Central
  • This release removed support for Lumberjack components. Version 1.15.3 of nifi-lumberjack-nar can be downloaded from Maven Central
  • This release removed support for Kafka components supporting Kafka versions prior to 1.0. Version 1.15.3 of the nifi-kafka-0-x NAR modules can be downloaded from Maven Central
  • This release removed support for InfluxDB from the standard NiFi binary download. The nifi-influxdb-nar can be downloaded from Maven Central
  • This release removed the nifi-processor-utils JAR and refactored classes into several new modules under nifi-extension-utils. When rebuilding custom components to depend on 1.16.0 libraries, it will be necessary to remove dependencies on nifi-processor-utils. Custom NAR bundles, already compiled using versions prior to 1.16.0, can continue to be used in runtime deployments
  • This release removed support for MySQL 5.6/5.7 and Postgres 9.x for NiFi Registry's database; supported databases are now H2, Postgres 10.x - 14.x, and MySQL 8.x.
  • NOTE: A bug was discovered after the release (NIFI-9836) that causes NiFi Registry to only work with H2. The issue is addressed in main and will be resolved in 1.16.1.
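For custom bundles affected by the nifi-processor-utils removal above, the first step when rebuilding against 1.16.0 is deleting the old dependency from the bundle's pom.xml (a sketch; which replacement module(s) under nifi-extension-utils to add instead depends on which utility classes the bundle actually uses):

```xml
<!-- Delete this dependency when rebuilding a custom bundle against 1.16.0+;
     pull in the specific nifi-extension-utils module(s) you need instead. -->
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-processor-utils</artifactId>
</dependency>
```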

Migrating from 1.14.0 to 1.15.0

  • NiFi requires Java 8 Update 251 or later in order to support the new default JSON Web Token signature algorithm PS512, which uses RSASSA-PSS with SHA-512 and MGF1 with SHA-512

Migrating from 1.13.x to 1.14.0

  • NiFi is now secure by default. The default port has been changed from 8080 to 8443 and a certificate is automatically generated. Upon the first start, unless the configuration is changed, a username and password will be automatically generated and written to the logs. The username and password may be changed at any time by running bin/ set-single-user-credentials <username> <password>
  • The sensitive properties key property nifi.sensitive.props.key is now required. Previously, a default value of "nififtw!" was used, but that meant that if the default was never changed, anyone who gained access to the flow.xml.gz (whether maliciously or not) could decrypt sensitive properties. Now, if no value is set, one will be randomly generated upon NiFi start and written to the configuration. The value can be set by running bin/ set-sensitive-properties-key <new key>. This will automatically look up the current sensitive properties key (using the default if the property is not set), decrypt any sensitive values, re-encrypt them with the new key, and write out the configuration with the newly encrypted values. This can be used to set the initial key or to change an existing one. It is important to note that if the sensitive properties key is lost, NiFi will not be able to load the flow.xml.gz and will fail upon startup. If this occurs, manual intervention will be required to remove any sensitive properties from the flow.xml.gz, and those sensitive values must then be re-configured after starting NiFi. For this reason, it is highly recommended that a sensitive properties key be set explicitly.
  • The nifi-storm-spout module was removed from nifi-external and will no longer be provided in source or binary form. Users requiring the module can obtain the source and binary from 1.13.x and prior releases.
  • Processors were never intended to be scheduled to run on Primary Node Only unless they were "source" processors; doing so can cause data to sit in the flow and never be processed. That rule, however, was never enforced until now. In 1.14.0, any Processor that is scheduled to run on Primary Node Only and also has incoming connections will be made invalid. Please ensure that any Processor that receives data from another Processor is scheduled to run on All Nodes (this is set by going to the Processor Configuration and navigating to the Scheduling tab).

Migrating from 1.13.x to 1.13.1 

  • Removed the following NAR(s) from the convenience build.  They are still built and made available in Maven repositories, so you can add them to your deployment lib folder and use them if you like.  They include: nifi-grpc-nar

Migrating from 1.12.x to 1.13.x

  • HTTP access to NiFi is now configured by default to accept local connections only.  If you want to allow broader HTTP access for some reason and you understand the security implications, you can still control that as always by changing the corresponding property in the configuration. That said, please take the time to configure proper HTTPS.  We offer detailed instructions and tooling to assist.
  • Removed the following NAR(s) from the convenience build.  They are still built and made available in Maven repositories, so you can add them to your deployment lib folder and use them if you like.  They include: nifi-livy-nar, nifi-livy-controller-service-api-nar, nifi-kafka-0-11-nar, nifi-beats-nar, nifi-ignite-nar
  • Both embedded and external ZooKeeper connections can now be secured with TLS. The administration guide contains configuration examples to enable this feature. For embedded ZooKeeper, this requires setting the secureClientPort value in the embedded ZooKeeper configuration. There are also new properties defined in the admin guide, including one that tells NiFi to use a secure client to access a secured ZooKeeper, and properties to define separate key/trust stores if required. By default the standard key/trust store values will be used to establish trust, but if the ZooKeeper-specific values are defined, those will be used instead.
  • The handling of the X-ProxiedEntitiesChain header for secure proxied requests is now more strict, requiring the value to be wrapped in outermost < and >. For example, whereas a value of %{SSL_CLIENT_S_DN} (Apache httpd) or $ssl_client_s_dn (NGINX) was previously valid, <%{SSL_CLIENT_S_DN}> or <$ssl_client_s_dn> must now be used.
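For example, with an NGINX reverse proxy the header would now be set with the wrapping brackets included (a sketch; only the header line is shown, and the rest of the proxy configuration is omitted):

```nginx
# Wrap the client DN in the now-required angle brackets before forwarding.
proxy_set_header X-ProxiedEntitiesChain "<$ssl_client_s_dn>";
```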

Migrating from 1.12.0 to 1.12.1

  • Storage container auto-creation was added to PutAzureBlobStorage in NIFI-6913, causing authorization failures when using SAS tokens without container list or create permissions. The fix for NIFI-7794 makes this configurable, but reverts to the previous default -- not creating containers. If you were using 1.12.0 with storage container auto-creation, you will need to change the new Create Container property to true to enable the behavior.

Migrating from 1.x.x to 1.12.x

  • PutKudu processor - NIFI-6551 fixes flows writing to Kudu timestamp (UNIXTIME_MICROS) columns via timestamp or date fields, but this change could break PutKudu processors that are writing to Kudu timestamp (UNIXTIME_MICROS) columns via numeric fields. Before this change, flows would often multiply millisecond values by 1000 to write microsecond values to Kudu. On upgrade this multiplication should be removed and milliseconds should be sent.
  • HandleHttpRequest was updated to no longer write the 'http.param.*' attributes. These attributes were undocumented and therefore not part of its 'contract'. They were removed because, in the case of query parameters, they were duplicative, and in the case of multipart/form data, they were both duplicative of FlowFile content and dangerous, as they could be extremely large. Multipart/form data was never intended to be included as attributes but was inadvertently included; this is no longer the case. If multipart/form data is needed as an attribute, the appropriate values can be captured via the ExtractText processor.
  • NARs for Kafka 0.9 and Kafka 0.10 have been removed from the convenience binary. You can still get them from the various artifact repositories and use them in your flows, but we cannot bundle them by default due to space limitations.
  • The SSLContextService interface was updated as part of NIFI-7407.  Custom NARs that depend on nifi-standard-services-api-nar may need to be rebuilt against version 1.12.0 to ensure compatibility.  Users of those custom NARs could also just pull in the appropriate service API and service implementation NARs in their environment.
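The PutKudu timestamp change above amounts to removing a millisecond-to-microsecond multiplication from the flow. A plain-arithmetic sketch (the field name is hypothetical):

```python
record = {"event_time_ms": 1_600_000_000_123}  # epoch milliseconds from upstream

# Pre-1.12 workaround: flows often multiplied by 1000 so Kudu's
# UNIXTIME_MICROS column received microseconds.
legacy_value = record["event_time_ms"] * 1000

# 1.12+ (NIFI-6551): send milliseconds as-is; the processor handles conversion.
new_value = record["event_time_ms"]

print(legacy_value)  # 1600000000123000
print(new_value)     # 1600000000123
```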

Migrating from 1.x.x to 1.11.x

  • CompressContent has been updated to specify the compression level when using XZ-LZMA2 compression whereas before it was unspecified.  The default is level 1 now.  If you wish to see higher levels of compression (and CPU effort) you might wish to set this value higher. CompressContent had previously only used that property for GZIP compression.

Migrating from 1.x.x to 1.10.0 

  • The RPM creation mechanism appears broken for both Java 8 and Java 11 binaries if you want to build those yourself.  This should be resolved in a later release.
  • We've removed the following NARs from the default convenience binary: kite-nar, kafka-0-8-nar, flume-nar, media-nar, druid-controller-service-api-nar, druid-nar, other-graph-services-nar.  You can still get them from the various artifact repositories and use them in your flows, but we cannot bundle them by default due to space limitations.
  • The "Auto-Create Partitions" property was removed from the PutHive3Streaming processor, causing existing instances of this processor to become invalid. The property would appear as an unsupported user-defined property and must be removed to return the processor to a valid state.
  • The RecordSetWriter interface added a non-default method that adds a Map<String, String> argument to the createWriter() method. This will cause ScriptedRecordSetWriter scripts to fail because of the missing method. Adding the Map argument to the method fixes the issue (NIFI-6318).
  • The ZooKeeper dependency that NiFi uses for state management and cluster elections was upgraded to v3.5.5. From v3.5.x onwards, ZooKeeper changed its configuration file format, and as a result NiFi users running an existing embedded ZooKeeper will need to adjust their existing configuration accordingly.
    For new deployments of the 1.10.0 release onwards, NiFi will be packaged with an updated template file.

    To update an existing file, however, edit the conf/ file:
    1. Remove the clientPort=2181 line (or whatever your port number may be)
    2. Add the client port to the end of the server string, e.g.: server.1=localhost:2888:3888;2181
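The two steps above can be sketched as a small transformation of the embedded ZooKeeper configuration (an illustration; the port and host values are examples):

```python
def migrate_zookeeper_config(text: str) -> str:
    """Move the clientPort setting onto each server line (ZooKeeper 3.5+ format)."""
    lines = text.splitlines()
    # Step 1: find and drop the clientPort line.
    client_port = None
    kept = []
    for line in lines:
        if line.startswith("clientPort="):
            client_port = line.split("=", 1)[1].strip()
        else:
            kept.append(line)
    # Step 2: append ;<port> to each server.N entry.
    if client_port:
        kept = [
            f"{line};{client_port}" if line.startswith("server.") else line
            for line in kept
        ]
    return "\n".join(kept)

before = "clientPort=2181\nserver.1=localhost:2888:3888"
print(migrate_zookeeper_config(before))
# server.1=localhost:2888:3888;2181
```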

Migrating from 1.x.x to 1.9.0 

  • Schema validation changed for the ConvertAvroSchema, ConvertCSVToAvro, and ConvertJSONToAvro processors. If using a URI to locate the schema file, a scheme such as file://, hdfs://, etc. must be specified; otherwise the processor will be considered invalid.

Migrating from 1.x.x to 1.8.0

  • The newly added NiFi cluster node load balancing feature uses a dedicated port, which defaults to 6342. You may need to update firewall configurations to allow communication between NiFi nodes on this port. Here is the list of NiFi ports.
  • MergeRecord (available since 1.4.0) behavior changes if 'Minimum Number of Records' (defaults to 1) is configured to be less than 'Maximum Number of Records' (defaults to 1000), because NIFI-5514 corrected MergeRecord to honor 'Minimum Number of Records'. Before 1.8.0, MergeRecord merged incoming FlowFiles up to the configured maximum conditions (Number of Records, Bin Size, and Bin Age) regardless of 'Minimum Number of Records'. Since 1.8.0, MergeRecord can finish merging once 'Minimum Number of Records' is reached; as a result, the processor may produce more small outgoing FlowFiles than older versions did. Please increase the minimum number of records (up to the value of Maximum Number of Records) to make the processor merge more records.
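The behavioral difference can be sketched as a simplified binning loop (an illustration only, not the actual MergeRecord implementation):

```python
def merge_counts(record_counts, min_records, max_records, honor_minimum):
    """Simulate how many records end up in each outgoing merged FlowFile."""
    bins, current = [], 0
    for n in record_counts:
        current += n
        # Since 1.8.0 (NIFI-5514) a bin may be completed once the minimum is
        # reached; before 1.8.0 only the maximum conditions completed a bin.
        threshold = min_records if honor_minimum else max_records
        if current >= threshold:
            bins.append(current)
            current = 0
    if current:
        bins.append(current)  # leftover bin flushed by bin age, etc.
    return bins

incoming = [1] * 10  # ten single-record FlowFiles

# 1.8.0+ with the default Minimum Number of Records = 1: many small merges.
print(merge_counts(incoming, min_records=1, max_records=1000, honor_minimum=True))
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

# Pre-1.8.0 behavior: records accumulate toward the maximum conditions.
print(merge_counts(incoming, min_records=1, max_records=1000, honor_minimum=False))
# [10]
```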

Migrating from 1.x.x to 1.7.1

  • Apache NiFi 1.7.1 resolves an issue where secure clusters which relied on wildcard certificates could encounter a certificate path validation error. As documented in the Administration Guide, wildcard certificates are not officially supported and all certificates should have a unique entry in the Distinguished Name (DN) field and Subject Alternative Names (SAN) field matching the hostname of the node. This release fixes a regression in 1.7.0 but should not be confused with intentional or forward support for wildcard certificates. 

  • Hostname validation has become more strict in 1.7.1. In previous versions, NiFi supported certificates which did not contain a Subject Alternative Name (SAN) field matching the node hostname. In accordance with RFC 6125, all certificates should include an entry in the SAN array which matches the node hostname for hostname verification.

Migrating from 1.6.0 to 1.7.0

  • Some component properties are renamed:
    • Old properties become unsupported user-defined properties (dynamic properties), and those components become invalid.
    • To make such a component valid, migrate values from the old properties to the new ones, then remove the old properties.
    • The list of changed property names:
      • ReportLineageToAtlas by NIFI-4980
        • kafka-kerberos-service-name-kafka => kafka-kerberos-service-name
      • GetCouchbaseKey and PutCouchbaseKey by NIFI-5257
        • Couchbase Cluster Controller Service => cluster-controller-service
        • Bucket Name => bucket-name
        • Document Type => document-type
        • Document Id => document-id
        • Persist To => persist-to
        • Replicate To => replicate-to 
  • Apache NiFi 1.7.0 introduces new policies for controlling access to provenance events from a component. Previously, NiFi leveraged the data policies for a component to control access to provenance events, but these would also allow a user to download FlowFile attributes and content through provenance events and in queues on outgoing connections. Oftentimes, granting this much access was too much for the operators of the NiFi instance; however, withholding this access made it difficult for operators to understand the dataflow and track what was happening. By introducing a new 'view provenance events' policy for controlling access to the event itself, operators can better understand the dataflow and track what is happening while administrators still maintain tight control of the FlowFile attributes and content. When upgrading to Apache NiFi 1.7.0, these new policies will not exist. As a result, users who could previously access provenance events through the data policies described above will no longer have access to these provenance events. An administrator will need to create new provenance policies and assign the users in question before they can resume access.

Migrating from 1.5.0 to 1.6.0

  • PutMongo can fail in insert mode; this will be fixed in the next release. In the meantime, you can set query keys for insert mode; even though they will be ignored, this works around the validation bug.

Migrating from 1.4.x to 1.5.0

  • No known migration issues.

Migrating from 1.4.0 to 1.5.0

  • AWS components for NiFi have been reorganized into sub-projects nifi-aws-service-api, nifi-aws-abstract-processors, and nifi-aws-processors to separate service interfaces from concrete implementation classes.  Custom AWS components should be rebuilt to target NiFi 1.5.0.  For bundles that only implement controller service interfaces (AWSCredentialsProviderService), it is recommended that the NAR dependency be changed to nifi-aws-service-api-nar.  Custom AWS components built for earlier versions of NiFi can continue to be used if the matching version of the nifi-aws-nar is deployed.

  • ExecuteStreamCommand now has a failure relationship, which will need to be routed somewhere or auto-terminated; otherwise existing instances of the processor will be invalid (NIFI-4559)

Migrating from 1.3.0 to 1.4.0

  • A restricted implementation of the SSLContextService has been added, StandardRestrictedSSLContextService. It provides the ability to configure keystore and/or truststore properties once and reuse that configuration throughout the application, but only allows a restricted ("modern") set of TLS/SSL protocols to be chosen (as of 1.4.0, no SSL protocols are supported, only TLS v1.2). The set of protocols selectable will evolve over time as new protocols emerge and older protocols are deprecated. The generic "TLS" entry is also supported and will automatically select the best available option without user intervention (this is the recommended setting). This service is recommended over StandardSSLContextService if a component doesn't expect to communicate with legacy systems since it is unlikely that legacy systems will support these protocols. 

    • The following Listen* processors now require a StandardRestrictedSSLContextService (previously requiring StandardSSLContextService): ListenBeats, ListenHTTP, ListenLumberjack, ListenRELP, ListenSMTP, ListenSyslog, ListenTCP, ListenTCPRecord
    • ListenGRPC is a new processor for 1.4.0, and requires StandardRestrictedSSLContextService
    • Dataflow managers will need to instantiate a new instance of StandardRestrictedSSLContextService and associate it with any of the above components in an existing flow
  • An update to the Authorization framework has introduced more granular configuration options in the authorizers.xml file. These options are detailed in that file and in the Administration Guide. This file defines the available Authorizers for NiFi to utilize in a secure configuration. A property in the node configuration defines which Authorizer to use. By default, both the authorizers.xml file and that property have been updated to utilize the new authorizer configuration. However, existing authorizers.xml configurations are still valid; in this case, it's important to ensure that the value of the property is set to the identifier of the Authorizer from your existing authorizers.xml.
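As a sketch of the relationship described above (the identifier, class, and property names here follow the 1.x file-based defaults and should be verified against the comments in your own authorizers.xml):

```xml
<authorizers>
    <!-- The identifier below is what the authorizer property in the node
         configuration must reference. -->
    <authorizer>
        <identifier>file-provider</identifier>
        <class>org.apache.nifi.authorization.FileAuthorizer</class>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Users File">./conf/users.xml</property>
    </authorizer>
</authorizers>
```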

Migrating from 1.2.0 to 1.3.0

  • A new property was added to indicate the maximum number of threads that should be available for cluster request replication. The new property is nifi.cluster.node.protocol.max.threads and defaults to 50. The existing property nifi.cluster.node.protocol.threads, which previously set the fixed size of the thread pool, now serves as the initial size and still defaults to 10. The thread pool will now add and remove threads as necessary.
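In configuration terms, the two properties above look like this (values shown are the defaults stated above):

```properties
# Initial size of the cluster request replication thread pool (existing property)
nifi.cluster.node.protocol.threads=10
# New in 1.3.0: upper bound as the pool grows and shrinks on demand
nifi.cluster.node.protocol.max.threads=50
```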

Migrating from 1.1.x to 1.2.0

  • With the introduction of component versioning, custom NARs will show up as "unversioned" until they are rebuilt with the latest NAR Maven Plugin (1.2.0). When deploying a rebuilt custom NAR, make sure to remove all previous versions of the NAR from the lib directory.
  • The nifi-documentation JAR is no longer directly in the lib directory and is now part of the framework NAR. Make sure there is no left-over version of an old nifi-documentation JAR in the lib directory after upgrading.
  • Jetty has been upgraded to version 9.4.2.  As a result, TLS v1/1.1 is no longer supported.  Users or clients connecting to NiFi through the UI or API are now protected with TLS v1.2.  Any custom code that consumes the NiFi API needs to use TLS v1.2 or later.

Migrating from 1.1.x to 1.1.2

  • No known migration issues.

Migrating from 1.1.0 to 1.1.1

  • No known migration issues.

Migrating from 1.0.x to 1.1.0

  • NiFi now supports the concept of restricted components.  These are processors, controller services, and reporting tasks that allow an authorized user to execute unsanitized code or to access and alter files accessible by the NiFi user on the system on which NiFi is running.  These components are therefore tagged by the developer as restricted, and when running NiFi in secure mode an administrator must grant each user access to the policy allowing restricted component access.  This is explained in greater detail in the admin, user, and developer guides.  When you upgrade, you will need to give your users access to this policy before they can use these components.
  • During cluster startup, we have to determine which flow is going to be considered the correct flow, based on which nodes attempt to join the cluster.  To help speed up this process there are two new properties you should consider: "nifi.cluster.flow.election.max.wait.time" and "nifi.cluster.flow.election.max.candidates".  You can read more about this in the admin guide under "Flow Election".

Migrating from 1.0.0 to 1.0.1

  • No known migration issues.

Migrating from 0.7.x to 1.0.0

  • Java 8 is now the minimum JRE/JDK supported
    • Before NiFi 1.0 release we supported a minimum of Java 7.  We've now moved to Java 8.
  • Kerberos System Properties

    • SPNEGO and service principals for Kerberos are now established via separate system properties.
      • New SPNEGO properties
        • nifi.kerberos.spnego.principal
        • nifi.kerberos.spnego.keytab.location
        • nifi.kerberos.spnego.authentication.expiration
      • New service properties
        • nifi.kerberos.service.principal
        • nifi.kerberos.service.keytab.location
      • Removed properties
        • nifi.kerberos.keytab.location
        • nifi.kerberos.authentication.expiration
  • DBCPConnectionPool Service
    • The “Database Driver Jar Url” property has been replaced by the “Database Driver Location(s)” property which accepts a comma-separated list of URLs or local files/folders containing the driver JAR.
    • Existing processors that reference this service will be invalid until the new property is configured.
  • MonitorDiskUsage

    • This standard reporting task has been simplified to let the user specify a logical name, a directory and a threshold to monitor.  Previously it was tightly coupled to the internal flow file and content repositories in a manner that didn't align to the pluggable nature of those repositories.  The new approach gives the user total control over what they want it to monitor.
  • Connection/Relationship Default Back Pressure Settings
    • It used to be that, by default, no backpressure settings were supplied.  This too often meant people learned the value of backpressure the hard way.  New connections will now have default backpressure thresholds of 10,000 FlowFiles and 1 GB of data.
  • Multi-tenant Authorization Model

    • Authority Provider model has been replaced by a Multi-tenant Authorization model. Access privileges are now defined by policies that can be applied system-wide or to individual components. Details can be found in the ‘Admin Guide’ under ‘Multi-tenant Authorization’.

    • The system properties nifi.authority.provider.configuration.file and have been replaced by nifi.authorizer.configuration.file and, respectively. Details on configuration can be found in the 'Admin Guide' under 'Authorizer Configuration'.

    • 0.7.0 authorized users/roles can be converted to the new authorization model. An existing authorized-users.xml file can be referenced in the authorizers.xml "Legacy Authorized Users File" property to automatically generate users and authorizations. Details on configuration can be found in the 'Admin Guide' under 'Authorizers.xml Setup'.

    • Controller Services that will be used by Processors must be defined in the Operate Palette of the root process group or sub process group where they will be used.  Controller Services defined in the Global - Controller Settings window can only be used by Reporting Tasks, not by any Processors.
  • HTTP(S) Site-to-Site

    • HTTP(S) protocol is now supported in Site-to-Site as an underlying transport protocol.

    • HTTP(S) protocol is enabled by default (nifi.remote.input.http.enabled=true). Configuration details can be found in the 'Site-to-Site Properties' section of the 'Admin Guide'. Of note:

      • With both socket and HTTP protocols supported, the related socket input properties have been renamed accordingly

      • Secure Site-to-Site input is now set to false by default

  • Zero-Master Clustering
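The new 1.0.0 Kerberos and Site-to-Site properties listed above can be summarized as a configuration fragment (the principal and keytab values are hypothetical examples, not defaults):

```properties
# New SPNEGO properties
nifi.kerberos.spnego.principal=HTTP/nifi.example.com@EXAMPLE.COM
nifi.kerberos.spnego.keytab.location=/etc/security/keytabs/spnego.service.keytab
nifi.kerberos.spnego.authentication.expiration=12 hours

# New service properties
nifi.kerberos.service.principal=nifi/nifi.example.com@EXAMPLE.COM
nifi.kerberos.service.keytab.location=/etc/security/keytabs/nifi.service.keytab

# HTTP(S) Site-to-Site is enabled by default
nifi.remote.input.http.enabled=true
```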

Migrating from 0.7.x to 0.7.4

  • No known migration issues.

Migrating from 0.7.x to 0.7.3

  • No known migration issues.

Migrating from 0.7.x to 0.7.2

  • No known migration issues.

Migrating from 0.7.0 to 0.7.1

  • No known migration issues.

Migrating from 0.6.x to 0.7.0

  • No known migration issues.

Migrating from 0.5.x to 0.6.0

  • ListenUDP was rewritten, modeled after ListenSyslog, ListenTCP, and ListenRELP.  It offers far superior performance and requires less configuration.  Older ListenUDP processor instances in existing flows will start up invalid.  Simply remove the old properties, verify the desired settings, and start it up.
  • NiFi can now be configured such that access to the REST API goes through Kerberos enabled authentication.  Details on configuration can be found in the 'Admin Guide' under 'Kerberos Service'

Migrating from 0.4.0 to 0.5.0

  • New framework managed state feature
    • The framework now offers a way to manage state which works both for single nodes as well as being distributed across a cluster.  Numerous processors such as ListFile, FetchFile, GetHTTP, GetHBase, and others have been updated to use this new feature rather than their own hand-rolled approaches.  They were each set up to automatically transition the existing saved state files from their previous approach, so the change should be transparent, but it is worth noting in case any issues arise.
  • Improved encryption and decryption features with password and algorithm safety check
    • We moved to a much more recent version of the underlying BouncyCastle-provided algorithms, added new key derivation functions and algorithms, and provided validation of which password and algorithm combinations are considered unsafe.  This validation can be overridden if the user prefers, but flows using these combinations will need to be manually updated to reflect this override.
  • PutS3Object supports uploading files greater than 5GB
      • The processor now supports using the Multipart Upload feature of the S3 API. Files which were too large previously failed, but now should work transparently. The processor will attempt to clean up partial uploads and can resume progress if an upload is interrupted (e.g. due to a connection failure or processor being stopped)

Migrating from 0.3.0 to 0.4.0

  • Better flow validation for connections
    • NiFi now supports stronger flow validation for processors which require incoming connections and those which do not support incoming connections.  Previously connections were allowed to be set but were ignored if they were not meaningful.  For example, in previous versions of NiFi you could have incoming connections to the GetFile processor which didn't make sense because that processor cannot utilize an incoming connection - it is simply the start of a flow.  With this release the flows utilizing these invalid connections will come up in an invalid state and simply deleting the connection will resolve it.
  • ReplaceText has been refactored to provide a better user experience. Changes include:
    • Rather than always searching with a Regular Expression, users can now choose from several different Replacement Strategies. Not only do these new strategies make configuration much easier but in many cases are much more efficient than using Regular Expressions. You should check any of the existing ReplaceText Processors on your graph to see if they should be updated to use a new strategy. The default strategy is Regex Replace, for backward compatibility purposes.
    • A bug was found and corrected that results in a Regular Expression that matches line endings being able to completely remove line endings. For example, if the Processor is configured to perform replacements Line-by-Line and the search regex is ".*" and the replacement is "hello", we would end up with hellohellohellohello... with no line endings. This should have resulted in the same line ending being maintained between each line. This was corrected.
    • If matching against the Regular Expression ".*" to ensure that the content is always replaced, the Processor should be updated to use the Always Replace replacement strategy.
    • When using the "Regex Replace" replacement strategy, back references may now be referenced from within the Expression Language by referring to them as if they were attributes. For example, if the search regex is ".*(hello|goodbye).*" we can now have a replacement such as "${ '$1':toUpper() }" which will return either HELLO or GOODBYE if either 'hello' or 'goodbye' was found. Note the quotes around the $1 - since $1 is not a typical attribute name, it must be quoted.
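The back-reference behavior described above can be checked outside NiFi (a Python sketch of the equivalent replacement, not NiFi's Expression Language itself):

```python
import re

# Mirrors a Regex Replace of ".*(hello|goodbye).*" with "${ '$1':toUpper() }":
# the whole match is replaced by the uppercased first capture group.
def replace_with_upper_group(text: str) -> str:
    return re.sub(r".*(hello|goodbye).*", lambda m: m.group(1).upper(), text)

print(replace_with_upper_group("well hello there"))  # HELLO
print(replace_with_upper_group("goodbye for now"))   # GOODBYE
```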

Migrating from 0.2.0 to 0.3.0

  • Added Reporting Tasks for integrating with Apache Ambari
  • Added support for interacting with Kerberos enabled Hadoop clusters
  • Added Processors that integrate with Amazon Web Services, process images, execute SQL commands, and run Apache Flume sources and sinks, and introduced additional Avro capabilities.
  • Archival of flow file content is now enabled by default with a target of 12 hours of retention or at 50% of total partition usage.

Migrating from 0.1.x to 0.2.0

  • For Windows users: the start-nifi.bat and stop-nifi.bat have been removed. Please use run-nifi.bat instead. This was done in order to ensure that all messages from NiFi are properly written to the log files.

Migrating from 0.0.x to 0.1.0

  • We have made management of controller services and reporting tasks a first-class feature that is manageable through the REST API and User Interface.  This means that the 'conf/controller-services.xml' and 'conf/reporting-tasks.xml' files are no longer read in on startup.  WARNING: You will have to recreate those services and tasks through the UI, and you can then delete those configuration files.  This change is a violation of our commitment to proper compatibility handling (and is what motivated this page in the first place).

Migrating from 0.0.1 to 0.0.2

  • There is now a content viewer available, allowing you to look at content as it existed in the flow at certain stages, as indexed with provenance.  To enable this you must edit your 'conf/' file: edit or add a line that says nifi.content.viewer.url=/nifi-content-viewer/