Current state: in review
Discussion thread: here
Currently, users initiate replica reassignment by writing directly to a ZooKeeper node named /admin/reassign_partitions.
As explained in KIP-4, ZooKeeper-based APIs have many problems. For example, there is no way to return a helpful error code if an invalid reassignment is proposed. Adding new features over time is difficult when external tools can access and overwrite Kafka's internal metadata structures. Also, ZooKeeper-based APIs inherently lack security and auditability.
In addition to all the general problems of ZK-based APIs, the current reassignment interface has some problems specific to its particular structure. There is no mechanism provided for aborting a reassignment operation once it has been initiated. Furthermore, the API assumes that reassignment operations are launched as a batch: there is no way to incrementally add a new reassignment operation once the batch has been initiated.
We would like to provide a well-supported AdminClient API that does not suffer from these problems. This API can be used as a foundation on which to build future improvements.
We will add two new APIs: alterPartitionAssignments, and listPartitionReassignments. As the names imply, the alter API modifies partition reassignments, and the list API lists the ones which are ongoing.
Unlike the current ZooKeeper-based API, alterPartitionAssignments can add or remove partition reassignments without interrupting unrelated assignments that are in progress. Partition reassignments can be added or removed by the API on a per-partition basis: there are no global "before" or "after" snapshots.
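The per-partition semantics above can be sketched as follows. This is a hypothetical in-memory model, not Kafka code: each entry in an alter request names a partition and carries either a target replica list (add or modify a reassignment) or null (cancel it), leaving all other partitions untouched.

```python
def alter_partition_assignments(ongoing, changes):
    """ongoing: dict of (topic, partition) -> target replica list.
    changes: dict of (topic, partition) -> target list, or None to cancel."""
    for partition, target in changes.items():
        if target is None:
            ongoing.pop(partition, None)   # cancel only this reassignment
        else:
            ongoing[partition] = target    # add or modify only this one
    return ongoing

ongoing = {("foo", 0): [1, 2, 3]}
# Start a new reassignment; the in-progress ("foo", 0) is unaffected.
alter_partition_assignments(ongoing, {("bar", 0): [4, 5, 6]})
# Cancel ("foo", 0) only.
alter_partition_assignments(ongoing, {("foo", 0): None})
```

Note that there is no point at which the whole set of reassignments is replaced wholesale, which is exactly what distinguishes this API from the znode-based one.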
This is used to create or cancel a set of partition reassignments. It must be sent to the controller.
This operation requires ALTER on CLUSTER.
This is the response from AlterPartitionAssignmentsRequest.
- REQUEST_TIMED_OUT: if the request timed out.
- NOT_CONTROLLER: if the node we sent the request to is not the controller.
- INVALID_REPLICA_ASSIGNMENT: if the specified replica assignment was not valid: for example, if it included negative numbers, repeated numbers, or specified a broker ID that the controller was not aware of.
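The INVALID_REPLICA_ASSIGNMENT cases listed above amount to a simple validation pass. The sketch below is illustrative only; the function name is made up, and `known_brokers` stands in for whatever broker metadata the controller holds.

```python
def validate_replica_assignment(replicas, known_brokers):
    """Return the error code the controller would respond with, per the
    cases above: negative IDs, repeated IDs, or unknown brokers."""
    if any(r < 0 for r in replicas):
        return "INVALID_REPLICA_ASSIGNMENT"   # negative broker ID
    if len(set(replicas)) != len(replicas):
        return "INVALID_REPLICA_ASSIGNMENT"   # repeated broker ID
    if any(r not in known_brokers for r in replicas):
        return "INVALID_REPLICA_ASSIGNMENT"   # broker unknown to controller
    return "NONE"
```

Performing this check in the controller is what lets the new API return a helpful error code, which the znode-based interface could not do.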
This RPC lists the currently active partition reassignments. It must be sent to the controller.
It requires DESCRIBE on CLUSTER.
- REQUEST_TIMED_OUT: if the request timed out.
- NOT_CONTROLLER: if the node we sent the request to is not the controller.
If the top-level error code is set, no responses will be provided.
We will add a new field to LeaderAndIsrRequest named TargetReplicas. This field will contain the replicas which will exist once the partition reassignment is complete.
The partition state znode (/brokers/topics/[topic]/partitions/[partitionId]/state) will now store an array of ints named targetReplicas. If this array is not present or null, there is no ongoing reassignment for this partition. If it is non-null, it contains the target replicas for this partition. Replicas will be removed from targetReplicas as soon as they join the ISR.
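The bookkeeping described above can be sketched as a small pure function, assuming (as the text states) that a target replica is dropped as soon as it joins the ISR and that a null targetReplicas means no reassignment is in progress:

```python
def on_isr_update(target_replicas, isr):
    """Return the targetReplicas value after an ISR change: drop every
    target replica that has joined the ISR, and collapse the empty list
    back to None, meaning the reassignment is complete."""
    if target_replicas is None:
        return None                        # no reassignment in progress
    remaining = [r for r in target_replicas if r not in isr]
    return remaining if remaining else None
```

A reassignment is thus finished exactly when the last target replica catches up, with no separate "done" flag to maintain.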
When the controller starts up, it will load all of the currently active reassignments from the partition state znodes. This will not impose any additional startup overhead, because the controller needs to read these znodes anyway to start up.
The controller will handle ListPartitionReassignments by listing the current active reassignments.
The controller will handle the AlterPartitionAssignmentsRequest RPC by modifying the specified partition states in /brokers/topics/[topic]/partitions/[partitionId]/state. Once these nodes are modified, the controller will send out a LeaderAndIsrRequest as appropriate.
The kafka-reassign-partitions.sh tool will use the new AdminClient APIs to submit the reassignment plan. It will not use the old ZooKeeper APIs any more. In order to contact the admin APIs, the tool will accept a --bootstrap-server argument.
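For reference, the plan the tool submits is expressed in the same JSON layout the existing kafka-reassign-partitions.sh tool already accepts (a "version" field plus a "partitions" array); the topic name and broker IDs below are made-up examples:

```python
import json

# Example reassignment plan in the tool's existing JSON layout; only the
# transport changes (AdminClient via --bootstrap-server instead of ZooKeeper).
plan = {
    "version": 1,
    "partitions": [
        {"topic": "foo", "partition": 0, "replicas": [4, 5, 6]},
    ],
}
print(json.dumps(plan))
```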
When changing the throttle, we will use IncrementalAlterConfigs rather than directly writing to ZooKeeper.
Compatibility, Deprecation, and Migration Plan
For compatibility purposes, we will continue to allow assignments to be submitted through the /admin/reassign_partitions node. Just as with the current code, this will only be possible if there are no current assignments. In other words, the znode has two states: empty (waiting for a write) and non-empty (assignments in progress). Once the znode is non-empty, further writes to it will be ignored.
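The two-state compatibility behavior above can be modeled as a toy class (this is an illustration of the rule, not controller code):

```python
class ReassignZNode:
    """Toy model of /admin/reassign_partitions: a write is accepted only
    when the znode is empty; while assignments are in progress, further
    writes are ignored."""

    def __init__(self):
        self.assignments = None            # empty: waiting for a write

    def write(self, assignments):
        if self.assignments is not None:
            return False                   # non-empty: write ignored
        self.assignments = assignments
        return True

    def complete(self):
        self.assignments = None            # all done: back to empty
```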
Applications using the old API will not be able to cancel in-progress assignments. They will also not be able to monitor the status of in-progress assignments, except by polling the znode to see when it becomes empty, which indicates that no assignments are ongoing. To gain these capabilities, applications will have to be upgraded to the AdminClient API.
The deprecated ZooKeeper-based API will be removed in a future release.
Store the pending replicas in a single ZNode rather than in a per-partition ZNode
We could store the pending replicas in a single ZNode per cluster, rather than putting them beside the ISR in /brokers/topics/[topic]/partitions/[partitionId]/state. However, this ZNode might become very large, which could lead to us running into maximum znode size problems. This is an existing scalability limitation which we would like to solve.
Another reason for storing the targetReplicas state in the ISR state ZNode is that this state belongs in the LeaderAndIsrRequest. By making it clear which replicas are pending and which are existing, we pave the way for future improvements in reassignment. For example, we will be able to distinguish between reassignment traffic and non-reassignment traffic for quota purposes, and provide better metrics.
Reassignment Quotas that only throttle reassignment traffic
Currently, the quotas used for reassignment also apply to all non-ISR traffic. This is undesirable and can lead to problems if nodes fall out of the ISR.
Since this KIP makes brokers aware of which replicas are being used for reassignment, it will be possible to implement quotas that only hit reassignment traffic, and not all non-ISR traffic. This will also obviate the need to manually specify the list of reassigning partitions when setting the per-broker quotas.
Add reassignment metrics
We would like to provide metrics for reassignment. For example, we would like to know how many partitions are being reassigned on a given broker, what write and read bandwidth is being used, etc. Since this KIP clearly separates replicas that are being reassigned from those that are not, and makes this information available on each broker through the LeaderAndIsrRequest, this will be possible in the future.
Improved API for Reassignment Quotas
We would like a better API for managing reassignment quotas: perhaps a new AdminClient function and RPC. This is a relatively minor improvement, since we already have a way to change the configuration via the admin client. However, it would still be helpful.
Batch reassignments in the controller
It would be nice if the controller could internally batch reassignment operations. The controller has more information with which to make decisions than external systems do. If this were implemented, we would want to enrich the List API with more information, such as which reassignments are currently being worked on, and which the controller has decided to defer until later.
This could be added in a compatible way by making the default batch size infinite, so that users that didn't enable controller-side batching would get the current behavior of all reassignments starting immediately.
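The compatibility argument above can be sketched as follows, assuming a hypothetical controller-side batch-size setting that defaults to infinity:

```python
import math

def start_reassignments(pending, batch_size=math.inf):
    """Split pending reassignments into (started now, deferred).
    With the default infinite batch size, everything starts immediately,
    matching the current behavior; a finite size defers the remainder."""
    if batch_size == math.inf:
        return pending, []
    return pending[:int(batch_size)], pending[int(batch_size):]
```

Users who never set the batch size would therefore see no behavior change, while those who opt in would see the deferred reassignments reported by the enriched List API.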