Status
Current state: Accepted (2.2)
Discussion thread: here
JIRA: KAFKA-5692
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
As with KIP-179, the kafka-preferred-replica-election.sh tool takes a --zookeeper option, which means users of the tool must have access to the ZooKeeper cluster backing the Kafka cluster. There is no AdminClient API via which the preferred leader can be elected, so this tool is currently the only way to do the job. This KIP will provide an AdminClient API for electing the preferred leader, add an option to the kafka-preferred-replica-election.sh tool to use this new API, and deprecate the --zookeeper option.
Public Interface
The kafka-preferred-replica-election.sh tool will gain a --bootstrap-server option and the existing --zookeeper option will be deprecated.
The AdminClient will gain a new method:
electPreferredLeaders(Collection<TopicPartition> partitions)
A new network protocol will be added:
ElectPreferredLeadersRequest and ElectPreferredLeadersResponse
Proposed Changes
kafka-preferred-replica-election.sh
The --zookeeper option will be retained and will:
- Cause a deprecation warning to be printed to standard error. The message will say that the --zookeeper option will be removed in a future version and that --bootstrap-server is the replacement option.
- Perform the election via ZooKeeper, as currently.
A new --bootstrap-server option will be added and will:
- Perform the election by calling AdminClient.electPreferredLeaders() on an AdminClient instance bootstrapped from the given --bootstrap-server.
Using both options in the same command line will produce an error message and the tool will exit without doing the intended operation.
It is anticipated that a future version of Kafka would remove support for the --zookeeper option.
When the --bootstrap-server option is used, a further new option will be available:
- admin.config — "Admin client config properties file to pass to the admin client when --bootstrap-server is given."
The --help output of the tool will be updated to explain what the preferred replica *is*, because this is currently not discoverable from the command line tool help, only from the documentation on the Kafka website.
The --help output for the tool will be updated to note that the command is not necessary if the broker is configured with auto.leader.rebalance.enable=true.
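As an illustration, an admin.config properties file might carry client security settings such as the following. The property names are standard Kafka client configurations; the values are purely illustrative:

```properties
# Illustrative admin client config for use with --bootstrap-server
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
request.timeout.ms=30000
```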
AdminClient: electPreferredLeaders()
The following methods will be added to AdminClient:
```java
/**
 * Elect the preferred replica of the given {@code partitions} as leader, or
 * elect the preferred replica for all partitions as leader if the argument to {@code partitions} is null.
 *
 * This operation is supported by brokers with version 1.0 or higher.
 */
ElectPreferredLeadersResult electPreferredLeaders(Collection<TopicPartition> partitions, ElectPreferredLeadersOptions options)

ElectPreferredLeadersResult electPreferredLeaders(Collection<TopicPartition> partitions)
```
Where
```java
public class ElectPreferredLeadersOptions extends AbstractOptions<ElectPreferredLeadersOptions> {
}

public class ElectPreferredLeadersResult {

    // package access constructor

    /**
     * Get the result of the election for the given TopicPartition.
     * If there was not an election triggered for the given TopicPartition, the
     * returned future will complete with an error.
     */
    public KafkaFuture<Void> partitionResult(TopicPartition partition) { ... }

    /**
     * <p>Get the topic partitions for which a leader election was attempted.
     * The presence of a topic partition in the Collection obtained from
     * the returned future does not indicate the election was successful:
     * A partition will appear in this result if an election was attempted
     * even if the election was not successful.</p>
     *
     * <p>This method is provided to discover the partitions when
     * {@link AdminClient#electPreferredLeaders(Collection)} is called
     * with a null {@code partitions} argument.</p>
     */
    public KafkaFuture<Set<TopicPartition>> partitions();

    /**
     * Return a future which succeeds if all the topic elections succeed.
     */
    KafkaFuture<Void> all() { ... }
}
```
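The relationship between partitionResult() and all() can be sketched with standard library futures. This is a hypothetical simplification, not the proposed implementation: CompletableFuture stands in for KafkaFuture, and a String key stands in for TopicPartition.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of how a result object might aggregate
// per-partition election futures (CompletableFuture stands in
// for KafkaFuture, String for TopicPartition).
class ElectionResultSketch {
    private final Map<String, CompletableFuture<Void>> futures = new HashMap<>();

    void addPartition(String topicPartition, CompletableFuture<Void> future) {
        futures.put(topicPartition, future);
    }

    // Future for a single partition's election outcome; completes
    // exceptionally if no election was attempted for that partition.
    CompletableFuture<Void> partitionResult(String topicPartition) {
        CompletableFuture<Void> f = futures.get(topicPartition);
        if (f == null) {
            CompletableFuture<Void> failed = new CompletableFuture<>();
            failed.completeExceptionally(
                new IllegalArgumentException("No election attempted for " + topicPartition));
            return failed;
        }
        return f;
    }

    // Succeeds only when every attempted election succeeds.
    CompletableFuture<Void> all() {
        return CompletableFuture.allOf(futures.values().toArray(new CompletableFuture[0]));
    }
}
```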
A call to electPreferredLeaders() will send an ElectPreferredLeadersRequest to the controller broker.
NetworkProtocol: ElectPreferredLeadersRequest and ElectPreferredLeadersResponse
```
ElectPreferredLeadersRequest => [TopicPartitions] TimeoutMs
  TopicPartitions => Topic PartitionId
    Topic => string
    PartitionId => [int32]
  TimeoutMs => int32
```
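The per-topic grouping this schema implies (a topic name plus an array of partition ids) can be sketched as follows. The pair representation is a hypothetical stand-in for Kafka's TopicPartition class; using a Set for the partition ids also absorbs the duplicate pairs that the request tolerates:

```java
import java.util.Collection;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Hypothetical sketch: group (topic, partition) pairs into the
// Topic => [PartitionId] layout of ElectPreferredLeadersRequest.
class RequestGrouping {
    static Map<String, Set<Integer>> group(Collection<Map.Entry<String, Integer>> partitions) {
        Map<String, Set<Integer>> byTopic = new TreeMap<>();
        for (Map.Entry<String, Integer> tp : partitions) {
            // A Set absorbs duplicate (topic, partition) pairs harmlessly.
            byTopic.computeIfAbsent(tp.getKey(), k -> new TreeSet<>()).add(tp.getValue());
        }
        return byTopic;
    }
}
```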
Where
| Field | Description |
|---|---|
| Topic | The topic name |
| PartitionId | The partitions of this topic whose preferred leader should be elected |
| TimeoutMs | The time in ms to wait for the election to complete. |
The request will require the Alter operation on the Cluster resource, since it is a change that affects the whole cluster.
Note: It is not an error if there is a duplicate (topic, partition)-pair in the request.
Note that an ElectPreferredLeadersRequest must be sent to the controller of the cluster.
```
ElectPreferredLeadersResponse => ThrottleTimeMs [ReplicaElectionResult]
  ThrottleTimeMs => int32
  ReplicaElectionResult => Topic [PartitionResult]
    Topic => string
    PartitionResult => PartitionId ErrorCode ErrorMessage
      PartitionId => int32
      ErrorCode => int16
      ErrorMessage => string
```
Where
| Field | Description |
|---|---|
ThrottleTimeMs | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota |
Topic | The topic name |
PartitionId | The partition id |
ErrorCode | The result error, or zero if there was no error. |
ErrorMessage | The result message, or null if there was no error. |
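A client consuming this response walks the per-topic, per-partition results and separates failures from successes. The sketch below is hypothetical: nested maps stand in for the decoded ReplicaElectionResult and PartitionResult entries, with error code 0 (NONE) meaning success.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: flatten the ReplicaElectionResult entries of an
// ElectPreferredLeadersResponse into a map from "topic-partition" to
// error code, keeping only the failed elections.
class ResponseFlattening {
    static Map<String, Short> collectErrors(Map<String, Map<Integer, Short>> resultsByTopic) {
        Map<String, Short> failures = new HashMap<>();
        for (Map.Entry<String, Map<Integer, Short>> topicEntry : resultsByTopic.entrySet()) {
            for (Map.Entry<Integer, Short> pr : topicEntry.getValue().entrySet()) {
                if (pr.getValue() != 0) { // 0 == NONE, i.e. success
                    failures.put(topicEntry.getKey() + "-" + pr.getKey(), pr.getValue());
                }
            }
        }
        return failures;
    }
}
```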
Anticipated errors:
- UNKNOWN_TOPIC_OR_PARTITION (3) If the topic or partition doesn't exist on any broker in the cluster. Note that this use of the code is not precisely the same as its usual meaning of "This server does not host this topic-partition".
- NOT_CONTROLLER (41) If the request is sent to a broker that is not the controller for the cluster.
- CLUSTER_AUTHORIZATION_FAILED (31) If the user didn't have Alter access on the Cluster resource.
- PREFERRED_LEADER_NOT_AVAILABLE (80) If the preferred leader could not be elected (for example because it is not currently in the ISR).
- NONE (0) The elections were successful.
Broker-side election algorithm
The broker-side handling of ElectPreferredLeadersRequest will be somewhat different than currently:
- On receipt of ElectPreferredLeadersRequest the controller enqueues a PreferredReplicaLeaderElection with the ControllerManager. After the batch of elections has been started, a callback will either return the responses to the client (if they're available immediately, for example because all the leaders were already the preferred ones), or use a purgatory to await the completion of all of the elections.
- Each UpdateMetadataRequest will try to complete the election purgatory.
- Successful or timed-out completion of the PreferredReplicaLeaderElection will result in an ElectPreferredLeadersResponse being returned to the client.
This change means that the ElectPreferredLeadersResponse is sent when the election is actually complete, rather than when the /admin/preferred_replica_election znode has merely been updated. Thus if the election fails, the ElectPreferredLeadersResponse's ErrorCode will provide a reason.
When support for the --zookeeper option is eventually removed, the need for the /admin/preferred_replica_election znode will disappear and consequently the code managing it will be removed.
Compatibility, Deprecation, and Migration Plan
Existing users of the kafka-preferred-replica-election.sh tool will receive a deprecation warning when they use the --zookeeper option. The option will be removed in a future version of Kafka. If this KIP is introduced in version 1.0.0, the removal could happen in 2.0.0.
Rejected Alternatives
One alternative is to do nothing: Let the tool continue to communicate with ZooKeeper directly.
Another alternative is to do exactly this KIP, but without the deprecation of --zookeeper. That would have a higher long-term maintenance burden, and would prevent any future plans to, for example, provide alternatives to ZooKeeper as the cluster-coordination technology.