KIP-183 - Change PreferredReplicaLeaderElectionCommand to use AdminClient


Status

Current state: Adopted

Discussion thread: here

JIRA: KAFKA-5692

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation


As with KIP-179, the kafka-preferred-replica-election.sh tool takes a --zookeeper option, which means users of the tool must have access to the ZooKeeper cluster backing the Kafka cluster. There is currently no AdminClient API for electing the preferred leader, so this tool is the only way to trigger such an election. This KIP will provide an AdminClient API for electing the preferred leader, add an option to the kafka-preferred-replica-election.sh tool to use this new API, and deprecate the --zookeeper option.

Public Interfaces


The kafka-preferred-replica-election.sh tool will gain a --bootstrap-server option and the existing --zookeeper option will be deprecated.

The AdminClient will gain a new method:

  • electPreferredLeaders(Collection<TopicPartition> partitions)

A new network protocol will be added:

  • ElectPreferredLeadersRequest and ElectPreferredLeadersResponse

Proposed Changes


kafka-preferred-replica-election.sh

The --zookeeper option will be retained and will:

  1. Cause a deprecation warning to be printed to standard error. The message will say that the --zookeeper option will be removed in a future version and that --bootstrap-server is the replacement option.
  2. Perform the election via ZooKeeper, as currently.

A new --bootstrap-server option will be added and will:

  1. Perform the election by calling AdminClient.electPreferredLeaders() on an AdminClient instance bootstrapped from the broker(s) given via the --bootstrap-server option.

Using both options in the same command line will produce an error message and the tool will exit without doing the intended operation.
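The mutual-exclusion check described above could be sketched as follows. This is an illustrative, stdlib-only sketch, not the actual tool's code; the helper name validateOptions and the exact messages are hypothetical.

```java
import java.util.Arrays;

public class OptionCheck {
    /** Returns an error message if both flags are present, otherwise null. */
    public static String validateOptions(String[] args) {
        boolean hasZk = Arrays.asList(args).contains("--zookeeper");
        boolean hasBootstrap = Arrays.asList(args).contains("--bootstrap-server");
        if (hasZk && hasBootstrap) {
            // Both options given: report the conflict and do not proceed.
            return "Only one of --zookeeper or --bootstrap-server may be given";
        }
        if (hasZk) {
            // Deprecation warning goes to standard error, as the KIP specifies.
            System.err.println("Warning: --zookeeper is deprecated and will be "
                + "removed in a future version; use --bootstrap-server instead");
        }
        return null;
    }

    public static void main(String[] args) {
        String error = validateOptions(args);
        if (error != null) {
            System.err.println(error);
            System.exit(1);
        }
    }
}
```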

It is anticipated that a future version of Kafka would remove support for the --zookeeper option.

The --help output of the tool will be updated to explain what the preferred replica *is*, because this is currently not discoverable from the command line tool help, only from the documentation on the Kafka website.

The --help output for the tool will be updated to note that the command is not necessary if the broker is configured with auto.leader.rebalance.enable=true.

AdminClient: electPreferredLeaders()

The following methods will be added to AdminClient:

/**
 * Elect the preferred replica of the given {@code partitions} as leader, or
 * elect the preferred replica for all partitions as leader if the argument to {@code partitions} is null.
 *
 * This operation is supported by brokers with version 1.0 or higher.
 */
ElectPreferredLeadersResult electPreferredLeaders(Collection<TopicPartition> partitions, ElectPreferredLeadersOptions options)
ElectPreferredLeadersResult electPreferredLeaders(Collection<TopicPartition> partitions)

Where

public class ElectPreferredLeadersOptions {
    public ElectPreferredLeadersOptions() { ... }
    /**
     * The request timeout in milliseconds for this operation or {@code null} if the default request timeout for the
     * AdminClient should be used.
     */
    public Integer timeoutMs() { ... }
    /**
     * Set the request timeout in milliseconds for this operation or {@code null} if the default request timeout for the
     * AdminClient should be used.
     */
    public ElectPreferredLeadersOptions timeoutMs(Integer timeoutMs) { ... }
}
public class ElectPreferredLeadersResult {
    // package access constructor

    /**
     * Get the result of the election for the given TopicPartition.
     * If there was not an election triggered for the given TopicPartition, the
     * returned future will complete with an error.
     */
    public KafkaFuture<Void> partitionResult(TopicPartition partition) { ... }

    /**
     * <p>Get the topic partitions for which a leader election was attempted.
     * The presence of a topic partition in the Collection obtained from 
     * the returned future does not indicate the election was successful: 
     * A partition will appear in this result if an election was attempted
     * even if the election was not successful.</p>
     *
     * <p>This method is provided to discover the partitions when
     * {@link AdminClient#electPreferredLeaders(Collection)} is called 
     * with a null {@code partitions} argument.</p>
     */
    public KafkaFuture<Collection<TopicPartition>> partitions() { ... }

    /**
     * Return a future which succeeds if and only if all the elections succeeded.
     */
    public KafkaFuture<Void> all() { ... }
 }
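The per-partition future semantics of the result class can be modelled with a small stdlib-only sketch, using CompletableFuture in place of KafkaFuture and plain "topic-partition" strings in place of TopicPartition. The class and method names mirror the API above but this is a toy model, not the real implementation.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

public class ElectionResultModel {
    private final Map<String, CompletableFuture<Void>> results;

    public ElectionResultModel(Map<String, CompletableFuture<Void>> results) {
        this.results = results;
    }

    /** Per-partition outcome; completes exceptionally if no election was attempted for it. */
    public CompletableFuture<Void> partitionResult(String topicPartition) {
        CompletableFuture<Void> f = results.get(topicPartition);
        if (f == null) {
            CompletableFuture<Void> failed = new CompletableFuture<>();
            failed.completeExceptionally(
                new IllegalArgumentException("No election attempted for " + topicPartition));
            return failed;
        }
        return f;
    }

    /** Partitions for which an election was attempted, whether or not it succeeded. */
    public CompletableFuture<Set<String>> partitions() {
        return CompletableFuture.completedFuture(results.keySet());
    }

    /** Succeeds only if every per-partition election succeeded. */
    public CompletableFuture<Void> all() {
        return CompletableFuture.allOf(results.values().toArray(new CompletableFuture[0]));
    }
}
```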

A call to electPreferredLeaders() will send an ElectPreferredLeadersRequest to the controller broker.

NetworkProtocol: ElectPreferredLeadersRequest and ElectPreferredLeadersResponse

ElectPreferredLeadersRequest => [topic_partitions] timeout
  topic_partitions => topic [partition_id]
    topic => STRING
    partition_id => INT32
  timeout => INT32
Where

Field         Description
partition_id  a partition of the topic
timeout       the time to wait for the election to complete

The request will require Alter on the Cluster resource, since it is a change that affects the whole cluster.

Note: It is not an error if there is a duplicate (topic, partition)-pair in the request.

Note that an ElectPreferredLeadersRequest must be sent to the controller of the cluster.
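As an illustration of the schema, the topic_partitions array could be laid out on the wire following Kafka's usual conventions (an INT32 element count prefixing each array, and a STRING encoded as an INT16 length plus UTF-8 bytes). This is a hedged sketch of the layout only, not the real serializer.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

public class RequestLayout {
    /** Serializes the topic_partitions portion of the request body sketched above. */
    public static ByteBuffer serialize(Map<String, List<Integer>> topicPartitions) {
        ByteBuffer buf = ByteBuffer.allocate(1024);
        buf.putInt(topicPartitions.size());               // [topic_partitions] element count
        for (Map.Entry<String, List<Integer>> e : topicPartitions.entrySet()) {
            byte[] topic = e.getKey().getBytes(StandardCharsets.UTF_8);
            buf.putShort((short) topic.length);           // topic => STRING (INT16 length prefix)
            buf.put(topic);
            buf.putInt(e.getValue().size());              // [partition_id] element count
            for (int p : e.getValue()) {
                buf.putInt(p);                            // partition_id => INT32
            }
        }
        buf.flip();
        return buf;
    }
}
```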

ElectPreferredLeadersResponse => throttle_time_ms [replica_election_result]
  throttle_time_ms => INT32
  replica_election_result => topic [partition_result]
    topic => STRING
    partition_result => partition_id error_code error_message
      partition_id => INT32
      error_code => INT16
      error_message => NULLABLE_STRING

Where

Field             Description
throttle_time_ms  duration in milliseconds for which the request was throttled
topic             a topic name from the request
partition_id      a partition id for the topic
error_code        an error code for that partition
error_message     the error message for that partition

Anticipated errors:

  • UNKNOWN_TOPIC_OR_PARTITION (3) If the topic or partition doesn't exist on any broker in the cluster. Note that the use of this code is not precisely the same as its usual meaning of "This server does not host this topic-partition".

  • NOT_CONTROLLER (41) If the request is sent to a broker that is not the controller for the cluster.

  • CLUSTER_AUTHORIZATION_FAILED (31) If the user didn't have Alter access on the Cluster resource.

  • NONE (0) The elections were successful.

Broker-side election algorithm

The broker-side handling of ElectPreferredLeadersRequest will be somewhat different from the current ZooKeeper-based handling:

  1. On receipt of an ElectPreferredLeadersRequest the controller will atomically check-and-set a flag (to prevent concurrent elections), then enqueue a PreferredReplicaLeaderElection with the ControllerManager.
  2. The controller will then await completion of the PreferredReplicaLeaderElection, with a timeout.
  3. When it has processed the PreferredReplicaLeaderElection the controller will clear the flag.
  4. Successful or timed-out completion of the PreferredReplicaLeaderElection will result in an ElectPreferredLeadersResponse being returned to the client.

(The flag will also be checked-and-set when handling a change of the /admin/preferred_replica_election znode, via the existing --zookeeper-supporting code.)

This change means that the ElectPreferredLeadersResponse is sent when the election is actually complete, rather than when the /admin/preferred_replica_election znode has merely been updated. Thus if the election fails, the ElectPreferredLeadersResponse's error_code will provide a reason.
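The check-and-set flag used to serialise elections could be sketched as below. This is an assumed, stdlib-only illustration of the mechanism (an atomic compare-and-set gate), not the actual controller code; the class and method names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ElectionGate {
    private final AtomicBoolean electionInProgress = new AtomicBoolean(false);

    /** Atomically check-and-set the flag; returns true if the caller may start an election. */
    public boolean tryStartElection() {
        return electionInProgress.compareAndSet(false, true);
    }

    /** Clear the flag once the PreferredReplicaLeaderElection has been processed. */
    public void electionFinished() {
        electionInProgress.set(false);
    }
}
```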

When support for the --zookeeper option is eventually removed, the need for the /admin/preferred_replica_election znode will disappear and consequently the code managing it will be removed.

Compatibility, Deprecation, and Migration Plan


Existing users of the kafka-preferred-replica-election.sh tool will receive a deprecation warning when they use the --zookeeper option. The option will be removed in a future version of Kafka. If this KIP is introduced in version 1.0.0, the removal could happen in 2.0.0.

Rejected Alternatives


One alternative is to do nothing: Let the tool continue to communicate with ZooKeeper directly.

Another alternative is to implement exactly this KIP, but without deprecating --zookeeper. That would have a higher long-term maintenance burden, and would prevent any future plans to, for example, support cluster-coordination technologies other than ZooKeeper.
