Current state: Under Discussion
Discussion thread: here
Released: <Kafka Version>
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
With replication services, data can be copied across independent Kafka clusters in multiple data centers. In addition, many users need "stretch clusters" - a single Kafka cluster that spans multiple data centers. This architecture has the following useful characteristics:
- Data is natively replicated into all data centers by Kafka topic replication.
- No data is lost when one DC fails, and no configuration change is required - the design implicitly relies on native Kafka replication.
- From an operational point of view, it is much easier to configure and operate such a topology than a replication scenario via MM2.
- Currently Kafka has a single partition assignment strategy; users who want to override it can only do so by manually assigning replicas to brokers at topic creation.
Multi-level rack awareness
Stretch clusters are typically implemented using the rack awareness feature, where each DC is represented as a rack. This ensures that replicas are spread evenly across DCs. Unfortunately, there are cases where this is too limiting - if there are actual racks inside the DCs, we cannot specify those. Consider having 3 DCs with 2 racks each.
If we were to use rack IDs such as /DC1/R1, /DC1/R2, etc., then with RF=3 it is possible that two replicas end up in the same DC, e.g. /DC1/R1, /DC1/R2, /DC2/R1, leaving DC3 unused.
We can see that in this case a more optimal, more evenly distributed placement than Kafka currently provides would be possible if we distinguished levels in the rack assignment.
Therefore Kafka should support "multi-level" racks, meaning that rack IDs should be able to describe a hierarchy. To make the solution more robust, we would also make replica assignment pluggable, so that users can implement their own logic. We would provide the current assignment as the default and add the new multi-level rack aware assignment described above, as stretch clusters are becoming increasingly popular.
With this feature, brokers should be able to:
- spread replicas evenly based on the top level of the hierarchy (i.e. first, between DCs)
- then inside a top-level unit (DC), if there are multiple replicas, they should be spread evenly among lower-level units (i.e. between racks, then between physical hosts, and so on)
- repeat for all levels
In a very simple case (RF=3, three DCs with two racks each), the result would place exactly one replica per DC, e.g. /DC1/R1, /DC2/R1, /DC3/R1.
We propose a number of changes to various interfaces of Kafka. The core change is to transform the rack assignment into a configurable algorithm. This would involve a number of public interface changes:
- Introduce new config to control the underlying implementation (much like the Authorizer interface).
- Create a Java interface in the clients module that sits behind the configuration.
- Refactor the current rack assignment strategy under the new interface
- Add a new implementation, called the multi-level rack assignment strategy, discussed above
- Add an Admin API so clients like the reassign partitions command would rely on the brokers instead of calling an internal utility to assign replicas.
- Add a new protocol to transfer requests and responses of the new Admin API.
- Add a multi-level rack aware replica selector that selects the replica to fetch from when consumers use follower fetching. This makes it possible for consumers to fetch from the nearest replica in a multi-level rack environment.
These will be detailed under proposed changes.
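The replica selector in the last bullet is not spelled out further in this KIP, so here is a minimal sketch of one way a multi-level rack aware selector could work: prefer the replica whose hierarchical rack ID shares the longest path prefix with the consumer's rack. All class and method names below are hypothetical, not the proposed API.

```java
import java.util.*;

// Hypothetical sketch of a multi-level rack aware replica selector for
// follower fetching. The broker whose rack path shares the most levels
// with the client's rack path is considered "nearest".
class MultiLevelRackSelector {

    // Number of leading path segments two hierarchical rack IDs have in common.
    static int sharedLevels(String a, String b) {
        String[] x = a.split("/"), y = b.split("/");
        int shared = 0;
        for (int i = 0; i < Math.min(x.length, y.length); i++) {
            if (!x[i].equals(y[i])) break;
            shared++;
        }
        return shared;
    }

    // replicaRacks maps broker id -> rack id such as "/DC1/R2".
    // Returns the broker id of the replica nearest to clientRack.
    static int select(String clientRack, Map<Integer, String> replicaRacks) {
        int best = -1, bestShared = -1;
        for (Map.Entry<Integer, String> e : replicaRacks.entrySet()) {
            int shared = sharedLevels(clientRack, e.getValue());
            if (shared > bestShared) {
                bestShared = shared;
                best = e.getKey();
            }
        }
        return best;
    }
}
```

With replicas on /DC1/R1, /DC2/R1 and /DC3/R2, a consumer in /DC2/R2 would fetch from the /DC2/R1 replica, since they share the DC level even though the rack differs.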
Pluggable Partition Assignment Strategy
We will introduce a new broker config called broker.replica.placer that makes it possible for users to specify their own replica placer implementation. Its default value will be the current rack aware replica assignment strategy, adapted to the new interface detailed below. We will also allow any plugin to fall back to the original algorithm if needed, or to return an error if it cannot come up with a suitable placement.
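As an illustration, configuring a custom placer might look like this (the implementation class name is hypothetical, not part of this KIP):

```properties
# server.properties - plug in a custom replica placement implementation.
# The class name below is illustrative only; omitting the property keeps
# the current rack aware assignment strategy as the default.
broker.replica.placer=com.example.kafka.MultiLevelReplicaPlacer
```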
Equivalents of these proposed interfaces currently exist in the metadata component under org.apache.kafka.placement. We'd like to improve these interfaces slightly with some renaming and added/changed fields to better fit a general algorithm, and to provide the new multi-level rack aware implementation discussed in this KIP. The current algorithm would stay as it is.
The ClusterView class would be the improved version of ClusterDescriber; we would also add the current list of partition assignments to it.
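A rough sketch of how these interfaces might fit together is shown below. Only the names ClusterView and the notion of a pluggable replica placer come from this KIP; every method signature here is a guess for illustration, not the proposed public API.

```java
import java.util.*;

// Hypothetical shape of the cluster view handed to a placer. The KIP only
// states that it is an improved ClusterDescriber carrying the current
// partition assignments; the exact methods below are assumptions.
interface ClusterView {
    List<Integer> brokers();                  // all registered broker ids
    String rackOf(int brokerId);              // hierarchical rack id, e.g. "/DC1/R1"
    List<List<Integer>> currentAssignments(); // existing partition assignments
}

// Hypothetical pluggable placement interface behind broker.replica.placer.
interface ReplicaPlacer {
    // Return one replica list per new partition, or fail if no valid
    // placement exists (plugins may also fall back to the default algorithm).
    List<List<Integer>> place(ClusterView cluster, int partitions, short replicationFactor);
}
```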
We would like to add a new Admin API method that enables commands like the reassignment command to request an assignment suggested by the broker, based on what's configured there. This eliminates the need to pass the assignor configuration down to the client, which may or may not know what's configured on the brokers, and it also allows the active controller to be the single source of replica assignment.
By specifying the forceSimplePlacement flag we instruct the brokers to use Kafka's default rack unaware algorithm. This is needed to maintain compatibility with the --disable-rack-awareness flag in kafka-partition-reassignment.sh. That flag will effectively trigger this option and behave the same way: it instructs the broker to skip the configured partition assignor and use the default rack unaware algorithm. Note that a rack aware assignor might support a scenario where only part of the brokers have assigned racks.
Some algorithms might work with only a specific subset of brokers as an input to the assignment. In the Admin API we could specify this with the
For the Admin API described above, we’ll need a new protocol as well. This is defined as follows.
Multi-level rack assignment
In this section we'll detail the behavior of the multi-level replica assignment algorithm. Replica assignment currently happens on the controller, and this continues to be the case. The controller has the cluster metadata, so it knows the full state of the cluster.
The very first step is to build a tree of the rack assignments.
The assignor algorithm can be formalized as follows:
- Every rack definition must start with “/”. This is the root node.
- In every node we maintain a ring iterator over its children (wrapping around to the first child after the last one). By default it is initialized to point towards a random leaf, to ensure that replicas of different partitions are spread evenly across the cluster.
- When adding a new replica, at each node we select the child the iterator points to, pass the replica down to it, and advance the pointer.
- Repeat the previous step in the selected child until a leaf is reached.
To increase the replication factor, we pick up where the algorithm left off for that partition and continue from the next node the algorithm would select.
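The tree-and-ring procedure above can be sketched as follows. Class names are illustrative, not the proposed implementation, and for clarity the rings here start deterministically at the first child rather than at a random leaf.

```java
import java.util.*;

// One node of the rack tree: a DC, a rack, or (as a leaf) a broker.
class RackNode {
    final String name;
    final List<RackNode> children = new ArrayList<>();
    int ring = 0; // ring iterator: index of the child to use next

    RackNode(String name) { this.name = name; }

    // Find or create a child with the given name.
    RackNode child(String childName) {
        for (RackNode c : children) if (c.name.equals(childName)) return c;
        RackNode c = new RackNode(childName);
        children.add(c);
        return c;
    }

    // Pick the child under the ring pointer, advance the pointer (wrapping
    // around), and recurse until a leaf (broker) is reached.
    String placeReplica() {
        if (children.isEmpty()) return name; // leaf: the broker id
        RackNode next = children.get(ring);
        ring = (ring + 1) % children.size();
        return next.placeReplica();
    }
}

class MultiLevelPlacer {
    final RackNode root = new RackNode("/");

    // rackPath is a hierarchical rack id such as "/DC1/R1".
    void addBroker(String rackPath, String brokerId) {
        RackNode node = root;
        for (String part : rackPath.split("/")) {
            if (!part.isEmpty()) node = node.child(part);
        }
        node.child(brokerId);
    }

    // Place all replicas of one partition, one root descent per replica.
    List<String> assign(int replicationFactor) {
        List<String> replicas = new ArrayList<>();
        for (int i = 0; i < replicationFactor; i++) replicas.add(root.placeReplica());
        return replicas;
    }
}
```

With three DCs of two racks each and RF=3, each partition gets one replica per DC, and because every ring keeps its position between calls, consecutive partitions alternate between the racks inside each DC.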
Compatibility, Deprecation, and Migration Plan
The change is backward compatible with previous versions, as it is purely additive. The current strategy will remain the default, but there will be a new one that users can optionally configure.
No extra tests are needed beyond the usual extensive unit testing to make sure the code works as expected.
Restricted Replica Assignment
We call this restricted because in the current design the API is called in every case, not only when all of the brokers are rack aware, since someone could implement an assignment strategy that places replicas based not on racks but, for instance, on hardware utilization. This would make sense because a tool like Cruise Control can periodically reassign partitions based on load, and we could make its job easier by placing replicas on the right node at creation time (one less reassignment needed).
Alternatively, all of the above might not be needed: we could restrict the scope to scenarios where all (or at least one) brokers have racks assigned, which would simplify broker-side behavior, as having no racks would always result in calling the current rack unaware assignment strategy.
Using Cruise Control
Although Cruise Control is mainly used for broker-load-based partition reassignment, it could easily be extended with another Goal that reorganizes replicas according to the multi-level algorithm described in this KIP. However, this has a drawback: replicas are not immediately placed on the right brokers, so users would need to force a rebalance right after topic creation. Also, not all Kafka users run Cruise Control; they may use other load balancing software or none at all.