Status
Current state: [Under Discussion]
Discussion thread: https://lists.apache.org/thread/01q2y6stt8wtkvnlbd2mkkt2xzr0jkjc
JIRA: KAFKA-19507
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
Kafka's current replica assignment strategy prioritizes balancing replica counts across racks (availability zones in cloud environments) over balancing replicas across individual brokers. While this ensures rack diversity, it creates significant broker-level load imbalance when racks contain unequal numbers of brokers.
Problem Illustration
Consider a 3-replica topic with 3 racks:
Rack A: Brokers 1, 4
Rack B: Brokers 2, 5
Rack C: Broker 3 (single broker)
Under the current strategy:
Brokers 1, 2, 4, 5 each receive 1/6 of all replicas
Broker 3 receives 1/3 of all replicas (twice the load of others)
This forces Broker 3 into a bottleneck ("bucket effect"), as it handles double the traffic and storage load.
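The split above can be reproduced with a small simulation. The model below is a deliberate simplification (one replica per rack per partition, round-robin among the brokers inside a rack), not Kafka's exact assignment code, but it captures why the single-broker rack is overloaded:

```python
from collections import Counter

# Simplified model of the current strategy: with 3 replicas and 3 racks,
# every partition places exactly one replica on each rack, rotating
# round-robin among the brokers inside a rack.
racks = {"A": [1, 4], "B": [2, 5], "C": [3]}
num_partitions = 6
replication_factor = 3

counts = Counter()
for p in range(num_partitions):
    for brokers in racks.values():
        counts[brokers[p % len(brokers)]] += 1

total = num_partitions * replication_factor
for broker in sorted(counts):
    print(f"broker {broker}: {counts[broker]}/{total} of replicas")
# broker 3 ends up with 1/3 of all replicas; every other broker with 1/6
```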
To mitigate this, deployments today must maintain broker counts as multiples of rack counts (e.g., 3, 6, 9 brokers for 3 racks). While this ensures balance, it:
Restricts deployment flexibility: Scaling clusters horizontally requires adding/removing nodes in rack-sized increments.
Increases costs unnecessarily: For example, a 4-broker cluster could suffice for a 3-rack setup, but users must deploy 6 brokers to maintain balance—increasing infrastructure costs by 50%.
Proposed Solution
Modify the assignment strategy to:
Prioritize broker-level balance as the primary objective.
Weight rack-level distribution by broker count per rack (e.g., a rack with 2 brokers receives twice the replicas of a rack with 1 broker).
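One way to realize these two rules is a greedy pass: give each replica to a least-loaded broker, using rack diversity only as a tie-breaker among the least-loaded candidates. The sketch below is a hypothetical illustration of that idea, not the actual patch; the function name and structure are assumptions:

```python
from collections import Counter

def assign_broker_first(num_partitions, replication_factor, broker_racks):
    """Hypothetical sketch: broker_racks maps broker_id -> rack name.
    Returns (assignment, per-broker replica counts)."""
    counts = Counter({b: 0 for b in broker_racks})
    assignment = []
    for _ in range(num_partitions):
        replicas, used_racks = [], set()
        for _ in range(replication_factor):
            remaining = [b for b in counts if b not in replicas]
            least = min(counts[b] for b in remaining)
            tied = [b for b in remaining if counts[b] == least]
            # Broker balance is primary: pick among the least-loaded brokers,
            # preferring one whose rack this partition has not used yet.
            diverse = [b for b in tied if broker_racks[b] not in used_racks]
            choice = min(diverse or tied)
            replicas.append(choice)
            used_racks.add(broker_racks[choice])
            counts[choice] += 1
        assignment.append(replicas)
    return assignment, counts

# On the 3-rack cluster from the motivation (6 partitions):
_, counts = assign_broker_first(6, 3, {1: "A", 2: "B", 3: "C", 4: "A", 5: "B"})
print(sorted(counts.values()))  # [3, 3, 4, 4, 4] -- a spread of one replica
```

Note that in this 3-rack case perfect broker balance is only achievable by occasionally co-locating two replicas of a partition on one rack; whether the new strategy should allow that trade-off, or apply the weighting only when rack_count is strictly greater than the replication factor, is a design point for the discussion thread.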
Benefits
Balanced load: All brokers receive near-equal replicas regardless of rack imbalance.
Deployment flexibility: Clusters can scale to any size as long as rack_count ≥ replica_factor.
Cost efficiency: Users deploy only necessary brokers.
Example Scenario
3 replicas, 4 racks with 5 brokers:
Rack A: Brokers 1, 5 → receives 2/5 of replicas (distributed evenly between Brokers 1 & 5)
Racks B, C, D: 1 broker each → each receives 1/5 of replicas
Result: Every broker handles exactly 1/5 of total replicas, eliminating bottlenecks.
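A quick arithmetic check of this scenario, assuming the partition count is a multiple of 5 so the division is exact:

```python
# 5 brokers in 4 racks, replication factor 3, 20 partitions (a multiple of 5).
partitions, replication_factor, brokers = 20, 3, 5
total_replicas = partitions * replication_factor   # 60 replicas overall
per_broker = total_replicas // brokers             # 12 replicas per broker
rack_a = 2 * per_broker                            # rack A holds 2 brokers

assert per_broker * brokers == total_replicas      # load divides evenly
assert per_broker / total_replicas == 1 / 5        # every broker's share
assert rack_a / total_replicas == 2 / 5            # rack A's weighted share
```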
Proposed Changes
Modify the replica assignment strategy so that broker-level balance becomes the primary objective: replicas are spread as evenly as possible across all brokers, and each rack's share of replicas is weighted by its broker count rather than assumed equal. Rack diversity is retained as a secondary constraint, keeping replicas of a partition on distinct racks whenever rack_count ≥ replica_factor.
Compatibility, Deprecation, and Migration Plan
- What impact (if any) will there be on existing users?
- If we are changing behavior how will we phase out the older behavior?
- If we need special migration tools, describe them here.
- When will we remove the existing behavior?