Current state: [Under Discussion]
Discussion thread: TBD
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
This KIP follows on KIP-429 to improve Streams' scaling-out behavior.
Recently the Kafka community has been promoting cooperative rebalancing to mitigate the pain points of the stop-the-world rebalancing protocol, and an initiative for Kafka Connect has already started as KIP-415. There are already exciting discussions around it, but for Kafka Streams the delayed rebalance alone is not a complete solution. This KIP tries to customize the cooperative rebalancing approach specifically for the KStream application context, building on the great designs for Connect and the consumer.
Currently Kafka Streams uses the consumer membership protocol to coordinate stream task assignment. When we scale up a stream application, the KStream group will attempt to revoke active tasks and let the newly spun-up hosts take them over. The new hosts need to restore the assigned tasks' state before transitioning to "running". For a state-heavy application, it is not ideal to give up tasks immediately once a new player joins the party; instead we should buffer some time to let the new player accept a fair amount of restoring tasks and finish state reconstruction first, before officially taking over the active tasks. Ideally, we could achieve a no-downtime transition during cluster scaling.
In short, the goals of this KIP are:
- Reduce unnecessary downtime due to task restoration and global application revocation.
- Better auto-scaling experience for KStream applications.
- Stretch goal: better workload balance across KStream instances.
Consumer Rebalance Protocol: Stop-The-World Effect
As mentioned in the motivation, we also want to mitigate the stop-the-world effect of the current global rebalance protocol. A quick recap of the current rebalance semantics in KStream: when a rebalance starts, all stream threads will
- Join the group with all currently assigned tasks revoked.
- Wait until the group assignment finishes to get the assigned tasks and resume working.
- Replay the assigned tasks' state.
- Transition to running mode once all replay jobs finish.
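For illustration only, here is a minimal consumer-level sketch (not Streams code) of why the eager protocol stops the world: on every rebalance, onPartitionsRevoked is invoked with all owned partitions, even when the same partitions are handed right back.

```java
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Minimal illustration of the eager protocol's stop-the-world behavior:
// every rebalance first revokes ALL owned partitions, so the state attached
// to them must be given up even if the same partitions are re-assigned back.
public class StopTheWorldListener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Eager protocol: 'partitions' is the full set currently owned.
        System.out.println("Revoking (all) partitions: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Tasks for these partitions must restore state before RUNNING.
        System.out.println("Assigned partitions: " + partitions);
    }
}
```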
If you want to know more about the details at the protocol level, feel free to check out KIP-429.
Streams Rebalance Metadata: Remember the PrevTasks
Today Streams embeds a full-fledged Consumer client, which hard-codes a ConsumerCoordinator inside. Streams then injects a StreamsPartitionAssignor into its pluggable PartitionAssignor interface, and inside the StreamsPartitionAssignor we also have a TaskAssignor interface whose default implementation is the StickyTaskAssignor. Streams' partition assignor logic today sits in the latter two classes. Hence the hierarchy today is:
- Consumer
  - ConsumerCoordinator
    - StreamsPartitionAssignor (injected as the PartitionAssignor)
      - TaskAssignor (default implementation: StickyTaskAssignor)
StreamsPartitionAssignor uses the subscription / assignment metadata byte array field to encode additional information for sticky partitions. More specifically on subscription:
And on assignment:
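The exact byte layouts are not reproduced here; the following is a rough sketch of the two payloads as plain Java classes. Field names loosely follow the SubscriptionInfo and AssignmentInfo classes in Apache Kafka, but the actual wire format differs and varies by metadata version, so treat this purely as an illustration.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Illustrative sketch only; not the real Streams classes or wire format.
class SubscriptionUserData {
    int version;
    UUID processId;           // identifies the Streams instance
    Set<String> prevTasks;    // active tasks owned in the previous generation
    Set<String> standbyTasks; // standby tasks owned in the previous generation
    String userEndPoint;      // host:port used for interactive queries
}

class AssignmentUserData {
    int version;
    List<String> activeTasks;                  // tasks this member should run as active
    Map<String, Set<String>> standbyTasks;     // task -> changelog partitions to keep warm
    Map<String, Set<String>> partitionsByHost; // global mapping for interactive queries
    int errorCode;
}
```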
Streams Sticky TaskAssignor: Stickiness over Balance
Streams' StickyTaskAssignor will honor stickiness over workload balance. More specifically:
- First we calculate the average number of tasks each host should get as its "capacity", by dividing the total number of tasks by the total number of consumers (i.e. num.threads) and then multiplying by the number of consumers that host has.
- Then for each task:
  - If a client owns it as its PrevTask and that client still has capacity, assign it to that client;
  - Otherwise, if a client owns it as its StandbyTask and that client still has capacity, assign it to that client.
- If there are still unassigned tasks after step 2), we loop over them at the per-sub-topology granularity (for workload balance), and again for each task:
  - Find the client with the least load; if there are multiple, prefer the one that previously owned the task as an active task, then the one that previously owned it as a standby task, then the one that did not own it at all.
As one can see, we honor stickiness (step 2) over workload balance (step 3).
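As a minimal sketch of the capacity rule and the stickiness-then-balance ordering described above (the class and parameter names are illustrative, not the actual Streams internals):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class StickyAssignorSketch {
    public static Map<String, List<String>> assign(List<String> tasks,
                                                   Map<String, Integer> consumersPerClient,
                                                   Map<String, Set<String>> prevTasksPerClient) {
        int totalConsumers = consumersPerClient.values().stream().mapToInt(Integer::intValue).sum();
        double avgTasksPerConsumer = (double) tasks.size() / totalConsumers;

        Map<String, List<String>> assignment = new HashMap<>();
        consumersPerClient.keySet().forEach(c -> assignment.put(c, new ArrayList<>()));

        List<String> unassigned = new ArrayList<>();
        for (String task : tasks) {
            // Step 2: honor stickiness if a previous owner still has capacity.
            String prevOwner = null;
            for (Map.Entry<String, Set<String>> e : prevTasksPerClient.entrySet()) {
                double capacity = avgTasksPerConsumer * consumersPerClient.get(e.getKey());
                if (e.getValue().contains(task) && assignment.get(e.getKey()).size() < capacity) {
                    prevOwner = e.getKey();
                    break;
                }
            }
            if (prevOwner != null) assignment.get(prevOwner).add(task);
            else unassigned.add(task);
        }

        // Step 3: place the remainder on the least-loaded client (load per consumer).
        for (String task : unassigned) {
            String leastLoaded = assignment.keySet().stream()
                .min(Comparator.comparingDouble(
                    c -> assignment.get(c).size() / (double) consumersPerClient.get(c)))
                .get();
            assignment.get(leastLoaded).add(task);
        }
        return assignment;
    }
}
```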
Streams Two-Phase Task Assignor
Now the second part of this KIP is about Streams' PartitionAssignor implementation on top of the consumer rebalance protocol. Remember the difference between the eager and the (new) cooperative consumer rebalance protocols: in "eager" mode we always revoke everything before joining the group; in "cooperative" mode we revoke nothing before joining the group, but may revoke some partitions after joining the group as indicated by the leader. The native consumer assignor would let consumer members revoke partitions immediately, based on Intersection(total-partitions, assigned-partitions).
In Streams, however, we may want to defer the revocation as well if the intended new owner of the partition is "not ready", i.e. if the stateful task's restoration time (and hence the unavailability gap) when migrating it to the new owner would be long, since that owner has no previously restored state for the task and would need to restore from scratch. More generally, we can extend this notion to hosts that do have some local stores for the migrating task but are far behind the actual state's latest snapshot, and hence would still need to restore for a long time.
Streams SubscriptionInfo Update
The idea to resolve this is to "delay" the revocation: let the new owner first try to close the gap in state-update progress, and only then revoke the task from the old owner and reassign it to the new one. However, this cannot easily be done with a fixed "scheduled delay", since it really depends on the progress of the state store restoration on the new owner. To do that we need to let consumers report their current standby tasks' "progress" when joining the group (some related information can be found in KAFKA-4696). More specifically, assume we have already done KAFKA-7149, which will refactor the existing assignmentInfo format to reduce the message size.
We can refactor the subscriptionInfo format as well, to encode this "progress" factor.
More specifically, we will associate each standby task with an int32 value indicating its gap from the current active task's state snapshot. This gap is computed as the sum, over all of the task's stores, of (log_end_offset - restored_offset).
Also, we will no longer distinguish between previous-active-tasks and previous-standby-tasks, since prev-active-tasks are just a special type of prev-task whose gap is zero. A task that is not in the prev-tasks list indicates "I do not have this task's state at all, and hence the gap is simply the whole changelog".
For stateless tasks there is no state, so we will use a sentinel value (-1) in the prevTasks map to indicate a stateless task, and only the host of the active task would include it in the prev-tasks map.
In addition, when a Streams app is starting up, before joining the group it will query the log-end-offsets for all the local state stores in its state directory to calculate the gaps; after that, the app can maintain the gaps dynamically for all its standby tasks (again, an active task's gap is just 0).
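A minimal sketch of this gap bookkeeping, assuming offsets are looked up per store (all names here are illustrative; a real implementation would read checkpoint files and query changelog end offsets):

```java
import java.util.HashMap;
import java.util.Map;

public class TaskGapTracker {
    public static final int STATELESS_SENTINEL = -1;

    // restoredOffsets / endOffsets: taskId -> (storeName -> offset)
    public static Map<String, Integer> computeGaps(Map<String, Map<String, Long>> restoredOffsets,
                                                   Map<String, Map<String, Long>> endOffsets) {
        Map<String, Integer> gaps = new HashMap<>();
        for (Map.Entry<String, Map<String, Long>> task : restoredOffsets.entrySet()) {
            long gap = 0L;
            for (Map.Entry<String, Long> store : task.getValue().entrySet()) {
                long end = endOffsets.get(task.getKey()).get(store.getKey());
                gap += Math.max(0L, end - store.getValue()); // per-store lag
            }
            // Active tasks naturally report 0; the value is encoded as an int32.
            gaps.put(task.getKey(), (int) Math.min(gap, Integer.MAX_VALUE));
        }
        return gaps;
    }
}
```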
StreamsPartitionAssignor Logic Update
We will then modify our sticky assignor logic. There are two things to keep in mind: 1) there is no semantic difference between prev-active and prev-standby stateful tasks any more, and 2) the assignor should be aware of which tasks are stateful and which are stateless, which can easily be inferred from its embedded topology builder. The goal is to assign the sets of stateless and stateful tasks independently, trying to achieve workload balance while honoring stickiness (here "stickiness" is interpreted based on the gap value alone). For stateless tasks the assignor would not assign any standby tasks either (KAFKA-4696). A sketch of the gap-aware active assignment follows the step list below.
- For the set of stateless tasks:
  - First calculate the average number of stateless tasks each thread should get.
  - For each task (sorted by topic-groupId), if there is an owner of this task from prevTask (no more than one client should claim to own it) that has not exceeded the average number, assign it to that client;
  - Otherwise, find the host with the largest remaining capacity (defined as the difference between the average number and the number of currently assigned tasks) and assign it there.
- For the set of stateful tasks, first consider the active assignment:
  - First calculate the average number of active tasks each thread should get (so yes, we are still treating all stateful tasks equally, and no, we are not going to resolve KAFKA-4969 in this KIP).
  - For each task (sorted by topic-groupId):
    - Find the host with the smallest gap; if it has not exceeded the average number, assign the task to it;
    - Otherwise, if no host owned it before, there is nothing we can do but bite the bullet of the restoration gap, and we just pick the client with the largest remaining capacity and assign the task to it;
    - Otherwise, it means we have at least one prev-task owner, but the one with the smallest gap has already exceeded its capacity. We need to make a call on the trade-off of workload imbalance vs. restoration gap (some heuristics are applicable in the first version):
      - If we favor reducing restoration latency, we still assign the task to the host with the smallest gap; and if the standby task number N (used below in step 3) == 0, we force-assign a standby task to the new owner candidate; otherwise we do nothing and just rely on step 3) to get us some standby tasks.
      - Otherwise, we assign the task to another host following the same logic as 2.b.i) above, but starting with the second smallest gap.
- Then we consider the standby assignment for stateful tasks (assuming num.standby.replicas = N):
  - First calculate the average number of standby tasks each thread should get.
  - For each task (sorted by topic-groupId), ranging i from 1 to N:
    - Find the i-th host with the smallest gap, excluding the active owner and the 1..(i-1)-th standby owners; if it has not exceeded the average number, assign the standby task to it;
    - Otherwise, move on to the next host with the smallest gap and go back to 3.b.i) above; once no hosts are left that owned the task before, just pick the client with the largest remaining capacity and assign the standby task to it.
    - If we run out of hosts before i == N, it means we have already assigned a standby task to every host, i.e. N > num.hosts; we will throw an exception and fail.
- Note that since the tasks are all sorted by topic-groupId, e.g. 1-1, 1-2, 1-3, ... 2-3, we are effectively achieving per-sub-topology workload balance already. Also, in the tie-breakers of steps 1.c), 2.b.ii), and 3.b.ii) above, we define the winner as the client that has the smallest number of tasks assigned to it from the same topic-groupId, to further achieve per-sub-topology workload balance on a best-effort basis.
- Whenever we decide to favor reducing restoration latency in step 2.b.iii.1) above, we introduce workload imbalance, and we would want to get out of this state by re-triggering a rebalance later so that the assignor can check whether some standby owner can now take over the task. To do that, we will add a new error code named "imbalanced-assignment" to the ErrorCode field of the assignmentInfo, and when 2.b.iii.1) happens we will set this error code for all the members that own a standby task for the task that triggered 2.b.iii.1) (there must be at least one of them). Upon receiving this error code, a thread will keep track of the progress of all its owned standby tasks, and trigger another rebalance once the gap on all of them is close to zero.
NOTE that step 5) above loses the information of exactly which tasks should be on the "watch list", so the thread just needs to watch all of its standby tasks. We could, of course, inject new fields into the AssignmentInfo encoding to explicitly list those "watch-list" standby tasks. Personally I am a bit reluctant to add them, since they seem too specific and would make the Streams assignor protocol less generalizable, but I can be convinced if there is strong motivation for the latter approach.
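As referenced above, here is a minimal sketch of step 2 (the gap-aware active assignment), using the "favor restoration latency" heuristic of 2.b.iii.1) when the smallest-gap owner is full; the standby force-assignment and the "imbalanced-assignment" error code handling are omitted for brevity. All class, method, and parameter names are illustrative, not the actual Streams internals:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GapAwareActiveAssignmentSketch {
    // gapsPerClient: client -> (task -> gap); capacity: client -> max active tasks
    public static Map<String, String> assignActive(List<String> statefulTasks,
                                                   Map<String, Map<String, Integer>> gapsPerClient,
                                                   Map<String, Integer> capacity) {
        Map<String, String> owner = new HashMap<>();   // task -> chosen client
        Map<String, Integer> load = new HashMap<>();
        List<String> tasks = new ArrayList<>(statefulTasks);
        Collections.sort(tasks);                       // sorted by topic-groupId

        for (String task : tasks) {
            // Clients holding any state for the task, ordered by ascending gap (2.b.i).
            List<String> byGap = gapsPerClient.entrySet().stream()
                .filter(e -> e.getValue().containsKey(task))
                .sorted(Comparator.comparingInt(e -> e.getValue().get(task)))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

            String chosen = byGap.stream()
                .filter(c -> load.getOrDefault(c, 0) < capacity.get(c))
                .findFirst()
                .orElseGet(() -> byGap.isEmpty()
                    // 2.b.ii: nobody has state; bite the bullet on the largest spare capacity.
                    ? mostSpareCapacity(load, capacity)
                    // 2.b.iii.1: favor restoration latency and accept temporary imbalance.
                    : byGap.get(0));

            owner.put(task, chosen);
            load.merge(chosen, 1, Integer::sum);
        }
        return owner;
    }

    private static String mostSpareCapacity(Map<String, Integer> load, Map<String, Integer> capacity) {
        return capacity.keySet().stream()
            .max(Comparator.comparingInt(c -> capacity.get(c) - load.getOrDefault(c, 0)))
            .get();
    }
}
```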
Please also compare this idea with the original algorithm below in "Assignment Algorithm" and let me know your thoughts.
OLD VERSION OF THE KIP, YET TO BE CLEANED UP
We shall define several terms for an easy walkthrough of the algorithm.
- Instance (a.k.a. stream instance): the KStream instance serving as a container for a set of stream threads. This could be a physical host or a k8s pod. A stream thread's capacity is essentially controlled by the instance's relative size.
- Learner task: a special standby task that gets assigned to one stream instance to restore a current active task, and transitions to active when the restoration is complete.
Learner Task Essentials
A learner task shares the same semantics as a standby task: it is used by the restore consumer to replicate the active task's state. When the restoration of a learner task is complete, the stream instance will initiate a new JoinGroupRequest to trigger another rebalance and complete the task transfer. The goal of the learner task is to delay the task migration until the destination host has finished replaying the active task's state.
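For intuition only, a tiny sketch of the learner lifecycle implied above (the enum and its states are illustrative, not actual Streams types):

```java
// Illustrative learner task lifecycle; not an actual Streams type.
enum LearnerTaskState {
    RESTORING, // the restore consumer is replaying the active task's changelog
    READY,     // restoration complete; the instance rejoins the group
    PROMOTED   // after the follow-up rebalance, the task runs here as active
}
```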
Next we are going to look at several typical scaling scenarios and edge cases to better understand the design of this algorithm.
Scale Up Running Application
The newly joined stream threads will be assigned learner tasks by the group leader, and they will replay the corresponding changelogs locally first. By the end of the first round of rebalance there is no "real ownership transfer". When a new member finally finishes replaying, it will re-attempt to join the group to indicate that it is "ready" to take on real active tasks. During the second rebalance, the leader will then transfer the task ownership.
Scale Up from Empty Group
Scaling up from scratch means all stream threads are new members. There is no need for a learner stage because there is nothing to learn: we don't even have a changelog topic to start with. We should be able to handle this case by identifying whether the given task is in the active task bucket of any other member; if not, we just transfer the ownership immediately.
After deprecating group.initial.rebalance.delay.ms, we still expect the algorithm to work, because every task assignment during rebalance will adhere to the rule "if a given task is currently active, it must be reassigned only to stream threads that have declared themselves ready to serve this task".
Scale Down Running Application
When scaling down a stream group, it is also favorable to initiate learner tasks before actually shutting down the instances. Although standby tasks could help in this case, they require the user to pre-set num.standby.replicas, which may not be done by the time the administrator performs the scale-down; besides, standby tasks are not guaranteed to be up-to-date. The plan is to use a command line tool to tell certain stream members that a shutdown is about to be executed. The informed members will send a join group request to indicate that they are "leaving soon". During the assignment phase, the leader will perform the learner assignment among the members that are not leaving, and a leaving member will shut itself down once it receives the instruction to revoke all its active tasks.
For ease of operation, a new tool for scaling down the stream app shall be built. It will have access to the application instances and ideally could support two types of scaling down:
- Percentage scaling: compute the members to scale down while the end user just provides a percentage. For example, if the current cluster size is 40 and we choose to scale down to 80%, the script will inform 8 of the 40 hosts to "prepare to leave" the group.
- Name-based scaling: name the stream instances that we want to shut down soon. This is built for online hot swapping and host replacement.
Online Host Swapping (Scaling Up Then Down)
This is a typical use case where the user wants to replace the entire application's host type. Normally an administrator would do the host swap one by one, which could cause endless KStream resource shuffling. The recommended approach under cooperative rebalancing is:
- Increase the capacity of the current stream job to 2X and boot up the new-type instances.
- Mark existing stream instances as leaving.
- Once the learner tasks finish on the new hosts, shut down the old ones.
Backing Up Information On Leader
Since incremental rebalancing requires certain historical information about the last round of assignment, the leader stream thread will need to maintain knowledge of:
- Who participated in the last round of rebalance. This information is required to track newcomers.
- Who will be leaving the consumer group. This is for scaling-down support, as the replay could take longer than the scale-down timeout. Under static membership, since we don't send leave-group information, we could let the leader explicitly trigger a rebalance when the scale-down timeout is reached. Maintaining the set of leaving members is critical for making the right task-shuffle judgement.
This is the essential group state the leader needs to remember. To limit the severity of a leader crash during scaling, we avoid backing up too much information on the leader for now. The following edge cases concern leader incidents during scaling.
Leader Transfer During Scaling
A leader crash could lose the historical assignment information. For the learners already assigned, however, each stream thread maintains its own assignment status, so when a learner task's id has no corresponding active task running, the transfer will happen immediately. A leader switch in this case is not a big concern.
Leader Transfer Before Scaling
However, if the leader dies before the new instances join, the potential risk is that the new leader cannot tell which stream instances are "new", because that relies on historical information. For version 1.0 the final assignment is probably not ideal in this case if we only attempt to assign learner tasks to newcomers. This also motivates us to figure out a better task coordination strategy for load balance in the long term.
The above examples focus on demonstrating the expected behaviors of the KStream incremental rebalancing "end picture". Next, we present a holistic view of the new learner assignment algorithm during each actual rebalance.
The assignment will be broken down in the order of: active, learner and standby tasks.
Stream Task Tagging
To enable the learner resource-shuffling behavior, the following task status indicators need to be provided:
| Tag Name | Task Type | Explanation |
| --- | --- | --- |
| isStateful | both | Indicates whether the given task has state to restore. |
| isLearner | standby | Indicates whether the standby task is a learner task. |
| beingLearned | active | Indicates whether the active task is being learned by some other stream thread. |
| isReady | standby | Indicates whether the standby task is ready to serve as an active task. |
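A minimal sketch of these tags as plain fields, purely for illustration (the actual representation inside the assignor may differ):

```java
// Illustrative per-task tags; not an actual Streams type.
class TaskTags {
    boolean isStateful;   // both task types: does the task have state to restore?
    boolean isLearner;    // standby only: is this standby a learner task?
    boolean beingLearned; // active only: is some other thread learning this task?
    boolean isReady;      // standby only: is the standby ready to take over as active?
}
```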
Stateful vs Stateless Tasks
For stateless tasks the ownership transfer should happen immediately, without the need for a learning stage, because there is nothing to restore. We should fall back to the KIP-415 behavior, where stateless tasks are only revoked during the second rebalance. This feature requires us to add a new tag to a stream task, so that when we eventually consider the load balance of the stream applications, it can help us separate tasks into two buckets and rebalance them independently.
Sometimes the restoration times of learner tasks are not equal. When assigned more than one task to replay, a stream thread could request an immediate rebalance as soon as a subset of its learner tasks finish, in order to speed up load balancing and reduce the resource waste of double task processing, at the sacrifice of global efficiency by introducing many more rebalances. We could supply users with a config to decide whether they want the eager approach or the stable approach, together with follow-up benchmark tools for rebalance efficiency. Example:
A stream thread S1 takes two learner tasks T1 and T2, where the restoration times satisfy time(T1) < time(T2). Under the eager rebalance approach, the stream thread will call for a rebalance immediately when T1 finishes replaying, while under the conservative approach, the stream thread will only rejoin the group once it finishes replaying both T1 and T2.
Standby Task Utilization
Don't forget that the original purpose of standby tasks is to mitigate issues during scaling down. When performing learner assignment, we shall prioritize stream threads that currently have standby tasks matching the learner assignment. That way the group should rebalance fairly soon and let the leaving members shut themselves down fairly quickly.
Scale Down Timeout
Users naturally want to hit a sweet spot between ongoing task transfer and freeing up stream resources. So we take a similar approach to KIP-415 and introduce a client config to make sure the scale-down is time-bounded. If the time taken to migrate tasks exceeds this config, the leader will send out a join group request, force the removal of active tasks on the leaving members, and transfer those tasks to other remaining members, so that the leaving members can shut themselves down immediately after this round of rebalance.
More Rebalances vs Global Efficiency
The new algorithm will invoke many more rebalances than the current protocol, as one might expect. As we have discussed in the overall incremental rebalancing design, multiple rebalances are not always bad when done wisely, and following KIP-345 we have a future proposal to avoid scale-up rebalances for static members. The goal is to pre-register the members that are planned to be added; the broker coordinator will augment the member list and wait for all the new members to join the group before rebalancing, since by default a stream application's rebalance timeout is infinity. The conclusion is: it is the server's responsibility to avoid excessive rebalances, and the client's responsibility to make each rebalance more efficient.
Metadata Space vs Allocation Efficiency
Since we are carrying more information during rebalance, we should stay alert to the metadata size increase. So far the hard limit is 1MB per metadata response, which means that if we add too much information, the new protocol could hit a hard failure. Finding a better encoding scheme for metadata is a common pain point for incremental rebalancing KIPs like 415 and 429. Guozhang has started some thoughts in this JIRA, and we plan a separate KIP to discuss different encoding technologies and see which one could work.
For the smooth delivery of all the features discussed so far, the iteration is divided into four stages:
Version 1.0
Delivery goal: Scale up support, conservative rebalance
The goal of the first version is to build the foundation of the learner algorithm for the scale-up scenario. The leader stream thread will use the previous round's assignment to figure out which instances are new, and learner tasks shall only be assigned to new instances. The reason for implementing only the new-instances logic is a potential edge case that could break the current naive learner assignment: when the number of tasks is much smaller than the total cluster capacity, we could fall into endless resource shuffling. We plan to better address this issue in version 4.0, where we take eventual load balance into consideration. Discussions on marking task weight have been going on for a while, but it is so far unclear what kind of eventual balance model we are going to implement at the current stage. In conclusion, we want to postpone the finalized design for eventual balance until the last version.
Version 2.0
Delivery goal: Scale down support
We will focus on delivering scale-down support after the success of version 1.0. We need to extend the v1 protocol, since we need existing instances to take the extra learning load; this breaks the v1 statement that "only new instances could take learner tasks". To make this happen, we need to deliver the following steps:
- Create new tooling for marking instances as ready to scale down.
- Tag the leaving information for targeted members.
- Scale down timeout support.
Version 3.0
Delivery goal: Eager rebalance
A detailed analysis and benchmark test need to be built before fully devoting effort to this feature. Intuitively, most applications should be able to tolerate minor discrepancies in task replay time, while the cost of extra rebalances and the increased debugging complexity are definitely unfavorable.
Version 3.0 builds upon the success of version 1.0 and could be done concurrently with version 2.0. We may choose to adopt or discard this change depending on the benchmark results.
Version 4.0 (Stretch)
Delivery goal: Task state labeling, eventual workload balance
Question here: we could deviate a bit from designing for the ultimate goal, and instead provide the user with a handy tool to do that.
The fourth and final version will take the application's eventual load balance into consideration. If we define a balancing factor x, the total number of tasks each instance owns should be within the range of +-x% of the expected number of tasks (according to relative instance capacity), which buffers some capacity in order to avoid imbalance. A stream.imbalance.percentage config will be provided for the user. The smaller this number is set, the stricter the assignment protocol will behave. A small sketch of this check follows below.
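A minimal sketch of the balancing-factor check just described (names are illustrative; the factor is expressed as a fraction, e.g. 0.2 for 20%):

```java
// Illustrative balance check: an instance is considered balanced when its task
// count stays within +-imbalanceFraction of its expected share (by capacity).
public class BalanceCheck {
    public static boolean isBalanced(int ownedTasks, double expectedTasks, double imbalanceFraction) {
        double lower = expectedTasks * (1.0 - imbalanceFraction);
        double upper = expectedTasks * (1.0 + imbalanceFraction);
        return ownedTasks >= lower && ownedTasks <= upper;
    }
}
```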
Some optimizations, such as balancing the load separately for stateful and stateless tasks, could also be applied here. So far version 4.0 still has many unknowns and is slightly beyond the incremental rebalancing scope. Our plan is to keep iterating on the details, or to bake a separate KIP for the balancing algorithm in the future.
We are going to add a new protocol type called "stream".
We are also adding new configs for users to better apply and customize the scaling change:
- stream.rebalancing.mode (options: upgrading, incremental): a setting to help ensure a no-downtime upgrade of an online application.
- A scale-down timeout config (name TBD): the time in milliseconds after which a stream thread that has been informed of a scale-down is force-terminated.
- A learner rebalance mode flag (name TBD; default: true): if set to true, a new member will proactively trigger a rebalance each time it finishes restoring one learner task's state, until it eventually finishes all the replaying; otherwise, the new stream thread will batch the "ready" calls to ask for a single round of rebalance.
- stream.imbalance.percentage (default: 0.2, i.e. 20%): the tolerated task imbalance factor between hosts before triggering a rebalance.
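As a usage sketch: only stream.rebalancing.mode and stream.imbalance.percentage are explicitly named in this KIP, and the values below are the proposed options/defaults, which may change.

```java
import java.util.Properties;

public class ScalingConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Proposed in this KIP; the scale-down timeout and learner rebalance
        // flag are described above but not yet named, so they are omitted here.
        props.put("stream.rebalancing.mode", "incremental"); // or "upgrading" during a rolling upgrade
        props.put("stream.imbalance.percentage", "0.2");     // tolerate +-20% task imbalance
        // These props would be passed to the KafkaStreams constructor along
        // with the usual StreamsConfig settings.
    }
}
```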
To make sure the delivery is smooth given the fundamental changes to KStream internals, we have built a separate Google Doc here that outlines the steps of the changes and can be shared. Feel free to give your feedback on this plan while reviewing the algorithm, because some of the algorithm requirements are highly coupled with internal architecture reasoning.
Compatibility, Deprecation, and Migration Plan
Minimum Version Requirement
This change requires Kafka broker version >= 0.9, where the broker will react with a rebalance when a normal consumer rejoins with changed encoded metadata. Client applications need to update to the earliest version that includes the KIP-429 version 1.0 change.
Recommended Upgrade Procedure
As mentioned above, a new protocol type shall be created. To ensure a smooth upgrade, we need to make sure existing jobs won't fail. The procedure is:
- Set the `stream.rebalancing.mode` to `upgrading`, which will force the stream application to stay with protocol type "consumer".
- Rolling restart the stream application; the change is automatically applied. This is safe because we are not changing the protocol type.
In the long term we are proposing a smoother and more elegant upgrade approach than the current one. However, it requires a broker upgrade, which may not be a trivial effort for the end user. For now, users can choose to take this much easier workaround.
N/A for the algorithm part. For implementation trade-offs, please review the doc linked in the implementation plan.