
Code Block
languagescala
titleGroupCoordinator.scala
def handleJoinGroup(...)

def handleSyncGroup(...)
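
One plausible enforcement point, sketched here purely for illustration since the method bodies are elided above, is to check the group size when a member attempts to join. The class, method, and error shapes below are assumptions for illustration, not part of this KIP:

Code Block
languagescala
titleGroupSizeCheck.scala
// Illustrative sketch only: reject a join attempt once the group has
// reached group.max.size. All names here are hypothetical.
case class MemberMetadata(memberId: String)

class GroupState(val groupId: String, val groupMaxSize: Int) {
  private val members = scala.collection.mutable.Map.empty[String, MemberMetadata]

  // Returns Left with a reason when the group is already at its cap;
  // in the real coordinator this would map to an error response code.
  def tryAddMember(member: MemberMetadata): Either[String, Unit] =
    if (!members.contains(member.memberId) && members.size >= groupMaxSize)
      Left(s"group $groupId already has $groupMaxSize members")
    else {
      members += member.memberId -> member
      Right(())
    }
}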


Proposed Changes


We shall add a new config called group.max.size on the group coordinator (broker) side.

Code Block
languagescala
titleKafkaConfig
val GroupMaxSizeProp = "group.max.size"
...
val GroupMaxSize = 1000000
...
.define(GroupMaxSizeProp, INT, Defaults.GroupMaxSize, MEDIUM, GroupMaxSizeDoc)

The default value of 1,000,000 proposed here is based on a rough size estimate of member metadata (100 B per member), so the maximum allowed memory usage per group is 100 B * 1,000,000 = 100 MB, which should be a sufficiently large number for most use cases I know of. Discussion on this number is welcome.
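
To make the arithmetic explicit, here is a minimal sketch of the estimate; the 100-byte per-member figure is the rough assumption stated above, and the object name is invented for illustration:

Code Block
languagescala
titleGroupMemoryEstimate.scala
// Back-of-the-envelope estimate of the maximum memory a single group
// can consume under the proposed default cap.
object GroupMemoryEstimate extends App {
  val memberMetadataBytes = 100L      // rough per-member metadata size (assumption)
  val groupMaxSize        = 1000000L  // proposed Defaults.GroupMaxSize

  val maxBytesPerGroup = memberMetadataBytes * groupMaxSize
  println(s"max memory per group: ${maxBytesPerGroup / 1000000.0} MB")  // 100.0 MB
}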


Compatibility, Deprecation, and Migration Plan

  • This is a backward-compatible change; existing users are not affected, and no special migration tools or phase-out of older behavior are needed.

Rejected Alternatives

Some discussion here proposed other approaches, such as enforcing a memory limit or changing the initial rebalance delay. We believe those approaches are "either not strict or not intuitive" (quote from Stanislav) compared with a group size cap, which is very easy for end users to understand and to configure in a customized manner.