



  • The following notation is used: words written in italics and without spacing denote class names without the package name, for example, GridCacheMapEntry


On receiving the full map for the current exchange, each node calls onDone() for this future. Listeners for the full map are then invoked (order is not guaranteed).
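The completion pattern above can be sketched as a minimal future with listeners. This is an illustrative simplification, not the real GridDhtPartitionsExchangeFuture: listeners registered before completion are invoked from onDone(), and a listener added after completion fires immediately.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

// Simplified sketch (not the real Ignite class) of an exchange future:
// listeners registered before completion are invoked from onDone();
// their relative invocation order is not guaranteed by the contract.
class ExchangeFutureSketch<T> {
    private final Queue<Consumer<T>> listeners = new ConcurrentLinkedQueue<>();
    private volatile T result;
    private volatile boolean done;

    void listen(Consumer<T> lsnr) {
        if (done)
            lsnr.accept(result);   // already completed: call immediately
        else
            listeners.add(lsnr);
    }

    void onDone(T res) {
        result = res;
        done = true;
        for (Consumer<T> lsnr : listeners)
            lsnr.accept(res);      // full map delivered to all waiters
    }
}
```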


Example of exchange and rebalancing for node join event


Node 2 does not drop its data and does not change the partition state, even though it no longer owns the partition by affinity on the new topology version.


Step 3. Node 4 starts issuing demand requests for cache data to any node that has the partition data.
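Supplier selection for a demand request can be sketched as below. This is a hypothetical simplification (node names and the selection rule are illustrative): the demander may ask any node that currently holds the partition's data.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of supplier selection during rebalancing:
// the demander (e.g. the new Node 4) may request partition data from
// any node that currently owns the partition.
class SupplierSelection {
    // owners: partition id -> nodes currently holding that partition's data
    static String pickSupplier(Map<Integer, List<String>> owners, int partId) {
        List<String> nodes = owners.getOrDefault(partId, List.of());
        if (nodes.isEmpty())
            throw new IllegalStateException("No owner for partition " + partId);
        // any owner will do; take the first for determinism in this sketch
        return nodes.get(0);
    }
}
```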


Step 5. After some delay (timer), Node 4 sends a single message to the coordinator. This message has an absent ('null') exchange version.


Step 6. The coordinator sends the updated full map to the other nodes.


The partition state is changed to Renting locally (other nodes still consider it Owning).
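The local/global view divergence at this point can be sketched as follows. This is an illustrative model (state names follow the text; the class is not a real Ignite type): the evicting node flips its local state to Renting, while the globally shared map still shows Owning until the coordinator redistributes an updated full map.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the local/global view divergence: the node that
// stops owning a backup marks it RENTING locally, while the coordinator's
// full map still shows OWNING until the next single message is processed.
class PartitionViews {
    enum State { MOVING, OWNING, RENTING, EVICTED }

    final Map<Integer, State> localView = new HashMap<>();
    final Map<Integer, State> globalView = new HashMap<>();

    void startEviction(int partId) {
        localView.put(partId, State.RENTING);   // local change only
        // globalView is updated later, after the coordinator
        // redistributes the full map
    }

    void applyFullMap(Map<Integer, State> fullMap) {
        globalView.putAll(fullMap);
    }
}
```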


Step 8. After some delay (timer), Node 2 sends a single message to the coordinator.


All nodes eventually see the Renting state for the P3 backup for Node 4.



A SingleMessage with exchId=null is sent when a node updates its local partitions' state and schedules a background cluster notification.

In contrast, when a partition map exchange happens, it is completed with exchId != null.
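The two flavours of single message can be distinguished as in the sketch below. The field and class names are assumptions for illustration, not the real GridDhtPartitionsSingleMessage API: a null exchange id marks a background partition-state update, a non-null one ties the message to a concrete exchange.

```java
// Illustrative sketch (field names are assumptions, not the real Ignite
// message class): exchId == null marks a background partition-state
// update; a non-null exchId ties the message to a concrete partition
// map exchange.
class SingleMessageSketch {
    final String exchId;   // null => scheduled background notification

    SingleMessageSketch(String exchId) { this.exchId = exchId; }

    boolean isBackgroundUpdate() { return exchId == null; }
}
```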


If more topology updates occur in the meantime, more cluster nodes will be responsible for the same partition at a particular moment.
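This transient multi-ownership can be sketched as an owner set that temporarily grows. The class and node names here are illustrative, not Ignite types: a new node that finishes rebalancing becomes an owner before the old backup's eviction completes.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of transient multi-ownership: until rebalancing finishes and
// eviction completes, successive topology versions can each leave a node
// owning the same partition, so the owner set temporarily grows.
class OwnerTracking {
    final Map<Integer, Set<String>> owners = new HashMap<>();

    void nodeBecameOwner(int partId, String node) {
        owners.computeIfAbsent(partId, k -> new HashSet<>()).add(node);
    }

    void nodeEvicted(int partId, String node) {
        Set<String> s = owners.get(partId);
        if (s != null)
            s.remove(node);
    }
}
```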


An additional synthetic exchange will be issued.