
The class GMSJoinLeave is responsible for knowing who is in the distributed system.  It interacts with Locators and Membership Coordinators and makes decisions about who should fill the role of Coordinator.  It is also responsible for detecting network partitions.

The diagrams in this document show some of the interactions that take place when new members join and how network partitions are detected.

 

 

Joining an existing distributed system

[Diagram: joining an existing distributed system]

The above diagram shows the interaction with a Locator and the membership Coordinator during startup.  The joining member first connects to all locators it knows about and asks them for the ID of the current membership coordinator.  Locators know this because they are themselves part of the distributed system and receive all membership views.  In the current implementation the new member does this in GMSJoinLeave.findCoordinator().  The locators also tell the new member which membership view they are aware of so that the new member can choose the most recent, should the coordinator be in the process of establishing a new view.  This is especially important if a new coordinator is in the process of taking control.
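The discovery step can be sketched roughly as follows.  The type and method names here are illustrative stand-ins, not Geode's actual API; the sketch shows only the "ask every locator, keep the answer with the most recent view" rule:

```java
// Hypothetical sketch of coordinator discovery. LocatorResponse and
// findCoordinator are illustrative names, not Geode's real classes.
import java.util.*;

public class FindCoordinatorSketch {
    // A locator's answer: the coordinator it knows of and the view ID it has seen.
    record LocatorResponse(String coordinatorId, int viewId) {}

    // Keep the answer carrying the most recent view, so a coordinator that is
    // mid-handoff is still discovered correctly.
    static LocatorResponse findCoordinator(List<LocatorResponse> answers) {
        return answers.stream()
                .max(Comparator.comparingInt(LocatorResponse::viewId))
                .orElse(null);
    }

    public static void main(String[] args) {
        List<LocatorResponse> answers = List.of(
                new LocatorResponse("memberA", 4),   // stale view
                new LocatorResponse("memberB", 5));  // newer view wins
        System.out.println(findCoordinator(answers).coordinatorId()); // memberB
    }
}
```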

Once the coordinator's ID is known the new member sends a JoinRequest to it.  The new member will know that it has been accepted when it receives a membership view containing its ID.  At this point it sets the view-ID of its identifiers.  The view-ID, in combination with its IP address and membership port (the UDP port used by JGroups) uniquely identifies this member.
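The uniqueness rule for member identity can be illustrated with a small sketch; the type name MemberId is an assumption for illustration, not Geode's actual class:

```java
// Illustrative sketch of the identity rule described above: a member ID is
// unique once the view ID is combined with the address and membership port.
public class MemberIdSketch {
    record MemberId(String address, int membershipPort, int viewId) {}

    public static void main(String[] args) {
        // The same address and port can appear again after a restart, but the
        // view ID assigned at join time makes the two incarnations distinct.
        MemberId first = new MemberId("10.0.0.5", 41000, 3);
        MemberId restarted = new MemberId("10.0.0.5", 41000, 9);
        System.out.println(first.equals(restarted)); // false
    }
}
```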

  

Simultaneous joins and leaves

[Diagram: simultaneous joins and leaves]

Geode can handle simultaneous membership additions and removals.  Join, Leave and Remove requests are queued in a list in GMSJoinLeave and are processed by a ViewCreator thread.  It is the ViewCreator thread that creates and sends new membership views.
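A minimal sketch of that queue-and-batch behavior follows.  The names are hypothetical and the real ViewCreator does much more (preparing, sending, and installing the view); this shows only how queued requests can be drained into a single new member list:

```java
// Illustrative sketch (not Geode's actual code) of batching queued join/leave
// requests into one new membership view, as a ViewCreator-style thread would.
import java.util.*;
import java.util.concurrent.*;

public class ViewCreatorSketch {
    enum Kind { JOIN, LEAVE }
    record Request(Kind kind, String memberId) {}

    // Drain all pending requests and apply them to the current membership,
    // producing the member list for the next view in a single pass.
    static List<String> buildNextView(List<String> current, Queue<Request> pending) {
        List<String> next = new ArrayList<>(current);
        Request r;
        while ((r = pending.poll()) != null) {
            if (r.kind() == Kind.JOIN && !next.contains(r.memberId())) next.add(r.memberId());
            if (r.kind() == Kind.LEAVE) next.remove(r.memberId());
        }
        return next;
    }

    public static void main(String[] args) {
        Queue<Request> q = new ConcurrentLinkedQueue<>();
        q.add(new Request(Kind.JOIN, "m3"));
        q.add(new Request(Kind.LEAVE, "m1"));
        System.out.println(buildNextView(List.of("m1", "m2"), q)); // [m2, m3]
    }
}
```

Handling all pending requests in one pass means one new view can admit several joiners and drop several departed members at once, rather than producing a view per request.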

 

Two locators starting with no other members

[Diagram: two locators starting with no other members]

It's best to stagger the start-up of locators, but Geode can handle simultaneous startup as well.  GMSLocator maintains a collection of what it calls "registrants": processes that have contacted it requesting the ID of the current coordinator.  If there is no coordinator it responds with the collection of registrants, and the processes that are trying to join use this information to determine which of them is most likely to be chosen as membership coordinator.  They then send a JoinRequest to that process in the hope that it will figure out that it should become coordinator and act on the request.
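The key property is that every joiner applies the same deterministic rule to the registrant set, so they all converge on one candidate.  The specific tie-break below (lexicographically smallest ID) is an illustrative assumption, not Geode's actual rule:

```java
// A sketch of the registrant-based tie-break described above. The rule used
// here (pick the smallest ID) is hypothetical; what matters is that all
// joiners apply the same deterministic rule and converge on one candidate.
import java.util.*;

public class RegistrantSketch {
    static String likelyCoordinator(Set<String> registrants) {
        return Collections.min(registrants); // same answer for every caller
    }

    public static void main(String[] args) {
        Set<String> registrants = Set.of("locator2:10335", "locator1:10334");
        System.out.println(likelyCoordinator(registrants)); // locator1:10334
    }
}
```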

In the above diagram we see L1 make the decision to become coordinator and create an initial membership view.  Since L1 has received a JoinRequest from L2, it includes L2 in this initial view.

 

Locators restarting and taking control

[Diagram: locators restarting and taking control]

Geode prefers to have locators be the membership coordinator when network partition detection is enabled, or when peer-to-peer authentication is enabled.  Other members will take on the role if there are no locators in the membership view but they will be deposed once a new locator joins the distributed system.
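That preference can be sketched as a simple selection rule; the types here are hypothetical stand-ins rather than Geode's actual code:

```java
// Sketch of the coordinator-preference rule above: a locator in the view is
// preferred as coordinator; another member serves only until a locator joins.
import java.util.*;

public class CoordinatorPreferenceSketch {
    record Member(String id, boolean isLocator) {}

    // Prefer the first locator in view order; fall back to the first member.
    static String chooseCoordinator(List<Member> view) {
        return view.stream().filter(Member::isLocator).findFirst()
                .orElse(view.get(0)).id();
    }

    public static void main(String[] args) {
        List<Member> noLocators = List.of(new Member("c1", false), new Member("c2", false));
        List<Member> withLocator = List.of(new Member("c1", false), new Member("loc1", true));
        System.out.println(chooseCoordinator(noLocators));  // c1
        System.out.println(chooseCoordinator(withLocator)); // loc1 deposes c1
    }
}
```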

But what happens if two locators are started at the same time?  Which one becomes the new coordinator?  The diagram above shows this interaction from the perspective of one of the locators.  The diagram below shows this same interaction from the perspective of the other locator.

In the initial implementation of GMSJoinLeave the current coordinator recognized that a locator was attempting to join and responded with a "become coordinator" message.  This led to a lot of complications when a second locator was also trying to join, so we decided to remove the whole "become coordinator" notion and have the current coordinator accept and process the JoinRequest.  This allows the locator to join and then detect that it should be the coordinator.

 

Locators starting and taking control - continued

[Diagram: locators starting and taking control, continued]

Here we see L2 starting up and attempting to join while L1 is in the process of joining and deposing C as the coordinator.

L2 contacts L1 to find the coordinator and learns that L1 expects to become the coordinator.  L2 attempts to join by sending a JoinRequest to L1, but L1 is not yet coordinator, so L1 merely queues the request and continues its own attempt to join.

L2 gives up waiting for a response from L1 and, having received a view from the GMSLocators it has contacted, attempts to join using coordinators selected from that view.  Eventually it attempts to join using C as the coordinator.

By the time C receives L2's JoinRequest it has been deposed as coordinator.  In response to the request it sends L2 a JoinResponse telling it that L1 is now coordinator.  Since L2 has already sent a JoinRequest to L1, it now knows that it must be patient and wait for a new view admitting it into the distributed system.

 

Detecting a network partition

[Diagram: detecting a network partition]

Here we see a network partition with three members on one side (L, M and N) and two members on the other (A and B).  When this happens the HealthMonitor is responsible for initiating suspect processing on members that haven't been heard from and cannot be contacted.  It eventually tells the JoinLeave component to remove one of the members.  All members of the distributed system perform this kind of processing and it can lead to a member deciding that the current Coordinator is no longer there and electing itself as the new Coordinator.
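The decision a member makes after suspect processing can be sketched as follows.  The names are hypothetical and the real HealthMonitor/JoinLeave interaction is asynchronous and message-driven; this shows only the "final check, remove, maybe elect self" outcome:

```java
// Hedged sketch of the suspect-processing decision described above: if a
// suspect fails a final health check it is removed from the view, and if the
// suspect was the coordinator, the checking member elects itself.
import java.util.*;
import java.util.function.Predicate;

public class SuspectSketch {
    // Returns the coordinator after suspect processing, given the current view,
    // the current coordinator, a reachability check, and this member's own ID.
    static String processSuspect(List<String> view, String coordinator,
                                 String suspect, Predicate<String> reachable,
                                 String self) {
        if (reachable.test(suspect)) return coordinator;   // false alarm
        view.remove(suspect);                              // kick it out
        return suspect.equals(coordinator) ? self : coordinator;
    }

    public static void main(String[] args) {
        List<String> view = new ArrayList<>(List.of("L", "A", "B", "M", "N"));
        // A checks on the suspect coordinator L, finds it unreachable...
        String newCoord = processSuspect(view, "L", "L", id -> false, "A");
        System.out.println(newCoord + " " + view); // A [A, B, M, N]
    }
}
```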

In the above diagram we see this happen on the A,B side of the partition.  B initiates suspect processing on the current Coordinator (process L) and it notifies A.  A performs a health check on L and decides it is unreachable, electing itself coordinator.  It then sends a prepare-view message with (A,B,M,N) and expects acknowledgement but receives none from M or N.  After checking on M and N via the HealthMonitor it kicks them out, creating the new view (A,B).  This view change passes the quorum check so A prepares and installs it.

On the losing side of the partition L is notified that A is suspect and kicks it out, creating view (L,B,M,N).  It prepares the view but gets no response from B.  It kicks B out, forming view (L,M,N).  This view has lost quorum so instead of sending it out it notifies M and N that they should shut down and then shuts itself down as well.

Let's look at the quorum calculations themselves.  If L is a locator and the others are normal members, the initial membership view (L,A,B,M,N) would have weights (3,15,10,10,10).  Locators get only 3 points of weight, regular members get 10, and the so-called Lead Member (here A) gets an additional 5 points.  View (A,B) represents a loss of 23 of the original 48 points, while view (L,M,N) represents a loss of 25 points, which is more than 50% of the total weight of the initial view.  This is why A and B remained standing while L, M and N shut down, even though there were more processes on their side of the network split.
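The arithmetic above can be worked through in code.  This is a sketch of the weighting rule as described in this document, not Geode's actual quorum checker:

```java
// Worked version of the quorum arithmetic above: locators weigh 3, regular
// members 10, and the Lead Member (here A) gets 5 more. A new view keeps
// quorum only if the lost weight is at most half the old view's total weight.
import java.util.*;

public class QuorumSketch {
    static int weight(String member, String leadMember, Set<String> locators) {
        int w = locators.contains(member) ? 3 : 10;
        return member.equals(leadMember) ? w + 5 : w;
    }

    static boolean hasQuorum(List<String> oldView, List<String> newView,
                             String leadMember, Set<String> locators) {
        int total = 0, lost = 0;
        for (String m : oldView) {
            int w = weight(m, leadMember, locators);
            total += w;
            if (!newView.contains(m)) lost += w;
        }
        return lost * 2 <= total;   // losing more than 50% means no quorum
    }

    public static void main(String[] args) {
        List<String> old = List.of("L", "A", "B", "M", "N"); // weights 3,15,10,10,10 = 48
        Set<String> locators = Set.of("L");
        System.out.println(hasQuorum(old, List.of("A", "B"), "A", locators));      // true: lost 23 of 48
        System.out.println(hasQuorum(old, List.of("L", "M", "N"), "A", locators)); // false: lost 25 of 48
    }
}
```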