
Top-Level Goal

The top-level goal is a single API for managing cluster configuration.

The beneficiaries of this work are those who want to change the configuration of the cluster (create or destroy regions, indices, gateway receivers/senders, etc.) and have these changes replicated on all the applicable servers and persisted in the cluster configuration service. In addition to developers building Geode-based applications, the target user group includes developers working on other parts of the Geode code base, such as Spring Data for Apache Geode, Lucene index queries, or storage for the JDBC connector.

Problem Statement

In the current implementation:

  • Most cluster configuration tasks are possible, but only by coordinating XML configuration files, properties files, and gfsh commands.
  • Many of the desired outcomes are achievable through multiple paths.
  • Establishing a consistent configuration and persisting it across the cluster is difficult, sometimes impossible.

Product Goals 

The developer should be able to:

  • Create regions/indices on the fly.

  • Persist the configuration and apply it to the cluster (when a new node joins, it has the config; when the server restarts, it has the config).

  • Obtain a consistent view of the current configuration.

  • Apply the same change to the cluster in the same way.

  • Change the configuration in one place.

  • Obtain this configuration without being on the cluster.

Proposed Solution

The proposed solution includes:

  • Address the multiple-path issue by presenting a single public API for configuring the cluster, including such tasks as creating a region, destroying an index, or updating an async event queue.
  • Provide a means to persist the change in the cluster configuration.
  • Save a configuration to the Cluster Management Service without having to restart the servers.
  • Obtain the cluster management service from a cache when calling from a client or a server.
  • Pass a config object to the cluster management service.
  • Use CRUD operations to manage config objects.
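
To make the CRUD idea concrete, here is a minimal sketch of such a service backed by an in-memory store. All class and method names here are illustrative assumptions, not the actual Geode API; the real service would also update running servers and the persisted cluster configuration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical JAXB-style config POJO (illustrative, not the real Geode class).
class RegionConfig {
  final String name;
  final String type;
  RegionConfig(String name, String type) { this.name = name; this.type = type; }
}

// Hypothetical result object returned by every CRUD call.
class ConfigResult {
  final boolean changed;
  final String message;
  ConfigResult(boolean changed, String message) { this.changed = changed; this.message = message; }
}

// Sketch of the proposed CRUD surface. Create and delete are idempotent:
// repeating a call leaves the cluster in the same desired state.
class InMemoryClusterManagementService {
  private final Map<String, RegionConfig> store = new HashMap<>();

  ConfigResult create(RegionConfig config) {
    if (store.containsKey(config.name)) {
      return new ConfigResult(false, "Region /" + config.name + " already exists");
    }
    store.put(config.name, config);
    return new ConfigResult(true, "Created region /" + config.name);
  }

  RegionConfig get(String name) {
    return store.get(name);
  }

  ConfigResult delete(String name) {
    if (store.remove(name) == null) {
      return new ConfigResult(false, "Region /" + name + " does not exist");
    }
    return new ConfigResult(true, "Deleted region /" + name);
  }
}
```

The in-memory map stands in for the two real responsibilities (runtime update and persistence); the point is the single entry point and the idempotent create/delete semantics.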

This solution should meet the following requirements:

  • The user needs to be authenticated and authorized for each API call, based on the resource they are trying to access.

  • The user can call the API from either the client side or the server side.

  • The outcome (behavior) is the same on both client and server:

    •  the change applies cluster-wide

    •  the call is idempotent

What We Have Now

Our admin REST API already serves this purpose to a degree, but it has these shortcomings:

  1. It's not a public API
  2. The API is restricted to the operations implemented as gfsh commands, because the argument to the API is a gfsh command string.
  3. Commands perform similar tasks, yet their behaviors are not always consistent with each other.

Below is a diagram of the current state of things:

[Diagram: commands]

Given the current state of the commands, it is not easy to extract a common interface for all of them, and developers do not want to use gfsh command strings as a makeshift API for calling into the commands. We need a unified interface and a unified workflow for all the commands.

Proposal

We propose a new Cluster Management Service (CMS) which has two responsibilities:

  • Update the runtime configuration of servers (if any are running)
  • Persist configuration (if enabled)

Note that in order to use this API, Cluster Configuration needs to be enabled.


[Diagram: highlevel]

The CMS API is exposed as a new endpoint as part of "Admin REST APIs", accepting configuration objects (JSON) that need to be applied to the cluster. CMS adheres to the standard REST semantics, so users can use POST, PATCH, DELETE and GET to create, update, delete or read, respectively. The API returns a JSON body that contains a message describing the result along with standard HTTP status codes.
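
As a concrete illustration, a plain JDK HTTP client could assemble such a request as follows. This is only a sketch: the host, headers, and payload mirror the endpoint examples on this page, and nothing is actually sent.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds (but does not send) a POST request against the proposed CMS endpoint.
// Endpoint path and header names follow the examples on this page.
class CmsRequestSketch {
  static HttpRequest createRegionRequest(String locatorHost, String jsonBody) {
    return HttpRequest.newBuilder()
        .uri(URI.create("http://" + locatorHost + ":8080/geode/v2/regions"))
        .header("user", "user1")
        .header("password", "password1")
        .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
        .build();
  }
}
```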


Root End Point

API | Status Code | Response Body

Endpoint: http://locator:8080/geode/v2

Method: GET

Headers:

user: user1

password: password1



200

Success Response
{
    "number_of_locators": 3,
    "number_of_servers": 8,
    "region_url": "/geode/v2/regions",
    "gateway_receiver_url": "/geode/v2/gwr",
    "gateway_sender_url": "/geode/v2/gws"
}



401
Error Response
{
    "message": "Missing authentication credential header(s)"
}
403
Error Response
{
    "message": "User1 not authorized for CLUSTER:READ"
}


Create End Point

API | Status Code | Response Body

Endpoint: http://locator:8080/geode/v2/regions

Method: POST

Headers:

user: user1

password: password1

Body:

Request Body
{
  "regionConfig": {
      "name": "Foo",
      "type": "REPLICATE" 
  }
}
201
Success Response
{
  "Metadata": {
    "Url": "/geode/v2/regions/Foo"
  }
}
304
Success Response
{
  "message": "Region /Foo already exists"
}
400
Error Response
{
    "message": "Region type is a required parameter"
}
401
Error Response
{
    "message": "Missing authentication credential header(s)"
}
403
Error Response
{
    "message": "User1 not authorized for DATA:MANAGE"
}
500
Error Response
{
    "message": "Failed to create region /Foo because of <reason>"
}

Note that the CREATE endpoint is idempotent – i.e. it should be a NOOP if the region already exists.

List End Point

API | Status Code | Response Body

Endpoint: http://locator:8080/geode/v2/regions

Method: GET

Headers:

user: user1

password: password1



200

Success Response
{
    "Total_results": 10,
    "Regions" : [
     {
       "Name": "Foo",
       "Url": "/geode/v2/regions/Foo"
     },
     ...
     ]
}
401
Error Response
{
    "message": "Missing authentication credential header(s)"
}
403
Error Response
{
    "message": "User1 not authorized for CLUSTER:READ"
}


Describe End Point

API | Status Code | Response Body

Endpoint: http://locator:8080/geode/v2/regions/Foo

Method: GET

Headers:

user: user1

password: password1


 

200

Success Response
{
    "Name": "Foo",
    "Data_Policy": "partition",
    "Hosting_Members": [
      "s1",
      "s2",
      "s3"
      ],
    "Size": 0,
    "Indices": [
     {
     "Id": 111,
     "Url": "/geode/v2/regions/Foo/index/111"
     }
    ]

}
401
Error Response
{
    "message": "Missing authentication credential header(s)"
}
403
Error Response
{
    "message": "User1 not authorized for CLUSTER:READ"
}
404
Error Response
{
     "message": "Region with name '/Foo' does not exist"
}

Update End Point

API | Status Code | Response Body

Endpoint: http://locator:8080/geode/v2/regions/Foo

Method: PATCH

Headers:

user: user1

password: password1

Body:

Request Body
{
  "regionConfig": {
      "gateway_sender_id": ["1","2"]
  }
}


200

Success Response
{
  "Metadata": {
    "Url": "/geode/v2/regions/Foo"
  }
}
400
Error Response
{
    "message": "Invalid parameter specified"
}
401
Error Response
{
    "message": "Missing authentication credential header(s)"
}
403
Error Response
{
    "message": "User1 not authorized for DATA:MANAGE"
}
404
Error Response
{
    "message": "Region with name '/Foo' does not exist"
}


500
Error Response
{
    "message": "Failed to update region /Foo because of <reason>"
}

Delete End Point

API | Status Code | Response Body

Endpoint: http://locator:8080/geode/v2/regions/Foo

Method: DELETE

Headers:

user: user1

password: password1



204

<Successful deletion>

304
Success Response
{
    "message": "Region with name '/Foo' does not exist"
}
401
Error Response
{
    "message": "Missing authentication credential header(s)"
}
403
Error Response
{
    "message": "User1 not authorized for DATA:MANAGE"
}
500
Error Response
{
    "message": "Failed to delete region /Foo because of <reason>"
}

Note that the DELETE endpoint is idempotent – i.e. it should be a NOOP if the region does not exist.
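
On the caller side, this idempotency means both outcomes can be treated as "desired state reached". A tiny hypothetical helper (not part of the proposed API) illustrates the point:

```java
// Hypothetical caller-side helper: because DELETE is idempotent, both
// 204 (just deleted) and 304 (already absent) mean the region is gone.
class DeleteStatus {
  static boolean regionIsGone(int httpStatus) {
    return httpStatus == 204 || httpStatus == 304;
  }
}
```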


Let's look at some code to see how users can use this service. The example below shows how to create a region using CMS.

Curl (any standard REST client)

Curl
curl http://locator.host:8080/geode/v2/regions -XPOST -d '
{
  "regionConfig": {
      "name": "Foo",
      "type": "PARTITION"
  }
}'

On Client 


Client
public class MyApp {
  public static void main(String[] args) {
    //1. Get the service from Cache
    ClientCache cache = new ClientCacheFactory().addPoolLocator("127.0.0.1", 10334).create();
    ClusterManagementService cms = cache.getClusterManagementService();
    
    //2. Create the config object; these are JAXB-generated POJOs
    RegionConfig regionConfig = new RegionConfig();
    regionConfig.setRefid("REPLICATE");
    
    //3. Invoke create, update, delete or get depending on what you want to do.
    ConfigResult result = cms.createRegion("Foo", regionConfig); //create(regionName, config) returns a ConfigResult or throws an exception   
  }
}

[Sequence diagram: onClient-Sequence]

On Server

Here's how one can use CMS on a server.

Server
public class MyFunction implements Function<String> {
  @Override
  public void execute(FunctionContext context) {
    //1. Get the service from cache
    Cache cache = context.getCache();
    ClusterManagementService cms = cache.getClusterManagementService();
    
    //2. Create the config object; these are JAXB-generated POJOs
    RegionConfig regionConfig = new RegionConfig();
    regionConfig.setRefid("REPLICATE");
    
    //3. Invoke create, update, delete or get depending on what you want to do.
    ConfigResult result = cms.createRegion("Foo", regionConfig); //create(regionName, config) returns a ConfigResult or throws an exception
  }
}


[Sequence diagram: onServer-Sequence]

Behind the scenes

Following the Configuration Persistence Service effort, we already have a set of configuration objects derived from the cache XML schema. These serve as the common objects that the developer would use to build a config instance. The developer would then ask the cluster management service to apply it, either on the cache (creating the real thing on an existing cache) or on the configuration persistence service (persisting the configuration itself).

[Diagram: ConfigElement]


On the locator side, the configuration service framework will just handle the workflow. It's up to each individual ClusterConfigElement to implement how it needs to be persisted and applied. 

Pros and Cons:

Pros:

  1. A common interface to call either on the locator/server/client side
  2. A common workflow to enforce behavior consistency
  3. Modularized implementation: a configuration object needs to implement the additional interfaces in order to be used in this API. This allows us to add functionality gradually, per functional group.

Cons:

  1. Existing gfsh commands need to be refactored to use this API as well, otherwise we would have duplicate implementations, or have different behaviors between this API and gfsh commands.
  2. When refactoring gfsh commands, some commands' behaviors will change if they want to strictly follow this workflow, unless we add additional APIs for specific configuration objects.

Migration Strategy:

Our current commands use numerous options to configure their behavior. We will have to follow these steps to refactor the commands.

  1. Combine all the command options into one configuration object inside the command itself.
  2. Have the command execution call the public API if the command conforms to the new workflow. In this step, the config objects need to implement ClusterConfigElement.
  3. If the command can't use the common workflow, make a special method in the API for that specific configuration object. (We need to evaluate carefully - we don't want to make too many exceptions to the common workflow.)
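
The facade end-state of these steps can be sketched as follows. All names here are hypothetical stand-ins (the real commands have many more options, and the real API takes typed config objects rather than a map):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical public API the command delegates to (illustrative only).
interface RegionCreator {
  String createRegion(String name, Map<String, String> config);
}

// After migration, the gfsh command is a thin facade: it collects its
// options into one config object and calls the public API.
class CreateRegionCommand {
  private final RegionCreator api;

  CreateRegionCommand(RegionCreator api) {
    this.api = api;
  }

  String execute(String name, String type, String diskStore) {
    // Step 1: combine all command options into a single config object.
    Map<String, String> config = new HashMap<>();
    config.put("type", type);
    if (diskStore != null) {
      config.put("disk-store", diskStore);
    }
    // Step 2: delegate to the public API instead of a bespoke implementation.
    return api.createRegion(name, config);
  }
}
```

Because the facade holds no cluster logic of its own, it can live solely on the gfsh client, as described below.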

The above work can be divided into functional groups so that different groups can share the workload.

Once all the commands are converted to use the ClusterManagementService API, each command class can be reduced to a facade that collects the options and their values, builds the config object, and calls into the API. At this point, the command objects can exist only on the gfsh client.

The end architecture would look like this:

[Diagram: migration]

Project Milestones

  1. API is clearly defined
  2. All commands are converted to use this API
  3. Command classes exist only on the gfsh client. The GfshHttpInvoker uses the REST API to call the ClusterManagementService with the configuration objects directly.



1 Comment

  1. The REST API design does not follow best practice for designing a REST API. Rather than relying on the payload, we should probably do something like:

    VERB   | PATH                       | Action
    GET    | /geode/v1/regions          | Returns all the regions created
    POST   | /geode/v1/regions/customer | Creates a region "customer" based on the payload
    GET    | /geode/v1/regions/customer | Gets the current configuration of the region named customer
    DELETE | /geode/v1/regions/customer | Destroys the customer region