We've been writing a new client-server protocol as a public API for the creation of new Geode clients. We settled on using Protobuf as the encoding for the protocol as it makes writing clients easier by abstracting away a lot of the encoding details.


One of the big challenges in designing a new protocol has been how to encode values. Like the old binary client protocol, the PDX encoding is complicated, underdocumented, and stateful. However, we need a way for users to send values that are more complex than mere primitives or maps of primitives.

The first approach was to use the JSON-PDX conversion that is already used for the REST API. Many languages have libraries to encode objects as JSON, and it's a familiar format to many developers. However, using JSON for encoding has downsides, in that it's large and slow.

Selecting an Encoding Mechanism

Adding this encoding mechanism to the protocol is not mutually exclusive with allowing JSON or custom encodings: the object encoding can be pluggable, and the user's desired response encoding should be selected during the handshake process.

Regardless of proposal, we should allow users to have a pluggable object encoding that they can register a handler with on the server. This encoder will receive a byte array and return an Object. This allows users to do custom serialization if desired.
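As a sketch, such a server-side handler might amount to no more than the following interface (the names CustomValueDecoder and Utf8StringDecoder are illustrative assumptions, not part of the proposal):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the pluggable decoder described above; the names here
// are illustrative assumptions, not part of the actual proposal.
interface CustomValueDecoder {
    // Receives the raw encoded bytes and returns the reconstructed domain object.
    Object decode(byte[] encodedValue);
}

// A trivial example decoder that treats the payload as a UTF-8 string.
class Utf8StringDecoder implements CustomValueDecoder {
    @Override
    public Object decode(byte[] encodedValue) {
        return new String(encodedValue, StandardCharsets.UTF_8);
    }
}
```

A user would register an implementation like this with the server for their chosen value format; any byte array the server cannot decode itself would be passed through to the registered handler.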

Protobuf Struct Encoding and Extension

This section is intended as useful background on the start of some thoughts about encoding a PDX-like type with Protobuf; for the proposed encoding, see "The Proposed Encoding", below.

Protobuf ships with a Struct message, defined in struct.proto, that can be recursively nested to encode JSON.

By extending the same sort of structure, we can encode almost any value with a type that is supported, including adding support for more complex types like dates or UUIDs. This should make it possible to write serializers and should enable driver developers to write auto-serializers that will serialize these objects via reflection to Protobuf.

Protobuf "struct"
message Struct {
  string typeName = 1;
  repeated StructEntry entries = 2;
}

message StructEntry {
  string fieldName = 1;
  oneof value {
    Struct nestedStruct = 2;
    int64 numericValue = 3;
    int32 byteValue = 4; // Protobuf has no single-byte scalar; int32 is the closest fit
    bool booleanValue = 5;
    double doubleValue = 6;
    float floatValue = 7;
    bytes binaryValue = 8;
    string stringValue = 9;
    google.protobuf.NullValue nullValue = 10;
    // Collections
    List listResult = 15;
    Map mapResult = 16;
  }
}

message List {
    repeated EncodedValue elements = 1;
}

message Map {
    repeated Entry entries = 1;
}

The typeName field can be used for other clients to recognize the same type. Internally, it will be stored in the PDXInstance that this is converted to, but that detail shouldn't need to be exposed to the driver developer.

So for example, the following class and value:

class User {
  String name;
  int age;
}

value = new User("Amy", 64);

would encode as (using a pseudo-static initializer syntax):

  typeName: "User",
  entries: [
    StructEntry{ fieldName: "name", value: stringValue{"Amy"} },
    StructEntry{ fieldName: "age", value: numericValue{64} }
  ]

This all gets compiled down to binary for an encoding that is more efficient than JSON.

Ideally, a driver developer would provide annotations or registration hooks for application developers to specify the manner in which a type should be serialized. In languages where setters and getters are the convention, it would probably be more idiomatic to reflect over getters and setters rather than over the member variables of the object.
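As a rough illustration of the reflection half of this, the following sketch walks an object's member variables and produces a fieldName-to-value map that a driver could then translate into Struct/StructEntry messages (the class name, and the use of raw fields rather than getters, are assumptions for brevity):

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal reflection-based auto-serializer sketch: reads each declared member
// variable and collects (fieldName -> value) pairs, which a driver would then
// encode as StructEntry messages under the class's simple name as typeName.
class ReflectionSerializer {
    static Map<String, Object> toEntries(Object obj) {
        Map<String, Object> entries = new LinkedHashMap<>();
        for (Field field : obj.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            try {
                entries.put(field.getName(), field.get(obj));
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        return entries;
    }
}

// The User class from the example above.
class User {
    String name;
    int age;
    User(String name, int age) { this.name = name; this.age = age; }
}
```

For new User("Amy", 64) this yields {name=Amy, age=64}; a getter-based variant would enumerate getX() methods instead of declared fields.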

The Proposed Encoding

As an optimization to avoid sending field names with every message, allow clients (and servers) to cache metadata for data they are about to send. This is done by registering an ID that can be used in future messages to refer to the metadata without retransmitting that metadata. This encoding will not actually be smaller for single values of a type, but if multiple values of the same type are sent the savings can be significant.

Type registration will be per-connection (meaning IDs cannot be cached between connections). This eliminates the need for synchronization on the server and decouples type registrations from the internal details of PDX. It also means that drivers only have to keep track of a relatively small amount of data.

The first time a client (or server) sends a type, it will send it in a NewStruct message along with a unique ID number, which the receiving side will cache. After that, it should reuse that ID number and send StructByID messages.
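The bookkeeping this requires on the sending side is small; a per-connection registry might be sketched like this (the class and method names are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-connection type-ID bookkeeping on the sending side. The first
// time a type is sent, a fresh ID is assigned (so the caller knows to emit a
// full NewStruct); afterwards the cached ID is reused (StructByID).
class ConnectionTypeRegistry {
    private final Map<String, Integer> idsByTypeName = new HashMap<>();
    private int nextId = 1;

    // Returns the existing ID for this type, or null if it has not been sent yet.
    Integer lookup(String typeName) {
        return idsByTypeName.get(typeName);
    }

    // Assigns and records a new ID for a type about to be sent for the first time.
    int register(String typeName) {
        int id = nextId++;
        idsByTypeName.put(typeName, id);
        return id;
    }
}
```

A sender calls lookup first; on a miss it calls register and emits a NewStruct carrying that ID, otherwise it emits a StructByID. The receiver keeps the mirror-image map from ID to cached type definition.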

So for example, using the same User from above:

class User {
  String name;
  int age;
}

value = new User("Amy", 64);

Suppose the client chose ID 42 for this type. Then the first put message using such a value would have a value like so:

    key: EncodedValue{intResult: 12},
    value: EncodedValue{
        newStruct: NewStruct{
            typename: "User",
            typeID: 42,
            fieldNames: ["name", "age"],
            fields: [
                EncodedValue{stringResult: "Amy"},
                EncodedValue{intResult: 64}
            ]
        }
    }

A later PutRequest would encode the value like this (enclosing Request omitted for succinctness):

  key: EncodedValue{intResult: 111},
  value: EncodedValue{
    structById: StructByID{
      typeID: 42,
      fields: [
        EncodedValue{stringResult: "Amy"},
        EncodedValue{intResult: 64}
      ]
    }
  }

Message Definitions 

This is the proposed EncodedValue message that will contain values a client sends to the server or the server sends to the client:

Protobuf Type registration
message Entry {
    EncodedValue key = 1;
    EncodedValue value = 2;
}

message EncodedValue {
    oneof value {
        // primitives
        int32 intResult = 1;
        int64 longResult = 2;
        int32 shortResult = 3;
        int32 byteResult = 4;
        bool booleanResult = 5;
        double doubleResult = 6;
        float floatResult = 7;
        bytes binaryResult = 8;
        string stringResult = 9;
        google.protobuf.NullValue nullResult = 11;
        NewStruct newStruct = 12;
        StructByID structById = 13;

        // Result serialized using a custom serialization format. This can only be used if
        // a HandshakeRequest is sent with valueFormat set to a valid format.
        // See HandshakeRequest.valueFormat.
        bytes customObjectResult = 14;

        // Collections
        List listResult = 15;
        Map mapResult = 16;
        Set setResult = 17;

        // Primitive arrays
        NumericArray intArray = 18;
        NumericArray longArray = 19;
        NumericArray shortArray = 20;
        NumericArray booleanArray = 21;
        ByteArrayArray byteArrayArray = 22;
        ObjectArray objectArray = 23;

        // Used in NewStruct messages for defining fields that can be of multiple types.
        // This encoded value will contain the actual type of the field but the type
        // definition will have Object for the field type.
        // This is kind of a hack, sorry.
        EncodedValue objectField = 24;

        // if we decide to add builtin support for additional types, they can go here.
    }
}

message NewStruct {
    string typename = 1;
    int32 typeID = 2;
    repeated string fieldNames = 3;
    repeated EncodedValue fields = 4;
}

message StructByID {
    int32 typeID = 1;
    repeated EncodedValue fields = 2;
}

message List {
    repeated EncodedValue elements = 1;
}

message Map {
    repeated Entry entries = 1;
}

// All numeric values in Protobuf are encoded using the same varint encoding,
// so this encodes identically for all numbers and booleans.
message NumericArray {
    repeated int64 elements = 1;
}

message ByteArrayArray {
    repeated bytes arrays = 1;
}

message ObjectArray {
    repeated EncodedValue objects = 1;
}

Under this EncodedValue scheme, types defined by the server and types defined by the client will use different sets of IDs (though these can refer to the same cached values if they are the same). This is because we intend to add support for asynchronous messages and/or multiplexing of multiple channels of communication over one socket, and this avoids having the server and client race to assign IDs. If IDs were shared, the server would need to send back new IDs when it sent back types the client had not seen before.

Type definitions will encode all values that are not primitives or arrays of primitives as Objects that may be of any type, whereas primitives will be type checked. Clients may do their own validation. This is, in significant part, a leaky abstraction due to the way PDX saves values.

If a client is sending mutually recursive types or types that contain instances of themselves, it should send the type definition the first time one is seen (or in the parent instance) and send the type with ID in each later instance.

Whether a client must send all subsequent values by ID, or may send the full type definition each time, should be configurable in the handshake.


In order to avoid arbitrary object serialization (which can lead to gadget chain exploits), we will probably need to constrain valid types to those registered as DataSerializable, or possibly even only those registered with the ReflectionBasedAutoSerializer. This may also mean that we need a special class of typenames for those types that are put first by a client.

The way that objects are deserialized on the server is dependent on how PDX behaves now.

A driver developer may wish to provide a way for users to register types before sending values. An earlier version of this document described a protocol where types had to be defined in a separate message before the value in which they were first used, with a separate list of types for the registration method. Because using the same list of types as EncodedValue amounts to much the same thing as sending a new value, we opted for the method above.

Driver developers will have to make sure that types they want to use in different language clients can be correlated, so package names may or may not make sense. The naming convention is not entirely decided, nor is whether we can register nameless types. It may be wise to reserve a set of names with special meaning ("JSON", perhaps?) and perhaps a set of names corresponding to classes that have no domain class in Java (a leading underscore, or just names with no package?)

The use of NumericArray for all the integral types is because they all have the same varint encoding and will encode the same way on the wire. It may be advisable to use more restricted types and separate messages to get better typing in the generated Protobuf code.
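To make the "same varint encoding" point concrete, here is a hand-rolled sketch of Protobuf's base-128 varint scheme (illustration only, not using the Protobuf library). A bool, int32, or int64 field holding the same value produces identical bytes on the wire:

```java
import java.io.ByteArrayOutputStream;

// Hand-rolled base-128 varint encoder, for illustration only. Protobuf encodes
// bool, int32, and int64 scalars with this same scheme, which is why a single
// NumericArray message can carry all the integral (and boolean) array types.
class Varint {
    static byte[] encode(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Emit 7 bits at a time, low-order group first, with the high bit of
        // every byte except the last set as a continuation flag.
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.write((int) value);
        return out.toByteArray();
    }
}
```

For example, encode(1) is the single byte 0x01 whether the schema declared the field bool, int32, or int64, and encode(300) is the two bytes 0xAC 0x02.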

Type Mappings

Each of the primitives maps to the corresponding Java primitive. Arrays map to arrays of Java primitives. Other fields will encode to the corresponding objects.



  1. Nice write up!


    Does the server have to be the one to assign ids? Could TypeRegistrationRequest and TypeDefinitionLookupResponse maybe be the same TypeDefinition message, which includes an id?

    I wonder if type registrations would be better included inline in the message, rather than as a separate request. It seems like if a client is reading a bunch of streaming results with unknown types, it might complicate the client logic to make out-of-band requests to look up the types as it is trying to process the results. Maybe something like below? The sender should include the TypeDefinition with the first request and send the id for all subsequent requests.

    message Value {
      oneof typeInfo {
        TypeDefinition type = 1;
        int32 typeID = 2;
      }
      repeated ValueField fields = 3;
    }
    1. The only downside I can think of is that the client will have to maintain a cache of types it's seen, and not prune that cache because it might need types later. If we want to remedy this later we can always add a "unregisterType" message or similar.

      1. Well, the sender can control the size of the cache on both the sender and the receiver. So the sender could decide to cache only 2^16 types or something like that. Once those are used up, any new types would have to be sent in full, or the sender could even decide to replace less frequently used types.

        I think the main downside with this approach is that it requires the client and the server to fully deserialize all values in the order they are received on the socket, because later values will use ids that were defined in earlier values.

        The approach you outlined with a separate message to request a type definition or ID allows the client or the server to lazily deserialize data, but I think it requires more complex logic on the client because the client has to send messages to look up types while trying to deserialize an object graph.

        1. How would this work with objects coming back to the client from the server that haven't already been registered on the client?

          1. There are two caches, one for each direction. So for client->server messaging, the client generates ids for types and the server has to cache all of the types it sees from the client. For server->client messages, the server generates ids for types and the client has to cache all of the types it has seen from the server. The ids can be completely different; the server and client might give different ids to the same type.

  2. I could see wanting to use the first method to quickly create a driver and then move to the second method as my code base matures and I want to get better performance.  I would definitely want to have the second option at some point.

  3. One other alternative suggestion - since what we are really saving with option 2 is just string fields on the wire, what if we just introduced a string cache, rather than a type definition cache? The idea here is that type names and field names could be cached and represented as varints. This might be less complicated for a client developer than creating a whole ValueTypeDefinition. Here's how that could look:

    message Value {
      //typeName is required for the first occurrence of this type. The sender may optionally assign a typeID to
      //the typeName, and in future messages on the same connection only send the typeID.
      //A typeID of 0 indicates that the receiver does not need to cache the typeName.
      string typeName = 1;
      int32 typeID = 2;
      repeated ValueField fields = 3;
    }
    message ValueField {
      //fieldName is required for the first occurrence of this field. The sender may optionally assign a fieldID to
      //the fieldName, and in future messages on the same connection only send the fieldID.
      //A fieldID of 0 indicates that the receiver does not need to cache the fieldName.
      string fieldName = 1;
      int32 fieldID = 2;
      int32 intField = 3;
    }
    We could save even more bytes on the wire by caching regionNames in a similar way.

    Hmm, maybe introduce a CachedString message for this purpose, and use in anywhere in the protocol we send a string that we might want to cache at the protocol level?

    1. In your example, is the typeName just cached as a string? Is that cache/numbering shared with fieldNames? Does the typeName imply a set of fieldNames?

      The idea of caching strings might be an effective one. In this case, perhaps we could replace anywhere we write a string with a new CacheableString {string, message} type of thing? It adds some complexity but has the advantage that we don't have to retransmit strings we send a lot. OTOH, I don't think it will help save time on PdxType lookups. Is that time significant? I don't know.


      1. I was thinking the typeName would be just cached as a string. For simplicity I think a single string cache for all strings is probably fine. So yes typeName and fieldNames would go into the same cache and would have to have unique numbers.

        With this approach typeName does not imply a set of field names.

  4. I think it would be good to spell out what parts of PDX will be represented by this format, and how they will be represented in this format. For example, if we want to support all of the "compatible" pdx types from this list, we could have a table like this:


    PDX Field Type    Protobuf Representation
    Map               Value {
                        fields = //one ValueField for each map entry
                      }