Distributed Entity Cache Clear (DCC) Mechanism

Why and when to use it?

The distributed cache clearing (DCC) mechanism is needed when you have multiple OFBiz servers in a cluster sharing a single database. When a create, update, or delete operation goes through the entity engine on one server, that server clears its own caches. But it can also send out a message to the other servers in the pool telling them to clear their caches.

This feature runs through the service engine, which operates on top of the entity engine. When you do a distributed cache clear, it results in service calls to the other OFBiz servers in the cluster. In most cases you will use the Java Message Service (JMS): a message is sent to a JMS server, which then distributes it to the other servers in the cluster.

...

Info
RMI deactivated since OFBIZ-6942

Because of the infamous Java serialization vulnerability, the RMI container has been disabled in the default configuration of OFBiz, and hence JNDI as well (JNDI relies on RMI in OFBiz). So if you want to use the DCC mechanism, you will first need to uncomment the relevant configuration, as explained in

OFBIZ-6942 (ASF JIRA)
(Note: I'm not quite sure this is required, because JNDI relies on the RMI registry service provider, but I don't think the RMI loader is required for the DCC, and OFBIZ-6942 is only about disabling the RMI loader. To be checked and updated...)

The Entity Engine

Info

entityengine.xml

This is the easiest part: for a given delegator you only have to set its distributed-cache-clear-enabled attribute to "true" (it is "false" by default). As an example:

Code Block
    <delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="true">
        <group-map group-name="org.ofbiz" datasource-name="localderby"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/>
    </delegator>


The Service Engine

The JMS definition is located in the framework/service/config/serviceengine.xml file. By default you set a jms-service named "serviceMessenger". There you define a JMS server with its name, a JNDI name, and a topic name. To make as few changes as possible, we use "default" for the server name and set the values in the jndi.properties file. I could also have set a server name in jndiservers.xml, but my catch phrase is "the fewer changes the better". This is the serviceengine.xml setting, the same on each server:
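Since the snippet itself is elided here, below is a minimal sketch of such a jms-service definition, based on the stock OFBiz configuration; verify the topic name "OFBizTopic" and the "default" JNDI server name against your own serviceengine.xml:

Code Block
    <!-- Topic-based jms-service used to broadcast cache clear messages to the cluster -->
    <jms-service name="serviceMessenger" send-mode="all">
        <server jndi-server-name="default"
                jndi-name="topicConnectionFactory"
                topic-queue="OFBizTopic"
                type="topic"
                listen="true"/>
    </jms-service>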

...

At this stage you need to install an ActiveMQ server somewhere. Initially, I decided to install the latest available release of ActiveMQ, 5.5.0, but it turned out that there are some known issues in this release, so I finally took the 5.4.2 release. To test, I installed it on my XP development machine and on the cluster. It could also be embedded in OFBiz, but I decided to simply run it as an external broker. I don't think it's interesting to have it embedded in OFBiz: you just install it, run it, and forget about it (it sets /etc/ for you). For testing I used the ActiveMQ recommended default settings. For production you will want to run it as a Unix daemon (or Windows service).
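Since OFBiz looks the broker up through JNDI, here is a sketch of the corresponding jndi.properties entries; the property names and the ActiveMQInitialContextFactory class are standard ActiveMQ JNDI usage, while the broker URL is an assumption to adapt to your environment:

Code Block
    # Use ActiveMQ's JNDI context factory (no RMI involved)
    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
    # URL of the external ActiveMQ broker; adjust host and port to your setup
    java.naming.provider.url=tcp://localhost:61616
    # Register the connection factory under the name used in serviceengine.xml
    connectionFactoryNames=topicConnectionFactory
    # Map the JNDI topic name to the physical ActiveMQ topic
    topic.OFBizTopic=OFBizTopic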

...

  1. set advisorySupport="false" for the broker (except if you want to use advisory messages)
  2. use transport.soTimeout=60000 and set enableStatusMonitor="true" for the OpenWire connector

    Code Block
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.base}/data" destroyApplicationContextOnStop="true" advisorySupport="false">
    ...
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?transport.soTimeout=60000" enableStatusMonitor="true"/>
    

...