Distributed Entity Cache Clear (DCC) Mechanism

Why and when to use it?

The DCC mechanism is needed when you have multiple OFBiz servers in a cluster sharing a single database. When a create, update, or delete operation goes through the entity engine of one server, that server clears its own caches. It can also send out a message to the other servers in the pool so that they clear their caches too.

This feature runs through the service engine, which operates on top of the entity engine. A distributed cache clear results in service calls to the other OFBiz servers in the cluster. In most cases you will use the Java Message Service (JMS) to send a message to a JMS server, which then distributes it to the other servers in the cluster.

How to set it up?

To keep it simple we will only set the mandatory values. There are other options which are covered by defaults.

RMI deactivated since OFBIZ-6942

Because of the infamous Java serialization vulnerability, the RMI container has been disabled in the default configuration of OFBiz, and hence so has JNDI (which relies on RMI in OFBiz). So if you want to use the DCC mechanism you will first need to uncomment the relevant configuration, as explained at OFBIZ-6942. (Note: I'm not quite sure this is required, because JNDI relies on the RMI registry service provider but I don't think the RMI loader is required for DCC, and OFBIZ-6942 is only about disabling the RMI loader. To be checked and updated...)

The Entity Engine


This is the easiest part: for a given delegator you only have to set its distributed-cache-clear-enabled attribute to "true" (false by default). As an example:

    <delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="true">
        <group-map group-name="org.ofbiz" datasource-name="localderby"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/>
    </delegator>

The Service Engine

The JMS definition lives in the framework/service/config/serviceengine.xml file. By default you set a jms-service named "serviceMessenger"; there you define a JMS server with its name, a JNDI name, and a topic name. To make as few changes as possible we use "default" for the server name and set the values in this file. I could also have set a server name in jndiservers.xml, but my catch phrase is "the fewer changes the better". This is the serviceengine.xml setting, the same on each server:

<!-- JMS Service ActiveMQ Topic Configuration (set as default, the fewer changes the better) -->
<jms-service name="serviceMessenger" send-mode="all">
    <server jndi-server-name="default"
            jndi-name="topic/services"
            topic-queue="OFBizTopic"
            type="topic"
            listen="true"/>
</jms-service>

I decided to use Apache ActiveMQ as the JMS server and to simply set these properties in the jndi.properties files (commenting out the OOTB defaults):

connectionFactoryNames=connectionFactory, queueConnectionFactory, topicConnectionFactory
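
For reference, a complete jndi.properties for ActiveMQ might then look like this; the broker host and port are assumptions (61616 is ActiveMQ's default OpenWire port):

```properties
# jndi.properties for ActiveMQ (replacing the OOTB defaults)
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=tcp://localhost:61616
connectionFactoryNames=connectionFactory, queueConnectionFactory, topicConnectionFactory
```

With ActiveMQInitialContextFactory, destinations that are not declared explicitly can be looked up through JNDI under the dynamicTopics/ and dynamicQueues/ prefixes.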

ActiveMQ also provides a point-to-point model with queues. We rely on topics instead, because they use a publish/subscribe model and we need to broadcast messages to all servers in the cluster.

At this stage you need to install an ActiveMQ server somewhere. Initially I decided to install the latest available release of ActiveMQ, 5.5.0, but it turned out that there are some known issues in this release, so I finally took the 5.4.2 release. To test, I installed it on my XP development machine and on the cluster. ActiveMQ can also be embedded in OFBiz, but I decided to simply run it as an external broker; I don't think it's interesting to have it embedded in OFBiz: you just install it, run it and forget about it. For testing I used the ActiveMQ recommended default settings. For production you will want to run it as a Unix daemon (or Windows service).

You also need to put the corresponding activemq-all-x.x.x.jar in the framework/base/lib OFBiz directory. Then the Distributed Cache Clearing Mechanism should be ready to use.

You can then monitor ActiveMQ using the Web Console by pointing your browser at http://localhost:8161/admin/ and opening the Topics page.

Single point of failure

The setting above is sufficient in a staging environment, but it is a single point of failure in a production environment. So we need to create a cluster of ActiveMQ brokers. Since they should not consume many resources (at most 256 MB of memory and few CPU cycles), we can put each instance on the same machines as the OFBiz instances.

There is a simple way to load balance ActiveMQ queues in an ActiveMQ cluster. But, as explained above, though we use async services for clearing distributed caches, it does not make sense for us to use queues, since we need to broadcast messages. There is also the so-called virtual destinations solution, but it's a bit complicated and it seems to still use a queue underneath. After some research, I finally decided to go with the Failover Transport solution.

It's fairly simple to set up through JNDI: we only need to replace the broker URL in the jndi.properties files.
You may add any number of AMQ instances in the failover/tcp chain. The soTimeout=60000 parameter prevents keeping too many useless connections open.
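
Concretely, the replacement provider URL might look like this (hostnames are illustrative; randomize=false keeps the brokers tried in order rather than at random):

```properties
# jndi.properties: a failover chain instead of a single broker URL
java.naming.provider.url=failover:(tcp://ofbiz1:61616,tcp://ofbiz2:61616,tcp://ofbiz3:61616)?randomize=false&soTimeout=60000
```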

I tried to add &backup=true&trackMessages=true at the end of the failover chain (i.e. after ?randomize=false), but the connections created are held and it seems there is no way to close them. It's weird, so I asked on the ActiveMQ user mailing list; no answers yet...
See Transport Options for details on these 2 parameters. There is also a link at the bottom of this page if you ever need a smoother, more dynamic failover setup, but it would need more work in OFBiz...
Then you should not get issues with held connections or too many open connections.

On the broker side (in activemq.xml) you need to

  1. set advisorySupport="false" for the broker (except if you want to use advisory messages)
  2. use transport.soTimeout=60000 and set enableStatusMonitor="true" for the OpenWire connector

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.base}/data" destroyApplicationContextOnStop="true" advisorySupport="false">
        ...
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?transport.soTimeout=60000" enableStatusMonitor="true"/>
        ...
    </broker>


  1. You can also have your DCC mechanism set up against an Apache ServiceMix implementation on a different server in your network. Ensure it is working and have your OFBiz spokes point to it.

    We use the following setting in serviceengine.xml:

    <jms-service name="serviceMessenger" send-mode="all">
                <server jndi-server-name="dcc"


    And in jndiservers.xml we have:

    <jndi-server name="dcc"

    Of course <servicemixserver> points to either the name of the server where you have Apache ServiceMix implemented, or its IP address.

  2. If using PostgreSQL as a database, then it might make sense to have an implementation that uses LISTEN and NOTIFY:

    In case the DB fails, well, both apps will notice and you have one less component to worry about (no JMS server).

    1. Distributed cache clear is only used for cache maintenance in OFBiz when you have > 1 OFBiz instances using a single database instance. It is not doing anything towards the database itself.

      JMS is used to do the notifications between the OFBiz instances.

      The mechanism is implemented in a way that it is vendor independent / database agnostic. LISTEN/NOTIFY is not SQL standard so this would be a database specific implementation.

      1. Thanks Michael Brohl.

        I think both solutions have merit.

        > JMS is used to do the notifications between the OFBiz instances. 

        Yes, LISTEN/NOTIFY can be used for the same purpose when you don't need the complexity of JMS and don't want to add a new service to maintain.

        JMS is also very Java-centric and I would like to avoid it if possible.

        Other messaging technologies that implement a PROTOCOL, not a Java API (like JMS does), are preferable IMO.

        On that front, LISTEN/NOTIFY can be used by any app that can connect to PostgreSQL, and I would argue it is more portable in that regard than JMS, which is Java-centric.

        Of course, if only OFBiz instances are talking over JMS then it's not a big portability issue.

        My take on this is:

        • I'm very happy this kind of functionality exists OOTB in OFBiz
        • There are reasons why someone (me?) would like to use different implementations

        Personally I would avoid deploying JMS if possible. 

        I would use RabbitMQ or another messaging technology if I have to.

        I would use LISTEN/NOTIFY to avoid deploying another service in production that I have to maintain / update / monitor / security check.