
Scaling Queues

Scaling to tens of thousands of Queues in a single broker is relatively straightforward, but it requires some configuration changes from the defaults.

Reducing Threads

With the default configuration, ActiveMQ uses a dispatch thread per Queue. You can avoid this by setting the optimizedDispatch property on the destination policy entry - see Configuring Queues.
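
For example (this snippet is illustrative - the ">" wildcard matches every queue and the surrounding broker element is abbreviated):

<broker xmlns="http://activemq.apache.org/schema/core">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- optimizedDispatch="true" avoids a dedicated dispatch thread per queue -->
        <policyEntry queue=">" optimizedDispatch="true"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>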

ActiveMQ can optionally use an internal thread pool to control the dispatching of messages, but because most operating systems handle large numbers of threads well, this is off by default. To enable it, either set ACTIVEMQ_OPTS to disable dedicated task runners in the start-up script, INSTALL_DIR/bin/activemq, e.g.

ACTIVEMQ_OPTS="-Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=false"  

or you can set ACTIVEMQ_OPTS in /etc/activemq.conf.

Note: from ActiveMQ 5.6 onwards the dedicated task runner is disabled by default (see AMQ-3667).

To reduce the number of threads used by the transport, take a look at using the NIO transport - see Configuring Transports.

An example of this is provided in one of the sample broker configuration files shipped with the distribution; a rough sketch is shown below.
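
As a sketch, an NIO transport connector in activemq.xml might look like this (the bind address and port are illustrative):

<broker xmlns="http://activemq.apache.org/schema/core">
  <transportConnectors>
    <!-- the nio transport services many connections from a small pool of selector threads,
         rather than dedicating a thread per connection -->
    <transportConnector name="nio" uri="nio://0.0.0.0:61616"/>
  </transportConnectors>
</broker>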

Reducing Memory Consumption

Reduce the memory used per thread - see Reducing Memory Consumption.

Reducing the Number of File Descriptors

ActiveMQ uses the amqPersistenceAdapter by default for persistent messages. Unfortunately, this persistence adapter (as well as the kahaPersistenceAdapter) opens a file descriptor for each queue. When creating large numbers of queues, you'll quickly run into the limit for your OS.

You can either:

  • choose another persistence option,
  • try out the new KahaDB, available in version 5.3 and higher (a minimal sketch follows this list), or
  • increase the limit on file descriptors per process - consult the documentation for the operating system you are using (e.g. ulimit -n on Linux).
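
As a minimal sketch, KahaDB is enabled with a persistenceAdapter element in the broker configuration (the directory shown assumes the default data directory property):

<broker xmlns="http://activemq.apache.org/schema/core">
  <persistenceAdapter>
    <!-- KahaDB uses a shared journal and index rather than a file per queue,
         so the number of open file descriptors stays small -->
    <kahaDB directory="${activemq.data}/kahadb"/>
  </persistenceAdapter>
</broker>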
