This component does not implement any kind of persistence or recovery; if the VM terminates while messages are still queued, they are lost. If you need persistence, reliability, or distributed SEDA, consider using JMS or ActiveMQ instead.
The Direct component provides synchronous invocation of any consumers when a producer sends a message exchange.
someName can be any string that uniquely identifies the endpoint within the current CamelContext.
You can append query options to the URI in the following format:
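For instance, a hypothetical endpoint combining a few of the options documented on this page might look like this (the queue name stageName and the specific values are assumptions for illustration):

```
seda:stageName?size=1000&concurrentConsumers=5&timeout=20000
```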
The maximum capacity of the seda queue, i.e., the number of messages it can hold.
The default value in Camel 2.2 or older is 1000.
From Camel 2.3: the size is unbounded by default.
Note: Care should be taken when using this option. The size is determined by the value specified when the first endpoint is created. Each endpoint must therefore specify the same size.
From Camel 2.11: validation takes place to detect mixed queue sizes for the same queue name; if detected, Camel fails to create the endpoint.
Number of concurrent threads processing exchanges.
Option to specify whether the caller should wait for the asynchronous task to complete before continuing.
The following options are supported:
The first two values are self-explanatory.
The last value, IfReplyExpected, will only wait if the message is Request Reply based.
See Async messaging for more details.
Timeout (in milliseconds) before a seda producer will stop waiting for an asynchronous task to complete.
From Camel 2.2: you can now disable the timeout by setting this option to 0 or a negative value.
Specifies whether multiple consumers are allowed. If enabled, you can use SEDA for Publish-Subscribe messaging. That is, you can send a message to the seda queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint.
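As a sketch (the queue and mock endpoint names are assumptions; note that the option appears on every consumer endpoint):

```java
from("seda:foo?multipleConsumers=true").to("mock:a");

from("seda:foo?multipleConsumers=true").to("mock:b");

// a message sent to seda:foo is now delivered to both mock:a and mock:b
```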
Whether to limit the number of concurrentConsumers to a maximum of 500.
By default, an exception will be thrown if a seda endpoint is configured with a greater number. You can disable that check by turning this option off.
Whether a thread that sends messages to a full seda queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted.
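This blocking behavior mirrors standard java.util.concurrent.BlockingQueue semantics, which is easy to demonstrate with plain JDK classes. A minimal sketch (not Camel code; the class and element names are invented for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockWhenFullDemo {
    public static void main(String[] args) throws InterruptedException {
        // A bounded queue with capacity 1, analogous to a seda queue with size=1
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        queue.put("first");

        // Default seda behavior: reject immediately when full. Here offer()
        // returns false, which is where the "queue is full" exception arises.
        boolean accepted = queue.offer("second");
        System.out.println("accepted=" + accepted);

        // blockWhenFull-style behavior: the producer waits for capacity.
        // A consumer thread frees a slot after a short delay.
        new Thread(() -> {
            try {
                Thread.sleep(100);
                queue.take();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        queue.put("second"); // blocks until the take() above frees a slot
        System.out.println("size=" + queue.size());
    }
}
```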
Component only: the maximum size (the number of messages it can hold) of the seda queue.
This option is used when no size is specified on the endpoint.
Consumer only: the timeout used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.
Whether to purge the task queue when stopping the consumer/route. This allows the route to stop faster, as any pending messages in the queue are discarded.
Defines the queue instance that will be used by the seda endpoint.
Whether the producer should fail by throwing an exception when sending to a seda queue with no active consumers.
Only one of the options failIfNoConsumers and discardIfNoConsumers can be enabled at the same time.
Whether the producer should discard the message (do not add the message to the queue) when sending to a seda queue with no active consumers.
Only one of the options failIfNoConsumers and discardIfNoConsumers can be enabled at the same time.
Choosing BlockingQueue implementation
Available as of Camel 2.12
By default, the seda component instantiates a
LinkedBlockingQueue. However, a different implementation can be chosen by specifying a custom
BlockingQueue implementation. When a custom implementation is configured the
size option is ignored.
The list of available
BlockingQueueFactory implementations includes:
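The difference a custom implementation can make is easy to see with plain java.util.concurrent queues. A minimal sketch (not Camel code) contrasting the default FIFO ordering with the reordering a priority queue performs:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;

public class QueueOrderDemo {
    public static void main(String[] args) throws InterruptedException {
        // FIFO queue, the kind the seda component uses by default
        BlockingQueue<Integer> fifo = new LinkedBlockingQueue<>();
        // An alternative implementation that reorders elements by priority
        BlockingQueue<Integer> priority = new PriorityBlockingQueue<>();
        for (int n : new int[]{3, 1, 2}) {
            fifo.put(n);
            priority.put(n);
        }
        // FIFO preserves insertion order; the priority queue sorts ascending
        System.out.println("fifo: " + fifo.take() + fifo.take() + fifo.take());
        System.out.println("priority: " + priority.take() + priority.take() + priority.take());
    }
}
```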
Use of Request Reply
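A sketch of the kind of route discussed below (the Mina TCP endpoint options and the bean names are assumptions):

```java
from("mina:tcp://0.0.0.0:9876?textline=true&sync=true")
    .to("seda:input");

from("seda:input")
    .to("bean:processInput")
    .to("bean:createResponse");
```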
In the route above, we have a TCP listener on port
9876 that accepts incoming requests. The request is routed to the
seda:input queue. As it is a Request Reply message, we wait for the response. When the consumer on the
seda:input queue is complete, it copies the response to the original message response.
until 2.2: Works only with 2 endpoints
Using Request Reply over SEDA or VM only works with 2 endpoints. You cannot chain endpoints by sending to
A -> B -> C etc. Only between
A -> B. The reason is that the implementation logic is fairly simple; supporting three or more endpoints would make the logic much more complex, in order to handle ordering and notification between the waiting threads properly.
This has been improved in Camel 2.3, which allows you to chain as many endpoints as you like.
By default, the SEDA endpoint uses a single consumer thread, but you can configure it to use concurrent consumer threads. So, instead of thread pools you can use:
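For example, a sketch using the concurrentConsumers option (the queue and mock endpoint names are assumptions):

```java
from("seda:stageName?concurrentConsumers=5").to("mock:result");
```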
As for the difference between the two, note a thread pool can increase/shrink dynamically at runtime depending on load, whereas the number of concurrent consumers is always fixed.
Be aware that adding a thread pool to a seda endpoint by doing something like:
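(a hypothetical sketch; the queue and mock endpoint names are assumptions)

```java
// threads() adds a thread pool that has its own internal work queue
from("seda:stageName")
    .threads(5)
    .to("mock:result");
```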
Can wind up with two
BlockingQueues: one from the seda endpoint, and one from the work queue of the thread pool, which may not be what you want. Instead, you might wish to configure a Direct endpoint with a thread pool, which can process messages both synchronously and asynchronously. For example:
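(again a sketch; the endpoint names are assumptions)

```java
// a Direct endpoint has no queue of its own, so only the
// thread pool's work queue is involved
from("direct:stageName")
    .threads(5)
    .to("mock:result");
```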
You can also directly configure the number of threads that process messages on a seda endpoint using the concurrentConsumers option.
In the route below we use the SEDA queue to send the request to this asynchronous queue, so that we can send a fire-and-forget message for further processing in another thread, and return a constant reply in this thread to the original caller. Here we send a Hello World message and expect the reply to be OK. The
Hello World message will be consumed from the seda queue in another thread for further processing. Since this is from a unit test, it will be sent to a
mock endpoint where we can do assertions in the unit test.
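A sketch of such a route (the endpoint names are assumptions; waitForTaskToComplete=Never makes the fire-and-forget hand-off explicit):

```java
from("direct:start")
    // hand the message over to another thread, fire-and-forget
    .to("seda:next?waitForTaskToComplete=Never")
    // reply OK to the original caller from this thread
    .transform(constant("OK"));

from("seda:next")
    .to("mock:result");
```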
Available as of Camel 2.2
In this example we have defined two consumers and registered them as spring beans. Since we have specified
multipleConsumers=true on the seda
foo endpoint, those two consumers each receive their own copy of the message, as a kind of pub-sub style messaging.
As the beans are part of a unit test they simply send the message to a
mock endpoint. Note the use of
@Consume to consume from the seda queue.
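A sketch of such a consumer bean (the class, queue, and mock endpoint names are assumptions):

```java
public class FooEventConsumer {

    @EndpointInject(uri = "mock:result")
    private ProducerTemplate destination;

    @Consume(uri = "seda:foo?multipleConsumers=true")
    public void doSomething(String body) {
        destination.sendBody("foo" + body);
    }
}
```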
Extracting Queue Information
If needed, information such as queue size, etc. can be obtained without using JMX in this fashion:
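A sketch of this approach (the endpoint URI is an assumption; SedaEndpoint exposes the pending exchanges):

```java
SedaEndpoint seda = context.getEndpoint("seda:xxxx", SedaEndpoint.class);
int size = seda.getExchanges().size();
```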