The jdbc component enables you to access databases through JDBC, where SQL queries (SELECT) and operations (INSERT, UPDATE, etc.) are sent in the message body. This component uses the standard JDBC API, unlike the SQL component, which uses spring-jdbc.
Maven users will need to add the following dependency to their
pom.xml for this component:
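    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-jdbc</artifactId>
        <!-- use the same version as your Camel core version -->
        <version>x.x.x</version>
    </dependency>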
This component can only be used to define producer endpoints, which means that you cannot use the JDBC component in a from() statement.
You can append query options to the URI in the following format: ?option=value&option=value&... (see the example endpoint URI after the table below).
| Name | Default Value | Description |
| readSize | 0 | The default maximum number of rows that can be read by a polling query. The default value is 0. |
| statement.<xxx> | null | Camel 2.1: Sets additional options on the java.sql.Statement that is used behind the scenes to execute the queries. For instance, statement.maxRows=10. For detailed documentation, see the java.sql.Statement javadoc. |
| useJDBC4ColumnNameAndLabelSemantics | true | Camel 2.2: Sets whether to use JDBC 4/3 column label/name semantics. You can use this option to turn it false in case you have issues with your JDBC driver when selecting data with column aliases. |
| resetAutoCommit | true | Camel 2.9: If true, Camel will set the autoCommit on the JDBC connection to be false, commit the change after executing the statement and reset the autoCommit flag of the connection at the end. If the JDBC connection does not support resetting the autoCommit flag, set this option to false. |
| allowNamedParameters | true | Camel 2.12: Whether to allow using named parameters in the queries. |
| prepareStatementStrategy | | Camel 2.12: Allows to plugin to use a custom org.apache.camel.component.jdbc.JdbcPrepareStatementStrategy to control preparation of the query and prepared statement. |
| useHeadersAsParameters | false | Camel 2.12: Set this option to true to use named parameters in the query, with the values taken from message headers. |
| outputType | SelectList | Camel 2.12.1: outputType='SelectList', for consumer or producer, will output a List of Map. outputType='SelectOne' will output a single Java object: the column value if the query has one column, a Map of the row if it has several, or a bean if outputClass is set. |
| outputClass | null | Camel 2.12.1: Specify the full package and class name to use as conversion when outputType=SelectOne. From Camel 2.14 onwards SelectList is also supported. |
| beanRowMapper | | Camel 2.12.1: To use a custom org.apache.camel.component.jdbc.BeanRowMapper when using outputClass. The default implementation will lowercase the row names and skip underscores and dashes. |
| useGetBytesForBlob | false | Camel 2.16: To read BLOB columns as bytes instead of string data. This may be needed for certain databases such as Oracle where you must read BLOB columns as bytes. |
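For example, a hypothetical endpoint URI that caps the rows a statement may return and enables named parameters (myDataSource is an assumed registry name):

    jdbc:myDataSource?statement.maxRows=10&useHeadersAsParameters=true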
By default the result is returned in the OUT body as an ArrayList<HashMap<String, Object>>. The List object contains the list of rows and the Map objects contain each row with the String key as the column name. You can use the option outputType to control the result.
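As a minimal sketch of consuming that default result type (the exchange variable is illustrative):

    // the OUT body is a List of rows; each row is a Map keyed by column name
    List<Map<String, Object>> rows = exchange.getOut().getBody(List.class);
    for (Map<String, Object> row : rows) {
        Object id = row.get("ID");
    }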
Note: This component fetches ResultSetMetaData to be able to return the column name as the key in the Map.
| Header | Description |
| CamelJdbcRowCount | If the query is a SELECT, the row count is returned in this OUT header. |
| CamelJdbcUpdateCount | If the query is an UPDATE, the update count is returned in this OUT header. |
| CamelGeneratedKeysRows | Camel 2.10: Rows that contain the generated keys. |
| CamelGeneratedKeysRowCount | Camel 2.10: The number of rows in the header that contains generated keys. |
| CamelJdbcColumnNames | Camel 2.11.1: The column names from the ResultSet as a java.util.Set. |
| CamelJdbcParameters | Camel 2.12: A java.util.Map with the headers to be used if useHeadersAsParameters has been enabled. |
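For instance, after sending a SELECT you could read the row count from the returned exchange (a sketch; out is the returned Exchange):

    // CamelJdbcRowCount is set on the OUT message for SELECT queries
    Integer rowCount = out.getOut().getHeader("CamelJdbcRowCount", Integer.class);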
Generated keys
Available as of Camel 2.10
If you insert data using SQL INSERT, the RDBMS may support auto-generated keys. You can instruct the JDBC producer to return the generated keys in headers. To do that, set the header CamelRetrieveGeneratedKeys=true. Then the generated keys will be provided as headers with the keys listed in the table above, as sketched below.
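A minimal sketch, assuming a DataSource registered as myDataSource and an illustrative projects table:

    from("direct:insert")
        // ask the JDBC producer to retrieve the auto generated keys
        .setHeader("CamelRetrieveGeneratedKeys", constant(true))
        .setBody(constant("insert into projects (project, license) values ('Camel', 'ASF')"))
        .to("jdbc:myDataSource")
        // the keys are now available in the CamelGeneratedKeysRows header
        .log("Generated keys: ${header.CamelGeneratedKeysRows}");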
You can see more details in this unit test.
Using generated keys does not work together with named parameters.
Using named parameters
Available as of Camel 2.12
In the given route below, we want to get all the projects from the projects table. Notice the SQL query has 2 named parameters, :?lic and :?min.
Camel will then look up these parameters from the message headers. Notice in the example below we set two headers with constant values for the named parameters:
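A minimal sketch of such a route, assuming a DataSource registered as myDataSource:

    from("direct:projects")
        // the header values supply the :?lic and :?min named parameters
        .setHeader("lic", constant("ASF"))
        .setHeader("min", constant(123))
        .setBody(constant("select * from projects where license = :?lic and id > :?min order by id"))
        .to("jdbc:myDataSource?useHeadersAsParameters=true");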
You can also store the header values in a java.util.Map and store the map on the headers with the key CamelJdbcParameters.
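A sketch of the same query with a parameter map (template is an assumed ProducerTemplate):

    Map<String, Object> params = new HashMap<String, Object>();
    params.put("lic", "ASF");
    params.put("min", 123);
    // pass all named parameter values in the single CamelJdbcParameters header
    template.sendBodyAndHeader("jdbc:myDataSource?useHeadersAsParameters=true",
            "select * from projects where license = :?lic and id > :?min order by id",
            "CamelJdbcParameters", params);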
In the following example, we fetch the rows from the customer table.
First we register our datasource in the Camel registry as testdb; this is the datasource the endpoint looks up in the next step. Alternatively, you can create a DataSource in Spring.
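A minimal sketch of the registry binding, assuming an already created javax.sql.DataSource in the dataSource variable:

    // bind the datasource into a simple registry under the name "testdb"
    SimpleRegistry registry = new SimpleRegistry();
    registry.put("testdb", dataSource);
    CamelContext context = new DefaultCamelContext(registry);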
We create an endpoint, add the SQL query to the body of the IN message, and then send the exchange. The result of the query is returned in the OUT body, as sketched below.
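A sketch of this step (template is a ProducerTemplate created from the CamelContext):

    // the SQL query goes in the IN body; the rows come back in the OUT body
    Endpoint endpoint = context.getEndpoint("jdbc:testdb");
    Exchange exchange = endpoint.createExchange();
    exchange.getIn().setBody("select * from customer order by ID");
    Exchange out = template.send(endpoint, exchange);
    List<Map<String, Object>> data = out.getOut().getBody(List.class);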
If you want to work on the rows one by one instead of the entire ResultSet at once, you need to use the Splitter EIP (the DSL differs slightly between Camel 2.13.x or older and Camel 2.14.x or newer), such as:
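A sketch of such a splitter route; streaming() is optional but keeps memory usage down on large result sets:

    from("direct:hello")
        // the query returns a List of rows; the splitter hands them on one Map at a time
        .to("jdbc:testdb")
        .split(body()).streaming()
        .to("mock:result");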
Sample - Polling the database every minute
If we want to poll a database using the JDBC component, we need to combine it with a polling scheduler such as the Timer or Quartz component. In the following example, we retrieve data from the database every 60 seconds:
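A sketch of such a timer-driven route; the activemq endpoint is only an example destination:

    from("timer://foo?period=60000")
        .setBody(constant("select * from customer"))
        .to("jdbc:testdb")
        .to("activemq:queue:customers");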
Sample - Move Data Between Data Sources
A common use case is to query for data, process it and move it to another data source (ETL operations). In the following example, we retrieve new customer records from the source table every hour, filter/transform them and move them to a destination table:
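A sketch, assuming a testdb datasource, an Oracle-style sysdate expression, and an illustrative MyCustomerProcessor for the filter/transform step:

    from("timer://MoveNewCustomersEveryHour?period=3600000")
        .setBody(constant("select * from customer where create_time > (sysdate-1/24)"))
        .to("jdbc:testdb")
        .split(body())
            // filter and transform each row as required
            .process(new MyCustomerProcessor())
            .setBody(simple("insert into processed_customer values ('${body[ID]}', '${body[NAME]}')"))
            .to("jdbc:testdb");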