JdbcStorageHandler supports reading from a JDBC data source in Hive. Writing to a JDBC data source is currently not supported. To use JdbcStorageHandler, you need to create an external table using JdbcStorageHandler. Here is a simple example:
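A minimal sketch, assuming a MySQL database named sample that contains a STUDENT table (connection details and credentials are illustrative):

```sql
CREATE EXTERNAL TABLE student_jdbc
(
  name string,
  age int,
  gpa double
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:mysql://localhost/sample",
  "hive.sql.dbcp.username" = "hive",
  "hive.sql.dbcp.password" = "hive",
  "hive.sql.table" = "STUDENT"
);
```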
You can also alter the table properties of the JDBC external table with an ALTER TABLE statement, just like any other non-native Hive table:
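For instance, a sketch updating a single property on the table defined above (the value is illustrative):

```sql
ALTER TABLE student_jdbc SET TBLPROPERTIES ("hive.sql.dbcp.password" = "passwd");
```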
In the create table statement, you are required to specify the following table properties:
hive.sql.database.type: MYSQL, POSTGRES, ORACLE, DERBY, DB2
hive.sql.jdbc.url: jdbc connection string
hive.sql.jdbc.driver: jdbc driver class
hive.sql.dbcp.username: jdbc user name
hive.sql.dbcp.password: jdbc password in clear text. Using this parameter is strongly discouraged; the recommended way is to store the password in a keystore. See the "securing password" section below for details
hive.sql.table / hive.sql.query: You need to specify either "hive.sql.table" or "hive.sql.query" to tell Hive how to get data from the jdbc database. "hive.sql.table" denotes a single table, while "hive.sql.query" denotes an arbitrary SQL query.
Besides the above required properties, you can also specify optional parameters to tune the connection details and performance:
hive.sql.catalog: jdbc catalog name (only valid if "hive.sql.table" is specified)
hive.sql.schema: jdbc schema name (only valid if "hive.sql.table" is specified)
hive.sql.jdbc.fetch.size: number of rows to fetch in a batch
hive.sql.dbcp.xxx: all dbcp parameters will be passed through to commons-dbcp. See https://commons.apache.org/proper/commons-dbcp/configuration.html for the definition of the parameters. For example, if you specify hive.sql.dbcp.maxActive=1 in the table properties, Hive will pass maxActive=1 to commons-dbcp (see the example after this list)
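For instance, a sketch of tuning these optional parameters on the table defined earlier (the values are illustrative):

```sql
ALTER TABLE student_jdbc SET TBLPROPERTIES (
  "hive.sql.jdbc.fetch.size" = "1000",
  "hive.sql.dbcp.maxActive" = "1"
);
```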
Supported Data Types
The column data type for a Hive JdbcStorageHandler table can be:
Numeric data types: tinyint (byte), smallint (short), int, bigint (long), float, double
Decimal with scale and precision
String data types: string, char, varchar
Note that complex data types (struct, map, array) are not supported.
hive.sql.table / hive.sql.query defines tabular data with a schema. That schema must line up with the Hive table schema, in particular in the number of columns. For example, the following create table statement will fail:
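A sketch of the failure case, again assuming a STUDENT table in the underlying database: the Hive schema declares three columns while the query produces only two, so the table cannot be created.

```sql
CREATE EXTERNAL TABLE student_jdbc
(
  name string,
  age int,
  gpa double
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:mysql://localhost/sample",
  "hive.sql.dbcp.username" = "hive",
  "hive.sql.dbcp.password" = "hive",
  "hive.sql.query" = "SELECT name, age FROM STUDENT"
);
```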
However, the column names and column types of the hive.sql.table / hive.sql.query schema may differ from the table schema. In this case, database columns map to Hive columns by position. If the data types differ, Hive will try to convert them according to the Hive table schema. For example:
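A sketch, assuming the underlying STUDENT table stores gpa as a double:

```sql
CREATE EXTERNAL TABLE student_jdbc
(
  sname string,
  age int,
  effective_gpa decimal(4,3)
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:mysql://localhost/sample",
  "hive.sql.dbcp.username" = "hive",
  "hive.sql.dbcp.password" = "hive",
  "hive.sql.table" = "STUDENT"
);
```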
Hive will try to convert the double "gpa" of the underlying table STUDENT to decimal(4,3) for the effective_gpa field of the student_jdbc table. If the conversion is not possible, Hive will produce null for the field.
JdbcStorageHandler will ship the required jars to the MR/Tez/LLAP backend automatically if JdbcStorageHandler is used in the query, so users don't need to add the jars manually. JdbcStorageHandler will also ship the required jdbc driver jar to the backend if it detects any jdbc driver jar in the classpath (including MySQL, PostgreSQL, Oracle, and MSSQL drivers). However, users are still required to copy the jdbc driver jar into the Hive classpath (usually the lib directory in Hive).
In most cases, we don't want to store the jdbc password in clear text in the table property "hive.sql.dbcp.password". Instead, users can store the password in a Java keystore file on HDFS using the following command:
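A sketch of the commands, with passwd1 and passwd2 as illustrative passwords (the -v flag supplies the value inline; omit it to be prompted interactively instead):

```sh
hadoop credential create host1.password -provider jceks://hdfs/user/foo/test.jceks -v passwd1
hadoop credential create host2.password -provider jceks://hdfs/user/foo/test.jceks -v passwd2
```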
This will create a keystore file located at hdfs://user/foo/test.jceks which contains two keys: host1.password and host2.password. When creating the table in Hive, you need to specify "hive.sql.dbcp.password.keystore" and "hive.sql.dbcp.password.key" instead of "hive.sql.dbcp.password" in the create table statement:
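A sketch reusing the earlier MySQL example:

```sql
CREATE EXTERNAL TABLE student_jdbc
(
  name string,
  age int,
  gpa double
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:mysql://localhost/sample",
  "hive.sql.dbcp.username" = "hive",
  "hive.sql.dbcp.password.keystore" = "jceks://hdfs/user/foo/test.jceks",
  "hive.sql.dbcp.password.key" = "host1.password",
  "hive.sql.table" = "STUDENT"
);
```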
You need to protect the keystore file by authorizing only the intended users to read it, using an authorizer (such as Ranger). Hive will check the permissions of the keystore file when creating or altering the table, to make sure the user has read permission on it.
Hive is able to split the jdbc data source and process each split in parallel. Users can use the following table properties to decide whether to split and how many splits to generate:
hive.sql.numPartitions: how many splits to generate for the data source; 1 means no split
hive.sql.partitionColumn: which column to split on. If this is specified, Hive will split the column into hive.sql.numPartitions equal intervals from hive.sql.lowerBound to hive.sql.upperBound. If partitionColumn is not defined but numPartitions > 1, Hive will split the data source by offset instead. However, offsets are not always reliable for some databases, so it is highly recommended to define a partitionColumn if you want to split the data source. The partitionColumn must exist in the schema that "hive.sql.table"/"hive.sql.query" produces.
hive.sql.lowerBound / hive.sql.upperBound: lower/upper bound of the partitionColumn used to calculate the intervals. Both properties are optional. If undefined, Hive will do a MIN/MAX query against the data source to get the lower/upper bound. Note that neither hive.sql.lowerBound nor hive.sql.upperBound can be null. The first and last splits are open ended, and all null values for the column go to the first split.
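For example, here is a sketch assuming an underlying MySQL table DEMO with an integer column num:

```sql
CREATE EXTERNAL TABLE demo_jdbc
(
  num int
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:mysql://localhost/sample",
  "hive.sql.dbcp.username" = "hive",
  "hive.sql.dbcp.password" = "hive",
  "hive.sql.table" = "DEMO",
  "hive.sql.partitionColumn" = "num",
  "hive.sql.numPartitions" = "3",
  "hive.sql.lowerBound" = "1",
  "hive.sql.upperBound" = "10"
);
```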
This table will create 3 splits: num<4 or num is null, 4<=num<7, num>=7
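For a second example, here is a sketch that splits on a derived column of a hive.sql.query without explicit bounds (the STUDENT table and its 5-point gpa scale are assumptions):

```sql
CREATE EXTERNAL TABLE student_pct_jdbc
(
  name string,
  age int,
  percentage double
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:mysql://localhost/sample",
  "hive.sql.dbcp.username" = "hive",
  "hive.sql.dbcp.password" = "hive",
  "hive.sql.query" = "SELECT name, age, gpa/5.0*100 AS percentage FROM STUDENT",
  "hive.sql.partitionColumn" = "percentage",
  "hive.sql.numPartitions" = "4"
);
```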
Hive will do a jdbc query to get the MIN/MAX of the percentage column of the query, which are 60 and 100 in this example. Hive will then create 4 splits: (,70), [70,80), [80,90), [90,). The first split also includes null values.
To see the splits generated by JdbcStorageHandler, look for the split generation messages in the hiveserver2 log or the Tez AM log.
Hive pushes computation down to the jdbc table aggressively, so it can make the best use of the native capabilities of the jdbc data source.
For example, if we have another table voter_jdbc:
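A sketch, assuming a VOTER table in the same MySQL database as before:

```sql
CREATE EXTERNAL TABLE voter_jdbc
(
  name string,
  age int,
  registration string,
  contribution decimal(10,2)
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:mysql://localhost/sample",
  "hive.sql.dbcp.username" = "hive",
  "hive.sql.dbcp.password" = "hive",
  "hive.sql.table" = "VOTER"
);
```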
Then the following join operation will be pushed down to MySQL:
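For instance, using the two tables defined above:

```sql
SELECT * FROM student_jdbc JOIN voter_jdbc ON student_jdbc.name = voter_jdbc.name;
```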
This can be verified with explain:
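A sketch of the check (the exact plan text varies by Hive version):

```sql
EXPLAIN
SELECT * FROM student_jdbc JOIN voter_jdbc ON student_jdbc.name = voter_jdbc.name;
```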
Computation pushdown only happens when the jdbc table is defined by "hive.sql.table". Hive rewrites the data source with a "hive.sql.query" property that puts the additional computation on top of the table. In the above example, MySQL will run the query and return the join result, rather than Hive fetching both tables and doing the join itself.
The operators that can be pushed down include filter, transform, join, union, aggregation, and sort.
The derived MySQL query can be very complex, and in many cases we don't want to split the data source and thus run the complex query multiple times, once per split. So if the computation is more than just filter and transform, Hive will not split the query result even if "hive.sql.numPartitions" is greater than 1.
Using a Non-default Schema
The notion of a schema differs from DBMS to DBMS, for example among Oracle, MSSQL, MySQL, and PostgreSQL. Correct usage of the hive.sql.schema table property can prevent problems with client connections to external JDBC tables. For more information, see HIVE-25591. To create external tables based on a user-defined schema in a JDBC-compliant database, follow the examples below for the respective databases.
In MSSQL, create a user and associate them with a default schema. For example:
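A sketch, assuming SQL Server with a schema named bob (the login name and password are illustrative):

```sql
CREATE LOGIN greg WITH PASSWORD = 'GregPass123!$';
CREATE USER greg FOR LOGIN greg WITH DEFAULT_SCHEMA = bob;
```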
Allow the user to connect to the database and run queries. For example:
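A sketch of the corresponding database-level grants:

```sql
GRANT CONNECT, SELECT TO greg;
```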
In Oracle, dividing tables into different namespaces/schemas is achieved through different users. The CREATE SCHEMA statement exists in Oracle, but it has different semantics from those defined by the SQL standard and those adopted in other DBMSs.
To create "local" users in Oracle you need to be connected to the Pluggable Database (PDB), not to the Container Database (CDB). The following example was tested in Oracle XE edition, using only PDB XEPDB1.
Create the bob schema/user and grant it the privileges needed to connect to the database. For example:
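A sketch (the password is illustrative; run this while connected to the PDB):

```sql
CREATE USER bob IDENTIFIED BY bobpass;
GRANT CONNECT, CREATE SESSION, CREATE TABLE TO bob;
ALTER USER bob QUOTA UNLIMITED ON users;  -- assumes the default USERS tablespace
```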
Create the alice schema/user and grant it the privileges needed to connect to the database. For example:
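The same sketch for alice:

```sql
CREATE USER alice IDENTIFIED BY alicepass;
GRANT CONNECT, CREATE SESSION, CREATE TABLE TO alice;
ALTER USER alice QUOTA UNLIMITED ON users;  -- assumes the default USERS tablespace
```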
Without the SELECT ANY TABLE privilege, a user cannot see the tables/views of another user: when connected as a specific user/schema, it is not possible to refer to tables in another user's schema/namespace. You need to grant the SELECT ANY TABLE privilege. For example:
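A sketch for the two users created above:

```sql
GRANT SELECT ANY TABLE TO bob;
GRANT SELECT ANY TABLE TO alice;
```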
Allow the users to perform inserts on any table/view in the database, not only those present in their own schema. For example:
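A sketch of the corresponding grants:

```sql
GRANT INSERT ANY TABLE TO bob;
GRANT INSERT ANY TABLE TO alice;
```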
In PostgreSQL, create a user and associate them with a default schema (the schema corresponds to the search_path). For example:
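A sketch (the role name, password, and schema are illustrative):

```sql
CREATE ROLE greg WITH LOGIN PASSWORD 'GregPass123';
CREATE SCHEMA bob;
ALTER ROLE greg SET search_path TO bob;
```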
Grant the necessary permissions to access the schema. For example:
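A sketch of the grants for the role and schema above:

```sql
GRANT USAGE ON SCHEMA bob TO greg;
GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA bob TO greg;
```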