This page describes the different clients supported by HiveServer2.
...
- <http_endpoint> is the corresponding HTTP endpoint configured in hive-site.xml. Default value is cliservice.
- The default port for HTTP transport mode is 10001.
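Taken together, an HTTP-mode connection string generally has this shape (the host and database are placeholders, 10001 is the default port, and cliservice is the default endpoint noted above):

```
jdbc:hive2://<host>:10001/<db>?transportMode=http;httpPath=cliservice
```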
Connection URL When SSL Is Enabled in HiveServer2
...
jdbc:hive2://<host>:<port>/<db>;ssl=true;sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>, where:
- <trust_store_path> is the path where the client's truststore file lives.
- <trust_store_password> is the password to access the truststore.
In HTTP mode: jdbc:hive2://<host>:<port>/<db>;ssl=true;sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>?hive.server2.transport.mode=http;hive.server2.thrift.http.path=<http_endpoint>.
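As a sketch, the SSL parameters above can be assembled programmatically. The helper below is illustrative only (it is not part of the Hive JDBC API), and the host, path, and password values are placeholders:

```java
public class SslUrlExample {
    // Illustrative helper: builds the binary-transport SSL URL described above from its parts.
    static String sslUrl(String host, int port, String db,
                         String trustStorePath, String trustStorePassword) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db
                + ";ssl=true;sslTrustStore=" + trustStorePath
                + ";trustStorePassword=" + trustStorePassword;
    }

    public static void main(String[] args) {
        // Placeholder values; a real client would pass this URL to DriverManager.getConnection().
        System.out.println(sslUrl("hs2.example.com", 10000, "default",
                "/home/me/truststore.jks", "changeit"));
    }
}
```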
Using JDBC
You can use JDBC to access data stored in a relational database or other tabular format.
Load the HiveServer2 JDBC driver. As of 1.2.0 applications no longer need to explicitly load JDBC drivers using Class.forName(). For example:
Class.forName("org.apache.hive.jdbc.HiveDriver");
Connect to the database by creating a Connection object with the JDBC driver. For example:
Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>", "<user>", "<password>");
The default <port> is 10000. In non-secure configurations, specify a <user> for the query to run as. The <password> field value is ignored in non-secure mode.
Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>", "<user>", "");
In Kerberos secure mode, the user information is based on the Kerberos credentials.
Submit SQL to the database by creating a Statement object and using its executeQuery() method. For example:
Statement stmt = cnct.createStatement();
ResultSet rset = stmt.executeQuery("SELECT foo FROM bar");
- Process the result set, if necessary.
These steps are illustrated in the sample code below.
JDBC Client Sample Code
import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveJdbcClient {
  private static String driverName = "org.apache.hive.jdbc.HiveDriver";

  /**
   * @param args
   * @throws SQLException
   */
  public static void main(String[] args) throws SQLException {
    try {
      Class.forName(driverName);
    } catch (ClassNotFoundException e) {
      e.printStackTrace();
      System.exit(1);
    }
    // replace "hive" here with the name of the user the queries should run as
    Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "");
    Statement stmt = con.createStatement();
    String tableName = "testHiveDriverTable";
    stmt.execute("drop table if exists " + tableName);
    stmt.execute("create table " + tableName + " (key int, value string)");
    // show tables
    String sql = "show tables '" + tableName + "'";
    System.out.println("Running: " + sql);
    ResultSet res = stmt.executeQuery(sql);
    if (res.next()) {
      System.out.println(res.getString(1));
    }
    // describe table
    sql = "describe " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1) + "\t" + res.getString(2));
    }
    // load data into table
    // NOTE: filepath has to be local to the hive server
    // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
    String filepath = "/tmp/a.txt";
    sql = "load data local inpath '" + filepath + "' into table " + tableName;
    System.out.println("Running: " + sql);
    stmt.execute(sql);
    // select * query
    sql = "select * from " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
    }
    // regular hive query
    sql = "select count(1) from " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1));
    }
  }
}
Running the JDBC Sample Code
# Then on the command-line
$ javac HiveJdbcClient.java
# To run the program using remote hiveserver in non-kerberos mode, we need the following jars in the classpath
# from hive/build/dist/lib
# hive-jdbc*.jar
# hive-service*.jar
# libfb303-0.9.0.jar
# libthrift-0.9.0.jar
# log4j-1.2.16.jar
# slf4j-api-1.6.1.jar
# slf4j-log4j12-1.6.1.jar
# commons-logging-1.0.4.jar
#
#
# To run the program using kerberos secure mode, we need the following jars in the classpath
# hive-exec*.jar
# commons-configuration-1.6.jar
# and from hadoop
# hadoop-core*.jar
#
# To run the program in embedded mode, we need the following additional jars in the classpath
# from hive/build/dist/lib
# hive-exec*.jar
# hive-metastore*.jar
# antlr-runtime-3.0.1.jar
# derby.jar
# jdo2-api-2.1.jar
# jpox-core-1.2.2.jar
# jpox-rdbms-1.2.2.jar
# and from hadoop/build
# hadoop-core*.jar
# as well as hive/build/dist/conf, any HIVE_AUX_JARS_PATH set,
# and hadoop jars necessary to run MR jobs (eg lzo codec)
$ java -cp $CLASSPATH HiveJdbcClient
Alternatively, you can run the following bash script, which will seed the data file and build your classpath before invoking the client. The script adds all the additional jars needed for using HiveServer2 in embedded mode as well.
#!/bin/bash
HADOOP_HOME=/your/path/to/hadoop
HIVE_HOME=/your/path/to/hive
echo -e '1\x01foo' > /tmp/a.txt
echo -e '2\x01bar' >> /tmp/a.txt
HADOOP_CORE=$(ls $HADOOP_HOME/hadoop-core*.jar)
CLASSPATH=.:$HIVE_HOME/conf:$(hadoop classpath)
for i in ${HIVE_HOME}/lib/*.jar ; do
CLASSPATH=$CLASSPATH:$i
done
java -cp $CLASSPATH HiveJdbcClient
JDBC Data Types
The following table lists the data types implemented for HiveServer2 JDBC.
Hive Type | Java Type | Specification |
---|---|---|
TINYINT | byte | signed or unsigned 1-byte integer |
SMALLINT | short | signed 2-byte integer |
INT | int | signed 4-byte integer |
BIGINT | long | signed 8-byte integer |
FLOAT | double | single-precision number (approximately 7 digits) |
DOUBLE | double | double-precision number (approximately 15 digits) |
DECIMAL | java.math.BigDecimal | fixed-precision decimal value |
BOOLEAN | boolean | a single bit (0 or 1) |
STRING | String | character string or variable-length character string |
TIMESTAMP | java.sql.Timestamp | date and time value |
BINARY | String | binary data |
Complex Types | |
ARRAY | String – json encoded | values of one data type |
MAP | String – json encoded | key-value pairs |
STRUCT | String – json encoded | structured values |
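Since ARRAY, MAP, and STRUCT columns come back as JSON-encoded strings, the client is responsible for decoding them. Below is a minimal hand-rolled sketch for a hypothetical ARRAY<int> value; a real client would typically use a JSON library instead:

```java
public class ComplexTypeExample {
    // Parses a JSON-encoded integer array such as the string a ResultSet
    // might return for an ARRAY<int> column, e.g. "[1,2,3]".
    static int[] parseIntArray(String json) {
        String body = json.substring(1, json.length() - 1); // strip '[' and ']'
        if (body.trim().isEmpty()) return new int[0];
        String[] parts = body.split(",");
        int[] out = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            out[i] = Integer.parseInt(parts[i].trim());
        }
        return out;
    }

    public static void main(String[] args) {
        int[] values = parseIntArray("[1,2,3]");
        System.out.println(values.length); // 3
    }
}
```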
JDBC Client Setup for a Secure Cluster
When connecting to HiveServer2 with Kerberos authentication, the URL format is:
jdbc:hive2://<host>:<port>/<db>;principal=<Server_Principal_of_HiveServer2>
The client needs to have a valid Kerberos ticket in the ticket cache before connecting.
NOTE: If you don't have a "/" after the port number, the JDBC driver does not parse the hostname and ends up running HiveServer2 in embedded mode. So if you are specifying a hostname, make sure you have a "/" or "/<dbname>" after the port number.
In the case of LDAP, CUSTOM or PAM authentication, the client needs to pass a valid user name and password to the JDBC connection API.
To use sasl.qop, add it to the sessionconf part of your Hive JDBC connection string, e.g.
jdbc:hive2://hostname/dbname;sasl.qop=auth-int
For more information, see Setting Up HiveServer2.
Multi-User Scenarios and Programmatic Login to Kerberos KDC
In the current approach of using Kerberos you need to have a valid Kerberos ticket in the ticket cache before connecting. This entails a static login (using kinit, a keytab, or a ticket cache) and the restriction of one Kerberos user per client. These restrictions limit the usage in middleware systems and other multi-user scenarios, and in scenarios where the client wants to log in programmatically to the Kerberos KDC.
One way to mitigate the problem of multi-user scenarios is with secure proxy users (see HIVE-5155). Starting in Hive 0.13.0, support for secure proxy users has two components:
- Delegation token based connection for Oozie (OOZIE-1457). This is the common mechanism for Hadoop ecosystem components.
- Direct proxy access for privileged Hadoop users (HIVE-5155). This enables a privileged user to directly specify an alternate session user during the connection. If the connecting user has Hadoop level privilege to impersonate the requested userid, then HiveServer2 will run the session as that requested user.
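As a sketch of the direct-proxy path, a privileged user can name the alternate session user via the hive.server2.proxy.user URL parameter. The helper, host, principal, and user names below are placeholders for illustration:

```java
public class ProxyUserExample {
    // Illustrative helper: appends the proxy-user parameter to a Kerberos connection URL.
    static String proxyUrl(String baseUrl, String sessionUser) {
        return baseUrl + ";hive.server2.proxy.user=" + sessionUser;
    }

    public static void main(String[] args) {
        String url = proxyUrl(
                "jdbc:hive2://hs2.example.com:10000/default;principal=hive/hs2.example.com@EXAMPLE.COM",
                "bob");
        // A real client would now call DriverManager.getConnection(url);
        // HiveServer2 runs the session as "bob" if the connecting user may impersonate him.
        System.out.println(url);
    }
}
```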
The other way is to use a pre-authenticated Kerberos Subject (see HIVE-6486). In this method, starting with Hive 0.13.0 the Hive JDBC client can use a pre-authenticated subject to authenticate to HiveServer2. This enables a middleware system to run queries as the user running the client.
Using Kerberos with a Pre-Authenticated Subject
To use a pre-authenticated subject you will need the following changes.
- Add hive-exec*.jar to the classpath in addition to the regular Hive JDBC jars (commons-configuration-1.6.jar and hadoop-core*.jar are not required).
- Add auth=kerberos and kerberosAuthType=fromSubject JDBC URL properties in addition to having the "principal" URL property.
- Open the connection in Subject.doAs().
The following code snippet illustrates the usage (refer to HIVE-6486 for a complete test case):
static Connection getConnection( Subject signedOnUserSubject ) throws Exception{
       Connection conn = (Connection) Subject.doAs(signedOnUserSubject, new PrivilegedExceptionAction<Object>()
       {
           public Object run()
           {
                 Connection con = null;
                 String JDBC_DB_URL = "jdbc:hive2://HiveHost:10000/default;" +
                                      "principal=hive/localhost.localdomain@EXAMPLE.COM;" +
                                      "auth=kerberos;kerberosAuthType=fromSubject";
                 try {
                       Class.forName(JDBC_DRIVER);
                       con = DriverManager.getConnection(JDBC_DB_URL);
                 } catch (SQLException e) {
                       e.printStackTrace();
                 } catch (ClassNotFoundException e) {
                       e.printStackTrace();
                 }
                 return con;
           }
       });
       return conn;
}
Python Client
A Python client driver is available on github. For installation instructions, see Setting Up HiveServer2: Python Client Driver.
Ruby Client
A Ruby client driver is available on github at https://github.com/forward3d/rbhive.
Integration with SQuirrel SQL Client
- Download, install and start the SQuirrel SQL Client from the SQuirrel SQL website.
- Select 'Drivers -> New Driver...' to register Hive's JDBC driver that works with HiveServer2.
Enter the driver name and example URL:
Name: Hive
Example URL: jdbc:hive2://localhost:10000/default
Select 'Extra Class Path -> Add' to add the following jars from your local Hive and Hadoop distribution.
HIVE_HOME/build/dist/lib/*.jar
HADOOP_HOME/hadoop-*-core.jar
Select 'List Drivers'. This will cause SQuirrel to parse your jars for JDBC drivers and might take a few seconds. From the 'Class Name' input box select the Hive driver for working with HiveServer2:
org.apache.hive.jdbc.HiveDriver
Click 'OK' to complete the driver registration.
- Select 'Aliases -> Add Alias...' to create a connection alias to your HiveServer2 instance.
- Give the connection alias a name in the 'Name' input box.
- Select the Hive driver from the 'Driver' drop-down.
- Modify the example URL as needed to point to your HiveServer2 instance.
- Enter 'User Name' and 'Password' and click 'OK' to save the connection alias.
- To connect to HiveServer2, double-click the Hive alias and click 'Connect'.
When the connection is established you will see errors in the log console and might get a warning that the driver is not JDBC 3.0 compatible. These alerts are due to yet-to-be-implemented parts of the JDBC metadata API and can safely be ignored. To test the connection enter SHOW TABLES in the console and click the run icon.
Also note that when a query is running, support for the 'Cancel' button is not yet available.
Advanced features for integration with other tools
Supporting cookie replay in HTTP mode
HIVE-9709 added cookie-replay support to the JDBC driver. It is turned on by default, so that incoming cookies can be sent back to the server for authentication.
With cookie replay enabled, the JDBC connection URL looks like: jdbc:hive2://<host>:<port>/<db>?transportMode=http;httpPath=<http_endpoint>;cookieAuth=true;cookieName=<cookie_name>
- cookieAuth is set to true by default.
- cookieName: If any incoming cookie's key matches the value of cookieName, the JDBC driver will not send any login credentials or Kerberos ticket to the server; the client will just send the cookie back to the server for authentication. The default value of cookieName is hive.server2.auth (the HiveServer2 cookie name).
- To turn off cookie replay, cookieAuth=false must be used in the JDBC URL.
- Important note: As part of HIVE-9709, the Apache http-client and http-core components of Hive were upgraded to 4.4. To avoid any collision between this upgraded version of HttpComponents and any other versions that might be present in your system (such as the one provided by Apache Hadoop 2.6, which uses http-client and http-core version 4.2.5), the client is expected to set HADOOP_USER_CLASSPATH_FIRST=true before using hive-jdbc. In fact, bin/beeline.sh does this.
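The cookie-replay settings above slot into the HTTP-mode URL as ordinary parameters. A small illustrative builder (not part of the Hive API; host and endpoint are placeholders, with cliservice as the default httpPath):

```java
public class CookieUrlExample {
    // Builds the HTTP-mode URL described above, with cookie replay on or off.
    static String httpUrl(String host, int port, String db, boolean cookieAuth) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db
                + "?transportMode=http;httpPath=cliservice"
                + ";cookieAuth=" + cookieAuth;
    }

    public static void main(String[] args) {
        System.out.println(httpUrl("hs2.example.com", 10001, "default", true));
        System.out.println(httpUrl("hs2.example.com", 10001, "default", false)); // opt out of replay
    }
}
```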
Using 2-way SSL in HTTP Mode
HIVE-10447 enabled the JDBC driver to support 2-way SSL in HTTP mode. Note that HiveServer2 itself currently does not support 2-way SSL, so this feature is handy when there is an intermediate server, such as Knox, that requires the client to support 2-way SSL.
JDBC connection URL: jdbc:hive2://<host>:<port>/<db>;ssl=true;twoWay=true;sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>;sslKeyStore=<key_store_path>;keyStorePassword=<key_store_password>?hive.server2.transport.mode=http;hive.server2.thrift.http.path=<http_endpoint>
- <trust_store_path> is the path where the client's truststore file lives. This is a mandatory non-empty field.
- <trust_store_password> is the password to access the truststore.
- <key_store_path> is the path where the client's keystore file lives. This is a mandatory non-empty field.
- <key_store_password> is the password to access the keystore.