

Advanced features for integration with other tools

 

Supporting cookie replay in HTTP mode 

HIVE-9709 introduced support in the JDBC driver for cookie replay. This is turned on by default so that incoming cookies can be sent back to the server for authentication purposes.

 

When enabled, the JDBC connection URL should look like this: jdbc:hive2://<host>:<port>/<db>?transportMode=http;httpPath=<http_endpoint>;cookieAuth=true;cookieName=<cookie_name>

 

  • cookieAuth is set to true by default.
  • cookieName: If the key of any incoming cookie matches the value of cookieName, the JDBC driver will not send any login credentials or Kerberos ticket to the server; the client will just send the cookie back to the server for authentication. The default value of cookieName is hive.server2.auth (the HiveServer2 cookie name).
  • To turn off cookie replay, cookieAuth=false must be used in the JDBC URL.
  • Important note: As part of HIVE-9709, the Apache http-client and http-core components of Hive were upgraded to 4.4. To avoid any collision between this upgraded version of HttpComponents and any other versions that might be present in your system (such as the 4.2.5 versions of http-client and http-core shipped with Apache Hadoop 2.6), the client is expected to set HADOOP_USER_CLASSPATH_FIRST=true before using hive-jdbc. In fact, bin/beeline.sh does this.
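As a minimal sketch, a client might assemble and use such a URL as follows. The host, port, database, and HTTP endpoint below are hypothetical placeholder values; the URL parameters are the ones described above.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class CookieReplayExample {

    // Assembles an HTTP-mode JDBC URL with cookie replay enabled.
    // The default cookieName, hive.server2.auth, is written out explicitly here.
    static String cookieReplayUrl(String host, int port, String db, String httpPath) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db
                + "?transportMode=http;httpPath=" + httpPath
                + ";cookieAuth=true;cookieName=hive.server2.auth";
    }

    public static void main(String[] args) {
        // Hypothetical host, port, and endpoint values.
        String url = cookieReplayUrl("hs2.example.com", 10001, "default", "cliservice");
        System.out.println(url);

        // Remember to export HADOOP_USER_CLASSPATH_FIRST=true in the client
        // environment so the 4.4 HttpComponents jars take precedence.
        // Opening the connection requires a reachable HiveServer2:
        // try (Connection conn = DriverManager.getConnection(url, "user", "")) { ... }
    }
}
```

After the first authenticated request, the driver replays the hive.server2.auth cookie instead of re-sending credentials on every call.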

 

Using 2-way SSL in HTTP Mode 

HIVE-10447 enabled the JDBC driver to support 2-way SSL in HTTP mode. Please note that HiveServer2 currently does not support 2-way SSL, so this feature is handy when there is an intermediate server, such as Knox, which requires the client to support 2-way SSL.

 

JDBC connection URL: jdbc:hive2://<host>:<port>/<db>;ssl=true;twoWay=true;sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>;sslKeyStore=<key_store_path>;keyStorePassword=<key_store_password>?hive.server2.transport.mode=http;hive.server2.thrift.http.path=<http_endpoint>

  • <trust_store_path> is the path to the client's truststore file. This is a mandatory non-empty field.
  • <trust_store_password> is the password to access the truststore.
  • <key_store_path> is the path to the client's keystore file. This is a mandatory non-empty field.
  • <key_store_password> is the password to access the keystore.
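The URL above has many pieces, so a small helper that assembles it can help avoid typos. This is a sketch under assumed placeholder values (gateway host, store paths, and passwords are hypothetical); the parameter names are the ones listed above.

```java
public class TwoWaySslUrlExample {

    // Assembles a 2-way SSL JDBC URL for HTTP mode. All store paths and
    // passwords are caller-supplied placeholders.
    static String twoWaySslUrl(String host, int port, String db,
                               String trustStore, String trustStorePassword,
                               String keyStore, String keyStorePassword,
                               String httpPath) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db
                + ";ssl=true;twoWay=true"
                + ";sslTrustStore=" + trustStore
                + ";trustStorePassword=" + trustStorePassword
                + ";sslKeyStore=" + keyStore
                + ";keyStorePassword=" + keyStorePassword
                + "?hive.server2.transport.mode=http"
                + ";hive.server2.thrift.http.path=" + httpPath;
    }

    public static void main(String[] args) {
        // Hypothetical Knox gateway host and client store locations.
        String url = twoWaySslUrl("knox.example.com", 8443, "default",
                "/etc/ssl/client-trust.jks", "trustpass",
                "/etc/ssl/client-key.jks", "keypass",
                "gateway/default/hive");
        System.out.println(url);
    }
}
```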

Passing HTTP Header Key/Value Pairs via JDBC Driver

HIVE-10339 provided an option for clients to supply custom HTTP headers that are sent to the underlying server.

JDBC connection URL: jdbc:hive2://<host>:<port>/<db>?hive.server2.transport.mode=http;hive.server2.thrift.http.path=<http_endpoint>;http.header.<name1>=<value1>;http.header.<name2>=<value2>
When the above URL is specified, Beeline will make the underlying requests add an HTTP header <name1> with value <value1> and another HTTP header <name2> with value <value2>. This is helpful when the end user needs to send an identity in an HTTP header down to intermediate servers such as Knox via Beeline for authentication purposes.
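The http.header.<name>=<value> pairs can be appended mechanically to any HTTP-mode URL. A minimal sketch, assuming a hypothetical endpoint and a hypothetical identity header name:

```java
public class HttpHeaderUrlExample {

    // Appends http.header.<name>=<value> pairs to an HTTP-mode JDBC URL.
    static String withHeaders(String baseUrl, String[][] headers) {
        StringBuilder sb = new StringBuilder(baseUrl);
        for (String[] header : headers) {
            sb.append(";http.header.").append(header[0])
              .append('=').append(header[1]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical host, endpoint, and header name.
        String base = "jdbc:hive2://hs2.example.com:10001/default"
                + "?hive.server2.transport.mode=http"
                + ";hive.server2.thrift.http.path=cliservice";
        String url = withHeaders(base, new String[][] {
                {"X-Forwarded-User", "alice"},
        });
        System.out.println(url);
    }
}
```

Each pair becomes one header on every HTTP request the driver issues, which an intermediate server such as Knox can then inspect.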