This page describes the different clients supported by HiveServer2.

HiveServer2 was introduced in Hive version 0.11. See HIVE-2935.

Beeline – New Command Line Shell

HiveServer2 supports a new command shell, Beeline. It is a JDBC client based on the SQLLine CLI (http://sqlline.sourceforge.net/). The detailed SQLLine documentation also applies to Beeline.

The Beeline shell works in both embedded and remote mode. In embedded mode, it runs an embedded Hive (similar to the Hive CLI), whereas remote mode connects to a separate HiveServer2 process over Thrift.

In remote mode HiveServer2 only accepts valid Thrift calls – even in HTTP mode, the message body contains Thrift payloads.

Beeline Example

% bin/beeline
Hive version 0.11.0-SNAPSHOT by Apache
beeline> !connect jdbc:hive2://localhost:10000 scott tiger org.apache.hive.jdbc.HiveDriver
!connect jdbc:hive2://localhost:10000 scott tiger org.apache.hive.jdbc.HiveDriver
Connecting to jdbc:hive2://localhost:10000
Connected to: Hive (version 0.10.0)
Driver: Hive (version 0.10.0-SNAPSHOT)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> show tables;
show tables;
+-------------------+
|     tab_name      |
+-------------------+
| primitives        |
| src               |
| src1              |
| src_json          |
| src_sequencefile  |
| src_thrift        |
| srcbucket         |
| srcbucket2        |
| srcpart           |
+-------------------+
9 rows selected (1.079 seconds)

If you'd like to connect in NOSASL mode, you must specify the authentication mode explicitly:

% bin/beeline
beeline> !connect jdbc:hive2://<host>:<port>/<db>;auth=noSasl hiveuser pass org.apache.hive.jdbc.HiveDriver

Beeline Command Options

The Beeline CLI supports these command line options:

-u <database URL>

The JDBC URL to connect to.

Usage: beeline -u db_URL 

-n <username>

The username to connect as.

Usage: beeline -n valid_user

-p <password>

The password to use when connecting.

Usage: beeline -p valid_password

-d <driver class>

The driver class to use.

Usage: beeline -d driver_class

-e <query>

Query that should be executed.

Usage: beeline -e "query_string"

Bug fix (null pointer exception): 0.13.0 (HIVE-5765)

-f <file>

Script file that should be executed.

Usage: beeline -f filepath

Version: 0.12.0 (HIVE-4268)
Note: If the script contains tabs, query compilation fails in version 0.12.0. This bug is fixed in version 0.13.0 (HIVE-6359).

--hiveconf property=value

Use value for the given configuration property. Properties that are listed in hive.conf.restricted.list cannot be reset with hiveconf (see Restricted List and Whitelist).

Usage: beeline --hiveconf prop1=value1

Version: 0.13.0 (HIVE-6173)

--hivevar name=value

Hive variable name and value. This is a Hive-specific setting in which variables can be set
at the session level and referenced in Hive commands or queries.

Usage: beeline --hivevar var1=value1

--color=[true/false]

Control whether color is used for display. Default is false.

Usage: beeline --color=true

--showHeader=[true/false]

Show column names in query results (true) or not (false). Default is true.

Usage: beeline --showHeader=false

--headerInterval=ROWS

The interval for redisplaying column headers, in number of rows, when outputformat is table.
Default is 100.

Usage: beeline --headerInterval=50

--fastConnect=[true/false]

When connecting, skip building a list of all tables and columns for tab-completion of
HiveQL statements (true) or build the list (false). Default is true.

Usage: beeline --fastConnect=false

--autoCommit=[true/false]

Enable/disable automatic transaction commit. Default is false.

Usage: beeline --autoCommit=true

--verbose=[true/false]

Show verbose error messages and debug information (true) or do not show (false).
Default is false.

Usage: beeline --verbose=true

--showWarnings=[true/false]

Display warnings that are reported on the connection after issuing any HiveQL commands.
Default is false.

Usage: beeline --showWarnings=true

--showNestedErrs=[true/false]

Display nested errors. Default is false.

Usage: beeline --showNestedErrs=true

--numberFormat=[pattern]

Format numbers using a DecimalFormat pattern.

Usage: beeline --numberFormat="#,###,##0.00"

--force=[true/false]

Continue running script even after errors (true) or do not continue (false). Default is false.

Usage: beeline --force=true

--maxWidth=MAXWIDTH

The maximum width to display before truncating data, in characters, when outputformat is table.
Default is to query the terminal for current width, then fall back to 80.

Usage: beeline --maxWidth=150

--maxColumnWidth=MAXCOLWIDTH

The maximum column width, in characters, when outputformat is table. Default is 15.

Usage: beeline --maxColumnWidth=25

--silent=[true/false]

Reduce the amount of informational messages displayed (true) or not (false). Default is false.

Usage: beeline --silent=true

--autosave=[true/false]

Automatically save preferences (true) or do not autosave (false). Default is false.

Usage: beeline --autosave=true

--outputformat=[table/vertical/csv/tsv]

Format mode for result display. Default is table.

Usage: beeline --outputformat=tsv

--isolation=LEVEL

Set the transaction isolation level to TRANSACTION_READ_COMMITTED
or TRANSACTION_SERIALIZABLE.
See the "Field Detail" section in the Java Connection documentation.

Usage: beeline --isolation=TRANSACTION_SERIALIZABLE

--nullemptystring=[true/false]

Use historic behavior of printing null as empty string (true) or use current behavior of printing
null as NULL (false). Default is false.

Usage: beeline --nullemptystring=false

Version: 0.13.0 (HIVE-4485)

--incremental=[true/false]

Print output incrementally.

--help

Display a usage message.

Usage: beeline --help

JDBC

HiveServer2 has a new JDBC driver. It supports both embedded and remote access to HiveServer2.

Connection URLs

Connection URL for Remote or Embedded Mode

The JDBC connection URL format has the prefix jdbc:hive2:// and the Driver class is org.apache.hive.jdbc.HiveDriver. Note that this is different from the old HiveServer.
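
A minimal sketch of opening a connection in each mode (the host, port, database, and user below are placeholders, not values taken from this page):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionModes {
  public static void main(String[] args) throws ClassNotFoundException, SQLException {
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // Remote mode: connect to a standalone HiveServer2 process over Thrift.
    Connection remote =
        DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "");

    // Embedded mode: no host or port in the URL, so Hive runs inside this JVM.
    Connection embedded = DriverManager.getConnection("jdbc:hive2://", "", "");

    remote.close();
    embedded.close();
  }
}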

Connection URL When HiveServer2 Is Running in HTTP Mode

JDBC connection URL:  jdbc:hive2://<host>:<port>/<db>?hive.server2.transport.mode=http;hive.server2.thrift.http.path=<http_endpoint>, where <http_endpoint> is the corresponding HTTP endpoint configured in hive-site.xml (the default value is cliservice) and <port> is the HTTP port (default 10001).

Connection URL When SSL Is Enabled in HiveServer2

JDBC connection URL:  jdbc:hive2://<host>:<port>/<db>;ssl=true;sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>, where <trust_store_path> is the path to the client's truststore file and <trust_store_password> is the password for accessing the truststore.

In HTTP mode:  jdbc:hive2://<host>:<port>/<db>;ssl=true;sslTrustStore=<trust_store_path>;trustStorePassword=<trust_store_password>?hive.server2.transport.mode=http;hive.server2.thrift.http.path=<http_endpoint>.
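
The following sketch shows how those URL formats might be used from JDBC; the host, port, HTTP endpoint, truststore path, and password are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class TransportModeExamples {
  public static void main(String[] args) throws ClassNotFoundException, SQLException {
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // HTTP mode: the transport mode and HTTP path travel in the URL.
    String httpUrl = "jdbc:hive2://hs2host:10001/default"
        + "?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice";
    Connection httpConn = DriverManager.getConnection(httpUrl, "hive", "");

    // SSL: point the driver at the client truststore and its password.
    String sslUrl = "jdbc:hive2://hs2host:10000/default"
        + ";ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=changeit";
    Connection sslConn = DriverManager.getConnection(sslUrl, "hive", "");

    httpConn.close();
    sslConn.close();
  }
}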

Using JDBC

You can use JDBC to access data stored in a relational database or other tabular format.

  1. Load the HiveServer2 JDBC driver.

    For example:

    Class.forName("org.apache.hive.jdbc.HiveDriver");
    
  2. Connect to the database by creating a Connection object with the JDBC driver.

    For example:

    Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>", "<user>", "<password>");
    

    The default <port> is 10000. In non-secure configurations, specify a <user> for the query to run as. The <password> field value is ignored in non-secure mode.

    Connection cnct = DriverManager.getConnection("jdbc:hive2://<host>:<port>", "<user>", "");
    

    In Kerberos secure mode, the user information is based on the Kerberos credentials.

  3. Submit SQL to the database by creating a Statement object and using its executeQuery() method.

    For example:

    Statement stmt = cnct.createStatement();
    ResultSet rset = stmt.executeQuery("SELECT foo FROM bar");
    
  4. Process the result set, if necessary.

These steps are illustrated in the sample code below.

JDBC Client Sample Code

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveJdbcClient {
  private static String driverName = "org.apache.hive.jdbc.HiveDriver";

  /**
   * @param args
   * @throws SQLException
   */
  public static void main(String[] args) throws SQLException {
    try {
      Class.forName(driverName);
    } catch (ClassNotFoundException e) {
      // Hive JDBC driver is not on the classpath
      e.printStackTrace();
      System.exit(1);
    }
    // Replace "hive" here with the name of the user the queries should run as
    Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "");
    Statement stmt = con.createStatement();
    String tableName = "testHiveDriverTable";
    stmt.execute("drop table if exists " + tableName);
    stmt.execute("create table " + tableName + " (key int, value string)");
    // show tables
    String sql = "show tables '" + tableName + "'";
    System.out.println("Running: " + sql);
    ResultSet res = stmt.executeQuery(sql);
    if (res.next()) {
      System.out.println(res.getString(1));
    }
    // describe table
    sql = "describe " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1) + "\t" + res.getString(2));
    }

    // load data into table
    // NOTE: filepath has to be local to the hive server
    // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
    String filepath = "/tmp/a.txt";
    sql = "load data local inpath '" + filepath + "' into table " + tableName;
    System.out.println("Running: " + sql);
    stmt.execute(sql);

    // select * query
    sql = "select * from " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
    }

    // regular hive query
    sql = "select count(1) from " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1));
    }
  }
}

Running the JDBC Sample Code

# On the command line, compile the client
$ javac HiveJdbcClient.java

# To run the program using remote hiveserver in non-kerberos mode, we need the following jars in the classpath
# from hive/build/dist/lib
#     hive-jdbc*.jar
#     hive-service*.jar
#     libfb303-0.9.0.jar
#     libthrift-0.9.0.jar
#     log4j-1.2.16.jar
#     slf4j-api-1.6.1.jar
#     slf4j-log4j12-1.6.1.jar
#     commons-logging-1.0.4.jar
#
#
# To run the program using kerberos secure mode, we need the following jars in the classpath 
#     hive-exec*.jar
#     commons-configuration-1.6.jar
#  and from hadoop
#     hadoop-core*.jar
#
# To run the program in embedded mode, we need the following additional jars in the classpath
# from hive/build/dist/lib
#     hive-exec*.jar
#     hive-metastore*.jar
#     antlr-runtime-3.0.1.jar
#     derby.jar
#     jdo2-api-2.1.jar
#     jpox-core-1.2.2.jar
#     jpox-rdbms-1.2.2.jar
# and from hadoop/build
#     hadoop-core*.jar
# as well as hive/build/dist/conf, any HIVE_AUX_JARS_PATH set, 
# and hadoop jars necessary to run MR jobs (eg lzo codec)

$ java -cp $CLASSPATH HiveJdbcClient

Alternatively, you can run the following bash script, which will seed the data file and build your classpath before invoking the client. The script adds all the additional jars needed for using HiveServer2 in embedded mode as well.

#!/bin/bash
HADOOP_HOME=/your/path/to/hadoop
HIVE_HOME=/your/path/to/hive

echo -e '1\x01foo' > /tmp/a.txt
echo -e '2\x01bar' >> /tmp/a.txt

CLASSPATH=.:$HIVE_HOME/conf:$(hadoop classpath)

for i in ${HIVE_HOME}/lib/*.jar ; do
    CLASSPATH=$CLASSPATH:$i
done

java -cp $CLASSPATH HiveJdbcClient

JDBC Data Types

The following table lists the data types implemented for HiveServer2 JDBC.

Hive Type       Java Type              Specification
TINYINT         byte                   signed 1-byte integer
SMALLINT        short                  signed 2-byte integer
INT             int                    signed 4-byte integer
BIGINT          long                   signed 8-byte integer
FLOAT           double                 single-precision floating-point value (approximately 7 decimal digits)
DOUBLE          double                 double-precision floating-point value (approximately 15 decimal digits)
DECIMAL         java.math.BigDecimal   fixed-precision decimal value
BOOLEAN         boolean                boolean (true/false) value
STRING          String                 character string or variable-length character string
TIMESTAMP       java.sql.Timestamp     date and time value
BINARY          String                 binary data
Complex Types
ARRAY           String (JSON encoded)  values of one data type
MAP             String (JSON encoded)  key-value pairs
STRUCT          String (JSON encoded)  structured values
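
As a rough sketch of how these mappings appear in client code (the table typed_table and its columns are hypothetical), several of the Hive types above are read with the matching java.sql.ResultSet accessor, and complex types come back as JSON-encoded strings:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Timestamp;

public class TypeMappingExample {
  public static void main(String[] args) throws ClassNotFoundException, SQLException {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "");
    Statement stmt = con.createStatement();
    // Hypothetical table with one column per Hive type discussed above.
    ResultSet res = stmt.executeQuery(
        "select c_tinyint, c_int, c_bigint, c_double, c_decimal, c_boolean, c_string, c_timestamp, c_array from typed_table");
    while (res.next()) {
      byte tiny = res.getByte(1);            // TINYINT  -> byte
      int i = res.getInt(2);                 // INT      -> int
      long big = res.getLong(3);             // BIGINT   -> long
      double d = res.getDouble(4);           // DOUBLE   -> double
      BigDecimal dec = res.getBigDecimal(5); // DECIMAL  -> java.math.BigDecimal
      boolean b = res.getBoolean(6);         // BOOLEAN  -> boolean
      String s = res.getString(7);           // STRING   -> String
      Timestamp ts = res.getTimestamp(8);    // TIMESTAMP -> java.sql.Timestamp
      String arrayJson = res.getString(9);   // ARRAY    -> JSON-encoded String
      System.out.println(tiny + " " + i + " " + big + " " + d + " " + dec + " "
          + b + " " + s + " " + ts + " " + arrayJson);
    }
    con.close();
  }
}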

JDBC Client Setup for a Secure Cluster

When connecting to HiveServer2 with Kerberos authentication, the URL format is:

jdbc:hive2://<host>:<port>/<db>;principal=<Server_Principal_of_HiveServer2>

The client needs to have a valid Kerberos ticket in the ticket cache before connecting.

NOTE: If you don't have a "/" after the port number, the JDBC driver does not parse the hostname and ends up running HiveServer2 in embedded mode. So if you are specifying a hostname, make sure you have a "/" or "/<dbname>" after the port number.

In the case of LDAP or custom pass-through authentication, the client needs to pass a valid user name and password to the JDBC connection API.

To use sasl.qop, add it to the session configuration part of your Hive JDBC connection string. For example:

jdbc:hive2://hostname/dbname;sasl.qop=auth-int

For more information, see Setting Up HiveServer2.
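
A minimal sketch of the client side under these authentication modes (the host, principal, and credentials below are placeholders): in Kerberos mode the server principal travels in the URL and the user/password arguments are ignored, while in LDAP or custom pass-through mode the credentials are passed to getConnection:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SecureClientExamples {
  public static void main(String[] args) throws ClassNotFoundException, SQLException {
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // Kerberos: the client must already hold a valid ticket (e.g. from kinit);
    // the server principal is part of the URL, not the user/password arguments.
    Connection kerberosConn = DriverManager.getConnection(
        "jdbc:hive2://hs2host:10000/default;principal=hive/hs2host@EXAMPLE.COM", "", "");

    // LDAP or custom pass-through authentication: pass a valid user name and password.
    Connection ldapConn = DriverManager.getConnection(
        "jdbc:hive2://hs2host:10000/default", "ldapuser", "ldappassword");

    kerberosConn.close();
    ldapConn.close();
  }
}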

Python Client

A Python client driver is available on github. For installation instructions, see Setting Up HiveServer2: Python Client Driver.

Integration with SQuirrel SQL Client 

  1. Download, install and start the SQuirrel SQL Client from the SQuirrel SQL website.
  2. Select 'Drivers -> New Driver...' to register Hive's JDBC driver that works with HiveServer2.
    1. Enter the driver name and example URL:

         Name: Hive
         Example URL: jdbc:hive2://localhost:10000/default
      
  3. Select 'Extra Class Path -> Add' to add the following jars from your local Hive and Hadoop distribution.

       HIVE_HOME/build/dist/lib/*.jar
       HADOOP_HOME/hadoop-*-core.jar 
  4. Select 'List Drivers'. This will cause SQuirrel to parse your jars for JDBC drivers and might take a few seconds. From the 'Class Name' input box select the Hive driver for working with HiveServer2:

       org.apache.hive.jdbc.HiveDriver
       
  5. Click 'OK' to complete the driver registration. 

  6. Select 'Aliases -> Add Alias...' to create a connection alias to your HiveServer2 instance.
    1. Give the connection alias a name in the 'Name' input box.
    2. Select the Hive driver from the 'Driver' drop-down.
    3. Modify the example URL as needed to point to your HiveServer2 instance.
    4. Enter 'User Name' and 'Password' and click 'OK' to save the connection alias.
    5. To connect to HiveServer2, double-click the Hive alias and click 'Connect'.

When the connection is established you will see errors in the log console and might get a warning that the driver is not JDBC 3.0 compatible. These alerts are due to yet-to-be-implemented parts of the JDBC metadata API and can safely be ignored. To test the connection enter SHOW TABLES in the console and click the run icon.

Also note that when a query is running, support for the 'Cancel' button is not yet available.