
...

  • Types are associated with the columns in the tables. The following Primitive types are supported:
  • Integers
    • TINYINT: 1 byte integer
    • SMALLINT: 2 byte integer
    • INT: 4 byte integer
    • BIGINT: 8 byte integer
  • Boolean type
    • BOOLEAN: TRUE/FALSE
  • Floating point numbers
    • FLOAT: single precision
    • DOUBLE: double precision
  • Fixed point numbers
    • DECIMAL: a fixed point value of user-defined scale and precision
  • String types
    • STRING: sequence of characters in a specified character set
    • VARCHAR: sequence of characters in a specified character set with a maximum length
    • CHAR: sequence of characters in a specified character set with a defined length
  • Date and time types
    • TIMESTAMP: a date and time without a timezone ("LocalDateTime" semantics)
    • TIMESTAMP WITH LOCAL TIME ZONE: a point in time measured down to nanoseconds ("Instant" semantics)
    • DATE: a date
  • Binary types
    • BINARY: a sequence of bytes

The Types are organized in the following hierarchy (where the parent is a super type of all the children instances):

  • Type
    • Primitive Type
      • Number
        • DOUBLE
          • FLOAT
            • BIGINT
              • INT
                • SMALLINT
                  • TINYINT
          • STRING
      • BOOLEAN

This type hierarchy defines how the types are implicitly converted in the query language. Implicit conversion is allowed for types from child to an ancestor. So when a query expression expects type1 and the data is of type2, type2 is implicitly converted to type1 if type1 is an ancestor of type2 in the type hierarchy. Note that the type hierarchy allows the implicit conversion of STRING to DOUBLE.
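
For illustration, here is a minimal sketch of the rule (the table and column names are hypothetical): a STRING value can be used where a DOUBLE is expected, because DOUBLE is an ancestor of STRING in the hierarchy above.

Code Block
    -- price is a hypothetical STRING column; it is implicitly converted to
    -- DOUBLE before the comparison, since DOUBLE is an ancestor of STRING.
    SELECT * FROM products WHERE price > 9.99;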

...

  • gender, which is a STRING.
  • active, which is a BOOLEAN.

Timestamp


Timestamps have been the source of much confusion, so we try to document the intended semantics of Hive's timestamp types here.

Timestamp ("LocalDateTime" semantics)

Java's "LocalDateTime" timestamps record a date and time as year, month, day, hour, minute, and seconds without a timezone. These timestamps always have the same values regardless of the local time zone.

For example, the timestamp value of "2014-12-12 12:34:56" is decomposed into year, month, day, hour, minute and seconds fields, but with no time zone information available. It does not correspond to any specific instant. It will always be the same value regardless of the local time zone. Unless your application uses UTC consistently, timestamp with local time zone is strongly preferred over timestamp for most applications. When users say an event is at 10:00, it is always in reference to a certain timezone and means a point in time, rather than 10:00 in an arbitrary time zone.

Timestamp with local time zone ("Instant" semantics)

Java's "Instant" timestamps define a point in time that remains constant regardless of where the data is read. Thus, the timestamp will be adjusted by the local time zone to match the original point in time.


Type                              Value in America/Los_Angeles    Value in America/New_York
timestamp                         2014-12-12 12:34:56             2014-12-12 12:34:56
timestamp with local time zone    2014-12-12 12:34:56             2014-12-12 15:34:56
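
As a hedged illustration (the table and column names are made up), both semantics can be declared side by side in Hive 3.1 and later:

Code Block
    CREATE TABLE events(
        created_local   TIMESTAMP,                       -- "LocalDateTime": same wall-clock value for every reader
        created_instant TIMESTAMP WITH LOCAL TIME ZONE   -- "Instant": rendered in the reader's session time zone
    );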


Comparisons with other tools


Tool            | timestamp | timestamp with local time zone | timestamp with time zone | timestamp without time zone
SQL 2003        | Local     |                                | Offset                   |
Oracle          | Local     | Instant                        | Offset                   |
Sybase          | Local     |                                |                          |
Postgres        | Local     |                                | Instant                  | Local
MySQL           | Instant   |                                |                          | Local
Microsoft SQL   | Other     |                                | Offset                   |
IBM DB2         | Local     |                                | Offset                   |
Presto          | Local     |                                | Offset                   |
Snowflake       | Local     | Instant                        | Offset                   | Local
Hive >= 3.1     | Local     | Instant                        |                          |
Iceberg         | Local     |                                | Instant                  | Local
Spark           | Instant   |                                |                          |


Other definitions:

  • Offset = a point in time recorded together with the time zone offset of the writer.

Built In Operators and Functions

...

  • Relational Operators: The following operators compare the passed operands and generate a TRUE or FALSE value, depending on whether the comparison between the operands holds.

Relational Operator

Operand types

Description

A = B

all primitive types

TRUE if expression A is equivalent to expression B; otherwise FALSE

A != B

all primitive types

TRUE if expression A is not equivalent to expression B; otherwise FALSE

A < B

all primitive types

TRUE if expression A is less than expression B; otherwise FALSE

A <= B

all primitive types

TRUE if expression A is less than or equal to expression B; otherwise FALSE

A > B

all primitive types

TRUE if expression A is greater than expression B; otherwise FALSE

A >= B

all primitive types

TRUE if expression A is greater than or equal to expression B; otherwise FALSE

A IS NULL

all types

TRUE if expression A evaluates to NULL; otherwise FALSE

A IS NOT NULL

all types

FALSE if expression A evaluates to NULL; otherwise TRUE

A LIKE B

strings

TRUE if string A matches the SQL simple regular expression B, otherwise FALSE. The comparison is done character by character. The _ character in B matches any single character in A (similar to . in POSIX regular expressions), and the % character in B matches an arbitrary number of characters in A (similar to .* in POSIX regular expressions). For example, 'foobar' LIKE 'foo' evaluates to FALSE, whereas 'foobar' LIKE 'foo___' evaluates to TRUE and so does 'foobar' LIKE 'foo%'. To escape %, use \ (\% matches one % character). If the data contains a semicolon and you want to search for it, it needs to be escaped: columnValue LIKE 'a\;b'

A RLIKE B

strings

NULL if A or B is NULL, TRUE if any (possibly empty) substring of A matches the Java regular expression B (see Java regular expressions syntax), otherwise FALSE. For example, 'foobar' rlike 'foo' evaluates to TRUE and so does 'foobar' rlike '^f.*r$'.

A REGEXP B

strings

Same as RLIKE
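
For example, a short query sketch combining these operators on the page_view table used later in this tutorial (the patterns themselves are made up):

Code Block
    SELECT page_url
    FROM page_view
    WHERE page_url LIKE '%example.com%'         -- SQL simple pattern: % matches any run of characters
       OR page_url RLIKE '^https?://.*\\.org';  -- Java regular expression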

  • Arithmetic Operators: The following operators support various common arithmetic operations on the operands. All of them return number types.

Arithmetic Operators

Operand types

Description

A + B

all number types

Gives the result of adding A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. For example, since every INT can be represented as a FLOAT, FLOAT is a containing type of INT, so the + operator on a FLOAT and an INT results in a FLOAT.

A - B

all number types

Gives the result of subtracting B from A. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.

A * B

all number types

Gives the result of multiplying A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. Note that if the multiplication causes overflow, you will have to cast one of the operands to a type higher in the type hierarchy.

A / B

all number types

Gives the result of dividing A by B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. If the operands are integer types, then the result is the quotient of the division.

A % B

all number types

Gives the remainder resulting from dividing A by B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.

A & B

all number types

Gives the result of bitwise AND of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.

A | B

all number types

Gives the result of bitwise OR of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.

A ^ B

all number types

Gives the result of bitwise XOR of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.

~A

all number types

Gives the result of bitwise NOT of A. The type of the result is the same as the type of A.
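
A minimal sketch of the promotion rules described above, using literal values only:

Code Block
    SELECT 1 + 2,                              -- INT + INT    -> INT (3)
           1 + 2.0,                            -- INT + DOUBLE -> DOUBLE (3.0); the INT operand is promoted
           7 % 2,                              -- remainder    -> 1
           CAST(1000000 AS BIGINT) * 1000000;  -- casting one operand up to BIGINT avoids INT overflow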

  • Logical Operators: The following operators provide support for creating logical expressions. All of them return boolean TRUE or FALSE depending on the boolean values of the operands.

Logical Operators

Operands types

Description

A AND B

boolean

TRUE if both A and B are TRUE, otherwise FALSE

A && B

boolean

Same as A AND B

A OR B

boolean

TRUE if either A or B or both are TRUE, otherwise FALSE

A || B

boolean

Same as A OR B

NOT A

boolean

TRUE if A is FALSE, otherwise FALSE

!A

boolean

Same as NOT A
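
For example, a hedged sketch using the gender and active columns mentioned earlier (the table name users is assumed):

Code Block
    SELECT * FROM users
    WHERE active AND gender IS NOT NULL;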

  • Operators on Complex Types: The following operators provide mechanisms to access elements in Complex Types.

Operator

Operand types

Description

A[n]

A is an Array and n is an int

returns the nth element in the array A. The first element has index 0. For example, if A is an array consisting of ['foo', 'bar'], then A[0] returns 'foo' and A[1] returns 'bar'

M[key]

M is a Map<K, V> and key has type K

returns the value corresponding to the key in the map. For example, if M is a map consisting of {'f' -> 'foo', 'b' -> 'bar', 'all' -> 'foobar'}, then M['all'] returns 'foobar'

S.x

S is a struct

returns the x field of S. For example, for the struct foobar {int foo, int bar}, foobar.foo returns the integer stored in the foo field of the struct.
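
A hedged sketch pulling the three access forms together (the table user_profiles and its columns are hypothetical):

Code Block
    -- friends is an ARRAY<BIGINT>, properties is a MAP<STRING, STRING>,
    -- and address is a STRUCT<city:STRING, zip:STRING>.
    SELECT friends[0],             -- first array element (index 0)
           properties['browser'],  -- map value stored under key 'browser'
           address.city            -- the city field of the struct
    FROM user_profiles;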

Built In Functions

Return Type

Function Name (Signature)

Description

BIGINT

round(double a)

returns the rounded BIGINT value of the double

BIGINT

floor(double a)

returns the maximum BIGINT value that is equal to or less than the double

BIGINT

ceil(double a)

returns the minimum BIGINT value that is equal to or greater than the double

double

rand(), rand(int seed)

returns a random number (that changes from row to row). Specifying the seed will make sure the generated random number sequence is deterministic.

string

concat(string A, string B,...)

returns the string resulting from concatenating B after A. For example, concat('foo', 'bar') results in 'foobar'. This function accepts an arbitrary number of arguments and returns the concatenation of all of them.

string

substr(string A, int start)

returns the substring of A starting from the start position until the end of string A. For example, substr('foobar', 4) results in 'bar'

string

substr(string A, int start, int length)

returns the substring of A starting from start position with the given length, for example,
substr('foobar', 4, 2) results in 'ba'

string

upper(string A)

returns the string resulting from converting all characters of A to upper case, for example, upper('fOoBaR') results in 'FOOBAR'

string

ucase(string A)

Same as upper

string

lower(string A)

returns the string resulting from converting all characters of A to lower case, for example, lower('fOoBaR') results in 'foobar'

string

lcase(string A)

Same as lower

string

trim(string A)

returns the string resulting from trimming spaces from both ends of A, for example, trim(' foobar ') results in 'foobar'

string

ltrim(string A)

returns the string resulting from trimming spaces from the beginning(left hand side) of A. For example, ltrim(' foobar ') results in 'foobar '

string

rtrim(string A)

returns the string resulting from trimming spaces from the end(right hand side) of A. For example, rtrim(' foobar ') results in ' foobar'

string

regexp_replace(string A, string B, string C)

returns the string resulting from replacing all substrings in A that match the Java regular expression B (see Java regular expressions syntax) with C. For example, regexp_replace('foobar', 'oo|ar', '') returns 'fb'

int

size(Map<K.V>)

returns the number of elements in the map type

int

size(Array<T>)

returns the number of elements in the array type

value of <type>

cast(<expr> as <type>)

converts the result of the expression expr to <type>. For example, cast('1' as BIGINT) will convert the string '1' to its integral representation. A null is returned if the conversion does not succeed.

string

from_unixtime(int unixtime)

converts the number of seconds from the UNIX epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the format of "1970-01-01 00:00:00"

string

to_date(string timestamp)

Return the date part of a timestamp string: to_date("1970-01-01 00:00:00") = "1970-01-01"

int

year(string date)

Return the year part of a date or a timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970

int

month(string date)

Return the month part of a date or a timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11

int

day(string date)

Return the day part of a date or a timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1

string

get_json_object(string json_string, string path)

Extracts a JSON object from a JSON string based on the specified JSON path, and returns the JSON string of the extracted object. It will return null if the input JSON string is invalid.
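
A few of these functions applied to constant values (a hedged sketch; on recent Hive versions a FROM clause is not required):

Code Block
    SELECT concat('foo', 'bar'),               -- 'foobar'
           substr('foobar', 4),                -- 'bar'
           upper('fOoBaR'),                    -- 'FOOBAR'
           trim('  foobar  '),                 -- 'foobar'
           round(3.7), floor(3.7), ceil(3.2),  -- 4, 3, 4
           cast('1' AS BIGINT),                -- 1
           to_date('1970-01-01 00:00:00');     -- '1970-01-01'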

  • The following built in aggregate functions are supported in Hive:

Return Type

Aggregation Function Name (Signature)

Description

BIGINT

count(*), count(expr), count(DISTINCT expr[, expr...])

count(*): Returns the total number of retrieved rows, including rows containing NULL values; count(expr): Returns the number of rows for which the supplied expression is non-NULL; count(DISTINCT expr[, expr]): Returns the number of rows for which the supplied expression(s) are unique and non-NULL.

DOUBLE

sum(col), sum(DISTINCT col)

returns the sum of the elements in the group or the sum of the distinct values of the column in the group

DOUBLE

avg(col), avg(DISTINCT col)

returns the average of the elements in the group or the average of the distinct values of the column in the group

DOUBLE

min(col)

returns the minimum value of the column in the group

DOUBLE

max(col)

returns the maximum value of the column in the group
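
For example, a hedged sketch of an aggregation over the page_view table defined later in this tutorial:

Code Block
    SELECT country, count(*), count(DISTINCT userid), avg(viewTime)
    FROM page_view
    GROUP BY country;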

Language Capabilities

Hive's SQL provides the basic SQL operations. These operations work on tables or partitions. These operations are:

...


Creating, Showing, Altering, and Dropping Tables

See Hive Data Definition Language for detailed information about creating, showing, altering, and dropping tables.

Creating Tables

An example statement that creates the page_view table mentioned above is:

Code Block
    CREATE TABLE page_view(viewTime INT, userid BIGINT,
                    page_url STRING, referrer_url STRING,
                    ip STRING COMMENT 'IP Address of the User')
    COMMENT 'This is the page view table'
    PARTITIONED BY(dt STRING, country STRING)
    STORED AS SEQUENCEFILE;

In this example, the columns of the table are specified with the corresponding types. Comments can be attached both at the column level and at the table level. Additionally, the PARTITIONED BY clause defines the partitioning columns, which are different from the data columns and are actually not stored with the data. When specified in this way, the data in the files is assumed to be delimited with ASCII 001 (ctrl-A) as the field delimiter and newline as the row delimiter.

The field delimiter can be parameterized if the data is not in the above format, as illustrated in the following example:

Code Block
    CREATE TABLE page_view(viewTime INT, userid BIGINT,
                    page_url STRING, referrer_url STRING,
                    ip STRING COMMENT 'IP Address of the User')
    COMMENT 'This is the page view table'
    PARTITIONED BY(dt STRING, country STRING)
    ROW FORMAT DELIMITED
            FIELDS TERMINATED BY '\001'
    STORED AS SEQUENCEFILE;

The row delimiter currently cannot be changed, since it is determined by Hadoop rather than Hive.

It is also a good idea to bucket the tables on certain columns so that efficient sampling queries can be executed against the data set. If bucketing is absent, random sampling can still be done on the table but it is not efficient as the query has to scan all the data. The following example illustrates the case of the page_view table that is bucketed on the userid column:

Code Block
    CREATE TABLE page_view(viewTime INT, userid BIGINT,
                    page_url STRING, referrer_url STRING,
                    ip STRING COMMENT 'IP Address of the User')
    COMMENT 'This is the page view table'
    PARTITIONED BY(dt STRING, country STRING)
    CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
    ROW FORMAT DELIMITED
            FIELDS TERMINATED BY '\001'
            COLLECTION ITEMS TERMINATED BY '\002'
            MAP KEYS TERMINATED BY '\003'
    STORED AS SEQUENCEFILE;

In the example above, the table is clustered by a hash function of userid into 32 buckets. Within each bucket the data is sorted in increasing order of viewTime. Such an organization allows the user to do efficient sampling on the clustered column, in this case userid. The sorting property allows internal operators to take advantage of the better-known data structure while evaluating queries with greater efficiency.
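
As a hedged sketch of such a sampling query (assuming the bucketed page_view definition above), TABLESAMPLE can restrict the scan to a single bucket:

Code Block
    -- Reads only bucket 3 of the 32 userid buckets, roughly 1/32 of the data.
    SELECT * FROM page_view TABLESAMPLE(BUCKET 3 OUT OF 32 ON userid) s
    WHERE dt = '2008-06-08';

The next CREATE TABLE example extends the definition above with collection-typed columns.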

Code Block
    CREATE TABLE page_view(viewTime INT, userid BIGINT,
                    page_url STRING, referrer_url STRING,
                    friends ARRAY<BIGINT>, properties MAP<STRING, STRING>,
                    ip STRING COMMENT 'IP Address of the User')
    COMMENT 'This is the page view table'
    PARTITIONED BY(dt STRING, country STRING)
    CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
    ROW FORMAT DELIMITED
            FIELDS TERMINATED BY '\001'
            COLLECTION ITEMS TERMINATED BY '\002'
            MAP KEYS TERMINATED BY '\003'
    STORED AS SEQUENCEFILE;

In this example, the columns that comprise the table row are specified in a similar way as the definition of types. Comments can be attached both at the column level and at the table level. Additionally, the PARTITIONED BY clause defines the partitioning columns, which are different from the data columns and are actually not stored with the data. The CLUSTERED BY clause specifies which column to use for bucketing as well as how many buckets to create. The delimited row format specifies how the rows are stored in the Hive table. In the case of the delimited format, this specifies how the fields are terminated, how the items within collections (arrays or maps) are terminated, and how the map keys are terminated. STORED AS SEQUENCEFILE indicates that this data is stored in a binary format (using Hadoop SequenceFiles) on HDFS. The values shown for the ROW FORMAT and STORED AS clauses in the above example represent the system defaults.

Table names and column names are case insensitive.

Browsing Tables and Partitions

Code Block
    SHOW TABLES;

To list existing tables in the warehouse; there are many of these, likely more than you want to browse.

Code Block
    SHOW TABLES 'page.*';

To list tables with prefix 'page'. The pattern follows Java regular expression syntax (so the period is a wildcard).

Code Block
    SHOW PARTITIONS page_view;

To list partitions of a table. If the table is not a partitioned table then an error is thrown.

Code Block
    DESCRIBE page_view;

To list columns and column types of the table.

Code Block
    DESCRIBE EXTENDED page_view;

To list columns and all other properties of the table. This prints a lot of information, and not in a pretty format either. Usually used for debugging.

Code Block
   DESCRIBE EXTENDED page_view PARTITION (ds='2008-08-08');

To list columns and all other properties of a partition. This also prints a lot of information, which is usually used for debugging.

Altering Tables

To rename an existing table to a new name. If a table with the new name already exists, an error is returned:

Code Block
    ALTER TABLE old_table_name RENAME TO new_table_name;

To rename the columns of an existing table. Be sure to use the same column types, and to include an entry for each preexisting column:

Code Block
    ALTER TABLE old_table_name REPLACE COLUMNS (col1 TYPE, ...);

To add columns to an existing table:

Code Block
    ALTER TABLE tab1 ADD COLUMNS (c1 INT COMMENT 'a new int column', c2 STRING DEFAULT 'def val');

Note that a change in the schema (such as adding columns) preserves the schema for the old partitions of the table in case it is a partitioned table. All the queries that access these columns and run over the old partitions implicitly return a null value or the specified default values for these columns.

In later versions, the behavior of assuming certain values, as opposed to throwing an error when the column is not found in a particular partition, can be made configurable.

Dropping Tables and Partitions

Dropping tables is fairly trivial. A drop on the table would implicitly drop any indexes (this is a future feature) that have been built on the table. The associated command is:

Code Block
    DROP TABLE pv_users;

To drop a partition, alter the table to drop the partition:

Code Block
    ALTER TABLE pv_users DROP PARTITION (ds='2008-08-08')
Note that any data for this table or partition will be dropped and may not be recoverable.

Loading Data

There are multiple ways to load data into Hive tables. The user can create an external table that points to a specified location within HDFS. In this particular usage, the user can copy a file into the specified location using the HDFS put or copy commands and create a table pointing to this location with all the relevant row format information. Once this is done, the user can transform the data and insert them into any other Hive table. For example, if the file /tmp/pv_2008-06-08.txt contains comma separated page views served on 2008-06-08, and this needs to be loaded into the page_view table in the appropriate partition, the following sequence of commands can achieve this:

Code Block
    CREATE EXTERNAL TABLE page_view_stg(viewTime INT, userid BIGINT,
                    page_url STRING, referrer_url STRING,
                    ip STRING COMMENT 'IP Address of the User',
                    country STRING COMMENT 'country of origination')
    COMMENT 'This is the staging page view table'
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '44' LINES TERMINATED BY '12'
    STORED AS TEXTFILE
    LOCATION '/user/data/staging/page_view';

    hadoop dfs -put /tmp/pv_2008-06-08.txt /user/data/staging/page_view

    FROM page_view_stg pvs
    INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country='US')
    SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip
    WHERE pvs.country = 'US';

Note that this code results in an error due to a LINES TERMINATED BY limitation:

FAILED: SemanticException 6:67 LINES TERMINATED BY only supports newline '\n' right now. Error encountered near token ''12''

See HIVE-5999 and HIVE-11996.

In the example above, nulls are inserted for the array and map types in the destination tables but potentially these can also come from the external table if the proper row formats are specified.

This method is useful if there is already legacy data in HDFS on which the user wants to put some metadata so that the data can be queried and manipulated using Hive.

Additionally, the system also supports syntax that can load the data from a file in the local file system directly into a Hive table where the input data format is the same as the table format. If /tmp/pv_2008-06-08_us.txt already contains the data for US, then we do not need any additional filtering as shown in the previous example. The load in this case can be done using the following syntax:

Code Block
   LOAD DATA LOCAL INPATH '/tmp/pv_2008-06-08_us.txt' INTO TABLE page_view PARTITION(dt='2008-06-08', country='US')

The path argument can take a directory (in which case all the files in the directory are loaded), a single file name, or a wildcard (in which case all the matching files are uploaded). If the argument is a directory, it cannot contain subdirectories. Similarly, the wildcard must match file names only.
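
For instance, a hedged sketch of a wildcard load (the file name pattern is made up):

Code Block
   LOAD DATA LOCAL INPATH '/tmp/pv_2008-06-08_us_*.txt' INTO TABLE page_view PARTITION(dt='2008-06-08', country='US')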

In the case that the input file /tmp/pv_2008-06-08_us.txt is very large, the user may decide to do a parallel load of the data (using tools that are external to Hive). Once the file is in HDFS, the following syntax can be used to load the data into a Hive table:

Code Block
   LOAD DATA INPATH '/user/data/pv_2008-06-08_us.txt' INTO TABLE page_view PARTITION(dt='2008-06-08', country='US')

It is assumed that the array and map fields in the input.txt files are null fields for these examples.

See Hive Data Manipulation Language for more information about loading data into Hive tables, and see External Tables for another example of creating an external table.

Querying and Inserting Data

...