# Hive Data Types

## Column types

These are all the supported column types in Hive:

Primitive types:

- `TINYINT`
- `SMALLINT`
- `INT`
- `BIGINT`
- `BOOLEAN`
- `FLOAT`
- `DOUBLE`
- `STRING`
- `BINARY` (Note: Only available starting with Hive 0.8.0)
- `TIMESTAMP` (Note: Only available starting with Hive 0.8.0)
- `DECIMAL` (Note: Only available starting with Hive 0.11.0)
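As an illustrative sketch (the table and column names here are hypothetical, not part of this page), a table declaring several of these primitive types might look like:

```sql
-- Hypothetical table using several primitive column types.
CREATE TABLE employees (
  id        BIGINT,
  age       TINYINT,
  active    BOOLEAN,
  salary    DOUBLE,
  name      STRING,
  photo     BINARY,      -- Hive 0.8.0+
  hired_at  TIMESTAMP,   -- Hive 0.8.0+
  bonus     DECIMAL      -- Hive 0.11.0+
);
```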

Complex types:

- arrays: `ARRAY<data_type>`
- maps: `MAP<primitive_type, data_type>`
- structs: `STRUCT<col_name : data_type [COMMENT col_comment], ...>`
- union: `UNIONTYPE<data_type, data_type, ...>`
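To sketch how the complex types compose (table and column names are hypothetical), elements of an array are accessed with `[index]`, map values with `[key]`, and struct fields with dot notation:

```sql
-- Hypothetical table with complex column types.
CREATE TABLE page_views (
  tags     ARRAY<STRING>,
  props    MAP<STRING, STRING>,
  address  STRUCT<street:STRING, city:STRING, zip:INT>
);

SELECT tags[0], props['referrer'], address.city FROM page_views;
```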

### Timestamps

The TIMESTAMP type supports the traditional UNIX timestamp with optional nanosecond precision.

Supported conversions:

- Integer numeric types: Interpreted as UNIX timestamp in seconds
- Floating point numeric types: Interpreted as UNIX timestamp in seconds with decimal precision
- Strings: JDBC compliant java.sql.Timestamp format "YYYY-MM-DD HH:MM:SS.fffffffff" (9 decimal place precision)

Timestamps are interpreted to be timezoneless and stored as an offset from the UNIX epoch. Convenience UDFs for conversion to and from timezones are provided (to_utc_timestamp, from_utc_timestamp).

All existing datetime UDFs (month, day, year, hour, etc.) work with the TIMESTAMP data type.
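As a sketch of the conversions and UDFs described above (the `src` sample table is used for illustration, as elsewhere on this page):

```sql
-- A string in the JDBC java.sql.Timestamp format converts to TIMESTAMP:
SELECT CAST('2012-01-02 11:04:05.123456789' AS TIMESTAMP) FROM src LIMIT 1;

-- Convenience UDFs for timezone conversion:
SELECT to_utc_timestamp('2012-01-02 11:04:05', 'America/Los_Angeles'),
       from_utc_timestamp('2012-01-02 19:04:05', 'America/Los_Angeles')
FROM src LIMIT 1;
```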

### Decimal datatype

The Decimal datatype in Hive is based on Java's BigDecimal, which is used for representing immutable arbitrary-precision decimal numbers in Java. All regular number operations (e.g. +, -, *, /) and relevant UDFs (e.g. Floor, Ceil, Round, and many more) have been updated to handle decimal types. You can cast to/from decimal types as you would with other numeric types. The persistence format of the decimal type supports both scientific and non-scientific notation. Therefore, regardless of whether your dataset contains data like 1E+44 (scientific notation) or 4004 (non-scientific notation) or a combination of both, the decimal datatype can be used for it.

With the present implementation, the Decimal datatype has unlimited precision. This has no impact on usability, but it is worth noting for performance reasons. HIVE-3796 is tracking the progress of allowing users to specify scale and precision when creating tables with the decimal datatype.

The Decimal datatype is discussed further in Floating point types below.

### Union types

Union types can at any one point hold exactly one of their specified data types. You can create an instance of this type using the `create_union` UDF:

    CREATE TABLE union_test(foo UNIONTYPE<int, double, array<string>, struct<a:int,b:string>>);

    SELECT foo FROM union_test;

    {0:1}
    {1:2.0}
    {2:["three","four"]}
    {3:{"a":5,"b":"five"}}
    {2:["six","seven"]}
    {3:{"a":8,"b":"eight"}}
    {0:9}
    {1:10.0}

The first part in the deserialized union is the *tag*, which lets us know which part of the union is being used. In this example `0` means the first data_type from the definition, which is an `int`, and so on.

To create a union you have to provide this tag to the `create_union` UDF:

    SELECT create_union(0, key),
           create_union(if(key<100, 0, 1), 2.0, value),
           create_union(1, "a", struct(2, "b"))
    FROM src LIMIT 2;

    {0:"238"} {1:"val_238"} {1:{"col1":2,"col2":"b"}}
    {0:"86"} {0:2.0} {1:{"col1":2,"col2":"b"}}

## Literals

### Integral types

Integral literals are assumed to be INT by default, unless the number exceeds the range of INT in which case it is interpreted as a BIGINT, or if one of the following postfixes is present on the number.

| Type | Postfix | Example |
|---|---|---|
| TINYINT | Y | 100Y |
| SMALLINT | S | 100S |
| BIGINT | L | 100L |
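For instance (querying the `src` sample table used elsewhere on this page), the postfixes force the smaller or larger integral types while the unsuffixed literal defaults to INT:

```sql
SELECT 100Y, 100S, 100, 100L FROM src LIMIT 1;
```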

### String types

String literals can be expressed with either single quotes (') or double quotes ("). Hive uses C-style escaping within the strings.
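A brief sketch of the two quoting styles and C-style escapes (again using the `src` sample table):

```sql
SELECT 'single-quoted', "double-quoted",
       'it\'s a tab:\tdone'   -- \' escapes a quote, \t is a tab
FROM src LIMIT 1;
```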

### Floating point types

Floating point literals are assumed to be DOUBLE. Scientific notation is not yet supported.

#### Decimal types

Decimal literals provide greater precision and range for floating point numbers than the DOUBLE data type. (See Decimal datatype above.) They are needed for use cases in which the precision of a DOUBLE is insufficient, such as financial applications, and for cases that deal with numbers outside the DOUBLE range (approximately -10^308 to 10^308) or very close to zero (-10^-308 to 10^-308). The precision of a Decimal type is practically unlimited.

For a general discussion of the limits of the DOUBLE type, see the Wikipedia article Double-precision floating-point format.

#### Using Decimal types

You can create a table in Hive that uses the Decimal type with the following syntax:

    create table decimal_1 (t decimal);

The table `decimal_1` has a single column of type decimal.

You can read and write values in such a table using either the LazySimpleSerDe or the LazyBinarySerDe. For example:

    alter table decimal_1 set serde 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe';

or:

    alter table decimal_1 set serde 'org.apache.hadoop.hive.serde2.lazy.LazyBinarySerDe';

You can use a cast to convert a Decimal value to any other primitive type such as a BOOLEAN. For example:

    select cast(t as boolean) from decimal_2;

Decimal also supports many mathematical UDFs with the same syntax as used in the case of DOUBLE.

Basic mathematical operations that can use decimal types include:

- Plus (unary `+`)
- Negative (unary `-`)
- Addition (`+`)
- Subtraction (`-`)
- Multiplication (`*`)
- Division (`/`)
- Modulus (`%`)

These rounding functions can also take decimal types:

- Floor
- Ceiling
- Round

But these mathematical UDFs are not currently supported:

- Exp
- Log (Ln)
- Log2
- Log10
- Sqrt

Power(decimal, n) only supports positive integer values for the exponent n.

Casting is supported between decimal values and integer, double, and so on.
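The supported operations above can be sketched in one query (the decimal column `d` and table name `decimal_tab` are hypothetical):

```sql
SELECT d + 1, -d, d * 2, d / 3, d % 2,      -- arithmetic operators
       floor(d), ceiling(d), round(d),      -- rounding functions
       power(d, 3),                         -- exponent must be a positive integer
       cast(d as int), cast(d as double)    -- casting to other numeric types
FROM decimal_tab;
```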

#### Testing Decimal Types

Two new tests have been added as part of the TestCliDriver framework within Hive. They are decimal_1.q and decimal_2.q. Other tests such as udf7.q cover the gamut of UDFs mentioned above.

More tests need to be added that demonstrate failure, or cases where certain types of casts are prevented (for example, casting to date). There is some ambiguity in the round function because the rounding of Decimal does not work exactly as the SQL standard specifies, and therefore it has been omitted from the current work.

For general information about running Hive tests, see How to Contribute to Apache Hive and Hive Developer FAQ.

## Handling of NULL Values

Missing values are represented by the special value NULL. To import data with NULL fields, check the documentation of the SerDe used by the table. (The default text format uses LazySimpleSerDe, which interprets the string `\N` as NULL when importing.)
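A sketch of that default behavior (the file and table names are hypothetical): a field written as `\N` in a text file loaded into a default-format table is read back as NULL.

```sql
-- data.txt contains a line such as:  1,\N
CREATE TABLE null_demo (id INT, val STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
LOAD DATA LOCAL INPATH 'data.txt' INTO TABLE null_demo;

-- LazySimpleSerDe interprets \N as NULL on read:
SELECT * FROM null_demo WHERE val IS NULL;
```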