This document describes user configuration properties (sometimes called parameters, variables, or options) for Hive and notes some of the releases that introduced new properties.
The canonical list of configuration properties is managed in the HiveConf Java class, so refer to the HiveConf.java file for a complete list of configuration properties available in your Hive release.
For information about how to use these configuration properties, see Configuring Hive. That document also describes administrative configuration properties for setting up Hive in the Configuration Variables section. Hive Metastore Administration describes additional configuration properties for the metastore.
As of Hive 0.14.0 (HIVE-7211), a configuration name that starts with "hive." is regarded as a Hive system property. With the hive.conf.validation option set to true (the default), any attempt to set a configuration property that starts with "hive." but is not registered with the Hive system will throw an exception.
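A minimal sketch of the validation behavior (hive.made.up.setting is a hypothetical, unregistered name; hive.exec.parallel is a registered property used only for contrast):
  SET hive.conf.validation=true;   -- the default
  SET hive.exec.parallel=true;     -- accepted: a registered "hive." property
  SET hive.made.up.setting=1;      -- rejected with an exception: unregistered "hive." property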
Chooses the execution engine. Options are mr (MapReduce, default) or tez (Tez execution, for Hadoop 2 only).
See Hive on Tez for more information, and see the Tez section below for Tez configuration properties.
-1
The default number of reduce tasks per job. Typically set to a prime close to the number of available hosts. Ignored when mapred.job.tracker is "local". Hadoop sets this to 1 by default, whereas Hive uses -1 as its default value. By setting this property to -1, Hive will automatically determine the number of reducers.
1000000000
Size per reducer. The default is 1G, that is, if the input size is 10G then 10 reducers will be used.
999
Maximum number of reducers that will be used. If the value specified in the configuration property mapred.reduce.tasks is negative, Hive will use this as the maximum number of reducers when automatically determining the number of reducers.
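A rough sketch of how these settings interact (the property names hive.exec.reducers.bytes.per.reducer and hive.exec.reducers.max are assumptions matching the defaults above):
  SET mapred.reduce.tasks=-1;                           -- let Hive estimate
  SET hive.exec.reducers.bytes.per.reducer=1000000000;  -- 1 GB per reducer
  SET hive.exec.reducers.max=999;
  -- a 10 GB input is then given min(ceil(10 GB / 1 GB), 999) = 10 reducers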
/tmp/hive-${user.name}
Scratch space for Hive jobs.
TextFile
Default file format for CREATE TABLE statement. Options are TextFile, SequenceFile, RCfile, and ORC. Users can explicitly say CREATE TABLE ... STORED AS TEXTFILE|SEQUENCEFILE|RCFILE|ORC to override.
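A minimal sketch (assuming this entry is the hive.default.fileformat property; the tables are hypothetical):
  SET hive.default.fileformat=ORC;
  CREATE TABLE t1 (id INT, name STRING);            -- stored as ORC by default
  CREATE TABLE t2 (id INT) STORED AS SEQUENCEFILE;  -- explicit override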
true
Whether to check file format or not when loading data files.
TextFile
File format to use for a query's intermediate results. Options are TextFile, SequenceFile, and RCfile. Set to SequenceFile if any columns are string type and contain new-line characters (HIVE-1608, HIVE-3065).
If turned on, splits generated by ORC will include metadata about the stripes in the file. This data is read remotely (from the client or HiveServer2 machine) and sent to all the tasks.
Cache size for keeping meta information about ORC splits cached in the client.
How many threads ORC should use to create splits in parallel.
Use zerocopy reads with ORC. (This requires Hadoop 2.3 or later.)
false
If ORC reader encounters corrupt data, this value will be used to determine whether to skip the corrupt data or throw an exception. The default behavior is to throw an exception.
false
Whether to combine small input files so that fewer mappers are spawned.
true in Hive 0.3 and later; false in Hive 0.2
Whether to use map-side aggregation in Hive Group By queries.
false
Whether there is skew in data to optimize group by queries.
100000
Number of rows after which the size of the grouping keys/aggregation classes is checked.
30
Whether a new map-reduce job should be launched for grouping sets/rollups/cubes.
For a query like "select a, b, c, count(1) from T group by a, b, c with rollup;" four rows are created per row: (a, b, c), (a, b, null), (a, null, null), (null, null, null). This can lead to explosion across the map-reduce boundary if the cardinality of T is very high, and map-side aggregation does not do a very good job.
This parameter decides whether Hive should add an additional map-reduce job. If the grouping set cardinality (4 in the example above) is more than this value, a new MR job is added under the assumption that the original "group by" will reduce the data size.
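A minimal sketch (assuming this entry is the hive.new.job.grouping.set.cardinality property; the query is the one from the example above):
  SET hive.new.job.grouping.set.cardinality=30;
  -- grouping set cardinality is 4 here, which is <= 30, so no extra MR job is added:
  SELECT a, b, c, count(1) FROM T GROUP BY a, b, c WITH ROLLUP;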
0
For local mode, memory of the mappers/reducers.
0.9
The maximum memory to be used by the map-side group aggregation hash table. If the memory usage is higher than this number, a flush of the data is forced.
0.5
Portion of total memory to be used by map-side group aggregation hash table.
0.5
Hash aggregation will be turned off if the ratio between hash table size and input rows is bigger than this number. Set to 1 to make sure hash aggregation is never turned off.
true
Whether to enable the bucketed group by from bucketed partitions/tables.
false
Whether to optimize multi group by query to generate a single M/R job plan. If the multi group by query has common group by keys, it will be optimized to generate a single M/R job.
true
Whether to optimize multi group by query to generate a single M/R job plan. If the multi group by query has common group by keys, it will be optimized to generate a single M/R job.
true
Whether to enable column pruner. (This configuration property was removed in release 0.13.0.)
false
Whether to enable automatic use of indexes.
Note: See Indexing for more configuration properties related to Hive indexes.
true
Whether to enable predicate pushdown.
true
Whether to push predicates down into storage handlers. Ignored when hive.optimize.ppd is false.
true
Whether to transitively replicate predicate filters over equijoin conditions.
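A minimal sketch of predicate pushdown with transitive replication (the tables are hypothetical; hive.optimize.ppd is referenced above):
  SET hive.optimize.ppd=true;
  -- the filter t1.key = 10 also reaches t2 through the equijoin condition:
  SELECT * FROM t1 JOIN t2 ON (t1.key = t2.key) WHERE t1.key = 10;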
1000
How many rows in the right-most join operand Hive should buffer before emitting the join result.
25000
How many rows in the joining tables (except the streaming table) should be cached in memory.
100
How many values in each key in the map-joined table should be cached in memory.
0.3
Portion of total memory to be used by map-side group aggregation hash table, when this group by is followed by map join.
25000000
The threshold for the input file size of the small tables; if the file size is smaller than this threshold, Hive will try to convert the common join into a map join.
Specifies how much memory the local task can use to hold the key/value pairs in the in-memory hash table. If the local task's memory usage exceeds this number, the local task is aborted, meaning the data of the small table is too large to be held in memory.
0.55
Specifies how much memory the local task can use to hold the key/value pairs in the in-memory hash table when this map join is followed by a group by. If the local task's memory usage exceeds this number, the local task is aborted, meaning the data of the small table is too large to be held in memory.
Specifies after how many processed rows the memory usage is checked.
10000
How many rows with the same key value should be cached in memory per sort-merge-bucket joined table.
Whether a MapJoin hashtable should use optimized (size-wise) keys, allowing the table to take less memory. Depending on the key, memory savings for the entire table can be 5-15% or so.
Whether a MapJoin hashtable should deserialize values on demand. Depending on how many values in the table the join will actually touch, it can save a lot of memory by not creating objects for rows that are not needed. If all rows are needed, obviously there's no gain.
false
Whether to enable skew join optimization. (Also see hive.optimize.skewjoin.compiletime.)
100000
Determines whether a key is a skew key in a join. If more than the specified number of rows with the same key are seen in the join operator, the key is treated as a skew join key.
10000
Determines the number of map tasks used in the follow-up map join job for a skew join. It should be used together with hive.skewjoin.mapjoin.min.split to perform fine-grained control.
33554432
Determines the maximum number of map tasks used in the follow-up map join job for a skew join, by specifying the minimum split size. It should be used together with hive.skewjoin.mapjoin.map.tasks to perform fine-grained control.
false
Whether to create a separate plan for skewed keys for the tables in the join. This is based on the skewed keys stored in the metadata. At compile time, the plan is broken into different joins: one for the skewed keys, and the other for the remaining keys. And then, a union is performed for the two joins generated above. So unless the same skewed key is present in both the joined tables, the join for the skewed key will be performed as a map-side join.
The main difference between this parameter and hive.optimize.skewjoin is that this parameter uses the skew information stored in the metastore to optimize the plan at compile time itself. If there is no skew information in the metadata, this parameter has no effect.
Both hive.optimize.skewjoin.compiletime and hive.optimize.skewjoin should be set to true. (Ideally, hive.optimize.skewjoin should be renamed as hive.optimize.skewjoin.runtime, but for backward compatibility that has not been done.)
If the skew information is correctly stored in the metadata, hive.optimize.skewjoin.compiletime will change the query plan to take care of it, and hive.optimize.skewjoin will be a no-op.
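A minimal sketch of the compile-time setup (the table and the skewed value are hypothetical; SKEWED BY is standard Hive DDL for recording skew information in the metastore):
  SET hive.optimize.skewjoin.compiletime=true;
  SET hive.optimize.skewjoin=true;
  CREATE TABLE t1 (key STRING, value STRING)
    SKEWED BY (key) ON ('hot_key');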
false
Whether to remove the union and push the operators between union and the filesink above union. This avoids an extra scan of the output by union. This is independently useful for union queries, and especially useful when hive.optimize.skewjoin.compiletime is set to true, since an extra union is inserted.
The merge is triggered if either hive.merge.mapfiles or hive.merge.mapredfiles is set to true. If the user has set hive.merge.mapfiles to true and hive.merge.mapredfiles to false, the idea is that the number of reducers is small, so the number of files is small anyway. However, with this optimization the number of files may increase by a big margin, so the merge is performed aggressively.
false
Whether the version of Hadoop which is running supports sub-directories for tables/partitions. Many Hive optimizations can be applied if the Hadoop version supports sub-directories for tables/partitions. This support was added by MAPREDUCE-1501.
nonstrict
The mode in which the Hive operations are being performed. In strict mode, some risky queries are not allowed to run.
100000
Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task). This prevents runaway scripts from filling log partitions to capacity.
false
When enabled, this option allows a user script to exit successfully without consuming all the data from the standard input.
HIVE_SCRIPT_OPERATOR_ID
Name of the environment variable that holds the unique script operator ID in the user's transform function (the custom mapper/reducer that the user has specified in the query).
false
This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) are compressed. The compression codec and other options are determined from the Hadoop configuration variables mapred.output.compress*.
false
This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed. The compression codec and other options are determined from Hadoop configuration variables mapred.output.compress*.
false
Whether to execute jobs in parallel.
8
How many jobs at most can be executed in parallel.
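A minimal sketch (the property names hive.exec.parallel and hive.exec.parallel.thread.number are assumptions matching the two entries above):
  SET hive.exec.parallel=true;
  SET hive.exec.parallel.thread.number=8;  -- at most 8 jobs run concurrently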
false
Whether to provide the row offset virtual column.
false
Whether Hive should periodically update task progress counters during execution. Enabling this allows task progress to be monitored more closely in the job tracker, but may impose a performance penalty. This flag is automatically set to true for jobs with hive.exec.dynamic.partition set to true.
HIVE
Counter group name for counters used during query execution. The counter group is used for internal Hive variables (CREATED_FILE, FATAL_ERROR, and so on).
Comma-separated list of pre-execution hooks to be invoked for each statement. A pre-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
Comma-separated list of post-execution hooks to be invoked for each statement. A post-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
Comma-separated list of on-failure hooks to be invoked for each statement. An on-failure hook is specified as the name of Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
true
Merge small files at the end of a map-only job.
false
Merge small files at the end of a map-reduce job.
true
Try to generate a map-only job for merging files if CombineHiveInputFormat is supported.
256000000
Size of merged files at the end of the job.
16000000
When the average output file size of a job is less than this number, Hive will start an additional map-reduce job to merge the output files into bigger files. This is only done for map-only jobs if hive.merge.mapfiles is true, and for map-reduce jobs if hive.merge.mapredfiles is true.
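A minimal sketch combining the merge settings (hive.merge.mapfiles and hive.merge.mapredfiles appear in the text; the names hive.merge.size.per.task and hive.merge.smallfiles.avgsize are assumptions matching the two size entries above):
  SET hive.merge.mapfiles=true;
  SET hive.merge.mapredfiles=true;
  SET hive.merge.size.per.task=256000000;      -- target size of merged files
  SET hive.merge.smallfiles.avgsize=16000000;  -- merging kicks in below this average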
1000
Send a heartbeat after this interval – used by mapjoin and filter operators.
false in 0.10.0; true in 0.11.0 and later (HIVE-3297)
Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size. (Note that hive-default.xml.template incorrectly gives the default as false in Hive 0.11.0 through 0.13.1.)
Whether Hive enables the optimization about converting common join into mapjoin based on the input file size. If this parameter is on, and the sum of size for n-1 of the tables/partitions for an n-way join is smaller than the size specified by hive.auto.convert.join.noconditionaltask.size, the join is directly converted to a mapjoin (there is no conditional task).
10000000
If hive.auto.convert.join.noconditionaltask is off, this parameter does not take effect. However, if it is on, and the sum of size for n-1 of the tables/partitions for an n-way join is smaller than this size, the join is directly converted to a mapjoin (there is no conditional task). The default is 10MB.
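A minimal sketch (hive.auto.convert.join is assumed to be the name of the flag described above; the other two names appear in the text):
  SET hive.auto.convert.join=true;
  SET hive.auto.convert.join.noconditionaltask=true;
  SET hive.auto.convert.join.noconditionaltask.size=10000000;  -- 10 MB
  -- an n-way join whose n-1 small sides total under 10 MB becomes a mapjoin outright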
true
For conditional joins, if the input stream from a small alias can be directly applied to the join operator without filtering or projection, the alias need not be pre-staged in the distributed cache via a mapred local task. Currently, this does not work with vectorization or the Tez execution engine.
false
Whether the Hive Transform/Map/Reduce clause should automatically send progress information to the TaskTracker to avoid the task getting killed because of inactivity. Hive sends progress information when the script is outputting to stderr. This option removes the need to periodically produce stderr messages, but users should be cautious because this may prevent the TaskTracker from killing scripts with infinite loops.
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
The default SerDe for transmitting input data to and reading output data from the user scripts.
org.apache.hadoop.hive.ql.exec.TextRecordReader
The default record reader for reading data from the user scripts.
org.apache.hadoop.hive.ql.exec.TextRecordWriter
The default record writer for writing data to the user scripts.
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
The default input format. Set this to HiveInputFormat if you encounter problems with CombineHiveInputFormat.
false
Whether Hive should automatically send progress information to TaskTracker when using UDTF's to prevent the task getting killed because of inactivity. Users should be cautious because this may prevent TaskTracker from killing tasks with infinite loops.
true
Whether speculative execution for reducers should be turned on.
1000
The interval at which to poll the JobTracker for the counters of the running job. The smaller it is, the more load there will be on the JobTracker; the higher it is, the less granular the pulled counter data will be.
false
Whether bucketing is enforced. If true, while inserting into the table, bucketing is enforced.
false
Whether sorting is enforced. If true, while inserting into the table, sorting is enforced.
true
Remove extra map-reduce jobs if the data is already clustered by the same key which needs to be used again. This should always be set to true. Since it is a new feature, it has been made configurable.
false
Whether or not to allow dynamic partitions in DML/DDL.
strict
In strict mode, the user must specify at least one static partition in case the user accidentally overwrites all partitions.
1000
Maximum number of dynamic partitions allowed to be created in total.
100
Maximum number of dynamic partitions allowed to be created in each mapper/reducer node.
100000
Maximum number of HDFS files created by all mappers/reducers in a MapReduce job.
_HIVE_DEFAULT_PARTITION_
The default partition name used when the dynamic partition column value is null/empty string or any other value that cannot be escaped. This value must not contain any character that is special in an HDFS URI (e.g., ':', '%', '/', etc.). The user has to be aware that dynamic partition values should not contain this value, to avoid confusion.
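A minimal sketch of a dynamic partition insert (the tables are hypothetical; the names hive.exec.dynamic.partition and hive.exec.dynamic.partition.mode are assumptions matching the entries above):
  SET hive.exec.dynamic.partition=true;
  SET hive.exec.dynamic.partition.mode=nonstrict;
  INSERT OVERWRITE TABLE sales PARTITION (dt)
  SELECT id, amount, dt FROM staging;
  -- rows whose dt is NULL land in the _HIVE_DEFAULT_PARTITION_ partition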
org.apache.hadoop.hive.serde2.DelimitedJSONSerDe
The SerDe used by FetchTask to serialize the fetch output.
false
Let Hive determine whether to run in local mode automatically.
true
Do not report an error if DROP TABLE/VIEW specifies a non-existent table/view.
true
If a job fails, whether to provide a link in the CLI to the task with the most failures, along with debugging hints if applicable.
0
How long to run autoprogressor for the script/UDTF operators (in seconds). Set to 0 for forever.
Default property values for newly created tables.
true
This enables substitution using syntax like ${var}, ${system:var}, and ${env:var}.
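A minimal sketch of variable substitution (the table is hypothetical; the hivevar namespace is standard Hive CLI usage):
  SET hivevar:region=us_west;
  SELECT * FROM sales WHERE region = '${hivevar:region}';  -- expands to 'us_west'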
false
Whether to throw an exception if dynamic partition insert generates empty results.
hdfs,pfile
A comma separated list of acceptable URI schemes for import and export.
100000
When trying a smaller subset of data for simple LIMIT, the minimum size guaranteed for each row.
10
When trying a smaller subset of data for simple LIMIT, maximum number of files we can sample.
false
Whether to enable the optimization of trying a smaller subset of data first for simple LIMIT.
50000
Maximum number of rows allowed for a smaller subset of data for simple LIMIT, if it is a fetch query. Insert queries are not restricted by this limit.
false
Whether to rework the mapred work or not. This was first introduced by SymlinkTextInputFormat to replace symlink files with real paths at compile time.
0
A number used for percentage sampling. By changing this number, the user will change the subset of data sampled.
A list of I/O exception handler class names. This is used to construct a list of exception handlers to handle exceptions thrown by record readers.
_c
String used as a prefix when auto generating column alias. By default the prefix label will be appended with a column position number to form the column alias. Auto generation would happen if an aggregate function is used in a select clause without an explicit alias.
false
Whether to include function name in the column alias auto generated by Hive.
org.apache.hadoop.hive.ql.log.PerfLogger
The class responsible for logging client-side performance metrics. Must be a subclass of org.apache.hadoop.hive.ql.log.PerfLogger.
false
Whether to clean up the Hive scratch directory while starting the Hive server.
String used as a file extension for output files. If not set, defaults to the codec extension for text files (e.g. ".gz"), or no extension otherwise.
false
Whether to insert into multilevel directories, as in "insert directory '/HIVEFT25686/chinna/' from table".
true
Make column names unique in the result set by qualifying column names with table alias if needed. Table alias will be added to column names for queries of type "select *" or if query explicitly uses table alias "select r1.x..".
column
Whether to use quoted identifiers. Value can be "none" or "column".
column: Column names can contain any Unicode character. Any column name that is specified within backticks (`) is treated literally. Within a backtick string, use double backticks (``) to represent a backtick character.
none: Only alphanumeric and underscore characters are valid in identifiers. Backticked names are interpreted as regular expressions. This is also the behavior in releases prior to 0.13.0.
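A minimal sketch (assuming this entry is the hive.support.quoted.identifiers property; the table is hypothetical):
  SET hive.support.quoted.identifiers=column;
  CREATE TABLE web_logs (`user-agent` STRING, `http.status` INT);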
true
In older Hive versions (0.10 and earlier) no distinction was made between partition columns or non-partition columns while displaying columns in DESCRIBE TABLE. From version 0.12 onwards, they are displayed separately. This flag will let you get the old behavior, if desired. See test-case in patch for HIVE-6689.
-1
To protect the cluster, this controls how many partitions can be scanned for each partitioned table. The default value "-1" means no limit. The limit on partitions does not affect metadata-only queries.
0002
Obsolete: The dfs.umask value for the Hive-created folders.
true
Controls whether to connect to the remote metastore server or open a new metastore server in the Hive Client JVM. As of Hive 0.10 this is no longer used. Instead, if hive.metastore.uris is set then remote mode is assumed; otherwise local.
jdbc:derby:;databaseName=metastore_db;create=true
JDBC connect string for a JDBC metastore.
org.apache.derby.jdbc.EmbeddedDriver
Driver class name for a JDBC metastore.
org.datanucleus.jdo.JDOPersistenceManagerFactory
Class implementing the JDO PersistenceManagerFactory.
true
Detaches all objects from session so that they can be used after transaction is committed.
true
Reads outside of transactions.
APP
Username to use against metastore database.
mine
Password to use against metastore database.
true
Set this to true if multiple threads access metastore through JDO concurrently.
DBCP in Hive 0.7 to 0.11; BoneCP in 0.12 and later
Uses a BoneCP connection pool for JDBC metastore in release 0.12 and later (HIVE-4807), or a DBCP connection pool in releases 0.7 to 0.11.
false
Validates existing schema against code. Turn this on if you want to verify existing schema.
false
Validates existing schema against code. Turn this on if you want to verify existing schema.
false
Validates existing schema against code. Turn this on if you want to verify existing schema.
rdbms
Metadata store type.
true unless hive.metastore.schema.verification is true
Creates the necessary schema on startup if one doesn't exist. Set this to false after creating it once.
In Hive 0.12.0 and later releases, datanucleus.autoCreateSchema is disabled if hive.metastore.schema.verification is true.
checked
Throw exception if metadata tables are incorrect.
read-committed
Default transaction isolation level for identity generation.
false
This parameter does nothing.
Warning note: For most installations, Hive should not enable the DataNucleus L2 cache, since this can cause correctness issues. Thus, some people set this parameter to false assuming that this disables the cache – unfortunately, it does not. To actually disable the cache, set datanucleus.cache.level2.type to "none".
none in Hive 0.9 and later; SOFT in Hive 0.7 to 0.8.1
NONE = disable the datanucleus level 2 cache, SOFT = soft reference based cache, WEAK = weak reference based cache.
Warning note: For most Hive installations, enabling the datanucleus cache can lead to correctness issues, and is dangerous. This should be left as "none".
datanucleus
Name of the identifier factory to use when generating table/column names etc. 'datanucleus' is used for backward compatibility.
LOG
Defines what happens when plugin bundles are found and are duplicated: EXCEPTION, LOG, or NONE.
/user/hive/warehouse
Location of default database for the warehouse.
false
Set this to true if table directories should inherit the permissions of the warehouse or database directory instead of being created with permissions derived from dfs umask. (This configuration property replaced hive.files.umask.value before Hive 0.9.0 was released.)
false in Hive 0.8.1 through 0.13.0; true starting in Hive 0.14.0
In unsecure mode, true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it's best effort: if the client sets it to true and the server sets it to false, the client setting will be ignored.
List of comma-separated listeners for metastore events.
List of comma-separated keys occurring in table properties which will get inherited to newly created partitions. * implies all the keys will get inherited.
List of comma-separated listeners for the end of metastore functions.
0
Duration after which events expire from events table (in seconds).
0
Frequency at which the timer task runs to purge expired events in the metastore (in seconds).
5
Number of retries while opening a connection to metastore.
1
Number of seconds for the client to wait between consecutive connection attempts.
20
MetaStore Client socket timeout in seconds.
org.apache.hadoop.hive.metastore.ObjectStore
Name of the class that implements the org.apache.hadoop.hive.metastore.rawstore interface. This class is used for the storage and retrieval of raw metadata objects such as tables and databases.
300
Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch. The higher the number, the fewer round trips are needed to the Hive metastore server, but it may also cause higher memory requirements on the client side.
Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used.
1
The number of times to retry a metastore call if there is a connection error.
1000
The number of milliseconds between metastore retry attempts.
200
Minimum number of worker threads in the Thrift server's pool.
100000
Maximum number of worker threads in the Thrift server's pool.
true
Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections.
false
If true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos.
The path to the Kerberos Keytab file containing the metastore thrift server's service principal.
hive-metastore/_HOST@EXAMPLE.COM
The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name.
Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order
List of comma-separated metastore object types that should be pinned in the cache.
false
Should the metastore do authorization checks against the underlying storage for operations like drop-partition (disallow the drop-partition if the user in question doesn't have permissions to delete the corresponding directory on the storage).
false
Enforce metastore schema version consistency.
True: Verify that version information stored in metastore matches with one from Hive jars. Also disable automatic schema migration attempt (see datanucleus.autoCreateSchema). Users are required to manually migrate schema after Hive upgrade which ensures proper metastore schema migration.
False: Warn if the version information stored in metastore doesn't match with one from Hive jars.
For more information, see Metastore Schema Consistency and Upgrades.
false
Allow JDO query pushdown for integral partition columns in metastore. Off by default. This improves metastore performance for integral columns, especially if there's a large number of partitions. However, it doesn't work correctly with integral values that are not normalized (for example, if they have leading zeroes like 0012). If metastore direct SQL is enabled and works (hive.metastore.try.direct.sql), this optimization is also irrelevant.
true
Whether the Hive metastore should try to use direct SQL queries instead of the DataNucleus for certain read paths. This can improve metastore performance when fetching many partitions or column statistics by orders of magnitude; however, it is not guaranteed to work on all RDBMS-es and all versions. In case of SQL failures, the metastore will fall back to the DataNucleus, so it's safe even if SQL doesn't work for all queries on your datastore. If all SQL queries fail (for example, your metastore is backed by MongoDB), you might want to disable this to save the try-and-fall-back cost.
true
Same as hive.metastore.try.direct.sql, for read statements within a transaction that modifies metastore data. Due to non-standard behavior in Postgres, if a direct SQL select query has incorrect syntax or something similar inside a transaction, the entire transaction will fail and fall-back to DataNucleus will not be possible. You should disable the usage of direct SQL inside transactions if that happens in your case.
HiveServer2 was added in Hive 0.11.0 with HIVE-2935. For more information see Setting Up HiveServer2 and HiveServer2 Clients.
10000
Port number of HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT.
localhost
Bind host on which to run the HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST.
5
Minimum number of Thrift worker threads.
100 in Hive 0.11.0, 500 in Hive 0.12.0 and later
Maximum number of Thrift worker threads.
NONE
Client authentication types.
NONE: no authentication check
LDAP: LDAP/AD based authentication
KERBEROS: Kerberos/GSSAPI authentication
CUSTOM: Custom authentication provider (use with property hive.server2.custom.authentication.class)
PAM: Pluggable authentication module (added in Hive 0.13.0 with HIVE-6466)
Kerberos keytab file for server principal.
Kerberos server principal.
Custom authentication class. Used when the property hive.server2.authentication is set to 'CUSTOM'. The provided class must be a proper implementation of the interface org.apache.hive.service.auth.PasswdAuthenticationProvider. HiveServer2 will call its Authenticate(user, password) method to authenticate requests. The implementation may optionally extend Hadoop's org.apache.hadoop.conf.Configured class to grab Hive's Configuration object.
Setting this property to true will have HiveServer2 execute Hive operations as the user making the calls to it.
LDAP connection URL.
LDAP base DN (distinguished name).
LDAP domain.
binary
Server transport mode. Value can be "binary" or "http".
10001
Port number when in HTTP mode.
cliservice
Path component of URL endpoint when in HTTP mode.
5
Minimum number of worker threads when in HTTP mode.
500
Maximum number of worker threads when in HTTP mode.
auth
Sasl QOP value; set it to one of the following values to enable higher levels of protection for HiveServer2 communication with clients.
"auth" – authentication only (default)
"auth-int" – authentication plus integrity protection
"auth-conf" – authentication plus integrity and confidentiality protection
Note that hadoop.rpc.protection being set to a higher level than HiveServer2 does not make sense in most situations. HiveServer2 ignores hadoop.rpc.protection in favor of hive.server2.thrift.sasl.qop.
This is applicable only if HiveServer2 is configured to use Kerberos authentication.
50 in Hive 0.12.0, 100 in Hive 0.13.0 and later
Number of threads in the async thread pool for HiveServer2.
10
Time (in seconds) for which HiveServer2 shutdown will wait for async threads to terminate.
CLASSIC
This setting reflects how HiveServer2 will report the table types for JDBC and other client implementations that retrieve the available tables and supported table types.
HIVE: Exposes Hive's native table types like MANAGED_TABLE, EXTERNAL_TABLE, VIRTUAL_VIEW
CLASSIC: More generic types like TABLE and VIEW
Session-level hook for HiveServer2.
30
The number of times HiveServer2 will attempt to start before exiting, sleeping 60 seconds between retries. The default of 30 will keep trying for 30 minutes.
100
Size of the wait queue for async thread pool in HiveServer2. After hitting this limit, the async thread pool will reject new requests.
10
Time (in seconds) that an idle HiveServer2 async thread (from the thread pool) will wait for a new task to arrive before terminating.
5000L
Time in milliseconds that HiveServer2 will wait before responding to asynchronous calls that use long polling.
true
Allow alternate user to be specified as part of HiveServer2 open connection request.
Keytab file for SPNEGO principal, optional. A typical value would look like /etc/security/keytabs/spnego.service.keytab. This keytab would be used by HiveServer2 when Kerberos security is enabled and HTTP transport mode is used. This needs to be set only if SPNEGO is to be used in authentication.
SPNEGO authentication would be honored only if valid hive.server2.authentication.spnego.principal and hive.server2.authentication.spnego.keytab are specified.
SPNEGO authentication would be honored only if valid hive.server2.authentication.spnego.principal and hive.server2.authentication.spnego.keytab are specified.
SPNEGO service principal, optional. A typical value would look like HTTP/_HOST@EXAMPLE.COM. The SPNEGO service principal would be used by HiveServer2 when Kerberos security is enabled and HTTP transport mode is used. This needs to be set only if SPNEGO is to be used in authentication.
List of the underlying PAM services that should be used when hive.server2.authentication type is PAM. A file with the same name must exist in /etc/pam.d.
false
Set this to true for using SSL encryption in HiveServer2.
SSL certificate keystore location.
SSL certificate keystore password.
A list of comma separated values corresponding to YARN queues of the same name. When HiveServer2 is launched in Tez mode, this configuration needs to be set for multiple Tez sessions to run in parallel on the cluster.
1
A positive integer that determines the number of Tez sessions that should be launched on each of the queues specified by hive.server2.tez.default.queues. Determines the parallelism on each queue.
false
This flag is used in HiveServer2 to enable a user to use HiveServer2 without turning on Tez for HiveServer2. The user could potentially want to run queries over Tez without the pool of sessions.
Apache Tez was added in Hive 0.13.0 (HIVE-4660 and HIVE-6098). For information see the design document Hive on Tez.
Besides the configuration properties listed in this section, some properties in other sections are also related to Tez:
This is the location that Hive in Tez mode will look for to find a site-wide installed Hive instance. See hive.user.install.directory for the default behavior.
If Hive (in Tez mode only) cannot find a usable Hive jar in hive.jar.directory, it will upload the Hive jar to <hive.user.install.directory>/<user_name> and use it to run queries.
Whether to generate the splits locally or in the ApplicationMaster (Tez only).
Whether to send the query plan via local resource or RPC.
Enables container prewarm for Tez (Hadoop 2 only).
Controls the number of containers to prewarm for Tez (Hadoop 2 only).
Merge small files at the end of a Tez DAG.
org.apache.hadoop.hive.ql.io.HiveInputFormat
The default input format for Tez. Tez groups splits in the AM (ApplicationMaster).
By default Tez will spawn containers of the size of a mapper. This can be used to override the default.
By default Tez will use the Java options from map tasks. This can be used to override the default.
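A minimal sketch (hive.tez.container.size is an assumed name for the container-size entry above; hive.tez.java.opts is referenced in the log-level entry below; the values are illustrative):
  SET hive.execution.engine=tez;
  SET hive.tez.container.size=4096;    -- container size in MB
  SET hive.tez.java.opts=-Xmx3276m;    -- roughly 80% of the container size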
false
Whether joins can be automatically converted to bucket map joins in Hive when Tez is used as the execution engine (hive.execution.engine is set to "tez").
INFO
The log level to use for tasks executing as part of the DAG. Used only if hive.tez.java.opts is used to configure Java options.
5000
Time in milliseconds to wait for another thread to localize the same resource for Hive-Tez.
5
The number of attempts waiting for localizing a resource in Hive-Tez.
Indexing was added in Hive 0.7.0 with HIVE-417, and bitmap indexing was added in Hive 0.8.0 with HIVE-1803. For more information see Indexing.
false
When true, the HDFS location stored in the index file will be ignored at runtime. If the data got moved or the name of the cluster got changed, the index data should still be usable.
false
Whether to enable automatic use of indexes.
5368709120
Minimum size (in bytes) of the inputs on which a compact index is automatically used.
-1
Maximum size (in bytes) of the inputs on which a compact index is automatically used. A negative number is equivalent to infinity.
10737418240
The maximum number of bytes that a query using the compact index can read. Negative value is equivalent to infinity.
10000000
The maximum number of index entries to read during a query that uses the compact index. Negative value is equivalent to infinity.
true
If this is set to true, Hive will throw an error when doing ALTER TABLE tbl_name [partSpec] CONCATENATE on a table/partition that has indexes on it. The reason to set this to true is that it helps users avoid handling all the index drop, recreation, and rebuild work. This is very helpful for tables with thousands of partitions.
false
true
Whether or not to use a binary search to find the entries in an index table that match the filter, where possible.
See Statistics in Hive for information about how to collect and use Hive table statistics.
jdbc:derby (Hive 0.7 to 0.12) or fs (Hive 0.13 and later)
Hive 0.7 to 0.12: The default database that stores temporary Hive statistics. Other options are jdbc:mysql and hbase as defined in StatsSetupConst.java.
Hive 0.13 and later: The storage that stores temporary Hive statistics. In FS based statistics collection, each task writes statistics it has collected in a file on the filesystem, which will be aggregated after the job has finished. Supported values are fs (filesystem), jdbc(:.*), hbase, counter, and custom (HIVE-6500).
true
A flag to gather statistics automatically during the INSERT OVERWRITE command.
org.apache.derby.jdbc.EmbeddedDriver
The JDBC driver for the database that stores temporary Hive statistics.
jdbc:derby:;databaseName=TempStatsStore;create=true
The default connection string for the database that stores temporary Hive statistics.
The Java class (implementing the StatsPublisher interface) that is used by default if hive.stats.dbclass is not JDBC or HBase (Hive 0.12.0 and earlier), or if hive.stats.dbclass is a custom type (Hive 0.13.0 and later: HIVE-4632).
The Java class (implementing the StatsAggregator interface) that is used by default if hive.stats.dbclass is not JDBC or HBase (Hive 0.12.0 and earlier), or if hive.stats.dbclass is a custom type (Hive 0.13.0 and later: HIVE-4632).
30
Timeout value (number of seconds) used by JDBC connection and statements.
false
If this is set to true then the metastore statistics will be updated only if all types of statistics (number of rows, number of files, number of bytes, etc.) are available. Otherwise metastore statistics are updated in a best effort fashion with whatever are available.
0
Maximum number of retries when the stats publisher/aggregator gets an exception updating the intermediate database. Default is no retries on failures.
3000
The base waiting window (in milliseconds) before the next retry. The actual wait time is calculated by baseWindow * failures + baseWindow * (failures + 1) * (random number between 0.0 and 1.0).
true
If true, the raw data size is collected when analyzing tables.
Comma-separated list of statistics publishers to be invoked on counters on each job. A client stats publisher is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface.
Subset of counters that should be of interest for hive.client.stats.publishers (when one wants to limit their publishing). Non-display names should be used.
Whether queries will fail because statistics cannot be collected completely accurately. If this is set to true, reading/writing from/into a partition or unpartitioned table may fail because the statistics could not be computed accurately. If it is set to false, the operation will succeed.
In Hive 0.13.0 and later, if hive.stats.reliable is false and statistics could not be computed correctly, the operation can still succeed and update the statistics but it sets a partition property "areStatsAccurate" to false. If the application needs accurate statistics, they can then be obtained in the background.
Standard error allowed for NDV estimates, expressed in percentage. This provides a tradeoff between accuracy and compute cost. A lower value for the error indicates higher accuracy and a higher compute cost. (NDV means number of distinct values.)
false
Whether join and group by keys on tables are derived and maintained in the QueryPlan. This is useful to identify how tables are accessed and to determine if they should be bucketed.
false
Whether column accesses are tracked in the QueryPlan. This is useful to identify how tables are accessed and to determine if there are wasted columns that can be trimmed.
200 (Hive 0.11 and 0.12) or 150 (Hive 0.13 and later)
Determines if, when the prefix of the key used for intermediate statistics collection exceeds a certain length, a hash of the key is used instead. If the value < 0 then hashing is never used; if the value >= 0 then hashing is used only when the key prefix's length exceeds that value. The key prefix is defined as everything preceding the task ID in the key. For counter type statistics, it is capped by mapreduce.job.counters.group.name.max, which is 128 by default.
24
Reserved length for the postfix of the statistics key. Currently only meaningful for counter type statistics, which should keep the length of the full statistics key smaller than the maximum length configured by hive.stats.key.prefix.max.length. For counter type statistics, it should be bigger than the length of the LB spec, if one exists.
100
To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), average row size is multiplied with the total number of rows coming out of each operator. Average row size is computed from average column size of all columns in the row. In the absence of column statistics, for variable length columns (like string, bytes, etc.) this value will be used. For fixed length columns their corresponding Java equivalent sizes are used (float – 4 bytes, double – 8 bytes, etc.).
10
To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), average row size is multiplied with the total number of rows coming out of each operator. Average row size is computed from average column size of all columns in the row. In the absence of column statistics and for variable length complex columns like list, the average number of entries/values can be specified using this configuration property.
10
To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.), average row size is multiplied with the total number of rows coming out of each operator. Average row size is computed from average column size of all columns in the row. In the absence of column statistics and for variable length complex columns like map, the average number of entries/values can be specified using this configuration property.
1
The Hive/Tez optimizer estimates the data size flowing through each of the operators. For the GROUPBY operator, to accurately compute the data size map-side parallelism needs to be known. By default, this value is set to 1 since the optimizer is not aware of the number of mappers during compile-time. This Hive configuration property can be used to specify the number of mappers for data size computation of the GROUPBY operator.
true
Annotation of the operator tree with statistics information requires partition level basic statistics like number of rows, data size and file size. Partition statistics are fetched from the metastore. Fetching partition statistics for each needed partition can be expensive when the number of partitions is high. This flag can be used to disable fetching of partition statistics from the metastore. When this flag is disabled, Hive will make calls to the filesystem to get file sizes and will estimate the number of rows from the row schema.
false
Annotation of the operator tree with statistics information requires column statistics. Column statistics are fetched from the metastore. Fetching column statistics for each needed column can be expensive when the number of columns is high. This flag can be used to disable fetching of column statistics from the metastore.
(float) 1.1
The Hive/Tez optimizer estimates the data size flowing through each of the operators. The JOIN operator uses column statistics to estimate the number of rows flowing out of it and hence the data size. In the absence of column statistics, this factor determines the amount of rows flowing out of the JOIN operator.
(float) 1.0
The Hive/Tez optimizer estimates the data size flowing through each of the operators. In the absence of basic statistics like number of rows and data size, file size is used to estimate the number of rows and data size. Since files in tables/partitions are serialized (and optionally compressed) the estimates of number of rows and data size cannot be reliably determined. This factor is multiplied with the file size to account for serialization and compression.
10000
In the absence of table/partition statistics, average row size will be used to estimate the number of rows/data size.
false
When set to true Hive will answer a few queries like min, max, and count(1) purely using statistics stored in the metastore. For basic statistics collection, set the configuration property hive.stats.autogather to true. For more advanced statistics collection, run ANALYZE TABLE queries.
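A minimal sketch (hive.compute.query.using.stats is an assumed name for this entry; hive.stats.autogather appears in the text; the table is hypothetical):
  SET hive.stats.autogather=true;
  SET hive.compute.query.using.stats=true;
  ANALYZE TABLE sales COMPUTE STATISTICS;
  SELECT count(1) FROM sales;  -- answered from metastore statistics, no job launched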
Comma separated list of configuration properties which are immutable at runtime. For example, if hive.security.authorization.enabled is set to true, it should be included in this list to prevent a client from changing it to false at runtime.
Comma separated list of non-SQL Hive commands that users are authorized to execute. This can be used to restrict the set of authorized commands. The currently supported command list is "set,reset,dfs,add,delete,compile" and by default all these commands are authorized. To restrict any of these commands, set hive.security.command.whitelist to a value that does not have the command in it.
false
Enable or disable the Hive client authorization.
org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider
The Hive client authorization manager class name. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.
org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator
Hive client authenticator manager class name. The user-defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.
The privileges automatically granted to some users whenever a table gets created. An example like "userX,userY:select;userZ:create" will grant select privilege to userX and userY, and grant create privilege to userZ, whenever a new table is created.
The privileges automatically granted to some groups whenever a table gets created. An example like "groupX,groupY:select;groupZ:create" will grant select privilege to groupX and groupY, and grant create privilege to groupZ, whenever a new table is created.
The privileges automatically granted to some roles whenever a table gets created. An example like "roleX,roleY:select;roleZ:create" will grant select privilege to roleX and roleY, and grant create privilege to roleZ, whenever a new table is created.
The privileges automatically granted to the owner whenever a table gets created. An example like "select,drop" will grant select and drop privilege to the owner of the table.
Metastore-side security was added in Hive 0.10.0 (HIVE-3705). For more information, see Metastore Server Security in the Authorization document.
The pre-event listener classes to be loaded on the metastore side to run code whenever databases, tables, and partitions are created, altered, or dropped. Set this configuration property to org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener in hive-site.xml to turn on Hive metastore-side security.
org.apache.hadoop.hive.ql.security.authorization.DefaultHiveMetastoreAuthorizationProvider
The authorization manager class name to be used in the metastore for authorization. The user-defined authorization class should implement the interface org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider. The DefaultHiveMetastoreAuthorizationProvider implements the standard Hive grant/revoke model. A storage-based authorization implementation is also provided to use as the value of this configuration property: org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider, which uses HDFS permissions to provide authorization instead of using Hive-style grant-based authorization.
org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator
The authenticator manager class name to be used in the metastore for authentication. The user-defined authenticator class should implement the interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.
Hive 0.13.0 introduces fine-grained authorization based on the SQL standard authorization model. This is still a work in progress – see HIVE-5837 for the functional specification and list of subtasks.
A comma separated list of users which will be added to the ADMIN role when the metastore starts up. More users can still be added later on.
org.apache.hadoop.hive.shims.HiveHarFileSystem
The implementation for accessing Hadoop Archives. Note that this won't be applicable to Hadoop versions less than 0.20.
false
Whether archiving operations are permitted.
false
In new Hadoop versions, the parent directory must be set while creating a HAR. Because this functionality is hard to detect with just version numbers, this configuration variable needs to be set manually.
See Hive Concurrency Model for general information about locking.
false
Whether Hive supports concurrency or not. A Zookeeper instance must be up and running for the default Hive lock manager to support read-write locks.
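A minimal sketch of the concurrency setup (the ZooKeeper hosts are hypothetical; the names hive.support.concurrency and hive.zookeeper.quorum are assumptions matching this entry and the ZooKeeper server-list entry below):
  SET hive.support.concurrency=true;
  SET hive.zookeeper.quorum=zk1.example.com,zk2.example.com,zk3.example.com;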
false
This configuration property controls whether to lock only those queries that need to execute at least one mapred job.
100
The number of times you want to try to get all the locks.
10
The number of times you want to retry a single unlock.
60
The sleep time (in seconds) between various retries.
The list of Zookeeper servers to talk to. This is only needed for read/write locks.
2181
The port of Zookeeper servers to talk to. This is only needed for read/write locks.
600000
ZooKeeper client's session timeout (in milliseconds). The client is disconnected, and as a result all locks are released, if a heartbeat is not sent within the timeout.
hive_zookeeper_namespace
The parent node under which all Zookeeper nodes are created.
false
Clean extra nodes at the end of the session.
org.apache.hadoop.hive.thrift.MemoryTokenStore
The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster.
localhost:2181
The ZooKeeper token store connect string.
/hive/cluster/delegation
The root path for token store data.
sasl:hive/host1@EXAMPLE.COM:cdrwa,sasl:hive/host2@EXAMPLE.COM:cdrwa
ACL for token store entries. List all server principals for the cluster, comma separated.
true
When creating a table from an input table, create the table in the input table's primary region.
default
The default region name.
The default filesystem and jobtracker for a region.
false
Whether to print the names of the columns in query output.
false
Whether to include the current database in the Hive prompt.
true
Whether writes to HBase should be forced to the write-ahead log. Disabling this improves HBase write performance at the risk of lost writes in case of a crash.
True when HBaseStorageHandler should generate hfiles instead of operating against the online table.
lib/hive-hwi-.war
This sets the path to the HWI war file, relative to ${HIVE_HOME}.
0.0.0.0
This is the host address the Hive Web Interface will listen on.
9999
This is the port the Hive Web Interface will listen on.
false
Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.
test_
If Hive is running in test mode, prefixes the output table by this string.
32
If Hive is running in test mode and the table is not bucketed, the sampling frequency.
If Hive is running in test mode, don't sample the above comma separated list of tables.
Starting in Hive release 0.11.0, HCatalog is installed and configured with Hive. The HCatalog server is the same as the Hive metastore. See Hive Metastore Administration for metastore configuration properties. For Hive releases prior to 0.11.0, see the "Thrift Server Setup" section in the HCatalog 0.5.0 document Installation from Tarball for information about setting the Hive metastore configuration properties.
Jobs submitted to HCatalog can specify configuration properties that affect storage, error tolerance, and other kinds of behavior during the job. See HCatalog Config Properties for details.
For WebHCat configuration, see Configuration Variables in the WebHCat manual.