
Hive Developer FAQ


Hive uses Maven as its build tool. Versions prior to 0.13 used Ant.


How do I add a new MiniDriver test?

See MiniDriver Tests for information about MiniDriver and Beeline tests.

How do I move some files?

Post a patch for testing purposes that simply adds and deletes files. SVN does not understand that such a patch actually represents moves, so you should upload the following, in order, so that the last upload is the patch used for testing:

  1. A patch which has only the non-move changes for commit e.g. HIVE-XXX-for-commit.patch
  2. A script of commands required to make the moves
  3. A patch for testing purposes HIVE-XXX.patch

The script should be a set of svn mv commands along with any perl commands required for find/replace. For example:

$ svn mv MyCLass.java MyClass.java
$ perl -i -pe 's@MyCLass@MyClass@g' MyClass.java


Maven settings

You might have to set the following Maven options on certain systems to get the build working: set MAVEN_OPTS to "-Xmx2g -XX:MaxPermSize=256M".
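For example, in a bash shell (note that MaxPermSize only applies to Java 7 and earlier; Java 8+ ignores it with a warning):

```shell
# Give Maven a 2 GB heap and a larger PermGen.
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=256M"
```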

How to build all source?

The way Maven is set up differs between the master branch and branch-1.  In branch-1, since both Hadoop 1.x and 2.x are supported, you need to specify whether you want to build Hive against Hadoop 1.x or 2.x.  This is done via Maven profiles.  There is a profile for each version of Hadoop, hadoop-1 and hadoop-2.  For most Maven operations one of these profiles needs to be specified or the build will fail.

In master, only Hadoop 2.x is supported, thus there is no need to specify a Maven profile for most build operations.

In master, run:
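A sketch of the usual sequence (the core modules must be installed before itests will build):

```shell
# Build and install all core modules, then the integration-test tree.
mvn clean install -DskipTests
cd itests
mvn clean install -DskipTests
```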

In branch-1, run:
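In branch-1 the same sequence needs a Hadoop profile; to build against Hadoop 2.x, presumably:

```shell
# branch-1: a Hadoop profile is mandatory (here, Hadoop 2.x).
mvn clean install -DskipTests -Phadoop-2
cd itests
mvn clean install -DskipTests -Phadoop-2
```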

To build against Hadoop 1.x, switch the above to -Phadoop-1.

For the remainder of this page we will assume master and not show the profiles.  However, if you are working on branch-1 remember that you will need to add in the appropriate profile.

How do I import into Eclipse?

Build and generate Eclipse files (the conservative method):
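A conservative sequence, assuming the maven-eclipse-plugin:

```shell
# Install artifacts, then (re)generate the .project/.classpath files.
mvn clean install -DskipTests
mvn eclipse:clean
mvn eclipse:eclipse -DdownloadSources -DdownloadJavadocs
```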

In Eclipse define M2_REPO in Preferences -> Java -> Build Path -> Classpath Variables, pointing at your local Maven repository, for example:

  • Mac example: /Users/<username>/.m2/repository
  • Linux example: /home/<username>/.m2/repository
  • Windows example: C:\Users\<username>\.m2\repository

Then import the workspaces. If you get an error about "restricted use of Signal" for Beeline and CLI, follow these instructions.

Note that if you use the Hive git base directory as the Eclipse workspace, Eclipse does not pick up the right project names (for example, it picks 'ant' instead of 'hive-ant'). It is therefore recommended to put the workspace directory one level above the git directory, for example workspaces/hive-workspace/hive, where hive-workspace is the Eclipse workspace and hive is the git base directory.

How to generate tarball?
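The usual invocation (assuming the dist profile used by the packaging module):

```shell
# Build the binary tarball; skip tests to save time.
mvn clean package -DskipTests -Pdist
```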


It will then be located in the packaging/target/ directory.

How to generate protobuf code?
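A sketch, assuming a protobuf profile wired into the ql module (the profile name is an assumption and may differ by release):

```shell
# Regenerate the protobuf sources in the ql module.
cd ql
mvn clean install -DskipTests -Pprotobuf
```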


How to generate Thrift code?
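A sketch; the thriftif profile name is an assumption, and thrift.home must point at your local Thrift installation (/usr/local here is only an example):

```shell
# Regenerate the Thrift sources; requires a matching local Thrift compiler.
mvn clean install -DskipTests -Pthriftif -Dthrift.home=/usr/local
```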


How to run findbugs after a change?
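A sketch using the standard findbugs-maven-plugin goal; exactly how findbugs is wired into the Hive poms may differ by release:

```shell
# Run findbugs over a single module; the XML report is written under target/.
cd ql
mvn compile findbugs:findbugs
```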

Note:  Available in Hive 1.1.0 onward (see HIVE-8327).

How to compile ODBC?
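A sketch; the odbc profile name and the thrift/boost locations are assumptions:

```shell
# Compile the native ODBC driver against local Thrift and Boost installs.
cd odbc
mvn compile -Podbc -Dthrift.home=/usr/local -Dboost.home=/usr/local
```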


How do I publish Hive artifacts to my local Maven repository?
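Publishing to the local repository (~/.m2/repository) is an ordinary mvn install; note that itests must be installed separately:

```shell
# Install all Hive artifacts into the local Maven repository.
mvn clean install -DskipTests
cd itests
mvn clean install -DskipTests
```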



For general information, see Unit Tests and Debugging in the Developer Guide.

Where is the log output of a test?

Logs are put in a couple locations:

  • From the root of the source tree: find . -name hive.log
  • /tmp/$USER/ (Linux) or $TMPDIR/$USER/ (MacOS)

How do I run a single test?


Note that any test in the itests directory needs to be executed from within the itests directory. The pom is disconnected from the parent project for technical reasons.

Single test class:

mvn test -Dtest=ClassName

Single test method:

mvn test -Dtest=ClassName#methodName

Note that a pattern can also be supplied to -Dtest to run multiple tests matching the pattern:

mvn test -Dtest='ClassName*'

For more usage see the documentation for the Maven Surefire Plugin.

Why isn't the itests pom connected to the root pom?

The qfile tests in itests require the packaging phase. The Maven test phase runs after compile and before packaging. We could change the qfile tests to run during the integration-test phase using the "failsafe" plugin, but the "failsafe" plugin is different from Surefire and, IMO, hard to use. If you'd like to give that a try, by all means go ahead.

A test fails with a NullPointerException in MiniDFSCluster

If any test fails with the error below, it means you have an inappropriate umask setting. It should be set to 0022.

java.lang.NullPointerException: null
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(
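You can check and fix the umask in the shell that will run the tests:

```shell
umask 0022   # new directories default to rwxr-xr-x, which MiniDFSCluster expects
umask        # verify: prints 0022
```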

How do I run all of the unit tests?

Note that you need to have previously built and installed the jars:
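A sketch of the whole sequence, installing the jars first:

```shell
# Install the jars once, then run the full suites in both trees.
mvn clean install -DskipTests
cd itests && mvn clean install -DskipTests && cd ..
mvn test
cd itests && mvn test
```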

Legacy information for the Ant build

Make sure that your JAVA_HOME is appropriately set (some tests need this), and set ANT_OPTS to increase the size allocated to the Permanent Generation as per the following:
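For example, in bash (256M is a typical value, not a hard requirement):

```shell
# Enlarge the Permanent Generation for the ant-era test JVMs.
export ANT_OPTS="-XX:MaxPermSize=256M"
```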

Then, for a clean build, run
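The ant-era clean build was presumably:

```shell
# Clean build; 'package' also does setup work the tests rely on.
ant clean package
```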

Note that running ant test will not work; ant package does some setup work that is required for the testcases to run successfully.

How do I run all of the unit tests except for a certain few tests?

Similar to running all tests, but define test.excludes.additional to specify a test/pattern to exclude from the test run. For example the following will run all tests except for the CliDriver tests:
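A hedged example; the exclusion pattern is assumed to follow Ant fileset conventions:

```shell
# Run everything except the CliDriver tests.
ant test -Dtest.excludes.additional='**/TestCliDriver.class'
```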

How do I update the output of a CliDriver testcase?
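A hedged example using the ant-era properties (alter1.q stands in for your test):

```shell
# Re-run one q-file test and overwrite its golden output file.
ant test -Dtestcase=TestCliDriver -Dqfile=alter1.q -Doverwrite=true
```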


As of Hive 0.11.0+ you can cut this time in half by specifying that only the ql module needs to be rebuilt:
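Assuming the legacy -Dmodule property:

```shell
# Rebuild only ql before running the test.
ant test -Dmodule=ql -Dtestcase=TestCliDriver -Dqfile=alter1.q -Doverwrite=true
```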

How do I update the results of many test cases?

Assume that you have a file like the one below, listing q-file tests whose output files you'd like to re-generate. Such a file could be created by copying the output from the precommit tests.

You can re-generate all those output files in batches of 20 with the command below
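A sketch, assuming the failing test names sit one per line in a file called failed.txt (the file name is hypothetical):

```shell
# Join the names into comma-separated groups of 20 and overwrite each batch.
xargs -n 20 < failed.txt | tr ' ' ',' | while read qfiles; do
  ant test -Dtestcase=TestCliDriver -Dqfile="$qfiles" -Doverwrite=true
done
```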

How do I run the clientpositive/clientnegative unit tests?

All of the below require that you have previously run ant package.

To run clientpositive tests
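Presumably:

```shell
ant test -Dtestcase=TestCliDriver
```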


To run a single clientnegative test alter1.q
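Along these lines (TestNegativeCliDriver is the negative-test counterpart):

```shell
ant test -Dtestcase=TestNegativeCliDriver -Dqfile=alter1.q
```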


To run all of the clientpositive tests that match a regex, for example the partition_wise_fileformat tests
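A hedged example; -Dqfile_regex is assumed to take a regex over q-file base names:

```shell
ant test -Dtestcase=TestCliDriver -Dqfile_regex=partition_wise_fileformat.*
```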


To run a single contrib test alter1.q and overwrite the result file
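Likely, with the contrib driver:

```shell
ant test -Dtestcase=TestContribCliDriver -Dqfile=alter1.q -Doverwrite=true
```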


To run a single test groupby1.q and output detailed information during execution
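Probably, with silent mode turned off:

```shell
ant test -Dtestcase=TestCliDriver -Dqfile=groupby1.q -Dtest.silent=false
```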

As of Hive 0.11.0+ you can cut down the total build time by specifying that only the ql module needs to be rebuilt. For example, to run all the partition_wise_fileformat tests:
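Again assuming the legacy -Dmodule property:

```shell
ant test -Dmodule=ql -Dtestcase=TestCliDriver -Dqfile_regex=partition_wise_fileformat.*
```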

How do I rerun precommit tests over the same patch?

Upload the exact same patch again to the JIRA.


How do I debug into a single test in Eclipse?

You can debug into a single JUnit test in Eclipse by first making sure you've built the Eclipse files and imported the project into Eclipse as described here. Then set one or more breakpoints, highlight the method name of the JUnit test method you want to debug into, and do Run->Debug.

Another useful way to debug these tests is to attach a remote debugger. When you run a test, enable debug mode for Surefire by passing in "-Dmaven.surefire.debug". Additional details on how to turn on debugging for Surefire can be found here. Now when you run the tests, the JVM will wait with a message similar to

Listening for transport dt_socket at address: 5005

Note that this assumes that you are still using the default port 5005 for surefire. Otherwise you might see a different port. Once you see this message, in Eclipse right click on the project you want to debug, go to "Debug As -> Debug Configurations -> Remote Java Application" and hit the "+" sign on far left top. This should bring up a dialog box. Make sure that the host is "localhost" and the port is "5005". Before you start debugging, make sure that you have set appropriate debug breakpoints in the code. Once ready, hit "Debug". Now if you go back to the terminal, you should see the tests running and they will stop at the breakpoints that you set for debugging.

How do I debug my queries in Hive?

You can also interactively debug your queries in Hive by attaching a remote debugger. To do so, start Beeline with the "--debug" option.

$ beeline --debug
Listening for transport dt_socket at address: 8000

Once you see this message, in Eclipse right click on the project you want to debug, go to "Debug As -> Debug Configurations -> Remote Java Application" and hit the "+" sign on far left top. This should bring up a dialog box. Make sure that the host is the host on which the Beeline CLI is running and the port is "8000". Before you start debugging, make sure that you have set appropriate debug breakpoints in the code. Once ready, hit "Debug". The remote debugger should attach to Beeline and proceed.

$ beeline --debug
Listening for transport dt_socket at address: 8000
Beeline version 1.2.0 by Apache Hive

At this point, run your queries as normal and it should stop at the breakpoints that you set so that you can start debugging.

This method should work great if your queries are simple fetch queries that do not kick off MapReduce jobs. If a query runs in a distributed mode, it becomes very hard to debug. Therefore, it is advisable to run in a "local" mode for debugging. In order to run Hive in local mode, do the following:


MRv1:

SET mapred.job.tracker=local

MRv2 (YARN):

SET mapreduce.framework.name=local
At this point, attach the remote debugger as mentioned before to start debugging your queries.
