MapReduce Job — POST mapreduce/jar
Creates and queues a standard Hadoop MapReduce job.
Version: Hive 0.13.0 and later
As of Hive 0.13.0, GET version/hadoop displays the Hadoop version used for the MapReduce job.
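As a sketch, a client could check that version with a simple GET request. The host, port, and user name below are placeholder assumptions, not values from this page:

```python
from urllib.parse import urlencode

# Placeholder WebHCat endpoint -- substitute your cluster's host and port.
WEBHCAT = "http://localhost:50111/templeton/v1"

def hadoop_version_url(user):
    """Build the GET version/hadoop URL (available in Hive 0.13.0+)."""
    return f"{WEBHCAT}/version/hadoop?{urlencode({'user.name': user})}"

# On a live cluster: urllib.request.urlopen(hadoop_version_url("hduser"))
```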
Parameters

|Name|Description|Required|Default|
|jar|Name of the jar file for MapReduce to use.|Required|None|
|class|Name of the class for MapReduce to use.|Required|None|
|libjars|Comma-separated jar files to include in the classpath.|Optional|None|
|files|Comma-separated files to be copied to the MapReduce cluster.|Optional|None|
|arg|Set a program argument.|Optional|None|
|define|Set a Hadoop configuration variable using the syntax define=NAME=VALUE.|Optional|None|
|statusdir|A directory where WebHCat will write the status of the MapReduce job. If provided, it is the caller's responsibility to remove this directory when done.|Optional|None|
|enablelog|If statusdir is set and enablelog is "true", collect Hadoop job configuration and logs into a directory named $statusdir/logs when the job completes. This parameter was introduced in Hive 0.12.0. (See HIVE-4531.)|Optional in Hive 0.12.0+|None|
|callback|Define a URL to be called upon job completion. You may embed a specific job ID into this URL using $jobId; this tag will be replaced in the callback URL with the job's ID.|Optional|None|
|usehcatalog|Specify that the submitted job uses HCatalog and therefore needs to access the metastore, which requires additional steps for WebHCat to perform in a secure cluster. (See HIVE-5133.) This parameter was introduced in Hive 0.13.0.|Optional in Hive 0.13.0+|false|
The standard parameters are also supported.
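As a sketch of how these parameters fit together in a POST body, the following Python snippet builds a submission request. The host, jar name, class, and paths are hypothetical placeholders, not values from this page:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Placeholder endpoint and job parameters -- adjust for your cluster.
url = "http://localhost:50111/templeton/v1/mapreduce/jar?user.name=hduser"
form = [
    ("jar", "wordcount.jar"),           # jar file for MapReduce to use
    ("class", "org.myorg.WordCount"),   # main class inside the jar
    ("arg", "wordcount/input"),         # program arguments (repeatable)
    ("arg", "wordcount/output"),
    ("statusdir", "wordcount/status"),  # where WebHCat writes job status
]
body = urlencode(form).encode("ascii")

# On a live cluster:
# with urlopen(Request(url, data=body)) as resp:
#     print(resp.read())
```

Note that repeatable parameters such as arg are passed as a list of pairs so each occurrence becomes its own form field.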
Results

|Name|Description|
|id|A string containing the job ID, similar to "job_201110132141_0001".|
|info|A JSON object containing the information returned when the job was queued. See the Hadoop documentation for more information.|
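A minimal sketch of handling that result follows; the response text here is a fabricated example shaped like the description above, not output from a real cluster:

```python
import json

# Example response body; the job ID is the sample value from this page.
response_text = '{"id": "job_201110132141_0001"}'

result = json.loads(response_text)
job_id = result["id"]  # keep this ID to poll the job's status later
```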
Code and Data Setup
Prior to Hive 0.13.0, user.name was specified in POST requests as a form parameter: curl -d user.name=<user>.
In Hive 0.13.0 onward, user.name should be specified in the query string (as shown above): '?user.name=<name>'. Specifying user.name as a form parameter is deprecated.
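The difference can be sketched as follows, with a placeholder host and user name; the deprecated style is shown only in a comment:

```python
from urllib.parse import urlencode, urlsplit

base = "http://localhost:50111/templeton/v1/mapreduce/jar"  # placeholder host

# Hive 0.13.0+: user.name belongs in the query string ...
url = f"{base}?{urlencode({'user.name': 'hduser'})}"

# ... while the remaining job parameters stay in the POST body.
body = urlencode({"jar": "wordcount.jar", "class": "org.myorg.WordCount"})

# Deprecated pre-0.13.0 style put user.name in the form body instead:
#   body = urlencode({"user.name": "hduser", "jar": "wordcount.jar"})
```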