...
- Install Spark (either download a pre-built Spark, or build the assembly from source).
- Install/build a compatible version. Hive's root pom.xml's <spark.version> property defines which version of Spark it was built/tested with; check your Hive's root pom.xml for <spark.version>.
- Install/build a compatible distribution. Each version of Spark has several distributions, corresponding to different versions of Hadoop.
- Once Spark is installed, find and keep note of the <spark-assembly-*.jar> location.
- Start Spark cluster (Master and workers).
- Keep note of the <Spark Master URL>. This can be found in the Spark Master WebUI.
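The compatibility check in the steps above can be sketched as a simple grep against the Hive root pom.xml. To keep the command self-contained here, a one-line sample pom.xml is fabricated first (the version string is illustrative, not a real release); against a real Hive checkout you would grep the actual pom.xml in the source root:

```shell
# Fabricated one-line sample so the command below is runnable as-is;
# the version value is an illustrative placeholder.
printf '  <spark.version>1.2.0-SNAPSHOT</spark.version>\n' > /tmp/pom.xml

# Extract the <spark.version> property, i.e. which Spark version this
# Hive build was built/tested with.
grep -o '<spark.version>[^<]*</spark.version>' /tmp/pom.xml
```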
...
- As Hive on Spark is still in development, currently only a Hive assembly built from the hive/spark development branch supports Spark execution. The branch is located here: https://github.com/apache/hive/tree/spark. Build the Hive assembly from this branch as described in https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ.
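A rough sketch of obtaining and building that branch follows. The directory name and Maven flags here are assumptions, not the authoritative recipe; see the HiveDeveloperFAQ link above for the actual build instructions:

```shell
# Clone only the spark development branch of Hive (directory name assumed).
git clone -b spark https://github.com/apache/hive.git hive-spark
cd hive-spark

# Build the assembly; -DskipTests is an assumption to shorten the build,
# and additional profiles may be required per the HiveDeveloperFAQ.
mvn clean package -DskipTests
```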
Start Hive and add the <spark-assembly-*.jar> to the Hive auxpath:
Code Block
hive --auxpath /location/to/spark-assembly-spark_version-hadoop_version.jar
Configure the Hive execution engine to run on Spark:
Code Block
hive> set hive.execution.engine=spark;
Configure the required Spark-application configs for Hive. See: http://spark.apache.org/docs/latest/configuration.html. This can be done either by adding a file "spark-defaults.conf" to the Hive classpath, or by setting them as normal properties from Hive:
Code Block
hive> set spark.master=<Spark Master URL>;
hive> set spark.eventLog.enabled=true;
hive> set spark.executor.memory=512m;
hive> set spark.serializer=org.apache.spark.serializer.KryoSerializer;
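For the spark-defaults.conf alternative mentioned above, the same settings would look like the following file fragment placed on the Hive classpath (values mirror the example above; <Spark Master URL> stays a placeholder for your cluster's actual master URL):

```
spark.master            <Spark Master URL>
spark.eventLog.enabled  true
spark.executor.memory   512m
spark.serializer        org.apache.spark.serializer.KryoSerializer
```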
...