
== Hive using Derby in Server Mode ==
Hive in embedded mode is limited to one active user at a time. To let multiple users access the metastore simultaneously from different systems, you can run Derby as a Network Server.

Download Derby

It is suggested that you download the version of Derby that ships with Hive. If you have already run Hive in embedded mode, the first line of derby.log contains the version.
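For example, a quick way to check which version an earlier embedded-mode run used (this assumes Hive was started from /opt/hadoop/hive, so derby.log was written there; adjust the path to wherever you ran Hive):

head -n 3 /opt/hadoop/hive/derby.log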

My structure looks like this:

/opt/hadoop/hadoop-0.17.2.1
/opt/hadoop/db-derby-10.4.1.3-bin
/opt/hadoop/hive
cd /opt/hadoop
<download>
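# One way to fetch the tarball, assuming the standard Apache archive layout for this
# release (adjust the URL for your mirror or Derby version):
wget http://archive.apache.org/dist/db/derby/db-derby-10.4.1.3/db-derby-10.4.1.3-bin.tar.gz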
tar -xzf db-derby-10.4.1.3-bin.tar.gz
mkdir db-derby-10.4.1.3-bin/data

Set environment

The variable to set has changed over the years; DERBY_HOME is now the proper name. I set it along with the legacy name DERBY_INSTALL.

/etc/profile.d/derby.sh

DERBY_INSTALL=/opt/hadoop/db-derby-10.4.1.3-bin
DERBY_HOME=/opt/hadoop/db-derby-10.4.1.3-bin
export DERBY_INSTALL
export DERBY_HOME
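After sourcing the profile script, a quick sanity check (assuming a Bourne-compatible shell):

. /etc/profile.d/derby.sh
echo $DERBY_HOME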

Hive also likes to know where Hadoop is installed:

/etc/profile.d/hive.sh

HADOOP=/opt/hadoop/hadoop-0.17.2.1/bin/hadoop
export HADOOP

Starting Derby

You will likely want to run Derby when Hadoop starts up. An interesting place for this, other than an LSB init script, might be alongside the Hadoop scripts such as start-dfs. By default Derby will create databases in the directory it was started from.

cd /opt/hadoop/db-derby-10.4.1.3-bin/data
nohup /opt/hadoop/db-derby-10.4.1.3-bin/bin/startNetworkServer -h 0.0.0.0 &
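To check that the server came up and is listening on the default port, you can ping it with the NetworkServerControl script that ships in the same bin directory:

/opt/hadoop/db-derby-10.4.1.3-bin/bin/NetworkServerControl ping -h localhost -p 1527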

Configuring Hive to use Network Derby

Edit /opt/hadoop/hive/conf/hive-site.xml as follows. Note that "hadoop1" should be replaced with the hostname or IP address where the Derby network server can be found.

<property>
  <name>hive.metastore.local</name>
  <value>true</value>
  <description>controls whether to connect to a remote metastore server or open a new metastore server in the Hive Client JVM</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby://hadoop1:1527/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.ClientDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
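Before starting Hive, you can confirm that the connection URL actually works by trying it from Derby's ij tool (the hostname and URL below mirror the hive-site.xml settings above; adjust them for your setup):

/opt/hadoop/db-derby-10.4.1.3-bin/bin/ij
ij> connect 'jdbc:derby://hadoop1:1527/metastore_db;create=true';
ij> exit;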

/opt/hadoop/hive/conf/jpox.properties

jpox.properties is NOT used in Hive 0.5.0 or later; JPOX properties can be specified in hive-site.xml instead. Normally no JPOX property changes are required.

javax.jdo.PersistenceManagerFactoryClass=org.jpox.PersistenceManagerFactoryImpl
org.jpox.autoCreateSchema=true
org.jpox.validateTables=false
org.jpox.validateColumns=false
org.jpox.validateConstraints=false
org.jpox.storeManagerType=rdbms
org.jpox.autoStartMechanismMode=checked
org.jpox.transactionIsolation=read_committed
javax.jdo.option.DetachAllOnCommit=true
javax.jdo.option.NontransactionalRead=true
javax.jdo.option.ConnectionDriverName=org.apache.derby.jdbc.ClientDriver
javax.jdo.option.ConnectionURL=jdbc:derby://hadoop1:1527/metastore_db;create=true
javax.jdo.option.ConnectionUserName=APP
javax.jdo.option.ConnectionPassword=mine

Copy Derby Jar Files

Now that Hive connects through a client driver, you MUST make sure these jars are in Hive's lib directory or on its classpath. The same would be true if you used MySQL or some other DB.

cp /opt/hadoop/db-derby-10.4.1.3-bin/lib/derbyclient.jar /opt/hadoop/hive/lib
cp /opt/hadoop/db-derby-10.4.1.3-bin/lib/derbytools.jar /opt/hadoop/hive/lib

If you receive the error "javax.jdo.JDOFatalInternalException: Error creating transactional connection factory" with a stack trace originating from "org.datanucleus.exceptions.ClassNotResolvedException: Class 'org.apache.derby.jdbc.ClientDriver' was not found in the CLASSPATH. Please check your specification and your CLASSPATH", you may benefit from putting the Derby jar files directly in the Hadoop lib directory:

cp /opt/hadoop/db-derby-10.4.1.3-bin/lib/derbyclient.jar /opt/hadoop/hadoop-0.17.2.1/lib
cp /opt/hadoop/db-derby-10.4.1.3-bin/lib/derbytools.jar /opt/hadoop/hadoop-0.17.2.1/lib

Start Up Hive

The metastore will not be created until the first query hits it.

cd /opt/hadoop/hive
bin/hive
hive> show tables;

A metastore_db directory should be created under /opt/hadoop/db-derby-10.4.1.3-bin/data.

The Result

Now you can run multiple Hive instances working on the same data simultaneously and remotely.
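For example (hostnames and the table name below are just illustrative), a table created from one machine is immediately visible from another, since both clients share the same network metastore:

user@hadoop1$ /opt/hadoop/hive/bin/hive
hive> CREATE TABLE pokes (foo INT, bar STRING);
hive> quit;

user@hadoop2$ /opt/hadoop/hive/bin/hive
hive> SHOW TABLES;
pokes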
