- The following settings are required in hive-site.xml to enable ACID support for streaming:
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on = true
hive.compactor.worker.threads > 0 (on at least one instance of the metastore service)
- The following requirements apply to the table being streamed to:
- "stored as orc" must be specified during table creation; only the ORC storage format is currently supported.
- tblproperties("transactional"="true") must be set on the table during creation.
- The Hive table must be bucketed, but not sorted, so something like "clustered by (colName) into 10 buckets" must be specified during table creation. Ideally, the number of buckets matches the number of streaming writers.
- The user running the streaming client process must have the necessary permissions to write to the table or partition and to create partitions in the table.
- (Temporary requirements) When issuing queries on streaming tables, the client needs to set:
hive.vectorized.execution.enabled = false (for Hive versions earlier than 0.14.0)
hive.input.format = org.apache.hadoop.hive.ql.io.HiveInputFormat
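Taken together, the requirements above amount to one DDL statement plus two session settings. The sketch below builds both as strings; the table name, columns, and bucket count are illustrative choices, not from this document, and the commented-out JDBC calls assume a live HiveServer2 connection.

```java
import java.util.Arrays;
import java.util.List;

public class StreamingTableSetup {

    // CREATE TABLE meeting the streaming requirements: ORC storage,
    // transactional=true, bucketed but not sorted.
    // Table name, columns, and bucket count are hypothetical.
    static String buildDdl(String table, int buckets) {
        return "create table " + table + " (id int, name string) "
             + "clustered by (id) into " + buckets + " buckets "
             + "stored as orc "
             + "tblproperties(\"transactional\"=\"true\")";
    }

    // Session settings a query client applies before reading streaming tables.
    static List<String> querySessionSettings() {
        return Arrays.asList(
            "set hive.vectorized.execution.enabled = false",  // only needed for Hive < 0.14.0
            "set hive.input.format = org.apache.hadoop.hive.ql.io.HiveInputFormat");
    }

    public static void main(String[] args) {
        System.out.println(buildDdl("alerts", 10));
        querySessionSettings().forEach(System.out::println);
        // With a real HiveServer2 JDBC Statement one would then run:
        //   stmt.execute(buildDdl("alerts", 10));
        //   for (String s : querySessionSettings()) stmt.execute(s);
    }
}
```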
Important: To connect using Kerberos, the 'authenticatedUser' argument to EndPoint.newConnection() should have been used to do a Kerberos login. Additionally, the 'hive.metastore.kerberos.principal' setting should be set correctly, either in hive-site.xml or in the 'conf' argument (if not null). If using hive-site.xml, its directory should be included in the classpath.
```java
import org.apache.hadoop.security.UserGroupInformation;

HiveEndPoint hiveEP2 = ... ;
UserGroupInformation ugi = .. authenticateWithKerberos(principal, keytab);
StreamingConnection secureConn = hiveEP2.newConnection(true, null, ugi);

DelimitedInputWriter writer3 = new DelimitedInputWriter(fieldNames, ",", hiveEP2);
TransactionBatch txnBatch3 = secureConn.fetchTransactionBatch(10, writer3);

///// Batch 1 - First TXN - over secure connection
txnBatch3.beginNextTransaction();
txnBatch3.write("28,Eric Baldeschwieler".getBytes());
txnBatch3.write("29,Ari Zilka".getBytes());
txnBatch3.commit();

txnBatch3.close();
secureConn.close();
```
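The Kerberos login itself is elided above (authenticateWithKerberos is not shown). In Hadoop deployments it is typically done with UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytabPath), which performs a JAAS keytab login under the hood. The sketch below shows only that underlying JAAS layer using the JDK's Krb5LoginModule; it is a minimal illustration with placeholder principal and keytab values, not the helper referenced in the example.

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Sketch of a keytab-based JAAS configuration, the mechanism underlying
// Hadoop's UserGroupInformation.loginUserFromKeytabAndReturnUGI().
// The principal and keytab path are placeholders.
public class KeytabJaasConfig extends Configuration {
    private final String principal;
    private final String keytab;

    KeytabJaasConfig(String principal, String keytab) {
        this.principal = principal;
        this.keytab = keytab;
    }

    @Override
    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        Map<String, String> opts = new HashMap<>();
        opts.put("useKeyTab", "true");    // authenticate from the keytab file
        opts.put("storeKey", "true");
        opts.put("doNotPrompt", "true");  // never prompt for a password
        opts.put("principal", principal);
        opts.put("keyTab", keytab);
        return new AppConfigurationEntry[] {
            new AppConfigurationEntry(
                "com.sun.security.auth.module.Krb5LoginModule",
                AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                opts)
        };
    }

    public static void main(String[] args) {
        Configuration conf = new KeytabJaasConfig(
            "hive/host@EXAMPLE.COM", "/etc/security/keytabs/hive.keytab");
        // A real login against a reachable KDC would then be:
        //   LoginContext lc = new LoginContext("KeytabLogin", null, null, conf);
        //   lc.login();
        System.out.println(conf.getAppConfigurationEntry("KeytabLogin").length);
    }
}
```

In practice, prefer the Hadoop UserGroupInformation API over hand-rolled JAAS; it also manages ticket renewal for long-running streaming clients.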