...

Hive is widely applied as a solution to numerous distinct problem types in the domain of big data. It is clearly often used for the ad hoc querying of large datasets, but it is also used to implement ETL-type processes. Unlike ad hoc queries, the Hive SQL written for ETLs has some distinct attributes:

...

There are a number of challenges posed by both Hive and Hive SQL that can make it difficult to construct suites of unit tests for Hive-based ETL systems. These can be broadly described as follows:

  • Defining boundaries between components: How can, and how should, a problem be decomposed into smaller, testable units? The ability to do this is limited by the set of language features provided by Hive SQL.
  • Harness provision: Providing a local execution environment that seamlessly supports Hive’s features (UDFs, etc.) in a local IDE setting. Ideally the harness should have no environmental dependencies such as a local Hive or Hadoop installation; developers should be able to simply check out a project and run the tests.
  • Speed of execution: The goal is to have large numbers of isolated, small tests. Test isolation requires frequent setup and teardown, and the costs incurred are multiplied by the number of tests. The Hive CLI is a fairly heavyweight process to repeatedly start and stop, so some Hive test frameworks attempt to optimise this aspect of test execution.

...

Modularising processes implemented with Hive makes them easier to test effectively and more resilient to change. Although Hive provides a number of vectors for modularisation, it is not always clear how a large process can be decomposed. Hive's features for encapsulating query logic into components can be separated into two orthogonal concerns: column-level logic and set-level logic. Column-level logic refers to the expressions applied to individual columns or groups of columns in the query, commonly described as ‘functions’. Set-level logic concerns Hive SQL constructs that manipulate groupings of data, such as column projection with SELECT, GROUP BY aggregates, JOINs, and ORDER BY sorting. In either case we expect individual components to live in their own source file or deployable artifact and to be imported as needed by the composition. For Hive SQL-based components, the SOURCE command provides this functionality.
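
As a minimal sketch of such a composition (the file names, paths, and macro here are hypothetical, not from any particular project), a column-level component can be declared as a macro in its own script and then imported into a set-level query with SOURCE:

  -- name_macros.hql: hypothetical column-level component
  CREATE TEMPORARY MACRO normalise_name(name STRING) trim(upper(name));

  -- etl.hql: hypothetical set-level composition that imports the component
  SOURCE /path/to/name_macros.hql;

  SELECT normalise_name(first_name) AS first_name, count(*) AS total
  FROM customers
  GROUP BY normalise_name(first_name);

Keeping the macro in its own file means it can be exercised by a dedicated test as well as by the composition.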

...

  • Execution environment configuration: usually hiveconf and hivevar parameters.
  • Declaring input test data: creating or selecting files that back some source tables.
  • Definition of the executable component of the test: normally the SQL script under test.
  • Expectations: These can take the form of a reference data file, or alternatively fine-grained assertions can be made with further queries (a sketch follows the execution steps below).

...

  1. Configure Hive execution environment.
  2. Set up test input data.
  3. Execute the SQL script under test.
  4. Extract data written by the executed script.
  5. Make assertions on the data extracted.
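
A minimal sketch of these phases in plain Hive SQL might look as follows; the table names, locations, and script under test are hypothetical, and frameworks such as HiveRunner wrap this same lifecycle in JUnit:

  -- 1. Configure the execution environment (hypothetical location)
  SET hivevar:inputLocation=/tmp/test/input;

  -- 2. Set up test input data backing a source table
  CREATE TABLE source_events (id INT, category STRING)
    LOCATION '${hivevar:inputLocation}';
  INSERT INTO TABLE source_events VALUES (1, 'a'), (2, 'a'), (3, 'b');

  -- 3. Execute the (hypothetical) script under test
  SOURCE /path/to/etl_under_test.hql;

  -- 4 & 5. Extract the data written by the script and compare it with
  -- the expected rows, e.g. ('a', 2) and ('b', 1).
  SELECT category, total FROM target_table ORDER BY category;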

...

  • HiveRunner: Test cases are declared using Java, Hive SQL and JUnit and can execute locally in your IDE. This library focuses on ease of use and execution speed. No local Hive/Hadoop installation is required. It provides full test isolation, fine-grained assertions, and seamless UDF integration (they need only be on the project classpath). The metastore is backed by an in-memory database to increase test performance.
  • beetest: Test cases are declared using Hive SQL and 'expected' data files. Test suites are executed using a script on the command line. Apparently requires HDFS to be installed in the environment in which the tests are executed.
  • hive_test: Test cases are declared using Java, Hive SQL and JUnit and can execute locally in your IDE.
  • HiveQLUnit: Test your Hive scripts inside your favourite IDE. Appears to use Spark to execute the tests. 
  • How to utilise the Hive project's internal test framework.

...

  • Modularise large or complex queries into multiple smaller components. These are easier to comprehend, maintain, and test.
  • Use macros or UDFs to encapsulate repeated or complex column expressions.
  • Use Hive variables to decouple SQL scripts from specific environments. For example, it might be wise to use LOCATION ${myTableLocation} in preference to LOCATION /hard/coded/path.
  • Keep the scope of tests small. Making coarse assertions on the entire contents of a table is brittle and has a high maintenance requirement.
  • Use the SOURCE command to combine multiple smaller SQL scripts.
  • Test macros and the integration of UDFs by creating simple test tables and applying the functions to columns in those tables (see the sketch after this list).
  • Test UDFs by invoking the lifecycle methods directly (initialize, evaluate, etc.) in a standard testing framework such as JUnit.
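
As a sketch of the macro-testing approach (the macro, fixture table, and variable are hypothetical, and the component file is assumed to be the one sketched earlier):

  -- Import the hypothetical macro under test from its component script
  SOURCE /path/to/name_macros.hql;

  -- Simple fixture table, decoupled from the environment via a Hive variable
  CREATE TABLE name_fixture (name STRING)
    LOCATION '${myTableLocation}';
  INSERT INTO TABLE name_fixture VALUES ('  alice '), ('BOB');

  -- Apply the macro to the fixture column; the expected output
  -- is 'ALICE' and 'BOB'.
  SELECT normalise_name(name) FROM name_fixture;

UDFs can be exercised in the same way once registered, while their lifecycle methods can additionally be unit tested directly in JUnit without starting Hive at all.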

...

Although not specifically related to Hive SQL, tooling exists for testing other aspects of the Hive ecosystem. In particular, the BeeJU project provides JUnit rules to simplify the testing of integrations with the Hive Metastore and HiveServer2 services. These are useful if, for example, you are developing alternative data processing frameworks or tools that aim to leverage Hive's metadata features.

...