Introduction

Objectives

HCatMix is the performance testing framework for HCatalog.

The objective is:

In order to meet the above objective, the following are measured:

Implementation

Test setup

Test setup needs to perform two tasks:

Both of these are driven by a configuration file. The following is an example of a setup XML file.

HCatMix Test Setup Configuration

<database>
    <tables>
        <table>
            <namePrefix>page_views_1brandnew</namePrefix>
            <dbName>default</dbName>
            <columns>
                <column>
                    <name>user</name>
                    <type>STRING</type>
                    <avgLength>20</avgLength>
                    <distribution>zipf</distribution>
                    <cardinality>1000000</cardinality>
                    <percentageNull>10</percentageNull>
                </column>
                <column>
                    <name>timespent</name>
                    <type>INT</type>
                    <distribution>zipf</distribution>
                    <cardinality>5000000</cardinality>
                    <percentageNull>25</percentageNull>
                </column>
                <column>
                    <name>query_term</name>
                    <type>STRING</type>
                    <avgLength>5</avgLength>
                    <distribution>zipf</distribution>
                    <cardinality>10000</cardinality>
                    <percentageNull>0</percentageNull>
                </column>
                 .
                 .
                 .
            </columns>
            <partitions>
                <partition>
                    <name>timestamp</name>
                    <type>INT</type>
                    <distribution>zipf</distribution>
                    <cardinality>1000</cardinality>
                </partition>
                <partition>
                    <name>action</name>
                    <type>INT</type>
                    <distribution>zipf</distribution>
                    <cardinality>8</cardinality>
                </partition>
                 .
                 .
                 .
            </partitions>
            <instances>
                <instance>
                    <size>1000000</size>
                    <count>1</count>
                </instance>
                <instance>
                    <size>100000</size>
                    <count>1</count>
                </instance>
                 .
                 .
                 .
            </instances>
        </table>
    </tables>
</database>

A column/partition spec has the following details: name, type, data distribution (e.g. zipf), and cardinality (the number of distinct values); columns additionally specify an average length (for strings) and the percentage of null values.

The instances section defines how many instances of a table with the same specification are to be created and the number of rows for each of them. In the example above, two instances are created: one with 1,000,000 rows and one with 100,000 rows.
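
The following standalone snippet is only a hedged illustration of what the "user" column spec above implies (values drawn from a zipf distribution over 1,000,000 distinct values, null 10% of the time); it is not HCatMix source, and the zipf exponent and the use of Apache commons-math3 are assumptions.

import java.util.Random;
import org.apache.commons.math3.distribution.ZipfDistribution;

public class ColumnSpecExample {
    public static void main(String[] args) {
        Random random = new Random();
        // cardinality = 1000000; the exponent 1.0 is an assumed skew, not taken from HCatMix
        ZipfDistribution zipf = new ZipfDistribution(1000000, 1.0);
        for (int i = 0; i < 10; i++) {
            if (random.nextInt(100) < 10) {        // percentageNull = 10
                System.out.println("null");
            } else {
                // The real generator would emit strings of avgLength 20;
                // here one synthetic value is printed per sampled rank.
                System.out.println("user_" + zipf.sample());
            }
        }
    }
}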

Tests

Loader/Storer tests

Description
  1. Use PigStorage to load and HCatStorer to store
  2. Use HCatLoader to load and HCatStorer to store
  3. Use HCatLoader to load and PigStorage to store
  4. Use PigStorage to load and PigStorage to store

Input Data Size    Number of partitions
105MB              0, 300, 600, 900, 1200, 1500, 2000
1GB                0, 300
10GB               0, 300
100GB              0, 300

These tests are driven by configuration, and new tests can be added by dropping in additional configuration files.
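
As a rough illustration of what these test cases exercise (a sketch only: the table names and the embedded Pig Latin below are assumptions, not the actual HCatMix test scripts), test case 2 corresponds to a Pig job along these lines:

import java.util.Properties;
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class LoadStoreSketch {
    public static void main(String[] args) throws Exception {
        // Run Pig against the cluster; configuration is picked up from HADOOP_CONF_DIR.
        PigServer pig = new PigServer(ExecType.MAPREDUCE, new Properties());

        // Read an HCatalog table with HCatLoader (table name is hypothetical) ...
        pig.registerQuery("A = LOAD 'default.page_views_1brandnew_0' "
                + "USING org.apache.hcatalog.pig.HCatLoader();");

        // ... and write it into another HCatalog table with HCatStorer.
        pig.store("A", "default.page_views_copy",
                "org.apache.hcatalog.pig.HCatStorer()");
    }
}

The other three combinations swap HCatLoader/HCatStorer for PigStorage on the load and/or store side, reading from and writing to HDFS paths instead of tables.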

How to run:

Load Tests

Description

The Hadoop MapReduce framework itself is used to run the concurrency tests: the map phase increases the number of client threads over time and generates statistics every minute, and the reduce phase aggregates the statistics of all the maps and reports them against the increasing number of concurrent clients. Because MapReduce is used, the tool can scale to any number of parallel clients required for a concurrency test.

Concurrency tests are done for the following API calls:

  1. List partition
  2. Get Table
  3. Add Partition
  4. List Partition/Add Partition/Get Table together

The test is defined in a properties file:

# For the following example the number of threads will increase from
# 80 to 2000 over a period of 25 minutes. T25 = 4*20 + (25 - 1)*4*20 = 2000

# The comma-separated task classes which contain the getTable() call
task.class.names=org.apache.hcatalog.hcatmix.load.tasks.HCatGetTable

# The number of map tasks to run
num.mappers=20

# How many threads to add at the end of each fixed interval
thread.increment.count=4

# The interval at which the number of threads is increased
thread.increment.interval.minutes=1

# How long each map task would run
map.runtime.minutes=25

# Extra wait time to let the individual tasks finish
thread.completion.buffer.minutes=1

# The interval at which statistics would be collected
stat.collection.interval.minutes=1

# Input directory where dummy files are created to control the number of mappers
input.dir=/tmp/hcatmix/loadtest/input

# The location where the collected statistics would be stored
output.dir=/tmp/hcatmix/loadtest/output
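
The ramp-up arithmetic in the comment at the top of the file can be checked with a small standalone snippet (not part of HCatMix; the values are hard-coded from the example above):

public class ThreadRampCheck {
    public static void main(String[] args) {
        int numMappers = 20;       // num.mappers
        int increment = 4;         // thread.increment.count
        int intervalMinutes = 1;   // thread.increment.interval.minutes
        int runtimeMinutes = 25;   // map.runtime.minutes

        // Each mapper starts with `increment` threads and adds `increment` more
        // at the end of every interval, so at minute t the cluster-wide total is
        // numMappers * increment * ceil(t / intervalMinutes).
        for (int minute = 1; minute <= runtimeMinutes; minute++) {
            int perMapper = increment * ((minute - 1) / intervalMinutes + 1);
            System.out.println("minute " + minute + ": "
                    + perMapper * numMappers + " threads");
        }
        // Prints 80 threads at minute 1 and 2000 at minute 25, matching
        // T25 = 4*20 + (25 - 1)*4*20 = 2000 from the comment above.
    }
}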

More concurrency tests can be added by adding configuration files and a class that implements the Task interface.
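
As a rough sketch of such a class (the hook names below are placeholders, not the actual org.apache.hcatalog.hcatmix.load.tasks.Task contract; only the HiveMetaStoreClient calls are real API), a task that stresses getDatabase() might look like:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

// Hypothetical task class; consult the real Task interface for its exact methods.
public class HCatGetDatabase /* implements Task */ {
    private HiveMetaStoreClient client;

    // Assumed one-time setup hook: create one metastore client per task instance.
    public void configure() throws Exception {
        client = new HiveMetaStoreClient(new HiveConf());
    }

    // Assumed per-iteration hook: the metastore call whose latency and
    // throughput the framework measures.
    public void doTask() throws Exception {
        client.getDatabase("default");
    }

    // Assumed teardown hook.
    public void close() {
        client.close();
    }
}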

How to run

Prerequisites

The following environment variables need to be defined:

  1. HADOOP_HOME
  2. HCAT_HOME
  3. HADOOP_CONF_DIR

Misc Links