
Use Cases

Inside Facebook, we are running out of power in our data center (physical cluster), and we need a bigger cluster.
We could divide the cluster into multiple clusters - multiple Hive instances, multiple MapReduce clusters, and multiple
DFS instances. But this would put a burden on the users, who would need to know which cluster to use; it would be very
difficult to support joins across tables in different clusters; and it would lead to a lot of duplication of data in the
long run. To get around these problems, we are planning to extend Hive to span multiple data centers and, in the long
term, make the existence of multiple clusters transparent to the end users. Note that, even today, different
partitions/tables can span multiple DFS instances, and Hive does not enforce any restrictions; those DFS instances can
also be in different data centers. However, a single table/partition can only have a single location, and we need to
enhance this. We will not be able to partition our warehouse cluster into multiple disjoint clusters, so some
tables/partitions will have to be present in multiple clusters.

To do so, we need to make some changes in Hive, and this document primarily describes those changes. The changes should
be generic enough that they can also be used by others (outside Facebook) who have such a requirement. The following
restrictions have been imposed to simplify the problem:

  • There will be a single Hive instance, possibly spanning multiple clusters (both DFS and MapReduce).
  • There will be a single Hive metastore to keep track of the table/partition locations across the different clusters.
  • A table/partition can exist in more than one cluster. A table will have a single primary cluster, and can have multiple
    secondary clusters.
  • A table's/partition's metadata will be enhanced to support multiple clusters/locations.
  • All the data for a table is available in its primary cluster, while a subset may be available in a secondary cluster.
    However, an object (unpartitioned table or partition) is either fully present or fully absent in a secondary cluster;
    it is not possible to have partial data of a partition in a secondary cluster.
  • The user can only update a table (or its partitions) in its primary cluster.
  • A new mapping will be added: Cluster -> JobTracker.
  • By default, the user will not specify any cluster for the session, and the behavior will be as follows (see the
    sketch after the examples below):
    • The query will be processed in a single cluster, using the JobTracker for that cluster.
    • If the primary cluster of any output table differs from the query-processing cluster, an error is thrown.
      So, a multi-table insert with output tables belonging to different primary clusters will always fail.
    • If an input table's primary cluster differs from the query-processing cluster, the query will only succeed
      if all the partitions of that input table are also present in the query-processing cluster.
    • If an existing output is specified, the primary cluster of that output will be used.
    • If the specified output is a new table, the output is not used in determining the query-processing cluster.
    • If no output is specified (or the output is a new table), and the query has multiple inputs, the primary clusters
      of all the input tables are tried one by one until a valid cluster is found.
  • A few examples will illustrate the scenario better:
  • Say T11, T12, T21, and T31 are tables belonging to clusters C1, C1, C2, and C3 respectively, and none of them has a
    secondary cluster.
    • The query 'select .. from T11 .. ' will be processed in C1
    • The query 'select .. from T11 join T12 .. ' will be processed in C1
    • The query 'select .. from T21 .. ' will be processed in C2
    • The query 'select .. from T11 join T21 .. ' will fail
    • 'Insert .. T13 select .. from T11 ..' will be processed in C1, and T13 will be created in C1
    • 'Insert .. T21 select .. from T11 ..' will fail
  • If we change the example slightly:
  • Say T11, T12, T21, and T31 are tables belonging to clusters C1, C1, C2, and C3 respectively, but
    T11's secondary cluster is C2 (and all the data for T11 is also present in C2).
    • The query 'select .. from T11 .. ' will be processed in C1
    • The query 'select .. from T11 join T12 .. ' will be processed in C1
    • The query 'select .. from T21 .. ' will be processed in C2
    • The query 'select .. from T11 join T21 .. ' will be processed in C2
    • The query 'select .. from T11 join T31 .. ' will fail
    • 'Insert .. T13 select .. from T11 ..' will be processed in C1, and T13 will be created in C1
    • 'Insert .. T21 select .. from T11 ..' will be processed in C2, and T21 will remain in C2

The same idea can be extended for partitioned tables.
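The selection rules above can be made concrete with a small sketch. The following Java is purely illustrative: the
HiveTable and ClusterResolver classes (and their methods) are hypothetical names invented for this document, not
existing Hive APIs, and it assumes a secondary cluster only counts as readable if it holds a full copy of the table.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    /** Hypothetical model: a table has one primary cluster and zero or more secondaries. */
    class HiveTable {
        final String name;
        final String primaryCluster;
        final Set<String> secondaryClusters;

        HiveTable(String name, String primaryCluster, Set<String> secondaryClusters) {
            this.name = name;
            this.primaryCluster = primaryCluster;
            this.secondaryClusters = secondaryClusters;
        }

        /** A cluster can serve this table as an input if it is the primary or a (full-copy) secondary. */
        boolean readableOn(String cluster) {
            return primaryCluster.equals(cluster) || secondaryClusters.contains(cluster);
        }
    }

    public class ClusterResolver {
        /**
         * Default cluster selection: an existing output pins the query to its
         * primary cluster; otherwise each input's primary cluster is tried in
         * turn until one can read every input. Returns null when no valid
         * cluster exists, in which case the query fails.
         */
        static String resolveProcessingCluster(List<HiveTable> inputs, HiveTable existingOutput) {
            List<String> candidates = new ArrayList<>();
            if (existingOutput != null) {
                candidates.add(existingOutput.primaryCluster); // output's primary cluster wins
            } else {
                for (HiveTable in : inputs) {
                    candidates.add(in.primaryCluster);         // try input primaries one by one
                }
            }
            for (String c : candidates) {
                boolean allReadable = true;
                for (HiveTable in : inputs) {
                    if (!in.readableOn(c)) { allReadable = false; break; }
                }
                if (allReadable) {
                    return c;
                }
            }
            return null; // no cluster can see all inputs (and satisfy the output): error
        }

        public static void main(String[] args) {
            // The second example above: T11 is primary on C1 with secondary C2; T21 is primary on C2.
            HiveTable t11 = new HiveTable("T11", "C1", new HashSet<>(Arrays.asList("C2")));
            HiveTable t21 = new HiveTable("T21", "C2", new HashSet<>());
            // 'select .. from T11 join T21 ..': C1 is tried first but cannot read T21,
            // then C2, which can read both inputs.
            System.out.println(resolveProcessingCluster(Arrays.asList(t11, t21), null)); // prints C2
        }
    }

Running this against the first example (where T11 has no secondary clusters) returns null for the same join, matching
the 'will fail' case above.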

  • The user can also decide to run in a particular cluster:
    • Use cluster <ClusterName>
  • In that case, the system will not attempt to choose a cluster for the user, but will only check whether the query
    can run in <ClusterName>. If it can, the query will succeed; otherwise, it will fail.
  • Eventually, Hive will provide utilities to copy a table/partition from the primary cluster to the secondary
    clusters. In the first cut, the user needs to do this operation outside Hive (one simple way is to distcp the
    partition from the primary to the secondary cluster, and then update the metadata directly via the thrift API,
    as sketched below).
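Until that utility exists, the manual first-cut procedure could look roughly like the sketch below. It uses the
existing HiveMetaStoreClient thrift wrapper; the database, table, partition value, and NameNode paths are placeholders.
Note that because a partition currently has only a single location, this re-points the partition to the secondary
cluster rather than registering a second copy; true multi-location metadata is what this document proposes.

    import java.util.Arrays;
    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.hive.metastore.api.Partition;

    public class CopyPartitionToSecondary {
        public static void main(String[] args) throws Exception {
            // Step 1 (outside this program): copy the partition data, e.g.
            //   hadoop distcp hdfs://primary-nn/warehouse/t1/ds=2011-01-01 \
            //                 hdfs://secondary-nn/warehouse/t1/ds=2011-01-01

            // Step 2: update the partition metadata through the thrift API.
            HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
            Partition p = client.getPartition("default", "t1", Arrays.asList("2011-01-01"));
            p.getSd().setLocation("hdfs://secondary-nn/warehouse/t1/ds=2011-01-01");
            client.alter_partition("default", "t1", p);
            client.close();
        }
    }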


We are planning to make Hive run across multiple data centers (physical clusters), and we prefer to use the Hive
metastore to provide a unified namespace.

Tables/partitions can exist in more than one cluster, and one of those
clusters is designated as the primary cluster. The primary cluster is a
table-level property. Saying that table T1's primary cluster is C1 means:
1) C1 contains all the data that is available in any other cluster.
2) Writes to T1 are only allowed in C1 (though we need to allow
exceptions here).
3) New partitions of T1 can only be created in C1.
4) All data changes to T1 made in the primary cluster should be
replicated to the secondary clusters, if there are any. However, there
should be a configuration option to disable this, as there are some
exceptional situations.
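As a loose illustration only (the actual metadata change is still to be designed), the primary/secondary cluster
properties could be prototyped today by stashing them in the table's parameters map through the existing metastore
API. The parameter keys primary_cluster and secondary_clusters below are made up for this sketch.

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.hive.metastore.api.Table;

    public class TagPrimaryCluster {
        public static void main(String[] args) throws Exception {
            HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
            Table t1 = client.getTable("default", "t1");
            // Hypothetical parameter keys; the real schema change would add
            // first-class cluster fields rather than free-form parameters.
            t1.getParameters().put("primary_cluster", "C1");
            t1.getParameters().put("secondary_clusters", "C2,C3");
            client.alter_table("default", "t1", t1);
            client.close();
        }
    }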

The first thing that needs to be done is to give the Hive metastore a
concept of a cluster. That also means every thrift call to the metastore
needs to carry a cluster parameter. We have three options here:
1) Add a cluster parameter to the existing thrift interfaces.
or
2) Add new interfaces that provide exactly the same functionality as the
old ones but under different names (perhaps with an _on_cluster suffix),
and have them take a cluster parameter.
or
3) Overload the database name to carry the cluster name, and allow a
table to co-exist in multiple databases. But that requires promoting
tables to top-level citizens and demoting databases. For example, "show
tables" used to scan all tables in the current database, but would now
need to scan all tables in all databases.
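To make the trade-off concrete, here is roughly what options 1 and 2 would look like at the Java interface level,
using get_table as a representative call. These signatures are illustrative only, not a proposal for the final
thrift IDL.

    import org.apache.hadoop.hive.metastore.api.Table;

    // Option 1: change the existing call in place to carry a cluster parameter.
    // Simple, but every existing caller breaks (or a server-side default is needed).
    interface MetastoreOption1 {
        Table get_table(String clusterName, String dbName, String tableName);
    }

    // Option 2: keep the old call intact and add a parallel *_on_cluster call.
    // Backward compatible, but doubles the interface surface.
    interface MetastoreOption2 {
        Table get_table(String dbName, String tableName);  // unchanged
        Table get_table_on_cluster(String clusterName, String dbName, String tableName);
    }

    // Option 3 adds no new signatures: the existing dbName argument is overloaded
    // to carry the cluster name, at the cost of letting a table co-exist in
    // multiple "databases" and changing the semantics of commands like "show tables".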

We would like to get more ideas about which option to choose, and we are
definitely open to other alternatives that we may have missed here.
