Lookup join is a commonly used feature in Flink SQL, and we have received many optimization requests for it. For example:
1. Suggest a hash partitioner on the left side of the lookup join to raise the cache hit ratio.

2. Solve the data skew problem introduced by the hash lookup join.

3. In the Hive dimension source, each task currently loads all data into its cache. With the hash partitioner introduced in point 1, each task only needs to cache its own partition of the data instead of the whole table.

4. Enable mini-batch optimization to reduce RPC calls.

We focus on point 1 in this FLIP, and will discuss points 2, 3 and 4 in later FLIPs.

Many lookup table sources introduce a cache to reduce RPC calls, such as the JDBC, CSV and HBase connectors.

For those connectors, we can raise the cache hit ratio by routing the same lookup keys to the same task instance. This is the purpose of this FLIP.
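To see why the routing matters, here is a small self-contained simulation (a sketch only, not Flink code; the class, parallelism and key space are made up for illustration). With round-robin routing, the same key ends up cached on several subtasks, so each of them pays its own cold miss; with hash partitioning, each key is cached on exactly one subtask.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Toy simulation (not Flink code): N parallel lookup-join subtasks, each with
// its own unbounded cache. Compare round-robin routing of input rows against
// hash partitioning on the lookup key.
public class CacheHitDemo {

    // Returns the cache hit ratio over the whole stream of lookup keys.
    static double hitRatio(List<Integer> keys, int tasks, boolean hashByKey) {
        List<Set<Integer>> caches = new ArrayList<>();
        for (int i = 0; i < tasks; i++) {
            caches.add(new HashSet<>());
        }
        int hits = 0;
        for (int i = 0; i < keys.size(); i++) {
            int key = keys.get(i);
            int task = hashByKey ? Math.floorMod(key, tasks) : i % tasks;
            // Set#add returns false when the key is already cached, i.e. a cache hit.
            if (!caches.get(task).add(key)) {
                hits++;
            }
        }
        return (double) hits / keys.size();
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed for a reproducible run
        List<Integer> keys = new ArrayList<>();
        for (int i = 0; i < 4000; i++) {
            keys.add(rnd.nextInt(100)); // 100 distinct customer ids
        }
        System.out.println("round-robin hit ratio:      " + hitRatio(keys, 4, false));
        System.out.println("hash-partitioned hit ratio: " + hitRatio(keys, 4, true));
    }
}
```

With hash partitioning there are at most 100 cold misses in total (one per distinct key), so the hit ratio is at least 0.975 here; round-robin pays a cold miss per (subtask, key) pair. The same effect explains point 3 above: each subtask's cache footprint shrinks to its own key partition.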

There are many similar requirements about hash lookup join on the user mailing list and in JIRA, for example:

  1. FLINK-23687
  2. FLINK-25396
  3. FLINK-25262

SQL Syntax

To enable the hash lookup join, the user only needs to specify a new hint (SHUFFLE_HASH) in the SELECT clause of the query, which is similar to Spark SQL[2].

SELECT /*+ SHUFFLE_HASH('Customers') */ o.order_id, o.total, c.country, c.zip
FROM Orders AS o
JOIN Customers FOR SYSTEM_TIME AS OF o.proc_time AS c
ON o.customer_id = c.id;


  1. The table name in the SHUFFLE_HASH hint is the name of the build table. Lookup join only supports the dimension table as the build table; the left side cannot be the build table.
  2. The hint only provides a suggestion to the optimizer; it is not an enforcer.

Proposed Changes

Define Hint Strategy

The SHUFFLE_HASH hint can only be applied to CORRELATE relations which satisfy the following conditions:

  1. The correlate is a lookup join; other correlates ignore the hint.
  2. The dimension table name of the correlate matches the table name given in the hint (e.g. "Customers").

The code below shows how we define the hint strategy for the hash lookup join.

SHUFFLE_HASH hint strategy (sketch; isLookupJoin() and withBuildTableName() are custom predicates introduced by this FLIP)

HintStrategyTable.builder()
        .hintStrategy(
                "SHUFFLE_HASH",
                HintPredicates.and(HintPredicates.CORRELATE, isLookupJoin(), withBuildTableName()))
        .build();


However, it has a blocker on the Calcite version upgrade: Calcite translates the above SQL into a Correlate instead of a Join, and Correlate was not a kind of RelNode that could carry RelHints until version 1.30.0.

I've reported CALCITE-4967 in Calcite; the fix has been merged and will be published in Calcite 1.30.0.

Hint Propagation in Optimizer

We need to ensure the hint is not lost before it is finally used to require the hash distribution on the input of the LookupJoin.

This requires some refactoring of existing rules and of FlinkLogicalJoin.

Refactor rules in temporal_join_rewrite phase

In the temporal_join_rewrite phase, the rules check whether a LogicalCorrelate is a temporal table join implemented as a lookup join. If yes, the rules translate it to a LogicalJoin.

Currently the rules call Calcite's RelBuilder#join, which produces a join without any hints, so the hint would be lost here.

Specifically, we need to refactor LogicalCorrelateToJoinFromLookupTableRuleWithFilter and LogicalCorrelateToJoinFromLookupTableRuleWithoutFilter.

Refactor FlinkLogicalJoin

The FlinkLogicalJoin constructor does not take any hints now, so hints are lost when converting a LogicalJoin to a FlinkLogicalJoin.

We need to refactor the FlinkLogicalJoin constructor accordingly.

Refactor FlinkLogicalJoinConverter

Refactor the FlinkLogicalJoinConverter rule to propagate the hints of the LogicalJoin. Besides, other places that create FlinkLogicalJoin instances also need to be updated.
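The pattern behind all three refactors above can be shown with a tiny model (illustrative only; these are not the real Calcite/Flink classes): when a rule rebuilds a relational node through a constructor or builder that does not accept hints, the hints silently disappear; the fix is to thread them through explicitly.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy model (not the real Calcite/Flink classes): a "join node" that carries
// hint strings, and two variants of a conversion rule that rebuilds it.
public class HintPropagationDemo {

    static class JoinNode {
        final List<String> hints;

        JoinNode(List<String> hints) {
            this.hints = hints;
        }
    }

    // Before the refactor: the target is built without the input's hints,
    // so the rebuilt node silently loses SHUFFLE_HASH.
    static JoinNode convertLosingHints(JoinNode in) {
        return new JoinNode(Collections.emptyList());
    }

    // After the refactor: the conversion copies the hints of the input node.
    static JoinNode convertKeepingHints(JoinNode in) {
        return new JoinNode(new ArrayList<>(in.hints));
    }

    public static void main(String[] args) {
        JoinNode join = new JoinNode(List.of("SHUFFLE_HASH"));
        System.out.println(convertLosingHints(join).hints);  // prints []
        System.out.println(convertKeepingHints(join).hints); // prints [SHUFFLE_HASH]
    }
}
```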

Use Hint to require Hash Distribution

LookupJoinRules check whether the FlinkLogicalJoin contains the SHUFFLE_HASH hint. If yes, the rules require the input to have a hash distribution on the join keys when converting the FlinkLogicalJoin to a LookupJoin.

Note: if the input stream is not insert-only, i.e. it may contain update_before, update_after or delete records, and its upsert key is different from the join key, then the update_before and update_after records of the same upsert key may be sent to different tasks after hash partitioning, which can lead to wrong results.

So the hash lookup join requires that the input stream be insert-only, or that its upsert key contain the hash keys.
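The hazard above can be sketched with a tiny routing model (illustrative only; the keys and the parallelism are made up). An UPDATE that changes the join key splits into an update_before carrying the old join key and an update_after carrying the new one, so hashing on the join key can separate the pair:

```java
// Toy model (illustrative names only): route changelog records to subtasks by
// hashing a key. An UPDATE that changes the join key splits into an
// update_before (-U) with the old key and an update_after (+U) with the new key.
public class UpsertRoutingDemo {

    static int route(String hashKey, int tasks) {
        return Math.floorMod(hashKey.hashCode(), tasks);
    }

    public static void main(String[] args) {
        int tasks = 4;
        // UPDATE on upsert key pk=1 changes the join key customer_id from "A" to "H".
        int taskOfUpdateBefore = route("A", tasks); // -U carries the old join key
        int taskOfUpdateAfter = route("H", tasks);  // +U carries the new join key
        System.out.println("-U routed to task " + taskOfUpdateBefore
                + ", +U routed to task " + taskOfUpdateAfter);
        // The two records land on different subtasks: one sees a retraction with
        // no matching insert, the other an insert with no retraction.

        // Hashing on the upsert key instead keeps the pair on the same subtask.
        System.out.println("-U and +U by upsert key: task " + route("pk=1", tasks)
                + " and task " + route("pk=1", tasks));
    }
}
```

This is why an insert-only input is always safe, while a changelog input is safe only when the upsert key contains the hash keys.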

Compatibility, Deprecation, and Migration Plan

Because the hash lookup join and the skew lookup join are only enabled by the hint syntax, existing jobs on older Flink versions are not affected. Besides, their behavior stays compatible after upgrading to the new version.

Test Plan

Each new feature would be covered by unit tests.

Besides, we will add integration tests for the connectors to verify that they cooperate with existing source/sink implementations.

Rejected Alternatives

Hint Syntax

We could use the name 'USE_HASH' instead, like the hint in Oracle[1].

SELECT /*+ USE_HASH('Customers') */ o.order_id, o.total, c.country, c.zip
FROM Orders AS o
JOIN Customers FOR SYSTEM_TIME AS OF o.proc_time AS c
ON o.customer_id = c.id;

SQL Server[4] uses the keyword HASH inside the join clause instead of a query hint, which is not a good choice for us, so we rejected it.


There is a simpler but slightly hacky implementation, which is also what we apply in our internal version.

That is, propagate the 'SHUFFLE_HASH' hint to the TableScan with the matching table name. In this way, the hint is not lost by the Flink optimizer before it is needed in LookupJoinRules.

LookupJoinRules would then only check whether the dimension table scan carries the 'SHUFFLE_HASH' hint. If yes, they require the input to have a hash distribution on the join keys.

Compared with the previous solution, this solution has two advantages:

  1. It does not need the Calcite version upgrade (see CALCITE-4967).
  2. It does not need code refactor to ensure the hint would not be missed by Flink optimizer.

However, this method is conceptually a bit hacky, because whether to enable the hash distribution is an attribute of the lookup join, not of the dimension table scan.

Anyway, the difference between the two solutions is only in the internal implementation and has no impact on users.


[1] Oracle USE_HASH hint

SELECT /*+ USE_HASH(l h) */ *
  FROM orders h, order_items l
  WHERE l.order_id = h.order_id
    AND l.order_id > 3500;

[2] Spark SHUFFLE_HASH hint

SELECT /*+ SHUFFLE_HASH(t1) */ * FROM t1 INNER JOIN t2 ON t1.key = t2.key;


[3] Impala SHUFFLE hint

SELECT straight_join weather.wind_velocity, geospatial.altitude
  FROM weather JOIN /* +SHUFFLE */ geospatial
  ON weather.lat = geospatial.lat AND weather.long = geospatial.long;

[4] SQL Server Hash Keyword

SELECT p.Name, pr.ProductReviewID
FROM Production.Product AS p
LEFT OUTER HASH JOIN Production.ProductReview AS pr
ON p.ProductID = pr.ProductID
ORDER BY ProductReviewID DESC;