
Status

Current state: [One of "Under Discussion", "Accepted", "Rejected"]

Discussion thread: 

JIRA or Github Issue: 

Released: <Doris Version>

Google Doc: <If the design in question is unclear or needs to be discussed and reviewed, a Google Doc can be used first to facilitate comments from others.>

Motivation

Doris' Unique-Key model is widely used in scenarios with update requirements, such as Flink-CDC synchronization, user profiling, and e-commerce orders, but Unique Key currently uses a Merge-On-Read implementation. Query performance is poor for the following reasons:

  1. Because merging happens on read, the query must read all the filtered data and perform a merge sort to determine which row is the latest. This merge sort incurs additional CPU cost

  2. Because it is impossible to confirm whether a row in a segment is the latest version of its key, Unique Key does not support push-down of aggregate-function predicates, which increases the amount of data read

The following figure is a simplified representation of the execution process of the Unique-Key model:

Related Research

We investigated how other mainstream data warehouses implement data-update models. Besides the Merge-On-Read used by Unique Key, the implementations basically fall into the following categories.

Merge-On-Write (Mark Delete and Insert)

  • Scheme overview:

    • On write, check via the primary key index whether the request is an update, and locate the file and row number where the overwritten key resides

    • Mark the key's old row as deleted by recording its row id in a bitmap; on read, filter out rows whose row ids are recorded in the bitmap

  • Advantages: no key comparison on read; all kinds of predicate push-down and secondary indexes are supported
  • Disadvantages: extra index lookups and bitmap updates on write
  • Representative systems: Hologres, Aliyun ADB, ByteHouse
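The mark-delete-and-insert flow above can be sketched as follows. This is an illustrative Python model with hypothetical names, not Doris code: a primary key index maps each key to its (segment, row) location, and an update marks the old location in a delete bitmap instead of rewriting any file.

```python
# Illustrative sketch of mark-delete-and-insert (hypothetical names, not Doris code).
class MergeOnWriteTable:
    def __init__(self):
        self.segments = []          # each segment is a list of (key, value) rows
        self.pk_index = {}          # primary key index: key -> (segment_id, row_id)
        self.delete_bitmap = set()  # (segment_id, row_id) pairs marked deleted

    def write_segment(self, rows):
        seg_id = len(self.segments)
        self.segments.append(rows)
        for row_id, (key, _) in enumerate(rows):
            old = self.pk_index.get(key)
            if old is not None:
                self.delete_bitmap.add(old)   # mark the overwritten row deleted
            self.pk_index[key] = (seg_id, row_id)

    def scan(self):
        # Reads need no key comparison or merge sort: just skip marked rows.
        for seg_id, rows in enumerate(self.segments):
            for row_id, (key, value) in enumerate(rows):
                if (seg_id, row_id) not in self.delete_bitmap:
                    yield key, value

t = MergeOnWriteTable()
t.write_segment([("a", 1), ("b", 2)])
t.write_segment([("b", 20), ("c", 3)])   # overwrites key "b"
assert sorted(t.scan()) == [("a", 1), ("b", 20), ("c", 3)]
```

The scan emits each segment's rows in storage order with only a set-membership test per row, which is why this scheme keeps predicate push-down and secondary indexes usable.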

Copy-On-Write

  • Scheme overview:

    • When writing/updating data, the original file is synchronously merged to generate a new version of the file. Even if only one byte of new data is committed, the entire column data file must be rewritten.

  • Advantages: highest query efficiency

  • Disadvantages: very high write cost; suitable for low-frequency bulk updates with high-frequency reads

  • Representative systems: Delta Lake, Hudi

Delta-Store

  • Scheme overview:

    • Data is divided into base data and delta data; each key is unique in the base data

    • When writing/updating data, first look up the key in the base data. If the key exists (an update), write the updated data to the delta store; on read, merge the delta store with the base data to obtain the latest data. If the key does not exist in the base data, write it to the memtable, which is flushed into the base data

  • Advantages: good write performance, efficient support for partial updates

  • Disadvantages: poor support for secondary indexes

  • Representative system: Kudu
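For contrast, the delta-store write/read paths above can be sketched with the same simplifying assumptions (plain dicts, hypothetical names; real systems such as Kudu are far more involved):

```python
# Illustrative sketch of a delta-store (not Kudu code): unique keys live in
# base data; updates to existing keys go to a delta store merged at read time.
class DeltaStoreTable:
    def __init__(self):
        self.base = {}    # key -> value; each key is unique in base data
        self.delta = {}   # key -> latest updated value

    def upsert(self, key, value):
        if key in self.base:
            self.delta[key] = value   # update path: write to the delta store
        else:
            self.base[key] = value    # insert path: flush into base data

    def read(self, key):
        # Merge the delta store over the base data to get the latest value.
        return self.delta.get(key, self.base.get(key))
```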

Solution Selection

Combining Doris' Unique Key usage scenarios (high real-time requirements for import, high import frequency, and the desire for queries to be as fast as possible), Merge-On-Write is the more suitable solution for optimization.

Detailed Design

Key Design Points

To implement the Mark Delete and Insert scheme, consider the following:

  1. Add a primary key index, which can quickly check whether the key exists, as well as the file location and line number when importing

  2. Add a delete bitmap to mark the old key as deleted when an update occurs

  3. The update of delete bitmap needs to consider transaction support and multi-version support

  4. Query adaptation delete bitmap

  5. Conflict handling between compaction and data import: between the time compaction reads a rowset's data and the time it commits, an ongoing import task may overwrite rows in that rowset; those rows must be re-marked as deleted in the newly generated rowset

Primary key index

Several key points of primary key index design and selection:

  1. High QPS (target: a single BE supports 100,000 QPS)

    1. Doris uses columnar storage. When the primary key consists of multiple columns, a point lookup would need to read the value of each key column one by one, involving multiple disk IOs. To improve primary key index performance, we store the encoded primary key in the index: when the primary key contains multiple columns, the columns are encoded together and stored as a single key in the primary key index.

  2. Do not consume internal memory excessively

    1. The primary key index uses a scheme similar to RocksDB's Partitioned Index (see https://github.com/facebook/rocksdb/wiki/Partitioned-Index-Filters ), created in the segment when the MemTable flushes. The main considerations for choosing this scheme are as follows:

    2. RocksDB has proved that this index scheme can support very high QPS. With a reasonably configured BlockCache that keeps the index-page cache hit rate above 90%, query performance can be guaranteed

    3. Because it is a file-based index, it can be loaded on demand, so memory consumption is controllable

    4. The scheme prefix-encodes the primary key data, which further saves space and reduces both memory cache usage and the amount of disk IO

  3. Impact of many small files on point-lookup performance under high-frequency import

    1. With the default configuration, a tablet may have up to 500 rowsets, and seeking them one by one may perform poorly

    2. To handle this, each tablet can maintain an additional CuckooFilter to accelerate point lookups on small files. Compared with a BloomFilter of the same size, a CuckooFilter has a lower false-positive rate, supports deletion, and can store a value (such as a row number). (This scheme is inspired by SlimDB: https://www.vldb.org/pvldb/vol10/p2037-ren.pdf ; a scheme similar to SlimDB's Multi Cuckoo Filter can quickly determine whether a key exists with lower memory and IO consumption)
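The key-encoding idea from point 1 above can be sketched as follows: the key columns are concatenated with an order-preserving encoding so that one byte-wise comparison replaces column-by-column reads. The terminator and escape bytes below are assumptions for illustration, not Doris's actual encoding.

```python
# Illustrative order-preserving encoding of a composite key (the actual
# Doris encoding may differ; this only demonstrates the idea).
def encode_key(columns):
    # Terminate each string column with \x00\x01 and escape embedded \x00
    # as \x00\xff, so byte-wise comparison of the encodings matches
    # column-wise comparison of the original tuples.
    out = bytearray()
    for col in columns:
        out += col.encode().replace(b"\x00", b"\x00\xff")
        out += b"\x00\x01"
    return bytes(out)

# ("a", "bc") sorts before ("ab", "c") column-wise, and so do the encodings.
assert encode_key(["a", "bc"]) < encode_key(["ab", "c"])
```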

A corresponding BloomFilter is added for each primary key index. The overall design is shown in the figure below

Delete Bitmap

Delete Bitmap is recorded as a Roaring Bitmap and stored in RocksDB together with the tablet meta; each segment corresponds to one bitmap. To preserve the original version semantics of Unique Key, the Delete Bitmap must also support multiple versions. Since materializing a full bitmap for every import version could consume a lot of memory, it is more appropriate to store an incremental modification for each version.
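The incremental multi-version idea can be sketched as follows (plain Python sets stand in for Roaring bitmaps; names are illustrative, not Doris's):

```python
# Illustration of multi-version delete bitmaps stored as per-version increments.
class DeleteBitmap:
    def __init__(self):
        # (segment_id, version) -> row ids newly deleted by that version only.
        self.increments = {}

    def mark_deleted(self, segment_id, version, row_id):
        self.increments.setdefault((segment_id, version), set()).add(row_id)

    def get(self, segment_id, query_version):
        # The full bitmap visible to a query is the union of all increments
        # for this segment with version <= query_version.
        full = set()
        for (seg, ver), rows in self.increments.items():
            if seg == segment_id and ver <= query_version:
                full |= rows
        return full
```

Because each version stores only its own deletions, an import that touches nothing in a segment produces no entry for it, matching the "no bitmap record for unmodified segments" behavior described below.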

Write process

The writing process of Delete Bitmap is shown in the following figure:

  1. DeltaWriter will first flush the data to disk

  2. In the publish phase, check all the keys in batches and update the bitmaps of the overwritten keys. In the figure below, the newly written rowset has version 8 and modifies data in 3 rowsets, so 3 bitmap modification records are generated

  3. Updating the bitmap in the publish phase guarantees that no new visible rowset appears between the batch key check and the bitmap update, ensuring the correctness of the bitmap update

  4. If a segment is not modified, there will be no bitmap record for the corresponding version. For example, segment1 of rowset1 has no bitmap corresponding to version 8

Read process

The reading process of Bitmap is shown in the following figure:

  1. A query requesting version 7 will only see the data visible at version 7

  2. When reading the data of rowset5, the bitmap increments generated at v6 and v7 are merged together to obtain the complete DeleteBitmap for version 7, which is used to filter the data

  3. In the example below, the version 8 import overwrites a row in segment2 of rowset1, but a query requesting version 7 can still read that row

  4. In high-frequency import scenarios there may be bitmaps for a large number of versions, and merging them can consume significant CPU, so we introduce an LRU cache so that each version's bitmap only needs to be merged once
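The caching in step 4 can be illustrated with `functools.lru_cache` standing in for the real LRU cache (the increment data and names are assumptions for illustration; Doris would key its cache by rowset/segment as well as version):

```python
from functools import lru_cache

# Per-version delete-bitmap increments for one segment (illustrative data;
# frozensets so the merged results are hashable and cacheable).
INCREMENTS = {6: frozenset({0}), 7: frozenset({2}), 8: frozenset({3})}

@lru_cache(maxsize=128)
def merged_bitmap(version):
    # Merge all increments with version <= the query version. Each distinct
    # version is computed once; later requests are served from the LRU cache.
    full = frozenset()
    for ver, rows in INCREMENTS.items():
        if ver <= version:
            full |= rows
    return full

assert merged_bitmap(7) == {0, 2}   # v6 and v7 increments merged
assert merged_bitmap(7) == {0, 2}   # second call is a cache hit
```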

Transaction support

DeltaWriter does not check keys or update the bitmap during the write phase; it does so in the publish phase

Query layer adaptation

Scan adaptation

  1. TabletReader obtains the DeleteBitmap of the specified version from TabletMeta, according to the version requested by the query

  2. RowsetReader passes the DeleteBitmap of the specified rowset to RowsetReaderContext

  3. The DeleteBitmap corresponding to each Segment is then passed to the SegmentIterator through StorageReadOptions

  4. SegmentIterator maintains a _row_bitmap for fast conditional filtering, and subtracts the incoming DeleteBitmap from it
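Steps 1-4 boil down to a set subtraction per segment; a minimal sketch (plain sets stand in for Roaring bitmaps, and the function name is illustrative, not Doris's):

```python
# Sketch: the segment iterator starts from a bitmap of all candidate rows,
# subtracts the delete bitmap, then applies any predicate filtering.
def segment_rows_to_read(num_rows, delete_bitmap, predicate_rows=None):
    row_bitmap = set(range(num_rows))   # _row_bitmap: all rows in the segment
    row_bitmap -= delete_bitmap         # drop rows marked deleted
    if predicate_rows is not None:
        row_bitmap &= predicate_rows    # then intersect with predicate matches
    return row_bitmap

assert segment_rows_to_read(5, {1, 3}) == {0, 2, 4}
```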

Adaptation for point lookup

  1. Doris' data reading path is currently designed for scans and has no point-lookup capability, so the following interfaces need to be added

  2. Add minkey/maxkey records to the meta of each Segment, and organize the key ranges of all segments in memory with an interval tree

  3. With the interval tree, we can quickly locate which segments an encoded_key may fall in

  4. Add a lookup_row_key interface to SegmentIterator for point lookups
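The segment pruning in steps 2-3 can be sketched as follows; a linear scan over the (minkey, maxkey) ranges stands in for the in-memory interval tree, which changes only the lookup complexity, not the result:

```python
# Sketch: locate which segments may contain an encoded key using each
# segment's (minkey, maxkey) from its meta. A real implementation would
# index the ranges (e.g. an interval tree); a linear scan shows the idea.
def candidate_segments(segments, encoded_key):
    # segments: list of (segment_id, minkey, maxkey) tuples
    return [sid for sid, lo, hi in segments if lo <= encoded_key <= hi]

segs = [(0, b"a", b"f"), (1, b"d", b"m"), (2, b"p", b"z")]
# Overlapping ranges mean a key may need checking in several segments.
assert candidate_segments(segs, b"e") == [0, 1]
```

Only the candidate segments then need an actual index probe (e.g. via a lookup_row_key-style interface), instead of probing every segment in the tablet.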

Handling Compaction and Write Conflicts

  1. Compaction normal flow

    1. When compaction reads data, it obtains the version Vx of the rowset currently being processed, and automatically filters out rows marked for deletion via the delete bitmap (see the query layer adaptation section above)

    2. After compaction finishes, all DeleteBitmap entries on the source rowsets with version less than or equal to Vx can be cleaned up

  2. Handling Compaction and Write Conflicts

    1. While compaction is executing, new import tasks may be committed; assume the corresponding version is Vy. If the write at Vy modifies rows in the compaction's source rowsets, those deletions are recorded in version Vy of the source rowsets' DeleteBitmap

    2. After compaction finishes, examine all DeleteBitmap entries on the source rowsets with version greater than Vx, and remap the row numbers in them to the corresponding segment row numbers in the newly generated rowset
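The conflict-handling step can be sketched as a row-location remapping (hypothetical names; plain sets stand in for bitmaps):

```python
# Sketch of post-compaction conflict handling (illustrative, not Doris code):
# delete-bitmap increments with version > Vx still reference row locations in
# the source rowsets; translate them to locations in the new rowset.
def remap_bitmaps(increments, vx, row_mapping):
    # increments:  {version: {(old_seg, old_row), ...}} on the source rowsets
    # row_mapping: (old_seg, old_row) -> (new_seg, new_row) in the new rowset
    remapped = {}
    for version, rows in increments.items():
        if version > vx:   # bitmaps <= Vx were already applied by compaction
            remapped[version] = {row_mapping[loc] for loc in rows}
    return remapped

inc = {7: {(0, 5)}, 9: {(1, 2)}}
mapping = {(0, 5): (0, 3), (1, 2): (0, 10)}
# Only the version-9 increment (> Vx = 8) needs remapping onto the new rowset.
assert remap_bitmaps(inc, vx=8, row_mapping=mapping) == {9: {(0, 10)}}
```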

Scheduling

Specific implementation steps and an approximate schedule (to be added).
