...

Motivation

Position delete is a way to implement the Merge-On-Read (MOR) structure and has been adopted by other formats such as Iceberg[1] and Delta[2]. By combining it with Paimon's LSM tree, we can create a new position-deletion mode, called `deletion vectors mode`, that is unique to Paimon.

Under this mode, extra overhead (lookup and writing the delete file) is introduced during writing, but during reading, data can be retrieved directly using "data + filter with deletion vector", avoiding additional merge costs between different files. Furthermore, this mode can be easily integrated into native engine solutions like Spark + Gluten[3] in the future, thereby significantly enhancing read performance.

Goals

Must

  1. Data read and write operations are accurate.
  2. Readers can directly obtain the final data through "data + filter with position delete" without additional merging.

Should

  1. The number of delete files written each time is controllable.
  2. The additional overhead caused by writing is controllable.
  3. Read performance is superior to the original LSM merge.
  4. Unused delete files can be automatically cleaned up.

Implementation

1. Delete File

A delete file is used to mark deletions in the original data files. The following figure illustrates how data is updated and deleted under the delete file mode:


Currently, there are two ways to represent the deletion of records:

  • Position delete: marking the specific row in a file as deleted.
  • Equality delete: writing a filter directly to represent the deletion.


Taking into account:

  • Paimon can obtain the old records during lookup compaction.
  • Inserts in Paimon may also result in updates (deletes), which are difficult to represent using equality delete.
  • Position delete is sufficiently efficient for the reader.

...

Therefore, we do not consider equality delete and will only implement the delete file using position deletes. There are three design approaches, as follows:

1.1. Approach 1


Store deletes as a list<file_name, pos>, sorted first by file_name and then by pos.

...

  • High redundancy, with the file_name being repeated extensively.
  • When reading, it is necessary to read all the delete files first, and then construct the bitmap for the corresponding data file.


Approach 1 is inefficient, so we do not choose it. Approach 2 and Approach 3 both store the bitmap directly in the delete file, but their implementations differ.

1.2. Approach 2 (picked)


One delete file per bucket, with a structure of map<file_name, bitmap>. When reading a specific data file, read the delete file and construct the map<file_name, bitmap>, then get the corresponding bitmap by file_name.

...

  • Reading and writing of the delete file are at the bucket level.
  • In extreme cases, if the deletion is distributed across all buckets, the delete files for all buckets will need to be rewritten.

1.3. Approach 3


One delete file per write, with a structure of list<bitmap>, plus additional metadata <delete file name, offset, size> pointing to its bitmap (this structure is also called a deletion vector).

When reading a specific data file, obtain the delete file's name from the metadata, and then retrieve the corresponding bitmap according to the offset + size.

...

  • More changes to the Paimon protocol are needed: a file becomes a tuple <data_file, delete_meta>, and the logic for cleaning up delete files is more complex.
  • When writing, it is necessary to merge the bitmaps generated by each bucket into a single delete file.
  • In extreme cases, if there are deletions with every write, then a new delete file will be generated with each write operation (however, the number is bounded, because every full compaction invalidates all delete files).

1.4. Test

Before deciding on which approach to go with, let's first conduct a performance test on bitmaps, based on org.roaringbitmap.RoaringBitmap[4]. The reasons for choosing it are as follows:

...

| data rate / max num | add (ms) | serialization (ms) | deserialization (ms) | file size (MB) | contains (ms) |
|---------------------|----------|--------------------|----------------------|----------------|---------------|
| 20% / 2,000,000     | 43       | 5                  | 26                   | 0.24           | 7             |
| 50% / 2,000,000     | 47       | 3                  | 52                   | 0.24           | 5             |
| 80% / 2,000,000     | 57       | 1                  | 24                   | 0.24           | 8             |
| 20% / 20,000,000    | 450      | 13                 | 247                  | 2.4            | 49            |
| 50% / 20,000,000    | 629      | 6                  | 222                  | 2.4            | 76            |
| 80% / 20,000,000    | 1040     | 5                  | 222                  | 2.4            | 121           |
| 20% / 200,000,000   | 5079     | 44                 | 2262                 | 24             | 442           |
| 50% / 200,000,000   | 9469     | 43                 | 2773                 | 24             | 1107          |
| 80% / 200,000,000   | 13625    | 38                 | 2233                 | 24             | 1799          |
| 20% / 2,000,000,000 | 93753    | 568                | 22290                | 239            | 5747          |
| 50% / 2,000,000,000 | 166070   | 679                | 22339                | 239            | 14735         |
| 80% / 2,000,000,000 | 218233   | 553                | 22684                | 239            | 26504         |
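
For reference, the numbers above can be reproduced with a micro-benchmark along the following lines. This is a rough sketch under an assumed methodology (add random positions at the given rate, serialize, deserialize, then probe with contains); absolute numbers depend on hardware.

Code Block
languagejava
titleRoaringBitmapBenchmark.java
import org.roaringbitmap.RoaringBitmap;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Random;

// Sketch of the measurement, not the exact benchmark used for the table above.
public class RoaringBitmapBenchmark {

    public static void main(String[] args) throws IOException {
        int maxNum = 20_000_000; // "max num" column
        double rate = 0.2;       // "data rate" column
        Random random = new Random(42);

        RoaringBitmap bitmap = new RoaringBitmap();
        long t0 = System.currentTimeMillis();
        for (int i = 0; i < maxNum; i++) {
            if (random.nextDouble() < rate) {
                bitmap.add(i); // add(ms)
            }
        }
        long t1 = System.currentTimeMillis();

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bitmap.serialize(new DataOutputStream(bos)); // serialization(ms), file size(MB)
        byte[] bytes = bos.toByteArray();
        long t2 = System.currentTimeMillis();

        RoaringBitmap read = new RoaringBitmap();
        read.deserialize(new DataInputStream(new ByteArrayInputStream(bytes))); // deserialization(ms)
        long t3 = System.currentTimeMillis();

        boolean hit = false;
        for (int i = 0; i < maxNum; i++) {
            hit |= read.contains(i); // contains(ms)
        }
        long t4 = System.currentTimeMillis();

        System.out.printf(
                "add=%dms serialize=%dms size=%.2fMB deserialize=%dms contains=%dms (hit=%b)%n",
                t1 - t0, t2 - t1, bytes.length / 1024.0 / 1024.0, t3 - t2, t4 - t3, hit);
    }
}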


To summarize:

  • Serialization and deserialization time, file size, and add/contains cost are all roughly proportional to the amount of data.
  • When the data volume reaches 2 billion, it is essentially unusable.


Let's make some choices:


1. RoaringBitmap or Roaring64NavigableMap?

...

Therefore, considering both implementation and performance aspects, Approach 2 is ultimately chosen.

2. Protocol design

2.1. Layout

Reuse the current index layout and simply treat the deletionVectors index as a new index file type.

...

2.2. Deletion vectors index file encoding

Like the hash index, there is one deletionVector index per bucket. Therefore, a deletionVector index file needs to contain the bitmaps of multiple files in the same bucket; its structure is essentially a map<fileName, bitmap>. To support high-performance reads, we designed the following file encoding to store it:

In IndexFileMeta:

{
  "org.apache.paimon.avro.generated.record": {
    "_VERSION": 1,
    "_KIND": 0,
    "_PARTITION": "\u0000\u0000\u0000\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000p\u0000\u0000\u0000\u0000\u0000\u0000",
    "_BUCKET": 0,
    "_TYPE": "DELETION_VECTORS",
    "_FILE_NAME": "index-32f16270-5a81-4e5e-9f93-0e096b8b58d3-0",
    "_FILE_SIZE": x,
    "_ROW_COUNT": count of the map,
    "_DELETION_VECTORS_RANGES": "binary Map<String, Pair<Integer, Integer>>", where the key is the fileName and the value is <start offset of the serialized bitmap in the index file, size of the serialized bitmap>
  }
}

In IndexFile:

  • First, record the version with one byte.
  • Then, record <serialized bitmap's size, serialized bitmap, serialized bitmap's checksum> in sequence.

For each serialized bitmap:

  • First, record a constant magic number as an int.
  • Then, record the serialized bitmap.
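
As an illustration of this encoding, below is a minimal sketch of a writer that produces the index file body together with the <offset, size> ranges recorded in _DELETION_VECTORS_RANGES. The concrete magic number, the use of CRC32 as the checksum, and whether the recorded offset points at the size field are assumptions of this sketch, not final protocol choices.

Code Block
languagejava
titleDeletionVectorsIndexEncoder.java
import org.roaringbitmap.RoaringBitmap;

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.zip.CRC32;

// Sketch only: encodes map<fileName, bitmap> into the layout described above.
public class DeletionVectorsIndexEncoder {

    private static final byte VERSION = 1;              // version byte
    private static final int MAGIC_NUMBER = 1581511376; // placeholder constant

    /** Returns the encoded bytes and fills `ranges` with <start offset, size> per file. */
    public static byte[] encode(Map<String, RoaringBitmap> input, Map<String, int[]> ranges)
            throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);

        out.writeByte(VERSION); // first, the version byte

        for (Map.Entry<String, RoaringBitmap> entry : input.entrySet()) {
            // "serialized bitmap" = magic number (int) + RoaringBitmap bytes
            ByteArrayOutputStream bitmapBuffer = new ByteArrayOutputStream();
            DataOutputStream bitmapOut = new DataOutputStream(bitmapBuffer);
            bitmapOut.writeInt(MAGIC_NUMBER);
            entry.getValue().serialize(bitmapOut);
            byte[] serialized = bitmapBuffer.toByteArray();

            int start = out.size();             // offset recorded in _DELETION_VECTORS_RANGES
            out.writeInt(serialized.length);    // serialized bitmap's size
            out.write(serialized);              // serialized bitmap
            CRC32 crc = new CRC32();
            crc.update(serialized);
            out.writeInt((int) crc.getValue()); // serialized bitmap's checksum
            ranges.put(entry.getKey(), new int[] {start, serialized.length});
        }
        return buffer.toByteArray();
    }
}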


...


3. Write

3.1. Overview

Referring to the existing lookup mechanism, design a deleteFile generation mechanism based on compaction + lookup:

  1. New data is written to the level-0 layer.
  2. Perform a compaction with each write and force-merge the level-0 data, see ForceUpLevel0Compaction.
  3. Implement a merge wrapper like LookupDeleteFileMergeFunctionWrapper, which has the following characteristics (a rough sketch follows this list):
    • a. When records do not belong to the level-0 layer, no deletion is generated.
    • b. When records belong to the level-0 + level-x layers, no deletion is generated.
    • c. When records belong only to the level-0 layer, look up the other layers and update the map<fileName, bitmap>.
  4. After the compaction finishes, the bitmaps of the compaction's input (before) files are no longer useful and are removed from the map<fileName, bitmap>.
  5. Finally, write the new deleteFile and mark the old deleteFile as removed.
  6. For full compaction, an optimization can be made: directly clear the map<fileName, bitmap>.
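
The following is a rough, non-final sketch of the case analysis in step 3; the method and parameter names (onMergedKey, lookedUpFileName, ...) are hypothetical and only illustrate when a deletion is produced.

Code Block
languagejava
titleLookupDeletionSketch.java
import org.roaringbitmap.RoaringBitmap;

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of cases a/b/c above; not the real Paimon merge wrapper.
public class LookupDeletionSketch {

    // bucket-level map<fileName, bitmap> that will later be written as the delete file
    private final Map<String, RoaringBitmap> deletionVectors = new HashMap<>();

    /**
     * Called once per key produced by the compaction merge.
     *
     * @param inLevel0 whether the key has a copy in level-0
     * @param inHigherLevelOfThisMerge whether the key also has a copy in a higher level
     *     participating in this compaction
     * @param lookedUpFileName file containing the old copy found by lookup, or null
     * @param lookedUpPosition row position of that old copy
     */
    void onMergedKey(
            boolean inLevel0,
            boolean inHigherLevelOfThisMerge,
            String lookedUpFileName,
            long lookedUpPosition) {
        if (!inLevel0) {
            return; // case a: no new data for this key, no deletion produced
        }
        if (inHigherLevelOfThisMerge) {
            return; // case b: the old copy is rewritten by this compaction anyway
        }
        // case c: the key only exists in level-0, so the old copy (if any) found by
        // looking up the other levels must be marked as deleted
        if (lookedUpFileName != null) {
            deletionVectors
                    .computeIfAbsent(lookedUpFileName, f -> new RoaringBitmap())
                    .add((int) lookedUpPosition);
        }
    }
}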

Example:

Assume the LSM has four levels in total; the initial state is as follows (left to right: file content, LSM tree, delete file):


Then a new file, f7, is added to level-0. Suppose compaction picks the level-0 layer and the level-2 layer; the following changes will occur:

  • Key 1 belongs only to the level-0 layer, so we look up the old data and find that f1 also contains 1; f1's bitmap is therefore updated to add 1.
  • Keys 2 and 9 belong to both the level-0 and level-2 layers, so there is no need to modify the bitmap.
  • The bitmap of f6 can be removed because it has been compacted.
  • f5, f6, and f7 are marked as REMOVE, and the old delete file is marked as REMOVE.


Finally, assuming that compaction has generated f8 and f9, the final result is as follows:

  • f8 and f9 are marked as ADD, and the new delete file is marked as ADD.


3.2. Implementation

Considerations for implementation:

  • Currently, when 'changelog-producer' = 'lookup' is set, the data write behavior is not atomic but split into two steps: first, data is written to create snapshot1, then lookup compaction generates snapshot2. We need to consider the atomicity of this.
  • In most cases, data lands in level-0 first and is then rewritten. The writing overhead is somewhat high, and perhaps some optimization can be done in this regard.
  • If a changelog needs to be generated, in theory the changelog and the delete file can be produced simultaneously (without reading twice).
  • The merge engine is still available.

4. Read

4.1. Overview

  1. For each read task, load the corresponding deleteFile.
  2. Construct the map<fileName, bitmap> from deleteFile.
  3. Get the bitmap based on the filename, then pass it to the reader.
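
A rough sketch of these three steps, wired with the DeletionVectorsIndexFile and ApplyDeletionVectorReader interfaces described later under "Public Interfaces" (imports and error handling omitted; the wiring is illustrative, not the final implementation):

Code Block
languagejava
titleReadPathSketch.java
// Illustrative only; assumes the types defined in "Public Interfaces" below.
public class ReadPathSketch {

    public static RecordReader<KeyValue> createReader(
            DeletionVectorsIndexFile indexFile,
            String indexFileName,
            Map<String, Pair<Integer, Integer>> deletionVectorRanges,
            String dataFileName,
            RecordReader<KeyValue> dataReader) {
        // steps 1 + 2: load the delete file of this bucket and build map<fileName, bitmap>
        Map<String, DeletionVector> deletionVectors =
                indexFile.readAllDeletionVectors(indexFileName, deletionVectorRanges);
        // step 3: pick the bitmap of the data file being read and wrap the reader with it
        DeletionVector deletionVector = deletionVectors.get(dataFileName);
        return deletionVector == null
                ? dataReader
                : new ApplyDeletionVectorReader(dataReader, deletionVector);
    }
}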


5. Maintenance

5.1. Compaction

We can incorporate bitmap evaluation when picking files for compaction; for example, when the proportion of deleted rows in a file reaches, say, 50%, we can pick it for compaction.
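
For example, the pick condition could be a simple ratio check; the 50% threshold below is only an illustrative value, presumably exposed as a configuration option.

Code Block
languagejava
titleCompactionPickSketch.java
// Hypothetical pick rule: a data file qualifies for compaction once the share of
// its rows marked deleted in the deletion vector reaches the threshold.
public class CompactionPickSketch {

    private static final double DELETED_ROW_RATIO_THRESHOLD = 0.5; // assumed example value

    public static boolean shouldPick(long deletedRowCount, long totalRowCount) {
        return totalRowCount > 0
                && (double) deletedRowCount / totalRowCount >= DELETED_ROW_RATIO_THRESHOLD;
    }
}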

5.2. Expire

Determine whether a delete file can be removed based on the DELETE and ADD records in the deleteFileManifest.

6. Other considerations

  1. Impact on file meta: Currently, the stats (min, max, null count) in file meta are already unreliable, so no special handling will be performed for this aspect.
  2. ...



Public Interfaces

How to use

A new conf:

deletion-vectors.enabled: controls whether to enable deletion vectors mode, i.e., write the deletion vectors index and read with it, without merging.

Limitations:

  • Only supported for tables with primary keys
  • Only supports `changelog-producer` = `none` or `lookup`
  • `changelog-producer.lookup-wait` can't be `false`
  • `merge-engine` can't be `first-row`, because first-row reads already involve no merging, so deletion vectors are not needed
  • This mode filters out the data in level-0, so when using time travel to read an `APPEND` snapshot, there will be a data delay

Other:

  • Since there is no need to merge when reading, in this mode we can support filter pushdown on non-PK fields, and data reading concurrency is no longer limited!

Classes

Add RecordWithPositionIterator to get the row position:

Code Block
languagejava
titleRecordWithPositionIterator.java
public interface RecordWithPositionIterator<T> extends RecordReader.RecordIterator<T> {

    /**
     * Get the row position of the row returned by {@link RecordReader.RecordIterator#next}.
     *
     * @return the row position from 0 to the number of rows in the file
     */
    long returnedPosition();
}

Abstract an interface DeletionVector to represent the deletion vector, and provide a BitmapDeletionVector based on RoaringBitmap to implement it: 

Code Block
languagejava
titleDeletionVector.java
public interface DeletionVector {

    void delete(long position);

    boolean checkedDelete(long position);
    
    boolean isDeleted(long position);

    boolean isEmpty();

    byte[] serializeToBytes();

    DeletionVector deserializeFromBytes(byte[] bytes);
}
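
A minimal sketch of BitmapDeletionVector, assuming it simply wraps org.roaringbitmap.RoaringBitmap (positions are cast to int because RoaringBitmap stores 32-bit values; the exact serialization format may differ in the final implementation):

Code Block
languagejava
titleBitmapDeletionVector.java
import org.roaringbitmap.RoaringBitmap;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Sketch only: DeletionVector backed by a RoaringBitmap.
public class BitmapDeletionVector implements DeletionVector {

    private final RoaringBitmap bitmap;

    public BitmapDeletionVector() {
        this(new RoaringBitmap());
    }

    private BitmapDeletionVector(RoaringBitmap bitmap) {
        this.bitmap = bitmap;
    }

    @Override
    public void delete(long position) {
        bitmap.add(checkPosition(position));
    }

    @Override
    public boolean checkedDelete(long position) {
        return bitmap.checkedAdd(checkPosition(position)); // true if not already deleted
    }

    @Override
    public boolean isDeleted(long position) {
        return bitmap.contains(checkPosition(position));
    }

    @Override
    public boolean isEmpty() {
        return bitmap.isEmpty();
    }

    @Override
    public byte[] serializeToBytes() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            bitmap.serialize(new DataOutputStream(bos));
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public DeletionVector deserializeFromBytes(byte[] bytes) {
        try {
            RoaringBitmap result = new RoaringBitmap();
            result.deserialize(new DataInputStream(new ByteArrayInputStream(bytes)));
            return new BitmapDeletionVector(result);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private static int checkPosition(long position) {
        if (position > Integer.MAX_VALUE) {
            throw new IllegalArgumentException(
                    "RoaringBitmap only supports int positions: " + position);
        }
        return (int) position;
    }
}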

Add a DeletionVectorsIndexFile to read, write, and delete deletionVectors:

Code Block
languagejava
titleDeletionVectorsIndexFile.java
public class DeletionVectorsIndexFile {  
   
    public long fileSize(String fileName);
    
    public Map<String, DeletionVector> readAllDeletionVectors(String fileName, Map<String, Pair<Integer, Integer>> deletionVectorRanges);
    
    public DeletionVector readDeletionVector(String fileName, Pair<Integer, Integer> deletionVectorRange);

    public Pair<String, Map<String, Pair<Integer, Integer>>> write(Map<String, DeletionVector> input);

    public void delete(String fileName);
}

Add DeletionVectorsMaintainer to maintain deletion vectors:

Code Block
languagejava
titleDeletionVectorsMaintainer.java
public class DeletionVectorsMaintainer {

    public void notifyNewDeletion(String fileName, long position);

    public void removeDeletionVectorOf(String fileName);

    public List<IndexFileMeta> prepareCommit();

    public Optional<DeletionVector> deletionVectorOf(String fileName);
}
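
For illustration, a sketch of how a writer might drive this maintainer during a lookup compaction; the file names and row position are made up, and the surrounding wiring and imports are omitted.

Code Block
languagejava
titleDeletionVectorsMaintainerUsage.java
// Illustrative usage only, following the interface above.
public class DeletionVectorsMaintainerUsage {

    public static List<IndexFileMeta> exampleFlow(DeletionVectorsMaintainer maintainer) {
        // during lookup compaction: an old copy of a level-0 key was found in data file
        // "f1" at row position 100, so that row is marked as deleted
        maintainer.notifyNewDeletion("f1", 100L);

        // data file "f6" was rewritten by this compaction, so its deletion vector is dropped
        maintainer.removeDeletionVectorOf("f6");

        // at commit time, write the new deletion vectors index file(s) and return their
        // metadata to be committed together with the data files
        return maintainer.prepareCommit();
    }
}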

Add ApplyDeletionVectorReader, which implements RecordReader<KeyValue>, to read with a DeletionVector:

Code Block
languagejava
titleApplyDeletionVectorReader.java
public class ApplyDeletionVectorReader implements RecordReader<KeyValue> {

    public ApplyDeletionVectorReader(RecordReader<KeyValue> reader, DeletionVector deletionVector) {
        this.reader = reader;
        this.deletionVector = deletionVector;
    }

    @Nullable
    @Override
    public RecordIterator<KeyValue> readBatch() throws IOException {
        RecordIterator<KeyValue> batch = reader.readBatch();

        if (batch == null) {
            return null;
        }

        FileRecordIterator<KeyValue> batchWithPosition = (FileRecordIterator<KeyValue>) batch;

        return batchWithPosition.filter(
                a -> !deletionVector.isDeleted(batchWithPosition.returnedPosition()));
    }
    ...
}


Compatibility, Deprecation, and Migration Plan

Conversion between deletion vectors mode and original mode

  1. Original mode -> deletion vectors mode: perform a full compaction, then set `deletion-vectors.enabled` = `true`; time travel to snapshots created before the switch will be prohibited.
  2. Deletion vectors mode -> original mode: perform a full compaction, then set `deletion-vectors.enabled` = `false`; time travel to snapshots created before the switch will be prohibited.

Future work

  • Integrate deletion vectors with append tables
  • ..

...


[1]: https://github.com/apache/iceberg

...