IDIEP-32

Author:
Sponsor:
Created: 06 Mar 2019
Status: DRAFT

Motivation

Currently, batch updates are not implemented at the page memory level in Ignite: internal structures such as BPlusTree and FreeList do not support them. The performance of batch operations (putAll, DataStreamer, preloader) can be improved by implementing batch updates at the page memory level.

Competitive analysis

Profiling

Profiling the current rebalancing process shows that most of the time is spent working with the FreeList and the B+ Tree (details).

Process overview

Currently, when a batch of cache entries is updated in off-heap memory, each entry is processed separately. Such an update in PageMemory consists of the following steps:

  1. Search for the key in the B+ Tree
  2. Store the key-value pair to a data page
  3. Insert/update the key in a B+ Tree page

To prevent a redundant secondary search in the B+ Tree, the invoke operation was introduced.

Invoke in B+ Tree

Let's describe the B+ Tree in more detail to understand the need for the invoke operation.

The keys of the tree (hashes) are stored on B+ Tree pages (index pages), while the cache key-value pairs themselves are stored on data pages. Each item on an index page contains a link to a data page item. In general, a B+ Tree supports find, put and remove operations. For put and remove, the point of insertion/update/removal must be found first. So, a cache entry update without the invoke operation can look like this:

  • Search the B+ Tree for the link to the old key-value pair (find)
  • If the new value has the same length as the old one - the key-value pair is simply updated in place on the data page
  • If the length differs - the link to it changes:
    • Store the new key-value pair on a data page
    • Put the B+ Tree key (with a "secondary" find) to update the link to the data page item
    • Remove the old key-value pair from the data page

The invoke operation combines these steps: it performs a single B+ Tree traversal and applies the modification in place at the found position, so no secondary find is needed.
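A minimal sketch of the idea, using a TreeMap as a stand-in for the B+ Tree (names are hypothetical, not Ignite's internal API): the key is located once, and a closure decides at that position whether to insert, replace or remove the row.

import java.util.TreeMap;

/**
 * Illustration only: an invoke-style update on a sorted map standing in for the B+ Tree.
 * Names are hypothetical and do not match Ignite internals.
 */
final class InvokeSketch {
    /** Closure applied at the position found by a single logical search. */
    interface InvokeClosure<V> {
        /** @param oldVal Current value, or null if the key is absent.
         *  @return New value to store, or null to remove the entry. */
        V apply(V oldVal);
    }

    /** The caller performs one lookup; the closure decides insert/replace/remove at that position. */
    static <K, V> void invoke(TreeMap<K, V> tree, K key, InvokeClosure<V> clo) {
        V newVal = clo.apply(tree.get(key)); // the only "find" issued by the caller

        // A real B+ Tree invoke keeps the found position latched and modifies it in place;
        // TreeMap has no such API, so put/remove below search again internally.
        if (newVal == null)
            tree.remove(key);
        else
            tree.put(key, newVal);
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> tree = new TreeMap<>();

        invoke(tree, 1, old -> "v1");             // insert
        invoke(tree, 1, old -> old + "-updated"); // update without a separate find + put by the caller

        System.out.println(tree);                 // {1=v1-updated}
    }
}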

Store key-value

Saving a key-value pair on a data page consists of the following operations (see the sketch after this list):

  1. Take a page with enough free space from the FreeList (if there is no such page - allocate a new one)
  2. Lock the page
  3. Write the cache entry data
  4. Update the page counters
  5. Unlock the page
  6. Put the page back into the FreeList based on the remaining free space
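A minimal sketch of this flow under assumed, simplified interfaces; the real FreeList and data page code in Ignite differs.

/**
 * Sketch of the single-row store flow described above.
 * All types and method names here are hypothetical, not Ignite's internal API.
 */
final class StoreRowSketch {
    /** Minimal view of a data page for this sketch. */
    interface DataPage {
        void lock();
        void unlock();
        /** Writes the row bytes and returns the item index inside the page. */
        int write(byte[] rowBytes);
        long pageId();
    }

    /** Minimal view of a free list for this sketch. */
    interface FreeList {
        /** Removes and returns a page with at least {@code size} free bytes, allocating one if needed. */
        DataPage takePage(int size);
        /** Puts the page back into the bucket that matches its remaining free space. */
        void put(DataPage page);
    }

    /** Stores one row following steps 1-6 above and returns a link to the written item. */
    static long storeRow(FreeList freeList, byte[] rowBytes) {
        DataPage page = freeList.takePage(rowBytes.length); // 1. take a page with enough space
        page.lock();                                        // 2. lock the page

        long link;
        try {
            int itemId = page.write(rowBytes);              // 3. write entry data, 4. update counters
            link = (page.pageId() << 8) | itemId;           //    toy link encoding: pageId + itemId
        }
        finally {
            page.unlock();                                  // 5. unlock the page
        }

        freeList.put(page);                                 // 6. re-bucket by remaining free space
        return link;
    }
}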

Advantages

Batch updates will improve:

  1. Average insertion time, by reducing the number of FreeList operations per data row
  2. Average search/update time in the B+ Tree

Proposed changes

The implementation should consist of two major related improvements:

  1. Batch writing to data pages
  2. Batch updates in B+ Tree

Batch writing to data pages

Divide the input data rows into two lists:

  1. Objects whose size is equal to or greater than the size of a single data page.
  2. All other objects, plus the remainders ("heads") of large objects.

Write the objects and fragments that occupy a whole page sequentially. The data page is taken from the "reuse" bucket; if there is no page in the reuse bucket, a new one is allocated.

For the remaining (regular) objects, including the remainders ("heads") of large objects, find a page with enough free space in the FreeList (or allocate a new one if there is no such page) and fill it up to the end.
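A minimal sketch of this split under assumed constants (the real per-page overhead and row layout differ):

import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of splitting a batch of rows for batch writing to data pages.
 * The page size and overhead constants are assumptions for illustration.
 */
final class BatchSplitSketch {
    static final int PAGE_SIZE = 4096;                      // page size assumed in the tests below
    static final int PAGE_OVERHEAD = 64;                    // hypothetical per-page header/item overhead
    static final int MAX_ROW_PER_PAGE = PAGE_SIZE - PAGE_OVERHEAD;

    static final class Split {
        final List<byte[]> largeRows = new ArrayList<>();   // written page-by-page from the reuse bucket
        final List<byte[]> regularRows = new ArrayList<>(); // packed into pages found via the FreeList
    }

    /**
     * Classifies rows for batch writing. Rows at least one page long are written sequentially,
     * full page by full page, to pages taken from the reuse bucket; the rest are packed into
     * partially filled pages found through the FreeList. (Cutting a large row into full-page
     * fragments plus a remaining "head", which also goes to the regular list, is omitted here.)
     */
    static Split split(List<byte[]> rows) {
        Split split = new Split();

        for (byte[] row : rows) {
            if (row.length >= MAX_ROW_PER_PAGE)
                split.largeRows.add(row);
            else
                split.regularRows.add(row);
        }

        return split;
    }
}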

Batch update in B+ Tree

TBD: describe the implementation.

Proposed plan

Overall changes to support batch updates in PageMemory can be divided into the following phases.

Phase 1: Batch insertion in FreeList to improve rebalancing

  • Implement an insertDataRows operation in the FreeList that inserts several data rows at once.
  • The preloader should insert a batch of data rows before initializing the cache entries. If a cache entry fails to initialize, the preloader should roll back the change and remove the pre-created data row (a sketch of this flow follows the list).
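A minimal sketch of the Phase 1 flow; the insertDataRows/removeDataRowByLink signatures below are placeholders for the proposed API, not the actual Ignite interfaces.

import java.util.Collection;

/**
 * Sketch of the Phase 1 preloader flow. All types and signatures here are hypothetical
 * stand-ins for the proposed FreeList batch operation.
 */
final class PreloadBatchSketch {
    interface DataRow {
        /** Link to the data page item assigned when the row is stored. */
        long link();
    }

    interface FreeList {
        /** Proposed batch operation: store several data rows at once. */
        void insertDataRows(Collection<? extends DataRow> rows);

        /** Removes a single pre-created data row by its link (used for rollback). */
        void removeDataRowByLink(long link);
    }

    /** Decides whether a pre-created row can be turned into a live cache entry. */
    interface EntryInitializer {
        boolean initEntry(DataRow row);
    }

    /**
     * Pre-creates data rows in a batch, then initializes the cache entries;
     * rows whose entries fail to initialize are rolled back and removed.
     */
    static void preloadBatch(FreeList freeList, Collection<? extends DataRow> rows, EntryInitializer init) {
        freeList.insertDataRows(rows);                    // batch write before touching cache entries

        for (DataRow row : rows) {
            if (!init.initEntry(row))                     // e.g. entry is obsolete or concurrently removed
                freeList.removeDataRowByLink(row.link()); // rollback: remove the pre-created row
        }
    }
}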

Phase 2: DataStreamer support

  • Add support for batch inserts into the FreeList in the isolated updater (similar to the preloader).

Phase 3: putAll support

  • Implement batch operations in the B+ Tree (findAll/putAll/removeAll/invokeAll).
  • Examine the performance difference between the following approaches and select the best one (a sketch of option C follows this list):
    A.  single updates (current approach)
    B.  sort + BPlusTree.invokeAll() + FreeList.insertDataRow
    C.  sort + BPlusTree.findAll + FreeList.insertDataRows + BPlusTree.putAll
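For illustration, option C could look roughly like the sketch below; every batch method shown is a proposed operation with made-up signatures, not existing Ignite API.

import java.util.Collection;
import java.util.Map;
import java.util.SortedMap;

/** Sketch of option C. All batch methods shown here are proposed/illustrative. */
final class PutAllOptionsSketch {
    interface Tree<K, R> {
        /** Proposed: locate existing rows for sorted keys in one ordered pass. */
        Map<K, Long> findAll(Collection<K> sortedKeys);

        /** Proposed: (re)link a sorted batch of keys to their new rows in one ordered pass. */
        void putAll(SortedMap<K, R> rows);
    }

    interface FreeList<R> {
        /** Proposed: store a batch of row payloads in data pages. */
        void insertDataRows(Collection<R> rows);
    }

    /** Option C: sort + findAll + insertDataRows + putAll. */
    static <K, R> void optionC(SortedMap<K, R> sortedBatch, Tree<K, R> tree, FreeList<R> freeList) {
        Map<K, Long> oldLinks = tree.findAll(sortedBatch.keySet()); // 1. one ordered pass over the tree
        freeList.insertDataRows(sortedBatch.values());              // 2. batch-write new row payloads
        tree.putAll(sortedBatch);                                   // 3. one ordered pass to update links
        // Old rows referenced by 'oldLinks' would then be freed; omitted in this sketch.
    }
}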

Phase 4: MVCC support

  • Add support for MVCC (TRANSACTIONAL_SNAPSHOT) cache mode.

Risks and Assumptions

  1. BPlusTree batch operations require ordered keys; moreover, an attempt to lock the same keys simultaneously in different orders leads to a deadlock, so batch insertion into page memory must be performed on unlocked entries. Alternatively, keys passed in batches from different components (preloader, DataStreamer, putAll) should be locked in the same order (see the sketch after this list).
  2. Heap usage/GC pressure.
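To illustrate risk 1: a common way to avoid such deadlocks is to acquire per-key locks in one global order (for example, sorted), regardless of which component submits the batch. A minimal, Ignite-agnostic sketch:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Deque;
import java.util.List;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Illustration of risk 1: two batches that lock the same keys in different orders can deadlock;
 * acquiring per-key locks in a single global (sorted) order avoids it. Not Ignite code.
 */
final class OrderedLockingSketch {
    private final ConcurrentMap<Integer, ReentrantLock> locks = new ConcurrentHashMap<>();

    private ReentrantLock lockFor(int key) {
        return locks.computeIfAbsent(key, k -> new ReentrantLock());
    }

    /** Locks all keys of the batch in ascending order, runs the update, then releases the locks. */
    void updateBatch(Collection<Integer> keys, Runnable update) {
        List<Integer> ordered = new ArrayList<>(new TreeSet<>(keys)); // deduplicate + global order

        Deque<ReentrantLock> taken = new ArrayDeque<>();
        try {
            for (Integer key : ordered) {
                ReentrantLock lock = lockFor(key);
                lock.lock();
                taken.push(lock);
            }

            update.run();
        }
        finally {
            while (!taken.isEmpty())
                taken.pop().unlock();
        }
    }
}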

Prototype testing results

For testing purposes, a prototype with a simplified Phase 1 implementation was created; it includes the FreeList optimization (batch writing to data pages) but not the B+ Tree optimization (searching for and inserting a range of keys). The rebalancing process was chosen as the easiest and most suitable scenario for testing batch inserts in PageMemory.

Synthetic testing results.

The microbenchmark prepares a supply message and measures the time spent by the demander to handle it.

Parameters: 1 node, 1 cache, 1 partition, 100 objects, unlimited message size, 4 KB page size.

Entry size (bytes)               Time improvement (%)
44-104                           43.4
140-340                          37.1
340-740                          33.9
740-1240                         28.2
1240-3040                        05.4
2000-3000 / 1040-8040            10.1
4040-16040 (fragmented)          8.6
100-32000 (fragmented mostly)    1.1

Testing on dedicated servers

The total rebalancing time was checked on the following configuration:

Cluster: 2 nodes
Cache: transactional, partitioned, 1024 partitions, 1 backup
Data size: 40 GB
Page size: 4096 bytes
Rebalance message size: 512 KB
Count of prefetch messages: 10
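For reference, a cache and data storage configuration roughly matching this setup might look like the sketch below (the cache name is assumed, and the rebalance-related setters are deprecated or relocated in some Ignite versions).

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

/** Rough approximation of the test configuration above; the exact benchmark settings are not published here. */
final class RebalanceTestConfig {
    static IgniteConfiguration config() {
        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("test-cache")
            .setCacheMode(CacheMode.PARTITIONED)
            .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
            .setBackups(1)
            .setAffinity(new RendezvousAffinityFunction(false, 1024)) // 1024 partitions
            // Rebalance tuning; these setters are deprecated/moved in newer Ignite versions.
            .setRebalanceBatchSize(512 * 1024)                        // 512 KB rebalance message
            .setRebalanceBatchesPrefetchCount(10);                    // 10 prefetch messages

        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setPageSize(4096);                                       // 4096-byte pages

        return new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg)
            .setCacheConfiguration(cacheCfg);
    }
}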

The improvement in rebalancing time with batch insertion is most noticeable when writing small objects and decreases for larger objects.

Entry size (bytes)    Rebalancing time improvement (%)
140-240               22
240-540               19
500-800               9.5
700-800               8
800-1200              2

Discussion Links

// Links to discussions on the devlist, if applicable.

Reference Links

// Links to various reference documents, if applicable.

Tickets

