Authors: Greg Harris, Ivan Yurchenko, Jorge Quilcate, Giuseppe Lillo, Anatolii Popov, Juha Mynttinen, Josep Prat, Filip Yonov

Status

Current state: Discarded

Discussion thread: here [Change the link from the KIP proposal email archive to your own email thread]

JIRA: KAFKA-19161

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

This KIP was discarded due to a design change in KIP-1163 which made it unnecessary.

Glossary

  • Diskless Topic: A topic which does not append to or serve from block storage devices.
  • Object Storage: A shared, durable, concurrent, and eventually consistent storage supporting arbitrary sized byte values and a minimal set of atomic operations: put, delete, list, and ranged get.
  • Object Key: A unique reference to an object within Object Storage.
  • Batch: A container for Kafka records and a unit of record representation in the network protocol and storage format in both status quo Kafka and diskless topics.
  • Shared Log Segment Object: An object containing a shared log segment for one or more diskless topic-partitions on the object storage. Contains record batches similar to classic Kafka topics.
  • Batch Coordinate: A reference to a record batch within a shared log segment object, at some byte range.
  • Diskless Batch Coordinator: Component serving as source of truth about batch coordinates and log segment objects. Establishes a global order of batches within a diskless topic, and serves coordinates to enable retrieval of batches.
  • Object Compaction: Distinct from log compaction. A background asynchronous process which reads from and writes to multiple shared log segment objects to manage already-written objects.
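To make the relationships between these terms concrete, the glossary can be sketched as a minimal data model. This is illustrative only; all type and field names are assumptions, not part of any proposed metadata or wire format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedLogSegmentObject:
    """An object in object storage holding batches for one or more partitions."""
    object_key: str        # unique reference within object storage
    size_bytes: int

@dataclass(frozen=True)
class BatchCoordinate:
    """Locates one record batch inside a shared log segment object."""
    topic: str
    partition: int
    base_offset: int       # first Kafka offset in the batch
    last_offset: int       # last Kafka offset in the batch
    object_key: str        # which shared log segment object holds the batch
    byte_offset: int       # start of the batch's byte range in the object
    byte_length: int

# Example: one 8 MiB object holding batches from two topic-partitions.
obj = SharedLogSegmentObject("segments/0001", 8 * 1024 * 1024)
coords = [
    BatchCoordinate("orders", 0, 100, 149, obj.object_key, 0, 4096),
    BatchCoordinate("payments", 3, 500, 519, obj.object_key, 4096, 2048),
]
```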

Motivation

KIP-1150: Diskless Topics introduces the concept of diskless topics, and KIP-1163: Diskless Core describes in detail how data is written to and read from them. According to these KIPs, batches remain permanently attached to the objects in which they were originally uploaded and committed. This is problematic for several reasons:

  1. Having many small objects instead of fewer bigger ones limits the possibility of sequential reads from the object storage. This increases the demand for parallel operations, the cost of object storage GET operations, and consume latency. It would be beneficial to merge several small (e.g. 8 MiB) recently uploaded objects into one big one (e.g. 100-1000 MiB) while laying out neighbouring batches sequentially.
  2. The cluster may benefit from merging neighbouring small batches themselves. Storing fewer distinct batches reduces the batch metadata overhead, and data compression within batches may benefit from merging as well. This is something that Apache Kafka permits; it is not implemented for classic topics, but would be beneficial for keeping the Diskless metadata size manageable.
  3. Performing topic compaction in Kafka's sense would effectively require rewriting batches.
  4. A batch may be deleted logically while its data still exists in the original object for the object's whole lifetime, because some other batch keeps the object alive, potentially forever. A topic-level setting may specify how promptly such data must be physically deleted. To satisfy this requirement, the object needs to be scanned and filtered for dead batches, or data needs to be grouped by retention properties into multiple objects.

Proposed Changes

The desired characteristics listed in the Motivation section can be achieved with object compaction. Compaction agents will run inside brokers (e.g. as a dedicated thread) and request compaction jobs from the Batch Coordinator. Each job may be focused on one or multiple tasks:

  1. Merge small freshly uploaded files (including batch merging).
  2. Perform a partition compaction for compacted topics.
  3. Enforce a deletion deadline.
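An agent's interaction with the coordinator could look roughly like the following sketch. The task names, the `next_job`/`commit` API, and the output key scheme are all assumptions for illustration; the actual interfaces would be defined separately.

```python
from dataclasses import dataclass
from enum import Enum, auto

class CompactionTask(Enum):
    MERGE_SMALL_OBJECTS = auto()   # merge freshly uploaded files, merging batches
    PARTITION_COMPACTION = auto()  # log compaction for compacted topics
    DELETION_DEADLINE = auto()     # physically remove logically deleted data

@dataclass
class CompactionJob:
    job_id: int
    tasks: set                     # one job may combine multiple tasks
    input_object_keys: list

def run_agent_once(coordinator):
    """One iteration of the broker-side compaction agent loop (sketch)."""
    job = coordinator.next_job()   # hypothetical call: ask the coordinator for work
    if job is None:
        return None
    # ... streaming rewrite of job.input_object_keys happens here ...
    output_keys = ["compacted/%d" % job.job_id]
    coordinator.commit(job.job_id, output_keys)  # atomic commit of the result
    return output_keys
```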

The compaction agent will perform the operation in a streaming manner (i.e. using as little local memory buffering as possible). Ordering batches by offset and grouping them by topic-partition in the input and output files will play a key role in this. During the job, one or multiple output files will be produced. After finishing the job, the compaction agent will commit the performed changes atomically to the Batch Coordinator.
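Because the inputs are already sorted by (topic, partition, offset), the streaming rewrite can be a k-way heap merge that holds only one pending entry per input in memory. A minimal sketch, using batch-coordinate tuples as stand-ins for real batch data:

```python
import heapq

def merge_sorted_inputs(inputs):
    """K-way merge of per-object batch listings, each already sorted by
    (topic, partition, base_offset). Yields entries in output order, so
    neighbouring batches of a partition end up laid out sequentially."""
    yield from heapq.merge(*inputs, key=lambda b: (b[0], b[1], b[2]))

# Each input is one shared log segment object's sorted batch listing.
obj_a = [("orders", 0, 100), ("orders", 0, 150), ("payments", 3, 500)]
obj_b = [("orders", 0, 125), ("payments", 3, 520)]
merged = list(merge_sorted_inputs([obj_a, obj_b]))
# merged is grouped by topic-partition and ordered by offset within each.
```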

When shared log segments are uploaded, they contain batches from multiple topic-partitions. There are two ways to proceed when we first merge them:

  1. Enforce that one object contains data from only one topic-partition, i.e. make them not shared any more.
  2. Keep the shared approach.

The former approach does not seem viable because the number of (usually relatively expensive) PUT operations to the object storage would grow significantly. The latter approach thus seems better. However, there is a caveat with compacted topics: batches in compacted topics tend to be relocated and rewritten much more often than in non-compacted topics, and this would cause unnecessary churn in bigger files where batches from compacted topics happen to be stored. There may be compromise approaches and optimizations, for example:

  1. Extract compacted partitions into exclusive objects, while keeping non-compacted ones in shared objects.
  2. Keep the shared approach, but limit the actual number of partitions in a single object.
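The two compromise approaches above can be combined into a simple packing rule: compacted partitions get exclusive objects, while non-compacted partitions are packed into shared objects with at most N partitions each. A sketch; the function and parameter names are assumptions, not a proposed API.

```python
def plan_output_objects(partitions, compacted_topics, max_partitions_per_object=4):
    """Group topic-partitions into planned output objects.

    partitions:       list of (topic, partition) tuples to rewrite.
    compacted_topics: set of topics using log compaction.
    Returns a list of partition groups, one group per output object.
    """
    groups = []
    shared = []
    for tp in partitions:
        if tp[0] in compacted_topics:
            groups.append([tp])      # exclusive object: cheap to rewrite later
        else:
            shared.append(tp)
            if len(shared) == max_partitions_per_object:
                groups.append(shared)
                shared = []
    if shared:
        groups.append(shared)        # last, possibly underfilled, shared object
    return groups

plan = plan_output_objects(
    [("orders", 0), ("logs", 1), ("users", 2), ("logs", 3)],
    compacted_topics={"users"},
    max_partitions_per_object=2,
)
```

The trade-off knob here is `max_partitions_per_object`: larger values mean fewer PUTs, smaller values mean less data disturbed when any one partition is rewritten.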

It seems we’re at liberty to change the algorithm after the initial implementation, as there are no compatibility limitations.

Public Interfaces

Briefly list any new interfaces that will be introduced as part of this proposal or any existing interfaces that will be removed or changed. The purpose of this section is to concisely call out the public contract that will come along with this feature.

A public interface is any change to the following:

  • Binary log format

  • The network protocol and api behavior

  • Any class in the public packages under clients

    • org/apache/kafka/common/serialization

    • org/apache/kafka/common

    • org/apache/kafka/common/errors

    • org/apache/kafka/clients/producer

    • org/apache/kafka/clients/consumer (eventually, once stable)

  • Configuration, especially client configuration

  • Monitoring

  • Command line tools and arguments

  • Anything else that will likely break existing users in some way when they upgrade


Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?
  • If we are changing behavior how will we phase out the older behavior?
  • If we need special migration tools, describe them here.
  • When will we remove the existing behavior?

Test Plan

Describe in few sentences how the KIP will be tested. We are mostly interested in system tests (since unit-tests are specific to implementation details). How will we know that the implementation works as expected? How will we know nothing broke?

Rejected Alternatives

If there are alternative ways of accomplishing the same thing, what were they? The purpose of this section is to motivate why the design is the way it is and not some other way.
