

Status

Current state: "Under Discussion"

Discussion thread:  [DISCUSS] KIP-354 Time-based log compaction policy

JIRA: KAFKA-7321

Motivation

Compaction enables Kafka to remove old messages that are flagged for deletion while retaining other messages for a relatively long time. Today, a log segment may remain un-compacted for a long time, because eligibility for log compaction is determined by the compaction ratio ("min.cleanable.dirty.ratio") and the minimum compaction lag ("min.compaction.lag.ms"). The ability to delete a log message through compaction in a timely manner has become an important requirement in some use cases (e.g., GDPR). For example, one use case is to delete PII (Personally Identifiable Information) data within 7 days while keeping non-PII data indefinitely in compacted format. The goal of this change is to provide a time-based compaction policy that ensures the cleanable section is compacted after a specified time interval, regardless of the dirty ratio and the minimum compaction lag. The dirty ratio and the minimum compaction lag are still honored as long as the time-based compaction rule is not violated. In other words, if Kafka receives a deletion request on a key (e.g., a key with a null value), the corresponding log segment will be picked up for compaction after the configured time interval in order to remove the key.

Example

A compacted topic with user id as key and PII in the value:

1 => {name: "John Doe", phone: "5555555"}
2 => {name: "Jane Doe", phone: "6666666"}

# to remove the phone number we can replace the value with a new message
1 => {name: "John Doe"}

# to completely delete key 1 we can send a tombstone record
1 => null

# but until compaction runs (and some other conditions are met), reading the whole topic will return all three values for key 1, and the old values are still retained on disk.
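
For reference, the update and the tombstone above can be produced with the standard Java client. This is a minimal sketch; the topic name "users", the bootstrap address, and the String serializers are illustrative.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TombstoneExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // overwrite the value for key "1" to drop the phone number
            producer.send(new ProducerRecord<>("users", "1", "{\"name\": \"John Doe\"}"));
            // a null value is a tombstone: compaction eventually removes key "1"
            producer.send(new ProducerRecord<>("users", "1", null));
        }
    }
}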

If there is a requirement to guarantee a maximum time an old record can exist (for example, an interpretation of GDPR's PII requirements), a new topic setting is needed, because the existing configurations either bound only the minimum time a record should live or sacrifice the efficiency gained by the dirty ratio setting.

This example mentions GDPR because it is widely known, but the requirement here is simply to guarantee that a tombstone or a new value leads to deletion of the old values within a maximum time.

Note: This change focuses on when to compact a log segment; it does not conflict with KIP-280, which focuses on how to compact the log.

Current Behavior

For a topic with log compaction enabled, Kafka today uses "min.cleanable.dirty.ratio" and "min.compaction.lag.ms" to determine which log segments to pick up for compaction. "min.compaction.lag.ms" marks a log segment uncleanable until the segment is sealed and has remained un-compacted for the specified lag; the details can be found in KIP-58. "min.cleanable.dirty.ratio" determines the eligibility of the entire partition for log compaction: only log partitions whose dirty ratio is higher than "min.cleanable.dirty.ratio" are picked up by the log cleaner. In addition, when the log cleaner compacts a log partition, there is no guarantee it will compact all cleanable segments determined by "min.compaction.lag.ms": on each compaction run the log cleaner builds an offset map, and the number of records that fit in the offset map also limits how many log segments can be compacted. In summary, with these two compaction configurations alone, Kafka cannot enforce timely log compaction.
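
As a minimal sketch of today's rule (assuming the clean and dirty byte counts of a log are already known; the real logic lives in the broker's log cleaner manager, and the names below are illustrative):

public class DirtyRatioCheck {
    static boolean eligibleForCompaction(long cleanBytes, long dirtyBytes,
                                         double minCleanableDirtyRatio) {
        // dirty ratio = dirty bytes / total bytes of the log
        double dirtyRatio = (double) dirtyBytes / (cleanBytes + dirtyBytes);
        return dirtyRatio > minCleanableDirtyRatio;
    }

    public static void main(String[] args) {
        // e.g., 400 MB dirty out of 1 GB total gives ratio 0.4, below a 0.5 threshold
        System.out.println(eligibleForCompaction(600L << 20, 400L << 20, 0.5));
    }
}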

Proposed Changes

We propose adding a new topic-level configuration, "max.compaction.lag.ms", which controls the maximum time interval for which a message/segment can be skipped for log compaction (note that this interval includes the time the message resides in the active segment). With this configuration set and compaction enabled, the log cleaner is required to pick up, for compaction, all log segments that contain messages older than "max.compaction.lag.ms". A log segment thus has a guaranteed upper bound in time before compaction of it becomes mandatory, regardless of "min.cleanable.dirty.ratio". The clock starts when the log segment is first created as the active segment.

Here is the list of changes needed to enforce this time-based compaction policy (short illustrative sketches for each step follow the list):

  1. Force a roll of a non-empty active segment if its first record is older than "max.compaction.lag.ms" (or, when record timestamps are not available, if the creation time of the active segment is older than "max.compaction.lag.ms"), so that compaction can be performed on that segment. Today, the time to roll an active segment is controlled by "segment.ms". To ensure that messages currently in the active segment can be compacted in time, we seal the active segment when either "max.compaction.lag.ms" or "segment.ms" is reached.

  2. Estimate the earliest message timestamp of an un-compacted log segment.

    1. For the first (earliest) log segment: the estimated earliest timestamp is set to the timestamp of the first message, if a timestamp is present in the message. Otherwise it is set to "segment.largestTimestamp - min(segment.ms, max.compaction.lag.ms)", where segment.largestTimestamp is the last-modified time of the log segment or the maximum record timestamp observed for it (due to the possible lack of record timestamps, segment.largestTimestamp may be earlier than the actual timestamp of the latest record in the segment). In the second case, the actual timestamp of the first message might be later than the estimate, but it is safe to pick up the log for compaction sooner. Note that we only need to estimate the earliest message timestamp for un-compacted log segments, because the deletion requests that belong to compacted segments have already been processed.

    2. From the second log segment onwards: there are two ways to estimate the earliest message timestamp of a log segment. The first is to use the largestTimestamp (last-modified time) of the previous segment as an estimate; the second is to use the timestamp of the first message, if present. Since reading a message's timestamp requires additional I/O, the first estimate may be sufficient in practice.

  3. Let the log cleaner pick up, for compaction, all logs whose estimated earliest timestamp is earlier than "now - max.compaction.lag.ms".
    The rule is simple: as long as a log contains an un-compacted segment whose estimated earliest timestamp is earlier than "now - max.compaction.lag.ms", the log is picked up for compaction. Otherwise, Kafka uses "min.cleanable.dirty.ratio" and "min.compaction.lag.ms" to determine the log's eligibility for compaction, as it does today. The logs to be compacted are currently sorted by dirty ratio; with this change, they are sorted first by "must clean dirty ratio" and then by dirty ratio. The "must clean dirty ratio" is calculated like the dirty ratio, except that only segments required to be compacted by this policy contribute: it is the total size of the cleanable segments whose records are older than "max.compaction.lag.ms", divided by the total log size (clean segment size + cleanable segment size). The intent is to compact the logs that are required to be cleaned by the time-based policy first.
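
Step 1 can be sketched as a small extension of the existing time-based roll check. This is an illustrative sketch, not the actual patch: the method and parameter names are made up, and -1 stands for a missing record timestamp.

public class RollCondition {
    static boolean shouldRoll(long nowMs, long segmentCreatedMs,
                              long firstRecordTimestampMs,
                              long segmentMs, long maxCompactionLagMs) {
        // existing behavior: roll once the active segment is older than segment.ms
        boolean segmentMsReached = nowMs - segmentCreatedMs >= segmentMs;
        // new behavior: roll so the oldest message can be compacted in time;
        // fall back to the segment creation time when timestamps are missing
        long oldestMs = firstRecordTimestampMs >= 0 ? firstRecordTimestampMs
                                                    : segmentCreatedMs;
        boolean maxLagReached = maxCompactionLagMs > 0
                && nowMs - oldestMs >= maxCompactionLagMs;
        return segmentMsReached || maxLagReached;
    }
}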
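
The estimation in step 2 can be sketched as follows; the Segment type is a stand-in for the broker's log segment class, with illustrative field names.

public class EarliestTimestampEstimate {
    static class Segment {
        long firstRecordTimestampMs;  // -1 when records carry no timestamp
        long largestTimestampMs;      // last-modified time or max record timestamp
    }

    static long estimateEarliest(Segment seg, Segment previous,
                                 long segmentMs, long maxCompactionLagMs) {
        if (previous != null)
            // second segment onwards: the previous segment's largest timestamp
            // is a cheap upper bound that needs no extra I/O
            return previous.largestTimestampMs;
        if (seg.firstRecordTimestampMs >= 0)
            return seg.firstRecordTimestampMs;
        // first segment without record timestamps: estimate conservatively
        return seg.largestTimestampMs - Math.min(segmentMs, maxCompactionLagMs);
    }
}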
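
The "must clean dirty ratio" from step 3 then follows directly from the definition above; again the names are illustrative.

import java.util.List;

public class MustCleanDirtyRatio {
    static class Segment {
        long sizeBytes;
        boolean cleanable;                  // i.e., not in the clean section
        long estimatedEarliestTimestampMs;  // from the step 2 estimate
    }

    // total size of cleanable segments older than max.compaction.lag.ms,
    // divided by total log size (clean + cleanable)
    static double mustCleanDirtyRatio(List<Segment> log, long nowMs,
                                      long maxCompactionLagMs) {
        long totalBytes = 0, mustCleanBytes = 0;
        for (Segment s : log) {
            totalBytes += s.sizeBytes;
            if (s.cleanable
                    && s.estimatedEarliestTimestampMs < nowMs - maxCompactionLagMs)
                mustCleanBytes += s.sizeBytes;
        }
        return totalBytes == 0 ? 0.0 : (double) mustCleanBytes / totalBytes;
    }
}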

Public Interfaces

  • Add a topic-level configuration, "max.compaction.lag.ms", and the corresponding broker configuration "log.cleaner.max.compaction.lag.ms", which defaults to 0 (disabled). If both "max.compaction.lag.ms" and "min.compaction.lag.ms" are provided at topic creation, Kafka enforces that "max.compaction.lag.ms" is no less than "min.compaction.lag.ms".
    -- Note that an alternative scheme is to use -1 for "disabled" and 0 for "immediate compaction". However, because the compaction lag is still bounded by "min.compaction.lag.ms" and by how long it takes to roll the active segment, the actual lag would be undetermined if 0 meant "immediate"; and "min.cleanable.dirty.ratio" can already be set to achieve much the same effect. So we choose 0 to mean "disabled".

  • "segment.ms" : no change in meaning.  The active segment is forced to roll when either "max.compaction.lag.ms" or "segment.ms" (log.roll.ms and log.roll.hours) has reached.  

  • "min.cleanable.dirty.ratio": no change in meaning. However, a compaction decision made based on "max.compaction.lag.ms" overrides a compaction decision made based on "min.cleanable.dirty.ratio".

  • "min.compaction.lag.ms": no change in meaning. However, when determining eligibility for compaction, "max.compaction.lag.ms" has higher priority than "min.compaction.lag.ms".

  • All of the above changes apply only to topics with compaction enabled.
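
As a usage sketch, a compacted topic carrying the proposed configuration could be created with the Java AdminClient; the topic name, partition count, and the 7-day value are illustrative.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, String> configs = new HashMap<>();
            configs.put("cleanup.policy", "compact");
            // proposed in this KIP: bound compaction lag at 7 days
            configs.put("max.compaction.lag.ms", "604800000");
            NewTopic topic = new NewTopic("users", 1, (short) 1).configs(configs);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}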

Compatibility, Deprecation, and Migration Plan

  • By default, "max.compaction.lag.ms" is set to 0 and this time-based log compaction policy is disabled. There are no compatibility issues and no migration is required.

Performance Impact

  • Kafka already collects compaction metrics (CleanerStats) that include how many bytes are read/written during each compaction run and how long it takes to compact a log partition. Those metrics can be used to measure the performance impact of adopting this KIP. For example, if most log partitions are already compacted each day without time-based compaction, setting the compaction time interval to more than one day should have little impact on the resources spent on compaction.

Rejected Alternatives

  • One way to force compaction of any cleanable log segment is to set "min.cleanable.dirty.ratio" to 0. However, compacting a log partition whenever a segment becomes cleanable (as controlled by "min.compaction.lag.ms") is very expensive. We still want to accumulate some number of log segments before compaction kicks in.

  • If compaction and time-based retention are both enabled on a topic, compaction might prevent records from being deleted on time. The reason is that when multiple segments are compacted into a single segment, the newly created segment has the same last-modified timestamp as the latest original segment; we lose the timestamps of all original segments except the last one. As a result, records might not be deleted when they should be under time-based retention. We decided not to address this issue in this KIP because we don't have obvious use cases in which users must enable both time-based retention and log compaction; addressing it can be left as future work. One solution is for log compaction to look at record timestamps and delete expired records, either in the compaction logic itself or via AdminClient.deleteRecords(). But this solution assumes record timestamps are available. Further investigation is needed if we have to handle on-time retention for log-compacted topics.


