
Status

Current state: Released

Discussion thread: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-49-Unified-Memory-Configuration-for-TaskExecutors-td31436.html

JIRA: https://jira.apache.org/jira/browse/FLINK-13980

Released: Flink 1.10

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

This proposal addresses several shortcomings of the current (Flink 1.9) TaskExecutor memory configuration.

(1) Different configuration for Streaming and Batch.

Currently, TaskExecutor memory is configured differently for streaming and batch jobs.

  • Streaming
    • Memory is implicitly consumed, either on-heap by the memory state backend, or off-heap by RocksDB.
    • Users have to manually align the heap size with the choice of state backend.
    • Users have to manually configure RocksDB to use enough memory for good performance, but not so much that it exceeds the budget.
    • There is no predictability in the memory consumption, neither on-heap by the memory state backend nor off-heap by RocksDB.
  • Batch
    • Users configure total memory size, and whether to use on-heap or off-heap memory in operators.
    • Flink reserves a fraction of the total memory as managed memory. It adjusts the heap size and “max direct memory” parameters automatically to account for managed memory on-heap or off-heap.
    • Flink allocates memory segments for the managed memory, to be used by operators. It guarantees that the reserved memory segments are never exceeded.

(2) Complex and difficult configuration of RocksDB in Streaming

  • Users have to manually decrease the JVM heap size, or set Flink to use off-heap memory.
  • Users have to manually configure the RocksDB memory.
  • There is no way for users to fully utilize the available memory, because the RocksDB memory size has to be configured conservatively low to make sure the memory budget is not exceeded.

(3) Complicated, uncertain and hard to understand

  • There is some “magic” in how the sizes of containers and processes are determined. Some of it is not easy to reason about, for example the “cutoff ratio” on Yarn and in other containerized setups.
  • Configuring an off-heap state backend like RocksDB means either also setting managed memory to off-heap, or adjusting the cutoff ratio to dedicate less memory to the JVM heap.
  • The TaskExecutor relies on instantaneous JVM memory usage to determine the sizes of different memory pools: it first triggers a GC and then reads the JVM free memory size, which introduces uncertainty into the resulting pool sizes.

Public Interfaces

The public interfaces of this FLIP are the new task executor memory configuration options (see “Memory Pools and Configuration Keys” below) and the backwards compatibility behavior of the deprecated options (summarized in “Compatibility, Deprecation, and Migration Plan”).

Proposed Changes

Unifying Managed Memory for Batch and Streaming

The basic idea is to account for the memory used by RocksDB state backends as part of managed memory, and to extend the memory manager so that memory consumers can reserve a certain amount of memory from it without necessarily allocating that memory through it. In this way, users should be able to switch between streaming and batch jobs without having to modify the cluster configuration.

Memory Use Cases

  • Streaming jobs with RocksDBStateBackend
    • Off-heap memory
    • Implicitly allocated by the state backend
    • Cannot exceed total memory size, which is configured during initialization
  • Batch jobs
    • Either on-heap or off-heap memory
    • Explicitly allocated from the memory manager
    • Cannot exceed total memory allocated from memory manager

To make managed memory work with both cases, we should always allocate managed memory off-heap.

Unifying Explicit and Implicit Memory Allocation

  • Memory consumers can acquire memory in two ways (sketched below)
    • Explicitly acquire it from the MemoryManager, in the form of MemorySegments.
    • Reserve it from the MemoryManager, in which case the MemoryManager only answers “use up to X bytes” and the consumer allocates the memory itself.
  • The MemoryManager never pre-allocates any memory pages, so that the managed memory budget stays available both for allocation through the MemoryManager and for allocation directly by memory consumers.
  • For off-heap memory explicitly acquired from the MemoryManager, Flink always allocates with Unsafe.allocateMemory(), which is not limited by the JVM -XX:MaxDirectMemorySize parameter.
    • This eliminates the uncertainty about how much off-heap managed memory should be accounted for in the JVM max direct memory.
    • The drawback is that Unsafe is no longer supported in Java 12.
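The following sketch illustrates the two acquisition modes drawing from a single shared budget. Class and method names here are simplified illustrations of the idea, not the final Flink interfaces:

class MemorySegment(val size: Int)  // simplified stand-in for Flink's MemorySegment

class MemoryManager(totalBytes: Long, pageSize: Int) {

    private var availableBytes: Long = totalBytes  // one budget for both modes

    // Explicit acquisition: the MemoryManager allocates the pages itself
    // and hands them out as MemorySegments.
    def allocatePages(numPages: Int): Seq[MemorySegment] = synchronized {
        val bytes = numPages.toLong * pageSize
        require(bytes <= availableBytes, s"only $availableBytes bytes left in the managed memory budget")
        availableBytes -= bytes
        Seq.fill(numPages)(new MemorySegment(pageSize))
    }

    // Implicit acquisition: pure bookkeeping ("use up to X bytes"); the
    // consumer (e.g. the RocksDB state backend) allocates the memory itself.
    def reserveMemory(bytes: Long): Unit = synchronized {
        require(bytes <= availableBytes, s"only $availableBytes bytes left in the managed memory budget")
        availableBytes -= bytes
    }

    // Return a reservation (or freed pages) to the shared budget.
    def releaseMemory(bytes: Long): Unit = synchronized {
        availableBytes += bytes
    }
}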

Memory Pools and Configuration Keys

Framework Heap Memory

On-heap memory for the Flink task manager framework. It is not accounted for in slot resource profiles.

(taskmanager.memory.framework.heap.size)

(default 128mb)

Framework Off-Heap Memory

Off-heap memory for the Flink task manager framework. It is not accounted for in slot resource profiles.

(taskmanager.memory.framework.off-heap.size)

(default 128mb)

Task Heap Memory

On-heap memory for user code.

(taskmanager.memory.task.heap.size)

Task Off-Heap Memory

Off-heap memory for user code.

(taskmanager.memory.task.off-heap.size)

(default 0b)

Network Memory

Off-heap memory for the shuffle service, e.g., network buffers.

(taskmanager.memory.network.[min/max/fraction])

(default min=64mb, max=1gb, fraction=0.1)

Managed Memory

Off-heap Flink managed memory.

(taskmanager.memory.managed.[size|fraction])

(default fraction=0.4)

JVM Metaspace

Off-heap memory for JVM metaspace.

(taskmanager.memory.jvm-metaspace)

(default 96mb)

JVM Overhead

Off-heap memory for thread stack space, I/O direct memory, compile cache, etc.

(taskmanager.memory.jvm-overhead.[min/max/fraction])

(default min=192mb, max=1gb, fraction=0.1)

Total Flink Memory

Coarser config option for the total Flink memory, to make it easily configurable for users.

This includes Framework Heap Memory, Framework Off-Heap Memory, Task Heap Memory, Task Off-Heap Memory, Network Memory, and Managed Memory.

This excludes JVM Metaspace and JVM Overhead.

(taskmanager.memory.flink.size)

Total Process Memory

Coarser config option covering the entire task executor process, to make it easily configurable for users.

This includes Total Flink Memory, and JVM Metaspace and JVM Overhead.

(taskmanager.memory.process.size)
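For illustration, assume a hypothetical taskmanager.memory.process.size of 1728mb with all other options left at their defaults (these numbers are an example, not part of the proposal). Following the calculation logic described below, the pools would be derived as:

  • JVM Overhead: 0.1 * 1728mb = 172.8mb, clamped to [192mb, 1gb] → 192mb
  • JVM Metaspace: 96mb
  • Total Flink Memory: 1728mb - 192mb - 96mb = 1440mb
  • Network Memory: 0.1 * 1440mb = 144mb (within [64mb, 1gb])
  • Managed Memory: 0.4 * 1440mb = 576mb
  • Framework Heap Memory: 128mb, Framework Off-Heap Memory: 128mb, Task Off-Heap Memory: 0b
  • Task Heap Memory (the remainder): 1440mb - 144mb - 576mb - 128mb - 128mb = 464mb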

JVM Parameters

  • JVM heap memory
    • Includes Framework Heap Memory, Task Heap Memory
    • Explicitly set both  -Xmx and -Xms to this value
  • JVM direct memory
    • Includes Framework Off-Heap Memory, Task Off-Heap Memory and Network Memory
    • Explicitly set -XX:MaxDirectMemorySize to this value
    • For Managed Memory, we always allocate memory with Unsafe.allocateMemory(), which will not be limited by this parameter.
  • JVM metaspace
    • Set -XX:MaxMetaspaceSize to configured JVM Metaspace
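Continuing the illustrative example above, the task executor JVM would be launched with roughly the following parameters:

-Xmx592m -Xms592m (Framework Heap 128mb + Task Heap 464mb)
-XX:MaxDirectMemorySize=272m (Framework Off-Heap 128mb + Task Off-Heap 0b + Network 144mb)
-XX:MaxMetaspaceSize=96m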

Memory Calculations

  • All the memory / pool size calculations take place before the task executor JVM is started. Once the JVM is started, there should be no further calculation or derivation inside the Flink TaskExecutor.
  • The calculations should be performed in two places only.
    • In the startup shell scripts, for standalone.
    • On the resource manager side, for Yarn/Mesos/K8s.
  • The startup scripts can actually call java with the Flink runtime code to execute the calculation logic. In this way, we can make sure that standalone clusters and active mode (Yarn/Mesos/K8s) clusters have consistent memory calculation logic.
  • The calculated memory / pool sizes are passed into the task executor as dynamic configurations (via '-D').
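For example, with the illustrative sizes from above, the launch command could carry dynamic configurations such as the following, so that the task executor picks up the pre-computed pool sizes without re-deriving them:

-Dtaskmanager.memory.framework.heap.size=128mb
-Dtaskmanager.memory.framework.off-heap.size=128mb
-Dtaskmanager.memory.task.heap.size=464mb
-Dtaskmanager.memory.task.off-heap.size=0b
-Dtaskmanager.memory.network.min=144mb
-Dtaskmanager.memory.network.max=144mb
-Dtaskmanager.memory.managed.size=576mb

(Setting min = max for Network Memory would pin the pool to the pre-computed size.)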

Calculation Logics

We need one of these three alternatives configured.

  • Task Heap Memory and Managed Memory
  • Total Flink Memory
  • Total Process Memory

The following logic describes how to derive these values from one another.

  • If both Task Heap Memory and Managed Memory are configured, we use these to derive Total Flink Memory
    • If Network Memory is configured explicitly, we use that value
    • Otherwise, we compute it such that it makes up the configured fraction of the final Total Flink Memory (see getAbsoluteOrInverseFraction())
  • If Total Flink Memory is configured, but not Task Heap Memory and Managed Memory, then we derive Network Memory and Managed Memory, and leave the rest (excluding Framework Heap Memory, Framework Off-Heap Memory and Task Off-Heap Memory) as Task Heap Memory.
    • If Network Memory is configured explicitly, we use that value
    • Otherwise we compute it such that it makes up the configured fraction of the Total Flink Memory (see getAbsoluteOrFraction())
    • If Managed Memory is configured explicitly, we use that value
    • Otherwise we compute it such that it makes up the configured fraction of the Total Flink Memory (see getAbsoluteOrFraction())
  • If only the Total Process Memory is configured, we derive the Total Flink Memory in the following way
    • We take the JVM Overhead (either explicitly configured or computed relative to Total Process Memory) and subtract it from Total Process Memory (see getAbsoluteOrFraction())
    • We subtract JVM Metaspace from the remaining
    • We leave the rest as Total Flink Memory

def getAbsoluteOrFraction(key: ConfigOption, base: Long): Long = {
    // Use the explicitly configured absolute size if present; otherwise
    // derive the size as a fraction of the given base (e.g. Total Flink
    // Memory), clamped to the configured [min, max] range.
    conf.getOrElse(key) {
        val (min, max, fraction) = getRange(conf, key)
        val relative = fraction * base
        Math.max(min, Math.min(relative, max))
    }
}

def getAbsoluteOrInverseFraction(key: ConfigOption, base: Long): Long = {
    // Same as above, except that here `base` excludes the component being
    // derived. Solving relative = fraction * (base + relative) gives
    // relative = fraction / (1 - fraction) * base, again clamped to [min, max].
    conf.getOrElse(key) {
        val (min, max, fraction) = getRange(conf, key)
        val relative = fraction / (1 - fraction) * base
        Math.max(min, Math.min(relative, max))
    }
}
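As a sanity check of getAbsoluteOrInverseFraction(), consider the illustrative sizes from above: the pools other than Network Memory sum to base = 464mb + 576mb + 128mb + 128mb = 1296mb. With fraction = 0.1, relative = 0.1 / 0.9 * 1296mb = 144mb, and Network Memory then makes up exactly the configured fraction of the final Total Flink Memory: 144mb / (1296mb + 144mb) = 0.1.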

Implementation Steps

Step 1. Introduce a switch for enabling the new task executor memory configurations

Introduce a temporary config option as a switch between the current / new task executor memory configuration code paths. This allows us to implement and test the new code paths without affecting the existing code paths and behaviors.

Step 2. Implement memory calculation logic

  • Introduce new configuration options
  • Introduce data structures and utilities.
    • Data structure to store memory / pool sizes of task executor
    • Utility for calculating memory / pool sizes from configuration
    • Utility for generating dynamic configurations
    • Utility for generating JVM parameters

This step should not introduce any behavior changes.

Step 3. Launch task executor with new memory calculation logic

  • Invoke data structures and utilities introduced in Step 2 to generate JVM parameters and dynamic configurations for launching new task executors.
    • In startup scripts
    • In resource managers
  • Task executor uses data structures and utilities introduced in Step 2 to set memory pool sizes and slot resource profiles.
    • MemoryManager
    • ShuffleEnvironment
    • TaskSlotTable

Implement this step as separate code paths only for the new mode.

Step 4. Separate on-heap and off-heap managed memory pools

  • Update MemoryManager to have two separated pools.
  • Extend MemoryManager interfaces to specify which pool to allocate memory from.

Implement this step in common code paths for the legacy / new mode. For the legacy mode, depending on the configured memory type, we can set one of the two pools to the managed memory size and always allocate from that pool, leaving the other pool empty.

Step 5. Use native memory for managed memory.

  • Allocate memory with Unsafe.allocateMemory
    • MemoryManager

Implement this step in common code paths for the legacy / new mode. This should only affect the GC behavior.
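A minimal sketch of such native allocation, assuming reflective access to sun.misc.Unsafe (the helper object and its names are illustrative, not Flink's actual implementation):

import java.lang.reflect.Field

object NativeMemoryAllocator {

    // Obtain the Unsafe singleton reflectively; its constructor is private.
    private val unsafe: sun.misc.Unsafe = {
        val field: Field = classOf[sun.misc.Unsafe].getDeclaredField("theUnsafe")
        field.setAccessible(true)
        field.get(null).asInstanceOf[sun.misc.Unsafe]
    }

    // Native memory allocated this way is not counted against
    // -XX:MaxDirectMemorySize, unlike ByteBuffer.allocateDirect().
    def allocate(bytes: Long): Long = unsafe.allocateMemory(bytes)  // returns the raw address

    def free(address: Long): Unit = unsafe.freeMemory(address)
}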

Step 6. Clean-up of legacy mode.

  • Fix / update / remove test cases for legacy mode
  • Deprecate / remove legacy config options.
  • Remove legacy code paths
  • Remove the switch for legacy / new mode.

Compatibility, Deprecation, and Migration Plan

This FLIP changes how users configure cluster resources, which in some cases may require re-configuring clusters migrated from prior versions.

Deprecated configuration keys are as follows:

Deprecated Key → As Fallback of New Key (Notes)

  • taskmanager.heap.size → Standalone: taskmanager.memory.flink.size; Yarn/Mesos/K8s: taskmanager.memory.process.size
  • taskmanager.heap.mb → Standalone: taskmanager.memory.flink.size; Yarn/Mesos/K8s: taskmanager.memory.process.size
  • taskmanager.memory.size → taskmanager.memory.managed.size
  • taskmanager.memory.fraction → N/A. `taskmanager.memory.managed.fraction` now has different semantics, so the old key cannot serve as a fallback.
  • taskmanager.memory.off-heap → N/A. The option will be ignored, because on-heap managed memory is no longer supported.
  • taskmanager.memory.preallocate → N/A. The option will be ignored, because pre-allocation of managed memory is no longer supported.
  • taskmanager.network.memory.[min/max/fraction] → taskmanager.memory.network.[min/max/fraction]

Test Plan

  • We need to update existing integration tests and add new ones dedicated to validating the new memory configuration behaviors.
  • It is also expected that other regular integration and end-to-end tests should fail if this is broken.

Limitations

  • The proposed design uses Unsafe.allocateMemory() for allocating managed memory, which is no longer supported in Java 12. We need to look for alternative solutions in the future.

Follow Ups

  • This FLIP requires very good documentation to help users understand how to properly configure Flink processes and which knobs to turn in which cases.
  • It would be good to expose configured memory pool sizes in the web UI, so that users see immediately what amount of memory TMs assume to use for what purpose.

Rejected Alternatives

Regarding JVM direct memory, we have the following alternatives.

  1. Have MemorySegments de-allocated by the GC, and trigger GC by setting a proper JVM max direct memory size parameter.
  2. Have MemorySegments de-allocated by the GC, and trigger GC through dedicated bookkeeping that is independent from the JVM max direct memory size parameter.
  3. Manually allocate and de-allocate MemorySegments.

We decided to go with option 3, but depending on how safe it turns out to be with respect to segmentation faults, we may switch to one of the other alternatives after the implementation.
