Status

Current state: Under Discussion

Discussion thread: here

JIRA: here

Motivation

KIP-84 introduced support for SASL/SCRAM with the ability to persist configuration in ZooKeeper. KIP-554 extended this feature to KRaft by introducing metadata records to store SCRAM configuration in KRaft logs.

KIP-554 does not allow describing the salt, stored_key and server_key fields using the --describe flag in kafka-configs.sh due to security concerns. This makes it hard to migrate SCRAM credentials from one cluster to another, unlike the ZooKeeper counterpart, where the znode contents can be copied over easily. Without such an option, cluster migrations require users to re-configure their passwords in the new cluster, which is a non-trivial operation for large multi-tenant clusters. The ability to safely synchronise credentials across multiple clusters is also useful in scenarios where client identities are shared across different clusters, as this reduces toil for end users.

This KIP therefore proposes a secure way to describe the aforementioned fields, giving Kafka cluster operators the ability to export and import SCRAM credentials.

Public Interfaces

Broker Configuration Changes

A new broker property will be added to support encrypting sensitive SCRAM credential data. This configuration would represent an AES-256 key encoded in hex format.

`sasl.scram.encryption.key`

  • Type: Password
  • Mode: Static configuration
  • Description: Randomly generated 32-byte AES key. This may be generated using `openssl rand -hex 32`
  • Default Value: null
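
For example, an operator could generate the key once and set it as a static configuration on every broker (a sketch; the placeholder below stands in for the actual 64-hex-character output):

# Generate a random 32-byte key, hex-encoded
$ openssl rand -hex 32
<64-hex-character-key>

# server.properties on every broker
sasl.scram.encryption.key=<64-hex-character-key>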

describeUserScramCredentials

DescribeUserScramCredentialsResponse is updated to support the new fields:


{  
  "apiKey": 50,  
  "type": "response",  
  "name": "DescribeUserScramCredentialsResponse",  
  "validVersions": "0-1",  
  "flexibleVersions": "0+",  
  "fields": [  
      (...)
      { "name": "CredentialInfos", "type": "[]CredentialInfo", "versions": "0+",  
        "about": "The mechanism and related information associated with the user's SCRAM credentials.", "fields": [  
        (...)
        { "name": "Salt", "type": "bytes", "versions": "1+",  
          "about": "The Salt generated by the client" },  
        { "name": "EncryptedStoredKey", "type": "bytes", "versions": "1+", "nullableVersions":  "1+",  
          "about": "Encrypted Stored Key" },  
        { "name": "EncryptedServerKey", "type": "bytes", "versions": "1+", "nullableVersions":  "1+",  
          "about": "Encrypted Server Key" }  
      ]}  
    ]}  
  ]  
}
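
For illustration, reading these fields through the Admin API might look as follows. Admin.describeUserScramCredentials and ScramCredentialInfo exist in the client library today; accessors for the proposed fields are not, so they are referenced only in comments:

import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.UserScramCredentialsDescription;

public class DescribeScramCredentials {
    public static void main(String[] args) throws Exception {
        Map<String, Object> conf =
            Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(conf)) {
            UserScramCredentialsDescription desc = admin
                .describeUserScramCredentials(List.of("alice"))
                .description("alice")
                .get();
            for (ScramCredentialInfo info : desc.credentialInfos()) {
                // mechanism() and iterations() are part of the current API; a v1
                // response would additionally carry Salt, EncryptedStoredKey and
                // EncryptedServerKey (null when sasl.scram.encryption.key is unset).
                System.out.printf("%s iterations=%d%n", info.mechanism(), info.iterations());
            }
        }
    }
}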


Command-Line Changes

We will extend the kafka-configs.sh command such that it returns the encrypted secrets. For example:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type users --entity-name alice --describe

Configs for user-principal 'alice' are SCRAM-SHA-512=iterations=8192,salt=<salt>,encrypted_stored_key=<encrypted_stored_key>,encrypted_server_key=<encrypted_server_key>


We will also extend the --alter flag such that it accepts the above fields in lieu of a password:

bin/kafka-configs.sh \
    --bootstrap-server localhost:9092 \
    --entity-type users \
    --entity-name alice \
    --alter \
    --add-config 'SCRAM-SHA-512=[iterations=8192,salt=<salt>,encrypted_stored_key=<encrypted_stored_key>,encrypted_server_key=<encrypted_server_key>]'

Proposed Changes

Threat Model

Boundaries

  • AdminClient → Kafka
    • Protocol: Kafka Wire Protocol, ideally over TLS
    • Authentication: SASL or mTLS
    • Authorization: ACL-based

Assets

  • SCRAM parameters of Kafka cluster users, particularly stored_key and server_key as defined in RFC 5802

Attacker Profiles

  • Unauthorised User: a user without DESCRIBE permissions on the CLUSTER resource cannot invoke the DescribeUserScramCredentials Admin API. They can, however, monitor network traffic for an authorised user in the absence of TLS and act as a passive adversary.
  • Authorised User:
    • Passive: a user with DESCRIBE permissions on the CLUSTER resource can invoke the DescribeUserScramCredentials API and therefore observe SCRAM-related configurations.
    • Active: a user with ALTER permissions on the CLUSTER resource can alter existing SCRAM credentials using the Admin API. This sort of adversary is out of scope, as they can impersonate any user by overwriting credentials.
  • Kafka cluster operators: they have shell access to the Kafka brokers and the ZooKeeper/KRaft metadata logs. They are therefore assumed to be trusted.

Security Goals

Passive adversaries shouldn't be able to impersonate a Kafka broker or gather enough information to perform a brute-force/dictionary attack on a user's password.

Assumptions

We assume that sasl.scram.encryption.key has high entropy.

We assume Kafka operators for both source and target clusters are able to share sasl.scram.encryption.key safely out of band.

Implementation

We propose encrypting stored_key and server_key using AES-GCM-256 with a random 12-byte nonce.

To avoid nonce-reuse attacks, we propose deriving a key for each user using HKDF-Expand (RFC 5869) with SHA-256 as the hash function, the contents of sasl.scram.encryption.key as the PRK, DescribeUserScramCredentials={user=<username>,purpose=stored_key} or DescribeUserScramCredentials={user=<username>,purpose=server_key} as the info, and 32 as the length parameter. The implementation's pseudocode is therefore:

prk = getConfig("sasl.scram.encryption.key");
aad = "{salt=" + salt + ",iteration_count=" + iteration_count + "}";

info1 = "DescribeUserScramCredentials={user=" + username + ",purpose=stored_key}";
derived_key1 = HKDFSha256Expand(prk, info1, 32);
iv1 = random_bytes(12); // 12-byte random nonce
encrypted_stored_key = iv1 | AES_GCM_256_Enc(derived_key1, iv1, stored_key, aad); // '|' denotes concatenation

info2 = "DescribeUserScramCredentials={user=" + username + ",purpose=server_key}";
derived_key2 = HKDFSha256Expand(prk, info2, 32);
iv2 = random_bytes(12); // 12-byte random nonce
encrypted_server_key = iv2 | AES_GCM_256_Enc(derived_key2, iv2, server_key, aad);
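
A minimal Java sketch of the encryption path, assuming the JDK's standard HmacSHA256 and AES/GCM/NoPadding providers; since the JDK ships no HKDF primitive, HKDF-Expand is hand-rolled below. Class and method names are illustrative, not part of the proposal:

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class ScramKeyEncryptor {

    // HKDF-Expand (RFC 5869) with HMAC-SHA256: T(i) = HMAC(prk, T(i-1) | info | i).
    static byte[] hkdfSha256Expand(byte[] prk, byte[] info, int length) throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(prk, "HmacSHA256"));
        ByteArrayOutputStream okm = new ByteArrayOutputStream();
        byte[] t = new byte[0];
        for (byte counter = 1; okm.size() < length; counter++) {
            mac.update(t);
            mac.update(info);
            mac.update(counter);
            t = mac.doFinal();
            okm.write(t, 0, Math.min(t.length, length - okm.size()));
        }
        return okm.toByteArray();
    }

    // Returns iv | ciphertext; the JDK appends the GCM tag to the ciphertext.
    static byte[] encrypt(byte[] prk, String username, String purpose, byte[] plaintext, byte[] aad)
            throws GeneralSecurityException {
        byte[] info = ("DescribeUserScramCredentials={user=" + username + ",purpose=" + purpose + "}")
            .getBytes(StandardCharsets.UTF_8);
        byte[] derivedKey = hkdfSha256Expand(prk, info, 32);
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv); // fresh random nonce per encryption
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(derivedKey, "AES"),
                new GCMParameterSpec(128, iv));
        cipher.updateAAD(aad);
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}

Binding the salt and iteration count into the AAD means an exported credential cannot be imported with altered public parameters, since GCM authenticates the AAD along with the ciphertext.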

Decryption works as follows:

prk = getConfig("sasl.scram.encryption.key"); // safely shared out-of-band
aad = "{salt=" + salt + ",iteration_count=" + iteration_count + "}";

info1 = "DescribeUserScramCredentials={user=" + username + ",purpose=stored_key}";
derived_key1 = HKDFSha256Expand(prk, info1, 32);
iv1 = encrypted_stored_key[:12]; // first 12 bytes are the nonce
stored_key = AES_GCM_256_Dec(derived_key1, iv1, encrypted_stored_key[12:], aad);

info2 = "DescribeUserScramCredentials={user=" + username + ",purpose=server_key}";
derived_key2 = HKDFSha256Expand(prk, info2, 32);
iv2 = encrypted_server_key[:12]; // first 12 bytes are the nonce
server_key = AES_GCM_256_Dec(derived_key2, iv2, encrypted_server_key[12:], aad);
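
A companion Java sketch of the decryption path for the importing side, reusing hkdfSha256Expand from the encryption sketch above; a javax.crypto.AEADBadTagException signals a wrong key, mismatched AAD, or tampered ciphertext:

import java.security.GeneralSecurityException;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class ScramKeyDecryptor {
    static byte[] decrypt(byte[] derivedKey, byte[] ivAndCiphertext, byte[] aad)
            throws GeneralSecurityException {
        byte[] iv = Arrays.copyOfRange(ivAndCiphertext, 0, 12); // leading 12-byte nonce
        byte[] ciphertext = Arrays.copyOfRange(ivAndCiphertext, 12, ivAndCiphertext.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(derivedKey, "AES"),
                new GCMParameterSpec(128, iv));
        cipher.updateAAD(aad);
        return cipher.doFinal(ciphertext); // verifies the GCM tag over ciphertext and AAD
    }
}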


Compatibility, Deprecation, and Migration Plan

The change remains backwards compatible, as the new configuration sasl.scram.encryption.key is optional.

When it is unset, the encrypted fields will not be populated in the response and will be set to null.

Note that the salt is a public parameter in the protocol and does not leak any more information than is already available to anyone attempting to authenticate with a valid username. As a result, it is safe to unconditionally include it in the response.

Rejected Alternatives

Expose fields without encrypting them

As alluded to in the mailing list discussion for KIP-554 and in RFC 5802 §9, there are security concerns with exposing the triple (salt, num_iterations, stored_key/server_key) over the Admin API, as it may allow a passive adversary to perform an offline dictionary attack. Additionally, it may allow a passive adversary to impersonate the server.

We therefore encrypt the stored_key and server_key fields. This is safe as the key is assumed to be under the control of only the cluster operators, who already have access to the KRaft logs and therefore the SCRAM configurations.

Create unique credentials per cluster

This is not viable when migrating multi-tenant clusters, as it would require coordination from all tenants to reconfigure their passwords in the new cluster. Even if the passwords were randomly generated by cluster operators and shared with clients, clients would still have to make configuration changes in order to authenticate with the new cluster.

Additionally, this doesn't help in scenarios where global client identities span clusters and require common passwords everywhere.

Use a custom authentication scheme and/or different backing store

While this may work for new use cases, it doesn't provide a migration path for existing use cases which use SASL/SCRAM with KRaft.

Additionally, introducing another backing store adds a new potential point of failure. Keeping these configurations in Kafka with KRaft avoids any additional failure domain as all brokers and controllers would have access to the cached metadata required to handle the authentication requests.
