Current state: Under Discussion

Discussion thread: here

Vote thread: here

JIRA: here

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).


Kafka Connect currently defines a default REST API request timeout of 90 seconds, which isn't configurable. If a REST API request takes longer than this, a 500 Internal Server Error response is returned with the message "Request timed out". In exceptional scenarios, a longer timeout may be required for operations such as connector config validation or connector creation / update (both of which internally do a config validation first) to complete successfully. Consider a database / data warehouse connector with elaborate validation logic that queries the information schema for a list of tables / views in order to validate the user's connector configuration. If the database / data warehouse has a very large number of tables / views and is under heavy query load, such information schema queries can take longer than 90 seconds, which will cause connector config validation / creation REST API calls to time out.

Public Interfaces

This KIP proposes to add a new request header "Request-Timeout" (an integer value in milliseconds; for instance, "Request-Timeout: 120000" for a timeout of 120000 milliseconds / 2 minutes) to the following Kafka Connect REST API endpoints:

  • PUT /connector-plugins/{pluginName}/config/validate 
  • POST /connectors 
  • PUT /connectors/{connector}/config 

The POST /connectors and PUT /connectors/{connector}/config endpoints internally do a config validation first (and only proceed to connector creation / update if the validation passes), which is why the "Request-Timeout" header is relevant for these endpoints too.
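For illustration, a client could set the proposed header on the validation endpoint as follows. This is a sketch using Python's standard library; the worker address, plugin name, and connector config are placeholders, not part of the KIP.

```python
import json
import urllib.request

# Hypothetical connector config for illustration only.
config = {"connector.class": "MySinkConnector", "topics": "my-topic"}

req = urllib.request.Request(
    "http://localhost:8083/connector-plugins/MySinkConnector/config/validate",
    data=json.dumps(config).encode("utf-8"),
    method="PUT",
    headers={
        "Content-Type": "application/json",
        # Proposed header: timeout in milliseconds (2 minutes here)
        "Request-Timeout": "120000",
    },
)
# urllib.request.urlopen(req) would send the request to a running worker.
```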

A new Kafka Connect worker configuration will be added to configure an upper bound for the "Request-Timeout" header on the above three REST API endpoints. The default value for this config will be 600000 (10 minutes), and it will be marked as a low-importance config.

Proposed Changes

The request timeout will be updated to use the value from the "Request-Timeout" header if specified (else fall back to the current default of 90 seconds) for the aforementioned endpoints. If the value of the "Request-Timeout" header is invalid (<= 0 or greater than the configured upper bound), a 400 Bad Request response will be returned.
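The header handling described above could look roughly like the following sketch. The function and error names are hypothetical; the actual implementation would live in Connect's REST resources (Java).

```python
DEFAULT_REQUEST_TIMEOUT_MS = 90_000  # current hard-coded default


class BadRequestError(Exception):
    """Maps to a 400 Bad Request response."""


def resolve_request_timeout(header_value, max_timeout_ms):
    """Return the effective request timeout in milliseconds.

    Falls back to the 90-second default when the header is absent, and
    rejects values that are non-numeric, non-positive, or above the
    worker-configured upper bound.
    """
    if header_value is None:
        return DEFAULT_REQUEST_TIMEOUT_MS
    try:
        timeout_ms = int(header_value)
    except ValueError:
        raise BadRequestError(f"Invalid Request-Timeout value: {header_value!r}")
    if timeout_ms <= 0 or timeout_ms > max_timeout_ms:
        raise BadRequestError(f"Request-Timeout must be in (0, {max_timeout_ms}]")
    return timeout_ms
```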

Note that a higher / lower configured timeout doesn't change how long requests actually run in the herder - currently, if a request exceeds the default timeout of 90 seconds, a 500 Internal Server Error response is returned, but the request isn't interrupted or cancelled and is allowed to continue to completion. Also note that each connector config validation is done on its own thread via a cached thread pool executor in the herder (create / update connector requests are processed asynchronously by simply writing a record to the Connect cluster's config topic, so config validations are the only relevant operation here).
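This "timeout without cancellation" semantic matches how thread-pool futures generally behave. The Python sketch below (illustrative only, not Connect code) shows a caller timing out while the submitted task runs on to completion:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

executor = ThreadPoolExecutor(max_workers=1)

def slow_validation():
    time.sleep(0.2)  # stand-in for a slow config validation
    return "config OK"

future = executor.submit(slow_validation)
try:
    # The caller gives up after 50 ms and would return a 500 response...
    future.result(timeout=0.05)
    timed_out = False
except FutureTimeout:
    timed_out = True

# ...but the task itself was never interrupted and still finishes.
result = future.result()  # blocks until the task completes
executor.shutdown()
```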

This KIP also proposes to change the behavior of the POST /connectors and PUT /connectors/{connector}/config endpoints on request timeouts - currently, even if the connector config validation takes too long and causes a timeout response to be returned to the user, the connector create / update request still takes effect if the config validation eventually completes successfully. This is confusing and a poor user experience, because a 500 Internal Server Error response should mean that the request couldn't be fulfilled. This behavior will be changed to be more intuitive - if the config validation exceeds the timeout (either configured via the proposed new "Request-Timeout" header or the default 90 seconds), the request will be aborted and the connector won't be created / updated.

Another small improvement will be made to avoid double connector config validations when Connect is running in distributed mode - currently, if a request to POST /connectors or PUT /connectors/{connector}/config is made on a worker that isn't the leader of the group, a config validation is done first, and the request is forwarded to the leader if the config validation is successful (only the leader is allowed to do writes to the config topic, which is what a connector create / update entails). The forwarded request results in another config validation before the write to the config topic can finally be done on the leader. The only benefit of this approach is that it avoids request forwarding to the leader for requests with invalid connector configs. However, it can be argued that it's cheaper and more optimal overall to forward the request to the leader at the outset, and allow the leader to do a single config validation before writing to the config topic. Since config validations are done on their own thread and are typically short lived operations, it should not be an issue even with large clusters to allow the leader to do all config validations arising from connector create / update requests (the only situation where we're adding to the leader's load is for requests with invalid configs, since the leader today already has to do a config validation for forwarded requests with valid configs). Note that the PUT /connector-plugins/{pluginName}/config/validate endpoint doesn't do any request forwarding and can be used if frequent validations are taking place (i.e. they can be made on any worker in the cluster to avoid overloading the leader).
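The revised routing can be summarized with a toy sketch. The `Worker` class below is a hypothetical stand-in, not the actual herder classes; it only captures the "forward first, validate once on the leader" logic.

```python
class Worker:
    """Toy stand-in for a Connect worker; sketches only the routing logic."""

    def __init__(self, is_leader, leader=None):
        self.is_leader = is_leader
        self.leader = leader
        self.validations = 0   # how many validations this worker performed
        self.config_topic = []

    def validate(self, config):
        self.validations += 1
        return [] if "connector.class" in config else ["missing connector.class"]

    def handle_create(self, config):
        # Proposed: forward to the leader up front, so the config is
        # validated exactly once (on the leader) before the topic write.
        if not self.is_leader:
            return self.leader.handle_create(config)
        errors = self.validate(config)
        if errors:
            return "400 Bad Request"
        self.config_topic.append(config)
        return "201 Created"
```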

Compatibility, Deprecation, and Migration Plan

The proposed changes are fully backward compatible since we're just introducing a new optional request header to 3 REST API endpoints along with a new worker configuration that has a default value.

Test Plan

A simple integration test will be added to ensure that a validate REST API request for a connector that takes longer than the default REST API request timeout (90 seconds) doesn't fail if the "Request-Timeout" header is set to a higher value. Unit tests will be added wherever applicable.

Rejected Alternatives

Introduce a new internal endpoint to persist a connector configuration without doing a config validation

Summary: Instead of forwarding all create / update requests to the leader directly, we could do a config validation on the non-leader worker first and, if the validation passes, forward the request to a new internal-only endpoint on the leader which would just do the write to the config topic without doing another config validation.

Rejected because: Introduces additional complexity with very little benefit as opposed to simply delegating all config validations from create / update requests to the leader.

Configure the timeout via a worker configuration

Summary: A Kafka Connect worker configuration could be introduced to control the request timeouts.

Rejected because: This doesn't allow for per-request timeout configuration and also requires a worker restart whenever changes are needed. Configuring the timeout via a request header allows for much more fine-grained control.

Allow configuring timeouts for ConnectClusterStateImpl

Summary: Currently, ConnectClusterStateImpl is configured in the RestServer and passed to REST extensions via the context object (see here). ConnectClusterStateImpl takes a request timeout parameter for its operations such as list connectors and get connector config (implemented as herder requests). This timeout is set to the minimum of ConnectResource.DEFAULT_REST_REQUEST_TIMEOUT_MS (90 seconds) and DistributedConfig.REBALANCE_TIMEOUT_MS_CONFIG (defaults to 60 seconds). We could allow configuring these timeouts too.

Rejected because: The overall behavior would be confusing to end users (they'll need to tweak two configs to increase the overall timeout) and there is seemingly no additional value here (as the herder requests should not take longer than the current configured timeout anyway).

Allow configuring producer zombie fencing admin request timeout

Summary: ConnectResource.DEFAULT_REST_REQUEST_TIMEOUT_MS is also used as the timeout for producer zombie fencings done in the worker for exactly once source tasks (see here). We could allow configuring this timeout as well.

Rejected because: Zombie fencing is an internal operation for Kafka Connect and users shouldn't be able to configure it.
