
Child pages
  • Handling Produce and Fetch Requests in KafkaApi

Handling fetch requests:
1 parse the fetch request and make sure it is valid
2 if the fetch request is from a follower (replicaId >= 0; a replicaId of -1 indicates a regular consumer), call maybeUpdatePartitionHW to see if the HW (high watermark) of the leader of a partition can be updated.
3 check to see if the fetch request can be satisfied immediately.
3.1 If yes, call readMessageSets() immediately and send the response back.
3.2 Otherwise, register a DelayedFetch watcher on each of the partitions in the fetch request. The watcher will be checked and potentially satisfied (DelayedFetch.checkSatisfied) if (a) the amount of bytes available for fetch exceeds a certain amount, or (b) a timeout has been reached.
3.2.1 Condition (a) will be checked at the end of handleProducerRequest(). For each satisfied fetch request, it calls readMessageSets() to fetch the data and to send the response.
3.2.2 When condition (b) is met, DelayedFetch.expire() will be called. It calls readMessageSets() to fetch whatever data is available and to send the response.
(TODO: need logic to unblock produce requests waiting on ack from followers)
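The watcher logic in steps 3.2 through 3.2.2 can be sketched as follows. This is an illustrative model, not Kafka's actual implementation: the class and method names (DelayedFetch, checkSatisfied, expire) follow the text above, while the fields and the byte accounting are assumptions.

```java
// Sketch of a DelayedFetch watcher: satisfied either when enough bytes
// accumulate (condition a) or when the timeout fires (condition b).
class DelayedFetch {
    private final long minBytes;    // condition (a): minimum bytes before responding
    private final long deadlineMs;  // condition (b): absolute expiry time
    private long availableBytes = 0;
    private boolean completed = false;

    DelayedFetch(long minBytes, long timeoutMs, long nowMs) {
        this.minBytes = minBytes;
        this.deadlineMs = nowMs + timeoutMs;
    }

    // Called at the end of produce handling for each watched partition.
    // Returns true when the fetch becomes satisfied; the caller would then
    // invoke readMessageSets() and send the response (step 3.2.1).
    synchronized boolean checkSatisfied(long newlyAppendedBytes) {
        if (completed) return false;
        availableBytes += newlyAppendedBytes;
        if (availableBytes >= minBytes) completed = true;
        return completed;
    }

    // Called when the timeout fires (step 3.2.2): respond with whatever
    // data is available, even if it is below minBytes.
    synchronized boolean expire(long nowMs) {
        if (completed || nowMs < deadlineMs) return false;
        completed = true;
        return true;
    }
}
```

The `completed` flag ensures the two paths race safely: whichever of checkSatisfied or expire fires first sends the response, and the other becomes a no-op.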

Handling produce requests:
1 produce(): for each partition
1.1 make sure leader is on this broker
1.2 add new data to local log (leader replica)
1.3 send response immediately (TODO: need logic to send response asynchronously when data is received by followers)
2 check if the produce request can satisfy a fetch request (i.e., one whose minimum byte requirement is now met)
3 for all satisfied fetch requests, fetch the data and send the responses
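The produce path above can be sketched for a single partition as follows. All names here are illustrative assumptions, not Kafka's actual API; the point is the flow: append locally, respond, then unblock any delayed fetches whose minimum-byte requirement is now met.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the leader's local log for one partition (step 1.2).
class PartitionLog {
    private long sizeInBytes = 0;

    // Append a message set and return the new total log size.
    long append(byte[] messageSet) {
        sizeInBytes += messageSet.length;
        return sizeInBytes;
    }
}

// Sketch of the produce handler for a partition led by this broker.
class ProduceHandler {
    private final PartitionLog leaderLog = new PartitionLog();
    private final List<Long> pendingFetchMinBytes = new ArrayList<>();
    final List<Long> satisfiedFetches = new ArrayList<>();

    // Step 3.2 of fetch handling: a delayed fetch waits on this partition.
    void registerDelayedFetch(long minBytes) {
        pendingFetchMinBytes.add(minBytes);
    }

    void handleProduce(byte[] messageSet) {
        // 1.1 assumed: leader for this partition is on this broker
        long total = leaderLog.append(messageSet);   // 1.2 append to local log
        // 1.3: the produce response would be sent here (immediately, per the TODO)
        // 2-3: unblock delayed fetches whose minimum byte requirement is now met
        pendingFetchMinBytes.removeIf(min -> {
            if (total >= min) {
                satisfiedFetches.add(min);           // caller reads data, responds
                return true;
            }
            return false;
        });
    }
}
```

Checking the pending fetches at the end of produce handling is what implements condition (a) from the fetch section: new data on the leader is the only event that can increase the bytes available to a waiting fetch.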
