
We are using Kafka Streams and Karpenter with a regular Deployment to manage the Pods for one of our services.
When Karpenter decides to evict a Pod, it brings a new Pod up, but we are seeing a delay before the new Pod takes over processing of the partitions.
That delay is around ~40 seconds.
I see that the default value of session.timeout.ms for a plain Kafka consumer is 45 seconds, which makes us think the old Pod is not removed from the consumer group until the broker times out its session.
Is the default session.timeout.ms for Kafka Streams also 45 seconds?
If so, that would explain the behavior we are seeing.
Apart from that, we are thinking of setting the internal.leave.group.on.close property to true (the Kafka Streams default is false), so that a closing instance leaves the group immediately; see the configuration sketch below.
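
For reference, this is roughly how we would apply both settings in our Streams configuration. The application id and bootstrap servers are placeholders, and note that internal.leave.group.on.close is an internal, unsupported config, so whether an override is honored may depend on the Kafka Streams version:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-service");        // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        // Lower the session timeout so the broker notices a dead instance sooner
        // (the consumer default is 45000 ms on Kafka 3.0+).
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG), 15000);

        // Internal config: ask the consumer to send a LeaveGroup request on close.
        // Kafka Streams sets this to false by default; overriding it is unsupported.
        props.put("internal.leave.group.on.close", true);
        return props;
    }
}
```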

1 Answer


Yes, Kafka Streams uses the same default session.timeout.ms as the plain KafkaConsumer (45 seconds). And because Kafka Streams does not send a leave-group request on close by default, the Group Coordinator does not recognize that the Pod has shut down until the session timeout expires, which would explain the ~40-second delay you observe.

Btw: there is work in progress to let you control when a leave-group request is sent (cf. https://cwiki.apache.org/confluence/display/KAFKA/KIP-1153%3A+Refactor+Kafka+Streams+CloseOptions+to+Fluent+API+Style), so you would not need to rely on the internal config you mentioned.
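
Until that KIP lands, here is a sketch (not part of the original answer) of how an instance could leave the group eagerly on shutdown using the existing KafkaStreams.CloseOptions added by KIP-812 in Kafka 3.2. As documented in KIP-812, leaveGroup(true) is intended for static membership, i.e. it only takes effect when group.instance.id is set:

```java
import java.time.Duration;
import org.apache.kafka.streams.KafkaStreams;

public class ShutdownHookExample {
    public static void register(KafkaStreams streams) {
        // On SIGTERM (e.g., when Karpenter evicts the Pod), close the instance
        // and ask it to leave the consumer group instead of waiting for the
        // session timeout to expire on the broker.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            KafkaStreams.CloseOptions options = new KafkaStreams.CloseOptions();
            options.timeout(Duration.ofSeconds(30)); // bound the shutdown time
            options.leaveGroup(true);                // static members only (KIP-812)
            streams.close(options);
        }));
    }
}
```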
