
We have a Kafka topic with two producers. I am trying to build a simple Kafka tool in Java to consume and view messages from the topic. The idea is to take an offset and partition from the user, seek to that particular offset, and show the message to the user. My problem is that my tool can only find the offsets written by one of the producers.

For example: Producer 1's message is at partition 3, offset 514124, and Producer 2's message is at partition 3, offset 547007. Both were produced around the same time, back to back. When I seek to offset 514124 it works, but when I seek to 547007 I get the error below:

[MSG][Error consuming Kafka message: null][STACK][java.util.NoSuchElementException

I am assuming it is because of the gap between the offsets (514124 vs. 547007). How do I ensure that my tool consumes whatever offset I seek to in the topic, irrespective of which producer produced it?

I have set a group ID; I have not set anything for enable.auto.commit or auto.offset.reset.
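For reference, if I were to set those two properties explicitly, I believe a read-only viewer like this should disable auto-commit and fail fast when a seeked offset does not exist (this is my assumption based on the documented client defaults, where enable.auto.commit is true and auto.offset.reset is latest):

    import java.util.Properties;

    public class ViewerConfig {
        public static Properties extraProps() {
            Properties props = new Properties();
            // A read-only viewer should never commit offsets on behalf of the group
            props.put("enable.auto.commit", "false");
            // "none" makes the consumer raise an error for an out-of-range offset
            // instead of silently jumping to earliest/latest
            props.put("auto.offset.reset", "none");
            return props;
        }
    }
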

try {
    
    java.util.Properties props = new java.util.Properties();
    props.put("bootstrap.servers", "sample"); // Replace with your Kafka broker URL
    props.put("group.id", "sample"); // Consumer group ID
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("security.protocol", "SSL");
    props.put("ssl.truststore.location", truststorePath);
    props.put("ssl.truststore.password", truststorePassword);
    props.put("ssl.keystore.location", keystorePath);
    props.put("ssl.keystore.password", keystorePassword);

 
    org.apache.kafka.clients.consumer.KafkaConsumer<String, Object> consumer = new org.apache.kafka.clients.consumer.KafkaConsumer<>(props);

  
    String offsetStr = "OS"; // User Input offset
    String partitionStr = "PP"; // User Input Partition

    try {
        // Configure topic partition and offset
        int partition = Integer.parseInt(partitionStr);
        long offset = Long.parseLong(offsetStr);
        org.apache.kafka.common.TopicPartition topicPartition = new org.apache.kafka.common.TopicPartition(TN, partition); // User Input topic
        consumer.assign(java.util.Collections.singletonList(topicPartition));
        consumer.seek(topicPartition, offset);

        // Consume Kafka message
        org.apache.kafka.clients.consumer.ConsumerRecord<String, Object> record =
                consumer.poll(1000).iterator().next(); // throws NoSuchElementException when poll() returns no records
    } catch (Exception e) {
        e.printStackTrace();
    }
} catch (Exception e) {
    e.printStackTrace();
}
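Edit: from reading the KafkaConsumer Javadoc, I think the fetch should validate the requested offset against the partition's available range (beginningOffsets/endOffsets) and check that poll() actually returned something before calling next(). Here is a sketch of what I have in mind; the class and method names (OffsetViewer, fetchOne, offsetAvailable) are mine, and I have not verified this against our cluster:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class OffsetViewer {
        // True if the requested offset lies inside [begin, end); the end offset
        // reported by the broker is the offset of the NEXT record to be written
        static boolean offsetAvailable(long requested, long begin, long end) {
            return requested >= begin && requested < end;
        }

        static ConsumerRecord<String, Object> fetchOne(
                KafkaConsumer<String, Object> consumer,
                String topic, int partition, long offset) {
            TopicPartition tp = new TopicPartition(topic, partition);
            consumer.assign(Collections.singletonList(tp));

            // Ask the broker for the valid offset range of this partition
            long begin = consumer.beginningOffsets(Collections.singletonList(tp)).get(tp);
            long end = consumer.endOffsets(Collections.singletonList(tp)).get(tp);
            if (!offsetAvailable(offset, begin, end)) {
                throw new IllegalArgumentException("Offset " + offset
                        + " is outside the available range [" + begin + ", " + end + ")");
            }

            consumer.seek(tp, offset);
            ConsumerRecords<String, Object> records = consumer.poll(Duration.ofSeconds(5));
            if (records.isEmpty()) {
                return null; // nothing arrived within the timeout; caller can retry or report
            }
            return records.iterator().next();
        }
    }

Note that even inside the valid range, individual offsets can be absent (e.g. on compacted topics or with transactional producers, where control records consume offsets), so the empty-poll check is still needed.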

  • The consumer has no knowledge of producers, so the ID/number doesn't matter. How is there a 33k offset difference if the producer sent messages back to back? Why do you need to seek at all when you can show records as they're consumed? Can you share the entire stacktrace and your code as a minimal reproducible example? Commented Aug 21, 2024 at 13:25
  • There are two different producers with different producing mechanisms: one uses the Kafka REST API and the other has direct broker integration. Not sure if that is the reason for the offset difference. The purpose of this testing tool is for testers to check historical data, so showing the messages in real time as they are consumed won't meet their requirement. Commented Aug 22, 2024 at 0:20
  • @OneCricketeer - I have edited and added my code snippet Commented Aug 22, 2024 at 9:33
  • Thanks. The number/type of producers shouldn't matter. Offsets should always be consecutive, so unless you're producing 30k+ messages/sec, then that large a gap seems unlikely. In any case, the error is saying the iterator is empty; you should check the polled records count before calling next(). Also, do you really need a Java app to replace kafka-console-consumer --topic T --partition X --offset Y --max-messages 1? Commented Aug 22, 2024 at 14:04
