0 votes
0 answers
16 views

I’m using a Kafka Connect BigQuery Sink Connector. Some messages fail to insert into BigQuery (for example, value is null or not a valid float), but the connector continues processing without stopping....
Murat
Best practices
0 votes
7 replies
61 views

I am using Kafka Streams, and trying to figure out the best practice for handling multiple stores that are populated using the same data without having to read it twice. My Kafka Streams service is ...
Andreas10001
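A minimal sketch of one way to populate two state stores from a single read of the topic with the plain Kafka Streams DSL; the topic name, store names and String serdes are assumptions for illustration:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KGroupedStream;
    import org.apache.kafka.streams.kstream.Materialized;

    public class TwoStoresFromOneTopic {
        public static StreamsBuilder buildTopology() {
            StreamsBuilder builder = new StreamsBuilder();

            // Read the source topic once...
            KGroupedStream<String, String> grouped = builder
                    .stream("events", Consumed.with(Serdes.String(), Serdes.String()))
                    .groupByKey();

            // ...and materialize two different views of the same data,
            // so the input is not consumed twice.
            grouped.count(Materialized.as("event-counts-store"));
            grouped.reduce((oldValue, newValue) -> newValue, Materialized.as("latest-event-store"));

            return builder;
        }
    }

Because the key is never changed before the aggregations, both stores are fed from the same source without an extra repartition topic.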
0 votes
0 answers
67 views

I have a Kafka consumer in a Spring Boot Java project that triggers a Spring State Machine when an event is consumed. The state machine starts and completes successfully, but Kafka keeps retrying the ...
Riddler
0 votes
1 answer
122 views

In a Spring Boot app, I have a message listener (@KafkaListener) with a concurrency of 28 listening to a topic which has 28 partitions. I have default configurations for all consumer properties. ...
Sathish
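For reference, a minimal sketch of the kind of setup being described, with the concurrency declared on the listener itself (topic and group names are placeholders):

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class OrderListener {

        // Spring Kafka creates 28 consumer threads for this listener;
        // with 28 partitions, each thread should end up owning exactly one partition.
        @KafkaListener(topics = "orders", groupId = "order-processor", concurrency = "28")
        public void onMessage(String message) {
            // process the record
        }
    }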
1 vote
0 answers
37 views

So basically I am trying to read data from Kafka and dump it to some destination. Before committing the message back to Kafka, I commit it to the destination, and after a successful commit to the destination, ...
Duke Dhal
-1 votes
1 answer
39 views

I'm experimenting with Kafka consumers and want to understand the behavior of consumer.subscribe() in the following scenario: I have a single consumer group with multiple consumers (all alive, none ...
Oac Roland
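A minimal sketch of that scenario with the plain Java client, assuming every instance runs this code with the same group.id so the topic's partitions are split between them at rebalance (broker address and topic name are placeholders):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupMember {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "my-group"); // the same group for every instance
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() only registers interest; partitions are actually assigned
                // during the group rebalance driven by subsequent poll() calls.
                consumer.subscribe(List.of("my-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                record.partition(), record.offset(), record.value());
                    }
                }
            }
        }
    }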
0 votes
0 answers
112 views

I am trying to understand a behavior which is a bit confusing. We are using Spring Boot and the KafkaListener approach for the consumers. The configuration has auto commit set to true with the default ...
sg2000
0 votes
0 answers
96 views

We are using the Spring Boot KafkaListener annotation to listen to a Kafka topic. In the consumer config, I have set the ack mode to COUNT_TIME with the ackCount at 2000 and the ackTime at 1 minute, and the listener ...
sg2000
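For context, a minimal sketch of how COUNT_TIME with those thresholds is typically wired up in Spring Kafka; the bean name and the injected consumer factory are placeholders:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.ContainerProperties;

    @Configuration
    public class ListenerConfig {

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
                ConsumerFactory<String, String> consumerFactory) {
            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory);
            ContainerProperties props = factory.getContainerProperties();
            // Commit after 2000 processed records OR after 1 minute, whichever comes first.
            props.setAckMode(ContainerProperties.AckMode.COUNT_TIME);
            props.setAckCount(2000);
            props.setAckTime(60_000);
            return factory;
        }
    }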
0 votes
2 answers
172 views

We are using a Spring Boot Kafka listener to consume messages from Kafka. Currently, we have auto offset commit enabled, so the offset commits happen every 5 seconds (the default auto commit interval)...
sg2000
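For reference, a minimal sketch of the consumer properties being described (the values shown are the defaults mentioned in the question):

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    public class AutoCommitProps {
        // With enable.auto.commit=true, offsets of records returned by earlier poll()
        // calls are committed during later poll() calls (and on close) once
        // auto.commit.interval.ms has elapsed, independent of how far processing has got.
        public static Map<String, Object> autoCommitConfig() {
            return Map.of(
                    ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true,
                    ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 5000);
        }
    }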
0 votes
1 answer
193 views

I have a topic with 2 partitions which has historic messages. I just made a new consumer group and I'm running 1 flakey consumer. I want reliability (no lost messages) from the point of my first ...
Phil
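A minimal sketch of the usual combination for "no lost messages from the first poll onwards" with the plain Java client: start the new group from earliest, disable auto commit, and only commit after processing succeeds (broker address, group and topic names are placeholders):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ReliableConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "new-group");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // new group starts at the beginning
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");   // commit only after processing
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-topic"));
                while (true) {
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                        process(record);
                    }
                    consumer.commitSync(); // if the consumer crashes before this, records are re-delivered
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) { /* ... */ }
    }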
2 votes
0 answers
397 views

I'm building a Retrieval-Augmented Generation (RAG) application using Python + FastAPI. I'm also running a Kafka consumer alongside the FastAPI app to process incoming text and generate embeddings (...
Madhur Prakash
0 votes
1 answer
36 views

We have a Kafka topic and we are planning to introduce a canary release to limit our blast radius during prod deployments. One of the ideas we are considering is to have some specific partition (say ...
Rahul Dobriyal
1 vote
0 answers
58 views

So, I have a console .NET 6 application which runs as a BackgroundService using the extension method UseWindowsService(). In this app, I am using NewRelic custom instrumentation via a newrelic....
Akshay Chhangani
1 vote
0 answers
47 views

Assume my kafka log contains messages as below. Offset-1 = non-transactional message Offset-2 = non-transactional message Offset-3 = transactional message1 for transaction T1 . . . offset-13 = ...
manjunath
0 votes
0 answers
78 views

I'm having an issue when using the landoop/fast-data-dev image on Docker. I have the following docker-compose file: version: "3.8" networks: minha-rede: driver: bridge services: ...
Wedla Melo
0 votes
1 answer
168 views

I am trying to generalize MDC logging for a Correlation-ID. Right now, I am putting the MDC entry in the consume method of the listener class itself, after getting the consumer message. Then I clear the MDC before the consume ...
Bandita Pradhan
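A minimal sketch of the pattern being described: set and clear the MDC around each record in one place, shown here inline with a try/finally. The header name X-Correlation-Id, the topic and the group are assumptions:

    import java.nio.charset.StandardCharsets;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.header.Header;
    import org.slf4j.MDC;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class CorrelatedListener {

        @KafkaListener(topics = "events", groupId = "events-consumer")
        public void consume(ConsumerRecord<String, String> record) {
            Header header = record.headers().lastHeader("X-Correlation-Id"); // assumed header name
            String correlationId = header != null
                    ? new String(header.value(), StandardCharsets.UTF_8)
                    : "unknown";
            MDC.put("correlationId", correlationId);
            try {
                // business logic; every log line on this thread now carries the correlation id
            } finally {
                MDC.remove("correlationId"); // always clear so the id does not leak into the next record
            }
        }
    }

To avoid repeating this in every listener, the same put/clear logic can be moved into a shared interceptor or aspect rather than each consume method.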
0 votes
0 answers
61 views

I am currently working on a Java and TestNG framework. Here, we first define a data model, like an Erwin data model, with sources, targets and transformation logic. Then during runtime we publish data on the ...
ashish chauhan
1 vote
1 answer
263 views

I am running a Kafka cluster and I want to migrate to Kafka 4.0; however, I am not sure if all connected producers and consumers are compatible with it. I've already looked at kafka-consumer-list ...
Verthais
0 votes
0 answers
54 views

I followed some solutions like this and the official doc, but it seems the above works for a Java setup and not for others like Node.js/NestJS. I tried with the following code but it fails connecting to the broker/topic: private ...
Sunil Kumar Singh
0 votes
0 answers
41 views

I have a Kafka DLQ (dead-letter queue) topic where I'm sending the failed records, but I don't want to keep reprocessing them until a patch is deployed. Thus, the idea would be to have the consumer ...
scripty
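A minimal sketch of one way to keep the DLQ listener idle until it is explicitly started, assuming Spring Kafka; the listener id, DLT topic name and the HTTP trigger endpoint are made up for illustration:

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
    import org.springframework.stereotype.Component;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RestController;

    @Component
    class DlqListener {
        // autoStartup=false: the container exists but does not consume until started.
        @KafkaListener(id = "dlq-listener", topics = "orders.DLT", autoStartup = "false")
        public void reprocess(String failedRecord) {
            // reprocessing logic, only runs once the container has been started
        }
    }

    @RestController
    class DlqController {
        private final KafkaListenerEndpointRegistry registry;

        DlqController(KafkaListenerEndpointRegistry registry) {
            this.registry = registry;
        }

        // Hit this endpoint once the patch is deployed to drain the DLQ.
        @PostMapping("/dlq/start")
        public void start() {
            registry.getListenerContainer("dlq-listener").start();
        }
    }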
0 votes
0 answers
170 views

I have an application with a multi-cluster binder that reads from one Kafka cluster and produces messages to another Kafka cluster. At some point I started to receive log messages like this: 1 --- [pool-2-...
krn_645
0 votes
0 answers
27 views

We have Kafka consumers running on two WebSphere servers under the same cluster. The Kafka consumers are configured with auto offset commit, use the same consumer group, and poll every 5 seconds. The ...
Jmanroe
1 vote
1 answer
86 views

My goal is to get data from an Oracle Database into Azure Data Explorer and in the future maybe also into another target system. To get data out from the oracle database I use the JDBC Connector from ...
Robsn
0 votes
0 answers
53 views

My code is not working with the latest Spring Boot version. My Spring Boot version is 3.4.3 and my Kafka version is 3.3.3. It is giving me an exception, i.e. java.lang.ClassCastException: class org....
Bilbo
0 votes
0 answers
15 views

I'm developing a chat application using Kafka (with KafkaJS) and WebSockets in Node.js. Users can dynamically join and leave chat groups, sometimes multiple times within short periods (online and ...
aloke deep Ganguly
1 vote
0 answers
137 views

I have a Spring Boot consumer that consumes Avro messages, and I have configured it with a retry topic and a DLT. When a SerializationException occurs, it causes an infinite loop and produces a huge amount of logs....
Dorian Pavetić
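For context, the usual guard against this loop in Spring Kafka is to wrap the real deserializer in ErrorHandlingDeserializer, so a poison record surfaces as a handled error (and can be routed to the DLT) instead of failing inside poll() at the same offset forever. A minimal sketch of the consumer properties, assuming the Confluent Avro deserializer as the delegate:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.springframework.kafka.support.serializer.ErrorHandlingDeserializer;

    public class DeserializerConfig {
        public static Map<String, Object> consumerProps() {
            Map<String, Object> props = new HashMap<>();
            // The listener container now sees a record carrying the deserialization
            // exception instead of poll() throwing and re-fetching the same offset.
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
            props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS,
                    "io.confluent.kafka.serializers.KafkaAvroDeserializer"); // the real (delegate) deserializer
            return props;
        }
    }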
0 votes
0 answers
262 views

I am polling 1000 records from the broker using Spring Kafka. I am using the default consumer/listener factory beans provided by auto-configuration. @KafkaListener(topics = "abcd-topic") public ...
cbot
0 votes
0 answers
279 views

To explain with an example, I am setting spring.kafka.consumer.max-poll-records=1000, but I want the listener to process only 100 records at a time. That is, break the 1000 polled records into 10 sub-...
cbot
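Spring Kafka hands the whole poll to a batch listener, so splitting it up is usually done inside the listener itself. A minimal sketch, assuming a batch listener container (spring.kafka.listener.type=batch) and a chunk size of 100:

    import java.util.List;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class ChunkedBatchListener {

        private static final int CHUNK_SIZE = 100;

        // Requires a batch listener container factory (e.g. spring.kafka.listener.type=batch).
        @KafkaListener(topics = "abcd-topic")
        public void onBatch(List<String> records) {
            // A poll of up to 1000 records arrives as one list; process it 100 at a time.
            for (int from = 0; from < records.size(); from += CHUNK_SIZE) {
                int to = Math.min(from + CHUNK_SIZE, records.size());
                processChunk(records.subList(from, to));
            }
        }

        private void processChunk(List<String> chunk) { /* ... */ }
    }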
0 votes
2 answers
494 views

I’m working with two Kafka topics. The first topic is consumed by a continuous consumer that processes messages as soon as they arrive. If the processing fails for any message, that message is ...
Veoxer
0 votes
0 answers
81 views

I am working on a distributed event-driven system where multiple components interact via Kafka topics in an asynchronous manner. The flow is as follows: An event is triggered from the backend API and ...
Maddy
1 vote
1 answer
219 views

So the Kafka client for consuming messages uses poll(), which, along with the heartbeat, is what makes the broker decide if a client is still alive. What is not clear to me is whether the poll() and heartbeat ...
Jim
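For reference, the two liveness mechanisms are independent and controlled by separate settings: heartbeats are sent from a background thread, while poll() progress is tracked via max.poll.interval.ms. A minimal sketch of the relevant consumer properties (values here are just illustrative defaults):

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    public class LivenessProps {
        public static Map<String, Object> livenessConfig() {
            return Map.of(
                    // Heartbeats come from the consumer's background thread; if none arrive
                    // within session.timeout.ms, the group coordinator evicts the member.
                    ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3_000,
                    ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45_000,
                    // Separately, if the application thread does not call poll() again within
                    // this interval, the client leaves the group and a rebalance occurs.
                    ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);
        }
    }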
0 votes
0 answers
49 views

I need to modify the current offset for a Kafka topic at runtime from the message-receiving method, marked with the @KafkaListener annotation. The idea is to move the offset at runtime to the ...
V.Lorz
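A minimal sketch of one way to do this from the receiving method: Spring Kafka can inject the underlying Consumer into the listener, which allows seek() on the record's partition. The topic, group and the target-offset logic are placeholders:

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.TopicPartition;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class SeekingListener {

        @KafkaListener(topics = "events", groupId = "seeking-consumer")
        public void onMessage(ConsumerRecord<String, String> record, Consumer<?, ?> consumer) {
            long targetOffset = decideTargetOffset(record); // placeholder for the real rule
            if (targetOffset >= 0) {
                // Move the consumer's position for this partition; the next poll()
                // starts fetching from targetOffset.
                consumer.seek(new TopicPartition(record.topic(), record.partition()), targetOffset);
            }
        }

        private long decideTargetOffset(ConsumerRecord<String, String> record) {
            return -1; // no seek by default
        }
    }

The injected Consumer should only be used for seeks like this on the listener thread, not for polling or committing from elsewhere.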
2 votes
1 answer
113 views

I have two Kafka queues (queue1 & queue2), and multiple producers (clients) are posting messages to both queues. Each client sends different types of messages to the queues (e.g., client1 sends "...
deen
1 vote
0 answers
60 views

I'm verifying the Protobuf schema using the Kafka schema registry. The problem is that even though I put in the correct schema, I still get the error Broker: Broker failed to verify record. The schema ...
Sudip Sikdar
0 votes
0 answers
71 views

Is there any way we can point a consumer application to a new bootstrap server URL via config or a file, without restarting the application? I tried refreshing the Kafka consumer cache but was not ...
Rocky
3 votes
2 answers
910 views

I have a FastAPI application that subscribes to Kafka topic using asynchronous code (i.e., async/await). I have to create a unit test for my application. My code: def create_consumer() -> ...
mascai
2 votes
2 answers
122 views

I'm developing a Quarkus microservice that utilizes Kafka Streams to process messages from multiple topics. Specifically, I'm attempting to join a KStream and a KTable derived from two of these topics....
Alex H
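A minimal sketch of the join itself with the plain Kafka Streams DSL; topic names, value types and serdes are placeholders, and in Quarkus this topology would typically be exposed as a CDI-produced Topology bean:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Produced;

    public class OrderEnrichmentTopology {
        public static Topology build() {
            StreamsBuilder builder = new StreamsBuilder();

            KStream<String, String> orders =
                    builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));
            // The table must be keyed the same way as the stream; the join is by key.
            KTable<String, String> customers =
                    builder.table("customers", Consumed.with(Serdes.String(), Serdes.String()));

            orders.join(customers, (order, customer) -> order + " | " + customer)
                  .to("enriched-orders", Produced.with(Serdes.String(), Serdes.String()));

            return builder.build();
        }
    }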
0 votes
1 answer
79 views

I am creating a new schema in KafkIO and I am unable to create new schemas sometimes. Simplest example, let's say I create a simple schema, version 1: Create simple schema Then I create a new schema ...
c burke
0 votes
0 answers
62 views

Given: ConsumerFactory with ContainerProperties.AckMode.MANUAL set. @KafkaListener( autoStartup = "false", topics = "someTopic", ...
christopher.online
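With MANUAL ack mode the listener method typically receives an Acknowledgment and decides when the offset gets committed; a minimal sketch, modeled on the snippet above (the listener id is an addition for illustration):

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.support.Acknowledgment;
    import org.springframework.stereotype.Component;

    @Component
    public class ManualAckListener {

        @KafkaListener(id = "someListener", autoStartup = "false", topics = "someTopic")
        public void onMessage(String message, Acknowledgment ack) {
            // process the record first ...
            // ... then acknowledge; with AckMode.MANUAL the commit is queued and performed
            // by the container (MANUAL_IMMEDIATE would commit right away instead).
            ack.acknowledge();
        }
    }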
1 vote
0 answers
34 views

I am relatively new to Kafka; as I understand it, we have a hard limit of at most 1 consumer per partition in a Kafka setup. Also, Kafka tries to maintain order within the same partition at the expense of ...
Dhruv
0 votes
2 answers
1k views

I have the following setup: Kafka broker (3.9.0) Kafka producer (for now, using the producer-console in kafka itself) This setup works fine for basic TCP, TLS and even tried SASL authentication using ...
Omi
1 vote
0 answers
52 views

I have to consume messages from a topic on a Kafka cluster with 10 partitions where messages are being evenly distributed across them. I have an ASP.NET application with 10 background services where ...
Boris Marković
2 votes
0 answers
26 views

Issue Description: Single Consumer: When Kafka is used by a single consumer, it works as expected, and the log shows the following partition assignment: test-consumer-group: partitions assigned: [...
Aspin M
0 votes
0 answers
65 views

My application polls records from an MSK Kafka cluster. The application maintains the offset of each partition and hence has disabled autocommit. It actually never commits offsets, as it persists the offsets to ...
Ravi Gupta
1 vote
2 answers
38 views

Let's say I have a Kafka queue with a topic named TOPIC, and I have two consumer groups, CONSUMER1 and CONSUMER2, and I have added 1000 records to TOPIC. CONSUMER1 has consumed 800 records and CONSUMER2 has ...
NativeJavaDeveloper
0 votes
1 answer
73 views

I need to retrieve 1000 messages quickly from a Kafka topic, but the initial retrieval is slow with the kafka-clients 3.6.1 KafkaConsumer API. We are migrating from an old Kafka client (version 0.8.1) to a ...
Giri Mungi
1 vote
0 answers
57 views

The scenario is: the user will give an offset as input, and based on that offset we need to return 1000 messages from the Kafka topic along with the next offset. The Kafka topic contains only one partition. We are trying to ...
World of Titans
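A minimal sketch of that read path with the plain Java client, assuming a single-partition topic: assign the partition, seek to the requested offset, then poll until 1000 records (or the end of the data) have been collected. Broker address and the stopping heuristic are assumptions:

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class OffsetPageReader {

        public static List<ConsumerRecord<String, String>> readPage(String topic, long startOffset) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            List<ConsumerRecord<String, String>> page = new ArrayList<>();
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition(topic, 0); // topic has a single partition
                consumer.assign(List.of(tp));   // no group management needed for paging
                consumer.seek(tp, startOffset); // start exactly at the user-supplied offset
                int emptyPolls = 0;
                while (page.size() < 1000 && emptyPolls < 3) {
                    var records = consumer.poll(Duration.ofMillis(500));
                    if (records.isEmpty()) {
                        emptyPolls++;
                    } else {
                        emptyPolls = 0;
                        records.forEach(page::add);
                    }
                }
            }
            // the "next offset" to hand back is the offset of the last returned record + 1
            return page.size() > 1000 ? page.subList(0, 1000) : page;
        }
    }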
1 vote
0 answers
117 views

I have a Redpanda Kafka container running, where the logs show this: brokers: {{id: 0, kafka_advertised_listeners: {{PLAINTEXT:{host: 127.0.0.1, port: 29092}}, {OUTSIDE:{host: localhost, port: 9092}}},...
curiousengineer
2 votes
0 answers
933 views

I'm using Docker Desktop running Kubernetes. I'm setting up my environment using the following configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: kafka-network spec: ...
Ernesto Limon
1 vote
0 answers
85 views

I'm using Kafka for communication between multiple microservices. Basically, I have a microservice that registers entities and then publishes them as a message to Kafka for listeners to consume and sync. The ...
Aism793
