Commit c858d38

Author: Marko Obrovac (committed)

Merge pull request #1 from Pchelolo/upstream_sync

Upstream sync

2 parents: 75c55e9 + 720bacd

23 files changed: +723 -88 lines

.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -4,3 +4,4 @@ node_modules
 *.swo
 test/data.txt
 docs
+.idea
```

CHANGELOG.md

Lines changed: 11 additions & 0 deletions
```diff
@@ -1,5 +1,16 @@
 # kafka-node CHANGELOG

+## 2016-02-21, Version 0.3.2
+- Fix client socket when closing and error handling [#314](https://github.com/SOHU-Co/kafka-node/pull/314)
+- Make `commit()` handle case when only callback is passed [#306](https://github.com/SOHU-Co/kafka-node/pull/306)
+- Fix typo in offset.js [#304](https://github.com/SOHU-Co/kafka-node/pull/304)
+
+## 2016-01-09, Version 0.3.1
+- Buffer batch for async producers [#262](https://github.com/SOHU-Co/kafka-node/pull/262)
+
+## 2016-01-08, Version 0.3.0
+- Add partitions to producer [#260](https://github.com/SOHU-Co/kafka-node/pull/260)
+
 ## 2015-05-11, Version 0.2.27
 - Deps: upgrade snappy to 3.2.0
 - Zookeeper#listConsumers: ignore error when there is no such node in zookeeper
```

CONTRIBUTING.md

Lines changed: 37 additions & 0 deletions
```diff
@@ -0,0 +1,37 @@
+# How to contribute
+
+All patches or feature evolutions are welcome.
+
+## Getting Started
+
+* Make sure you have a [GitHub account](https://github.com/signup/free)
+* Submit a ticket for your issue, assuming one does not already exist.
+  * Clearly describe the issue, including steps to reproduce when it is a bug.
+  * Make sure you fill in the earliest version that you know has the issue.
+* Fork the repository on GitHub
+
+## Making Changes
+
+* Create a topic branch from where you want to base your work
+  (this is usually the master branch on your forked project).
+* Make commits of logical units.
+* Check for unnecessary whitespace with `git diff --check` before committing.
+* The code style of the current code base should be preserved.
+* Make sure you have added the necessary tests for your changes, especially if
+  you added a new feature.
+* Run _all_ the tests to ensure nothing else was accidentally broken.
+
+## Submitting Changes
+
+* Push your changes to a topic branch in your fork of the repository.
+* Submit a pull request to the repository.
+* Make sure that the PR has a clean log message, and don't hesitate to squash
+  and rebase your commits in order to preserve a clean history.
+
+## Code reviewers
+
+* For small fixes, one reviewer can merge the PR directly.
+* For new features or big changes to the current code base, at least two
+  collaborators should LGTM before merging.
+* Rebasing instead of merging is recommended, to avoid "Merge ..." commits
+  (see https://github.com/blog/2141-squash-your-commits)
```

README.md

Lines changed: 38 additions & 10 deletions
````diff
@@ -16,10 +16,11 @@ Follow the [instructions](http://kafka.apache.org/documentation.html#quickstart)

 # API
 ## Client
-### Client(connectionString, clientId, [zkOptions])
+### Client(connectionString, clientId, [zkOptions], [noAckBatchOptions])
 * `connectionString`: Zookeeper connection string, default `localhost:2181/`
 * `clientId`: This is a user-supplied identifier for the client application, default `kafka-node-client`
 * `zkOptions`: **Object**, Zookeeper options, see [node-zookeeper-client](https://github.com/alexguan/node-zookeeper-client#client-createclientconnectionstring-options)
+* `noAckBatchOptions`: **Object**, when `requireAcks` is disabled on the Producer side, defines the batch properties: `noAckBatchSize` in bytes and `noAckBatchAge` in milliseconds. The default value is `{ noAckBatchSize: null, noAckBatchAge: null }`, which acts as if there were no batching

 ### close(cb)
 Closes the connection to Zookeeper and the brokers so that the node process can exit gracefully.
````
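To make the new parameter concrete, here is a minimal sketch of a `noAckBatchOptions` object; the threshold values are illustrative, not library defaults, and the commented-out `Client` construction assumes a Zookeeper instance on localhost:

```javascript
// Illustrative no-ack batching settings: flush unacknowledged sends once
// 5 MB or 5 seconds accumulate (values chosen for the example only).
var noAckBatchOptions = {
  noAckBatchSize: 5 * 1024 * 1024, // bytes
  noAckBatchAge: 5000              // milliseconds
};

// With a running Zookeeper, this would be passed as the fourth argument:
// var client = new kafka.Client('localhost:2181/', 'kafka-node-client', {}, noAckBatchOptions);
```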
````diff
@@ -29,7 +30,18 @@ Closes the connection to Zookeeper and the brokers so that the node process can
 ## Producer
 ### Producer(client, [options])
 * `client`: client which keeps a connection with the Kafka server.
-* `options`: set `requireAcks` and `ackTimeoutMs` for producer, the default value is `{requireAcks: 1, ackTimeoutMs: 100}`
+* `options`: options for the producer:
+
+```js
+{
+  // Configuration for when to consider a message as acknowledged, default 1
+  requireAcks: 1,
+  // The amount of time in milliseconds to wait for all acks before considered, default 100ms
+  ackTimeoutMs: 100,
+  // Partitioner type (default = 0, random = 1, cyclic = 2, keyed = 3), default 0
+  partitionerType: 2
+}
+```

 ``` js
 var kafka = require('kafka-node'),
````
````diff
@@ -49,9 +61,10 @@ var kafka = require('kafka-node'),
 ``` js
 {
   topic: 'topicName',
-  messages: ['message body'],// multi messages should be a array, single message can be just a string or a KeyedMessage instance
-  partition: 0, //default 0
-  attributes: 2, // default: 0
+  messages: ['message body'], // multi messages should be an array, a single message can be just a string or a KeyedMessage instance
+  key: 'theKey', // only needed when using keyed partitioner
+  partition: 0, // default 0
+  attributes: 2 // default: 0
 }
 ```
````
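As a sketch of how the new `key` field fits into a send, the payload below uses made-up topic and key values; the actual `producer.send` call is commented out because it needs a connected broker:

```javascript
// Hypothetical payload for a producer created with partitionerType: 3 (keyed).
// Messages sharing the same key are routed to the same partition.
var payloads = [{
  topic: 'topicName',
  messages: ['message body'],
  key: 'theKey'
}];

// With a connected producer:
// producer.send(payloads, function (err, data) {});
```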

````diff
@@ -112,7 +125,18 @@ producer.createTopics(['t'], function (err, data) {});// Simply omit 2nd arg
 ## HighLevelProducer
 ### HighLevelProducer(client, [options])
 * `client`: client which keeps a connection with the Kafka server. Round-robins produce requests to the available topic partitions
-* `options`: set `requireAcks` and `ackTimeoutMs` for producer, the default value is `{requireAcks: 1, ackTimeoutMs: 100}`
+* `options`: options for the producer:
+
+```js
+{
+  // Configuration for when to consider a message as acknowledged, default 1
+  requireAcks: 1,
+  // The amount of time in milliseconds to wait for all acks before considered, default 100ms
+  ackTimeoutMs: 100,
+  // Partitioner type (default = 0, random = 1, cyclic = 2, keyed = 3), default 2
+  partitionerType: 3
+}
+```

 ``` js
 var kafka = require('kafka-node'),
````
````diff
@@ -132,7 +156,8 @@ var kafka = require('kafka-node'),
 ``` js
 {
   topic: 'topicName',
-  messages: ['message body'],// multi messages should be a array, single message can be just a string
+  messages: ['message body'], // multi messages should be an array, a single message can be just a string
+  key: 'theKey', // only needed when using keyed partitioner
   attributes: 1
 }
 ```
````
````diff
@@ -205,7 +230,7 @@ producer.createTopics(['t'], function (err, data) {});// Simply omit 2nd arg
   // This is the minimum number of bytes of messages that must be available to give a response, default 1 byte
   fetchMinBytes: 1,
   // The maximum bytes to include in the message set for this partition. This helps bound the size of the response.
-  fetchMaxBytes: 1024 * 10,
+  fetchMaxBytes: 1024 * 1024,
   // If set true, consumer will fetch message from the given offset in the payloads
   fromOffset: false,
   // If set to 'buffer', values will be returned as raw buffer objects.
````
````diff
@@ -355,7 +380,10 @@ consumer.close(cb); //force is disabled

 ```js
 {
-  groupId: 'kafka-node-group',//consumer group id, deafult `kafka-node-group`
+  // Consumer group id, default `kafka-node-group`
+  groupId: 'kafka-node-group',
+  // Consumer id, defaults to `groupId`
+  id: 'my-consumer-id',
   // Auto commit config
   autoCommit: true,
   autoCommitIntervalMs: 5000,
````
````diff
@@ -364,7 +392,7 @@ consumer.close(cb); //force is disabled
   // This is the minimum number of bytes of messages that must be available to give a response, default 1 byte
   fetchMinBytes: 1,
   // The maximum bytes to include in the message set for this partition. This helps bound the size of the response.
-  fetchMaxBytes: 1024 * 10,
+  fetchMaxBytes: 1024 * 1024,
   // If set true, consumer will fetch message from the given offset in the payloads
   fromOffset: false,
   // If set to 'buffer', values will be returned as raw buffer objects.
````

kafka.js

Lines changed: 4 additions & 0 deletions
```diff
@@ -5,3 +5,7 @@ exports.Producer = require('./lib/producer');
 exports.Client = require('./lib/client');
 exports.Offset = require('./lib/offset');
 exports.KeyedMessage = require('./lib/protocol').KeyedMessage;
+exports.DefaultPartitioner = require('./lib/partitioner').DefaultPartitioner;
+exports.CyclicPartitioner = require('./lib/partitioner').CyclicPartitioner;
+exports.RandomPartitioner = require('./lib/partitioner').RandomPartitioner;
+exports.KeyedPartitioner = require('./lib/partitioner').KeyedPartitioner;
```
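The documented partitioner types can be pictured with the toy functions below; this mirrors the behavior the README describes (cyclic round-robin, deterministic keyed routing) and is not the library's actual implementation:

```javascript
// Toy cyclic partitioner: hands out partitions round-robin across calls.
function cyclicPartitioner() {
  var next = 0;
  return function (partitions) {
    return partitions[next++ % partitions.length];
  };
}

// Toy keyed partitioner: same key always maps to the same partition.
// Simple illustrative hash (sum of char codes modulo partition count).
function keyedPartitioner(key, partitions) {
  var h = 0;
  for (var i = 0; i < key.length; i++) {
    h = (h + key.charCodeAt(i)) % partitions.length;
  }
  return partitions[h];
}

var pick = cyclicPartitioner();
pick([0, 1, 2]); // 0
pick([0, 1, 2]); // 1
```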

lib/batch/KafkaBuffer.js

Lines changed: 55 additions & 0 deletions
```diff
@@ -0,0 +1,55 @@
+'use strict';
+
+var KafkaBuffer = function (batch_size, batch_age) {
+
+    this._batch_size = batch_size;
+    this._batch_age = batch_age;
+    this._batch_age_timer = null;
+    this._buffer = null;
+
+}
+
+KafkaBuffer.prototype.addChunk = function (buffer, callback) {
+
+    if (this._buffer == null) {
+        this._buffer = new Buffer(buffer);
+    } else {
+        this._buffer = Buffer.concat([this._buffer, buffer]);
+    }
+
+    if (typeof callback !== "undefined" && callback != null) {
+        if (this._batch_size == null || this._batch_age == null ||
+            (this._buffer && (this._buffer.length > this._batch_size))) {
+            callback();
+        } else {
+            this._setupTimer(callback);
+        }
+    }
+
+}
+
+KafkaBuffer.prototype._setupTimer = function (callback) {
+
+    var self = this;
+
+    if (this._batch_age_timer != null) {
+        clearTimeout(this._batch_age_timer);
+    }
+
+    this._batch_age_timer = setTimeout(function () {
+        if (self._buffer && (self._buffer.length > 0)) {
+            callback();
+        }
+    }, this._batch_age);
+
+}
+
+KafkaBuffer.prototype.getBatch = function () {
+    return this._buffer;
+}
+
+KafkaBuffer.prototype.truncateBatch = function () {
+    this._buffer = null;
+}
+
+module.exports = KafkaBuffer;
```
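The accumulate-and-flush idea behind `KafkaBuffer` can be illustrated stand-alone with Node's built-in `Buffer` API; the threshold and chunk values below are made up, and the real module also flushes on an age timer, which this sketch omits:

```javascript
// Stand-alone sketch of size-based batch accumulation, as in KafkaBuffer:
// concatenate incoming chunks, flush once the batch exceeds a threshold.
var batchSize = 8; // illustrative threshold, in bytes
var batch = null;
var flushed = [];

function addChunk(chunk) {
  var buf = Buffer.from(chunk);
  batch = batch == null ? buf : Buffer.concat([batch, buf]);
  if (batch.length > batchSize) {
    flushed.push(batch); // KafkaBuffer would invoke the callback here
    batch = null;        // and truncateBatch() after the send
  }
}

addChunk('hello');  // 5 bytes: below threshold, kept in the batch
addChunk('world!'); // 11 bytes total: exceeds 8, flushed as one buffer
```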
