The comment in workerd on this says:
```cpp
// The autoAllocateChunkSize mechanism allows byte streams to operate as if a BYOB
// reader is being used even if it is just a default reader. Support is optional
// per the streams spec but our implementation will always enable it. Specifically,
// if user code does not provide an explicit autoAllocateChunkSize, we'll assume
// this default.
static constexpr int DEFAULT_AUTO_ALLOCATE_CHUNK_SIZE = 4096;
```
The spec doesn't say you can enable it; it says the value must remain undefined if one isn't given (perhaps the spec has changed since this was put in place?).
The specific wording is:

> Let autoAllocateChunkSize be underlyingSourceDict["autoAllocateChunkSize"], if it exists, or undefined otherwise.

Then further:

> Let autoAllocateChunkSize be this.[[autoAllocateChunkSize]]. If autoAllocateChunkSize is not undefined, Let buffer be Construct(%ArrayBuffer%, « autoAllocateChunkSize »). etc...
That is: if `autoAllocateChunkSize` isn't given, it should be undefined, and only when it is not undefined should an array buffer of the given size be allocated.
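Transcribed loosely into JavaScript (this is a paraphrase of the spec steps quoted above, not real implementation code; the function names just mirror the spec text):

```js
// Paraphrase of SetUpReadableByteStreamControllerFromUnderlyingSource:
// Let autoAllocateChunkSize be underlyingSourceDict["autoAllocateChunkSize"],
// if it exists, or undefined otherwise.
function setUpFromUnderlyingSource(underlyingSourceDict) {
  return underlyingSourceDict.autoAllocateChunkSize; // stays undefined if absent
}

// Paraphrase of the controller's pull steps for a default reader:
// If autoAllocateChunkSize is not undefined, let buffer be
// Construct(%ArrayBuffer%, « autoAllocateChunkSize »).
function pullSteps(autoAllocateChunkSize) {
  if (autoAllocateChunkSize !== undefined) {
    const buffer = new ArrayBuffer(autoAllocateChunkSize);
    // ...set up a BYOB-style pull-into descriptor using this buffer...
    return buffer;
  }
  // Otherwise no buffer is auto-allocated, and an enqueued chunk is handed
  // to the default reader whole, whatever its size.
  return undefined;
}
```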
Node.js, Bun, and Deno all act as the web standard specifies here.
The reference implementation shows this exact logic too.
With workerd, this essentially forces `autoAllocateChunkSize: 4096` on every byte ReadableStream. Aside from not adhering to the spec, it also means a ReadableStream cannot enqueue a byte chunk larger than 4096 bytes and have it read in a single read operation.
This code illustrates the problem:
```js
new ReadableStream({
  type: "bytes",
  start(controller) {
    controller.enqueue(new Uint8Array(16 * 1024));
    controller.close(); // end the stream so pipeTo can settle
  },
}).pipeTo(
  new WritableStream({
    write(chunk) {
      console.log(chunk.byteLength);
    },
  })
);
```

On Node.js, Bun, and Deno this will output:

```
16384
```

On workerd this will output:

```
4096
4096
4096
4096
```
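For what it's worth, the workerd comment quoted above says the 4096-byte default is only assumed when user code does not provide an explicit value, so passing a sufficiently large `autoAllocateChunkSize` yourself should sidestep the splitting until this is fixed. This is an untested assumption based on that comment, not verified behavior:

```js
// Untested assumption: an explicit autoAllocateChunkSize should override
// workerd's forced 4096-byte default, per the comment quoted above.
new ReadableStream({
  type: "bytes",
  autoAllocateChunkSize: 16 * 1024,
  start(controller) {
    controller.enqueue(new Uint8Array(16 * 1024));
    controller.close();
  },
}).pipeTo(
  new WritableStream({
    write(chunk) {
      console.log(chunk.byteLength); // should print 16384 once
    },
  })
);
```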