While exploring GPU-first and inter-worker workflows, I ran into a recurring friction point that I wanted to share for discussion.
Observation
Today, ImageBitmap already provides:
- a synchronous way to snapshot GPU-produced data,
- a transferable object across workers,
- all of this without requiring mapAsync or CPU-side readback.
This makes it possible to move GPU results between workers or pipelines efficiently, as long as the data remains GPU-only.
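For context, a minimal sketch of what this looks like today, assuming an OffscreenCanvas configured for WebGPU and a `device` / `worker` that already exist (the names are illustrative):

```ts
// Assumed setup: `device` is a GPUDevice, `worker` is a Worker waiting for frames.
const offscreen = new OffscreenCanvas(256, 256);
const context = offscreen.getContext('webgpu')!;
context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });

// ... encode and submit a render pass targeting context.getCurrentTexture() ...

// Synchronous snapshot of the GPU-produced frame: no mapAsync, no CPU readback.
const bitmap = offscreen.transferToImageBitmap();

// Hand the snapshot to another worker; the ImageBitmap is transferred, not copied.
worker.postMessage({ frame: bitmap }, [bitmap]);
```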
However, ImageBitmap is image-oriented by design.
Frictions with ImageBitmap as a data container
Beyond the semantic mismatch, there are two practical issues:
Shape & size overhead
Unless a canvas or render target is created specifically to match the data layout, ImageBitmap often introduces significant unused space due to its 2D nature.
This waste directly impacts:
- memory footprint,
- bandwidth,
- decoding cost when ingesting the ImageBitmap into a compute pipeline to reconstruct the original data.
Infrastructure cost
Using ImageBitmap for arbitrary data requires:
- encoding buffer-shaped data into a 2D image,
- maintaining packing conventions,
- decoding it later in shaders.
This is manageable with a substantial framework, but difficult to adopt otherwise.
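To make that overhead concrete, here is a purely illustrative sketch of the kind of packing convention such a framework has to maintain (the numbers and the 4096-texel clamp are arbitrary):

```ts
// Illustrative only: reshaping a flat float buffer into a padded RGBA image.
const elementCount = 1_000_003;                   // arbitrary buffer length
const texelsNeeded = Math.ceil(elementCount / 4); // 4 floats per rgba32float texel
const width = Math.min(texelsNeeded, 4096);       // arbitrary max texture dimension
const height = Math.ceil(texelsNeeded / width);

// width * height * 4 floats get allocated; everything past elementCount is
// padding that still costs memory, bandwidth, and decoding work.
const paddedFloats = width * height * 4;

// The consuming shader then has to undo the projection, roughly (WGSL, as a comment):
//   let texel = vec2u((i / 4u) % width, (i / 4u) / width);
//   let value = textureLoad(tex, texel, 0)[i % 4u];
```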
In short, ImageBitmap solves synchronization and transfer, but at the cost of forcing data into an image abstraction.
Idea: explicit buffer-oriented GPU snapshot
Would it make sense to formalize an explicit, buffer-oriented GPU snapshot abstraction, closer to GPUBuffer, but with snapshot semantics?
For example (purely illustrative):
getSnapshot(buffer: GPUBuffer): GPUBufferSnapshot
copySnapshotToBuffer(snapshot: GPUBufferSnapshot, buffer: GPUBuffer)
Such a type would:
- preserve the exact buffer size and layout,
- avoid 2D projection and wasted space,
- be transferable across workers,
- be intended strictly for GPU → GPU workflows,
- not expose CPU-readable memory.
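To make the intended flow concrete, a hypothetical usage sketch; getSnapshot, copySnapshotToBuffer, GPUBufferSnapshot, and its size property do not exist today and are only meant to illustrate the GPU → GPU, worker → worker handoff:

```ts
// Worker A: capture the result of a compute pass, synchronously and GPU-only.
// `resultBuffer` and `device` are assumed to exist; the snapshot API is hypothetical.
const snapshot = getSnapshot(resultBuffer);
postMessage({ snapshot }, [snapshot]);            // transferable, like ImageBitmap

// Worker B: re-materialize the data into a local GPUBuffer of the exact size.
onmessage = ({ data }) => {
  const local = device.createBuffer({
    size: data.snapshot.size,                     // exact original size, no 2D padding
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
  });
  copySnapshotToBuffer(data.snapshot, local);
};
```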
Snapshot as a lightweight cache
One additional aspect worth mentioning is resilience to device loss.
Because ImageBitmap appears to exist outside the WebGPU device lifetime, it could theoretically act as a lightweight, synchronous cache of GPU results that survives device loss.
I haven’t tested this in practice, but if it holds, a “free” snapshot (in the sense of synchronous and low-overhead) taken every frame could be valuable as a recovery mechanism.
An explicit snapshot type could clarify whether such usage is supported or out of scope, instead of leaving it implicit.
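Purely as an untested sketch of that recovery pattern (recreateDevice and targetTexture are placeholders, not real helpers):

```ts
// Untested sketch: cache the latest frame as an ImageBitmap every frame, and on
// device loss re-upload it to the new device. `offscreen`, `recreateDevice`, and
// `targetTexture` are placeholders.
let lastFrame: ImageBitmap | null = null;

function endOfFrame() {
  lastFrame?.close();                              // drop the previous snapshot
  lastFrame = offscreen.transferToImageBitmap();   // synchronous, per-frame snapshot
}

device.lost.then(async () => {
  const newDevice = await recreateDevice();        // re-request adapter/device
  if (lastFrame) {
    // If the ImageBitmap indeed outlives the lost device, it can seed the new one.
    newDevice.queue.copyExternalImageToTexture(
      { source: lastFrame },
      { texture: targetTexture },
      [lastFrame.width, lastFrame.height],
    );
  }
});
```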
Closing
I’m not proposing new capabilities beyond what ImageBitmap already demonstrates is possible, but rather a clearer, buffer-shaped abstraction that:
- matches the data more naturally,
- reduces structural overhead,
- avoids semantic confusion around images.
I am not asking for a new capability; I am asking for feature parity between image-shaped GPU data and buffer-shaped GPU data.
Curious to hear whether such a concept could fit within WebGPU’s model, or if there are constraints I’m overlooking.