When I try to pass a large array, e.g. `numpy.ndarray((1000, 1000), bool)` (the representation of a 1-bit B&W image), as an argument to a worker `__export__`ed function, the best I could get on the function's side is `tuple[tuple[bool, ...], ...]`. Is there a way to employ `SharedArrayBuffer` and `memoryview` to avoid copying megabytes of data and somehow get a `numpy.ndarray` on the worker's side?

Updated: I've managed to get a `tuple[memoryview, ...]` with 1000 elements, but that seems to be it. I couldn't find a way to transfer an array as a single `memoryview`. :(
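For reference, a minimal sketch of that single-buffer idea, assuming a 1000×1000 boolean image and plain numpy calls (`tobytes`/`frombuffer`, neither of which is specific to this library): flatten the array into one contiguous payload on the caller's side and rebuild it on the worker's side. This avoids the per-element tuple conversion, though the bytes themselves are still copied once.

```python
import numpy as np

# Caller side: flatten the 2D boolean image into one contiguous payload.
# (Example data is hypothetical; any C-contiguous ndarray works the same way.)
image = np.zeros((1000, 1000), dtype=np.bool_)
payload = image.tobytes()  # a single bytes object, ~1 MB for 1000x1000 bool

# Worker side: rebuild the ndarray from the single buffer.
# frombuffer gives a view over the buffer; .copy() makes it independently writable.
restored = np.frombuffer(payload, dtype=np.bool_).reshape(1000, 1000).copy()

assert restored.shape == (1000, 1000)
assert np.array_equal(restored, image)
```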
Replies: 1 comment 3 replies

The moment you need a SAB to represent Python bytes is the moment you need to convert those bytes into that SAB and then notify the worker that something changed ... which is what we already do, just in the opposite direction: the worker asks main to fill the SAB and waits, blocking via Atomics, to be notified how much data was written; it then deserializes it and keeps going. A SAB must be known to both sides too, but ours is reserved to make the whole thing work. We don't have a double SAB (a SAB is already problematic by itself due to headers and whatnot; handling two would be nightmarish), but again, the moment you need a SAB is the moment you convert that thing regardless.

We have ways to pass buffers directly, but they're not currently exposed as API or documented, and I wonder if we should somehow simplify this use case. It wasn't planned, though, and since you already found a solution I don't feel the current state is blocking, so we might think about such improvements next quarter. This one is already overly busy for me: I need to focus on many other things and can't find time to address unplanned feature requests that already have a workaround, sorry.
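To make the dance described above concrete, here is a rough, self-contained Python sketch of that framing, with a plain `bytearray` standing in for the actual `SharedArrayBuffer` and a 4-byte length header as an assumed layout; the library's real internals (and the Atomics wait/notify calls, which live on the JavaScript side) are not shown and may differ.

```python
import struct

import numpy as np

# A plain bytearray stands in for the SharedArrayBuffer; in the real library
# the blocking/notification is done via Atomics, which has no equivalent here.
SAB_SIZE = 2 * 1024 * 1024
sab = bytearray(SAB_SIZE)
HEADER = struct.Struct("<I")  # assumed 4-byte little-endian length prefix

def fill(buffer: bytearray, arr: np.ndarray) -> None:
    """Main-thread side: write a length-prefixed payload into the shared buffer."""
    payload = arr.tobytes()
    HEADER.pack_into(buffer, 0, len(payload))
    buffer[HEADER.size:HEADER.size + len(payload)] = payload
    # ... here the real code would notify the worker that data is ready

def drain(buffer: bytearray, dtype, shape) -> np.ndarray:
    """Worker side: read how much data was written, then deserialize it."""
    # ... here the real code would have been blocked waiting via Atomics
    (length,) = HEADER.unpack_from(buffer, 0)
    view = memoryview(buffer)[HEADER.size:HEADER.size + length]
    return np.frombuffer(view, dtype=dtype).reshape(shape).copy()

image = np.ones((1000, 1000), dtype=np.bool_)
fill(sab, image)
assert np.array_equal(drain(sab, np.bool_, (1000, 1000)), image)
```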