
In the example on superfastpython.com, the size of a shared memory segment used to back a 1-dimensional numpy array is calculated as the number of elements multiplied by the data type's item size.

We know that the size parameter given to the SharedMemory constructor is a minimum. Thus, in many cases, the actual size may be larger than that specified - and that's fine.
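
As a minimal sketch of that rounding behaviour (whether and how the requested size gets rounded is platform-dependent; mmap.PAGESIZE is used here only to show the page size the OS reports):

import mmap
from multiprocessing.shared_memory import SharedMemory

# deliberately request a size that is not a multiple of the page size
shm = SharedMemory(create=True, size=10)
try:
    print(mmap.PAGESIZE)  # e.g. 4096, or 16384 on Apple-silicon macOS
    print(shm.size)       # may be 10 or a whole page, depending on the platform
finally:
    shm.close()
    shm.unlink()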

But what if the specified size is an exact multiple of the underlying memory page size?

Consider this:

import numpy as np
from multiprocessing.shared_memory import SharedMemory

n = 2048
s = n * np.dtype(np.double).itemsize  # 2048 * 8 = 16,384 bytes
shm = SharedMemory(create=True, size=s)
try:
    assert s == shm.size  # no rounding up took place
    a = np.ndarray((n,), dtype=np.double, buffer=shm.buf)
    a.fill(0.0)
finally:
    shm.close()
    shm.unlink()

In this case (Python 3.13.0 on macOS 15.0.1) the value of s is 16,384, which happens to be an exact multiple of the underlying page size, and therefore shm.size is equal to s.
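
For reference, that relationship can be checked directly (a small sketch continuing the snippet above; mmap.PAGESIZE is the page size the OS reports, which appears to be the granularity any rounding is based on):

import mmap
print(mmap.PAGESIZE)      # e.g. 4096, or 16384 on Apple-silicon macOS
print(s % mmap.PAGESIZE)  # 0 here, i.e. s is already page-aligned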

Maybe I don't know enough about numpy, but I would have imagined that the ndarray would need more space for internal/management structures.

Can someone please explain why this works and why there's no apparent need to allow extra space in the shared memory segment?

1 Answer


An array does carry metadata (shape, strides, dtype, flags, and so on) that takes some extra space, but that metadata is stored in the ndarray object, separately from its data buffer. The shared memory you're allocating is just for the data buffer.
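
A rough way to see this (a sketch along the lines of the code in the question; the exact header size printed will vary with the platform and NumPy version):

import struct
import sys
import numpy as np
from multiprocessing.shared_memory import SharedMemory

n = 2048
s = n * np.dtype(np.double).itemsize
shm = SharedMemory(create=True, size=s)
try:
    a = np.ndarray((n,), dtype=np.double, buffer=shm.buf)
    # The segment needs room only for the raw element bytes.
    assert a.nbytes == s
    # The metadata (shape, strides, dtype, flags) lives in the ndarray
    # object in this process's own heap: getsizeof reports just that
    # small header, not the 16 KiB of element data.
    print(sys.getsizeof(a), a.nbytes)
    # Writing through the array puts nothing but element values into
    # the shared buffer.
    a[0] = 1.0
    assert bytes(shm.buf[:8]) == struct.pack('d', 1.0)
finally:
    shm.close()
    shm.unlink()

The only things another process needs to know out-of-band are the shape and dtype, which is exactly the metadata that never touches the segment.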


1 Comment

Apologies. I re-read your answer and reviewed my code. It makes sense now. Thank you
