In examining MRI's test_coverage I realized that this use of newArray relies on being able to set/aset into the zero-sized array, with anything under whatever is alloc'd (16 by default) already being nil-filled. If the source is greater than 16, it nil-fills indices 17+ as it marches on. If I make an empty array instead, it all just works. To me, having aset/set assume that any alloc'd slots are already nil'd is a design failure on our part. The actual determinant should be realSize changing: if I call elt(5) on an empty array which has a capacity of 16, indices 0..4 should be nil-filled. There are two usage patterns at play here, and I think we picked the wrong one as a default:
I think conceptually it is simpler to assume the primitive array is a backing store, but that realSize is the boundary where we do the work of filling in Ruby values.
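To make the proposed invariant concrete, here is a minimal, hypothetical sketch (not JRuby's actual RubyArray; class and method names are invented for illustration) where the fill work is driven by realSize changing rather than by any assumption about freshly alloc'd slots being nil:

```java
import java.util.Arrays;

// Hypothetical sketch: a growable store where realSize, not allocated
// capacity, is the boundary below which slots are guaranteed nil-filled.
final class SketchArray {
    static final Object NIL = new Object();   // stand-in for RubyNil

    private Object[] values = new Object[16]; // default allocation
    private int realSize = 0;                 // logical length

    // store() grows capacity as needed and explicitly nil-fills the gap
    // [realSize, index) whenever realSize jumps forward. It never assumes
    // alloc'd-but-unused slots already contain nil.
    void store(int index, Object value) {
        if (index >= values.length) {
            values = Arrays.copyOf(values, Math.max(index + 1, values.length * 2));
        }
        for (int i = realSize; i < index; i++) {
            values[i] = NIL; // the fill is triggered by realSize changing
        }
        values[index] = value;
        realSize = Math.max(realSize, index + 1);
    }

    // Safe read: anything at or past realSize answers nil without
    // inspecting the backing store at all.
    Object elt(int index) {
        return index < realSize ? values[index] : NIL;
    }
}
```

Under this scheme, whether the raw allocation holds nulls, garbage, or stale values is irrelevant: readers below realSize always see values that a realSize-advancing write filled in.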
…gger than alloc space but between current size and the original alloc'd space
This is a test, but spec:ruby:fast does not break, and I suspect that is because any operation which hits an index > realSize will fill at that point. Unsafe accessors like eltOk already know realSize.
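The safe/unsafe accessor split mentioned above can be sketched as follows (again hypothetical names, not JRuby's actual implementation): a bounds-checked elt() answers nil past realSize, while an eltOk()-style accessor skips the check because its callers have already established index < realSize.

```java
// Hypothetical sketch of a checked vs. unchecked accessor pair.
final class AccessorSketch {
    static final Object NIL = new Object(); // stand-in for RubyNil

    private final Object[] values = new Object[16];
    private int realSize = 0;

    void push(Object v) {
        values[realSize++] = v;
    }

    // Safe path: any index >= realSize short-circuits to nil, so it never
    // depends on what the alloc'd-but-unused slots contain.
    Object elt(int index) {
        return index < realSize ? eltOk(index) : NIL;
    }

    // Unsafe path: the caller guarantees index < realSize, so no check
    // (and no fill) is needed here.
    Object eltOk(int index) {
        return values[index];
    }
}
```

This is why a realSize-driven fill policy stays invisible to most specs: every safe read either stays below realSize or answers nil without touching the store.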
Do concurrent actions perhaps break less because we do this? We don't really make much in the way of guarantees there anyway, so perhaps that's acceptable?