Add device.simulateLoss(), and prevent mappedAtCreation on destroyed devices #5115
kainino0x wants to merge 1 commit into gpuweb:main
Conversation
Commit: Add device.simulateLoss(), and prevent mappedAtCreation on destroyed devices. Issue: fixes gpuweb#5102, fixes gpuweb#4177
mwyrzykowski left a comment:
I think this attempts to work around limitations or inconsistencies in the API by introducing another API function on GPUDevice. Instead, we should address the limitations or inconsistencies without adding new API.
Specifically, for the getMappedRange case on a lost device, should we just align the behavior with mapAsync when it is known on the content timeline that the device is lost?
No, my claim is that both should keep working on a lost device. Aligning getMappedRange's behavior with mapAsync's would go against #1629. Of course, mapAsync already does, but I think that's more OK because it's async. That said, we did leave open the possibility of making mapAsync work on lost devices, too: #1629 (comment)
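To make the distinction concrete, here is a minimal sketch (plain WebGPU usage, no proposed APIs) of the two mapping paths being discussed: mapAsync() is asynchronous and can reject once the device is lost, while getMappedRange() on a mappedAtCreation buffer is a synchronous, content-timeline operation with no promise that could report the loss.

```ts
// Sketch only: contrasts the async and sync mapping paths.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter!.requestDevice();

// Synchronous path: mappedAtCreation buffers expose getMappedRange() immediately,
// with no promise involved, so there is no natural point to report device loss.
const upload = device.createBuffer({
  size: 256,
  usage: GPUBufferUsage.COPY_SRC,
  mappedAtCreation: true,
});
new Float32Array(upload.getMappedRange()).fill(1.0);
upload.unmap();

// Asynchronous path: mapAsync() returns a promise, which gives the implementation
// a place to reject (e.g. after device loss) that applications already handle.
const readback = device.createBuffer({
  size: 256,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});
try {
  await readback.mapAsync(GPUMapMode.READ);
  const bytes = new Uint8Array(readback.getMappedRange());
  readback.unmap();
} catch (e) {
  // The application's existing fallback path for a failed/lost mapping.
}
```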
mwyrzykowski left a comment:
I may not be able to join the meeting today, but API calls that exist only for testing application behavior don't seem appropriate for inclusion in the specification. destroy() already results in device loss, so that seems sufficient for testing device loss if needed.
I don't think it's a problem to provide things that are mainly for testing; the web platform already has examples of such capabilities.
Applications need to be able to test their code on the standard web platform. I don't think it would be reasonable if any or all of these capabilities were hidden behind special browser flags: code couldn't be tested on the same platform that runs in production, and everyone who writes software for the web would need to know about the flags.
Re: @kdashg's proposal that we change this so that mappings are no longer automatically cleaned up: this is possible, and it shouldn't have very direct impacts on application behavior. However, a lot of WebGPU applications are pushing the resource limits of the system, so they may be implicitly relying on that automatic cleanup to free mapping memory. I don't think it's going to be a common problem, because mappings generally shouldn't live that long anyway, but patterns like the "queue of mapped buffers" for data upload (see the sketch below) could have several large mappings alive that would no longer be cleaned up promptly, until applications update their code to clean these up explicitly.
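For illustration, here is a rough, hypothetical sketch of that "queue of mapped buffers" upload pattern (the StagingRing class and its method names are mine, not from any proposal). The point is that several large buffers can sit in the free list in the mapped state; if destroy/loss no longer unmapped them, those mappings would stay alive until the application unmaps or drops them itself.

```ts
// Illustrative staging-buffer ring for data upload. Assumes data.byteLength is a
// multiple of 4 and no larger than the staging buffer size.
class StagingRing {
  private free: GPUBuffer[] = [];

  constructor(private device: GPUDevice, private size: number) {}

  // Take a pre-mapped staging buffer, or create a new one mapped at creation.
  private acquire(): GPUBuffer {
    return (
      this.free.pop() ??
      this.device.createBuffer({
        size: this.size,
        usage: GPUBufferUsage.MAP_WRITE | GPUBufferUsage.COPY_SRC,
        mappedAtCreation: true,
      })
    );
  }

  upload(dst: GPUBuffer, data: ArrayBuffer) {
    const staging = this.acquire();
    new Uint8Array(staging.getMappedRange()).set(new Uint8Array(data));
    staging.unmap();

    const encoder = this.device.createCommandEncoder();
    encoder.copyBufferToBuffer(staging, 0, dst, 0, data.byteLength);
    this.device.queue.submit([encoder.finish()]);

    // Re-map for reuse; the promise resolves once the copy above has finished.
    // While buffers sit mapped in `free`, their (possibly large) mappings are alive.
    staging.mapAsync(GPUMapMode.WRITE).then(() => this.free.push(staging));
  }
}
```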
Another side note: I was wondering whether there's any conflict with triple-mapping, i.e., whether keeping the mapping alive could have costs beyond just the raw memory allocation. I think there is no direct problem, since it always has to be safe for the entire GPU process to crash anyway. But triple-mapped buffers could be allocated in physical memory spaces that are more constrained than regular mappings, which would make prompt cleanup a bit more important.
GPU Web WG 2025-03-25/26 Pacific-time
EDIT: I propose having both destroy() and simulateLoss() because they are useful for different things: destroy() to clean up resources easily during shutdown, and simulateLoss() for testing application behavior on device loss.

Issue: fixes #5102 (see there for discussion and past minutes), fixes #4177
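As a sketch of that split, here is how an application test might exercise its loss handling. simulateLoss() does not exist yet (it is the API this PR proposes), and the helper name below is purely illustrative.

```ts
// Illustrative only. destroy() exists today and resolves device.lost with
// reason "destroyed"; simulateLoss() is the proposed addition for exercising
// the same recovery path an unexpected loss would take.
async function exerciseDeviceLossHandling(device: GPUDevice, useProposedApi: boolean) {
  const handled = device.lost.then((info) => {
    // The application's real recovery path would go here:
    // re-request the adapter/device and recreate resources and pipelines.
    console.log(`device lost: reason=${info.reason} message=${info.message}`);
  });

  if (useProposedApi) {
    // Proposed in this PR; not part of the current spec.
    (device as any).simulateLoss();
  } else {
    // Existing API: also frees the device's resources, which is why it is
    // pitched above for shutdown cleanup rather than loss testing.
    device.destroy();
  }

  await handled;
}
```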