Add fallback memory resource for TCC devices #257
Force-pushed from 42346c5 to 319a372.
Re-tested using synchronous malloc and free on a Tesla T4 colossus instance.

On the main branch:
(test_env) C:\cuda-python\cuda_core>python -m pytest tests\test_memory.py
platform win32 -- Python 3.12.7, pytest-8.3.4, pluggy-1.5.0
tests\test_memory.py FFFF [100%]

With the change:
(test_env) C:\cuda-python\cuda_core>python -m pytest tests\test_memory.py
platform win32 -- Python 3.12.7, pytest-8.3.4, pluggy-1.5.0
tests\test_memory.py .... [100%]
Co-authored-by: Leo Fang <leof@nvidia.com>
/ok to test
Windows failures are known (#271) and irrelevant. Let's merge. Thanks, Keenan!
For devices that don't support memory pools, we need to provide an alternate default memory resource.
This basic WAR (workaround) implementation works. I used a colossus lease for a Tesla T4 on Friday, Nov 29, and these were the results:
Using the DefaultAsyncMempool --> python -m pytest tests/test_memory.py
=============================================== short test summary info ===============================================
FAILED tests/test_memory.py::test_buffer_initialization - cuda.core.experimental._utils.CUDAError: CUDA_ERROR_NOT_SUPPORTED: operation not supported
Using the implementation in this branch --> python -m pytest tests/test_memory.py
collected 4 items
tests\test_memory.py .... (SUCCESS)
Closes #208
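The fallback selection described above can be sketched roughly as follows. This is an illustrative mock, not the actual cuda.core implementation: the class and function names are placeholders, and the real code would query the device attribute (e.g. CU_DEVICE_ATTRIBUTE_MEMORY_POOLS_SUPPORTED via the driver API) rather than take a boolean.

```python
class AsyncMempoolResource:
    """Placeholder standing in for a stream-ordered, mempool-backed resource."""

    def allocate(self, size):
        return bytearray(size)  # stand-in for a stream-ordered allocation

    def deallocate(self, buf):
        pass  # stand-in for a stream-ordered free


class SynchronousFallbackResource:
    """Placeholder standing in for a synchronous malloc/free-backed resource."""

    def allocate(self, size):
        return bytearray(size)  # stand-in for a synchronous allocation

    def deallocate(self, buf):
        pass  # stand-in for a synchronous free


def default_memory_resource(mempool_supported: bool):
    # On devices without memory-pool support (e.g. TCC-mode GPUs on Windows),
    # the async mempool path fails with CUDA_ERROR_NOT_SUPPORTED, so we
    # select the synchronous fallback resource instead.
    if mempool_supported:
        return AsyncMempoolResource()
    return SynchronousFallbackResource()
```

The point of the design is that callers always get a working default resource with the same allocate/deallocate interface, and only the backing allocation strategy changes per device.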