Implement Tensor.__cuda_array_interface__. #11984
Conversation
Add baseline test of python-level __cuda_array_interface__ for torch.Tensor. See interface reference at: https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html
| "Use var.detach().__cuda_array_interface__ instead." | ||
| ) | ||
|
|
||
| typestr = { |
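The typestr values follow the numpy/CUDA array interface convention; a plausible sketch of such a dtype-to-typestr table (assumed here for illustration, not necessarily the exact table in the diff):

```python
import torch

# Hypothetical dtype -> typestr table. Format: byte order ("<" little-endian,
# "|" not applicable), type kind (f/i/u), itemsize in bytes.
TYPESTR = {
    torch.float16: "<f2",
    torch.float32: "<f4",
    torch.float64: "<f8",
    torch.uint8: "|u1",
    torch.int8: "|i1",
    torch.int16: "<i2",
    torch.int32: "<i4",
    torch.int64: "<i8",
}
```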
test/test_torch.py
Outdated
        self.assertFalse(isinstance(idx, int))
        self.assertEqual(x[idx], x[int(idx)])

    @unittest.skipIf(not TEST_NUMPY, "Numpy not found")
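For illustration, a baseline test of the python-level interface might look roughly like this (a sketch only; the class and test names below are assumptions, not the actual code added in the PR):

```python
import unittest
import torch

@unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
class TestCudaArrayInterface(unittest.TestCase):
    def test_cuda_array_interface_basic(self):
        # Dense CUDA tensors expose the interface dict.
        t = torch.arange(10, dtype=torch.float32, device="cuda")
        iface = t.__cuda_array_interface__
        self.assertEqual(iface["shape"], (10,))
        self.assertEqual(iface["typestr"], "<f4")
        self.assertEqual(iface["data"][0], t.data_ptr())

        # CPU tensors do not expose the attribute at all.
        self.assertFalse(hasattr(torch.arange(10), "__cuda_array_interface__"))
```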
ezyang left a comment
Lack of an actual numba test is a blocker in my mind. I have a few other doc nits, which I'll mention below.
torch/tensor.py
Outdated
        # hasattr(cpu_tensor, "__cuda_array_interface__") is False.
        if not self.device.type == "cuda":
            raise AttributeError(
                "Not on cuda device, use Tensor.cuda() first: %r" %
torch/tensor.py
Outdated
        if self.requires_grad:
            raise RuntimeError(
                "Can't get __cuda_array_interface__ on Variable that requires grad. "
                "Use var.detach().__cuda_array_interface__ instead."
Add initial pytorch/numba integration tests. Expand test common/common_cuda configuration to support optional numba installation. Add test/test_numba_integration.py, with baseline integration test cases.
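A sketch of what one of those integration tests could look like; the `TEST_NUMBA` flag mirrors the existing `TEST_NUMPY` convention but is an assumption, as are the test names:

```python
import unittest
import torch

try:
    import numba.cuda
    TEST_NUMBA = True
except ImportError:
    TEST_NUMBA = False

@unittest.skipIf(not TEST_NUMBA or not torch.cuda.is_available(),
                 "numba or CUDA not available")
class TestNumbaIntegration(unittest.TestCase):
    def test_as_cuda_array(self):
        t = torch.arange(4, dtype=torch.float32, device="cuda")
        # numba consumes __cuda_array_interface__ to build a zero-copy view.
        arr = numba.cuda.as_cuda_array(t)
        self.assertEqual(arr.shape, (4,))
        host = arr.copy_to_host()  # numpy array with the same contents
        self.assertTrue((torch.from_numpy(host) == t.cpu()).all())
```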
The dir test failure is real; you probably should filter out
Install numba on xenial-cuda9 test runs, allowing integration tests.
Fix numba __cuda_array_interface__ integration check to resolve difference in py2/py3 hasattr behavior. Tidy documentation.
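The py2/py3 difference referred to here is that Python 2's hasattr() swallowed all exceptions while Python 3's only swallows AttributeError; a small sketch of how that shows up with a grad-requiring CUDA tensor (assuming the RuntimeError behavior described below):

```python
import torch

t = torch.ones(2, device="cuda", requires_grad=True)

try:
    # Python 3: hasattr() only catches AttributeError, so the RuntimeError
    # raised for grad-requiring tensors propagates out of the check.
    # Python 2: hasattr() caught every exception and quietly returned False.
    hasattr(t, "__cuda_array_interface__")
except RuntimeError:
    print("py3: the property raised instead of hasattr returning False")
```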
Update `torch.Tensor.__dir__` to only include `__cuda_array_interface__` if the property will be present (i.e. a dense, cuda tensor). Preserved RuntimeError behavior for cuda tensors that require grad.
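A sketch of how such a conditional __dir__ could work (the helper below is illustrative, not the exact code from the commit):

```python
import torch

def tensor_dir(tensor):
    """Only advertise __cuda_array_interface__ for dense CUDA tensors, so
    dir() matches the cases where the property actually resolves."""
    attrs = list(object.__dir__(tensor))
    if not (tensor.is_cuda and not tensor.is_sparse):
        attrs = [a for a in attrs if a != "__cuda_array_interface__"]
    return sorted(attrs)
```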
This is ready for another round of review when you get the chance. I've:
I'm a bit concerned about adding the numba installation to your test runner rather than folding the numba package into your test image. I've pinned the installed version to mitigate the risk of depending on an external package, but this still adds ~400 MB of external package downloads to the test run.
Well, here's another crazy idea. We don't necessarily need to test against numba; we just need to test against something that supports __cuda_array_interface__. There's no reason we couldn't make PyTorch support __cuda_array_interface__ itself. So what we could do is (1) add read-in support for __cuda_array_interface__ to PyTorch, and then (2) test that going to a dummy object with __cuda_array_interface__ and then back again works. Then, as long as we don't have correlated errors between the decoder and encoder, you have a reasonably robust test.
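A minimal sketch of the "dummy object" half of that round trip (the class name is made up; the consumer side, PyTorch read-in support, is the part the comment proposes adding):

```python
import torch

class CudaArrayInterfaceHolder:
    """Hypothetical stand-in that exposes only __cuda_array_interface__,
    backed by a CUDA tensor it keeps alive."""

    def __init__(self, tensor):
        assert tensor.is_cuda and not tensor.is_sparse
        self._tensor = tensor  # hold a reference so the memory stays valid

    @property
    def __cuda_array_interface__(self):
        return self._tensor.__cuda_array_interface__

# Round trip: wrap a tensor, decode it with whatever consumer understands the
# interface, and check the values survive unchanged.
```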
Other possibility: we have the numba test but don't run it in CI (only run it on a nightly build, or something).
@asford Would you mind if I got a version of your changes to test on ppc64le before it gets merged? Especially if it's going to be a default test in run_test.py. I see you require numba=0.39 and cudatoolkit=9.0 for x86, and it looks like they are both available on ppc64le. I'll see if I can merge your changes locally and run the test. Appreciate it. CC: @hartb @sdmonov
@asford I had to install torch/tensor.py into my environment, and the test test_cuda_array_interface passes (it does not require numba). I installed numba, and it looks like the other test passes as well.
@avmgithub Great! Thanks for following up on that test. I was racking my brain trying to understand how it could be failing in that way.
I'm comfortable with either solution you've proposed; I'm mainly trying to avoid having a drive-by contribution that makes your testing system more fragile. I've been burned a few times by dependencies on large external downloads in CI and think it's ideal to avoid them if possible. Having this run as a nightly may be best if you already have the infrastructure in place for that. Are the pytorch-examples run on a periodic basis as part of your CI?

Having the ability to read data from numba seems generally useful. You can approximate this now by creating a numba-based view of the "to" tensor and then copying data via numba from the "from" data buffer, but executing this in torch is more natural. Given that it both improves the testing situation and improves the interface, I'm in favor of this approach. Adding read support for
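Roughly, that workaround might look like the following (a sketch; it assumes numba.cuda.as_cuda_array and DeviceNDArray.copy_to_device, and dense, non-grad CUDA tensors on both sides):

```python
import numba.cuda
import torch

def copy_via_numba(dst, src):
    # Both views are zero-copy; as_cuda_array consumes __cuda_array_interface__.
    dst_view = numba.cuda.as_cuda_array(dst)
    src_view = numba.cuda.as_cuda_array(src)
    # numba performs the device-to-device copy; dst's storage is written in place.
    dst_view.copy_to_device(src_view)
    return dst
```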
Yes. @pjh5, do you think we could install numba when we do the nightly build and tests?
SGTM. In the meantime, I'm going to merge this without the CI change. Existence of test LGTM.
facebook-github-bot left a comment
ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
+1 @ezyang Thanks for driving this. Should we open a follow-up issue for the nightly?
Yes please.
Implements #11914, cc: @ezyang
Implements __cuda_array_interface__ for non-sparse cuda tensors, providing compatibility with numba (and other cuda projects...).

Adds numba installation to the xenial-cuda9 jenkins test environments via direct installation in .jenkins/pytorch/test.sh and a numba-oriented test suite in test/test_numba_integration.py.

See interface reference at:
https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html
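As a usage sketch (assuming a CUDA build of PyTorch and a CUDA-enabled numba install):

```python
import numba.cuda
import torch

t = torch.arange(6, dtype=torch.float32, device="cuda").reshape(2, 3)

# The interface dict: shape, typestr, strides (in bytes), data pointer, version.
print(t.__cuda_array_interface__)

# numba builds a zero-copy device array view from the interface.
view = numba.cuda.as_cuda_array(t)
print(view.copy_to_host())  # numpy array with the same values as t
```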