Fix "CUDA Tensor __rsub__ breaks when device is not 0" #12956
Conversation
| {"__mul__", (PyCFunction)THPVariable_mul, METH_VARARGS | METH_KEYWORDS, NULL}, | ||
| {"__imul__", (PyCFunction)THPVariable_mul_, METH_VARARGS | METH_KEYWORDS, NULL}, | ||
| {"__sub__", (PyCFunction)THPVariable_sub, METH_VARARGS | METH_KEYWORDS, NULL}, | ||
| {"__rsub__", (PyCFunction)THPVariable_sub, METH_VARARGS | METH_KEYWORDS, NULL}, |
facebook-github-bot left a comment
yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
facebook-github-bot left a comment
yf225 is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:
Currently, `a = 1 - torch.tensor([1]).to('cuda:1')` puts `a` on `cuda:1` but incorrectly reports `a.device` as `cuda:0`, and it causes an illegal memory access error when trying to access `a`'s memory (e.g. when printing). This PR fixes the error.
Fixes pytorch/pytorch#10850.
Pull Request resolved: pytorch/pytorch#12956
Differential Revision: D12835992
Pulled By: yf225
fbshipit-source-id: 5737703d2012b14fd00a71dafeedebd8230a0b04
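For reference, a minimal reproduction of the behavior described in the summary (assuming a machine with at least two CUDA devices):

```python
import torch

# Requires at least two CUDA devices.
a = 1 - torch.tensor([1]).to('cuda:1')

# Before this fix, a.device was reported as cuda:0 even though the
# data lived on cuda:1, and accessing `a` (e.g. printing it) could
# trigger an illegal memory access. With the fix:
print(a.device)  # cuda:1
print(a)         # tensor([0], device='cuda:1')
```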