Fix torch.tensor(...) device-type calculation when used with numpy and type inference #6995
Conversation
Should fix #6985.
zou3519: @gchanan The same problem exists, I think, for this line: pytorch/torch/csrc/utils/tensor_new.cpp, line 183 (at 2b44c42).
gchanan: @zou3519 is correct; I'll fix that before merging.
karandwivedi42: @gchanan Can you please briefly explain how this fixed the problem?
gchanan: @karandwivedi42 "type inference" in this case means dtype (scalar type) inference, while the ATen type holds (scalar type, layout, device type), e.g. CUDASparseFloatType. So, if we do type inference, we only want to use the scalar type of the incoming variable, not the device type / layout as well.
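A minimal sketch of that distinction, assuming a numpy float64 input (the values and variable names here are illustrative, not from the PR):

```python
import numpy as np
import torch

# The scalar type (dtype) is inferred from the incoming numpy array...
a = np.arange(3, dtype=np.float64)
t = torch.tensor(a)
print(t.dtype)   # torch.float64: scalar type taken from the array
print(t.device)  # cpu: device type is not part of the inference

# ...while device (and layout) come only from explicit arguments.
if torch.cuda.is_available():
    t_cuda = torch.tensor(a, device="cuda")
    print(t_cuda.dtype, t_cuda.device)  # torch.float64 cuda:0
```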
gchanan: Actually, the issue with Variable is more complicated; you want to do type inference on cuda-ness if the device isn't specified, otherwise you don't.
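A short sketch of that rule (assumes a CUDA build; the input tensors are hypothetical):

```python
import torch

# Infer cuda-ness from the source variable only when no device is given;
# an explicitly specified device always takes precedence.
if torch.cuda.is_available():
    src = torch.ones(3, device="cuda")

    t1 = torch.tensor(src)                # no device arg: cuda-ness inferred
    t2 = torch.tensor(src, device="cpu")  # explicit device overrides the source

    print(t1.device)  # cuda:0
    print(t2.device)  # cpu
```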
Commit message (pytorch#6995):
* Fix torch.tensor(...) device-type calculation when used with numpy and type inference.
* Fix tensor device type inference as well.
* Better variable type inference: infer cuda-ness only if device is not specified.