
Conversation

gchanan (Contributor) commented Apr 26, 2018

Fix torch.tensor(...) device-type calculation when used with numpy and type inference.

gchanan (Contributor, Author) commented Apr 26, 2018

Should fix #6985.

gchanan mentioned this pull request Apr 26, 2018
zou3519 (Contributor) commented Apr 26, 2018

@gchanan The same problem exists for torch.tensor with a CPU tensor:


In [6]: import torch

In [7]: x = torch.randn(3, 3, requires_grad=True)

In [8]: torch.tensor(x, device='cuda')
Out[8]:
tensor([[-0.3721,  0.5129,  0.6696],
        [-0.8509,  0.4349,  1.1458],
        [ 0.7075, -3.0103, -0.6825]])

In [9]: torch.tensor(x, device='cuda').device
Out[9]: device(type='cpu')

I think this line

const auto& type_to_use = type_inference ? var.type() : type;
might need to be changed too.

gchanan (Contributor, Author) commented Apr 26, 2018

@zou3519 is correct; I'll fix that before merging.

karandwivedi42 (Contributor) commented

@gchanan Can you please briefly explain how this fixed the problem?

type_inference ? type.toScalarType(var.type().scalarType()) : type;

gchanan (Contributor, Author) commented Apr 26, 2018

@karandwivedi42 Type inference in this case means dtype (scalar type) inference, while the ATen type holds (scalar type, layout, device type), e.g. CUDASparseFloatType. So, if we do type inference, we only want to use the scalar type of the incoming variable, not its device type or layout as well.
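
A minimal sketch of the intended behavior (the array values and device are illustrative, and a CUDA-capable build is assumed):

import numpy as np
import torch

# The dtype is inferred from the source data, but the device and layout
# come from the requested type, not from the source.
a = np.arange(3, dtype=np.float64)
t = torch.tensor(a, device='cuda')

# After the fix: dtype follows the numpy array, device follows the argument.
assert t.dtype == torch.float64
assert t.device.type == 'cuda'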

@gchanan
Copy link
Contributor Author

gchanan commented Apr 26, 2018

Actually, the issue with Variable is more complicated: you want to do type inference on cuda-ness if device isn't specified; otherwise you don't.
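
In other words, a hedged sketch of the intended semantics (requires a CUDA-capable build; outputs shown as comments):

import torch

x = torch.randn(3, device='cuda')

# No device argument: infer cuda-ness from the source variable.
torch.tensor(x).device                # -> device(type='cuda', index=0)

# Explicit device argument: it overrides the source's device.
torch.tensor(x, device='cpu').device  # -> device(type='cpu')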

@gchanan gchanan merged commit 361648a into pytorch:master Apr 27, 2018
Jorghi12 pushed a commit to wsttiger/pytorch that referenced this pull request May 10, 2018
Fix torch.tensor(...) device-type calculation when used with numpy and type inference (pytorch#6995)

* Fix torch.tensor(...) device-type calculation when used with numpy and type inference.

* Fix tensor device type inference as well.

* Better variable type inference: infer cuda-ness only if device is not specified.
weiyangfb pushed a commit to weiyangfb/pytorch that referenced this pull request Jun 11, 2018
Fix torch.tensor(...) device-type calculation when used with numpy and type inference (pytorch#6995)

* Fix torch.tensor(...) device-type calculation when used with numpy and type inference.

* Fix tensor device type inference as well.

* Better variable type inference: infer cuda-ness only if device is not specified.