
Conversation

@gchanan (Contributor) commented Apr 23, 2018

Instead of deciding on the format based on all of the elements of the tensor, decide based on the elements that will actually be printed.

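A minimal, hypothetical sketch of the idea (the helper `_elements_to_print` and the `EDGE_ITEMS` constant are illustrative, not the actual code in torch/_tensor_str.py): when a tensor is large enough to be summarized with `...`, gather only the edge items that will survive summarization and base the formatting decision on those.

```python
import torch

# Hypothetical sketch of the approach described above (not the actual PR code):
# collect only the edge items that will appear in the printed output, and base
# the formatting decision (width, precision, notation) on those elements alone.

EDGE_ITEMS = 3  # illustrative summarization threshold

def _elements_to_print(t, edge_items=EDGE_ITEMS):
    """Return a small 1-D tensor holding only the elements repr(t) would show."""
    if t.dim() == 0:
        return t.reshape(1)
    if t.dim() == 1:
        if t.numel() <= 2 * edge_items:
            return t
        return torch.cat([t[:edge_items], t[-edge_items:]])
    # Recurse over the first dimension, keeping only leading/trailing slices.
    if len(t) <= 2 * edge_items:
        idx = range(len(t))
    else:
        idx = list(range(edge_items)) + list(range(len(t) - edge_items, len(t)))
    return torch.cat([_elements_to_print(t[i], edge_items).reshape(-1) for i in idx])

x = torch.randn(1000, 1000, 1000)        # the benchmark shape below (~4 GB)
visible = _elements_to_print(x)          # a few hundred elements instead of 10**9
fmt_driver = visible.abs().max()         # formatting now scans `visible` only
```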
@gchanan (Contributor, Author) commented Apr 23, 2018

Old:

In [1]: import torch

In [2]: x=torch.randn(1000,1000,1000)

In [3]: timeit repr(x)
1 loop, best of 3: 40.6 s per loop

In [4]: x=x.cuda()

In [5]: timeit repr(x)
1 loop, best of 3: 42.5 s per loop

New:

In [2]: import torch

In [3]: x=torch.randn(1000,1000,1000)

In [4]: timeit repr(x)
1.22 ms ± 14.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [5]: x=x.cuda()

In [6]: timeit repr(x)
4.35 ms ± 39.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

@ezyang (Contributor) commented Apr 24, 2018

@pytorchbot retest this please

@gchanan (Contributor, Author) commented Apr 24, 2018

Going to merge this. The ONNX failure doesn't look related to this change. The approach could be improved further (i.e. use the summarized data in the actual printing as well, so the two code paths don't need to be kept in sync), but this is already an improvement.
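A rough sketch of that follow-up idea, with hypothetical helper names (1-D only, ignoring shape and `...` handling; not the real torch/_tensor_str.py API): let the printer consume the already-summarized data, so the element selection used for formatting and the one used for rendering cannot drift apart.

```python
import torch

def summarize(t, edge_items=3):
    """Single place that decides which elements are visible in the output (1-D only)."""
    flat = t.reshape(-1)
    if flat.numel() <= 2 * edge_items:
        return flat
    return torch.cat([flat[:edge_items], flat[-edge_items:]])

def choose_format(visible):
    # Decide precision/notation by looking only at the visible values.
    return "{:10.4e}" if visible.abs().max() >= 1e4 else "{:8.4f}"

def render(visible, fmt):
    # Shape handling and the "..." ellipsis are omitted; the point is only that
    # rendering consumes the same summarized data as the formatting decision.
    return ", ".join(fmt.format(v) for v in visible.tolist())

def tensor_repr(t):
    visible = summarize(t)        # selected once
    fmt = choose_format(visible)  # formatting decision uses `visible`
    return render(visible, fmt)   # rendering reuses the same `visible`
```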

@gchanan merged commit 90e75c6 into pytorch:master Apr 24, 2018
Jorghi12 pushed a commit to wsttiger/pytorch that referenced this pull request May 10, 2018
* Speed up printing of large tensors.

Instead of deciding on the format based on all of the elements of the tensor, decide based on the elements that will actually be printed.

* Fix flake8.

* Add else case.
weiyangfb pushed a commit to weiyangfb/pytorch that referenced this pull request Jun 11, 2018