
Conversation

li-roy (Contributor) commented May 17, 2018

  • Integer tensors are no longer printed in scientific notation
  • Removed the scaling factor for all tensors
  • Space is reserved for negative signs only when at least one printed element is negative

Fixes #7213, fixes #6918, fixes #6865, fixes #6811, fixes #6768.

Examples

>>> torch.tensor(2341234123412341)
tensor(2341234123412341)

>>> torch.tensor([1e28, 1e-28])
tensor([1.0000e+28, 1.0000e-28])

>>> torch.tensor([1e28, -1e-28])
tensor([ 1.0000e+28, -1.0000e-28])

>>> torch.tensor([1, 2])
tensor([1, 2])

>>> torch.tensor([1, -2])
tensor([ 1, -2])

>>> torch.randn(1001)
tensor([-0.5033, -0.4142,  0.7961,  ..., -0.9400, -0.4431, -0.6192])

>>> torch.randn(20)
tensor([ 0.8574, -1.1225,  0.3135,  1.7848, -0.8342,  0.1214, -0.6393, -0.3480,
         1.4114,  0.5102, -0.8206,  1.6022,  0.6310,  1.9507, -1.5694,  1.3168,
         0.1103,  0.5242,  0.3141, -0.0692])

>>> torch.randn(5, dtype=torch.double)
tensor([-1.2961,  0.6950,  0.1357, -0.6473,  0.2605], dtype=torch.float64)

ssnl (Collaborator) commented May 17, 2018

Cool! Can you post some example outputs?


fmassa (Member) commented May 22, 2018

I think the printed output is now a bit verbose.
Can't we print the dtype/device/requires_grad only if they differ from what we'd expect the defaults to be?

Also, quite a few people wanted shape information in the print. While this would break the constructor-valid contract, it could be worth considering.

li-roy (Contributor, Author) commented May 22, 2018

That's what it was like before, but there was some discussion in #6912 about whether it makes sense to do that, seeing as those defaults can be changed.

ezyang (Contributor) commented May 22, 2018

Given that we are adding ellipses (which breaks Python parseability), I don't think slavishly following the "it must be a valid Python expression" rule makes sense. For example, we could add the shape as a comment on a new line at the very end of the expression. This is not strictly following the Python guidelines, but it follows the spirit and solves the problem.
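A hypothetical rendering of that idea (purely illustrative, not what the PR implements) could look like:

```python
def repr_with_shape(body, shape):
    # Hypothetical sketch: append the shape as a trailing comment,
    # keeping the spirit (readable, mostly eval-able) rather than the
    # letter of the "valid Python expression" rule.
    return f"tensor({body})  # shape={tuple(shape)}"

print(repr_with_shape("[[1, 2], [3, 4]]", (2, 2)))
# tensor([[1, 2], [3, 4]])  # shape=(2, 2)
```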

fmassa (Member) commented May 22, 2018

I personally would only print requires_grad if it's True. Same for device: only print it if the tensor isn't on the CPU.

Also, at some point this becomes a matter of taste, and adding options to printoptions to always enable / disable certain fields might be worth considering.
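As a rough sketch of that suggestion (the function name and the hard-coded defaults here are assumptions for illustration, not PyTorch's actual formatter, where defaults are configurable):

```python
def repr_suffix(dtype, device, requires_grad,
                default_dtype="torch.float32", default_device="cpu"):
    # Build the trailing ", key=value" parts of a tensor repr, including
    # each field only when it differs from the (assumed) default.
    parts = []
    if dtype != default_dtype:
        parts.append(f"dtype={dtype}")
    if str(device) != default_device:
        parts.append(f"device='{device}'")
    if requires_grad:
        parts.append("requires_grad=True")
    return "".join(", " + p for p in parts)

print("tensor([1., 2.]" + repr_suffix("torch.float32", "cpu", False) + ")")
# tensor([1., 2.])
print("tensor([1., 2.]" + repr_suffix("torch.float64", "cuda:0", True) + ")")
# tensor([1., 2.], dtype=torch.float64, device='cuda:0', requires_grad=True)
```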

alok commented May 22, 2018

I don't think there should be a space before the first element of a tensor, since NumPy and Python lists don't do that.

li-roy (Contributor, Author) commented May 24, 2018

@alok NumPy does when it's leaving space for a negative sign:

>>> np.array([1,-2])
array([ 1, -2])
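The alignment rule can be sketched in plain Python (a simplified model of the 1-D integer case, not the actual implementation):

```python
def format_ints(values):
    # Reserve a leading space for non-negative entries only when at least
    # one entry is negative, so the digits line up under the signs.
    pad = any(v < 0 for v in values)
    body = ", ".join(
        ("-" + str(-v)) if v < 0 else ((" " if pad else "") + str(v))
        for v in values
    )
    return "[" + body + "]"

print(format_ints([1, 2]))   # [1, 2]
print(format_ints([1, -2]))  # [ 1, -2]
```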

li-roy (Contributor, Author) commented Jun 4, 2018

@pytorchbot retest this please

1 similar comment
li-roy (Contributor, Author) commented Jun 5, 2018

@pytorchbot retest this please

ezyang (Contributor) commented Jun 6, 2018

So, what does the new formatting look like? An expect test for a simple tensor would go a long way towards making it more obvious to reviewers.


li-roy (Contributor, Author) commented Jun 13, 2018

@pytorchbot retest this please

ezyang (Contributor) commented Jun 13, 2018

FYI, you can use assertExpected instead of hard coding the strings, and then pass --accept to automatically accept the actual string as the expected result. Saves a bunch of copy pasting.

li-roy (Contributor, Author) commented Jun 13, 2018

@pytorchbot retest this please

gchanan (Contributor) commented Jun 13, 2018

FWIW I'm not a fan of assertExpected here -- these are really simple tests and it's nicer to just have them visible inline. But it's fine now that it's done.

gchanan (Contributor) left a review:

I have some small comments / suggestions, but I think this is good to go.


ezyang (Contributor) commented Jun 14, 2018

@pytorchbot retest this please

li-roy merged commit 6a85b13 into pytorch:master on Jun 14, 2018
li-roy deleted the printn branch on Jun 14, 2018