Conversation

@Kiyosora
Contributor

@Kiyosora Kiyosora commented Jun 24, 2020

>>> torch.iinfo(torch.int8)
iinfo(type=torch.int8, max=127, min=-128)
>>> torch.iinfo(torch.int16)
iinfo(type=torch.int16, max=32767, min=-32768)
>>> torch.iinfo(torch.int32)
iinfo(type=torch.int32, max=2.14748e+09, min=-2.14748e+09)
>>> torch.iinfo(torch.int64)
iinfo(type=torch.int64, max=9.22337e+18, min=-9.22337e+18)
>>> torch.finfo(torch.float16)
finfo(type=torch.float16, eps=0.000976563, max=65504, min=-65504, tiny=6.10352e-05)
>>> torch.finfo(torch.float32)
finfo(type=torch.float32, eps=1.19209e-07, max=3.40282e+38, min=-3.40282e+38, tiny=1.17549e-38)
>>> torch.finfo(torch.float64)
finfo(type=torch.float64, eps=2.22045e-16, max=1.79769e+308, min=-1.79769e+308, tiny=2.22507e-308)

@dr-ci

dr-ci bot commented Jun 24, 2020

💊 CI failures summary and remediations

As of commit 42356a9 (more details on the Dr. CI page):


  • 1/2 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)
  • 1/2 broken upstream at merge base 6ef9459 since Jul 07

🚧 1 ongoing upstream failure:

These were probably caused by upstream breakages that are not fixed yet.


Extra GitHub checks: 1 failed



@Kiyosora Kiyosora changed the title improve repr for torch.iinfo & torch.finfo Improve repr for torch.iinfo & torch.finfo Jun 24, 2020
@Kiyosora Kiyosora force-pushed the repr_improvement branch 2 times, most recently from 07a52ba to 2968455 Compare June 24, 2020 16:27
@Kiyosora Kiyosora marked this pull request as ready for review June 24, 2020 23:09
@gchanan gchanan requested a review from mruberry June 25, 2020 20:13
@gchanan gchanan added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Jun 25, 2020
@gchanan
Contributor

gchanan commented Jun 25, 2020

Can we make this closer to NumPy?

E.g. for iinfo:

In [15]: np.__version__
Out[15]: '1.18.1'

In [16]: np.iinfo(np.int8)
Out[16]: iinfo(min=-128, max=127, dtype=int8)

Similarly for finfo:

np.finfo(np.float32)
Out[17]: finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)

We don't have resolution; either adding it or just skipping it for now should be fine.
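
For reference, NumPy derives resolution from the type's decimal precision (roughly 10**-precision) rather than storing an independent constant; a minimal sketch using NumPy's public finfo attributes, not part of this PR:

import numpy as np

# NumPy's finfo exposes `precision` (approximate number of decimal digits)
# and derives `resolution` as 10**-precision, the field discussed above.
for dt in (np.float16, np.float32, np.float64):
    info = np.finfo(dt)
    print(dt.__name__, info.precision, info.resolution)
# float16 3 0.001
# float32 6 1e-06
# float64 15 1e-15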

@mruberry
Collaborator

Hey @Kiyosora, thank you for this PR! Following on what @gchanan said, we should try to be like NumPy here. Can you print each dtype using your PR and in NumPy? I think torch.bool, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.bfloat16, torch.float32, torch.float64, torch.complex64, torch.complex128. NumPy doesn't have np.bfloat16, so we won't compare torch.bfloat16 to it but that's OK.

The complex types are especially interesting, since NumPy will print their corresponding float types when they're given to finfo. @anjali411, what do you think about mimicking that behavior?

@anjali411
Contributor

> Hey @Kiyosora, thank you for this PR! Following on what @gchanan said, we should try to be like NumPy here. Can you print each dtype using your PR and in NumPy? I think torch.bool, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.bfloat16, torch.float32, torch.float64, torch.complex64, torch.complex128. NumPy doesn't have np.bfloat16, so we won't compare torch.bfloat16 to it but that's OK.
>
> The complex types are especially interesting, since NumPy will print their corresponding float types when they're given to finfo. @anjali411, what do you think about mimicking that behavior?

sounds good!
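
For reference, the NumPy behavior being discussed (finfo on a complex dtype reports the limits of its component float type) can be checked directly; a minimal sketch, independent of this PR:

import numpy as np

# finfo maps each complex dtype to the float type of its real/imaginary parts,
# so the reported limits are those of the component type.
print(np.finfo(np.complex64).dtype)                              # float32
print(np.finfo(np.complex128).dtype)                             # float64
print(np.finfo(np.complex64).max == np.finfo(np.float32).max)    # True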

@Kiyosora Kiyosora force-pushed the repr_improvement branch 3 times, most recently from ae0f931 to c14d4fb Compare June 30, 2020 06:02
@Kiyosora
Contributor Author

Kiyosora commented Jun 30, 2020

Hey @mruberry @gchanan @anjali411, thanks for your kind advice! I have improved my code to bring our iinfo and finfo much closer to NumPy: I tweaked the output format, added resolution to torch.finfo, and made it accept torch.complex64 and torch.complex128 inputs. Here is a comparison of iinfo/finfo between NumPy and the version in my PR; please check whether it reaches the goal.

>>> import torch
>>> import numpy as np

>>> np.iinfo(np.uint8)
iinfo(min=0, max=255, dtype=uint8)
>>> torch.iinfo(torch.uint8)
iinfo(min=0, max=255, dtype=uint8)

>>> np.iinfo(np.int8)
iinfo(min=-128, max=127, dtype=int8)
>>> torch.iinfo(torch.int8)
iinfo(min=-128, max=127, dtype=int8)

>>> np.iinfo(np.int16)
iinfo(min=-32768, max=32767, dtype=int16)
>>> torch.iinfo(torch.int16)
iinfo(min=-32768, max=32767, dtype=int16)

>>> np.iinfo(np.int32)
iinfo(min=-2147483648, max=2147483647, dtype=int32)
>>> torch.iinfo(torch.int32)
iinfo(min=-2.14748e+09, max=2.14748e+09, dtype=int32)

>>> np.iinfo(np.int64)
iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
>>> torch.iinfo(torch.int64)
iinfo(min=-9.22337e+18, max=9.22337e+18, dtype=int64)

>>> np.finfo(np.float16)
finfo(resolution=0.001, min=-6.55040e+04, max=6.55040e+04, dtype=float16)
>>> torch.finfo(torch.float16)
finfo(resolution=0.001, min=-65504, max=65504, eps=0.000976563, tiny=6.10352e-05, dtype=float16)

>>> np.finfo(np.float32)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)
>>> torch.finfo(torch.float32)
finfo(resolution=1e-06, min=-3.40282e+38, max=3.40282e+38, eps=1.19209e-07, tiny=1.17549e-38, dtype=float32)

>>> np.finfo(np.float64)
finfo(resolution=1e-15, min=-1.7976931348623157e+308, max=1.7976931348623157e+308, dtype=float64)
>>> torch.finfo(torch.float64)
finfo(resolution=1e-15, min=-1.79769e+308, max=1.79769e+308, eps=2.22045e-16, tiny=2.22507e-308, dtype=float64)

>>> np.finfo(np.complex64)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)
>>> torch.finfo(torch.complex64)
finfo(resolution=1e-06, min=-3.40282e+38, max=3.40282e+38, eps=1.19209e-07, tiny=1.17549e-38, dtype=float32)

>>> np.finfo(np.complex128)
finfo(resolution=1e-15, min=-1.7976931348623157e+308, max=1.7976931348623157e+308, dtype=float64)
>>> torch.finfo(torch.complex128)
finfo(resolution=1e-15, min=-1.79769e+308, max=1.79769e+308, eps=2.22045e-16, tiny=2.22507e-308, dtype=float64)

>>> np.finfo(np.bfloat16)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'numpy' has no attribute 'bfloat16'
>>> torch.finfo(torch.bfloat16)
finfo(dtype=bfloat16)

>>> np.iinfo(np.bool)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\anaconda\envs\pytorch\lib\site-packages\numpy\core\getlimits.py", line 506, in __init__
    raise ValueError("Invalid integer data type %r." % (self.kind,))
ValueError: Invalid integer data type 'b'.
>>> np.finfo(np.bool)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\anaconda\envs\pytorch\lib\site-packages\numpy\core\getlimits.py", line 381, in __new__
    raise ValueError("data type %r not inexact" % (dtype))
ValueError: data type <class 'numpy.bool_'> not inexact

>>> torch.iinfo(torch.bool)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: torch.bool is not supported by torch.iinfo
>>> torch.finfo(torch.bool)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: torch.finfo() requires a floating point input type. Use torch.iinfo to handle 'torch.finfo'

@Kiyosora Kiyosora force-pushed the repr_improvement branch 2 times, most recently from 9479f1a to 3e67d4f Compare June 30, 2020 08:18
Contributor Author

By the way, std::numeric_limits does not seem to be well adapted to the BFloat16 type. Calling it directly in the current version causes a Python crash with an error like:

>>> torch.finfo(torch.bfloat16).max
terminate called after throwing an instance of 'c10::Error'
  what():  "max" not implemented for 'BFloat16

So I wrote this logic to output only the dtype info, without max/min/tiny and the rest, when the input is a BFloat16 type. I hope I haven't done anything redundant.

Collaborator

It's true you can't use std::numeric_limits, but can you use the values from static constexpr c10::BFloat16 min()?

Contributor Author

Thanks for reminding me. I have made some improvements, and the finfo methods now successfully handle the BFloat16 type!

>>> torch.finfo(torch.bfloat16).max
3.3895313892515355e+38
>>> torch.finfo(torch.bfloat16).min
-3.3895313892515355e+38
>>> torch.finfo(torch.bfloat16).bits
16
>>> torch.finfo(torch.bfloat16).dtype
'bfloat16'
>>> torch.finfo(torch.bfloat16).eps
0.0078125
>>> torch.finfo(torch.bfloat16).tiny
1.1754943508222875e-38
>>> torch.finfo(torch.bfloat16).resolution
0.01
>>> torch.finfo(torch.bfloat16)
finfo(resolution=0.01, min=-3.38953e+38, max=3.38953e+38, eps=0.0078125, tiny=1.17549e-38, dtype=bfloat16)
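
For reference, these values are consistent with the bfloat16 format itself (1 sign bit, 8 exponent bits, 7 explicit mantissa bits); a quick arithmetic check, independent of the C++ changes in this PR:

# bfloat16 limits derived from the format (exponent bias 127, 7 mantissa bits).
max_finite = (2 - 2 ** -7) * 2.0 ** 127   # largest finite value
eps = 2.0 ** -7                           # gap between 1.0 and the next representable value
tiny = 2.0 ** -126                        # smallest positive normal value
print(max_finite)   # 3.3895313892515355e+38
print(eps)          # 0.0078125
print(tiny)         # 1.1754943508222875e-38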

@Kiyosora Kiyosora force-pushed the repr_improvement branch 3 times, most recently from e1d580a to 502b6a5 Compare June 30, 2020 15:00
@Kiyosora
Contributor Author

Kiyosora commented Jul 7, 2020

Hi @mruberry, sorry to take up your time. Since this PR has been around for a while, would you please review it? Any suggestions would be helpful. 😃

@mruberry
Collaborator

mruberry commented Jul 7, 2020

> Hi @mruberry, sorry to take up your time. Since this PR has been around for a while, would you please review it? Any suggestions would be helpful. 😃

Please don't apologize. It's my fault for not seeing the notification. Thank you for reminding me. I'll take a look now.

Collaborator

Should torch.bfloat16, torch.complex64, and torch.complex128 be added to this list?

Collaborator

Also torch.bool?

Contributor Author

Addressed!

Collaborator

Should torch.bool be added to this list, too?

Contributor Author

Addressed!

Collaborator

@mruberry mruberry left a comment

This looks really good. Thank you for the updates, @Kiyosora! Sorry I didn't realize this was ready for review again. I'll watch it more carefully.

I made a few small comments about testing and adding in the values for bfloat16. Once that's done I think this is good to go!

@Kiyosora Kiyosora force-pushed the repr_improvement branch from 7ec42bf to 0e21b5f Compare July 8, 2020 08:52
@Kiyosora Kiyosora force-pushed the repr_improvement branch from 0e21b5f to 42356a9 Compare July 8, 2020 09:33
@Kiyosora Kiyosora requested a review from mruberry July 8, 2020 11:42
@Kiyosora
Contributor Author

Kiyosora commented Jul 8, 2020

Hi @mruberry, thanks for your suggestions; I have completed the changes. As a reference, I re-compared the iinfo/finfo methods in my PR and NumPy, and here are the results:

>>> import torch
>>> import numpy as np

>>> np.iinfo(np.uint8)
iinfo(min=0, max=255, dtype=uint8)
>>> torch.iinfo(torch.uint8)
iinfo(min=0, max=255, dtype=uint8)

>>> np.iinfo(np.int8)
iinfo(min=-128, max=127, dtype=int8)
>>> torch.iinfo(torch.int8)
iinfo(min=-128, max=127, dtype=int8)

>>> np.iinfo(np.int16)
iinfo(min=-32768, max=32767, dtype=int16)
>>> torch.iinfo(torch.int16)
iinfo(min=-32768, max=32767, dtype=int16)

>>> np.iinfo(np.int32)
iinfo(min=-2147483648, max=2147483647, dtype=int32)
>>> torch.iinfo(torch.int32)
iinfo(min=-2.14748e+09, max=2.14748e+09, dtype=int32)

>>> np.iinfo(np.int64)
iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
>>> torch.iinfo(torch.int64)
iinfo(min=-9.22337e+18, max=9.22337e+18, dtype=int64)

>>> np.finfo(np.float16)
finfo(resolution=0.001, min=-6.55040e+04, max=6.55040e+04, dtype=float16)
>>> torch.finfo(torch.float16)
finfo(resolution=0.001, min=-65504, max=65504, eps=0.000976563, tiny=6.10352e-05, dtype=float16)

>>> np.finfo(np.float32)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)
>>> torch.finfo(torch.float32)
finfo(resolution=1e-06, min=-3.40282e+38, max=3.40282e+38, eps=1.19209e-07, tiny=1.17549e-38, dtype=float32)

>>> np.finfo(np.float64)
finfo(resolution=1e-15, min=-1.7976931348623157e+308, max=1.7976931348623157e+308, dtype=float64)
>>> torch.finfo(torch.float64)
finfo(resolution=1e-15, min=-1.79769e+308, max=1.79769e+308, eps=2.22045e-16, tiny=2.22507e-308, dtype=float64)

>>> np.finfo(np.complex64)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)
>>> torch.finfo(torch.complex64)
finfo(resolution=1e-06, min=-3.40282e+38, max=3.40282e+38, eps=1.19209e-07, tiny=1.17549e-38, dtype=float32)

>>> np.finfo(np.complex128)
finfo(resolution=1e-15, min=-1.7976931348623157e+308, max=1.7976931348623157e+308, dtype=float64)
>>> torch.finfo(torch.complex128)
finfo(resolution=1e-15, min=-1.79769e+308, max=1.79769e+308, eps=2.22045e-16, tiny=2.22507e-308, dtype=float64)

>>> np.finfo(np.bfloat16)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'numpy' has no attribute 'bfloat16'
>>> torch.finfo(torch.bfloat16)
finfo(resolution=0.01, min=-3.38953e+38, max=3.38953e+38, eps=0.0078125, tiny=1.17549e-38, dtype=bfloat16)

>>> np.iinfo(np.bool)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\anaconda\envs\pytorch\lib\site-packages\numpy\core\getlimits.py", line 506, in __init__
    raise ValueError("Invalid integer data type %r." % (self.kind,))
ValueError: Invalid integer data type 'b'.
>>> np.finfo(np.bool)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\anaconda\envs\pytorch\lib\site-packages\numpy\core\getlimits.py", line 381, in __new__
    raise ValueError("data type %r not inexact" % (dtype))
ValueError: data type <class 'numpy.bool_'> not inexact

>>> torch.iinfo(torch.bool)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: torch.bool is not supported by torch.iinfo
>>> torch.finfo(torch.bool)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: torch.finfo() requires a floating point input type. Use torch.iinfo to handle 'torch.finfo'

Please let me know if there's anything else I can help with; I'd be glad to assist. 😃

@mruberry
Collaborator

mruberry commented Jul 8, 2020

Great! Here are the bfloat16 values in JAX for comparison (new review coming in a moment):

jax.dtypes.finfo(jax.dtypes.bfloat16).resolution
: 0.01
jax.dtypes.finfo(jax.dtypes.bfloat16).min
: -3.38953e+38
jax.dtypes.finfo(jax.dtypes.bfloat16).max
: 3.38953e+38
jax.dtypes.finfo(jax.dtypes.bfloat16).eps
: 0.0078125
jax.dtypes.finfo(jax.dtypes.bfloat16).tiny
: 1.17549e-38

And here's a copy of the values you're producing:

finfo(resolution=0.01, min=-3.38953e+38, max=3.38953e+38, eps=0.0078125, tiny=1.17549e-38, dtype=bfloat16)

They agree perfectly!

Collaborator

@mruberry mruberry left a comment

Great work, @Kiyosora!

Contributor

@facebook-github-bot facebook-github-bot left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@facebook-github-bot
Contributor

@mruberry merged this pull request in 0651887.

@mruberry
Collaborator

Thanks again for the great work, @Kiyosora! If you're interested in more open PyTorch issues let me know.

@Kiyosora
Contributor Author

Glad to be of help, @mruberry! Please feel free to assign me another issue; I'd like to have a try! 😃

@Kiyosora Kiyosora deleted the repr_improvement branch July 13, 2020 01:24
@mruberry
Collaborator

@Kiyosora awesome! This issue #38349 has a list of missing functions that we'd like to implement. If adding a new function to PyTorch sounds interesting, then you might want to pick one like nan_to_num. The PRs cited in that issue contain a lot of the pointers you'd need to implement it, and I'm also available if you have additional questions (you can just file a new issue like "implementing nan_to_num" to discuss them).
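
For anyone picking that up, NumPy's nan_to_num (the natural reference for such a port) replaces NaN with zero and ±inf with the largest-magnitude finite values of the dtype; a minimal sketch of those semantics, independent of whatever the eventual torch API looks like:

import numpy as np

# NaN -> 0.0; +inf/-inf -> the largest/most negative finite float64;
# finite values pass through unchanged.
x = np.array([np.nan, np.inf, -np.inf, 1.5])
y = np.nan_to_num(x)
print(y[0], y[3])                           # 0.0 1.5
print(y[1] == np.finfo(np.float64).max)     # True
print(y[2] == np.finfo(np.float64).min)     # True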


Labels

Merged, open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Development

Successfully merging this pull request may close these issues.

[feature request] Useful repr for torch.finfo/iinfo
