Improve repr for torch.iinfo & torch.finfo #40488
Conversation
💊 CI failures summary and remediations, as of commit 42356a9 (more details on the Dr. CI page):
🚧 1 ongoing upstream failure, probably caused by upstream breakages that are not fixed yet.
Extra GitHub checks: 1 failed
(This comment was automatically generated by Dr. CI.)
Force-pushed 07a52ba to 2968455
Can we make this closer to NumPy? E.g. for iinfo, and similarly for finfo (see the sketch below). We don't have resolution; either adding it or just skipping it for now should be fine.
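For reference, the NumPy reprs in question look like the following (a quick check against a recent NumPy; exact float formatting can vary by version):
>>> import numpy as np
>>> np.iinfo(np.int32)
iinfo(min=-2147483648, max=2147483647, dtype=int32)
>>> np.finfo(np.float32)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)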
Hey @Kiyosora, thank you for this PR! Following on what @gchanan said, we should try to match NumPy here. Can you print each dtype using your PR and in NumPy? I'm thinking of torch.bool, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.bfloat16, torch.float32, torch.float64, torch.complex64, and torch.complex128. NumPy doesn't have np.bfloat16, so we won't compare torch.bfloat16 to it, but that's OK. The complex types are especially interesting, since NumPy will print their corresponding float types when they're given to finfo (see the check below). @anjali411, what do you think about mimicking that behavior?
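To illustrate the finfo-on-complex behavior mentioned above (a quick check for reference, not from the PR): NumPy reports the parameters of the corresponding float type:
>>> import numpy as np
>>> np.finfo(np.complex64).dtype
dtype('float32')
>>> np.finfo(np.complex128).dtype
dtype('float64')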
Force-pushed 6ea705a to cf6fbfd
Sounds good!
Force-pushed ae0f931 to c14d4fb
Hey @mruberry @gchanan @anjali411, thanks for your kind advice! I have improved my code to bring our iinfo and finfo much closer to NumPy, including tweaking the output format and adding the resolution attribute to finfo (a sketch of the resulting format follows).
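A sketch of the resulting format (reconstructed from the bfloat16 output shown later in this thread; the exact float32/int32 formatting here is my assumption, not a screenshot from the PR):
>>> import torch
>>> torch.iinfo(torch.int32)
iinfo(min=-2.14748e+09, max=2.14748e+09, dtype=int32)
>>> torch.finfo(torch.float32)
finfo(resolution=1e-06, min=-3.40282e+38, max=3.40282e+38, eps=1.19209e-07, tiny=1.17549e-38, dtype=float32)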
Force-pushed 9479f1a to 3e67d4f
torch/csrc/TypeInfo.cpp (outdated)
By the way, std::numeric_limits does not seem to be well adapted to the BFloat16 type. Calling it directly in the current version crashes Python with an error like:
>>> torch.finfo(torch.bfloat16).max
terminate called after throwing an instance of 'c10::Error'
what(): "max" not implemented for 'BFloat16'
So I wrote this logic to output only the dtype info, and not max/min/tiny and the others, when the input is a BFloat16 type. Hope I haven't done anything redundant.
It's true you can't use std::numeric_limits, but can you use the values from
pytorch/c10/util/BFloat16-inl.h
Line 234 in 1a0b95e
static constexpr c10::BFloat16 min() {
Thanks for reminding me! I have made some improvements, and the finfo methods now successfully handle the BFloat16 type:
>>> torch.finfo(torch.bfloat16).max
3.3895313892515355e+38
>>> torch.finfo(torch.bfloat16).min
-3.3895313892515355e+38
>>> torch.finfo(torch.bfloat16).bits
16
>>> torch.finfo(torch.bfloat16).dtype
'bfloat16'
>>> torch.finfo(torch.bfloat16).eps
0.0078125
>>> torch.finfo(torch.bfloat16).tiny
1.1754943508222875e-38
>>> torch.finfo(torch.bfloat16).resolution
0.01
>>> torch.finfo(torch.bfloat16)
finfo(resolution=0.01, min=-3.38953e+38, max=3.38953e+38, eps=0.0078125, tiny=1.17549e-38, dtype=bfloat16)
>>>
Force-pushed e1d580a to 502b6a5
Hi @mruberry, sorry to take up your time. Since this PR has been around for a while, would you please review it? Any suggestions would be helpful. 😃
Please don't apologize. It's my fault for not seeing the notification. Thank you for reminding me. I'll take a look now.
test/test_type_info.py (outdated)
Should torch.bfloat16, torch.complex64, and torch.complex128 be added to this list?
Also torch.bool?
Addressed!
test/test_type_info.py (outdated)
Should torch.bool be added to this list, too?
Addressed!
mruberry
left a comment
This looks really good. Thank you for the updates, @Kiyosora! Sorry I didn't realize this was ready for review again. I'll watch it more carefully.
I made a few small comments about testing and adding in the values for bfloat16. Once that's done I think this is good to go!
Force-pushed 7ec42bf to 0e21b5f
Force-pushed 0e21b5f to 42356a9
Hi @mruberry, thanks for your suggestions; I have completed the changes. As a reference, I re-compared the iinfo/finfo methods in my PR and NumPy (the original result tables are not reproduced here; a sketch of the comparison follows). Please let me know if there's anything else I can help with; I'd be glad to assist. 😃
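A minimal script of the kind that would produce such a comparison (my sketch, not the author's original):
>>> import numpy as np
>>> import torch
>>> for t, n in [(torch.uint8, np.uint8), (torch.int8, np.int8), (torch.int16, np.int16),
...              (torch.int32, np.int32), (torch.int64, np.int64)]:
...     print(repr(torch.iinfo(t)), repr(np.iinfo(n)), sep='\n')
...
>>> for t, n in [(torch.float16, np.float16), (torch.float32, np.float32),
...              (torch.float64, np.float64), (torch.complex64, np.complex64),
...              (torch.complex128, np.complex128)]:
...     print(repr(torch.finfo(t)), repr(np.finfo(n)), sep='\n')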
Great! I compared the bfloat16 values in JAX against the values you're producing (the value tables are not reproduced here; a sketch of how to query JAX follows). They agree perfectly! New review coming in a moment.
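One way to query the JAX bfloat16 values (a sketch assuming a standard JAX install, where jnp.finfo accepts bfloat16):
>>> import jax.numpy as jnp
>>> bf16 = jnp.finfo(jnp.bfloat16)
>>> print(bf16.max, bf16.min, bf16.eps, bf16.tiny)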
mruberry
left a comment
Great work, @Kiyosora!
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Thanks again for the great work, @Kiyosora! If you're interested in more open PyTorch issues, let me know.
Glad to help, @mruberry! Please feel free to assign me another issue; I'd like to give it a try! 😃
@Kiyosora awesome! Issue #38349 has a list of missing functions that we'd like to implement. If adding a new function to PyTorch sounds interesting, then you might want to pick one from there.
- Adds min/max/eps/tiny values to the repr of torch.iinfo & torch.finfo for inspection
- Prints dtype names like torch.float16/torch.int16 instead of the non-corresponding names Half/Short