Support BFloat16 for binary logical operators on CUDA #42485
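A minimal usage sketch of what this PR enables (assuming a CUDA build; prior to this change these calls failed on bfloat16 CUDA tensors with a "not implemented"-style dispatch error):

```python
import torch

# With this change, the binary logical operators dispatch on bfloat16
# CUDA tensors instead of raising a dispatch error.
a = torch.tensor([0.0, 1.0, 2.0], dtype=torch.bfloat16, device="cuda")
b = torch.tensor([1.0, 0.0, 2.0], dtype=torch.bfloat16, device="cuda")

print(torch.logical_and(a, b))  # tensor([False, False,  True], device='cuda:0')
print(torch.logical_or(a, b))   # tensor([ True,  True,  True], device='cuda:0')
print(torch.logical_xor(a, b))  # tensor([ True,  True, False], device='cuda:0')
```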
Conversation
💊 CI failures summary and remediations
As of commit 16d2442 (more details on the Dr. CI page): ✅ None of the CI failures appear to be your fault 💚
❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm.
nairbv left a comment:
If we add include_bfloat16=True in torch/testing/__init__.py's get_all_math_dtypes, it'll turn on a lot of bfloat16 testing wherever we currently test torch.half (e.g. it will turn on all of the cross-dtype testing in test_type_promotion that currently tests torch.half).
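A rough sketch of the change nairbv is suggesting (the flag name comes from his comment; the dtype list and signature here are illustrative guesses, not the actual torch/testing/__init__.py code):

```python
import torch

# Hypothetical sketch: get_all_math_dtypes gains an include_bfloat16 flag
# so bfloat16 opts into the same tests that already cover torch.half.
def get_all_math_dtypes(device, include_bfloat16=False):
    dtypes = [torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64,
              torch.float32, torch.float64]
    if device.startswith('cuda'):
        dtypes.append(torch.float16)  # half is already math-capable on CUDA
    if include_bfloat16:
        dtypes.append(torch.bfloat16)  # pulls bfloat16 into existing tests
    return dtypes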
@nairbv logical operators currently don't use …
mruberry left a comment:
Thanks xuhdev! Looks good now.
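As a quick sanity check of the cross-dtype behavior mentioned above (a hedged example, assuming a CUDA device; logical operators promote mixed inputs to a common dtype and always return torch.bool):

```python
import torch

# Mixed bfloat16/int32 inputs: type promotion picks a common dtype for the
# comparison, and the logical op returns a bool tensor either way.
a = torch.tensor([0.0, 1.0], dtype=torch.bfloat16, device="cuda")
b = torch.tensor([1, 1], dtype=torch.int32, device="cuda")
out = torch.logical_and(a, b)
print(out)        # tensor([False,  True], device='cuda:0')
print(out.dtype)  # torch.bool
```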
Stack from ghstack:
Differential Revision: D23684423