CUDA BFloat16 unary ops part 1 #44813
Conversation
💊 CI failures summary and remediations
As of commit b95ad1d (more details on the Dr. CI page):

XLA failure: Job pytorch_xla_linux_bionic_py3_6_clang9_test is failing.

(This comment was automatically generated by Dr. CI.)
Test errors are real.
You need to adjust dtypesIfCUDA here https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_methods_invocations.py#L233 and in similar places.
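For illustration, a minimal sketch of the kind of adjustment meant here, assuming the plain-tuple style of dtype lists; the concrete entry is hypothetical, not the PR's actual diff:

```python
import torch

# Hypothetical OpInfo-style dtype declaration of the kind found in
# torch/testing/_internal/common_methods_invocations.py.
# Adding torch.bfloat16 to the CUDA list is what makes the test suite
# exercise the op in bfloat16 on CUDA devices.
dtypes = (torch.float32, torch.float64, torch.float16)
dtypesIfCUDA = dtypes + (torch.bfloat16,)
```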
Now the flake8 import error is real :-)
Is XLA fixing those operators, or skipping tests on their side? Let me know when this is ready.
I submitted a fix on the XLA side (lowering for these ops) in pytorch/xla#2501.
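For context, a quick sanity check of what a change like this enables; a minimal sketch assuming a CUDA device with bfloat16 support, with erfinv chosen purely for illustration since the thread does not list the ops:

```python
import torch

if torch.cuda.is_available():
    x = torch.rand(8, device="cuda", dtype=torch.bfloat16)
    # With bfloat16 support enabled, the unary op runs directly on the
    # bfloat16 CUDA tensor instead of raising a "not implemented" error.
    y = torch.erfinv(x)
    print(y.dtype)  # torch.bfloat16
```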
facebook-github-bot left a comment:
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
No description provided.