
Conversation

@zasdfgbnm
Collaborator

No description provided.

@dr-ci

dr-ci bot commented Sep 16, 2020

💊 CI failures summary and remediations

As of commit b95ad1d (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

XLA failure

Job pytorch_xla_linux_bionic_py3_6_clang9_test is failing. Please create an issue with a title prefixed by [PT_BREAK] in pytorch/xla and link to this PR. If you have questions, please reach out to @ailzhang / @dlibenzi / @JackCaoG.


@zasdfgbnm requested a review from ngimel September 16, 2020 21:34
@mruberry added the module: cuda, module: bfloat16, and triaged labels Sep 17, 2020
@ngimel
Collaborator

ngimel commented Sep 17, 2020

Test errors are real

@ngimel
Collaborator

ngimel commented Sep 17, 2020

You need to adjust dtypesIfCUDA here https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_methods_invocations.py#L233 and in similar places.
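
For context, here is a minimal sketch of the kind of adjustment being described. The op name and dtype lists below are illustrative assumptions, not the actual entries changed in this PR; they only show where torch.bfloat16 gets added to the CUDA dtype list.

```python
# Hypothetical, simplified test entry mirroring the dtypes / dtypesIfCUDA
# fields used in common_methods_invocations.py (illustrative only).
import torch

sample_entry = dict(
    name='sinh',  # assumed op name, not necessarily one touched by this PR
    dtypes=(torch.float32, torch.float64),
    # Once the CUDA kernels support bfloat16, the CUDA dtype list is extended
    # so the generic device/dtype tests exercise the new dtype:
    dtypesIfCUDA=(torch.float16, torch.bfloat16, torch.float32, torch.float64),
)

assert torch.bfloat16 in sample_entry['dtypesIfCUDA']
```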

@ngimel
Collaborator

ngimel commented Sep 17, 2020

Now the flake8 import error is real :-)

@ngimel
Collaborator

ngimel commented Sep 18, 2020

Is XLA fixing those operators, or skipping tests on their side? Let me know when this is ready.

@JackCaoG
Collaborator

I submitted a fix on the XLA side (lowering for these ops) in pytorch/xla#2501.

Contributor

@facebook-github-bot left a comment

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@ngimel merged this pull request in 581a364.

Labels

module: bfloat16, module: cuda (Related to torch.cuda, and CUDA support in general), open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

7 participants