Enable lerp on half type; fix output memory format #43541
Conversation
|
cc @ptrblck |
💊 CI failures summary and remediations
As of commit 7ff5eb6 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns
The following CI failures do not appear to be due to upstream breakages:
|
|
@ngimel Is there any update on this? |
facebook-github-bot left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Force-pushed d7fb502 to 7ff5eb6
facebook-github-bot left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
|
@xwang233 This commit broke some changes I was making to torch.quantile when giving at::lerp_out a discontiguous view of the out tensor. If I instead pass the out tensor itself rather than a view, it works correctly. |
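A minimal sketch of the kind of call being described, not the original repro: tensor names and shapes are illustrative assumptions, and a Python-level torch.lerp call stands in for the at::lerp_out usage inside torch.quantile.

```python
import torch

start = torch.rand(4)
end = torch.rand(4)

# Case 1: the out= argument is a discontiguous view of a larger tensor.
out = torch.zeros(8)
view = out[::2]                        # stride-2, non-contiguous view of `out`
torch.lerp(start, end, 0.5, out=view)
print(out[::2])                        # the caller expects the lerp results here

# Case 2: the out= argument is the tensor itself, which works correctly.
out2 = torch.empty(4)
torch.lerp(start, end, 0.5, out=out2)
print(out2)
```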
|
This is not changed by this PR; it worked like this before. Here you are sending an intermediate variable as your out tensor, and that produces the expected results. |
|
I removed the lines in question; I changed my code to work with the current version, so there's no need to change it if it's working correctly. |
|
That's weird,
|
|
The problem is not that out and out1 are different; that is OK. The problem is that in 1.6, out was computed correctly, but with the introduction of this PR the result from the colab is wrong. |
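A hedged sketch of the comparison being made here (the actual colab values are not reproduced; the shapes and the behavior noted in the comments are assumptions based on this thread): compute a reference without out= and compare it with what ends up in out when a non-contiguous view is passed.

```python
import torch

start, end = torch.zeros(3), torch.ones(3)
reference = torch.lerp(start, end, 0.5)      # 0.5 everywhere

out = torch.zeros(6)
torch.lerp(start, end, 0.5, out=out[::2])    # write through a non-contiguous view

# On 1.6 the results land at out[0], out[2], out[4], so this prints True;
# after this PR (before the follow-up fix) the view's strides were changed,
# the values no longer land where the caller expects, and this can print False.
print(torch.equal(out[::2], reference))
```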
|
Oh, thank you, you are right! Yes, this is wrong behavior. Normally, if out is the correct size, we should not change its strides, but now we make it contiguous. We should change the resize_/resize_as_ implementation to respect that, but in the meantime, can you please remove it? |
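A small sketch of the out= contract described above, as an illustration rather than the actual resize_/resize_as_ implementation; torch.add is used here on the assumption that it already follows the contract for a correctly sized, non-contiguous out.

```python
import torch

a = torch.rand(2, 3)
b = torch.rand(2, 3)

out = torch.empty(3, 2).t()       # shape (2, 3) but non-contiguous strides (1, 2)
strides_before = out.stride()

torch.add(a, b, out=out)          # `out` already has the right size: no resize

assert out.stride() == strides_before   # strides are preserved, not made contiguous
assert torch.equal(out, a + b)          # values are written in place through the view
```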
|
Sure, removed here #44559. |
|
@heitorschueroff thanks for the test! @ngimel I remember there was a similar discussion about this problem here: #41027. |
Summary: Pull Request resolved: #43541
Reviewed By: zou3519
Differential Revision: D23499592
Pulled By: ezyang
fbshipit-source-id: 9efdd6cbf0a334ec035ddd467667ba874b892549
|
What are shape requirements for the |
Please refer to the discussion at the bottom of #43541 about the bug. Differential Revision: [D23655403](https://our.internmc.facebook.com/intern/diff/D23655403) [ghstack-poisoned]
No description provided.