Align max_pool1d Error Checking between CPU and CUDA/CPU requires_grad #90211
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/90211
Note: Links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit 1b7306b.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
/easycla
```diff
 TORCH_CHECK(
-    KW > 0,
+    kernel_size[0] > 0,
```
Can the other error messages still be triggered? Do they need to be removed or made consistent with these?
I took the original error checks in max_pool1d_impl and refactored them into an error checking function (check_max_pool1d) that is run for both CPU and CUDA/CPU with requires_grad. The error messages that were originally triggered only by CUDA or CPU with requires_grad should no longer be triggered by max_pool1d, as these checks should cover all the same scenarios. However, they should not be removed, since they can still be triggered when max_pool2d is called natively.
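The shared validation described above can be sketched as follows. This is a hypothetical, simplified stand-in, not the actual `check_max_pool1d` from `aten/src/ATen/native/MaxPooling.cpp`: the real function takes a `Tensor` and uses `TORCH_CHECK`, while this sketch uses plain standard C++ and illustrative error messages.

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Simplified stand-in for ATen's TORCH_CHECK: throw on a failed condition.
static void check(bool cond, const std::string& msg) {
  if (!cond) throw std::invalid_argument(msg);
}

// Hypothetical sketch of a shared max_pool1d parameter check, callable from
// both the CPU fast path and the CUDA / requires_grad dispatch path so that
// every path rejects bad parameters the same way.
void check_max_pool1d_sketch(int64_t input_length,
                             const std::vector<int64_t>& kernel_size,
                             const std::vector<int64_t>& stride,
                             const std::vector<int64_t>& padding,
                             const std::vector<int64_t>& dilation) {
  check(kernel_size.size() == 1, "max_pool1d: kernel_size must be a single int");
  check(kernel_size[0] > 0, "max_pool1d: kernel_size must be greater than zero");
  // An empty stride defaults to the kernel size, matching the Python API.
  const int64_t s = stride.empty() ? kernel_size[0] : stride.at(0);
  check(s > 0, "max_pool1d: stride must be greater than zero");
  check(padding.size() == 1 && padding[0] >= 0,
        "max_pool1d: padding must be non-negative");
  check(padding[0] <= kernel_size[0] / 2,
        "max_pool1d: padding should be at most half of kernel size");
  check(dilation.size() == 1 && dilation[0] > 0,
        "max_pool1d: dilation must be greater than zero");
  // Output length must be positive (simplified formula).
  const int64_t out = (input_length + 2 * padding[0] -
                       dilation[0] * (kernel_size[0] - 1) - 1) / s + 1;
  check(out > 0, "max_pool1d: computed output size would be non-positive");
}
```

Because both dispatch branches would call one helper like this, a given set of invalid parameters produces the same error regardless of device or `requires_grad`.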
So this PR fixes the inconsistency for max_pool1d (as it states) but not for max_pool2d?
I did not look into the code for max_pool2d, as the issue was focused on inconsistencies in max_pool1d, and these inconsistencies were well documented in pytorch/torch/testing/_internal/common_methods_invocations.py.
OK
aten/src/ATen/native/MaxPooling.cpp (Outdated)

```cpp
    !self.device().is_cpu() ||
    isTensorSubclassLike(self)) {
  // Needs indices for grad and with_indices defines CUDA dispatch
  check_max_pool1d(self, kernel_size, stride, padding, dilation);
```
Bring check_max_pool1d above the conditional, as you are calling it in all cases.
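Concretely, the suggestion amounts to hoisting the call so it runs once before dispatch. The following is a control-flow sketch only (argument lists abbreviated with `...`), not the exact ATen source:

```cpp
// Sketch: validate once, up front, then branch on dispatch path.
Tensor max_pool1d(const Tensor& self, ...) {
  check_max_pool1d(self, kernel_size, stride, padding, dilation);  // hoisted
  if (self.requires_grad() || !self.device().is_cpu() ||
      isTensorSubclassLike(self)) {
    // Needs indices for grad and with_indices defines CUDA dispatch
    return std::get<0>(at::max_pool1d_with_indices(...));
  }
  return max_pool1d_impl(...);
}
```

This keeps the check in one place instead of duplicating it in, or gating it behind, either branch.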
Sorry if I've bothered you with notifications, I tried to click the re-request review button and it behaved weirdly on my end.

Tests are running now, @deandyu. Once they run and reveal no issues relevant to this PR I think we'll be good to go. I'll just make a final check then.

Hey @mruberry, have you had a chance to look at the tests yet? As far as I can tell, (xla, 1, 1, linux.2xlarge) is the only test that fails, due to an issue that has been fixed in PR #90229.

@mruberry I just synced my fork to the main repo as the PR was getting stale. Let me know if you need me to do anything else before this PR can be merged.
mruberry left a comment
LGTM! Thanks @deandyu!
@pytorchbot merge -g

Merge started. Your change will be merged once all checks on your PR pass since you used the green (-g) flag (ETA: 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

@mruberry Could you approve the remaining workflow? Thanks!
Fixes #85712
Standardizes error checking for max_pool1d between CPU and CPU requires_grad/CUDA.