[MPS] Add nonzero mps support #91616
Conversation
* Add nonzero op support for MPS
* Fix graph caching for the nonzero op (use an unranked placeholder for the output)
* Add support for the nonzero op starting from macOS Ventura; fall back to CPU for older OS versions
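The description's last item implies an OS-version gate on the dispatch. As an illustration only (the predicate name and version constant are hypothetical, not taken from the PR), the decision could be sketched as:

```python
VENTURA = (13, 0)  # macOS Ventura; hypothetical constant for illustration

def use_native_mps_nonzero(macos_version):
    """Hypothetical dispatch predicate: run nonzero natively on MPS on
    macOS Ventura (13.0) or newer, otherwise fall back to the CPU path."""
    return macos_version >= VENTURA

print(use_native_mps_nonzero((13, 1)))  # True: native MPS path
print(use_native_mps_nonzero((12, 6)))  # False: CPU fallback
```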
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91616
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Failure. As of commit 04afc3c: FLAKY - the following jobs failed but were likely due to flakiness present on master.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
razarmehr left a comment:
LGTM
@pytorchbot merge -g
Merge started. Your change will be merged once all checks on your PR pass, since you used the green (-g) flag (ETA: 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot merge -g
The merge job was canceled. If you believe this is a mistake, then you can re-trigger it through pytorch-bot.
Merge started. Your change will be merged once all checks on your PR pass, since you used the green (-g) flag (ETA: 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 2 additional jobs have failed; the first few of them are: trunk, trunk / linux-focal-rocm5.3-py3.8 / test (default, 2, 2, linux.rocm.gpu). Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -f "Lint+MPS is green"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
```cpp
    return out_;
}

bool contiguous_output = (out_.is_contiguous() && !out_.is_view());
```
This feels wrong (i.e., non-contiguous tensors can sometimes be reported as contiguous, and views are sometimes contiguous). Is there an umbrella issue that tracks MPS support for non-contiguous tensors?
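The reviewer's second point (views can be contiguous, so a view check and a contiguity check are independent properties) can be demonstrated with NumPy, which has analogous view/contiguity semantics:

```python
import numpy as np

a = np.arange(6)
v = a[:3]  # a slice: a view into `a`'s storage

print(v.base is a)              # True: v is a view of a
print(v.flags['C_CONTIGUOUS'])  # True: yet v is still contiguous
```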
Adds nonzero support for MPS:
Pseudocode:
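The pseudocode itself did not survive extraction. As an illustration only (not the author's pseudocode or the MPS implementation), a minimal pure-Python sketch of what a nonzero op computes for 2-D input, mirroring `torch.nonzero`'s output layout of one index row per nonzero element:

```python
def nonzero(matrix):
    """Return a list of [row, col] index pairs, one per nonzero element,
    in row-major order (matching torch.nonzero's layout for 2-D input)."""
    return [[i, j]
            for i, row in enumerate(matrix)
            for j, value in enumerate(row)
            if value != 0]

print(nonzero([[0, 3], [5, 0]]))  # [[0, 1], [1, 0]]
```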