[MPS] [Sparse] unique_dim and sparse broadcast #163694
Isalia20 wants to merge 6 commits into pytorch:main
Conversation
🔗 Helpful Links 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163694
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit 72fd152 with merge base e671dcc. This comment was automatically generated by Dr. CI and updates every 15 minutes.
To add the ciflow label: this helps ensure we don't trigger CI on this PR until it is actually authorized to do so. Please ping one of the reviewers if you do not have access to approve and run workflows.
Attention! native_functions.yaml was changed. If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs: one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info. Caused by:
@expectedFailureMPS
@coalescedonoff
@dtypes(torch.double)
Normal version doesn't support torch.float32? Or is it just that it's tested by optest somewhere else?
Most of the sparse functions on other devices (CPU/CUDA) are tested in double; not sure why. Maybe it's due to gradcheck being imprecise in float32.
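(For context, a minimal gradcheck sketch; double is the conventional dtype because the finite-difference Jacobian estimate is too noisy in float32 for the default tolerances. torch.sin here is just a stand-in op:)

```python
import torch
from torch.autograd import gradcheck

# gradcheck compares the analytical Jacobian against a finite-difference
# estimate; in float32 that estimate is noisy enough to produce spurious
# failures, hence the torch.double convention in these sparse tests.
x = torch.randn(3, dtype=torch.double, requires_grad=True)
assert gradcheck(torch.sin, (x,))  # passes comfortably in double precision
```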
self.assertEqual(self.safeToDense(res), self.safeToDense(true_result))

@coalescedonoff
@expectedFailureMPS
Remove expectedFailureMPS wrappers from these tests
Why? We expect that test to fail.
# check_autograd(x, y)

@coalescedonoff
@expectedFailureMPS
A bunch of these need to be removed
I think no? Why should we remove them?
Because it's no longer an expectedFailure on MPS?
We expect this to fail. I added dtypesIfMPS to the test so expectedFailureMPS triggers only when there's an unexpected success. Without it, the test would always fail because all tests using this decorator run in torch.float64, which always errors on MPS regardless of whether the op is implemented (a sketch of this pattern follows the thread below).
This is a good change, but do you mind submitting this one as part of a separate PR?
Removed it for this PR; will submit it in a separate one.
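For context, here is a minimal sketch of the decorator combination discussed above, assuming the stock helpers from torch.testing._internal.common_device_type; the class name and test body are purely illustrative:

```python
import torch
from torch.testing._internal.common_device_type import (
    dtypes,
    dtypesIfMPS,
    expectedFailureMPS,
    instantiate_device_type_tests,
)
from torch.testing._internal.common_utils import TestCase, run_tests

class SparseDecoratorSketch(TestCase):
    @expectedFailureMPS          # the missing op itself is the expected failure
    @dtypes(torch.double)        # CPU/CUDA keep running in double precision
    @dtypesIfMPS(torch.float32)  # MPS has no float64; without this override the
                                 # test would fail for the dtype, not the op
    def test_sparse_op_sketch(self, device, dtype):
        x = torch.eye(3, device=device, dtype=dtype).to_sparse()
        self.assertEqual(x.to_dense(), torch.eye(3, device=device, dtype=dtype))

instantiate_device_type_tests(SparseDecoratorSketch, globals(), allow_mps=True)

if __name__ == "__main__":
    run_tests()
```

With this combination, an unexpected pass on MPS surfaces as a failure of the expectedFailureMPS wrapper, which is exactly the signal that the decorator can be dropped once the op lands.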
# check_autograd(x, y)

@coalescedonoff
@expectedFailureMPS
This is a good change, but do you mind submitting this one as part of a separate PR?
}

Tensor perm;
for (int64_t c = cols - 1; c >= 0; --c) {
Nit
Suggested change:
- for (int64_t c = cols - 1; c >= 0; --c) {
+ for (auto c = cols - 1; c >= 0; --c) {
if (perm.defined()) {
  keys = keys.index_select(0, perm);
}
Tensor idx = argsort(keys, /*dim=*/0, /*descending=*/false);
Suggested change:
- Tensor idx = argsort(keys, /*dim=*/0, /*descending=*/false);
+ const auto idx = argsort(keys, /*dim=*/0, /*descending=*/false);
}

static Tensor lexsort_rows_perm_mps(const Tensor& mat_2d) {
  const auto rows = mat_2d.size(0), cols = mat_2d.size(1);
Doesn't C++17 allow something like:
Suggested change:
- const auto rows = mat_2d.size(0), cols = mat_2d.size(1);
+ const auto [rows, cols] = mat_2d.sizes();
Not supported from .sizes() I think; I get an error when trying to compile (structured bindings can't decompose c10::ArrayRef because its data members are private):

/pytorch/aten/src/ATen/native/mps/operations/Unique.mm:320:15: error: cannot decompose private member 'Data' of 'c10::ArrayRef<long long>'
  320 | const auto [rows, cols] = mat_2d.sizes();
if (perm.defined()) {
  keys = keys.index_select(0, perm);
}
Tensor idx = argsort(keys, /*dim=*/0, /*descending=*/false);
perm = perm.defined() ? perm.index_select(0, idx) : std::move(idx);
Suggested change:
- if (perm.defined()) {
-   keys = keys.index_select(0, perm);
- }
- Tensor idx = argsort(keys, /*dim=*/0, /*descending=*/false);
- perm = perm.defined() ? perm.index_select(0, idx) : std::move(idx);
+ if (!perm.defined()) {
+   perm = std::move(keys);
+   continue;
+ }
+ keys = keys.index_select(0, perm);
+ const auto idx = argsort(keys, /*dim=*/0, /*descending=*/false);
+ perm = perm.index_select(0, idx);
Rewrote it in a simpler way.
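For reference, a sketch of the lexsort-by-rows approach being reviewed, written in Python with torch ops (the actual kernel lives in Unique.mm; the function name and test values here are illustrative):

```python
import torch

def lexsort_rows_perm(mat_2d: torch.Tensor) -> torch.Tensor:
    """Permutation that sorts the rows of a 2-D tensor lexicographically."""
    rows, cols = mat_2d.shape
    perm = torch.arange(rows, device=mat_2d.device)
    # Sweep columns from least to most significant, composing the permutation;
    # a *stable* sort per pass is what keeps earlier passes' ordering intact.
    for c in range(cols - 1, -1, -1):
        keys = mat_2d[perm, c]
        idx = torch.argsort(keys, stable=True)
        perm = perm[idx]
    return perm

x = torch.tensor([[1, 2], [0, 9], [1, 0], [0, 3]])
print(x[lexsort_rows_perm(x)])  # [[0, 3], [0, 9], [1, 0], [1, 2]]
```

Once rows are in lexicographic order, duplicate rows sit in adjacent runs, which is what a unique-over-dim kernel needs to find them in a single pass.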
auto output = at::empty(sizes, self.options());
auto inverse_indices = at::empty({0}, self.options().dtype(kLong));
auto counts = at::empty({0}, self.options().dtype(kLong));
return std::make_tuple(output, inverse_indices, counts);
Nit
Suggested change:
- return std::make_tuple(output, inverse_indices, counts);
+ return {output, inverse_indices, counts};
|
@pytorchbot merge |
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Implements unique_dim and sparse broadcast ops, and adds MPS dtypes for tests where we expect to fail; otherwise they would always fail due to being run in double precision. Pull Request resolved: #163694. Approved by: https://github.com/malfet
cc @kulinseth @malfet @DenisVieriu97 @jhavukainen
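For illustration, a hedged sketch of what the new coverage enables on MPS. torch._sparse_broadcast_to is the private helper the sparse tests exercise, so treat its use here as an assumption about the tested surface; the tensor values are made up:

```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"

# unique along a dimension (torch.unique with dim= dispatches to unique_dim)
x = torch.tensor([[1, 2], [1, 2], [3, 4]], device=device)
vals, inverse, counts = torch.unique(
    x, dim=0, return_inverse=True, return_counts=True
)
print(vals)     # tensor([[1, 2], [3, 4]])
print(inverse)  # tensor([0, 0, 1])
print(counts)   # tensor([2, 1])

# sparse broadcast: private helper used by test_sparse.py (assumption)
s = torch.tensor([[0.0, 1.0], [2.0, 0.0]], device=device).to_sparse()
b = torch._sparse_broadcast_to(s, (2, 2, 2))
print(b.to_dense().shape)  # torch.Size([2, 2, 2])
```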