Add torch.movedim
#41480
Conversation
💊 CI failures summary and remediations: as of commit 3ad19aa, 💚 Looks good so far! There are no failures yet. 💚 (This comment was automatically generated by Dr. CI and has been revised 64 times.)
@zou3519 I assigned this to you because I remember seeing you implement a baby version of moveaxis for vmap. LMK if this assignment is wrong.
Yeah, I'm happy to review this. @kshitij12345 -- can you let me know when this is ready for review, or if you want me to take a look at it now? (GitHub says the PR is currently a draft.)
I'll ping you once I feel the PR is complete from my side. Thank you!
zou3519
left a comment
I skimmed through this pretty fast to put up a few initial comments, but ping me when this is ready @kshitij12345 and I'll provide a full review
* move code to TensorShape
* update error msg
* update test to check for updated error msg
@zou3519 @mruberry
zou3519
left a comment
I haven't read the test code or the documentation yet, but the implementation looks correct to me.
torch/_torch_docs.py (Outdated)

r"""
movedim(input, source, destination) -> Tensor
Move `source` dim/s of the :attr:`input` to position determined by `destination`.
Suggestion:
"Moves the dimension(s) of :attr:`input` at the position(s) in :attr:`source` to the position(s) in :attr:`destination`.
Other dimensions of :attr:`input` that are not explicitly moved remain in their original order and appear at the positions not specified in :attr:`destination`."
Thanks! I wasn't sure how to phrase it better.
* pending src -> source, dst -> destination
* update doc
Thanks @kshitij12345, docs look good to me now! I'll let @zou3519 have final say.
zou3519
left a comment
LGTM
facebook-github-bot
left a comment
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
auto source_iter = std::remove(source_dims.begin(), source_dims.end(), -1);
auto destination_iter = std::remove(destination_dims.begin(), destination_dims.end(), -1);
An internal linter pointed these out. source_iter and destination_iter are never actually used, can we remove them and just do:
std::remove(source_dims.begin(), source_dims.end(), -1);
std::remove(destination_dims.begin(), destination_dims.end(), -1);
?
Actually, I thought about it a little more. It would be nice to use source_iter and destination_iter to assert that source_dims and destination_dims have the correct number of elements. So something like
TORCH_INTERNAL_ASSERT(std::distance(source_dims.begin(), source_iter) == rest_dim);
TORCH_INTERNAL_ASSERT(std::distance(destination_dims.begin(), destination_iter) == rest_dim);
But either way, we should either use source_iter / destination_iter or delete them
Makes sense.
Will use them as suggested in second comment.
Thanks!
Thanks!
facebook-github-bot
left a comment
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Thanks for the contribution, @kshitij12345! There were many times in the past when I wanted to use this feature, and I am glad we finally added it to PyTorch.
Summary: Noticed this bug in `torch.movedim` (#41480). [`std::unique`](https://en.cppreference.com/w/cpp/algorithm/unique) only guarantees uniqueness for _sorted_ inputs. The current check lets through non-unique values when they aren't adjacent to each other in the list, e.g. `(0, 1, 0)` wouldn't raise an exception and instead the algorithm fails later with an internal assert. Pull Request resolved: #44307 Reviewed By: mrshenli Differential Revision: D23598311 Pulled By: zou3519 fbshipit-source-id: fd6cc43877c42bb243cfa85341c564b6c758a1bf
I have a question: does torch also support the NumPy name `moveaxis`? If I remember correctly, many torch functions started to support their NumPy-compatible names.
@boeddeker Yes, it was named `movedim` for consistency with PyTorch's use of `dim` rather than `axis`. The plan was to add a `moveaxis` alias for it. cc: @mruberry
@kshitij12345 is correct and we would accept a PR adding the alias. It just hasn't been done yet.
Summary: Reference: #38349 #36048 #41480 (comment) Pull Request resolved: #48581 Reviewed By: bdhirsh Differential Revision: D25276307 Pulled By: mruberry fbshipit-source-id: 3e3e4df1343c5ce5b71457badc43f08c419ec5c3