[LTC] Remove lazy::View #87822
Conversation
Summary: This is the first part of removing the whole view and aliasing infrastructure in LTC, which is deprecated in favor of functionalization. It mainly removes things that use lazy::View. Test Plan: CI
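For readers unfamiliar with the distinction being removed here: view semantics let one tensor alias another's storage, so mutations propagate between them, while functionalization rewrites mutations into out-of-place updates. A toy Python sketch of the two behaviors (purely illustrative, not LTC APIs):

```python
# View semantics: the "view" shares storage with its base,
# so a write through the view mutates the base.
base = [0, 0, 0, 0]
view = base          # alias, conceptually like lazy::View tracking a base tensor
view[0] = 1
assert base[0] == 1  # the mutation is visible through the base

# Functionalized semantics: every mutation produces a fresh value,
# and the framework re-points the "view" at the new result.
base2 = (0, 0, 0, 0)       # immutable value
view2 = (1,) + base2[1:]   # out-of-place update instead of an in-place write
assert base2[0] == 0       # the original value is untouched
```

With functionalization in place, the lazy::View bookkeeping that tracked alias relationships becomes unnecessary, which is what this PR deletes.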
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/87822
Note: Links to docs will display an error until the doc builds have completed.
✅ No failures as of commit 5836e21.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
antoniojkim
left a comment
Looks mostly good to me. Just one comment
```cpp
typedef SSIZE_T ssize_t;
#endif
// ...
void LazyGraphExecutor::BuildInputOutputAliases(
```
Is this not something that is needed in PyTorch/XLA? Why is it being removed? CC: @JackCaoG
Oh yeah, I missed this. There have been some changes on our end to BuildInputOutputAliases, since we now wrap the input parameter if it is too big. I think we can keep it for now, but we will need to circle back and update this.
Hmm... yeah, I just remember that TSBackend never uses this, and the XLA one has diverged a lot already. @antoniojkim Does MLIR use this? If not, I'm not sure what to keep here. Upstream the latest XLA implementation?
We're not currently using this in Torch-MLIR. We found that the functionalization pass broke this function's ability to properly map the inputs to the outputs, so we are currently using a custom MLIR-specific solution. If PyTorch/XLA is not using it, we can remove it.
Can you point me to the latest XLA implementation? Maybe it's something we can use in Torch-MLIR as well if it's upstreamed.
This is interesting. We need to make sure input/output aliasing works, otherwise we will take a huge hit on memory. The new version is in https://github.com/pytorch/xla/pull/3920/files#diff-17ef64dc5ef931bed0cb0d971cf537a610e90a9b4351658d6587ff59448f0f87; I don't think it will be too relevant to you. It is mostly updated because we sometimes need to wrap the entry HLO computation.
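As a rough illustration of why input/output aliasing matters for memory: the idea is to let an output reuse the buffer of a same-shaped input that is dead after the step, instead of allocating fresh memory. A toy Python sketch of that pairing (hypothetical helper, not the actual LTC or XLA API):

```python
# Toy sketch of input/output buffer aliasing ("donation") -- illustrative only.
# Real implementations also check dtype, layout, and liveness, not just shape.

def build_input_output_aliases(input_shapes, output_shapes):
    """Pair each output with an unused input of identical shape so the
    runtime can reuse the input's buffer instead of allocating a new one."""
    aliases = {}
    free_inputs = dict(enumerate(input_shapes))
    for out_idx, out_shape in enumerate(output_shapes):
        for in_idx, in_shape in list(free_inputs.items()):
            if in_shape == out_shape:
                aliases[out_idx] = in_idx  # output reuses this input's buffer
                del free_inputs[in_idx]    # each input can be donated only once
                break
    return aliases

# Two inputs and two outputs with swapped shapes: each output
# gets paired with the same-shaped input.
aliases = build_input_output_aliases([(2, 3), (4,)], [(4,), (2, 3)])
# -> {0: 1, 1: 0}
```

Without such pairing, every step output needs a fresh allocation even when an input buffer of the right size is about to be freed, which is the memory hit mentioned above.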
Let's remove it for now, then; we can reimplement and test it again once we have transitioned to functionalization.
@wonjoolee95 Just a heads up.
JackCaoG
left a comment
Thanks!
Summary: This is a companion pull request to pytorch/pytorch#87822 which removes the view/aliasing infrastructure from LTC. Test Plan: CI
wconstab
left a comment
nice. -1000 LOC :)
Thanks Will, Jack, and Antonio for reviewing the pull request.

@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Hey @alanwaketan.
Summary: This is the first part of removing the whole view and aliasing infrastructure in LTC, which is deprecated in favor of functionalization. It mainly removes things that use lazy::View.
Test Plan: CI
Pull Request resolved: pytorch#87822
Approved by: https://github.com/JackCaoG, https://github.com/antoniojkim, https://github.com/wconstab