
Back out "Do not decompose in functionalization/proxy tensor if autograd wouldn't have decomposed (#164939)"#165910

Closed
yingufan wants to merge 1 commit into pytorch:main from yingufan:export-D85025378

Conversation

Contributor

@yingufan commented Oct 20, 2025

Summary:
Original commit changeset: d6d62d0c96dd

Original Phabricator Diff: D84468451 and D84613184

D84468451 caused a CUDA OutOfMemoryError in a model.

Test Plan:
D84468451 was found through bisect. Also double-checked on recent trunk commit 9866939225248c2adc307be7a804b26db0b9b555: f815887517

With this diff, which backs out D84468451 and D84613184: f816114560
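The bisect step above can be sketched with a toy repository; the repo, commit history, and test predicate (grepping a marker file) below are illustrative stand-ins for the actual model and OOM repro, which are internal:

```shell
#!/bin/sh
# Sketch of locating a regressing commit with `git bisect run`.
# The run script must exit 0 for good commits and non-zero for bad ones.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email "bisect@example.com"
git config user.name "bisect-demo"

echo ok > state.txt
git add state.txt
git commit -qm "good: baseline"
git commit -q --allow-empty -m "good: unrelated change"
echo broken > state.txt
git commit -aqm "bad: introduces the regression"
known_bad=$(git rev-parse HEAD)
git commit -q --allow-empty -m "later commit on top of the regression"

# Mark bad = current HEAD, good = baseline, then let bisect drive itself:
git bisect start HEAD HEAD~3 >/dev/null
git bisect run sh -c '! grep -q broken state.txt' >/dev/null
found=$(git rev-parse refs/bisect/bad)
git bisect reset >/dev/null
echo "first bad commit: $found"
test "$found" = "$known_bad"
```

In a real bisect over a regression like this one, the run script would build and execute the failing workload instead of grepping a file; the exit-code convention is the same.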

Differential Revision: D85025378

cc @ezyang @EikanWang @jgong5 @wenzhe-nrv

…rad wouldn't have decomposed (pytorch#164939)"


pytorch-bot bot commented Oct 20, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165910

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 5258bbe with merge base ab82456:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


meta-codesync bot commented Oct 20, 2025

@yingufan has exported this pull request. If you are a Meta employee, you can view the originating Diff in D85025378.

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Oct 21, 2025
@clee2000
Contributor

@pytorchbot merge -f "merged internally, back out of two other diffs"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f only as a last resort; consider -i/--ignore-current instead, which continues the merge while ignoring current failures and allows pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

zhudada0120 pushed a commit to zhudada0120/pytorch that referenced this pull request Oct 22, 2025
…rad wouldn't have decomposed (pytorch#164939)" (pytorch#165910)


Pull Request resolved: pytorch#165910
Approved by: https://github.com/clee2000
ezyang added a commit that referenced this pull request Nov 2, 2025
…if autograd wouldn't have decomposed (#164939)" (#165910)"

This reverts commit e6ba4d0.


ghstack-source-id: c9758ef
Pull-Request: #166812
pytorchmergebot pushed a commit that referenced this pull request Nov 3, 2025
…if autograd wouldn't have decomposed (#164939)" (#165910)" (#166812)

This reverts commit e6ba4d0.

Pull Request resolved: #166812
Approved by: https://github.com/SherlockNoMad
pytorchmergebot added a commit that referenced this pull request Nov 3, 2025
… if autograd wouldn't have decomposed (#164939)" (#165910)" (#166812)

This reverts commit 5a3930a.

Reverted #166812 on behalf of https://github.com/pytorch-auto-revert: reverted automatically by pytorch's autorevert. To avoid this behaviour, add the tag autorevert: disable ([comment](#166812 (comment)))