Add Context Manager for Disabling Multithreading in Backwards, use in aot autograd #86245
Conversation
… aot autograd [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/86245
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures, 1 Pending as of commit 67dd731.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This looks basically fine but deferring to @albanD for final review.
albanD left a comment:
Sounds good to me.
…rds, use in aot autograd" We were running into a few issues with running multithreaded backwards in aot_autograd, such as #86136, and `FakeTensorMode` getting into a weird state as a result of not executing functions completely sequentially. The multithreaded backwards is lost in translation when we trace out the backwards anyway, and adds a lot of additional complexity. [ghstack-poisoned]
albanD left a comment:
Small error in doc, good to go otherwise!
docs/source/torch.rst (Outdated)

    is_grad_enabled
    inference_mode
    is_inference_mode_enabled
    set_multithreading_enabled
This should be reverted.
Good catch, thanks.
albanD left a comment:
SGTM!
@pytorchbot rebase

@pytorchbot successfully started a rebase job. Check the current status here.
Successfully rebased

@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here.

Hey @eellison.
… aot autograd (#86245)

Summary: We were running into a few issues with running multithreaded backwards in aot_autograd, such as #86136, and `FakeTensorMode` getting into a weird state as a result of not executing functions completely sequentially. The multithreaded backwards is lost in translation when we trace out the backwards anyway, and adds a lot of additional complexity.

Pull Request resolved: #86245
Approved by: https://github.com/albanD, https://github.com/yf225
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/d04889323e2bc0b7321b76e564292565c88b9a5e
Reviewed By: seemethere
Differential Revision: D40167028
Pulled By: seemethere
fbshipit-source-id: f427c71e528deaa494521a61fcbf789d1a964711
Stack from ghstack (oldest at bottom):
We were running into a few issues with running multithreaded backwards in aot_autograd, such as #86136, and `FakeTensorMode` getting into a weird state as a result of not executing functions completely sequentially. The multithreaded backwards is lost in translation when we trace out the backwards anyway, and adds a lot of additional complexity.
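As a sketch of how the context manager introduced here can be used (based on the `set_multithreading_enabled` entry from the reviewed docs listing; the exact call site inside aot_autograd is not shown in this thread), disabling multithreaded backwards around a single `backward()` call might look like:

```python
import torch

x = torch.randn(4, requires_grad=True)
loss = (x * x).sum()

# Disable the multithreaded autograd engine for this scope; backward
# nodes then execute sequentially on the calling thread, which is the
# behavior aot_autograd wants while tracing out the backward graph.
with torch.autograd.set_multithreading_enabled(False):
    loss.backward()

# Gradients are unaffected by the threading mode: d/dx (x^2) = 2x.
print(torch.allclose(x.grad, 2 * x))  # → True
```

The context manager only changes how the engine schedules backward nodes, not what gradients are computed, so it is safe to wrap tracing code without altering numerics.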