Do our own call into Python for DTensor dispatch #166370
swolchok wants to merge 17 commits into gh/swolchok/862/base
Conversation
Stack from ghstack (oldest at bottom):

Another incremental step: don't deal with custom ops on the critical path; let the dispatcher do that. Take control of calling into Python on the critical path as well.

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci
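As a rough illustration of the first half of that split (not the actual DTensor code): a wrapper subclass can simply unwrap and re-invoke any op it doesn't special-case, so custom ops get handled by the regular dispatcher rather than by the subclass's own hot path. The `LocalWrapper` class below is hypothetical and exists only to sketch that shape.

```python
import torch
from torch.utils._pytree import tree_map

class LocalWrapper(torch.Tensor):
    """Hypothetical wrapper subclass: no per-op special cases on the hot path."""

    @staticmethod
    def __new__(cls, local):
        return torch.Tensor._make_wrapper_subclass(
            cls, local.shape, dtype=local.dtype, device=local.device
        )

    def __init__(self, local):
        self._local = local

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        unwrap = lambda x: x._local if isinstance(x, LocalWrapper) else x
        # Unwrap and redispatch: custom ops hit whatever kernels they
        # registered with the dispatcher, instead of being special-cased here.
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
        return tree_map(
            lambda x: LocalWrapper(x) if isinstance(x, torch.Tensor) else x, out
        )

# Usage: ops on the wrapper are unwrapped and routed through the dispatcher.
y = LocalWrapper(torch.randn(2, 2)) + LocalWrapper(torch.randn(2, 2))
```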
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166370
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 active SEV. If your PR is affected, please view it below.
❌ 15 new failures as of commit 73ac204 with merge base 84776e1.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
// REVIEW: without the device change here,
// test_dtensor_save_load breaks. I'm not at all sure why it was
// OK before.
Based on the test failures, changing the device to meta here clearly isn't correct. Will go back.
Running into some weird coupling here.
Current failure looks like Dynamo needs to know about the DTensor dispatch key somehow; we're hitting …
torch/_tensor.py
and (
    type(self).__torch_dispatch__ is not torch.Tensor.__torch_dispatch__
    or _is_dtensor(self)
)
Not sure how to structurally avoid `_is_dtensor` special cases in the N things that currently just care about `__torch_dispatch__`.
Thinking about whether to put `DTensor.__torch_dispatch__` back and have it just upcall to the DTensor dispatch key machinery; see the sketch below.
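A rough sketch of that idea (names are hypothetical, and the real fast path would live in C++): keep a genuine `__torch_dispatch__` override on DTensor so the generic gating above keeps firing, and have it do nothing but upcall.

```python
import torch

def _dtensor_dispatch_fast_path(func, types, args, kwargs):
    # Hypothetical stand-in for the dispatch-key / C++ entry point; the real
    # thing would do sharding propagation etc. Here we just run the op.
    return func(*args, **kwargs)

class DTensorSketch(torch.Tensor):
    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        # The override exists, so checks like
        #   type(t).__torch_dispatch__ is not torch.Tensor.__torch_dispatch__
        # are satisfied without an _is_dtensor special case; it immediately
        # upcalls into the fast-path machinery.
        return _dtensor_dispatch_fast_path(func, types, args, kwargs or {})
```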
Superseded by #167051.
This patches the `__torch_dispatch__` machinery to detect DTensor and hand over control to a C++ fast path. Unlike #166370 and #166369 (which added a DTensor dispatch key and are intended to be replaced by this PR), this approach fundamentally *is* `__torch_dispatch__`, hopefully sidestepping all manner of thorny "does it work just like `__torch_dispatch__`?" questions that came up during development and review of #166370.
Pull Request resolved: #167051
Approved by: https://github.com/ezyang
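For reference, a rough Python rendering of the #167051 approach as described above (the actual change is in the C++ `__torch_dispatch__` plumbing; `_is_dtensor_type` and `_dtensor_cpp_fast_path` are illustrative stand-ins, not real APIs):

```python
import torch

def _is_dtensor_type(subclass):
    # Illustrative stand-in; the real detection happens in C++.
    return getattr(subclass, "_is_dtensor_like", False)

def _dtensor_cpp_fast_path(func, types, args, kwargs):
    # Illustrative stand-in for the C++ fast path (sharding propagation etc.).
    return func(*args, **kwargs)

def dispatch_to_python(func, types, args, kwargs):
    # Generic machinery: call the overriding subclass's __torch_dispatch__ ...
    subclass = types[0]
    if _is_dtensor_type(subclass):
        # ... unless the subclass is DTensor, in which case hand control
        # straight to the fast path instead of going through Python.
        return _dtensor_cpp_fast_path(func, types, args, kwargs)
    return subclass.__torch_dispatch__(func, types, args, kwargs)
```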