Conversation

@anjali411 (Contributor) commented Feb 10, 2022

Stack from ghstack:

  1. `__torch_dispatch__` now returns the OpOverload instead of the OpOverloadPacket to Python.
  2. FX can trace overloads (OpOverload objects) such as `torch.ops.aten.add.Tensor`, for example:

```
import torch
import torch.fx as fx

def f(x):
    return torch.ops.aten.add.Tensor(x, x)

print(fx.symbolic_trace(f).code)
```

BC-breaking change

Update checks of the form `func == torch.ops.aten.foo` to either:

  - `func.overloadpacket == torch.ops.aten.foo` (if you want the logical equivalent of the old check), or
  - `func == torch.ops.aten.foo.my_overload` (if you want to add overload-specific behavior).
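As an illustrative sketch of the two replacement checks, using hypothetical stand-in classes rather than the real torch objects (the attribute names follow the description above):

```python
# Stand-in classes that mirror the attributes described above; the real
# objects live in torch/_ops.py.
class OpOverloadPacket:
    def __init__(self, name):
        self.name = name  # e.g. "aten::foo"

class OpOverload:
    def __init__(self, packet, overload_name):
        self.overloadpacket = packet       # back-pointer to the packet
        self.overload_name = overload_name

foo = OpOverloadPacket("aten::foo")
func = OpOverload(foo, "my_overload")  # what __torch_dispatch__ now receives

# Logical equivalent of the old `func == torch.ops.aten.foo` check:
assert func.overloadpacket == foo
# Overload-specific behavior:
assert func.overload_name == "my_overload"
```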

Differential Revision: D34627164


pytorch-bot bot commented Feb 10, 2022

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/b08e84ff26a813ba320dff23e367d154e6a8f285/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default
Add ciflow labels to this PR to trigger more builds:

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
linux-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-manywheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
linux-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk, ciflow/xla ✅ triggered
linux-bionic-rocm4.5-py3.7 ciflow/all, ciflow/default, ciflow/linux, ciflow/rocm, ciflow/trunk ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk ✅ triggered
linux-vulkan-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7-no-ops ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
windows-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
docker-builds ciflow/all, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk 🚫 skipped
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled 🚫 skipped
linux-xenial-cuda11.3-py3.7-gcc7-no-ops ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
parallelnative-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
periodic-win-vs2019-cuda11.5-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped


facebook-github-bot commented Feb 10, 2022

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 54783e1 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


anjali411 added a commit that referenced this pull request Feb 10, 2022
…load packet function

ghstack-source-id: 203fe05
Pull Request resolved: #72673
anjali411 added a commit that referenced this pull request Feb 10, 2022
…load packet function

ghstack-source-id: 6bd557c
Pull Request resolved: #72673
```
@classmethod
def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
-    if func == torch.ops.aten.alias:
+    if func.overload_packet == torch.ops.aten.alias:
```
@zou3519 (Contributor) commented Feb 11, 2022

shouldn't this just be `func == torch.ops.aten.alias['']` (or whatever the syntax is for the alias overload)? Ditto for the split example later.

@anjali411 (Contributor, Author):

Sure, we could also change it to `if func == torch.ops.aten.alias.default:` (we use `default` as the attribute to access the unnamed `''` overload).

Contributor:

Since it took a while for us to make this BC-breaking change, it seems to me that we should introduce some goop to smooth over the transition. The simplest thing we can do is overload the meaning of == on overloads so that they test equal to their overload packet (perhaps throwing a deprecation warning saying that this behavior will get removed in a later version).

We also need to publish guidance on when you should match against the overload_packet (this is still such a weird name to me haha) as opposed to the overload directly. Hot take: matching against the overload packet is probably the wrong thing to do in most cases.
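The transitional shim suggested here could look roughly like the following sketch (stand-in classes and hypothetical behavior, not the actual implementation; the thread notes this approach was ultimately not adopted):

```python
import warnings

class OpOverloadPacket:
    def __init__(self, name):
        self.name = name

class OpOverload:
    def __init__(self, packet, overload_name):
        self.overload_packet = packet
        self.overload_name = overload_name

    def __eq__(self, other):
        if isinstance(other, OpOverloadPacket):
            # Deprecated compatibility path: an overload tests equal to
            # its own packet, with a warning.
            warnings.warn(
                "comparing an OpOverload to an OpOverloadPacket is "
                "deprecated and will be removed in a later version",
                DeprecationWarning,
            )
            return self.overload_packet is other
        return self is other

    def __hash__(self):
        return id(self)

alias_packet = OpOverloadPacket("aten::alias")
alias_default = OpOverload(alias_packet, "default")

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    assert alias_default == alias_packet  # old-style check still passes...
    assert issubclass(w[-1].category, DeprecationWarning)  # ...but warns
```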

@anjali411 (Contributor, Author):

discussed offline that it's ok to just pull the plug

@zou3519 (Contributor) left a comment

I like this. Two main things that we should resolve:

  1. Are the test failures real?
  2. cc @Chillee -- AOTAutograd can be modified to work with this, right?

… the opoverload packet function"

[ghstack-poisoned]
anjali411 added a commit that referenced this pull request Feb 11, 2022
…load packet function

ghstack-source-id: 34a7758
Pull Request resolved: #72673
@albanD albanD requested a review from ezyang February 11, 2022 19:38
```
     kwargs.ptr(),
     "detach",
-    py::module::import("torch").attr("ops").attr("aten").attr("detach").ptr(),
+    py::module::import("torch").attr("ops").attr("aten").attr("detach").attr("default").ptr(),
```
Contributor:

huh, I'm surprised this works (granted, I didn't read the opoverload packet PR in detail haha). Does this mean that `default` is now a reserved keyword for overload names? Do we have tests that ensure that no one tries to name an overload `default`?

When I look at tools/codegen, I don't see any such logic:

```
ezyang-mbp:pytorch-tmp ezyang$ git grep \"default tools/codegen
tools/codegen/dest/register_dispatch_key.py:    // so this "default" kernel doesn't actually handle backends that don't support at::empty
ezyang-mbp:pytorch-tmp ezyang$ git grep \'default tools/codegen
tools/codegen/gen.py:            arg['default'] = cpp_a.default
tools/codegen/gen.py:        arg['default'] = pythonify_default(cpp.default_expr(a.default, a.type))
tools/codegen/gen.py:        'default': str(f.has_composite_kernel or has_autogenerated_composite_kernel(f))
```

@anjali411 (Contributor, Author):

@ezyang I added checks in the schema parser to disallow `default` as an overload name, and also to reject it when registering through the `RegisterOperators` API: https://github.com/pytorch/pytorch/pull/72206/files

Contributor:

OK, I recommend also synchronizing the parser in model.py to also have the same checking; otherwise you get the error only at runtime when you load torch. Doesn't have to be this PR.
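As a sketch of the kind of check being discussed, here is a hypothetical Python helper (the real validation lives in the C++ schema parser and the registration path changed in the PR linked above):

```python
def check_overload_name(overload_name: str) -> None:
    # 'default' is reserved: it is the Python attribute used to access
    # the '' (unnamed) overload, e.g. torch.ops.aten.alias.default.
    if overload_name == "default":
        raise ValueError(
            "'default' is not a legal overload name "
            "(it is reserved for accessing the unnamed overload)"
        )

check_overload_name("Tensor")  # a normal overload name passes
try:
    check_overload_name("default")
except ValueError:
    print("rejected reserved overload name")
```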

@ezyang ezyang added the module: bc-breaking Related to a BC-breaking change label Feb 15, 2022

ezyang commented Feb 15, 2022

This broadly looks good to me, although I see there are a number of failing tests. Most of my questions have to do with the prior PR.

torch/_ops.py (Outdated)

```
@property
def overload_name(self):
    return self._schema.overload_name

def name(self):
```
@anjali411 (Contributor, Author) commented Mar 3, 2022

does anyone have a name suggestion for this?

Contributor:

I actually would suggest making this `__str__`, and having your current `__str__` become `__repr__`.

@anjali411 (Contributor, Author):

But usually `repr` should print something that can be used to construct the class object, right? That is not true of the current `__str__` return value.

@anjali411 (Contributor, Author):

This property is currently called `qualified_op_name` for the `OpOverloadPacket`, btw.

@anjali411 (Contributor, Author):

OK, so I changed it to `__str__` as you suggested, and added angle brackets to the string returned by `__repr__`, following the official Python guideline: https://docs.python.org/3/library/functions.html#repr
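On a stand-in class, the resulting convention might look like this (the exact string formats here are assumptions for illustration, not the actual torch output):

```python
class OpOverload:
    def __init__(self, qualified_name, overload_name):
        self._qualified_name = qualified_name  # e.g. "aten::add"
        self._overload_name = overload_name    # e.g. "Tensor"

    def __str__(self):
        # Human-readable qualified name of the overload.
        return f"{self._qualified_name}.{self._overload_name}"

    def __repr__(self):
        # Angle brackets, per the Python guideline for objects whose
        # repr cannot round-trip through eval().
        return (f"<OpOverload(op='{self._qualified_name}', "
                f"overload='{self._overload_name}')>")

op = OpOverload("aten::add", "Tensor")
assert str(op) == "aten::add.Tensor"
assert repr(op).startswith("<") and repr(op).endswith(">")
```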

torch/_ops.py (Outdated)

```
self._op = op
self._schema = schema
self._overloadpacket = overloadpacket
self.__name__ = 'default' if schema.overload_name == '' else schema.overload_name
```
@anjali411 (Contributor, Author):

@albanD does this look ok?
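The `__name__` assignment above just maps the empty overload name onto the `default` attribute; as a standalone sketch:

```python
def overload_attr_name(overload_name: str) -> str:
    # The '' (unnamed) overload is exposed as the 'default' attribute;
    # any other overload keeps its own name.
    return "default" if overload_name == "" else overload_name

assert overload_attr_name("") == "default"
assert overload_attr_name("Tensor") == "Tensor"
```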

@albanD (Collaborator) left a comment

Not sure about the torch function handling, looks good otherwise.

anjali411 added a commit that referenced this pull request Mar 4, 2022
…load packet function

ghstack-source-id: 472b3aa
Pull Request resolved: #72673

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


anjali411 added a commit that referenced this pull request Mar 4, 2022
…load packet function

ghstack-source-id: f185729
Pull Request resolved: #72673



anjali411 added a commit that referenced this pull request Mar 7, 2022
…load packet function

ghstack-source-id: 550c007
Pull Request resolved: #72673

facebook-github-bot pushed a commit that referenced this pull request Mar 7, 2022
…load packet function (#72673)

Summary: Pull Request resolved: #72673

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D34627164

Pulled By: anjali411

fbshipit-source-id: 3cb6406a392d530bf9da36b4d8e0a62b30e6497e

github-actions bot commented Mar 7, 2022

Hey @anjali411.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@anjali411 anjali411 added release notes: composability release notes category topic: bc breaking topic category labels Mar 7, 2022
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 9, 2022
…load packet function (#72673)

Summary: Pull Request resolved: pytorch/pytorch#72673

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D34627164

Pulled By: anjali411

fbshipit-source-id: 3cb6406a392d530bf9da36b4d8e0a62b30e6497e
(cherry picked from commit 65b85a0a67df4d0f16ac8964e2b685d478a610fb)
@facebook-github-bot facebook-github-bot deleted the gh/anjali411/154/head branch March 11, 2022 15:16

Labels

cla signed · module: bc-breaking (Related to a BC-breaking change) · release notes: composability (release notes category) · topic: bc breaking (topic category)
