Conversation

@wanchaol
Collaborator

@wanchaol wanchaol commented Dec 13, 2022

Stack from ghstack (oldest at bottom):

This PR changes the op registration to a better mechanism: we now
require direct OpOverload registration instead of an op key string.
This has several benefits:

  1. We ensure that each registration registers the correct op, so a
    wrong registration fails immediately (this PR already fixes several
    op registration errors uncovered by switching to direct OpOverload
    registration).
  2. If an overload name is changed or deleted, we find out immediately
    when the source code is compiled, which is safer.
  3. It also keeps the registration mechanism consistent with how other
    tensor subclasses within PyTorch register ops.
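The contrast between the two styles can be sketched with a toy example. The class and helper names below are illustrative stand-ins for the real `torch.ops.aten` overload objects and DTensor's registration table, not the actual internals:

```python
# Toy illustration (not DTensor's actual API): registering a sharding rule
# by the OpOverload object itself rather than by an op-name string.

class OpOverload:
    """Stand-in for a torch OpOverload object."""
    def __init__(self, name):
        self.name = name

class _Namespace:
    """Stand-in for torch.ops.aten: attribute access fails fast on typos."""
    def __init__(self, ops):
        self._ops = {n: OpOverload(n) for n in ops}

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. on a
        # misspelled or removed overload name.
        try:
            return self._ops[name]
        except KeyError:
            raise AttributeError(f"no overload named {name!r}")

aten = _Namespace(["add_Tensor", "mul_Tensor"])

sharding_rules = {}

def register(op):
    """Key the rule table on the OpOverload object, not on a string."""
    def deco(fn):
        sharding_rules[op] = fn
        return fn
    return deco

# Direct overload registration: a typo such as aten.ad_Tensor raises
# AttributeError the moment this module is loaded...
@register(aten.add_Tensor)
def add_rule(*args):
    return "add"

# ...whereas a string key like "aten::ad_Tensor" would be accepted
# silently and only surface as a missing rule at dispatch time.
```

This is why a bad registration "fails at the source code compilation level": the overload is resolved eagerly at import time instead of being deferred behind a string lookup.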

Differential Revision: [D42876250](https://our.internmc.facebook.com/intern/diff/D42876250)

@pytorch-bot

pytorch-bot bot commented Dec 13, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/90735

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit d2f2977:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

wanchaol added a commit that referenced this pull request Dec 13, 2022
ghstack-source-id: 3c4d812
Pull Request resolved: #90735
wanchaol added a commit that referenced this pull request Dec 20, 2022
ghstack-source-id: 7c0eb4c
Pull Request resolved: #90735
@wanchaol wanchaol added the release notes: distributed (dtensor) release notes category label Dec 20, 2022
wanchaol added a commit that referenced this pull request Dec 29, 2022
ghstack-source-id: 72918af
Pull Request resolved: #90735
wanchaol added a commit that referenced this pull request Jan 3, 2023
ghstack-source-id: 2d70971
Pull Request resolved: #90735
wanchaol added a commit that referenced this pull request Jan 4, 2023
ghstack-source-id: c25cef7
Pull Request resolved: #90735
@wanchaol
Collaborator Author

@wanchaol has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Contributor

@fduwjj fduwjj left a comment

LGTM

@facebook-github-bot
Contributor

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 1, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

wanchaol added a commit to pytorch/PiPPy that referenced this pull request Feb 1, 2023
@facebook-github-bot facebook-github-bot deleted the gh/wanchaol/239/head branch June 8, 2023 19:07

Labels

ciflow/trunk (Trigger trunk jobs on your pull request)
Merged
release notes: distributed (dtensor) (release notes category)

6 participants