[quant][fx] Support some default ops in the native backend config #74600
Conversation
Summary: Following #74210, this PR adds support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict definition for the PyTorch native backend. TODO: There are still a few ops we didn't handle with the backend_config_dict path: gelu and softmax. We need to discuss whether we still need them; if so, we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that. Test Plan: python test/test_quantization.py TestQuantizeFxOps [ghstack-poisoned]
🔗 Helpful links
💊 CI failures summary and remediations: as of commit 007eb11 (more details on the Dr. CI page):
🕵️ 3 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
@jerryzh168 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
… config" Summary: Following #74210, this PR adds the support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict defintion for pytorch native backend TODO: There is still a few ops we didn't handle with backend_config_dict path: gelu and softmax, need to discuss if we still need them, if so we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that Test Plan: python test/test_quantization.py TestQuantizeFxOps Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D35071437](https://our.internmc.facebook.com/intern/diff/D35071437) [ghstack-poisoned]
            _op_in_base_sets_of_related_ops(base_op),
            f"{base_op} not in sets of related ops")
-       else:
+       elif not _op_in_base_sets_of_related_ops(base_op):
@vkuzo this is the change I made to the test, please take a look
Basically, we don't explicitly check the quantize handler type in this branch; we just directly check whether the op is in the related ops set.
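To make that concrete, here is a minimal sketch of the restructured check, assuming a test that iterates over candidate ops. The names `related_op_sets`, `unmatchable_ops`, and `op_in_related_op_sets` are hypothetical stand-ins for the structures and the `_op_in_base_sets_of_related_ops` helper in the real test; they are not copied from it.

```python
from typing import Any, Iterable, Set


def op_in_related_op_sets(base_op: Any, related_op_sets: Iterable[Set[Any]]) -> bool:
    # Hypothetical stand-in for _op_in_base_sets_of_related_ops:
    # membership check over all sets of related ops.
    return any(base_op in op_set for op_set in related_op_sets)


def check_op_coverage(base_op, related_op_sets, unmatchable_ops):
    if base_op in unmatchable_ops:
        # ops that are intentionally never matched are skipped
        return
    elif not op_in_related_op_sets(base_op, related_op_sets):
        # no explicit check on the quantize handler type in this branch:
        # just verify the op appears in the sets of related ops
        raise AssertionError(f"{base_op} not in sets of related ops")
```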
… config" Summary: Following #74210, this PR adds the support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict defintion for pytorch native backend TODO: There is still a few ops we didn't handle with backend_config_dict path: gelu and softmax, need to discuss if we still need them, if so we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that Test Plan: python test/test_quantization.py TestQuantizeFxOps Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D35071437](https://our.internmc.facebook.com/intern/diff/D35071437) [ghstack-poisoned]
… config" Summary: Following #74210, this PR adds the support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict defintion for pytorch native backend TODO: There is still a few ops we didn't handle with backend_config_dict path: gelu and softmax, need to discuss if we still need them, if so we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that Test Plan: python test/test_quantization.py TestQuantizeFxOps Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D35071437](https://our.internmc.facebook.com/intern/diff/D35071437) [ghstack-poisoned]
|
@jerryzh168 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator. |
… config" Summary: Following #74210, this PR adds the support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict defintion for pytorch native backend TODO: There is still a few ops we didn't handle with backend_config_dict path: gelu and softmax, need to discuss if we still need them, if so we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that Test Plan: python test/test_quantization.py TestQuantizeFxOps python test/test_quantization.py TestFXNumericSuiteCoreAPIs python test/test_quantization.py TestFXGraphMatcher Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D35071437](https://our.internmc.facebook.com/intern/diff/D35071437) [ghstack-poisoned]
… config" Summary: Following #74210, this PR adds the support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict defintion for pytorch native backend TODO: There is still a few ops we didn't handle with backend_config_dict path: gelu and softmax, need to discuss if we still need them, if so we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that Test Plan: python test/test_quantization.py TestQuantizeFxOps python test/test_quantization.py TestFXNumericSuiteCoreAPIs python test/test_quantization.py TestFXGraphMatcher Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D35071437](https://our.internmc.facebook.com/intern/diff/D35071437) [ghstack-poisoned]
|
@jerryzh168 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator. |
… config" Summary: Following #74210, this PR adds the support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict defintion for pytorch native backend TODO: There is still a few ops we didn't handle with backend_config_dict path: gelu and softmax, need to discuss if we still need them, if so we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that Test Plan: python test/test_quantization.py TestQuantizeFxOps python test/test_quantization.py TestFXNumericSuiteCoreAPIs python test/test_quantization.py TestFXGraphMatcher Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D35071437](https://our.internmc.facebook.com/intern/diff/D35071437) [ghstack-poisoned]
|
@jerryzh168 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator. |
… config" Summary: Following #74210, this PR adds the support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict defintion for pytorch native backend TODO: There is still a few ops we didn't handle with backend_config_dict path: gelu and softmax, need to discuss if we still need them, if so we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that Test Plan: python test/test_quantization.py TestQuantizeFxOps python test/test_quantization.py TestFXNumericSuiteCoreAPIs python test/test_quantization.py TestFXGraphMatcher Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D35071437](https://our.internmc.facebook.com/intern/diff/D35071437) [ghstack-poisoned]
|
@jerryzh168 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator. |
…4600) Summary: Pull Request resolved: #74600 Following #74210, this PR adds support for some ops using the DefaultNodeQuantizeHandler in the backend_config_dict definition for the PyTorch native backend. TODO: There are still a few ops we didn't handle with the backend_config_dict path: gelu and softmax. We need to discuss whether we still need them; if so, we can change the test to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that. Test Plan: python test/test_quantization.py TestQuantizeFxOps Imported from OSS Reviewed By: andrewor14 Differential Revision: D35071437 fbshipit-source-id: 70351d2810ca1ac7dc09d4a9c239f6757ccb51ca
Hey @jerryzh168.
…4600) (cherry picked from commit 5e68f75)
Stack from ghstack (oldest at bottom):
Summary:
Following #74210, this PR adds support for some ops
using the DefaultNodeQuantizeHandler in the backend_config_dict definition for the PyTorch native backend.
TODO: There are still a few ops we didn't handle with the backend_config_dict path: gelu and softmax. We need to discuss whether we still need them; if so, we can change the test
to use backend_config_dict and remove the DefaultNodeQuantizeHandler after that.
Test Plan:
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXGraphMatcher
Reviewers:
Subscribers:
Tasks:
Tags:
Differential Revision: D35071437
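For readers less familiar with the backend_config_dict format referenced above, below is a rough sketch of what an entry for one of these "default" ops could look like. The key names (`pattern`, `dtype_configs`, `configs`), the example op (`nn.ELU`), the backend name, and the dtypes are assumptions for illustration only; the exact schema lives in the PyTorch quantization code and may differ from what this PR adds.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: an entry describing how a "default" op could be
# declared for a native backend config. Key names, the example op, and dtypes
# are assumptions, not the exact schema used in this PR.
default_op_config = {
    "pattern": nn.ELU,                     # op pattern handled by the default path
    "dtype_configs": [
        {
            "input_dtype": torch.quint8,   # quantized activation input
            "output_dtype": torch.quint8,  # quantized activation output
        },
    ],
}

backend_config_dict = {
    "name": "hypothetical_native_backend",  # placeholder backend name
    "configs": [default_op_config],
}
```

Entries along these lines are what the FX prepare flow is meant to consume in place of the hard-coded DefaultNodeQuantizeHandler path; how exactly they are passed in depends on the PyTorch version.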