[quant][qat] Ensure fake_quant and observer can be disabled on scriptmodule #44773
Conversation
…module

Summary: The model is created and prepared using the fx APIs and then scripted for training. In order to test QAT on a scripted model we need to be able to disable/enable the fake_quant and observer modules on it.

Test Plan: python test/test_quantization.py TestQuantizeFx.test_qat_and_script

ghstack-source-id: 79267f6
Pull Request resolved: #44773
💊 CI failures summary (Dr. CI): as of commit 1d1dbe5, 💚 no failures so far.
torch/quantization/fake_quantize.py (outdated)

```diff
-if type(mod) == FakeQuantize:
+if type(mod) == FakeQuantize or \
+        isinstance(mod, torch.jit.RecursiveScriptModule) \
+        and mod.original_name == 'FakeQuantize':
```
is there a way to get the full qualified path?
You mean like torch.quantization.FakeQuantize? I wasn't able to find a way to do that.
yeah, can you try to see if this works? https://github.com/pytorch/pytorch/blob/master/torch/_jit_internal.py#L736
oh, this one: https://codebrowser.bddppq.com/pytorch/pytorch/torch/csrc/jit/python/script_init.cpp.html#1046
you can probably get this via m._c.qualified_name
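For reference, a minimal sketch of pulling that name off a scripted module (assuming FakeQuantize scripts cleanly here, as this PR relies on; the mangle index in the output varies per process):

```python
import torch
from torch.quantization import FakeQuantize

scripted = torch.jit.script(FakeQuantize())
# RecursiveScriptModule exposes the underlying C++ module as `_c`;
# its qualified_name encodes the original Python class path, e.g.
# '__torch__.torch.quantization.fake_quantize.___torch_mangle_0.FakeQuantize'
print(scripted._c.qualified_name)
```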
…d on scriptmodule"

Summary: The model is created and prepared using the fx APIs and then scripted for training. In order to test QAT on a scripted model we need to be able to disable/enable the fake_quant and observer modules on it.

Test Plan: python test/test_quantization.py TestQuantizeFx.test_qat_and_script

[ghstack-poisoned]
torch/quantization/fake_quantize.py (outdated)

```python
if type(mod) == FakeQuantize or \
        isinstance(mod, torch.jit.RecursiveScriptModule) \
        and mod.original_name == 'FakeQuantize':
```
to clarify, why doesn't it work without this code?
Since it is scripted, the type will be `torch.jit._script.RecursiveScriptModule`, so the old check returns False.
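A small sketch of the failure mode described above (same assumptions as the earlier snippet):

```python
import torch
from torch.quantization import FakeQuantize

fq = FakeQuantize()
scripted = torch.jit.script(fq)

print(type(fq) == FakeQuantize)        # True for the eager module
print(type(scripted) == FakeQuantize)  # False: scripting wraps the class
print(isinstance(scripted, torch.jit.RecursiveScriptModule))  # True
print(scripted.original_name)          # 'FakeQuantize'
```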
…d on scriptmodule"

Summary: The model is created and prepared using the fx APIs and then scripted for training. In order to test QAT on a scripted model we need to be able to disable/enable the fake_quant and observer modules on it.

Test Plan: python test/test_quantization.py TestQuantizeFx.test_qat_and_script

Differential Revision: [D23741354](https://our.internmc.facebook.com/intern/diff/D23741354)

[ghstack-poisoned]
torch/quantization/fake_quantize.py (outdated)

```python
else:
    return false
```
nit: this can be `return False` (without the `else`); also it's `False`, not `false`, I think.
jerryzh168 left a comment:
Looks good, thanks
```python
    '''
    if isinstance(mod, torch.jit.RecursiveScriptModule):
        # qualified name looks like '__torch__.torch.quantization.fake_quantize.___torch_mangle_2.FakeQuantize'
        suffix = mod._c.qualified_name.split('.', 1)[1]
```
I think it should be OK to include __torch__
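Putting the pieces of this thread together, a minimal sketch of the resulting check (the helper name and the exact target string are assumptions; the merged code may differ), using the plain `return False` suggested in the nit above:

```python
import re
import torch

def _is_fake_quant_script_module(mod):
    # Sketch only: True if `mod` looks like a scripted FakeQuantize.
    if isinstance(mod, torch.jit.RecursiveScriptModule):
        # qualified name looks like
        # '__torch__.torch.quantization.fake_quantize.___torch_mangle_2.FakeQuantize'
        suffix = mod._c.qualified_name.split('.', 1)[1]
        # drop the '___torch_mangle_N' segment before comparing
        name = re.sub(r'\.___torch_mangle_\d+', '', suffix)
        return name == 'torch.quantization.fake_quantize.FakeQuantize'
    return False
```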
Codecov Report

```diff
@@            Coverage Diff                         @@
##           gh/supriyar/180/base   #44773    +/-  ##
=====================================================
  Coverage                 67.91%   67.92%
=====================================================
  Files                       384      384
  Lines                     49843    49850     +7
=====================================================
+ Hits                      33852    33861     +9
+ Misses                    15991    15989     -2
```

Continue to review the full report at Codecov.
This pull request has been merged in 1fde54d.
…module (#44773)

Summary: Pull Request resolved: #44773

The model is created and prepared using the fx APIs and then scripted for training. In order to test QAT on a scripted model we need to be able to disable/enable the fake_quant and observer modules on it.

Test Plan: python test/test_quantization.py TestQuantizeFx.test_qat_and_script

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D23741354

fbshipit-source-id: 3fee7aa9b049d9901313b977710f4dc1c4501532
Stack from ghstack:

Summary:
The model is created and prepared using the fx APIs and then scripted for training.
In order to test QAT on a scripted model we need to be able to disable/enable fake_quant
and observer modules on it.

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_qat_and_script

Differential Revision: D23741354
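As a usage illustration, a hedged sketch of what the test exercises; `TinyQATModule` is a hypothetical stand-in for a model prepared with the fx QAT APIs:

```python
import torch
import torch.nn as nn
from torch.quantization import (
    FakeQuantize, disable_fake_quant, enable_fake_quant,
    disable_observer, enable_observer,
)

class TinyQATModule(nn.Module):
    # Hypothetical stand-in: a real QAT model would come from the fx prepare APIs.
    def __init__(self):
        super().__init__()
        self.fq = FakeQuantize()

    def forward(self, x):
        return self.fq(x)

scripted = torch.jit.script(TinyQATModule())

# With this change the toggles also match scripted FakeQuantize modules,
# so they can be applied to a ScriptModule just like to an eager module.
scripted.apply(disable_fake_quant)
scripted.apply(disable_observer)
scripted.apply(enable_fake_quant)
scripted.apply(enable_observer)
```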