
Conversation


@supriyar supriyar commented Sep 16, 2020

Stack from ghstack:

Summary:
The model is created and prepared using the FX APIs and then scripted for training.
In order to test QAT on the scripted model we need to be able to disable/enable the
fake_quant and observer modules on it.

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_qat_and_script

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D23741354
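The toggling behavior this test exercises can be sketched with plain-Python stand-ins. Note these `Module`, `FakeQuantize`, and `disable_fake_quant` classes/functions are simplified hypothetical stand-ins for the torch equivalents (`torch.nn.Module.apply`, `torch.quantization.FakeQuantize`, `torch.quantization.disable_fake_quant`), not the real APIs:

```python
# Minimal, torch-free sketch of the pattern under test: walk a module
# tree with apply() and flip the fake-quant flag on matching submodules.

class Module:
    def __init__(self):
        self._children = []

    def add(self, child):
        self._children.append(child)
        return child

    def apply(self, fn):
        # mirrors the semantics of nn.Module.apply: children first, then self
        for c in self._children:
            c.apply(fn)
        fn(self)
        return self

class FakeQuantize(Module):
    def __init__(self):
        super().__init__()
        self.fake_quant_enabled = True

def disable_fake_quant(mod):
    # only touch the fake-quant submodules, not the container modules
    if type(mod) == FakeQuantize:
        mod.fake_quant_enabled = False

model = Module()
fq = model.add(FakeQuantize())
model.apply(disable_fake_quant)
print(fq.fake_quant_enabled)  # False
```

The PR's concern is exactly the `type(mod) == FakeQuantize` check above: after `torch.jit.script`, submodules no longer have that Python type, so the check has to be extended.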

supriyar added a commit that referenced this pull request Sep 16, 2020
…module

ghstack-source-id: 79267f6
Pull Request resolved: #44773
dr-ci bot commented Sep 16, 2020

💊 CI failures summary and remediations

As of commit 1d1dbe5 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



-    if type(mod) == FakeQuantize:
+    if type(mod) == FakeQuantize or\
+       isinstance(mod, torch.jit.RecursiveScriptModule)\
+       and mod.original_name == 'FakeQuantize':
Contributor

is there a way to get the full qualified path?

Contributor Author
You mean like torch.quantization.FakeQuantize? I wasn't able to find a way to do that.


Contributor
@jerryzh168 jerryzh168 Sep 16, 2020

oh, this one: https://codebrowser.bddppq.com/pytorch/pytorch/torch/csrc/jit/python/script_init.cpp.html#1046

you can probably get this via `m._c.qualified_name`

…d on scriptmodule"

[ghstack-poisoned]
Comment on lines 158 to 160
if type(mod) == FakeQuantize or\
isinstance(mod, torch.jit.RecursiveScriptModule)\
and mod.original_name == 'FakeQuantize':
Contributor

to clarify, why doesn't it work without this code?

Contributor Author

Since it is scripted, the type will be `torch.jit._script.RecursiveScriptModule`, so the old code will return False.
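The reply above can be demonstrated with simplified stand-ins. These `FakeQuantize` and `RecursiveScriptModule` classes are hypothetical mock-ups of the torch classes, kept only to show why the exact-type check misses scripted modules:

```python
# Why `type(mod) == FakeQuantize` fails after scripting: the scripted
# wrapper has a different Python type than the eager-mode module.

class FakeQuantize:
    pass

class RecursiveScriptModule:
    # real RecursiveScriptModule exposes the wrapped class name this way
    def __init__(self, original_name):
        self.original_name = original_name

eager = FakeQuantize()
scripted = RecursiveScriptModule(original_name='FakeQuantize')

def is_fake_quant(mod):
    # the extended check from the PR: exact type for eager modules,
    # original_name for scripted wrappers
    return type(mod) == FakeQuantize or (
        isinstance(mod, RecursiveScriptModule)
        and mod.original_name == 'FakeQuantize')

print(type(scripted) == FakeQuantize)  # False: the old check misses it
print(is_fake_quant(eager))            # True
print(is_fake_quant(scripted))         # True
```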

…d on scriptmodule"

Summary:
The model is created and prepared using fx APIs and then scripted for training.
In order to test QAT on scriptmodel we need to be able to disable/enable fake_quant
and observer modules on it.

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_qat_and_script

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D23741354](https://our.internmc.facebook.com/intern/diff/D23741354)

[ghstack-poisoned]
Comment on lines 167 to 168
else:
return false
Contributor
@jerryzh168 jerryzh168 Sep 16, 2020

nit: this can be `return False` (without the `else`); also it's `False`, not `false`, I think.

Contributor
@jerryzh168 jerryzh168 left a comment

Looks good, thanks

if isinstance(mod, torch.jit.RecursiveScriptModule):
    # qualified name looks like '__torch__.torch.quantization.fake_quantize.___torch_mangle_2.FakeQuantize'
    suffix = mod._c.qualified_name.split('.', 1)[1]
Contributor

I think it should be OK to include __torch__
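The string handling discussed here can be sketched with pure string operations, using the sample qualified name from the code comment above. The mangle-stripping step is an assumption for illustration, not something the PR itself does:

```python
import re

# sample taken from the code comment in the PR
qualified_name = ('__torch__.torch.quantization.fake_quantize.'
                  '___torch_mangle_2.FakeQuantize')

# drop the leading '__torch__' segment, as in the PR's split('.', 1)[1]
suffix = qualified_name.split('.', 1)[1]

# assumption: also strip any '___torch_mangle_N' component TorchScript
# inserts, to recover the eager-mode import path for comparison
clean = re.sub(r'\.___torch_mangle_\d+', '', suffix)
print(clean)  # torch.quantization.fake_quantize.FakeQuantize
```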

…d on scriptmodule"


[ghstack-poisoned]
codecov bot commented Sep 17, 2020

Codecov Report

Merging #44773 into gh/supriyar/180/base will increase coverage by 0.00%.
The diff coverage is 100.00%.

Impacted file tree graph

@@                  Coverage Diff                  @@
##           gh/supriyar/180/base   #44773   +/-   ##
=====================================================
  Coverage                 67.91%   67.92%           
=====================================================
  Files                       384      384           
  Lines                     49843    49850    +7     
=====================================================
+ Hits                      33852    33861    +9     
+ Misses                    15991    15989    -2     
Impacted Files                           Coverage Δ
torch/quantization/fake_quantize.py      96.62% <100.00%> (+0.28%) ⬆️
torch/utils/_benchmark/utils/common.py   79.33% <0.00%> (+1.65%) ⬆️

Last update 650bd6d...1d1dbe5.

@facebook-github-bot

This pull request has been merged in 1fde54d.

xuzhao9 pushed a commit that referenced this pull request Sep 18, 2020
…module (#44773)

Summary:
Pull Request resolved: #44773

The model is created and prepared using the FX APIs and then scripted for training.
In order to test QAT on the scripted model we need to be able to disable/enable the
fake_quant and observer modules on it.

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_qat_and_script

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D23741354

fbshipit-source-id: 3fee7aa9b049d9901313b977710f4dc1c4501532
@facebook-github-bot facebook-github-bot deleted the gh/supriyar/180/head branch September 21, 2020 14:14