torch.quantization conversion utilities, observers for eager mode quantization #22010
Conversation
Differential Revision: D15375695 Differential Version: 85144759
Differential Revision: D15483071 Differential Version: 85154440
Differential Revision: D15375695 Differential Version: 85155178
Differential Revision: D15554224 Differential Version: 85162142
Differential Revision: D15483071 Differential Version: 85187565
Differential Revision: D15483071 Differential Version: 85213062
Differential Revision: D15554224 Differential Version: 85213957
Differential Revision: D15375695 Differential Version: 85219380
Differential Revision: D15554224 Differential Version: 85217753
Differential Revision: D15554183 Differential Version: 85267282
Differential Revision: D15554183 Differential Version: 85267665
Differential Revision: D15483071 Differential Version: 85322492
Differential Revision: D15554224 Differential Version: 85325488
Differential Revision: D15554224 Differential Version: 85330125
Differential Revision: D15483071 Differential Version: 85331722
dzhulgakov left a comment:
Some comments (I feel I already provided some of these on Raghu's prototype, but I'm not sure where that was).
Differential Revision: D15375695 Differential Version: 85439410
Differential Revision: D15483071 Differential Version: 85439430
Differential Revision: D15554224 Differential Version: 85439441
Differential Revision: D15554183 Differential Version: 85439445
Differential Revision: D15554183 Differential Version: 85439488
dzhulgakov left a comment:
Getting there, but there's still a lot to improve in terms of code clarity.
dzhulgakov left a comment:
Yay! It looks good to me. I'll let @raghuramank100 do a pass and answer the question about exponential smoothing, but I think it's good to go.
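For context, the exponential smoothing question is about how an observer averages the min/max it records across calibration batches. A minimal illustrative sketch, not the code in this PR (the class name and the averaging_constant parameter are assumptions):

```python
import torch
import torch.nn as nn

class MovingAverageMinMaxObserver(nn.Module):
    """Illustrative observer: smooths the running min/max it sees with an
    exponential moving average. Name and parameters are assumptions."""

    def __init__(self, averaging_constant=0.01):
        super().__init__()
        self.averaging_constant = averaging_constant
        self.register_buffer("min_val", torch.tensor(float("inf")))
        self.register_buffer("max_val", torch.tensor(float("-inf")))

    def forward(self, x):
        cur_min, cur_max = x.detach().min(), x.detach().max()
        if torch.isinf(self.min_val):
            # First observation: take the batch statistics directly.
            self.min_val.copy_(cur_min)
            self.max_val.copy_(cur_max)
        else:
            # Exponential smoothing: new = (1 - c) * old + c * current.
            c = self.averaging_constant
            self.min_val.mul_(1 - c).add_(c * cur_min)
            self.max_val.mul_(1 - c).add_(c * cur_max)
        return x
```

During calibration an observer like this sits on a module's outputs; the min/max it ends up with is what the conversion step would use to compute the quantization scale and zero point.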
raghuramank100 left a comment:
Almost there, a few more comments.
Differential Revision: D15554183 Differential Version: 86050037
| """ | ||
| self.observer(output) | ||
|
|
||
| # TODO(jerryzh): remove_observer? |
Not sure if we really need it. We can instead show in the tutorial how to do a deepcopy, so that the original float module is still available.
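A minimal sketch of the deepcopy approach suggested above; the toy model is a placeholder and the prepare/calibrate/convert steps are elided:

```python
import copy
import torch.nn as nn

# Keep the original float module around by copying it before any in-place
# observer insertion or conversion, instead of removing observers later.
float_model = nn.Sequential(nn.Linear(4, 4), nn.ReLU()).eval()
model_to_quantize = copy.deepcopy(float_model)

# ... insert observers, calibrate, and convert model_to_quantize in place ...

# float_model still holds the original floating-point weights, so it stays
# available for accuracy comparison against the quantized model.
```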
Differential Revision: D15554183 Differential Version: 86079795
raghuramank100 left a comment:
Please see comments, thanks
Differential Revision: D15554183 Differential Version: 86093005
Differential Revision: D15554183 Differential Version: 86094049
raghuramank100 left a comment:
Add an option to prepare, i.e. enable/disable add_quant_dequant. This allows tq.quantize to work for all the test cases, including the QuantWrapper and manual QuantStub insertion cases. The two setups are sketched below.
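For reference, a sketch of those two setups, using the QuantStub/DeQuantStub/QuantWrapper names referenced in this thread; the surrounding toy modules are illustrative:

```python
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub, QuantWrapper

# (a) Manual stub insertion: mark where tensors enter and leave the
# quantized region inside the model itself.
class ManualStubModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # float -> quantized at the input
        self.fc = nn.Linear(4, 4)
        self.dequant = DeQuantStub()  # quantized -> float at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.fc(x)
        return self.dequant(x)

# (b) QuantWrapper: wrap an existing float module to get the same
# quant/dequant boundary added around it.
wrapped = QuantWrapper(nn.Sequential(nn.Linear(4, 4), nn.ReLU()))
```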
Differential Revision: D15554183 Differential Version: 86106388
raghuramank100 left a comment:
Just one more test requested. Thanks!
@raghuramank100 What test? I have removed add_quant_dequant from the default and changed the previous test to call everything explicitly. The quantize API is used in the last few tests (the manual and QuantWrapper test cases).
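A sketch of that explicit flow, using the later public eager-mode API (exact signatures at the time of this PR may differ):

```python
import torch
import torch.nn as nn
import torch.quantization as tq

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU()).eval()
model.qconfig = tq.default_qconfig      # choose observers / quant scheme

# Explicit flow: prepare (insert observers), calibrate, convert.
tq.prepare(model, inplace=True)
with torch.no_grad():
    model(torch.randn(8, 4))            # toy calibration pass
tq.convert(model, inplace=True)

# The one-shot alternative used in the last few tests is, roughly:
#   quantized = tq.quantize(float_model, run_fn, run_args)
```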
Differential Revision: D15554183 Differential Version: 86168758
raghuramank100 left a comment:
Looks great!
This pull request has been merged in 5040d52.
This broke lint: https://travis-ci.org/pytorch/pytorch/jobs/556423310
Stack:
● #22010 torch.quantization conversion utilities, observers for eager mode quantization 💚
torch.quantization module with observers and conversion routines
Differential Revision: D15554183