
Conversation

@jerryzh168 (Contributor) commented Jun 20, 2019

Stack:
    #22010 torch.quantization conversion utilities, observers for eager mode quantization

torch.quantization module with observers and conversion routines

Differential Revision: D15554183
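The core building block here is an observer: a pass-through nn.Module that records statistics on the tensors flowing through it, from which quantization parameters (scale, zero point) are computed at conversion time. A minimal sketch of the idea, assuming a simple min/max strategy and a quint8 target range; the class name and details are illustrative, not the exact code landed by this PR:

```python
import torch
import torch.nn as nn

class MinMaxObserver(nn.Module):
    """Illustrative observer: tracks the running min/max of observed
    tensors and derives affine quantization parameters from them."""

    def __init__(self):
        super().__init__()
        self.min_val = None
        self.max_val = None

    def forward(self, x):
        x_min, x_max = x.min().item(), x.max().item()
        self.min_val = x_min if self.min_val is None else min(self.min_val, x_min)
        self.max_val = x_max if self.max_val is None else max(self.max_val, x_max)
        return x  # observers are pass-through

    def calculate_qparams(self):
        # Map [min_val, max_val] onto the quint8 range [0, 255].
        qmin, qmax = 0, 255
        scale = max((self.max_val - self.min_val) / (qmax - qmin), 1e-8)
        zero_point = int(round(qmin - self.min_val / scale))
        zero_point = min(max(zero_point, qmin), qmax)  # clamp into range
        return scale, zero_point

obs = MinMaxObserver()
obs(torch.randn(4, 8))          # calibration pass
print(obs.calculate_qparams())  # (scale, zero_point)
```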

@dzhulgakov (Collaborator) left a comment

Some comments (I feel I already provided some of these on Raghu's prototype, but I'm not sure where that was).

@dzhulgakov (Collaborator) left a comment

Getting there, but there is still a lot to improve in terms of code clarity.

@dzhulgakov (Collaborator) left a comment

Yay! Looks good to me. I'll let @raghuramank100 do a pass and answer the question about exponential smoothing, but I think it's good to go.
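For context on the exponential-smoothing question: the idea is to blend each batch's min/max into a running estimate instead of tracking global extrema, so a single outlier batch cannot permanently widen the quantization range. A minimal sketch, assuming the same observer shape as above (the name and default averaging constant are illustrative):

```python
import torch.nn as nn

class MovingAverageMinMaxObserver(nn.Module):
    """Illustrative: exponentially smoothed min/max statistics."""

    def __init__(self, averaging_constant=0.01):
        super().__init__()
        self.c = averaging_constant
        self.min_val = None
        self.max_val = None

    def forward(self, x):
        x_min, x_max = x.min().item(), x.max().item()
        if self.min_val is None:
            self.min_val, self.max_val = x_min, x_max
        else:
            # estimate <- (1 - c) * estimate + c * batch_statistic
            self.min_val = (1 - self.c) * self.min_val + self.c * x_min
            self.max_val = (1 - self.c) * self.max_val + self.c * x_max
        return x
```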

@raghuramank100 (Contributor) left a comment

Almost there, a few more comments.

"""
self.observer(output)

# TODO(jerryzh): remove_observer?

Not sure if we really need it. We can instead show in the tutorial how to do a deepcopy, so that the original float module is still available.
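A sketch of the deepcopy workflow being suggested, using the eager-mode API names as they later stabilized (the exact names at the time of this review may differ; MyFloatModel and calibration_loader are placeholders):

```python
import copy
import torch.quantization as tq

float_model = MyFloatModel().eval()        # hypothetical float module
qmodel = copy.deepcopy(float_model)        # keep the original float weights intact
qmodel.qconfig = tq.default_qconfig
tq.prepare(qmodel, inplace=True)           # attach observers
for batch, _ in calibration_loader:        # hypothetical calibration data
    qmodel(batch)
tq.convert(qmodel, inplace=True)           # swap in quantized modules
# float_model is untouched and still usable for accuracy comparison
```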

@raghuramank100 (Contributor) left a comment

Please see comments, thanks.

@raghuramank100 (Contributor) left a comment

Add an option to prepare, i.e. enable/disable add_quant_dequant. This allows tq.quantize to work for all the test cases, including QuantWrapper and manual QuantStub insertion.
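For reference, the two insertion styles mentioned, sketched with the torch.quantization stub classes (the wrapped model here is illustrative):

```python
import torch.nn as nn
import torch.quantization as tq

# Style 1: QuantWrapper wraps an existing float module and adds the
# quant/dequant stubs around it automatically.
wrapped = tq.QuantWrapper(nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()))

# Style 2: manual QuantStub/DeQuantStub insertion inside the model.
class ManualStubModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # tensors become quantized here after convert
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()  # tensors return to float here

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))
```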

@raghuramank100 (Contributor) left a comment

Just one more test requested. Thanks!

@jerryzh168 (Contributor, Author) commented

@raghuramank100 which test? I have removed add_quant_dequant from the defaults and changed the previous test to call everything explicitly. The quantize APIs are used in the last few tests (the manual and QuantWrapper test cases).

@jerryzh168 jerryzh168 requested a review from raghuramank100 July 8, 2019 03:32
@raghuramank100 (Contributor) left a comment

Looks great!

@facebook-github-bot (Contributor) commented

This pull request has been merged in 5040d52.

@ezyang (Contributor) commented Jul 9, 2019

@ezyang deleted the export-D15554183 branch July 19, 2019 15:50