[quant] conv_transpose2d_prepack/conv_transpose2d_unpack #40351
Conversation
💊 CI failures summary and remediations: as of commit d0b783a (more details on the Dr. CI page):
ci.pytorch.org: 2 failed
This comment was automatically generated by Dr. CI.
| "(weight dimensions: ", | ||
| weight.sizes(), " , transpose: ", | ||
| (transpose ? "True)." : "False)."), | ||
| "\nDEBUG INFO: ", | ||
| bias_fp32.ndimension(), ", ", | ||
| bias_fp32.size(0), ", ", out_ch |
Remove debug information
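For illustration, a cleaned-up version of this check might look like the sketch below. The `TORCH_CHECK` wrapper, the condition, and the leading message text are assumptions inferred from the variables in the snippet, not code from this PR.

```cpp
// Hypothetical cleanup following the review comment: keep the user-facing
// message and drop the "\nDEBUG INFO: ..." tail. The condition and the
// leading message are guesses based on the variables shown above.
TORCH_CHECK(
    bias_fp32.size(0) == out_ch,
    "bias should have ", out_ch, " elements ",
    "(weight dimensions: ", weight.sizes(), " , transpose: ",
    (transpose ? "True)." : "False)."));
```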
```cpp
  // Conv
  m.impl("conv2d_prepack", TORCH_FN(QConvPackWeightInt8<2>::run_conv));
  // ConvTranspose
  m.impl("conv_transpose2d_prepack", TORCH_FN(QConvPackWeightInt8<2>::run_deconv));
```
just curious, what's the reason to add this twice (also on line 423)?
If I understand it correctly, the `_` prefix is still in use and will be removed once the ops are migrated to mobile. Until then, though, it should (I think) be registered for both: #36510
I wonder if we still need to do it? @jerryzh168
Maybe let's follow up on whether we need this?
I am fine either way -- the only concern is that removing it would break consistency with the conv_Xd methods.
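For readers following along, here is a rough sketch of the dual registration being discussed (per #36510). The `_`-prefixed name, the library namespace, and the dispatch key are illustrative assumptions, not necessarily what this PR or the issue uses.

```cpp
// Illustrative sketch only: register the same prepack kernel under both the
// new name and a legacy "_"-prefixed alias until the mobile migration is done
// (see #36510). Namespace, dispatch key, and the prefixed name are assumed.
TORCH_LIBRARY_IMPL(quantized, QuantizedCPU, m) {
  // Name used going forward.
  m.impl("conv_transpose2d_prepack",
         TORCH_FN(QConvPackWeightInt8<2>::run_deconv));
  // Legacy alias kept temporarily for backward compatibility.
  m.impl("_conv_transpose2d_prepack",
         TORCH_FN(QConvPackWeightInt8<2>::run_deconv));
}
```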
```cpp
class QConvOutputPadding final {
 public:
  static torch::List<int64_t> run(
      const c10::intrusive_ptr<ConvPackedParamsBase<kSpatialDim>>& packed_weight) {
```
nit: packed_params, since it's actually not the weight that's being passed in?
Although we call it ConvPackedParams, I think at the Python level it is still called packed_weight. I can change it to packed_params -- I am just concerned that this might be equally confusing.
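To make the nit concrete, the rename would be a purely cosmetic change along these lines; this is a sketch, and the `output_padding()` accessor used to fill out the body is assumed from the surrounding PR rather than copied from it.

```cpp
// Sketch of the suggested rename (packed_weight -> packed_params); behavior
// is unchanged, only the parameter name differs from the snippet above.
static torch::List<int64_t> run(
    const c10::intrusive_ptr<ConvPackedParamsBase<kSpatialDim>>& packed_params) {
  return packed_params->output_padding();  // accessor name assumed from the PR
}
```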
vkuzo left a comment:
Looks reasonable; ideally we can review again once the serialization approach is clear.
| m.impl("conv_transpose2d_unpack", TORCH_FN(QConvUnpackWeightsInt8<2>::run)); | ||
|
|
||
| m.impl("conv_transpose2d_stride", TORCH_FN(QConvStride<2>::run)); | ||
| m.impl("conv_transpose2d_padding", TORCH_FN(QConvPadding<2>::run)); | ||
| m.impl("conv_transpose2d_output_padding", TORCH_FN(QConvOutputPadding<2>::run)); | ||
| m.impl("conv_transpose2d_dilation", TORCH_FN(QConvDilation<2>::run)); | ||
| m.impl("conv_transpose2d_groups", TORCH_FN(QConvGroups<2>::run)); | ||
| m.impl("conv_transpose2d_transpose", TORCH_FN(QConvTranspose<2>::run)); |
why do we need different names?
You mean in addition to the conv2d_* ones? If so, this is just for consistency: they call the same functions, but since we have both conv2d and conv_transpose2d, I thought it would be confusing to have conv2d_* without a matching conv_transpose2d_*. Let me know if I should change it.
maybe we can avoid the duplicate methods and add a comment?
Same as my previous comment -- I am fine removing it; the only concern is losing the symmetry with the conv_Xd methods.
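As a concrete picture of the alternative being proposed: the accessors could be registered once under the conv2d_* names, with a comment noting that they also serve transposed convolutions. This is a sketch of the suggestion, not code from the PR; the registration header and the exact op names are assumed.

```cpp
// Sketch of the reviewer's suggestion: drop the conv_transpose2d_* aliases and
// document that the conv2d_* accessors cover both variants, since regular and
// transposed convolutions share the same ConvPackedParamsBase representation.
// The TORCH_LIBRARY_IMPL header and op names here are illustrative.
TORCH_LIBRARY_IMPL(quantized, QuantizedCPU, m) {
  // NOTE: these accessors also handle conv_transpose2d packed params.
  m.impl("conv2d_stride", TORCH_FN(QConvStride<2>::run));
  m.impl("conv2d_padding", TORCH_FN(QConvPadding<2>::run));
  m.impl("conv2d_output_padding", TORCH_FN(QConvOutputPadding<2>::run));
  m.impl("conv2d_dilation", TORCH_FN(QConvDilation<2>::run));
  m.impl("conv2d_groups", TORCH_FN(QConvGroups<2>::run));
  m.impl("conv2d_transpose", TORCH_FN(QConvTranspose<2>::run));
}
```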
```cpp
    torch::List<int64_t> dilation,
    int64_t groups) {
  return _run(weight, bias, stride, padding, output_padding, dilation, groups,
              /*transpose=*/true);
```
Optional nit: it would be good to avoid /*...*/ comments, since they remove the ability to easily comment out blocks of code for debugging.
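One way to act on this nit, shown only as a sketch: hoist the flag into a named constant so the call site stays self-documenting without a block comment. The constant name is ours; `_run` and its argument order come from the snippet above.

```cpp
// Alternative to the inline /*transpose=*/ annotation: a named constant keeps
// the call site readable and leaves /* ... */ free for commenting out code.
static constexpr bool kIsTransposed = true;  // name chosen for this sketch
return _run(weight, bias, stride, padding, output_padding, dilation, groups,
            kIsTransposed);
```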
Codecov Report

```
@@              Coverage Diff               @@
##         gh/z-a-f/39/base    #40351    +/-  ##
================================================
  Coverage           69.34%    69.35%
================================================
  Files                 381       381
  Lines               47170     47178     +8
================================================
+ Hits                32712     32721     +9
+ Misses              14458     14457     -1
```

Continue to review the full report at Codecov.
Stack from ghstack:
Differential Revision: D22158983