[quant] torch.max_pool1d #45152
Conversation
[ghstack-poisoned]
💊 CI failures summary and remediations
As of commit 045cb68 (more details on the Dr. CI page): 💚 💚 Looks good so far! There are no failures yet. 💚 💚
This comment was automatically generated by Dr. CI and has been revised 22 times.
Differential Revision: [D23846473](https://our.internmc.facebook.com/intern/diff/D23846473) [ghstack-poisoned]
// (C, L) -> (C, 1, L) => kSqueezeDim = 1
// (N, C, L) -> (N, C, 1, L) => kSqueezeDim = 2
just double checking, this works as expected across different memory formats, right?
Can you elaborate -- are you asking if squeeze/unsqueeze works properly across the memory formats, or if this trick does? For the latter -- it does, that's what the FP pooling ops do. As for squeeze/unsqueeze -- not sure, I need to check the reference.
Yes, I was just asking if this function will work as expected for various memory formats. If it does - sounds good.
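For reference, here is a minimal Python sketch of the squeeze/unsqueeze trick discussed in this thread; the wrapper name and the sanity check are hypothetical, and the actual quantized kernel is implemented in C++:

```python
import torch
import torch.nn.functional as F

def max_pool1d_via_2d(x, kernel_size, stride=None, padding=0, dilation=1):
    # Route a 1d pooling call through the 2d kernel by inserting a singleton
    # "height" dimension, pooling with a (1, kernel_size) window, and squeezing
    # the extra dimension back out -- the same trick the FP pooling ops use.
    stride = kernel_size if stride is None else stride
    if x.dim() == 2:           # (C, L) -> (C, 1, L)       => kSqueezeDim = 1
        squeeze_dim = 1
    elif x.dim() == 3:         # (N, C, L) -> (N, C, 1, L) => kSqueezeDim = 2
        squeeze_dim = 2
    else:
        raise ValueError("expected a 2d (C, L) or 3d (N, C, L) input")
    x = x.unsqueeze(squeeze_dim)
    x = F.max_pool2d(x, (1, kernel_size), (1, stride), (0, padding), (1, dilation))
    return x.squeeze(squeeze_dim)

# Sanity check against the reference 1d op on a float tensor:
x = torch.randn(2, 3, 8)
assert torch.equal(torch.max_pool1d(x, 2), max_pool1d_via_2d(x, 2))
```

Since the trick only reshapes and delegates to max_pool2d, it should behave the same across memory formats, which is the point raised above.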
Codecov Report
@@ Coverage Diff @@
## gh/z-a-f/66/base #45152 +/- ##
=================================================
Coverage 68.05% 68.05%
=================================================
Files 396 396
Lines 51232 51238 +6
=================================================
+ Hits 34865 34870 +5
- Misses 16367 16368 +1
Continue to review full report at Codecov.
_packed_params = torch.ops.quantized.linear_prepack(weight, bias)
return torch.ops.quantized.linear(input, _packed_params, scale, zero_point)

def max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1,
Do we need this? I feel users won't be directly using these things in general.
This is to keep parity with nn.functional. Should I remove it?
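For context, a hedged sketch of what such a functional wrapper could look like, assuming it only mirrors the torch.nn.functional.max_pool1d signature and delegates to the regular op; this is illustrative, not necessarily the exact code added in this PR:

```python
import torch

def max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1,
               ceil_mode=False, return_indices=False):
    # Keep the same signature as torch.nn.functional.max_pool1d so quantized
    # call sites read the same as FP ones; the underlying op dispatches on the
    # (quantized) input. return_indices is assumed to be unsupported here.
    if return_indices:
        raise NotImplementedError("return_indices is not yet implemented!")
    if stride is None:
        stride = kernel_size
    return torch.nn.functional.max_pool1d(input, kernel_size, stride, padding,
                                          dilation, ceil_mode=ceil_mode)

# Usage mirrors the FP functional call exactly (requires a PyTorch build where
# quantized max_pool1d is available, i.e. one that includes this PR):
xq = torch.quantize_per_tensor(torch.randn(1, 3, 8), scale=0.1,
                               zero_point=0, dtype=torch.quint8)
yq = max_pool1d(xq, kernel_size=2)
```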
Stack from ghstack:
Differential Revision: D23846473