
Conversation


@z-a-f z-a-f commented Sep 22, 2020

Stack from ghstack:

Differential Revision: D23846473

[ghstack-poisoned]
@z-a-f z-a-f requested a review from apaszke as a code owner September 22, 2020 18:26

dr-ci bot commented Sep 22, 2020

💊 CI failures summary and remediations

As of commit 045cb68 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.

See how this bot performed.

This comment has been revised 22 times.

Comment on lines +412 to +413
// (C, L) -> (C, 1, L) => kSqueezeDim = 1
// (N, C, L) -> (N, C, 1, L) => kSqueezeDim = 2
Contributor

just double checking, this works as expected across different memory formats, right?

Author

Can you elaborate -- are you asking whether squeeze/unsqueeze works properly across the memory formats, or whether this trick does? For the latter -- it does; that's what the FP pooling ops do. As for squeeze/unsqueeze -- not sure, need to check the ref

Contributor

yes, was just asking if this function will work as expected for various memory formats. If it does - sg.
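The squeeze-dim trick discussed in this thread can be sketched in plain Python. This is an illustrative stand-in only, not PyTorch's implementation (which inserts the singleton dimension on a tensor and dispatches to the 2-D pooling kernel); the function name and the nested-list representation are invented for the example:

```python
# Sketch of the kSqueezeDim trick: implement 1-D max pooling by
# inserting a singleton dimension, pooling in 2-D with a (1, k)
# kernel, and squeezing the dimension back out.

def max_pool1d_via_2d(x, kernel_size, stride=None):
    """x: nested list of shape (C, L); returns shape (C, L_out)."""
    stride = stride or kernel_size
    # (C, L) -> (C, 1, L): kSqueezeDim = 1
    x3d = [[row] for row in x]
    # A 2-D max pool with kernel (1, kernel_size) degenerates to a
    # 1-D pool along the last axis, one window per stride step.
    out3d = [
        [[max(plane[0][i:i + kernel_size])
          for i in range(0, len(plane[0]) - kernel_size + 1, stride)]]
        for plane in x3d
    ]
    # (C, 1, L_out) -> (C, L_out): squeeze the inserted dim back out
    return [plane[0] for plane in out3d]

max_pool1d_via_2d([[1, 3, 2, 5]], 2)  # -> [[3, 5]]
```

Because the inserted dimension has size 1, the 2-D pool visits exactly the windows a 1-D pool would, so the reshape round-trip changes layout but not values.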


codecov bot commented Sep 23, 2020

Codecov Report

Merging #45152 into gh/z-a-f/66/base will increase coverage by 0.00%.
The diff coverage is 83.33%.

Impacted file tree graph

@@                Coverage Diff                @@
##           gh/z-a-f/66/base   #45152   +/-   ##
=================================================
  Coverage             68.05%   68.05%           
=================================================
  Files                   396      396           
  Lines                 51232    51238    +6     
=================================================
+ Hits                  34865    34870    +5     
- Misses                16367    16368    +1     
Impacted Files                        Coverage Δ
torch/overrides.py                    97.08% <ø> (ø)
torch/nn/quantized/functional.py      62.75% <83.33%> (+0.88%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e4950a0...045cb68. Read the comment docs.

@z-a-f z-a-f requested a review from vkuzo September 23, 2020 19:23
Zafar added 2 commits September 28, 2020 14:41
@z-a-f z-a-f requested a review from vkuzo September 28, 2020 21:50
_packed_params = torch.ops.quantized.linear_prepack(weight, bias)
return torch.ops.quantized.linear(input, _packed_params, scale, zero_point)

def max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1,
Contributor

do we need this? I feel users won't be directly using these things in general

Author

This is to mirror the nn.functional API. Should I remove it?
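The linear_prepack/linear pair quoted in the diff above follows a pack-once, run-many pattern: weights are packed into an opaque object up front, then reused across calls. A minimal plain-Python sketch of that pattern (the class and function names here are hypothetical; the real torch.ops.quantized.linear_prepack and torch.ops.quantized.linear operate on quantized tensors and return an opaque packed object):

```python
# Illustrative prepack/run split. A real backend would reorder the
# weight into a kernel-friendly layout during packing; here we just
# store it to show the call structure.

class PackedLinearParams:
    def __init__(self, weight, bias):
        self.weight = weight  # list of output rows
        self.bias = bias      # one bias per output row

def linear(input_row, packed):
    # y = x @ W^T + b: dot each weight row with the input, add bias
    return [
        sum(x * w for x, w in zip(input_row, w_row)) + b
        for w_row, b in zip(packed.weight, packed.bias)
    ]

packed = PackedLinearParams([[1.0, 2.0], [0.0, -1.0]], [0.5, 0.0])
linear([3.0, 4.0], packed)  # -> [11.5, -4.0]
```

Packing once amortizes the layout transformation over many inference calls, which is why the quantized op is split in two rather than exposed as a single function.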

@facebook-github-bot
Contributor

@z-a-f merged this pull request in bb47881.

@facebook-github-bot facebook-github-bot deleted the gh/z-a-f/66/head branch October 2, 2020 14:17
6 participants