@malfet malfet commented Oct 12, 2020

Summary:
Cherry-pick of #46077 into release/1.7

Some of the QNNPACK quantized kernels were not handling NHWC correctly: the data written respected the input format, but the memory-format flag on the output was always set to contiguous. This PR:

  1. adds testing for NHWC for the QNNPACK activations
  2. fixes those activations that did not set the memory format on the output
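To make the bug concrete, here is a minimal, hedged sketch in pure Python (no torch; `propagate_layout` and the stride helpers are hypothetical illustrations, not PyTorch APIs). It models the difference between contiguous (NCHW) and channels-last (NHWC) strides, and shows the conceptual fix: the output should advertise the input's layout rather than unconditionally claiming to be contiguous.

```python
def contiguous_strides(shape):
    # Row-major (NCHW-contiguous) strides, in elements.
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

def channels_last_strides(shape):
    # NHWC ("channels last") strides for a 4-D (N, C, H, W) shape:
    # C is the fastest-varying dimension in memory.
    n, c, h, w = shape
    return [h * w * c, 1, w * c, c]

def propagate_layout(input_shape, input_strides):
    # The fix, conceptually: mirror the input's layout on the output
    # instead of always reporting contiguous strides.
    if input_strides == channels_last_strides(input_shape):
        return channels_last_strides(input_shape)
    return contiguous_strides(input_shape)

shape = (2, 3, 4, 5)  # N, C, H, W
nhwc = channels_last_strides(shape)
# An NHWC input keeps its NHWC layout on the output...
assert propagate_layout(shape, nhwc) == nhwc
# ...while a contiguous input stays contiguous.
assert propagate_layout(shape, contiguous_strides(shape)) == [60, 20, 5, 1]
```

The buggy behavior corresponds to always returning `contiguous_strides(...)`: the bytes in memory were laid out NHWC, but the advertised strides said NCHW, so any consumer trusting the flag would read the data in the wrong order.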

Test Plan:
```
python test/test_quantization.py TestQuantizedOps.test_qhardsigmoid
python test/test_quantization.py TestQuantizedOps.test_leaky_relu
python test/test_quantization.py TestQuantizedOps.test_hardswish
python test/test_quantization.py TestQNNPackOps.test_qnnpack_tanh
python test/test_quantization.py TestQNNPackOps.test_qnnpack_sigmoid
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D24213257

fbshipit-source-id: 764fb588a8d8a0a6e6e4d86285904cdbab26d487

@malfet malfet requested a review from vkuzo October 12, 2020 22:01
@vkuzo vkuzo mentioned this pull request Oct 12, 2020
dr-ci bot commented Oct 12, 2020

💊 CI failures summary and remediations

As of commit d2bd0bd (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 1/2 non-CircleCI failure(s)

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Oct 12 22:36:00 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Oct 12 22:36:00 processing existing schema:  dilation(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int[] _0) 
Oct 12 22:36:00 processing existing schema:  groups(__torch__.torch.classes.quantized.Conv3dPackedParamsBase _0) -> (int _0) 
Oct 12 22:36:00 processing existing schema:  __getstate__(__torch__.torch.classes.xnnpack.LinearOpContext _0) -> ((Tensor, Tensor?, Scalar?, Scalar?) _0) 
Oct 12 22:36:00 processing existing schema:  __setstate__(__torch__.torch.classes.xnnpack.LinearOpContext _0, (Tensor, Tensor?, Scalar?, Scalar?) _1) -> (None _0) 
Oct 12 22:36:00 processing existing schema:  __getstate__(__torch__.torch.classes.xnnpack.Conv2dOpContext _0) -> ((Tensor, Tensor?, int[], int[], int[], int, Scalar?, Scalar?) _0) 
Oct 12 22:36:00 processing existing schema:  __setstate__(__torch__.torch.classes.xnnpack.Conv2dOpContext _0, (Tensor, Tensor?, int[], int[], int[], int, Scalar?, Scalar?) _1) -> (None _0) 
Oct 12 22:36:00 schema:  preprocess(Any self, Any mod, Dict(str, Any) method_compile_spec) -> (Any mod)  found on allowlist, skipping 
Oct 12 22:36:00 schema:  compile(Any self, Any processed, Dict(str, Any) method_compile_spec) -> (Dict(str, Any) handles)  found on allowlist, skipping 
Oct 12 22:36:00 schema:  execute(Any self, Any handle, Any[] input) -> (Any[] output)  found on allowlist, skipping 
Oct 12 22:36:00 processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (None _0) 
Oct 12 22:36:00 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Oct 12 22:36:00  
Oct 12 22:36:00 Broken ops: [ 
Oct 12 22:36:00 	aten::min_values(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor) 
Oct 12 22:36:00 	aten::min_values.names(Tensor self, str[1] dim, bool keepdim=False) -> (Tensor) 
Oct 12 22:36:00 	aten::max_values(Tensor self, int[1] dim, bool keepdim=False) -> (Tensor) 
Oct 12 22:36:00 	aten::max_values.names(Tensor self, str[1] dim, bool keepdim=False) -> (Tensor) 
Oct 12 22:36:00 ] 
Oct 12 22:36:00 + cleanup 
Oct 12 22:36:00 + retcode=1 
Oct 12 22:36:00 + set +x 

ci.pytorch.org: 1 failed



@malfet malfet merged commit 7548f45 into pytorch:release/1.7 Oct 13, 2020
@malfet malfet deleted the malfet/cp-46077 branch October 13, 2020 00:43