🐛 Bug
The quantize and dequantize ops currently call into contiguous(), which may cause an underlying NHWC => NCHW memory-format change.
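A minimal sketch of this behavior (assuming a recent PyTorch build with quantized tensor support; the scale/zero_point values are arbitrary):

```python
import torch

# Start from a channels-last (NHWC) float tensor.
x = torch.randn(1, 8, 4, 4).to(memory_format=torch.channels_last)
print(x.is_contiguous(memory_format=torch.channels_last))  # True

# Quantize it; because Quantizer.cpp calls qtensor.contiguous(), the result
# may come back in the default NCHW-contiguous layout instead of channels-last.
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
print(q.is_contiguous(memory_format=torch.channels_last))
print(q.is_contiguous())  # likely True (default NCHW) due to the contiguous() call
```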
To Reproduce
pytorch/aten/src/ATen/quantized/Quantizer.cpp, line 577 in bd3c6e8:
qtensor = qtensor.contiguous();
Steps to reproduce the behavior:
- Construct a residual add path where one operand comes from a quantized conv (qconv) and the other comes directly from quantizing the input (see the sketch below).
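A hedged reproduction sketch for the residual-add path; nn.quantized.Conv2d and torch.ops.quantized.add are existing eager-mode quantized ops, but the shapes, scales, and zero points here are illustrative assumptions:

```python
import torch
import torch.nn.quantized as nnq

# Quantize a channels-last input directly.
x = torch.randn(1, 8, 16, 16).to(memory_format=torch.channels_last)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

# One branch goes through a quantized conv (default-initialized weights are fine
# for observing memory formats), which typically produces a channels-last output.
conv = nnq.Conv2d(8, 8, kernel_size=3, padding=1)
qy = conv(qx)

# Residual add: one operand from qconv (NHWC), the other straight from quantize,
# which may have been forced back to NCHW by the contiguous() call.
out = torch.ops.quantized.add(qy, qx, 1.0, 0)

print(qy.is_contiguous(memory_format=torch.channels_last))
print(qx.is_contiguous(memory_format=torch.channels_last))
```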
Expected behavior
Quantize and dequantize should preserve the input tensor's memory format (e.g., channels-last) instead of forcing it back to the default NCHW-contiguous layout.
Environment
Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (conda, pip, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
Additional context
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo