🐛 Bug
repro (a minimal code sketch follows the list):
1. have a model with `nn.BatchNorm2d` which is not intended to be fused
2. run `torch.quantization.prepare_qat` on the model to prepare for QAT
3. run `torch.nn.SyncBatchNorm.convert_sync_batchnorm` on the model to prepare for DDP
4. run `torch.quantization.convert_qat` to sub in the quantized modules
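A minimal repro sketch under a few assumptions: `BNModel` is a hypothetical model with a standalone BN, the `'fbgemm'` qconfig is an arbitrary choice, and the final step uses `torch.quantization.convert` to stand in for the convert step described above:

```python
import torch
import torch.nn as nn

class BNModel(nn.Module):
    # hypothetical model: a standalone BatchNorm2d that is not fused with a conv
    def __init__(self):
        super().__init__()
        self.bn = nn.BatchNorm2d(8)

    def forward(self, x):
        return self.bn(x)

model = BNModel()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')  # assumed backend
torch.quantization.prepare_qat(model, inplace=True)

# step 3: swap BatchNorm2d -> SyncBatchNorm before wrapping in DDP
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)

# step 4: the swapped-in SyncBatchNorm no longer carries a qconfig, so it is skipped here
converted = torch.quantization.convert(model.eval())
print(type(converted.bn))  # still a float SyncBatchNorm, not a quantized module
```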
Currently step 3 does not carry over the qconfig, so the BNs never get quantized in step 4. We should make sure that:
a. the qconfig survives the swaps, and
b. either convert works with SyncBN directly, or there is a utility to go from SyncBN back to BN (tracked in #41081).
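For direction (a), one option is to copy the qconfig attribute onto the replacement module when the BN is swapped for SyncBN. A minimal sketch of that idea; `_swap_bn_for_syncbn` is a hypothetical helper that paraphrases the per-module swap done by `convert_sync_batchnorm`, not an exact copy of it:

```python
import torch

def _swap_bn_for_syncbn(module, process_group=None):
    # simplified sketch of the recursive BatchNorm -> SyncBatchNorm swap
    module_output = module
    if isinstance(module, torch.nn.modules.batchnorm._BatchNorm):
        module_output = torch.nn.SyncBatchNorm(
            module.num_features, module.eps, module.momentum,
            module.affine, module.track_running_stats, process_group,
        )
        if module.affine:
            module_output.weight = module.weight
            module_output.bias = module.bias
        module_output.running_mean = module.running_mean
        module_output.running_var = module.running_var
        module_output.num_batches_tracked = module.num_batches_tracked
        # proposed fix for (a): carry the quantization config over to the new module
        # so the later convert step still sees it
        if hasattr(module, "qconfig"):
            module_output.qconfig = module.qconfig
    for name, child in module.named_children():
        module_output.add_module(name, _swap_bn_for_syncbn(child, process_group))
    return module_output
```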
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo