Commit 19971a9

Update on "[ao] making _is_activation_post_process private with BC"
The same function existed in both observer and quantize; it is now consolidated into a single function.

Note: this is a recreation of D40709276, which caused several breakages due to not maintaining BC for models with cached code containing calls to the old function name.

Differential Revision: [D41793604](https://our.internmc.facebook.com/intern/diff/D41793604/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D41793604/)!

[ghstack-poisoned]
2 parents 450158d + 01ef68e commit 19971a9
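
For context, here is a minimal sketch of the "private with BC" pattern the commit message describes. The old public name `is_activation_post_process`, the base classes used in the check, and the exact implementation are assumptions for illustration, not the actual PyTorch code:

```python
# Minimal sketch of the consolidation + BC pattern (illustrative only;
# the check below and the old public name are assumptions).
import torch
from torch.ao.quantization import FakeQuantizeBase, ObserverBase


def _is_activation_post_process(module: torch.nn.Module) -> bool:
    # Single, private implementation shared by observer and quantize:
    # an "activation post process" is an observer or fake-quantize module.
    return isinstance(module, (ObserverBase, FakeQuantizeBase))


# Keep the old public name bound to the new private function so models with
# cached code that still call it do not break.
is_activation_post_process = _is_activation_post_process
```

The point of the alias is that previously serialized or cached code referring to the old name keeps resolving to the same logic, while new code uses the private name.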

File tree

1 file changed: +0 additions, −1 deletion

torch/ao/quantization/fx/utils.py

Lines changed: 0 additions & 1 deletion
@@ -25,7 +25,6 @@
 from torch.ao.quantization.stubs import DeQuantStub
 from torch.ao.quantization.utils import (
     activation_is_statically_quantized,
-    is_per_tensor,
 )
 from torch.ao.quantization.observer import _is_activation_post_process
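
As a hypothetical usage sketch of the consolidated helper imported above: the function name `collect_activation_post_process_names` and its body are illustrative, not code from `utils.py`; only the import is taken from the diff:

```python
# Hypothetical example of using the consolidated private helper; the import
# is the one shown in the diff above, the surrounding function is illustrative.
import torch
from torch.ao.quantization.observer import _is_activation_post_process


def collect_activation_post_process_names(model: torch.nn.Module) -> list:
    # Walk the module tree and record submodules recognized as activation
    # post-process modules (observers / fake-quantize), e.g. after preparing
    # a model for quantization.
    return [
        name
        for name, submodule in model.named_modules()
        if _is_activation_post_process(submodule)
    ]
```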
