[pt1][quant] Add dequantize_linear for JIT pass #20107
Conversation
Differential Revision: D15094174 Differential Version: 80838655
Differential Revision: D15137838 Differential Version: 80919404
Differential Revision: D15150715 Differential Version: 80996764
Differential Revision: D15202187 Differential Version: 81167463
Please run clang-format on the .h/.cpp files.
Dumb question: why do you need this if you have as_quantized_tensor in the following diff? Also, how is it going to be used in the JIT pass? Just curious.
@dzhulgakov In the JIT pass, we want explicit scale and zero_point arguments in dequantize so that we don't need to traverse back through the graph to find these arguments in the matching quantize_linear node. We don't expect this function to be used by users; it is for the JIT pass only.
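To illustrate why explicit arguments help, here is a minimal sketch of affine dequantization semantics in plain Python (not the actual ATen kernel; the function name and values are illustrative). Because scale and zero_point are passed directly, a graph rewrite can emit this op locally without tracing back to the quantize_linear node that produced the tensor:

```python
def dequantize_linear(q_int, scale, zero_point):
    """Sketch of affine dequantization: x = (q - zero_point) * scale.

    q_int      -- list of integer quantized values (e.g. uint8 repr)
    scale      -- float scale used at quantization time
    zero_point -- integer offset mapping to real 0.0
    """
    return [(q - zero_point) * scale for q in q_int]

# Hypothetical round trip: uint8 values quantized with scale=0.5, zero_point=128
quantized = [130, 128, 126]
print(dequantize_linear(quantized, 0.5, 128))  # -> [1.0, 0.0, -1.0]
```

The design trade-off mirrors the comment above: carrying scale and zero_point redundantly in the op signature costs a little graph size but makes each dequantize node self-contained for pattern matching.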
This pull request has been merged in cca923c.
Summary: Pull Request resolved: pytorch/pytorch#20107
att
Reviewed By: nishantpdce
Differential Revision: D15202187
fbshipit-source-id: 7d6274a67fcca695c0425587f35046fecbc2ccdc
Stack:
:white_circle: #20656 [pt1][quant] int_repr for different quantized types 💛
:black_circle: #20107 [pt1][quant] Add dequantize_linear for JIT pass 💚
att
Differential Revision: D15202187