
Conversation

@jerryzh168 (Contributor) commented on Jun 14, 2019:

Stack:
    - #21767 [qat] Add FakeQuantize Module 💚

Adding FakeQuantize Module for quantization aware training.

Differential Revision: D15728503
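
A minimal sketch of the idea, for readers skimming the thread (illustrative only, not the code added in this PR; the real module drives scale/zero_point from an observer and uses a straight-through estimator in backward):

import torch
import torch.nn as nn

class FakeQuantizeSketch(nn.Module):
    """Quantize-then-dequantize in forward() so training sees the rounding
    and clamping error of int8 inference while every tensor stays float."""

    def __init__(self, quant_min=0, quant_max=255):
        super().__init__()
        self.quant_min = quant_min
        self.quant_max = quant_max
        self.scale = 1.0      # would normally come from an observer
        self.zero_point = 0
        self.enabled = True

    def calculate_qparams(self):
        # Same signature as an observer: hand back (scale, zero_point).
        return self.scale, self.zero_point

    def forward(self, x):
        if not self.enabled:
            return x
        # Round to the integer grid, clamp to [quant_min, quant_max],
        # then map back to float (no straight-through estimator here).
        q = torch.clamp(torch.round(x / self.scale) + self.zero_point,
                        self.quant_min, self.quant_max)
        return (q - self.zero_point) * self.scale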

@dzhulgakov (Collaborator) left a comment:

Looks close

  Y.copy_(X); // We might want to just return the input here.
  return Y;
}
TORCH_CHECK(self.scalar_type() == ScalarType::Float);

@dzhulgakov (Collaborator):
In the sense of generality - is anything preventing you from supporting other float types?

@jerryzh168 (Contributor, Author):
We only implemented quantize_linear and dequantize for float right now, but we could also do the double-to-float conversion when we convert to the quantized model. Not sure how valuable it is to support more types, though; do we train in double in practice?
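
For illustration only (not part of this PR), a model trained in float64 could simply be cast down before conversion:

import torch.nn as nn

model = nn.Linear(8, 8).double()  # hypothetically trained in float64
model = model.float()             # cast parameters/buffers to float32
# ...then run the usual float32 -> quantized conversion on `model`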

def enable(self):
    self.enable_fq = True

def calculate_qparams(self):

@dzhulgakov (Collaborator):
nit: why do you need this method? Is it part of the public API?

@jerryzh168 (Contributor, Author):
Yes, we are using the same signature as Observer.

@jerryzh168 (Contributor, Author):
We need this because of the conversion: in from_float, we'll call calculate_qparams on the observer (fake_quant). See later PRs for more context.
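
Roughly, that conversion step could look like the following (hypothetical names, shown with torch.quantize_per_tensor; the actual from_float is in the later PRs mentioned above):

import torch
import torch.nn as nn

def from_float_sketch(float_mod: nn.Linear, weight_fake_quant):
    # Observer and FakeQuantize expose the same calculate_qparams() signature,
    # so conversion can ask either one for the weight's quantization params.
    scale, zero_point = weight_fake_quant.calculate_qparams()
    return torch.quantize_per_tensor(
        float_mod.weight.detach(), float(scale), int(zero_point), torch.qint8)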

@jerryzh168 changed the base branch from export-D16185556 to master on July 11, 2019.

@dzhulgakov (Collaborator) left a comment:
build failures look legit

@dzhulgakov (Collaborator) left a comment:
Looks good to me modulo one tiny issue.

    self.enabled = True

def disable(self):
    self.enable(False)

@dzhulgakov (Collaborator):
does it work? I think you're missing the arg for enable
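
Presumably the intended shape is something like this (a sketch of the pattern being pointed at, not the committed fix):

def enable(self, enabled=True):
    self.enabled = enabled
    return self

def disable(self):
    return self.enable(False)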

@facebook-github-bot (Contributor) commented:
This pull request has been merged in f7de9be.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 15, 2019
Summary:
Pull Request resolved: pytorch/pytorch#21767

Adding FakeQuantize Module for quantization aware training.

Reviewed By: dzhulgakov

Differential Revision: D15728503

fbshipit-source-id: 2a9a6a362812ede3deac42b93dddca35987bd8e6
@ezyang deleted the export-D15728503 branch on July 19, 2019.