
Conversation

Contributor

@jerryzh168 jerryzh168 commented Mar 25, 2022

Stack from ghstack (oldest at bottom):

Summary:
When both inputs are scalars, FX tracing calculates the result directly instead of generating an op in the FX graph. As a result, num_tensor_args is always at least 1 for the binary ops that do appear in the graph, so input_output_observed always returns True for BinaryQuantizeHandler.
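
For illustration (not part of this PR), here is a minimal sketch of the constant-folding behavior described above; the module M is a made-up example:

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        # Both operands are Python scalars, so this is evaluated at trace
        # time; no add node for it appears in the traced graph.
        y = 2 + 3
        # One operand is a Proxy (the tensor input), so this add is
        # recorded as a call_function node in the graph.
        return x + y

traced = torch.fx.symbolic_trace(M())
print(traced.graph)
# Expected: a placeholder for x, a single add(x, 5), and an output node.
```

Because no node is ever created for an all-scalar binary op, the num_tensor_args == 0 case cannot show up in a traced graph, which is why input_output_observed never gets a chance to return False here.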

We will remove the input_output_observed method once dynamic quantization in qconfig is properly supported.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Differential Revision: D35153531

jerryzh168 added a commit that referenced this pull request Mar 25, 2022
ghstack-source-id: 814441c
Pull Request resolved: #74776
Contributor

facebook-github-bot commented Mar 25, 2022

💊 CI failures summary and remediations

As of commit c6c16d1 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


Contributor Author

@jerryzh168 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@jerryzh168 jerryzh168 requested review from andrewor14 and vkuzo and removed request for vkuzo March 25, 2022 20:11
facebook-github-bot pushed a commit that referenced this pull request Mar 28, 2022
…#74776)

Summary:
Pull Request resolved: #74776

Imported from OSS

Reviewed By: albanD

Differential Revision: D35153531

fbshipit-source-id: fa777429eeb64a6a78a98f8d8dcd9e0903c8b209
@github-actions
Contributor

Hey @jerryzh168.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@facebook-github-bot facebook-github-bot deleted the gh/jerryzh168/758/head branch April 1, 2022 14:17