
Conversation

jerryzh168 (Contributor) commented Mar 25, 2022

Stack from ghstack (oldest at bottom):

Summary:
We have simplified the way we insert observers. add_scalar now behaves the same way as general_tensor_value ops, which means we only need to keep is_general_tensor_value_op; the other methods can be removed.
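
For reference, a minimal sketch of the kind of predicate being kept (the method name follows the torch/ao/quantization/fx convention, but the classes and bodies here are illustrative assumptions, not the actual source):

class QuantizeHandler:
    def is_general_tensor_value_op(self) -> bool:
        # Default: the op may change value ranges, so its output
        # needs its own observer.
        return False

class GeneralTensorValueOpQuantizeHandler(QuantizeHandler):
    def is_general_tensor_value_op(self) -> bool:
        # Ops like reshape or transpose pass tensor values through
        # unchanged, so the output can share the input's observer.
        return True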

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Differential Revision: D35153532

facebook-github-bot commented Mar 25, 2022

💊 CI failures summary and remediations

As of commit 77c0557 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


jerryzh168 (Contributor, Author)

@jerryzh168 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

jerryzh168 requested review from andrewor14 and vkuzo on March 25, 2022 at 20:11
vkuzo commented Mar 28, 2022

We have simplified the way we insert observers. add_scalar now behaves the same way as general_tensor_value ops

how does it work for add_scalar now?

jerryzh168 (Contributor, Author)

We have simplified the way we insert observers. add_scalar now behaves the same way as general_tensor_value ops

how does it work for add_scalar now?

When there is one tensor input, the output shares the input's observer; when there are two tensor inputs, we create a new observer for the output.
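
As a rough, self-contained sketch of that rule (Observer and output_observer are illustrative stand-in names, not the real torch.ao.quantization API):

class Observer:
    # Stand-in for a quantization observer (e.g. MinMaxObserver).
    pass

def output_observer(tensor_input_observers):
    # tensor_input_observers: observers attached to the node's tensor
    # args; scalar args contribute nothing here.
    if len(tensor_input_observers) == 1:
        # add_scalar-style call (one tensor, one scalar): reuse the
        # input's observer for the output, like other
        # general_tensor_value ops.
        return tensor_input_observers[0]
    # Two tensor inputs: the output gets a fresh observer.
    return Observer()

obs_x, obs_y = Observer(), Observer()
assert output_observer([obs_x]) is obs_x               # x + 2: shared
assert output_observer([obs_x, obs_y]) is not obs_x    # x + y: new observer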

facebook-github-bot pushed a commit that referenced this pull request Mar 28, 2022
Summary:
Pull Request resolved: #74775

We have simplified the way we insert observers. add_scalar now behaves the same way as general_tensor_value ops, which means we only need to keep is_general_tensor_value_op; the other methods can be removed.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D35153532

fbshipit-source-id: 2d17189e167a9932bdbf5ae46b3ced25b7128c2f
github-actions commented:

Hey @jerryzh168.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot deleted the gh/jerryzh168/757/head branch on April 1, 2022 at 14:17