quantized conv: add support for graph mode BC testing, and increase coverage #43524

Conversation
Summary: Adding coverage for quantized conv without bias in the backwards compatibility tests

Test Plan:
```
python test/test_quantization.py TestSerialization.test_conv2d_nobias
```
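For context, here is a minimal sketch of the eager mode pattern this test exercises; the file names, shapes, weight values, and helper name below are illustrative assumptions, not the PR's actual TestSerialization code.

```python
# A minimal sketch, assuming hypothetical file names and shapes; this is not
# the PR's TestSerialization helper, only an illustration of the eager mode
# BC pattern: save the state_dict, input, and expected output once, then
# reload on a later version and compare numerics.
import torch
import torch.nn.quantized as nnq

def check_conv2d_nobias_roundtrip():
    qconv = nnq.Conv2d(3, 3, kernel_size=3, bias=False)  # quantized conv, no bias
    w = torch.quantize_per_tensor(torch.randn(3, 3, 3, 3), 0.1, 0, torch.qint8)
    qconv.set_weight_bias(w, None)
    x = torch.quantize_per_tensor(
        torch.randn(1, 3, 8, 8), scale=0.1, zero_point=0, dtype=torch.quint8)

    # First run: persist the serialized format and the expected numerics.
    torch.save(x, "conv2d_nobias.input.pt")
    torch.save(qconv.state_dict(), "conv2d_nobias.state_dict.pt")
    torch.save(qconv(x), "conv2d_nobias.expected.pt")

    # Later run: load into a fresh module and verify outputs still match.
    x_loaded = torch.load("conv2d_nobias.input.pt")
    qconv_loaded = nnq.Conv2d(3, 3, kernel_size=3, bias=False)
    qconv_loaded.load_state_dict(torch.load("conv2d_nobias.state_dict.pt"))
    expected = torch.load("conv2d_nobias.expected.pt")
    assert torch.equal(qconv_loaded(x_loaded).int_repr(), expected.int_repr())
```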
```python
module_id = self.__class__.__module__
munged_id = remove_prefix(self.id(), module_id + ".")
test_file = os.path.realpath(sys.modules[module_id].__file__)
base_name = os.path.join(os.path.dirname(test_file),
                         "serialized",
                         munged_id)

subname_output = ""
if subname:
    base_name += "_" + subname
    subname_output = " ({})".format(subname)

input_file = base_name + ".input.pt"
state_dict_file = base_name + ".state_dict.pt"
scripted_module_file = base_name + ".scripted.pt"
traced_module_file = base_name + ".traced.pt"
expected_file = base_name + ".expected.pt"
```
Yeah, there is potential to reuse part of this code. It didn't seem worth it at first glance: it would increase complexity, and the benefit would be small since not all of this can be reused. I can take another look if you prefer.
I don't have a strong preference; I just feel that, in general, we should not be copying code.
… increase coverage" Summary: 1. adds support for testing BC on data format and numerics for graph mode quantized modules 2. using the above, adds coverage for quantized conv2d on graph mode Test Plan: ``` python test/test_quantization.py TestSerialization.test_conv2d_nobias python test/test_quantization.py TestSerialization.test_conv2d_graph python test/test_quantization.py TestSerialization.test_conv2d_nobias_graph ``` Reviewers: Subscribers: Tasks: Tags: [ghstack-poisoned]
… increase coverage" Summary: 1. adds support for testing BC on data format and numerics for graph mode quantized modules 2. using the above, adds coverage for quantized conv2d on graph mode Test Plan: ``` python test/test_quantization.py TestSerialization.test_conv2d_nobias python test/test_quantization.py TestSerialization.test_conv2d_graph python test/test_quantization.py TestSerialization.test_conv2d_nobias_graph ``` Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D23335222](https://our.internmc.facebook.com/intern/diff/D23335222) [ghstack-poisoned]
💊 CI failures summary (Dr. CI), as of commit b2e469d: 1 failed extra GitHub check.
z-a-f left a comment:

LGTM
```python
torch.save(scripted_q(input_tensor), expected_file)

input_tensor = torch.load(input_file)
qmodule_scripted = torch.jit.load(scripted_module_file)
```
Should we also have a test to catch regressions in the size of the quantized module files? We have had one issue before where the quantized model was storing both fp32 and quantized weights.
I agree, that would be valuable as well. It would probably be good for that to live somewhere else, since there are things which don't break BC but change model size, and vice versa. For this PR stack we are doing manual testing of file size, and it's a reduction.
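For reference, one hedged way such a size check could look; the helper names and byte threshold are illustrative and not part of this PR:

```python
# Hedged sketch of a separate size-regression check. A stored fp32-plus-quantized
# weight duplication would show up as a roughly doubled serialized size.
import io
import torch

def serialized_size_bytes(scripted_module):
    # Serialize to an in-memory buffer and report the byte count.
    buffer = io.BytesIO()
    torch.jit.save(scripted_module, buffer)
    return buffer.getbuffer().nbytes

def check_size_regression(scripted_module, max_bytes):
    size = serialized_size_bytes(scripted_module)
    assert size <= max_bytes, f"serialized module is {size} bytes, expected <= {max_bytes}"
```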
```python
qmodule_scripted = torch.jit.load(scripted_module_file)
qmodule_traced = torch.jit.load(traced_module_file)
expected = torch.load(expected_file)
self.assertEqual(qmodule_scripted(input_tensor), expected, atol=prec)
```
Is there any value in comparing graph with eager expected outputs?
There is definitely value in that. It looks like that's already done in test_quantize_jit.py, so between those tests and the fact that both eager and graph mode are tested here for BC numerics, the eager vs graph condition is covered indirectly.
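For completeness, a hedged sketch of the kind of direct eager-vs-graph numerics check that test_quantize_jit.py performs; the function name and tolerance here are assumptions:

```python
# Hedged sketch: directly compare an eager quantized module against its
# scripted (graph mode) counterpart on the same quantized input.
import torch

def check_eager_vs_graph(qmodule, input_tensor, atol=0.0):
    scripted = torch.jit.script(qmodule)
    eager_out = qmodule(input_tensor).dequantize()
    graph_out = scripted(input_tensor).dequantize()
    assert torch.allclose(eager_out, graph_out, atol=atol)
```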
Codecov Report

|          | gh/vkuzo/131/base | #43524 | +/-    |
|----------|-------------------|--------|--------|
| Coverage | 69.34%            | 69.33% | -0.01% |
| Files    | 378               | 378    |        |
| Lines    | 46698             | 46698  |        |
| Hits     | 32381             | 32380  | -1     |
| Misses   | 14317             | 14318  | +1     |

Continue to review the full report at Codecov.
This pull request has been merged in af4ecb3.
Stack from ghstack:

Summary:
1. adds support for testing BC on data format and numerics for graph mode quantized modules
2. using the above, adds coverage for quantized conv2d on graph mode

Test Plan:
python test/test_quantization.py TestSerialization.test_conv2d_nobias
python test/test_quantization.py TestSerialization.test_conv2d_graph
python test/test_quantization.py TestSerialization.test_conv2d_nobias_graph

Differential Revision: D23335222