Conversation


@jerryzh168 jerryzh168 commented Aug 5, 2020

Stack from ghstack:

Summary:
Previously we had a qconfig propagation list and only attached a qconfig to modules
in that list. This works when everything is quantized in the form of a module,
but now that we are expanding quantization to functional/torch ops, we need to
attach a qconfig to all modules.

Test Plan:
regression tests

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D22939453
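The propagation change described above can be sketched as follows. This is a minimal illustration with hypothetical names and a pure-Python stand-in for torch.nn.Module; the real entry point is torch.quantization.propagate_qconfig_, which also honors per-module overrides via a qconfig_dict:

```python
class Module:
    """Minimal stand-in for torch.nn.Module, just enough for this sketch."""
    def __init__(self):
        self._modules = {}

    def add_module(self, name, child):
        self._modules[name] = child

    def children(self):
        return self._modules.values()


def propagate_qconfig(module, qconfig):
    # Before this PR: the qconfig was attached only if type(module) was in a
    # propagation white list. After: it is attached to every module, so that
    # functional/torch ops called in a module's forward() can be quantized.
    module.qconfig = qconfig
    for child in module.children():
        propagate_qconfig(child, qconfig)


root = Module()
root.add_module("fc", Module())
propagate_qconfig(root, "dummy_qconfig")
print(root.qconfig, root._modules["fc"].qconfig)  # both set to dummy_qconfig
```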

jerryzh168 added a commit that referenced this pull request Aug 5, 2020

ghstack-source-id: 7b89e56
Pull Request resolved: #42576

dr-ci bot commented Aug 5, 2020

💊 CI failures summary and remediations

As of commit 998586a (more details on the Dr. CI page):


  • 3/3 failures possibly* introduced in this PR
    • 2/3 non-CircleCI failure(s)

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_windows_vs2019_py36_cuda11.0_build (1/1)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

FAILED: caffe2/CMakeFiles/torch_cpu.dir/operators/hard_sigmoid_op.cc.obj
caused by: Failed to read response header
caused by: failed to fill whole buffer
FAILED: caffe2/CMakeFiles/torch_cpu.dir/operators/gather_fused_8bit_rowwise_op.cc.obj
error: failed to execute compile
caused by: error reading compile response from server
caused by: Failed to read response header
caused by: failed to fill whole buffer
ninja: build stopped: subcommand failed.
-- Building version 1.7.0a0+998586a

ci.pytorch.org: 2 failed




- if type(module) in white_list:
-     module.qconfig = module_qconfig
+ module.qconfig = module_qconfig

How do we make sure only the ones that are allowed are getting the qconfig?

Contributor Author

Yeah, I realized that and just updated the diff. Ideally we don't have people using this "white_list" API; otherwise we'll need to make changes at the call sites.

Contributor

The numeric suite tests might be impacted by this change, please check if they are passing prior to landing.


vkuzo commented Aug 6, 2020

btw, what's the context for this change? Is this for GraphModule, for adding finer grain control such as "don't quantize stateless aten op X", something else?

jerryzh168 added a commit that referenced this pull request Aug 6, 2020

ghstack-source-id: 01614db
Pull Request resolved: #42576
@jerryzh168

btw, what's the context for this change? Is this for GraphModule, for adding finer grain control such as "don't quantize stateless aten op X", something else?

Yeah, this is for GraphModule. We need to attach a qconfig to the parent module to be able to quantize the ops called in the forward of that module. For example:

import torch
import torch.nn.functional as F

class Linear(torch.nn.Module):
    def __init__(self):
        super().__init__()

    # the weight is a forward argument, not a child module or parameter,
    # so F.linear can only be quantized via the qconfig on this module
    def forward(self, x, weight):
        return F.linear(x, weight)
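To make that concrete, here is a pure-Python sketch (stand-in class, hypothetical names) of why a type-based white list cannot reach a functional op: the module has no children at all, so nothing in it matches a list of module types, and the qconfig must sit on the parent module itself.

```python
class FunctionalLinear:
    """Stand-in for the Linear module above: the weight arrives as a
    forward argument, so there is no child module carrying parameters."""
    def __init__(self):
        self._modules = {}  # no children at all

    def forward(self, x, weight):
        # stands in for torch.nn.functional.linear(x, weight), bias omitted
        return [sum(a * b for a, b in zip(x, row)) for row in weight]


m = FunctionalLinear()
# A white list keyed on submodule types finds nothing to tag here, so the
# qconfig has to be attached to the parent module m itself.
m.qconfig = "dummy_qconfig"
print(m.forward([1.0, 2.0], [[3.0, 4.0]]))  # → [11.0]
```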

jerryzh168 added a commit that referenced this pull request Aug 6, 2020

ghstack-source-id: 38b8608
Pull Request resolved: #42576
jerryzh168 added a commit that referenced this pull request Aug 7, 2020

ghstack-source-id: 545c486
Pull Request resolved: #42576
  if hasattr(module, 'qconfig') and module.qconfig is not None and \
-         len(module._modules) == 0 and not isinstance(module, torch.nn.Sequential):
+         len(module._modules) == 0 and not isinstance(module, torch.nn.Sequential) \
+         and type(module) in DEFAULT_QCONFIG_PROPAGATE_WHITE_LIST:
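The condition in that diff can be sketched as a standalone predicate. This is a simplified illustration with stand-in classes (the white list is passed in as a parameter rather than hard-coded): observers are added only to leaf modules that carry a qconfig, are not containers, and whose type is on the list, even though the qconfig itself is now propagated everywhere.

```python
class Sequential:
    """Stand-in for torch.nn.Sequential (containers are skipped)."""
    pass


class Leaf:
    """Stand-in for a leaf module; qconfig is set only when provided."""
    def __init__(self, qconfig=None):
        self._modules = {}
        if qconfig is not None:
            self.qconfig = qconfig


def needs_observer(module, white_list):
    # Mirrors the condition in the diff above: leaf module (no children),
    # has a qconfig, is not a container, and its type is white-listed.
    return (getattr(module, 'qconfig', None) is not None
            and len(module._modules) == 0
            and not isinstance(module, Sequential)
            and type(module) in white_list)


print(needs_observer(Leaf("dummy_qconfig"), {Leaf}))  # → True
print(needs_observer(Leaf(), {Leaf}))                 # → False: no qconfig
```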
Contributor

We should be using white_list instead of DEFAULT_QCONFIG_PROPAGATE_WHITE_LIST

Contributor

@raghuramank100 raghuramank100 left a comment

Had a few questions. Thanks!

jerryzh168 added a commit that referenced this pull request Aug 7, 2020

ghstack-source-id: b8287c7
Pull Request resolved: #42576
jerryzh168 added a commit that referenced this pull request Aug 11, 2020

ghstack-source-id: 4629ad4
Pull Request resolved: #42576
@facebook-github-bot

This pull request has been merged in ac93d45.
