Conversation
@ezyang ezyang commented Mar 22, 2022

Stack from ghstack (oldest at bottom):

This PR adds support for quantized tensors with an "unknown quantizer",
which means that we can use standard APIs like torch.empty to allocate
quantized tensors, with the understanding that the quantizer will be set
later. This makes meta functions applicable to quantized tensors (they
allocate with the unknown quantizer, and the kernel sets the real
quantizer later) and fixes a bug David Dang reported where structured
kernels gave a confusing error message when called with quantized
inputs.

This is not yet complete support for quantized structured kernels,
because I haven't actually tried porting any of the quantized
implementations to structured; qadd is probably a good choice to try
first, as it does its broadcasting implementation using TensorIterator.
My goal here is just to show that the error message is better.

See also #52680

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D35317441
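As a hedged illustration of the workflow the description outlines (the exact runtime behavior is an assumption here, not taken from the PR's tests): torch.empty can allocate a quantized tensor directly, with the real quantizer to be attached by a kernel afterwards.

```python
import torch

# Sketch of what the PR enables (behavior assumed): torch.empty on a
# quantized dtype allocates a tensor with an "unknown quantizer"; a
# kernel is expected to set the real quantizer before the values are
# interpreted.
t = torch.empty(2, 2, dtype=torch.qint8)
print(t.is_quantized, t.dtype)
```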

ezyang added a commit that referenced this pull request Mar 22, 2022
ghstack-source-id: aa7272d
Pull Request resolved: #74560
@ezyang ezyang requested review from bdhirsh and dzdang March 22, 2022 17:39

facebook-github-bot commented Mar 22, 2022


💊 CI failures summary and remediations

As of commit 5a934fd (more details on the Dr. CI page):


  • 18/18 failures introduced in this PR

🕵️‍♀️ 18 failures not recognized by patterns:

The following CI failures may be due to changes from the PR
All 18 failing jobs were GitHub Actions `pull` workflow builds (linux-xenial/bionic, ROCm, CUDA, Windows, Android NDK, Vulkan, ASAN, ONNX, mobile, and Bazel variants), each failing at the "Setup SSH" step.

This comment was automatically generated by Dr. CI.

```diff
  runtime_empty_supported_check = ""
- elif backend_index.dispatch_key == DispatchKey.CompositeExplicitAutograd:
+ elif backend_index.dispatch_key in (
+         DispatchKey.CompositeExplicitAutograd, DispatchKey.QuantizedCPU, DispatchKey.QuantizedCUDA):
```
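In plain-Python terms (a sketch using a mocked DispatchKey enum, not torchgen's real types), the hunk widens a single-key comparison into membership in a tuple of keys:

```python
from enum import Enum, auto

class DispatchKey(Enum):  # mock of torchgen's DispatchKey, for illustration
    CompositeExplicitAutograd = auto()
    QuantizedCPU = auto()
    QuantizedCUDA = auto()
    CPU = auto()

# Keys whose generated structured kernels allocate outputs via at::empty.
# Before this hunk only CompositeExplicitAutograd qualified; the quantized
# keys are now included as well.
EMPTY_VIA_AT_EMPTY = (
    DispatchKey.CompositeExplicitAutograd,
    DispatchKey.QuantizedCPU,
    DispatchKey.QuantizedCUDA,
)

def uses_at_empty(key):
    return key in EMPTY_VIA_AT_EMPTY
```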
A reviewer (Contributor) commented:
tiny nit if we care about the perf (probably not necessary, just calling it out): we could avoid the dispatcher hop here from calling at::empty, but we'd have to abide by the naming convention and name the native kernels at::native::empty_quantizedcpu/cuda

ezyang (Author) replied:

oh this used to not be easy to do but now it is easy

empty_impl = "at::empty"
empty_strided_impl = "at::empty_strided"
runtime_empty_supported_check = """\
if (!c10::detail::backend_supports_empty_operator(options)) {{
A reviewer (Contributor) commented:

thanks for killing this :')

@ezyang ezyang requested a review from jerryzh168 March 22, 2022 23:12
```python
r"Registration to both CompositeImplicitAutograd and CompositeExplicitAutograd is not allowed"):
dispatcher.register(["CompositeExplicitAutograd", "CompositeImplicitAutograd"])

def test_quantized_structured_not_implemented(self):
```
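The guard exercised in the excerpt above can be sketched in isolation (this Dispatcher is a hypothetical stand-in for the test file's fixture, not the real class):

```python
class Dispatcher:
    """Hypothetical stand-in illustrating the dual-registration guard."""

    BOTH = {"CompositeExplicitAutograd", "CompositeImplicitAutograd"}

    def __init__(self):
        self.registered = set()

    def register(self, keys):
        # Registering an operator to both composite aliases is ambiguous,
        # so it is rejected outright.
        if self.BOTH <= self.registered | set(keys):
            raise RuntimeError(
                "Registration to both CompositeImplicitAutograd and "
                "CompositeExplicitAutograd is not allowed")
        self.registered.update(keys)
```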
A reviewer (Contributor) commented:

can we add a test in https://github.com/pytorch/pytorch/blob/master/test/quantization/core/test_quantized_tensor.py#L142 for calling some methods (e.g. qscheme) on an unknown tensor as well?

ezyang (Author) replied:

sure
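A sketch of what the requested test might look like (assumed behavior throughout: whether qscheme() on an unknown-quantizer tensor returns a value or raises a descriptive error is exactly what the real test should pin down):

```python
import torch

def probe_unknown_quantizer_qscheme():
    # Allocate with an unknown quantizer via the plain factory function.
    t = torch.empty(4, dtype=torch.quint8)
    assert t.is_quantized
    try:
        return t.qscheme()      # may succeed ...
    except RuntimeError as e:
        return f"error: {e}"    # ... or fail with a clear message

result = probe_unknown_quantizer_qscheme()
```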

@ezyang ezyang mentioned this pull request Mar 26, 2022
@jerryzh168 (Contributor) left a review comment:

followups are non-blocking, will accept first

ezyang commented Apr 1, 2022

@pytorchbot merge this

pytorchmergebot (Collaborator) replied:
Merge failed due to Matched rule superuser, but it was not reviewed yet by any of: pbelevich, H-Huang, albanD, hlu1, jamesr66a, ...
Raised by https://github.com/pytorch/pytorch/actions/runs/2074378349

dzdang commented Apr 1, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Apr 5, 2022
Summary:
Pull Request resolved: #74560


Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D35317441

Pulled By: dzdang

fbshipit-source-id: ffb85b0e06ccbcc2b01052ca6760517684048b39
github-actions bot commented Apr 5, 2022

Hey @ezyang.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@ezyang ezyang added release notes: composability release notes category topic: not user facing topic category labels Apr 5, 2022
dzdang added a commit that referenced this pull request Apr 5, 2022
…tion_pad1d_quantized_cpu and"

Summary: With the introduction of structured kernel support for quantized tensors in
#74560, we are able to remove the dimension and output resizing code in
reflection_pad1d_out_template (this code is already present in reflection_pad1d),
as well as the implementation of reflection_pad1d_quantized_cpu.

This PR should introduce no functional changes.

Test plan:
```
python run_test.py
```

Differential Revision: [D35148152](https://our.internmc.facebook.com/intern/diff/D35148152)

[ghstack-poisoned]
dzdang added a commit that referenced this pull request Apr 5, 2022
…tion_pad1d_quantized_cpu, dimension and output resizing code in reflection_pad1d_out_template and implemented reflection_pad1d_out_quantized_cpu"

Summary: With the introduction of structured kernel support for quantized tensors in
#74560, we are able to remove the dimension and output resizing code in
reflection_pad1d_out_template; this code is already present in reflection_pad1d.
reflection_pad1d_quantized_cpu has also been removed, as quantized tensors can now
use reflection_pad1d after the changes in the linked PR.
reflection_pad1d_out_quantized_cpu was implemented for quantized tensors.

This PR should introduce no functional changes.

Test plan:
```
python run_test.py
```

Differential Revision: [D35148152](https://our.internmc.facebook.com/intern/diff/D35148152)

[ghstack-poisoned]
@facebook-github-bot facebook-github-bot deleted the gh/ezyang/1110/head branch April 8, 2022 14:17
dzdang added a commit that referenced this pull request Apr 12, 2022
…tch registration for max_pool2d & quantized_max_pool2d and implemented max_pool2d_with_indices_out_quantized_cpu"

Summary: This PR is part of a series of PRs addressing #54150, related to using the
dispatcher for calls to quantized backends instead of if/else conditionals.
This particular PR removes the is_quantized check from max_pool2d and implements a
quantized kernel for max_pool2d_with_indices.

This PR also introduces isnan() support for vectorized int tensors.

This PR relies on #74560, which introduces structured kernel support for quantized
tensors.

Test plan:
```
python test/test_quantization.py -k test_max_pool2d
```

Differential Revision: [D35420901](https://our.internmc.facebook.com/intern/diff/D35420901)

[ghstack-poisoned]
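Why isnan() support for vectorized int tensors is trivial but necessary, in a one-function sketch (pure Python standing in for the vectorized specialization, with invented helper names): integers can never be NaN, so the vectorized predicate is constant-false, which lets NaN-aware max-pool code be instantiated uniformly for float and integer-backed quantized element types.

```python
def isnan_int_lanes(lanes):
    # Integer lanes can never hold NaN, so the vectorized isnan for int
    # types simply reports all-false.
    return [False] * len(lanes)

def max_pool_1lane(values):
    # NaN-aware max reduction: with integer lanes the NaN filter is a no-op.
    finite = [v for v, bad in zip(values, isnan_int_lanes(values)) if not bad]
    return max(finite)
```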
dzdang added a commit that referenced this pull request Apr 12, 2022
…tch registration for max_pool1d & quantized_max_pool1d"

Summary: This PR is part of a series of PRs addressing #54150, related to using the
dispatcher for calls to quantized backends instead of if/else conditionals.
This particular PR removes the is_quantized check from max_pool1d and modifies
max_pool1d_impl to be compatible with int tensors.

This PR relies on #74560, which introduces structured kernel support for quantized
tensors, and on #72353.

Test plan:
```
python test/test_quantization.py -k test_max_pool1d
```

Differential Revision: [D35431831](https://our.internmc.facebook.com/intern/diff/D35431831)

[ghstack-poisoned]