Conversation

@dzdang
Contributor

@dzdang dzdang commented Feb 4, 2022

Stack from ghstack (oldest at bottom):

Summary: This PR is part of a series of PRs addressing #54150,
which covers routing calls to quantized backends through the dispatcher rather than through if/else conditionals.
This particular PR removes the is_quantized check from max_pool2d and implements a quantized
kernel for max_pool2d_with_indices.

This PR also introduces isnan() support for vectorized int tensors.

This PR relies on #74560, which introduces
structured kernel support for quantized tensors.
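The shape of the change this series is making can be sketched in plain Python. All names below are illustrative, not PyTorch's actual registration API; the real dispatcher registers C++ kernels per dispatch key (CPU, QuantizedCPU, ...):

```python
# Sketch of replacing an is_quantized if/else check with a dispatch table.
# Names are illustrative; PyTorch's real dispatcher works in C++.

def max_pool2d_cpu(x):
    return ("cpu_kernel", x)

def max_pool2d_quantized_cpu(x):
    return ("quantized_cpu_kernel", x)

# Before: a hard-coded branch inside the operator body.
def max_pool2d_branchy(x, is_quantized):
    if is_quantized:
        return max_pool2d_quantized_cpu(x)
    return max_pool2d_cpu(x)

# After: kernels are registered once under a dispatch key, so the
# operator body no longer special-cases quantized inputs.
DISPATCH = {
    "CPU": max_pool2d_cpu,
    "QuantizedCPU": max_pool2d_quantized_cpu,
}

def max_pool2d(x, dispatch_key):
    return DISPATCH[dispatch_key](x)
```

Adding a new backend then means registering one more entry rather than growing the if/else chain.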

Test plan:

python test/test_quantization.py -k test_max_pool2d

Differential Revision: D35420901

@pytorch-bot

pytorch-bot bot commented Feb 4, 2022

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/31399ac2b0ac71fd5f736663160e0da3c3272d36/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default
Add ciflow labels to this PR to trigger more builds:

Workflows | Labels (bold = enabled) | Status
Triggered Workflows
linux-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
linux-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-manywheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
linux-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk, ciflow/xla ✅ triggered
linux-bionic-rocm4.5-py3.7 ciflow/all, ciflow/default, ciflow/linux, ciflow/rocm, ciflow/trunk ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk ✅ triggered
linux-vulkan-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7-no-ops ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
windows-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
docker-builds ciflow/all, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk 🚫 skipped
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled 🚫 skipped
linux-xenial-cuda11.3-py3.7-gcc7-no-ops ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
parallelnative-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
periodic-win-vs2019-cuda11.5-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped

@facebook-github-bot
Contributor

facebook-github-bot commented Feb 4, 2022

💊 CI failures summary and remediations

As of commit 8ef7c8c (more details on the Dr. CI page):


  • 9/9 failures introduced in this PR

🕵️ 9 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 1, 2, linux.4xlarge.nvidia.gpu) (1/9)

Step: "Test"

2022-04-15T03:45:34.0549781Z AssertionError: can only test a child process
2022-04-15T03:45:34.0096384Z   test_multi_epochs_reproducibility (__main__.TestDataLoaderPersistentWorkers) ... ok (0.062s)
2022-04-15T03:45:34.0113738Z   test_multiple_dataloaders (__main__.TestDataLoaderPersistentWorkers) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/74598 for allplatform(s) . If you're seeing this on your local machine and would like to enable this test, please make sure IN_CI is not set and you are not using the flag --import-disabled-tests. (0.001s)
2022-04-15T03:45:34.0537967Z   test_multiprocessing_contexts (__main__.TestDataLoaderPersistentWorkers) ... Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7fac3f3b03b0>
2022-04-15T03:45:34.0538563Z Traceback (most recent call last):
2022-04-15T03:45:34.0539207Z   File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1362, in __del__
2022-04-15T03:45:34.0543771Z     self._shutdown_workers()
2022-04-15T03:45:34.0544722Z   File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1345, in _shutdown_workers
2022-04-15T03:45:34.0547682Z     if w.is_alive():
2022-04-15T03:45:34.0548315Z   File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 151, in is_alive
2022-04-15T03:45:34.0549174Z     assert self._parent_pid == os.getpid(), 'can only test a child process'
2022-04-15T03:45:34.0549781Z AssertionError: can only test a child process
2022-04-15T03:45:36.6665999Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2022-04-15T03:45:36.6667032Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2022-04-15T03:45:36.6700188Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2022-04-15T03:45:39.3169875Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2022-04-15T03:45:39.3221345Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2022-04-15T03:45:39.3222025Z [W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
2022-04-15T03:45:44.3811967Z ok (10.370s)
2022-04-15T03:45:44.3834753Z   test_multiprocessing_iterdatapipe (__main__.TestDataLoaderPersistentWorkers) ... skip: Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/74498 for allplatform(s) . If you're seeing this on your local machine and would like to enable this test, please make sure IN_CI is not set and you are not using the flag --import-disabled-tests. (0.002s)
2022-04-15T03:45:45.4457734Z   test_no_segfault (__main__.TestDataLoaderPersistentWorkers) ... ok (1.062s)
2022-04-15T03:45:45.4492276Z   test_numpy (__main__.TestDataLoaderPersistentWorkers) ... ok (0.003s)

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (default, 2, 2, linux.2xlarge) (2/9)

Step: "Test"

2022-04-15T03:03:02.5220253Z RuntimeError: test_jit failed!
2022-04-15T03:03:02.2082871Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_warn.TestWarn-20220415030101.xml
2022-04-15T03:03:02.2089639Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_with.TestWith-20220415030101.xml
2022-04-15T03:03:02.2096388Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_data_parallel.TestDataParallel-20220415030101.xml
2022-04-15T03:03:02.2108028Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_legacy_upgraders.TestLegacyUpgraders-20220415030101.xml
2022-04-15T03:03:02.2117744Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoadFlatbuffer-20220415030101.xml
2022-04-15T03:03:02.5216209Z Traceback (most recent call last):
2022-04-15T03:03:02.5216654Z   File "test/run_test.py", line 1058, in <module>
2022-04-15T03:03:02.5218153Z     main()
2022-04-15T03:03:02.5218380Z   File "test/run_test.py", line 1036, in main
2022-04-15T03:03:02.5220000Z     raise RuntimeError(err_message)
2022-04-15T03:03:02.5220253Z RuntimeError: test_jit failed!
2022-04-15T03:03:02.7317714Z + cleanup
2022-04-15T03:03:02.7317984Z + retcode=1
2022-04-15T03:03:02.7318152Z + set +x
2022-04-15T03:03:02.7360076Z ##[error]Process completed with exit code 1.
2022-04-15T03:03:02.7408342Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-15T03:03:02.7408594Z with:
2022-04-15T03:03:02.7408981Z   github-token: ***
2022-04-15T03:03:02.7409153Z env:
2022-04-15T03:03:02.7409305Z   IN_CI: 1
2022-04-15T03:03:02.7409451Z   IS_GHA: 1

See GitHub Actions build pull / linux-xenial-py3.7-gcc7 / test (default, 1, 2, linux.2xlarge) (3/9)

Step: "Test"

2022-04-15T03:09:50.2923810Z RuntimeError: test_jit failed!
2022-04-15T03:09:49.9393765Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_with.TestWith-20220415030750.xml
2022-04-15T03:09:49.9400423Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_data_parallel.TestDataParallel-20220415030750.xml
2022-04-15T03:09:49.9412069Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_legacy_upgraders.TestLegacyUpgraders-20220415030750.xml
2022-04-15T03:09:49.9420411Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_optimize_for_mobile_preserve_debug_info.TestOptimizeForMobilePreserveDebugInfo-20220415030750.xml
2022-04-15T03:09:49.9430699Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoadFlatbuffer-20220415030750.xml
2022-04-15T03:09:50.2919529Z Traceback (most recent call last):
2022-04-15T03:09:50.2920084Z   File "test/run_test.py", line 1058, in <module>
2022-04-15T03:09:50.2921688Z     main()
2022-04-15T03:09:50.2921889Z   File "test/run_test.py", line 1036, in main
2022-04-15T03:09:50.2923552Z     raise RuntimeError(err_message)
2022-04-15T03:09:50.2923810Z RuntimeError: test_jit failed!
2022-04-15T03:09:50.5160571Z + cleanup
2022-04-15T03:09:50.5160886Z + retcode=1
2022-04-15T03:09:50.5161180Z + set +x
2022-04-15T03:09:50.5204033Z ##[error]Process completed with exit code 1.
2022-04-15T03:09:50.5332348Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-15T03:09:50.5332604Z with:
2022-04-15T03:09:50.5333006Z   github-token: ***
2022-04-15T03:09:50.5333175Z env:
2022-04-15T03:09:50.5333328Z   IN_CI: 1
2022-04-15T03:09:50.5333474Z   IS_GHA: 1

See GitHub Actions build pull / linux-bionic-rocm5.0-py3.7 / test (default, 2, 2, linux.rocm.gpu) (4/9)

Step: "Test"

2022-04-15T05:34:26.3231252Z RuntimeError: test_jit failed!
2022-04-15T05:34:23.0650048Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_backends.TestBackendsWithCompiler-20220415053132.xml
2022-04-15T05:34:23.0664467Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_cuda.TestCUDA-20220415053132.xml
2022-04-15T05:34:23.0670218Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_data_parallel.TestDataParallel-20220415053132.xml
2022-04-15T05:34:23.0688810Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_legacy_upgraders.TestLegacyUpgraders-20220415053132.xml
2022-04-15T05:34:23.0705549Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoadFlatbuffer-20220415053132.xml
2022-04-15T05:34:26.3220642Z Traceback (most recent call last):
2022-04-15T05:34:26.3221467Z   File "test/run_test.py", line 1058, in <module>
2022-04-15T05:34:26.3224797Z     main()
2022-04-15T05:34:26.3225473Z   File "test/run_test.py", line 1036, in main
2022-04-15T05:34:26.3230511Z     raise RuntimeError(err_message)
2022-04-15T05:34:26.3231252Z RuntimeError: test_jit failed!
2022-04-15T05:34:27.9296520Z 
2022-04-15T05:34:27.9297127Z real	14m21.585s
2022-04-15T05:34:27.9298463Z user	19m17.528s
2022-04-15T05:34:27.9299096Z sys	3m47.013s
2022-04-15T05:34:27.9299674Z + cleanup
2022-04-15T05:34:27.9300241Z + retcode=1
2022-04-15T05:34:27.9300799Z + set +x
2022-04-15T05:34:27.9420877Z ##[error]Process completed with exit code 1.
2022-04-15T05:34:27.9519887Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-15T05:34:27.9520284Z with:

See GitHub Actions build pull / linux-bionic-py3.7-clang9 / test (noarch, 1, 1, linux.2xlarge) (5/9)

Step: "Test"

2022-04-15T03:12:41.9137240Z RuntimeError: test_jit failed!
2022-04-15T03:12:41.5590967Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_warn.TestWarn-20220415031045.xml
2022-04-15T03:12:41.5597073Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_with.TestWith-20220415031045.xml
2022-04-15T03:12:41.5603781Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_data_parallel.TestDataParallel-20220415031045.xml
2022-04-15T03:12:41.5614275Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_legacy_upgraders.TestLegacyUpgraders-20220415031045.xml
2022-04-15T03:12:41.5623347Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoadFlatbuffer-20220415031045.xml
2022-04-15T03:12:41.9131905Z Traceback (most recent call last):
2022-04-15T03:12:41.9132189Z   File "test/run_test.py", line 1058, in <module>
2022-04-15T03:12:41.9134729Z     main()
2022-04-15T03:12:41.9134931Z   File "test/run_test.py", line 1036, in main
2022-04-15T03:12:41.9136985Z     raise RuntimeError(err_message)
2022-04-15T03:12:41.9137240Z RuntimeError: test_jit failed!
2022-04-15T03:12:42.1382509Z 
2022-04-15T03:12:42.1382829Z real	16m38.954s
2022-04-15T03:12:42.1383182Z user	20m16.132s
2022-04-15T03:12:42.1383485Z sys	2m7.788s
2022-04-15T03:12:42.1383662Z + cleanup
2022-04-15T03:12:42.1383821Z + retcode=1
2022-04-15T03:12:42.1383986Z + set +x
2022-04-15T03:12:42.1426262Z ##[error]Process completed with exit code 1.
2022-04-15T03:12:42.1514152Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-15T03:12:42.1514412Z with:

See GitHub Actions build pull / linux-bionic-py3.7-clang9 / test (default, 2, 2, linux.2xlarge) (6/9)

Step: "Test"

2022-04-15T03:14:32.5531612Z RuntimeError: test_jit failed!
2022-04-15T03:14:32.2047019Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_warn.TestWarn-20220415031235.xml
2022-04-15T03:14:32.2054051Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_with.TestWith-20220415031235.xml
2022-04-15T03:14:32.2060595Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_data_parallel.TestDataParallel-20220415031235.xml
2022-04-15T03:14:32.2072149Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_legacy_upgraders.TestLegacyUpgraders-20220415031235.xml
2022-04-15T03:14:32.2081882Z Generated XML report: test-reports/python-unittest/test_jit/TEST-jit.test_save_load.TestSaveLoadFlatbuffer-20220415031235.xml
2022-04-15T03:14:32.5526120Z Traceback (most recent call last):
2022-04-15T03:14:32.5526404Z   File "test/run_test.py", line 1058, in <module>
2022-04-15T03:14:32.5529323Z     main()
2022-04-15T03:14:32.5529724Z   File "test/run_test.py", line 1036, in main
2022-04-15T03:14:32.5531214Z     raise RuntimeError(err_message)
2022-04-15T03:14:32.5531612Z RuntimeError: test_jit failed!
2022-04-15T03:14:32.7729032Z 
2022-04-15T03:14:32.7729388Z real	18m17.882s
2022-04-15T03:14:32.7729778Z user	37m52.394s
2022-04-15T03:14:32.7730090Z sys	3m10.320s
2022-04-15T03:14:32.7730268Z + cleanup
2022-04-15T03:14:32.7730457Z + retcode=1
2022-04-15T03:14:32.7730668Z + set +x
2022-04-15T03:14:32.7774394Z ##[error]Process completed with exit code 1.
2022-04-15T03:14:32.7830544Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-15T03:14:32.7830790Z with:

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (jit_legacy, 1, 1, linux.2xlarge) (7/9)

Step: "Test"

2022-04-15T02:55:23.3006122Z RuntimeError: test_jit_legacy failed!
2022-04-15T02:55:22.9790372Z Generated XML report: test-reports/python-unittest/test_jit_legacy/TEST-jit.test_warn.TestWarn-20220415025332.xml
2022-04-15T02:55:22.9798603Z Generated XML report: test-reports/python-unittest/test_jit_legacy/TEST-jit.test_with.TestWith-20220415025332.xml
2022-04-15T02:55:22.9805477Z Generated XML report: test-reports/python-unittest/test_jit_legacy/TEST-jit.test_data_parallel.TestDataParallel-20220415025332.xml
2022-04-15T02:55:22.9827892Z Generated XML report: test-reports/python-unittest/test_jit_legacy/TEST-jit.test_legacy_upgraders.TestLegacyUpgraders-20220415025332.xml
2022-04-15T02:55:22.9837762Z Generated XML report: test-reports/python-unittest/test_jit_legacy/TEST-jit.test_save_load.TestSaveLoadFlatbuffer-20220415025332.xml
2022-04-15T02:55:23.3001342Z Traceback (most recent call last):
2022-04-15T02:55:23.3001815Z   File "test/run_test.py", line 1058, in <module>
2022-04-15T02:55:23.3003455Z     main()
2022-04-15T02:55:23.3003696Z   File "test/run_test.py", line 1036, in main
2022-04-15T02:55:23.3005854Z     raise RuntimeError(err_message)
2022-04-15T02:55:23.3006122Z RuntimeError: test_jit_legacy failed!
2022-04-15T02:55:23.5403199Z + cleanup
2022-04-15T02:55:23.5403604Z + retcode=1
2022-04-15T02:55:23.5403922Z + set +x
2022-04-15T02:55:23.5450799Z ##[error]Process completed with exit code 1.
2022-04-15T02:55:23.5498769Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-15T02:55:23.5499046Z with:
2022-04-15T02:55:23.5499500Z   github-token: ***
2022-04-15T02:55:23.5499713Z env:
2022-04-15T02:55:23.5499878Z   IN_CI: 1
2022-04-15T02:55:23.5500033Z   IS_GHA: 1

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 2, 2, linux.4xlarge.nvidia.gpu) (8/9)

Step: "Test"

2022-04-15T04:46:16.3093186Z RuntimeError: test_quantization failed!
2022-04-15T04:46:15.3311293Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestReferenceQuantizedModule-20220415043243.xml
2022-04-15T04:46:15.3333341Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.bc.test_backward_compatibility.TestSerialization-20220415043243.xml
2022-04-15T04:46:15.3359118Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestStaticQuantizedModule-20220415043243.xml
2022-04-15T04:46:15.3377556Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter-20220415043243.xml
2022-04-15T04:46:15.3384726Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerONNXExport-20220415043243.xml
2022-04-15T04:46:16.3083250Z Traceback (most recent call last):
2022-04-15T04:46:16.3083876Z   File "test/run_test.py", line 1058, in <module>
2022-04-15T04:46:16.3086582Z     main()
2022-04-15T04:46:16.3087129Z   File "test/run_test.py", line 1036, in main
2022-04-15T04:46:16.3092573Z     raise RuntimeError(err_message)
2022-04-15T04:46:16.3093186Z RuntimeError: test_quantization failed!
2022-04-15T04:46:16.8099156Z + cleanup
2022-04-15T04:46:16.8099561Z + retcode=1
2022-04-15T04:46:16.8099932Z + set +x
2022-04-15T04:46:16.8157673Z ##[error]Process completed with exit code 1.
2022-04-15T04:46:16.8214555Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-15T04:46:16.8214921Z with:
2022-04-15T04:46:16.8215461Z   github-token: ***
2022-04-15T04:46:16.8215704Z env:
2022-04-15T04:46:16.8215922Z   IN_CI: 1
2022-04-15T04:46:16.8216128Z   IS_GHA: 1

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (default, 1, 2, linux.2xlarge) (9/9)

Step: "Test"

2022-04-15T03:30:37.6146597Z RuntimeError: test_quantization failed!
2022-04-15T03:30:36.9705750Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestReferenceQuantizedModule-20220415032030.xml
2022-04-15T03:30:36.9722570Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.bc.test_backward_compatibility.TestSerialization-20220415032030.xml
2022-04-15T03:30:36.9742800Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.core.test_quantized_module.TestStaticQuantizedModule-20220415032030.xml
2022-04-15T03:30:36.9757523Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.fx.test_subgraph_rewriter.TestSubgraphRewriter-20220415032030.xml
2022-04-15T03:30:36.9762413Z Generated XML report: test-reports/python-unittest/test_quantization/TEST-quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerONNXExport-20220415032030.xml
2022-04-15T03:30:37.6142033Z Traceback (most recent call last):
2022-04-15T03:30:37.6142441Z   File "test/run_test.py", line 1058, in <module>
2022-04-15T03:30:37.6144851Z     main()
2022-04-15T03:30:37.6145095Z   File "test/run_test.py", line 1036, in main
2022-04-15T03:30:37.6146334Z     raise RuntimeError(err_message)
2022-04-15T03:30:37.6146597Z RuntimeError: test_quantization failed!
2022-04-15T03:30:37.9187256Z + cleanup
2022-04-15T03:30:37.9187550Z + retcode=1
2022-04-15T03:30:37.9187749Z + set +x
2022-04-15T03:30:37.9227741Z ##[error]Process completed with exit code 1.
2022-04-15T03:30:37.9410185Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-15T03:30:37.9410439Z with:
2022-04-15T03:30:37.9410837Z   github-token: ***
2022-04-15T03:30:37.9411006Z env:
2022-04-15T03:30:37.9411155Z   IN_CI: 1
2022-04-15T03:30:37.9411299Z   IS_GHA: 1

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


dzdang added a commit that referenced this pull request Feb 4, 2022
…quantized_max_pool2d

ghstack-source-id: 5f1f0dc
Pull Request resolved: #72353
…x_pool2d & quantized_max_pool2d"

Summary: This PR is part of a series of PRs addressing #54150,
which covers routing calls to quantized backends through the dispatcher rather than through if/else conditionals.
This particular PR removes the is_quantized check from max_pool2d and implements a quantized
kernel for max_pool2d_with_indices. In addition, quantized_max_pool2d,
which was previously unused, has been removed from the frontend.

[ghstack-poisoned]
@dzdang dzdang changed the title [Quant][bc-breaking] Combined dispatch registration for max_pool2d & quantized_max_pool2d [Quant][bc-breaking][devs] Combined dispatch registration for max_pool2d & quantized_max_pool2d Feb 8, 2022
…for max_pool2d & quantized_max_pool2d"

Summary: This PR is part of a series of PRs addressing #54150,
which covers routing calls to quantized backends through the dispatcher rather than through if/else conditionals.
This particular PR removes the is_quantized check from max_pool2d and implements a quantized
kernel for max_pool2d_with_indices. In addition, quantized_max_pool2d,
which was previously unused, has been removed from the frontend.

[ghstack-poisoned]
dzdang added a commit that referenced this pull request Feb 8, 2022
…l2d & quantized_max_pool2d

Summary: This PR is part of a series of PRs addressing #54150,
which covers routing calls to quantized backends through the dispatcher rather than through if/else conditionals.
This particular PR removes the is_quantized check from max_pool2d and implements a quantized
kernel for max_pool2d_with_indices. In addition, quantized_max_pool2d,
which was previously unused, has been removed from the frontend.

ghstack-source-id: 57b9ac5
Pull Request resolved: #72353
@dzdang
Contributor Author

dzdang commented Feb 8, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@dzdang dzdang requested a review from jerryzh168 February 15, 2022 17:33
// The output tensor has already been computed by quantized_max_pool2d; junk_out is a dummy
// argument required by max_pool2d_kernel's signature.
auto junk_out = at::empty(out.sizes(), self.int_repr().options());
max_pool2d_kernel(kCPU, junk_out, indices, self.int_repr(), kW, kH, dW, dH, padW, padH, dilationW, dilationH);

We should be able to get the quantized output (quint8) from the output of max_pool2d_kernel (uint8), I think: the output of max_pool2d_kernel will be the int_repr of the actual quantized output, so we can build a quantized tensor from it with _make_per_tensor_quantized_tensor.
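The suggestion above can be sketched in plain Python, modeling a per-tensor quantized tensor as its uint8 int_repr plus (scale, zero_point). `make_per_tensor_quantized_tensor` below is a hypothetical stand-in for ATen's `_make_per_tensor_quantized_tensor`, and the toy "kernel" is a global max, not the real max_pool2d_kernel:

```python
# Model a per-tensor quantized tensor as (int_repr, scale, zero_point).
# Running the integer kernel once yields both the pooled int_repr and
# the indices; the quantized output is rebuilt from that int_repr
# instead of pooling a second time.

def max_pool_int_with_indices(int_repr):
    # Toy kernel: global max over a flat list, plus its index.
    best = max(range(len(int_repr)), key=lambda i: int_repr[i])
    return [int_repr[best]], [best]

def make_per_tensor_quantized_tensor(int_repr, scale, zero_point):
    # Stand-in for _make_per_tensor_quantized_tensor: wraps an integer
    # representation together with its quantization parameters.
    return {"int_repr": int_repr, "scale": scale, "zero_point": zero_point}

def dequantize(q):
    return [(v - q["zero_point"]) * q["scale"] for v in q["int_repr"]]

def quantized_max_pool_with_indices(q):
    pooled, indices = max_pool_int_with_indices(q["int_repr"])
    out = make_per_tensor_quantized_tensor(pooled, q["scale"], q["zero_point"])
    return out, indices
```

Because max pooling only selects among existing values, reusing the input's scale and zero point for the output is valid, which is what makes the single-pass reconstruction possible.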


Yes, the code as written looks awful: you're running max_pool2d twice to get the quantized output and then the indices. Please make this modification.

@dzdang dzdang requested a review from jerryzh168 February 22, 2022 15:00
@jerryzh168 jerryzh168 requested a review from bdhirsh March 3, 2022 00:48
@jerryzh168
Contributor

Hi @brianjo, @ezyang, could you take a look at this PR to see whether the dispatch modifications are OK?

@brianjo brianjo requested a review from albanD March 3, 2022 00:59
@ezyang
Contributor

ezyang commented Mar 3, 2022

test failures look alarming

switch (input.suggest_memory_format()) {
case at::MemoryFormat::Contiguous: {
AT_DISPATCH_FLOATING_TYPES_AND(ScalarType::BFloat16, input.scalar_type(), "max_pool2d", [&] {
AT_DISPATCH_ALL_TYPES_AND(ScalarType::BFloat16, input.scalar_type(), "max_pool2d", [&] {

oh?


This looks like you will need an OpInfo update
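The one-line diff above widens the dtype dispatch from floating-point types to all types (plus BFloat16) so the integer int_repr path has a kernel instantiation. What an AT_DISPATCH_* macro accomplishes can be sketched in plain Python (names are illustrative, not ATen's actual macro machinery):

```python
# Sketch of AT_DISPATCH_*-style dtype dispatch: check the runtime dtype
# against an allowed set, then instantiate the kernel body for it.
# Names are illustrative stand-ins for ATen's macros.

FLOATING_TYPES = {"float", "double", "bfloat16"}
ALL_TYPES = FLOATING_TYPES | {"uint8", "int8", "int16", "int32", "int64"}

def dispatch(allowed, dtype, op_name, body):
    if dtype not in allowed:
        raise RuntimeError(f'"{op_name}" not implemented for \'{dtype}\'')
    return body(dtype)

def max_pool2d_body(dtype):
    return f"max_pool2d kernel instantiated for {dtype}"
```

With the floating-only set, a uint8 input (a quantized tensor's int_repr) raises; widening to the all-types set lets it through, which is the effect of the diff.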

@ezyang ezyang left a comment

don't run max pool2d twice just cuz you need indices lol

@dzdang dzdang marked this pull request as draft March 24, 2022 13:13
…for max_pool2d & quantized_max_pool2d"

Summary: This PR is part of a series of PRs addressing #54150,
which covers routing calls to quantized backends through the dispatcher rather than through if/else conditionals.
This particular PR removes the is_quantized check from max_pool2d and implements a quantized
kernel for max_pool2d_with_indices. In addition, quantized_max_pool2d,
which was previously unused, has been removed from the frontend.

This PR also introduces isnan() support for vectorized int tensors.

Differential Revision: [D34085075](https://our.internmc.facebook.com/intern/diff/D34085075)

[ghstack-poisoned]
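On the isnan() point: integer lanes can never hold NaN, so a vectorized isnan over an int vector is simply an all-false mask. A minimal sketch of that invariant (plain Python; the real support lives in ATen's Vectorized specializations):

```python
import math

def isnan_vec(lane):
    # Floats: elementwise NaN test. Ints: NaN is unrepresentable,
    # so the mask is all-false by construction, which is what lets a
    # NaN-aware max-pool kernel also run on integer element types.
    if all(isinstance(v, int) for v in lane):
        return [False] * len(lane)
    return [math.isnan(v) for v in lane]
```

This is why adding isnan() for vectorized int types is safe: it changes no results, it only lets the shared kernel template compile for integer dtypes.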
@dzdang
Contributor Author

dzdang commented Mar 30, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

… for max_pool2d & quantized_max_pool2d and implemented max_pool2d_with_indices_out_quantized_cpu"

Summary: This PR is part of a series of PRs addressing #54150,
which covers routing calls to quantized backends through the dispatcher rather than through if/else conditionals.
This particular PR removes the is_quantized check from max_pool2d and implements a quantized
kernel for max_pool2d_with_indices.

This PR also introduces isnan() support for vectorized int tensors.

This PR relies on #74560, which introduces
structured kernel support for quantized tensors.

Test plan:
```
python test/test_quantization.py -k test_max_pool2d
```

Differential Revision: [D35420901](https://our.internmc.facebook.com/intern/diff/D35420901)

[ghstack-poisoned]
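The computation max_pool2d_with_indices performs on an integer input (such as a quantized tensor's int_repr) can be sketched in plain Python. This is a simplified illustration: stride is fixed to the kernel size, and padding/dilation are omitted:

```python
def max_pool2d_with_indices(x, kh, kw):
    # x: 2-D list of ints (e.g. a quantized tensor's int_repr).
    # Returns the pooled values and, for each window, the flat index of
    # its max in the input, mirroring the (values, indices) pair that
    # max_pool2d_with_indices produces.
    h, w = len(x), len(x[0])
    out, idx = [], []
    for i in range(0, h - kh + 1, kh):
        row_out, row_idx = [], []
        for j in range(0, w - kw + 1, kw):
            window = [(x[i + di][j + dj], (i + di) * w + (j + dj))
                      for di in range(kh) for dj in range(kw)]
            v, flat = max(window)
            row_out.append(v)
            row_idx.append(flat)
        out.append(row_out)
        idx.append(row_idx)
    return out, idx
```

Because the kernel only compares and selects elements, the same loop works unchanged for uint8 int_repr data, which is what makes a quantized max_pool2d_with_indices kernel straightforward once the dtype dispatch admits integer types.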
dzdang added a commit that referenced this pull request Apr 12, 2022
…tch registration for max_pool1d & quantized_max_pool1d"

Summary: This PR is part of a series of PRs addressing #54150,
which covers routing calls to quantized backends through the dispatcher rather than through if/else conditionals.
This particular PR removes the is_quantized check from max_pool1d and modifies
max_pool1d_impl to be compatible with int tensors.

This PR relies on #74560, which introduces
structured kernel support for quantized tensors and #72353.

Test plan:
```
python test/test_quantization.py -k test_max_pool1d
```

Differential Revision: [D35431831](https://our.internmc.facebook.com/intern/diff/D35431831)

[ghstack-poisoned]
dzdang added a commit that referenced this pull request Apr 12, 2022
… for max_pool1d & quantized_max_pool1d"

@dzdang
Copy link
Contributor Author

dzdang commented Apr 12, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

… for max_pool2d & quantized_max_pool2d and implemented max_pool2d_with_indices_out_quantized_cpu"

dzdang added a commit that referenced this pull request Apr 14, 2022
…tch registration for max_pool1d & quantized_max_pool1d"

dzdang added a commit that referenced this pull request Apr 14, 2022
… for max_pool1d & quantized_max_pool1d"

@dzdang
Copy link
Contributor Author

dzdang commented Apr 14, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

const int dilationW = dilation.size() == 1 ? dilationH : safe_downcast<int, int64_t>(dilation[1]);

max_pool2d_kernel(kCPU, out, indices, self, kW, kH, dW, dH, padW, padH, dilationW, dilationH);
set_quantizer_(out, make_per_tensor_affine_quantizer(self.q_scale(), self.q_zero_point(), out.scalar_type()));
Copy link
Contributor


random question that is probably not the fault of this PR: is it really correct to call set_quantizer_ on the out argument like this? Suppose I take a view of a quantized tensor and then write into it; the quantizers of the view and the base would then become inconsistent. That seems bad!

… for max_pool2d & quantized_max_pool2d and implemented max_pool2d_with_indices_out_quantized_cpu"

dzdang added a commit that referenced this pull request Apr 14, 2022
…tch registration for max_pool1d & quantized_max_pool1d"

dzdang added a commit that referenced this pull request Apr 14, 2022
… for max_pool1d & quantized_max_pool1d"

@dzdang
Copy link
Contributor Author

dzdang commented Apr 14, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

… for max_pool2d & quantized_max_pool2d and implemented max_pool2d_with_indices_out_quantized_cpu"

dzdang added a commit that referenced this pull request Apr 15, 2022
…tch registration for max_pool1d & quantized_max_pool1d"

dzdang added a commit that referenced this pull request Apr 15, 2022
… for max_pool1d & quantized_max_pool1d"

@dzdang
Copy link
Contributor Author

dzdang commented Apr 15, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

bool res{false};
c10::guts::if_constexpr<std::is_integral<scalar_t>::value> (
[&res] () { res = false; }, // if integral type
[&res, val] () { res = std::isnan(val); } // if not integral type
Copy link
Contributor


why not just define a helper function

… for max_pool2d & quantized_max_pool2d and implemented max_pool2d_with_indices_out_quantized_cpu"

Summary: This PR is part of a series of PRs addressing #54150,
related to using dispatcher for calls to quantized backends as opposed to if/else conditionals.
This particular PR removes the is_quantized check from max_pool2d, and implements a quantized
kernel for max_pool2d_with_indices.

This PR also introduces isnan() support for vectorized int tensors.

This PR relies on #74560, which introduces
structured kernel support for quantized tensors.

Test plan:
```
python test/test_quantization.py -k test_max_pool2d
```

Differential Revision: [D35420901](https://our.internmc.facebook.com/intern/diff/D35420901)

[ghstack-poisoned]
dzdang added a commit that referenced this pull request Apr 15, 2022
…tch registration for max_pool1d & quantized_max_pool1d"

dzdang added a commit that referenced this pull request Apr 15, 2022
… for max_pool1d & quantized_max_pool1d"

@dzdang
Copy link
Contributor Author

dzdang commented Apr 15, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

… for max_pool2d & quantized_max_pool2d and implemented max_pool2d_with_indices_out_quantized_cpu"

dzdang added a commit that referenced this pull request Apr 15, 2022
…tch registration for max_pool1d & quantized_max_pool1d"

dzdang added a commit that referenced this pull request Apr 15, 2022
… for max_pool1d & quantized_max_pool1d"

@dzdang
Copy link
Contributor Author

dzdang commented Apr 15, 2022

@dzdang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@github-actions
Copy link
Contributor

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Jun 14, 2022
@github-actions github-actions bot closed this Jul 14, 2022
@facebook-github-bot facebook-github-bot deleted the gh/dzdang/30/head branch August 13, 2022 14:20