
Conversation

@kshitij12345 (Collaborator) commented Jan 6, 2022

Reference: #69991

Not sure if this is a good idea as this increases the number of operators.

@pytorch-probot bot commented Jan 6, 2022

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/kshitij12345/pytorch/blob/79ce393cf39430ac86a8771bb8c2e026526740b7/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Triggered Workflows

| Workflow | Labels | Status |
| --- | --- | --- |
| linux-bionic-py3.7-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk | ✅ triggered |
| linux-docs | ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk | ✅ triggered |
| linux-vulkan-bionic-py3.7-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan | ✅ triggered |
| linux-xenial-cuda11.3-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk | ✅ triggered |
| linux-xenial-cuda11.3-py3.7-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | ✅ triggered |
| linux-xenial-py3-clang5-mobile-build | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk | ✅ triggered |
| linux-xenial-py3-clang5-mobile-custom-build-static | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk | ✅ triggered |
| linux-xenial-py3.7-clang7-asan | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk | ✅ triggered |
| linux-xenial-py3.7-clang7-onnx | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk | ✅ triggered |
| linux-xenial-py3.7-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | ✅ triggered |
| linux-xenial-py3.7-gcc7 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | ✅ triggered |
| linux-xenial-py3.7-gcc7-no-ops | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | ✅ triggered |
| pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single | ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | ✅ triggered |
| pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit | ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk | ✅ triggered |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win | ✅ triggered |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win | ✅ triggered |
Skipped Workflows

| Workflow | Labels | Status |
| --- | --- | --- |
| caffe2-linux-xenial-py3.7-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk | 🚫 skipped |
| docker-builds | ciflow/all, ciflow/trunk | 🚫 skipped |
| ios-12-5-1-arm64 | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | 🚫 skipped |
| ios-12-5-1-arm64-coreml | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | 🚫 skipped |
| ios-12-5-1-arm64-custom-ops | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | 🚫 skipped |
| ios-12-5-1-arm64-full-jit | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | 🚫 skipped |
| ios-12-5-1-arm64-metal | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | 🚫 skipped |
| ios-12-5-1-x86-64 | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | 🚫 skipped |
| ios-12-5-1-x86-64-coreml | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | 🚫 skipped |
| ios-12-5-1-x86-64-full-jit | ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk | 🚫 skipped |
| libtorch-linux-xenial-cuda10.2-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk | 🚫 skipped |
| libtorch-linux-xenial-cuda11.3-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk | 🚫 skipped |
| linux-binary-conda | ciflow/binaries, ciflow/binaries/conda | 🚫 skipped |
| linux-binary-libtorch-cxx11-abi | ciflow/binaries, ciflow/binaries/libtorch | 🚫 skipped |
| linux-binary-libtorch-pre-cxx11 | ciflow/binaries, ciflow/binaries/libtorch | 🚫 skipped |
| linux-binary-manywheel | ciflow/binaries, ciflow/binaries/wheel | 🚫 skipped |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk | 🚫 skipped |
| linux-docs-push | ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| linux-xenial-cuda11.3-py3.7-gcc7-no-ops | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk | 🚫 skipped |
| macos-10-15-py3-arm64 | ciflow/all, ciflow/macos, ciflow/trunk | 🚫 skipped |
| macos-10-15-py3-lite-interpreter-x86-64 | ciflow/all, ciflow/macos, ciflow/trunk | 🚫 skipped |
| macos-11-py3-x86-64 | ciflow/all, ciflow/macos, ciflow/trunk | 🚫 skipped |
| parallelnative-linux-xenial-py3.7-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk | 🚫 skipped |
| periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-linux-bionic-cuda11.5-py3.7-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck | 🚫 skipped |
| periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped |
| periodic-win-vs2019-cuda11.5-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped |
| pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build | ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk | 🚫 skipped |

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot (Contributor) commented Jan 6, 2022

💊 CI failures summary and remediations

As of commit 79ce393 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build win-vs2019-cuda11.3-py3 / test (force_on_cpu, 1, 1, windows.4xlarge) (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-01-13T13:31:38.1659402Z     test_interval_stat failed - num_retries_left: 3
2022-01-13T13:31:38.1659884Z   test_interval_stat (__main__.TestMonitor) ... ok (0.016s)
2022-01-13T13:31:38.1815652Z     test_interval_stat succeeded - num_retries_left: 2
2022-01-13T13:31:38.1816122Z   test_interval_stat (__main__.TestMonitor) ... ok (0.016s)
2022-01-13T13:31:38.1971995Z     test_interval_stat succeeded - num_retries_left: 1
2022-01-13T13:31:38.1972465Z   test_interval_stat (__main__.TestMonitor) ... ok (0.016s)
2022-01-13T13:31:38.1984779Z     test_interval_stat succeeded - num_retries_left: 0
2022-01-13T13:31:38.1985251Z   test_log_event (__main__.TestMonitor) ... ok (0.000s)
2022-01-13T13:31:38.1985522Z 
2022-01-13T13:31:38.1985768Z ======================================================================
2022-01-13T13:31:38.1986156Z FAIL [0.016s]: test_interval_stat (__main__.TestMonitor)
2022-01-13T13:31:38.1986624Z ----------------------------------------------------------------------
2022-01-13T13:31:38.1987082Z Traceback (most recent call last):
2022-01-13T13:31:38.1988392Z   File "test_monitor.py", line 35, in test_interval_stat
2022-01-13T13:31:38.1988920Z     self.assertGreaterEqual(len(events), 1)
2022-01-13T13:31:38.1989438Z AssertionError: 0 not greater than or equal to 1
2022-01-13T13:31:38.1989721Z 
2022-01-13T13:31:38.2051038Z ----------------------------------------------------------------------
2022-01-13T13:31:38.2051484Z Ran 7 tests in 0.063s
2022-01-13T13:31:38.2051713Z 
2022-01-13T13:31:38.2052080Z FAILED (failures=1, unexpected successes=3)

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@kshitij12345 marked this pull request as ready for review January 6, 2022 23:50
@kshitij12345 requested a review from ezyang as a code owner January 6, 2022 23:50
@kshitij12345 requested review from zou3519 and removed the request for ezyang January 6, 2022 23:50
@ngimel added the "triaged" label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jan 7, 2022
#include <ATen/native/DispatchStub.h>
#include <ATen/native/MaxPooling.h>
#include <ATen/native/Pool.h>
#include <iostream>
Contributor:

nit: we probably don't need this include (it might have been left over from local debugging).

Collaborator (Author):

Oops. Thanks!

Comment on lines +101 to +102

Tensor result = [&]() {
Contributor:

TODO(rzou): check history

Collaborator (Author):

The motivation was a specialized kernel on CPU; the description of PR #43745 has the relevant benchmark.

Contributor:

Thanks for finding it. It seems fine to me that we are turning this into an operator


Tensor result = [&]() {
NoNamesGuard guard;
return at::_max_pool1d_forward(
Contributor:

"_max_pool1d_forward_cpu_special" or something to make it clearer that it's a special case

@zou3519 (Contributor) left a comment:

This LGTM -- as discussed offline, we should change the name of the new operator to make it clearer that it is CPU-only and an optimization. I also had a suggestion for the dispatch string in native_functions.yaml.

Comment on lines 2945 to 2946
dispatch:
CompositeExplicitAutograd: _max_pool1d_forward
Contributor:

Because this is CPU-only, can we do "dispatch: CPU: _max_pool1d_forward"?

Collaborator (Author):

Have updated the dispatch to CPU-only and also updated the name to _max_pool1d_cpu_forward.
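
For readers following the dispatch discussion, here is a minimal C++ sketch of the overall shape of the change, not the PR's literal diff: the composite `max_pool1d` wrapper routes to the CPU-only helper operator under a `NoNamesGuard` and propagates names afterwards. The predicate `use_cpu_fast_path`, the kernel name `max_pool1d_impl`, and the exact operator schema are illustrative assumptions, and `at::_max_pool1d_cpu_forward` only exists in a build that includes this PR's new operator.

```cpp
// Sketch only -- the fast-path condition and names below are assumptions for
// illustration, not the exact code from this PR.
//
// Assumed native_functions.yaml entry for the renamed CPU-only helper
// (schema assumed to mirror max_pool1d):
//
//   - func: _max_pool1d_cpu_forward(Tensor self, int[1] kernel_size, int[1] stride=[],
//           int[1] padding=0, int[1] dilation=1, bool ceil_mode=False) -> Tensor
//     dispatch:
//       CPU: max_pool1d_impl
#include <ATen/ATen.h>
#include <ATen/NamedTensorUtils.h>
#include <tuple>

namespace sketch {

at::Tensor max_pool1d_composite(
    const at::Tensor& self,
    at::IntArrayRef kernel_size,
    at::IntArrayRef stride,
    at::IntArrayRef padding,
    at::IntArrayRef dilation,
    bool ceil_mode) {
  // Hypothetical predicate: take the hand-written CPU fast path only when the
  // input lives on CPU and no autograd (and hence no indices) is needed.
  const bool use_cpu_fast_path =
      self.device().is_cpu() && !self.requires_grad();

  at::Tensor result = [&]() {
    at::NoNamesGuard guard;  // suppress named-tensor propagation inside the lambda
    if (use_cpu_fast_path) {
      // The CPU-only operator added by this PR (renamed per the review).
      return at::_max_pool1d_cpu_forward(
          self, kernel_size, stride, padding, dilation, ceil_mode);
    }
    // Generic fallback: reuse max_pool1d_with_indices and drop the indices.
    return std::get<0>(at::max_pool1d_with_indices(
        self, kernel_size, stride, padding, dilation, ceil_mode));
  }();
  // Re-attach names from the input after the unnamed computation.
  at::namedinference::propagate_names(result, self);
  return result;
}

} // namespace sketch
```

Registering the helper with a `CPU` dispatch entry (rather than `CompositeExplicitAutograd`) keeps the new operator from being advertised for backends that have no kernel for it, which is the point of the review suggestion above.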

@facebook-github-bot (Contributor) commented:

@zou3519 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@kshitij12345 kshitij12345 deleted the fix/composite/max-pool1d branch January 15, 2022 03:43
zou3519 added a commit that referenced this pull request Jan 28, 2022
This reverts commit b7222e1.

We are conservatively reverting this because it broke a test in functorch.
The original PR added a `_max_pool1d_cpu` operator. I'm not sure if it
is actually safe to revert this due to the addition of the new operator
(someone may have serialized it between now and then) but because it has
only been two weeks this should be fine.

Test Plan:
- wait for tests

[ghstack-poisoned]
zou3519 added several more commits referencing this pull request between Jan 28 and Jan 31, 2022, each carrying the same revert message as above (ghstack-source-ids 041c985, e65ed77, and 40a34bb; Differential Revision: D33882918; Pull Request resolved: #71992).
facebook-github-bot pushed a commit that referenced this pull request Feb 1, 2022
Summary:
Pull Request resolved: #71992

This reverts commit b7222e1.

We are conservatively reverting this because it broke a test in functorch.
The original PR added a `_max_pool1d_cpu` operator. I'm not sure if it
is actually safe to revert this due to the addition of the new operator
(someone may have serialized it between now and then) but because it has
only been two weeks this should be fine.

Test Plan: - wait for tests

Reviewed By: jbschlosser, VitalyFedyunin

Differential Revision: D33882918

Pulled By: zou3519

fbshipit-source-id: f146e82e6b46690376b3d8825dc7f7da62e2c7de
pytorchmergebot pushed a commit that referenced this pull request Feb 1, 2022, with the same message as above (cherry picked from commit 1606333).
Labels: cla signed, open source, triaged