Conversation

@mruberry
Collaborator

Fixes #44284.

torch.nonzero is distinct from numpy.nonzero. The former returns a tensor by default, and the latter returns a tuple of arrays. The as_tuple argument was added as part of an intended deprecation process to make torch.nonzero consistent with numpy.nonzero, but this was a confusing change for users. A better deprecation path would be to offer torch.argwhere, consistent with numpy.argwhere, which is equivalent to the default torch.nonzero behavior. Once that is offered, a change to torch.nonzero should be more straightforward and less disruptive to users, if we decide that's the correct change to pursue.
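To illustrate the difference, here is a minimal sketch using NumPy (the default torch.nonzero output has the same (n, ndim) coordinate layout as np.argwhere; the tuple-of-index-arrays form is what np.nonzero and as_tuple=True produce):

```python
import numpy as np

a = np.array([[1, 0],
              [0, 2]])

# numpy.nonzero returns a tuple of index arrays, one per dimension:
tup = np.nonzero(a)   # (array([0, 1]), array([0, 1]))

# numpy.argwhere returns a single (n, ndim) array of coordinates,
# which matches the default torch.nonzero layout:
idx = np.argwhere(a)  # array([[0, 0], [1, 1]])
```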

@mruberry mruberry requested a review from ngimel September 28, 2020 04:14
@mruberry mruberry requested a review from gchanan September 28, 2020 05:01
@mruberry mruberry added this to the 1.7.0 milestone Sep 28, 2020
@dr-ci

dr-ci bot commented Sep 28, 2020

💊 CI failures summary and remediations

As of commit 854368f (more details on the Dr. CI page):


  • 2/3 failures possibly* introduced in this PR
    • 1/2 non-CircleCI failure(s)
  • 1/3 broken upstream at merge base 87f98a5 since Sep 29

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_cuda9_2_cudnn7_py3_gcc5_4_build (1/1)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

Sep 29 13:48:02 caused by: Connection refused (os error 111)
Sep 29 13:48:02 ++++ extract_trap_cmd 
Sep 29 13:48:02 ++++ printf '%s\n' '' 
Sep 29 13:48:02 +++ printf '%s\n' cleanup 
Sep 29 13:48:02 ++ trap -- ' 
Sep 29 13:48:02 cleanup' EXIT 
Sep 29 13:48:02 ++ [[ pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc5.4-build != *pytorch-win-* ]] 
Sep 29 13:48:02 ++ which sccache 
Sep 29 13:48:02 ++ sccache --stop-server 
Sep 29 13:48:02 Stopping sccache server... 
Sep 29 13:48:02 error: couldn't connect to server 
Sep 29 13:48:02 caused by: Connection refused (os error 111) 
Sep 29 13:48:02 ++ true 
Sep 29 13:48:02 ++ rm /var/lib/jenkins/sccache_error.log 
Sep 29 13:48:02 rm: cannot remove '/var/lib/jenkins/sccache_error.log': No such file or directory 
Sep 29 13:48:02 ++ true 
Sep 29 13:48:02 ++ [[ pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc5.4-build == *rocm* ]] 
Sep 29 13:48:02 ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 
Sep 29 13:48:02 ++ SCCACHE_IDLE_TIMEOUT=1200 
Sep 29 13:48:02 ++ RUST_LOG=sccache::server=error 
Sep 29 13:48:02 ++ sccache --start-server 
Sep 29 13:48:02 Starting sccache server... 

🚧 1 ongoing upstream failure:

These were probably caused by upstream breakages that are not fixed yet:


ci.pytorch.org: 1 failed



@facebook-github-bot
Contributor

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@gchanan
Contributor

gchanan commented Sep 28, 2020

that deprecation plan does sound superior.

@gchanan
Contributor

does the JIT do the right thing?

@mruberry
Collaborator Author

does the JIT do the right thing?

That's a good question. Let's extend the test for it.

@mruberry
Collaborator Author

As expected, scripting with as_tuple does not work (see #45499). This is not a regression. Tracing with as_tuple DOES work.
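A minimal sketch of the scripting/tracing distinction (assuming torch is importable; at the time of this PR, the as_tuple overload was not scriptable, while tracing worked):

```python
import torch

def f(t):
    # as_tuple=True returns a tuple of index tensors, one per dimension,
    # instead of a single (n, ndim) tensor of coordinates.
    return torch.nonzero(t, as_tuple=True)

# Tracing records the concrete ops that ran, so as_tuple=True works:
traced = torch.jit.trace(f, (torch.tensor([0, 1, 2]),))

# Scripting must resolve a single static signature, and at the time of
# this PR the as_tuple overload could not be scripted (see #45499), so
# torch.jit.script(f) raised an error rather than compiling.
```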

@facebook-github-bot
Contributor

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@facebook-github-bot
Contributor

@mruberry merged this pull request in b66ac1e.

@ppwwyyxx
Collaborator

ppwwyyxx commented Oct 5, 2020

tools/autograd/templates/python_variable_methods.cpp also needs to be updated. With the current status, 1.7 will have inconsistent warning behavior between torch.nonzero(tensor) and tensor.nonzero().

facebook-github-bot pushed a commit to facebookresearch/fairseq that referenced this pull request Oct 23, 2020
Summary: Reverting the diff because it has already been fixed in pytorch/pytorch#45413

Reviewed By: myleott

Differential Revision: D24511658

fbshipit-source-id: a5561dae50d69a03443ca8a60bebe2cd064e3ee0
@facebook-github-bot facebook-github-bot deleted the nonzero_no_tuple branch January 27, 2021 18:27
facebook-github-bot pushed a commit that referenced this pull request Feb 5, 2021
Summary:
Fixes #44284

#45413 incorrectly left this only partially fixed because it did not update the separate list of method signatures that were deprecated. This PR correctly fixes #44284. A test is added for the behavior, but until the WARN_ONCE flag is added it's toothless.

Pull Request resolved: #51618

Reviewed By: ngimel

Differential Revision: D26220181

Pulled By: mruberry

fbshipit-source-id: 397b47ac7e962d108d8fde0f3dc6468d6327d1c3
jinyiyang-jhu pushed a commit to jinyiyang-jhu/fairseq-jyang that referenced this pull request Feb 26, 2021

caltia pushed a commit to caltia/fairseq that referenced this pull request Jul 8, 2025

Successfully merging this pull request may close these issues.

Current behavior of as_tuple argument is inconsistent in nonzero

6 participants