Conversation

@xiaomengy (Contributor)

Summary: Add torch.logit function

Test Plan: buck test mode/dev-nosan //caffe2/test:torch -- "logit"

Differential Revision: D22406912
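For context, logit is the inverse of the sigmoid: logit(p) = log(p / (1 - p)). A minimal usage sketch of the function this PR adds; the optional eps argument clamps the input into [eps, 1 - eps] before taking the log, and the values shown follow from the definition:

```python
import torch

x = torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0])

# logit(p) = log(p / (1 - p)); 0 and 1 map to -inf and +inf.
print(torch.logit(x))
# tensor([   -inf, -1.0986,  0.0000,  1.0986,     inf])

# With eps, inputs are first clamped to [eps, 1 - eps], so outputs stay finite.
print(torch.logit(x, eps=1e-6))
# endpoints become roughly -13.8155 and +13.8155
```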

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D22406912

@xiaomengy (Contributor, Author)

link #37060

@xiaomengy requested a review from ngimel July 7, 2020 04:34
@xiaomengy mentioned this pull request Jul 7, 2020
@mruberry added the module: numpy label Jul 7, 2020
@mruberry self-requested a review July 7, 2020 04:56
@mruberry (Collaborator) commented Jul 7, 2020

Thanks for putting a PR up, @BIT-silence. I'll get you a review ASAP. Want to add a couple helpful links now before I lose them:

@dr-ci bot commented Jul 7, 2020

💊 CI failures summary and remediations

As of commit a2dc859 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

@ngimel (Collaborator) left a comment

Small comment about type of scalar arg

@mruberry (Collaborator) left a comment

Hey @BIT-silence! What's here looks pretty good, but there are a few more things to do:

  • add a doc entry in docs/source/tensors.rst and docs/source/torch.rst
  • add the actual docs in torch/_tensor_docs.py and torch/_torch_docs.py
  • test the backward works as expected by adding an entry to torch/testing/_internal/common_method_invocations.py
  • add an entry in torch/_overrides.py so people can override this function when using the torch_function interface

Also, what do you think about adding an explicit test for extremal values in test_torch? Like -inf, -5, 0, 1, 2, inf, NaN?
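For illustration only, here is a rough sketch of what such an extremal-values check could look like, comparing against scipy.special.logit; the test name and structure are hypothetical, not part of this PR:

```python
import numpy as np
import torch
from scipy import special

def test_logit_extremal_values():
    # Hypothetical check: out-of-domain inputs (-5, 2, +/-inf) should produce
    # NaN on both sides, and 0/1 should map to -inf/+inf.
    vals = [float('-inf'), -5.0, 0.0, 1.0, 2.0, float('inf'), float('nan')]
    a = np.array(vals, dtype=np.float64)
    expected = special.logit(a)
    actual = torch.logit(torch.from_numpy(a)).numpy()
    np.testing.assert_allclose(actual, expected, equal_nan=True)
```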

@ngimel (Collaborator) commented Jul 8, 2020

> Also, what do you think about adding an explicit test for extremal values in test_torch? Like -inf, -5, 0, 1, 2, inf, NaN?

Instead of adding separate tests for extremal values, perhaps it makes sense to add those to TorchMathTest so that all the functions there are tested? It already has "interesting" finite values, but it indeed misses infs and nans.

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D22406912

@xiaomengy (Contributor, Author)

> > Also, what do you think about adding an explicit test for extremal values in test_torch? Like -inf, -5, 0, 1, 2, inf, NaN?
>
> Instead of adding separate tests for extremal values, perhaps it makes sense to add those to TorchMathTest so that all the functions there are tested? It already has "interesting" finite values, but it indeed misses infs and nans.

I think we can open another issue for this. I tried to add inf and nan to the current tests, but some of the existing tests fail because of different behavior when outputting inf or nan. If we want to make everything match numpy/scipy, some more work is needed.

@xiaomengy requested a review from mruberry July 9, 2020 19:28
@mruberry (Collaborator) commented Jul 9, 2020

> > > Also, what do you think about adding an explicit test for extremal values in test_torch? Like -inf, -5, 0, 1, 2, inf, NaN?
> >
> > Instead of adding separate tests for extremal values, perhaps it makes sense to add those to TorchMathTest so that all the functions there are tested? It already has "interesting" finite values, but it indeed misses infs and nans.
>
> I think we can open another issue for this. I tried to add inf and nan to the current tests, but some of the existing tests fail because of different behavior when outputting inf or nan. If we want to make everything match numpy/scipy, some more work is needed.

Can you elaborate on this? It's important we be compatible with NumPy or we may have to go through a painful deprecation process later. We already test several torch functions against NumPy for extremal values like this and comparing their values directly hasn't been a problem so far.

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D22406912

@xiaomengy requested a review from apaszke as a code owner July 12, 2020 21:54
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D22406912

@ngimel (Collaborator) left a comment

Looks good now, modulo a small documentation fix and a question about symbolic gradient.

A collaborator left a comment

Why are you defining a symbolic gradient? This would cause scripted modules to run this implementation instead of the built-in backward, and without a fuser this will be much slower. And I'm not sure whether the fuser (which of them?) is able to efficiently fuse this backward.
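For reference (standard calculus, not from this PR): since logit(x) = log(x / (1 - x)), its derivative is 1 / (x * (1 - x)), which is what any symbolic or built-in backward for logit ultimately computes. A minimal sketch checking autograd against that analytic gradient, with arbitrary input values:

```python
import torch

x = torch.rand(5, dtype=torch.float64, requires_grad=True)
torch.logit(x).sum().backward()

# Analytic derivative: d/dx log(x / (1 - x)) = 1 / (x * (1 - x))
expected = 1.0 / (x.detach() * (1.0 - x.detach()))
assert torch.allclose(x.grad, expected)
```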

@xiaomengy (Contributor, Author)

I don't know why, but if I didn't add it here there would be an error that the autodiff node number didn't match. So I added it here to address that.

A collaborator left a comment

Which test was failing?

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D22406912

@mruberry (Collaborator) left a comment

Cool! Thanks @BIT-silence! Really appreciate you taking the time to make the edge case behavior consistent and to validate it!

Summary:
Pull Request resolved: pytorch#41062

Add torch.logit function

Test Plan: buck test mode/dev-nosan //caffe2/test:torch -- "logit"

Differential Revision: D22406912

fbshipit-source-id: 0c66ce78b2ae82dfbe39a221f857c0330a7acb86
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D22406912

@facebook-github-bot (Contributor)

This pull request has been merged in 80d5b37.

@xiaomengy deleted the export-D22406912 branch July 14, 2020 04:22

Labels: fb-exported, Merged, module: numpy
