Add torch.logit function #41062
Conversation
This pull request was exported from Phabricator. Differential Revision: D22406912
link #37060
Thanks for putting a PR up, @BIT-silence. I'll get you a review ASAP. Want to add a couple helpful links now before I lose them:
💊 CI failures summary and remediations: as of commit a2dc859, 💚 looks good so far — there are no failures yet. (This comment was automatically generated by Dr. CI; see the Dr. CI page for details.)
ngimel left a comment:
Small comment about the type of the scalar arg.
mruberry left a comment:
Hey @BIT-silence! What's here looks pretty good, but there are a few more things to do:
- add a doc entry in docs/source/tensors.rst and docs/source/torch.rst
- add the actual docs in torch/_tensor_docs.py and torch/_torch_docs.py (a minimal sketch follows this list)
- test the backward works as expected by adding an entry to torch/testing/_internal/common_method_invocations.py
- add an entry in torch/_overrides.py so people can override this function when using the torch_function interface
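As a rough illustration of the docs item above, a minimal sketch of what an entry in torch/_torch_docs.py could look like, following the add_docstr pattern used throughout that file. The wording and signature rendering here are placeholders, not the actual documentation added by this PR:

```python
# Sketch only: illustrative docstring, not the entry that landed.
add_docstr(torch.logit,
           r"""
logit(input, eps=None, out=None) -> Tensor

Returns a new tensor with the logit of the elements of :attr:`input`.
When :attr:`eps` is not None, the input is clamped to [eps, 1 - eps].

.. math::
    y_i = \ln\left(\frac{z_i}{1 - z_i}\right), \quad z_i = \text{input}_i
""")
```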
Also, what do you think about adding an explicit test for extremal values in test_torch? Like -inf, -5, 0, 1, 2, inf, NaN?
Instead of adding separate tests for extremal values, perhaps it makes sense to add those to TorchMathTest so that all functions there are tested? It already has "interesting" finite values, but it indeed misses infs and nans.
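For concreteness, a rough sketch of the kind of extremal-value check being discussed, comparing against scipy.special.logit as the reference. The value list and the standalone form are illustrative; this is not the actual TorchMathTest entry:

```python
# Illustrative standalone check, not the test added to the suite.
import numpy as np
import torch
from scipy import special

vals = [-float("inf"), -5.0, 0.0, 0.5, 1.0, 2.0, float("inf"), float("nan")]
t = torch.tensor(vals, dtype=torch.float64)
expected = special.logit(np.array(vals, dtype=np.float64))
actual = torch.logit(t)

# NaNs never compare equal, so equal_nan=True is needed for the out-of-domain inputs.
assert np.allclose(actual.numpy(), expected, equal_nan=True)
```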
Force-pushed a1b0368 to a456d79.
Force-pushed a456d79 to 02362f4.
Force-pushed 02362f4 to c995b72.
Force-pushed c995b72 to 232f0bd.
I think we can consider opening another issue for this. I tried adding inf and nan to the current tests, but some of the existing tests will fail because of the different behavior when outputting inf or nan. If we want to make everything match numpy/scipy, some more work is probably needed.
Can you elaborate on this? It's important we be compatible with NumPy or we may have to go through a painful deprecation process later. We already test several torch functions against NumPy for extremal values like this and comparing their values directly hasn't been a problem so far.
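For reference, SciPy's documented behavior at the edge points, which is the target if full NumPy/SciPy compatibility is wanted:

```python
from scipy import special

special.logit(0.0)   # -inf
special.logit(1.0)   #  inf
special.logit(-5.0)  #  nan (outside [0, 1])
special.logit(2.0)   #  nan (outside [0, 1])
```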
Force-pushed 232f0bd to 6b46499.
Force-pushed 0c7485d to 3b3c865.
Force-pushed 3b3c865 to 6967a9c.
Force-pushed 6967a9c to 69bfe1a.
Force-pushed 69bfe1a to 09692ce.
Force-pushed 09692ce to 7da7494.
ngimel left a comment:
Looks good now, modulo a small documentation fix and a question about symbolic gradient.
Why are you defining a symbolic gradient? This would cause a scripted module to run this implementation instead of the built-in backward, and without a fuser this will be much slower. And I'm not sure whether the fuser (which of them?) is able to efficiently fuse this backward.
I don't know why, but if I didn't add it here, there was an error that the autodiff node number didn't match. So I added it here to satisfy that test.
Which test was failing?
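For context on the backward being discussed: analytically, d/dx logit(x) = 1/(x·(1−x)), so (ignoring the optional eps clamping) the reference gradient is a one-liner. The snippet below is a sanity check of that formula against autograd; it is not the actual derivatives.yaml or symbolic-script entry from this PR:

```python
import torch

def logit_backward_reference(grad_output, x):
    # d/dx log(x / (1 - x)) = 1 / (x * (1 - x)); ignores eps clamping.
    return grad_output / (x * (1 - x))

x = torch.empty(8, dtype=torch.float64).uniform_(0.05, 0.95).requires_grad_()
y = torch.logit(x)
y.backward(torch.ones_like(y))
assert torch.allclose(x.grad, logit_backward_reference(torch.ones_like(x), x.detach()))
```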
Force-pushed 7da7494 to 289417f.
mruberry left a comment:
Cool! Thanks @BIT-silence! Really appreciate you taking the time to make the edge case behavior consistent and to validate it!
Summary: Pull Request resolved: pytorch#41062 — Add torch.logit function
Test Plan: buck test mode/dev-nosan //caffe2/test:torch -- "logit"
Differential Revision: D22406912
fbshipit-source-id: 0c66ce78b2ae82dfbe39a221f857c0330a7acb86
Force-pushed 289417f to a2dc859.
This pull request has been merged in 80d5b37.
Summary: Add torch.logit function
Test Plan: buck test mode/dev-nosan //caffe2/test:torch -- "logit"
Differential Revision: D22406912
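For completeness, a minimal usage example of the function added here. The printed values are approximate, and the eps argument performs the input clamping described in the docs:

```python
import torch

x = torch.tensor([0.1, 0.5, 0.9])
print(torch.logit(x))  # tensor([-2.1972,  0.0000,  2.1972])

# eps clamps the input to [eps, 1 - eps] before taking log(x / (1 - x)),
# keeping the result finite at the boundaries 0 and 1.
print(torch.logit(torch.tensor([0.0, 1.0]), eps=1e-6))
```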