Add log1p for sparse tensor #8969
Conversation
Raise an error for the backward of log1p on a sparse tensor
test/test_sparse.py (Outdated)

```python
self.assertEqual(out._denseDims(), 0)

def test_log1p(self):
    import math
```
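For orientation, here is a minimal sketch of how the new operator can be exercised from Python. It is not the actual test added in test/test_sparse.py; the tensor contents and the use of the torch.sparse_coo_tensor constructor are assumptions based on a recent torch build.

```python
import torch

# A small sparse COO vector with three stored values at sorted indices.
i = torch.tensor([[0, 1, 2]])
v = torch.tensor([1.0, 2.0, 3.0])
s = torch.sparse_coo_tensor(i, v, (4,)).coalesce()

# log1p transforms only the stored values; unstored entries stay zero
# because log1p(0) == 0, so the result remains sparse.
out = torch.log1p(s)
assert torch.allclose(out._values(), torch.log1p(v))
```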
```cpp
AT_ASSERT(t.is_sparse());

if (isSameTensor(r, t)) {
  r._values().log1p_();
```
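Conceptually, the branch above only rewrites the value buffer of the sparse tensor. A hedged Python check of the property this relies on (the tensor contents are illustrative, not taken from the PR):

```python
import torch

i = torch.tensor([[0, 2], [1, 3]])
v = torch.tensor([0.5, 4.0])
s = torch.sparse_coo_tensor(i, v, (3, 4)).coalesce()

# Applying log1p value-wise must agree with applying it to the dense
# form, since every unstored entry maps to log1p(0) == 0.
assert torch.allclose(torch.log1p(s).to_dense(), torch.log1p(s.to_dense()))
```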
```cpp
if (isSameTensor(r, t)) {
  // don't have in-place log1p because coalesce() is not in-place
  AT_CHECK(r.is_coalesced(), "log1p only applies to coalesced sparse tensor!");
```
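The coalesced-input requirement exists because a value-wise transform is only meaningful once duplicate indices have been summed: log1p is not additive, so transforming raw uncoalesced values would give the wrong result. A small illustration with made-up numbers:

```python
import torch

# Duplicate index 0: the logical entry at position 0 is 1.0 + 2.0 = 3.0.
i = torch.tensor([[0, 0, 1]])
v = torch.tensor([1.0, 2.0, 3.0])
s = torch.sparse_coo_tensor(i, v, (2,))
assert not s.is_coalesced()

# Transforming the raw values first would be wrong, because
# log1p(1) + log1p(2) != log1p(1 + 2).
wrong = torch.log1p(torch.tensor(1.0)) + torch.log1p(torch.tensor(2.0))
right = torch.log1p(torch.tensor(3.0))
print(wrong.item(), right.item())  # ~1.792 vs ~1.386

# Coalescing first sums the duplicates, then log1p applies per entry.
out = torch.log1p(s.coalesce())
```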
```cpp
Tensor log1p_backward(const Tensor& grad, const Tensor& self) {
  if (self.is_sparse()) {
    AT_ERROR("log1p of a sparse tensor is made to be non-differentiable since ",
             "local gradient of zero is 1 / (0 + 1) = 1 and it makes the tensor dense");
```
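From Python, the check above surfaces as a runtime error once autograd reaches the sparse log1p: the local derivative d/dx log1p(x) = 1 / (x + 1) equals 1 at every unstored zero, so the gradient would be dense. A rough sketch of triggering it (the exact error text, and whether this call sequence is supported, may vary by version):

```python
import torch

i = torch.tensor([[0, 1]])
v = torch.tensor([1.0, 2.0])
s = torch.sparse_coo_tensor(i, v, (3,), requires_grad=True)

out = torch.log1p(s.coalesce())

# Backward through the sparse log1p is expected to raise, because the
# gradient 1 / (x + 1) is 1 at every implicit zero and would be dense.
try:
    out.to_dense().sum().backward()
except RuntimeError as e:
    print("backward raised:", e)
```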
ezyang
left a comment
This looks great, thanks!
facebook-github-bot
left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
```cpp
// log1p(SparseTensor)
// --------------------------------------------------------------------

// TODO: add in-place variant
```
Summary:
- fixes the log1p issue reported in #8853
- added log1p for sparse tensors in ATen
- made log1p of a sparse tensor non-differentiable and raise an error, because the local derivative of log1p for a zero element is 1 / (0 + 1) = 1, which would make the gradient dense

Closes pytorch/pytorch#8969

Reviewed By: ezyang
Differential Revision: D8677491
fbshipit-source-id: 8363a613519de4bc75eda087ccd20a3eb2d18126
Summary: A couple of questions:
1) I used the log1p implementation in #8969 as a guide, especially for testing. I'm not sure what the `skipIfROCM` annotation is for, so I'm unsure whether I need it for my test.
2) I implemented the branching logic in the narrow function itself; is this the right place to do so? I noticed that there are a number of places where sparse-specific logic is handled with just an if statement in this file. Or should I implement a separate dispatch in native_functions.yaml, as was done for log1p?
And of course, I'm happy to make any other updates/changes that I may have missed as well. This is my first PR to the project.

Pull Request resolved: #11342
Differential Revision: D9978430
Pulled By: weiyangfb
fbshipit-source-id: e73dc20302ab58925afb19e609e31f4a38c634ad