
Conversation

@weiyangfb
Contributor

Summary

- raise error for backward of log1p in a sparse tensor
self.assertEqual(out._denseDims(), 0)

def test_log1p(self):
import math


AT_ASSERT(t.is_sparse());

if (isSameTensor(r, t)) {
  r._values().log1p_();
}
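Since log1p(0) = 0, a unary op like this can be applied to the stored values alone and the implicit zeros stay zero. A minimal pure-Python sketch of that idea (the COO-style index/value lists here are hypothetical illustrations, not ATen API):

```python
import math

# Hypothetical COO-style sparse vector: only nonzero entries are stored.
indices = [1, 4, 7]
values = [0.5, 2.0, -0.25]

# log1p(0) == 0, so implicit zeros are unaffected and the op can be
# applied to the stored values alone, as r._values().log1p_() does.
new_values = [math.log1p(v) for v in values]
```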



if (isSameTensor(r, t)) {
  // we don't have in-place log1p because coalesce() is not in-place
  AT_CHECK(r.is_coalesced(), "log1p only applies to coalesced sparse tensor!");
}
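The coalesced check matters because an uncoalesced tensor may store duplicate entries at the same index whose logical value is their sum, and log1p is not additive. A small plain-Python sketch of the failure mode, assuming two duplicate stored entries:

```python
import math

# Two stored entries at the same index of an uncoalesced tensor;
# the logical value at that index is their sum.
a, b = 0.5, 0.5
logical = a + b

per_entry = math.log1p(a) + math.log1p(b)  # log1p applied per stored value
coalesced = math.log1p(logical)            # log1p of the true value

# log1p(a) + log1p(b) != log1p(a + b), so the per-entry result is wrong.
assert not math.isclose(per_entry, coalesced)
```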


Tensor log1p_backward(const Tensor& grad, const Tensor& self) {
  if (self.is_sparse()) {
    AT_ERROR("log1p of a sparse tensor is made to be non-differentiable since ",
             "local gradient of zero is 1 / (0 + 1) = 1 and it makes the tensor dense");
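The error exists because the derivative of log1p is 1 / (x + 1), which is 1 (not 0) at the implicit zeros, so the gradient of a sparse input would be dense. A quick sketch of that reasoning:

```python
import math

def log1p_grad(x):
    # d/dx log1p(x) = 1 / (x + 1)
    return 1.0 / (x + 1.0)

# At an implicit zero the local gradient is 1 / (0 + 1) = 1, not 0,
# so backward through sparse log1p would densify the gradient.
grad_at_zero = log1p_grad(0.0)
```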


@ezyang (Contributor) left a comment


This looks great, thanks!

@facebook-github-bot (Contributor) left a comment


@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

// log1p(SparseTensor)
// --------------------------------------------------------------------

// TODO: add in-place variant



zdevito pushed a commit to zdevito/ATen that referenced this pull request Jun 28, 2018
Summary:
- fixes log1p at #8853
- added log1p of sparse tensors in ATen
- made log1p of a sparse tensor non-differentiable and raise an error, because the local derivative of log1p for a zero element is 1 / (0 + 1) = 1, which would make the gradient tensor dense
Closes pytorch/pytorch#8969

Reviewed By: ezyang

Differential Revision: D8677491

fbshipit-source-id: 8363a613519de4bc75eda087ccd20a3eb2d18126
@weiyangfb weiyangfb deleted the sparse_log1p branch July 3, 2018 03:24
zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 13, 2018
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
facebook-github-bot pushed a commit that referenced this pull request Sep 26, 2018
Summary:
A couple of questions:

1) I used the log1p implementation in #8969 as a guide, especially for testing. I'm not sure what the ```skipIfROCM``` annotation is for, so I'm unsure whether I need it for my test.

2) I implemented the branching logic in the narrow function itself; is this the right place to do so? I noticed that there are a number of places where sparse-specific logic is handled with just an if statement in this file. Or should I implement a separate dispatch in native_functions.yaml, as was done for log1p?

And of course, I'm happy to make any other updates/changes that I may have missed as well. This is my first PR to the project.
Pull Request resolved: #11342

Differential Revision: D9978430

Pulled By: weiyangfb

fbshipit-source-id: e73dc20302ab58925afb19e609e31f4a38c634ad