Conversation

t-vi (Collaborator) commented May 3, 2018

Maybe this is useful.

ssnl (Collaborator) left a comment
Thanks! Can you add inf cases in the test? This looks great otherwise.

ssnl (Collaborator) left a comment

Good to merge if test passes

soumith (Contributor) left a comment

Simon reviewed, so accepting.

ssnl (Collaborator) previously requested changes May 3, 2018

Actually, could you write a custom backward? The autograd backward isn't very efficient. The formula should just be exp(input) / output for the 1d case.
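
With output = logsumexp(input), that gradient is exp(input - output), i.e. exp(input) divided by the exponentiated output (the softmax of the input). A minimal sketch of such a closed-form backward, checked against autograd; the helper name here is illustrative, not the PR's actual ATen code:

```python
import torch

def logsumexp_backward(grad_output, input, output, dim):
    # d/dx_i logsumexp(x) = exp(x_i - logsumexp(x)); unsqueeze the reduced
    # dim so the incoming gradient broadcasts over each softmax entry.
    return grad_output.unsqueeze(dim) * (input - output.unsqueeze(dim)).exp()

x = torch.randn(3, 4, dtype=torch.float64, requires_grad=True)
y = torch.logsumexp(x, dim=1)          # the op added by this PR
g = torch.randn_like(y)
y.backward(g)
manual = logsumexp_backward(g, x.detach(), y.detach(), dim=1)
print(torch.allclose(x.grad, manual))  # expected: True
```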

ezyang (Contributor) commented May 4, 2018

Windows error seems legit, if a bit strange:

22:53:58 ======================================================================
22:53:58 ERROR: test_logsumexp (test_autograd.TestAutograd)
22:53:58 ----------------------------------------------------------------------
22:53:58 Traceback (most recent call last):
22:53:58   File "C:\Jenkins\workspace\pytorch-builds\pytorch-win-ws2016-cuda9-cudnn7-py3-test\test\test_autograd.py", line 3290, in do_test
22:53:58     check(name)
22:53:58   File "C:\Jenkins\workspace\pytorch-builds\pytorch-win-ws2016-cuda9-cudnn7-py3-test\test\test_autograd.py", line 3225, in check
22:53:58     output_variable, (self_variable,) + args_variable)
22:53:58   File "C:\Jenkins\workspace\pytorch-builds\pytorch-win-ws2016-cuda9-cudnn7-py3-test\test\test_autograd.py", line 3151, in run_grad_and_gradgrad_checks
22:53:58     test_case.assertTrue(gradcheck(apply_method, input_variables, eps=1e-6, atol=PRECISION))
22:53:58   File "C:\Jenkins\workspace\pytorch-builds\pytorch-win-ws2016-cuda9-cudnn7-py3-test\test\torch\autograd\gradcheck.py", line 182, in gradcheck
22:53:58     analytical, reentrant, correct_grad_sizes = get_analytical_jacobian(tupled_inputs, o)
22:53:58   File "C:\Jenkins\workspace\pytorch-builds\pytorch-win-ws2016-cuda9-cudnn7-py3-test\test\torch\autograd\gradcheck.py", line 94, in get_analytical_jacobian
22:53:58     jacobian = make_jacobian(input, output.numel())
22:53:58 RuntimeError: Attempted to cast a Tensor to a Variable, but the dynamic type of the value is not Variable.
22:53:58 
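
For context, the failing check above is torch.autograd.gradcheck, which compares the analytical Jacobian of the op against finite differences. A rough sketch of running the same check locally, assuming the torch.logsumexp from this PR (double precision as gradcheck expects; the tolerances here are illustrative):

```python
import torch
from torch.autograd import gradcheck

x = torch.randn(3, 4, dtype=torch.float64, requires_grad=True)
# Compares the analytical Jacobian of logsumexp with numerical
# finite differences; prints True if they agree within tolerance.
print(gradcheck(lambda t: torch.logsumexp(t, dim=1), (x,), eps=1e-6, atol=1e-4))
```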

ezyang (Contributor) commented May 4, 2018

I looked at the code, and I don't see why this test would have failed only on Windows. It looks like it will need debugging; possibly an error elsewhere in PyTorch is being exercised here.

t-vi (Collaborator, Author) commented May 4, 2018

@ezyang Thank you for looking into that. Interestingly, I got failures on Linux (gcc 7.3) as well when I wanted to test the backward; incidentally, it didn't fail yesterday, before I added the backward. It turned out that I had forgotten to specify variants: function; given that TestAutograd.test_logsumexp now runs fine, maybe that was the cause.
Update: so only the #7202 OOM failure is left.

ssnl (Collaborator) commented May 4, 2018

@pytorchbot retest this please

ssnl (Collaborator) commented May 4, 2018

This needs rebasing

ezyang (Contributor) commented May 13, 2018

@pytorchbot retest this please

t-vi (Collaborator, Author) commented May 13, 2018

@ezyang I think the PR now has two spurious failures instead of one.

ezyang (Contributor) commented May 13, 2018

@pytorchbot retest this please

t-vi (Collaborator, Author) commented May 13, 2018

Thank you! There seems to be a different failure now, but I'm afraid I'm not sure what is going on.

zou3519 (Contributor) commented May 14, 2018

@pytorchbot retest this please

ezyang merged commit e1148db into pytorch:master on May 15, 2018
zou3519 (Contributor) commented May 15, 2018

Thanks @t-vi for the contribution!

vadimkantorov (Contributor) commented Jun 1, 2018

@t-vi @zou3519 The docstring isn't shown in the online docs; the search comes up empty: https://pytorch.org/docs/master/search.html?q=logsumexp&check_keywords=yes&area=default

Maybe an autofunction entry is missing.

Also, it seems PyTorch now has two copies of logsumexp; the second one is in distributions: #8031

weiyangfb pushed a commit to weiyangfb/pytorch that referenced this pull request Jun 11, 2018
* Implement logsumexp (fixes pytorch#2591)

* Add logsumexp_backward, fix _out declaration.

Thank you Simon and Edward for your comments!
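
For reference, a minimal numerically stable sketch of what the new operator computes (the usual max-shift trick); this is only an illustration, not the ATen kernel added by the PR:

```python
import torch

def logsumexp_reference(x, dim, keepdim=False):
    # Subtract the per-slice max before exponentiating so exp() cannot
    # overflow; an infinite max is masked to 0 to avoid inf - inf = nan.
    m, _ = x.max(dim=dim, keepdim=True)
    m = torch.where(torch.isinf(m), torch.zeros_like(m), m)
    out = m + (x - m).exp().sum(dim=dim, keepdim=True).log()
    return out if keepdim else out.squeeze(dim)

x = torch.randn(2, 5, dtype=torch.float64)
print(torch.allclose(logsumexp_reference(x, dim=1), torch.logsumexp(x, dim=1)))
```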