Updated cholesky_backward for complex inputs
#45267
Conversation
💊 CI failures summary (Dr. CI): as of commit 75051fa, 💚 looks good so far; there are no failures yet. (This comment was automatically generated by Dr. CI and has been revised 7 times.)
Force-pushed from a730ccc to 76b75cc.
test/test_autograd.py (outdated)
x = x + torch.eye(dims[-1])

gradcheck(func, [x, upper])
# TODO: gradgradcheck does not work
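For context, a standalone version of this check might look like the following sketch (hypothetical names; it assumes `gradcheck` supports complex inputs, which this PR series enables). The function hermitizes its input so that `gradcheck`'s independent entrywise perturbations still yield a valid Cholesky input:

```python
import torch
from torch.autograd import gradcheck

def func(root, upper=False):
    # Re-hermitize inside the function: gradcheck perturbs each entry
    # independently, which would otherwise break Hermitian symmetry.
    x = 0.5 * (root + root.transpose(-1, -2).conj())
    return torch.cholesky(x, upper)

n = 4
root = torch.randn(n, n, dtype=torch.cdouble)
# Shift by a multiple of the identity so the matrix is positive definite.
root = root @ root.conj().t() + n * torch.eye(n, dtype=torch.cdouble)
root.requires_grad_()

gradcheck(func, [root])  # raises on failure, returns True on success
```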
gradgradcheck doesn't work because backward for matmul needs to be updated as well.
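For reference, the second-order check in question would be (reusing the hypothetical `func` and `root` from the sketch above):

```python
from torch.autograd import gradgradcheck

# gradgradcheck differentiates the backward pass itself, so it also exercises
# the backward formulas of mm/matmul used inside cholesky_backward, which did
# not yet support complex inputs at the time of this thread.
gradgradcheck(func, [root])
```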
I think it might be better to create a separate PR to update the backward for mm, addmm, and matmul. Let me know if you would like me to create one, or if you would like to work on it (in which case feel free to add me as a reviewer).
Alright, I suspected that; thanks. I will make the PR.
Yeah, as per the discussion in #41857, we decided to switch to the TensorFlow convention, so that's expected. Check out the doc note update for more context.
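For background, the difference between the conventions comes down to standard Wirtinger calculus (not specific to this PR). Writing $z = x + iy$:

$$
\frac{\partial f}{\partial z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right),
\qquad
\frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right).
$$

For a real-valued loss $L$ these are conjugates of each other, $\partial L/\partial \bar{z} = \overline{\partial L/\partial z}$, so a framework returning one convention produces exactly the conjugate of a framework returning the other.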
Codecov Report
@@ Coverage Diff @@
## master #45267 +/- ##
==========================================
- Coverage 68.06% 68.05% -0.01%
==========================================
Files 393 393
Lines 50918 50918
==========================================
- Hits 34655 34654 -1
- Misses 16263 16264 +1
Continue to review full report at Codecov.
anjali411 left a comment:
Looks good overall. Let's enable the gradgradcheck for the test with complex input once the matmul gradient formula is updated.
facebook-github-bot left a comment:
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@anjali411 merged this pull request in f47fd0e.
…45737) Summary: This PR updates derivatives for a few functions so that `gradgradcheck` for `torch.cholesky` passes ([ref](#45267 (comment))). Some tests (that call `bmm_cuda`) fail with `RuntimeError: _th_bmm_out not supported on CUDAType for ComplexDouble` until PR #42553 is merged. Ref. #33152 Pull Request resolved: #45737 Reviewed By: bdhirsh Differential Revision: D24279917 Pulled By: anjali411 fbshipit-source-id: 7b696d2cfc2ef714332c2e3e5d207e257be67744
Updated `cholesky_backward` to work correctly for complex input. Note that the current implementation gives the conjugate of what JAX would return. @anjali411 is that the correct thing to do?
Ref. #44895
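For reference, the formula involved is the standard Cholesky reverse-mode rule (as in Iain Murray's note on differentiating the Cholesky decomposition), with plain transposes replaced by conjugate transposes. A minimal Python sketch, not the exact ATen implementation:

```python
import torch

def cholesky_backward_sketch(grad_L, L):
    # A = L @ L^H with L lower triangular; grad_L is dLoss/dL.
    # phi(X): keep the lower triangle of X and halve its diagonal.
    phi = (L.conj().transpose(-2, -1) @ grad_L).tril()
    phi.diagonal(dim1=-2, dim2=-1).mul_(0.5)
    # grad_A = L^{-H} @ phi @ L^{-1}; an explicit inverse is used here only
    # for clarity, a real implementation would use triangular solves.
    L_inv = torch.inverse(L)
    grad_A = L_inv.conj().transpose(-2, -1) @ phi @ L_inv
    # Project onto Hermitian matrices, since the input A is Hermitian.
    return 0.5 * (grad_A + grad_A.conj().transpose(-2, -1))
```

Writing `conj()` on every transpose lets the same formula cover both real and complex inputs, since `conj()` is a no-op for real tensors.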