
Conversation

@IvanYashchuk
Collaborator

Updated cholesky_backward to work correctly for complex input.
Note that the current implementation gives the conjugate of what JAX would return. @anjali411 is that the correct thing to do?
Ref. #44895
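
For context, the complex generalization replaces the transposes in the standard Cholesky VJP with conjugate transposes. Below is a minimal, unbatched Python sketch of that formula (an illustration only, not the PR's actual `cholesky_backward` implementation; it assumes a lower-triangular `L` with `A = L L^H`):

```python
import torch

def cholesky_backward_sketch(grad_L, L):
    # Phi: project onto lower-triangular matrices, halving the diagonal.
    phi = torch.matmul(L.transpose(-2, -1).conj(), grad_L).tril()
    phi.diagonal(dim1=-2, dim2=-1).mul_(0.5)
    # Invert L with a triangular solve instead of a general inverse.
    eye = torch.eye(L.shape[-1], dtype=L.dtype, device=L.device)
    L_inv = torch.triangular_solve(eye, L, upper=False).solution
    grad_A = L_inv.transpose(-2, -1).conj() @ phi @ L_inv
    # A is Hermitian, so return the Hermitian part of the gradient.
    return 0.5 * (grad_A + grad_A.transpose(-2, -1).conj())
```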

@IvanYashchuk added the module: complex and module: linear algebra labels Sep 24, 2020
@dr-ci

dr-ci bot commented Sep 24, 2020

💊 CI failures summary and remediations

As of commit 75051fa (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


@IvanYashchuk IvanYashchuk removed the request for review from apaszke September 24, 2020 13:50
```python
x = x + torch.eye(dims[-1])  # shift the spectrum to keep x positive definite

gradcheck(func, [x, upper])
# TODO: gradgradcheck does not work
```
Contributor

gradgradcheck doesn't work because backward for matmul needs to be updated as well.

Contributor

I think it might be better to create a separate PR to update the backward for mm, addmm, and matmul. Let me know if you would like me to create that PR, or if you would like to work on it yourself (in which case feel free to add me as a reviewer).

Collaborator Author

Alright, I suspected as much, thanks. I will make the PR.
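
To make the thread above concrete: under the adopted convention, the matmul backward is the familiar real-case formula with conjugate transposes in place of plain transposes. A rough sketch (my illustration, not the follow-up PR's code):

```python
import torch

def matmul_backward_sketch(grad_out, a, b):
    # For C = A @ B and a real-valued loss L, the convention adopted in
    # #41857 gives dL/dA = dL/dC @ B^H and dL/dB = A^H @ dL/dC.
    grad_a = grad_out @ b.transpose(-2, -1).conj()
    grad_b = a.transpose(-2, -1).conj() @ grad_out
    return grad_a, grad_b
```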

@anjali411
Contributor

anjali411 commented Sep 24, 2020

> Note that the current implementation gives the conjugate of what JAX would return. @anjali411 is that the correct thing to do?

Yeah, as per the discussion in #41857 we decided to switch to the TensorFlow convention, so that's expected. Check out the doc note update for more context.
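
A quick illustration of the convention difference (a sketch; the specific function and values are mine, not from the PR):

```python
import torch

z = torch.tensor(1.0 + 2.0j, requires_grad=True)
loss = (z.conj() * z).real  # |z|^2, a real-valued loss
loss.backward()
# PyTorch (TensorFlow convention): z.grad = dL/dx + i*dL/dy = 2z = 2+4j.
# JAX would return the conjugate, 2*conj(z) = 2-4j.
print(z.grad)
```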

@codecov

codecov bot commented Sep 24, 2020

Codecov Report

Merging #45267 into master will decrease coverage by 0.00%.
The diff coverage is n/a.


```diff
@@            Coverage Diff             @@
##           master   #45267      +/-   ##
==========================================
- Coverage   68.06%   68.05%   -0.01%
==========================================
  Files         393      393
  Lines       50918    50918
==========================================
- Hits        34655    34654       -1
- Misses      16263    16264       +1
```
| Impacted Files | Coverage Δ |
| --- | --- |
| torch/testing/_internal/expecttest.py | 77.55% <0.00%> (-1.03%) ⬇️ |


@zou3519 added the triaged label Sep 25, 2020
Contributor

@anjali411 left a comment

Looks good overall. Let's enable gradgradcheck for the test with complex input once the matmul gradient formula is updated.

Contributor

@facebook-github-bot left a comment

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@anjali411 merged this pull request in f47fd0e.

@IvanYashchuk IvanYashchuk deleted the cholesky-complex-vjp branch October 1, 2020 12:58
facebook-github-bot pushed a commit that referenced this pull request Oct 15, 2020
…45737)

Summary:
This PR updates derivatives for a few functions so that `gradgradcheck` for `torch.cholesky` passes ([ref](#45267 (comment))).

Some tests (that call `bmm_cuda`) fail with `RuntimeError: _th_bmm_out not supported on CUDAType for ComplexDouble` until PR #42553 is merged.

Ref. #33152

Pull Request resolved: #45737

Reviewed By: bdhirsh

Differential Revision: D24279917

Pulled By: anjali411

fbshipit-source-id: 7b696d2cfc2ef714332c2e3e5d207e257be67744
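
With those derivative updates in place, the TODO in the test excerpt above could presumably be resolved along these lines (a sketch under assumptions: `torch.cholesky` as the function under test and a hand-built Hermitian positive-definite input; not the repository's actual test):

```python
import torch
from torch.autograd import gradcheck, gradgradcheck

dims = (5, 5)
upper = False
a = torch.randn(*dims, dtype=torch.cdouble)
# Hermitian positive-definite input: a @ a^H + I has eigenvalues >= 1.
x = (a @ a.transpose(-2, -1).conj()
     + torch.eye(dims[-1], dtype=torch.cdouble)).requires_grad_()

func = lambda x, upper: torch.cholesky(x, upper=upper)
gradcheck(func, [x, upper])
gradgradcheck(func, [x, upper])  # enabled by the complex mm/addmm/matmul backward (#45737)
```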
