Conversation

@nikitaved
Collaborator

@nikitaved nikitaved commented Jun 17, 2020

Implements (batched) matrix exponential. Fixes #9983.

The algorithm follows:

 Bader, P.; Blanes, S.; Casas, F.
 Computing the Matrix Exponential with an Optimized Taylor Polynomial Approximation.
 Mathematics 2019, 7, 1174.
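
For readers unfamiliar with the approach: the paper's method combines scaling-and-squaring with a degree-optimized Taylor polynomial. Below is a naive numpy sketch of the general scaling-and-squaring plus truncated-Taylor idea only — the paper (and this PR) use an optimized polynomial evaluation scheme with far fewer matrix products, chosen per norm range:

```python
import numpy as np

def matrix_exp_taylor(A, degree=18):
    """Illustrative scaling-and-squaring with a truncated Taylor series.

    NOT the paper's optimized scheme; just the underlying idea:
    scale A so the series converges fast, sum the series, square back.
    """
    n = A.shape[-1]
    norm = np.linalg.norm(A, 1)
    # Choose s so that ||A / 2^s|| <= 1.
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 0 else 0
    X = A / (2.0 ** s)
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, degree + 1):
        term = term @ X / k          # accumulates X^k / k!
        result = result + term
    for _ in range(s):               # exp(A) = exp(A / 2^s)^(2^s)
        result = result @ result
    return result
```

The optimized variant in the paper reorganizes the polynomial evaluation (Paterson–Stockmeyer style factorizations) to cut the number of matrix multiplications, which is where the speedup over naive Taylor or Padé comes from.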

@nikitaved nikitaved marked this pull request as draft June 17, 2020 13:22
@dr-ci

dr-ci bot commented Jun 17, 2020

💊 CI failures summary and remediations

As of commit a333444 (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 1/2 non-CircleCI failure(s)

XLA failure

Job pytorch_xla_linux_bionic_py3_6_clang9_build is failing. Please create an issue with title prefixed by [PT_BREAK] in pytorch/xla and link to this PR. If you have questions, please reach out to @ailzhang / @dlibenzi / @JackCaoG.


ci.pytorch.org: 1 failed



@nikitaved nikitaved marked this pull request as ready for review June 22, 2020 19:22
@nikitaved
Collaborator Author

Ready for review. TODO: add reference/comment about the backward formula.
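
For reference while that comment is pending: one standard way to obtain the derivative of the matrix exponential is the block-matrix identity for the Fréchet derivative, exp([[A, E], [0, A]]) = [[exp(A), L(A, E)], [0, exp(A)]]. A hedged numpy sketch — the helper names are mine, the truncated Taylor inside is only illustrative, and this is not necessarily the exact formula the PR ends up using:

```python
import numpy as np

def _expm_taylor(A, terms=30):
    # Plain truncated Taylor series; fine for the small, well-scaled
    # matrices in this illustration only.
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        term = term @ A / k
        out = out + term
    return out

def expm_frechet(A, E):
    """Fréchet derivative L(A, E) of expm at A in direction E.

    Uses the 2n x 2n block trick:
    expm([[A, E], [0, A]]) = [[expm(A), L(A, E)], [0, expm(A)]],
    so L(A, E) is the top-right n x n block.
    """
    n = A.shape[0]
    M = np.block([[A, E], [np.zeros((n, n)), A]])
    return _expm_taylor(M)[:n, n:]
```

In a backward pass, the gradient with respect to A is then obtained by feeding the incoming gradient through this linear map (transposed appropriately).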

@nikitaved nikitaved changed the title from [WIP] Matrix exponential implementation to Introducing Matrix exponential Jun 22, 2020
@nikitaved
Collaborator Author

The CUDA batched version is quite slow because of the batch loop. TODO: make mexp dispatch over CPU/CUDA with a native batch loop.

@vishwakftw vishwakftw self-requested a review June 23, 2020 14:56
@nikitaved nikitaved marked this pull request as draft June 23, 2020 14:59
@nikitaved
Collaborator Author

nikitaved commented Jun 23, 2020

It is not possible to use the high-level Tensor functions on CUDA, so I am not sure how to make the batch loop device-agnostic while keeping high-level functional support.

@nikitaved nikitaved marked this pull request as ready for review June 23, 2020 16:22
@nikitaved
Collaborator Author

nikitaved commented Jun 23, 2020

I checked the BatchedLinearAlgebra.cpp file, and I only see serial for-loops over batch dimensions, so it appears we are fine with a simple for-loop here.
This (and related code) can probably be reused later for the batch dimensions.
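
The serial batch-loop pattern described above can be sketched as follows (`apply_batched` is a hypothetical illustration of the idea, not an actual PyTorch/ATen helper):

```python
import numpy as np

def apply_batched(f, A):
    """Apply f, defined on a single (n, n) matrix, over all batch dims.

    Mirrors the simple serial-loop pattern: flatten the leading batch
    dimensions, loop over the flat list of matrices, restore the shape.
    """
    batch_shape = A.shape[:-2]
    flat = A.reshape(-1, A.shape[-2], A.shape[-1])
    out = np.stack([f(M) for M in flat])
    return out.reshape(*batch_shape, *out.shape[-2:])
```

The trade-off is that each matrix is processed sequentially; on CUDA this serializes kernel launches, which is exactly why the batch loop discussed above is slow there.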

@nikitaved nikitaved force-pushed the nikved/mexp branch 7 times, most recently from 7220f33 to c5f6d12 Compare June 25, 2020 16:02
@gchanan gchanan added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jun 25, 2020
@nikitaved nikitaved force-pushed the nikved/mexp branch 5 times, most recently from 0dd4c0d to 96c0c57 Compare June 29, 2020 19:31
@ezyang
Contributor

ezyang commented Aug 7, 2020

Just to summarize outstanding action items:

  • see if you can use diag_embed to directly write out batched identity matrices
  • remove unnecessary pinned memory and nonblocking transfer (or convince us that it is profitable)
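
On the first item: batched identity matrices can be written out directly by embedding a tensor of ones along the diagonal, which is what torch.diag_embed does. A numpy stand-in for the same idea (the diag_embed function below is my sketch, not the real torch API):

```python
import numpy as np

def diag_embed(d):
    """Place the last dim of d on the diagonal of new trailing (n, n) dims,
    mimicking torch.diag_embed with its default offset and dims."""
    n = d.shape[-1]
    out = np.zeros((*d.shape, n), dtype=d.dtype)
    idx = np.arange(n)
    out[..., idx, idx] = d
    return out

# A (2, 3) batch of 4x4 identity matrices, built in one call:
batched_I = diag_embed(np.ones((2, 3, 4)))
```

This avoids allocating one identity at a time inside a loop and keeps the construction vectorized over the batch dimensions.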

Contributor

@facebook-github-bot facebook-github-bot left a comment

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ezyang
Contributor

ezyang commented Aug 17, 2020

In case you care: internal lint failed with:

This file should only include printable US-ASCII bytes.
Use Unicode escapes like \u to represent disallowed values.
Find the Unicode escape at: https://www.internalfb.com/intern/unicode/string/%E2%80%9C fbcode/caffe2/aten/src/ATen/native/LinearAlgebra.cpp Line 1175
This file should only include printable US-ASCII bytes.
Use Unicode escapes like \u to represent disallowed values.
Find the Unicode escape at: https://www.internalfb.com/intern/unicode/string/%E2%80%9D fbcode/caffe2/aten/src/ATen/native/LinearAlgebra.cpp Line 1175

I'm going to try to land it anyway.

@nikitaved
Collaborator Author

Let me fix that; it is just the “ and ” quote symbols, I presume.

Contributor

@facebook-github-bot facebook-github-bot left a comment

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


Labels

open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Development

Successfully merging this pull request may close these issues.

[feature request] Add matrix functions