
Conversation

@vishwakftw
Contributor

Adds a `compute_uv` argument that defaults to `True` for optionally computing the singular vectors during SVD.

Closes #12420 .

- I didn't change the implementation of the nuclear_norm function,
  since it was already a differentiable op. The implementation of
  matrix_rank has been changed, though.
```diff
 Tensor svd_backward(const std::vector<torch::autograd::Variable> &grads, const Tensor& self,
-                    bool some, const Tensor& raw_u, const Tensor& sigma, const Tensor& raw_v) {
+                    bool some, bool compute_uv, const Tensor& raw_u, const Tensor& sigma, const Tensor& raw_v) {
+  if (!compute_uv) {
```
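For intuition on what the new flag buys, NumPy's `numpy.linalg.svd` exposes the same switch; this is a minimal sketch with NumPy standing in for the new `torch.svd(..., compute_uv=False)` (same underlying LAPACK routines, not the PR's actual code):

```python
import numpy as np

A = np.arange(6, dtype=float).reshape(2, 3)

# Full SVD: singular vectors plus singular values.
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values only -- skips accumulating U and V entirely.
S_only = np.linalg.svd(A, compute_uv=False)

print(np.allclose(S, S_only))  # True: identical values, less work
```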


@ssnl
Collaborator

ssnl commented Oct 10, 2018

I wish there were an easy way to dispatch to different native_functions based on GradMode and the inputs' requires_grad flags....

@vishwakftw
Contributor Author

@ssnl Is this good to go?

```diff
-                    bool some, const Tensor& raw_u, const Tensor& sigma, const Tensor& raw_v) {
+                    bool some, bool compute_uv, const Tensor& raw_u, const Tensor& sigma, const Tensor& raw_v) {
+  AT_CHECK(compute_uv,
+           "cannot compute backward for SVD without computing the singular matrices in the forward pass");
```
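The check reflects the math: even the gradient of the singular values alone is built from the singular vectors (dσᵢ = uᵢᵀ dA vᵢ), so backward has nothing to work with when `U` and `V` were skipped in forward. A finite-difference sanity check of that identity, again with NumPy as a stand-in (the tolerance and sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Perturb A along a random direction E and compare the finite-difference
# change in the singular values against the identity d(sigma_i) = u_i^T E v_i.
eps = 1e-6
E = rng.standard_normal(A.shape)
S_plus = np.linalg.svd(A + eps * E, compute_uv=False)
numeric = (S_plus - S) / eps
analytic = np.array([U[:, i] @ E @ Vt[i] for i in range(3)])

print(np.allclose(numeric, analytic, atol=1e-4))  # True (distinct singular values)
```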


@ssnl
Collaborator

ssnl commented Oct 10, 2018

I also wonder how much faster this is. Could you do some benchmarking on CPU (MKL) & GPU (MAGMA)? Thanks!

@vishwakftw
Contributor Author

CPU benchmarks:

| size | `compute_uv=True` (master) | `compute_uv=False` (this PR) |
| --- | --- | --- |
| 100 x 200 | 1.88 ms ± 17.9 µs per loop | 1.11 ms ± 2.57 µs per loop |
| 200 x 400 | 8.67 ms ± 499 µs per loop | 4.65 ms ± 591 µs per loop |
| 500 x 250 | 13.6 ms ± 661 µs per loop | 7.1 ms ± 259 µs per loop |
| 800 x 400 | 39.7 ms ± 2.72 ms per loop | 22.8 ms ± 1.21 ms per loop |

GPU benchmarks can be expected to show a similar speedup, since both backends use the divide-and-conquer algorithm (gesdd).
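The CPU numbers above can be reproduced in spirit with a quick script; this sketch uses NumPy (also backed by LAPACK's gesdd) rather than torch, so the absolute timings will differ with machine and BLAS build:

```python
import timeit
import numpy as np

# Rough benchmark in the spirit of the table above; matrix size matches
# the 500 x 250 row, loop count is an arbitrary choice.
A = np.random.default_rng(0).standard_normal((500, 250))

t_full = timeit.timeit(lambda: np.linalg.svd(A), number=20)
t_vals = timeit.timeit(lambda: np.linalg.svd(A, compute_uv=False), number=20)

# Convert total seconds for 20 loops into ms per loop.
print(f"full SVD: {t_full * 50:.2f} ms/loop  values only: {t_vals * 50:.2f} ms/loop")
```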

```diff
-                    bool some, const Tensor& raw_u, const Tensor& sigma, const Tensor& raw_v) {
+                    bool some, bool compute_uv, const Tensor& raw_u, const Tensor& sigma, const Tensor& raw_v) {
+  AT_CHECK(compute_uv,
+           "Setting compute_uv to false doesn't compute singular matrices, and hence we cannot compute backward ",
```


@vishwakftw
Contributor Author

@ssnl Is this good to go now?

Collaborator

@ssnl ssnl left a comment

LGTM, thanks!

Contributor

@facebook-github-bot facebook-github-bot left a comment

SsnL is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Oct 15, 2018
Summary:
Adds a `compute_uv` argument that defaults to `True` for optionally computing the singular vectors during SVD.

Closes pytorch/pytorch#12420 .
Pull Request resolved: pytorch/pytorch#12517

Differential Revision: D10384554

Pulled By: SsnL

fbshipit-source-id: 704998a257afa815eda901b8ae830e8a661695be
@vishwakftw vishwakftw deleted the svd-compute_uv branch November 10, 2018 03:43


Development

Successfully merging this pull request may close these issues:

- [feature request] Add option for only computing the singular values of a matrix

5 participants