
Conversation

@t-vi
Collaborator

t-vi commented Sep 18, 2018

Native batched potrf. One item of #7500.
Also batch tril/triu in native.

This doesn't separate out the single-matrix versions, but I don't think it is too inefficient.

This builds on the batch linear algebra framework of #9949 by @vishwakftw.
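For context, a rough usage sketch of what batched potrf and batched tril/triu enable (the op was spelled torch.potrf at the time; the sketch uses torch.linalg.cholesky, the name the batched factorization is exposed under in current PyTorch):

import torch

# A batch of symmetric positive definite matrices: 4 matrices of size 3x3.
a = torch.randn(4, 3, 3)
spd = a @ a.transpose(-1, -2) + 1e-3 * torch.eye(3)

# Batched Cholesky: one lower-triangular factor per matrix in the batch.
L = torch.linalg.cholesky(spd)  # shape (4, 3, 3)

# tril/triu likewise operate on the last two dimensions of a batched tensor.
lower = torch.tril(torch.randn(4, 3, 3))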

@t-vi
Collaborator Author

t-vi commented Sep 18, 2018

With apologies to @ethanluoyc for not being aware of #9623 before; there, only the backward and tests were missing.

@t-vi
Collaborator Author

t-vi commented Sep 18, 2018

Hmm. This looks like something in #9949 (@vishwakftw).
Edit: The signature of random_fullrank_matrix_distinct_singular_value changed.

14:00:34   File "test_autograd.py", line 3083, in <module>
14:00:34     ('inverse', random_fullrank_matrix_distinct_singular_value(S, [2, 3]),
14:00:34   File "/var/lib/jenkins/workspace/test/common.py", line 660, in random_fullrank_matrix_distinct_singular_value
14:00:34     return torch.stack(all_matrices).reshape(*(batches + (l, l)))
14:00:34 RuntimeError: shape '[2, 3]' is invalid for input of size 150
14:00:34 Traceback (most recent call last):
14:00:34   File "test/run_test.py", line 392, in <module>
14:00:34     main()
14:00:34   File "test/run_test.py", line 384, in main
14:00:34     raise RuntimeError(message)
14:00:34 RuntimeError: test_autograd failed!
14:00:35 + cleanup

@vishwakftw
Contributor

#9949 requires a rebase, yes.

auto m = self.size(-1);
auto self_batched_ = self.view({-1, n, m});                  // collapse all leading batch dims into one
auto self_batched = self_batched_.accessor<scalar_t, 3>();   // CPU accessor over (batch, n, m)
auto result_batched_ = result.view({-1, n, m});

Several review comments here were marked as off-topic.

@ssnl
Collaborator

ssnl commented Sep 25, 2018

Thanks @t-vi; @zou3519 says that he will be taking a look. Could you rebase on top of master, please?

@t-vi
Collaborator Author

t-vi commented Sep 25, 2018

Sure thing. Where should I put the code? @vishwakftw moved the Gesv.* to BatchLinearAlgebra.* in his batch inverse PR (#9949) and I put my stuff in there. Should I keep the renaming?

@Balandat
Contributor

Is the plan to add a batched version of potrs as well? That doesn't seem to be part of this PR.

@t-vi
Collaborator Author

t-vi commented Oct 14, 2018 via email

@vishwakftw
Contributor

potrs is there for GPU, and the batched version too.

@vishwakftw
Contributor

@t-vi could you rebase?

@t-vi
Collaborator Author

t-vi commented Nov 11, 2018

I looked into this; rebasing seems to be more effort than starting over.

t-vi closed this Nov 11, 2018
n = n % stride_batch;
} else if (stride_batch > stride_min) {
n = n - n % stride_max + n % stride_batch; // eliminate batch part
} // if stride_batch < stride_min, the divisions below will eliminate the batch part
Contributor

@t-vi Could you please give a small explanation of the logic here? It doesn't seem very obvious to me, unfortunately. Thank you.

Collaborator Author

Here you have stride_max > stride_batch > stride_min, and you want to eliminate the batch contribution from the offset. Subtracting n % stride_max eliminates both the batch and the "lower" contribution, so adding n % stride_batch puts the lower part back. As a result, exactly the batch offset is removed.
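For concreteness, a small worked check of that middle branch (the layout, strides, and indices below are made up, chosen so that stride_max > stride_batch > stride_min):

# Hypothetical layout: shape (2, 3, 4) with strides (4, 8, 1), as obtained from
# a contiguous (3, 2, 4) tensor with its first two dimensions transposed.
stride_batch, stride_row, stride_col = 4, 8, 1  # stride_max = 8, stride_min = 1
b, i, j = 1, 2, 3
n = b * stride_batch + i * stride_row + j * stride_col  # 4 + 16 + 3 = 23

# n % stride_max drops both the batch and the "lower" (column) contribution;
# n % stride_batch puts the column contribution back.
within_matrix = n - n % stride_row + n % stride_batch  # 23 - 7 + 3 = 19
assert within_matrix == i * stride_row + j * stride_col  # exactly the batch offset removed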

Contributor

I got it. Thank you very much.



8 participants