Conversation

@t-vi (Collaborator) commented Jul 29, 2018

When only some of the outputs of unbind are used in the backward pass,
the gradients for the others are undefined. This patch sets those
to zero in to_tensor_list.

Fixes: #9977


```cpp
std::vector<Tensor> to_tensor_list(const variable_list& variables) {
  return fmap(variables, [](const Variable &v) { return static_cast<Tensor>(v); });
}
```
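The quoted to_tensor_list maps each Variable straight to a Tensor, so an undefined gradient passes through undefined. The idea of the fix can be sketched in pure Python (hypothetical helper names, not the actual C++ implementation): model an undefined gradient as None and replace it with zeros of the matching shape before returning the list.

```python
def to_tensor_list(grads, reference_shapes):
    """Sketch of the zero-filling idea from this PR (hypothetical helper,
    not the real C++ code): a gradient that is undefined because its
    unbind output was never used in the backward pass is modeled as
    None and replaced by a zero tensor of the corresponding shape."""
    def zeros(shape):
        # Build a nested list of zeros for the given shape tuple.
        if not shape:
            return 0.0
        return [zeros(shape[1:]) for _ in range(shape[0])]

    return [g if g is not None else zeros(shape)
            for g, shape in zip(grads, reference_shapes)]

# Two outputs of unbind; only the first one received a gradient.
print(to_tensor_list([[1.0, 2.0], None], [(2,), (2,)]))
# → [[1.0, 2.0], [0.0, 0.0]]
```

With this guard in place, downstream code that sums or stacks the gradient list no longer has to special-case missing entries.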

...to make the code clearer.
Based on a review comment by SsnL, thank you!
@li-roy added the "ready for review" label Jul 31, 2018
@ssnl (Collaborator) commented Aug 8, 2018

The failure looks like it might be related to this change. Can you check whether it reproduces locally?

@t-vi (Collaborator, Author) commented Aug 9, 2018

The TestCuda.test_index failure? I cannot reproduce it, and I suspect it is just the build failure du jour (a randomly chosen adjacent build had it, too). Could you ask for a rebuild, please? If that fails, I'll take another look.

@ssnl (Collaborator) commented Aug 9, 2018

@pytorchbot retest this please

@t-vi (Collaborator, Author) commented Aug 9, 2018

Thanks! Seems to have worked. :)

@facebook-github-bot (Contributor) left a comment

soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Aug 10, 2018
Summary:
When only part of the outputs of unbind are used in a backward,
the gradients for the others are undefined. This sets those
to zero in to_tensor_list.

Fixes: pytorch#9977
Pull Request resolved: pytorch#9995

Differential Revision: D9239610

Pulled By: soumith

fbshipit-source-id: eb8d1b3f2b4e615449f9d856e10b946910df9147
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018

Labels: open source, ready for review (this tag is deprecated)

Successfully merging this pull request may close these issues: torch.stack() gradient errors in 0.4.1

7 participants