Conversation

@t-vi (Collaborator) commented on Sep 21, 2018

We align CUDA multinomial without replacement with the CPU behaviour by being more NaN tolerant.

Fixes: #9062
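
For illustration, here is a minimal sketch of how one might compare the CPU and CUDA paths that this change aligns. The weight vector, the sample count, and the assumption that zero-probability categories expose the divergence are illustrative, not taken from #9062; the PR itself only states that the CUDA kernel is made more NaN tolerant so that sampling without replacement matches the CPU behaviour.

```python
import torch

# Hypothetical comparison sketch: draw without replacement from a distribution
# in which most categories have zero probability, on CPU and (if available) on
# CUDA. The concrete weights and sample count are illustrative only.
weights = torch.tensor([0.0, 10.0, 0.0, 3.0, 0.0])

torch.manual_seed(0)
cpu_samples = torch.multinomial(weights, num_samples=2, replacement=False)
print("cpu :", cpu_samples)

if torch.cuda.is_available():
    # After this change the CUDA path should behave like the CPU path rather
    # than raising a CUDA error; the drawn indices themselves may still differ
    # between devices because the random generators are independent.
    torch.cuda.manual_seed_all(0)
    cuda_samples = torch.multinomial(weights.cuda(), num_samples=2, replacement=False)
    print("cuda:", cuda_samples)
```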

@t-vi changed the title from "Align multinomial without replacement to CPU behaviour" to "Align cuda multinomial without replacement to CPU behaviour" on Sep 21, 2018
@t-vi (Collaborator, Author) commented on Sep 21, 2018

I think at least some of the failures are unrelated (they are in the tests for distributed data parallel).

@facebook-github-bot (Contributor) left a comment

soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 21, 2018
Summary:
We do this by being more NaN tolerant.

Fixes: #9062
Pull Request resolved: pytorch/pytorch#11933

Differential Revision: D9991129

Pulled By: soumith

fbshipit-source-id: c99b04462c1bee90d00eeabb0c111de12f855f4d
iotamudelta pushed a commit to ROCm/pytorch that referenced this pull request Sep 21, 2018
…11933)

Summary:
We do this by being more NaN tolerant.

Fixes: pytorch#9062
Pull Request resolved: pytorch#11933

Differential Revision: D9991129

Pulled By: soumith

fbshipit-source-id: c99b04462c1bee90d00eeabb0c111de12f855f4d

Successfully merging this pull request may close these issues.

CUDA error with torch.multinomial
