
Conversation

@heitorschueroff (Contributor) commented on Sep 24, 2020:

Stack from ghstack:

Performance is unchanged on CPU, and on CUDA it is only 1-1.05x slower. This change is necessary for the future nan ops, including nan(min|max|median).

Differential Revision: D23908796
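For context, here is a minimal Python sketch of the NaN-aware semantics described above. The function name and masking logic are illustrative, reconstructed from this description rather than taken from the diff: since nan != nan, a plain equality mask would select nothing when the reduction returns NaN, so the gradient should instead flow to the NaN positions; otherwise it is split evenly among the elements equal to the reduced value.

```python
import torch

def evenly_distribute_backward_sketch(grad, input, value):
    # Illustrative sketch, not the PR's actual diff. `value` is the scalar
    # result of a full reduction such as max/min/median over `input`.
    if torch.isnan(value).item():
        # nan != nan, so an equality mask would be empty; route the
        # gradient to the NaN positions instead.
        mask = torch.isnan(input)
    else:
        mask = input == value
    # Split the incoming gradient evenly among all matching positions.
    return mask * (grad / mask.sum())
```

For example, with input = torch.tensor([1., float('nan'), 2.]) and value = input.max() (which is NaN), the mask selects only index 1, so the whole gradient lands there instead of turning into NaN everywhere.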

@heitorschueroff changed the title from "Corrected handling of nan for evenly_distribute_backward" to "Fixed handling of nan for evenly_distribute_backward" on Sep 24, 2020.
@heitorschueroff added the module: autograd label (Related to torch.autograd, and the autograd engine in general) on Sep 24, 2020.
@albanD (Collaborator) left a comment:
Looks good!

Maybe add a test to make sure it now has the right behavior on both CPU and CUDA?

@heitorschueroff (Contributor, Author) replied:

> Looks good!
>
> Maybe add a test to make sure it now has the right behavior on both CPU and CUDA?

I'll add some tests now, but more will come in the PR for the nan ops.
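A sketch of the kind of test being discussed; the helper name and the expected gradients are assumptions based on the behavior described above, not code from the PR:

```python
import torch

def _check_evenly_distributed_grad(device):
    # Duplicated maxima: the gradient of max() is split evenly among them.
    x = torch.tensor([1.0, 3.0, 3.0], device=device, requires_grad=True)
    x.max().backward()
    assert torch.allclose(x.grad, torch.tensor([0.0, 0.5, 0.5], device=device))

    # With a NaN present, max() returns NaN; the fixed backward should send
    # the whole gradient to the NaN position rather than producing NaN grads.
    y = torch.tensor([1.0, float("nan"), 2.0], device=device, requires_grad=True)
    y.max().backward()
    assert torch.equal(y.grad, torch.tensor([0.0, 1.0, 0.0], device=device))

devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
for device in devices:
    _check_evenly_distributed_grad(device)
```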

@dr-ci (bot) commented on Sep 24, 2020:

💊 CI failures summary and remediations

As of commit 95b1638 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI and has been revised 10 times.

@albanD (Collaborator) left a comment:
LGTM!

heitorschueroff added a commit that referenced this pull request on Sep 24, 2020 (Differential Revision: [D23908796](https://our.internmc.facebook.com/intern/diff/D23908796)).

heitorschueroff added a commit that referenced this pull request on Sep 28, 2020.
@codecov (bot) commented on Sep 28, 2020:

Codecov Report

Merging #45280 into gh/heitorschueroff/16/base will not change coverage.
The diff coverage is n/a.

@@                     Coverage Diff                     @@
##           gh/heitorschueroff/16/base   #45280   +/-   ##
===========================================================
  Coverage                       68.05%   68.05%           
===========================================================
  Files                             396      396           
  Lines                           51232    51232           
===========================================================
  Hits                            34865    34865           
  Misses                          16367    16367           

Full report at Codecov. Last update e4950a0...95b1638.

@facebook-github-bot (Contributor) commented:
@heitorschueroff merged this pull request in 96f8755.

@facebook-github-bot deleted the gh/heitorschueroff/16/head branch on October 2, 2020 at 14:17.