
Conversation


@peterbell10 peterbell10 commented Aug 21, 2020

Stack from ghstack:

Differential Revision: D23298654


dr-ci bot commented Aug 21, 2020

💊 CI failures summary and remediations

As of commit a356b80 (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


This comment was automatically generated by Dr. CI. This comment has been revised 27 times.

@peterbell10 peterbell10 requested a review from zou3519 August 22, 2020 16:00
("log_normal_", ()),
("exponential_", ()),
("geometric_", (0.5,)),
("normal_", ()),
@zou3519 zou3519 Aug 24, 2020

We should add a test for bernoulli_, too, since we're turning on memory checking for it:

auto iter = TensorIterator::nullary_op(self);

Some versions of bernoulli go through different code paths :/ we should probably try to keep them consistent in a single PR:

auto iter = TensorIteratorConfig()
.add_output(self)
.add_input(p)
.check_all_same_dtype(false)
.build();
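For context, the condition that PyTorch's memory-overlap check guards against can be sketched in plain Python. This is a simplified illustration of the idea (the helper name and signature are hypothetical, not PyTorch's actual implementation): an in-place op on a view is unsafe when some dimension with more than one element has stride 0, because distinct indices then alias the same storage location.

```python
def has_internal_overlap(sizes, strides):
    # Conservative heuristic: a view can write the same memory location
    # twice if any dimension with more than one element has stride 0,
    # e.g. a tensor produced by expand(). Hypothetical helper, for
    # illustration only.
    return any(n > 1 and s == 0 for n, s in zip(sizes, strides))

# Shape (3, 4) expanded from (1, 4): stride 0 in dim 0 -> overlapping,
# so an in-place fill like bernoulli_ should be rejected.
print(has_internal_overlap((3, 4), (0, 1)))  # True
# An ordinary contiguous (3, 4) tensor with strides (4, 1): no overlap.
print(has_internal_overlap((3, 4), (4, 1)))  # False
```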


@zou3519 zou3519 left a comment

I went through all use cases of nullary_op. Those look fine to me; we should add a test case for bernoulli_.


zou3519 commented Aug 24, 2020

Hmm, actually, I see the bernoulli_ test code in the next diff up. Is that because it is only in the next diff that we finish moving all of the bernoulli code paths to checking memory overlap? If so then I think it is OK to leave it there.

I am probably going to try to land #43418 - #43421 together after this stack gets rebased, and I'll take a look at #43422 and #43423 tomorrow.

@zou3519 zou3519 left a comment

Pre-emptive approve because the changes remaining are not that big.


peterbell10 commented Aug 25, 2020

Hmm, actually, I see the bernoulli_ test code in the next diff up. Is that because it is only in the next diff that we finish moving all of the bernoulli code paths to checking memory overlap?

There are different code paths depending on whether the probability is a tensor. One goes through nullary_op and is checked in this PR; the other doesn't:

auto iter = TensorIteratorConfig()
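The split described above can be made concrete with a simplified pure-Python sketch of the dispatch (the structure and return values are hypothetical illustrations, not the actual ATen code):

```python
def bernoulli_(self, p):
    # Simplified sketch of the dispatch described above; an
    # illustration, not the actual ATen implementation.
    if hasattr(p, "shape"):
        # Tensor probability: built via TensorIteratorConfig()
        # .add_output(self).add_input(p) -- a separate code path, not
        # covered by the nullary_op overlap check in this PR.
        return "tensor-probability path"
    # Scalar probability: goes through TensorIterator::nullary_op(self),
    # which after this PR checks the output for internal memory overlap.
    return "scalar-probability path"

class FakeTensor:
    shape = (3,)

print(bernoulli_(object(), 0.5))           # scalar-probability path
print(bernoulli_(object(), FakeTensor()))  # tensor-probability path
```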


zou3519 commented Aug 25, 2020

There are different code paths depending on whether the probability is a tensor. One goes through nullary_op and is checked in this PR; the other doesn't.

Okay, I see. This is fine as-is, then.


@zou3519 merged this pull request in c177d25.

@facebook-github-bot facebook-github-bot deleted the gh/peterbell10/7/head branch September 1, 2020 14:17