
Conversation


@apaszke apaszke commented Mar 17, 2017

Thanks to @ChangYong-Oh for the original implementation.
@apaszke apaszke mentioned this pull request Mar 17, 2017
```python
if zero_idx.dim() == 0:
    # No zero entries: standard formula grad * prod(x) / x_i
    return grad_output.mul(self.result).expand_as(input).div(input)
elif zero_idx.size(0) > 1:
    # Two or more zeros: every partial product contains a zero,
    # so the whole gradient is zero
    return grad_output.new(self.input_size).zero_()
else:
```

```python
# Exactly one zero along the reduced dimension: for each affected
# slice, the gradient is nonzero only at the zero's position.
single_zero_idx = slice_zero_count.eq(1).nonzero()
for idx in single_zero_idx:
    idx_tuple = tuple(idx.cpu())
    # Index the full slice through self.dim that contains the zero
    input_idx_tuple = idx_tuple[:self.dim] + (slice(0, None),) + idx_tuple[self.dim + 1:]
```
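For context, the case analysis above can be sketched in plain Python (a hypothetical standalone illustration of the gradient rule, not the PR's code; `prod_grad` is an invented name):

```python
from math import prod

def prod_grad(x, grad_output=1.0):
    # Gradient of y = prod(x) with respect to each x[i], handling
    # zeros explicitly instead of dividing by x[i].
    zeros = [i for i, v in enumerate(x) if v == 0]
    if not zeros:
        # No zeros: d(prod)/dx_i = prod(x) / x_i
        total = prod(x)
        return [grad_output * total / v for v in x]
    if len(zeros) > 1:
        # Two or more zeros: every partial product still contains a zero
        return [0.0] * len(x)
    # Exactly one zero at index k: only d(prod)/dx_k is nonzero,
    # equal to the product of all the other entries
    k = zeros[0]
    grad = [0.0] * len(x)
    grad[k] = grad_output * prod(v for v in x if v != 0)
    return grad
```

For example, `prod_grad([2.0, 0.0, 3.0])` gives a gradient that is zero everywhere except at the zero entry, where it equals the product of the remaining elements.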

@soumith soumith merged commit 7e46eb1 into master Mar 17, 2017
@apaszke apaszke deleted the prod_fix branch March 17, 2017 22:28
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Nov 5, 2021
* Adds a boolean option to `shift` to disable padding. Shifting without
padding only sets values in the range that corresponds to the valid range
of the input tensor.
* Separate read and write predicates for block and grid reductions.
* For block reductions, this is necessary when reduction axes may not start
at zero. For grid reductions, see issue pytorch#1049.
jaglinux added a commit to jaglinux/pytorch that referenced this pull request Jul 5, 2022
Increase system memory requirement for TestShapeOpsCUDA.test_flip_large_tensor_cuda

Signed-off-by: Jagadish Krishnamoorthy <jagdish.krishna@gmail.com>
jaglinux added a commit to jaglinux/pytorch that referenced this pull request Jul 19, 2022
Increase system memory requirement for TestShapeOpsCUDA.test_flip_large_tensor_cuda

Signed-off-by: Jagadish Krishnamoorthy <jagdish.krishna@gmail.com>

Successfully merging this pull request may close these issues.

Gradient formula for prod can't handle 0 inputs

4 participants