
Conversation


@ssnl ssnl commented Apr 12, 2018

Addresses #6535

I'll do a Variable -> Tensor codemod in a separate PR.
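As a quick illustration of the flag this PR adds back, here is a minimal sketch (tensor names made up) assuming the documented semantics of allow_unused: with allow_unused=True, torch.autograd.grad returns None for inputs that the outputs do not depend on, instead of raising an error.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)  # y is never used below

out = (x * 2).sum()

# Without allow_unused=True this call raises an error, because y is not
# reachable from out in the autograd graph.
gx, gy = torch.autograd.grad(out, (x, y), allow_unused=True)
print(gx)  # tensor([2., 2., 2.])
print(gy)  # None
```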

if (!grad_fn) {
  output_edges.emplace_back();
} else {
  THPUtils_assert(grad_fn,



@apaszke apaszke left a comment


Two minor things, and it's good to merge.

I actually can't remember why I changed that, sorry to hear that it caused those problems. I know for sure that I wanted to remove only_inputs, but I can't think of any reason why allow_unused got cut.

It might have been that I'd heard many comments saying that returning None is what most people expect, and that this is also what other libraries do, but I think it's ok to be strict in this case. Unused inputs often come up when taking higher-order derivatives.
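As a rough sketch of the higher-order-derivative case mentioned above (names made up, not code from this PR): the first derivative of x * y with respect to x is just y, so differentiating it with respect to x a second time hits an input that was never used in the graph.

```python
import torch

x = torch.randn((), requires_grad=True)
y = torch.randn((), requires_grad=True)

out = x * y

# First-order gradient, keeping the graph so it can be differentiated again.
gx, = torch.autograd.grad(out, x, create_graph=True)

# gx equals y, so its graph does not involve x at all; without
# allow_unused=True this second call errors out on the unused input.
ggx, = torch.autograd.grad(gx, x, allow_unused=True)
print(ggx)  # None
```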

unsigned char keep_graph = 0;
unsigned char create_graph = 0;
PyObject *inputs = nullptr;
unsigned char allow_unreachable = 1;


THPObjectPtr py_outputs {PyTuple_New(num_inputs)};
if (!py_outputs) return nullptr;
for (int i = 0; i < num_inputs; i++) {
  THPUtils_assert(allow_unreachable || outputs[i].defined(), "One of the differentiated "


test/test_jit.py Outdated
      l1_ge, recording_inputs, create_graph=True, allow_unused=True)
  l2_ge = (allSum(grads_ge) * l1_ge)
- grads2_ge = torch.autograd.grad(l2_ge, recording_inputs)
+ grads2_ge = torch.autograd.grad(l2_ge, recording_inputs, allow_unused=True)


@apaszke apaszke merged commit e01569a into pytorch:master Apr 12, 2018
@ssnl ssnl deleted the allow_unused branch April 12, 2018 19:31
@julienroyd

Hi @apaszke,

What is the reason for deprecating the only_inputs argument of grad()?

I think it was useful for doing partial backpropagation while still filling the gradient buffers along the way. Without it, is there another way to halt the backpropagation flow at a given node when calling backward()?

Thanks!
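For later readers, a rough sketch (my own, not an answer from this thread) of the only_inputs=False semantics being asked about, based on the behavior documented before the deprecation; the tensor names are made up.

```python
import torch

w1 = torch.randn(3, requires_grad=True)
w2 = torch.randn(3, requires_grad=True)
loss = (w1 * w2).sum()

# Old semantics: grad(loss, w1, only_inputs=False) returned d(loss)/d(w1)
# and, as a side effect, accumulated d(loss)/d(w2) into w2.grad.
#
# With only_inputs effectively always True, the returned gradient still works:
g1, = torch.autograd.grad(loss, w1, retain_graph=True)

# while filling the .grad buffers is now done with a separate backward()
# call, which accumulates into every leaf (w1 included):
loss.backward()
```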
