Reuse intermediate results over multiple backwards grad_inputs #3526
Conversation
aten/src/ATen/Declarations.cwrap (outdated review thread; comments marked as off-topic)
I also pushed a fairly hefty piece of documentation on the top of
colesbury left a comment:
Nice lgtm!
tools/autograd/derivatives.yaml (outdated review thread; comments marked as off-topic)
Force-pushed from e4316fe to 4131990.
Pushed some PR comment fixes, and squashed some lint commits. I'll merge this when CI passes.
The first two commits are just a little bit of refactoring.
The next two support defining the gradient for multiple inputs simultaneously in derivatives.yaml, and then give an example of how to use it via `atan2`. The basic model is that instead of saying `output1: returns a tensor` and `output2: returns a tensor`, you just say `output1, output2: returns a tuple of tensors`.
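For concreteness, here is a rough sketch of what the combined form could look like on the C++ side, with the shared denominator computed once and each grad_input gated on the mask. The entry shape shown in the comment, the `atan2_backward` name, and the `std::array<bool, 2>` mask parameter are illustrative assumptions, not necessarily exactly what this PR lands:

```cpp
#include <array>
#include <tuple>

#include <ATen/ATen.h>

using at::Tensor;

// Sketch only: a single backward covering both inputs of atan2(self, other).
// derivatives.yaml would map both inputs to this one call, roughly:
//   self, other: atan2_backward(grad, self, other, output_mask)
std::tuple<Tensor, Tensor> atan2_backward(
    const Tensor& grad, const Tensor& self, const Tensor& other,
    std::array<bool, 2> output_mask) {
  // d atan2(self, other) / d self  =  other / (self^2 + other^2)
  // d atan2(self, other) / d other = -self  / (self^2 + other^2)
  // The reciprocal is the shared intermediate: computed once, used by both.
  auto recip = (self * self + other * other).reciprocal();
  return std::make_tuple(
      output_mask[0] ? grad * other * recip : Tensor(),
      output_mask[1] ? grad * -self * recip : Tensor());
}
```

Note that in this hand-written form the shared `recip` is computed even when only one grad_input is requested; the mask only skips the final per-input products, which is part of why the compiler-style idea discussed below is appealing.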
I am not entirely sure I have done the `output_mask` handling idiomatically. This definitely seems like an opportunity for some compiler-y techniques. For example, one way to implement this, assuming you have a working compiler, is to define the computation once assuming every output is needed, and then for every `output_mask` permutation, DCE the unneeded outputs. (This is, of course, assuming that there isn't a totally different algorithm that is applicable when you can remove required grads; in the case of `atan2`, this is definitely not the case.) In any case, I don't plan to do this in this PR.

Looking at derivatives.yaml, there are some more opportunities for reusing intermediate computations with `addbmm` and `dist` (which already have gradients). I'll do these once I confirm the pattern looks good.
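To make the `dist` opportunity concrete: for the default p = 2, `dist(self, other) = ||self - other||_2`, and the gradient with respect to `other` is just the negation of the gradient with respect to `self`, so essentially all of the work can be shared. The following is a hedged sketch for that case only; the function name and the use of the saved forward `result` are assumptions for illustration, not this PR's actual code:

```cpp
#include <array>
#include <tuple>

#include <ATen/ATen.h>

using at::Tensor;

// Sketch only, p = 2: d ||self - other||_2 / d self = (self - other) / result,
// and the gradient w.r.t. other is its negation.
std::tuple<Tensor, Tensor> dist_backward_sketch(
    const Tensor& grad, const Tensor& self, const Tensor& other,
    const Tensor& result, std::array<bool, 2> output_mask) {
  Tensor grad_self;
  if (output_mask[0] || output_mask[1]) {
    // Shared intermediate: computed once, whichever grad_inputs are needed.
    grad_self = grad * (self - other) / result;
  }
  return std::make_tuple(
      output_mask[0] ? grad_self : Tensor(),
      output_mask[1] ? -grad_self : Tensor());
}
```

Under a strictly one-expression-per-input scheme, each grad_input would redo the subtraction and scaling independently; combining them into one tuple-returning definition is exactly the kind of reuse this PR is after.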