only Tensors of floating point dtype can require gradients (see #7021) #7034
Conversation
Something is up with the pytorch-linux-xenial-py3-clang5-asan test. It seems to hang for 30 minutes in test_multi_drop (test_utils.TestDataLoader).
Force-pushed from c7e2383 to 04fcbd6
@t-vi Probably just the test; I've seen intermittent timeouts there.
Indeed. It worked when I changed the commit message. There is an issue about it, too.
test/test_autograd.py (Outdated)

```python
for f in [f1, f2, f3]:
    a = torch.ones(1, dtype=dt, device='cuda' if cuda else 'cpu')
    if dt.is_floating_point:
        f()
```
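For context, here is a self-contained sketch of what this loop appears to exercise. The setters `f1`/`f2`/`f3` and the dtype list are assumptions, since the surrounding test code is not shown in this excerpt:

```python
import torch

def check_requires_grad(dt, cuda=False):
    device = 'cuda' if cuda else 'cpu'
    a = torch.ones(1, dtype=dt, device=device)

    # Assumed setters: the three ways a tensor can come to require gradients.
    def f1():
        a.requires_grad = True

    def f2():
        a.requires_grad_(True)

    def f3():
        torch.ones(1, dtype=dt, device=device, requires_grad=True)

    for f in [f1, f2, f3]:
        if dt.is_floating_point:
            f()  # floating point dtypes may require gradients
        else:
            try:
                f()
            except RuntimeError:
                pass  # non-floating point dtypes must raise
            else:
                raise AssertionError("expected a RuntimeError for %s" % dt)

for dt in [torch.float32, torch.float64, torch.int32, torch.int64]:
    check_requires_grad(dt)
```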
So now macOS has a "CI changed" failure, but I think it works.
The OS X failure is unrelated, and I think it is fixed on master. @pytorchbot retest this please
Thanks @t-vi!
This didn't change the constructors. If you implemented this in those constructors, it would get a little awkward when combined with type inference, because you don't know the dtype of the tensor that will come out: for example, `convert_to_tensors([0., 1.], [2., 3.])` would not throw an error with `requires_grad=True`, but `convert_to_tensors([0., 1.], [2, 3])` would. Sometimes you want this fail-fast behavior, but sometimes not.
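The fail-fast distinction above comes from type inference in the factory functions. A minimal sketch with `torch.tensor` (the `convert_to_tensors` name above is the commenter's hypothetical, not a real API):

```python
import torch

# dtype is inferred from the data: float literals give a floating point tensor.
a = torch.tensor([0., 1.], requires_grad=True)  # inferred float32: fine

# Integer literals give an int64 tensor, so requires_grad=True fails fast here.
try:
    b = torch.tensor([2, 3], requires_grad=True)
except RuntimeError as e:
    print(e)  # only Tensors of floating point dtype can require gradients
```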
Only float tensors can be backpropped, and PyTorch throws an error if you try: pytorch/pytorch#7034
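A brief illustration of both sides of that, with arbitrary example tensors:

```python
import torch

# A floating point leaf can be backpropagated through.
w = torch.ones(3, requires_grad=True)
(w * 2).sum().backward()
print(w.grad)  # tensor([2., 2., 2.])

# An integer tensor cannot even be marked as requiring gradients.
idx = torch.tensor([0, 1, 2])
try:
    idx.requires_grad_(True)
except RuntimeError as e:
    print(e)  # only Tensors of floating point dtype can require gradients
```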
Sometimes, people are surprised that things cannot be differentiated w.r.t. integer parameters such as indices.
The following patch takes some steps to prevent them from requiring gradients of non-floating point Tensors. It raises an error for each of the following on a non-floating point Tensor (see the sketch after this list):
- `tensor.requires_grad = True`
- `tensor.requires_grad_(True)`
- `requires_grad=True` passed to a factory function

Of course, applying the above with `False` still needs to be allowed. As requested by Adam in #7021, this is done at the Python interface level.
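A minimal sketch of the three guarded paths and the still-allowed `False` case, assuming current `torch` behavior:

```python
import torch

t = torch.zeros(5, dtype=torch.int64)

# Each of these now raises "RuntimeError: only Tensors of floating point
# dtype can require gradients":
#   t.requires_grad = True
#   t.requires_grad_(True)
#   torch.zeros(5, dtype=torch.int64, requires_grad=True)

# Setting requires_grad to False remains allowed for any dtype.
t.requires_grad = False
t.requires_grad_(False)

# Floating point tensors are unaffected.
x = torch.zeros(5, requires_grad=True)
```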