Add a warning in gradcheck if inputs precision < float64 #8663
Conversation
This PR adds a warning to alert users about the possibility of a failure in the `gradcheck` and `gradgradcheck` functions.
torch/autograd/gradcheck.py (outdated diff):

```python
prec_flag = True
if any(typecheck):
    warnings.warn(
        'At least one of the inputs is of single precision. '
```
Could you update the warning message too? Thanks!

Currently, this is the warning message: I could change it to:

What I meant was that the tensor could be half, etc. I think it might be better to read
ssnl
left a comment
Thanks!
ezyang
left a comment
g
* upstream/master: (92 commits)
  * more formatting (pytorch#8701)
  * Fix pytorch#8692 (pytorch#8699)
  * Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  * Shard test_nn to reduce runtime for each test target (pytorch#8678)
  * Create at::tensor (pytorch#8475)
  * Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  * Add owner rule for cpp_extension.py (pytorch#8700)
  * fix formatting in :math: in fold docstring (pytorch#8696)
  * Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  * Implement flatten function (pytorch#8578)
  * Created Tensor::to functions (pytorch#8643)
  * Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  * Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  * Export ProcessGroupGloo options to Python (pytorch#8664)
  * Fix build error in pybind_state_ideep (pytorch#8684)
  * Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  * Improve win-build.sh for local build (pytorch#8674)
  * don't do unnecessary copies for bernoulli_ (pytorch#8682)
  * Use parallel if get_num_threads 0 (pytorch#8677)
  * Fix serialization for Parameters (pytorch#8633)
  * ...
* Solves pytorch#8659: this PR adds a warning to alert users about the possibility of a failure in the gradcheck
* Fix lint
* Update gradcheck.py
* Update gradcheck.py
* update error message
* Update warning message to be more descriptive
Fixes #8659. This PR adds a warning to alert users about the possibility of a failure in the `gradcheck` and `gradgradcheck` functions. Why a warning? Users might alter the values of `eps`, `atol` and/or `rtol` to make their custom tests pass. Throwing errors in such scenarios could be bad. cc: @ssnl
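The core idea of the change can be sketched as follows. This is a minimal sketch, not the merged implementation: the helper name `warn_if_low_precision` and the exact warning text are illustrative, while the merged code performs this check inline inside `gradcheck`.

```python
import warnings

import torch


def warn_if_low_precision(inputs):
    # Hypothetical helper illustrating the PR's idea: warn, rather than
    # raise, when any floating-point input has fewer than 64 bits of
    # precision, because the finite-difference Jacobian that gradcheck
    # compares against is unreliable below double precision.
    low = [t for t in inputs
           if t.is_floating_point() and torch.finfo(t.dtype).bits < 64]
    if low:
        warnings.warn(
            'At least one of the inputs that requires gradient is not of '
            'double precision floating point. gradcheck may fail even for '
            'a correct gradient implementation.')
    return bool(low)
```

Warning instead of erroring matters here: users who deliberately loosen `eps`, `atol`, or `rtol` to make single-precision tests pass can still run their checks, while everyone else gets a clear hint about why a check might fail.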