
Conversation

@vishwakftw (Contributor) commented Jun 19, 2018

Fixes #8659. This PR adds a warning to alert users about the possibility of a failure in the gradcheck and gradgradcheck functions when the inputs are not double precision.

Why a warning rather than an error? Users might deliberately adjust the values of eps, atol, and/or rtol to make their custom tests pass with lower-precision inputs; throwing an error in such scenarios would be too strict.
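
For illustration, a minimal sketch of the situation the warning targets: calling torch.autograd.gradcheck on a single-precision input. The function and tensor shape below are made up for the example; only the gradcheck call itself is the real API.

import torch
from torch.autograd import gradcheck

# gradcheck's default eps/atol/rtol are tuned for float64 inputs;
# a float32 input is exactly the case the new warning is about.
x = torch.randn(4, 4, dtype=torch.float32, requires_grad=True)

# With raise_exception=False, gradcheck returns False instead of raising,
# so a numerical failure does not obscure the emitted warning.
ok = gradcheck(lambda t: (t.exp() * t).sum(), (x,), raise_exception=False)
print(ok)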

cc: @ssnl

This PR adds a warning to alert users about the possibility of a failure in the gradcheck
prec_flag = True
if any(typecheck):
    warnings.warn(
        'At least one of the inputs is of single precision. '

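For context, a hedged sketch of what a completed check along the lines of the quoted diff could look like. The names typecheck and prec_flag come from the diff; the helper name, the dtype test, and the exact placement are assumptions for illustration, not the merged code.

import warnings
import torch

def _warn_if_low_precision(inputs):
    # Flag any floating-point input that requires grad and is not float64;
    # gradcheck's default eps/atol/rtol assume double precision.
    typecheck = [inp.is_floating_point() and inp.requires_grad and
                 inp.dtype != torch.float64
                 for inp in inputs]
    if any(typecheck):
        warnings.warn(
            # Wording as proposed in this thread; it was revised before merge.
            'At least one of the inputs is of single precision. '
            'This check will likely fail if the inputs are of single precision.')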

@ssnl (Collaborator) commented Jun 19, 2018

Could you update the warning message too? Thanks!

@vishwakftw (Contributor, Author)

Currently, this is the warning message:

'At least one of the inputs is of single precision. '
'The default values are designed for :attr:`input` of double precision. '
'This check will likely fail if :attr:`input` is of single precision. ')

I could change it to:

'At least one of the inputs is of single precision. This check will likely fail if the inputs are of single precision.'

@ssnl (Collaborator) commented Jun 19, 2018

What I meant was that the tensor could also be half precision, etc. I think it might be better to read: "At least one of the input tensors that require gradient is not double precision floating type, ....."
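
(As an aside, whichever wording is chosen, it can be exercised in a test by capturing the warning. This is a generic sketch using the standard warnings module, not a test that exists in the PR.)

import warnings
import torch
from torch.autograd import gradcheck

x = torch.randn(3, dtype=torch.float32, requires_grad=True)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Run gradcheck on a single-precision input without raising on failure.
    gradcheck(lambda t: t.pow(2).sum(), (x,), raise_exception=False)

# Inspect whatever precision warning was emitted, if any.
print([str(w.message) for w in caught])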

@ssnl (Collaborator) left a comment


Thanks!

@ezyang (Contributor) left a comment


g

@ezyang ezyang merged commit d97c9dd into pytorch:master Jun 20, 2018
@vishwakftw vishwakftw deleted the gradcheck-warn branch June 20, 2018 19:21
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 20, 2018
* upstream/master: (92 commits)
  more formatting (pytorch#8701)
  Fix pytorch#8692 (pytorch#8699)
  Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  Shard test_nn to reduce runtime for each test target (pytorch#8678)
  Create at::tensor (pytorch#8475)
  Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  Add owner rule for cpp_extension.py (pytorch#8700)
  fix formatting in :math: in fold docstring (pytorch#8696)
  Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  Implement flatten function (pytorch#8578)
  Created Tensor::to functions (pytorch#8643)
  Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  Export ProcessGroupGloo options to Python (pytorch#8664)
  Fix build error in pybind_state_ideep (pytorch#8684)
  Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  Improve win-build.sh for local build (pytorch#8674)
  don't do unnecessary copies for bernoulli_ (pytorch#8682)
  Use parallel if get_num_threads 0 (pytorch#8677)
  Fix serialization for Parameters (pytorch#8633)
  ...
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 21, 2018
* Solves pytorch#8659

This PR adds a warning to alert users about the possibility of a failure in the gradcheck

* Fix lint

* Update gradcheck.py

* Update gradcheck.py

* update error message

* Update warning message to be more descriptive

Successfully merging this pull request may close these issues.

warn/error when using gradcheck with < float64 precision