Treat numerical differences as warnings instead of errors when tracing #11246
Conversation
Also, make torch.isclose work with integral tensors and refactor _check_trace a bit.
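The closeness test that `torch.isclose` documents is `|a - b| <= atol + rtol * |b|`, which is only well defined once integral inputs are promoted to floating point. As a hedged illustration of that semantics (a scalar pure-Python sketch, not PyTorch's actual implementation, which operates elementwise on tensors):

```python
import math

def isclose(a, b, rtol=1e-5, atol=1e-8, equal_nan=False):
    """Scalar sketch of the torch.isclose criterion:
    |a - b| <= atol + rtol * |b|.
    Integral inputs are promoted to float first, so the tolerance
    arithmetic is well defined for ints as well as floats."""
    a, b = float(a), float(b)
    if math.isnan(a) or math.isnan(b):
        # NaN only matches NaN, and only when equal_nan is set.
        return equal_nan and math.isnan(a) and math.isnan(b)
    if math.isinf(a) or math.isinf(b):
        # Infinities are close only to an identical infinity.
        return a == b
    return abs(a - b) <= atol + rtol * abs(b)
```

For example, `isclose(1_000_000, 1_000_001)` is true under the default `rtol=1e-5`, while `isclose(3, 4)` is not.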
zdevito left a comment:
This looks good. Just to clarify: graph differences and numerical differences in constants are considered errors, but numerical differences in test outputs are now warnings?
Yes, that's correct.
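The policy clarified above can be sketched as follows. This is a hedged illustration, not the real `_check_trace` (which compares traced and eager outputs of actual tensors); outputs are modeled as flat lists of floats, and the warning category is a plain `RuntimeWarning` rather than PyTorch's `TracerWarning`:

```python
import warnings

def check_trace_outputs(traced_outs, ref_outs, rtol=1e-5, atol=1e-8):
    """Structural differences stay hard errors; purely numerical
    differences in test outputs only emit a warning, so tracing a
    model with minor nondeterminism no longer aborts the check."""
    for i, (t, r) in enumerate(zip(traced_outs, ref_outs)):
        if len(t) != len(r):
            # Shape/structure mismatch: still an error.
            raise RuntimeError(f"output {i}: shape mismatch")
        bad = [j for j, (x, y) in enumerate(zip(t, r))
               if abs(x - y) > atol + rtol * abs(y)]
        if bad:
            # Numerical mismatch: downgraded to a warning.
            warnings.warn(f"output {i}: {len(bad)} element(s) differ "
                          f"beyond tolerance", RuntimeWarning)
```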
facebook-github-bot left a comment:
apaszke has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
…g (#11246)

Summary: Also, make `torch.isclose` work with integral tensors and refactor `_check_trace` a bit. zdevito

Pull Request resolved: pytorch/pytorch#11246
Differential Revision: D9652701
Pulled By: apaszke
fbshipit-source-id: fb0bdbfd1952e45e153541e4d471b423a5659f25
resolve conflict in data parallel model

* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...