Conversation

@apaszke (Contributor) commented Sep 4, 2018

Also, make `torch.isclose` work with integral tensors and refactor `_check_trace` a bit.
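For context, `torch.isclose` performs the elementwise tolerance check |a − b| ≤ atol + rtol·|b|. A minimal pure-Python sketch of that formula (the helper `isclose` below is illustrative, not the actual ATen implementation), showing why it extends naturally to integral inputs:

```python
def isclose(a, b, rtol=1e-5, atol=1e-8):
    """Elementwise closeness check mirroring the torch.isclose formula.

    Works on ints as well as floats, which is the point of this change:
    integral inputs no longer need a cast to floating point first.
    """
    return [abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b)]

print(isclose([1, 2, 3], [1, 2, 4]))        # [True, True, False]
print(isclose([1.0, 1.00001], [1.0, 1.0]))  # [True, True]
```

For exact integer equality one would pass `rtol=0, atol=0`; the defaults above match the usual floating-point tolerances.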

@pytorchbot added the oncall: jit label (Add this issue/PR to JIT oncall triage queue) Sep 4, 2018
@zdevito (Contributor) left a comment


This looks good. Just to clarify: graph differences and numerical differences in constants are considered errors, but numerical differences in test outputs are now warnings?

@apaszke (Contributor, Author) commented Sep 5, 2018

Yes, that's correct.
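The policy agreed on above can be sketched roughly as follows (hypothetical names, not the actual `_check_trace` code): structural mismatches raise, while numerical differences in outputs are downgraded to warnings.

```python
import warnings

def check_trace(graphs_match, constants_match, outputs_close):
    """Illustrative trace-check policy: graph and constant mismatches
    are hard errors; numerical output mismatches only warn."""
    if not graphs_match:
        raise RuntimeError("Traced graph differs from the reference graph")
    if not constants_match:
        raise RuntimeError("Numerical difference in embedded constants")
    if not outputs_close:
        warnings.warn("Trace outputs differ numerically from the Python function")

# A numerical-only mismatch warns but does not abort the check.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_trace(True, True, False)
print(len(caught))  # 1
```

The design rationale is that small floating-point divergences between a trace and the original function are common (e.g. fused kernels), so they should be surfaced without failing the trace outright, whereas a differing graph or constant indicates a genuinely wrong trace.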

@facebook-github-bot (Contributor) left a comment


apaszke has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@apaszke apaszke deleted the jit_warn_more branch September 5, 2018 14:10
zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 5, 2018
Treat numerical differences as warnings instead of errors when tracing (#11246)

Summary:
Also, make `torch.isclose` work with integral tensors and refactor `_check_trace` a bit.

zdevito
Pull Request resolved: pytorch/pytorch#11246

Differential Revision: D9652701

Pulled By: apaszke

fbshipit-source-id: fb0bdbfd1952e45e153541e4d471b423a5659f25
petrex pushed a commit to petrex/pytorch that referenced this pull request Sep 5, 2018
resolve conflict in data parallel model
* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)

Summary:
Also, make `torch.isclose` work with integral tensors and refactor `_check_trace` a bit.

zdevito
Pull Request resolved: pytorch#11246

Differential Revision: D9652701

Pulled By: apaszke

fbshipit-source-id: fb0bdbfd1952e45e153541e4d471b423a5659f25

Labels

oncall: jit (Add this issue/PR to JIT oncall triage queue), open source


5 participants