Conversation

@harouwu (Contributor) commented Aug 29, 2018

Summary: Keep the net type info when generating the model complete net. This preserves the performance optimization options.

Differential Revision: D9564125
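The change can be illustrated with a minimal sketch. Plain Python dicts stand in for Caffe2 `NetDef` protos here so the example is self-contained; the real code manipulates `caffe2_pb2.NetDef` messages, and the helper name `make_complete_net` is hypothetical, not the actual Caffe2 API:

```python
# Sketch of the fix: when merging a model's init and train nets into one
# "complete net", carry over the net type (e.g. "dag" or "async_scheduling")
# and num_workers instead of dropping them. Plain dicts stand in for
# caffe2_pb2.NetDef protos; make_complete_net is a hypothetical helper.

def make_complete_net(init_net, train_net):
    complete = {
        "name": train_net["name"] + "_complete",
        # Concatenate operators from both nets, init ops first.
        "op": init_net["op"] + train_net["op"],
        # Before the fix these two fields were not copied, so the merged
        # net fell back to the default "simple" executor and lost the
        # scheduling optimizations of the original net type.
        "type": train_net.get("type", "simple"),
        "num_workers": train_net.get("num_workers", 1),
    }
    return complete

train = {"name": "train", "op": ["FC", "Relu"], "type": "dag", "num_workers": 8}
init = {"name": "init", "op": ["XavierFill"]}

net = make_complete_net(init, train)
print(net["type"], net["num_workers"])  # dag 8
```

With the fields copied, a net originally built as a DAG net keeps parallel operator scheduling after merging instead of degrading to sequential execution.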

@facebook-github-bot (Contributor) left a comment

harouwu is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


Summary:
Keep the net type info when generating the model complete net. This preserves the performance optimization options.
Pull Request resolved: #11032

Reviewed By: wat3rBro

Differential Revision: D9564125

Pulled By: wat3rBro

fbshipit-source-id: 1534b410641a2558007ae4252b72c4af85dd2fb5
petrex pushed a commit to petrex/pytorch that referenced this pull request Sep 5, 2018
resolve conflict in data parallel model
* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
Summary:
Keep the net type info when generating the model complete net. This preserves the performance optimization options.
Pull Request resolved: pytorch#11032

Reviewed By: wat3rBro

Differential Revision: D9564125

Pulled By: harouwu

fbshipit-source-id: c6546af9b1d4ff5eddf6124e24a5da1b8baf47df


4 participants