Port PackedSequences functions to C++ #11224
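For context, this PR moves the packing/unpacking logic behind the Python-level PackedSequence utilities into C++. Below is a minimal sketch of the round trip those utilities perform, using the public `torch.nn.utils.rnn` API; the tensor shapes and the exact internals touched by this PR are illustrative assumptions, not taken from the diff.

```python
# A minimal sketch of the PackedSequence round trip whose backing functions
# this PR ports to C++. Shapes are illustrative; assumes the standard
# torch.nn.utils.rnn API.
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Padded batch: 3 sequences of lengths 4, 3, 1, sorted longest-first
# (pack_padded_sequence required sorted lengths at the time of this PR).
padded = torch.zeros(4, 3, 5)          # (max_seq_len, batch, features)
lengths = torch.tensor([4, 3, 1])

packed = pack_padded_sequence(padded, lengths)       # PackedSequence(data, batch_sizes)
unpacked, out_lengths = pad_packed_sequence(packed)  # back to padded tensor + lengths

assert unpacked.shape == padded.shape
assert torch.equal(out_lengths, lengths)
```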
Conversation
@pytorchbot retest this please
2 similar comments
@pytorchbot retest this please
@pytorchbot retest this please
Force-pushed from d91df97 to 720dd59 (compare)
zdevito left a comment
This looks good -- I didn't check the indexing very closely, but the structure looks good, and it removes a good bit of ONNX hacks.
jamesr66a left a comment
ONNX copypasta LGTM
Force-pushed from 720dd59 to ece70ef (compare)
facebook-github-bot left a comment
apaszke has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: zdevito
Pull Request resolved: pytorch/pytorch#11224
Differential Revision: D9652703
Pulled By: apaszke
fbshipit-source-id: 558e39457e590cad07516e5bb2ecb12789564950
resolve conflict in data parallel model

* master: (201 commits)
  Add cost inference to ConvGradient and WeightedSum operators (pytorch#10744)
  Move collapse dims into a single place (pytorch#11272)
  Fix some more warnings (pytorch#11257)
  Fix the batchnorm onnx exporting when affine=False
  Improve error message to include return types too (pytorch#11245)
  Check doxygen output in travis (pytorch#11124)
  Accept more numpy scalars as doubles (pytorch#9659)
  Fixed log message (pytorch#10874)
  Fix to distribution.__repr__ with lazy attributes (pytorch#11263)
  Add import export step to end to end tests
  Add complex hooks for out of tree complex implementation. (pytorch#11216)
  Unify opt flag for cmake codegen (pytorch#11227)
  nomnigraph - fix memory error in NN subgraph matchOp (pytorch#11127)
  Port PackedSequences functions to C++ (pytorch#11224)
  Treat numerical differences as warnings instead of errors when tracing (pytorch#11246)
  add a Float16UniformFill (pytorch#11123)
  Implement torch.tensordot (pytorch#10025)
  keep net type info when generating model complete net (pytorch#11032)
  Get rid of some uses of type() (pytorch#11215)
  Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (pytorch#11205)
  ...
@zdevito