introduce shape_as_tensor and reshape_from_tensor_shape #5824
Conversation
Force-pushed from c0631da to fe28c91
colesbury left a comment:
You might want to stick with either shape or size in the function names. (There's currently size_as_variable, but view_from_variable_shape.)
Either is reasonable since tensor.shape and tensor.size() return the same thing.
++ will use 'shape' in the names
Force-pushed from fe28c91 to 1018cfd
[note: please don't merge until I've confirmed that this fully addresses the use case]
tested by onnxbot/onnx-fb-universe#1163
Force-pushed from 1018cfd to 2d6fcfa
once onnx/onnx#608 is merged, the onnx-fb-universe test will start working - let's wait until then to merge this
ready to go; I've tested locally that it works (and will push the tests to onnx-fb-universe once this is merged)
@onnxbot retest this please
def _reshape_from_tensor_shape(g, input, shape):
    return g.op('Reshape', input, shape)
@onnxbot retest this please
* Add test for dynamic reshapes
* Pin pytorch submodule to PR 5824 for testing pytorch/pytorch#5824
* Pin pytorch submodule to PR 5824 for testing
* Revert "Pin pytorch submodule to PR 5824 for testing" This reverts commit 6b61286.
* Revert "Revert "Pin pytorch submodule to PR 5824 for testing"" This reverts commit 319bb38.
* Revert "Pin pytorch submodule to PR 5824 for testing" This reverts commit 6b61286.
* point pytorch back to master
…simple") moving average" (#5892) * Revert "Port ATen and JIT C++ tests to Catch2 (#5788)" This reverts commit 6f80023. * Revert "Fix error message for cat-ing zero-dim tensors (#5819)" This reverts commit cf2e176. * Revert "Softmax symbolic should account for negative dim (#5846)" This reverts commit ba64724. * Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855)" This reverts commit 22ef8e5. * Revert "Don't modify requires_grad when running DataParallel in no_grad mode (#5880)" This reverts commit d11b7fb. * Revert "fix some methods not showing up in doc (#5882)" This reverts commit 24fca0e. * Revert "ReduceOps cleanup and set_num_threads (#5723)" This reverts commit 84400d5. * Revert "introduce shape_as_tensor and reshape_from_variable_shape (#5824)" This reverts commit f446b82. * Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (#5766)" This reverts commit 99b1f6c.
…simple") moving average" (pytorch#5892) * Revert "Port ATen and JIT C++ tests to Catch2 (pytorch#5788)" This reverts commit 6f80023. * Revert "Fix error message for cat-ing zero-dim tensors (pytorch#5819)" This reverts commit cf2e176. * Revert "Softmax symbolic should account for negative dim (pytorch#5846)" This reverts commit ba64724. * Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (pytorch#5855)" This reverts commit 22ef8e5. * Revert "Don't modify requires_grad when running DataParallel in no_grad mode (pytorch#5880)" This reverts commit d11b7fb. * Revert "fix some methods not showing up in doc (pytorch#5882)" This reverts commit 24fca0e. * Revert "ReduceOps cleanup and set_num_threads (pytorch#5723)" This reverts commit 84400d5. * Revert "introduce shape_as_tensor and reshape_from_variable_shape (pytorch#5824)" This reverts commit f446b82. * Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (pytorch#5766)" This reverts commit 99b1f6c.
for now, I guess we can just leave these in torch/onnx, even if there's an argument that they don't strictly belong there. It's straightforward to move them later.
overall motivation: this provides variants of size() and view() which operate on Variable sizes rather than [int] sizes. This allows us to attach a symbolic override and trace them.