Conversation

@anderspapitto (Contributor) commented Mar 15, 2018

for now, I guess we can just leave these in torch/onnx, even if there's an argument they don't strictly belong there. It's straightforward to move them later.

Overall motivation: this provides variants of size() and view() that operate on Variable shapes rather than [int] shapes. That allows us to attach a symbolic override and trace them.
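A sketch of the distinction (illustrative pure-Python stand-ins, not the real PyTorch tracer or API): when size() returns plain ints, tracing bakes the example input's concrete dimensions into the graph as constants; a tensor-valued shape instead stays a recorded operation that is recomputed for every new input.

```python
# Illustrative stand-ins for tracing, not the real PyTorch tracer.
trace = []  # ops recorded while "tracing"

def size_as_ints(x):
    # Plain ints: the example input's dimensions are baked into the
    # trace as constants, so the trace is only valid for this exact shape.
    return tuple(x["shape"])

def shape_as_tensor(x):
    # Records a Shape op instead: the result stays symbolic, so the
    # traced graph recomputes the shape for each input.
    trace.append(("Shape", "x"))
    return "shape_of_x"

def view(x, shape):
    trace.append(("Reshape", "x", shape))
    return x

x = {"shape": [2, 3]}
view(x, size_as_ints(x))     # records Reshape with the constant (2, 3)
view(x, shape_as_tensor(x))  # records Shape, then Reshape with a symbolic shape
```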

@colesbury (Member) left a comment


You might want to stick with either shape or size in the function names. (There's currently size_as_variable, but view_from_variable_shape.)

Either is reasonable since tensor.shape and tensor.size() return the same thing.


@anderspapitto (Contributor, Author)

++ will use 'shape' in the names

@anderspapitto (Contributor, Author)

[note: please don't merge until I've confirmed that this fully addresses the use case]

@anderspapitto (Contributor, Author)

tested by onnxbot/onnx-fb-universe#1163

@ezyang ezyang changed the title introduce size_as_variable and view_from_variable_shape [WIP] introduce size_as_variable and view_from_variable_shape Mar 16, 2018
@anderspapitto anderspapitto changed the title [WIP] introduce size_as_variable and view_from_variable_shape [WIP] introduce shape_as_tensor and reshape_from_variable_shape Mar 16, 2018
@anderspapitto (Contributor, Author)

Once onnx/onnx#608 is merged, the onnx-fb-universe test will start working; let's wait until then to merge this.

@anderspapitto anderspapitto changed the title [WIP] introduce shape_as_tensor and reshape_from_variable_shape [WIP] introduce shape_as_tensor and reshape_from_tensor_shape Mar 16, 2018
@anderspapitto anderspapitto changed the title [WIP] introduce shape_as_tensor and reshape_from_tensor_shape introduce shape_as_tensor and reshape_from_tensor_shape Mar 17, 2018
@anderspapitto (Contributor, Author)

ready to go, I've tested locally that it works (and will push the tests to onnx-fb-universe once this is merged)

@bddppq (Contributor) commented Mar 17, 2018

@onnxbot retest this please



def _reshape_from_tensor_shape(g, input, shape):
    return g.op('Reshape', input, shape)
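For readers unfamiliar with torch.onnx symbolics: g.op records an ONNX node in the export graph and returns a handle to its output value. The snippet above maps the reshape to ONNX's Reshape operator, which takes the target shape as a second runtime tensor input rather than as an attribute, so the exported graph remains valid for dynamic input sizes. A minimal sketch with a hypothetical Graph stand-in (not PyTorch's actual internal graph class) makes the behavior concrete:

```python
class Graph:
    """Hypothetical stand-in for the graph object torch.onnx passes to symbolics."""
    def __init__(self):
        self.nodes = []

    def op(self, kind, *inputs):
        # Record an ONNX node and return a name for its output value.
        out = f"%{len(self.nodes)}"
        self.nodes.append((kind, inputs, out))
        return out

def _reshape_from_tensor_shape(g, input, shape):
    # ONNX Reshape takes the target shape as a runtime tensor input,
    # so the exported graph stays valid for dynamic input sizes.
    return g.op('Reshape', input, shape)

g = Graph()
shape = g.op('Shape', '%x')                     # shape of %x, as a tensor value
y = _reshape_from_tensor_shape(g, '%x', shape)  # Reshape(%x, Shape(%x))
```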

@bddppq (Contributor) commented Mar 19, 2018

@onnxbot retest this please

@colesbury colesbury merged commit f446b82 into pytorch:master Mar 19, 2018
bddppq pushed a commit to onnxbot/onnx-fb-universe that referenced this pull request Mar 19, 2018
* Add test for dynamic reshapes

* Pin pytorch submodule to PR 5824 for testing

pytorch/pytorch#5824

* Pin pytorch submodule to PR 5824 for testing

* Revert "Pin pytorch submodule to PR 5824 for testing"

This reverts commit 6b61286.

* Revert "Revert "Pin pytorch submodule to PR 5824 for testing""

This reverts commit 319bb38.

* Revert "Pin pytorch submodule to PR 5824 for testing"

This reverts commit 6b61286.

* point pytorch back to master
@anderspapitto anderspapitto deleted the size-as-tensor branch March 19, 2018 17:34
soumith added a commit that referenced this pull request Mar 19, 2018
soumith added a commit that referenced this pull request Mar 19, 2018
…simple") moving average" (#5892)

* Revert "Port ATen and JIT C++ tests to Catch2 (#5788)"

This reverts commit 6f80023.

* Revert "Fix error message for cat-ing zero-dim tensors (#5819)"

This reverts commit cf2e176.

* Revert "Softmax symbolic should account for negative dim (#5846)"

This reverts commit ba64724.

* Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855)"

This reverts commit 22ef8e5.

* Revert "Don't modify requires_grad when running DataParallel in no_grad mode (#5880)"

This reverts commit d11b7fb.

* Revert "fix some methods not showing up in doc (#5882)"

This reverts commit 24fca0e.

* Revert "ReduceOps cleanup and set_num_threads (#5723)"

This reverts commit 84400d5.

* Revert "introduce shape_as_tensor and reshape_from_variable_shape (#5824)"

This reverts commit f446b82.

* Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (#5766)"

This reverts commit 99b1f6c.
jekbradbury pushed a commit to jekbradbury/pytorch that referenced this pull request Mar 21, 2018
…simple") moving average" (pytorch#5892)

* Revert "Port ATen and JIT C++ tests to Catch2 (pytorch#5788)"

This reverts commit 6f80023.

* Revert "Fix error message for cat-ing zero-dim tensors (pytorch#5819)"

This reverts commit cf2e176.

* Revert "Softmax symbolic should account for negative dim (pytorch#5846)"

This reverts commit ba64724.

* Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (pytorch#5855)"

This reverts commit 22ef8e5.

* Revert "Don't modify requires_grad when running DataParallel in no_grad mode (pytorch#5880)"

This reverts commit d11b7fb.

* Revert "fix some methods not showing up in doc (pytorch#5882)"

This reverts commit 24fca0e.

* Revert "ReduceOps cleanup and set_num_threads (pytorch#5723)"

This reverts commit 84400d5.

* Revert "introduce shape_as_tensor and reshape_from_variable_shape (pytorch#5824)"

This reverts commit f446b82.

* Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (pytorch#5766)"

This reverts commit 99b1f6c.