Conversation

@li-roy (Contributor) commented Jun 16, 2018

Closes #7743
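
For reference, a brief usage sketch of the new op as it ended up (with the start_dim/end_dim parameter names discussed below); shapes in the comments are for this example input:

import torch

x = torch.randn(2, 3, 4, 5)

torch.flatten(x).shape        # torch.Size([120])      -- flatten everything
x.flatten(start_dim=1).shape  # torch.Size([2, 60])    -- keep the batch dim
x.flatten(1, 2).shape         # torch.Size([2, 12, 5]) -- flatten dims 1 through 2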


@bddppq (Contributor) commented Jun 16, 2018

@pytorchbot retest this please

@bddppq (Contributor) commented Jun 16, 2018

Ignore the pr/caffe2-py3.6-clang3.8-rocm1.7.1-ubuntu16.04-test failure in this PR; it's already fixed in master.

@vadimkantorov (Contributor) commented Jun 16, 2018

Are there defaults for start and end? For cases like:

x.flatten(start=-2)  # short for x.flatten(start=-2, end=-1)
x.flatten(end=2)     # short for x.flatten(start=0, end=2)

With complete defaults (no arguments at all), NumPy would return a 1-dim array equivalent to x.view(-1), while tf.contrib.layers.flatten would return a 2-dim array equivalent to x.view(len(x), -1). But maybe it's better not to have complete defaults, precisely because of these two possible interpretations (?)
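
To make the two interpretations concrete, here is a rough sketch using view as a stand-in for what the defaults would do (shapes in the comments assume x of shape (2, 3, 4)):

import torch

x = torch.randn(2, 3, 4)

numpy_style = x.view(-1)        # 1-dim result, shape (24,)
tf_style = x.view(len(x), -1)   # 2-dim result, shape (2, 12)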

Also, a naming-style suggestion: start => start_dim or dim_start (same for end). This makes the semantics clearer (and aligns with the existing dim and multi_dims), at the expense of more verbosity. (But I understand, it's a question of taste anyway.)


@vadimkantorov (Contributor) commented Jun 18, 2018

For fewer edge cases, shouldn't it also support start_dim == end_dim and just return the original tensor?

This can be useful when the dimensions are computed dynamically.
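
A small sketch of the suggested behavior (which is what eventually landed, per the "allow start_dim=end_dim" commit message below):

import torch

x = torch.randn(2, 3, 4)

# Range computed at runtime; when it collapses to a single dimension,
# flatten leaves the shape unchanged instead of raising an error.
start, end = 1, 1
assert x.flatten(start, end).shape == x.shape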

@zasdfgbnm (Collaborator)

@li-roy Thanks for this PR, I just need this!

@ssnl (Collaborator) left a comment

LGTM. But please revert the onnx submodule change.

@ssnl (Collaborator) left a comment

LGTM

@soumith soumith merged commit cc6b046 into pytorch:master Jun 20, 2018
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 20, 2018
* upstream/master: (92 commits)
  more formatting (pytorch#8701)
  Fix pytorch#8692 (pytorch#8699)
  Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  Shard test_nn to reduce runtime for each test target (pytorch#8678)
  Create at::tensor (pytorch#8475)
  Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  Add owner rule for cpp_extension.py (pytorch#8700)
  fix formatting in :math: in fold docstring (pytorch#8696)
  Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  Implement flatten function (pytorch#8578)
  Created Tensor::to functions (pytorch#8643)
  Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  Export ProcessGroupGloo options to Python (pytorch#8664)
  Fix build error in pybind_state_ideep (pytorch#8684)
  Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  Improve win-build.sh for local build (pytorch#8674)
  don't do unnecessary copies for bernoulli_ (pytorch#8682)
  Use parallel if get_num_threads 0 (pytorch#8677)
  Fix serialization for Parameters (pytorch#8633)
  ...
petrex pushed a commit to petrex/pytorch that referenced this pull request Jun 21, 2018
* Implement flatten function

* address comments

* allow start_dim=end_dim

* undo submodule change
