Implement flatten function #8578
Conversation
[Several review comments on aten/src/ATen/native/TensorShape.cpp, test/test_torch.py, tools/autograd/derivatives.yaml, and torch/_torch_docs.py were marked as off-topic and hidden.]
@pytorchbot retest this please

ignore
Are there defaults for `start_dim` and `end_dim`? For complete defaults, NumPy would return a 1-dim array, equivalent to `view(-1)`. Also a naming style suggestion: …
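For context, a minimal sketch of the full-default behavior as merged (`torch.flatten(input, start_dim=0, end_dim=-1)`), which gives the NumPy-style 1-dim result the comment asks about:

```python
import torch

x = torch.arange(24).reshape(2, 3, 4)

# Full defaults collapse every dimension into one, like NumPy's flatten:
assert torch.flatten(x).shape == (24,)
assert torch.equal(torch.flatten(x), x.reshape(-1))

# start_dim can restrict the collapse to a suffix of the dimensions:
assert torch.flatten(x, start_dim=1).shape == (2, 12)
```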
For fewer edge cases, should it not support `start_dim == end_dim`? Can be useful for cases where dimensions are computed dynamically.
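A minimal illustration of the `start_dim == end_dim` case as merged (see the "allow start_dim=end_dim" commit below): the degenerate range is a no-op rather than an error, so dynamically computed ranges need no special-casing:

```python
import torch

x = torch.randn(2, 3, 4)

# Normal case: collapse dims 1..2 into one.
assert torch.flatten(x, 1, 2).shape == (2, 12)

# Degenerate case: start_dim == end_dim leaves the shape unchanged,
# so a dynamically computed (empty) range needs no special case.
assert torch.flatten(x, 1, 1).shape == (2, 3, 4)
```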
@li-roy Thanks for this PR, I just need this! |
ssnl left a comment:
LGTM. But please revert the onnx submodule change.
ssnl left a comment:
LGTM
* upstream/master: (92 commits)
  more formatting (pytorch#8701)
  Fix pytorch#8692 (pytorch#8699)
  Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (pytorch#8345)
  Shard test_nn to reduce runtime for each test target (pytorch#8678)
  Create at::tensor (pytorch#8475)
  Clarify mp note about sharing a tensor's grad field. (pytorch#8688)
  Add owner rule for cpp_extension.py (pytorch#8700)
  fix formatting in :math: in fold docstring (pytorch#8696)
  Some 0-sized dimension support, port catArray away from resizeLegacy. (pytorch#8666)
  Implement flatten function (pytorch#8578)
  Created Tensor::to functions (pytorch#8643)
  Add a warning in gradcheck if inputs precision < float64 (pytorch#8663)
  Fix parsing of floating point defaults in python_arg_parser (pytorch#8681)
  Export ProcessGroupGloo options to Python (pytorch#8664)
  Fix build error in pybind_state_ideep (pytorch#8684)
  Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (pytorch#8676)
  Improve win-build.sh for local build (pytorch#8674)
  don't do unnecessary copies for bernoulli_ (pytorch#8682)
  Use parallel if get_num_threads 0 (pytorch#8677)
  Fix serialization for Parameters (pytorch#8633)
  ...
* Implement flatten function
* address comments
* allow start_dim=end_dim
* undo submodule change
Closes #7743
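For readers arriving from the changelog, a minimal Python sketch of the shape arithmetic flatten performs; a reference model in terms of reshape (the hypothetical `flatten_ref` below), not the actual ATen implementation in TensorShape.cpp:

```python
import math
import torch

def flatten_ref(t, start_dim=0, end_dim=-1):
    """Reference semantics for torch.flatten via reshape (assumes t.dim() >= 1)."""
    n = t.dim()
    start_dim %= n  # wrap negative dims, e.g. -1 -> n - 1
    end_dim %= n
    assert start_dim <= end_dim  # start_dim == end_dim is allowed (no-op)
    shape = list(t.shape)
    flat = math.prod(shape[start_dim:end_dim + 1])
    return t.reshape(shape[:start_dim] + [flat] + shape[end_dim + 1:])

x = torch.randn(2, 3, 4, 5)
assert flatten_ref(x, 1, 2).shape == torch.flatten(x, 1, 2).shape == (2, 12, 5)
assert flatten_ref(x).shape == (120,)
```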