
Conversation

@zou3519 zou3519 commented Mar 15, 2018

Fixes #5552

cc @gchanan

@zou3519 zou3519 changed the title Fix error message for cat-ing zero-dim tensors [wip] Fix error message for cat-ing zero-dim tensors Mar 15, 2018
@zou3519 zou3519 force-pushed the fix-cat-save-size branch 3 times, most recently from 1f61020 to e2d871d Compare March 16, 2018 14:53
@zou3519 zou3519 changed the title [wip] Fix error message for cat-ing zero-dim tensors Fix error message for cat-ing zero-dim tensors Mar 16, 2018

zou3519 commented Mar 16, 2018

Okay this should be good for reviewing now. Today I learned that IntList is not the same thing as std::vector<int64_t>.


apaszke commented Mar 16, 2018

@zou3519 I think in our code *List is usually at::ArrayRef<*>, while *_list is really std::vector<*>. I agree it's not great 😄

@zou3519 zou3519 force-pushed the fix-cat-save-size branch from e2d871d to 9ed3eba Compare March 16, 2018 15:26
if (size == 0) {
  auto& shape = sizes[i];
  // If input was empty tensor, gradInput should be empty tensor.
  if (shape[0] == 0) {


# replace to_args_sizes(self) with self_args_sizes
(r'to_args_sizes\({}\)', {
    'suffix': '_args_sizes',
    'type': 'std::vector<std::vector<int64_t>>',


std::vector<Tensor> cat_tensors_backward(const Tensor & grad, const std::vector<std::vector<int64_t>> &sizes, int64_t dim) {
  if (sizes.size() > 0) {
    // cat wraps dim to the first tensor's shape
    dim = at::maybe_wrap_dim(dim, sizes[0].size());


}

Before:
std::vector<Tensor> cat_tensors_backward(const Tensor & grad, const std::vector<int64_t> &sizes, int64_t dim) {
After:
std::vector<Tensor> cat_tensors_backward(const Tensor & grad, const std::vector<std::vector<int64_t>> &sizes, int64_t dim) {


@ezyang ezyang merged commit cf2e176 into pytorch:master Mar 19, 2018
ezyang added a commit that referenced this pull request Mar 19, 2018
soumith added a commit that referenced this pull request Mar 19, 2018
soumith added a commit that referenced this pull request Mar 19, 2018
…simple") moving average" (#5892)

* Revert "Port ATen and JIT C++ tests to Catch2 (#5788)"

This reverts commit 6f80023.

* Revert "Fix error message for cat-ing zero-dim tensors (#5819)"

This reverts commit cf2e176.

* Revert "Softmax symbolic should account for negative dim (#5846)"

This reverts commit ba64724.

* Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855)"

This reverts commit 22ef8e5.

* Revert "Don't modify requires_grad when running DataParallel in no_grad mode (#5880)"

This reverts commit d11b7fb.

* Revert "fix some methods not showing up in doc (#5882)"

This reverts commit 24fca0e.

* Revert "ReduceOps cleanup and set_num_threads (#5723)"

This reverts commit 84400d5.

* Revert "introduce shape_as_tensor and reshape_from_variable_shape (#5824)"

This reverts commit f446b82.

* Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (#5766)"

This reverts commit 99b1f6c.
jekbradbury pushed a commit to jekbradbury/pytorch that referenced this pull request Mar 21, 2018
Fixes pytorch#5552

* Fix error message for cat-ing zero-dim tensors

* Address comments
jekbradbury pushed a commit to jekbradbury/pytorch that referenced this pull request Mar 21, 2018
…simple") moving average" (pytorch#5892)

wuhuikx pushed a commit to wuhuikx/pytorch that referenced this pull request Jan 30, 2020
4 participants