Fix error message for cat-ing zero-dim tensors #5819
Conversation
Force-pushed from 1f61020 to e2d871d.
Okay this should be good for reviewing now. Today I learned that …

@zou3519 I think in our code …
Force-pushed from e2d871d to 9ed3eba.
if (size == 0) {
  auto& shape = sizes[i];
  // If input was empty tensor, gradInput should be empty tensor.
  if (shape[0] == 0) {
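For intuition, here is a small illustrative aside (not code from this PR, and the shapes are made up): narrowing a 2-D gradient to length 0 along the cat dimension still leaves a 2-D result, so an input that was a genuinely empty tensor of shape {0} needs its own branch that yields an empty 1-D gradient.

```cpp
#include <ATen/ATen.h>
#include <iostream>

int main() {
  auto grad = at::ones({4, 3});
  // A zero-length slice along dim 0 keeps the trailing dimensions...
  auto sliced = grad.narrow(/*dim=*/0, /*start=*/0, /*length=*/0);
  std::cout << sliced.sizes() << "\n";      // prints [0, 3]
  // ...whereas an input recorded with shape {0} should get a gradient
  // that is itself an empty 1-D tensor.
  auto empty_grad = at::zeros({0});
  std::cout << empty_grad.sizes() << "\n";  // prints [0]
  return 0;
}
```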
# replace to_args_sizes(self) with self_args_sizes
(r'to_args_sizes\({}\)', {
    'suffix': '_args_sizes',
    'type': 'std::vector<std::vector<int64_t>>',
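A rough, hypothetical sketch of what that substitution amounts to on the C++ side (the class and member names below are illustrative assumptions, not the actual generated code): the codegen saves the per-tensor shapes on the backward node under a name ending in `_args_sizes`, with the declared vector-of-vectors type, and the derivative formula then reads that saved field instead of calling `to_args_sizes` at backward time.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical shape of a generated backward node after the substitution.
// The field name carries the "_args_sizes" suffix and the type matches the
// 'type' entry above; it is filled in at forward time so the backward can
// split the gradient without re-inspecting the (possibly freed) inputs.
struct CatBackwardSketch {
  std::vector<std::vector<int64_t>> tensors_args_sizes;  // one shape per input
  int64_t dim;                                           // cat dimension
};
```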
std::vector<Tensor> cat_tensors_backward(const Tensor & grad, const std::vector<std::vector<int64_t>> &sizes, int64_t dim) {
  if (sizes.size() > 0) {
    // cat wraps dim to the first tensor's shape
    dim = at::maybe_wrap_dim(dim, sizes[0].size());
  }
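For reference, dim wrapping simply normalizes a negative dimension against the rank of the first input; a small standalone illustration (the values are made up):

```cpp
#include <ATen/WrapDimUtils.h>
#include <cassert>

int main() {
  // With a first input of rank 3, dim = -1 refers to the last dimension.
  assert(at::maybe_wrap_dim(/*dim=*/-1, /*dim_post_expr=*/3) == 2);
  // Non-negative dims that are already in range are returned unchanged.
  assert(at::maybe_wrap_dim(1, 3) == 1);
  return 0;
}
```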
- std::vector<Tensor> cat_tensors_backward(const Tensor & grad, const std::vector<int64_t> &sizes, int64_t dim) {
+ std::vector<Tensor> cat_tensors_backward(const Tensor & grad, const std::vector<std::vector<int64_t>> &sizes, int64_t dim) {
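Putting the hunks together: moving from a flat list of per-input lengths to a list of full per-input shapes is what lets the backward recognize an empty input in the first place. Below is a minimal self-contained sketch of the new version assembled from the pieces quoted in this thread; the exact loop body, helper calls, and the `_sketch` suffix are assumptions for illustration, not the file's verbatim contents.

```cpp
#include <ATen/ATen.h>
#include <ATen/WrapDimUtils.h>
#include <vector>

// Sketch: split the upstream gradient of cat back into one gradient per input.
// `sizes` holds each input's full shape, so an input recorded as {0} (an empty
// tensor) gets an empty gradient instead of a narrow()ed slice of `grad`.
std::vector<at::Tensor> cat_tensors_backward_sketch(
    const at::Tensor& grad,
    const std::vector<std::vector<int64_t>>& sizes,
    int64_t dim) {
  std::vector<at::Tensor> grad_inputs(sizes.size());
  if (!sizes.empty()) {
    // cat wraps dim to the first tensor's shape
    dim = at::maybe_wrap_dim(dim, sizes[0].size());
  }
  int64_t accumulate = 0;
  for (size_t i = 0; i < sizes.size(); ++i) {
    const auto& shape = sizes[i];
    // If the input was an empty tensor, its gradient is an empty tensor too.
    if (shape.size() == 1 && shape[0] == 0) {
      grad_inputs[i] = at::zeros({0}, grad.options());
      continue;
    }
    // Otherwise take this input's slice of the gradient along `dim`.
    auto size = shape[dim];
    accumulate += size;
    grad_inputs[i] = grad.narrow(dim, accumulate - size, size);
  }
  return grad_inputs;
}
```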
…simple") moving average" (#5892) * Revert "Port ATen and JIT C++ tests to Catch2 (#5788)" This reverts commit 6f80023. * Revert "Fix error message for cat-ing zero-dim tensors (#5819)" This reverts commit cf2e176. * Revert "Softmax symbolic should account for negative dim (#5846)" This reverts commit ba64724. * Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855)" This reverts commit 22ef8e5. * Revert "Don't modify requires_grad when running DataParallel in no_grad mode (#5880)" This reverts commit d11b7fb. * Revert "fix some methods not showing up in doc (#5882)" This reverts commit 24fca0e. * Revert "ReduceOps cleanup and set_num_threads (#5723)" This reverts commit 84400d5. * Revert "introduce shape_as_tensor and reshape_from_variable_shape (#5824)" This reverts commit f446b82. * Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (#5766)" This reverts commit 99b1f6c.
Fixes pytorch#5552

* Fix error message for cat-ing zero-dim tensors
* Address comments
…simple") moving average" (pytorch#5892) * Revert "Port ATen and JIT C++ tests to Catch2 (pytorch#5788)" This reverts commit 6f80023. * Revert "Fix error message for cat-ing zero-dim tensors (pytorch#5819)" This reverts commit cf2e176. * Revert "Softmax symbolic should account for negative dim (pytorch#5846)" This reverts commit ba64724. * Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (pytorch#5855)" This reverts commit 22ef8e5. * Revert "Don't modify requires_grad when running DataParallel in no_grad mode (pytorch#5880)" This reverts commit d11b7fb. * Revert "fix some methods not showing up in doc (pytorch#5882)" This reverts commit 24fca0e. * Revert "ReduceOps cleanup and set_num_threads (pytorch#5723)" This reverts commit 84400d5. * Revert "introduce shape_as_tensor and reshape_from_variable_shape (pytorch#5824)" This reverts commit f446b82. * Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (pytorch#5766)" This reverts commit 99b1f6c.
This reverts commit cf2e176.
Fixes #5552
cc @gchanan
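For completeness, a minimal ATen-level reproduction of the kind of call issue #5552 is about; the snippet is an illustrative assumption rather than the PR's test, and the exact wording of the improved message is not quoted here.

```cpp
#include <ATen/ATen.h>

int main() {
  auto scalar = at::scalar_tensor(1.0);  // a zero-dimensional tensor
  auto matrix = at::ones({2, 3});
  // Concatenating a zero-dim tensor is invalid; after this change the thrown
  // error is expected to state that directly instead of surfacing an
  // unrelated size-mismatch message.
  auto out = at::cat({scalar, matrix}, /*dim=*/0);  // expected to throw
  return 0;
}
```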