Make any and all on ByteTensor behave like sum/prod. #4627
Conversation
@sighingnow, thanks for your PR! We identified @zdevito to be a potential reviewer.
apaszke left a comment
That looks good, thanks for the PR. I only have a few minor comments.
aten/src/TH/generic/THTensorMath.c (outdated)

```c
    THTensor_(squeeze1d)(r_, r_, dimension);
  }
}
#define TENSOR_IMPLEMENT_ACC(NAME, ACC_OP, INIT_VALUE) \
```
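For orientation, here is a minimal sketch of what a dim-wise macro of this shape could expand to, pieced together from the fragments in this diff. The `TH_TENSOR_DIM_APPLY2` body and the resize boilerplate are assumptions in the TH idiom of the era, not a quote of the merged code:

```c
/* Sketch only: NAME is the generated function, ACC_OP the accumulate
 * operator (&& or ||), INIT_VALUE its identity element (1 or 0).
 * Reduces tensor `t` along `dimension` into `r_`, squeezing the
 * reduced dimension away unless keepdim is set. */
#define TENSOR_IMPLEMENT_ACC(NAME, ACC_OP, INIT_VALUE)                        \
void THTensor_(NAME)(THTensor *r_, THTensor *t, int dimension, int keepdim)  \
{                                                                             \
  THLongStorage *dim = THTensor_(newSizeOf)(t);                               \
  THLongStorage_set(dim, dimension, 1);                                       \
  THTensor_(resize)(r_, dim, NULL);                                           \
  THLongStorage_free(dim);                                                    \
  TH_TENSOR_DIM_APPLY2(real, t, real, r_, dimension,                          \
      accreal acc = INIT_VALUE;                                               \
      int64_t i;                                                              \
      for (i = 0; i < t_size; i++)                                            \
        acc = acc ACC_OP t_data[i * t_stride];                                \
      *r__data = (real)acc;);                                                 \
  if (!keepdim)                                                               \
    THTensor_(squeeze1d)(r_, r_, dimension);                                  \
}
```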
aten/src/TH/generic/THTensorMath.c (outdated)

```c
    TH_TENSOR_APPLY(real, tensor, sum = sum OP *tensor_data;); \
    return sum; \
}
TENSOR_IMPLEMENT_ACCALL(logicalallall, int, &&, 1)
```
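The whole-tensor variant can be filled in the same way around the `TH_TENSOR_APPLY` line above. This sketch uses the four-parameter form the review later converges on, so treat the exact signature as an assumption:

```c
/* Sketch only: fold every element of `tensor` into one scalar of type
 * RETTYPE, starting from INIT_VALUE and combining with ACC_OP. */
#define TENSOR_IMPLEMENT_ACCALL(NAME, RETTYPE, ACC_OP, INIT_VALUE) \
RETTYPE THTensor_(NAME)(THTensor *tensor)                          \
{                                                                  \
  RETTYPE sum = INIT_VALUE;                                        \
  TH_TENSOR_APPLY(real, tensor, sum = sum ACC_OP *tensor_data;);   \
  return sum;                                                      \
}

/* The instantiation shown above: logical AND over all elements. */
TENSOR_IMPLEMENT_ACCALL(logicalallall, int, &&, 1)
```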
test/test_torch.py (outdated)

```python
    @unittest.skipIf(not TEST_NUMPY, "Numpy not found")
    def test_all_any_with_dim(self):
        def test(x):
            r1 = np.all(x, axis=1, keepdims=False).astype('uint8')
```
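The test pairs each torch reduction with its numpy counterpart. A condensed, self-contained sketch of that pattern follows; `r1` and the final input come from the fragments, while the helper name and the explicit `.numpy()` conversion are assumptions:

```python
import numpy as np
import torch

def check_all_with_dim(x):
    # numpy reference: reduce along axis 1, cast back to uint8
    r1 = np.all(x.numpy(), axis=1, keepdims=False).astype('uint8')
    # the new overload this PR adds to ByteTensor
    r2 = x.all(1, keepdim=False)
    assert r1.shape == tuple(r2.shape)
    assert (torch.from_numpy(r1) == r2).all()

check_all_with_dim(torch.rand(1, 2, 3, 4).round().byte())
```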
```c
    tensorLogspace,
    tensorRand,
    tensorRandn,
    tensorLogicalallall,
```
```
        - bool keepdim
    - cname: logicalany
      return: argument 0
      before_call: maybeThrowBackCompatKeepdimWarn("any");
```
Thanks for your review, @apaszke. New patch done.
gchanan left a comment
looks good, thanks! Some small suggestions below, and I'll fix the general ATen issue about bool returns.
aten/src/TH/generic/THTensorMath.c (outdated)

```c
// Reduce all elements in the given tensor `t` with given `INIT_VALUE` and return
// the result of type `RETTYPE`.
// The accumulate operator `ACC_OP` must be commutative.
#define TENSOR_IMPLEMENT_ACCALL(NAME, RETTYPE, ACC_OP, INIT_VALUE) \
```
aten/src/TH/generic/THTensorMath.c (outdated)

```c
TENSOR_IMPLEMENT_LOGICAL_SUM(logicalall, &&, 1)
TENSOR_IMPLEMENT_LOGICAL_SUM(logicalany, ||, 0)
TENSOR_IMPLEMENT_ACC(logicalAnd, &&, 1)
```
test/test_torch.py (outdated)

```python
            self.assertEqual(r3.shape, r4.shape)
            self.assertTrue((r3 == r4).all())

        test(torch.rand((1, 2, 3, 4)).round().byte())
```
aten/src/ATen/Declarations.cwrap (outdated)

```
    - THTensor* self
  options:
    - cname: logicalAndAll
      return: bool
```
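Assembled from the fragments above, a complete Declarations.cwrap entry plausibly looks like the following. The field layout is an educated reconstruction from the visible lines, not a quote of the merged file:

```
[[
  name: all
  variants:
    - method
    - function
  options:
    - cname: logicalAndAll
      return: bool
      arguments:
        - THTensor* self
    - cname: logicalAnd
      return: argument 0
      before_call: maybeThrowBackCompatKeepdimWarn("all");
      arguments:
        - arg: THTensor* result
          output: True
        - THTensor* self
        - long dim
        - bool keepdim
]]
```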
Ok, #4639 should fix the bool return issue. Let's merge that first and rebase this one so the return type is always consistent.
#4639 is in.
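For context, the user-visible effect of that split (illustrative values, assuming the post-#4639 behavior of this era): the no-dim variants return a plain Python bool, matching the `return: bool` declaration above, while the dim variants return a tensor.

```python
import torch

t = torch.ByteTensor([[1, 0], [1, 1]])

t.all()    # False -- whole-tensor reduction, plain Python bool
t.any()    # True
t.all(1)   # ByteTensor([0, 1]) -- per-row reduction returns a tensor
```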
Signed-off-by: HE, Tao <sighingnow@gmail.com>
Ping @gchanan.
This needs a rebase now; sorry about the delay :(
* upstream/master: (663 commits)
  Fix "command not found" error in perf test (pytorch#5982)
  add pip mkl-devel to the error message when mkl is found but mkl headers are not (pytorch#5984)
  Support batch LowerCholeskyTransform (pytorch#5980)
  Linearly interpolating upsampling fix (pytorch#5927)
  Store perf numbers in S3 (pytorch#5951)
  Modidy setup docs for Windows (pytorch#5981)
  Group Normalization (pytorch#5968)
  [distributions] Implement Power transform (pytorch#5976)
  Disable TestBottleneck test_cuda on Windows (pytorch#5977)
  Fix crash when cat-ing empty cuda tensors (pytorch#5971)
  Update no_unions flag for nanopb gen and update ONNX proto files (pytorch#5972)
  Expose gradients w.r.t. input & weight for conv1d, conv2d, conv3d in Python (pytorch#5408)
  Fixed non-determinate preprocessing on DataLoader (pytorch#4640)
  add AVX2 implementation for sigmoid function (pytorch#5010)
  Implement torch.util.bottleneck (pytorch#5216)
  Remove pragma once from cpp file (pytorch#5965)
  fix mvn docs (pytorch#5967)
  Fix incorrect rendering of Tensor.index_*_ doc examples. (pytorch#5969)
  Implement range for loop in script (pytorch#5827)
  Add windows doc (pytorch#5859)
  ...

# Conflicts:
#   aten/src/TH/generic/THTensorMath.c
#   torch/_tensor_docs.py
#   torch/csrc/generic/methods/TensorCompare.cwrap
Signed-off-by: HE, Tao <sighingnow@gmail.com>
Thanks for picking this up. @ezyang, I have rebased this PR onto current master.
@pytorchbot retest this please
Fix compile error in THD module.
@pytorchbot retest this please
The failures weren't caused by the code; it seems something went wrong on the CI server.
@pytorchbot retest this please
Any further comments on this patch? I think it's now ready to be merged! /cc @ezyang
```c
  return (bool)sum;
}

void THTensor_(logicalAnd)(THTensor *r_, THTensor *t, int dimension, int keepdim)
```
This broke master.
Sorry for the breakage. Thanks for fixing this, @soumith.
Make ByteTensor's methods `any` and `all` accept `dim` and `keepdim` parameters, so they behave like `sum`/`prod`. I added two macros, `TENSOR_IMPLEMENT_ACC` and `TENSOR_IMPLEMENT_ACCALL`, to reduce code duplication.

This feature should fix issue #4313.