
Conversation

@sighingnow
Contributor

Make ByteTensor's any and all methods accept dim and keepdim parameters, so that they behave like sum/prod.

I added two macros, TENSOR_IMPLEMENT_ACC and TENSOR_IMPLEMENT_ACCALL, to reduce code duplication.

This feature should fix issue #4313.
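
For reference, here is a rough sketch of the two kinds of entry points the macros are meant to generate. The signatures are inferred from the diff hunks quoted in the reviews below, so treat them as illustrative rather than the exact patch:

// Generated by TENSOR_IMPLEMENT_ACCALL: reduce over all elements, no dim argument.
int THTensor_(logicalAndAll)(THTensor *self);

// Generated by TENSOR_IMPLEMENT_ACC: reduce along one dimension, like THTensor_(sum).
void THTensor_(logicalAnd)(THTensor *r_, THTensor *t, int dimension, int keepdim);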

@pytorchbot
Collaborator

@sighingnow, thanks for your PR! We identified @zdevito to be a potential reviewer.

Contributor

@apaszke apaszke left a comment


That looks good, thanks for the PR. I only have a few minor comments.

THTensor_(squeeze1d)(r_, r_, dimension);
}
}
#define TENSOR_IMPLEMENT_ACC(NAME, ACC_OP, INIT_VALUE) \


TH_TENSOR_APPLY(real, tensor, sum = sum OP *tensor_data;); \
return sum; \
}
TENSOR_IMPLEMENT_ACCALL(logicalallall, int, &&, 1)


@unittest.skipIf(not TEST_NUMPY, "Numpy not found")
def test_all_any_with_dim(self):
def test(x):
r1 = np.all(x, axis=1, keepdims=False).astype('uint8')


tensorLogspace,
tensorRand,
tensorRandn,
tensorLogicalallall,


- bool keepdim
- cname: logicalany
return: argument 0
before_call: maybeThrowBackCompatKeepdimWarn("any");


@sighingnow
Contributor Author

Thanks for your review, @apaszke; the new patch is done.

Contributor

@gchanan gchanan left a comment


looks good, thanks! Some small suggestions below, and I'll fix the general ATen issue about bool returns.

// Reduce all elements in the given tensor `t`, starting from the given `INIT_VALUE`,
// and return the result as type `RETTYPE`.
// The accumulation operator `ACC_OP` must be commutative.
#define TENSOR_IMPLEMENT_ACCALL(NAME, RETTYPE, ACC_OP, INIT_VALUE) \
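
A minimal sketch of how this macro plausibly expands, assuming the TH_TENSOR_APPLY-based body quoted earlier in this review; the wrapper boilerplate here is inferred, not copied from the patch:

#define TENSOR_IMPLEMENT_ACCALL(NAME, RETTYPE, ACC_OP, INIT_VALUE) \
RETTYPE THTensor_(NAME)(THTensor *tensor) \
{ \
  RETTYPE sum = INIT_VALUE; \
  /* Visit every element and fold it into the accumulator. */ \
  TH_TENSOR_APPLY(real, tensor, sum = sum ACC_OP *tensor_data;); \
  return sum; \
}

For example, TENSOR_IMPLEMENT_ACCALL(logicalAndAll, int, &&, 1) would define an int-returning THTensor_(logicalAndAll) that yields 1 iff every element is nonzero.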



TENSOR_IMPLEMENT_LOGICAL_SUM(logicalall, &&, 1)
TENSOR_IMPLEMENT_LOGICAL_SUM(logicalany, ||, 0)
TENSOR_IMPLEMENT_ACC(logicalAnd, &&, 1)


self.assertEqual(r3.shape, r4.shape)
self.assertTrue((r3 == r4).all())

test(torch.rand((1, 2, 3, 4)).round().byte())


- THTensor* self
options:
- cname: logicalAndAll
return: bool


@gchanan
Contributor

gchanan commented Jan 12, 2018

Ok, #4639 should fix the bool return issue. Let's merge that first and rebase this one so the return type is always consistent.

@gchanan
Contributor

gchanan commented Jan 12, 2018

#4639 is in.

Signed-off-by: HE, Tao <sighingnow@gmail.com>
@sighingnow
Contributor Author

Ping @gchanan

@ezyang
Contributor

ezyang commented Mar 23, 2018

This needs a rebase now; sorry about the delay :(

* upstream/master: (663 commits)
  ...

# Conflicts:
#	aten/src/TH/generic/THTensorMath.c
#	torch/_tensor_docs.py
#	torch/csrc/generic/methods/TensorCompare.cwrap
Signed-off-by: HE, Tao <sighingnow@gmail.com>
@sighingnow
Contributor Author

Thanks for picking this up, @ezyang. I have rebased this PR onto the current master.

@ezyang
Contributor

ezyang commented Mar 26, 2018

@pytorchbot retest this please

@sighingnow
Contributor Author

Fix compile error in THD module.

@ezyang
Contributor

ezyang commented Mar 29, 2018

@pytorchbot retest this please

@sighingnow
Contributor Author

The failures weren't caused by this code; it seems something went wrong on the CI server.

@ezyang
Contributor

ezyang commented Mar 30, 2018

@pytorchbot retest this please

@sighingnow
Contributor Author

Any further comments on this patch? I think it's now ready to be merged!

/cc @ezyang

return (bool)sum;
}

void THTensor_(logicalAnd)(THTensor *r_, THTensor *t, int dimension, int keepdim)
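
For context, here is a sketch of what TENSOR_IMPLEMENT_ACC might generate for this function, modeled on the old THTensor_(sum) in THTensorMath.c; the TH_TENSOR_DIM_APPLY2 body below is an assumption based on that pattern, not the PR's exact code:

void THTensor_(logicalAnd)(THTensor *r_, THTensor *t, int dimension, int keepdim)
{
  THLongStorage *dim;

  THArgCheck(dimension >= 0 && dimension < THTensor_(nDimension)(t), 2,
             "dimension %d out of range", dimension + 1);

  /* Resize the result so the reduced dimension has size 1. */
  dim = THTensor_(newSizeOf)(t);
  THLongStorage_set(dim, dimension, 1);
  THTensor_(resize)(r_, dim, NULL);
  THLongStorage_free(dim);

  /* Fold each slice along `dimension` with &&, starting from INIT_VALUE = 1. */
  TH_TENSOR_DIM_APPLY2(real, t, real, r_, dimension,
                       accreal sum = 1;
                       long i;
                       for (i = 0; i < t_size; i++)
                         sum = sum && t_data[i * t_stride];
                       *r__data = (real)sum;);

  /* Without keepdim, drop the reduced dimension, as sum/prod do. */
  if (!keepdim)
    THTensor_(squeeze1d)(r_, r_, dimension);
}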


@soumith
Contributor

soumith commented Apr 25, 2018

15:08:33 ======================================================================
15:08:33 ERROR: test_all_any_with_dim (test_torch.TestTorch)
15:08:33 ----------------------------------------------------------------------
15:08:33 Traceback (most recent call last):
15:08:33   File "test_torch.py", line 794, in test_all_any_with_dim
15:08:33     [1, 1, 1]]))
15:08:33   File "test_torch.py", line 784, in test
15:08:33     self.assertTrue((r1 == r2).all())
15:08:33 RuntimeError: Expected object of type torch.LongTensor but found type torch.ByteTensor for argument #2 'other'
15:08:33 
15:08:33 ----------------------------------------------------------------------

This broke master.

@sighingnow
Contributor Author

Sorry for the breakage. Thanks for fixing this, @soumith.

Jorghi12 pushed a commit to wsttiger/pytorch that referenced this pull request May 10, 2018
weiyangfb pushed a commit to weiyangfb/pytorch that referenced this pull request Jun 11, 2018