
Commit bc03f27

Fix size of histc return on CPU when input is 0-dimensional and bins=1.

This triggers the wrong scalar_check unless it is explicitly overridden (because the heuristic uses the shape of the input). This wasn't an issue on CUDA because it didn't use TH, so it doesn't have scalar checks.

gh-metadata: pytorch pytorch 21497 gh/gchanan/26/head

1 parent d7197bc commit bc03f27

File tree

2 files changed: +8 −0 lines changed

aten/src/ATen/Declarations.cwrap

1 addition & 0 deletions

@@ -1758,6 +1758,7 @@
   variants:
     - function
   return: argument 0
+  scalar_check: false
   arguments:
     - arg: THTensor* result
       output: True
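The `scalar_check: false` override is needed because TH's default heuristic infers whether the result should be 0-dimensional from the input's shape, while histc's result shape depends only on `bins`. A minimal pure-Python sketch of that mismatch (the helper names are illustrative, not actual TH/ATen code):

```python
# Hypothetical sketch of the shape heuristic described in the commit message;
# these helpers are illustrative, not real PyTorch internals.

def default_scalar_check(input_ndim: int) -> bool:
    # TH's default heuristic: a 0-dimensional input implies
    # a 0-dimensional (scalar) result.
    return input_ndim == 0

def histc_result_ndim(bins: int) -> int:
    # histc always produces a 1-D tensor of length `bins`,
    # regardless of the input's dimensionality.
    return 1

# For a 0-dim input with bins=1, the default heuristic would wrongly
# mark the result as a scalar -- hence the explicit override above.
assert default_scalar_check(0) is True
assert histc_result_ndim(1) == 1  # the real result is 1-D, not 0-D
```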

test/test_torch.py

7 additions & 0 deletions

@@ -2646,6 +2646,13 @@ def test_histc(self):
             torch.tensor([0, 2, 1, 0], dtype=torch.float, device=device),
             actual)
         self.assertEqual(actual.dtype, torch.float)
+        # scalar input and 1 bin -- should return a 1-dimensional tensor, not a scalar.
+        actual = torch.histc(
+            torch.tensor(0, dtype=torch.float, device=device),
+            bins=1, min=0, max=3)
+        self.assertEqual(
+            torch.tensor([1], dtype=torch.float, device=device),
+            actual)

         # test against numpy.histogram()
         def test_against_np(tensor, bins=100, min=0, max=0):
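The new test asserts that a scalar input with `bins=1` yields a length-1 tensor. A rough pure-Python sketch of histc's counting semantics (a simplification for illustration, not PyTorch's implementation) shows why the result length depends only on `bins`:

```python
def histc_sketch(values, bins, lo, hi):
    # Result length is fixed by `bins`, never by the number of inputs.
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        if lo <= v <= hi:
            # Clamp the top edge into the last bucket, as histc does.
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
    return counts

# Mirrors the new test case: one scalar value, one bin over [0, 3].
assert histc_sketch([0.0], bins=1, lo=0.0, hi=3.0) == [1]
```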
