Closed
Labels
high priority
module: linear algebra (issues related to specialized linear algebra operations in PyTorch; includes matrix multiply matmul)
module: tests (issues related to tests, not the torch.testing module)
triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
This test is failing consistently on win-vs2019-cuda11.3-py3 / test (default, 2, 2, windows.8xlarge.nvidia.gpu), but apparently on no other jobs. Failure snippet:
❌ Failure: TestLinalgCUDA.test_addmm_baddbmm_overflow_cuda_float16
Traceback (most recent call last):
File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 371, in instantiated_test
result = test(self, **param_kwargs)
File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 891, in only_fn
return fn(slf, *args, **kwargs)
File "test_linalg.py", line 6139, in test_addmm_baddbmm_overflow
self.assertTrue((out == 10000.).all())
AssertionError: tensor(False, device='cuda:0') is not true
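For context, the failing assertion checks that a float16 addmm whose intermediate products would overflow fp16 still comes out exact when the reduction is carried out in fp32. A minimal standalone sketch of that kind of check follows; the shapes, constants, and addmm arguments are illustrative assumptions, not copied from test_linalg.py:

import torch

# Repro sketch (assumed values): each dot product is 100 * 100 * 1000 = 1e7,
# which overflows float16 (max ~65504) if the GEMM reduces in fp16, while
# alpha = 0.001 applied during an fp32 accumulation yields exactly 1e4.
inp = torch.zeros(128, 128, dtype=torch.half, device="cuda")
mat1 = torch.full((128, 1000), 100.0, dtype=torch.half, device="cuda")
mat2 = torch.full((1000, 128), 100.0, dtype=torch.half, device="cuda")

out = torch.addmm(inp, mat1, mat2, alpha=0.001, beta=0.0)
# Expected tensor(True, device='cuda:0'); the failing job sees False.
print((out == 10000.0).all())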
Platforms: windows
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @mruberry @jianyuh @nikitaved @pearu @walterddr @IvanYashchuk @xwang233 @lezcano