PyTorch master@92a0f78
Our single-precision (float32) digamma accuracy is poor near the poles. This is not caught by our tests because we only test a few values:
Lines 272 to 274 in 92a0f78:

```python
def test_digamma(self):
    from scipy.special import digamma
    self._testMath(torch.digamma, digamma, large=False, precs=(2e-8, 3e-4))
```
```python
input = torch.tensor([-1.99999994], dtype=torch.float32)
fp32 = input.digamma()
fp64 = input.double().digamma()
print(((fp32.double() - fp64) / fp64).item())  # 0.24012388522631392
```

This is a relative error of 24%.
We are also returning finite real numbers at the poles (for both float32 and float64) when we should be returning inf.
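For reference, here is a minimal pure-Python double-precision digamma sketch illustrating the requested pole behavior. This is not PyTorch's kernel; the name `digamma_ref`, the shift cutoff of 10, and the choice of returning plain `inf` at the poles are illustrative assumptions. It uses the standard recurrence plus the asymptotic series:

```python
import math

def digamma_ref(x: float) -> float:
    """Sketch of a double-precision digamma (not PyTorch's implementation).

    Shifts the argument up via the recurrence psi(x) = psi(x + 1) - 1/x
    until the asymptotic series converges well, and returns inf at the
    poles (the non-positive integers), as this issue requests.
    """
    if x <= 0 and x == math.floor(x):
        return math.inf  # pole of digamma at 0, -1, -2, ...
    result = 0.0
    # Recurrence: accumulate -1/x while shifting x upward.
    while x < 10.0:
        result -= 1.0 / x
        x += 1.0
    # Asymptotic series:
    # psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6) + 1/(240x^8)
    inv2 = 1.0 / (x * x)
    result += math.log(x) - 0.5 / x - inv2 * (
        1.0 / 12.0 - inv2 * (1.0 / 120.0 - inv2 * (1.0 / 252.0 - inv2 / 240.0)))
    return result
```

Near a pole the term -1/(x + n) dominates, e.g. `digamma_ref(-2.0 + 1e-7)` is about -1e7, which shows why small input rounding produces large relative errors there.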