ReLU with NaN as input gives 0 as output #7999

@codinfox

Issue description

It seems that torch.relu(), when given NaN as input, produces 0 instead of NaN. I am wondering whether this is the intended behavior. It can hide bugs in user code and make troubleshooting harder.

Code example

import torch
a = torch.Tensor(1).fill_(float('nan'))
torch.relu(a)  # => tensor([0.]), expected tensor([nan])
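
For context, this would be consistent with a comparison-based threshold implementation (an assumption on my part, not verified against the 0.4.0 source): IEEE 754 comparisons involving NaN are always false, so a NaN input falls through to the zero branch. A minimal sketch of that behavior, plus a torch.isnan check that makes the hidden NaN visible:

import torch

x = torch.Tensor(1).fill_(float('nan'))

# NaN > 0 is False, so a threshold of the form "x if x > 0 else 0" yields 0.
torch.where(x > 0, x, torch.zeros_like(x))  # => tensor([0.])

# Checking explicitly for NaNs before/after relu surfaces the problem.
torch.isnan(x).any()  # => nonzero (True)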

System Info

Collecting environment information...
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 8.0.61

OS: Ubuntu 14.04.5 LTS
GCC version: (Ubuntu 4.8.5-4ubuntu8~14.04.2) 4.8.5
CMake version: version 3.2.2

Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 8.0.61
GPU models and configuration: GPU 0: GeForce GTX 1080
Nvidia driver version: 384.111
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.0.5
/usr/local/cuda-8.0/lib64/libcudnn.so
/usr/local/cuda-8.0/lib64/libcudnn.so.5
/usr/local/cuda-8.0/lib64/libcudnn.so.5.1.10
/usr/local/cuda-8.0/lib64/libcudnn.so.7
/usr/local/cuda-8.0/lib64/libcudnn.so.7.0.5
/usr/local/cuda-8.0/lib64/libcudnn_static.a

Versions of relevant libraries:
[pip] numpy (1.13.3)
[pip] numpydoc (0.7.0)
[pip] torch (0.4.0)
[pip] torchvision (0.2.0)
[conda] pytorch 0.4.0 py35_cuda8.0.61_cudnn7.1.2_1 pytorch
[conda] torchvision 0.2.0 py35_0

Labels

todo: Not as important as medium or high priority tasks, but we will work on these.
