CUDNN_STATUS_BAD_PARAM with LSTM/RNN in 1.6.0 with autocast #43322

@DietDietDiet

Description

🐛 Bug

I ran into the following error when using PyTorch 1.6 with an LSTM:
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM

To Reproduce

I think the root cause is that I was using mixed-precision training in PyTorch 1.6, following https://pytorch.org/docs/stable/notes/amp_examples.html, so the input tensors may be converted to fp16, which triggers this error.
When I train my model in full precision, it works normally.
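The issue does not include a standalone repro, so here is a minimal sketch of the failure pattern described above (the model, sizes, and tensor shapes are hypothetical, not the original poster's code): an `nn.LSTM` forward pass inside `torch.cuda.amp.autocast` casts its inputs to fp16, which is what reportedly triggered `CUDNN_STATUS_BAD_PARAM` on PyTorch 1.6 with cuDNN. On CPU, or with autocast disabled, the same forward pass runs in full precision and succeeds.

```python
# Hypothetical minimal repro sketch for the reported autocast + LSTM failure.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=1)
x = torch.randn(5, 3, 10)  # (seq_len, batch, input_size)

if torch.cuda.is_available():
    lstm, x = lstm.cuda(), x.cuda()
    # Under autocast, the LSTM inputs are cast to fp16; on PyTorch 1.6
    # this forward call is where the reported cuDNN error surfaced.
    with torch.cuda.amp.autocast():
        out, (h, c) = lstm(x)
else:
    # Full-precision path (CPU here): works normally, matching the report.
    out, (h, c) = lstm(x)

print(out.shape)  # (seq_len, batch, hidden_size)
```

A commonly suggested workaround for this class of error is to wrap the RNN call in `torch.cuda.amp.autocast(enabled=False)` and cast its inputs back to fp32, keeping the rest of the network in mixed precision.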

cc @csarofeen @ptrblck @zou3519


Labels

    module: cudnn (Related to torch.backends.cudnn, and cuDNN support)
    module: rnn (Issues related to RNN support (LSTM, GRU, etc))
    triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
