
Automatic Mixed Precision not documented to work with nn.DataParallel #50170

@marii-moe

Description

📚 Documentation

The documentation is here: https://pytorch.org/docs/stable/notes/amp_examples.html#dataparallel-in-a-single-process
and here: https://github.com/pytorch/pytorch/blob/master/docs/source/notes/amp_examples.rst#dataparallel-in-a-single-process

This is the statement in question:

> torch.nn.DataParallel spawns threads to run the forward pass on each device. The autocast state is thread local, so the following will not work:
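
For context, this is roughly the failing pattern the linked AMP examples describe (a minimal sketch; `MyModel`, its matmul body, and the tensor shapes are placeholders, not the code from the docs):

```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast

class MyModel(nn.Module):
    # Placeholder module; matmul is an op autocast would run in float16.
    def forward(self, x):
        return x @ x.t()

dp_model = nn.DataParallel(MyModel().cuda())

with autocast():
    # nn.DataParallel runs forward() in a separate thread per device.
    # Because the autocast state is thread-local, the docs warn it does
    # not propagate into those threads, so this forward pass may run in
    # float32 despite the enclosing context.
    output = dp_model(torch.rand(8, 8, device="cuda"))
```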

Pull request #43102 updated parallel_apply so that nn.DataParallel now works with autocast, so the note quoted above is outdated and the documentation should be updated to reflect this.
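
For reference, the workaround the AMP documentation recommends is to enable autocast inside `forward` itself, so each worker thread sets its own thread-local state (a sketch reusing the placeholder `MyModel` above):

```python
class MyModel(nn.Module):
    @autocast()  # decorator form: autocast is enabled inside each thread
    def forward(self, x):
        return x @ x.t()

# Equivalent alternative: open the context manager inside forward.
class MyModelAlt(nn.Module):
    def forward(self, x):
        with autocast():
            return x @ x.t()
```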

Labels: module: data parallel, triaged
