
[c10d] Use torch::autograd::Variable with no_grad #19145

@pietern

Description

We currently use at::Tensor directly and don't worry about the distinction between Variable and Tensor in c10d. This came up when working on adding support for reduction of sparse tensors, where Variable instances that come in through pybind are mixed with temporary Tensor instances created on the c10d side.
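For illustration, here is a minimal sketch of the kind of mix-up involved (the function is hypothetical, not the actual c10d reduction code):

```cpp
#include <ATen/ATen.h>

// Hypothetical reduction step: `input` arrives from Python through
// pybind as a Variable (an at::Tensor carrying autograd metadata),
// while the temporary below is a plain ATen tensor without it.
at::Tensor reduceStep(const at::Tensor& input) {
  auto buffer = at::zeros_like(input);  // plain Tensor temporary
  // Mixing `input` (Variable) and `buffer` (Tensor) in one op is
  // exactly the Variable/Tensor mismatch described above.
  buffer.add_(input);
  return buffer;
}
```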

One solution here is to use torch::autograd::Variable (later to be fully merged into at::Tensor) throughout. This means adding a dependency on torch/csrc/autograd in c10d and changing all temporaries to be created with factory functions in the torch namespace.
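A sketch of what that would look like, using a hypothetical buffer-allocation helper (the point is calling the torch:: factory instead of the at:: one, with gradient tracking disabled for temporaries):

```cpp
#include <torch/torch.h>

// Hypothetical helper for c10d temporaries. torch::zeros_like returns
// a Variable, so the result composes with tensors coming in from
// Python; at::zeros_like would return a plain Tensor instead.
at::Tensor makeBuffer(const at::Tensor& input) {
  torch::NoGradGuard no_grad;       // temporaries never need gradients
  return torch::zeros_like(input);
}
```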

A temporary workaround is to unbox and rebox Variable instances at the Python/C++ boundary.
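In code, that workaround would look roughly like the following, assuming the pre-merge autograd API of this era (as_variable_ref and make_variable from torch/csrc/autograd/variable.h); the helper names are hypothetical:

```cpp
#include <torch/csrc/autograd/variable.h>

// Strip the Variable wrapper on the way into c10d.
at::Tensor unbox(const at::Tensor& t) {
  return torch::autograd::as_variable_ref(t).data();
}

// Restore a Variable (not tracking gradients) on the way back to Python.
at::Tensor rebox(const at::Tensor& t) {
  return torch::autograd::make_variable(t, /*requires_grad=*/false);
}
```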

Labels

- module: internals (Related to internal abstractions in c10 and ATen)
- oncall: distributed (Add this issue/PR to distributed oncall triage queue)
- triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
