
There is no interaction between no_grad, enable_grad, set_grad_enabled #40158

@Yura52

Description

📚 Documentation

torch==1.5.0

The problem is the same for all three functions. For example, let's consider no_grad.

The documentation says: "This mode has no effect when using enable_grad context manager". This is not true:

import torch

x = torch.tensor([1.0], requires_grad=True)
with torch.enable_grad():
    with torch.no_grad():
        # The inner no_grad wins: gradient tracking is off here.
        y = x * 2
assert not y.requires_grad  # passes, so no_grad did take effect
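
For completeness, here is what I would expect the analogous snippet for set_grad_enabled to show (it applies its state in the constructor rather than in __enter__, but the override behavior is the same):

import torch

x = torch.tensor([1.0], requires_grad=True)
with torch.no_grad():
    with torch.set_grad_enabled(True):
        y = x * 2  # grad is enabled here, despite the outer no_grad
assert y.requires_grad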

To be clear, I mean that the documentation should be fixed, not the behavior of torch 😄
These functions simply enable/disable grad in __enter__ (or in the constructor, in the case of set_grad_enabled) and restore the previous state in __exit__. This is very intuitive. IMHO, it would be confusing if, for example, the behavior of no_grad changed depending on some other context.
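
A minimal sketch of that save/restore pattern (illustrative only, not the actual torch source; the class name my_no_grad is made up):

import torch

class my_no_grad:
    def __enter__(self):
        # Save whatever state is currently active (set by an outer
        # context, if any), then disable grad unconditionally.
        self.prev = torch.is_grad_enabled()
        torch.set_grad_enabled(False)

    def __exit__(self, exc_type, exc_value, traceback):
        # Restore the saved state rather than a hard-coded default,
        # so nested contexts compose naturally.
        torch.set_grad_enabled(self.prev)
        return False

x = torch.tensor([1.0], requires_grad=True)
with torch.enable_grad():
    with my_no_grad():
        y = x * 2  # behaves exactly like torch.no_grad here
assert not y.requires_grad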

cc @ezyang @gchanan @zou3519 @ssnl @albanD @gqchen @jlin27

Metadata

Labels

high priority
module: autograd: Related to torch.autograd, and the autograd engine in general
module: docs: Related to our documentation, both in docs/ and docblocks
small: We think this is a small issue to fix. Consider knocking off high priority small issues
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
