[memory leak] [PyTorch] .backward(create_graph=True) #7343

Description

@ssnl
>>> import torch
>>> import gc
>>> _ = torch.randn(1, device='cuda')
>>> del _
>>> torch.cuda.synchronize()
>>> gc.collect()
0
>>> print(torch.cuda.memory_allocated())
865280
>>> x = torch.randn(1, device='cuda', requires_grad=True)
>>> y = x.tanh()
>>> y.backward(torch.ones_like(y), create_graph=True)
>>> del x, y
>>> torch.cuda.synchronize()
>>> gc.collect()
0
>>> print(torch.cuda.memory_allocated())
867328

The memory leaks with y = x.tanh() but not with y = x + 1.
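
My guess at the mechanism (an assumption, not verified against the engine internals): tanh's backward needs the saved output (d/dx tanh(x) = 1 - tanh(x)^2), so with create_graph=True the gradient written into x.grad is a non-leaf tensor whose graph keeps y alive, and that graph plausibly refers back to x, closing a reference cycle that runs partly through C++ objects the Python gc cannot traverse; x + 1 saves no tensors, so its gradient retains no graph. If that is right, then breaking the cycle by hand (continuing the session above, so torch and gc are already imported) should release the memory:

>>> x = torch.randn(1, device='cuda', requires_grad=True)
>>> y = x.tanh()
>>> y.backward(torch.ones_like(y), create_graph=True)
>>> x.grad = None  # drop the leaf's reference to the double-backward graph
>>> del x, y
>>> torch.cuda.synchronize()
>>> gc.collect()
>>> print(torch.cuda.memory_allocated())  # expected to drop back to the baseline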

Discovered when running the code in #7270.
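
As a stopgap in the meantime (a sketch using the public autograd API, not a fix for the underlying leak): torch.autograd.grad returns the gradient instead of accumulating it into x.grad, so nothing stored on the leaf retains the double-backward graph and no cycle should form:

>>> x = torch.randn(1, device='cuda', requires_grad=True)
>>> y = x.tanh()
>>> gx, = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)
>>> del x, y, gx
>>> torch.cuda.synchronize()
>>> gc.collect()
>>> print(torch.cuda.memory_allocated())  # no growth expected here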

cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved

Metadata

    Labels

        module: autograd - Related to torch.autograd, and the autograd engine in general
        module: memory usage - PyTorch is using more memory than it should, or it is leaking memory
        triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
