Status: Open
Labels: module: autograd, module: memory usage, triaged
Description
>>> import torch
>>> import gc
>>> _ = torch.randn(1, device='cuda')
>>> del _
>>> torch.cuda.synchronize()
>>> gc.collect()
0
>>> print(torch.cuda.memory_allocated())
865280
>>> x = torch.randn(1, device='cuda', requires_grad=True)
>>> y = x.tanh()
>>> y.backward(torch.ones_like(y), create_graph=True)
>>> del x, y
>>> torch.cuda.synchronize()
>>> gc.collect()
0
>>> print(torch.cuda.memory_allocated())
867328
Leaks with y = x.tanh() but not with y = x + 1.
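A plausible reason, not confirmed in this issue: tanh's backward needs the saved forward output, so with create_graph=True the double-backward graph stored in x.grad keeps y alive (and, through the graph, x), forming a reference cycle; add's backward saves no tensors, so nothing is retained. One way to observe the difference is to check whether x.grad carries a graph at all (a minimal sketch; the exact grad_fn shown will vary by version):

import torch

x = torch.randn(1, device='cuda', requires_grad=True)
y = x.tanh()
y.backward(torch.ones_like(y), create_graph=True)
# tanh's double backward builds a graph, so x.grad has a grad_fn that
# keeps the saved forward output alive; with y = x + 1 the gradient is
# just the incoming ones tensor, and x.grad.grad_fn is None.
print(x.grad.grad_fn)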
Discovered when running code in #7270
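If the cycle through x.grad is indeed the culprit, the usual workaround (the torch.autograd.backward documentation carries a similar warning about create_graph) is to compute higher-order gradients with torch.autograd.grad, which returns the gradient instead of accumulating it into x.grad. A sketch of the repro rewritten that way:

import gc
import torch

x = torch.randn(1, device='cuda', requires_grad=True)
y = x.tanh()
# torch.autograd.grad hands the gradient back directly; x.grad is never
# populated, so no tensor -> .grad -> graph reference cycle is created
# even with create_graph=True.
(gx,) = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)
del x, y, gx
torch.cuda.synchronize()
gc.collect()
print(torch.cuda.memory_allocated())  # expected to match the pre-repro baseline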