cpp frontend torch::nll_loss2d memory leak #21894

@chennian32

Description

🐛 Bug

torch::nll_loss2d memory leak

When I train a classification model, there is no memory leak using nll_loss. But when I train a semantic segmentation model, a memory leak occurs when I use nll_loss2d.
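For reference, the two calls differ in the input rank they expect: nll_loss takes [N, C] log-probabilities with [N] class targets, while nll_loss2d takes [N, C, H, W] log-probabilities with [N, H, W] per-pixel targets. A minimal sketch follows (all shapes are hypothetical, chosen only for illustration):

#include <torch/torch.h>

int main() {
    // Classification: [N, C] log-probabilities, [N] class indices.
    auto logits = torch::randn({8, 10});
    auto targets = torch::randint(0, 10, {8}, torch::kLong);
    auto loss = torch::nll_loss(torch::log_softmax(logits, 1), targets);

    // Segmentation: [N, C, H, W] log-probabilities, [N, H, W] per-pixel indices.
    auto logits2d = torch::randn({8, 10, 32, 32});
    auto targets2d = torch::randint(0, 10, {8, 32, 32}, torch::kLong);
    auto loss2d = torch::nll_loss2d(torch::log_softmax(logits2d, 1), targets2d);
}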

To Reproduce

Steps to reproduce the behavior:

1. Semantic segmentation
2. torch::nll_loss2d
3. C++ frontend

// model_, optimizer_, batch, d, loss, and lr are declared outside this snippet.
model_->train();
auto data = batch.data.to(d), targets = batch.target.to(d);
optimizer_->zero_grad();
auto prediction = model_->forward(data);
loss = torch::nll_loss2d(torch::log_softmax(prediction, 1), targets);
loss.backward(); // If I comment out this line, there is no memory leak
optimizer_->step();
float lossIn = loss.item<float>();
lr = optimizer_->options.learning_rate();
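In case it helps triage, here is a self-contained standalone loop that exercises the same log_softmax + nll_loss2d + backward() path. The model, shapes, and optimizer settings below are placeholders made up for illustration, not the reporter's actual setup; it is only a sketch for isolating the leak:

#include <torch/torch.h>
#include <cstdio>

int main() {
    torch::Device d(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);

    // Toy 1x1-conv "segmentation" model standing in for the real one.
    auto model = torch::nn::Conv2d(torch::nn::Conv2dOptions(3, 10, 1));
    model->to(d);
    torch::optim::SGD optimizer(model->parameters(), /*lr=*/0.01);

    for (int step = 0; step < 1000; ++step) {
        auto data = torch::randn({4, 3, 64, 64}, d);
        auto targets = torch::randint(
            0, 10, {4, 64, 64},
            torch::TensorOptions().dtype(torch::kLong).device(d));
        optimizer.zero_grad();
        auto prediction = model->forward(data);
        auto loss = torch::nll_loss2d(torch::log_softmax(prediction, 1), targets);
        loss.backward();  // the reported leak appears only when backward() runs
        optimizer.step();
        std::printf("step %d loss %f\n", step, loss.item<float>());
    }
}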
Environment

  • PyTorch Version (e.g., 1.0): 1.1
  • OS (e.g., Linux): Windows 10
  • How you installed PyTorch (conda, pip, source): libtorch
  • Build command you used (if compiling from source):
  • Python version:
  • CUDA/cuDNN version: cu100 (CUDA 10.0)
  • GPU models and configuration: CUDA with cuDNN
  • Any other relevant information:

Metadata


    Labels

    module: autograd (Related to torch.autograd, and the autograd engine in general)
    module: cpp (Related to C++ API)
    module: loss (Problem is related to loss function)
    module: memory usage (PyTorch is using more memory than it should, or it is leaking memory)
    triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
