How can I clear GPU memory in tensorflow 2? #36465

@HristoBuyukliev

Description

System information

  • Custom code; nothing exotic though.
  • OS: Ubuntu 18.04
  • TensorFlow installed from source (with pip)
  • TensorFlow version: v2.1.0-rc2-17-ge5bf8de
  • Python 3.6
  • CUDA 10.1
  • GPU: Tesla V100, 32 GB

I created a model, nothing especially fancy in it. As soon as the model is created, nvidia-smi shows that TensorFlow takes up nearly all of the GPU memory. Fitting the model with a small batch size succeeds; fitting with a larger batch size runs out of memory. Nothing unexpected so far.
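
(For reference, the up-front grab is TF2's default allocator behaviour and can be limited with memory growth; a minimal sketch below, which only changes how memory is claimed, not whether it is ever released:)

```python
import tensorflow as tf

# By default TF2 maps nearly all GPU memory when the first GPU op runs.
# Memory growth switches to on-demand allocation; it must be set before
# any GPU is initialized.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```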

However, the only way I can then release the GPU memory is to restart my computer. When I run nvidia-smi, the memory still shows as used, but no process is using the GPU. Also, if I then try to run another model, it runs out of memory much sooner.

Nothing in the first five pages of Google results works (and most of those solutions are for TF1).

Is there any way to release GPU memory in TensorFlow 2?
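
For concreteness, this is a minimal sketch of the kind of TF2-style cleanup those results typically suggest (the Dense(10) model is just a stand-in for my actual model); the memory reported by nvidia-smi stays allocated afterwards:

```python
import gc
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])  # stand-in model
# ... compile / fit until the OOM happens ...

# Typical suggestions: drop the Python reference, clear the Keras
# session, force garbage collection.
del model
tf.keras.backend.clear_session()
gc.collect()

# nvidia-smi still reports the GPU memory as used after this.
```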
