System information
- Custom code; nothing exotic, though
- OS: Ubuntu 18.04
- TensorFlow installed from source (via pip)
- TensorFlow version: v2.1.0-rc2-17-ge5bf8de
- Python version: 3.6
- CUDA version: 10.1
- GPU: Tesla V100, 32 GB memory
I created a model; nothing especially fancy in it. As soon as I create the model, nvidia-smi shows that TensorFlow has taken up nearly all of the GPU memory. When I fit the model with a small batch size, it runs successfully; with a larger batch size, it runs out of memory. Nothing unexpected so far.
However, the only way I can then release the GPU memory is to restart my computer. nvidia-smi still shows the memory as used, even though there is no process using the GPU. Also, if I try to run another model, it fails much sooner.
Nothing in the first five pages of Google results works (and most of the solutions are for TF 1.x).
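For reference, the most common suggestion is enabling memory growth, which makes TF allocate GPU memory on demand instead of reserving nearly all of it up front. A minimal sketch of that; as far as I can tell it only changes how memory is acquired and never gives memory back within the process:

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of reserving almost all of it
# at startup; this must run before any op touches the GPU.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```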
Is there any way to release GPU memory in TensorFlow 2?
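The only workaround I can think of is running each fit in a separate process, so the memory goes back to the driver when that process exits. A rough sketch, with a toy model and random data standing in for my actual code:

```python
import multiprocessing as mp

import numpy as np


def train_once(batch_size):
    # Import TF inside the child so the CUDA context is created,
    # and destroyed, entirely within this process.
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
    model.compile(optimizer="adam", loss="mse")
    x = np.random.rand(1024, 8).astype("float32")
    y = np.random.rand(1024, 1).astype("float32")
    model.fit(x, y, batch_size=batch_size, epochs=1)


if __name__ == "__main__":
    # 'spawn' gives the child a fresh interpreter (and a fresh CUDA
    # context); the GPU memory is released when the child exits.
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=train_once, args=(256,))
    p.start()
    p.join()
```

That does free the memory, but spawning a process per experiment is awkward, so I'm hoping there is a proper API for this.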