Description
Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template
System information
- TensorFlow version (you are using): 2.3.1, 2.4.1
- Are you willing to contribute it (Yes/No): No
Describe the feature and the current behavior/state.
Currently there is no way to completely free GPU RAM once TensorFlow has allocated it.
For example, I want to use TensorFlow in the context of 3D visualization, which this behavior makes next to impossible. Standard mitigations like tf.config.experimental.set_memory_growth(gpus[0], True) are unfortunately not sufficient, because memory that has been allocated once can never be released again.
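As a concrete illustration of the workaround currently forced on applications, the only reliable way to get GPU RAM back today is to run the TensorFlow work in a short-lived child process, since the driver reclaims everything when that process exits. The sketch below (a minimal, hypothetical example; the placeholder snippet stands in for real TensorFlow GPU work so it runs anywhere) shows the pattern:

```python
import subprocess
import sys

# Placeholder workload. In real use this snippet would be TensorFlow
# GPU work, e.g.:
#   import tensorflow as tf
#   print(float(tf.reduce_sum(tf.ones([1000]))))
SNIPPET = "print(21 * 2)"

def run_isolated(code: str) -> str:
    """Run a Python snippet in a fresh process.

    Any GPU memory the child process allocates is returned to the
    driver when the process exits, which is currently the only way to
    fully release TensorFlow's GPU allocations.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(run_isolated(SNIPPET))  # prints "42"
```

Needing a subprocess round-trip just to release memory is exactly the awkwardness this feature request aims to remove.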
In #36465 (#36465 (comment)), it is mentioned that GPUProcessState::TestOnlyReset and ProcessState::TestOnlyReset can release GPU memory, but they are intended for testing purposes only and are not exposed in the public API.
It would be very helpful for applications using TensorFlow to have proper access to GPU RAM release functions.
Will this change the current API? How?
Introduce a new (experimental) function to reset the current session/graph/device/... state and thus free the allocated GPU RAM completely.
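To make the proposed semantics concrete, here is a toy model in pure Python (no TensorFlow; all names are hypothetical) of an allocator that, like TensorFlow's, grows but never shrinks, plus the kind of reset() this request asks for, analogous to exposing GPUProcessState::TestOnlyReset:

```python
class ToyGpuAllocator:
    """Toy stand-in for TF's GPU allocator: its pool grows but never shrinks."""

    def __init__(self):
        self.reserved_mb = 0  # memory held from the driver's point of view

    def allocate(self, mb: int) -> None:
        self.reserved_mb += mb

    def free(self, mb: int) -> None:
        # Today: freed memory goes back to the framework's internal pool,
        # not to the driver, so reserved_mb stays unchanged.
        pass

    def reset(self) -> None:
        # Proposed (hypothetical) call: release everything back to the
        # driver so other GPU applications can use it.
        self.reserved_mb = 0


alloc = ToyGpuAllocator()
alloc.allocate(512)
alloc.free(512)
print(alloc.reserved_mb)  # prints 512: still reserved after free
alloc.reset()
print(alloc.reserved_mb)  # prints 0: fully released
```

The exact surface (session reset, device reset, or a dedicated memory call) is open; the essential contract is the reset() step above.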
Who will benefit with this feature?
People who use TensorFlow in their application in conjunction with other GPU-RAM-critical operations such as 3D rendering.