Is there an existing issue for this?
- I have searched the existing issues
Bug description
When using a top-down PyTorch model, the detector does not run on the GPU when calling analyze_videos, at least for a model trained via memory replay. This is true even when a specific device is provided via the device= PyTorch parameter.
If I hack a line to explicitly pass the device to utils.get_detector_inference_runner at line 398 of pose_estimation_pytorch/apis/analyze_videos.py, then the detector does use the supplied GPU.
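The hack described above amounts to forwarding the caller's device down to the detector runner. A minimal self-contained sketch of the pattern (all names here are illustrative stand-ins, not the actual DeepLabCut code):

```python
def get_detector_inference_runner(model, device="cpu"):
    """Stand-in for utils.get_detector_inference_runner: records which
    device the detector would run on."""
    return {"model": model, "device": device}


def analyze_videos_buggy(model, device="cuda:0"):
    # Bug pattern: the caller's device is never forwarded, so the
    # detector silently falls back to the helper's default (CPU).
    return get_detector_inference_runner(model)


def analyze_videos_fixed(model, device="cuda:0"):
    # Fix: pass the device through explicitly.
    return get_detector_inference_runner(model, device=device)


print(analyze_videos_buggy("detector")["device"])  # cpu
print(analyze_videos_fixed("detector")["device"])  # cuda:0
```

The one-line change is simply threading the device argument through to the detector-runner call, which matches the workaround reported above.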
Operating System
Ubuntu 22.04
DeepLabCut version
3.0.0rc6
DeepLabCut mode
single animal
Device type
Nvidia RTX A5000
Steps To Reproduce
Train a top-down model (I used fine-tuning with memory replay from the mouse SuperAnimal model), then call analyze_videos.
Relevant log output
Anything else?
No response
Code of Conduct
- I agree to follow this project's Code of Conduct