88 changes: 8 additions & 80 deletions README.md
@@ -48,8 +48,7 @@
[![Gitter](https://badges.gitter.im/DeepLabCut/community.svg)](https://gitter.im/DeepLabCut/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://twitter.com/DeepLabCut)
[![Generic badge](https://img.shields.io/badge/Contributions-Welcome-brightgreen.svg)](CONTRIBUTING.md)


[![CZI's Essential Open Source Software for Science](https://chanzuckerberg.github.io/open-science/badges/CZI-EOSS.svg)](https://czi.co/EOSS)

</div>

@@ -128,84 +127,9 @@ This is an actively developed package and we welcome community development and involvement.
| The DeepLabCut [AI Residency Program](https://www.deeplabcutairesidency.org/) | To come and work with us next summer👏 | Annually | DLC Team |


## References:

If you use this code or data we kindly ask that you please [cite Mathis et al, 2018](https://www.nature.com/articles/s41593-018-0209-y) and, if you use the Python package (DeepLabCut2.x) please also cite [Nath, Mathis et al, 2019](https://doi.org/10.1038/s41596-019-0176-0). If you utilize the MobileNetV2s or EfficientNets please cite [Mathis, Biasi et al. 2021](https://openaccess.thecvf.com/content/WACV2021/papers/Mathis_Pretraining_Boosts_Out-of-Domain_Robustness_for_Pose_Estimation_WACV_2021_paper.pdf). If you use versions 2.2beta+ or 2.2rc1+, please cite [Lauer et al. 2022](https://www.nature.com/articles/s41592-022-01443-0).

DOIs (#ProTip: for help finding citations for software, check out [CiteAs.org](http://citeas.org/)!):

- Mathis et al 2018: [10.1038/s41593-018-0209-y](https://doi.org/10.1038/s41593-018-0209-y)
- Nath, Mathis et al 2019: [10.1038/s41596-019-0176-0](https://doi.org/10.1038/s41596-019-0176-0)
- Lauer et al 2022: [10.1038/s41592-022-01443-0](https://doi.org/10.1038/s41592-022-01443-0)


Please check out the following references for more details:

@article{Mathisetal2018,
title = {DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
journal = {Nature Neuroscience},
year = {2018},
url = {https://www.nature.com/articles/s41593-018-0209-y}}

@article{NathMathisetal2019,
title = {Using DeepLabCut for 3D markerless pose estimation across species and behaviors},
author = {Nath*, Tanmay and Mathis*, Alexander and Chen, An Chi and Patel, Amir and Bethge, Matthias and Mathis, Mackenzie W},
journal = {Nature Protocols},
year = {2019},
url = {https://doi.org/10.1038/s41596-019-0176-0}}

@InProceedings{Mathis_2021_WACV,
author = {Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W.},
title = {Pretraining Boosts Out-of-Domain Robustness for Pose Estimation},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2021},
pages = {1859-1868}}

@article{Lauer2022MultianimalPE,
title={Multi-animal pose estimation, identification and tracking with DeepLabCut},
author={Jessy Lauer and Mu Zhou and Shaokai Ye and William Menegas and Steffen Schneider and Tanmay Nath and Mohammed Mostafizur Rahman and Valentina Di Santo and Daniel Soberanes and Guoping Feng and Venkatesh N. Murthy and George Lauder and Catherine Dulac and M. Mathis and Alexander Mathis},
journal={Nature Methods},
year={2022},
volume={19},
pages={496 - 504}}

@article{insafutdinov2016eccv,
title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele},
booktitle = {ECCV'16},
url = {http://arxiv.org/abs/1605.03170}}

Review & Educational articles:

@article{Mathis2020DeepLT,
title={Deep learning tools for the measurement of animal behavior in neuroscience},
author={Mackenzie W. Mathis and Alexander Mathis},
journal={Current Opinion in Neurobiology},
year={2020},
volume={60},
pages={1-11}}

@article{Mathis2020Primer,
title={A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives},
author={Alexander Mathis and Steffen Schneider and Jessy Lauer and Mackenzie W. Mathis},
journal={Neuron},
year={2020},
volume={108},
pages={44-65}}

Other open-access pre-prints related to our work on DeepLabCut:

@article{MathisWarren2018speed,
author = {Mathis, Alexander and Warren, Richard A.},
title = {On the inference speed and video-compression robustness of DeepLabCut},
year = {2018},
doi = {10.1101/457242},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2018/10/30/457242},
eprint = {https://www.biorxiv.org/content/early/2018/10/30/457242.full.pdf},
journal = {bioRxiv}}
## References \& Citations:

Please see our [dedicated page](https://deeplabcut.github.io/DeepLabCut/docs/citation.html) on how to **cite DeepLabCut** 🙏 and our suggestions for your Methods section!

## License:

@@ -276,3 +200,7 @@ importing a project into the new data format for DLC 2.0
- August 2018: NVIDIA AI Developer News: [AI Enables Markerless Animal Tracking](https://news.developer.nvidia.com/ai-enables-markerless-animal-tracking/)
- July 2018: Ed Yong covered DeepLabCut and interviewed several users for the [Atlantic](https://www.theatlantic.com/science/archive/2018/07/deeplabcut-tracking-animal-movements/564338).
- April 2018: first DeepLabCut preprint on [arXiv.org](https://arxiv.org/abs/1804.03142)

## Funding

We are grateful for the following support over the years! This software project was supported in part by the Essential Open Source Software for Science (EOSS) program at the Chan Zuckerberg Initiative (cycles 1, 3, 3-DEI, and 4), and jointly with the Kavli Foundation for EOSS cycle 6! We also thank the Rowland Institute at Harvard for funding from 2017-2020, and EPFL from 2020-present.
13 changes: 9 additions & 4 deletions _toc.yml
@@ -29,7 +29,7 @@ parts:
chapters:
- file: docs/quick-start/single_animal_quick_guide
- file: docs/quick-start/tutorial_maDLC
- caption: Beginner's Guide to DeepLabCut
- caption: 🚀 Beginner's Guide to DeepLabCut
chapters:
- file: docs/beginner-guides/beginners-guide
- file: docs/beginner-guides/manage-project
@@ -42,15 +42,16 @@
- caption: DeepLabCut-Live!
chapters:
- file: docs/deeplabcutlive
- caption: DeepLabCut Model Zoo
- caption: 🦄 DeepLabCut Model Zoo
chapters:
- file: docs/ModelZoo
- file: docs/recipes/UsingModelZooPupil
- file: docs/recipes/MegaDetectorDLCLive
- caption: Cookbook (detailed helper guides)
- caption: 🧑‍🍳 Cookbook (detailed helper guides)
chapters:
- file: docs/tutorial
- file: docs/convert_maDLC
- file: docs/recipes/OtherData
- file: docs/recipes/io
- file: docs/recipes/nn
- file: docs/recipes/post
@@ -61,11 +62,15 @@
- file: docs/recipes/flip_and_rotate
- file: docs/recipes/pose_cfg_file_breakdown
- file: docs/recipes/publishing_notebooks_into_the_DLC_main_cookbook
- caption: DeepLabCut Benchmark
- caption: DeepLabCut Benchmarking
chapters:
- file: docs/benchmark
- file: docs/pytorch/Benchmarking_shuffle_guide
- caption: Mission & Contribute
chapters:
- file: docs/MISSION_AND_VALUES
- file: docs/roadmap
- file: docs/Governance
- caption: Citations for DeepLabCut
chapters:
- file: docs/citation
2 changes: 2 additions & 0 deletions deeplabcut/pose_estimation_tensorflow/predict_videos.py
@@ -1461,6 +1461,7 @@ def _convert_detections_to_tracklets(
greedy=greedy,
pcutoff=inference_cfg.get("pcutoff", 0.1),
min_affinity=inference_cfg.get("pafthreshold", 0.05),
min_n_links=inference_cfg["minimalnumberofconnections"]
)
if calibrate:
trainingsetfolder = auxiliaryfunctions.get_training_set_folder(cfg)
@@ -1753,6 +1754,7 @@ def convert_detections2tracklets(
min_affinity=inferencecfg.get("pafthreshold", 0.05),
window_size=window_size,
identity_only=identity_only,
min_n_links=inferencecfg["minimalnumberofconnections"]
)
assemblies_filename = dataname.split(".h5")[0] + "_assemblies.pickle"
if not os.path.exists(assemblies_filename) or overwrite:
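The two hunks above thread a new `min_n_links` argument into the tracklet-assembly calls, read from the `minimalnumberofconnections` key of the inference config. A minimal sketch of the lookup pattern (the dict below is illustrative, not the full `inference_cfg` schema): the neighboring keys fall back to defaults via `.get()`, whereas the new key is indexed directly and therefore must be present in the config.

```python
# Illustrative sketch of the config lookups in the hunks above; this dict
# stands in for the real inference_cfg.yaml, which has more keys.
inference_cfg = {
    "pcutoff": 0.1,
    "pafthreshold": 0.05,
    "minimalnumberofconnections": 3,
}

assembler_kwargs = dict(
    pcutoff=inference_cfg.get("pcutoff", 0.1),             # default if key absent
    min_affinity=inference_cfg.get("pafthreshold", 0.05),  # default if key absent
    min_n_links=inference_cfg["minimalnumberofconnections"],  # KeyError if absent
)
print(assembler_kwargs)
```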
2 changes: 1 addition & 1 deletion docs/UseOverviewGuide.md
@@ -4,7 +4,7 @@
Below we will first outline what you need to get started, the different ways you can use DeepLabCut, and then the full workflow. Note, we highly recommend you also read and follow our [Nature Protocols paper](https://www.nature.com/articles/s41596-019-0176-0), which is (still) fully relevant to standard DeepLabCut.

```{Hint}
💡📚 If you are new to Python and DeepLabCut, you might consider checking our [beginner guide](https://deeplabcut.github.io/DeepLabCut/docs/beginners-guide.html) once you are ready to jump into using the DeepLabCut App!
💡📚 If you are new to Python and DeepLabCut, you might consider checking our [beginner guide](https://deeplabcut.github.io/DeepLabCut/docs/beginner-guides/beginners-guide.html) once you are ready to jump into using the DeepLabCut App!
```


138 changes: 138 additions & 0 deletions docs/citation.md
@@ -0,0 +1,138 @@
# How to Cite DeepLabCut

Thank you for using DeepLabCut! Here are our recommendations for citing and documenting your use of DeepLabCut in your Methods section:


If you use this code or data we kindly ask that you please [cite Mathis et al, 2018](https://www.nature.com/articles/s41593-018-0209-y)
and, if you use the Python package (DeepLabCut2.x+) please also cite [Nath, Mathis et al, 2019](https://doi.org/10.1038/s41596-019-0176-0).
If you utilize the MobileNetV2s or EfficientNets please cite [Mathis, Biasi et al. 2021](https://openaccess.thecvf.com/content/WACV2021/papers/Mathis_Pretraining_Boosts_Out-of-Domain_Robustness_for_Pose_Estimation_WACV_2021_paper.pdf).
If you use multi-animal versions 2.2beta+ or 2.2rc1+, please cite [Lauer et al. 2022](https://www.nature.com/articles/s41592-022-01443-0).
If you use our SuperAnimal models, please cite [Ye et al. 2024](https://www.nature.com/articles/s41467-024-48792-2).

DOIs (#ProTip: for help finding citations for software, check out [CiteAs.org](http://citeas.org/)!):

- Mathis et al 2018: [10.1038/s41593-018-0209-y](https://doi.org/10.1038/s41593-018-0209-y)
- Nath, Mathis et al 2019: [10.1038/s41596-019-0176-0](https://doi.org/10.1038/s41596-019-0176-0)
- Lauer et al 2022: [10.1038/s41592-022-01443-0](https://doi.org/10.1038/s41592-022-01443-0)
- Ye et al 2024: [10.1038/s41467-024-48792-2](https://www.nature.com/articles/s41467-024-48792-2)

## Formatted citations:

@article{Mathisetal2018,
title = {DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
journal = {Nature Neuroscience},
year = {2018},
url = {https://www.nature.com/articles/s41593-018-0209-y}}

@article{NathMathisetal2019,
title = {Using DeepLabCut for 3D markerless pose estimation across species and behaviors},
author = {Nath*, Tanmay and Mathis*, Alexander and Chen, An Chi and Patel, Amir and Bethge, Matthias and Mathis, Mackenzie W},
journal = {Nature Protocols},
year = {2019},
url = {https://doi.org/10.1038/s41596-019-0176-0}}

@InProceedings{Mathis_2021_WACV,
author = {Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W.},
title = {Pretraining Boosts Out-of-Domain Robustness for Pose Estimation},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2021},
pages = {1859-1868}}

@article{Lauer2022MultianimalPE,
title={Multi-animal pose estimation, identification and tracking with DeepLabCut},
author={Jessy Lauer and Mu Zhou and Shaokai Ye and William Menegas and Steffen Schneider and Tanmay Nath and Mohammed Mostafizur Rahman and Valentina Di Santo and Daniel Soberanes and Guoping Feng and Venkatesh N. Murthy and George Lauder and Catherine Dulac and M. Mathis and Alexander Mathis},
journal={Nature Methods},
year={2022},
volume={19},
pages={496 - 504}}

@article{Ye2024SuperAnimal,
title={SuperAnimal pretrained pose estimation models for behavioral analysis},
author={Shaokai Ye and Anastasiia Filippova and Jessy Lauer and Steffen Schneider and Maxime Vidal and Tian Qiu and Alexander Mathis and Mackenzie W. Mathis},
journal={Nature Communications},
year={2024},
volume={15}}


### Review & Educational articles:

@article{Mathis2020DeepLT,
title={Deep learning tools for the measurement of animal behavior in neuroscience},
author={Mackenzie W. Mathis and Alexander Mathis},
journal={Current Opinion in Neurobiology},
year={2020},
volume={60},
pages={1-11}}

@article{Mathis2020Primer,
title={A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives},
author={Alexander Mathis and Steffen Schneider and Jessy Lauer and Mackenzie W. Mathis},
journal={Neuron},
year={2020},
volume={108},
pages={44-65}}

### Other open-access pre-prints related to our work on DeepLabCut:

@article{MathisWarren2018speed,
author = {Mathis, Alexander and Warren, Richard A.},
title = {On the inference speed and video-compression robustness of DeepLabCut},
year = {2018},
doi = {10.1101/457242},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2018/10/30/457242},
eprint = {https://www.biorxiv.org/content/early/2018/10/30/457242.full.pdf},
journal = {bioRxiv}}



## Methods Suggestion:

For body part tracking we used DeepLabCut (version 2.X.X)* [Mathis et al, 2018; Nath et al, 2019; Lauer et al, 2022]. Specifically, we labeled X frames taken from X videos/animals, of which X% were used for training (the default is 95%). We used an X-based neural network (e.g., X = ResNet-50, ResNet-101, MobileNetV2-0.35, MobileNetV2-0.5, MobileNetV2-0.75, or MobileNetV2-1***) with default parameters* for X training iterations. We validated with X shuffles and found a test error of X pixels and a train error of X pixels (image size was X by X). We then used a p-cutoff of X (e.g., 0.9) to condition the X,Y coordinates for future analysis. This network was then used to analyze videos from similar experimental settings.

> Mathis, A. et al. Deeplabcut: markerless pose estimation
> of user-defined body parts with deep learning. Nature
> Neuroscience 21, 1281–1289 (2018).

> Nath, T. et al. Using deeplabcut for 3d markerless pose
> estimation across species and behaviors. Nature Protocols
> 14, 2152–2176 (2019).
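To make the template concrete, a scripted run of the standard single-animal workflow might look like the following. This is a minimal sketch, assuming the public `deeplabcut` Python API as in DLC 2.x; the project name, video path, and iteration count are illustrative placeholders, and labeling happens interactively in the GUI.

```python
# Minimal sketch of the workflow behind the Methods template above.
# Paths, names, and hyperparameters are placeholders -- adapt to your data.
import deeplabcut

videos = ["/data/mouse_reach_01.mp4"]  # hypothetical video
config_path = deeplabcut.create_new_project(
    "reaching", "researcher", videos, copy_videos=True
)

# Extract and label frames, then build the training set.
# TrainingFraction in config.yaml defaults to 0.95 (the 95% split above).
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config_path)  # opens the labeling GUI
deeplabcut.create_training_dataset(config_path, num_shuffles=1)

# Train, then evaluate to obtain the train/test pixel errors to report.
deeplabcut.train_network(config_path, shuffle=1, maxiters=200000)
deeplabcut.evaluate_network(config_path, plotting=True)

# Analyze videos; detections below the p-cutoff in config.yaml (e.g., 0.9)
# are treated as low-confidence downstream.
deeplabcut.analyze_videos(config_path, videos, videotype=".mp4")
```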

*If any defaults were changed in `pose_cfg.yaml`, mention them here.

E.g., common things one might change (a hedged sketch for editing them programmatically follows this list):
* the loader (options are `default`, `imgaug`, `tensorpack`, `deterministic`).
* the `pos_dist_thresh` (default is 17; it determines the training resolution).
* the optimizer: do you use the default `SGD` or `ADAM`?
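
A minimal sketch of applying (and thereby documenting) such edits programmatically; it assumes the `edit_config` helper in `deeplabcut.utils.auxiliaryfunctions` and the `pose_cfg.yaml` key names shown in the comments, both of which you should verify against your installed version.

```python
# Hedged sketch: edit pose_cfg.yaml defaults programmatically. The path and
# key names are assumptions -- check them against your project and version.
from deeplabcut.utils import auxiliaryfunctions

# Illustrative path inside a project's dlc-models tree.
pose_cfg_path = "dlc-models/iteration-0/reachingJan1-trainset95shuffle1/train/pose_cfg.yaml"

edits = {
    "dataset_type": "imgaug",  # the loader: default, imgaug, tensorpack, deterministic
    "pos_dist_thresh": 17,     # training resolution threshold (17 is the default)
    "optimizer": "adam",       # or the default "sgd"
}
auxiliaryfunctions.edit_config(pose_cfg_path, edits)
```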

***Here you could add additional citations: if you use ResNets, consider citing Insafutdinov et al. 2016 and He et al. 2016; if you use the MobileNetV2s, consider citing Mathis et al. 2019 and Sandler et al. 2018.


> Mathis, A. et al. Pretraining boosts out-of-domain robustness for pose estimation.
> arXiv:1909.11229 (2019).

> Insafutdinov, E., Pishchulin, L., Andres, B., Andriluka,
> M. & Schiele, B. DeeperCut: A deeper, stronger, and
> faster multi-person pose estimation model. In European
> Conference on Computer Vision, 34–50 (Springer, 2016).

> Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. &
> Chen, L.-C. Mobilenetv2: Inverted residuals and linear
> bottlenecks. In Proceedings of the IEEE Conference
> on Computer Vision and Pattern Recognition, 4510–4520
> (2018).

> He, K., Zhang, X., Ren, S. & Sun, J. Deep residual
> learning for image recognition. In Proceedings of the
> IEEE conference on computer vision and pattern recognition,
> 770–778 (2016). URL https://arxiv.org/abs/1512.03385.

## Graphics

We also have the network graphic freely available on SciDraw.io if you'd like to use it! https://scidraw.io/drawing/290

You are welcome to use our logo in your works as well.