38 changes: 24 additions & 14 deletions docs/ModelZoo.md
@@ -1,10 +1,10 @@
# The DeepLabCut Model Zoo!

🦒 🐈 🐕‍🦺 🐀 🐁 🦡 🦦 🐏 🐫 🐆 🦓 🐖 🐄 🐂 🦖
🦒 🐈 🐕‍🦺 🐀 🐁 🦡 🦦 🐏 🐫 🐆 🦓 🐖 🐄 🐂 🦖 🐿 🦍 🦥

## 🏠 [Home page](http://modelzoo.deeplabcut.org/)

Started in 2020, the model zoo is four things:
Started in 2020 and expanded in 2022, the model zoo is four things:
- (1) a collection of models that are trained on diverse data across (typically) large datasets, which means you do not need to train models yourself
- (2) a contribution website for community crowdsourcing of expertly labeled keypoints to improve the models in part 1!
- (3) a no-install DeepLabCut that you can use on ♾[Google Colab](https://colab.research.google.com/github/DeepLabCut/DeepLabCut/blob/master/examples/COLAB/COLAB_DLC_ModelZoo.ipynb),
@@ -18,29 +18,40 @@ pip install deeplabcut[tf,gui,modelzoo]
```


### About SuperAnimal Models.
## About the SuperAnimal Models

Our newest generation models act as a paradigm shift of using pre-trained model. It aims to provide a plug and play solution that works without training.
Animal pose estimation is critical in applications ranging from neuroscience to veterinary medicine. However, reliable inference of animal poses currently requires domain knowledge and labeling effort. To ease access to high-performance animal pose estimation models across diverse environments and species, we present a new paradigm for pre-training and fine-tuning that provides excellent zero-shot (no training required) performance on two major classes of animal pose data: quadrupeds and lab mice.

IMPORTANT: we currently only support single animal scenarios
To provide the community with easy access to such high-performance models across diverse environments and species, we present a new paradigm for building pre-trained animal pose models -- which we call SuperAnimal models -- and the ability to use them for transfer learning (e.g., fine-tune them if needed).

### We now introduce two SuperAnimal members, namely, `superanimal_quadruped` and `superanimal_topviewmouse`.

#### The `superanimal_quadruped` model aims to work across a large range of quadruped animals, from horses, dogs, sheep, and rodents to elephants. The camera perspective is orthogonal to the animal ("side view"), and most of the data includes the animal's face (thus the front and side of the animal). Here are example images of what the model is trained on:

We now introduce two SuperAnimal members, namely, superquadruped and supertopview.
![SA_Q](https://user-images.githubusercontent.com/28102185/209957688-954fb616-7750-4521-bb52-20a51c3a7718.png)

- superquadruped model aim to work across a large range of quadruped animals. Note since quadrupeds are mostly side viewed, it is important to tune the pcutoff to help model remove keypoints are occluded.
#### `superanimal_topviewmouse` aims to work across lab mice in different lab settings from a top-view perspective; this perspective is very popular in many behavioral assays in freely moving mice. Here are example images of what the model is trained on:

- supertopview model aims to work across labmice in different cage settings.
![SA-TVM](https://user-images.githubusercontent.com/28102185/209957260-c0db72e0-4fdf-434c-8579-34bc5f27f907.png)


IMPORTANT: we currently only support single-animal scenarios.


### Our perspective.

Via the DeepLabCut Model Zoo, we aim to provide plug-and-play models that do not need any labeling and will work well on novel videos. If the predictions are not good enough due to the failure modes described below, please give us feedback! We are rapidly improving our models and adaptation methods.


### To use our models in DeepLabCut, please use the following API
### To use our models in DeepLabCut (versions 2.3+), please use the following API

```
pip install deeplabcut[tf,modelzoo]
```

```python
video_path = 'demo-video.mp4'
superanimal_name = 'superquadruped'
superanimal_name = 'superanimal_quadruped'
scale_list = range(200, 600, 50) # image height pixel size range and increment

deeplabcut.video_inference_superanimal([video_path], superanimal_name, scale_list=scale_list)
@@ -52,12 +63,11 @@ deeplabcut.video_inference_superanimal([video_path], superanimal_name, scale_lis
**Coming soon:** The DeepLabCut Project Manager GUI will allow you to use the SuperAnimal Models. You can run the model and do "active learning" to improve performance on your data.
Specifically, we have *new* video adaptation methods to make your tracking extra smooth and robust!

### Potential failure modes for SuperAnimal Models.
### Potential failure modes for SuperAnimal Models and how to fix them.

Spatial domain shift: typical DNN models suffer from the spatial resolution shift between training datasets and test videos. To help find the proper resolution for our model, please try a range of scale_list in the API (details in the API docs). For superquadruped, we empirically observe that if your video is larger than 1500 pixels, it is better to pass `scale_list` in the range within 1000.
Spatial domain shift: typical DNN models suffer from the spatial resolution shift between training datasets and test videos. To help find the proper resolution for our model, please try a range of values for `scale_list` in the API (details in the API docs). For `superanimal_quadruped`, we empirically observe that if your video height is larger than 1500 pixels, it is better to pass a `scale_list` with values below 1000.
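The scale heuristic above can be sketched as a small helper (the function name and exact thresholds are our own illustration, not part of the DeepLabCut API; the returned list is what you would pass as `scale_list`):

```python
def pick_scale_list(frame_height):
    """Return candidate rescale heights (in pixels) to sweep during inference.

    Heuristic sketch: for videos taller than 1500 px, keep candidate scales
    below ~1000 px; otherwise sweep around typical training resolutions.
    """
    if frame_height > 1500:
        return list(range(400, 1000, 100))
    return list(range(200, 600, 50))
```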

Pixel statistics domain shift: The brightness of your video might look very different from our training datasets. This might either result in jittering predictions in the video or
fail modes for lab mice videos (if the brightness of the mice is unusual compared to our training dataset). We are currently developing new models and new methods to counter that.
Pixel statistics domain shift: the brightness of your video might look very different from our training datasets. This might either result in jittery predictions in the video or failure modes for lab mice videos (if the brightness of the mice is unusual compared to our training dataset). You can use our "video adaptation" model (released soon) to counter this.
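As a quick diagnostic for this, you can compare the mean intensity of your frames against a reference value before running inference. A minimal sketch (the reference mean of 128 is an assumed placeholder, not a published training-set statistic):

```python
from statistics import mean

def brightness_gap(pixels, reference_mean=128.0):
    """How far a frame's mean brightness (0-255 grayscale values) deviates
    from an assumed reference; a large gap hints at a pixel-statistics shift."""
    return mean(pixels) - reference_mean
```

A large positive or negative gap suggests normalizing your video's brightness before running inference.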

### To see our first preprint on the work, check out [our paper](https://arxiv.org/abs/2203.07436v1):

5 changes: 2 additions & 3 deletions docs/PROJECT_GUI.md
@@ -8,11 +8,10 @@ As some users may be more comfortable working with an interactive interface, we
## Get Started:

(1) Install DeepLabCut using the simple-install with Anaconda found [here!](how-to-install)*.
Now that you have deeplabcut installed, just go into your env (activate DEEPLABCUT) then run:
Now you have DeepLabCut installed. If you want to update it, either follow the prompt in the GUI, which will ask you to upgrade when a new version is available, or just go into your env (activate DEEPLABCUT) and then run:

` pip install --upgrade --force-reinstall 'deeplabcut[gui,tf]'`
` pip install --upgrade --force-reinstall 'deeplabcut[gui,tf,modelzoo]'`

*Note, currently the latest GUI is a release candidate, so you need to specifically install the `rc` version. This will change once v2.3 is stable.

(2) Open the terminal and run: `python -m deeplabcut`

10 changes: 7 additions & 3 deletions docs/installation.md
@@ -3,9 +3,12 @@

DeepLabCut can be run on Windows, Linux, or MacOS (see also [technical considerations](tech-considerations-during-install) and if you run into issues also check out the [Installation Tips](https://deeplabcut.github.io/DeepLabCut/docs/recipes/installTips.html) page).

We recommend using our supplied CONDA environment.

## PIP:

- Everything you need to run DeepLabCut (i.e., our source code and our dependencies) can be installed with `pip install 'deeplabcut[gui,tf]'` (for GUI support w/tensorflow) or without the gui: `pip install 'deeplabcut[tf]'`.
- Everything you need to build custom models within DeepLabCut (i.e., use our source code and our dependencies) can be installed with `pip install 'deeplabcut[gui,tf]'` (for GUI support w/tensorflow) or without the gui: `pip install 'deeplabcut[tf]'`.
- If you want to use the SuperAnimal models, then please use `pip install 'deeplabcut[gui,tf,modelzoo]'`.

- Please note, there are several modes of installation, and the user should decide to either use a **system-wide** (see [note below](system-wide-considerations-during-install)), **conda environment** based installation (**recommended**), or the supplied [**Docker container**](docker-containers) (recommended for Ubuntu advanced users). One can of course also use other Python distributions than Anaconda, but **Anaconda is the easiest route.**

@@ -98,8 +101,9 @@ In the terminal type:

`conda create -n DLC python=3.8`

**Current version:** The only thing you then need to add to the env is deeplabcut (`pip install 'deeplabcut[tf]'`) or `pip install 'deeplabcut[gui,tf]'`, which adds a PySide/napari-based GUI.

The only thing you then need to add to the env is deeplabcut (`pip install deeplabcut`) or `pip install 'deeplabcut[gui]'` which has wxPython for GUI support. For Windows and MacOS, you just run `pip install -U wxPython<4.1.0` but for linux you might need the specific wheel (https://wxpython.org/pages/downloads/index.html).
**Pre-version 2.3 (Dec 2022):** The only thing you then need to add to the env is deeplabcut (`pip install deeplabcut`) or `pip install 'deeplabcut[gui]'`, which has wxPython for GUI support. For Windows and MacOS, you just run `pip install -U "wxPython<4.1.0"`, but for Linux you might need the specific wheel (https://wxpython.org/pages/downloads/index.html).

We have some tips for Linux users here, as the latest Ubuntu doesn't easily support a 1-click install: https://deeplabcut.github.io/DeepLabCut/docs/recipes/installTips.html

@@ -190,7 +194,7 @@ If you perform the system-wide installation, and the computer has other Python p
- Anaconda/Python3: Anaconda: a free and open source distribution of the Python programming language (download from https://www.anaconda.com/). DeepLabCut is written in Python 3 (https://www.python.org/) and not compatible with Python 2.
- `pip install deeplabcut`
- TensorFlow
- You will need [TensorFlow](https://www.tensorflow.org/) (we used version 1.0 in the paper, later versions also work with the provided code (we tested **TensorFlow versions 1.0 to 1.15, and 2.0 to 2.5**; we recommend TF2.5 now) for Python 3.7, 3.8, or 3.9 with GPU support.
- You will need [TensorFlow](https://www.tensorflow.org/) with GPU support for Python 3.8, 3.9, or 3.10. We used version 1.0 in the Nature Neuroscience paper; later versions also work with the provided code (we tested **TensorFlow versions 1.0 to 1.15, and 2.0 to 2.10**; we recommend TF2.10 now).
- To note, it is possible to run DeepLabCut on your CPU, but it will be VERY slow (see: [Mathis & Warren](https://www.biorxiv.org/content/early/2018/10/30/457242)). However, this is the preferred path if you want to test DeepLabCut on your own computer/data before purchasing a GPU, with the added benefit of a straightforward installation! Otherwise, use our COLAB notebooks for GPU access for testing.
- Docker: We highly recommend advanced users use the supplied [Docker container](docker-containers)
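The tested TensorFlow range above can be checked with a small helper (hypothetical, not part of DeepLabCut; pass it the string from `tf.__version__`):

```python
def tf_version_supported(version):
    """True if a TensorFlow version string falls within the tested ranges
    (1.0-1.15 or 2.0-2.10) listed above."""
    major, minor = (int(x) for x in version.split(".")[:2])
    return (major == 1 and minor <= 15) or (major == 2 and minor <= 10)
```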

5 changes: 2 additions & 3 deletions examples/COLAB/COLAB_DLC_ModelZoo.ipynb
@@ -7,7 +7,7 @@
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/DeepLabCut/DeepLabCut/blob/master/examples/COLAB/COLAB_DLC_ModelZoo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
"<a href=\"https://colab.research.google.com/github/DeepLabCut/DeepLabCut/blob/main/examples/COLAB/COLAB_DLC_ModelZoo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
@@ -55,8 +55,7 @@
"outputs": [],
"source": [
"#click the play icon (this will take a few minutes to install all the dependencies!)\n",
"#Install the latest DeepLabCut version from master:\n",
"!pip install \"deeplabcut[tf,modelzoo] @ git+https://github.com/DeepLabCut/DeepLabCut.git@master\""
"!pip install deeplabcut[tf,modelzoo]"
]
},
{