
DeepLabCut Benchmark

benchmark.deeplabcut.org

Welcome to the DeepLabCut benchmark! This repository hosts all submitted results, which are available at benchmark.deeplabcut.org. If you are interested in submitting to the benchmark, detailed instructions are available at benchmark.deeplabcut.org/submission.
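
For orientation, a submission is essentially a small Python file that registers a model's results with the benchmark package. The sketch below is an illustration only: the base class, decorator, and method names are assumptions, and the authoritative format is the one documented at benchmark.deeplabcut.org/submission.

import benchmark
from benchmark.benchmarks import TriMouseBenchmark  # assumed base class

@benchmark.register  # assumed registration decorator
class MyMethodSubmission(TriMouseBenchmark):
    code = "https://github.com/example/my-method"  # link to your method's code

    def names(self):
        # One name per model variant covered by this submission.
        yield "my-method-v1"

    def get_predictions(self, name):
        # Return the predicted keypoints for each test image of the named model.
        return {"path/to/image.png": ...}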

Quickstart for developers

The requirements needed to build the benchmark page can be installed via

$ pip install -r requirements.txt

The (non-public) ground truth data needs to be present in data/. Check that this is the case by running

$ find data -type f
data/CollectedData_Mackenzie.h5
data/CollectedData_Daniel.h5
data/CollectedData_Valentina.h5
data/CollectedData_Mostafizur.h5
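
These files are standard DeepLabCut label files, i.e. pandas DataFrames serialized to HDF5. If you have access to them, a quick sanity check looks like the following (assuming pandas and its HDF5 backend, pytables, are available):

import pandas as pd

# Load one ground-truth file and inspect its structure; DeepLabCut label
# files are pandas DataFrames, typically with a column MultiIndex of
# (scorer, bodypart, coordinate).
df = pd.read_hdf("data/CollectedData_Daniel.h5")
print(df.shape)
print(df.columns[:5])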

To use all functionality of this package and to re-run evaluations, a DeepLabCut installation is additionally required.
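
If you are unsure whether DeepLabCut is visible to the current environment, a plain import check works (most releases expose a __version__ attribute):

$ python -c "import deeplabcut; print(deeplabcut.__version__)"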

Check that the package works as expected by running

$ python -m pytest tests

which should finish without errors or warnings.
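
While iterating on a change, pytest's standard -k selector narrows the run to matching tests; the pattern below is just a placeholder:

$ python -m pytest tests -k some_pattern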

To re-evaluate all available models, run

$ python -m benchmark

or, if you want to run in debugging mode,

$ python -m benchmark --nocache --onerror raise

from the repository root. (Going by the flag names, --nocache presumably forces a fresh evaluation instead of reusing cached results, and --onerror raise aborts on the first error instead of skipping the failing model.)

To manually build the documentation, run

$ make deploy