DLC2Action is an action segmentation package that makes running and tracking experiments easy.
You can simply install DLC2Action by typing:

```bash
pip install dlc2action
```
Or you can install the latest DLC2Action for development by running this in your terminal:

```bash
git clone https://github.com/amathislab/DLC2Action
cd DLC2Action
conda create --name DLC2Action python=3.13
conda activate DLC2Action
python -m pip install .
```
The functionality of DLC2Action includes:
- a GUI for annotation, visualization and active learning
- compiling and updating project-specific configuration files,
- filling in the configuration dictionaries automatically whenever possible,
- saving training parameters and results,
- running predictions and hyperparameter searches,
- creating active learning files,
- loading hyperparameter search results in experiments and dumping them into configuration files,
- comparing new experiment parameters with the project history and loading pre-computed features (to save time) and previously created splits (to enforce consistency) when there is a match,
- filtering and displaying training, prediction and hyperparameter search history,
- plotting training curve comparisons,

and more.
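To picture how the automatic filling of configuration dictionaries can work, here is a minimal illustrative sketch of recursively overlaying user-supplied parameters on project defaults. This is not DLC2Action's actual implementation; the function and parameter names below are hypothetical:

```python
from copy import deepcopy


def fill_defaults(user: dict, defaults: dict) -> dict:
    """Recursively overlay user-supplied parameters on top of defaults.

    Keys missing from `user` are filled in from `defaults`; nested
    dictionaries are merged rather than replaced wholesale.
    """
    merged = deepcopy(defaults)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = fill_defaults(value, merged[key])
        else:
            merged[key] = value
    return merged


# Hypothetical example: the user only overrides the learning rate;
# everything else is filled in from the project defaults.
defaults = {"training": {"lr": 1e-3, "num_epochs": 100}, "model": "ms_tcn"}
user = {"training": {"lr": 1e-4}}
config = fill_defaults(user, defaults)
print(config)  # {'training': {'lr': 0.0001, 'num_epochs': 100}, 'model': 'ms_tcn'}
```

Merging nested dictionaries (instead of replacing them) is what lets a user override a single training parameter without restating the whole configuration.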
You can start a new project, run an experiment, visualize the results, and use the trained model to make predictions in a few lines of code:

```python
from dlc2action.project import Project

project = Project("project", data_type="dlc_track", annotation_type="csv")
project.update_parameters(...)
project.run_default_hyperparameter_search("search")
project.run_episode("episode", load_search="search")
project.evaluate(["episode"])
project.run_prediction("prediction", episode_names=["episode"], data_path="/path/to/new/data")
```
We provide standardized benchmarks for action segmentation to help you evaluate DLC2Action's performance. Check out the benchmarks section for detailed results and comparisons.
Check out the examples or read the documentation for a taste of what else you can do.
DLC2Action is developed by members of the A. Mathis Group at EPFL. We are grateful to many people for feedback, alpha-testing, suggestions and contributions, in particular to Lucas Stoffl, Margaret Lane, Marouane Jaakik, Steffen Schneider and Mackenzie Mathis.
We are also grateful to the creators of the benchmarks and models that were adapted in DLC2Action: the MS-TCN, C2F-TCN, ASFormer, EDTCN and MotionBERT models, and the CalMS21, SIMBA CRIM13, SIMBA-RAT, OFT, EPM, SHOT7m2, hBABEL and Atari-HEAD datasets. Please refer to the benchmarks section for detailed references and consider citing these works when using them.
Note that the software is provided "as is", without warranty of any kind, express or implied. If you use the code or data, please cite us!
Stay tuned for our first publication. In the meantime, any feedback on this beta release is welcome. Thanks for using DLC2Action, and please reach out if you want to collaborate!