Home
OCTRON is a pipeline built on napari that enables segmentation and tracking of animals in behavioral setups. It helps you to create rich annotation data that can be used to train your own machine learning segmentation models. This enables dense quantification of animal behavior across a wide range of species and video recording conditions.
OCTRON is built on napari, Segment Anything (SAM2), YOLO, BoxMOT and 💜.
Main repository: OCTRON-GUI
The main steps in a typical OCTRON workflow are:

1. Loading video data from behavioral experiments
2. Annotating frames to create training data for segmentation
3. Training machine learning models for segmentation and tracking
4. Applying the trained models to new data for automated tracking
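OCTRON is operated through its napari GUI. As a rough orientation only, the sketch below shows how a napari plugin dock widget can be opened programmatically; the plugin name `"octron"` and the example video path are assumptions rather than documented values, so check the OCTRON-GUI repository for the actual installation and launch instructions.

```python
# Minimal sketch (assumptions: OCTRON is installed in the same environment
# as napari and registered under the plugin name "octron").
import napari

viewer = napari.Viewer()

# Open the plugin's dock widget; from here the GUI drives the
# annotate -> train -> predict steps described above.
viewer.window.add_plugin_dock_widget("octron")

# Load a behavior video as a layer (or drag & drop it into the viewer):
# viewer.open("path/to/behavior_video.mp4")  # hypothetical path

napari.run()  # start the Qt event loop
```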
Support
If you find this project helpful, consider supporting us:
- GitHub Sponsors
- Buy Me a Coffee
How to cite
Using OCTRON for your project? Please cite this paper to spread the word!
Jacobsen, R. I., van Eekelen, N. M., Humphrey, L., Renton, J., van Rooij, E., Rivera, J., Arenas, O. M., Lumpkin, E. A., Maccuro, S., Buresch, K. C., Seuntjens, E., & Obenhaus, H. A. (2025). OCTRON - a general purpose segmentation and tracking pipeline for behavioral experiments. bioRxiv, 2025.12.20.695663. https://doi.org/10.64898/2025.12.20.695663
@ARTICLE{Jacobsen2025-qq,
title = "{OCTRON} - a general purpose segmentation and tracking pipeline
for behavioral experiments",
author = "Jacobsen, Ragnhild Irene and van Eekelen, Nadia M and Humphrey,
Laurel and Renton, Johnston and van Rooij, Elke and Rivera, Jason
and Arenas, Oscar M and Lumpkin, Ellen A and Maccuro, Sofia and
Buresch, Kendra C and Seuntjens, Eve and Obenhaus, Horst A",
journal = "bioRxiv",
pages = "2025.12.20.695663",
abstract = "OCTRON is a pipeline for markerless segmentation and tracking of
animals in behavioral experiments. By combining Segment Anything
Models (SAM 2) for rapid annotation, YOLO11 models for training,
and state-of-the-art multi-object trackers, OCTRON enables
unsupervised segmentation and tracking of multiple animals with
complex, deformable body plans. We validate its versatility across
species - from transparent marine annelids to camouflaging
cuttlefish - demonstrating robust, general-purpose applicability
for behavioral analysis.",
month = dec,
year = 2025,
language = "en"
}
Attributions
- Interface button and icon images were created by user Arkinasi from Noun Project (CC BY 3.0)
- Logo font: datalegreya
- OCTRON mp4 video reading is based on napari-pyav
- OCTRON training is accomplished via ultralytics:
- OCTRON annotation data is generated via Segment Anything:
@article{kirillov2023segany,
  title   = {Segment Anything},
  author  = {Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and
             Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and
             Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal = {arXiv:2304.02643},
  year    = {2023}
}
- OCTRON multi-object tracking is achieved via BoxMOT trackers: