Home
OCTRON is a pipeline built on napari that enables segmentation, detection, and tracking of animals in behavioral setups. It helps you to create rich annotation data that can be used to train your own machine learning segmentation and detection models. This enables dense quantification of animal behavior across a wide range of species and video recording conditions.
OCTRON is built on napari, Segment Anything (SAM2/SAM3), YOLO, BoxMOT and 💜.
Main repository: OCTRON-GUI
Documentation updates
We are currently updating the documentation for the latest OCTRON release (v0.2). If you cannot find what you need on the new features, this thread may be helpful.
The main steps implemented in OCTRON typically include:
- Loading video data from behavioral experiments
- Annotating frames to create training data for segmentation
- Training machine learning models for segmentation and tracking
- Applying the trained models to new data for automated tracking
Support
If you find this project helpful, consider supporting us:
- GitHub Sponsors
- Buy Me a Coffee
How to cite
Using OCTRON for your project? Please cite this paper to spread the word!
Jacobsen, R. I., van Eekelen, N. M., Humphrey, L., Renton, J., van Rooij, E., Rivera, J., Arenas, O. M., Lumpkin, E. A., Maccuro, S., Buresch, K. C., Seuntjens, E., & Obenhaus, H. A. (2025). OCTRON - a general purpose segmentation and tracking pipeline for behavioral experiments. bioRxiv, 2025.12.20.695663. https://doi.org/10.64898/2025.12.20.695663
@ARTICLE{Jacobsen2025-qq,
title = "{OCTRON} - a general purpose segmentation and tracking pipeline
for behavioral experiments",
author = "Jacobsen, Ragnhild Irene and van Eekelen, Nadia M and Humphrey,
Laurel and Renton, Johnston and van Rooij, Elke and Rivera, Jason
and Arenas, Oscar M and Lumpkin, Ellen A and Maccuro, Sofia and
Buresch, Kendra C and Seuntjens, Eve and Obenhaus, Horst A",
journal = "bioRxiv",
pages = "2025.12.20.695663",
abstract = "OCTRON is a pipeline for markerless segmentation and tracking of
animals in behavioral experiments. By combining Segment Anything
Models (SAM 2) for rapid annotation, YOLO11 models for training,
and state-of-the-art multi-object trackers, OCTRON enables
unsupervised segmentation and tracking of multiple animals with
complex, deformable body plans. We validate its versatility across
species - from transparent marine annelids to camouflaging
cuttlefish - demonstrating robust, general-purpose applicability
for behavioral analysis.",
month = dec,
year = 2025,
language = "en"
}
Attributions
- Interface button and icon images were created by user Arkinasi from the Noun Project (CC BY 3.0)
- Logo font: datalegreya
- OCTRON mp4 video reading is based on napari-pyav
- OCTRON training is accomplished via ultralytics (a minimal training sketch follows this list):
- OCTRON annotation data is generated via Segment Anything:
@article{kirillov2023segany,
  title   = {Segment Anything},
  author  = {Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and
             Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and
             Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal = {arXiv:2304.02643},
  year    = {2023}
}
@inproceedings{sam_hq,
  title     = {Segment Anything in High Quality},
  author    = {Ke, Lei and Ye, Mingqiao and Danelljan, Martin and Liu, Yifan and
               Tai, Yu-Wing and Tang, Chi-Keung and Yu, Fisher},
  booktitle = {NeurIPS},
  year      = {2023}
}
@misc{carion2025sam3segmentconcepts,
  title         = {SAM 3: Segment Anything with Concepts},
  author        = {Nicolas Carion and Laura Gustafson and Yuan-Ting Hu and Shoubhik Debnath and
                   Ronghang Hu and Didac Suris and Chaitanya Ryali and Kalyan Vasudev Alwala and
                   Haitham Khedr and Andrew Huang and Jie Lei and Tengyu Ma and Baishan Guo and
                   Arpit Kalla and Markus Marks and Joseph Greer and Meng Wang and Peize Sun and
                   Roman Rädle and Triantafyllos Afouras and Effrosyni Mavroudi and Katherine Xu and
                   Tsung-Han Wu and Yu Zhou and Liliane Momeni and Rishi Hazra and Shuangrui Ding and
                   Sagar Vaze and Francois Porcher and Feng Li and Siyuan Li and Aishwarya Kamath and
                   Ho Kei Cheng and Piotr Dollár and Nikhila Ravi and Kate Saenko and
                   Pengchuan Zhang and Christoph Feichtenhofer},
  year          = {2025},
  eprint        = {2511.16719},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2511.16719}
}
- OCTRON multi-object tracking is achieved via BoxMOT trackers: