Computing instance-wise segmentation quality metrics for 2D and 3D semantic and instance segmentation maps.
The package provides three core modules:
- Instance Approximator: instance approximation algorithms to extract instances from semantic segmentation maps/model outputs.
- Instance Matcher: matches predicted instances with reference instances.
- Instance Evaluator: computes segmentation and detection quality metrics for pairs of predicted and reference segmentation maps.
With a Python 3.10+ environment, you can install panoptica from PyPI:

```sh
pip install panoptica
```
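To confirm the installation succeeded, a quick check from Python could look like this; it only relies on the standard library's importlib.metadata and makes no assumptions about panoptica's own API:

```python
from importlib.metadata import version

# Confirm the package is installed and print the version that was resolved.
print(version("panoptica"))
```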
> [!NOTE]
> Panoptica supports a wide range of metrics.
> An overview of the supported metrics and their formulas can be found here: panoptica/metrics.md
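To see which metric identifiers are available in your installed version, a quick sketch like the following can help; it assumes that Metric behaves like a standard Python Enum and is therefore iterable:

```python
from panoptica.metrics import Metric

# List the metric identifiers shipped with the installed panoptica version
# (assumes Metric is an Enum-like, iterable class).
for metric in Metric:
    print(metric.name)
```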
A minimal example of using panoptica could look like this (here with matched instances as input):
```python
from panoptica import InputType, Panoptica_Evaluator
from panoptica.metrics import Metric
from auxiliary.nifti.io import read_nifti  # feel free to use any other way to read nifti files

ref_masks = read_nifti("reference.nii.gz")
pred_masks = read_nifti("prediction.nii.gz")

evaluator = Panoptica_Evaluator(
    expected_input=InputType.MATCHED_INSTANCE,
    decision_metric=Metric.IOU,
    decision_threshold=0.5,
)

result, intermediate_steps_data = evaluator.evaluate(pred_masks, ref_masks)["ungrouped"]
```
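The returned result object bundles all computed metrics, and printing it gives a human-readable summary. Accessing individual metrics as attributes is shown only as a commented assumption, since attribute names may differ between versions:

```python
# Print a human-readable summary of all computed metrics.
print(result)

# Individual metrics may also be exposed as attributes of the result object,
# e.g. something like result.pq for panoptic quality (attribute name assumed).
# print(result.pq)
```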
> [!TIP]
> We provide Jupyter notebook tutorials showcasing various use cases.
> You can explore them here: BrainLesion/tutorials/panoptica
Although instance-wise evaluation is highly relevant and desirable for many biomedical segmentation problems, these are often still addressed as semantic segmentation problems due to the lack of appropriate instance labels.
This use case leverages all three modules of panoptica: instance approximation, matching, and evaluation, as sketched below.
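A minimal sketch of this pipeline is shown below. It uses the ConnectedComponentsInstanceApproximator and NaiveThresholdMatching components exported by panoptica, but the exact constructor arguments are assumptions and may differ in your version; ref_masks and pred_masks are loaded as in the minimal example above.

```python
from panoptica import (
    ConnectedComponentsInstanceApproximator,
    InputType,
    NaiveThresholdMatching,
    Panoptica_Evaluator,
)

# Semantic masks go in; instances are first approximated via connected
# components and then matched to the reference before evaluation.
evaluator = Panoptica_Evaluator(
    expected_input=InputType.SEMANTIC,
    instance_approximator=ConnectedComponentsInstanceApproximator(),
    instance_matcher=NaiveThresholdMatching(),
)

result, intermediate_steps_data = evaluator.evaluate(pred_masks, ref_masks)["ungrouped"]
```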
A common issue is that instance segmentation outputs have good outlines but mismatched instance labels. In this case, the matcher module can be used to match the predicted instances to the reference, and the evaluator then reports the metrics, as sketched below.
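A sketch for this case could look as follows; only the matcher is needed, since instances already exist but are not yet matched. As above, the parameter names are assumptions based on the panoptica tutorials.

```python
from panoptica import InputType, NaiveThresholdMatching, Panoptica_Evaluator

# Instances already exist in both maps, so only matching and evaluation are needed.
evaluator = Panoptica_Evaluator(
    expected_input=InputType.UNMATCHED_INSTANCE,
    instance_matcher=NaiveThresholdMatching(),
)

result, intermediate_steps_data = evaluator.evaluate(pred_masks, ref_masks)["ungrouped"]
```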
If your predicted instances already match the reference instances, you can compute metrics directly with the evaluator module, as shown in the minimal example above.
You can construct Panoptica_Evaluator objects (among others), save their arguments to a configuration file, and reuse these project-specific configurations later. Configurations are serialized with ruamel.yaml in a human-readable format.
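A sketch of this workflow might look like the following; the method names save_to_config and load_from_config are assumptions modeled on the panoptica tutorials, so please check the documentation for the exact API:

```python
from panoptica import Panoptica_Evaluator

# Persist the evaluator's arguments to a YAML file (method name assumed;
# serialization uses ruamel.yaml under the hood).
evaluator.save_to_config("panoptica_evaluator_config.yaml")

# Later, recreate an identically configured evaluator from that file
# (method name assumed).
evaluator = Panoptica_Evaluator.load_from_config("panoptica_evaluator_config.yaml")
```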
We provide Read the Docs documentation of our codebase here.
> [!IMPORTANT]
> If you use panoptica in your research, please cite it to support the development!
Kofler, F., Möller, H., Buchner, J. A., de la Rosa, E., Ezhov, I., Rosier, M., Mekki, I., Shit, S., Negwer, M., Al-Maskari, R., Ertürk, A., Vinayahalingam, S., Isensee, F., Pati, S., Rueckert, D., Kirschke, J. S., Ehrlich, S. K., Reinke, A., Menze, B., Wiestler, B., & Piraud, M. (2023). Panoptica -- instance-wise evaluation of 3D semantic and instance segmentation maps. arXiv preprint arXiv:2312.02608.
```bibtex
@misc{kofler2023panoptica,
      title={Panoptica -- instance-wise evaluation of 3D semantic and instance segmentation maps},
      author={Florian Kofler and Hendrik Möller and Josef A. Buchner and Ezequiel de la Rosa and Ivan Ezhov and Marcel Rosier and Isra Mekki and Suprosanna Shit and Moritz Negwer and Rami Al-Maskari and Ali Ertürk and Shankeeth Vinayahalingam and Fabian Isensee and Sarthak Pati and Daniel Rueckert and Jan S. Kirschke and Stefan K. Ehrlich and Annika Reinke and Bjoern Menze and Benedikt Wiestler and Marie Piraud},
      year={2023},
      eprint={2312.02608},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
We welcome all kinds of contributions from the community!
Please open a new issue here.