# Panoptic Segmentation with Partial Annotations for Agricultural Robots
We present a novel approach to leverage partial annotations for panoptic segmentation. These partial annotations contain ground truth information for only a subset of pixels per image and are thus much faster to obtain than dense annotations. We propose a set of losses that exploits measures from vector fields used in physics, i.e., divergence and curl, to effectively supervise predictions without ground truth annotations.
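Both measures can be computed with simple finite differences. The following is a minimal sketch of this idea, not the paper's actual loss implementation; the `(B, 2, H, W)` shape convention and the function name are assumptions for illustration:

```python
import torch

def divergence_and_curl(field: torch.Tensor):
    """Finite-difference divergence and curl of a 2D vector field.

    field: tensor of shape (B, 2, H, W), where channel 0 holds the
    x-component and channel 1 the y-component of the field.
    """
    u, v = field[:, 0], field[:, 1]  # (B, H, W) each
    # central differences along x (last dim) and y (second-to-last dim)
    du_dx = torch.gradient(u, dim=-1)[0]
    du_dy = torch.gradient(u, dim=-2)[0]
    dv_dx = torch.gradient(v, dim=-1)[0]
    dv_dy = torch.gradient(v, dim=-2)[0]
    div = du_dx + dv_dy   # per-pixel divergence
    curl = dv_dx - du_dy  # per-pixel (z-component of) curl
    return div, curl
```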
As an example, the following figure shows a comparison between dense annotations (left) and partial annotations (right). The former contain ground truth annotations for all instances, i.e., things, and each pixel in the background, i.e., stuff. In contrast, the latter contain annotations for only a few instances and some blob-like labels for the background.
- We assume that `wget` is installed on your machine and that your CUDA Runtime version is >= 11.0
- We use `conda` to set up a virtual environment - in case `conda` is not installed on your machine, follow the official instructions (we recommend miniconda)
The following script will set up a virtual environment denoted as `pspa`:

```bash
./setup.sh
```
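Afterwards, you can optionally verify that PyTorch inside the environment sees your GPU. This is just a hypothetical sanity check, not part of the repository:

```python
import torch

# Sanity check: PyTorch should see a GPU, and the CUDA runtime it was
# built against should match the >= 11.0 requirement stated above.
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime version:", torch.version.cuda)
```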
- We use the PhenoBench dataset
- However, as mentioned in the paper, we use a subset of images during training to ensure that each unique instance appears only once
- Consequently, we provide in `phenobench_auxiliary/split.yaml` the filenames of images used during training (a short example of reading this file follows the download commands below)
- To download the dataset and organize it in the expected format, run the following script (requires approx. 15GB):
```bash
cd ./phenobench_auxiliary
./get_phenobench.sh <full/path/to/download/PhenoBench>
cd ..
```
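For reference, here is a minimal sketch of how the split file could be consumed. We assume that `split.yaml` maps a `train` key to a list of filenames; check the file itself for the actual schema:

```python
import yaml  # PyYAML

# Load the training split shipped with this repository. The "train" key
# and list-of-filenames layout are assumptions for illustration.
with open("phenobench_auxiliary/split.yaml") as f:
    split = yaml.safe_load(f)

train_filenames = split["train"]
print(f"{len(train_filenames)} training images in the subset")
```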
We provide pretrained models using ERFNet as the network architecture, trained with different amounts of partial annotations:
- Model trained with all annotations
- Model trained with 50% of all annotations
- Model trained with 25% of all annotations
- Model trained with 10% of all annotations
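Since the export paths below include `lightning_logs`, these are presumably standard PyTorch Lightning checkpoints. As a hypothetical example, you can peek inside one like this:

```python
import torch

# Load a released checkpoint on the CPU and list its contents. The
# 'state_dict' key is the usual PyTorch Lightning layout; this is an
# assumption, not documented repository behavior.
ckpt = torch.load("model.ckpt", map_location="cpu")
print(list(ckpt.keys()))
print(len(ckpt["state_dict"]), "weight tensors")
```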
Before you start the inference, you need to specify the path to the dataset in the corresponding configuration file (`config/config-phenobench.yaml`), e.g.:
```yaml
data:
  path: <full/path/to/download/PhenoBench/test>
```
Please change only `<full/path/to/download/PhenoBench>` according to your previously specified path to download PhenoBench, but keep `test` as the directory at the very end.
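A hypothetical sanity check for this setting, assuming only the config schema shown above:

```python
import os
import yaml

# Verify the dataset path in the inference config: it must exist and,
# for inference, end in the 'test' directory as described above.
with open("config/config-phenobench.yaml") as f:
    cfg = yaml.safe_load(f)

data_path = cfg["data"]["path"]
assert os.path.basename(os.path.normpath(data_path)) == "test", \
    "the inference path must end in the 'test' directory"
assert os.path.isdir(data_path), f"dataset path not found: {data_path}"
```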
Next, you can run the model in inference mode:

```bash
conda activate pspa
python predict.py --config ./config/config-phenobench.yaml --export <path/to/export_dir> --ckpt <path/to/checkpoint/model.ckpt>
```
In the specified `<path/to/export_dir>` you will find the predicted semantics and plant instances.
In case you want to apply our proposed fusing procedure, i.e., assign a unique semantic class to each instance, you need to run the following afterwards (a sketch of the underlying idea follows the path details below):

```bash
conda activate pspa
python auxiliary/merge_plants_sem.py --semantics <path/to/previously/predicted/semantics> --plants <path/to/previously/predicted/plant/instances> --export <path/to/merged_export_dir>
```
To be more specific about the paths:
- `<path/to/previously/predicted/semantics>` should be `<path/to/export_dir/lightning_logs/version_*/predict/export/semantics/000>`
- `<path/to/previously/predicted/plant/instances>` should be `<path/to/export_dir/lightning_logs/version_*/predict/export/instances/000>`
- Both paths contain PNG files with the previous semantic and instance predictions
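As a rough sketch of the fusing idea (not the actual implementation in `auxiliary/merge_plants_sem.py`), one way to assign a unique semantic class per instance is a majority vote over the semantic labels inside each instance mask:

```python
import numpy as np

def fuse_semantics_with_instances(semantics: np.ndarray,
                                  instances: np.ndarray) -> np.ndarray:
    """Assign one semantic class to every instance via majority vote.

    Hypothetical illustration only; instance id 0 is assumed to mean
    'no instance' and both inputs are 2D label maps of equal shape.
    """
    fused = semantics.copy()
    for inst_id in np.unique(instances):
        if inst_id == 0:
            continue
        mask = instances == inst_id
        labels, counts = np.unique(semantics[mask], return_counts=True)
        fused[mask] = labels[np.argmax(counts)]
    return fused
```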
Since PhenoBench provides a hidden test set, you need to register for the corresponding CodaLab challenge and upload your results to run the evaluation.
Similarly, you can train a new model:

```bash
conda activate pspa
python train.py --config ./config/config-phenobench.yaml --export <path/to/export_dir>
```
- Before you start the training, you need to specify the path to the dataset in the corresponding configuration file, e.g.:

```yaml
data:
  path: <full/path/to/download/PhenoBench>
```

where `<full/path/to/download/PhenoBench>` matches the path you specified to download the PhenoBench dataset (i.e., without the `<.../test>` at the very end).
In case you face `CUDA out of memory` errors, you may reduce the batch size during training via the configuration file (`config/config-phenobench.yaml`), e.g.:

```yaml
train:
  batch_size: 2
```
This software is released under a Creative Commons license which allows for personal and research use only. For a commercial license, please contact the authors. You can view a license summary here.