Fundus quality prediction
A quality prediction model for fundus images (gradeable vs. ungradeable) based on an ensemble of 10 models (ResNets and EfficientNets) trained on DeepDRiD and DrimDB data. Can be used for prediction as-is or retrained.
Fundus fovea and optic disc localization
A model to predict the center coordinates of the fovea and the optic disc in fundus images based on a multi-task EfficientNet trained on the ADAM, REFUGE and IDRID datasets. Can be used for prediction as-is or retrained.
Example predictions from the external dataset "DeepDRiD".
Fundus registration
Align a fundus photograph to another fundus photograph from the same eye using SuperRetina (Liu et al., 2022). Image registration is also known as image alignment or image matching.
Fundus vessel segmentation
Segment the blood vessels in a fundus image using an ensemble of FR-U-Nets trained on the FIVES dataset (Köhler et al., 2024).
Fundus circle crop
Quickly crop fundus images to a circle and center them (Fu et al., 2019).
Fundus utilities
A collection of additional utilities that can come in handy when working with fundus images (a brief usage sketch follows the list below).
- ImageTorchUtils: Image manipulation based on Pytorch tensors.
- Balancing: A script to balance a torch dataset by both oversampling the minority class and undersampling the majority class, adapted from imbalanced-dataset-sampler.
- Fundus transforms: A collection of torchvision data augmentation transforms to apply to fundus images adapted from pytorch-classification.
- Get pixel mean std: A script to calculate the mean and standard deviation of the pixel values of a dataset by channel.
- Get efficientnet resnet: Getter for torchvision models with efficientnet and resnet architectures initialized with ImageNet weights.
- Lr scheduler: Get a pytorch learning rate scheduler (plus a warmup scheduler) for a given optimizer: OneCycleLR, CosineAnnealingLR, CosineAnnealingWarmRestarts.
- Multilevel 3-way split: Split a pandas dataframe into train, validation and test splits with the options to split by group (i.e. keep groups together) and stratify by label. Wrapper for multi_level_split.
- Seed everything: Set seed for reproducibility in python, numpy and torch.
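The utilities can also be used on their own. The snippet below is only a minimal sketch: the import path fundus_utilities and the function names seed_everything and get_pixel_mean_std are assumptions inferred from the descriptions above, not the documented API; see the fundus_utilities subdirectory for the actual interface.
# Minimal sketch; the import path and function names below are assumptions, not the documented API.
from torchvision.datasets import FakeData
from torchvision.transforms import ToTensor
from fundus_utilities import seed_everything, get_pixel_mean_std  # assumed names

seed_everything(42)  # fix the python, numpy and torch seeds for reproducibility
dataset = FakeData(size=16, image_size=(3, 512, 512), transform=ToTensor())  # stand-in torch dataset
mean, std = get_pixel_mean_std(dataset)  # assumed to return per-channel pixel mean and std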
The following code summarises the usage of the toolbox. See usage.ipynb for a tutorial notebook and the subdirectories for more detailed usage examples and information on the respective packages.
# Get sample images. All methods work on path(s) to image(s) or on image(s) as numpy arrays, tensors or PIL images.
fundus1, fundus2 = "path/to/fundus1.jpg", "path/to/fundus2.jpg"
from fundus_circle_crop import circle_crop
fundus1_cropped = circle_crop(fundus1, size=(512,512)) # > np.ndarray (512, 512, 3) uint8
from fundus_fovea_od_localization import load_fovea_od_model, plot_coordinates
model, _ = load_fovea_od_model(device="cuda:0")
coordinates = model.predict([fundus1, fundus2]) # > List[np.ndarray[fovea_x,fovea_y,od_x,od_y], ...]
plot_coordinates([fundus1, fundus2], coordinates)
from fundus_quality_prediction import load_quality_ensemble, ensemble_predict_quality, plot_quality
ensemble = load_quality_ensemble(device="cuda:0")
confs, labels = ensemble_predict_quality(ensemble, [fundus1, fundus2], threshold=0.5) # > np.ndarray[conf1, conf2], np.ndarray[label1, label2]
for img, conf, label in zip([fundus1, fundus2], confs, labels):
    plot_quality(img, conf, label, threshold=0.5)
from fundus_registration import load_registration_model, register, DEFAULT_CONFIG
model, matcher = load_registration_model(DEFAULT_CONFIG)
moving_image_aligned = register(
    fundus1,
    fundus2,
    show=True,
    show_mapping=False,
    config=DEFAULT_CONFIG,
    model=model,
    matcher=matcher
) # > np.ndarray (h_in, w_in, 3) uint8
from fundus_vessel_segmentation import load_segmentation_ensemble, ensemble_predict_segmentation, plot_masks
ensemble = load_segmentation_ensemble(device="cuda:0")
vessel_masks = ensemble_predict_segmentation(ensemble, [fundus1, fundus2], threshold=0.5, size=(512, 512)) # > np.ndarray[np.ndarray[h_in, w_in], ...] float64
plot_masks([fundus1, fundus2], vessel_masks)
Use Python 3.9.5, as fundus_vessel_segmentation requires a Python version below 3.10.
conda create --name fundus_image_toolbox python=3.9.5 pip
conda activate fundus_image_toolbox
pip install git+https://github.com/berenslab/fundus_image_toolbox
-or-
Replace <subpackage> in the following command with the subfolder name of the desired package (i.e., fundus_quality_prediction, fundus_fovea_od_localization, fundus_registration, fundus_vessel_segmentation, fundus_circle_crop, or fundus_utilities) and run:
pip install 'git+https://github.com/berenslab/fundus_image_toolbox#egg=<subpackage>&subdirectory=<subpackage>'
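For example, to install only the vessel segmentation subpackage, the substituted command reads:
pip install 'git+https://github.com/berenslab/fundus_image_toolbox#egg=fundus_vessel_segmentation&subdirectory=fundus_vessel_segmentation'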
If you use this toolbox in your research, please consider citing it:
TODO: Have a doi to cite
If you use parts of the toolbox that provide an interface to external methods, please consider citing the respective papers:
- Fundus registration: Liu et al., 2022
- Fundus vessel segmentation: Köhler et al., 2024
- Fundus circle crop: Fu et al., 2019
The toolbox is licensed under the MIT License. See the license file for more information.