Robust Asymmetric Loss for Multi-Label Long-Tailed Learning
Wongi Park, Inhyuk Park, Sungeun Kim, Jongbin Ryu.
CVAMD Workshop at the International Conference on Computer Vision (ICCVW), 2023.
- Conda environment
: Ubuntu 18.04, CUDA 10.1 (or 10.2), PyTorch==1.13.0, Torchvision==0.6.0 (Python 3.8), libauc, torchmetrics==0.8.0.
# Create Environment
conda create -n ral python=3.8
conda activate ral
# Install pytorch, torchvision, cudatoolkit (use cudatoolkit=10.2 if needed)
conda install pytorch==1.13.0 torchvision==0.6.0 cudatoolkit=10.1 -c pytorch
# Install libauc and torchmetrics (LibAUC is distributed via pip)
pip install libauc==1.3.0 torchmetrics==0.8.0
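After installation, a quick sanity check such as the minimal sketch below (the import names are the standard ones for these packages) confirms that the environment loads and that CUDA is visible:

```python
# Sanity-check the installed environment.
import torch
import torchvision
import torchmetrics
import libauc  # noqa: F401  # confirms LibAUC is importable

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("torchmetrics:", torchmetrics.__version__)
print("CUDA available:", torch.cuda.is_available())
```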
- How to get the datasets
- MIMIC-CXR 2.0 : available from PhysioNet
- APTOS 2019 Blindness Detection : available from the Kaggle APTOS 2019 competition
- ISIC 2018 Challenge : available from the ISIC 2018 challenge archive
- Directory structure of our project
- Directory
- run.sh : shell script version (train, infer)
- main.py : main execution (args, settings)
- dataset : augmentation, DataLoader (see the dataset sketch after this list)
- ...
- train.py : training, validation
- predict.py : inference
- ...
- utils : distributed setup, metrics
- ...
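The dataset component above provides the augmentation and DataLoader pipeline. For orientation only, below is a minimal, hypothetical sketch of a multi-label dataset; the CSV file name, the "image" path column, and the label column names are illustrative assumptions, not the repository's actual format.

```python
import pandas as pd
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class MultiLabelDataset(Dataset):
    """Hypothetical multi-label dataset: an image path column plus binary label columns."""

    def __init__(self, csv_path, label_columns, train=True):
        self.df = pd.read_csv(csv_path)        # assumed CSV with an "image" path column
        self.label_columns = label_columns     # one binary column per label
        self.transform = transforms.Compose([
            transforms.RandomResizedCrop(224) if train else transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = Image.open(row["image"]).convert("RGB")
        labels = torch.tensor(row[self.label_columns].to_numpy(dtype="float32"))
        return self.transform(image), labels


# Usage sketch (placeholder file and label names):
# loader = DataLoader(MultiLabelDataset("train.csv", ["Atelectasis", "Edema"]),
#                     batch_size=64, shuffle=True, num_workers=4)
```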
(1) Focal Loss for Dense Object Detection (Paper / Code)
(2) Asymmetric Loss For Multi-Label Classification (Paper / Code)
(3) Simple and Robust Loss Design for Multi-Label Learning with Missing Labels (Paper / Code)
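For context, below is a minimal sketch of the asymmetric loss idea from reference (2); the default hyperparameters (gamma_neg=4, gamma_pos=0, clip=0.05) follow that paper, and the sketch is for illustration only, not this repository's robust asymmetric loss implementation.

```python
import torch
import torch.nn as nn


class AsymmetricLossSketch(nn.Module):
    """Illustrative asymmetric loss (ASL) from reference (2), not the repo's RAL."""

    def __init__(self, gamma_neg=4.0, gamma_pos=0.0, clip=0.05, eps=1e-8):
        super().__init__()
        self.gamma_neg = gamma_neg   # focusing parameter for negative labels
        self.gamma_pos = gamma_pos   # focusing parameter for positive labels
        self.clip = clip             # probability margin that shifts easy negatives
        self.eps = eps

    def forward(self, logits, targets):
        # logits, targets: (batch, num_labels); targets are multi-hot in {0, 1}
        p = torch.sigmoid(logits)
        p_neg = 1.0 - p
        if self.clip > 0:
            # asymmetric probability shifting: very easy negatives contribute no loss
            p_neg = (p_neg + self.clip).clamp(max=1.0)
        loss_pos = targets * torch.log(p.clamp(min=self.eps)) * (1.0 - p) ** self.gamma_pos
        loss_neg = (1.0 - targets) * torch.log(p_neg.clamp(min=self.eps)) * (1.0 - p_neg) ** self.gamma_neg
        return -(loss_pos + loss_neg).sum()
```

The robust asymmetric loss proposed in this work builds on this formulation; see the paper linked in the citation below for the exact form.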
Train
torchrun --nproc_per_node=8 main.py --gpu_ids 0,1,2,3,4,5,6,7 --seed 0 --train 1 --model convnext --batchsize 64 --epochs 30
Inference
torchrun --nproc_per_node=8 main.py --gpu_ids 0,1,2,3,4,5,6,7 --seed 0 --img_size 1024 --infer 1 --model convnext --batchsize 20 --epochs 200 --store_name fold --save_model 1
The results are automatically saved to ./workspace/[model name]/[workspace name].
@inproceedings{park2023robust,
  title     = {Robust Asymmetric Loss for Multi-Label Long-Tailed Learning},
  author    = {Park, Wongi and Park, Inhyuk and Kim, Sungeun and Ryu, Jongbin},
  booktitle = {CVAMD Workshop at the International Conference on Computer Vision (ICCV)},
  url       = {https://arxiv.org/abs/2308.05542},
  year      = {2023},
}