[Paper] [Video] [Project Page]
(:star2: denotes equal contribution)
The codebase is built on PyTorch 1.1.0 and tested in an Ubuntu 16.04 environment (Python 3.6, CUDA 9.0, cuDNN 7.5).
For installation, follow these instructions:
```shell
conda create -n mlzsl python=3.6
conda activate mlzsl
conda install pytorch=1.1 torchvision=0.3 cudatoolkit=9.0 -c pytorch
pip install matplotlib scikit-image scikit-learn opencv-python yacs joblib natsort h5py tqdm pandas
```
Install the warmup scheduler:
```shell
cd pytorch-gradual-warmup-lr; python setup.py install; cd ..
```
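The gradual warmup scheduler ramps the learning rate linearly from its base value up to a multiple of it over the first few epochs, before the main schedule takes over. A minimal plain-Python sketch of that ramp (the `warmup_lr` helper and its default values are illustrative, not part of this repo):

```python
def warmup_lr(base_lr, epoch, warmup_epochs, multiplier=10.0):
    """Gradual warmup: linearly scale the learning rate from base_lr
    at epoch 0 up to base_lr * multiplier at epoch warmup_epochs."""
    if epoch >= warmup_epochs:
        # Warmup finished; the main scheduler would take over here.
        return base_lr * multiplier
    return base_lr * (1.0 + (multiplier - 1.0) * epoch / warmup_epochs)

# Example ramp: 0.01 -> 0.1 over 5 epochs
schedule = [warmup_lr(0.01, e, 5) for e in range(6)]
```

Warming up avoids large, noisy gradient steps in the first epochs, which is why the install step above is required before training.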
Our approach on the NUS-WIDE dataset.
Our approach on the OpenImages dataset.
- Download the pre-computed features from here and store them in a `features` folder inside the `BiAM/datasets/NUS-WIDE` directory.
- [Optional] You can extract the features on your own by using the original NUS-WIDE dataset from here and running the script below:
```shell
python feature_extraction/extract_nus_wide.py
```
To train and evaluate the multi-label zero-shot learning model on the full NUS-WIDE dataset, please run:
```shell
sh scripts/train_nus.sh
```
To evaluate the multi-label zero-shot model on NUS-WIDE, download the pretrained weights from here, store them in a `NUS-WIDE` folder inside the `pretrained_weights` directory, and run:
```shell
sh scripts/evaluate_nus.sh
```
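The evaluation scripts report standard multi-label metrics such as mAP and F1 over the top-K predicted labels per image. A minimal sketch of a micro-averaged F1-at-top-K computation, assuming per-image score vectors and binary ground-truth vectors (`fmeasure_at_k` is a hypothetical helper for illustration, not the repo's actual evaluation code):

```python
def fmeasure_at_k(scores, labels, k=3):
    """Micro-averaged F1 when each image keeps only its top-k scored labels.

    scores: list of per-image score vectors (one float per label)
    labels: list of per-image binary ground-truth vectors
    """
    tp = fp = fn = 0
    for s, y in zip(scores, labels):
        # Indices of the k highest-scoring labels for this image.
        pred = set(sorted(range(len(s)), key=lambda i: -s[i])[:k])
        true = {i for i, v in enumerate(y) if v}
        tp += len(pred & true)
        fp += len(pred - true)
        fn += len(true - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Here true/false positives are pooled over all images before computing F1 (micro-averaging), one common convention for these benchmarks.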
- Please download the annotations for training, validation, and testing into this folder.
- Store the annotations inside `BiAM/datasets/OpenImages`.
- To extract features for the OpenImages-v4 dataset, run the scripts below to crawl the images and extract their features:
```shell
## Crawl the images from the web
python ./datasets/OpenImages/download_imgs.py  # `data_set` == `train`: download images into `./image_data/train/`
python ./datasets/OpenImages/download_imgs.py  # `data_set` == `validation`: download images into `./image_data/validation/`
python ./datasets/OpenImages/download_imgs.py  # `data_set` == `test`: download images into `./image_data/test/`

## Run feature extraction for all 3 splits
python feature_extraction/extract_openimages_train.py
python feature_extraction/extract_openimages_test.py
python feature_extraction/extract_openimages_val.py
```
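The crawl step above resolves each image ID in the annotation files to a downloadable URL. A minimal sketch of that lookup, assuming an OpenImages-style CSV with `ImageID` and `OriginalURL` columns (`parse_image_urls` and the column names are assumptions; check them against the annotation files you actually downloaded):

```python
import csv
import io

def parse_image_urls(csv_text, id_col="ImageID", url_col="OriginalURL"):
    """Yield (image_id, url) pairs from an OpenImages-style CSV string.

    The column names are assumptions -- adjust them to match the
    downloaded annotation files before crawling.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        yield row[id_col], row[url_col]

# Example: parse one row from an in-memory CSV
sample = "ImageID,OriginalURL\na1,http://example.com/1.jpg\n"
pairs = list(parse_image_urls(sample))
```

The downloader would then fetch each URL into the split's `./image_data/...` folder; skipping images whose URLs have gone dead is advisable, since not all OpenImages links remain live.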
To train and evaluate the multi-label zero-shot learning model on the full OpenImages-v4 dataset, please run:
```shell
sh scripts/train_openimages.sh
sh scripts/evaluate_openimages.sh
```
To evaluate the multi-label zero-shot model on OpenImages, download the pretrained weights from here, store them in an `OPENIMAGES` folder inside the `pretrained_weights` directory, and run:
```shell
sh scripts/evaluate_openimages.sh
```
This repository is released under the Apache 2.0 license as found in the LICENSE file.
If you find this repository useful, please consider giving a star ⭐ and citation 🎊:
```bibtex
@article{narayan2021discriminative,
    title={Discriminative Region-based Multi-Label Zero-Shot Learning},
    author={Narayan, Sanath and Gupta, Akshita and Khan, Salman and Khan, Fahad Shahbaz and Shao, Ling and Shah, Mubarak},
    journal={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    publisher={IEEE},
    year={2021}
}
```
Should you have any questions, please contact 📧 akshita.gupta@inceptioniai.org