PyTorch implementation of MAP-Net, from the following paper:
Video Dehazing via a Multi-Range Temporal Alignment Network with Physical Prior. CVPR 2023.
Jiaqi Xu, Xiaowei Hu, Lei Zhu, Qi Dou, Jifeng Dai, Yu Qiao, and Pheng-Ann Heng
We propose MAP-Net, a novel video dehazing framework that effectively exploits physical haze priors and aggregates temporal information across multiple ranges.
We construct a large-scale outdoor video dehazing benchmark dataset, HazeWorld, which contains video frames in various real-world scenarios.
To prepare the HazeWorld dataset for experiments, please follow the instructions.
This implementation is based on MMEditing, which is an open-source image and video editing toolbox.
Environment:
Python 3.10.9
PyTorch 1.12.1
torchvision 0.13.1
CUDA 11.3
Below are quick steps for installation.
Step 1. Install PyTorch following the official instructions.
Step 2. Install MMCV with MIM.
pip3 install openmim
mim install mmcv-full
Step 3. Install MAP-Net from source.
git clone https://github.com/jiaqixuac/MAP-Net.git
cd MAP-Net
pip3 install -e .
Please refer to the MMEditing installation guide for more detailed instructions.
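After installation, a quick sanity check like the sketch below can confirm that PyTorch, MMCV, and the editable mmedit package are importable and that CUDA is visible (the printed versions depend on your environment; the numbers in the comments match the environment listed above).

import torch
import torchvision
import mmcv
import mmedit  # the package installed by `pip3 install -e .`

print(torch.__version__)          # e.g. 1.12.1
print(torchvision.__version__)    # e.g. 0.13.1
print(mmcv.__version__)           # mmcv-full installed via MIM
print(mmedit.__version__)
print(torch.cuda.is_available())  # should be True for GPU training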
You can train MAP-Net on HazeWorld with 4 GPUs using the following command:
bash tools/dist_train.sh configs/dehazers/mapnet/mapnet_hazeworld.py 4
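Before launching training, the config can be loaded and inspected with MMCV. The sketch below only illustrates how to read or override settings; the actual field names should be checked in mapnet_hazeworld.py.

from mmcv import Config

# load the MAP-Net config and dump its contents
cfg = Config.fromfile('configs/dehazers/mapnet/mapnet_hazeworld.py')
print(cfg.pretty_text)  # inspect the model, dataset, and schedule settings
# fields can be overridden programmatically before training, e.g. (hypothetical field name):
# cfg.data.samples_per_gpu = 1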
We mainly use PSNR and SSIM to measure model performance. For HazeWorld, we compute dataset-averaged video-level metrics; see the evaluate function.
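As a rough illustration of this video-level averaging (a sketch only, using scikit-image metrics; the official numbers come from the evaluate function in this repository), PSNR and SSIM are first averaged over the frames of each video and then over all videos:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def video_level_metrics(videos):
    # videos: dict mapping video id -> list of (pred, gt) uint8 frame pairs
    psnrs, ssims = [], []
    for frames in videos.values():
        # average the per-frame metrics within one video
        psnrs.append(np.mean([peak_signal_noise_ratio(gt, pred, data_range=255)
                              for pred, gt in frames]))
        ssims.append(np.mean([structural_similarity(gt, pred, channel_axis=-1, data_range=255)
                              for pred, gt in frames]))  # use multichannel=True on older scikit-image
    # then average across all videos in the dataset
    return float(np.mean(psnrs)), float(np.mean(ssims))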
You can test your trained model xxx.pth with 1 GPU using the following command:
bash tools/dist_test.sh configs/dehazers/mapnet/mapnet_hazeworld.py xxx.pth 1
You can find one model checkpoint trained on HazeWorld here.
Demo for real-world hazy videos (the demo video is embedded in the repository README).
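To run the model on your own hazy frames, MMEditing's generic video-restoration inference API may serve as a starting point. The sketch below follows MMEditing's restoration_video_inference convention (directory of frames, window_size, start_idx, filename_tmpl); whether MAP-Net's test-time interface is fully compatible with it should be verified against the test scripts in this repository, and the frame directory and filename template are placeholders.

from mmedit.apis import init_model, restoration_video_inference

config = 'configs/dehazers/mapnet/mapnet_hazeworld.py'
checkpoint = 'xxx.pth'  # path to a trained checkpoint

# build the model and load the weights on the first GPU
model = init_model(config, checkpoint, device='cuda:0')

# run inference over a directory of hazy frames;
# window_size=0 feeds the whole sequence to the model at once
output = restoration_video_inference(
    model, 'demo/hazy_frames', window_size=0,
    start_idx=0, filename_tmpl='{:08d}.png')
print(output.shape)  # expected (1, T, C, H, W) tensor of dehazed frames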
For the REVIDE dataset, the visual results of MAP-Net can be downloaded here.
This repository is built upon the mmedit and mmseg toolboxes, as well as the DAT and STM repositories.
If you find this repository helpful to your research, please consider citing the following:
@inproceedings{xu2023map,
title = {Video Dehazing via a Multi-Range Temporal Alignment Network with Physical Prior},
author = {Jiaqi Xu and Xiaowei Hu and Lei Zhu and Qi Dou and Jifeng Dai and Yu Qiao and Pheng-Ann Heng},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2023},
}
This project is released under the MIT license. Please refer to the acknowledged repositories for their licenses.