A curated list of research papers and datasets related to image and video deblurring.
Name | Description | Link |
---|---|---|
GoPro | The GoPro dataset consists of 3,214 pairs of motion-blurred and sharp images, each with a resolution of 1,280×720 pixels, divided into 2,103 training pairs and 1,111 test pairs. | GoPro |
REDS | The REalistic and Dynamic Scenes (REDS) dataset is generated from 120 fps videos, with blurry frames synthesized by merging consecutive frames, capturing realistic motion blur in dynamic scenes (a sketch of this frame-averaging idea follows the table). | REDS |
DPDD | The Dual-Pixel Defocus Deblurring (DPDD) dataset contains 500 carefully captured scenes, comprising 2,000 images in total: 500 defocus-blurred images with their 1,000 dual-pixel (DP) sub-aperture views and 500 corresponding all-in-focus images, all at a full-frame resolution of 6,720×4,480 pixels. | DPDD |
HIDE | The HIDE (Human-aware Image Deblurring) dataset consists of 8,422 blurred images paired with their corresponding sharp images, focusing on motion deblurring with an emphasis on human subjects, making it ideal for human-centric deblurring tasks. | HIDE |
RealBlur | The RealBlur dataset consists of 4,738 pairs of images from 232 different scenes, captured in both camera raw and JPEG formats. It is divided into two subsets: RealBlur-R with raw images and RealBlur-J with JPEG images, with 3,758 training pairs and 980 test pairs in each subset. | RealBlur |
CelebA | The CelebFaces Attributes dataset (CelebA) is a large-scale face attributes dataset comprising 202,599 images of 10,177 celebrities. Each image is 178×218 pixels and annotated with 40 binary labels for facial attributes like hair color, gender, and age. | CelebA |
Deblur-NeRF | The Deblur-NeRF dataset focuses on two types of blur: camera motion blur and defocus blur. It includes 5 synthesized scenes for each blur type, created using Blender with multi-view cameras to simulate real data capture. For motion blur, images are rendered from interpolated camera poses, while defocus blur images are generated with depth-of-field effects. Additionally, the dataset features 20 real-world scenes—10 for each blur type—captured with a Canon EOS RP, including both manually blurred images and sharp reference images. | Deblur-NeRF |
RSBlur | The RSBlur dataset offers pairs of real and synthetic blurred images, each with corresponding ground truth sharp images. It is designed to evaluate deblurring and blur synthesis methods on real-world blurred images, with training, validation, and test sets comprising 8,878, 1,120, and 3,360 blurred images, respectively. | RSBlur |
ReLoBlur | The ReLoBlur dataset for local motion deblurring consists of 2,405 blurred images at a resolution of 2,152×1,436 pixels, divided into 2,010 training images and 395 test images. Each blurred image is paired with a corresponding ground-truth sharp image, with pairs captured by a synchronized beam-splitting photographing system. A resized version of the dataset (538×359 pixels) is also provided for efficient training and testing. | ReLoBlur |
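Several of the synthetic datasets above (GoPro, REDS) build blurred/sharp pairs by averaging consecutive frames of a high-frame-rate video. The snippet below is a minimal sketch of that idea, not the official generation code: the function name, file paths, and window size are illustrative assumptions, and real pipelines such as REDS additionally interpolate intermediate frames and average in a linearized (inverse-CRF) intensity space before merging.

```python
# Sketch: simulate motion blur by averaging consecutive sharp frames from a
# high-frame-rate video (the basic idea behind GoPro/REDS-style datasets).
import cv2
import numpy as np

def synthesize_blur_pairs(frame_paths, window=7):
    """Average `window` consecutive sharp frames into one blurred frame.

    Returns (blurred, sharp) pairs, using the middle frame of each window
    as the sharp ground truth. `frame_paths` is an ordered list of image
    file paths extracted from a high-fps video (illustrative assumption).
    """
    pairs = []
    for start in range(0, len(frame_paths) - window + 1, window):
        frames = [cv2.imread(p).astype(np.float32)
                  for p in frame_paths[start:start + window]]
        blurred = np.mean(frames, axis=0).astype(np.uint8)  # simulated motion blur
        sharp = frames[window // 2].astype(np.uint8)        # center frame = ground truth
        pairs.append((blurred, sharp))
    return pairs
```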