[arXiv](https://arxiv.org/abs/2307.00407)
Examples of the thick, medium, and thin masks used for validation.
Model Architecture
Change the path to the input directory containing the training images, set the output image size as required, and modify the model parameters and training configuration as needed. You can use either medium or thick masks.
python train.py -batch <batch-size> -mask <mask-size>
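For example, to train with a batch size of 8 on thick masks (the flag values here are illustrative; check the argument parser in train.py for the accepted options):

python train.py -batch 8 -mask thick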
Provide the path to the saved model .pth file, the folder containing the validation ground-truth images and masks, the folder in which to save the model outputs, and the folder in which to save the masked images.
python infer.py
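infer.py does not take command-line arguments, so these paths are edited inside the script. The variable names in the sketch below are illustrative, not the actual names used in infer.py; it only shows how the four paths map onto the folder structure described below:

```python
# Illustrative sketch only -- the real variable names live in infer.py.
model_path = "model.pth"                           # trained weights saved by train.py
val_dir    = "celebhq/val_256/random_thick_256/"   # ground truth and *_mask*.png pairs
output_dir = "output/output/"                      # inpainted results are written here
masked_dir = "output/masked/"                      # masked inputs are written here
```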
Pass the path to the ground-truth images, the folder of model outputs, and the destination of the metrics CSV as command-line arguments:
python evaluate.py <path/to/Ground/truth/images> <path/to/model/output> <path/to/save/metrics.csv>
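For example, using the folder layout shown below to evaluate outputs produced with thick masks:

python evaluate.py celebhq/val_256/random_thick_256 output/output metrics/metrics.csv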
As it is currently written, the code expects a folder structure of the form:
workspace/
├── train.py
├── config.py
├── datasets.py
├── evaluate.py
├── infer.py
├── masks.py
├── model.py
├── scores.py
├── celebhq/
│ ├── train_256/
│ │ ├── 0.jpg
│ │ └── 1.jpg ...
│ └── val_256/
│ ├── random_medium_256/
│ │ ├── 0.png
│ │ └── 0_mask000.png ...
│ ├── random_thick_256/
│ │ ├── 0.png
│ │ └── 0_mask000.png ...
│ └── random_thin_256/
│   ├── 0.png
│   └── 0_mask000.png ...
├── generated_images/
│ ├── image1.png
│ └── image2.png...
├── metrics/
│ ├── alex.pth
│ ├── squeeze.pth
│ ├── vgg.pth
│ └── metrics.csv
└── output/
├── masked/
│ ├── img1.png
│ └── img2.png ...
└── output/
├── img1.png
└── img2.png...
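The .pth files under metrics/ are presumably the LPIPS backbone weights used during evaluation. A minimal sketch to create the directory skeleton itself (the images, weights, and metrics.csv are produced by dataset preparation and the scripts above):

```python
from pathlib import Path

# Mirrors the tree above; only directories are created here. The images,
# LPIPS weights, and metrics.csv come from dataset prep and the scripts.
dirs = [
    "celebhq/train_256",
    "celebhq/val_256/random_medium_256",
    "celebhq/val_256/random_thick_256",
    "celebhq/val_256/random_thin_256",
    "generated_images",
    "metrics",
    "output/masked",
    "output/output",
]

for d in dirs:
    Path(d).mkdir(parents=True, exist_ok=True)
```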
We have used the LaMa training and inference code for our experiments, available at https://github.com/advimman/lama. The scripts to generate the various masks for the validation set are also available there.
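At the time of writing, the mask-generation entry point in the LaMa repo is bin/gen_mask_dataset.py together with the YAML configs under configs/data_gen/. An invocation along the following lines (run from the LaMa repo root; the config name and source-image folder here are assumptions, so check the LaMa README for the exact paths) produces a validation folder in the format expected above:

python bin/gen_mask_dataset.py configs/data_gen/random_thick_256.yaml celebhq/val_source_256/ celebhq/val_256/random_thick_256/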
If you found this code helpful, please consider citing:
@misc{jeevan2023wavepaint,
title={WavePaint: Resource-efficient Token-mixer for Self-supervised Inpainting},
author={Pranav Jeevan and Dharshan Sampath Kumar and Amit Sethi},
year={2023},
eprint={2307.00407},
archivePrefix={arXiv},
primaryClass={cs.CV}
}