Yufei Han · Heng Guo · Koki Fukai · Hiroaki Santo · Boxin Shi · Fumio Okura · Zhanyu Ma · Yunpeng Jia
Our code was tested on Ubuntu with Python 3.10 and PyTorch 1.11 (PyTorch 2.x may cause problems). Follow these steps to reproduce our environment and results.
conda create -n nersp python=3.10 -y
conda activate nersp
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
git clone https://github.com/PRIS-CV/NeRSP.git
cd NeRSP
pip install -r requirements.txt
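As an optional sanity check (not part of the original setup steps), you can confirm that the installed PyTorch build matches the tested versions:

# Optional sanity check: confirm the PyTorch / CUDA build.
import torch
print(torch.__version__)          # expected: 1.11.0+cu113
print(torch.cuda.is_available())  # should be True for GPU training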
We release the synthetic dataset SMVP3D and the real-world dataset RMVP3D. Each dataset is divided into an original part and a test part (6 views only). If you only want to reproduce the training results, please download the test part and put each case (object) folder under a new folder ./dataset, for example:
.
└── dataset
    ├── snail
    ├── shisa
    └── owl
SMVP3D
SMVP3D contains 5 objects rendered under different environment maps. All images are 512 × 512. You can download the original part and the test part from Google Drive.
RMVP3D
RMVP3D contains 4 objects captured in room environments. The original images are 1024 × 1224; we train and test at 512 × 612. You can download the original part and the test part from Google Drive.
PANDORA
You can download the original dataset from PANDORA. The test parts for Vase and Owl used in our method are at 512 × 612.
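To verify a downloaded case before training, here is a minimal sketch; the dataset/<case>/image layout is an assumption, so adjust it to the actual folder structure of the downloaded data:

# Minimal sketch: check the resolution of one image from a downloaded case.
# NOTE: "dataset/owl/image" is an assumed layout, not confirmed by the repo.
from pathlib import Path
from PIL import Image

img_path = next(Path("dataset/owl/image").iterdir())  # hypothetical path
print(Image.open(img_path).size)  # expect a size consistent with 512 × 612 for the test parts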
SMVP3D
After downloading the test part of SMVP3D, you can start training with:
python exp_runner.py --conf confs/wmask_ours_synthetic.conf --mode train --case snail
RMVP3D
After downloading the test part of RMVP3D, you can start training with:
python exp_runner.py --conf confs/wmask_ours_real.conf --mode train --case shisa
PANDORA
After downloading the test part of PANDORA, you can start training with:
python exp_runner.py --conf confs/wmask_pandora.conf --mode train --case owl
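If you want to run all three example cases back to back, here is a small convenience sketch built only from the commands above:

# Convenience sketch: launch the three example trainings sequentially.
import subprocess

runs = [
    ("confs/wmask_ours_synthetic.conf", "snail"),
    ("confs/wmask_ours_real.conf", "shisa"),
    ("confs/wmask_pandora.conf", "owl"),
]
for conf, case in runs:
    subprocess.run(["python", "exp_runner.py", "--conf", conf,
                    "--mode", "train", "--case", case], check=True)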
After training, run the following command to export the mesh and rendered images:
python exp_runner.py --conf <conf_file> --mode validate_mesh --case <case_name>
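To quickly inspect the exported geometry, a minimal sketch using trimesh; the .ply path below is an assumption, so check the output directory configured in your conf file:

# Minimal sketch: load and inspect the exported mesh with trimesh.
# NOTE: the path is hypothetical; use the file actually written by validate_mesh.
import trimesh

mesh = trimesh.load("exp/snail/meshes/mesh.ply")  # hypothetical path
print(mesh.vertices.shape, mesh.faces.shape)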
Our implementation builds on NeuS, IDR, MVAS, and PANDORA.
@inproceedings{nersp2024yufei,
  title     = {NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images},
  author    = {Han, Yufei and Guo, Heng and Fukai, Koki and Santo, Hiroaki and Shi, Boxin and Okura, Fumio and Ma, Zhanyu and Jia, Yunpeng},
  booktitle = {CVPR},
  year      = {2024},
}