This is a guide for ROS Melodic users, created by RISE Lab. The author's original README starts below.
It is highly recommended to use a virtual environment to install the dependencies:

```bash
virtualenv -p python3.6 --system-site-packages venv
```

Activate the virtual environment:

```bash
source venv/bin/activate
```

Install the dependencies:

```bash
pip install -r requirements.txt
```
You can find the models here. Download and unzip the models in the checkpoints
directory. The directory structure should look like this:
```
project
├── checkpoints
│   ├── DepthSeedingNetwork_3D_TOD_checkpoint.pth
│   ├── RRN_OID_checkpoint.pth
│   └── RRN_TOD_checkpoint.pth
├── src
├── ...
└── uois_3D_example.ipynb
```
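As a quick sanity check before launching the node, you can verify that all three checkpoints are in place. This is a minimal sketch (the helper name is ours, not part of the package); the directory and file names follow the tree above:

```python
from pathlib import Path

# Names of the three pretrained checkpoints listed above.
EXPECTED_CHECKPOINTS = [
    "DepthSeedingNetwork_3D_TOD_checkpoint.pth",
    "RRN_OID_checkpoint.pth",
    "RRN_TOD_checkpoint.pth",
]

def missing_checkpoints(checkpoint_dir):
    """Return the expected checkpoint files not found in checkpoint_dir."""
    root = Path(checkpoint_dir)
    return [name for name in EXPECTED_CHECKPOINTS if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_checkpoints("checkpoints")
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
    else:
        print("All checkpoints found.")
```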
Launch the server node:

```bash
roslaunch uois_ros uois_server.launch
```
Node information is shown below.

- Default node name: `uois_server`
- Services
  - `~init_segmask` (`uois_ros/InitSegmask`): Region of interest (ROI) initialization.
    - Request
      - `color_image` (`sensor_msgs/Image`): Color image of the scene.
    - Response
      - `is_success` (`bool`): True if the model is successfully loaded.
  - `~get_segmask` (`uois_ros/GetSegmask`): Get the segmentation mask.
    - Request
      - `rgb_image` (`sensor_msgs/Image`): RGB image of the scene.
      - `xyz_image` (`sensor_msgs/Image`): XYZ image of the scene.
    - Response
      - `segmask_image` (`sensor_msgs/Image`): Segmentation mask of the scene. The value of each pixel is the object ID; the background is 0. Type: `uint16`.
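Once the returned `segmask_image` has been converted to a NumPy array (e.g. via `cv_bridge`), it can be post-processed with plain NumPy. A sketch (the helper name is ours, not part of the package) that splits the `uint16` ID image into per-object boolean masks:

```python
import numpy as np

def split_instance_masks(segmask):
    """Split a uint16 segmask (0 = background, k = object ID) into a
    dict mapping each object ID to its boolean pixel mask."""
    ids = np.unique(segmask)
    return {int(k): segmask == k for k in ids if k != 0}

# Example: a tiny 3x3 segmask with two objects (IDs 1 and 2).
seg = np.array([[0, 1, 1],
                [0, 2, 1],
                [2, 2, 0]], dtype=np.uint16)
masks = split_instance_masks(seg)
print(sorted(masks))   # → [1, 2]  (object IDs present)
print(masks[1].sum())  # → 3       (pixel count of object 1)
```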
This is a PyTorch-based implementation of our network, UOIS-Net-3D, for unseen object instance segmentation. Our instance segmentation algorithm utilizes a two-stage method to explicitly leverage the strengths of depth and RGB separately for stronger instance segmentation. Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is non-photorealistic. Details of the algorithm can be found in our arXiv paper:
Unseen Object Instance Segmentation for Robotic Environments
Christopher Xie, Yu Xiang, Arsalan Mousavian, Dieter Fox
IEEE Transactions on Robotics (T-RO), 2021.
We highly recommend setting up a virtual environment using Anaconda. Here is an example setup using these tools:
```bash
git clone https://github.com/chrisdxie/uois.git
cd uois3d/
conda env create -f env.yml
```
You can find the models here. We provide a Depth Seeding Network (DSN) model trained on our synthetic Tabletop Object Dataset (TOD), a Region Refinement Network (RRN) model trained on TOD, and an RRN model trained on real data from the Google Open Images Dataset (OID).
You can find the Tabletop Object Dataset (TOD) here. See the data loading and data augmentation code for more details.
We provide sample training code in train_DSN.ipynb and train_RRN.ipynb.
See uois_3D_example.ipynb for an example of how to run the network on example images. In order to run this file, Jupyter Notebook must be installed (this is included in env.yml). If you haven't used Jupyter Notebooks before, here is a tutorial to get you up to speed. This repository provides a few images in the example_images folder.
Notes:
- Make sure to activate the Anaconda environment before running jupyter. This can be done with `conda activate uois3d; jupyter notebook`.
- The notebook should be run in the directory in which it lives (`<ROOT_DIR>`); otherwise the filepaths must be manually adjusted.
- After downloading and unzipping the models, make sure to update `checkpoint_dir` in uois_3D_example.ipynb to point to the directory where the models live.
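In the notebook, `checkpoint_dir` is simply a path string from which the model filenames are built. A minimal sketch of the update (the exact variable layout in the notebook may differ; only the checkpoint filenames are taken from the tree above):

```python
import os

# Point this at the directory where the unzipped .pth files live.
checkpoint_dir = os.path.abspath("checkpoints")

# Build full paths to the downloaded model weights.
dsn_filename = os.path.join(checkpoint_dir, "DepthSeedingNetwork_3D_TOD_checkpoint.pth")
rrn_filename = os.path.join(checkpoint_dir, "RRN_TOD_checkpoint.pth")
print(dsn_filename)
```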
Our code is released under the MIT license.
If you find our work helpful in your research, please cite our work.
```bibtex
@article{xie2021unseen,
  author  = {Christopher Xie and Yu Xiang and Arsalan Mousavian and Dieter Fox},
  title   = {Unseen Object Instance Segmentation for Robotic Environments},
  journal = {IEEE Transactions on Robotics (T-RO)},
  year    = {2021}
}
```