CORE: Cooperative Reconstruction for Multi-Agent Perception
Binglu Wang, Lei Zhang, Zhaozhong Wang, Yongqiang Zhao, and Tianfei Zhou
ICCV 2023 (arXiv 2307.11514)
This paper presents CORE, a conceptually simple, effective, and communication-efficient model for multi-agent cooperative perception. It addresses the task from a novel perspective of cooperative reconstruction, based on two key insights: 1) cooperating agents together provide a more holistic observation of the environment, and 2) the holistic observation can serve as valuable supervision to explicitly guide the model in learning how to reconstruct the ideal observation based on collaboration. CORE instantiates the idea with three major components: a compressor for each agent to create a more compact feature representation for efficient broadcasting, a lightweight attentive collaboration component for cross-agent message aggregation, and a reconstruction module to reconstruct the observation based on aggregated feature representations. This learning-to-reconstruct idea is task-agnostic, and offers clear and reasonable supervision to inspire more effective collaboration, eventually promoting perception tasks. We validate CORE on two large-scale multi-agent perception datasets, OPV2V and V2X-Sim, on two tasks, i.e., 3D object detection and semantic segmentation. Results demonstrate that CORE achieves state-of-the-art performance while being more communication-efficient.
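At a high level, the data flow can be pictured with the sketch below. This is an illustrative outline only, not the released implementation: the module names (compress, attn, reconstruct), layer choices, and tensor shapes are assumptions made for readability.

# Illustrative sketch of CORE's three-stage flow (not the released implementation).
# Each agent's BEV feature map is compressed for broadcasting, fused with a
# lightweight per-location attention step, and a reconstruction head regresses
# the holistic observation used as supervision during training.
import torch
import torch.nn as nn

class CoreSketch(nn.Module):
    def __init__(self, channels=64, compressed=16):
        super().__init__()
        # 1) Compressor: shrink channels before broadcasting to save bandwidth.
        self.compress = nn.Conv2d(channels, compressed, kernel_size=1)
        self.decompress = nn.Conv2d(compressed, channels, kernel_size=1)
        # 2) Attentive collaboration: per-location weights across agents.
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)
        # 3) Reconstruction head: regress the ideal (holistic) observation.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, agent_feats):  # agent_feats: (num_agents, C, H, W)
        shared = self.decompress(self.compress(agent_feats))  # simulate tx/rx
        weights = torch.softmax(self.attn(shared), dim=0)     # (A, 1, H, W)
        fused = (weights * shared).sum(dim=0, keepdim=True)   # (1, C, H, W)
        return fused, self.reconstruct(fused)

feats = torch.randn(3, 64, 100, 252)   # e.g. BEV features from 3 agents
fused, recon = CoreSketch()(feats)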
The code is built upon the OpenCOOD codebase and uses the following library versions:
- python 3.7
- pytorch 1.12.1
- cudatoolkit 11.3.1
Clone the repository:
git clone https://github.com/zllxot/CORE.git
Create a conda virtual environment:
conda create -n core python=3.7
conda activate core
Install PyTorch, cudatoolkit, and torchvision:
conda install pytorch=1.12.1=py3.7_cuda11.3_cudnn8.3.2_0 torchvision=0.13.1=py37_cu113
Install spconv 2.x:
pip install spconv-cu113
Install dependencies:
cd core
pip install -r requirements.txt
python setup.py develop
Build the CUDA extension used for NMS computation:
python opencood/utils/setup.py build_ext --inplace
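After these steps, a quick sanity check like the one below (our own suggestion, not part of the official instructions) confirms that the pinned versions and the CUDA toolchain are visible from Python:

# Quick environment sanity check (not part of the official setup steps).
import torch
import spconv.pytorch  # noqa: F401  -- spconv 2.x import path

print(torch.__version__)          # expect 1.12.1
print(torch.version.cuda)         # expect 11.3
print(torch.cuda.is_available())  # expect True on a GPU machine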
Our experiments are conducted on the OPV2V dataset. You can learn more about the dataset on its official website.
We follow the same configuration as OpenCOOD, utilizing a YAML file to set all the training parameters; a short sketch of how to inspect such a config follows the argument list below. To train your model, run the following command:
python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER} --half]
Arguments Explanation:
- hypes_yaml: the path of the training configuration file, e.g. opencood/hypes_yaml/second_early_fusion.yaml, meaning you want to train an early fusion model that uses SECOND as the backbone. See Tutorial 1: Config System to learn more about the rules of the YAML files.
- model_dir (optional): the path of the checkpoints. This is used to fine-tune trained models. When model_dir is given, the trainer will discard hypes_yaml and load the config.yaml in the checkpoint folder instead.
- half (optional): if set, the model will be trained with half precision. It cannot be combined with multi-GPU training.
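As mentioned above, here is a quick way to see what the file passed to --hypes_yaml controls: load it with PyYAML and print its top-level sections. This is an illustrative snippet, not part of the official tooling; the key name train_params is an assumption about the OpenCOOD config layout, so consult the files in opencood/hypes_yaml for the authoritative schema.

# Inspect a training config (illustrative; specific key names are assumptions).
import yaml

with open('opencood/hypes_yaml/voxelnet_core.yaml') as f:
    hypes = yaml.safe_load(f)

print(sorted(hypes))              # top-level sections of the config
print(hypes.get('train_params'))  # e.g. batch size / epochs, if such a key exists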
To train on multiple GPUs, run the following command:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER}]
Here's an example of how to run the training script on a single GPU:
python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/voxelnet_core.yaml
Before you run the following command, first make sure the validation_dir in config.yaml under your checkpoint folder refers to the testing dataset path, e.g. opv2v_data_dumping/test.
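If you prefer to update that path programmatically, a minimal sketch is shown below. It assumes validation_dir is a top-level key in config.yaml and uses example paths; note that yaml.safe_dump will not preserve comments or key order in the rewritten file.

# Point validation_dir at the OPV2V test split (paths are examples only).
import yaml

cfg_path = 'path/to/your/checkpoint_folder/config.yaml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg['validation_dir'] = 'opv2v_data_dumping/test'  # assumes a top-level key

with open(cfg_path, 'w') as f:
    yaml.safe_dump(cfg, f)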
python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} [--show_vis] [--show_sequence]
Arguments Explanation:
- model_dir: the path to your saved model.
- fusion_method: indicates the fusion strategy; 'early', 'late', and 'intermediate' are currently supported.
- show_vis: whether to visualize the detection overlay with the point cloud.
- show_sequence: the detection results will be visualized in a video stream. It cannot be set together with show_vis.
We are grateful to the creators and contributors of the following open-source cooperative perception works, codebases, and datasets, which played a crucial role in shaping this project: