Repository for the 2023 ICRA paper "Sequential Bayesian Optimization for Adaptive Informative Path Planning with Multimodal Sensing." A video describing our work can be found here and the paper can be found here.
Adaptive Informative Path Planning with Multimodal Sensing (AIPPMS) considers an agent equipped with multiple sensors, each with different sensing accuracy and energy cost. The agent's goal is to explore an unknown, partially observable environment and gather information while respecting its resource constraints. Previous work has focused on the less general Adaptive Informative Path Planning (AIPP) problem, which considers only the effect of the agent's movement on received observations. The AIPPMS problem adds complexity by requiring the agent to reason jointly about the effects of sensing and movement while balancing resource constraints with information objectives.
We formulate the adaptive informative path planning with multimodal sensing (AIPPMS) problem as a belief MDP where the world belief-state is represented as a Gaussian process. We solve the AIPPMS problem through a sequential Bayesian optimization approach using Monte Carlo tree search with Double Progressive Widening (MCTS-DPW) and belief-dependent rewards. We compare our approach with that of the POMDP formulation using POMCP with different rollout policies as first presented in:
Choudhury, Shushman, Nate Gruver, and Mykel J. Kochenderfer. "Adaptive informative path planning with multimodal sensing." Proceedings of the International Conference on Automated Planning and Scheduling. Vol. 30. 2020.
We have included their code in this repository for future benchmarking with the permission of the authors. The code uses the JuliaPOMDP framework.
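To make the search strategy concrete, here is a minimal, generic sketch of MCTS with double progressive widening in Python (not the repository's Julia implementation; the model interface, the widening constants `k` and `alpha`, the exploration constant `c`, and the toy dynamics are all invented for illustration). Progressive widening caps how fast the sets of tried actions and sampled outcomes may grow at each node, which is what makes the search tractable over continuous or belief-dependent spaces:

```python
import math, random

class Node:
    """A belief node in the search tree."""
    def __init__(self):
        self.n = 0           # visit count
        self.children = {}   # action -> ActionNode

class ActionNode:
    """An action edge: running value estimate and sampled outcomes."""
    def __init__(self):
        self.n = 0
        self.q = 0.0
        self.outcomes = []   # (next_belief, child Node, reward)

class MCTSDPW:
    def __init__(self, model, depth=6, iters=500, c=1.0, k=3.0, alpha=0.5):
        self.model, self.depth, self.iters = model, depth, iters
        self.c, self.k, self.alpha = c, k, alpha

    def plan(self, belief):
        root = Node()
        for _ in range(self.iters):
            self._simulate(belief, root, self.depth)
        return max(root.children, key=lambda a: root.children[a].q)

    def _simulate(self, belief, node, depth):
        if depth == 0:
            return 0.0
        node.n += 1
        # Action progressive widening: only grow the action set while
        # |children| < k * n^alpha.
        if len(node.children) < self.k * node.n ** self.alpha:
            node.children.setdefault(self.model.sample_action(belief), ActionNode())
        # UCB over the (widened) action set; unvisited actions get a huge bonus.
        a = max(node.children, key=lambda a: node.children[a].q
                + self.c * math.sqrt(math.log(node.n) / (node.children[a].n + 1e-9)))
        an = node.children[a]
        # State progressive widening: sample a new outcome only while the
        # outcome set is small; otherwise revisit an existing one.
        if len(an.outcomes) < self.k * (an.n + 1) ** self.alpha:
            b2, r = self.model.step(belief, a)
            an.outcomes.append((b2, Node(), r))
        b2, child, r = random.choice(an.outcomes)
        q = r + self.model.gamma * self._simulate(b2, child, depth - 1)
        an.n += 1
        an.q += (q - an.q) / an.n
        return q

# Toy model: the "belief" is a scalar position and reward favors reaching 5.
class ToyModel:
    gamma = 0.95
    def sample_action(self, belief):
        return random.choice([-1, 1])
    def step(self, belief, action):
        b2 = belief + action
        return b2, -abs(b2 - 5)
```

In the actual planner the belief is a Gaussian process and the reward is belief-dependent (e.g. a function of the posterior variance), but the widening logic is the same.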
- `InformationRockSample`: contains the files for the ISRS problem described below
  - `AIPPMS` contains the implementation from Choudhury et al.
  - `GP_BMDP_RockSample` contains the code from our implementation
- `Rover`: contains the files for the Rover Exploration problem described below
  - `POMDP_Rover` contains our implementation of the formulation presented by Choudhury et al.
  - `GP_BMDP_Rover` contains the code for our formulation
    - `CustomGP.jl` sets up the Gaussian process structure
    - `rover_pomdp.jl`, `states.jl`, `beliefs.jl`, `actions.jl`, `observations.jl`, `transitions.jl`, and `rewards.jl` define the POMDP
    - `belief_mdp.jl` converts the POMDP defined above to a belief MDP
    - `Trials_RoverBMDP.jl` sets up the environment and executes the experiments; it can be run with `julia Trials_RoverBMDP.jl` (tested in Julia 1.8)
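Conceptually, the POMDP-to-belief-MDP conversion in `belief_mdp.jl` amounts to wrapping the POMDP's generative model so that transitions act on beliefs rather than states. A language-agnostic sketch in Python (the `sample_state`/`generate`/`update` interface and the toy POMDP below are hypothetical, chosen only to illustrate the pattern, and are not the repository's API):

```python
import random

class BeliefMDP:
    """Wrap a POMDP generative model as an MDP over beliefs: sample a state
    from the current belief, simulate one step, then run the belief update on
    the sampled observation."""
    def __init__(self, pomdp):
        self.pomdp = pomdp

    def step(self, belief, action):
        s = self.pomdp.sample_state(belief)
        s2, obs, r = self.pomdp.generate(s, action)
        return self.pomdp.update(belief, action, obs), r

# Toy POMDP: one hidden bit; the belief is P(bit == 1) and the sensor reads
# the bit correctly with probability 0.8.
class TinyPOMDP:
    def sample_state(self, belief):
        return 1 if random.random() < belief else 0
    def generate(self, state, action):
        obs = state if random.random() < 0.8 else 1 - state
        return state, obs, 0.0
    def update(self, belief, action, obs):
        # Bayes rule over the two hypotheses.
        l1, l0 = (0.8, 0.2) if obs == 1 else (0.2, 0.8)
        return l1 * belief / (l1 * belief + l0 * (1 - belief))
```

Because the wrapped model returns a new belief and a reward, any generative MDP solver (such as MCTS-DPW above) can plan over it directly, which is the route our formulation takes.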
We introduce a new AIPPMS benchmark problem known as the Rover Exploration problem, which is directly inspired by multiple planetary rover exploration missions. The rover begins at a specified starting location and has a set amount of energy available to explore the environment and reach the goal location. The rover is equipped with a spectrometer and a drill. Drilling reveals the true state of the environment at the location where the drill sample was taken and, as a result, is the more costly action from an energy budget perspective. Conversely, the spectrometer provides a noisy observation of the environment and uses less of the energy budget. At each step, the rover decides whether or not it wants to drill. The rover's goal is to collect as many unique samples as it can while respecting its energy constraints. The rover receives a reward for each unique sample it collects.
The environment is modeled as an n × n grid.
For the Rover Exploration problem, we focus on the interplay between the energy budget allotted to the rover and the sensing quality of the spectrometer.
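As a sketch of how a Gaussian process belief can fold in both sensor types (a minimal numpy example, not the repository's `CustomGP.jl` code; the RBF kernel hyperparameters and noise values below are illustrative), each measurement carries its own noise variance: the drill contributes a near-noiseless observation while the spectrometer contributes a noisy one, so the posterior variance collapses at drill locations but stays elevated at spectrometer locations:

```python
import numpy as np

def gp_posterior(X_train, y_train, noise_vars, X_query,
                 lengthscale=1.0, signal_var=1.0):
    """GP posterior mean/variance with an RBF kernel and per-measurement
    noise variances (heteroscedastic regression)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

    K = k(X_train, X_train) + np.diag(noise_vars)   # noisy train covariance
    Ks = k(X_train, X_query)
    Kss = k(X_query, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mu, var

# One drill sample (near-exact) and one spectrometer sample (noisy).
X = np.array([[1.0, 1.0], [3.0, 1.0]])
y = np.array([0.8, 0.2])
noise = np.array([1e-6, 0.5**2])   # drill ~ noiseless; spectrometer sigma = 0.5
mu, var = gp_posterior(X, y, noise, X)
```

The posterior variance at the drill location is driven almost to zero while the spectrometer location retains substantial uncertainty, which is exactly the trade-off a belief-dependent reward lets the planner weigh against the two actions' energy costs.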
The animation below shows the posterior mean and variance of the Gaussian process belief using MCTS-DPW.
We also evaluate our method on the Information Search RockSample (ISRS) problem introduced by He et al. and adapted by Choudhury et al. ISRS is a variation of the classic RockSample problem. The agent must move through an environment represented as a grid in which some cells contain rocks; each rock is either good or bad, and its state is unknown to the agent.
There are also beacons scattered throughout the environment; when the agent is near a beacon, it receives a noisy observation of the state of the surrounding rocks.
The animation below shows the posterior mean and variance of the Gaussian process belief using MCTS-DPW.