MapExRL: Human-Inspired Indoor Exploration with Predicted Environment Context and Reinforcement Learning

By Seungchan Kim

We propose MapExRL, a human-inspired method that combines reinforcement learning with map prediction for indoor robot exploration.

Learning, Planning