Enabling Human-Multi-Robot Collaborative Visual Exploration in Underwater Environments

Stewart C. Jamieson, Ph.D., 2024
Yogesh Girdhar, Advisor

This thesis presents novel approaches for vision-based autonomous exploration in underwater settings using human-multi-robot systems, enabling the robots to adapt to evolving mission priorities learned via a human supervisor's responses to images collected in situ. The robots use unsupervised semantic mapping algorithms to model the spatial distribution of various habitats and terrain types in the environment using distinct semantic classes, and send image queries to the supervisor to learn which of these classes contain the highest concentration of targets of interest. The robots do not need any prior training or examples of these targets, as they learn both the semantic classes and the concentration parameters online. This makes the approach suitable for exploration in complex and unfamiliar environments where new or rare phenomena are frequently discovered, such as in coral reefs and the deep sea. Furthermore, we develop a novel, state-of-the-art risk-based online learning algorithm to learn these concentration parameters using the smallest possible number of queries and without the myopia of previous algorithms, enabling the robots to adapt more quickly and reducing the operational burden on the supervisor. This is especially critical given the extremely low communications bandwidths available in underwater environments, which limit the robots to making only a small number of queries per mission.
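To make the online learning idea concrete, the sketch below shows one simple way such per-class target concentrations could be learned from yes/no supervisor responses, using a conjugate Beta-Bernoulli posterior per semantic class. The class names, priors, and query interface are illustrative assumptions, not the thesis's actual model.

```python
# Minimal sketch (assumed model, not the thesis's formulation): each unsupervised
# semantic class keeps a Beta posterior over its target concentration, updated
# online from binary supervisor responses to image queries.
from dataclasses import dataclass


@dataclass
class ClassBelief:
    alpha: float = 1.0  # pseudo-count of "contains a target" responses
    beta: float = 1.0   # pseudo-count of "no target" responses

    def update(self, is_target: bool) -> None:
        # Conjugate Beta update from one yes/no supervisor response.
        if is_target:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean(self) -> float:
        # Posterior mean of the target concentration for this semantic class.
        return self.alpha / (self.alpha + self.beta)


# One belief per semantic class discovered during the mission (names are hypothetical).
beliefs = {c: ClassBelief() for c in ("sand", "rubble", "coral_head")}

# Simulated supervisor responses to image queries tagged with a class label.
for cls, answer in [("coral_head", True), ("sand", False), ("coral_head", True)]:
    beliefs[cls].update(answer)

for cls, b in beliefs.items():
    print(f"{cls}: estimated target concentration ~ {b.mean:.2f}")
```

Because no labeled training data is required, this kind of posterior can be started from uninformative priors and refined entirely in situ.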

We introduce four primary contributions to address prevalent challenges in underwater exploration. First, our multi-robot semantic representation matching algorithm enables inter-robot sharing of semantic maps and generates consistent global maps with 20-60% higher quality scores than those produced by other methods. Second, we present DeepSeeColor, a novel real-time algorithm for correcting underwater image color distortions, which achieves up to 60 Hz processing speeds, thereby enabling improved semantic mapping and target recognition accuracy online. Third, our efficient and non-myopic risk-based online learning algorithm ensures effective communication between robots and human supervisors, overcoming the myopia that can cause previous algorithms to underestimate a query's value, while remaining computationally tractable. Lastly, we propose a unique reward model and planning algorithm tailored for autonomous exploration and optimized for use with risk-based online reward learning, which resulted in a 25 to 75% increase in the number of (a priori unknown) targets of interest located when compared to baseline surveys. These experiments were conducted with simulated robots exploring real coral reef maps and with real, ecologically meaningful targets of interest. Collectively, these contributions overcome many of the key barriers to vision-based autonomous underwater exploration, enhancing the capability of autonomous underwater vehicles to adapt to new and evolving mission objectives in situ. The advancements presented in this thesis not only contribute substantially to the field of underwater robotic exploration, but also hold implications for broader applications including space exploration, environmental monitoring, and a wide range of online learning problems.
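The decision-theoretic intuition behind query selection under a tight bandwidth budget can be illustrated with a toy one-step calculation: score each candidate query by how much its answer is expected to reduce uncertainty about that class's target concentration. This is only an assumed, myopic stand-in for illustration; the thesis's risk-based algorithm is explicitly non-myopic and uses its own reward model. The Beta parameters and class names below are hypothetical.

```python
# Toy illustration (assumed, myopic scoring; not the thesis's non-myopic algorithm):
# rank candidate image queries by the expected reduction in posterior variance of
# each class's target concentration under a Beta-Bernoulli belief.

def beta_var(a: float, b: float) -> float:
    # Variance of a Beta(a, b) distribution.
    return a * b / ((a + b) ** 2 * (a + b + 1.0))


def query_value(a: float, b: float) -> float:
    """One-step expected variance reduction from a single yes/no supervisor response."""
    p_yes = a / (a + b)  # predictive probability the supervisor answers "target"
    expected_post_var = p_yes * beta_var(a + 1.0, b) + (1.0 - p_yes) * beta_var(a, b + 1.0)
    return beta_var(a, b) - expected_post_var


# Current Beta(alpha, beta) beliefs per semantic class (illustrative values only).
beliefs = {"sand": (1.0, 9.0), "rubble": (2.0, 2.0), "coral_head": (5.0, 1.0)}

scores = {c: query_value(a, b) for c, (a, b) in beliefs.items()}
for cls, v in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{cls}: expected variance reduction {v:.4f}")
print(f"most informative single query: '{max(scores, key=scores.get)}'")
```

A greedy one-step score like this is exactly the kind of myopic valuation the thesis's algorithm improves upon, since it ignores how early answers reshape the value of the remaining query budget.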