Abstract:
Ruggedized display systems promising to provide augmented reality (AR) capabilities for public safety and military use are on the horizon, but the robust and accurate real-time positioning data needed to make use of them is still a long way off. Current localization systems for AR are almost entirely vision-based, which means they suffer in environments that are dark, smoky, dusty, or have other visual obstructions. Inertial-only systems have been studied extensively in the literature, but they quickly accumulate drift, especially in real-world scenarios where users engage in arbitrary motion patterns. Many existing systems rely on some form of localization infrastructure to mitigate the drift, but this infrastructure is not available in buildings that have not been explicitly mapped or outfitted with dedicated sensors. In this work, we will explore a novel approach to first responder localization that aims to be accurate enough for AR, but without the need for any localization infrastructure or visual line of sight to the environment. To achieve this, we propose a system with two components: (1) a range-based back end that uses peer-to-peer ultra-wideband distance measurements to establish a relative coordinate frame between users without needing any fixed infrastructure in the building, and (2) a radar-inertial odometry front end that fuses millimeter-wave radar scans with traditional inertial sensors to produce a smooth odometry estimate, similar in function to visual odometry but able to see through visual obstructions like smoke, dust, and light materials. Such a system would enable AR in environments where localization is typically challenging. We will also explore integration with the CONIX ARENA platform, enabling the authoring of AR applications that are entirely ad hoc and infrastructure-free. This means that users would be able to interact in an AR scene without having to outfit the physical space with tags or sensors.
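The abstract does not spell out how the range-based back end works; as a rough, hedged illustration of the underlying idea, the sketch below recovers a relative 2D coordinate frame for a few users from simulated, noisy peer-to-peer UWB ranges using nonlinear least squares. The solver choice (SciPy), the 10 cm noise level, and the gauge-fixing convention are assumptions for illustration only, not the proposed system.

```python
# Illustrative sketch only: relative positioning from peer-to-peer ranges.
# All parameters (noise level, gauge fixing, solver) are hypothetical.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Simulated ground-truth user positions (metres) and noisy pairwise UWB ranges.
true_pos = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
n = len(true_pos)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
ranges = np.array([np.linalg.norm(true_pos[i] - true_pos[j]) for i, j in pairs])
ranges += rng.normal(scale=0.10, size=ranges.shape)  # assume ~10 cm ranging noise

def residuals(x):
    # Gauge fixing: user 0 at the origin, user 1 on the +x axis. A purely
    # range-based relative frame is only defined up to translation, rotation,
    # and reflection, so some convention like this is needed.
    pos = np.vstack([[0.0, 0.0], [x[0], 0.0], x[1:].reshape(-1, 2)])
    return np.array([np.linalg.norm(pos[i] - pos[j]) for i, j in pairs]) - ranges

x0 = np.concatenate([[1.0], rng.normal(scale=1.0, size=2 * (n - 2))])
sol = least_squares(residuals, x0)
est = np.vstack([[0.0, 0.0], [sol.x[0], 0.0], sol.x[1:].reshape(-1, 2)])
print(np.round(est, 2))  # estimated relative user positions (metres)
```

The recovered coordinates agree with the true geometry up to the inherent translation/rotation/flip ambiguity of range-only measurements, which is why the proposed system treats this as a relative, rather than absolute, coordinate frame.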
Release Date: 10/12/2022