Abstract:
mmWave radars suffer from the problem of sparsity: compared with a lidar, radar point clouds are more than 1,000 times sparser. This makes it challenging to use mmWave radars for 3D scene reconstruction. While cameras can produce a scale-ambiguous (relative) depth map, they cannot recover the metric distances to objects in the scene. In this project, our aim is to obtain a lidar-like dense depth map via the fusion of a single monocular camera image and a mmWave radar point cloud. We propose a radar-camera fusion technique that generates dense depth from sparse radar depth and a camera image. Additionally, we hope that such a framework can also be used to study pixel-wise fusion of radar and camera data.
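As an illustration of the kind of pixel-wise fusion described above, the sketch below projects 3D radar points into the image plane to form a sparse depth channel and stacks it with the RGB image. This is not the project's actual pipeline; the camera intrinsics, image size, and radar points are hypothetical placeholders, and a learned depth-completion network would consume the fused tensor.

```python
import numpy as np

def project_radar_to_depth(points, K, h, w):
    """Project 3D radar points (N, 3), given in the camera frame,
    into a sparse (h, w) depth map using camera intrinsics K."""
    pts = points[points[:, 2] > 0]          # keep points in front of the camera
    uv = (K @ pts.T).T                      # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    depth = np.zeros((h, w), dtype=np.float32)
    inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # write farthest points first so the nearest return wins per pixel
    for ui, vi, zi in sorted(zip(u[inb], v[inb], pts[inb, 2]),
                             key=lambda t: -t[2]):
        depth[vi, ui] = zi
    return depth

# Hypothetical example: fuse RGB and sparse radar depth into a 4-channel input
rgb = np.zeros((480, 640, 3), dtype=np.float32)            # placeholder image
radar = np.array([[1.0, 0.5, 10.0], [-2.0, 0.0, 25.0]])    # toy radar points (m)
K = np.array([[500.0, 0.0, 320.0],                         # assumed intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
sparse = project_radar_to_depth(radar, K, 480, 640)
fused = np.concatenate([rgb, sparse[..., None]], axis=-1)  # shape (480, 640, 4)
```

The extreme sparsity the abstract describes is visible here: only a handful of pixels in the depth channel are nonzero, which is why dense completion from the camera image is needed.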
Release Date: 10/02/2021