Article

Radar Voxel Fusion for 3D Object Detection

1 Institute of Automotive Technology, Technical University of Munich, 85748 Garching, Germany
2 mLab: Real-Time and Embedded Systems Lab, University of Pennsylvania, Philadelphia, PA 19104, USA
* Author to whom correspondence should be addressed.
Academic Editor: Chris G. Tzanis
Appl. Sci. 2021, 11(12), 5598; https://doi.org/10.3390/app11125598
Received: 29 April 2021 / Revised: 30 May 2021 / Accepted: 11 June 2021 / Published: 17 June 2021
(This article belongs to the Section Robotics and Automation)
Automotive traffic scenes are complex due to the variety of scenarios, objects, and weather conditions that must be handled. In contrast to more constrained environments, such as automated underground trains, automotive perception systems cannot be tailored to a narrow set of specific tasks but must handle an ever-changing environment with unforeseen events. As no single sensor is currently able to reliably perceive all relevant activity in the surroundings, sensor data fusion is applied to capture as much information as possible. Fusing data from different sensors and sensor modalities at a low abstraction level allows sensor weaknesses and misdetections to be compensated before the information-rich raw data are compressed into sensor-individual object detections, at which point information is lost. This paper develops a low-level sensor fusion network for 3D object detection that fuses lidar, camera, and radar data. The fusion network is trained and evaluated on the nuScenes data set. On the test set, fusing radar data increases the resulting AP (Average Precision) detection score by about 5.1% compared to the baseline lidar network. The radar fusion proves especially beneficial in inclement conditions such as rain and night scenes. Fusing additional camera data contributes positively only in conjunction with the radar fusion, which shows that interdependencies among the sensors are important for the detection result. Additionally, the paper proposes a novel loss to handle the discontinuity of a simple yaw representation for object detection. Our updated loss increases the detection and orientation estimation performance for all sensor input configurations. The code for this research has been made available on GitHub.
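The abstract describes the low-level fusion only conceptually. The sketch below illustrates one plausible way to merge lidar and radar returns into a single point list before voxelization, padding each point to a shared feature layout; the function name, feature layout, and array shapes are assumptions for illustration, not the authors' actual code.

    import numpy as np

    def merge_lidar_radar(lidar_xyzi, radar_xyzrv):
        # lidar_xyzi : (N, 4) array with x, y, z, intensity
        # radar_xyzrv: (M, 5) array with x, y, z, RCS, radial velocity
        # Pad both to a shared layout [x, y, z, intensity, RCS, velocity]
        # so a single voxel grid can be built over the merged point list.
        n, m = lidar_xyzi.shape[0], radar_xyzrv.shape[0]
        lidar = np.concatenate([lidar_xyzi, np.zeros((n, 2))], axis=1)
        radar = np.concatenate(
            [radar_xyzrv[:, :3], np.zeros((m, 1)), radar_xyzrv[:, 3:]], axis=1
        )
        return np.concatenate([lidar, radar], axis=0)

Likewise, the yaw loss is only named, not specified. A minimal sketch of one common periodic formulation (regressing the sine of the angle difference toward zero) is shown below as an assumed stand-in, not necessarily the loss proposed in the paper.

    import torch
    import torch.nn.functional as F

    def periodic_yaw_loss(pred_yaw, gt_yaw):
        # Regress sin(pred - gt) toward zero: the loss stays smooth across
        # the +/- pi wrap-around, unlike a plain L1/L2 on the raw angle.
        # The remaining 180-degree ambiguity is usually resolved by a
        # separate direction classifier.
        delta = pred_yaw - gt_yaw
        return F.smooth_l1_loss(torch.sin(delta), torch.zeros_like(delta))

Because sin(delta) is identical for delta and delta plus any multiple of 2*pi, the wrap-around of a plain yaw angle no longer produces a jump in the loss.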
Keywords: perception; deep learning; sensor fusion; radar point cloud; object detection; sensor; camera; radar; lidar
MDPI and ACS Style

Nobis, F.; Shafiei, E.; Karle, P.; Betz, J.; Lienkamp, M. Radar Voxel Fusion for 3D Object Detection. Appl. Sci. 2021, 11, 5598. https://doi.org/10.3390/app11125598
