Correction published on 23 May 2024, see Sensors 2024, 24(11), 3320.
Article

HP3D-V2V: High-Precision 3D Object Detection Vehicle-to-Vehicle Cooperative Perception Algorithm

1 Faculty of Electrical Engineering, Henan University of Technology, Zhengzhou 450001, China
2 School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
3 Division of Mechanics and Acoustics, National Institute of Metrology, Beijing 102200, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(7), 2170; https://doi.org/10.3390/s24072170
Submission received: 22 February 2024 / Revised: 15 March 2024 / Accepted: 22 March 2024 / Published: 28 March 2024 / Corrected: 23 May 2024

Abstract

Cooperative perception in the field of connected autonomous vehicles (CAVs) aims to overcome the inherent limitations of single-vehicle perception systems, including long-range occlusion, low resolution, and susceptibility to weather interference. To this end, we propose a high-precision 3D object detection V2V cooperative perception algorithm. The algorithm uses a voxel grid-based statistical filter to denoise point cloud data, yielding clean and reliable input. In addition, we design a feature extraction network that fuses voxel and PointPillars representations and encodes them into BEV features, which addresses the lack of spatial feature interaction in the PointPillars approach and enriches the semantic information of the extracted features. A maximum pooling step reduces dimensionality and generates pseudo-images, thereby avoiding costly 3D convolutional computation. To enable effective feature fusion, we design a feature-level cross-vehicle feature fusion module. Experimental validation on the OPV2V dataset assesses cooperative perception performance against existing mainstream cooperative perception algorithms, and ablation experiments confirm the contribution of each component. The results show that our architecture is lightweight while achieving higher average precision (AP) than existing models.
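The pipeline the abstract describes (voxel-grid statistical denoising, pillar-style max-pooled BEV pseudo-images, and feature-level cross-vehicle fusion) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names, grid sizes, and thresholds are assumptions, and the element-wise max fusion stands in for the paper's learned cross-vehicle fusion module.

```python
import numpy as np

def voxel_statistical_filter(points, voxel_size=2.0, min_points=2):
    """Drop points whose voxel contains fewer than min_points neighbors
    (isolated points are treated as noise). Thresholds are illustrative."""
    keys = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # Group points by voxel id and count occupancy per voxel.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    return points[counts[inverse] >= min_points]

def pillar_bev_maxpool(points, features, grid=(8, 8), extent=4.0):
    """Scatter per-point features into a BEV grid with max pooling,
    producing a (C, H, W) pseudo-image without any 3D convolutions."""
    C, (H, W) = features.shape[1], grid
    bev = np.full((C, H, W), -np.inf)
    # Map x/y coordinates in [-extent, extent] to grid column/row indices.
    ix = np.clip(((points[:, 0] + extent) / (2 * extent) * W).astype(int), 0, W - 1)
    iy = np.clip(((points[:, 1] + extent) / (2 * extent) * H).astype(int), 0, H - 1)
    for c in range(C):
        # Unbuffered in-place max so repeated cells keep the largest value.
        np.maximum.at(bev[c], (iy, ix), features[:, c])
    bev[bev == -np.inf] = 0.0  # empty cells become zero
    return bev

def fuse_bev(ego_bev, coop_bevs):
    """Feature-level cross-vehicle fusion: element-wise max over spatially
    aligned BEV maps (a common simple fusion baseline)."""
    stack = np.stack([ego_bev] + list(coop_bevs), axis=0)
    return stack.max(axis=0)
```

A cooperating vehicle would transmit its BEV pseudo-image (after compression and spatial alignment to the ego frame) rather than raw points, which is what makes feature-level fusion bandwidth-friendly compared with raw point cloud sharing.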
Keywords: cooperative perception; 3D object detection; feature extraction; cross-vehicle feature fusion

Share and Cite

MDPI and ACS Style

Chen, H.; Wang, H.; Liu, Z.; Gu, D.; Ye, W. HP3D-V2V: High-Precision 3D Object Detection Vehicle-to-Vehicle Cooperative Perception Algorithm. Sensors 2024, 24, 2170. https://doi.org/10.3390/s24072170

AMA Style

Chen H, Wang H, Liu Z, Gu D, Ye W. HP3D-V2V: High-Precision 3D Object Detection Vehicle-to-Vehicle Cooperative Perception Algorithm. Sensors. 2024; 24(7):2170. https://doi.org/10.3390/s24072170

Chicago/Turabian Style

Chen, Hongmei, Haifeng Wang, Zilong Liu, Dongbing Gu, and Wen Ye. 2024. "HP3D-V2V: High-Precision 3D Object Detection Vehicle-to-Vehicle Cooperative Perception Algorithm" Sensors 24, no. 7: 2170. https://doi.org/10.3390/s24072170

APA Style

Chen, H., Wang, H., Liu, Z., Gu, D., & Ye, W. (2024). HP3D-V2V: High-Precision 3D Object Detection Vehicle-to-Vehicle Cooperative Perception Algorithm. Sensors, 24(7), 2170. https://doi.org/10.3390/s24072170

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
