Open Access Article

Research on a Lane Compensation Method Based on Multi-Sensor Fusion

1 College of Transportation, Shandong University of Science and Technology, Qingdao 266590, China
2 State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100084, China
3 School of Mechanical and Automotive Engineering, Liaocheng University, Liaocheng 252059, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(7), 1584; https://doi.org/10.3390/s19071584
Received: 10 January 2019 / Revised: 9 March 2019 / Accepted: 29 March 2019 / Published: 2 April 2019
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
The lane curvature output by a vision sensor can jump within a short period because of shadows, lighting changes, and broken lane lines, which causes serious problems for unmanned driving control. It is therefore particularly important to predict or compensate the real lane in real time while the sensor output jumps. This paper presents a lane compensation method based on multi-sensor fusion of a global positioning system (GPS), an inertial measurement unit (IMU) and vision sensors. A cubic polynomial function of the longitudinal distance is selected as the lane model. A Kalman filter estimates vehicle velocity and yaw angle from GPS and IMU measurements, and a vehicle kinematics model describes the vehicle motion. The method uses the geometric relationship between the vehicle and the relative lane motion at the current moment to solve for the coefficients of the lane polynomial at the next moment. Simulation and vehicle test results show that the predicted information can compensate for failures of the vision sensor with good real-time performance, robustness and accuracy.
Keywords: sensor fusion; kinematics; lane detection; vision; virtual lane
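The abstract outlines the core idea: represent the lane as a cubic polynomial in the vehicle frame, and when the vision sensor drops out, propagate the polynomial forward using the Kalman-filtered velocity and yaw rate from GPS/IMU. The paper's exact propagation equations are not given here, so the sketch below is only a minimal illustration of that idea: it samples the current lane polynomial, applies the rigid-body motion implied by a simple kinematic model over one time step, and refits a cubic in the new vehicle frame. The function name, the 60 m sampling horizon, and the small-angle displacement model are all assumptions, not the authors' implementation.

```python
import numpy as np

def predict_lane(coeffs, v, yaw_rate, dt):
    """Hypothetical one-step lane prediction during a vision dropout.

    coeffs   : [c0, c1, c2, c3] of y = c0 + c1*x + c2*x^2 + c3*x^3,
               in the current vehicle frame (x forward, y left).
    v        : vehicle velocity (m/s), e.g. a Kalman filter estimate.
    yaw_rate : yaw rate (rad/s), e.g. from the fused GPS/IMU state.
    dt       : prediction step (s).
    """
    # Vehicle displacement over dt (small-angle kinematic model).
    dpsi = yaw_rate * dt
    dx = v * dt * np.cos(dpsi / 2.0)
    dy = v * dt * np.sin(dpsi / 2.0)

    # Sample the lane in the old vehicle frame (assumed 60 m horizon).
    xs = np.linspace(0.0, 60.0, 30)
    ys = np.polyval(coeffs[::-1], xs)  # polyval wants highest power first

    # Express the samples in the new vehicle frame:
    # translate by -(dx, dy), then rotate by -dpsi.
    c, s = np.cos(-dpsi), np.sin(-dpsi)
    xn = c * (xs - dx) - s * (ys - dy)
    yn = s * (xs - dx) + c * (ys - dy)

    # Refit a cubic in the new frame; return [c0, c1, c2, c3] order.
    return np.polyfit(xn, yn, 3)[::-1]
```

For a straight lane 1.5 m to the left and straight-ahead motion, the predicted coefficients stay at [1.5, 0, 0, 0], as expected: moving along a straight lane does not change its description in the vehicle frame.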
MDPI and ACS Style

Li, Y.; Zhang, W.; Ji, X.; Ren, C.; Wu, J. Research on a Lane Compensation Method Based on Multi-Sensor Fusion. Sensors 2019, 19, 1584.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
