Article

Longitudinal Velocity Estimation of Driverless Vehicle by Fusing LiDAR and Inertial Measurement Unit

College of Mechanical Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
*
Author to whom correspondence should be addressed.
World Electr. Veh. J. 2023, 14(7), 175; https://doi.org/10.3390/wevj14070175
Submission received: 15 May 2023 / Revised: 26 June 2023 / Accepted: 29 June 2023 / Published: 2 July 2023

Abstract

Under certain extreme conditions, representing the longitudinal velocity of an unmanned vehicle by tire forces or tire angular velocities fails. To address this problem, this paper proposes a longitudinal velocity estimation method that fuses LiDAR and an inertial measurement unit (IMU). First, to improve the accuracy of the LiDAR odometry, IMU information is introduced when eliminating point cloud motion distortion. Then, the statistical characteristics of the system noise are tracked by an adaptive noise estimator, which reduces the model error and suppresses filtering divergence, thereby improving the robustness and filtering accuracy of the algorithm. Next, to further improve the estimation accuracy of the longitudinal velocity, time-series analysis is used to predict the longitudinal acceleration, which improves the accuracy of the prediction step of the unscented Kalman filter (UKF). Finally, the feasibility of the estimation method is verified by simulation and real-vehicle experiments. The simulations cover medium- and high-velocity conditions: under the high-velocity condition (0–30 m/s), the root mean square error is 1.573 m/s; in the real-vehicle experiment (0–5 m/s), it is 0.113 m/s.

1. Introduction

Longitudinal velocity is the velocity component along the forward direction of the vehicle body during driving, and its estimation accuracy directly affects the safety control of the vehicle by active and passive safety systems such as the anti-lock braking system and acceleration slip regulation [1,2]. Low-precision longitudinal velocity estimation not only increases unnecessary energy consumption, but may even affect the safety of passengers. Therefore, in order to achieve high-precision control of driverless vehicles, reduce energy consumption, and ensure the safety of passengers, it is of great significance to carry out in-depth research on vehicle longitudinal velocity estimation methods in the field of driverless vehicles [3].
Research on vehicle longitudinal velocity estimation can be divided into two directions, dynamics-based methods and kinematics-based methods [4], along with some intelligent estimation methods [5,6,7]. Dynamics-based methods rely mainly on a dynamics model of the vehicle and estimate the longitudinal velocity by analyzing the forces acting on the vehicle model. In [8], a nonlinear dynamic model including the longitudinal velocity, lateral velocity, and yaw rate of the vehicle was established based on a linear tire model, and real-time estimation of the vehicle velocity was realized. In [9], a nonlinear dynamic model including the longitudinal velocity, lateral velocity, yaw rate, and four-wheel angular velocities was established based on the Magic Formula tire model, and the vehicle velocity was estimated using the unscented Kalman filter (UKF). In [10], a hybrid Kalman filter was proposed: when the error in the statistical characteristics of the system noise is small, a square-root cubature Kalman filter estimates the vehicle velocity; when the error is large, a square-root cubature Kalman finite impulse filter performs the real-time estimation. In [11], an IMU-based state estimation method was proposed that uses two estimators built on the vehicle dynamics model to estimate the longitudinal velocity, pitch angle, lateral velocity, and roll angle.
Although dynamics-based methods are less affected by external environmental factors, they are limited by the dynamics model, and the accuracy of the estimation algorithm cannot be guaranteed under some extreme conditions. Therefore, this paper mainly studies kinematics-based estimation methods. In [12], the system noise of the UKF was improved to prevent filtering divergence. In [13], the sideslip angle of the vehicle's center of mass was described using the vehicle yaw rate, longitudinal velocity, and lateral acceleration, and the vehicle's lateral velocity was then deduced from the definition of the center-of-mass sideslip angle. In [14], with the vehicle yaw rate as the scheduling parameter, a linear parameter-varying vehicle kinematics model was established, and real-time estimation of the vehicle velocity was realized using a robust filter. In [15,16], a kinematics model including the longitudinal and lateral velocities of the vehicle was established using the tire longitudinal and lateral forces measured by sensors, and real-time estimation of both velocities was realized using the Kalman filter. In [17], a hybrid nominal model including the kinematics and dynamics of the vehicle was established using the tire lateral force measured by sensors, and real-time estimation of the vehicle velocity was realized using the extended Kalman filter. In [15,16,17], the tire force was treated as measurable state information; however, given the mounting position of tire force sensors and the constraints on their operating conditions, it is difficult to meet the control requirements of driverless vehicles.
This problem can be solved by combining positioning technology with an IMU to estimate the vehicle velocity. Usually, the combination of GPS and IMU yields accurate estimation results [18]. In [19], the accuracy of the estimated heading and position was improved by using carrier-phase differential GPS. In [20], the lateral velocity was estimated using two low-cost GPS receivers, and the low update frequency of traditional GPS signals was compensated by combining them with an IMU. In [21], an extended square-root cubature Kalman filter was proposed to fuse GPS and IMU. In [22], motion and constraint models were combined with IMU data to overcome gyroscope drift and disturbances from external acceleration. In [23], a novel Kalman filter was proposed to handle sensor jitter noise, further improving estimation accuracy. However, GPS may lose its signal in some areas, resulting in serious estimation errors [24]. Meanwhile, with the development of autonomous driving, high-precision and highly robust vehicle positioning has become one of the fundamental technologies in the field. However, the above studies have not fully exploited the positioning information of driverless vehicles for velocity estimation.
Therefore, this paper proposes a method that fuses LiDAR and IMU information to achieve accurate estimation of the longitudinal velocity, addressing both the failure of tire-based longitudinal velocity representation under certain extreme operating conditions and the possibility of GPS signal loss in certain areas. At the same time, to reduce model error and suppress filtering divergence, a noise-adaptive module is added to the UKF (AUKF), and time-series analysis is introduced into the adaptive unscented Kalman filter (TSA-AUKF) to further improve estimation accuracy. The structure of the proposed method is shown in Figure 1. On this basis, the novelties of this paper are summarized as follows:
  • To address the issue of point cloud distortion during the motion of LiDAR, IMU is used to predict the pose changes of LiDAR to reduce the impact of motion distortion on odometry accuracy.
  • Time-series analysis is introduced into the motion equation to predict the trend of longitudinal acceleration changes through multiple sets of IMU historical data, thereby improving the estimation accuracy of the filtering algorithm.
The rest of this paper is structured as follows. In the second section, the LiDAR-IMU odometry is introduced. In the third section, the proposed longitudinal velocity estimation method is introduced. In the fourth section, simulation experiments and real vehicle experiments are conducted, and the results are analyzed. In the fifth section, the full text is summarized.

2. LiDAR-IMU Odometry

2.1. Correction of Point Cloud Motion Distortion

This paper uses a mechanical rotating LiDAR to output a point cloud collected within a cycle as one frame. When the system is in motion and the position of the LiDAR changes, the point cloud within a frame corresponds to different coordinate origins, resulting in motion distortion of the obtained point cloud data, as shown in Figure 2.
To correct the motion distortion of the LiDAR, the pose changes of the LiDAR within one frame must be obtained. However, because the LiDAR often moves at a varying velocity and the update frequency of the LiDAR odometry is low, assuming uniform motion within a frame performs poorly in practice. The IMU sampling frequency can reach 100 Hz, which accurately captures the motion and provides high-precision local pose estimates. By integrating the data collected by the IMU and using linear interpolation, the pose increment of the laser point at any time $t_i \in [t_{k-1}, t_k]$ relative to time $t_{k-1}$ can be obtained. The calculation formula is
$$T_i = \frac{t_k - t_i}{t_k - t_{k-1}}\, T_{t_{k-1}}^{t_k} \quad (1)$$
The laser point coordinates at time t i are converted to the first laser point coordinate system through pose increment. The calculation formula is
$$P'_i = T_i P_i \quad (2)$$
where $P_i$ is the position of the laser point at time $t_i$, and $P'_i$ is the position of $P_i$ in the coordinate system of the first laser point. Using the IMU to eliminate motion distortion in the LiDAR data effectively avoids problems such as map ghosting caused by distortion. The effect of motion distortion removal is shown in Figure 3.
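As an illustration of (1) and (2), the following minimal Python sketch (not the authors' implementation) scales an IMU-integrated frame pose increment by each point's timestamp and re-projects the point into the first-point coordinate system; the per-point timestamps, the homogeneous-matrix representation, and all function names are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def correct_motion_distortion(points, stamps, t_km1, t_k, T_frame):
    """Re-project each laser point into the first-point frame, Eqs. (1)-(2).

    points  : (N, 3) points in the sensor frame at their own timestamps
    stamps  : (N,)   per-point timestamps inside [t_{k-1}, t_k]
    T_frame : (4, 4) IMU-integrated pose increment over the whole frame
    """
    rots = Rotation.concatenate(
        [Rotation.identity(), Rotation.from_matrix(T_frame[:3, :3])])
    slerp = Slerp([0.0, 1.0], rots)          # rotation interpolated on SO(3)
    out = np.empty_like(points, dtype=float)
    for i, (p, t_i) in enumerate(zip(points, stamps)):
        s = (t_k - t_i) / (t_k - t_km1)      # interpolation ratio of Eq. (1)
        R_i = slerp(s).as_matrix()           # rotation part of T_i
        tr_i = s * T_frame[:3, 3]            # translation part of T_i
        out[i] = R_i @ p + tr_i              # Eq. (2): P'_i = T_i * P_i
    return out
```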

2.2. LiDAR Point Cloud Clustering

During LiDAR data collection, small objects may not appear repeatedly in adjacent frames, causing errors in inter-frame matching. Therefore, this paper calculates the angle between adjacent laser beam scanning points and clusters the point cloud based on the size of this angle; clusters with fewer than 30 points are removed as noise to improve feature extraction accuracy and inter-frame matching efficiency. As shown in Figure 4, starting from the red point as the center, the adjacent points (green points) are traversed and the angle between the center point and each adjacent point is calculated, until the clustering condition is no longer satisfied.
The calculation process is
$$h = d_2 \sin\alpha \quad (3)$$
$$l_2 = d_2 \cos\alpha \quad (4)$$
$$l_1 = d_1 - l_2 = d_1 - d_2 \cos\alpha \quad (5)$$
The flatness $F_{angle}$ is
$$F_{angle} = \operatorname{atan2}\left(d_2 \sin\alpha,\ d_1 - d_2 \cos\alpha\right) \quad (6)$$
where $d_1$ and $d_2$ are the lengths of the blue and orange laser beams, and $\alpha_h$ and $\alpha_v$ are the angular resolutions in the horizontal and vertical directions, respectively. When adjacent laser beam scanning points lie on the same horizontal line, $\alpha$ equals the horizontal angular resolution $\alpha_h$; when they lie on the same vertical line, $\alpha$ equals the vertical angular resolution $\alpha_v$. During the iteration, points whose flatness exceeds the threshold are placed in the set $N$. If the number of points in $N$ is less than 30 at the end of the iteration, the points in $N$ are removed as noise.
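A compact sketch of this flatness-based clustering might look as follows; the 30-point noise cutoff comes from the text, while the 4-neighbourhood on the range image, the 60° threshold, and all names are assumptions.

```python
import math
import numpy as np
from collections import deque

def flatness(d1, d2, alpha):
    # Eq. (6): F_angle = atan2(d2*sin(alpha), d1 - d2*cos(alpha))
    return math.atan2(d2 * math.sin(alpha), d1 - d2 * math.cos(alpha))

def cluster_and_denoise(rng, alpha_h, alpha_v,
                        thr=math.radians(60.0), min_pts=30):
    """BFS over the range image rng (rows x cols of ranges); neighbours
    whose flatness angle exceeds thr join the cluster, and clusters with
    fewer than min_pts points are marked as noise (label -2)."""
    rows, cols = rng.shape
    labels = np.full((rows, cols), -1, dtype=int)
    cur = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0, c0] != -1 or rng[r0, c0] <= 0.0:
                continue
            queue, members = deque([(r0, c0)]), [(r0, c0)]
            labels[r0, c0] = cur
            while queue:
                r, c = queue.popleft()
                for dr, dc, a in ((0, 1, alpha_h), (0, -1, alpha_h),
                                  (1, 0, alpha_v), (-1, 0, alpha_v)):
                    rr, cc = r + dr, (c + dc) % cols  # horizontal wrap-around
                    if not (0 <= rr < rows):
                        continue
                    if labels[rr, cc] != -1 or rng[rr, cc] <= 0.0:
                        continue
                    d1 = max(rng[r, c], rng[rr, cc])  # longer beam first
                    d2 = min(rng[r, c], rng[rr, cc])
                    if flatness(d1, d2, a) > thr:
                        labels[rr, cc] = cur
                        queue.append((rr, cc))
                        members.append((rr, cc))
            if len(members) < min_pts:                # small cluster -> noise
                for m in members:
                    labels[m] = -2
            cur += 1
    return labels
```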

2.3. Feature Extraction of Point Cloud

After clustering the original LiDAR point cloud, the range image is divided horizontally into several sub-images, and each sub-image is traversed and processed in turn. Any point $p_i$ is selected in the point cloud, multiple consecutive points in the vertical direction of $p_i$ are selected to construct a point set $S$, and the smoothness $c$ of each point in the set is calculated as
$$c = \frac{1}{|S| \cdot \|r_i\|} \left\| \sum_{j \in S,\, j \neq i} (r_j - r_i) \right\| \quad (7)$$
A threshold $c_{th}$ is set to distinguish point features: points with smoothness greater than the threshold are edge points, while points with smoothness less than the threshold are planar points. To extract features evenly from all directions, the LiDAR scan data are divided horizontally into two subsets. The smoothness values of the points in each subset are sorted, and the edge feature point $\mathcal{P}_i^e$ with the largest smoothness is selected from each row of the subset; the planar feature point $\mathcal{P}_i^f$ with the smallest smoothness is then selected in the same way.
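For instance, a minimal sketch of the smoothness computation (7) and the edge/planar split described above, with an illustrative neighbourhood size and threshold, could be:

```python
import numpy as np

def smoothness(r, i, half=5):
    """Eq. (7): smoothness c of point i from the ranges r of its
    2*half neighbours (the neighbourhood size is an assumption)."""
    S = np.r_[r[i - half:i], r[i + 1:i + half + 1]]
    return np.abs(np.sum(S - r[i])) / (len(S) * abs(r[i]))

def pick_features(r, c_th=0.1, half=5):
    """Return one edge index (largest c above c_th) and one planar index
    (smallest c at or below c_th); c_th = 0.1 is illustrative."""
    idx = np.arange(half, len(r) - half)
    c = np.array([smoothness(r, i, half) for i in idx])
    edge_mask, plane_mask = c > c_th, c <= c_th
    edge = idx[edge_mask][np.argmax(c[edge_mask])] if edge_mask.any() else None
    plane = idx[plane_mask][np.argmin(c[plane_mask])] if plane_mask.any() else None
    return edge, plane
```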

2.4. Feature Matching of Point Cloud

By extracting feature points from LiDAR keyframes and matching the feature points of adjacent keyframes, the pose transformation of the LiDAR can be solved. Let the point clouds scanned by the LiDAR at times $t_k$ and $t_{k+1}$ be $P_k$ and $P_{k+1}$, the edge feature point sets extracted from them be $F_k^e$ and $F_{k+1}^e$, and the planar feature point sets be $F_k^p$ and $F_{k+1}^p$. The purpose of inter-frame matching is to obtain the transformation between $P_k$ and $P_{k+1}$, i.e., the transformations between $F_{k+1}^e$ and $F_k^e$ and between $F_{k+1}^p$ and $F_k^p$. Because of the distortion generated during LiDAR motion, the edge feature point set $F_{k+1}^e$ and planar feature point set $F_{k+1}^p$ extracted at time $t_{k+1}$ are projected to time $t_k$, denoted $\tilde{F}_{k+1}^e$ and $\tilde{F}_{k+1}^p$. Edge points are points at sharp positions in the 3D environment, and edge feature points are matched point-to-line; planar points are points in smooth regions of the 3D environment, and planar feature points are matched point-to-plane.

2.4.1. Edge Point Matching

A point $F_{k+1}^{e,a}$ in the edge feature point set $F_{k+1}^{e}$ at time $t_{k+1}$ is selected; then the point $F_k^{e,b}$ closest to $F_{k+1}^{e,a}$ at time $t_k$ and the closest point $F_k^{e,c}$ in the scan line adjacent to $F_k^{e,b}$ are selected, with coordinates $X_{k+1}^{e,a}$, $X_k^{e,b}$, and $X_k^{e,c}$, as shown in Figure 5. The distance $d_E$ from point $F_{k+1}^{e,a}$ to the line $\{F_k^{e,b}, F_k^{e,c}\}$ is
$$d_E = \frac{\left|\left(X_{k+1}^{e,a}-X_k^{e,b}\right) \times \left(X_{k+1}^{e,a}-X_k^{e,c}\right)\right|}{\left|X_k^{e,b}-X_k^{e,c}\right|} \quad (8)$$

2.4.2. Planar Point Matching

A point $F_{k+1}^{p,a}$ in the planar feature point set $F_{k+1}^{p}$ at time $t_{k+1}$ is selected; then the point $F_k^{p,b}$ closest to $F_{k+1}^{p,a}$ at time $t_k$ is selected, the point $F_k^{p,c}$ closest to $F_k^{p,b}$ within the same vertical resolution point set is found, as well as the closest point $F_k^{p,d}$ in the scan line adjacent to $F_k^{p,b}$, with coordinates $X_{k+1}^{p,a}$, $X_k^{p,b}$, $X_k^{p,c}$, and $X_k^{p,d}$, as shown in Figure 6.
The distance $d_P$ from point $F_{k+1}^{p,a}$ to the plane $\{F_k^{p,b}, F_k^{p,c}, F_k^{p,d}\}$ is
$$d_P = \frac{\left|\left(X_{k+1}^{p,a}-X_k^{p,b}\right) \cdot \left(\left(X_k^{p,b}-X_k^{p,c}\right) \times \left(X_k^{p,b}-X_k^{p,d}\right)\right)\right|}{\left|\left(X_k^{p,b}-X_k^{p,c}\right) \times \left(X_k^{p,b}-X_k^{p,d}\right)\right|} \quad (9)$$
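Both residuals translate directly into code; the following sketch evaluates (8) and (9) with numpy (function names are illustrative):

```python
import numpy as np

def point_to_line(xa, xb, xc):
    """Eq. (8): distance from edge point xa to the line through xb and xc."""
    return (np.linalg.norm(np.cross(xa - xb, xa - xc))
            / np.linalg.norm(xb - xc))

def point_to_plane(xa, xb, xc, xd):
    """Eq. (9): distance from planar point xa to the plane through
    xb, xc, and xd."""
    n = np.cross(xb - xc, xb - xd)          # plane normal (unnormalized)
    return abs(np.dot(xa - xb, n)) / np.linalg.norm(n)
```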

2.5. Pose Estimation

After matching the edge and planar feature points, the point-to-line distance $d_E$ and point-to-plane distance $d_P$ are obtained. To obtain the optimal pose, the following cost function can be established by combining (8) and (9):
$$f\left(T_{k+1}\right) = \min\left(d_E + d_P\right) \quad (10)$$
The error function is established and optimized using the Levenberg-Marquardt (L-M) method:
$$\min \frac{1}{2}\left\| f\left(T_{k+1}^{L}\right) + J\left(T_{k+1}^{L}\right)^{T} \Delta T_{k+1}^{L} \right\|^2, \quad \text{s.t.} \ \left\| D\, \Delta T_{k+1}^{L} \right\| \leq \mu \quad (11)$$
where $\mu$ is the trust-region radius and $D$ is the coefficient matrix. The Lagrangian function is constructed as
$$\mathcal{L}\left(\Delta T_{k+1}^{L}, \lambda\right) = \frac{1}{2}\left\| f\left(T_{k+1}^{L}\right) + J\left(T_{k+1}^{L}\right)^{T} \Delta T_{k+1}^{L} \right\|^2 + \frac{\lambda}{2}\left( \left\| D\, \Delta T_{k+1}^{L} \right\|^2 - \mu \right) \quad (12)$$
where $\lambda$ is the Lagrange multiplier. Differentiating (12) and setting the derivative to zero gives
$$\left(J J^{T} + \lambda D^{T} D\right) \Delta T = -J f \quad (13)$$
Solving (13) for the increment yields
$$\Delta T = -\left(J J^{T} + \lambda D^{T} D\right)^{-1} J f \quad (14)$$
where the coefficient matrix $D$ can be approximated by $J J^{T}$, so with the initial Jacobian matrix $J$ known, the increment $\Delta T$ can be solved. Substituting into the iterative update gives
$$T_{k+1}^{L} \leftarrow T_{k+1}^{L} + \Delta T_{k+1}^{L} = T_{k+1}^{L} - \left(J J^{T} + \lambda D^{T} D\right)^{-1} J f \quad (15)$$
The two-step L-M optimization is iterated until convergence, yielding the pose transformation matrix $T_{k+1}^{L}$. Finally, the differential of $T_{k+1}^{L}$ is taken as a rough measurement of the longitudinal velocity $Z_k$.
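A schematic implementation of the damped update (13)-(15) is given below; the residual/Jacobian callback, the damping value, and the stopping rule are assumptions, and the text's approximation $D^T D \approx J J^T$ is applied.

```python
import numpy as np

def lm_step(f, J, lam):
    """One damped update per Eqs. (13)-(14); f: (m,) stacked d_E and d_P
    residuals, J: (n, m) Jacobian with one row per pose parameter."""
    H = J @ J.T                                   # Gauss-Newton term
    return -np.linalg.solve(H + lam * H, J @ f)   # with D^T D ~ J J^T

def solve_pose(residuals_and_jacobian, T0, lam=1e-2, iters=20, tol=1e-6):
    """Iterate Eq. (15) until the increment is small; T is a 6-vector
    pose parameterization (an assumption)."""
    T = np.asarray(T0, dtype=float).copy()
    for _ in range(iters):
        f, J = residuals_and_jacobian(T)
        dT = lm_step(f, J, lam)
        T += dT                                   # Eq. (15)
        if np.linalg.norm(dT) < tol:
            break
    return T
```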

3. Fusion Method of LiDAR and IMU

3.1. AUKF

The nonlinear system of vehicle motion is
$$\begin{cases} X_k = F X_{k-1} + Q_k \\ Z_k = H X_k + R_k \end{cases} \quad (16)$$
where X k and X k 1 represent the motion state variables at time k and k 1 , respectively, F represents the motion model transfer matrix, Q k represents process noise, Z k represents the observation at time k , H represents the observation model transfer matrix, and R k represents the observation noise.
Set the initial estimation value X ^ 0 and error covariance P 0 for the motion state:
$$\hat{X}_0 = E\left[X_0\right] \quad (17)$$
$$P_0 = E\left[\left(X_0-\hat{X}_0\right)\left(X_0-\hat{X}_0\right)^T\right] \quad (18)$$
At the same time, calculate the Sigma sampling points for the motion state:
$$X_{\sigma,k-1}^{i} = \begin{cases} \hat{X}_{k-1}, & i = 0 \\ \hat{X}_{k-1} + \left(\sqrt{(n+\lambda)}\, L_{k-1}\right)_i, & i = 1, 2, \dots, n \\ \hat{X}_{k-1} - \left(\sqrt{(n+\lambda)}\, L_{k-1}\right)_{i-n}, & i = n+1, \dots, 2n \end{cases} \quad (19)$$
where $n$ is the dimension of $X$, $\lambda$ is the scaling factor, taken as $\lambda = 1$, and $L_{k-1} L_{k-1}^T = P_{k-1}$.
Sigma sampling points are mapped through the motion model transfer matrix F :
$$X_{\sigma,k|k-1}^{i} = F X_{\sigma,k-1}^{i} \quad (20)$$
Then, perform weighted calculations to obtain the prediction and error covariance matrix of the motion state variables at time k :
$$\hat{X}_k^- = \sum_{i=0}^{2n} w_i X_{\sigma,k|k-1}^{i} \quad (21)$$
$$P_k^- = \sum_{i=0}^{2n} w_i \left(X_{\sigma,k|k-1}^{i}-\hat{X}_k^-\right)\left(X_{\sigma,k|k-1}^{i}-\hat{X}_k^-\right)^T + Q_k \quad (22)$$
where the weights $w_i$ are
$$w_i = \begin{cases} \dfrac{\lambda}{n+\lambda}, & i = 0 \\ \dfrac{1}{2(n+\lambda)}, & i = 1, 2, \dots, 2n \end{cases} \quad (23)$$
Calculate the Sigma sampling points for the observed value as
$$Y_{\sigma,k}^{i} = \begin{cases} \hat{X}_k^-, & i = 0 \\ \hat{X}_k^- + \left(\sqrt{(n+\lambda)}\, L_k\right)_i, & i = 1, 2, \dots, n \\ \hat{X}_k^- - \left(\sqrt{(n+\lambda)}\, L_k\right)_{i-n}, & i = n+1, \dots, 2n \end{cases} \quad (24)$$
where $L_k L_k^T = P_k^-$. These sigma points are mapped through the observation model transfer matrix $H$:
$$Y_{\sigma,k|k-1}^{i} = H Y_{\sigma,k}^{i} \quad (25)$$
Then, perform weighted calculations to obtain the prediction and error covariance matrix of the observed value at time k , as well as the cross-covariance matrix of the observation prediction error:
$$\hat{Y} = \sum_{i=0}^{2n} w_i Y_{\sigma,k|k-1}^{i} \quad (26)$$
$$P_Y = \sum_{i=0}^{2n} w_i \left(Y_{\sigma,k|k-1}^{i}-\hat{Y}\right)\left(Y_{\sigma,k|k-1}^{i}-\hat{Y}\right)^T + R_k \quad (27)$$
$$P_{XY} = \sum_{i=0}^{2n} w_i \left(X_{\sigma,k|k-1}^{i}-\hat{X}_k^-\right)\left(Y_{\sigma,k|k-1}^{i}-\hat{Y}\right)^T \quad (28)$$
The Kalman gain can be calculated as
$$K = P_{XY} P_Y^{-1} \quad (29)$$
Finally, the motion state at time k can be obtained as
$$\hat{X}_k = \hat{X}_k^- + K\left(Z_k - \hat{Y}\right) \quad (30)$$
$$P_k = P_k^- - K P_Y K^T \quad (31)$$
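To make the cycle above concrete, a generic numpy sketch of Eqs. (19)-(31) follows (not the authors' code); it assumes the linear transfer matrices $F$ and $H$ of (16) and takes $\lambda = 1$ as in the text.

```python
import numpy as np

def sigma_points(x, P, lam=1.0):
    """Eqs. (19)/(24): 2n+1 sigma points from a Cholesky factor of P."""
    n = x.size
    S = np.sqrt(n + lam) * np.linalg.cholesky(P)
    return np.vstack([x, x + S.T, x - S.T])   # one sigma point per row

def ukf_step(x, P, z, F, H, Q, R, lam=1.0):
    """One predict/update cycle of Eqs. (19)-(31); returns the posterior
    state, its covariance, and the innovation d of Eq. (32)."""
    n = x.size
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w[0] = lam / (n + lam)                    # weights of Eq. (23)
    X = sigma_points(x, P, lam) @ F.T         # Eq. (20)
    x_pred = w @ X                            # Eq. (21)
    dX = X - x_pred
    P_pred = dX.T @ (w[:, None] * dX) + Q     # Eq. (22)
    Xs = sigma_points(x_pred, P_pred, lam)    # re-drawn points, Eq. (24)
    Y = Xs @ H.T                              # Eq. (25)
    y_pred = w @ Y                            # Eq. (26)
    dY = Y - y_pred
    P_y = dY.T @ (w[:, None] * dY) + R        # Eq. (27)
    P_xy = (Xs - x_pred).T @ (w[:, None] * dY)  # Eq. (28)
    K = P_xy @ np.linalg.inv(P_y)             # Eq. (29)
    d = z - y_pred                            # innovation, Eq. (32)
    return x_pred + K @ d, P_pred - K @ P_y @ K.T, d   # Eqs. (30)-(31)
```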
Modeling error is an important component of process noise. By adaptively adjusting the covariance matrix of process noise based on the difference between observed and predicted values, estimation error can be reduced and filtering divergence can be suppressed. So, the difference between the observed and predicted values is defined as
$$d_k = Z_k - \hat{Y} \quad (32)$$
where $d_k$ is affected by modeling errors and initial conditions, and the noise covariance matrices can be estimated from $d_k$. The theoretical covariance matrix of $d_k$ is
$$C_k = E\left[d_k d_k^T\right] = H P_k^- H^T + R_k \quad (33)$$
Owing to modeling errors and observation noise, the actual covariance matrix of $d_k$ often deviates from the theoretical value. The actual covariance matrix of $d_k$ is therefore computed over a sliding window as
$$\hat{C}_k = \frac{1}{M} \sum_{i=k-M+1}^{k} d_i d_i^T \quad (34)$$
where M is the length of the sliding window.
$R_k$ is adjusted by comparing the actual covariance matrix $\hat{C}_k$ of $d_k$ with the theoretical covariance matrix $C_k$. When $\hat{C}_k > C_k$, $R_k$ is enlarged; when $\hat{C}_k < C_k$, $R_k$ should theoretically be reduced, but to avoid filter divergence, $R_k$ is kept constant. The adjustment factor for $R_k$ is defined as
$$\alpha_k = \max\left(1,\ \frac{\operatorname{trace}\left(\hat{C}_k\right)}{\operatorname{trace}\left(C_k\right)}\right) \quad (35)$$
In order to improve the estimation accuracy of the Kalman filter, the observation noise covariance matrix and the process noise covariance matrix are generally adjusted in the opposite direction [25]. Therefore, the adaptive adjustment method for the covariance matrix of state estimation error is
$$R_k = \alpha_k R_k \quad (36)$$
$$Q_k = \frac{Q_k}{\alpha_k} \quad (37)$$
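A small helper class can carry out this adaptation alongside the filter; the sliding-window length is an assumption, everything else follows Eqs. (32)-(37).

```python
import numpy as np
from collections import deque

class NoiseAdapter:
    """Adaptive adjustment of R and Q per Eqs. (32)-(37)."""

    def __init__(self, window=20):            # window length M: assumption
        self.buf = deque(maxlen=window)

    def update(self, d, H, P_pred, R, Q):
        """d: innovation of Eq. (32); P_pred: prior covariance P_k^-."""
        self.buf.append(np.atleast_1d(d))
        C_theory = H @ P_pred @ H.T + R                                 # Eq. (33)
        C_actual = np.mean([np.outer(v, v) for v in self.buf], axis=0)  # Eq. (34)
        alpha = max(1.0, np.trace(C_actual) / np.trace(C_theory))       # Eq. (35)
        return alpha * R, Q / alpha           # Eqs. (36)-(37)
```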

3.2. Longitudinal Acceleration Prediction

Because the prediction step (20) requires the longitudinal acceleration, the acceleration must be predicted; however, a vehicle's acceleration depends on driver habits and current road conditions, making it difficult to describe in mathematical language. Therefore, the ARIMA model from time-series analysis is used to predict the longitudinal acceleration of the vehicle.
The essence of the ARIMA model is to achieve short-term prediction from historical observations over $p$ sampling intervals and historical random disturbances over $q$ sampling intervals. Its mathematical expression is
$$x_k = \phi_0 + \phi_1 x_{k-1} + \phi_2 x_{k-2} + \dots + \phi_p x_{k-p} + \varepsilon_k - \theta_1 \varepsilon_{k-1} - \dots - \theta_q \varepsilon_{k-q} \quad (38)$$
where $p$ is the order of the AR model; $q$ is the order of the MA model; $x_k$ is the predicted value; $x_{k-1}, x_{k-2}, \dots, x_{k-p}$ are historical observations; $\phi_0, \phi_1, \phi_2, \dots, \phi_p$ are the AR model coefficients; $\varepsilon_k, \varepsilon_{k-1}, \dots, \varepsilon_{k-q}$ are historical noise disturbances with $\varepsilon \sim N\left(0, \sigma_\varepsilon^2\right)$; and $\theta_1, \theta_2, \dots, \theta_q$ are the MA model coefficients.
Ten consecutive longitudinal acceleration samples were used as a set of historical data, and the acceleration at the next moment was predicted with ARIMA. The predicted and true values of the longitudinal acceleration are shown in Figure 7, indicating that ARIMA has a certain predictive ability for longitudinal acceleration.
From (38), the predicted longitudinal acceleration at time $k$ can be obtained. However, (20) requires a mapping from time $k-1$ to time $k$. Therefore, a simple processing method is used:
$$F_a = \frac{a_k}{a_{k-1}} \quad (39)$$
where $F_a$ represents the transfer matrix. Then, (20) becomes
$$X_{\sigma,k|k-1}^{i} = \begin{bmatrix} F & F_a \end{bmatrix} \begin{bmatrix} X_{\sigma,k-1}^{i} \\ 1 \end{bmatrix} \quad (40)$$
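As a sketch of this step (not the paper's implementation), a generic ARIMA library can produce the one-step forecast and the augmented transfer of (40) can be applied literally; the ARIMA order and the way $F_a$ enters each state component are assumptions, since the text does not fix them.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def predict_acceleration(a_hist, order=(2, 0, 1)):
    """One-step-ahead forecast from the last ten acceleration samples,
    mirroring Eq. (38); the (p, d, q) order is an illustrative choice."""
    model = ARIMA(np.asarray(a_hist[-10:], dtype=float), order=order)
    return float(model.fit().forecast(1)[0])

def augmented_predict(x_sigma, F, a_prev, a_pred):
    """Eqs. (39)-(40): propagate one sigma point with [F  F_a]."""
    F_a = a_pred / a_prev                     # Eq. (39)
    F_aug = np.hstack([F, np.full((F.shape[0], 1), F_a)])
    return F_aug @ np.append(x_sigma, 1.0)    # Eq. (40)
```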

4. Simulation and Experimental Results

To verify the effectiveness of the proposed longitudinal velocity estimation method, we conducted medium- to high-velocity tests (0–30 m/s) in Carla simulation software and low-velocity tests (0–5 m/s) on campus.

4.1. Carla Simulation Experiment

The experimental equipment used is shown in Table 1. With this hardware, the Carla simulation data may occasionally drift, as discussed in the experimental results.
The driving trajectory of simulation experiment 1 is shown in Figure 8, and the vehicle velocity varies between 0 and 15 m/s. The longitudinal velocities obtained by the LiDAR-IMU odometry, AUKF, and TSA-AUKF are shown in Figure 9, and the errors with respect to the longitudinal velocity reference value in Carla are shown in Figure 10. It can be seen that TSA-AUKF has higher estimation accuracy. During 19–22 s, owing to drift in the IMU data, only the LiDAR-IMU odometry obtained a reasonable longitudinal velocity estimate and the AUKF failed completely, whereas TSA-AUKF, whose prediction is based on IMU historical data, was largely corrected.
The driving trajectory of simulation experiment 2 is shown in Figure 11, and the vehicle velocity varies between 0 and 30 m/s. Figure 12 shows the longitudinal velocity obtained by the different methods, and the errors with respect to the longitudinal velocity reference value in Carla are shown in Figure 13. TSA-AUKF clearly retains better accuracy. The vehicle passed through a tunnel during 28–37 s, degrading the performance of the LiDAR-IMU odometry; the predictions from the time-series analysis could be used to correct the longitudinal velocity estimates. However, owing to the lag of time-series analysis, TSA-AUKF showed a certain deviation during 31–37 s.
The root mean square errors (RMSEs) of the LiDAR-IMU odometry, AUKF, and TSA-AUKF estimates are shown in Table 2. In simulation experiment 1, compared with the LiDAR-IMU odometry and AUKF, TSA-AUKF achieves approximately 37% and 55% performance improvements; in simulation experiment 2, the improvements are approximately 62% and 51%, respectively.

4.2. Real-Vehicle Experiment

Figure 14 shows the experimental platform. The platform's sensors provide the longitudinal reference velocity at 10 Hz, the LiDAR provides point clouds at 10 Hz, the IMU provides acceleration and angular velocity at 100 Hz, and the GPS provides positioning at 5 Hz. The LiDAR, IMU, and GPS are installed at the position marked by the red box in Figure 14. A Velodyne VLP-16 is used as the LiDAR sensor, and the IMU specifications are listed in Table 3.
The driving trajectory is shown in Figure 15, and the vehicle velocity varies between 0 and 5 m/s. In this experiment, TSA-AUKF was compared with the traditional GPS-IMU method. Figure 16 shows the longitudinal velocity obtained by the different methods, and the errors with respect to the reference longitudinal velocity are shown in Figure 17. It is clear that TSA-AUKF provides stable longitudinal velocity estimates.
The RMSEs of the LiDAR-IMU odometry, GPS-IMU, and TSA-AUKF estimates are shown in Table 4. Compared with the LiDAR-IMU odometry and GPS-IMU, TSA-AUKF achieves approximately 59% and 28% performance improvements.

5. Conclusions and Future Work

This paper proposes a longitudinal velocity estimation method for driverless vehicles. By fusing LiDAR and IMU information, the method avoids relying on tires to represent the longitudinal velocity and also sidesteps the problem of GPS signal loss in certain areas. An adaptive noise estimator tracks the statistical characteristics of the system noise, reducing model errors and suppressing filtering divergence, thereby improving the robustness and filtering accuracy of the algorithm; introducing time-series analysis into the motion equation further improves the estimation accuracy. The feasibility of the estimation method under medium- to high-velocity conditions was verified in simulation: for a velocity variation of 0–15 m/s, the RMSE of TSA-AUKF is 1.149 m/s; for 0–30 m/s, it is 1.573 m/s. In addition, the data drift generated by the simulation software further demonstrates the robustness of the proposed algorithm. In the real-vehicle experiment, TSA-AUKF achieved a 28% performance improvement over the existing GPS-IMU longitudinal velocity estimation method. The results show that the proposed method can provide stable longitudinal velocity estimates for driverless vehicles. In future work, experiments with more realistic vehicles on standard roads will be conducted to further improve the algorithm's performance.

Author Contributions

Conceptualization, C.Z. and Z.G.; investigation, Z.G.; methodology, C.Z. and Z.G.; project administration, M.D.; resources, M.D.; software, Z.G.; supervision, M.D.; validation, C.Z.; writing—original draft, Z.G.; writing—review and editing, Z.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China: Research on the Integrated Control Method of the Lateral Stability of Distributed Drive Mining Electric Vehicles (51974229), Shaanxi Innovative Talents Promotion Plan—Science and Technology Innovation Team (2021TD-27).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, H.W.; Peng, J.K.; Xiong, R.; Fan, H. An acceleration slip regulation strategy for four-wheel drive electric vehicles based on sliding mode control. Energies 2014, 7, 3748–3763.
  2. Zhao, B.; Xu, N.; Chen, H.; Guo, K.; Huang, Y. Stability control of electric vehicles with in-wheel motors by considering tire slip energy. Mech. Syst. Signal Process. 2019, 118, 340–359.
  3. Arnold, E.; Al-Jarrah, O.; Dianati, M.; Fallah, S.; Oxtoby, D.; Mouzakitis, A. A survey on 3D object detection methods for autonomous driving applications. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3782–3795.
  4. Chen, W.; Tan, D.; Zhao, L. Vehicle sideslip angle and road friction estimation using online gradient descent algorithm. IEEE Trans. Veh. Technol. 2018, 67, 11475–11485.
  5. Napolitano Dell'Annunziata, G.; Arricale, V.M.; Farroni, F.; Genovese, A.; Pasquino, N.; Tranquillo, G. Estimation of Vehicle Longitudinal Velocity with Artificial Neural Network. Sensors 2022, 22, 9516.
  6. Karlsson, R.; Hendeby, G. Speed Estimation From Vibrations Using a Deep Learning CNN Approach. IEEE Sens. Lett. 2021, 5, 1–4.
  7. Zhang, D.; Song, Q.; Wang, G.; Liu, C. A Novel Longitudinal Speed Estimator for Four-Wheel Slip in Snowy Conditions. Appl. Sci. 2021, 11, 2809.
  8. Xin, X.S.; Chen, J.X.; Zou, J.X. Vehicle state estimation using cubature Kalman filter. In Proceedings of the 17th IEEE International Conference on Computational Science and Engineering, Chengdu, China, 19–21 December 2014; pp. 44–48.
  9. Antonov, S.; Fehn, A.; Kugi, A. Unscented Kalman filter for vehicle state estimation. Veh. Syst. Dyn. 2011, 49, 1497–1520.
  10. Li, J.; Zhang, J.X. Vehicle sideslip angle estimation based on hybrid Kalman filter. Math. Probl. Eng. 2016, 3269142, 1–10.
  11. Xiong, L.; Xia, X.; Lu, Y.; Liu, W.; Gao, L.; Song, S.; Han, Y.; Yu, Z. IMU-Based Automated Vehicle Slip Angle and Attitude Estimation Aided by Vehicle Dynamics. Sensors 2019, 19, 1930.
  12. Wang, Y.; Li, Y.; Zhao, Z. State Parameter Estimation of Intelligent Vehicles Based on an Adaptive Unscented Kalman Filter. Electronics 2023, 12, 1500.
  13. van Zanten, A.T. Bosch ESP system: 5 years of experience. SAE Trans. 2000, 109, 428–436.
  14. Du, H.P.; Li, W.H. Kinematics-based parameter-varying observer design for sideslip angle estimation. In Proceedings of the 2014 International Conference on Mechatronics and Control, Jinzhou, China, 3–5 July 2014; pp. 2042–2047.
  15. Madhusudhanan, A.K.; Corno, M.; Holweg, E. Vehicle sideslip estimation using tyre force measurements. In Proceedings of the 23rd Mediterranean Conference on Control and Automation, Torremolinos, Spain, 16–19 June 2015; pp. 88–93.
  16. Rezaeian, A.; Khajepour, A.; Melek, W.; Chen, S.K.; Moshchuk, N. Simultaneous Vehicle Real-Time Longitudinal and Lateral Velocity Estimation. IEEE Trans. Veh. Technol. 2017, 66, 1950–1962.
  17. Pi, D.W.; Chen, N.; Wang, J.X.; Zhang, B.J. Design and evaluation of sideslip angle observer for vehicle stability control. Int. J. Automot. Technol. 2011, 12, 391–399.
  18. Li, X.; Chan, C.Y.; Wang, Y. A reliable fusion methodology for simultaneous estimation of vehicle sideslip and yaw angles. IEEE Trans. Veh. Technol. 2016, 65, 4440–4458.
  19. Farrell, J.A.; Tan, H.S.; Yang, Y. Carrier phase GPS-aided INS-based vehicle lateral control. J. Dyn. Syst. Meas. Control 2003, 125, 339–353.
  20. Yoon, J.H.; Peng, H. A cost-effective sideslip estimation method using velocity measurements from two GPS receivers. IEEE Trans. Veh. Technol. 2014, 63, 2589–2599.
  21. Song, R.; Fang, Y. Vehicle state estimation for INS/GPS aided by sensors fusion and SCKF-based algorithm. Mech. Syst. Signal Process. 2021, 107315.
  22. Gao, L.; Ma, F.; Jin, C. A Model-Based Method for Estimating the Attitude of Underground Articulated Vehicles. Sensors 2019, 19, 5245.
  23. Talebi, S.P.; Godsill, S.J.; Mandic, D.P. Filtering Structures for α-Stable Systems. IEEE Control Syst. Lett. 2023, 7, 553–558.
  24. Lv, J.; He, H.; Liu, W.; Chen, Y.; Sun, F. Vehicle Velocity Estimation Fusion with Kinematic Integral and Empirical Correction on Multi-Timescales. Energies 2019, 12, 1242.
  25. Wang, X.; You, Z.; Zhao, K. Inertial/celestial-based fuzzy adaptive unscented Kalman filter with Covariance Intersection algorithm for satellite attitude determination. Aerosp. Sci. Technol. 2016, 48, 214–222.
Figure 1. The structure of the longitudinal velocity estimation method.
Figure 2. The principle of motion distortion generation: (a) LiDAR scanning starting point; (b) LiDAR motion process; (c) point cloud distortion caused by motion.
Figure 3. The comparison of motion distortion removal: (a) the point cloud of the same laser beam in the red box cannot be aligned before removing motion distortion; (b) the point cloud of the same laser beam is aligned after distortion removal.
Figure 4. Flatness-based clustering analysis.
Figure 5. Edge point feature matching: (a) edge feature point set at time t_k; (b) edge feature point set at time t_{k+1}.
Figure 6. Planar point feature matching: (a) planar feature point set at time t_k; (b) planar feature point set at time t_{k+1}.
Figure 7. Comparison between predicted and true values of longitudinal acceleration.
Figure 8. Simulation experiment 1 trajectory.
Figure 9. Longitudinal velocity curves obtained by different methods in simulation experiment 1.
Figure 10. Error curves in simulation experiment 1.
Figure 11. Simulation experiment 2 trajectory.
Figure 12. Longitudinal velocity curves obtained by different methods in simulation experiment 2.
Figure 13. Error curves in simulation experiment 2.
Figure 14. Experimental platform.
Figure 15. Real-vehicle experimental trajectory.
Figure 16. Longitudinal velocity curves obtained by different methods in the real-vehicle experiment.
Figure 17. Error curves in the real-vehicle experiment.
Table 1. Information about experimental equipment.

Equipment | Model
CPU       | i7-7700HQ
GPU       | GTX 1050Ti

Table 2. RMSE of estimation error (m/s).

Scenario (Velocity) | LiDAR-IMU Odometry | AUKF  | TSA-AUKF
0–15 m/s            | 1.828              | 2.597 | 1.149
0–30 m/s            | 4.708              | 3.201 | 1.573

Table 3. Datasheet of LPMS-IG1.

Sensor         | Quantity             | Performance
Gyroscopes     | Bias stability       | 4 deg/hr
               | Angular random walk  | 0.12 deg/hr
Accelerometers | Bias stability       | 25 ug
               | Velocity random walk | 0.045 m/s/hr

hr = hour; s = second; g = 9.8 m/s²; deg = degree; u = 1.0 × 10⁻⁶; m = meter.

Table 4. RMSE of estimation error (m/s).

LiDAR-IMU Odometry | GPS-IMU | TSA-AUKF
0.279              | 0.158   | 0.113