Article

The Joint Adaptive Kalman Filter (JAKF) for Vehicle Motion State Estimation

1 College of Computer Science and Technology, Jilin University, Changchun 130012, China
2 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
3 State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130012, China
4 Department of Computer Science and Engineering, Hanyang University, Ansan 426791, Korea
* Author to whom correspondence should be addressed.
Sensors 2016, 16(7), 1103; https://doi.org/10.3390/s16071103
Submission received: 6 May 2016 / Revised: 6 July 2016 / Accepted: 11 July 2016 / Published: 16 July 2016
(This article belongs to the Special Issue Advances in Multi-Sensor Information Fusion: Theory and Applications)

Abstract:
This paper proposes a multi-sensor Joint Adaptive Kalman Filter (JAKF) that extends innovation-based adaptive estimation (IAE) to estimate the motion state of the moving vehicle ahead. JAKF uses Lidar and Radar data as the sources of two local filters, each of which adaptively adjusts its measurement noise variance-covariance (V-C) matrix R and its system noise V-C matrix Q. The global filter then uses R to calculate the information allocation factor β for data fusion. Finally, the global filter completes the optimal data fusion and feeds the result back to the local filters to improve their measurement accuracy. Extensive simulation and experimental results show that the JAKF has better adaptive ability and fault tolerance. JAKF bridges the accuracy gap between different sensors to improve the overall filtering effectiveness. If any sensor breaks down, the filtered results of JAKF can still maintain a stable convergence rate. Moreover, the JAKF outperforms the conventional Kalman filter (CKF) and the innovation-based adaptive Kalman filter (IAKF) with respect to the accuracy of displacement, velocity, and acceleration, respectively.

1. Introduction

The motion state estimation of the forward-moving vehicle is a prerequisite for automatic driving vehicles, and its main outputs are the relative transverse and longitudinal velocity and acceleration, and the relative location. Currently, Lidar and Radar are commonly used as distance measurement sensors in automatic driving systems, as was fully demonstrated by many teams at the DARPA Urban Challenge [1,2,3]. Lidar has high accuracy and a wide measuring range, and can immediately obtain a target's displacement, from which velocity, acceleration, and other states can be simply calculated. However, laser measurements easily suffer from attenuation in the air, and the perception accuracy declines sharply under serious noise, so Lidar fails to work normally in bad weather [4,5,6]. Radar is another common distance measurement sensor that can easily obtain a target's displacement, velocity, and acceleration. Despite a lower accuracy than Lidar, Radar is better at penetrating smoke and dust, and is thus robust against bad weather conditions. Hence, an obvious step is to fuse Lidar and Radar sensors in order to highlight their respective advantages and compensate for their mutual shortcomings. By exploiting the associated properties of the different frequency spectra, these sensors become excellent candidates for data processing and fusion systems [7].
The Kalman filter is widely used to estimate the motion state of a dynamic target. However, the CKF [8] requires the measurement noise V-C matrix R and the system noise V-C matrix Q to be known precisely in order to achieve the best filtering performance; in fact, R and Q fluctuate sensitively with the varying accuracy of the sensors and the data sampling frequency. When encountering bad weather and/or frequently changing environments, the CKF can even output a divergent filtered result.
To address the above issues, we propose a JAKF that employs Lidar and Radar to estimate the motion state of vehicles. It is a multi-layer filter extending the Federated Fusion-Reset (FR) mode [9], which splits a complete filter into two local filters and one global filter. JAKF treats the Lidar and Radar sensors as the sources of the Local Filters (LFs), which work concurrently with IAE [10]. First, the data collected by the Lidar and Radar sensors are preprocessed by coordinate transformation and time synchronization. Then, the preprocessed data from the Lidar and Radar sensors are input into the two LFs, respectively. Finally, the locally filtered results, along with the corresponding system noise Q, measurement noise R, and filtered-result V-C matrix P, are delivered to the Global Filter (GF). During joint filtering, the information allocation factor β is calculated from R in each LF. β adjusts the weights of the state vectors X and P of the LFs in the optimal fusion result. When the performance of Lidar worsens, its R increases accordingly, β is reduced, and thus the noise pollution posed by this sensor is mitigated. The JAKF combines the features of Lidar and Radar, so it is a stable and well-adaptive method for estimating the motion state of the vehicle ahead.
In this paper, an adaptive decision is made by balancing the performance of the Lidar and Radar. The main contributions can be summarized as follows: (i) we propose a precise and robust method for estimating the vehicle motion state by extending the CKF; (ii) we conduct extensive simulations and experiments to compare the accuracy and stability of CKF, IAKF, and JAKF; and (iii) we apply a multi-sensor system to vehicle motion state estimation and analyze the pros and cons of Lidar and Radar.
The remainder of this paper is structured as follows: Section 2 overviews motion state estimation methods in the recent literature; Section 3 introduces the proposed JAKF in detail, including the coordinate transformation and time synchronization in particular; Section 4 presents our extensive simulation and experimental results verifying the accuracy and stability of the JAKF; finally, Section 5 draws conclusions and suggests future work.

2. Related Work

The key to accurately estimating the motion state is extracting useful information from a large amount of sensor data. A good model of the target will undoubtedly facilitate this information extraction to a great extent [11]. Over the past three decades, many kinds of estimation models have been proposed, and the Kalman filter in its different forms has been widely used in these models, e.g., the extended Kalman filter, the unscented Kalman filter, and other nonlinear filters based on the conventional Kalman filter. According to the number of installed sensors, these methods can be divided into single-sensor filters and multi-sensor filters.
Single-sensor filters are efficient and usually rely on high-precision sensors to improve the accuracy of the results. There are many common vehicle-mounted sensors, e.g., INS, Lidar, Radar, camera, and accelerometer. Lee et al. [12] proposed an interacting multiple model (IMM) estimator based on fuzzy weighted input estimation for tracking a maneuvering target. Fuzzy logic theory is utilized to construct the fuzzy weighting factor that improves the input estimation method, which in turn computes the unknown acceleration input for the modified Singer acceleration model. The modified Singer acceleration model combined with the fuzzy weighted input estimation method can track a maneuvering target effectively. Hollinger et al. [13] proposed ground-vehicle-based LADAR for standoff detection of roadside hazards, and discussed the detection of roadside hazards partially concealed by light-to-medium vegetation. Hong et al. [14] proposed a car test for the estimation of GPS/INS alignment errors, and presented car-test results on the estimation of alignment errors in the integration of a low-grade inertial measurement unit (IMU) with accurate GPS measurement systems. An iterative scheme was used to improve the estimation of the alignment errors during post-processing, and was shown to be useful when the test car did not have sufficient changes in motion due to limitations in its path. Xian et al. [15] proposed a robust innovation-based adaptive Kalman filter (IAE-AKF) for INS/GPS land navigation, which evaluates the innovation sequence with a Chi-squared test and revises abnormal innovation vectors. The algorithm possesses high accuracy and stability, and can prevent the filtering from diverging even in a rigorous GPS measurement environment.
In practice, a single-sensor filter generally fails to work as excellently as claimed due to its performance limitations; if one filter can be calibrated by another, the eventual filtered results can be more accurate. Multi-sensor filters aim to solve this issue by integrating the respective strengths of different sensors: they fuse the results of different sensors into an optimal filtered result at the cost of efficiency in order to improve fault tolerance and robustness. Han et al. [16] proposed maneuvering-obstacle motion state estimation for intelligent vehicles using an adaptive Kalman filter based on the current statistical model, and developed such a system using a radar and GPS/INS. They introduced the current statistical (CS) model from the military field, which uses a modified Rayleigh distribution to describe acceleration, and used an adaptive Kalman filter based on the CS model to estimate the motion state of the target. Mirzaei et al. [17] proposed a Kalman filter-based algorithm for IMU-camera calibration that requires no special hardware (such as a spin table or 3-D laser scanner) other than a calibration target; furthermore, they employed the observability rank criterion based on Lie derivatives and proved that the nonlinear system describing the IMU-camera calibration process is observable. Sarunic et al. [18] proposed hierarchical model predictive control of UAVs performing multitarget-multisensor tracking, which enables a computationally feasible, suboptimal solution that takes into account both short-term and long-term goals. Hostettler et al. [19] proposed joint vehicle trajectory and model parameter estimation using roadside sensors, and showed how a particle smoother-based system identification method can be applied to estimating the trajectories of road vehicles; as sensors, they adopted a combination of an accelerometer measuring road surface vibrations and a magnetometer measuring magnetic disturbances, mounted on the side of the road.
There are also some efforts toward making the structure of multi-sensor filters more stable. Naets et al. [20] proposed an online coupled state/input/parameter estimation approach for structural dynamics that uses a parametric model-order reduction technique; the reduced model is coupled to an Extended Kalman Filter (EKF) with augmented states for the unknown inputs and parameters, which leads to an efficient estimation framework for structural dynamics problems. Chatzi et al. [21] compared the unscented Kalman filter and particle filter methods for nonlinear structural system identification with non-collocated heterogeneous sensing, exploring the use of heterogeneous, non-collocated measurements for nonlinear structural system identification. They also proposed the online correction of drift in structural identification using artificial white noise observations and an unscented Kalman filter [22]; the method relies on introducing artificial white noise (WN) observations into the filter equations, which is shown to correct the drift issue online and thus yields highly accurate motion data. The above literature focuses on fusing data from multiple sensors of the same type to improve accuracy, so that the final accuracy is higher than that of any single sensor. In this paper, the data collected by different types of sensors, i.e., Lidar and Radar, are fused in the FR mode, and both the accuracy and the system stability are improved together. When either type of equipped sensor suffers an outage, the other can still optimize the result through feedback. In addition, the proposed method synchronizes the different rates of the sensors by Lagrange three-point interpolation to realize a multi-rate Kalman filter.
In short, the current main efforts are directed at improving the accuracy of the filtered results. However, when suffering from continuous high noise, none of the related sensors keeps working as expected. Any sensor losing effectiveness undoubtedly decreases the overall performance, so an adaptive control decision should be made as a tradeoff between accuracy and stability. Considering these two factors, the proposed JAKF employs the high-accuracy Lidar and Radar as the sources of input data, and synthesizes the respective advantages of IAE and FR to realize multi-sensor adaptation. In addition, the Chi-square hypothesis test and correction decrease the measurement error, and the coordinate transformation and time synchronization decrease the fusion error. Therefore, JAKF is more accurate and stable than the conventional federated Kalman filter and the single-sensor adaptive Kalman filter using Lidar or Radar.

3. Method

This section first briefly introduces the structure of JAKF, then explains the local and global filters in detail, and finally presents two preprocessing operations: coordinate transformation and time synchronization.

3.1. The Structure of JAKF

JAKF is a two-step processing method for partitioned estimation. Figure 1 shows the structure and working process of JAKF: (1) the sensors send the currently collected data to the LFs; (2) each LF fuses the measured data with the feedback data from the GF, and then updates the time and filtered information to the latest values; (3) the GF fuses all the corrected data into a new global state estimation, which it outputs and meanwhile feeds back to the LFs.
In Figure 1, LFi takes the latest Zi, Qi, Pi, and Xi as input and then independently performs IAE based on this information. Zi is the measured data from Lidar or Radar, corrected by coordinate transformation and time synchronization; βi is the information allocation factor of LFi; Xg is the global optimal estimation; and Pg is the V-C matrix of Xg. LFi then sends Qi, Ri, Xi, and Pi to the GF, where Qi is the system noise V-C matrix of LFi, Ri is the measurement noise V-C matrix of LFi, Xi is the state estimate of LFi, and Pi is the V-C matrix of Xi. Finally, the GF calculates Qi, Xi, and Pi, feeds them back to LFi, and outputs the optimal result Xg and Pg. For i = 1 the data source of LFi is Lidar; for i = 2 it is Radar.

3.2. Local Kalman Filter

Since the vehicle's transverse and longitudinal velocity changes slowly during driving, the LF can use the linear discrete Kalman filter model, as expressed by [23]:
$$X_{t+1} = \Phi X_t + G W_t$$
$$Z_{t+1} = H X_{t+1} + V_{t+1}$$
where $X_{t+1}$ and $X_t$ are the state vector $X = [x_s\ x_v\ x_a\ y_s\ y_v\ y_a]^{\mathrm{T}}$ at times $t+1$ and $t$, respectively. Here $x_s$, $x_v$, and $x_a$ are the relative transverse displacement, velocity, and acceleration, and $y_s$, $y_v$, and $y_a$ are the relative longitudinal displacement, velocity, and acceleration. $\Phi$ is the state transition matrix:
$$\Phi = \begin{bmatrix} 1 & \Delta t & \Delta t^2/2 & 0 & 0 & 0 \\ 0 & 1 & \Delta t & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & \Delta t & \Delta t^2/2 \\ 0 & 0 & 0 & 0 & 1 & \Delta t \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$
where $\Delta t$ is the time interval of the filtered data, and $G = [\Delta t^3/6\ \ \Delta t^2/2\ \ \Delta t\ \ \Delta t^3/6\ \ \Delta t^2/2\ \ \Delta t]^{\mathrm{T}}$ is the coefficient matrix of the system noise. $W_t$ is the system noise vector at time $t$, $Z_{t+1}$ is the measurement vector at time $t+1$, $H$ is the measurement matrix, which is a six-order identity matrix, and $V_{t+1}$ is the measurement noise vector at time $t+1$. $W_t$ and $V_{t+1}$ are uncorrelated zero-mean white Gaussian noise sequences with covariances:
$$\mathrm{cov}[V_k, V_j] = R_k \delta_{kj}$$
$$\mathrm{cov}[W_k, W_j] = Q_k \delta_{kj}$$
$$\mathrm{cov}[W_k, V_j] = 0$$
$$\delta_{kj} = \begin{cases} 1, & k = j \\ 0, & k \neq j \end{cases}$$
where $\mathrm{cov}[\cdot,\cdot]$ denotes the covariance matrix and $\delta_{kj}$ is the Kronecker delta. For the above linear discrete system, the CKF working behaviors can be characterized by [23]:
$$X_{t+1/t} = \Phi X_{t/t} + G W_t$$
$$P_{t+1/t} = \Phi P_{t/t} \Phi^{\mathrm{T}} + Q_{t+1}$$
$$K_{t+1} = P_{t+1/t} H^{\mathrm{T}} \left[ H P_{t+1/t} H^{\mathrm{T}} + R_{t+1} \right]^{-1}$$
$$P_{t+1/t+1} = \left[ I - K_{t+1} H \right] P_{t+1/t}$$
$$X_{t+1/t+1} = X_{t+1/t} + K_{t+1} v_t$$
where $v_t$ is the innovation vector and $\hat{C}_{v_t}$ is the V-C matrix of $v_t$. To improve real-time performance and ergodicity, $\hat{C}_{v_t}$ is averaged over a sliding estimation window of size $N$, and $\hat{R}_{t+1}$ and $\hat{Q}_{t+1}$ are then calculated from $\hat{C}_{v_t}$. Thus $v_t$, $\hat{C}_{v_t}$, $\hat{R}_{t+1}$, and $\hat{Q}_{t+1}$ are defined by [10]:
$$v_t = Z_t - H X_{t+1/t}$$
$$\hat{C}_{v_t} = \frac{1}{N} \sum_{j=t-N+1}^{t} v_j v_j^{\mathrm{T}}$$
$$\hat{R}_{t+1} = \hat{C}_{v_t} - H P_{t+1/t} H^{\mathrm{T}}$$
$$\hat{Q}_{t+1} = K_t \hat{C}_{v_t} K_t^{\mathrm{T}} + P_{t/t} - \Phi P_{t-1/t-1} \Phi^{\mathrm{T}} \approx K_t \hat{C}_{v_t} K_t^{\mathrm{T}}$$
If the innovation vector $v_t$ of Lidar includes seriously divergent noise or errors, $\hat{C}_{v_t}$ diverges as well, so we use the Chi-square hypothesis test to mitigate this negative influence. The Chi-square hypothesis test estimates the deviation between the observed and theoretical values of the samples; this deviation determines the Chi-square value, and the higher the Chi-square value, the more abnormal the sample, so a Chi-square distribution threshold enables us to identify abnormal samples. Considering that $v_t$ is zero-mean white Gaussian noise, we obtain:
$$U_t(m) = v_t^2(m)\, C_{v_t}^{-1}(m,m) \sim \chi_\alpha^2(1)$$
where $v_t(m)$ is the $m$th element of the vector $v_t$, $C_{v_t}^{-1}(m,m)$ is the $m$th diagonal element of the inverse of $\hat{C}_{v_t}$, and $\chi_\alpha^2(1)$ represents a Chi-square distribution with one degree of freedom at significance level $\alpha$. If $U_t(m) < \chi_\alpha^2(1)$, the new innovation element is normal; otherwise, it is abnormal. To mitigate the influence of an abnormal innovation vector, the abnormal element is corrected by:
$$v_t(m) = \begin{cases} v_t(m), & 0 \le U_t(m) \le \chi_\alpha^2(1) \\ v_t(m)\, \exp\!\left( -\dfrac{U_t(m) - \chi_\alpha^2(1)}{\chi_\alpha^2(1)} \right), & U_t(m) > \chi_\alpha^2(1) \end{cases}, \qquad m = 1, \ldots, n$$
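To make the recursion above concrete, here is a minimal Python/NumPy sketch of one local-filter step, assuming the CA model of this section with H equal to the identity, the state prediction taken as $\Phi X$ (the noise term has zero mean), the theoretical innovation covariance used in the Chi-square test, and the simplified update $\hat{Q} \approx K \hat{C}_v K^{\mathrm{T}}$; the class and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

class LocalIAEFilter:
    """Sketch of one JAKF local filter: CA-model Kalman filter with
    innovation-based adaptive estimation (IAE) of R and Q, plus the
    Chi-square correction of abnormal innovation elements."""

    def __init__(self, dt, R0, Q0, N=30, chi2=7.879):
        f = np.array([[1.0, dt, 0.5 * dt ** 2],
                      [0.0, 1.0, dt],
                      [0.0, 0.0, 1.0]])
        self.Phi = np.kron(np.eye(2), f)   # 6x6 block-diagonal transition matrix
        self.H = np.eye(6)                 # measurement matrix (identity here)
        self.R, self.Q = R0.copy(), Q0.copy()   # 6x6 noise V-C matrices
        self.X, self.P = np.zeros(6), np.eye(6)
        self.N, self.chi2 = N, chi2        # window size, chi^2_0.005(1) threshold
        self.vs = []                       # innovation history for the window

    def step(self, z):
        # Time update.
        X_pred = self.Phi @ self.X
        P_pred = self.Phi @ self.P @ self.Phi.T + self.Q

        # Innovation, with Chi-square test and exponential shrinking of
        # abnormal elements.
        v = z - self.H @ X_pred
        S = self.H @ P_pred @ self.H.T + self.R
        for m in range(v.size):
            U = v[m] ** 2 / S[m, m]
            if U > self.chi2:
                v[m] *= np.exp(-(U - self.chi2) / self.chi2)

        # Measurement update.
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.X = X_pred + K @ v
        self.P = (np.eye(6) - K @ self.H) @ P_pred

        # IAE: windowed innovation covariance, then adaptive R and Q.
        self.vs.append(v.copy())
        window = self.vs[-self.N:]
        C_v = sum(np.outer(u, u) for u in window) / len(window)
        self.R = C_v - self.H @ P_pred @ self.H.T   # may need flooring to stay
        self.Q = K @ C_v @ K.T                      # positive definite in practice
        return self.X, self.P
```

The sketch omits numerical safeguards (e.g., keeping the adapted R positive definite) that a production implementation would need.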

3.3. Global Kalman Filter

To obtain the optimal estimation, the GF analyzes and integrates the estimated information of the LFs. The computational process of the GF is as follows:
Step 1. Optimal information fusion. The GF fuses the state estimations of the local filters into the optimal fusion state vector $\hat{X}_g$, and meanwhile fuses their covariance matrices into $P_g$, as expressed by [9]:
$$\hat{X}_g = P_g \sum_{i=1}^{2} P_i^{-1} \hat{X}_i$$
$$P_g = \left( \sum_{i=1}^{2} P_i^{-1} \right)^{-1}$$
Step 2. Calculating the information allocation factor. The information allocation factors β1 and β2 are used to adjust Qi, Pi, and the weights of $\hat{X}_1$ and $\hat{X}_2$ in the optimal fusion state $\hat{X}_g$. β1 is calculated from tr(R1) and tr(R2), the traces of R1 and R2, respectively. R truly reflects the performance of the current filter: Table 1 lists the relation between R and the accuracy of Lidar. When the Signal-to-Noise Ratio (SNR) of Lidar rises from 80 dB to 87 dB, the value of R1 decreases from 0.0021 to 0.0004, and accordingly the measurement error decreases from 0.0362 m to 0.0164 m, implying a directly proportional relationship between the two quantities. If a deteriorating environment makes the accuracy of Lidar drop beyond a certain threshold, e.g., tr(R1) ≥ 20 tr(R2), we choose β1 << 1, so that the output of the GF approximates the output obtained with only the Radar available. Generally speaking, Lidar is almost ten times as accurate as Radar, so when tr(R1) ≥ 20 tr(R2), the accuracy of Radar is far better than that of Lidar, i.e., the Lidar has already lost effectiveness and produces divergent results. Therefore, β1 is defined by:
$$\beta_1 = \begin{cases} 0.01, & \mathrm{tr}(R_1) \ge 20\,\mathrm{tr}(R_2) \\ \dfrac{1/\mathrm{tr}(R_1)}{1/\mathrm{tr}(R_1) + 1/\mathrm{tr}(R_2)}, & \mathrm{tr}(R_1) < 20\,\mathrm{tr}(R_2) \end{cases}$$
According to the principle of information distribution conservation, β1 + β2 = 1 [9], so β2 is expressed by:
$$\beta_2 = 1 - \beta_1$$
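As an illustration with made-up numbers (not taken from the experiments): if tr(R1) = 0.01 and tr(R2) = 0.1, then tr(R1) < 20 tr(R2), so β1 = (1/0.01)/(1/0.01 + 1/0.1) = 100/110 ≈ 0.91 and β2 ≈ 0.09, and the more accurate Lidar dominates the fusion; if the Lidar degrades until tr(R1) ≥ 20 tr(R2), β1 is clamped to 0.01 and the global output essentially follows the Radar alone.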
Step 3. Filtering state feedback. The GF uses β1 and β2 to calculate the latest system information, which is fed back to the LFs, as expressed by:
$$Q^{-1} = Q_1^{-1} + Q_2^{-1}, \qquad Q_i^{-1} = \beta_i Q^{-1}, \quad i = 1, 2$$
$$P^{-1} = P_1^{-1} + P_2^{-1}, \qquad P_i^{-1} = \beta_i P^{-1}, \quad i = 1, 2$$
$$\hat{X}_i = \hat{X}_g, \quad i = 1, 2$$
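The three steps can be collected into a short Python/NumPy sketch of the GF; the function signature and names below are illustrative assumptions rather than the authors' code:

```python
import numpy as np

def global_filter(X1, P1, X2, P2, R1, R2, Q1, Q2):
    """Sketch of the JAKF global filter: information fusion, allocation
    factors from tr(R), and the feedback reset of Section 3.3."""
    # Step 1: optimal information fusion of the two local estimates.
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    Pg = np.linalg.inv(P1_inv + P2_inv)
    Xg = Pg @ (P1_inv @ X1 + P2_inv @ X2)

    # Step 2: information allocation factors beta_1, beta_2.
    t1, t2 = np.trace(R1), np.trace(R2)
    if t1 >= 20.0 * t2:
        beta1 = 0.01                      # Lidar considered failed: suppress it
    else:
        beta1 = (1.0 / t1) / (1.0 / t1 + 1.0 / t2)
    beta2 = 1.0 - beta1                   # information conservation

    # Step 3: feedback reset, Q_i^-1 = beta_i * Q^-1 and P_i^-1 = beta_i * P^-1;
    # both local states are also reset to Xg.
    Q_inv = np.linalg.inv(Q1) + np.linalg.inv(Q2)
    P_inv = P1_inv + P2_inv
    feedback = [(np.linalg.inv(b * Q_inv), np.linalg.inv(b * P_inv), Xg.copy())
                for b in (beta1, beta2)]
    return Xg, Pg, feedback
```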

3.4. Coordinate Transformation

The collected motion state of the forward vehicle should be expressed in the test car's coordinate system, but the data collected by Lidar and Radar are based on their own coordinate systems. To realize the spatial synchronization of the measured data, we therefore transform the Lidar coordinate system Clidar(xl, yl, zl) and the Radar coordinate system Cradar(xr, yr, zr) into the car coordinate system Ccar(xc, yc, zc). Figure 2 illustrates the relations between these coordinate systems.
First, we implement the coordinate transformation between Clidar and Ccar. Suppose Plidar = (xlidar, ylidar, zlidar)T is a point in Clidar and Pcar = (xcar, ycar, zcar)T is the corresponding point in Ccar; their coordinate transformation is then given by:
$$P_{car} = R_{lidar \to car} \cdot P_{lidar} + T_{lidar \to car}$$
where $T_{lidar \to car}$ is the mounting position of the Lidar in the coordinate system Ccar, which can be measured directly, and $R_{lidar \to car}$ is the rotation matrix of Clidar relative to Ccar. In $R_{lidar \to car}$, the pitch angle α corresponds to the rotation around the x-axis and, similarly, the deflection angle φ corresponds to the rotation around the z-axis, so $R_{lidar \to car}$ is defined by:
$$R_{lidar \to car} = \begin{bmatrix} \cos\varphi & -\sin\varphi \cos\alpha & \sin\varphi \sin\alpha \\ \sin\varphi & \cos\varphi \cos\alpha & -\cos\varphi \sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}$$
More details about the calculation of α and φ can be found in [24]. Similarly, we can obtain $T_{radar \to car}$ and $R_{radar \to car}$ to perform the coordinate transformation between Cradar and Ccar, thus achieving the spatial synchronization of Lidar and Radar.
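As a sketch, the transformation can be coded directly; note that the composition R_z(φ)R_x(α) below (pitch α about the x-axis, then deflection φ about the z-axis) is our reading of the rotation order and should be treated as an assumption:

```python
import numpy as np

def lidar_to_car(p_lidar, alpha, phi, t_lidar_to_car):
    """Transform a point from Clidar to Ccar: P_car = R @ P_lidar + T."""
    ca, sa = np.cos(alpha), np.sin(alpha)   # pitch about the x-axis
    cp, sp = np.cos(phi), np.sin(phi)       # deflection about the z-axis
    R = np.array([[cp, -sp * ca,  sp * sa],
                  [sp,  cp * ca, -cp * sa],
                  [0.0,      sa,       ca]])
    return R @ np.asarray(p_lidar) + np.asarray(t_lidar_to_car)
```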

3.5. Time Synchronization

Since the sensors have different data sampling frequencies, time error calibration has to be considered. To improve the accuracy of data fusion, we choose an appropriate time slice as the time interval of data fusion. Since the sampling frequency of Lidar is higher than that of Radar, the data collected by Lidar are aligned to the Radar by interpolation and extrapolation: we take the data sampling instants of Radar as the fusion time points ti, find the corresponding points in the Lidar sampling sequence as interpolation nodes, and then calculate the measured value of Lidar at time ti through Lagrange three-point interpolation.
Assume the measured data sequence from Lidar is $X_{i-1}$, $X_i$, and $X_{i+1}$ at times $t_{m_{i-1}}$, $t_{m_i}$, and $t_{m_{i+1}}$, with $t_{m_i} - t_{m_{i-1}} = t_{m_{i+1}} - t_{m_i} = h$ and $t_{m_i} \le t_i \le t_{m_{i+1}}$, i.e., $t_i = t_{m_i} + \tau h$ with $0 \le \tau \le 1$. The estimated value $\bar{X}_i$ at time $t_i$ is then defined by:
$$\bar{X}_i = \frac{(t_i - t_{m_i})(t_i - t_{m_{i+1}})}{(t_{m_{i-1}} - t_{m_i})(t_{m_{i-1}} - t_{m_{i+1}})}\, X_{i-1} + \frac{(t_i - t_{m_{i-1}})(t_i - t_{m_{i+1}})}{(t_{m_i} - t_{m_{i-1}})(t_{m_i} - t_{m_{i+1}})}\, X_i + \frac{(t_i - t_{m_{i-1}})(t_i - t_{m_i})}{(t_{m_{i+1}} - t_{m_{i-1}})(t_{m_{i+1}} - t_{m_i})}\, X_{i+1}$$
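A direct Python implementation of this interpolation (function and argument names are illustrative) evaluates the three Lagrange basis weights at the fusion time $t_i$:

```python
def lagrange3(t, t_prev, t_mid, t_next, x_prev, x_mid, x_next):
    """Three-point Lagrange interpolation of the Lidar sequence at the
    Radar fusion time t, with t_mid <= t <= t_next."""
    w_prev = (t - t_mid) * (t - t_next) / ((t_prev - t_mid) * (t_prev - t_next))
    w_mid = (t - t_prev) * (t - t_next) / ((t_mid - t_prev) * (t_mid - t_next))
    w_next = (t - t_prev) * (t - t_mid) / ((t_next - t_prev) * (t_next - t_mid))
    return w_prev * x_prev + w_mid * x_mid + w_next * x_next

# Example with assumed timings: Lidar sampled at 0 ms, 25 ms, 50 ms,
# Radar fusion point at 30 ms:
# x = lagrange3(0.030, 0.000, 0.025, 0.050, x0, x1, x2)
```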

4. Results

4.1. Simulation

We conducted the simulation using MATLAB R2014a on a computer equipped with an Intel Core i7-4790 CPU (3.60 GHz) running 64-bit Windows. Two important factors affect the accuracy of the filtered results. On the one hand, the noise intensity is a prevailing concern and can be quantified by the variance R. In the simulation, the Lidar noise variance is increased in steps of 0.1 from 0.03 to 5.93, while the Radar noise variance keeps its initial value of 0.1. The initial velocity, acceleration, and displacement of the forward vehicle are 2 m/s, 0.18 m/s2, and 0 m, respectively. Figure 3 shows the root mean squared error (RMSE): JAKF produces a maximum displacement RMSE of 0.2323 m (average 0.0818 m), a maximum velocity RMSE of 0.0837 m/s (average 0.0689 m/s), and a maximum acceleration RMSE of 0.0237 m/s2 (average 0.0171 m/s2). When the Lidar noise variance increases to 0.93, the RMSEs of the displacement, velocity, and acceleration produced by CKF (Lidar) increase tenfold, i.e., the results of CKF (Lidar) have diverged. Although IAKF (Lidar) can adapt to the increased noise intensity, the RMSE of JAKF is significantly smaller than that of IAKF (Lidar). On the other hand, the acceleration aF of the forward vehicle in the CA model is another factor. In the simulation, aF is increased in steps of 0.1 m/s2 from 0.18 m/s2 to 6.08 m/s2 while the noise variances of Lidar and Radar are fixed at 0.03 and 0.1, respectively. Figure 4 compares the motion state estimation against different acceleration values aF. JAKF contributes a maximum displacement RMSE of 0.0403 m (average 0.0261 m), a maximum velocity RMSE of 0.0664 m/s (average 0.0643 m/s), and a maximum acceleration RMSE of 0.0737 m/s2 (average 0.0346 m/s2). In this increased-acceleration situation, the RMSEs of CKF (Radar) and IAKF (Radar) are larger than those of CKF (Lidar), IAKF (Lidar), and JAKF.
Regarding the comparative results in Figure 3 and Figure 4, we can conclude: (i) JAKF bounds the RMSE of the motion state estimation to an acceptable level, so JAKF is robust; (ii) JAKF outputs the smallest RMSE of the motion state estimation, so JAKF is more accurate than CKF and IAKF; (iii) JAKF uses multi-sensor data fusion to realize global adaptation, so JAKF is more adaptive than a single-sensor adaptive filter through performance compensation between the different sensors.
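For reference, the RMSE used to score the filters here is the usual root of the mean squared deviation of the estimate from the ground truth; a trivial helper, assumed rather than taken from the authors' scripts:

```python
import numpy as np

def rmse(estimate, truth):
    """Root mean squared error between an estimate sequence and ground truth."""
    e = np.asarray(estimate) - np.asarray(truth)
    return float(np.sqrt(np.mean(e ** 2)))
```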

4.2. Experiment

To obtain accurate data, the test car is equipped with a high-precision URG-04LX Lidar developed by Hokuyo (Osaka, Japan) and an ESR millimeter-wave Radar developed by Delphi (Warren, OH, USA). The forward car reports its displacement, velocity, and acceleration over a Controller Area Network (CAN) bus, which serves as the benchmark against which the filtered results are compared to evaluate JAKF. The CAN bus is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other in applications. The motion states of a vehicle can easily be obtained from vehicle sensors via the CAN bus, which can even correct the current velocity and direction. Table 2 lists the parameters of the URG-04LX and the ESR.
Figure 5 shows the employed URG-04LX and ESR in the experiments, respectively. Figure 6 shows the test car and the forward car, respectively. Figure 7 shows the experimental scenario and route.
In the experiments, R1, R2, Q1, and Q2 represent the noises of the local filters LF1 and LF2, with initial values 0.03, 0.1, 0.000001, and 0.000001, respectively. R1 and R2 are the measurement noise covariances in the Kalman filter model and are recalculated by IAE at each instant. Q1 and Q2 are constant and represent the system noise covariances in the Kalman filter model. The sliding window size of the innovation sequence is 30, and the significance level α is 0.005 in the Chi-square distribution, so $\chi_{0.005}^2(1) = 7.879$. We conducted three experiments to evaluate the JAKF results. In the first two experiments, the forward car moves along a straight line at a constant and a varying acceleration, respectively. In the third experiment, the forward car is permitted to change lanes and thus produces obvious lateral displacement.
In the first experiment, the forward car only moves appreciably in the longitudinal direction, so we focus solely on the longitudinal motion information. The initial longitudinal velocity, acceleration, and displacement of the forward vehicle are 5.43 m/s, 0.18 m/s2, and 0 m, respectively. The test car follows the forward car to ensure that the sensors can detect it. We collected 5000 samples at a frequency of 100 Hz.
Figure 8 shows the noises of the URG and ESR, respectively. The URG suffers from continuous high noise within the 1000th~1500th, 2000th~3000th, and 3500th~4000th time slots, while the noise of the ESR always holds steady; however, the URG has less error than the ESR when working well during the 1500th~2000th, 3000th~3500th, and 4000th~5000th time slots. Figure 9 provides the measurement noise V-C matrices R of the local-filter outputs of the URG and ESR, which indicates that the R of JAKF can adapt to the variation of the noise.
Figure 10 provides the estimation and error comparison of the URG. The estimation error is the absolute value of the difference between the actual and estimated values in the same time slot. After the 500th time slot, all three filters converge to a stable state. When the measurement noise of the URG is normal, the three filtering algorithms output normal results. However, when continuous high noise appears during the 1000th~1500th, 2500th~3000th, and 3500th~4000th time slots, the noise variance R of the URG increases. In this case, the CKF is incapable of dealing with the continuous serious disturbance, while the JAKF still produces higher accuracy than CKF and IAKF. At the 3051st time slot, the absolute error of the longitudinal displacement is about 0.2665 m in CKF, versus 0.0826 m in IAKF and 0.0392 m in JAKF. At the 3023rd time slot, the error of the longitudinal velocity is about 0.0826 m/s in CKF and 0.0725 m/s in IAKF, versus 0.0222 m/s in JAKF. At the 3001st time slot, the error of the longitudinal acceleration is about 0.0560 m/s2 in CKF and 0.0298 m/s2 in IAKF, versus 0.0148 m/s2 in JAKF.
Figure 11 gives the estimation and error comparison of the ESR. After the filters converge, all the measured noises of the three filters fluctuate within the theoretical range. The errors of CKF and IAKF are nearly similar, while the JAKF still behaves best with respect to accuracy. At the 3571st time slot, the error of the longitudinal displacement is about 0.0649 m in JAKF, versus 0.1147 m in CKF and 0.1034 m in IAKF. At the 3466th time slot, the error of the longitudinal velocity is about 0.0626 m/s in CKF and 0.0605 m/s in IAKF, versus 0.0289 m/s in JAKF. At the 4776th time slot, the error of the longitudinal acceleration is about 0.0046 m/s2 in JAKF, versus 0.0297 m/s2 in CKF and 0.0257 m/s2 in IAKF.
Table 3 compares the root-mean-square error, maximum error, and variance of the filtered results of CKF, IAKF, and JAKF. In Table 3, the RMS errors and variances of JAKF are smaller than those of CKF and IAKF, so JAKF has better stability and fault tolerance against continuously varying noise, and the accuracy of JAKF is higher than that of the single-sensor filters when the acceleration of the forward car is constant.
In the second experiment, the forward car moved along a straight line at a varying acceleration. As in the first experiment, we only focus on the longitudinal motion. The initial longitudinal velocity, acceleration, and displacement of the forward vehicle are 0.15 m/s, 0 m/s2, and 0 m, respectively. The test car follows the forward car to ensure that the sensors can detect it. We collected 5000 samples at a frequency of 100 Hz.
Figure 12 shows the noises of the URG and ESR. The URG suffers from continuous high noise within the 2000th~3000th and 3500th~4400th time slots, while the noise of the ESR holds steady all the time. Figure 13 provides the measurement noise V-C matrices R of the local-filter outputs of the URG and ESR.
Figure 14 provides the estimation and error comparison of the URG. Continuous high noise appears during the 2000th~3000th and 3500th~4400th time slots, during which the CKF is badly divergent while the JAKF remains convergent. For example, at the 2712th time slot, the absolute error of the longitudinal displacement is about 0.1486 m in CKF, versus 0.0998 m in IAKF and 0.0152 m in JAKF; the error of the longitudinal velocity is about 0.0513 m/s in CKF and 0.0226 m/s in IAKF, versus 0.0185 m/s in JAKF; and the error of the longitudinal acceleration is about 0.0486 m/s2 in CKF and 0.0375 m/s2 in IAKF, versus 0.0151 m/s2 in JAKF.
Figure 15 gives the estimation and error comparison of the ESR. After the filters converge, all the measured noises of the three filters fluctuate within the theoretical range. The errors of CKF and IAKF are still nearly similar, while the JAKF behaves best with respect to accuracy during most time slots. At the 2256th time slot, the error of the displacement is about 0.0482 m in JAKF, versus 0.0883 m in CKF and 0.0752 m in IAKF; the error of the velocity is about 0.0326 m/s in CKF and 0.0305 m/s in IAKF, versus 0.0189 m/s in JAKF; and the error of the acceleration is about 0.0046 m/s2 in JAKF, versus 0.062 m/s2 in CKF and 0.057 m/s2 in IAKF.
Table 4 compares the root-mean-square error, maximum error, and variance of the filtered results of CKF, IAKF, and JAKF.
In Table 4, JAKF shows high accuracy and stability even though the motion model of the forward car is dynamic. In a nutshell, a varying acceleration imposes little influence on JAKF.
In the third experiment, the forward car is permitted to change lanes and thus produces obvious lateral displacement, so we mainly focus on the transverse motion of the forward car. The initial transverse velocity, acceleration, and displacement of the forward vehicle are 0 m/s, 0 m/s2, and 0 m, respectively. During the whole experiment, the test car moves along a straight line and keeps an inter-distance of about 3 m~8 m from the forward car to ensure that the sensors can detect it. We collected 1000 samples at a frequency of 100 Hz. We fed the transverse vehicle motion information to CKF, IAKF, and JAKF, and compared the transverse results with the longitudinal ones of the former two experiments.
Figure 16 shows the noises of the URG and ESR. The URG suffers from continuous high noise within the 200th~400th and 600th~800th time slots, while the noise of the ESR still holds steady. Figure 17 provides the measurement noise V-C matrices R of the local-filter outputs of the URG and ESR.
Figure 18 provides the estimation and error comparison of the URG. Continuous high noise appears during the 200th~400th and 600th~800th time slots, during which the CKF fails to cope while the JAKF remains convergent. For example, at the 372nd time slot, the error of the transverse displacement is about 0.178 m in CKF, versus 0.0995 m in IAKF and 0.0852 m in JAKF; the error of the transverse velocity is about 0.1062 m/s in CKF and 0.0481 m/s in IAKF, versus 0.0423 m/s in JAKF; and the error of the transverse acceleration is about 0.0341 m/s2 in CKF and 0.0105 m/s2 in IAKF, versus 0.01 m/s2 in JAKF.
Figure 19 gives the estimation and error comparison of the ESR. After the filters converge, all the measured noises of the three filters fluctuate within the theoretical range. The errors of CKF and IAKF are nearly similar, while the JAKF still behaves best with respect to accuracy. For example, at the 600th time slot, the error of the displacement is about 0.0412 m in JAKF, versus 0.0954 m in CKF and 0.0986 m in IAKF; the error of the velocity is about 0.0428 m/s in CKF and 0.0402 m/s in IAKF, versus 0.0125 m/s in JAKF; and the error of the acceleration is about 0.0081 m/s2 in JAKF, versus 0.0092 m/s2 in CKF and 0.0095 m/s2 in IAKF. Therefore, the accuracy of JAKF remains stable when the forward car is permitted to change lanes.
Table 5 compares the root-mean-square error, maximum error, and variance of the filtered results of CKF, IAKF, and JAKF.
In Table 5, compared with the results of the first two experiments, the advantages of JAKF in accuracy and stability decline slightly, but the JAKF results are still better than those of CKF and IAKF. Table 6 summarizes the ratios of accuracy improvement of JAKF in the three experiments.
In summary, the JAKF extends the single-sensor adaptive filter by integrating the respective advantages of Lidar and Radar, and uses the noise variance R and the information allocation factor β to adjust the filtered results of the local and global filters, respectively, thus improving the fault tolerance and accuracy.

5. Conclusions

This paper proposed a joint adaptive Kalman filter algorithm called JAKF that combines one Lidar and one Radar to accurately estimate the motion state of the moving car ahead, i.e., the relative position, relative velocity, and relative acceleration. For adaptation, JAKF adopts the modified innovation sequence to calculate the measurement noise V-C matrix of each local filter. Since the measurement noise V-C matrix reflects the performance of the current filter, the information allocation factor can be specified to optimize the multi-sensor fusion. The simulation and experimental results show that JAKF maintains convergence even when suffering from abnormal noise. JAKF significantly improves the fault tolerance and stability of the estimation system, and meanwhile enhances the accuracy of the filtered results.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61202472, 61373123, 61572229, U1564211); the Scientific Research Foundation for Returned Scholars; the International Scholar Exchange Fellowship (ISEF) program of the Korea Foundation for Advanced Studies (KFAS); the Jilin University Young Teacher and Student Cross Discipline Foundation (JCKY-QKJC09); and the Jilin Provincial International Cooperation Foundation (20140414008GH, 20150414004GH).

Author Contributions

Siwei Gao and Jian Wang developed the system, conducted the experiments, and wrote the article. Yanheng Liu, Weiwen Deng, and Heekuck Oh supervised the overall study and experiments, and revised the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Darms, M.S.; Rybski, P.E.; Baker, C.; Urmson, C. Obstacle detection and tracking for the urban challenge. IEEE Trans. Intell. Transp. Syst. 2009, 10, 475–485. [Google Scholar] [CrossRef]
  2. Leonard, J.; How, J.; Teller, S.; Berger, M.; Campbell, S.; Fiore, G.; Fletcher, L.; Frazzoli, E.; Huang, A.; Karaman, S.; et al. A perception-driven autonomous urban vehicle. J. Field Robot. 2008, 25, 727–774. [Google Scholar] [CrossRef]
  3. Montemerlo, M.; Becker, J.; Bhat, S.; Dahlkamp, H.; Dolgov, D.; Ettinger, S.; Haehnel, D.; Hilden, T.; Hoffmann, G.; Huhnkeand, B.; et al. Junior: The stanford entry in the urban challenge. J. Field Robot. 2008, 25, 569–597. [Google Scholar]
  4. Fiorino, S.T.; Bartell, R.J.; Cusumano, S.J. Worldwide uncertainty assessments of ladar and radar signal-to-noise ratio performance for diverse low altitude atmospheric environments. J. Appl. Remote Sens. 2010, 4, 1312–1323. [Google Scholar] [CrossRef]
  5. Vostretsov, N.A.; Zhukov, A.F. About temporary autocorrelation function of fluctuations of the scattered radiation of the focused laser beam (0.63 mm) in the surface atmosphere in rain, drizzle and fog. Int. Symp. Atmos. Ocean Opt. Atmo. Phys. 2015. [Google Scholar] [CrossRef]
  6. Guo, J.; Zhang, H.; Zhang, X.J. Propagating characteristics of pulsed laser in rain. Int. J. Antennas Propag. 2015, 4, 1–7. [Google Scholar] [CrossRef]
  7. Hollinger, J.; Kutscher, B.; Close, R. Fusion of lidar and radar for detection of partially obscured objects. SPIE Def. Secur. Int. Soc. Opt. Photonics 2015, 1–9. [Google Scholar] [CrossRef]
  8. Simon, D.; Chia, T.L. Kalman filtering with state equality constraints. IEEE Trans. Aerosp. Electr. Syst. 2002, 38, 128–136. [Google Scholar] [CrossRef]
  9. Carlson, N.A.; Berarducci, M.P. Federated Kalman filter simulation results. Navigation 1994, 41, 297–322. [Google Scholar] [CrossRef]
  10. Mohamed, A.H.; Schwarz, K.P. Adaptive Kalman filtering for INS/GPS. J. Geod. 1999, 73, 193–203. [Google Scholar] [CrossRef]
  11. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part I. Dynamic models. IEEE Trans. Aerosp. Electr. Syst. 2003, 39, 1333–1364. [Google Scholar]
  12. Lee, Y.L.; Chen, Y.W. IMM estimator based on fuzzy weighted input estimation for tracking a maneuvering target. Appl. Math. Model. 2015, 39, 5791–5802. [Google Scholar] [CrossRef]
  13. Close, R. Ground vehicle based LADAR for standoff detection of road-side hazards. SPIE Def. Secur. Int. Soc. Opt. Photonics 2015. [Google Scholar] [CrossRef]
  14. Hong, S.; Lee, M.H.; Kwon, S.H.; Chun, H.H. A car test for the estimation of GPS/INS alignment errors. IEEE Trans. Intell. Transp. Syst. 2004, 5, 208–218. [Google Scholar] [CrossRef]
  15. Xian, Z.W.; Hu, X.P.; Lian, J.X. Robust innovation-based adaptive Kalman filter for INS/GPS land navigation. Chin. Autom. Congr. 2013. [Google Scholar] [CrossRef]
  16. Han, B.; Xin, G.; Xin, J.; Fan, L. A study on maneuvering obstacle motion state estimation for intelligent vehicle using adaptive Kalman filter based on current statistical model. Math. Probl. Eng. 2015, 4, 1–14. [Google Scholar] [CrossRef]
  17. Mirzaei, F.M.; Roumeliotis, S.I. A Kalman filter-based algorithm for IMU-Camera calibration: Observability analysis and performance evaluation. IEEE Trans. Robot. 2008, 24, 1143–1156. [Google Scholar] [CrossRef]
  18. Sarunic, P.; Evans, R. Hierarchical model predictive control of UAVs performing multitarget-multisensor tracking. IEEE Trans. Aerosp. Electr. Syst. 2014, 50, 2253–2268. [Google Scholar] [CrossRef]
  19. Hostettler, R.; Birk, W.; Nordenvaad, M.L. Joint vehicle trajectory and model parameter estimation using road side sensors. IEEE Sens. J. 2015, 15, 5075–5086. [Google Scholar] [CrossRef]
  20. Naets, F.; Croes, J.; Desmet, W. An online coupled state/input/parameter estimation approach for structural dynamics. Comput. Methods Appl. Mech. Eng. 2015, 283, 1167–1188. [Google Scholar] [CrossRef]
  21. Chatzi, E.N.; Smyth, A.W. The unscented Kalman filter and particle filter methods for nonlinear structural system identification with non-collocated heterogeneous sensing. Struct. Control Health Monit. 2009, 16, 99–123. [Google Scholar] [CrossRef]
  22. Chatzi, E.N.; Fuggini, C. Online correction of drift in structural identification using artificial white noise observations and an unscented Kalman filter. Smart Struct. Syst. 2015, 16, 295–328. [Google Scholar] [CrossRef]
  23. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  24. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
Figure 1. The structure and working process of JAKF.
Figure 2. The relations among Clidar, Cradar, and Ccar.
Figure 3. Comparisons of motion state estimation against different noise variances R: (a) The displacement estimation; (b) The velocity estimation; (c) The acceleration estimation.
Figure 4. Comparisons of motion state estimation against different accelerations aF: (a) The displacement estimation; (b) The velocity estimation; (c) The acceleration estimation.
Figure 5. The URG-04LX Lidar (Left) and the ESR Radar (Right).
Figure 6. The test car at the left side and the forward car at the right side.
Figure 7. The experimental scenario and route.
Figure 8. The noise of (a) URG and (b) ESR in the first experiment.
Figure 9. The measurement noise V-C matrix R of the local-filter output of URG and ESR in the first experiment (The R of displacement of (a) URG and (d) ESR, the R of velocity of (b) URG and (e) ESR and the R of acceleration of (c) URG and (f) ESR).
Figure 10. Estimation and error comparison of URG in the first experiment: (a) Displacement estimation of URG; (b) Displacement error of URG; (c) Velocity estimation of URG; (d) Velocity error of URG; (e) Acceleration estimation of URG; (f) Acceleration error of URG.
Figure 11. Estimation and error comparison of ESR in the first experiment: (a) Displacement estimation of ESR; (b) Displacement error of ESR; (c) Velocity estimation of ESR; (d) Velocity error of ESR; (e) Acceleration estimation of ESR; (f) Acceleration error of ESR.
Figure 12. The noise of (a) URG and (b) ESR in the second experiment.
Figure 13. The measurement noise V-C matrix R of the local-filter output of URG and ESR in the second experiment (The R of displacement of (a) URG and (d) ESR, the R of velocity of (b) URG and (e) ESR and the R of acceleration of (c) URG and (f) ESR).
Figure 14. Estimation and error comparison of URG in the second experiment: (a) Displacement estimation of URG; (b) Displacement error of URG; (c) Velocity estimation of URG; (d) Velocity error of URG; (e) Acceleration estimation of URG; (f) Acceleration error of URG.
Figure 15. Estimation and error comparison of ESR in the second experiment: (a) Displacement estimation of ESR; (b) Displacement error of ESR; (c) Velocity estimation of ESR; (d) Velocity error of ESR; (e) Acceleration estimation of ESR; (f) Acceleration error of ESR.
Figure 16. The noise of (a) URG and (b) ESR in the third experiment.
Figure 17. The measurement noise V-C matrix R of the local-filter output of URG and ESR in the third experiment: The R of displacement of (a) URG and (d) ESR; the R of velocity of (b) URG and (e) ESR and the R of acceleration of (c) URG and (f) ESR.
Figure 18. Estimation and error comparison of URG in the third experiment: (a) Displacement estimation of URG; (b) Displacement error of URG; (c) Velocity estimation of URG; (d) Velocity error of URG; (e) Acceleration estimation of URG; (f) Acceleration error of URG.
Figure 19. Estimation and error comparison of ESR in the third experiment: (a) Displacement estimation of ESR; (b) Displacement error of ESR; (c) Velocity estimation of ESR; (d) Velocity error of ESR; (e) Acceleration estimation of ESR; (f) Acceleration error of ESR.
Table 1. The relation between R and the accuracy of Lidar.

SNR (dB)       80       81       82       83       84       85       86       87
R              0.0021   0.0016   0.0013   0.0010   0.0008   0.0006   0.0005   0.0004
Accuracy (m)   0.0362   0.0318   0.0284   0.0254   0.0226   0.0201   0.0180   0.0164
Table 2. Parameters of URG and ESR.

                     URG           ESR
Measuring distance   0.1 m~30 m    0.5 m~60 m
Distance accuracy    ±0.03 m       ±0.25 m
Scanning angle       ±120°         ±45°
Angular resolution   0.36°         ±1°
Scanning time        25 ms/scan    50 ms/scan
Table 3. Error comparisons of CKF, IAKF, and JAKF.

                                 CKF-URG         IAKF-URG        CKF-ESR         IAKF-ESR        JAKF
RMS Distance Error (m)           0.0880          0.0479          0.0595          0.0565          0.0320
Distance Variance                2.0833 × 10^4   2.0831 × 10^4   2.0833 × 10^4   2.0832 × 10^4   2.0831 × 10^4
RMS Velocity Error (m/s)         0.0702          0.0665          0.0676          0.0669          0.0640
Velocity Variance                6.7979          6.7852          6.7722          6.7699          6.7737
RMS Acceleration Error (m/s^2)   0.0175          0.0155          0.0184          0.0179          0.0140
Acceleration Variance            4.2133 × 10^-4  2.2840 × 10^-4  3.1428 × 10^-4  3.0290 × 10^-4  1.8763 × 10^-4
Table 4. Error comparisons of CKF, IAKF, and JAKF in the second experiment.

                                 CKF-URG         IAKF-URG        CKF-ESR         IAKF-ESR        JAKF
RMS Distance Error (m)           0.0912          0.0883          0.1754          0.1296          0.0685
Distance Variance                2.8775 × 10^4   2.8744 × 10^4   2.8771 × 10^4   2.8774 × 10^4   2.8773 × 10^4
RMS Velocity Error (m/s)         0.0367          0.0338          0.0527          0.0551          0.0269
Velocity Variance                6.8020          6.7737          6.7849          6.7872          6.7794
RMS Acceleration Error (m/s^2)   0.0190          0.0133          0.0205          0.0203          0.0135
Acceleration Variance            0.1355          0.1364          0.1351          0.1348          0.1356
Table 5. Error comparisons of CKF, IAKF, and JAKF in the third experiment.

                                 CKF-URG         IAKF-URG        CKF-ESR         IAKF-ESR        JAKF
RMS Distance Error (m)           0.0779          0.0636          0.0681          0.0636          0.0549
Distance Variance                2.8638 × 10^4   2.8635 × 10^4   2.8642 × 10^4   2.8641 × 10^4   2.8633 × 10^4
RMS Velocity Error (m/s)         0.0437          0.0431          0.0431          0.0414          0.0385
Velocity Variance                0.6702          0.6688          0.6711          0.6772          0.6619
RMS Acceleration Error (m/s^2)   0.0415          0.0380          0.0442          0.0443          0.0368
Acceleration Variance            0.0414          0.0410          0.0420          0.0424          0.0409
Table 6. Ratio of accuracy improvement of JAKF in the three experiments.

Experiment               JAKF vs.   Displacement   Velocity   Acceleration
The first experiment     CKF        54.93%         27.08%     21.96%
                         IAKF       28.28%         19.05%     15.74%
The second experiment    CKF        42.91%         37.85%     36.43%
                         IAKF       35.03%         35.79%     16%
The third experiment     CKF        25.14%         13.29%     14.02%
                         IAKF       13.68%         9.84%      10.04%
