Extended Particle-Aided Unscented Kalman Filter for Self-Driving Car Localization

Abstract: The location of the vehicle is a basic parameter for self-driving cars, and the key problem of localization is sensor noise. In previous research, we proposed a particle-aided unscented Kalman filter (PAUKF) to handle the localization problem in non-Gaussian noise environments. However, the previous basic PAUKF only considers infrastructure in two dimensions (2D). This 2D limitation renders it inoperable in the real world, which is full of three-dimensional (3D) features. In this paper, we extend the previous basic PAUKF's particle weighting process based on the multivariable normal distribution to handle 3D features. The extended PAUKF also makes it feasible to fuse multisource perception data into the PAUKF framework. The simulation results show that the extended PAUKF has better real-world applicability than the previous basic PAUKF.


Introduction
In recent years, the self-driving car has become one of the hottest topics in the automotive industry [1][2][3][4]. To make a self-driving car drive conveniently and safely, an accurate estimate of the vehicle's position is one of the most important parameters: only if the self-driving car knows where it is can it execute the next decision. Localization is one of the basic functions of the perception module, and almost all other data are processed on the basis of the location data. The most commonly used localization method is GPS [5]. However, GPS signals are affected by the number of visible satellites and by the multipath effect in urban environments [6][7][8]. When GPS cannot obtain a sufficient number of satellites [9], an alternative approach to localization is needed. As high-definition (HD) maps have developed, it has become possible to localize the vehicle based on map matching [10,11]. However, not only does the HD map contain noise, but the sensor measurements are noisy as well; a method is therefore needed to filter noise from different sources. The Kalman filter family is generally used to filter the noise of many sensors [12][13][14]. The Kalman filter is designed for processing Gaussian noise with a linear transition model and a linear measurement model. In the real world, however, most transitions and measurements are not linear, so the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) were proposed [15][16][17][18][19][20] to handle nonlinear transitions and measurements. EKF and UKF still rely on a basic assumption of the Kalman filter: the noise should always be Gaussian. The particle filter (PF) is a Monte Carlo-based method that can handle any type of noise [21][22][23][24][25]; the more particles used, the more accurate the PF can be. However, because the PF calculates each particle's weight, precision must be traded off against computational burden.
In our previous work, we proposed the particle-aided unscented Kalman filter (PAUKF), which achieves more stable and precise performance than the PF by using the unscented Kalman filter [26]. We constructed the PAUKF using two-dimensional infrastructure information. The mean estimation error of the PAUKF is 1.08 m and its variance is 0.7147 m, which is more precise than the mean of 1.69 m and variance of 1.63 m obtained by GANGNAM in a similar noise environment. In the real world, however, there is rich 3D information from perception sensors such as light detection and ranging (LiDAR) and radio detection and ranging (RADAR). The PAUKF algorithm only considers two-dimensional features and cannot process features in three dimensions appropriately; it therefore needs extended capabilities to handle three-dimensional information.
In this paper, we improve the perception calculation method and extend the weight calculation scheme by using a multivariable normal distribution in three dimensions. Compared to our previous research, the extended PAUKF improves the particle selection method; with the improved particle weighting scheme, it extracts a more precise global location based on 3D features. We also found that the extended PAUKF makes it feasible to fuse multisource perception data into the PAUKF framework by weighting the particles. The simulation results show that the extended PAUKF achieves better performance in a three-dimensional feature environment than the previous PAUKF, improving precision by 73.81%. The limitations of this algorithm are that the hyperparameters are not optimal even when tuned carefully, and that the vehicle is not considered on freeway on-ramps. These limitations should be mitigated in future research.
Section 2 illustrates the extension of the basic PAUKF to incorporate three-dimensional features and the framework for fusing multisource perception data. Section 3 describes the simulation environment. Section 4 presents the simulation results and discusses the extended PAUKF's performance compared to that of the basic PAUKF.

Extended Particle-Aided Unscented Kalman Filter
This section describes the implementation of the extended PAUKF. As in the initial PAUKF, the environment is assumed to be Markov, meaning that the current state depends only on the previous state. A bicycle model was used as the prediction model of the particles, as shown in Figure 1. We assumed that changes in the steering and slip angle information are included in the yaw angle. The positive Z direction points out of the page from the XY plane.

This study follows all the assumptions defined in the previous basic PAUKF. The extension applies to the particle filter portion; the unscented Kalman filter part is the same as in the previous basic PAUKF.

Particle Filter Initialization
Initialization provides the initial guess of the ego vehicle's position in global coordinates via GPS. Once the algorithm has this initial guess, it randomly generates particles around the initial position. If the algorithm is run in an indoor environment, an initial guess of the global map-based position should be given instead. In the PAUKF framework, an initial guess is not strictly necessary, since the features in the HD map provide a relative global position, but having one makes the estimated result converge to the ground truth more rapidly.
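The initialization described above can be sketched as follows. This is a minimal Python illustration (the paper's implementation is in MATLAB), and the standard deviations used here are assumed values for illustration, not the paper's tuned parameters:

```python
import random

def init_particles(gps_x, gps_y, gps_yaw, n=100,
                   sigma_x=1.0, sigma_y=1.0, sigma_yaw=0.05):
    """Scatter n particles around the initial GPS guess.

    sigma_* are illustrative standard deviations, not the
    paper's tuned values.
    """
    return [(random.gauss(gps_x, sigma_x),
             random.gauss(gps_y, sigma_y),
             random.gauss(gps_yaw, sigma_yaw))
            for _ in range(n)]

# Example: 100 particles around a GPS fix at (10 m, 20 m), heading 0 rad.
particles = init_particles(10.0, 20.0, 0.0, n=100)
```

Each particle is then propagated and weighted independently in the subsequent steps.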

Particle Prediction
The prediction step predicts the state one timestamp ahead of the current state. According to Bayes' rule, the prediction provides the prior belief: prior information about the vehicle based on the previous posterior belief, which is the final estimated result of the previous timestamp. In this research, the road was assumed to be a flat XY plane, and movement in the vertical direction z was treated as random noise. The state vector is given in Equation (1) and the vertical movement in Equation (2), where the value of σ_{v_z} is 0.3.
Equation (3) shows the transition model of the particles. Each particle is generated randomly and its value at the next timestamp is calculated from the transition model. The predicted value provides the prior guess of each particle, and all the particles are then weighted based on the predicted value and the current measurement value [27]. When the yaw rate equals zero, Equation (3) involves a division by zero; a different prediction model, shown in Equation (4), should therefore be used when the yaw rate is zero.
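The two-branch prediction model can be sketched as follows. This Python fragment is illustrative only (the paper's implementation is in MATLAB), follows the standard bicycle-model form, and omits the additive noise terms of Equation (3) for clarity:

```python
import math

def predict_particle(x, y, yaw, v, yaw_rate, dt):
    """One prediction step of the bicycle model.

    When yaw_rate is (near) zero, the v/yaw_rate term of the
    general model diverges, so a straight-line model is used
    instead (cf. Equations (3) and (4)).
    """
    if abs(yaw_rate) > 1e-6:
        # Turning motion: integrate along a circular arc.
        x += v / yaw_rate * (math.sin(yaw + yaw_rate * dt) - math.sin(yaw))
        y += v / yaw_rate * (math.cos(yaw) - math.cos(yaw + yaw_rate * dt))
        yaw += yaw_rate * dt
    else:
        # Zero-yaw-rate branch: straight-line motion.
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
    return x, y, yaw
```

In the filter, this update is applied to every particle at every timestamp before weighting.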

Particle Weight Estimation Based on Three-Dimensional Features
Weight estimation, also known as sequential importance sampling (SIS), is a critical step because it directly determines the importance of each particle from the prediction step; the particle filter is an application of the SIS approach. The main idea of weight estimation is to represent the required posterior density function by a set of random samples taken from the prediction step, and to compute estimates from these samples and their associated weights [28].
To calculate the weight, it is necessary to compute the error between the predicted mean value and the measured value for each particle. Since the predicted value is obtained in the prediction step, the only remaining input that affects the weight is the measurement value. In this paper, the extended PAUKF considers three-dimensional features from the perception module, such as LiDAR- and RADAR-based object recognition. Since a range sensor calculates the distance d_i to a feature based on time of flight, the measured distance is the point-to-point distance, as Figure 2 shows. This differs from the two-dimensional case: because of the vertical offset of the features, the weight calculation must also be extended to a higher dimension. The only known measurement values are the point-to-point relative distance d_i between the vehicle and feature F, the relative bearing angle α_i, and the relative elevation angle β_i, as Figure 2 shows. The geometry of the perception data must be considered, as well as the effect of noise in the vertical direction.
The measurement value of the vehicle is shown in Equation (5). Since sensors in the real world cannot reliably recognize out-of-range targets, a valid recognition limit of 50 m was set, meaning that the vehicle localizes itself based on features within 50 m.
The perception values are transformed based on the generated particle value X̄_{k+1}, which is represented as x̄_{f,k+1}, ȳ_{f,k+1} and z̄_{f,k+1}. As the features extend to three dimensions, an extended method is needed to evaluate each weight. After unifying the measurement value and the predicted value in map coordinates, the multivariable normal distribution is used to weight the particles, as Equation (6) shows. The highest-weighted particle is used for estimating the position. The total particle number N is 100.
On the basis of the multivariable distribution framework, different sources of perception data can be fused simply. When the vehicle receives multiple perception data about one feature, the belief of each perception source can be calculated with the multivariable normal distribution of Equation (6). After calculating the probability for each sensor, the final weight of each particle is obtained as Equation (7) shows. This provides a feasible multisource data fusion method based on the PAUKF; please note that this feasibility has not been verified, and in this paper we only considered the single range sensor case.

w_i = P_LiDAR(x, y, z) * P_RADAR(x, y, z) * P_Camera(x, y, z) * ... * P_perception source(x, y, z). (7)
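Under a diagonal-covariance assumption, the weighting of Equation (6) and the product fusion of Equation (7) can be sketched in Python as below. This is illustrative only; the paper's actual covariance structure and its MATLAB implementation may differ:

```python
import math

def normal_pdf_3d(dx, dy, dz, sx, sy, sz):
    """Multivariable normal density with diagonal covariance,
    evaluated at the error (dx, dy, dz) between the predicted
    and measured feature position (cf. Equation (6))."""
    norm = 1.0 / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    expo = -0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2 + (dz / sz) ** 2)
    return norm * math.exp(expo)

def particle_weight(errors_per_sensor, sigmas):
    """Fuse several perception sources for one particle by
    multiplying their per-sensor probabilities (cf. Equation (7))."""
    w = 1.0
    for dx, dy, dz in errors_per_sensor:
        w *= normal_pdf_3d(dx, dy, dz, *sigmas)
    return w
```

A particle whose predicted feature positions match the measurements closely receives a weight near the density's peak; large errors in any direction, including the vertical one, shrink the weight exponentially.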

Resampling Particles
The resampling step eliminates samples with low importance weights: when a particle's importance weight is low, its effect on the estimated value is small. In this paper, we only considered the precision of the filter; therefore, resampling happens at every timestamp. The value of the highest-weighted particle was taken as the final estimated value.
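The resampling and estimate-selection steps described above can be sketched as follows (a Python illustration of the scheme, not the paper's MATLAB code):

```python
import random

def resample(particles, weights):
    """Draw a new particle set with replacement, with probability
    proportional to weight; low-weight particles tend to vanish."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

def best_estimate(particles, weights):
    """Take the highest-weighted particle as the final estimate."""
    return max(zip(weights, particles))[1]
```

Resampling at every timestamp keeps the particle set concentrated around high-probability states, at the cost of some sample diversity.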

UKF Part of Extended PAUKF
In previous research, we found that the PF estimated result is neither smooth nor stable. The UKF takes the PF estimated results as input, which makes the estimated result more stable and precise. In this paper, we optimized the perception geometry and extended the weight calculation method based on the multivariable normal distribution, meaning that the UKF receives more precise [x, y, θ] measurement data. Since the remaining part is the same as in previous research, the implementation of the UKF is omitted in this paper.

Simulation Environment
The simulation environment is based on the MATLAB Automated Driving Toolbox, and the whole algorithm is implemented in MATLAB .m files. Since previous research showed no significant differences between S-shaped and straight roads, only the S-shaped road with a 50 m perception range limit was used in this paper, as shown in Figure 3a. The three-dimensional features are generated randomly along the road, as shown in Figure 3b.
The noise setting parameters are shown in Table 2. The GPS sensor data are used for vehicle position initialization and particle initialization; the GPS noise is generated from a normal distribution and transformed with a sinusoidal function [29,30]. The onboard noise values are empirical. The perception error is modeled as a normal distribution in the x, y, z-directions; in this paper, we assumed that perception was performed by a LiDAR sensor. These empirical noise parameters only provide reference results: the perception error depends on the algorithm of the perception module, so the parameters should be tuned to the actual sensor characteristics. The velocity was assumed to be constant, and the test velocities were 60 km/h, 70 km/h, 80 km/h, 90 km/h, 100 km/h, 110 km/h, and 120 km/h. The random seed was fixed to 50 for repeatable simulation, and the sample time was set to 0.05 s in consideration of the perception time. We also ran the algorithm in a large perception error environment to confirm its performance.


Simulation Results
The performance of the previous basic PAUKF and the extended PAUKF was compared based on the root mean square error (RMSE) of the estimation error, calculated as shown in Equation (8). Figure 4a,b show the previous basic PAUKF and the extended PAUKF at 60 km/h, respectively. In both figures, the blue line with circles is the ground truth trajectory of the vehicle, the red line with triangles is the noisy vehicle position, the yellow line with squares is the particle filter's estimation result, and the black line with circles is the final estimated vehicle position. The estimated trajectory of the previous basic PAUKF showed a significant error relative to the ground truth trajectory, as shown in Figure 4a, since the previous basic PAUKF considered neither the geometry of perception nor the effect of noise in the vertical direction; as a result, it could not generate and select appropriately weighted particles. The extended PAUKF, by contrast, achieved better performance, not only by improving the geometry calculation but also by considering the vertical noise effect in the weight generation and selection scheme. The trajectory of the extended PAUKF was very close to the ground truth data, as shown in Figure 4b.

Figure 5 shows the visualization of the probability distributions of the features' positions, the ground truth position, the position from the GPS, and the estimated position from the extended PAUKF at the 250th sample time. To make the figure easier to understand, the origins of the probability distributions were translated to the positions of the features, the GPS, and the vehicle. One thousand random samples were used for visualizing the probability of each parameter.

In the figure, the red squares are the position from the GPS, the black data with the star marker are the position estimated by the extended PAUKF, and the blue data with squares are the ground truth position of the vehicle. The figure shows that all the features have different vertical values. The position from the GPS has a large error relative to the ground truth, while the position estimated by the extended PAUKF is closest to the ground truth. The extended PAUKF fused the surrounding feature position information to localize the vehicle in map coordinates, which shows its effectiveness.

Table 3 shows the performance of each filter at different velocities. The velocity of the vehicle was set at intervals of 10 km/h, from 60 km/h to 120 km/h. The performance of the filters changed slightly because of random environment effects. The RMSE of the noise shows that the vehicle position noise was almost the same at all velocities, which suggests that both filters operated in similar environments. The mean RMSE of the previous basic particle filter estimation is 11.002 m and that of the extended particle filter estimation is 6.201 m; the estimation RMSE of the PF thus decreased by about 43.64%. The final estimation result is shown in the right-hand PAUKF column of Table 3. The mean RMSE of the previous basic PAUKF was 10.295 m and that of the extended PAUKF was 2.696 m; compared to the previous basic PAUKF, the extended PAUKF decreased the RMSE by about 73.81%. Because the extended PAUKF considers the geometry of perception and improves the weight generation and selection methods, it can estimate the trajectory precisely.
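The RMSE metric of Equation (8) used for Table 3 can be computed as in the following short Python sketch (for illustration only; the paper's evaluation is done in MATLAB, and a 2D position error is assumed here):

```python
import math

def rmse(estimates, ground_truth):
    """Root mean square error between estimated and true
    2D positions (cf. Equation (8))."""
    n = len(estimates)
    sq = sum((ex - gx) ** 2 + (ey - gy) ** 2
             for (ex, ey), (gx, gy) in zip(estimates, ground_truth))
    return math.sqrt(sq / n)
```

The same function, applied to the noisy GPS trace, the PF output, and the PAUKF output, yields the three RMSE columns compared in Table 3.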
To evaluate the performance of the extended PAUKF in a large perception error environment, we changed the perception noise from N(0, 0.09) to N(3, 9) in the x, y, z-directions, meaning that the minimum relative distance error between every feature and the vehicle was always larger than 5.196 m. In this noisy environment, at a velocity of 60 km/h, the total RMSE of the extended PAUKF was 5.771 m. Since the measurement variances were not changed, the extended PAUKF still relied on the measurement from the PF; the algorithm should perform better if the variance of the measurement matrix is tuned. This simulation provides a reference for the extended PAUKF in a large perception noise environment. The hyperparameters of the algorithm require tuning before implementation in the real world.

Conclusions
In this research, an extended version of the particle-aided unscented Kalman filter was proposed for self-driving car localization. The extended PAUKF is based on more realistic assumptions. We processed the additional uncertainty by expanding the particles' dimensionality and by improving the weight calculation method of each particle. From the simulation results, we conclude that the previous basic PAUKF has significant limitations in a three-dimensional feature environment: its mean estimation RMSE was 10.295 m. The extended PAUKF handled the three-dimensional features appropriately; as a result, its mean RMSE was 2.696 m, a 73.81% improvement in precision over the previous basic PAUKF, owing to the extended particle processing and improved weight selection scheme. We also proposed a feasible sensor fusion methodology based on the PAUKF framework. Using our algorithm, a vehicle can obtain more precise and robust location information on the basis of traffic lights, road markers, and other traffic elements. However, some limitations still need to be solved: the hyperparameters should be tuned for optimal performance, and the motion in the vertical direction should be modeled not only as noise, but also with consideration of the vertical dynamics. In future work, we plan to find appropriate hyperparameters for the extended PAUKF and implement the algorithms in a real ground vehicle with consideration of the vertical motion.

Nomenclature
v_z: Vertical noise of the vehicle
d_i: Distance between feature i and the vehicle
Δα_i: Relative bearing angle between feature i and the vehicle
x_v, y_v, z_v: x, y, z position of the vehicle in map coordinates
x_f_i,k+1, y_f_i,k+1, z_f_i,k+1: Relative distance in the x, y, z directions between feature i and the vehicle
d: Compound noise of the distance measurement
Δα: Compound noise of the angle measurement
w_1, w_2, ..., w_i: Weights of particle 1, particle 2, ..., particle i
x_p,i, y_p,i, z_p,i: The x, y, z values of the i-th particle
P(x_p,i, y_p,i, z_p,i): Probability when the particle is at (x_p,i, y_p,i, z_p,i)
σ_x, σ_y, σ_z: Compound standard deviations in the x, y, z-directions
x̄_f_i,k+1, ȳ_f_i,k+1, z̄_f_i,k+1: Transformed relative distance of feature i and the vehicle in the x, y, z directions in map coordinates
μ_f,x, μ_f,y, μ_f,z: The feature position x, y, z in the pre-saved HD map
N: Number of particles
P_LiDAR(x, y, z), P_RADAR(x, y, z), P_Camera(x, y, z), P_perception source(x, y, z): The perception probability based on different sensors