
Localization Error Modeling for Autonomous Driving in GPS Denied Environment

School of Marine Science and Technology, Northwestern Polytechnical University, 127 Youyi West Road, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(4), 647; https://doi.org/10.3390/electronics11040647
Submission received: 19 January 2022 / Revised: 11 February 2022 / Accepted: 14 February 2022 / Published: 18 February 2022
(This article belongs to the Collection Advance Technologies of Navigation for Intelligent Vehicles)

Abstract

Precise localization plays a crucial role in autonomous driving applications. As Global Positioning System (GPS) signals are often subject to interference or are not fully available in urban environments, odometry sensors are widely used to calculate the vehicle position instead. However, a cumulative error then arises as time increases. This paper proposes an effective empirical formula to model such unbounded cumulative errors from noisy relative measurements. Furthermore, a recursive cumulative error expression is established by calculating the first and second moments of the Ackermann model. Finally, numerical experiments based on the developed formula verify the validity of the proposed model.

1. Introduction

Interest in autonomous driving has grown exponentially in the past decades [1,2]. As the Global Positioning System (GPS) signal is susceptible to interference or even not fully available in typical urban environments, precise localization with multi-source heterogeneous sensors, e.g., laser sensors [3], Wireless Local Area Network (WLAN) sensors [4], and inertial sensors [5], is considered a complementary solution.
Current localization frameworks can broadly be divided into SLAM-based and dead-reckoning-based technologies. The former achieves high accuracy but requires loop closure (the vehicle must revisit the same places) during navigation [6,7,8,9,10,11]. In contrast, the latter is faster but suffers from unbounded drift (also called cumulative error) originating from noisy odometry sensors [12,13]. As vehicles often travel into unknown areas, it is essential to characterize and eliminate cumulative errors during dead-reckoning procedures.
Although various techniques have been developed to reduce the cumulative error, a rigorous analysis of its growth rate with respect to ego-motion is still missing. Typical contributions are summarized as follows: Wan et al. presented a multi-sensor fusion method to eliminate the cumulative error in urban environments [14]. Song et al. also proposed a fusion-based approach to enhance localization accuracy with camera and laser [15]. However, the cumulative error has been shown to grow quickly, and its statistical characteristics are rarely considered [16,17]. Moreover, to the best of the authors’ knowledge, no precise formula is available to calculate the cumulative error of an arbitrary trajectory. In contrast to straightforward analytic solutions, calibration-based methods have therefore been investigated. Liu et al. proposed a framework for error calibration based on RFID techniques [18], whereas Alwin et al. analyzed error propagation utilizing ultra-wideband measurements [19]. Furthermore, with the development of artificial intelligence (AI) techniques, deep learning-based approaches have also been investigated [20]. For example, Brossard et al. proposed an AI-based dead-reckoning model to improve localization performance [21], where a neural network is utilized to learn the cumulative errors. Meanwhile, Shit et al. designed an AI crowdsource-based localization approach for intelligent transportation systems [22]. Since all learning-based methods inevitably rely on deep neural networks, their poor interpretability seriously limits their potential in autonomous driving applications [23]. In addition, as learning-based strategies do not generalize well and involve a huge number of hyper-parameters, unseen scenarios may significantly reduce localization performance [24].
It is concluded that no methodology is perfect for autonomous driving; especially in large-scale urban environments, the cumulative error grows exponentially. Our previous work introduced an analytical method to analyze the cumulative error; however, only a simple dead-reckoning model was used, which hardly transfers to actual driving scenarios [25].
This paper proposes an effective empirical formula to estimate the moments of the cumulative error in autonomous driving. The proposed methodology can be readily applied to regular and irregular trajectories in independent scenarios, and it is verified numerically through Monte Carlo simulations. Furthermore, in contrast to the state of the art, the proposed strategy recursively estimates the cumulative error without prior information, which gives it great potential in autonomous driving.
The remainder of this paper is organized as follows: Section 1 briefly introduces the background of the problem. Section 2 investigates the statistical properties of the localization errors. Section 3 presents experimental results with Monte Carlo simulations. Finally, Section 4 concludes the paper.

2. Mathematical Model and Estimation

An autonomous vehicle localizes itself by receiving GPS signals while driving. In GPS denied scenarios, however, dead-reckoning models are often applied to calculate the position from odometry measurements. Because the relative measurement noises are independent and identically distributed (IID) across consecutive frames, the resulting drift is unbounded and the vehicle must calibrate cumulative errors frequently. Hence, the main challenge for autonomous driving is to precisely model this nonlinear and unbounded drift in urban scenarios.

2.1. Problem Statement

To precisely control and localize an autonomous vehicle, a digital model of the vehicle ego-motion should be established. This paper utilizes a typical kinematic model, known as the Ackermann steering geometry, to reflect the vehicle characteristics accurately. Figure 1a is a schematic diagram of the Ackermann model. Here, O is the rotation center of the vehicle, β is the slip angle, θ is the heading angle, and v is the velocity.
Figure 1b illustrates both the Ackermann model and the position $(x_n^m, y_n^m)$ calculated from the relative noisy measurements $(\theta_n^m, \beta_n^m, v_n^m)$, where the subscript $n$ denotes the time step and the superscript $m$ marks a measurement. $v$, $\theta$ and $\beta$ represent the velocity, heading angle, and slip angle in consecutive frames, respectively. The notation is summarized in Table 1. $\bar\theta_n$, $\bar\beta_n$ and $\bar v_n$ denote the ground-truth values, while $\tilde\theta_n$, $\tilde\beta_n$ and $\tilde v_n$ are measurement errors assumed to be independent with zero mean and standard deviations $\delta_\theta$, $\delta_\beta$ and $\delta_v$. Hence, the measurements at each step are acquired as follows:
$$
\theta_n^m=\bar\theta_n+\tilde\theta_n;\qquad \beta_n^m=\bar\beta_n+\tilde\beta_n;\qquad v_n^m=\bar v_n+\tilde v_n,\tag{1}
$$
where the corresponding errors are IID. The vehicle position is then calculated by dead reckoning:
$$
x_n^m=\sum_{i=1}^{n} v_i^m\cos\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big);\qquad
y_n^m=\sum_{i=1}^{n} v_i^m\sin\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big).\tag{2}
$$
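As a concrete illustration, Equation (2) can be implemented in a few lines. The NumPy sketch below is illustrative only: the function name and the nominally straight toy trajectory are not taken from the paper.

```python
import numpy as np

def dead_reckon(theta_m, beta_m, v_m):
    """Dead-reckoned positions (x_i, y_i) from noisy relative
    measurements, following the accumulation in Equation (2)."""
    angle = np.cumsum(theta_m) + beta_m      # sum of heading increments plus current slip
    x = np.cumsum(v_m * np.cos(angle))
    y = np.cumsum(v_m * np.sin(angle))
    return x, y

# toy run: a nominally straight drive along X with small IID noise
rng = np.random.default_rng(0)
n = 100
theta_m = rng.normal(0.0, 0.0005, n)         # heading increments (rad)
beta_m = rng.normal(0.0, 0.005, n)           # slip angles (rad)
v_m = 1.0 + rng.normal(0.0, 0.01, n)         # distance per step (m)
x, y = dead_reckon(theta_m, beta_m, v_m)
```

Running many such noisy realizations of the same ground truth is exactly the Monte Carlo setting analyzed in the remainder of this section.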

2.2. True Error Statistics

By expanding Equation (2), the localization error can be represented explicitly for autonomous driving scenarios:
$$
\begin{aligned}
x_n^m=\bar x_n+\tilde x_n&=\sum_{i=1}^{n}\Big[(\bar v_i+\tilde v_i)\cos\Big(\sum_{j=1}^{i}(\bar\theta_j+\tilde\theta_j)+(\bar\beta_i+\tilde\beta_i)\Big)\Big]\\
&=\sum_{i=1}^{n}(\bar v_i+\tilde v_i)\Big[\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)-\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big],
\end{aligned}\tag{3}
$$
$$
\begin{aligned}
y_n^m=\bar y_n+\tilde y_n&=\sum_{i=1}^{n}\Big[(\bar v_i+\tilde v_i)\sin\Big(\sum_{j=1}^{i}(\bar\theta_j+\tilde\theta_j)+(\bar\beta_i+\tilde\beta_i)\Big)\Big]\\
&=\sum_{i=1}^{n}(\bar v_i+\tilde v_i)\Big[\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)+\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big],
\end{aligned}\tag{4}
$$
where $\bar x_n=\sum_{i=1}^{n}\bar v_i\cos\big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\big)$ and $\bar y_n=\sum_{i=1}^{n}\bar v_i\sin\big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\big)$ are the ground-truth positions. Furthermore, rearranging the above equations yields:
$$
\begin{aligned}
\tilde x_n={}&\sum_{i=1}^{n}\bar v_i\Big[\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\Big(\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)-1\Big)-\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]\\
&+\sum_{i=1}^{n}\tilde v_i\Big[\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)-\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big],
\end{aligned}\tag{5}
$$
$$
\begin{aligned}
\tilde y_n={}&\sum_{i=1}^{n}\bar v_i\Big[\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\Big(\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)-1\Big)+\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]\\
&+\sum_{i=1}^{n}\tilde v_i\Big[\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)+\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big].
\end{aligned}\tag{6}
$$
From these formulas, the expectation and variance of the cumulative error can now be calculated. Notice that the cumulative error depends on the ground truth; the statistical properties of the measurement errors are represented explicitly as follows.
Assuming $\tilde\theta\sim N(0,\delta_\theta^2)$, $\tilde\beta\sim N(0,\delta_\beta^2)$ and $\tilde v\sim N(0,\delta_v^2)$, we obtain:
$$
\sum_{j=1}^{i}\tilde\theta_j\sim N\big(0,\,i\delta_\theta^2\big);\qquad
\sum_{j=1}^{i}\tilde\beta_j\sim N\big(0,\,i\delta_\beta^2\big);\qquad
\sum_{j=1}^{i}\tilde v_j\sim N\big(0,\,i\delta_v^2\big).\tag{7}
$$
Concerning the Ackermann model, since the heading-angle and slip-angle errors are independently distributed, it also holds that:
$$
\Big(\sum_{j=1}^{i}\tilde\theta_j\Big)+\tilde\beta_i\sim N\big(0,\,i\delta_\theta^2+\delta_\beta^2\big).\tag{8}
$$
In this case, the following expectations are obtained:
$$
\begin{aligned}
E\Big[\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]&=e^{-(i\delta_\theta^2+\delta_\beta^2)/2},\\
E\Big[\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]&=0,\\
E\Big[\cos^2\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]&=\tfrac{1}{2}\big(1+e^{-2(i\delta_\theta^2+\delta_\beta^2)}\big),\\
E\Big[\sin^2\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]&=\tfrac{1}{2}\big(1-e^{-2(i\delta_\theta^2+\delta_\beta^2)}\big),\\
E\Big[\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]&=0,
\end{aligned}\tag{9}
$$
where Equation (9) gives the expected values of the trigonometric terms in $\theta$ and $\beta$. Taking the expectations of $\tilde x_n$ and $\tilde y_n$, respectively, and substituting Equation (9) into the corresponding equations for simplification, the expectations are acquired as follows:
$$
\mu_t(\bar\theta,\bar\beta,\bar v)=
\begin{bmatrix} E[\tilde x_n]\\ E[\tilde y_n]\end{bmatrix}=
\begin{bmatrix}
\sum_{i=1}^{n}\bar v_i\cos\big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\big)\big(e^{-(i\delta_\theta^2+\delta_\beta^2)/2}-1\big)\\[4pt]
\sum_{i=1}^{n}\bar v_i\sin\big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\big)\big(e^{-(i\delta_\theta^2+\delta_\beta^2)/2}-1\big)
\end{bmatrix}.\tag{10}
$$
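The closed forms in Equation (9) that underlie this expectation can be spot-checked by sampling. The sketch below (with deliberately exaggerated, illustrative noise levels so the deviation from 1 is visible) compares the empirical moments of a Gaussian angle sum against the analytic values, assuming NumPy.

```python
import numpy as np

# Sampling check of Equation (9): for X ~ N(0, s), s = i*d_theta^2 + d_beta^2,
#   E[cos X]   = exp(-s/2)
#   E[cos^2 X] = (1 + exp(-2s)) / 2
#   E[sin^2 X] = (1 - exp(-2s)) / 2
rng = np.random.default_rng(1)
i, d_theta, d_beta = 50, 0.05, 0.05          # illustrative step index and std devs
s = i * d_theta**2 + d_beta**2               # variance of sum(theta~) + beta~
x = rng.normal(0.0, np.sqrt(s), 1_000_000)   # samples of the angle-error sum

emp_cos = np.cos(x).mean()
emp_cos2 = (np.cos(x) ** 2).mean()
emp_sin2 = (np.sin(x) ** 2).mean()
```

With one million samples, the empirical values agree with the analytic expressions to about three decimal places.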
To bound the second-order moment of the cumulative error, the Cauchy–Schwarz inequality is utilized:
$$
\Big(\sum_{i=1}^{n}x_iy_i\Big)^2\le\Big(\sum_{i=1}^{n}x_i^2\Big)\Big(\sum_{i=1}^{n}y_i^2\Big).\tag{11}
$$
Based on the above equation, we have
$$
\begin{aligned}
E\big[(\tilde x_n-E[\tilde x_n])^2\big]={}&E\Bigg[\sum_{i=1}^{n}\bar v_i\Big[\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\Big(\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)-e^{-(i\delta_\theta^2+\delta_\beta^2)/2}\Big)-\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]\\
&\quad+\sum_{i=1}^{n}\tilde v_i\Big[\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)-\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]\Bigg]^2\\
\le{}&2\,E(A_x)+2\,E(B_x),
\end{aligned}\tag{12}
$$
using $(a+b)^2\le 2(a^2+b^2)$.
The above formula bounds the second-order moment of the cumulative error in the X direction. However, it is still not fully simplified. Denoting the two squared sums by $A_x$ and $B_x$, the Cauchy–Schwarz inequality is then utilized again, where
$$
A_x=\Bigg[\sum_{i=1}^{n}\bar v_i\Big[\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\Big(\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)-e^{-(i\delta_\theta^2+\delta_\beta^2)/2}\Big)-\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]\Bigg]^2
$$
and
$$
B_x=\Bigg[\sum_{i=1}^{n}\tilde v_i\Big[\cos\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\cos\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)-\sin\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)\sin\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big]\Bigg]^2.
$$
Similarly, we have
$$
E(A_x)\le n\sum_{i=1}^{n}\bar v_i^2\Big[\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-2(i\delta_\theta^2+\delta_\beta^2)}-\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-2(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big],\tag{13}
$$
$$
E(B_x)\le n\,\delta_v^2\sum_{i=1}^{n}\Big(\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-2(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-2(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big),\tag{14}
$$
using $E[\tilde v_i^2]=\delta_v^2$.
Finally, v a r ( x ˜ n ) becomes
$$
\mathrm{var}(\tilde x_n)\le 2n\sum_{i=1}^{n}\bar v_i^2\Big[\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-2(i\delta_\theta^2+\delta_\beta^2)}-\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-2(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big]+2n\,\delta_v^2\sum_{i=1}^{n}\Big(\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-2(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-2(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big).\tag{15}
$$
$\mathrm{var}(\tilde y_n)$ is calculated in the same way and can be written as:
$$
\mathrm{var}(\tilde y_n)\le 2n\sum_{i=1}^{n}\bar v_i^2\Big[\sin^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-2(i\delta_\theta^2+\delta_\beta^2)}-\sin^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-2(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big]+2n\,\delta_v^2\sum_{i=1}^{n}\Big(\sin^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-2(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-2(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big).\tag{16}
$$
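Equations (10) and (15) can be evaluated directly once a ground-truth trajectory is given. The NumPy sketch below is a hypothetical transcription of these formulas for the X direction; the function name and the straight-line test trajectory are illustrative, not from the paper.

```python
import numpy as np

def true_error_moments_x(theta_bar, beta_bar, v_bar, d_theta, d_beta, d_v):
    """First moment (Eq. 10) and variance upper bound (Eq. 15) of the
    cumulative X error, given a ground-truth trajectory."""
    n = len(v_bar)
    i = np.arange(1, n + 1)
    s = i * d_theta**2 + d_beta**2          # variance of the angle-error sum
    A = np.cumsum(theta_bar) + beta_bar     # true absolute angle at each step
    mean_x = np.sum(v_bar * np.cos(A) * (np.exp(-s / 2) - 1.0))
    t_a = (np.cos(A) ** 2 * np.exp(-2 * s) - np.cos(A) ** 2 * np.exp(-s)
           - np.exp(-2 * s) / 2 + 0.5)      # bracket of Eq. (15), first sum
    t_b = np.cos(A) ** 2 * np.exp(-2 * s) - np.exp(-2 * s) / 2 + 0.5
    var_x = 2 * n * np.sum(v_bar**2 * t_a) + 2 * n * d_v**2 * np.sum(t_b)
    return mean_x, var_x

# straight-line example with the Table 2 noise levels
mean_x, var_x = true_error_moments_x(np.zeros(50), np.zeros(50), np.ones(50),
                                     0.0005, 0.005, 0.01)
```

For this straight-line example the bias is small and negative, consistent with the factor $e^{-s/2}-1<0$ in Equation (10).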

2.3. Error Statistics in Practice

Equations (10), (15) and (16) are theoretically derived representations of the error statistics. However, they are conditioned on the ground truth. Hence, to extend their use in practice, the expected values must be conditioned on the noisy measurements instead:
$$
E[\mu_t\mid\theta^m,\beta^m,v^m]=\mu_m,\tag{17}
$$
$$
E[\mathrm{var}(\tilde x_n)\mid\theta^m,\beta^m,v^m]=\mathrm{var}(\tilde x_n^m),\tag{18}
$$
$$
E[\mathrm{var}(\tilde y_n)\mid\theta^m,\beta^m,v^m]=\mathrm{var}(\tilde y_n^m).\tag{19}
$$
By expanding the above equations with Equation (1), we have
$$
\begin{aligned}
E[\tilde x_n^m]&=E\big[E[\tilde x_n]\mid\theta^m,\beta^m,v^m\big]\\
&=E\Big[\sum_{i=1}^{n}(v_i^m-\tilde v_i)\cos\Big(\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big)-\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big)\big(e^{-(i\delta_\theta^2+\delta_\beta^2)/2}-1\big)\Big]\\
&=\sum_{i=1}^{n}v_i^m\,E\Big[\cos\Big(\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big)-\Big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\Big)\Big)\Big]\big(e^{-(i\delta_\theta^2+\delta_\beta^2)/2}-1\big)\\
&=\sum_{i=1}^{n}v_i^m\big(e^{-(i\delta_\theta^2+\delta_\beta^2)}-e^{-(i\delta_\theta^2+\delta_\beta^2)/2}\big)\cos\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big).
\end{aligned}\tag{20}
$$
Meanwhile, the second-order moment of the cumulative error has also been calculated with the Cauchy–Schwarz inequality as follows:
$$
\begin{aligned}
\mathrm{var}(\tilde x_n^m)\le{}&E\Big[2n\sum_{i=1}^{n}\bar v_i^2\Big(\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-2(i\delta_\theta^2+\delta_\beta^2)}-\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-2(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big)\,\Big|\,\theta^m,\beta^m,v^m\Big]\\
&+E\Big[2n\,\delta_v^2\sum_{i=1}^{n}\Big(\cos^2\Big(\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i\Big)e^{-2(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-2(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big)\,\Big|\,\theta^m,\beta^m,v^m\Big],
\end{aligned}\tag{21}
$$
where $\bar v_i^2=(v_i^m-\tilde v_i)^2\le 2\big((v_i^m)^2+\tilde v_i^2\big)$ and $\sum_{j=1}^{i}\bar\theta_j+\bar\beta_i=\big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\big)-\big(\sum_{j=1}^{i}\tilde\theta_j+\tilde\beta_i\big)$.
Then, continuing to simplify v a r ( x ˜ n m ) , we can have:
$$
\begin{aligned}
\mathrm{var}(\tilde x_n^m)\le{}&4n\sum_{i=1}^{n}\big[(v_i^m)^2+\delta_v^2\big]\Big[\cos^2\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big)e^{-4(i\delta_\theta^2+\delta_\beta^2)}-\cos^2\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big)e^{-3(i\delta_\theta^2+\delta_\beta^2)}\\
&\qquad-\frac{e^{-4(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{e^{-3(i\delta_\theta^2+\delta_\beta^2)}}{2}-\frac{e^{-(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big]\\
&+2n\,\delta_v^2\sum_{i=1}^{n}\Big(\cos^2\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big)e^{-4(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-4(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big).
\end{aligned}\tag{22}
$$
Equations (20) and (22) are the expectation and variance bound of the cumulative error in the X direction, respectively. In the same manner, the corresponding values for $\tilde y_n$ are calculated:
$$
E[\tilde y_n^m]=\sum_{i=1}^{n}v_i^m\big(e^{-(i\delta_\theta^2+\delta_\beta^2)}-e^{-(i\delta_\theta^2+\delta_\beta^2)/2}\big)\sin\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big),\tag{23}
$$
$$
\begin{aligned}
\mathrm{var}(\tilde y_n^m)\le{}&4n\sum_{i=1}^{n}\big[(v_i^m)^2+\delta_v^2\big]\Big[\sin^2\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big)e^{-4(i\delta_\theta^2+\delta_\beta^2)}-\sin^2\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big)e^{-3(i\delta_\theta^2+\delta_\beta^2)}\\
&\qquad-\frac{e^{-4(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{e^{-3(i\delta_\theta^2+\delta_\beta^2)}}{2}-\frac{e^{-(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big]\\
&+2n\,\delta_v^2\sum_{i=1}^{n}\Big(\sin^2\Big(\sum_{j=1}^{i}\theta_j^m+\beta_i^m\Big)e^{-4(i\delta_\theta^2+\delta_\beta^2)}-\frac{e^{-4(i\delta_\theta^2+\delta_\beta^2)}}{2}+\frac{1}{2}\Big).
\end{aligned}\tag{24}
$$
Although these expressions remain complicated, they depend only on the noisy measurements and are therefore practically useful.
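A possible NumPy transcription of Equations (20), (22), (23) and (24) is sketched below. The function name is hypothetical and the formulas follow the reconstruction above, so this should be read as an illustration rather than a reference implementation.

```python
import numpy as np

def measured_moments(theta_m, beta_m, v_m, d_theta, d_beta, d_v):
    """Moments of the cumulative error conditioned only on the noisy
    measurements (Eqs. 20, 22, 23 and 24)."""
    n = len(v_m)
    i = np.arange(1, n + 1)
    s = i * d_theta**2 + d_beta**2          # variance of the angle-error sum
    A = np.cumsum(theta_m) + beta_m         # measured absolute angle per step
    e1, e3, e4 = np.exp(-s), np.exp(-3 * s), np.exp(-4 * s)
    bias = v_m * (e1 - np.exp(-s / 2))      # shared factor of Eqs. 20 and 23
    mean_x = np.sum(bias * np.cos(A))
    mean_y = np.sum(bias * np.sin(A))

    def var_bound(c2):                      # c2 = cos^2(A) for X, sin^2(A) for Y
        t1 = c2 * e4 - c2 * e3 - e4 / 2 + e3 / 2 - e1 / 2 + 0.5
        t2 = c2 * e4 - e4 / 2 + 0.5
        return 4 * n * np.sum((v_m**2 + d_v**2) * t1) + 2 * n * d_v**2 * np.sum(t2)

    return mean_x, mean_y, var_bound(np.cos(A) ** 2), var_bound(np.sin(A) ** 2)

# straight-line measurement sequence with the Table 2 noise levels
mx, my, vx, vy = measured_moments(np.zeros(50), np.zeros(50), np.ones(50),
                                  0.0005, 0.005, 0.01)
```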

3. Experimental Results

In this section, the error expectation and variance are evaluated using Monte Carlo simulation. For autonomous driving scenarios, the well-known KITTI dataset [26] is utilized, which offers pose measurements in urban applications. In the KITTI dataset, 30 different GPS/IMU values are stored in text files, including altitude, global positioning, speed, acceleration, angular rate, accuracy, and satellite information. The Monte Carlo simulation is conducted by selecting the odometry-related parameters in the dataset, such as forward speed, heading angle, and angular speed. Meanwhile, Gaussian white noise is added manually to emulate the relative noisy measurements obtained in an actual scene. As shown in Table 2, the standard deviations of the velocity, slip-angle, and heading-angle noise are set to 0.01 m/s, 0.005 rad, and 0.0005 rad, respectively. Figure 2 exhibits four original trajectories measured by the GPS/IMU system.
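The noise-injection procedure described above can be sketched as follows; the straight-line ground-truth sequence below stands in for a KITTI trajectory and is purely illustrative.

```python
import numpy as np

# Monte Carlo harness: perturb a ground-truth odometry sequence with the
# Table 2 noise levels and estimate the empirical moments of the
# cumulative X error.
rng = np.random.default_rng(3)
d_v, d_beta, d_theta = 0.01, 0.005, 0.0005   # std devs from Table 2
n, runs = 200, 2000
theta_bar = np.zeros(n)                      # ground-truth heading increments
beta_bar = np.zeros(n)                       # ground-truth slip angles
v_bar = np.ones(n)                           # ground-truth distance per step
x_bar = np.sum(v_bar * np.cos(np.cumsum(theta_bar) + beta_bar))

errs = np.empty(runs)
for k in range(runs):
    th = theta_bar + rng.normal(0.0, d_theta, n)
    be = beta_bar + rng.normal(0.0, d_beta, n)
    v = v_bar + rng.normal(0.0, d_v, n)
    errs[k] = np.sum(v * np.cos(np.cumsum(th) + be)) - x_bar   # x~_n per run

emp_mean, emp_var = errs.mean(), errs.var()
```

The empirical mean and variance computed this way are what the estimated curves in the following figures are compared against.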
During the experiment, the statistical properties of the cumulative error for different trajectories are calculated with the following steps (here, only concerning the first moment in the X direction):
$$
\begin{aligned}
E(\tilde x_1^m)&=v_1^m\big(e^{-(\delta_\theta^2+\delta_\beta^2)}-e^{-(\delta_\theta^2+\delta_\beta^2)/2}\big)\cos\big(\theta_1^m+\beta_1^m\big),\\
E(\tilde x_2^m)&=E(\tilde x_1^m)+v_2^m\big(e^{-(2\delta_\theta^2+\delta_\beta^2)}-e^{-(2\delta_\theta^2+\delta_\beta^2)/2}\big)\cos\Big(\sum_{j=1}^{2}\theta_j^m+\beta_2^m\Big),\\
&\ \,\vdots\\
E(\tilde x_n^m)&=E(\tilde x_{n-1}^m)+v_n^m\big(e^{-(n\delta_\theta^2+\delta_\beta^2)}-e^{-(n\delta_\theta^2+\delta_\beta^2)/2}\big)\cos\Big(\sum_{j=1}^{n}\theta_j^m+\beta_n^m\Big).
\end{aligned}\tag{25}
$$
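The recursion above maps directly onto an online update. In the sketch below, the measurement stream is synthetic and the function name is illustrative; each incoming measurement is folded into the running expectation without revisiting earlier steps.

```python
import numpy as np

def update_mean_x(prev, n, theta_sum_m, beta_n_m, v_n_m, d_theta, d_beta):
    """One step of the recursion: fold measurement n into the running
    expectation of the cumulative X error."""
    s = n * d_theta**2 + d_beta**2          # variance of the angle-error sum
    return prev + v_n_m * (np.exp(-s) - np.exp(-s / 2)) * np.cos(theta_sum_m + beta_n_m)

# feed a synthetic measurement stream one step at a time, as a vehicle would
rng = np.random.default_rng(2)
d_theta, d_beta = 0.0005, 0.005
mean_x, theta_sum = 0.0, 0.0
for n in range(1, 101):
    theta_sum += rng.normal(0.0, d_theta)   # running sum of heading measurements
    mean_x = update_mean_x(mean_x, n, theta_sum, rng.normal(0.0, d_beta),
                           1.0, d_theta, d_beta)
```

Because each increment carries the factor $e^{-s}-e^{-s/2}<0$, the estimated bias of a forward drive is strictly negative and grows in magnitude with the step count.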
Figures 3–10 illustrate the Monte Carlo results. Figures 3–6 show the expectation of the cumulative error in the X and Y directions, and Figures 7–10 show the corresponding variances. In Figures 3–6, the blue and red lines represent the proposed model and the ground truth, respectively; most estimated results are approximately equal to the true cumulative error obtained by Monte Carlo simulation. However, the estimated variance deviates from the ground truth, which is caused by high-order nonlinear transformations.
Regarding trajectory 1 (Figure 2a), Figure 3 and Figure 7 show the expectation and variance of its cumulative error, respectively. From Figure 2a, the vehicle moves with a round-trip trajectory in the X direction and multiple circles in the Y direction. Hence, the expectation of its cumulative error in the X direction first decreases and then increases, whereas the value in the Y direction increases and decreases repeatedly. The numerical change in each direction is consistent with the movement. However, nonlinear changes lead to inaccurate estimates in the variance estimation process. Figure 4 and Figure 8 show the expectation and variance of the cumulative error for trajectory 2 (Figure 2b). It is observed that the vehicle makes multiple round-trip movements in both directions, and the number of numerical changes is consistent with the number of round trips. Similarly, trajectories 3 and 4 (Figure 2c,d) are also evaluated in the experiment. However, the variance estimates again deviate owing to the nonlinear changes.
In summary, the experimental results demonstrate that the proposed model fits the first-order statistical characteristics well. Hence, it provides a mathematical solution for autonomous driving in GPS denied environments.

4. Conclusions

Localization uncertainty estimation from odometry measurements in the urban environment has great potential for autonomous driving, especially in GPS denied environments. In this paper, the localization error is modeled by approximating its first- and second-order moments based on the characteristics of odometry sensors. In contrast to related work, the proposed approach recursively estimates both the bias and the uncertainty without ground truth. Numerical results with Monte Carlo simulations demonstrate the validity of the proposed formula. Future work will focus on applying the proposed approach to 6-DOF information from the odometry sensor.

Author Contributions

F.Z. provided the initial motivation and ideas. Z.W. conducted data analysis and realized the design of the simulation experiment. Y.Z. and L.C. analyzed the experimental results. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) grant number 52171322.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository. The data presented in this study are openly available in KITTI at doi:10.1109.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GPS   Global Positioning System
IID   Independent and Identically Distributed

References

1. Levinson, J.; Askeland, J.; Becker, J.; Dolson, J.; Held, D.; Kammel, S.; Kolter, J.Z.; Langer, D.; Pink, O.; Pratt, V.; et al. Towards fully autonomous driving: Systems and algorithms. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden, Germany, 5–9 June 2011; pp. 163–168.
2. Ge, Y.; Wang, Y.; Yu, R.; Han, Q.; Chen, Y. Research on test method of autonomous driving based on digital twin. In Proceedings of the 2019 IEEE Vehicular Networking Conference (VNC), Los Angeles, CA, USA, 4–6 December 2019; pp. 1–2.
3. Chan, S.H.; Wu, P.T.; Fu, L.C. Robust 2D indoor localization through laser SLAM and visual SLAM fusion. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1263–1268.
4. Duan, Y.; Lam, K.Y.; Lee, V.C.; Nie, W.; Liu, K.; Li, H.; Xue, C.J. Data rate fingerprinting: A WLAN-based indoor positioning technique for passive localization. IEEE Sens. J. 2019, 19, 6517–6529.
5. Lee, B.H.; Song, J.H.; Im, J.H.; Im, S.H.; Heo, M.B.; Jee, G.I. GPS/DR error estimation for autonomous vehicle localization. Sensors 2015, 15, 20779–20798.
6. Yu, C.; Liu, Z.; Liu, X.J.; Xie, F.; Yang, Y.; Wei, Q.; Fei, Q. DS-SLAM: A semantic visual SLAM towards dynamic environments. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1168–1174.
7. Sumikura, S.; Shibuya, M.; Sakurada, K. OpenVSLAM: A versatile visual SLAM framework. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 2292–2295.
8. Chen, M.; Tang, Y.; Zou, X.; Huang, Z.; Zhou, H.; Chen, S. 3D global mapping of large-scale unstructured orchard integrating eye-in-hand stereo vision and SLAM. Comput. Electron. Agric. 2021, 187, 106237.
9. Zhen, W.; Zeng, S.; Soberer, S. Robust localization and localizability estimation with a rotating laser scanner. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 6240–6245.
10. Poulose, A.; Kim, J.; Han, D.S. A sensor fusion framework for indoor localization using smartphone sensors and Wi-Fi RSSI measurements. Appl. Sci. 2019, 9, 4379.
11. Poulose, A.; Han, D.S. Hybrid indoor localization using IMU sensors and smartphone camera. Sensors 2019, 19, 5084.
12. Kim, W.Y.; Seo, H.I.; Seo, D.H. Nine-Axis IMU-based Extended inertial odometry neural network. Expert Syst. Appl. 2021, 178, 115075.
13. Alonso, I.P.; Llorca, D.F.F.; Gavilan, M.; Pardo, S.Á.Á.; García-Garrido, M.Á.; Vlacic, L.; Sotelo, M.Á. Accurate global localization using visual odometry and digital maps on urban environments. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1535–1545.
14. Wan, G.; Yang, X.; Cai, R.; Li, H.; Zhou, Y.; Wang, H.; Song, S. Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 4670–4677.
15. Song, H.; Choi, W.; Kim, H. Robust vision-based relative-localization approach using an RGB-depth camera and LiDAR sensor fusion. IEEE Trans. Ind. Electron. 2016, 63, 3725–3736.
16. Rekleitis, I.; Bedwani, J.L.; Gemme, S.; Lamarche, T.; Dupuis, E. Terrain modelling for planetary exploration. In Proceedings of the Fourth Canadian Conference on Computer and Robot Vision, Montreal, QC, Canada, 28–30 May 2007; pp. 243–249.
17. Knuth, J.; Barooah, P. Error Scaling in Position Estimation from Noisy Relative Pose Measurements; Technical Report; Mechanical and Aerospace Engineering, University of Florida: Gainesville, FL, USA, 2011.
18. Liu, G.; Geng, Y.; Pahlavan, K. Effects of calibration RFID tags on performance of inertial navigation in indoor environment. In Proceedings of the 2015 International Conference on Computing, Networking and Communications (ICNC), Garden Grove, CA, USA, 16–19 February 2015; pp. 945–949.
19. Poulose, A.; Eyobu, O.S.; Kim, M.; Han, D.S. Localization error analysis of indoor positioning system based on UWB measurements. In Proceedings of the 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), Zagreb, Croatia, 2–5 July 2019; pp. 84–88.
20. Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Sensors 2020, 20, 4220.
21. Brossard, M.; Barrau, A.; Bonnabel, S. AI-IMU dead-reckoning. IEEE Trans. Intell. Veh. 2020, 5, 585–595.
22. Shit, R.C.; Sharma, S.; Yelamarthi, K.; Puthal, D. AI-Enabled Fingerprinting and Crowdsource-Based Vehicle Localization for Resilient and Safe Transportation Systems. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4660–4669.
23. Choi, J.; Chun, D.; Kim, H.; Lee, H.J. Gaussian YOLOv3: An accurate and fast object detector using localization uncertainty for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 502–511.
24. Grigorescu, S.; Trasnea, B.; Cocias, T.; Macesanu, G. A survey of deep learning techniques for autonomous driving. J. Field Robot. 2020, 37, 362–386.
25. Zhang, F.; Simon, C.; Chen, G.; Buckl, C.; Knoll, A. Cumulative error estimation from noisy relative measurements. In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), The Hague, The Netherlands, 6–9 October 2013; pp. 1422–1429.
26. Kitt, B.; Geiger, A.; Lategahn, H. Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 486–492.
Figure 1. (a) Schematic diagram of the Ackermann model; (b) relationship between the relative pose measurements and the position.
Figure 2. (a–d) The four global trajectories of the vehicle.
Figure 3. The expectation of cumulative error for trajectory (Figure 2a). (a) The expectation of cumulative error in the X direction; (b) The expectation of cumulative error in the Y direction.
Figure 4. The expectation of cumulative error for trajectory (Figure 2b). (a) The expectation of cumulative error in the X direction; (b) The expectation of cumulative error in the Y direction.
Figure 5. The expectation of cumulative error for trajectory (Figure 2c). (a) The expectation of cumulative error in the X direction; (b) The expectation of cumulative error in the Y direction.
Figure 6. The expectation of cumulative error for trajectory (Figure 2d). (a) The expectation of cumulative error in the X direction; (b) The expectation of cumulative error in the Y direction.
Figure 7. The variance of cumulative error for trajectory (Figure 2a). (a) The variance of cumulative error in the X direction; (b) The variance of cumulative error in the Y direction.
Figure 8. The variance of cumulative error for trajectory (Figure 2b). (a) The variance of cumulative error in the X direction; (b) The variance of cumulative error in the Y direction.
Figure 9. The variance of cumulative error for trajectory (Figure 2c). (a) The variance of cumulative error in the X direction; (b) The variance of cumulative error in the Y direction.
Figure 10. The variance of cumulative error for trajectory (Figure 2d). (a) The variance of cumulative error in the X direction; (b) The variance of cumulative error in the Y direction.
Table 1. Notation definitions.

Notation                  Definition              Notation                  Definition
$v$                       velocity                $\bar\theta_n$            true value
$\theta$                  heading angle           $\tilde\theta_n$          measurement error
$\beta$                   slip angle              $\theta_n^m$              noisy measurement
$E(x_n^m)$                expectation             $\mathrm{var}(x_n^m)$     variance
Table 2. Parameters for simulation.

Parameter            Value
$\delta_v$           0.01 m/s
$\delta_\theta$      0.0005 rad
$\delta_\beta$       0.005 rad

Citation: Zhang, F.; Wang, Z.; Zhong, Y.; Chen, L. Localization Error Modeling for Autonomous Driving in GPS Denied Environment. Electronics 2022, 11, 647. https://doi.org/10.3390/electronics11040647
