Article

State Estimation for Cooperative Lateral Vehicle Following Using Vehicle-to-Vehicle Communication

by Wouter Schinkel *,†, Tom van der Sande and Henk Nijmeijer
Department of Mechanical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
* Author to whom correspondence should be addressed.
† Current address: Gemini-Zuid 0.143, P.O. Box 513, 5600 MB Eindhoven, The Netherlands.
Electronics 2021, 10(6), 651; https://doi.org/10.3390/electronics10060651
Submission received: 28 February 2021 / Revised: 8 March 2021 / Accepted: 9 March 2021 / Published: 11 March 2021
(This article belongs to the Special Issue Autonomous Vehicles Technology)

Abstract: A cooperative state estimation framework for automated vehicle applications is presented and demonstrated via simulations; the framework is used to estimate the states of a lead and a following vehicle simultaneously. Recent developments in the field of cooperative driving require novel techniques to ensure accurate and stable vehicle following behavior. Control schemes for the cooperative control of longitudinal and lateral vehicle dynamics generally require vehicle state information about the lead vehicle, which in some cases cannot be accurately measured. Including vehicle-to-vehicle communication in the state estimation process can provide the required input signals for the practical implementation of cooperative control schemes. This study is focused on demonstrating the benefits of using vehicle-to-vehicle communication in the state estimation of a lead and a following vehicle via simulations. The state estimator, which uses a cascaded Kalman filtering process, takes the operating frequencies of different sensors into account in the estimation process. Simulation results for three different driving scenarios demonstrate the benefits of using vehicle-to-vehicle communication as well as the attenuation of measurement noise. Furthermore, in contrast to relying on low frequency measurement data for the input signals of cooperative control schemes, the state estimator provides a state estimate at every sample.

1. Introduction

With the implementation of communication abilities in mass production vehicles [1], the realization of cooperative driving features in commercial vehicles is near. Cooperative driving is a form of automated driving where vehicles exploit communication with each other or with infrastructure [2], via vehicle-to-vehicle (V2V) communication, vehicle-to-infrastructure (V2I) communication, or a combination of both. The most commonly known application of cooperation among road vehicles is cooperative adaptive cruise control (CACC) [3], which is an enhanced version of adaptive cruise control (ACC) using information received through V2V communication. CACC can result in improvements in safety and ride comfort [4], increased traffic throughput [3], and fuel savings up to 20% for commercial vehicles [5].
A wide variety of communication structures and control approaches have been proposed for CACC [2,3], varying from simple uni-directional to more complex multi-directional communication solutions. In addition, cooperation among vehicles can be used for exploration [6], traffic light scheduling [7], and path planning [8]. Longitudinal and lateral cooperative driving is realized via decoupled [9,10,11] and combined approaches [12], where the latter is a preferred design approach because of the coupled longitudinal and lateral vehicle dynamics [12]. Control of lateral vehicle motions can be divided into path following and trajectory following [12]. Path following is a time-independent spatial approach, whereas trajectory following relies on a time-dependent signal, imposing requirements on the velocity and acceleration. Platooning is a term used for vehicles driving together (cooperating), typically in the form of a string of vehicles following each other at a short distance. For platooning, where the short inter-vehicle distance can significantly limit the view [13], a vehicle following approach is preferred [12]. Cooperative driving technologies mostly use a combination of on-board sensor information (e.g., radar) and V2V communicated information. With the use of V2V communication and on-board sensors, preceding vehicles can be accurately followed [14]. It is shown in [15] that the localization accuracy can be improved if more vehicles are equipped with V2V communication. However, a factor limiting the performance of automated driving features is the available hardware [16]. For example, current ACC applications require sensor update rates from 10 Hz to 20 Hz, but future safety applications may require update rates up to 50 Hz [16]. Applying a control approach similar to [12] requires detailed information about state components of the lead vehicle, which is typically not measurable from the following vehicle. In the remainder of this work, the lead vehicle is referred to as the target vehicle and the following vehicle is referred to as the host vehicle. The required state information for control approaches such as the one presented in [12] can be obtained via V2V communication, and an estimator can be used to improve the signal quality and the continuity of the signals [15].
Common dynamic models used for vehicle tracking range from simple kinematic models to more complex models like the single-track model [16]. One of the determining factors of the model complexity is the set of state components that needs to be available for the considered controller. Some of these state components may not be directly measurable and therefore need to be estimated. Bayuwindra [12] presented an orientation error observer based on an extended unicycle model. Ploeg et al. [17] presented a state estimator for the longitudinal acceleration of the preceding vehicle in order to (partly) maintain the favorable string stability properties during persistent loss of communication. A Singer acceleration model, in which the acceleration is described as a first-order system with stochastic disturbances, has been adopted in [17], with position and velocity measurements obtained from a radar sensor. A similar approach is presented in [18], but there an adaptive Kalman filter has been used, improving the performance compared to the Singer acceleration model from [17] via a continuously updated adaptive acceleration mean. Wei et al. [14] presented a longitudinal and lateral vehicle following control approach using only a radar and V2V communication, where the heading, acceleration, and yaw rate are communicated. The host vehicle dynamics are estimated using a single-track model [14]. Ess et al. [19] presented an object detection and tracking method using vision sensors. A kinematic steering model is used to track vehicles, but the low localization accuracy increases the difficulty of generating vehicle trajectories [19]. A survey paper [20] on vision-based vehicle detection and tracking reveals that (extended) Kalman filtering and particle filtering are common approaches to predict vehicle motions from vision data. In addition, the use of Kalman filters in different applications is widely covered in the literature, e.g., the estimation of battery state of charge and capacity [21,22,23]. Finally, variations of a constant acceleration model with yaw rate have been used in [24] for tracking incoming traffic in urban scenarios.
In most sampled estimation frameworks, measurements are considered to be available at every sample. However, in practice this is often not the case, especially in multi-sensor set-ups. In the design of the estimation framework, the different sensor update rates should therefore be taken into account. In the literature, examples can be found of systems where different operating frequencies are present [25,26,27]. In some studies, the issues of multi-rate sensing are discussed in combination with time delays ([28]) and inter-sample measurements ([26]). In [27], a multi-rate set-up is considered in the estimation and control of a vehicle. The multi-rate Kalman filter estimate is only updated with a measurement if a new measurement is available, which in the case of [27] is on a fixed multiple of the sampling time. In between the measurement updates, the estimate is solely based on the prediction step of the Kalman filter [27]. The authors of [29] discuss the fusion of radar and camera measurements; the estimates of parallel Kalman filters for the different sensors are combined via Bayesian estimation. The method presented in [29] could also be used to deal with multi-rate issues. In [30], two fusion methods are discussed: measurement fusion and state fusion. In case of a multi-rate sensor set-up, both methods could be implemented. One method of measurement fusion is based on the augmentation of the measurement vector and the observation matrix [30]. The observation matrix relates the measurements to the system state components. When multiple measurements are available, the measurement vector and observation matrix can be increased in size accordingly [30]. One of the state fusion methods, which is also known as track-to-track fusion, requires multiple estimators to run simultaneously, and when multiple estimates are close together, they can be fused into one state estimate [30]. In [31], the set of measurements considered includes (time-stamped) raw radar and lidar data and bounding boxes obtained through a vision system. The size of the measurement set is therefore time-varying, resulting in a measurement fusion based approach similar to an example discussed in [30]. In [28], a multi-sensor set-up is considered where the Kalman filter operates at a frequency over 50 times higher than the highest measurement frequency. Similarly to the methods discussed in [25,26,30,31], the estimator in [28] only performs a measurement update when a new measurement is available. Another possibility is to introduce a multi-rate estimator as presented in [21]: two decoupled estimators operate at different frequencies, improving the model accuracy and stability.
From the material presented in Section 1 thus far, it can be concluded that the longitudinal and lateral control of platoons is widely investigated. However, in cooperative technologies the inputs are typically assumed to be continuously available and of sufficient quality, which usually is not the case in practical scenarios. Even in papers where the use of V2V communication for state estimation or control is discussed [16,32,33], practical impairments such as limited update rates and high measurement noise are not always included in the analysis.
The goal of this study is to present a cooperative state estimation framework for automotive applications. The study is focused on demonstrating the benefits of using V2V communication in the estimation process via simulations. The vehicle dynamics and (virtual) measurements considered in this study are modeled in a simulation environment. Onboard sensing is used in combination with measurements received through V2V communication to estimate the state of a host and target vehicle via a cascaded Kalman filtering approach. The state estimate resulting from the framework presented in this study can be used in a longitudinal and lateral platooning framework such as presented in [12]. The effect of the communication frequency on the estimation performance is analyzed via a root-mean-square error analysis. For this study perfect vehicle following behavior and a constant time gap are considered. The state estimator is located on the host vehicle, and onboard sensors from the host vehicle are used, such as a radar and odometer, in combination with measurements from a target vehicle received via V2V communication. The effect of different operating frequencies for the different sensors is taken into account in the estimator design. The performance of the state estimator is analyzed via simulations and error analyses, where realistic measurement noise and sensor operating frequencies are used. The framework presented in this study can easily be modified for a different sensor set-up.
The remainder of this study is organized as follows. In Section 2, the methodology is discussed, consisting of the sensor set-up, driving scenario, vehicle and measurement model, state estimator design, and the selection of the estimator setting. The simulation results and discussion are presented in Section 3 followed by the conclusions of this study in Section 4.

2. Methodology

Multi-vehicle state estimation is realized with cascaded Kalman filtering using sensors located on the host vehicle in combination with V2V communicated measurements from the preceding vehicle. First, the sensor set-up considered in this study is discussed. Second, the dynamic model representing the vehicles is discussed, including the available sensor signals. Finally, the design of the state estimator and the selection of its process noise values are discussed in detail.

2.1. Sensor Set-Up

In the i-CAVE project [34] two Renault Twizys have been modified such that autonomous and cooperative driving features can be implemented and tested. Considering the implementation of the framework onto a vehicle set-up in future work, these vehicles define the available sensors and sensor signals considered in this study. The vehicles are shown in Figure 1, where the labels host and target refer to the following vehicle and lead vehicle, respectively. Each vehicle is equipped with its own set of sensors and a WiFi-P module [35], facilitating communication between the two vehicles. An overview of the relevant sensors and the respective operating frequencies is listed in Table 1. Furthermore, the real-time operating system of both vehicles is running at a base frequency of 100 Hz .

2.2. Dynamic Model and Measurements

The state estimator presented in this study is based on a Kalman filter framework [37]. Such a framework requires a model of the vehicle dynamics and the considered measurements. First, the selected model and the equations of motion in continuous time are presented, followed by the transition into state-space formulation. Next, the discretization of the model is discussed such that it can be used in a sampled application.

2.2.1. Continuous Time Equations of Motion and System Output

The dynamics of each vehicle are modeled using a unicycle model [12,38]. A schematic drawing of the two unicycle models including the relevant degrees of freedom is shown in Figure 2. The subscripts t and h represent the target and host vehicle, respectively. Furthermore, the X- and Y-coordinates define the position in a global frame, θ is the global heading angle and v is the longitudinal vehicle velocity. The global X- and Y-coordinates considered in this study correspond with the east and north directions of an earth-centered, earth-fixed coordinate system, respectively. The global heading angle is the angle between the X- and Y-axis and is indicated in Figure 2. The global X- and Y-positions are defined as
$\dot{X}_i = v_i \cos\theta_i, \quad i = \{t, h\}$  (1)
$\dot{Y}_i = v_i \sin\theta_i, \quad i = \{t, h\}$  (2)
respectively. Note that the lateral velocity is assumed to be zero. The longitudinal acceleration a i of the vehicle is
$a_i = \dot{v}_i, \quad i = \{t, h\}$  (3)
and the derivative of the heading angle θ i is the yaw rate θ ˙ i , i = t , h . Finally, the longitudinal acceleration and yaw acceleration of the unicycle are considered constant, giving
$\dot{a}_i = \ddot{v}_i = 0 + w_{a,i}, \quad i = \{t, h\}$  (4)
$\ddot{\theta}_i = 0 + w_{\dot{\theta},i}, \quad i = \{t, h\}$  (5)
where w θ ˙ and w a are zero-mean Gaussian white noise on the yaw acceleration and longitudinal jerk. These noise terms will be used in the Kalman filter, which is discussed in Section 2.3. The total system dynamics of a single unicycle is thus described with (1) to (5).
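To make the model concrete, the following is a minimal sketch (not the authors' code) of the continuous-time unicycle dynamics (1) to (5) for one vehicle; the state ordering and function name are illustrative choices.

```python
# Minimal sketch of the continuous-time unicycle dynamics (1)-(5) for one
# vehicle i in {t, h}. Assumed state ordering: [X, Y, theta, v, a, theta_dot].
import numpy as np

def unicycle_derivatives(state, w_a=0.0, w_theta_dot=0.0):
    """Time derivative of the unicycle state; w_a and w_theta_dot are the
    zero-mean Gaussian process noise terms on the jerk and yaw acceleration."""
    X, Y, theta, v, a, theta_dot = state
    return np.array([
        v * np.cos(theta),  # X_dot, Eq. (1)
        v * np.sin(theta),  # Y_dot, Eq. (2)
        theta_dot,          # derivative of the heading angle (yaw rate)
        a,                  # v_dot = a, Eq. (3)
        w_a,                # a_dot = 0 + w_a, Eq. (4)
        w_theta_dot,        # theta_ddot = 0 + w_theta_dot, Eq. (5)
    ])
```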
The system output (measurements) is determined by the available sensors, which are listed in Table 1. In this study, indirect measurements such as the GPS velocity are not considered, with the exception of the heading angle obtained from the GPS sensor. The heading angle is required for observability of the system, which is elaborated on at the end of Section 2.2.2. Except for the measurements of the radar sensor, all measurements are direct state measurements. The IMU sensor measures the longitudinal acceleration and the yaw rate of the vehicle. The odometer measures the motor speed, which is translated to a vehicle speed using a fixed gear ratio and the effective rolling radius, and is therefore also considered as a direct state measurement. The GPS sensor measures the global X- and Y-position and the global heading angle; the latter is pre-filtered. The position measurements of the GPS sensor correspond with the centers of gravity of the respective vehicles, and the initial position of the host vehicle is used to null the measurements, thereby creating the origin $\left(X_h(t=0),\, Y_h(t=0)\right) := (0, 0)$. The radar sensor is located on the front of the host vehicle and measures the relative speed and distance to the rear bumper of the target vehicle. The relative distance r is defined as
$r = \sqrt{\left(X_t - X_h\right)^2 + \left(Y_t - Y_h\right)^2} - L$  (6)
where L is the length of the vehicle and the X- and Y-positions of the host and target vehicle correspond with their centers of gravity. Note that the expression in (6) is a simplification where the difference in heading angle between the two vehicles is ignored. The effects of this simplification can be included in the measurement noise, which also accounts for the uncertainty of the measurement location on the target vehicle’s bumper. The relative speed $\dot{r}$ is defined as
$\dot{r} = \left(v_t - v_h\right)\cos\theta_r$  (7)
where θ r is the azimuth angle measured by the radar. The relative speed is derived from a constant radius cornering assumption where the vehicles drive with the same curvature and have no relative lateral offset with respect to the desired trajectory. In this study the azimuth angle θ r is not considered as a measurement because the measurement quality cannot be defined [39].
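As an illustration of the radar model, a short sketch of (6) and (7) is given below; the function signature and variable names are assumptions made for this example.

```python
# Illustrative sketch of the radar measurement model (6)-(7).
import numpy as np

def radar_measurements(X_t, Y_t, v_t, X_h, Y_h, v_h, theta_r, L):
    """Relative distance and relative speed as seen by the host radar."""
    r = np.hypot(X_t - X_h, Y_t - Y_h) - L     # Eq. (6), heading difference ignored
    r_dot = (v_t - v_h) * np.cos(theta_r)      # Eq. (7), constant-radius assumption
    return r, r_dot
```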

2.2.2. State-Space Formulation

In order to adopt a state-space notation as required for practical cascaded Kalman filtering, a cascading system description is considered, allowing us to transform the system dynamics into state-space form. The schematics of the cascading systems approach is shown in Figure 3, where w 1 and w 2 are process noise vectors, y 1 and y 2 measurement vectors and x 1 a state vector. It should be noted that adopting the cascading system description influences the propagation of process noise through the system, hence the term cascaded Kalman filtering. The process noise w 1 entering system 1, the yaw dynamics, influences the state x 1 and measurements y 1 . As shown in Figure 3, the state x 1 enters system 2, the unicycle dynamics, resulting in an indirect injection of the process noise w 1 filtered by the dynamics of system 1 into system 2. Therefore, the process noise term w 1 is propagated to system 2 via system 1, influencing system 2 as filtered noise. The system and output dynamics are described via the linear functions f 1 , f 2 , h 1 and non-linear function h 2 according to
$\dot{x}_j = f_j\left(x_j, u_j, w_j\right), \quad j = 1, 2$  (8)
$y_j = h_j\left(x_j, u_j, v_j\right), \quad j = 1, 2$  (9)
where x is the state vector, u the input vector, w the process noise vector, y the measurement vector, v the measurement noise vector and j refers to the system number corresponding with Figure 3. The state vectors
$x_1 = \begin{bmatrix} \theta_t & \dot{\theta}_t & \theta_h & \dot{\theta}_h \end{bmatrix}^T,$  (10)
$x_2 = \begin{bmatrix} X_t & Y_t & v_t & a_t & X_h & Y_h & v_h & a_h \end{bmatrix}^T,$  (11)
measurement vectors
$y_1 = \begin{bmatrix} \theta_t & \dot{\theta}_t & \theta_h & \dot{\theta}_h \end{bmatrix}^T,$  (12)
$y_2 = \begin{bmatrix} X_t & Y_t & v_t & a_t & \dot{\theta}_t & X_h & Y_h & v_h & a_h & \dot{\theta}_h & r & \dot{r} \end{bmatrix}^T$  (13)
and process noise vectors
$w_1 = \begin{bmatrix} w_{\dot{\theta},t} & w_{\dot{\theta},h} \end{bmatrix}^T,$  (14)
$w_2 = \begin{bmatrix} w_{a,t} & w_{a,h} \end{bmatrix}^T$  (15)
are introduced. In state-space form, the system dynamics are described as
$\dot{x}_j = A_j x_j + B_j u_j + E_j w_j, \quad j = 1, 2$  (16)
$y_j = C_j x_j + D_j u_j + J_j v_j, \quad j = 1, 2$  (17)
where A is the system matrix, B the input matrix and E the process noise matrix. Furthermore, C is the observation matrix, D the observed input matrix, and J and v are the measurement noise matrix and vector. The explicit expressions for A, B, u and E are presented in Appendix A.1. It should be noted that via the adoption of the cascading systems approach (Figure 3), the system matrix $A_2$ representing the unicycle dynamics is a function of the state vector $x_1$, giving $A_2 := A_2(x_1)$.
The system output is defined in (12) and (13) using the relative distance and speed from (6) and (7). The matrices C, D and J are obtained by calculating the Jacobian matrices of $h(x, u, v)$ with respect to x, u and v. The measurement noise vectors are described as
$v_1 = \begin{bmatrix} v_{\theta_t} & v_{\dot{\theta}_t} & v_{\theta_h} & v_{\dot{\theta}_h} \end{bmatrix}^T,$  (18)
$v_2 = \begin{bmatrix} v_{X_t} & v_{Y_t} & v_{v_t} & v_{a_t} & v_{X_h} & v_{Y_h} & v_{v_h} & v_{a_h} & v_r & v_{\dot{r}} \end{bmatrix}^T.$  (19)
Again, the explicit expressions for C , D , J and v are presented in Appendix A.1.
As mentioned in Section 2.2.1, the heading angles obtained from pre-filtered measurement data are included in the measurement vector y 1 to ensure observability. A system is observable if for any initial state and any final time the initial state can be uniquely determined from the measured input and output signals [37]. Whether a system is observable can be determined by analyzing the rank of the observability matrix O . The observability matrix is defined as
$O_j = \begin{bmatrix} C_j \\ C_j A_j \\ \vdots \\ C_j A_j^{\,n-1} \end{bmatrix}$  (20)
and observability is achieved if rank O j = n j where n j is the length of the state vector x j [37]. Since y 1 and y 2 contain all state components described in x 1 and x 2 , respectively, it is evident that both system 1 and system 2 are observable ( rank C j = n j ).
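For completeness, a small sketch of the rank test on the observability matrix (20) is shown below; the pair (A, C) is a placeholder for either of the two subsystem pairs.

```python
# Small sketch of the observability rank test of (20).
import numpy as np

def is_observable(A, C):
    n = A.shape[0]
    # Stack C, C A, C A^2, ..., C A^(n-1) and check the rank.
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
    return np.linalg.matrix_rank(O) == n
```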

2.2.3. Model Discretization

Finally, because the vehicles are operated using a discrete-time (sampled) platform, the expressions from (16) and (17) need to be discretized. From here on, all equations are expressed in discrete time and are therefore a function of the sample number k. The state-space expressions from (16) and (17) are expressed in discrete time as
$x_{j,k+1} = F_{j,k} x_{j,k} + G_{j,k} u_{j,k} + M_{j,k} w_{j,k}, \quad j = 1, 2$  (21)
$z_{j,k} = H_{j,k} x_{j,k} + L_{j,k} u_{j,k} + N_{j,k} v_{j,k}, \quad j = 1, 2$  (22)
In the expressions in (21) and (22) F k is the system matrix, G k the input matrix, M k the process noise matrix, H k the observation matrix, L k the observed input matrix and N k the measurement noise matrix. The discrete-time form of the relevant matrices are obtained according to [37]
$F_k = \left. e^{A T_s} \right|_{t = k T_s}, \quad G_k = \left. \left( \int_{\tau=0}^{T_s} e^{A\tau}\, d\tau \right) B \right|_{t = k T_s}, \quad M_k = \left. \left( \int_{\tau=0}^{T_s} e^{A\tau}\, d\tau \right) E \right|_{t = k T_s}, \quad H_k = \left. C \right|_{t = k T_s}, \quad L_k = \left. D \right|_{t = k T_s}, \quad N_k = \left. J \right|_{t = k T_s}.$  (23)
The matrices in (23) are evaluated at the current time t = k T s where T s is the sampling time of the estimator (in this study T s = 0.010 s ). The discrete-time state vector, input vector and measurement vector are equivalent to their continuous-time counterparts, giving
$x_{j,k} = x_j(t = k T_s), \quad u_{j,k} = u_j(t = k T_s), \quad w_{j,k} = w_j(t = k T_s), \quad z_{j,k} = y_j(t = k T_s), \quad v_{j,k} = v_j(t = k T_s)$  (24)
for j = 1 , 2 , which evidently results in
$x_{1,k} = \begin{bmatrix} \theta_{t,k} & \dot{\theta}_{t,k} & \theta_{h,k} & \dot{\theta}_{h,k} \end{bmatrix}^T,$  (25)
$x_{2,k} = \begin{bmatrix} X_{t,k} & Y_{t,k} & v_{t,k} & a_{t,k} & X_{h,k} & Y_{h,k} & v_{h,k} & a_{h,k} \end{bmatrix}^T,$  (26)
$z_{1,k} = \begin{bmatrix} \theta_{t,k} & \dot{\theta}_{t,k} & \theta_{h,k} & \dot{\theta}_{h,k} \end{bmatrix}^T,$  (27)
$z_{2,k} = \begin{bmatrix} X_{t,k} & Y_{t,k} & v_{t,k} & a_{t,k} & X_{h,k} & Y_{h,k} & v_{h,k} & a_{h,k} & r_k & \dot{r}_k \end{bmatrix}^T.$  (28)
The algebraic expressions for F k , G k , M k , H k , L k and N k obtained by using (23) are presented in Appendix A.2.
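One possible way to evaluate (23) numerically is the standard augmented-matrix (matrix exponential) approach sketched below; this is an illustrative implementation choice, not necessarily the one used by the authors.

```python
# Illustrative zero-order-hold discretization of (23) via an augmented matrix
# exponential; A, B, E are placeholder continuous-time matrices.
import numpy as np
from scipy.linalg import expm

def discretize(A, B, E, Ts):
    """Return F_k, G_k, M_k for one evaluation instant t = k*Ts."""
    n, m, p = A.shape[0], B.shape[1], E.shape[1]
    # exp([[A, B, E], [0, 0, 0]] * Ts) contains e^{A Ts} in the top-left block
    # and (int_0^Ts e^{A tau} d tau) [B E] in the top-right blocks.
    aug = np.zeros((n + m + p, n + m + p))
    aug[:n, :n] = A
    aug[:n, n:n + m] = B
    aug[:n, n + m:] = E
    aug_d = expm(aug * Ts)
    F = aug_d[:n, :n]
    G = aug_d[:n, n:n + m]
    M = aug_d[:n, n + m:]
    return F, G, M
```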

2.3. State Estimator Design

The state estimation of the system consisting of the host and target vehicle is realized using a cascaded Kalman filter [37,40,41]. A Kalman filter is an estimator where the gain is chosen such that the estimation error covariance is minimized. In this section, the general equations of a Kalman filter are presented in combination with the amendments following from the cascading systems formulation. In addition, the use of multi-rate sensors in the estimation process is discussed.

2.3.1. General Kalman Filtering Process

The Kalman filter considered in this study is based on a discrete-time model representing the system dynamics, which in this case is the combination of a host and target vehicle. The dynamics from (21) and (22) are modeled with
$\hat{x}_{j,k+1} = F_{j,k}\hat{x}_{j,k} + G_{j,k} u_{j,k}, \quad j = 1, 2$  (29)
$\hat{z}_{j,k} = H_{j,k}\hat{x}_{j,k} + L_{j,k} u_{j,k}, \quad j = 1, 2$  (30)
where $\hat{x}_k$ is the state estimate and $\hat{z}_k$ the estimated measurement. Note that from the cascading systems approach it follows that $F_{2,k} := F_{2,k}(\hat{x}_{1,k})$. Because the model presented in Section 2.2 does not include any inputs, it is evident that $u_k = 0$, allowing us to simplify the estimation model to
$\hat{x}_{j,k+1} = F_{j,k}\hat{x}_{j,k}, \quad j = 1, 2$  (31)
$\hat{z}_{j,k} = H_{j,k}\hat{x}_{j,k}, \quad j = 1, 2.$  (32)
Generally, a Kalman filtering process consists of two steps: a prediction step and a correction step. Through the use of the cascading systems approach, four steps are considered in this study. The cascaded Kalman filtering process for a single time step is summarized with the following four steps [37]:
(1)
State estimation and update of the state estimation covariance matrix for system 1, the yaw dynamics
The a priori state estimate x ^ 1 , k + 1 | k is obtained via the modeled system dynamics from (31), resulting in
$\hat{x}_{1,k+1|k} = F_{1,k}\hat{x}_{1,k|k},$  (33)
after which the state estimation covariance P 1 , k | k is updated according to
$P_{1,k+1|k} = F_{1,k} P_{1,k|k} F_{1,k}^T + M_{1,k} Q_{1,k} M_{1,k}^T,$  (34)
resulting in the a priori state estimation covariance P 1 , k + 1 | k . In (34) Q 1 , k is the process noise covariance.
(2)
Measurement correction and update of the state estimation covariance matrix for system 1 (only executed when a new measurement is available in z 1 , k )
The estimated output is obtained via
$\hat{z}_{1,k} = H_{1,k}\hat{x}_{1,k+1|k}$  (35)
followed by a calculation of the Kalman gain K 1 , k according to
$K_{1,k} = P_{1,k+1|k} H_{1,k}^T \left( H_{1,k} P_{1,k+1|k} H_{1,k}^T + N_{1,k} R_{1,k} N_{1,k}^T \right)^{-1}.$  (36)
Using the Kalman gain, the a posteriori state estimation covariance P 1 , k + 1 | k + 1 is obtained using
$P_{1,k+1|k+1} = \left( I - K_{1,k} H_{1,k} \right) P_{1,k+1|k} \left( I - K_{1,k} H_{1,k} \right)^T + K_{1,k} N_{1,k} R_{1,k} N_{1,k}^T K_{1,k}^T$  (37)
where $R_{1,k}$ is the measurement noise covariance. Note that (37) can be simplified to $P_{1,k+1|k+1} = \left( I - K_{1,k} H_{1,k} \right) P_{1,k+1|k}$; both forms are described in the literature. Finally, a measurement correction on the state estimate is performed using
$\hat{x}_{1,k+1|k+1} = \hat{x}_{1,k+1|k} + K_{1,k}\left( z_{1,k} - \hat{z}_{1,k} \right),$  (38)
resulting in the a posteriori state estimate $\hat{x}_{1,k+1|k+1}$. If there is no new measurement available in $z_{1,k}$ at sample k, it follows that $\hat{x}_{1,k+1|k+1} = \hat{x}_{1,k+1|k}$.
(3)
State estimation and update of the state estimation covariance for system 2, the unicycle dynamics
The a priori state estimate x ^ 2 , k + 1 | k and state estimation error covariance P 2 , k + 1 | k are obtained via
$\hat{x}_{2,k+1|k} = F_{2,k}\left(\hat{x}_{1,k+1|k+1}\right)\hat{x}_{2,k|k},$  (39)
$P_{2,k+1|k} = F_{2,k}\left(\hat{x}_{1,k+1|k+1}\right) P_{2,k|k} F_{2,k}^T\left(\hat{x}_{1,k+1|k+1}\right) + M_{2,k} Q_{2,k} M_{2,k}^T.$  (40)
(4)
Measurement correction and update of the state estimation error covariance for system 2
Similar to step (2), this step is only executed if $z_{2,k}$ contains a new measurement. The estimated output $\hat{z}_{2,k}$ and the Kalman gain $K_{2,k}$ are obtained using
$\hat{z}_{2,k} = H_{2,k}\hat{x}_{2,k+1|k},$  (41)
$K_{2,k} = P_{2,k+1|k} H_{2,k}^T \left( H_{2,k} P_{2,k+1|k} H_{2,k}^T + N_{2,k} R_{2,k} N_{2,k}^T \right)^{-1}.$  (42)
Subsequently, the a posteriori state estimation error covariance is calculated as
$P_{2,k+1|k+1} = \left( I - K_{2,k} H_{2,k} \right) P_{2,k+1|k} \left( I - K_{2,k} H_{2,k} \right)^T + K_{2,k} N_{2,k} R_{2,k} N_{2,k}^T K_{2,k}^T$  (43)
followed by the measurement correction
$\hat{x}_{2,k+1|k+1} = \hat{x}_{2,k+1|k} + K_{2,k}\left( z_{2,k} - \hat{z}_{2,k} \right)$  (44)
where $\hat{x}_{2,k+1|k+1} = \hat{x}_{2,k+1|k}$ if no new measurement is available in $z_{2,k}$.
The process noise covariance and measurement noise covariance matrices are determined using the expected state and measurement noise:
$Q_{1,k} = E\left[ w_n w_m^T \right] = \begin{bmatrix} \sigma_{w_{\dot{\theta}_t}}^2 & 0 \\ 0 & \sigma_{w_{\dot{\theta}_h}}^2 \end{bmatrix} \delta_{nm} = Q_1,$  (45)
$Q_{2,k} = E\left[ w_n w_m^T \right] = \begin{bmatrix} \sigma_{w_{a_t}}^2 & 0 \\ 0 & \sigma_{w_{a_h}}^2 \end{bmatrix} \delta_{nm} = Q_2,$  (46)
$R_{1,k} = E\left[ v_n v_m^T \right] = \begin{bmatrix} \sigma_{\theta_t,k}^2 & 0 & 0 & 0 \\ 0 & \sigma_{\dot{\theta}_t,k}^2 & 0 & 0 \\ 0 & 0 & \sigma_{\theta_h,k}^2 & 0 \\ 0 & 0 & 0 & \sigma_{\dot{\theta}_h,k}^2 \end{bmatrix} \delta_{nm} = R_1,$  (47)
$R_{2,k} = E\left[ v_n v_m^T \right] = \operatorname{diag}\left( \sigma_{X_t,k}^2,\; \sigma_{Y_t,k}^2,\; \sigma_{v_t,k}^2,\; \sigma_{a_t,k}^2,\; \sigma_{X_h,k}^2,\; \sigma_{Y_h,k}^2,\; \sigma_{v_h,k}^2,\; \sigma_{a_h,k}^2,\; \sigma_{r,k}^2,\; \sigma_{\dot{r},k}^2 \right) \delta_{nm} = R_2,$  (48)
where σ denotes the standard deviation of the respective signals defined in w and v. Zero-mean Gaussian white noise is considered for both the process noise and the measurement noise (however, note the previous comment about filtered process noise entering system 2), and because of the Kronecker delta function $\delta_{nm}$ the noise matrices do not depend on time (there is no correlation between different samples), giving $Q_{1,k} = Q_1$, $Q_{2,k} = Q_2$, $R_{1,k} = R_1$ and $R_{2,k} = R_2$ [37]. The power of the process and measurement noise is considered to be constant, yielding constant Q and R matrices. The measurement noise values described in $R_1$ and $R_2$ are obtained through experiments performed prior to this study. In Table 2, a list of the measurement signals together with their respective noise levels is provided. The values for the noise levels in Table 2 are expressed as standard deviations. The matrices Q and R considered in this study are square and diagonal, giving $Q = Q^T$ and $R = R^T$. Furthermore, Q and R are considered positive definite.
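The four steps above can be summarized compactly; the code below is a minimal illustration of one time step of the cascaded filter, (33) to (44), with the Joseph-form covariance update. The function names and the (z, H, N, R) measurement tuples are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one cascaded Kalman filter time step, steps (1)-(4).
# F2_of(x1) returns F_2 evaluated at the updated yaw state, since the
# unicycle dynamics depend on the estimated heading (cf. (39)-(40)).
import numpy as np

def kf_predict(x, P, F, M, Q):
    x_prior = F @ x                                    # (33) / (39)
    P_prior = F @ P @ F.T + M @ Q @ M.T                # (34) / (40)
    return x_prior, P_prior

def kf_correct(x_prior, P_prior, z, H, N, R):
    S = H @ P_prior @ H.T + N @ R @ N.T
    K = P_prior @ H.T @ np.linalg.inv(S)               # (36) / (42)
    x_post = x_prior + K @ (z - H @ x_prior)           # (38) / (44)
    I_KH = np.eye(len(x_prior)) - K @ H
    P_post = I_KH @ P_prior @ I_KH.T + K @ N @ R @ N.T @ K.T   # (37) / (43)
    return x_post, P_post

def cascaded_step(x1, P1, x2, P2, F1, M1, Q1, F2_of, M2, Q2, meas1, meas2):
    # Steps (1)-(2): yaw dynamics; correction only when a new measurement exists.
    x1, P1 = kf_predict(x1, P1, F1, M1, Q1)
    if meas1 is not None:
        x1, P1 = kf_correct(x1, P1, *meas1)            # meas1 = (z1, H1, N1, R1)
    # Steps (3)-(4): unicycle dynamics, F_2 evaluated at the updated x1.
    x2, P2 = kf_predict(x2, P2, F2_of(x1), M2, Q2)
    if meas2 is not None:
        x2, P2 = kf_correct(x2, P2, *meas2)            # meas2 = (z2, H2, N2, R2)
    return x1, P1, x2, P2
```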

2.3.2. Multi-Rate Sensing

Set-ups including a variety of sensors typically have to deal with different sensor update rates. Table 1 shows that this is also the case for the sensor set-up considered in this study. Taking the different sensor update rates into account in the estimator design can increase the estimation performance. In this study, it is assumed that all measurement sampling times are an integer multiple of the base sampling time of the state estimator, and therefore the effect of inter-sample measurements is not considered. Furthermore, the effect of time delays is not considered in this work. From the literature presented in Section 1 it can be concluded that performing measurement updates only when new measurements are available is a common approach. Although this methodology results in size-varying measurement vectors, observation matrices and measurement noise matrices, it is a straightforward approach which ensures that measurements are only used once to update the state estimator. This method is implemented by removing the elements of $z_k$, $H_k$, and $N_k$ which do not correspond with new measurements [25]. This is realized with the introduction of $S_k$, which is a matrix of dimension $m_k \times n$, with $m_k$ the number of new measurements at sample k and n the total number of rows of $H_k$ [25]. The elements of $S_k$ are 1 at the positions $(1, s_1), \ldots, (m_k, s_{m_k})$ and 0 elsewhere, with $s_1$ to $s_{m_k}$ corresponding to the row numbers of $H_k$ belonging to the new measurements. Using the matrix $S_k$ results in modified versions of the observation vectors $z_k$, observation matrices $H_k$, and measurement noise matrices $N_k$:
$z_{j,k}^* = S_{j,k} z_{j,k}, \quad H_{j,k}^* = S_{j,k} H_{j,k}, \quad N_{j,k}^* = S_{j,k} N_{j,k}, \quad j = 1, 2.$  (49)
Note that in (33) to (44) the vectors and matrices $z_k$, $H_k$, $N_k$ should be replaced with $z_k^*$, $H_k^*$, $N_k^*$. This methodology is similar to what is presented in [26], where all measurements which are unavailable are set to the measurement prediction, thus not resulting in any measurement corrections. However, setting the unavailable measurement equal to the prediction does result in a modified estimation error covariance matrix (see (35) to (37), (41) and (43)).
One of the issues that may arise using the method resulting in (49) is that the majority of the measurement corrections, Equations (38) and (44), are based on one or a limited number of sensors. For example, considering a system with two equivalent sensors, of which sensor 1 operates at a significantly higher frequency than sensor 2, the state estimate will be mostly based on the information coming from sensor 1. This might lead to drift and/or large estimation errors in state components which are measured less frequently, either directly or indirectly. In order to prevent large estimation errors, the weight of a measurement is adjusted with the ratio between the sampling frequency of the Kalman filter $f_0$ and the sampling frequency of the sensor $f_s$. In essence, this approach normalizes the weight of a measurement based on the sensor sampling frequency. The weighting factor is implemented in the Kalman filter by multiplying the sensor standard deviation σ with the ratio between the sampling frequency of the operating system and that of the specific sensor, resulting in
$\sigma^* = \frac{f_0}{f_s}\,\sigma$  (50)
where $f_s$ is the sampling frequency of the corresponding sensor and $f_0$ the sampling frequency of the operating system. Note that for the measurements obtained through V2V communication, the communication frequency may influence the weighting factor from (50). For all sensors located on the target vehicle, it follows that $f_s$ is upper bounded by the communication frequency. Implementing this approach can decrease the sensitivity to faults and errors of sensors with a relatively high sampling frequency. The effect of the weighting factor on the estimation performance is demonstrated in Appendix B.
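A brief sketch of how the selection matrix $S_k$ from (49) and the frequency weighting from (50) could be implemented is given below; the boolean-mask interface and the handling of the communication frequency are illustrative assumptions.

```python
# Sketch of the multi-rate handling: S_k of (49) built from a boolean
# "new measurement" mask, and the frequency weighting of (50).
import numpy as np

def selection_matrix(new_measurement_mask):
    """Rows of the identity matrix belonging to newly available measurements."""
    rows = [i for i, is_new in enumerate(new_measurement_mask) if is_new]
    return np.eye(len(new_measurement_mask))[rows, :]      # shape (m_k, n)

def weighted_sigma(sigma, f_sensor, f_estimator=100.0, f_comm=None):
    """Scale the standard deviation with f_0 / f_s as in (50); for signals
    received via V2V, the effective rate is limited by the communication
    frequency (assumption based on the discussion above)."""
    f_s = f_sensor if f_comm is None else min(f_sensor, f_comm)
    return (f_estimator / f_s) * sigma

# Example: mask for z_1 = [theta_t, theta_dot_t, theta_h, theta_dot_h]
S_k = selection_matrix([True, True, False, True])
# z1_star = S_k @ z1;  H1_star = S_k @ H1;  N1_star = S_k @ N1, cf. (49)
```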

2.4. Selection of the Process Noise Values

The performance of a Kalman filter depends on the specific setting, which is generally determined by the Q and R matrices. More specifically, the ratio between Q and R is of importance. In the literature, various methods to acquire the process and/or measurement noise values (hence Q and R) can be found, examples being [42,43,44,45,46]. Bian et al. [47] presented a two-step parameter optimization method for low-order model based estimation, where a particle swarm optimization algorithm is used to optimize the model parameters and to tune the error covariance matrices of an extended Kalman filter. Saha et al. [42] presented a selection procedure based on a robustness and a sensitivity metric, allowing the estimator to be tuned via a weighted trade-off between the two metrics. Gelen and Atasoy [43] presented a selection procedure based on three metrics, and two parameters were defined in order to test the consistency of the Kalman filter (the normalized estimation error squared and the normalized innovation squared). Shu et al. [44] presented a detailed error analysis and optimized the estimation error with respect to the process noise covariance. Åkesson et al. [45] presented a generalized autocovariance method for the tuning of estimators with correlated noise. Note that noise might become correlated after discretizing continuous-time systems [45]. Correlated noise causes the Kalman filter to lose its optimality, yielding sub-optimal results and invalidating (part of) the stability and convergence proofs of Kalman filters. Lastly, Matisko and Havlena [46] presented a method for noise covariance estimation using a Bayesian approach combined with Monte Carlo numerical methods.
Although numerous procedures used to determine the estimator setting can be found in the literature, the methods are often relatively complex to implement because of the large number of equations, assumptions and parameters, and are subject to manual selection of relevant parameters. In this study, the setting of the estimator is solely determined by optimization of the estimator performance through the Q matrices, since the values in the R matrices are available from prior experiments. The estimator setting is determined by the minimization of a global error function for different levels of process noise covariances Q 1 and Q 2 , which in essence is similar to the approach from [44]. Because each unicycle model contains two noise terms, w a for the longitudinal jerk in (4) and w θ ˙ for the yaw acceleration in (5), the process noise covariance for each unicycle consists of two parameters that can be selected. Note that it is assumed that process noise is uncorrelated as is described with (45) and (46). Furthermore, the process noise on the target vehicle is assumed to be equal to the process noise on the host vehicle, reducing the total optimization to a two-parameter problem. Similarly to the method presented in [42], the process noise covariance is varied according to
$Q_1 = \begin{bmatrix} 10^{P_{\dot{\theta}}} & 0 \\ 0 & 10^{P_{\dot{\theta}}} \end{bmatrix} Q_0, \qquad Q_2 = \begin{bmatrix} 10^{P_a} & 0 \\ 0 & 10^{P_a} \end{bmatrix} Q_0$  (51)
by adapting $P_a$ and $P_{\dot{\theta}}$, where $Q_0 = I$. The performance of the state estimator can be analyzed for different settings of the process noise covariance, and based on an error function a preferred setting can be selected. The error function E, which has to be minimized over $P_a$ and $P_{\dot{\theta}}$, is defined as
$E = \sum_{n=1}^{N_1} \sqrt{\frac{1}{K}\sum_{k=1}^{K} g_{1,n}\left( x_{1,n,k}^{\mathrm{gt}} - \hat{x}_{1,n,k} \right)^2} + \sum_{n=1}^{N_2} \sqrt{\frac{1}{K}\sum_{k=1}^{K} g_{2,n}\left( x_{2,n,k}^{\mathrm{gt}} - \hat{x}_{2,n,k} \right)^2}$  (52)
where g 1 and g 2 describe the relative weights for the different states and N 1 and N 2 are the number of elements in x 1 , k and x 2 , k as well as g 1 and g 2 . Furthermore, K is the total number of samples used in the calculation of E and the superscript gt refers to the ground truth signals. The error function is a weighted summation of the root-mean-square (RMS) errors of all the states. The weight vectors g are defined as
$g_1 = \begin{bmatrix} g_h^2 & g_y^2 & g_h^2 & g_y^2 \end{bmatrix}, \qquad g_2 = \begin{bmatrix} g_p^2 & g_p^2 & g_v^2 & g_a^2 & g_p^2 & g_p^2 & g_v^2 & g_a^2 \end{bmatrix}$  (53)
where the subscripts h , y , p , v , a are short for the heading, yaw rate, position, velocity and acceleration, respectively. Note that the weights in g are also used to make the individual state components dimensionless, allowing for summation of the errors. The final setting Q 1 , final and Q 2 , final of the estimator is obtained by minimizing E for Q 1 and Q 2 . Note that using the methodology presented here, in Section 2.4, locating a global minimum is not guaranteed.
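The grid search described above can be sketched as follows; run_simulation is a hypothetical placeholder for the simulation pipeline of Section 3, the grid range is an arbitrary example, and the error function follows (52) with the weights of (53).

```python
# Conceptual sketch of the two-parameter search over P_a and P_theta_dot,
# (51)-(52). Not the authors' implementation.
import itertools
import numpy as np

def error_E(x1_gt, x1_hat, x2_gt, x2_hat, g1, g2):
    """Weighted sum of per-state RMS errors, Eq. (52); arrays have shape (K, n_j)."""
    E1 = sum(np.sqrt(np.mean(g1[n] * (x1_gt[:, n] - x1_hat[:, n]) ** 2))
             for n in range(x1_gt.shape[1]))
    E2 = sum(np.sqrt(np.mean(g2[n] * (x2_gt[:, n] - x2_hat[:, n]) ** 2))
             for n in range(x2_gt.shape[1]))
    return E1 + E2

def tune_process_noise(run_simulation, P_grid=np.arange(-4.0, 4.5, 0.5)):
    """run_simulation(Q1, Q2) is a hypothetical call returning the error E."""
    best_E, best_setting = np.inf, None
    for P_a, P_td in itertools.product(P_grid, repeat=2):
        Q1 = 10.0 ** P_td * np.eye(2)      # Eq. (51) with Q_0 = I
        Q2 = 10.0 ** P_a * np.eye(2)
        E = run_simulation(Q1, Q2)
        if E < best_E:
            best_E, best_setting = E, (P_a, P_td)
    return best_setting, best_E
```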

3. Simulation Results

The performance of the state estimator is analyzed through simulations based on a multi-body vehicle model and different driving scenarios. All simulations presented in this study are realized with Matlab/Simulink R2019b [48]. After setting the process noise covariance values of the estimator via the method presented in Section 2.4, the performance is analyzed via an RMS error analysis, in which the effect of the V2V communication frequency is investigated in more detail.

3.1. Multi-Body Vehicle Model

In order to ensure that the simulations are a realistic representation, a detailed multi-body vehicle model is used. The model is developed as part of the i-CAVE project and can be used to investigate the vehicle behavior in the presence of sensor and actuator faults, indicating the high level of detail [34,49]. The model is validated through measurements and experiments in previous work [49], and serves as a suitable representation of the vehicles from Figure 1. For more information about this model the reader is referred to [49]. It is assumed that the host vehicle accurately follows the target vehicle via a longitudinal and lateral cooperative control approach. The distance between the vehicles is defined by a constant time gap h. In the simulations it is assumed that the host vehicle follows the target vehicle perfectly. The simulation data obtained with the multi-body model are used as a ground truth in the analysis of the estimation performance. All ground truth data are provided as a file supplementary to this study.

3.2. Driving Scenarios

The performance of the state estimator is analyzed for different driving scenarios, such that it can be shown that the state estimator is suited for multiple scenarios. In this study, three different scenarios are considered:
  • Straight line driving
  • Constant curvature cornering
  • 8-shape cornering
In each scenario a constant forward velocity is considered. The ground truth trajectories for all driving scenarios are shown in Figure 4, with the x-axis truncated to maintain visibility of scenarios 1 and 2. The first scenario is the simplest one, where all states except for the global X-position are constant. In scenario 2 a constant radius cornering maneuver is considered, for which the modeling assumptions of a constant yaw rate and acceleration are valid. Scenario 3 contains a varying road curvature, which results in a non-constant yaw rate, thus making the constant yaw rate assumption in the model invalid. In all driving scenarios the lateral acceleration remains limited, yielding linear tyre characteristics [50], and each simulation is terminated after 30 s.

3.3. Process Noise Covariance Selection

The setting of the state estimator is obtained by minimizing E from (52) over $Q_1$ and $Q_2$ for one driving scenario. By using only one scenario to select the process noise covariance values of the state estimator, the other two scenarios provide more objective results of the performance. Furthermore, this gives an indication of the estimation performance for different scenarios. Scenario 3, the 8-shape driving maneuver, is used to tune the estimator. This scenario is the most complex one, in which the model assumption of a constant yaw rate is not valid, and it is therefore the most representative of a regular driving scenario. The default process noise covariance matrices are chosen as $Q_{1,0} = Q_{2,0} = I$. The error function E from (52) is minimized over $P_a$ and $P_{\dot{\theta}}$. The weights that have been used in this study are summarized in Table 3. The effect of the weighting values in Table 3 on the estimation behavior is not considered and can be analyzed in future work. The error is calculated over the time interval $[5, 30]$ s, such that initialization errors do not affect the error E and thus the eventual setting of the estimator.
In Figure 5, the error function is plotted for different values of $P_a$ and $P_{\dot{\theta}}$, and the minimum is indicated with a black dot. The results are generated by varying $P_a$ and $P_{\dot{\theta}}$ with a step size of 0.5. The resulting process noise covariance matrices $Q_{1,\mathrm{final}}$ and $Q_{2,\mathrm{final}}$ are obtained for $P_a = 3.5$ and $P_{\dot{\theta}} = 0.0$. As can be observed in Figure 5, the surface of E is relatively flat around the minimum, and therefore the estimator has a low sensitivity to (limited) variations in Q. Figure 5 shows that variations of $P_{\dot{\theta}}$ with $P_{\dot{\theta}} \geq 1.0$ do not visibly influence the error E, which is caused by the low measurement noise of the yaw rate measurement (Table 2).

3.4. Estimation Performance

Now that the setting of the state estimator has been selected via an RMS error analysis, the performance of the estimator can be analyzed in more detail. First, the estimated trajectories are presented in combination with the ground truth trajectories. Next, the performance is analyzed via RMS and absolute maximum errors, and finally the effect of the V2V communication frequency is investigated. In the latter analysis, the values of the process noise matrices $Q_1$ and $Q_2$ are considered to be fixed.

3.4.1. Trajectory Estimation

The estimated spatial trajectories are shown in Figure 6 and serve as a graphical way of indicating the estimation performance. Quantitative results corresponding with the spatial trajectories from Figure 6 are summarized in Table 4 via mean values and standard deviations. The estimator setting obtained from Figure 5, which is solely based on driving scenario 3, is used for all three scenarios. The estimated trajectories visually correspond relatively well with the ground truth trajectories for all three scenarios. The mean values and standard deviations of the error signals listed in Table 4 demonstrate that the mean values of the estimation error (ground truth − estimate) are generally higher than the mean values of the measurement error (ground truth − measurement). Looking at the results from Figure 6 this is as expected, since mean error values are affected by offsets. Furthermore, since the measurement noise is generated in simulations, the mean value should go to zero for t → ∞. The standard deviations from Table 4 demonstrate that the estimation of the host vehicle outperforms that of the target vehicle for all scenarios, with a reduction of around 25 to 65% for the different scenarios. An interesting observation is that the improvements are smallest for the scenario that is used to obtain the estimator settings, but considering that this scenario (number 3) is the most complex, this is a reasonable result. For the target vehicle, the standard deviation of the estimation error increases by approximately 5 to 45%; this increase is most likely caused by the delayed estimator behavior clearly demonstrated in Figure 6b,c. This effect is caused by the limited update frequency of the GPS sensor located on the target vehicle in combination with the communication device limiting the update rates of all measurements belonging to the target vehicle. It is expected that the standard deviation of the estimation error can become smaller than the standard deviation of the measurement error by increasing the relative weight of the GPS measurements of the target vehicle, for example by modifying the measurement noise of the measured position $X_t$, $Y_t$. The results in Figure 6 and Table 4 confirm that the trajectory of the host vehicle has smaller errors with respect to the ground truth compared to the trajectory of the target vehicle. The errors in the estimated Y-positions in Figure 6a remain below 2 m. Furthermore, all results in Figure 6 show limited estimation errors considering the quality of the available measurements. It should be noted that the initial conditions of the state estimator correspond exactly with the initial states of the vehicles. The effect of the initial conditions might be investigated in future work. Figure 6b reveals that during the time in between the GPS measurements, the estimator is able to follow the curvature of the cornering maneuver.
The value of the error function E for the different scenarios is shown in Figure 7. The results show that the value of E is similar for scenario 2 and scenario 3, where the error for scenario 3 is slightly smaller. This may be caused by the fact that scenario 3 is used to select the setting of the estimator. The error E for scenario 1 is smaller compared to the values for the other scenarios, most likely caused by the limited complexity of the driving scenario, i.e., in scenario 1 no cornering maneuver is considered.
The ground truth and estimation results over time are presented in Figure 8 for the global heading of both the target and host vehicle. The mean values and standard deviations of the error signals corresponding with the results from Figure 8 are listed in Table 5. The results in Figure 8 demonstrate that the estimated signals correspond well with the ground truth. This is confirmed by Table 5, which also demonstrates that the errors in the heading angles can be significantly reduced using the proposed estimator. Although the mean values show an increase when comparing the estimation errors with the measurement errors, the standard deviations of the error are reduced by roughly 25 to 65% during cornering maneuvers and by over 90% for straight line driving. Interestingly, comparing the results from Table 5 for the three scenarios shows that the presence of cornering behavior influences the estimation errors, but the complexity of the cornering maneuver seems to have minimal effect on the estimation errors (scenario 2 vs. scenario 3). Finally, Figure 8b,c clearly show that the results for the host vehicle have a constant lag with respect to the target vehicle. This is caused by the constant time gap, ensuring a constant inter-vehicle distance.

3.4.2. State and Measurement Errors

The estimator performance can be analyzed numerically by calculating the errors between the estimated and ground truth signals, and comparing these with the errors between the ground truth and the measurements. The RMS and absolute maximum errors between the estimated state components and the ground truth are listed in Table 6, where the superscript gt refers to the ground truth. In addition, the errors between the measurement and ground truth signals are also included in Table 6. For the expressions of the RMS and maximum errors, the estimated heading of the target vehicle is selected as an example, resulting in
$\mathrm{RMS} = \sqrt{\frac{1}{K}\sum_{k=1}^{K}\left( \theta_{t,k}^{\mathrm{gt}} - \hat{\theta}_{t,k} \right)^2}$  (54)
$\mathrm{Max} = \max_{k}\left| \theta_{t,k}^{\mathrm{gt}} - \hat{\theta}_{t,k} \right|$  (55)
where K is the number of samples considered.
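For reference, (54) and (55) translate directly into the following short sketch; the array-based interface is an illustrative choice.

```python
# Sketch of the error metrics (54)-(55) for the estimated heading of the
# target vehicle; theta_gt and theta_hat are arrays over the evaluated samples.
import numpy as np

def rms_error(theta_gt, theta_hat):
    return np.sqrt(np.mean((theta_gt - theta_hat) ** 2))   # Eq. (54)

def max_error(theta_gt, theta_hat):
    return np.max(np.abs(theta_gt - theta_hat))            # Eq. (55)
```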
First, let us focus on the top part of Table 6. Similarly to Section 3.3, the errors are calculated over the time interval $[5, 30]$ s to limit the effect of initialization. The differences in errors between the states of the host vehicle and the target vehicle are small for the individual scenarios. Comparing the scenarios with each other reveals that all errors are similar in magnitude and that the results for scenario 1 show smaller errors compared to scenarios 2 and 3. Considering the overall agreement in errors for the different scenarios, it can be concluded that the estimator has a small sensitivity to the complexity of the driving scenario. The results do indicate that including a cornering maneuver increases the errors, which is in line with the expectation. The setting of the estimator is based on scenario 3 only, and the maximum and RMS errors are similar or smaller for the other scenarios, which implies that the setting is suitable for a variety of scenarios. Considering the implementation onto the vehicle set-up in future work, these results are positive, since the setting of the estimator is independent of the driving maneuver. The results listed in Table 6 are in line with the results from Figure 5.
Continuing to the middle and bottom parts of Table 6, the estimation performance can be compared with the quality of the measurements, providing a more objective performance analysis. The RMS and maximum errors for measurements of direct state components are equal for all scenarios, because the measurement noise is created in simulation via pre-specified noise levels (Table 2). Furthermore, in simulation the measurements are not influenced by external disturbances. Comparing the errors between the measurements and the ground truth, and the estimates and the ground truth, allows us to analyze the attenuation of measurement noise. For the majority of the state components the measurement noise is attenuated. Noteworthy is the increased RMS and maximum error for the longitudinal velocity, which is caused by the corrections using the GPS measurements. Furthermore, the errors in the position estimate of the target vehicle are larger compared with the measurements; this is caused by estimation errors in the heading angle. For the host vehicle this is not the case: because the measurement noise on the host vehicle’s heading angle is smaller, the position estimation is more accurate. Furthermore, it should be noted that the results depend on the weights described in Table 3, the sensor operating frequencies from Table 1 and the noise levels from Table 2. The results from Table 6 reveal that by using the state estimator, measurement noise can be reduced by over 50%, depending on the state component. From Table 6 it can be concluded that the estimator can significantly reduce the measurement noise, which confirms the benefits of using a state estimator for cooperative applications. However, the results also indicate that the measurement noise on the heading angles has a significant influence on the quality of the position estimate. In contrast to solely relying on limited frequency measurement data, using the estimator results in a state estimate which is both available and updated at every sample.

3.4.3. V2V Communication Frequency

The effect of the communication frequency on the estimation performance is analyzed by varying the communication frequency $f_c$ while keeping all other parameters fixed, including the setting of the estimator. Note that varying $f_c$ does have an effect on the specific weighting of the communicated measurements through (50). The communication frequency is varied by taking integer multiples of the sampling time of the operating system. The error function for different sampling times of the V2V communication is shown in Figure 9 for all three scenarios. The results show that the shape of the error function is somewhat similar for all scenarios. The trend of the error functions shown in Figure 9 reveals that the error increases for an increasing sampling time of the V2V communication, with the exception of the results for scenario 1. This is most likely caused by the constant velocity and the lack of cornering maneuvers in scenario 1, reducing the sensitivity to delays caused by limited sensor update rates.
It is evident that the results presented in Figure 9 depend on the sensor operating frequencies and thus the relative weights (50), the measurement noise levels and the weights used in the error function (Table 3). This is demonstrated by Figure 10, which displays the RMS error between the ground truth and the estimate of the yaw rate and the longitudinal acceleration for different communication frequencies. The shapes of the curves in Figure 10a,b show clear differences. Because the yaw rate estimate of the target vehicle is predominantly based on the yaw rate measurement, it is expected that the estimation error decreases for an increasing communication frequency, which is confirmed by Figure 10a. The estimate of the longitudinal acceleration, however, is also influenced by the velocity and position measurements as well as the relative measurements from the radar sensor. The weight of these measurements depends on the communication frequency through (50). A smaller sampling time for the V2V communication results in a relatively higher weight of the GPS measurements of the target vehicle, visibly affecting the longitudinal acceleration estimates. For communication sampling times above approximately 0.10 s, most curves in Figure 10 show a constant or increasing trend, which is in agreement with the general expectation that the error increases for a larger sampling time.
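The sweep over communication sampling times described above could be organized as in the following sketch; run_scenario and the listed multiples are hypothetical placeholders.

```python
# Hypothetical sketch of the communication-frequency sweep: the V2V sampling
# time is varied as an integer multiple of the estimator sampling time
# Ts = 0.01 s, and the same error function E is evaluated for each setting.
def sweep_v2v_sampling(run_scenario, Ts=0.01, multiples=(1, 2, 5, 10, 20, 50)):
    results = {}
    for m in multiples:
        Tc = m * Ts                          # V2V sampling time [s]
        # note: the weighting (50) changes with the communication frequency 1/Tc
        results[Tc] = run_scenario(comm_sample_time=Tc)
    return results
```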

3.5. Discussion

The presented framework can be used for the state estimation of a host and target vehicle and thus for cooperative lateral vehicle following. For selected state components, measurement noise is significantly attenuated. Moreover, using the presented approach results in the availability of the state of the target vehicle at every sample, in contrast to methods which rely solely on V2V communication [14], with potentially a limited number of samples. This contribution relaxes the high update rate requirements of future safety applications [16]. Relating this work to previous works on state estimation for cooperative automotive applications such as [17] shows that the use of V2V communication in state estimation can significantly improve the estimation performance. It has to be noted that the approach presented in [17] serves as a fall-back in case communication is lost, rather than a general solution. The framework presented in this study provides accurate information on the state components of the host and target vehicle, which can be used in cooperative control frameworks. Moreover, depending on the required input signals for the considered cooperative control approach, the use of V2V communication might become a necessity to ensure observability of the system. With the presented approach it is possible to estimate state components which are otherwise difficult to estimate, for example the yaw rate of the target vehicle. Comparing the obtained results with other works, mostly control techniques ([32,33,51]), reveals that the presented framework does not solely rely on one or two sensors of which the measurements are communicated wirelessly, but instead uses a combination of on-board sensors and V2V communication, and takes different update rates into account. Therefore, the presented approach is potentially more robust to sensor faults and missing information due to, for example, packet loss. However, studies such as [52,53] consider these kinds of communication specifications in more detail. The presented approach is also potentially more robust compared to approaches using only on-board information such as [13]. The statements presented in this paragraph clearly define the benefits of using V2V communication for state estimation, especially considering the application in a cooperative control framework, providing detailed information on the relevant state components.
Regarding the simulation results, the estimation errors have the same order of magnitude for all scenarios, implying a small sensitivity to the specific scenario. In other words, the estimator is likely to display similar performance throughout scenarios of varying complexity. The results also show that the estimation errors increase during cornering maneuvers compared to straight-line driving. Furthermore, the effect of measurement noise can be significantly reduced, depending on the setting of the estimator and the measurement noise of the relevant sensors. In general, increasing the V2V communication frequency decreases the estimation errors. However, the results depend on the weighting values defined in Table 3 as well as on the operating frequencies of the different sensors.
Limitations of the presented approach include the effect of time delays, the assumptions on the measurement noise, the requirements on measurement quality, and the effect of vehicle dynamics and external disturbances, as explained in the remainder of this section. Typically, sensors are subject to time delays, which means that a measurement obtained at time t actually corresponds to time t − τ, where τ is the delay. In this study, the effect of time delays is not considered and the delay is therefore set to zero for all sensors. The values of the measurement noise are considered to be known exactly and are assumed to be constant. Although the measurement noise can be determined through experiments, as is done in this study, it remains difficult to determine the noise level exactly, as in any practical set-up. The wide variety of automotive sensors, each with its own signal quality and operating frequency, makes it difficult to present a fixed or optimal setting. For that reason, the properties of the sensors already present on the vehicle set-up are considered fixed, except for the operating frequency of the V2V communication. Furthermore, excited vehicle dynamics and external disturbances also influence the measurements in practice. Regarding the required measurement quality, during the preparation of the simulations it was found that the quality of the measurement signals can have a significant effect on the estimation performance. The quality of the measured heading angles is particularly important: high measurement noise in the measured heading angles leads to poor estimation results. This is in part caused by the adoption of the cascaded systems approach, in which some relations between the different state components are ignored, for example those between the heading angle and the position (the heading angle estimates are not corrected using the position measurements). Finally, the effect of sensor offsets is also not considered in this study.
Several opportunities for future work are identified, in part based on the results presented in this study. First and foremost is the implementation of the presented approach on a full-size vehicle set-up, such that the methodology can be verified and demonstrated experimentally. With the implementation on a vehicle set-up, the effect of external disturbances, e.g., driving on a poor-quality road, on the performance of the presented methodology can be analyzed. In addition, investigating the effect of sensor time delays on the estimation performance is considered an interesting route for future work, as is the effect of relevant V2V communication properties, including packet loss and communication delays. The effect of the weighting values listed in Table 3 on the estimation performance, as mentioned in Section 3.3, and the effect of the initial conditions, as mentioned in Section 3.4.1, can also be analyzed in future work. The effect of measurement noise and process noise can be investigated as well: although measurement noise can be determined from experiments, the specific noise levels were found to play a significant role in the estimation performance during the preparation of this work. Finally, in Section 1 it is noted that future safety applications may require high update rates, and it would therefore be interesting to investigate to what extent the presented methodology can provide a solution for these increasing requirements.

4. Conclusions

A framework for state estimation using V2V communication for cooperative driving applications is presented and demonstrated by means of simulations. The main contribution of this study is the successful demonstration of a cooperative state estimation framework, which has good potential for a diverse selection of cooperative driving applications. The framework can be implemented on a wide variety of vehicles with different sensor and communication set-ups, provided that observability of the system is maintained. The estimator provides detailed information on the relevant state components, which can be used as input for longitudinal and lateral cooperative driving control schemes. Via simulations it is shown that the estimator has a small sensitivity to the specific setting as well as to the complexity of the driving scenario. The effect of measurement noise can be significantly reduced, although the results vary for the different state components, as presented by means of both quantitative and qualitative results. The operating frequencies of the different sensors are taken into account in the estimator design. The analysis of the effect of the V2V communication frequency on the estimation performance indicates that the estimation quality increases for higher communication frequencies. Limitations of this work include the assumptions on the measurement noise, the magnitude of the measurement noise, and the exclusion of external disturbances. The effect of sensor time delays and the implementation on a vehicle set-up are considered future work; practical implementation and experiments will automatically include the effect of external disturbances.

Supplementary Materials

The following are available online at https://www.mdpi.com/2079-9292/10/6/651/s1.

Author Contributions

Conceptualization, W.S.; methodology, W.S.; software, W.S.; validation, W.S.; formal analysis, W.S.; investigation, W.S.; resources, W.S., T.v.d.S. and H.N.; data curation, W.S.; writing–original draft preparation, W.S.; writing–review and editing, T.v.d.S. and H.N.; visualization, W.S.; supervision, T.v.d.S. and H.N.; project administration, W.S. and T.v.d.S.; funding acquisition, T.v.d.S. and H.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work is part of the research programme i-CAVE with project number 14893, which is partly financed by the Dutch Research Council (NWO).

Data Availability Statement

All ground truth data used in the simulations presented in this study, as well as the relevant simulation data, is available as Supplementary Material to this article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; or in the writing of the manuscript. All funders agreed on the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ACC	Adaptive cruise control
CACC	Cooperative adaptive cruise control
RMS	Root-mean-square
V2V	Vehicle-to-vehicle
V2I	Vehicle-to-infrastructure

Appendix A. State-Space Formulation of the System Dynamics

In this part of the appendix, the explicit expressions for the matrices and vectors used in the state-space formulation of the system dynamics are given. First, the continuous time matrices and vectors from (16) and (17) are presented, followed by their respective discrete time counterparts from (21) and (22).

Appendix A.1. Continuous Time System Matrices

First, let us focus on the dynamics of the system. Considering the formulation from (16), the dynamics in (1) to (5) are described in state-space form using the cascading systems approach from Figure 3 with
$$
A_1 = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad
E_1 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}, \quad
B_1 = 0, \quad u_1 = 0
$$
for system 1 and
$$
A_2(t) = \begin{bmatrix}
0 & 0 & \cos\theta_t(t) & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \sin\theta_t(t) & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \cos\theta_h(t) & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \sin\theta_h(t) & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}, \quad
E_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}, \quad
B_2 = 0, \quad u_2 = 0
$$
for system 2 in which the state components of system 1 are used.
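For reference, a minimal NumPy sketch that constructs these continuous-time matrices is given below. The state orderings $[\theta_t, \dot{\theta}_t, \theta_h, \dot{\theta}_h]$ for system 1 and $[X_t, Y_t, v_t, a_t, X_h, Y_h, v_h, a_h]$ for system 2 are inferred from the matrix structure above and should be read as an assumption of the sketch, not as the exact implementation used in this study.

```python
import numpy as np

def system1_matrices():
    """Continuous-time matrices of system 1 (heading dynamics of both vehicles).
    Assumed state ordering: [theta_t, theta_dot_t, theta_h, theta_dot_h]."""
    A1 = np.array([[0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 0, 0]], dtype=float)
    E1 = np.array([[0, 0],
                   [1, 0],
                   [0, 0],
                   [0, 1]], dtype=float)
    return A1, E1

def system2_matrices(theta_t, theta_h):
    """Continuous-time matrices of system 2, parameterized by the heading
    estimates from system 1. Assumed state ordering:
    [X_t, Y_t, v_t, a_t, X_h, Y_h, v_h, a_h]."""
    A2 = np.zeros((8, 8))
    for i, th in ((0, theta_t), (4, theta_h)):
        A2[i, i + 2] = np.cos(th)      # X_dot = v cos(theta)
        A2[i + 1, i + 2] = np.sin(th)  # Y_dot = v sin(theta)
        A2[i + 2, i + 3] = 1.0         # v_dot = a
    E2 = np.zeros((8, 2))
    E2[3, 0] = 1.0                     # process noise enters a_t
    E2[7, 1] = 1.0                     # process noise enters a_h
    return A2, E2
```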
As mentioned in Section 2.2, the matrices $C$, $D$ and $J$ are obtained by taking the Jacobian of $h(x, u, v)$. This gives $C = \partial h(x,u,v)/\partial x$, $D = \partial h(x,u,v)/\partial u$ and $J = \partial h(x,u,v)/\partial v$, which results in
$$
C_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad D_1 = 0, \quad J_1 = 1
$$
and
$$
C_2(t) = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
c_1(t) & c_2(t) & 0 & 0 & -c_1(t) & -c_2(t) & 0 & 0 \\
0 & 0 & \cos\theta_r(t) & 0 & 0 & 0 & -\cos\theta_r(t) & 0
\end{bmatrix}, \quad D_2 = 0, \quad J_2 = 0,
$$
with
$$
c_1(t) = \frac{X_t(t) - X_h(t)}{\sqrt{\left(X_t(t) - X_h(t)\right)^2 + \left(Y_t(t) - Y_h(t)\right)^2}}, \quad
c_2(t) = \frac{Y_t(t) - Y_h(t)}{\sqrt{\left(X_t(t) - X_h(t)\right)^2 + \left(Y_t(t) - Y_h(t)\right)^2}}.
$$
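The measurement Jacobian of system 2 can be assembled in the same way. The sketch below uses the same assumed state ordering as the previous sketch and treats $\theta_r$ as the relative (bearing) angle appearing in the range-rate row; it mirrors the reconstructed $C_2(t)$ above rather than the authors' exact code.

```python
import numpy as np

def measurement_jacobian_2(X_t, Y_t, X_h, Y_h, theta_r):
    """C_2(t): direct measurements of the eight states of system 2 plus the
    radar rows for relative distance r and relative speed r_dot."""
    dX, dY = X_t - X_h, Y_t - Y_h
    dist = np.hypot(dX, dY)
    c1, c2 = dX / dist, dY / dist
    C2 = np.zeros((10, 8))
    C2[:8, :8] = np.eye(8)                     # direct state measurements
    C2[8, [0, 1, 4, 5]] = [c1, c2, -c1, -c2]   # partial derivatives of r w.r.t. positions
    C2[9, 2] = np.cos(theta_r)                 # partial derivative of r_dot w.r.t. v_t
    C2[9, 6] = -np.cos(theta_r)                # partial derivative of r_dot w.r.t. v_h
    return C2
```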

Appendix A.2. Discrete Time System Matrices

Using the discretization procedure from (23) allows us to obtain algebraic expressions for $F_k$, $G_k$, $M_k$, $H_k$, $L_k$ and $N_k$, resulting in
$$
F_{1,k} = \begin{bmatrix} 1 & T_s & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T_s \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad
M_{1,k} = \begin{bmatrix} \tfrac{T_s^2}{2} & 0 \\ T_s & 0 \\ 0 & \tfrac{T_s^2}{2} \\ 0 & T_s \end{bmatrix}, \quad
H_{1,k} = C_1(t = kT_s), \quad L_{1,k} = D_1(t = kT_s), \quad N_{1,k} = J_1(t = kT_s),
$$
and
$$
F_{2,k} = \begin{bmatrix}
1 & 0 & T_s\cos\theta_{t,k+1} & \tfrac{T_s^2}{2}\cos\theta_{t,k+1} & 0 & 0 & 0 & 0 \\
0 & 1 & T_s\sin\theta_{t,k+1} & \tfrac{T_s^2}{2}\sin\theta_{t,k+1} & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & T_s & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & T_s\cos\theta_{h,k+1} & \tfrac{T_s^2}{2}\cos\theta_{h,k+1} \\
0 & 0 & 0 & 0 & 0 & 1 & T_s\sin\theta_{h,k+1} & \tfrac{T_s^2}{2}\sin\theta_{h,k+1} \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & T_s \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix},
$$
$$
M_{2,k} = \begin{bmatrix}
\tfrac{T_s^3}{6}\cos\theta_{t,k} & \tfrac{T_s^3}{6}\sin\theta_{t,k} & \tfrac{T_s^2}{2} & T_s & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac{T_s^3}{6}\cos\theta_{h,k} & \tfrac{T_s^3}{6}\sin\theta_{h,k} & \tfrac{T_s^2}{2} & T_s
\end{bmatrix}^{\mathrm{T}},
$$
$$
H_{2,k} = C_2(t = kT_s), \quad L_{2,k} = D_2(t = kT_s), \quad N_{2,k} = J_2(t = kT_s).
$$
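Under the assumption that the discretization in (23) corresponds to the (here terminating) series expansion of the matrix exponential with piecewise-constant $A$ and $E$, the expressions above can be reproduced with a short routine such as the following sketch.

```python
import numpy as np

def discretize(A, E, Ts, order=3):
    """Truncated-series discretization:
    F = sum_i A^i Ts^i / i!,  M = sum_i A^i E Ts^(i+1) / (i+1)!.
    For the nilpotent A_1 and A_2 above the series terminates, so the
    result is exact once `order` covers all nonzero powers of A."""
    n = A.shape[0]
    F = np.eye(n)
    M = np.zeros_like(E, dtype=float)
    Ai = np.eye(n)      # running power A^i
    fact = 1.0          # running factorial i!
    for i in range(order + 1):
        if i > 0:
            Ai = Ai @ A
            fact *= i
            F = F + Ai * Ts**i / fact
        M = M + Ai @ E * Ts**(i + 1) / (fact * (i + 1))
    return F, M

# Example: recovers F_{1,k} and M_{1,k} given above, here with an example
# base sampling time of 0.01 s.
A1 = np.array([[0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 0, 0]], dtype=float)
E1 = np.array([[0, 0],
               [1, 0],
               [0, 0],
               [0, 1]], dtype=float)
F1k, M1k = discretize(A1, E1, Ts=0.01)
```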

Appendix B. The Effect of Scaling for Multi-Rate Sensing on the Estimation Performance

In this section, the effect of using the weighting factor from (50) on the estimation performance is demonstrated. This weighting factor is implemented to prevent large estimation errors or drift in a multi-rate sensor set-up. Analyzing the performance through the cost function from (52) allows us to compare the case where the weighting factor from (50) is implemented with the case where it is not. The cost function for different settings, i.e., different values of $P_a$ and $P_{\dot{\theta}}$, is shown in Figure A1 for both cases. Comparing the results without additional weights in Figure A1b to the results with the weighting from (50) in Figure A1a demonstrates that the use of the weighting factor overall results in a smaller error. In addition, including the weighting factor reduces the sensitivity of the estimation performance with respect to the setting of the estimator; that is, the gradient of the error E around the minimum of E is smaller when (50) is used.
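The sweep behind Figure A1 can be scripted along the following lines. Here, evaluate_error is a hypothetical placeholder for running one simulation with a given process-noise setting and returning the error function E from (52), so the snippet is a harness for the comparison rather than the estimator itself, and the grid ranges are example values.

```python
import numpy as np

def evaluate_error(P_a, P_theta_dot, use_weighting):
    """Hypothetical placeholder: run one simulation with the given
    process-noise setting, with or without the multi-rate weighting of (50),
    and return the scalar error function E from (52)."""
    raise NotImplementedError("hook up the simulation and estimator here")

# Grids of candidate process-noise settings (example ranges, not the values
# used in the paper).
P_a_grid = np.logspace(-3, 1, 20)
P_theta_dot_grid = np.logspace(-4, 0, 20)

E_with = np.empty((P_a_grid.size, P_theta_dot_grid.size))
E_without = np.empty_like(E_with)
for i, p_a in enumerate(P_a_grid):
    for j, p_td in enumerate(P_theta_dot_grid):
        E_with[i, j] = evaluate_error(p_a, p_td, use_weighting=True)
        E_without[i, j] = evaluate_error(p_a, p_td, use_weighting=False)

# Comparing the minima and the local gradients of E_with and E_without
# corresponds to the comparison between Figure A1a and Figure A1b.
```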
Figure A1. The cost function results for different settings of the estimator for the case with and without the weighting factor from (50). (a) Case where the weighting from (50) is implemented. (b) Case where no weighting factor is implemented.

References

1. Carney, D. NXP Makes V2X Communication a Standard in the New Volkswagon Golf. Available online: https://www.designnews.com/sensors/nxp-makes-v2x-communication-standard-new-volkswagon-golf/187677649261784 (accessed on 3 December 2019).
2. Dey, K.C.; Yan, L.; Wang, X.; Wang, Y.; Shen, H.; Chowdhury, M.; Yu, L.; Qiu, C.; Soundararaj, V. A Review of Communication, Driver Characteristics, and Controls Aspects of Cooperative Adaptive Cruise Control (CACC). IEEE Trans. Intell. Transp. Syst. 2016, 17, 491–509.
3. Ploeg, J.; Serrarens, A.F.A.; Heijenk, G.J. Connect & Drive: Design and evaluation of cooperative adaptive cruise control for congestion reduction. J. Mod. Transp. 2011, 19, 207–213.
4. Xu, Q.; Hedrick, K.; Sengupta, R.; Van der Werf, J. Effects of vehicle-vehicle/roadside-vehicle communication on adaptive cruise controlled highway systems. In Proceedings of the IEEE 56th Vehicular Technology Conference, Vancouver, BC, Canada, 24–28 September 2002; Volume 2, pp. 1249–1253.
5. Stanger, T.; del Re, L. A model predictive Cooperative Adaptive Cruise Control approach. In Proceedings of the 2013 American Control Conference, Washington, DC, USA, 17–19 June 2013; pp. 1374–1379.
6. Mahdoui, N.; Frémont, V.; Natalizio, E. Communicating Multi-UAV System for Cooperative SLAM-based Exploration. J. Intell. Robot. Syst. 2020, 98, 325–343.
7. Wu, L.; Nie, L.; Khan, S.U.; Khalid, O.; Wu, D. A V2I communication-based pipeline model for adaptive urban traffic light scheduling. Front. Comput. Sci. 2019, 13, 929–942.
8. Rothfuß, S.; Prezdnyakov, R.; Flad, M.; Hohmann, S. Decentralized path planning for cooperating autonomous mobile units. Forsch. Ingenieurwesen 2019, 83, 137–147.
9. Lee, G.D.; Kim, S.W.; Yim, Y.U.; Jung, J.H.; Oh, S.Y.; Kim, B.S. Longitudinal and lateral control system development for a platoon of vehicles. In Proceedings of the 1999 IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems (Cat. No.99TH8383), Tokyo, Japan, 5–8 October 1999; pp. 605–610.
10. Rajamani, R.; Tan, H.S.; Law, B.K.; Zhang, W.B. Demonstration of integrated longitudinal and lateral control for the operation of automated vehicles in platoons. IEEE Trans. Control Syst. Technol. 2000, 8, 695–708.
11. Latrech, C.; Chaibet, A.; Boukhnifer, M.; Glaser, S. Integrated longitudinal and lateral networked control system design for vehicle platooning. Sensors 2018, 18, 3085.
12. Bayuwindra, A. Look-ahead Tracking Controllers for Integrated Longitudinal and Lateral Control of Vehicle Platoons. Ph.D. Thesis, Eindhoven University of Technology, Eindhoven, The Netherlands, 2019.
13. Schmeitz, A.; Zegers, J.; Ploeg, J.; Alirezaei, M. Towards a generic lateral control concept for cooperative automated driving theoretical and experimental evaluation. In Proceedings of the 2017 5th IEEE International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS), Naples, Italy, 26–28 June 2017; pp. 134–139.
14. Wei, S.; Zou, Y.; Zhang, X.; Zhang, T.; Li, X. An Integrated Longitudinal and Lateral Vehicle Following Control System with Radar and Vehicle-to-Vehicle Communication. IEEE Trans. Veh. Technol. 2019, 68, 1116–1127.
15. Hossain, M.A.; Elshafiey, I.; Al-Sanie, A. Cooperative vehicle positioning with multi-sensor data fusion and vehicular communications. Wirel. Netw. 2018, 6, 1–11.
16. de Ponte Müller, F. Survey on Ranging Sensors and Cooperative Techniques for Relative Positioning of Vehicles. Sensors 2017, 17, 271.
17. Ploeg, J.; Semsar-Kazerooni, E.; Lijster, G.; Van De Wouw, N.; Nijmeijer, H. Graceful degradation of cooperative adaptive cruise control. IEEE Trans. Intell. Transp. Syst. 2015, 16, 488–497.
18. Wu, C.; Lin, Y.; Eskandarian, A. Cooperative Adaptive Cruise Control with Adaptive Kalman Filter Subject to Temporary Communication Loss. IEEE Access 2019, 7, 93558–93568.
19. Ess, A.; Schindler, K.; Leibe, B.; Gool, L.V. Object Detection and Tracking for Autonomous Navigation in Dynamic Environments. Int. J. Robot. Res. 2010, 29, 1707–1725.
20. Sivaraman, S.; Trivedi, M.M. Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1773–1795.
21. Wei, Z.; Zhao, J.; Ji, D.; Tseng, K.J. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model. Appl. Energy 2017, 204, 1264–1274.
22. Wei, Z.; Zhao, J.; He, H.; Ding, G.; Cui, H.; Liu, L. Future smart battery and management: Advanced sensing from external to embedded multi-dimensional measurement. J. Power Sources 2021, 489, 229462.
23. Wei, Z.; Dong, G.; Zhang, X.; Pou, J.; Quan, Z.; He, H. Noise-Immune Model Identification and State-of-Charge Estimation for Lithium-Ion Battery Using Bilinear Parameterization. IEEE Trans. Ind. Electron. 2021, 68, 312–323.
24. Barth, A.; Franke, U. Tracking Oncoming and Turning Vehicles at Intersections. In Proceedings of the 13th International IEEE Conference on Intelligent Transportation Systems, Funchal, Portugal, 19–22 September 2010; pp. 861–868.
25. Cipra, T.; Romera, R. Kalman filter with outliers and missing observations. Test 1997, 6, 379–395.
26. Wang, Y.; Nguyen, B.M.; Fujimoto, H.; Hori, Y. Multirate estimation and control of body slip angle for electric vehicles based on onboard vision system. IEEE Trans. Ind. Electron. 2014, 61, 1133–1143.
27. Kim, W.; Kang, C.M.; Son, Y.S.; Lee, S.H.; Chung, C.C. Vehicle path prediction using yaw acceleration for adaptive cruise control. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3818–3829.
28. Kim, S.; Kim, H.; Yoo, W.; Huh, K. Sensor Fusion Algorithm Design in Detecting Vehicles Using Laser Scanner and Stereo Vision. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1072–1084.
29. Yu, Z.; Bai, J.; Chen, S.; Huang, L.; Bi, X. Camera-Radar Data Fusion for Target Detection via Kalman Filter and Bayesian Estimation. SAE Technical Paper 2018, pp. 1–8. Available online: https://www.sae.org/publications/technical-papers/content/2018-01-1608/ (accessed on 7 August 2018).
30. Gao, J.B.; Harris, C.J. Some remarks on Kalman filters for the multisensor fusion. Inf. Fusion 2002, 3, 191–201.
31. Cho, H.; Seo, Y.w.; Kumar, B.V.; Rajkumar, R.R. A multi-sensor fusion system for moving object detection and tracking in urban driving environments. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1836–1843.
32. Mcaree, O.; Veres, S.M. Lateral control of vehicle platoons with on-board sensing and Inter-Vehicle Communication. In Proceedings of the 2016 European Control Conference, Aalborg, Denmark, 29 June–1 July 2016; pp. 2465–2470.
33. Solyom, S.; Idelchi, A.; Salamah, B.B. Lateral Control of Vehicle Platoons. In Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 4561–4565.
34. i-CAVE. Available online: https://i-cave.nl/ (accessed on 22 December 2020).
35. IEEE. IEEE 802.11p-2010—IEEE Standard for Information Technology—Local and Metropolitan Area Networks—Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 6: Wireless Access in Vehicul; IEEE: Piscataway, NJ, USA, 2010.
36. ITS Europe. ITS Congresses. Available online: https://www.flickr.com/photos/itscongresses/47989400781/ (accessed on 12 December 2019).
37. Simon, D. Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006; pp. 1–526.
38. Li, S.; Zhang, G.; Lei, X.; Yu, X.; Qian, H.; Xu, Y. Trajectory tracking control of a unicycle-type mobile robot with a new planning algorithm. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics, Macao, China, 5–8 December 2018; pp. 1–7.
39. NXP. Internal Communication. 2019. Available online: https://media.nxp.com/node/11211/pdf (accessed on 28 October 2019).
40. Zihajehzadeh, S.; Loh, D.; Lee, M.; Hoskinson, R.; Park, E.J. A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 6270–6273.
41. Zihajehzadeh, S.; Loh, D.; Lee, T.J.; Hoskinson, R.; Park, E.J. A cascaded Kalman filter-based GPS/MEMS-IMU integration for sports applications. Measurement 2015, 73, 200–210.
42. Saha, M.; Ghosh, R.; Goswami, B. Robustness and sensitivity metrics for tuning the extended Kalman filter. IEEE Trans. Instrum. Meas. 2014, 63, 964–971.
43. Gelen, A.G.; Atasoy, A. A New Method for Kalman Filter Tuning. In Proceedings of the 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 28–30 September 2018; pp. 1–6.
44. Shu, H.; Simon, E.P.; Ros, L. Third-Order Kalman Filter: Tuning and Steady-State Performance. IEEE Signal Process. Lett. 2013, 20, 1082–1085.
45. Åkesson, B.M.; Jørgensen, J.B.; Poulsen, N.K.; Jørgensen, S.B. A generalized autocovariance least-squares method for Kalman filter tuning. J. Process Control 2008, 18, 769–779.
46. Matisko, P.; Havlena, V. Noise covariance estimation for Kalman filter tuning using Bayesian approach and Monte Carlo. Int. J. Adapt. Control Signal Process. 2013, 27, 957–973.
47. Bian, X.; Wei, Z.; He, J.; Yan, F.; Liu, L. A two-step parameter optimization method for low-order model-based state of charge estimation. IEEE Trans. Transp. Electrif. 2020, 7782.
48. MathWorks. Matlab/Simulink R2019b; MathWorks: Natick, MA, USA, 2019.
49. Baaij, S. Development and Validation of a Multibody Model of a Renault Twizy. Master’s Thesis, Eindhoven University of Technology, Eindhoven, The Netherlands, 2019.
50. Pacejka, H.B. Tyre and Vehicle Dynamics, 2nd ed.; Butterworth-Heinemann: Oxford, UK, 2006; pp. 16–17.
51. Kianfar, R.; Falcone, P.; Fredriksson, J. A Distributed Model Predictive Control Approach to Active Steering Control of String Stable Cooperative Vehicle Platoon. IFAC Proc. Vol. 2013, 46, 750–755.
52. Öncü, S. String Stability of Interconnected Vehicles: Network-Aware Modelling, Analysis & Experiments. Ph.D. Thesis, Eindhoven University of Technology, Eindhoven, The Netherlands, 2014.
53. Öncü, S.; Ploeg, J.; van de Wouw, N.; Nijmeijer, H. Cooperative Adaptive Cruise Control: Network-Aware Analysis of String Stability. IEEE Trans. Intell. Transp. Syst. 2014, 15, 1527–1537.
Figure 1. Two Renault Twizys modified for autonomous and cooperative driving in the i-Cave project [34,36]. The host vehicle (left) is following the target vehicle (right) in both longitudinal and lateral direction.
Figure 2. Schematic drawing of a unicycle vehicle model including the relevant degrees of freedom.
Figure 3. Schematic drawing of the cascading systems approach used to allow the transformation of the system dynamics into state-space form.
Figure 4. Ground truth trajectories for all scenarios.
Figure 5. Error function E for different scaling factors of the default process noise covariance matrix. The black dot indicates the minimum, corresponding with the final setting.
Figure 6. The spatial ground truth trajectories in combination with the estimation results for all scenarios. (a) Scenario 1. (b) Scenario 2. (c) Scenario 3.
Figure 7. Error function E for all three scenarios.
Figure 8. The ground truth and estimation result for the global heading for all three scenarios. (a) Scenario 1. (b) Scenario 2. (c) Scenario 3.
Figure 9. Error function E versus the V2V communication sampling time for all three scenarios. The black dotted line indicates the default sampling frequency of the V2V communication device (Table 1).
Figure 10. The RMS errors of selected state components for different values of the V2V communication sampling time. All three scenarios are included. (a) The RMS values of $\dot{\theta}_t^{gt} - \hat{\dot{\theta}}_t$ and $\dot{\theta}_h^{gt} - \hat{\dot{\theta}}_h$. (b) The RMS values of $a_t^{gt} - \hat{a}_t$ and $a_h^{gt} - \hat{a}_h$.
Table 1. List of sensors considered in this study, including the respective operating frequencies.
| Vehicle | Sensor | Operating Frequency |
| Host | IMU | 100 Hz |
| | Odometer | 100 Hz |
| | GPS | 5.0 Hz |
| | Radar | 14.3 Hz |
| Target | IMU | 25 Hz |
| | Odometer | 25 Hz |
| | GPS | 1.0 Hz |
Table 2. List of the available measurements considered in this study. The noise levels are obtained via experiments and are expressed in terms of standard deviation (SD).
| Sensor | Measurement Signal | Noise Level (SD) |
| IMU (host) | Longitudinal acceleration $a_h$ | 0.189 m/s² |
| | Yaw rate $\dot{\theta}_h$ | 0.0138 rad/s |
| Odometer (host) | Longitudinal velocity $v_h$ | 0.0721 m/s |
| GPS (host) | Global X-position $X_h$ | 0.702 m |
| | Global Y-position $Y_h$ | 0.702 m |
| | Global heading angle $\theta_h$ | 0.0347 rad |
| Radar (host) | Relative distance $r$ | 0.0106 m |
| | Relative speed $\dot{r}$ | 0.138 m/s |
| IMU (target) | Longitudinal acceleration $a_t$ | 0.294 m/s² |
| | Yaw rate $\dot{\theta}_t$ | 0.0139 rad/s |
| Odometer (target) | Longitudinal velocity $v_t$ | 0.0814 m/s |
| GPS (target) | Global X-position $X_t$ | 0.493 m |
| | Global Y-position $Y_t$ | 0.493 m |
| | Global heading angle $\theta_t$ | 0.0910 rad |
Table 3. Weighting values for the different system states.
| Weight | $g_h$ | $g_y$ | $g_p$ | $g_v$ | $g_a$ |
| Value | 1/0.1 1/rad | 1/0.1 s/rad | 1/0.05 1/m | 1/0.5 s/m | 1/0.1 s²/m |
Table 4. Mean values and standard deviations corresponding to the results presented in Figure 6 (*: scenario that is used to select the estimator setting).
| Error Signal | Scenario 1 Mean (m) | Scenario 1 St.d. (m) | Scenario 2 Mean (m) | Scenario 2 St.d. (m) | Scenario 3 * Mean (m) | Scenario 3 * St.d. (m) |
| $X_t^{gt} - X_t$ | 0.0215 | 0.415 | 0.0215 | 0.415 | 0.0215 | 0.415 |
| $X_t^{gt} - \hat{X}_t$ | 0.0448 | 0.187 | 0.123 | 0.604 | 0.0788 | 0.432 |
| $Y_t^{gt} - Y_t$ | 0.0961 | 0.476 | 0.0961 | 0.476 | 0.0961 | 0.476 |
| $Y_t^{gt} - \hat{Y}_t$ | 0.846 | 0.577 | 0.0542 | 0.6278 | 0.346 | 0.536 |
| $X_h^{gt} - X_h$ | 0.0823 | 0.713 | 0.0823 | 0.713 | 0.0823 | 0.713 |
| $X_h^{gt} - \hat{X}_h$ | 0.111 | 0.253 | 0.0591 | 0.354 | 0.141 | 0.414 |
| $Y_h^{gt} - Y_h$ | 0.0447 | 0.723 | 0.0447 | 0.723 | 0.0447 | 0.723 |
| $Y_h^{gt} - \hat{Y}_h$ | 0.136 | 0.0657 | 0.0118 | 0.495 | 0.0518 | 0.527 |
Table 5. Mean values and standard deviations corresponding to the results presented in Figure 8 (*: scenario that is used to select the estimator setting).
| Error Signal | Scenario 1 Mean (rad) | Scenario 1 St.d. (rad) | Scenario 2 Mean (rad) | Scenario 2 St.d. (rad) | Scenario 3 * Mean (rad) | Scenario 3 * St.d. (rad) |
| $\theta_t^{gt} - \theta_t$ | 0.0293 | 0.0845 | 0.0293 | 0.0845 | 0.0293 | 0.0845 |
| $\theta_t^{gt} - \hat{\theta}_t$ | 0.0845 | 0.0076 | 0.0292 | 0.0641 | 0.0292 | 0.0641 |
| $\theta_h^{gt} - \theta_h$ | 0.000731 | 0.0333 | 0.000731 | 0.0333 | 0.000731 | 0.0333 |
| $\theta_h^{gt} - \hat{\theta}_h$ | 0.00217 | 0.00218 | 0.00130 | 0.0124 | 0.00122 | 0.0124 |
Table 6. Maximum and RMS errors between the ground truth, the estimates and the measurements for all three driving scenarios (*: scenario for which the state estimator setting is selected). The top part contains the errors between the ground truth and estimation results, the middle part contains the errors between the ground truth and the measurements of direct state components, and the bottom part contains errors related to the radar sensor (**: reconstructed via (6) and (7)).
| Error Signal | Scenario 1 RMS | Scenario 1 Abs. max | Scenario 2 RMS | Scenario 2 Abs. max | Scenario 3 * RMS | Scenario 3 * Abs. max |
| $X_t^{gt} - \hat{X}_t$ (m) | 1.75 × 10⁻¹ | 5.60 × 10⁻¹ | 6.63 × 10⁻¹ | 13.8 × 10⁻¹ | 4.63 × 10⁻¹ | 12.9 × 10⁻¹ |
| $Y_t^{gt} - \hat{Y}_t$ (m) | 11.2 × 10⁻¹ | 18.4 × 10⁻¹ | 6.80 × 10⁻¹ | 17.6 × 10⁻¹ | 6.89 × 10⁻¹ | 17.7 × 10⁻¹ |
| $v_t^{gt} - \hat{v}_t$ (m/s) | 7.63 × 10⁻² | 19.7 × 10⁻² | 30.6 × 10⁻² | 73.1 × 10⁻² | 25.0 × 10⁻² | 48.0 × 10⁻² |
| $a_t^{gt} - \hat{a}_t$ (m/s²) | 2.81 × 10⁻² | 8.32 × 10⁻² | 11.4 × 10⁻² | 24.3 × 10⁻² | 8.94 × 10⁻² | 23.2 × 10⁻² |
| $\theta_t^{gt} - \hat{\theta}_t$ (rad) | 6.47 × 10⁻² | 15.2 × 10⁻² | 6.47 × 10⁻² | 15.2 × 10⁻² | 6.47 × 10⁻² | 15.2 × 10⁻² |
| $\dot{\theta}_t^{gt} - \hat{\dot{\theta}}_t$ (rad/s) | 13.6 × 10⁻³ | 42.3 × 10⁻³ | 13.6 × 10⁻³ | 42.3 × 10⁻³ | 14.0 × 10⁻³ | 44.4 × 10⁻³ |
| $X_h^{gt} - \hat{X}_h$ (m) | 2.80 × 10⁻¹ | 7.95 × 10⁻¹ | 3.77 × 10⁻¹ | 7.93 × 10⁻¹ | 4.66 × 10⁻¹ | 9.60 × 10⁻¹ |
| $Y_h^{gt} - \hat{Y}_h$ (m) | 1.61 × 10⁻¹ | 2.62 × 10⁻¹ | 5.41 × 10⁻¹ | 12.1 × 10⁻¹ | 5.80 × 10⁻¹ | 11.7 × 10⁻¹ |
| $v_h^{gt} - \hat{v}_h$ (m/s) | 8.70 × 10⁻² | 27.3 × 10⁻² | 20.4 × 10⁻² | 49.6 × 10⁻² | 24.9 × 10⁻² | 51.0 × 10⁻² |
| $a_h^{gt} - \hat{a}_h$ (m/s²) | 3.23 × 10⁻² | 9.88 × 10⁻² | 9.02 × 10⁻² | 27.3 × 10⁻² | 8.79 × 10⁻² | 20.2 × 10⁻² |
| $\theta_h^{gt} - \hat{\theta}_h$ (rad) | 1.26 × 10⁻² | 3.38 × 10⁻² | 1.26 × 10⁻² | 3.38 × 10⁻² | 1.26 × 10⁻² | 3.38 × 10⁻² |
| $\dot{\theta}_h^{gt} - \hat{\dot{\theta}}_h$ (rad/s) | 8.74 × 10⁻³ | 42.1 × 10⁻³ | 8.74 × 10⁻³ | 42.1 × 10⁻³ | 8.87 × 10⁻³ | 42.3 × 10⁻³ |
| $X_t^{gt} - X_t$ (m) | 3.72 × 10⁻¹ | 9.97 × 10⁻¹ | 3.72 × 10⁻¹ | 9.97 × 10⁻¹ | 3.72 × 10⁻¹ | 9.97 × 10⁻¹ |
| $Y_t^{gt} - Y_t$ (m) | 5.00 × 10⁻¹ | 12.0 × 10⁻¹ | 5.00 × 10⁻¹ | 12.0 × 10⁻¹ | 5.00 × 10⁻¹ | 12.0 × 10⁻¹ |
| $v_t^{gt} - v_t$ (m/s) | 7.87 × 10⁻² | 24.4 × 10⁻² | 7.87 × 10⁻² | 24.4 × 10⁻² | 7.87 × 10⁻² | 24.4 × 10⁻² |
| $a_t^{gt} - a_t$ (m/s²) | 30.0 × 10⁻² | 105 × 10⁻² | 30.0 × 10⁻² | 105 × 10⁻² | 30.0 × 10⁻² | 105 × 10⁻² |
| $\theta_t^{gt} - \theta_t$ (rad) | 8.20 × 10⁻² | 17.9 × 10⁻² | 8.20 × 10⁻² | 17.9 × 10⁻² | 8.20 × 10⁻² | 17.9 × 10⁻² |
| $\dot{\theta}_t^{gt} - \dot{\theta}_t$ (rad/s) | 13.9 × 10⁻³ | 42.8 × 10⁻³ | 13.9 × 10⁻³ | 42.8 × 10⁻³ | 13.9 × 10⁻³ | 42.8 × 10⁻³ |
| $X_h^{gt} - X_h$ (m) | 7.31 × 10⁻¹ | 20.7 × 10⁻¹ | 7.31 × 10⁻¹ | 20.7 × 10⁻¹ | 7.31 × 10⁻¹ | 20.7 × 10⁻¹ |
| $Y_h^{gt} - Y_h$ (m) | 7.36 × 10⁻¹ | 21.2 × 10⁻¹ | 7.36 × 10⁻¹ | 21.2 × 10⁻¹ | 7.36 × 10⁻¹ | 21.2 × 10⁻¹ |
| $v_h^{gt} - v_h$ (m/s) | 7.18 × 10⁻² | 26.4 × 10⁻² | 7.18 × 10⁻² | 26.4 × 10⁻² | 7.18 × 10⁻² | 26.4 × 10⁻² |
| $a_h^{gt} - a_h$ (m/s²) | 18.7 × 10⁻² | 69.6 × 10⁻² | 18.7 × 10⁻² | 69.6 × 10⁻² | 18.7 × 10⁻² | 69.6 × 10⁻² |
| $\theta_h^{gt} - \theta_h$ (rad) | 3.38 × 10⁻² | 8.09 × 10⁻² | 3.38 × 10⁻² | 8.09 × 10⁻² | 3.38 × 10⁻² | 8.09 × 10⁻² |
| $\dot{\theta}_h^{gt} - \dot{\theta}_h$ (rad/s) | 13.8 × 10⁻³ | 46.8 × 10⁻³ | 13.8 × 10⁻³ | 46.8 × 10⁻³ | 13.8 × 10⁻³ | 46.8 × 10⁻³ |
| $r^{gt} - \hat{r}$ ** (m) | 12.1 × 10⁻³ | 34.8 × 10⁻³ | 25.7 × 10⁻³ | 61.3 × 10⁻³ | 14.1 × 10⁻³ | 37.8 × 10⁻³ |
| $r^{gt} - r$ (m) | 15.3 × 10⁻³ | 46.8 × 10⁻³ | 120 × 10⁻³ | 154 × 10⁻³ | 60.9 × 10⁻³ | 146 × 10⁻³ |
| $\dot{r}^{gt} - \hat{\dot{r}}$ ** (m/s) | 5.95 × 10⁻² | 15.2 × 10⁻² | 36.2 × 10⁻² | 74.3 × 10⁻² | 30.5 × 10⁻² | 62.6 × 10⁻² |
| $\dot{r}^{gt} - \dot{r}$ (m/s) | 13.9 × 10⁻² | 47.7 × 10⁻² | 13.9 × 10⁻² | 47.7 × 10⁻² | 13.9 × 10⁻² | 47.7 × 10⁻² |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
