Article

Vehicle State Estimation Combining Physics-Informed Neural Network and Unscented Kalman Filtering on Manifolds

1 Automotive Engineering Research Institute, Jiangsu University, Zhenjiang 212013, China
2 School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang 212013, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(15), 6665; https://doi.org/10.3390/s23156665
Submission received: 7 June 2023 / Revised: 12 July 2023 / Accepted: 21 July 2023 / Published: 25 July 2023
(This article belongs to the Special Issue Sensors for Road Vehicles of the Future-Edition 2)

Abstract: This paper proposes a novel vehicle state estimation (VSE) method that combines a physics-informed neural network (PINN) and an unscented Kalman filter on manifolds (UKF-M). The VSE aims to calibrate the inertial measurement unit (IMU) and provide comprehensive information on the vehicle's dynamic state. The proposed method leverages a PINN to eliminate IMU drift by constraining the loss function with ordinary differential equations (ODEs). Then, the UKF-M is used to estimate the vehicle's 3D attitude, velocity, and position more accurately using a six-degrees-of-freedom vehicle model. Experimental results demonstrate that the proposed PINN method can learn from multiple sensors and reduce the impact of sensor biases by constraining the ODEs without affecting the sensor characteristics. Compared to the UKF-M algorithm alone, the proposed VSE estimates the vehicle states more accurately. The proposed method has the potential to automatically reduce the impact of sensor drift during vehicle operation, making it more suitable for real-world applications.

1. Introduction

The rapid development of sensor technology and intelligent transportation systems [1] in recent years has led to the introduction of new vehicle chassis subsystems by original equipment manufacturers [2]. These subsystems improve specific vehicle performance, making driving more efficient and effective. Vehicle chassis subsystems not only make driving more convenient [3], but they also make the vehicle a complex autonomous system. The optimal coordination of chassis system (OCCS) [4] is applied to coordinate different complementary chassis subsystems. However, accurate vehicle state information is required for chassis coordinated control in order to correctly coordinate subsystems and identify driving conditions [5]. Furthermore, sensor noise can affect the accuracy, reliability, and continuity of vehicle state information. Extensive research [6,7] has been conducted to investigate various estimation algorithms based on cost-effective sensors and available measurements.
With the development of sensor technology, the use of vehicle control sensors such as the global navigation satellite system (GNSS) and the inertial measurement unit (IMU) has increased. In the OCCS, the IMU plays a crucial role in measuring the angular rate and acceleration of the vehicle body, enabling accurate calculations of the vehicle's attitude, velocity, and position [8]. However, due to installation errors [9] or coupling effects between the IMU and vehicle motion [10], IMU misalignment is inevitable. Indirect state estimation methods [9,10] are proposed to mitigate the drift of IMUs. To achieve advanced control, such as the OCCS, the coordinator requires multiple advanced sensor inputs and more estimation outputs, primarily to avoid conflicts downstream in subsystems [2]. This has motivated us to develop a method that leverages the relationships between advanced sensors to eliminate IMU drift.
The extended Kalman filter (EKF [11]) and unscented Kalman filter (UKF [12]) are commonly used for optimal integration between the GNSS and inertial navigation system (INS). The unscented Kalman filter on manifolds (UKF-M) [13] is a novel filtering algorithm that provides more accurate and robust navigation estimates. Compared with the existing state-of-the-art integrated navigation algorithms, the UKF-M-based integrated navigation estimation algorithm [14] has higher accuracy and faster convergence speed. These methods incorporate various vehicle/tire models and real-time states [15] to handle disturbances and noise. However, the local linearization operation of EKF/UKF can introduce significant estimation errors [7]. Additionally, sensor offset can cause integration errors in the INS/GNSS model [16].
A virtual sensor (VS) [17] can be used to replace a redundant sensor, which can mitigate sensor noise. A VS is a type of software sensor that can integrate multiple data sources to improve system reliability. Kim et al. [18] combined the adaptive Kalman filter with a deep neural network (DNN) to estimate the sideslip angle. The proposed model utilized the DNN output as a VS. Combining the EKF/UKF model with the DNN-based VS, this model not only provided accurate estimates of the sideslip angle but also quantified the uncertainty associated with the estimation. In another study, Kim et al. [19] used a long short-term memory (LSTM) network to filter the noise and bias of the original sensor data. Leandro et al. [20] combined a neural network and a Kalman filter to estimate the vehicle's roll angle. The neural network output was used as the pseudo-roll angle to build the Kalman module. Soriano et al. [21] proposed a neural network-based calibration method for a two-axis accelerometer. Their experimental results demonstrated that the accelerometer error model based on the neural network had better accuracy and robustness than the explicit accelerometer error model method.
The data-driven vehicle model [18,19,20] serves as an approach for establishing VS. These models utilize a data-driven approach to estimate vehicle states by leveraging the hidden relationships between them. In the development of data-driven vehicle dynamics models, the linear time-invariant (LTI) state-space model [22,23,24,25] is commonly employed. Experimental results [23,24,25] demonstrated that the LTI-based data-driven vehicle model outperformed comparative vehicle dynamics models. However, these data-driven vehicle models [18,19,20,23,24,25] rely on the assumption that the selected supervised learning vehicle states accurately represent the true vehicle states. This assumption can impact the effectiveness of online learning in these models. Additionally, due to the influence of sensor noise [26], neural network-based models may introduce errors without physics-based models.
The physics-informed neural network (PINN) [27] is a novel deep learning method that integrates domain-specific knowledge into a neural network architecture. Xu et al. [28] proposed a PINN-based model for unmanned surface vehicle dynamics. Compared with traditional neural networks, their PINN-based unmanned surface vehicle dynamics model had better prediction accuracy for the sway and surge velocity and rotation speed. Franklin et al. [29] used PINN as a hybrid virtual sensor to estimate the flow metering in oil wells. Wong et al. [30] demonstrated that PINN can effectively mitigate the impact of noise in data originating from low-quality sensors. In state estimation, combining PINN with a state-space model formulation can avoid computationally costly time integration [31].
In this study, we propose a vehicle state estimation (VSE) method that combines PINN and UKF-M (PINN UKFM). The contributions of the paper can be divided into two main parts.
  • We use PINN as a data-driven vehicle dynamics model to establish a VS, using the IMU calibration values as the output. By leveraging the loss calculation method of PINN, the vehicle dynamics can be integrated with the data-driven model, enabling the incorporation of information from multiple sensor sources during vehicle operation. The experimental results on a real vehicle platform indicated that the PINN-based model effectively integrates multiple sensor inputs to achieve improved estimation of the vehicle's state, surpassing both the physical and neural network-based models.
  • Based on the IMU calibration values, we utilize the UKF-M algorithm to estimate the attitude, velocity, and position of the vehicle. By fusing data from multiple sensors, PINN UKFM provides accurate and comprehensive vehicle states, including the six-dimensional vehicle dynamics, 3D attitude, velocity, and position, which can be used in various vehicle dynamic control systems. For example, the six-dimensional vehicle dynamics can define the chassis motion, which can be used in the OCCS, and the 3D position can be applied to vehicle navigation in a GNSS-denied environment. The experimental results indicated that the PINN-based model can effectively incorporate multiple sensor inputs to mitigate IMU biases and enhance the accuracy of the existing state-of-the-art integrated navigation algorithm, UKF-M.
The rest of this work is organized as follows: Section 2 introduces the vehicle model and the PINN UKFM sensor states, and defines the PINN UKFM problem. Next, Section 3 introduces the proposed PINN UKFM algorithm. In Section 4, the proposed method is tested using realistic vehicle data; rainy weather is used as an experimental condition, as it reduces the road adhesion coefficient and increases the nonlinearity of the vehicle dynamics. Finally, Section 5 presents the conclusions of this paper.

2. Estimation Problem

2.1. The Vehicle Model

The PINN UKFM algorithm used a six-degrees-of-freedom (6-DoF) kinematic vehicle model [32] to obtain accurate state estimates. As shown in Figure 1, the model could define the chassis movement [33], which included the navigation and vehicle body coordinates [5]. The starting point of the navigation coordinates was defined as the track start. The navigation coordinates consisted of three variables: E (east), N (north), and U (upward).
The vehicle body's coordinate origin was located at its center of mass, and the right-hand rule was used. The x-, y-, and z-directions pointed forward, left, and upward, respectively. Acceleration and velocity could be decomposed into longitudinal acceleration $a_x$/velocity $v_x$, lateral acceleration $a_y$/velocity $v_y$, and vertical acceleration $a_z$/velocity $v_z$. The vehicle's orientation was defined by the roll angle (around x, roll rate $\omega_x$), pitch angle (around y, pitch rate $\omega_y$), and heading angle (around z, yaw rate $\omega_z$). Finally, other vehicle states could be derived from these. For example, the sideslip angle $\beta$ could be calculated through the following trigonometric operation:
$\beta = \tan^{-1}\left( \dfrac{v_y}{v_x} \right)$

2.2. The Sensor States

In the experiment, the vehicle had multiple sensors, including the GNSS, IMU, measurement steering wheels (MSW), wheel force transducers (WFT), 2-axis optical sensors (S-Motion), and a monocular camera. The experimental platform structure is presented in Section 4.1. Due to the real-time computation requirement, the signals from the monocular camera were not used in the PINN UKFM algorithm. The sensor inputs of PINN UKFM are introduced in Table 1.
The symbols E and N represented easting and northing in the navigation coordinates. Using the Universal Transverse Mercator (UTM) projection and the navigation starting point, easting and northing were transformed from the GNSS coordinates into the navigation coordinates. The UTM velocity was calculated from the navigation coordinates as:
$v_{gps} = \dfrac{\sqrt{(E_t - E_{t-1})^2 + (N_t - N_{t-1})^2 + (U_t - U_{t-1})^2}}{dt}$
where dt is a single GNSS interval, and t and t − 1 are GNSS coordinate timestamps.
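As a concrete check of this formula, the speed between two consecutive ENU fixes can be computed directly; the helper name and sample coordinates below are purely illustrative:

```python
import math

def gnss_speed(p_prev, p_curr, dt):
    """Speed from two consecutive ENU fixes (E, N, U), following the v_gps formula."""
    dE, dN, dU = (c - p for c, p in zip(p_curr, p_prev))
    return math.sqrt(dE * dE + dN * dN + dU * dU) / dt

# Two fixes 5 m apart (3-4-5 triangle in the E-N plane) taken 0.5 s apart -> 10 m/s
print(gnss_speed((100.0, 200.0, 10.0), (103.0, 204.0, 10.0), 0.5))
```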

2.3. Problem Definition

The PINN-based VS was defined as a time-series forecasting model in which sensor signals were taken as discrete variables. Assuming that “ n ” represents the current timestamp, the PINN module input states were defined as:
$X = [X_{n-L}, X_{n-L+1}, \ldots, X_n], \qquad X_n = [X_n^{(1)}, X_n^{(2)}, \ldots, X_n^{(\vartheta)}]$
where $n - L$ is the starting time step, $X$ represents the sensor signals shown in Table 1, and $\vartheta$ is the number of sensor signals.
PINN was used as a universal function approximator to achieve IMU calibration. Following previous work combining artificial-intelligence techniques with Kalman filtering for estimation [23], the calibrated values were called "pseudo-states." The pseudo-states were constrained to satisfy the conservation principles derived from the vehicle dynamics. Therefore, the PINN module output states were defined as:
$\hat{u}_\theta = [\hat{u}_\theta^{(1)}, \hat{u}_\theta^{(2)}, \ldots, \hat{u}_\theta^{(\iota)}]$
where $\hat{u}_\theta$ represents the pseudo-states and $\iota$ is the number of pseudo-states. Based on the LTI state-space assumption, the pseudo-states were inputted into ODEs to compute the integrated states. The sensor measurements $z$ of these integrated states were represented as:
$z = [z_{n+1}, z_{n+2}, \ldots, z_{n+F}], \qquad z_{n+1} = [z_{n+1}^{(1)}, z_{n+1}^{(2)}, \ldots, z_{n+1}^{(\kappa)}]$
where $n+1, n+2, \ldots, n+F$ represent the output timestamps, $\kappa$ is the number of integrated states, and $n + F$ is the ending time step.
For better integration with the vehicle control, the authors hypothesized that the vehicle physical model satisfied an LTI state-space model [23,24], which could be defined as:
$\dot{x} = A x_t + B_t, \qquad y_t = C x_t + D_t$
where $x_t$ is the state variable at the current time step, $\dot{x}$ denotes its rate of change, $y_t$ represents the output variable at the current time step, and $B_t$ and $D_t$ are disturbances or noise. It should be noted that the system dynamics and output equations did not change over $n+1, n+2, \ldots, n+F$.
By utilizing the LTI state-space model, the output of the PINN module could be used to compute the other vehicle states. Based on the ODE constraints, PINN ensured the estimated IMU states satisfied the physical relationships among sensor measurements.
Next, the pseudo-states were inputted into the UKF-M-based IMU-GNSS sensor-fusion model [34]. Using these pseudo-states, the UKF-M module could estimate both the velocity and position of the vehicle.

3. Methodology

3.1. Structure of the Proposed VSE

In recent years, there has been growing interest in the development of multi-sensor systems for vehicle state estimation due to their potential to improve accuracy and robustness in complex environments. The proposed PINN UKFM is one such system, integrating multiple sensors to estimate the vehicle dynamics in real time. The architecture of our proposed PINN UKFM is illustrated in Figure 2 and consisted of three modules: the sensors module, the PINN module, and the UKF-M module.
The sensors module in PINN UKFM consisted of various sensors, such as GNSS, IMU, MSW, WFT, and S-Motion. These sensors provide rich information about the vehicle's motion, including its position, velocity, acceleration, and orientation. For adaptive reduction of data noise, the PINN module was employed to learn the complex, nonlinear relationship between the sensor signals and the filtered vehicle states. Specifically, the PINN module used the time-series sensor signals as input and outputted the corresponding pseudo-states. These pseudo-states were then fed into the UKF-M module, which used a state-space model to estimate the vehicle's position, velocity, and other parameters. By combining the strengths of both PINN and UKF-M, the proposed PINN UKFM achieved accurate and robust vehicle state estimation.

3.2. The PINN Module

In practical applications, different sensors may measure data with noise and drift due to their different characteristics and working environments. To obtain accurate vehicle state estimation, we used PINN to integrate the vehicle dynamics into a neural network architecture. The PINN module penalized the loss function with ODEs and algebraic equations to make the sensor data consistent with the vehicle dynamics, and fused the data from multiple sensors to reduce measurement errors.
Accordingly, the PINN module obtained the angles and velocities by integrating the accelerations and angular rates. Temporal interaction is widely used in establishing data-driven vehicle models [19,25], as it can capture complex temporal and hidden correlations for better state prediction. Therefore, a temporal model consisting of an encoder layer, a temporal interaction layer, and a decoder layer was proposed, as shown in Figure 3.
The encoder layer was used to embed and encode the sensor signals $X$. The data were mapped into a high-dimensional space through a multilayer perceptron (MLP). The encoder layer was defined as:
$e_t = \sigma(W_{emb} X_t + b_{emb})$
where $e_t$ denotes the embedded feature vector, and the MLP contained the weighting matrix $W_{emb}$, the bias term $b_{emb}$, and the rectified linear unit (ReLU) activation function $\sigma$ [35].
Since vehicle states have temporal interactions, the temporal interaction layer modeled the interactions among the different hidden states. The time dimension of $e_t$ was concatenated:
$e^0 = \mathrm{Concat}(e_t), \quad t \in [n-L, n]$
where the embedded feature vectors were concatenated to form $e^0$. Then, the PINN module learned the temporal interactions:
$e^l = \sigma(W_{time}^l e^{l-1} + b_{time}^l), \quad l = 1, 2, \ldots, s$
where $e^{l-1}$ and $e^l$ are the input and output of layer $l$, respectively.
The decoder layer predicted the pseudo-states as:
$\hat{u}_\theta = (W_{dec} e^s + b_{dec}) + u_{past}$
where $u_{past}$ represents the past measurement of the IMU, and $\hat{u}_\theta$ refers to the pseudo-states that represent the IMU calibration values. We defined the residual between the pseudo-states and the past IMU measurement as the drift of the IMU. This structure was similar to the residual block in a residual network [36]: $W_{dec} e^s + b_{dec} = \hat{u}_\theta - u_{past}$ represented the latent (hidden) solution for the IMU drift. The PINN determined the parameters $\theta$ of the NN [27] by minimizing the loss function:
$\theta^{*} = \arg\min_{\theta} L(\theta)$
$L(\theta) = L_F(\theta) + L_{data}(\theta)$
$L_{data}(\theta) = \dfrac{1}{N_d} \sum_{i=1}^{N_d} \left| \hat{u}_\theta(X;\theta) - u(X) \right|^2$
$L_F(\theta) = \sum_{k=1}^{7} \omega_k \left[ \dfrac{1}{F} \dfrac{1}{N_F} \sum_{t=n+1}^{n+F} \sum_{i=1}^{N_F} \left\| g_k\big(\hat{u}_\theta(X;\theta), X_n, z_t\big) \right\|^2 \right] + \sum_{k=8}^{12} \omega_k \left[ \dfrac{1}{N_F} \sum_{i=1}^{N_F} \left\| f_k\big(\hat{u}_\theta(X;\theta), X_n\big) \right\|^2 \right]$
where $L_F(\theta)$ represents the mean squared residual of the physics-based equations; $L_{data}(\theta)$ represents the mean squared residual of the measurement data; $\omega_1, \omega_2, \ldots, \omega_{12}$ denote the weights associated with the physical constraints; $N_d$ and $N_F$ represent the batch sizes; $u(X)$ represents the future measurement of the IMU; $t$ represents the output timestamps; $F$ represents the forecast horizon; $g\big(\hat{u}_\theta(X;\theta), X_n, z_t\big)$ represents the ordinary differential equations [37]; and $f\big(\hat{u}_\theta(X;\theta), X_n\big)$ represents the algebraic equations [37]. The ODEs and algebraic equations were utilized as an additional regularization term [38]. By minimizing the loss of the physics-based equations, we could incorporate the laws of physics into the NN [38,39,40].
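The encoder, temporal interaction, and decoder layers described above can be sketched in PyTorch (which the paper states it uses). The hidden width, window length, and layer count s below are illustrative choices, not the paper's settings:

```python
import torch
import torch.nn as nn

class TemporalPINN(nn.Module):
    """Encoder -> temporal interaction -> decoder, with a residual connection
    to the past IMU measurement. All sizes here are illustrative."""
    def __init__(self, n_signals=22, n_pseudo=6, seq_len=10, hidden=64, s=2):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_signals, hidden), nn.ReLU())
        layers, in_dim = [], seq_len * hidden      # Concat over t in [n-L, n]
        for _ in range(s):                         # s temporal interaction layers
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        self.temporal = nn.Sequential(*layers)
        self.decode = nn.Linear(hidden, n_pseudo)  # latent IMU-drift solution

    def forward(self, X, u_past):
        # X: (batch, seq_len, n_signals); u_past: (batch, n_pseudo)
        e = self.embed(X)                          # e_t for every time step
        e0 = e.flatten(start_dim=1)                # concatenate the time dimension
        es = self.temporal(e0)
        return self.decode(es) + u_past            # pseudo-states u_hat

net = TemporalPINN()
u_hat = net(torch.zeros(4, 10, 22), torch.zeros(4, 6))  # batch of 4, 6 pseudo-states
```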
When used for solving partial differential equations (PDEs), the $L_F(\theta)$ term in PINN typically penalizes the degree to which the model violates physical laws. However, a PDE-based $L_F(\theta)$ was not applicable to the continuous-time modeling and prediction problems addressed in this paper, since the physical laws of vehicle dynamics models are usually expressed as ODEs and algebraic equations. To address this, we drew inspiration from physics-constrained neural networks (PCNNs) [39,40] and neural ordinary differential equations (NODEs) [41,42]. PCNN is a variant of PINN that introduces a regularization parameter to control the trade-off between data- and knowledge-based regularization [43]. In NODEs, the hidden layers of a neural network are treated as the states of an ODE, and an ODE solver is used to compute the evolution of these states. Specifically, we incorporated ODEs to represent the dynamic evolution of the system and used algebraic equations to represent the vehicle dynamics model.
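With this design, the composite loss $L(\theta) = L_F(\theta) + L_{data}(\theta)$ reduces to a weighted sum of mean-squared residual terms. A minimal sketch, in which the residual tensors and weights are placeholders supplied by the caller:

```python
import torch

def pinn_loss(u_hat, u_future, g_residuals, f_residuals, weights):
    """L(theta) = L_F(theta) + L_data(theta): data mismatch plus the
    omega_k-weighted physics residuals (7 ODE terms g_k, 5 algebraic terms f_k)."""
    l_data = torch.mean((u_hat - u_future) ** 2)
    l_phys = sum(w_k * torch.mean(r_k ** 2)
                 for w_k, r_k in zip(weights, g_residuals + f_residuals))
    return l_phys + l_data
```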
The PINN module provided the signals of the pseudo-states, which represented the rate of change of the state variable x ˙ in the LTI state space. The inputs and outputs of the PINN module were denoted as:
$X^{(1\text{-}22)} = [a_x, a_y, a_z, \omega_x, \omega_y, \omega_z, v_x, v_y, v_z, \varphi, \Theta, \psi, M_{yl}, M_{yr}, F_{xfl}, F_{xrl}, F_{yfl}, F_{yrl}, dE, dN, dU, \delta_s]$
$\hat{u}_\theta(X;\theta)^{(1\text{-}6)} = [a_x, a_y, a_z, \omega_x, \omega_y, \omega_z]$
where $dE$, $dN$, and $dU$ are the navigation coordinates minus the current vehicle coordinates [44].
The ordinary differential equations were represented as:
$g_q\big(\hat{u}_\theta(X;\theta), X_n, z_t\big) = X_n^{(q+6)} + \hat{u}_\theta(X;\theta)^{(q)} \, dt - z_t^{(q)}, \quad q = 1, 2, \ldots, 6$
$g_7\big(\hat{u}_\theta(X;\theta), X_n, z_t\big) = \sqrt{\sum_{i=1}^{3} \left[ X_n^{(i+6)} \, dt + \hat{u}_\theta(X;\theta)^{(i)} \, \dfrac{dt^2}{2} \right]^2} - z_t^{(7)}$
where $X_n^{(7\text{-}12)} = [v_{x_n}, v_{y_n}, v_{z_n}, \varphi_n, \Theta_n, \psi_n]$ are the initial states of the ODE outputs; $z_t^{(1\text{-}6)} = [v_{x_t}, v_{y_t}, v_{z_t}, \varphi_t, \Theta_t, \psi_t]$ are the measurements of the ODE outputs at timestamp $t$; and $dt$ is the time interval between $n$ and $t$. By minimizing $g_1 \sim g_6$, the pseudo-states could incorporate information from the related vehicle states. The state $z_t^{(7)}$ represents the position change of the vehicle, which was represented as:
$z_t^{(8,9,10)} = [dE_t, dN_t, dU_t]$
$z_t^{(7)} = \sqrt{(E_t - E_n)^2 + (N_t - N_n)^2 + (U_t - U_n)^2} = \sqrt{dE_t^2 + dN_t^2 + dU_t^2}$
where $[E, N, U]$ are the outputs of variable $y_t$ in the LTI state-space vehicle model. By minimizing $g_7$, the physical knowledge of $y_t$ was incorporated into the physics-informed loss function. The position-update formula of the vehicle dynamics [45] may affect the learning effectiveness of the neural network; therefore, we used Euler integration to calculate the displacement of the vehicle. By utilizing the loss calculation method of PINN, the ODE vehicle dynamics could be combined with a data-driven model to consider multiple sensor sources.
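The Euler-integrated residuals $g_1$-$g_7$ above can be sketched in vectorized form; the tensor index layout follows the text, while the function name and the synthetic check data are illustrative:

```python
import torch

def ode_residuals(u_hat, X_n, z_t, dt):
    """g_1..g_6: Euler-integrate the calibrated IMU rates against the measured
    states; g_7: check the Euclidean displacement. X_n[6:12] holds
    [v_x, v_y, v_z, phi, Theta, psi] (states 7-12 in the text's 1-based indexing)."""
    g16 = X_n[..., 6:12] + u_hat * dt - z_t[..., 0:6]
    disp = X_n[..., 6:9] * dt + u_hat[..., 0:3] * dt ** 2 / 2  # per-axis displacement
    g7 = torch.linalg.norm(disp, dim=-1) - z_t[..., 6]
    return g16, g7

# Consistent synthetic data: v_x = 1 m/s, a_x = 2 m/s^2, dt = 0.1 s
X_n = torch.zeros(22); X_n[6] = 1.0
u_hat = torch.tensor([2.0, 0.0, 0.0, 0.0, 0.0, 0.0])
z_t = torch.tensor([1.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.11])  # v_x after dt, displacement
g16, g7 = ode_residuals(u_hat, X_n, z_t, 0.1)
```

For self-consistent data, both residuals vanish, which is exactly what drives the pseudo-states toward the physically integrated trajectory.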
Algebraic equations capture simple dependence relationships among vehicle states [37]. The algebraic equations representing the linear dynamics in the LTI state space could be written as $\dot{x} = A X_n + B_n$. By minimizing the loss of $\hat{u}_\theta(X;\theta) - \dot{x}$, we could incorporate the physical dynamics model into the physics-informed loss function. The referenced vehicle dynamics model was based on the two-degrees-of-freedom (2-DOF) vehicle dynamics model [23,45]. To simplify the vehicle dynamics [23], we decoupled the longitudinal and lateral dynamics by neglecting the coupling between the lateral and longitudinal forces.
The algebraic equation $f_8\big(\hat{u}_\theta(X;\theta), X_n\big)$ was represented as:
$f_8\big(\hat{u}_\theta(X;\theta), X_n\big) = a_{x_n}^{(1)} - \hat{u}_\theta(X;\theta)^{(1)}$
$X_n^{(7,13,14)} = [v_{x_n}, M_{yl,n}, M_{yr,n}]$
$m \, a_{x_n}^{(1)} = F_{e_n} + F_{b_n} + F_{s_n} + F_{f_n} - F_{D_n}$
$a_{x_n}^{(1)} = \dfrac{1}{m} \left( k_1 M_{yl,n} + k_2 M_{yr,n} + m g \gamma + m g \mu - k_D v_{x_n}^2 \right)$
where $a_{x_n}^{(1)}$ is calculated from the vehicle longitudinal dynamics; $F_{e_n}$ represents the transmitted (drive) force of the vehicle; $F_{b_n}$ represents the brake force; $m$ is the gross vehicle mass; and $M_{yl,n}$ and $M_{yr,n}$ represent the y-axis torques of the two front wheels. We assumed $F_{e_n}$ and $F_{b_n}$ were transmitted to the front wheels and converted into $M_{yl,n}$ and $M_{yr,n}$, respectively (the experimental car was front-wheel drive). A small slope angle was also assumed; therefore, the dissipative force was $F_{s_n} = 0$ and the frictional force was $F_{f_n} = m g \mu$, where $g$ is the gravitational constant and $\mu$ is the road-wheel static friction coefficient. The air drag force $F_{D_n}$ was calculated from the longitudinal velocity and the drag coefficient $k_D$. The parameters $k_1$, $k_2$, $\mu$, and $k_D$ were estimated using the least squares method to realize online learning of the parameters.
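The online least-squares identification of $k_1$, $k_2$, the friction term, and $k_D$ amounts to a linear regression of $m a_x$ on the wheel torques and the squared speed. A sketch on synthetic data (all numbers below are invented for illustration, not the paper's calibration values):

```python
import numpy as np

# Regress m*a_x on [M_yl, M_yr, 1, v_x^2] to recover [k1, k2, m*g*mu, -k_D].
m = 1500.0                                     # illustrative gross mass [kg]
rng = np.random.default_rng(0)
M_yl = rng.uniform(0.0, 500.0, 50)             # front-left wheel torque samples
M_yr = rng.uniform(0.0, 500.0, 50)             # front-right wheel torque samples
v_x = rng.uniform(5.0, 30.0, 50)               # longitudinal speed samples
k_true = np.array([3.0, 3.0, -200.0, -0.4])    # ground-truth parameter vector
A = np.column_stack([M_yl, M_yr, np.ones(50), v_x ** 2])
a_x = (A @ k_true) / m                         # simulated noise-free acceleration
k_fit, *_ = np.linalg.lstsq(A, m * a_x, rcond=None)
```

On noise-free data the regression recovers the generating parameters exactly; with real sensor data the fit tracks the slowly varying friction and drag terms online.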
In this paper, the tire forces were directly measured, which allowed the longitudinal acceleration $a_{x_n}^{(2)}$ to be calculated directly from the measured tire forces. The algebraic equation of $a_{x_n}^{(2)}$ was represented as:
$f_9\big(\hat{u}_\theta(X;\theta), X_n\big) = a_{x_n}^{(2)} - \ell_1 \, \hat{u}_\theta(X;\theta)^{(1)}$
$X_n^{(15,16)} = [F_{xfl,n}, F_{xfr,n}]$
$a_{x_n}^{(2)} = \dfrac{1}{m} \left( F_{xfl,n} + F_{xfr,n} \right)$
where $F_{xfl,n}$ and $F_{xfr,n}$ are the front left and right wheel longitudinal forces, respectively; $a_{x_n}^{(2)}$ represents the longitudinal acceleration; $\ell_1$ is the parameter $\ell_1 = \dfrac{F_{xfl,1} + F_{xfr,1}}{m \, a_{x_1}}$; and $[F_{xfl,1}, F_{xfr,1}, a_{x_1}]$ represents the known measurement data.
The algebraic equation $f_{10}\big(\hat{u}_\theta(X;\theta), X_n\big)$ was derived from the vehicle lateral dynamics and was represented as:
$f_{10}\big(\hat{u}_\theta(X;\theta), X_n\big) = a_{y_n}^{(1)} - \hat{u}_\theta(X;\theta)^{(2)}$
$X_n^{(7,8,12,22)} = [v_{x_n}, v_{y_n}, \omega_{z_n}, \delta_{s_n}]$
$\delta_{f_n} = \dfrac{1}{i} \delta_{s_n}$
$m \, a_{y_n}^{(1)} = F_{yfl,n} + F_{yfr,n} + F_{yrl,n} + F_{yrr,n}$
$a_{y_n}^{(1)} = \dfrac{1}{m} \left[ -\dfrac{2 C \, v_{y_n} + 2 C (l_f - l_r) \, \omega_{z_n}}{v_{x_n}} - m \, v_{x_n} \omega_{z_n} + 2 C \, \delta_{f_n} \right]$
where $C$ represents the tire cornering stiffness; $l_f$ and $l_r$ represent the distances from the vehicle's center of mass to the front and rear axles, respectively; $\omega_z$ denotes the yaw rate; $F_{yfl,n}$, $F_{yfr,n}$, $F_{yrl,n}$, and $F_{yrr,n}$ denote the lateral tire forces at the front left/right and rear left/right wheels, respectively; $\delta_{s_n}$ represents the steering wheel angle; $\delta_{f_n}$ represents the front wheel steering angle; and $i$ denotes the variable steering gear ratio function [46]. This was a linear tire model, and we assumed the vehicle had equal cornering stiffness for all four wheels.
Similar to $a_{x_n}^{(2)}$, we directly calculated the lateral acceleration $a_{y_n}^{(2)}$ from the measured tire forces. The algebraic equation of $a_{y_n}^{(2)}$ was represented as:
$f_{11}\big(\hat{u}_\theta(X;\theta), X_n\big) = a_{y_n}^{(2)} - \ell_2 \, \hat{u}_\theta(X;\theta)^{(2)}$
$X_n^{(17,18)} = [F_{yfl,n}, F_{yrl,n}]$
$a_{y_n}^{(2)} = \dfrac{2}{m} \left( F_{yfl,n} + F_{yrl,n} \right)$
where $F_{yfl,n}$ and $F_{yrl,n}$ represent the measured front-left and rear-left wheel lateral forces, respectively (the right-side lateral forces were taken as symmetric, hence the factor of two); $a_{y_n}^{(2)}$ represents the lateral acceleration; and $\ell_2$ is the parameter $\ell_2 = \dfrac{F_{yfl,1} + F_{yrl,1}}{m \, a_{y_1}}$.
The algebraic equation $f_{12}\big(\hat{u}_\theta(X;\theta), X_n\big)$ was calculated from the kinematic vehicle model, which was represented as:
$f_{12}\big(\hat{u}_\theta(X;\theta), X_n\big) = \omega_{z_n}^{(1)} - \hat{u}_\theta(X;\theta)^{(6)}$
$\beta_n = \tan^{-1} \left[ \dfrac{l_r}{l_f + l_r} \tan(\delta_{f_n}) \right]$
$\omega_{z_n}^{(1)} = \dfrac{v_{x_n}}{l_f} \sin(\beta_n)$
where $\beta_n$ represents the vehicle sideslip angle.
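The kinematic relations in $f_{12}$ transcribe directly into code; the wheelbase split below is an illustrative value, not the test vehicle's:

```python
import math

def kinematic_yaw_rate(v_x, delta_f, l_f=1.2, l_r=1.4):
    """Yaw rate predicted by the kinematic model: beta from the steering angle,
    then omega_z = v_x / l_f * sin(beta). Wheelbase values are illustrative."""
    beta = math.atan(l_r / (l_f + l_r) * math.tan(delta_f))
    return v_x / l_f * math.sin(beta)

print(kinematic_yaw_rate(10.0, 0.0))  # zero steering -> zero yaw rate
```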
The physics-based equations presented in this paper ($g_{1\text{-}7}\big(\hat{u}_\theta(X;\theta), X_n, z_t\big)$ and $f_{8\text{-}12}\big(\hat{u}_\theta(X;\theta), X_n\big)$) were applicable within the LTI state space. This implied that the pseudo-states were assumed to remain constant over the time interval $[n+1, n+2, \ldots, n+F]$. This assumption ensured the validity and applicability of the equations mentioned earlier, allowing for the incorporation of physics-based constraints into the neural network model.
The PINN module was implemented using the PyTorch deep learning framework. To consider the dynamics of the different states, the Adam optimizer with a learning rate of 0.001 was used to train the network. The model was trained on an NVIDIA RTX 3080 Ti GPU.

3.3. The UKF-M Module

The UKF-M is a novel UKF on manifolds, with versatility that allows direct application to numerous practical manifolds. For stochastic processes on Riemannian manifolds, the theory of Lie groups [47] is used to define the vehicle's attitude estimation. The IMU-GNSS sensor-fusion model [34] was a UKF-M-based filter, which is a standard 3D kinematic model based on inertial inputs. The UKF-M algorithm utilized in PINN UKFM was based on the methodology proposed in [34]. The modification of PINN UKFM involved using the outputs of the PINN module to replace the original IMU measurements.
The states of a moving vehicle in a discrete dynamic system are represented as:
$\chi_n \in \mathcal{M} = \{ C_n \in \mathbb{R}^{3 \times 3},\ v_n \in \mathbb{R}^3,\ P_n \in \mathbb{R}^3,\ b_{g_n} \in \mathbb{R}^3,\ b_{a_n} \in \mathbb{R}^3 \}$
where $\chi_n$ denotes the state of a vehicle belonging to a parallelizable manifold $\mathcal{M}$; $n$ is the current timestamp; $v_n = (v_{E_n}, v_{N_n}, v_{U_n})$ is the velocity vector ($v_{E_n}$: velocity east; $v_{N_n}$: velocity north); $P_n = (E_n, N_n, U_n)$ is the vehicle coordinate in the navigation coordinates; $b_{g_n} = (b_{\omega_x,n}, b_{\omega_y,n}, b_{\omega_z,n})$ is the gyro bias; $b_{a_n} = (b_{a_x,n}, b_{a_y,n}, b_{a_z,n})$ is the accelerometer bias; and $C_n$ belongs to the special orthogonal group, which represents 3D rotation [47]:
$SO(3) := \{ C_n \in \mathbb{R}^{3 \times 3} \mid C_n C_n^T = \mathbf{1},\ \det C_n = 1 \}$
where $\mathbf{1}$ is the identity matrix. Based on the time derivative of $C_n C_n^T = \mathbf{1}$, the following constraint on $C_n^T \dot{C}_n$ was obtained:
$C_n^T \dot{C}_n + \dot{C}_n^T C_n = 0$
As a skew-symmetric matrix, $C_n^T \dot{C}_n$ is often denoted $[\omega]_\times$:
$C_n^T \dot{C}_n = [\omega]_\times = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$
where $[\omega]_\times$ lies in the Lie algebra of $SO(3)$. The Lie algebra is a vector space and can be decomposed as:
$[\omega]_\times = \omega_x \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix} + \omega_y \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} + \omega_z \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
where $\omega = [\omega_x, \omega_y, \omega_z]$ is the vector of angular velocities. For constant $\omega$, we obtained the ODE solution:
$C_n = \exp([\omega]_\times \, n)$
where $\exp(\cdot)$ is the exponential map on $SO(3)$ [42] and $C_0 = \mathbf{I}$. The $\exp(\cdot)$ map was derived from the time derivatives of $\chi_n \in \mathcal{M}$. The vector fields were defined as:
$V^1(C_n) = C_n \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}^{\wedge}, \quad V^2(C_n) = C_n \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}^{\wedge}, \quad V^3(C_n) = C_n \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}^{\wedge}$
where $(\cdot)^{\wedge}$ is the hat map [47], and $V^1$, $V^2$, and $V^3$ are the vector fields.
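The exponential map on SO(3) used above has the closed-form Rodrigues expression, which can be sketched as:

```python
import numpy as np

def so3_exp(omega):
    """exp: so(3) -> SO(3) via Rodrigues' formula for a rotation vector omega."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)          # first-order limit for tiny rotations
    wx, wy, wz = omega / theta
    K = np.array([[0.0, -wz, wy],
                  [wz, 0.0, -wx],
                  [-wy, wx, 0.0]])  # unit-axis skew matrix [omega]_x
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))  # 90-degree rotation about z
```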
Similar to the Gaussian belief of the Kalman filter, the UKF-M algorithm builds a probability distribution $\chi_n \sim N(\hat{\chi}_n, P_n)$ for the random variable $\chi_n \in \mathcal{M}$ as:
$\chi_n = \varphi(\hat{\chi}_n, \xi_n), \qquad \xi_n \sim N(0, P_n)$
where $\hat{\chi}_n$ is viewed as the mean estimate at timestep $n$; $\varphi$ is the propagation function; $\xi_n \in \mathbb{R}^d$ is a random Gaussian vector; $N$ is the Gaussian distribution; and $P_n \in \mathbb{R}^{d \times d}$ is the covariance matrix. $\varphi(\hat{\chi}_n, \xi_n) \in \mathcal{M}$ is obtained by starting from $\hat{\chi}_n$ and integrating the vector field $\sum_{i=1}^{d} \xi_n^{(i)} V^i$ ($d$ is the dimension of the associated vector fields).
Consider that the probability distribution of $\chi_n$ is $p(\chi_n)$. Additional information about $\chi_n$ is obtained from the measurement $y_n$ as:
$y_n = h(\chi_n) + v_n$
where $h$ is the observation function, and $v_n \sim N(0, R_n)$ denotes the white Gaussian noise. The UKF-M module used the gyro measurements and acceleration as inputs to update the random variable $\chi$. The measurements of this standard 3D kinematic model were represented as:
$y_n = \{ \mu_n \in \mathbb{R}^3,\ a_{b_n} \in \mathbb{R}^3 \}$
where $\mu_n = (\omega_{x_n}, \omega_{y_n}, \omega_{z_n})$ represents the gyroscope measurements, and $a_{b_n} = (a_{x_n}, a_{y_n}, a_{z_n})$ denotes the accelerometer measurements. The UKF-M algorithm [34] updated the state and covariance by combining the measurements $y_n$ and system states $\chi_n$.
In PINN UKFM, the output of the PINN module was utilized to filter the noise and minimize the norm errors. The pseudo-states $\hat{u}_\theta$ served as the calibrated IMU measurements for the filtering process and were represented as:
$y_n = \hat{u}_\theta = \{ \mu_n \in \mathbb{R}^3,\ a_{b_n} \in \mathbb{R}^3 \}$
where $y_n = \hat{u}_\theta$ indicates that the PINN module output served as the pseudo-state measurements.
Using the propagation function [34], PINN UKFM built the vehicle model. First, the gyro measurements were inputted to calculate the rotation matrix:
$C_{n+1} = C_n \exp\big( (\mu_n - b_{g_n} + w_n(0{:}3)) \, dt \big)$
where $\exp$ is the exponential map on $SO(3)$ and $dt$ is the integration step. In this vehicle model, $w_n(0{:}12)$ represented the noise, and $w_n(0{:}3)$, $w(3{:}6)$, $w(6{:}9)$, and $w(9{:}12)$ were its subvectors. Next, the measured acceleration was inputted to calculate the calibrated vehicle acceleration:
$a = C_n \big( a_{b_n} - b_{a_n} + w(3{:}6) \big) + g$
where $g = [0, 0, -9.82]$ is the gravity vector in the navigation frame and $a$ represents the calibrated vehicle acceleration. Based on the calibrated vehicle acceleration, the model updated the vehicle velocity as:
$$v_{n+1} = v_n + a\, dt$$

where $v_n = (v_{E_n}, v_{N_n}, v_{U_n})$ denotes the velocity vector. Based on the vehicle velocity vector, the model updated the vehicle position vector as:

$$P_{n+1} = P_n + v_n\, dt + \frac{a\, dt^2}{2}$$

where $P_n = (E_n, N_n, U_n)$ denotes the vehicle coordinates in the navigation frame. Finally, the model updated the gyro and accelerometer biases as:

$$b_{a,n+1} = b_{a,n} + w_n(6{:}9)\, dt, \qquad b_{g,n+1} = b_{g,n} + w_n(9{:}12)\, dt$$

where $b_{g,n}$ represents the gyro bias and $b_{a,n}$ represents the accelerometer bias.
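Taken together, the equations above define one propagation step of the vehicle model. The following NumPy sketch is an illustrative reconstruction, not the published UKF-M code [34]; the function and field names are ours, and the gravity vector is assumed to point along the negative up-axis of the ENU frame:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Exponential map on SO(3) via Rodrigues' formula."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)  # first-order approximation
    axis = phi / angle
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def propagate(state, omega, acc, w, dt, g=np.array([0.0, 0.0, -9.82])):
    """One propagation step: rotation, calibrated acceleration,
    velocity, position, and bias updates, with the 12-dim process
    noise w sliced as in the text."""
    C, v, p = state["C"], state["v"], state["p"]
    b_g, b_a = state["b_g"], state["b_a"]
    C_new = C @ so3_exp((omega - b_g + w[0:3]) * dt)
    a = C @ (acc - b_a + w[3:6]) + g          # calibrated acceleration
    v_new = v + a * dt
    p_new = p + v * dt + a * dt**2 / 2.0
    return {"C": C_new, "v": v_new, "p": p_new,
            "b_a": b_a + w[6:9] * dt, "b_g": b_g + w[9:12] * dt}
```

Calling `propagate` repeatedly with gyro and accelerometer samples integrates the trajectory between filter updates.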
The probability distribution of $\chi$ and the propagation function remained unchanged; therefore, the posterior distribution $p(\chi_n \mid y_n)$ was calculated as $p(\chi_n \mid y'_n)$. The pseudo-states $y'_n$ provided information about the sigma points $\xi_n$. First, the sigma points $\xi_n$ were computed as:

$$\xi^n_j = \begin{cases} \operatorname{col}\!\big(\sqrt{(\lambda + d)\, P_n}\big)_j, & j = 1, \ldots, d \\ -\operatorname{col}\!\big(\sqrt{(\lambda + d)\, P_n}\big)_{j-d}, & j = d+1, \ldots, 2d \end{cases} \qquad \lambda = (\alpha^2 - 1)\, d$$

where $\lambda$ is the scale parameter [48]; $\alpha$ is a free parameter chosen by the practitioner [49] ($\alpha$ must be small); $d$ is the dimension of the associated vector fields; $P_n$ is the covariance matrix at timestep $n$; and $\operatorname{col}(\cdot)_j$ denotes the $j$-th column of the matrix, which defines the $j$-th sigma point. Second, these sigma points were passed through the vehicle model to yield the set of transformed sigma points $y^n_j$:
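Under the weight definitions above, the sigma-point construction can be sketched as follows (a NumPy illustration in which a Cholesky factor stands in for the matrix square root; the names are ours):

```python
import numpy as np

def sigma_points(P, alpha=0.5):
    """Sigma points xi_j in the tangent space: the columns of
    sqrt((lambda + d) P) and their negatives."""
    d = P.shape[0]
    lam = (alpha**2 - 1.0) * d              # lambda = (alpha^2 - 1) d
    S = np.linalg.cholesky((lam + d) * P)   # lam + d = alpha^2 d > 0
    xis = np.hstack([S, -S]).T              # rows: xi_1 .. xi_2d
    w_m = lam / (lam + d)                   # weight of the j = 0 point
    w_j = 1.0 / (2.0 * (lam + d))           # weight of each other point
    return xis, w_m, w_j
```

Note that with a small $\alpha$ the $j = 0$ weight is negative, which is standard for scaled unscented transforms.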
$$y^n_j = \begin{cases} h\big(\varphi(\hat{\chi}_n, 0)\big), & j = 0 \\ h\big(\varphi(\hat{\chi}_n, \xi^n_j)\big), & j = 1, \ldots, 2d \end{cases}$$

where $h(\varphi(\hat{\chi}_n, 0))$ is the noise-free state model, $h$ is the observation function (as in Equations (31)–(35)), and $\hat{\chi}_n$ is the prior mean estimate of the current state. Third, the mean and covariance of the transformed sigma points [30] were computed as:

$$\bar{y}_n = \varpi_m y^n_0 + \sum_{j=1}^{2d} \varpi_j y^n_j, \qquad P_{y_n y_n} = \sum_{j=0}^{2d} \varpi_j (y^n_j - \bar{y}_n)(y^n_j - \bar{y}_n)^{T} + R_n$$
$$P_{\xi_n y_n} = \sum_{j=1}^{2d} \varpi_j\, \xi^n_j (y^n_j - \bar{y}_n)^{T}, \qquad \varpi_m = \frac{\lambda}{\lambda + d}, \quad \varpi_j = \frac{1}{2(\lambda + d)}$$
where $\bar{y}_n$ represents the mean of the transformed sigma points; $P_{y_n y_n}$ is the covariance of the transformed sigma points; $P_{\xi_n y_n}$ is the cross-covariance of the transformed sigma points; $\varpi_m$ and $\varpi_j$ are weights; and $R_n$ is the covariance matrix of the white Gaussian noise. Next, PINN UKFM employed the Kalman update equations to update the state and covariance as:

$$K_n = P_{\xi_n y_n} P_{y_n y_n}^{-1}, \qquad \hat{\chi}^{+}_n = \varphi\big(\hat{\chi}_n,\, K_n (y_n - \bar{y}_n)\big), \qquad P^{+}_n = P_n - K_n P_{y_n y_n} K_n^{T}$$

where $y_n$ is the PINN module output (the pseudo-states), $K_n$ is the gain matrix, $\hat{\chi}^{+}_n$ is the posterior mean state estimate, $P_n$ is the prior covariance estimate of the current state, and $P^{+}_n$ is the posterior covariance estimate.
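The update equations can be sketched as a small NumPy routine. This follows the formulas above literally (including summing the measurement covariance from $j = 0$); the posterior state is then obtained by retracting the tangent-space correction through the propagation function. The names are ours:

```python
import numpy as np

def ukf_update(y, y_sigma, xis, w_m, w_j, P, R):
    """Measurement update: mean, covariances, gain, and correction.
    y_sigma has 2d+1 rows, row 0 being h(phi(chi_hat, 0));
    xis are the 2d tangent-space sigma points."""
    y_bar = w_m * y_sigma[0] + w_j * y_sigma[1:].sum(axis=0)   # mean
    dy = y_sigma - y_bar
    P_yy = w_j * sum(np.outer(e, e) for e in dy) + R           # j = 0..2d
    P_xiy = w_j * sum(np.outer(x, e) for x, e in zip(xis, dy[1:]))
    K = P_xiy @ np.linalg.inv(P_yy)                            # gain
    xi_bar = K @ (y - y_bar)        # tangent-space correction
    P_plus = P - K @ P_yy @ K.T     # posterior covariance
    return xi_bar, P_plus, K
```

The caller then folds the correction back onto the manifold via the retraction, i.e., the posterior state estimate is obtained by applying the propagation function to the prior mean and `xi_bar`.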
The unscented transformation [50] was employed to approximate the posterior $p(\xi_n \mid y_n)$ for $\xi_n$ as:

$$p(\xi_n \mid y_n) \approx N(\bar{\xi}_n, P^{+}_n), \qquad \bar{\xi}_n = K_n (y_n - \bar{y}_n)$$

where $\bar{\xi}_n$ represents the noise-free mean.
The unscented approximation to the posterior $p(\xi_n \mid y_n)$ was thus the distribution of the Gaussian $\bar{\xi}_n + \xi^{+}_n$ with $\xi^{+}_n \sim N(0, P^{+}_n)$ [34]. Then, PINN UKFM approximated the posterior distribution $p(\chi_n \mid y_n)$ as:

$$\chi_n \approx \varphi(\hat{\chi}^{+}_n, \xi^{+}_n), \quad \xi^{+}_n \sim N(0, P^{+}_n), \qquad \hat{\chi}^{+}_n = \varphi(\hat{\chi}_n, \bar{\xi}_n), \qquad \chi_n \approx \varphi\big(\varphi(\hat{\chi}_n, \bar{\xi}_n),\, \xi^{+}_n\big)$$

where $\xi^{+}_n$ represents the posterior noise. The posterior distribution $p(\chi_n \mid y_n)$ thus boiled down to a Bayesian estimation problem [47] that incorporated the information from the PINN module.
In this paper, we focused on describing how PINN UKFM updated the state estimate $\hat{\chi}^{+}_n$ and covariance matrix $P^{+}_n$ when the pseudo-states $y_n$ arrived. Additionally, the UKF-M algorithm could propagate the state without sensor measurements [34], which was also implemented in our program.

4. Experimental Results

4.1. Vehicle Platform

A vehicle platform was built to validate the proposed algorithm. The hardware configuration is shown in Figure 4. The sensors included GNSS, IMU, MSW, WFT, S-Motion, and a camera. DEWE-43A-USB and Dewesoft SIRIUS acquisition systems were used: the DEWE-43A-USB system obtained the S-Motion and MSW signals through high-speed CAN channels, while the Dewesoft SIRIUS system collected the WFT signals through high-speed CAN channels. A single computer was used for all data acquisition.
First, the Mako G-192B monocular camera [51], produced by Allied Vision, captured image data of the environment. Then, the StarNeto XW-GI [52] provided the IMU and GNSS data, using differential methods to measure both position and attitude. $GPFPD was selected as the StarNeto communication protocol to acquire signals for $[E, N, U, \varphi, \Theta, \psi, v_e, v_n, v_z]$. In the third step, the S-Motion system, produced by Kistler, provided signals for $[v_x, a_x, v_y, a_y, a_z, \omega_x, \omega_y, \omega_z]$. In the fourth step, the MSW by Kistler was used to measure the steering wheel angle, speed, and torque. Finally, the WFT by Kistler was used to measure the wheel forces and moments under dynamic conditions for the left and right front wheels. The WFT provided the signals $[F_x, M_x, F_y, M_y, F_z, M_z]$ for the front wheels, with the three WFT axes aligned with the vehicle body coordinates. The hardware implementation in the test vehicle [52,53,54,55,56] is shown in Figure 5.
The test was conducted on cement pavement at Jiangsu University, and the vehicle trajectory is shown in Figure 6. The weather during the test was rainy, which reduced the tire–road friction coefficient and increased the nonlinearity of the vehicle's dynamic relationships. A total of 85% of the data were used to train and validate the PINN module, while the remaining 15% were reserved for testing. The dataset included scenarios such as an overpass (high speed), a school (low speed), a slope, traffic lights, and vehicle turning. The road scene could be changed by altering the dataset splits. In the experiment, the last 15% of the dataset was used to evaluate the performance of PINN UKFM.

4.2. Parameter Settings and Training of All Comparative Methods

PINN: The structure of the PINN was chosen as {22, 32, 32 × 5, 64, 128, 128, 64, 32, 6}. For easier calculation of $v_z$, we subtracted the gravity vector from the acceleration $a_z$ during data processing [34]. The layer {22, 32} was the encoder layer, the layers {32, 32 × 5, 64, 128, 128, 64, 32} were the temporal interaction layers, and the layer {32, 6} was the decoder layer. The prediction states of the PINN module were the IMU calibration values.
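The ODE constraint in the PINN loss can be illustrated with a toy residual that penalizes disagreement between a predicted acceleration and the finite-difference derivative of a measured velocity ($\dot{v} = a$). This is only a NumPy sketch of the loss idea, not the trained PyTorch network:

```python
import numpy as np

def ode_residual_loss(a_pred, v_meas, dt):
    """Mean squared residual of the ODE dv/dt = a, with dv/dt
    approximated by finite differences of the measured velocity."""
    dv_dt = np.diff(v_meas) / dt
    return np.mean((a_pred[:-1] - dv_dt) ** 2)
```

In the actual network, a term of this form is added to the data loss so that the predicted accelerations stay consistent with the integrated velocity states.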
subPINN: To address the influence of backpropagating the joint loss during training, a single-output variant of the PINN was implemented. The vehicle dynamics/kinematic models were used to build the data-driven model for $[a_x, a_y, \omega_z]$. This subPINN had the same structure as the PINN but a single output. Specifically, for training $a_z$, $\omega_x$, and $\omega_y$, the structure {1, 2, 32, 32, 64} was used to reduce the computational complexity and improve accuracy. The inputs for $a_z$, $\omega_x$, and $\omega_y$ were $[a_z, v_z]$, $[\omega_x, \varphi]$, and $[\omega_y, \Theta]$, respectively. Unlike the UTM coordinates, the attitude angles varied over a small range, so they were input directly.
MLP [24,25,57]: The multilayer perceptron (MLP) is a commonly used approach for building data-driven vehicle dynamics models. The structure of the MLP used in this study was inspired by [24] and was chosen as {22, 32, 32 × 5, 64, 128, 128, 64, 32, 6}. The ReLU activation function was applied between each layer. Different from the approach used in [24], we used the layers {32, 32 × 5} as the temporal interaction layers, which is a common approach in data-driven dynamics modeling [25].
LSTM [24,57,58]: Long Short-Term Memory (LSTM) is another classic method used to build data-driven dynamics models, which is employed for predicting the acceleration and angular velocity of vehicles [57,58]. LSTM is known for its ability to capture temporal dependencies in the data. In this study, LSTM was implemented with a general transformation structure as {22, 32, 128, 32, 6}, where {22, 32} and {32, 6} represented the MLP layers. The exponential linear unit (ELU) [35] function was applied after the first layer, and {32, 32, 128} represented the two LSTM layers.
Physical models [23,45]: The yaw rate formulas based on longitudinal/lateral dynamics and kinematic models were used in this paper. The longitudinal dynamics parameters were calculated using the least squares method [23]. The lateral dynamics parameters were computed based on the actual vehicle parameters ( l f , l r , m ) and the relationships between states ( C = F y a y ). We assumed the yaw rate was an LTI state. Therefore, a kinematic model [45] was used to calculate the yaw rate.
The wheel forces $[F_x, F_y]$ measured by the WFT could be utilized to calculate the acceleration through algebraic equations. Additionally, we used the linearly constrained least squares method to combine these dynamics as:
$$m a_{x,n} = K_1 M_{yl,n} + K_2 M_{yr,n} + K_3\, m g - K_4 v_{x,n}^2 + K_5 F_{xfl,n} + K_6 F_{xfr,n} + K_7$$
$$m a_{y,n} = K_8 \left( \frac{-4C v_{y,n} - 2C (l_f - l_r)\,\omega_{z,n}}{v_{x,n}\, m} - v_{x,n}\,\omega_{z,n} + \frac{2C \delta_n}{m} \right) + K_9 F_{yfl,n} + K_{10} F_{yfr,n} + K_{11}$$
where $K_1$–$K_6$ and $K_8$–$K_{10}$ represent the fitted parameters of the least squares method, and $K_7$ and $K_{11}$ represent the intercept coefficients of the least squares method.
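The $K$ coefficients can be identified by stacking the regressors of the longitudinal equation into a design matrix and solving a least squares problem. Below is a plain (unconstrained) `numpy.linalg.lstsq` sketch; the paper uses the linearly constrained variant, and any sign conventions are absorbed into the fitted coefficients:

```python
import numpy as np

def fit_longitudinal(ax, M_yl, M_yr, v_x, F_xfl, F_xfr, m, g=9.81):
    """Fit K1..K7 of the longitudinal equation by least squares.
    Columns follow the regressors above; the constant column yields K7."""
    A = np.column_stack([M_yl, M_yr, np.full_like(ax, m * g),
                         v_x**2, F_xfl, F_xfr, np.ones_like(ax)])
    K, *_ = np.linalg.lstsq(A, m * ax, rcond=None)
    return K
```

Note that the constant `m*g` column and the intercept column are collinear, so the individual values of $K_3$ and $K_7$ are not unique, but the fitted acceleration is.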
The resulting states of the physical models are illustrated in Figure 7. In the experiment, we not only compared the predicted states with the sensor signals but also compared the integrated states $[v_x, v_y, v_z, \varphi, \Theta, \psi]$ with the integrated sensor signals, which could be expressed as:
$$Z_t = Z_0 + \sum_{i=0}^{t} u_i\, dt$$
where $Z_0$ is the initial state, $t$ is the timestamp, and $u_i = [a_x, a_y, a_z, \omega_x, \omega_y, \omega_z]$ are the states at timestamp $i$.
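This running-sum integration reduces to a cumulative sum; a NumPy sketch (names are ours):

```python
import numpy as np

def integrate_states(u, z0, dt):
    """Integrate [ax, ay, az, wx, wy, wz] samples into
    [vx, vy, vz, phi, theta, psi] via Z_t = Z_0 + sum_i u_i * dt."""
    return z0 + np.cumsum(u, axis=0) * dt
```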
As shown in Figure 7, “Algebraic equation” represents the calculation results of Equations (16) and (18), and “Linearly constrained least square” represents the calculation results of Equation (42). By combining the forces and dynamics, the output of this equation fitted the IMU measurements better. This method could fuse multiple sensors to improve the accuracy of the dynamics when the IMU measurements were accurate. However, due to the IMU drift, the integration error of this model was relatively large. “Dynamics” represents the longitudinal/lateral dynamics model based on Equations (15) and (17), which is commonly used for vehicle control. In this paper, “Linearly constrained least square” and “Dynamics” were established from the IMU measurement data. Therefore, the results of these models were close to the IMU measurements but also had large integration errors.
To maintain visual clarity, we only used the vehicle dynamics (“Dynamics”) and kinematic models for comparison. The MLP and LSTM models were also implemented in the PyTorch deep learning framework. We employed the Adam optimizer with a learning rate of 0.001 to train the networks on an Nvidia RTX 3080 Ti GPU. All models, including PINN, subPINN, MLP, and LSTM, were trained on the same training dataset. However, the loss calculation differed: unlike the loss established for the PINN, MLP and LSTM used the IMU measurements to build data-driven vehicle models [25,57,58]. These methods commonly assume that the ground-truth states of the vehicle are accessible. In this paper, we used the highly accurate S-Motion data as a surrogate for the actual vehicle state when training MLP and LSTM.

4.3. Validation of the PINN Module

In this subsection, we compare the PINN module with the MLP and LSTM models, the vehicle dynamics/kinematic models, and the vehicle measurements. These data-driven models were trained using the dataset described in Section 4.1. The root mean square error (RMSE) [24,59] was used to evaluate the performance of the models, represented as:
$$\mathrm{RMSE} = \sqrt{\frac{1}{S} \sum_{i=1}^{S} (\varkappa_i - \hat{\varkappa}_i)^2}$$
where $S = 1800$ is the number of predicted steps, $\varkappa_i = [u_i, Z_i]$ represents the $i$-th value of the predicted results, and $\hat{\varkappa}_i$ represents the $i$-th value of the reference results (sensor measurements).
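For reference, the RMSE computation is a one-liner:

```python
import numpy as np

def rmse(pred, ref):
    """Root mean square error over the predicted steps."""
    pred, ref = np.asarray(pred, dtype=float), np.asarray(ref, dtype=float)
    return np.sqrt(np.mean((pred - ref) ** 2))
```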
Table 2 presents the RMSEs of the predicted states using the different methods. The results demonstrate that, except for the longitudinal/lateral acceleration, the PINN-based approaches (PINN and subPINN) estimated the vehicle state more accurately. The inferior prediction performance on longitudinal/lateral acceleration was attributed to sensor drift. By integrating the acceleration to obtain velocity, it was shown that the PINN-based methods could effectively exploit the information from other sensors and realize IMU calibration.
As shown in Figure 8, the estimation results for $[a_x, v_x, a_y, v_y]$ were obtained. “Sensor $a_x$ integration” denotes the ODE states inferred from the original signals using Equation (43) and is shown in green. The PINN model incorporated both the longitudinal/lateral vehicle dynamics and the LTI-based vehicle displacement, which yielded higher-precision results than the subPINN model. Compared with the MLP and LSTM models, the PINN model had better prediction accuracy for the integrated states because it could incorporate more dynamics knowledge into the data-driven model.
The estimation results for the states $[a_z, v_z, \omega_x, \varphi, \omega_y, \Theta, \omega_z, \psi]$ are shown in Figure 9. The experiments on $[a_z, \omega_x, \omega_y]$ aimed to reduce the noise in the acceleration and angular velocity by combining the PINN module with the linear ODEs. As the noise impact on the S-Motion sensor increased over time, the subPINN model demonstrated superior accuracy in estimating these state variables while preserving the sensor data characteristics and reducing noise. Both the PINN and subPINN models used the vehicle kinematic model when modeling $\omega_x$. The subPINN model converged more readily in yaw rate prediction than the other models. Compared to the MLP and LSTM models, the subPINN model had better prediction accuracy in each state.
The subPINN model estimated a single vehicle state, which simplified the training process and made convergence easier. In contrast, the training process of the PINN model was more complex, and it was difficult to achieve optimal results. The subPINN model therefore offered advantages in training efficiency and accuracy, making it attractive for engineering implementations. However, unlike the PINN model, the subPINN model could not establish connections between multiple states to achieve more comprehensive modeling. Therefore, the PINN model performed better in terms of longitudinal/lateral dynamics accuracy.

4.4. Validation of PINN UKFM

The experimental results presented above demonstrated the advantages of the PINN module, but some issues in the comparison still remained. The vehicle states were not connected through a vehicle-based model during the comparison, which failed to reflect the impact of the estimation states on the vehicle dynamics. Compared to other vehicle states, the cm-level GNSS positioning could better reflect the effectiveness of the presented estimation states.
In this subsection, the UKF-M module driven by the output of the PINN module is demonstrated. This module reduced the state noise through the Gaussian noise hypothesis. Figure 10 shows the trajectories of PINN UKFM and the sensor inputs. The vectors $[E, N, h]$ were input at 20 Hz. At this rate, the covariance matrices were iterated quickly, which masked the performance improvement of PINN UKFM.
Therefore, the frequency of the trajectory coordinates was reduced to 1 Hz through downsampling. As illustrated in Figure 11, the results obtained using PINN UKFM were superior to those obtained using the conventional UKF-M algorithm. Without coordinate input, the vehicle coordinates were updated in the vehicle model using the UKF-M module. The PINN module could reduce the influence of vehicle sensor drift and provided a reliable state estimation result that was closer to the true state of the vehicle. Therefore, the updated vehicle position of PINN UKFM was also closer to the true vehicle state.
Additionally, the UKF-M module could estimate the vehicle velocity $[v_e, v_n, v_u]$, even though $[v_e, v_n, v_u]$ were not input into the PINN module. Next, the modeling results were compared to the measured states to verify the correctness of PINN UKFM. As shown in Figure 12, compared to the UKF-M algorithm, PINN UKFM better estimated $v_u$.
Considering that the cm-level GNSS positioning has a great impact on the covariance matrix, the velocity at the 1 Hz GNSS position was also compared, as shown in Figure 13. The PINN UKFM results were significantly better than the UKF-M results.
Evidently, PINN UKFM exhibited remarkable performance in vehicle state estimation, although there was still potential for further improvement. A possible enhancement lies in leveraging the signals from the CAN bus to replace the WFT and MSW sensors. The CAN bus grants access to a wealth of vehicle sensor data, including valuable information such as the steering angle, throttle opening, and brake pressure. By incorporating these signals as inputs to the model, a dynamic vehicle model could be constructed, enabling more accurate state estimation. Additionally, the S-Motion sensor can also be replaced by a cheaper IMU. Using more universal CAN bus signals and IMUs, the model could be implemented on more vehicle platforms without being limited by specific sensors.

5. Conclusions

In this study, a novel method for vehicle state estimation combining PINN and UKF-M has been proposed. The sensor module collects various sensor signals and feeds them into the PINN module. The PINN module automatically calibrates the IMU by utilizing the ODE relationships that exist between multiple sensors. The PINN-based model mitigates sensor errors and facilitates sensor fusion, leveraging the LTI hypothesis for loss calculation in order to reduce noise and bias in the original sensor data. It exhibits clear advantages in multi-sensor fusion compared to existing data-driven vehicle dynamics models. The resulting pseudo-states are then used as inputs to the UKF-M module to model the vehicle trajectory. The experimental results confirmed its effectiveness, with the PINN module successfully eliminating IMU drift and the PINN UKFM trajectory exhibiting less deviation from the cm-level GNSS positioning than the UKF-M trajectory. In the future, we aim to explore more cost-effective solutions for implementing the PINN UKFM method, and we intend to apply PINN UKFM to OCCS control.

Author Contributions

Conceptualization, C.T., Y.C. and X.S.; Methodology, C.T.; Software, C.T.; Resources, Y.C.; Writing—original draft, C.T.; Writing—review & editing, Y.C.; Supervision, Y.C., H.W. and L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Key Research and Development Program of China (2022YFB2503302), National Natural Science Foundation of China (52225212, U20A20333, U20A20331, 52072160, 52272418, U22A20100), Key Project for the Development of Strategic Emerging Industries of Jiangsu Province (BE2020083-3, BE2020083-2).

Data Availability Statement

The dataset from our experiments and the model can be downloaded from Google Drive: https://drive.google.com/drive/folders/1iKkIuy6L8LUOVyypgxCLsdL_lAH1mCjG (accessed on 1 June 2023). Moreover, we have developed a Jupyter notebook (script/PINN_UKFM_TEST.ipynb) that enables online access to all experimental results.

Acknowledgments

We would like to express our gratitude to Yubo Lian and Yilin Zhong from BYD Company for their valuable assistance in the installation of sensors on the autonomous vehicle. Their support has greatly contributed to our research project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guerrero-Ibáñez, J.; Zeadally, S.; Contreras-Castillo, J. Sensor Technologies for Intelligent Transportation Systems. Sensors 2018, 18, 1212. [Google Scholar] [CrossRef] [Green Version]
  2. Kissai, M.; Monsuez, B.; Tapus, A. Review of integrated vehicle dynamics control architectures. In Proceedings of the 2017 European Conference on Mobile Robots (ECMR), Paris, France, 6–8 September 2017; pp. 1–8. [Google Scholar] [CrossRef]
  3. Kissai, M. Optimal Coordination of Chassis Systems for Vehicle Motion Control. Ph.D. Thesis, Université Paris-Saclay (ComUE), Orsay, France, 2019. [Google Scholar]
  4. Zhang, L.; Zhang, Z.; Wang, Z.; Deng, J.; Dorrell, D. Chassis coordinated control for full X-by-wire vehicles-A review. Chin. J. Mech. Eng. 2021, 34, 42. [Google Scholar] [CrossRef]
  5. Xia, X.; Xiong, L.; Lu, Y.; Gao, L.; Yu, Z. Vehicle sideslip angle estimation by fusing inertial measurement unit and global navigation satellite system with heading alignment. Mech. Syst. Signal Process. 2021, 150, 107290. [Google Scholar] [CrossRef]
  6. Melzi, S.; Sabbioni, E. On the vehicle sideslip angle estimation through neural networks: Numerical and experimental results. Mech. Syst. Signal Process. 2011, 25, 2005–2019. [Google Scholar] [CrossRef]
  7. Yin, Y.; Zhang, J.; Guo, M.; Ning, X.; Wang, Y.; Lu, J. Sensor Fusion of GNSS and IMU Data for Robust Localization via Smoothed Error State Kalman Filter. Sensors 2023, 23, 3676. [Google Scholar] [CrossRef]
  8. Laftchiev, E.I.; Lagoa, C.M.; Brennan, S.N. Vehicle localization using in-vehicle pitch data and dynamical models. IEEE Trans. Intell. Transp. Syst. 2014, 16, 206–220. [Google Scholar] [CrossRef]
  9. Xia, X.; Xiong, L.; Huang, Y.; Lu, Y.; Gao, L.; Xu, N. Estimation on IMU yaw misalignment by fusing information of automotive onboard sensors. Mech. Syst. Signal Process. 2022, 162, 107993. [Google Scholar] [CrossRef]
  10. Song, R.; Fang, Y. Vehicle state estimation for INS/GPS aided by sensors fusion and SCKF-based algorithm. Mech. Syst. Signal Process. 2021, 150, 107315. [Google Scholar] [CrossRef]
  11. Wang, W.; Liu, Z.; Xie, R. Quadratic extended Kalman filter approach for GPS/INS integration. Aerosp. Sci. Technol. 2006, 10, 709–713. [Google Scholar] [CrossRef]
  12. Zhang, W.; Wang, Z.; Zou, C.; Drugge, L.; Nybacka, M. Advanced vehicle state monitoring: Evaluating moving horizon estimators and unscented Kalman filter. IEEE Trans. Veh. Technol. 2019, 68, 5430–5442. [Google Scholar] [CrossRef]
  13. Hauberg, S.; Lauze, F.; Pedersen, K.S. Unscented Kalman filtering on Riemannian manifolds. J. Math. Imaging Vis. 2013, 46, 103–120. [Google Scholar] [CrossRef]
  14. Du, S.; Huang, Y.; Lin, B.; Qian, J.; Zhang, Y. A lie group manifold-based nonlinear estimation algorithm and its application to low-accuracy SINS/GNSS integrated navigation. IEEE Trans. Instrum. Meas. 2022, 71, 1–27. [Google Scholar] [CrossRef]
  15. Chen, T.; Cai, Y.; Chen, L.; Xu, X.; Jiang, H.; Sun, X. Design of vehicle running states-fused estimation strategy using Kalman filters and tire force compensation method. IEEE Access 2019, 7, 87273–87287. [Google Scholar] [CrossRef]
  16. Park, G.; Choi, S.B.; Hyun, D.; Lee, J. Integrated observer approach using in-vehicle sensors and GPS for vehicle state estimation. Mechatronics 2018, 50, 134–147. [Google Scholar] [CrossRef]
  17. Šabanovič, E.; Kojis, P.; Šukevičius, Š.; Shyrokau, B.; Ivanov, V.; Dhaens, M.; Skrickij, V. Feasibility of a Neural Network-Based Virtual Sensor for Vehicle Unsprung Mass Relative Velocity Estimation. Sensors 2021, 21, 7139. [Google Scholar] [CrossRef]
  18. Kim, D.; Min, K.; Kim, H.; Huh, K. Vehicle sideslip angle estimation using deep ensemble-based adaptive Kalman filter. Mech. Syst. Signal Process. 2020, 144, 106862. [Google Scholar] [CrossRef]
  19. Kim, D.; Kim, G.; Choi, S.; Huh, K. An integrated deep ensemble-unscented Kalman filter for sideslip angle estimation with sensor filtering network. IEEE Access 2021, 9, 149681–149689. [Google Scholar] [CrossRef]
  20. Vargas-Meléndez, L.; Boada, B.L.; Boada, M.J.L.; Gauchía, A.; Díaz, V. A Sensor Fusion Method Based on an Integrated Neural Network and Kalman Filter for Vehicle Roll Angle Estimation. Sensors 2016, 16, 1400. [Google Scholar] [CrossRef] [Green Version]
  21. Soriano, M.A.; Khan, F.; Ahmad, R. Two-axis accelerometer calibration and nonlinear correction using neural networks: Design, optimization, and experimental evaluation. IEEE Trans. Instrum. Meas. 2020, 69, 6787–6794. [Google Scholar] [CrossRef]
  22. Boada, B.L.; Boada, M.J.L.; Diaz, V. Vehicle sideslip angle measurement based on sensor data fusion using an integrated ANFIS and an Unscented Kalman Filter algorithm. Mech. Syst. Signal Process. 2016, 72, 832–845. [Google Scholar] [CrossRef] [Green Version]
  23. Vicente, B.A.H.; James, S.S.; Anderson, S.R. Linear system identification versus physical modeling of lateral–longitudinal vehicle dynamics. IEEE Trans. Control. Syst. Technol. 2020, 29, 1380–1387. [Google Scholar] [CrossRef]
  24. Xiao, Y.; Zhang, X.; Xu, X.; Liu, X.; Liu, J. Deep neural networks with Koopman operators for modeling and control of autonomous vehicles. IEEE Trans. Intell. Veh. 2023, 8, 135–146. [Google Scholar] [CrossRef]
  25. Spielberg, N.A.; Brown, M.; Kapania, N.R.; Kegelman, J.C.; Gerdes, J.C. Neural network vehicle models for high-performance automated driving. Sci. Robot. 2019, 4, eaaw1975. [Google Scholar] [CrossRef] [PubMed]
  26. Xiao, Z.; Xiao, D.; Havyarimana, V.; Jiang, H.; Liu, D.; Wang, D.; Zeng, F. Toward accurate vehicle state estimation under non-Gaussian noises. IEEE Internet Things J. 2019, 6, 10652–10664. [Google Scholar] [CrossRef]
  27. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  28. Xu, P.-F.; Han, C.-B.; Cheng, H.-X.; Cheng, C.; Ge, T. A Physics-Informed Neural Network for the Prediction of Unmanned Surface Vehicle Dynamics. J. Mar. Sci. Eng. 2022, 10, 148. [Google Scholar] [CrossRef]
  29. Franklin, T.S.; Souza, L.S.; Fontes, R.M.; Martins, M.A. A Physics-Informed Neural Networks (PINN) oriented approach to flowmetering in oil wells: An ESP lifted oil well system as a case study. Digit. Chem. Eng. 2022, 5, 100056. [Google Scholar] [CrossRef]
  30. Wong, J.C.; Chiu, P.H.; Ooi, C.C.; Da, M.H. Robustness of Physics-Informed Neural Networks to Noise in Sensor Data. arXiv 2022, arXiv:2211.12042. [Google Scholar]
  31. Arnold, F.; King, R. State–space modeling for control based on physics-informed neural networks. Eng. Appl. Artif. Intell. 2021, 101, 104195. [Google Scholar] [CrossRef]
  32. Alatise, M.B.; Hancke, G.P. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter. Sensors 2017, 17, 2164. [Google Scholar] [CrossRef] [Green Version]
  33. Gräber, T.; Lupberger, S.; Unterreiner, M.; Schramm, D. A hybrid approach to side-slip angle estimation with recurrent neural networks and kinematic vehicle models. IEEE Trans. Intell. Veh. 2018, 4, 39–47. [Google Scholar] [CrossRef]
  34. Brossard, M.; Barrau, A.; Bonnabel, S. A code for unscented Kalman filtering on manifolds (UKF-M). In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 5701–5708. [Google Scholar]
  35. Zhang, W.; Al Kobaisi, M. On the Monotonicity and Positivity of Physics-Informed Neural Networks for Highly Anisotropic Diffusion Equations. Energies 2022, 15, 6823. [Google Scholar] [CrossRef]
  36. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
  37. Rai, R.; Sahu, C.K. Driven by data or derived through physics? a review of hybrid physics guided machine learning techniques with cyber-physical system (cps) focus. IEEE Access 2020, 8, 71050–71073. [Google Scholar] [CrossRef]
  38. Fioretto, F.; Van Hentenryck, P.; Mak, T.W.; Tran, C.; Baldo, F.; Lombardi, M. Lagrangian duality for constrained deep learning. In Proceedings of the Applied Data Science and Demo Track: European Conference, ECML PKDD 2020, Part V, Ghent, Belgium, 14–18 September 2020; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 118–135. [Google Scholar]
  39. Bajaj, C.; McLennan, L.; Andeen, T.; Roy, A. Recipes for when Physics Fails: Recovering Robust Learning of Physics Informed Neural Networks. Mach. Learn. Sci. Technol. 2023, 4, 015013. [Google Scholar] [CrossRef]
  40. Liu, D.; Wang, Y. A Dual-Dimer method for training physics-constrained neural networks with minimax architecture. Neural Netw. 2021, 136, 112–125. [Google Scholar] [CrossRef]
  41. Chen, T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D.K. Neural ordinary differential equations. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Munich, Germany, 8–14 September 2018; pp. 6571–6583. [Google Scholar]
  42. Yuan, L.; Ni, Y.Q.; Deng, X.Y.; Hao, S. A-PINN: Auxiliary physics informed neural networks for forward and inverse problems of nonlinear integro-differential equations. J. Comput. Phys. 2022, 462, 111260. [Google Scholar] [CrossRef]
  43. Cuomo, S.; Di Cola, V.S.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific machine learning through physics–informed neural networks: Where we are and what’s next. J. Sci. Comput. 2022, 92, 88. [Google Scholar] [CrossRef]
  44. Deo, N.; Trivedi, M.M. Convolutional social pooling for vehicle trajectory prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1468–1476. [Google Scholar]
Figure 1. The 6-DoF vehicle model coordinates.
Figure 2. The structure of the proposed VSE.
Figure 3. The proposed PINN module.
Figure 4. Hardware configuration.
Figure 5. The test vehicle.
Figure 6. The trajectory of the experiment.
Figure 7. The results of the physical models.
Figure 8. The prediction results of states [a_x, v_x, a_y, v_y] and sensor measurements.
Figure 9. The prediction results of states [a_z, v_z, ω_x, φ, ω_y, Θ, ω_z, ψ] and sensor measurements.
Figure 10. Compared trajectories at 20 Hz GNSS position.
Figure 11. Compared trajectories at 1 Hz GNSS position.
Figure 12. Compared 3D velocities at 20 Hz GNSS position.
Figure 13. Compared 3D velocities at 1 Hz GNSS position.
Table 1. The sensor inputs of the PINN UKF-M.

Sensor Type | Signal Name               | Symbol          | Units
GNSS        | Easting                   | E               | m
GNSS        | Northing                  | N               | m
GNSS        | Altitude                  | U               | m
GNSS        | UTM velocity              | V_gps           | m/s
IMU         | Roll angle                | φ               | rad
IMU         | Pitch angle               | Θ               | rad
IMU         | Yaw angle                 | ψ               | rad
IMU         | Vertical velocity         | v_z             | m/s
S-Motion    | Longitudinal velocity     | v_x             | m/s
S-Motion    | Longitudinal acceleration | a_x             | m/s²
S-Motion    | Lateral velocity          | v_y             | m/s
S-Motion    | Lateral acceleration      | a_y             | m/s²
S-Motion    | Vertical acceleration     | a_z             | m/s²
S-Motion    | Roll rate                 | ω_x             | rad/s
S-Motion    | Pitch rate                | ω_y             | rad/s
S-Motion    | Yaw rate                  | ω_z             | rad/s
WFT         | Wheel force               | F_x, F_y, F_z   | kN
WFT         | Wheel torque              | M_x, M_y, M_z   | kN·m
MSW         | Steering wheel angle      | δ_s             | rad
Table 2. The RMSEs of the predicted states using different methods.

Model   | a_x     | a_y     | a_z    | ω_x    | ω_y    | ω_z
MLP     | 0.1117  | 0.1524  | 0.2729 | 0.0164 | 0.0171 | 0.0174
LSTM    | 0.1167  | 0.1444  | 0.2626 | 0.0171 | 0.0175 | 0.0167
PINN    | 0.2733  | 0.3904  | 0.2806 | 0.0110 | 0.0084 | 0.0058
subPINN | 0.2301  | 0.3606  | 0.2167 | 0.0076 | 0.0049 | 0.0036

Model   | v_x     | v_y     | v_z    | φ      | Θ      | ψ
MLP     | 20.2365 | 25.2757 | 2.7479 | 0.1027 | 0.1865 | 0.3334
LSTM    | 19.6955 | 24.8413 | 0.5609 | 0.0613 | 0.1391 | 0.3856
PINN    | 1.3007  | 4.6707  | 0.5609 | 0.0806 | 0.0429 | 0.0475
subPINN | 1.8171  | 7.3559  | 0.4169 | 0.0312 | 0.0113 | 0.0217
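The RMSE figures in Table 2 follow the standard root-mean-square-error definition over each predicted state trace. A minimal sketch of how such values are computed (generic NumPy; not the authors' evaluation code, and the toy arrays are illustrative only):

```python
import numpy as np

def rmse(pred, truth):
    """Root-mean-square error between a predicted state trace and its reference."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Toy example: one error of 1.0 over three samples.
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))  # ≈ 0.5774
```

In the paper each table entry would be one such scalar, computed per state channel (e.g., a_x or v_x) against the ground-truth sensor reference.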
Tan, C.; Cai, Y.; Wang, H.; Sun, X.; Chen, L. Vehicle State Estimation Combining Physics-Informed Neural Network and Unscented Kalman Filtering on Manifolds. Sensors 2023, 23, 6665. https://doi.org/10.3390/s23156665