Autonomous Vehicle State Estimation and Mapping Using Takagi–Sugeno Modeling Approach

This paper proposes an optimal approach to state estimation based on the Takagi–Sugeno (TS) Kalman filter, using measurement sensors and a rough pose obtained from LIDAR scan end-point matching. To obtain a stable and optimal TS Kalman gain for the estimator design, a set of linear matrix inequalities (LMIs), constructed from Lyapunov stability criteria and the dual linear quadratic regulator (LQR), is optimized. The technique utilizes a TS representation of the system, which allows modeling the complex nonlinear dynamics in such a way that linearization is not required for the estimator or controller design. In addition, the TS fuzzy representation is exploited to obtain a real-time Kalman gain, avoiding the expensive optimization of the LMIs at every time step. The estimation scheme is integrated with a nonlinear model-predictive controller (NMPC) that is in charge of controlling the vehicle. The approach is demonstrated in simulation and validated in practice on a small-scale autonomous car.


Introduction
To address traffic accidents and congestion, autonomous vehicles provide a promising solution [1]. Research on this technology has progressed immensely, from perception algorithms to vehicle control algorithms. The perception stack exploits the sensor measurements to provide the vehicle state as well as environmental information. Correcting these measurements is crucial to the safety of the vehicle when navigating within the environment. The planner and the controller, in turn, use this information to plan the motion and achieve the desired trajectory while maintaining objectives such as speed, time to reach the destination, and comfort. For these reasons, filtering the measurements and estimating unmeasured states are paramount (e.g., the lateral velocity, which is important for lateral control, often cannot be measured directly). Advanced controllers, such as NMPC, require full state observation, which can be achieved by placing a state estimator in the loop for full-body control.
The state estimation of an autonomous vehicle involves an algorithm that fuses the raw information provided by the sensors, which is affected by noise, with the system model, which is affected by uncertainty represented as a disturbance. The estimation algorithm considers the measurement model, vehicle model, sensor uncertainty, and model perturbation to estimate the correct states. The standard approach is based on some form of Kalman filter, originally developed by R. E. Kalman [2]. A Kalman filter provides an optimal estimate in the least-squares sense, assuming that the model perfectly matches the real system, that the noises/disturbances are Gaussian, and that their covariances are known. The first assumption is not always true, especially in nonlinear systems with complex dynamics. A variant for nonlinear systems, the extended Kalman filter (EKF), provides a solution by linearizing the system and measurement models around the current states. The linearization is based on a first-order Taylor series expansion, discarding higher-order terms. The linearization process does not preserve the Gaussian distributions of the states and the measurements, which means that optimality is no longer guaranteed. For systems with a complex nonlinear model, such as a vehicle, the assumption that a linear model obtained by linearization is valid over a small operating range does not hold. For all these reasons, new methods for extending Kalman filters to nonlinear systems are required.
The TS framework offers a systematic approach to obtain a multi-model system from the original mathematical model of the system, see, e.g., for more detail, [3]. The main feature of this approach is that it allows representing the model of the nonlinear system as a set of linear system models and the overall TS model of the system is achieved by the polytopic combination of these linear system models. This allows extending the LMI design procedures for control and estimation to the nonlinear systems represented in polytopic TS form. Moreover, there exists a systematic procedure for generating the TS model for the nonlinear model and approximating it in a polytopic way [4].
In this paper, the state estimation of an autonomous vehicle is developed using an approach alternative to the EKF. Moreover, this approach is combined with a LIDAR scan matching algorithm that provides a rough pose estimate. For LIDAR scan matching, a real-time correlative scan matching algorithm is exploited instead of the iterative closest point (ICP) algorithm [5], which suffers from an expensive search for point correspondences in every iteration. As LIDAR scan matching is not within the scope of this paper, more details can be found in [6,7]. Methods such as Gmapping [8], which generally require a large number of particles to obtain good results, lead to higher computational complexity than our approach. In comparison with methods such as LOAM [9], our method fuses the LIDAR and other sensor information with an accurate vehicle model prediction for precise localization. The proposed method can be extended to systems with any number of degrees of freedom (DoF) if the model is known, and additional sensors can be integrated. In our approach, dynamic states, such as longitudinal velocity and angular velocity, are measured directly from the sensors. Then, the rough pose obtained from the LIDAR scan matching and the direct sensor measurements are corrected using the TS Kalman filter. A similar work [10] improved the approximation error by utilizing the Takagi–Sugeno model; however, that approach considers a limited and known number of landmarks and does not ensure optimality. Another work [11] utilized a linear parameter varying (LPV) system, which is analogous to TS and has shown good performance in simulation across different applications.
To summarize, the contributions of our work are as follows:
• A TS Kalman filter is developed which does not need linearization at each step, as in the case of the EKF. In addition, the estimator provides a stable and optimal solution.
• The design of the TS Kalman filter is solved offline using LMIs, while online (in real time) only the interpolation of the TS subsystem gains is required.
• A case study of a small-scale autonomous vehicle is presented to obtain the full state by correcting the raw measurements obtained from sensors and LIDAR scan end-point matching.
• The whole framework, including NMPC, is validated on a small computation board (Odroid XU4). The low computational requirements of the framework, especially the SLAM, make it easy to deploy on small robots.
The structure of the paper is the following: Section 2 describes the proposed approach and presents the autonomous vehicle considered as a case study. Section 3 presents the scan end-point matching-based SLAM algorithm. Section 4 presents the proposed TS Kalman filter for the state estimation. Section 5 presents the simulation and experimental results using the considered case study. Finally, Section 6 draws the main conclusions of the paper and proposes future research paths. Figure 1 presents the overall architecture and the integration of the different modules of the autonomous system. The proposed pipeline also includes an NMPC controller. Descriptions of the state x = [v_x, v_y, ω, X, Y, θ]^T and control u = [D, δ]^T symbols are listed in Table 1. The NMPC controller uses the error model of the system, so the estimated states [X̂, Ŷ, θ̂]^T are converted to the error states [θ_e, ŝ, ŷ_e]^T using the offline planner; details can be found in [12]. For the state measurements, an inertial measurement unit (IMU), a motor encoder, and the scan end-point matching module are utilized. The mapping module is dedicated to mapping the environment using the scanned points and the estimated pose of the vehicle. The previous map (occupancy grid) is queried by the scan-matching module to roughly estimate the relative position of the current scan. Finally, the estimator module is responsible for correcting the uncertainty in the measurements and scan matching, as well as estimating the hidden state v_y.

Table 1. Symbol descriptions (e.g., v_x: longitudinal velocity of the vehicle in the center of gravity (CoG) frame (C); see Figure 2).

To generalize the SLAM algorithm, i.e., to make it independent of the number of landmarks and of the environment, the rough pose estimation is based on LIDAR scan end-point matching exploiting a Gauss-Newton optimization process. This matching approach allows estimating the pose without any prior information about the environment or landmarks. The scan end-point matching is similar to the work carried out in [6] and is presented in Section 3. In this development, the LIDAR's scan end-points are exploited for observing the environment. Alternatively, a camera can also be used for rough localization using either a feature matching technique [13] or a deep learning technique [14]. Then, to correct the rough pose from the scan matching and the acquired sensor measurements, a TS modeling approach is considered to design an optimal and stable Kalman estimator. The whole design procedure consists of the following steps:

1. Derive a TS model from the nonlinear model which embeds the nonlinear terms inside the system matrices.
2. Obtain the polytopic systems and derive the fuzzy representation of the TS model [4].
3. Formulate LMIs for Lyapunov stability and optimal dual LQR for all the obtained polytopes (system vertices) of the fuzzy model to obtain the offline Kalman gains.
4. Exploit the fuzzy interpolation technique to find the online gain for the TS Kalman filter.
5. Apply the online TS Kalman gain for the state estimation.
Steps 3 to 5 are presented thoroughly in Section 4 of this paper, while the remaining steps are detailed in this section.

The vehicle schematic (Figure 2) shows the inertial frame (O) and the center of gravity (CoG) frame (C) attached to the center of gravity of the vehicle. The forces F_rx and F_ry on the rear wheel are the longitudinal and lateral forces, respectively. The front wheel forces F_f,long and F_f,lat are the longitudinal and lateral forces, respectively; the forces F_fx and F_fy are the resultants of these forces in frame C. The lengths l_f and l_r are the distances from the CoG to the front and rear wheel, respectively.

Formalizing the TS model for the LMIs and the fuzzy gain interpolation technique is based on fuzzy theory [4,15], which offers a systematic approach to obtain a multi-model system from the original mathematical model of the system. The main feature of this model is to express the local dynamics of each fuzzy implication by a set of linear system models; the overall fuzzy model of the system is achieved by the fuzzy blending of these linear system models. It has been proved that TS fuzzy models are universal approximators of any smooth nonlinear system with a high degree of precision [16].

Considered Autonomous Vehicle
To validate the proposed approach, a case study based on a small-scale autonomous car is used. The states of the car have to be estimated using the available sensors, and this information is then exploited by the full-body controller.
The dynamic states, such as the longitudinal velocity, can be roughly estimated using the radius and RPM of the rear wheel measured by the motor encoder. The angular velocity can be measured by an IMU sensor. The rough position of the vehicle can be obtained using the scan end-point matching algorithm, and the orientation from the IMU. Figure 1 shows the high-level scheme of the proposed pipeline, which includes a model-predictive controller (MPC) in the loop. We do not provide further details about the MPC algorithm, as it is not the focus of this work; we assume that the control of the car and the track plan are available.
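The rough velocity estimate from the encoder follows from v_x ≈ r · ω_wheel. A minimal sketch of this conversion (the wheel radius value used in the test below is illustrative, not the platform's actual parameter):

```python
import math

def encoder_to_velocity(rpm: float, wheel_radius_m: float) -> float:
    """Rough longitudinal velocity estimate from rear-wheel RPM.

    Ignores wheel slip, so the result is only a noisy approximation
    that the estimator later corrects.
    """
    wheel_omega = rpm * 2.0 * math.pi / 60.0  # wheel angular speed [rad/s]
    return wheel_omega * wheel_radius_m       # v_x = r * omega [m/s]
```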
As the estimator is based on a system model, the dynamics of the system are derived using the bicycle model [17] representation shown in Figure 2. The vehicle model includes kinematic as well as dynamic equations:

	v̇_x = (1/m)(F_rx − F_f,lat sin δ) + v_y ω
	v̇_y = (1/m)(F_ry + F_f,lat cos δ) − v_x ω
	ω̇ = (1/I_z)(l_f F_f,lat cos δ − l_r F_ry)
	Ẋ = v_x cos θ − v_y sin θ
	Ẏ = v_x sin θ + v_y cos θ
	θ̇ = ω	(1)

where v_x, v_y, ω are the longitudinal, lateral, and angular velocity of the vehicle, respectively, and X, Y, θ are the global pose of the vehicle in a fixed inertial frame. The available sensors are able to measure, directly or indirectly, the states [v_x, ω, X, Y, θ] with some errors. The lateral velocity v_y cannot be measured and will be estimated. F_rx is the longitudinal force on the rear wheel, which consists of the force from the motor, the drive-line resistance, and the drag force. F_f,lat and F_ry are the lateral tire forces on the front and rear wheel, respectively, obtained by simplifying the Pacejka tire model [18]. The longitudinal force on the front wheel is considered negligible since neither braking nor driving torque is applied to it. The force expressions depend on the two control inputs: the steering angle δ and the duty cycle D. Some of the system parameters are obtained by physical measurement, and the remaining ones by least-squares optimization. The obtained values and descriptions of the parameters are listed in Table 2.

Construction of the TS Model
The goal is to derive a TS model from the nonlinear vehicle dynamics (1) such that the response of the TS model exactly matches the response of the nonlinear system within a specified domain for the same input u. To form the TS model, the varying nonlinear terms in the equations are set as fuzzy (or scheduling) variables ϑ_i(t).
The set of fuzzy (or scheduling) variables is listed in Table 3. These are the state and control variables on which the nonlinear dynamics depend. Their bounds represent the physical limits of the vehicle, for example, the maximum and minimum longitudinal velocity or the maximum and minimum steering angle that the vehicle can achieve. Table 3. Scheduling variables, i.e., states or inputs on which the system matrices A(ϑ) and B(ϑ) depend. The maximum and minimum values of these variables define the vertices of the polytopic system.
From the nonlinear model (1), the TS model (5) is obtained by embedding the scheduling variables inside the system matrices, where A_i,j and B_i,j are the varying sub-elements of the matrices A(ϑ) and B(ϑ), respectively. Each scheduling variable ϑ_i is represented by a pair of membership functions

	µ_i1(ϑ_i(t)) = (ϑ_i^max − ϑ_i(t)) / (ϑ_i^max − ϑ_i^min),   µ_i2(ϑ_i(t)) = 1 − µ_i1(ϑ_i(t)),

where ϑ_i^min and ϑ_i^max are the minimum and maximum values of the scheduling variable, respectively. With two membership functions for each of the five scheduling variables, there are in total 32 combinations. For each combination, a system vertex (polytopic system) represented by A_i x(t) + B_i u(t) is obtained, and by applying the fuzzy rules, the nonlinear system can be modeled as the convex blending (9) of the vertex systems, weighted by the interpolation functions h_i(ϑ(t)) given by the products of the corresponding membership grades (11). The polytopic system representation will be used to formulate the LMIs to obtain the offline gains in Section 4.1. Additionally, the fuzzy representation (9) of the TS model allows the computation of the online gain using the interpolation technique without optimizing the LMIs at each time step.
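The sector-based membership functions and the 2^5 = 32 interpolation weights h_i can be sketched as follows. This is a hedged illustration of the construction above; the function names and the example bounds are ours, not the paper's:

```python
import itertools

def membership(theta, lo, hi):
    """Fuzzy sets for one scheduling variable: mu1 + mu2 = 1."""
    mu1 = (hi - theta) / (hi - lo)  # weight of the lower vertex
    return (mu1, 1.0 - mu1)

def interpolation_weights(scheds, bounds):
    """Weights h_i for all 2**n vertex combinations (they sum to 1)."""
    mus = [membership(t, lo, hi) for t, (lo, hi) in zip(scheds, bounds)]
    weights = []
    for combo in itertools.product((0, 1), repeat=len(scheds)):
        h = 1.0
        for var, pick in enumerate(combo):
            h *= mus[var][pick]
        weights.append(h)
    return weights

def blend(weights, vertices):
    """Convex combination sum_i h_i * M_i of vertex matrices (nested lists)."""
    rows, cols = len(vertices[0]), len(vertices[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for h, M in zip(weights, vertices):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += h * M[r][c]
    return out
```

With five scheduling variables, `interpolation_weights` returns 32 weights; `blend` then reconstructs the online A(ϑ) (and, later, the interpolated Kalman gain) from the vertex matrices.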

Measurement Model
The rough position of the vehicle is obtained from the scan matching algorithm, and the other information is directly obtained from the various sensors listed in Table 4. Table 4. Noise of the dedicated sensors or algorithms used in the experimental platform to measure or estimate the rough information.

The discretized state-space measurement model for the system is defined by

	y_k = C x_k + D u_k + E_v v_k,

where v_k ∈ R^{n_y} is the measurement noise, n_y = 5, and E_v is a distribution matrix of appropriate dimension. D = 0_{5×2} since there is no direct feedthrough from the control input. The matrix C relates the system state to the output measurement y_k. The noise of each sensor has the covariance listed in Table 4.
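With x = [v_x, v_y, ω, X, Y, θ]^T and v_y unmeasured, C is a 5×6 selection matrix. A minimal sketch (the index convention is assumed from the state ordering above):

```python
# State x = [vx, vy, w, X, Y, theta]; measured outputs: [vx, w, X, Y, theta].
MEASURED_IDX = [0, 2, 3, 4, 5]   # vy (index 1) is the hidden state
NX, NY = 6, 5

C = [[1.0 if col == MEASURED_IDX[row] else 0.0 for col in range(NX)]
     for row in range(NY)]

def measure(x):
    """y = C x (noise-free part of the measurement model)."""
    return [sum(C[r][c] * x[c] for c in range(NX)) for r in range(NY)]
```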

Observability Analysis
Before designing the estimator, the observability of Equation (5) needs to be analyzed for singularity. The system consists of n_x = 6 state variables, and the observability matrix is defined by:

	O = [C; C A_d; C A_d^2; ...; C A_d^{n_x − 1}].

If the rank of O is equal to n_x, then the system is observable. To check the observability, Equation (5) is discretized using the Euler approximation A_d = I + A · ∆t_s. During the analysis, at certain states, the observability matrix O becomes singular, precisely when any of v_x = 0, v_y = 0, or δ = 0. To resolve this issue, whenever one of these variables attains this value, a small bias of 0.0001 is added to it.
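The Euler discretization and the rank test can be sketched as below. Since the paper's full 6-state matrices are not reproduced here, the test uses a generic 2-state chain; the helper names are ours:

```python
def euler_discretize(A, dt):
    """A_d = I + A * dt (Euler approximation)."""
    n = len(A)
    return [[(1.0 if i == j else 0.0) + A[i][j] * dt for j in range(n)]
            for i in range(n)]

def obsv(Ad, C):
    """Stack the observability matrix [C; C Ad; ...; C Ad^(n-1)]."""
    n = len(Ad)
    rows, Ck = [], [r[:] for r in C]
    for _ in range(n):
        rows.extend([r[:] for r in Ck])
        Ck = [[sum(Ck[i][k] * Ad[k][j] for k in range(n)) for j in range(n)]
              for i in range(len(Ck))]
    return rows

def rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    nrows, ncols, r = len(M), len(M[0]), 0
    for c in range(ncols):
        p = max(range(r, nrows), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < tol:
            continue  # no usable pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, nrows):
            f = M[i][c] / M[r][c]
            for j in range(c, ncols):
                M[i][j] -= f * M[r][j]
        r += 1
        if r == nrows:
            break
    return r
```

The system is observable when `rank(obsv(Ad, C))` equals the state dimension; the bias trick above keeps the scheduling variables away from the values where this rank drops.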

SLAM Algorithm
The measurement from the LIDAR provides the environment information by building an occupancy grid probability map using Bayesian inference, which is later converted to occupancy grids for detecting obstacles and free zones within the map. Using the occupancy probability map at time {t} and the previous probability maps {t − 1 : 0}, a robust matching is performed using Gauss-Newton optimization to obtain the relative pose. This approach allows mapping and pose estimation without any prior information about the environment or landmarks. Later on, this pose and map are corrected using the TS estimator, since the matching is corrupted by the LIDAR measurement error. For robust matching between occupancy probability maps, the probability is accessed at sub-grid level for sub-grid-level accuracy [6]. The algorithm has two main steps: (a) mapping the obtained map at sub-grid-level accuracy, and (b) localization using the optimization process.

Mapping
The map used for matching is developed using an interpolation scheme to overcome the discrete nature of the map through bilinear filtering, which allows sub-grid-level cell accuracy for estimating occupancy probabilities and derivatives. The occupancy value M(P_m) of a scan end-point P_m in continuous map coordinates (see Figure 3) can be approximated by linear interpolation of the four closest integer grid points P_00, P_01, P_10, and P_11:

	M(P_m) ≈ ((y − y_0)/(y_1 − y_0)) [((x − x_0)/(x_1 − x_0)) M(P_11) + ((x_1 − x)/(x_1 − x_0)) M(P_01)]
	       + ((y_1 − y)/(y_1 − y_0)) [((x − x_0)/(x_1 − x_0)) M(P_10) + ((x_1 − x)/(x_1 − x_0)) M(P_00)].

The partial derivatives along the X and Y axes can be approximated by:

	∂M/∂x (P_m) ≈ ((y − y_0)/(y_1 − y_0)) (M(P_11) − M(P_01)) + ((y_1 − y)/(y_1 − y_0)) (M(P_10) − M(P_00)),
	∂M/∂y (P_m) ≈ ((x − x_0)/(x_1 − x_0)) (M(P_11) − M(P_10)) + ((x_1 − x)/(x_1 − x_0)) (M(P_01) − M(P_00)).
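The bilinear lookup can be sketched as follows, assuming unit cell spacing and a row-major grid indexed `grid[y][x]` (both conventions are ours):

```python
def bilinear(grid, x, y):
    """Occupancy value and gradient at continuous map coords (x, y).

    grid is indexed grid[y][x] with unit cell spacing; P00 is the
    lower-left corner of the cell containing (x, y).
    """
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    m00, m10 = grid[y0][x0], grid[y0][x0 + 1]
    m01, m11 = grid[y0 + 1][x0], grid[y0 + 1][x0 + 1]
    value = (dy * (dx * m11 + (1 - dx) * m01)
             + (1 - dy) * (dx * m10 + (1 - dx) * m00))
    ddx = dy * (m11 - m01) + (1 - dy) * (m10 - m00)
    ddy = dx * (m11 - m10) + (1 - dx) * (m01 - m00)
    return value, ddx, ddy
```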

Localization
The localization algorithm is based on matching the scan end-points (Figure 4) with the existing map to find the rigid transformation ξ = (X, Y, θ)^T that minimizes

	ξ* = argmin_ξ Σ_i [1 − M(S_i(ξ))]^2,

where S_i(ξ) is the location of scan end-point i in the world coordinate frame, given by:

	S_i(ξ) = [cos θ  −sin θ; sin θ  cos θ] s_i + [X; Y],

where s_i = (s_i,x, s_i,y)^T is the location of the end-point in the LIDAR coordinate frame and X, Y, θ are the pose of the LIDAR in the world coordinate frame. The function M(S_i(ξ)) provides the map value at the coordinate given by S_i(ξ). Suppose the robot starts at an initial pose ξ and a control input moves it by ∆ξ. At pose ξ, the map is built from the measurement, and at ξ + ∆ξ, a new scan is registered. To estimate ∆ξ, the optimizer minimizes the error

	Σ_i [1 − M(S_i(ξ + ∆ξ))]^2.

Solving for ∆ξ, using a first-order Taylor expansion of M(S_i(ξ + ∆ξ)) and setting the partial derivative with respect to ∆ξ to zero, yields the Gauss-Newton update:

	∆ξ = H^{−1} Σ_i [∇M(S_i(ξ)) (∂S_i(ξ)/∂ξ)]^T [1 − M(S_i(ξ))],

with

	H = Σ_i [∇M(S_i(ξ)) (∂S_i(ξ)/∂ξ)]^T [∇M(S_i(ξ)) (∂S_i(ξ)/∂ξ)],

where the Jacobian follows from the expression of S_i(ξ):

	∂S_i(ξ)/∂ξ = [1  0  −sin θ s_i,x − cos θ s_i,y; 0  1  cos θ s_i,x − sin θ s_i,y],

and the map gradient ∇M(S_i(ξ)) is approximated using the bilinear interpolation scheme of the mapping step. The obtained pose might be inaccurate due to sensor noise and errors, which need to be corrected before building the map. The estimator discussed in the next section is used to correct this pose. The corrected pose is then used to relocate the scan end-points S_i, which eventually updates the occupancy probability grid map.
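One Gauss-Newton iteration can be sketched as below. This is a hedged illustration: `map_value_grad` stands for any sub-grid map lookup returning value and gradient, and the small damping term that keeps H invertible is our addition, not part of the paper's formulation:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [A[i][:] + [b[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for j in range(c, 4):
                M[r][j] -= f * M[c][j]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][j] * x[j] for j in range(r + 1, 3))) / M[r][r]
    return x

def gauss_newton_step(pose, scan_pts, map_value_grad, damping=1e-9):
    """One Gauss-Newton update of pose = (X, Y, theta) minimizing
    sum_i (1 - M(S_i(pose)))^2; map_value_grad(x, y) -> (M, dM/dx, dM/dy)."""
    X, Y, th = pose
    c, s = math.cos(th), math.sin(th)
    H = [[damping if i == j else 0.0 for j in range(3)] for i in range(3)]
    b = [0.0, 0.0, 0.0]
    for px, py in scan_pts:
        wx, wy = X + c * px - s * py, Y + s * px + c * py  # S_i(pose)
        m, gx, gy = map_value_grad(wx, wy)
        # Row vector grad(M) * dS_i/dpose, with
        # dS_i/dtheta = (-s*px - c*py, c*px - s*py).
        J = (gx, gy, gx * (-s * px - c * py) + gy * (c * px - s * py))
        e = 1.0 - m
        for r in range(3):
            b[r] += J[r] * e
            for k in range(3):
                H[r][k] += J[r] * J[k]
    d = solve3(H, b)
    return [pose[0] + d[0], pose[1] + d[1], pose[2] + d[2]]
```

In use, the step is iterated until ∆ξ becomes small; a perfectly matching scan (all residuals zero) leaves the pose unchanged.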

Takagi-Sugeno Kalman Filter Design
For the development of the proposed estimator design, first, in Section 4.1, LMIs for the TS Kalman filter design are formulated. Second, in Section 4.2, the measurement noise and system perturbation matrix are provided. Finally, the implementation of the proposed approach and its improvement are discussed in Section 4.3.

LMI Design Procedure
The following Kalman filter is required to obtain the estimate x̂:

	x̂̇(t) = A(ϑ) x̂(t) + B(ϑ) u(t) + L(ϑ) (y(t) − ŷ(t)),
	ŷ(t) = C x̂(t) + D u(t),

where A(ϑ), B(ϑ) are obtained from the TS model (5), L(ϑ) is the online Kalman gain, and D = 0_{5×2}. The above continuous-time Kalman estimator is discretized for implementation on a real-time system. The aim is now to find the optimal L(ϑ) such that the estimate converges to the ground truth in the presence of sensor noise and system disturbance. To obtain the optimal Kalman gain L(ϑ), first, LMIs are optimized offline to obtain the gains L_off; second, the obtained L_off are interpolated in real time to obtain L(ϑ) by exploiting relation (9). To formulate the LMIs for the discretized system, the continuous fuzzy model (9) is discretized using the Euler approximation with sampling time ∆t_s:

	x(k + 1) = A_d x(k) + B_d u(k),   A_d = I + A ∆t_s,   B_d = B ∆t_s.	(24)

The LQR optimal control objective is to find a state feedback control u(k) = −K x(k), where K is the feedback gain, that minimizes the objective function

	J = Σ_{k=0}^{∞} [x(k)^T Q x(k) + u(k)^T R u(k)].

Introducing a Lyapunov function for each vertex of the polytopic model,

	V(x(k)) = x(k)^T P x(k),   P > 0,

that satisfies the decrease condition

	V(x(k + 1)) − V(x(k)) < −[x(k)^T Q x(k) + u(k)^T R u(k)],

and summing this inequality over k with u(k) = −K x(k), yields the bound J < V(x(0)) ≤ γ, which ensures that the LQR objective function is bounded by the value γ for all the polytopes. Substituting the closed-loop system x(k + 1) = (A_d − B_d K) x(k) into the Lyapunov decrease condition, applying some algebraic manipulations, the changes of variables (Q = H^T H, Y = P^{−1}, W = K Y), and the Schur complement [19], leads to an LMI optimization problem whose solution gives the optimal gain L = (W Y^{−1})^T. Solving this optimization at each time step, due to the varying system matrices (A_d, B_d), would be computationally expensive and slow. Instead, we exploit the TS representation introduced in Section 2.1.1.
Any nonlinear system can be represented in the form (24), which means the offline gains are found at the system vertices (polytopes) and later interpolated online using the fuzzy membership grade function (11) for the TS model (5). Therefore, for the 5 scheduling variables, 32 system matrices are obtained at the polytopes, resulting in 32 LMIs and offline gains. The gains obtained from solving the LMIs are denoted L_off_i, i ∈ {1, ..., 32}. All the steps for obtaining the LMIs are listed in Algorithm 1.

Disturbance and Noise Matrix
The disturbance of the vehicle model and the sensor noise are modeled by Gaussian distributions whose means are set to zero and whose covariances are obtained experimentally. The disturbance and noise of the model are found experimentally, and some of the sensor noise values are taken from the manufacturer's data sheets. The scan matching covariance is obtained via a sampling-based covariance estimate; alternatively, a method based on the Hessian matrix can be used [20]. The disturbance in the model is considered to be higher than the measurement noise to compensate for dynamics not captured during parameter estimation. The following covariances (m²) were set for the disturbance (Q) and noise (R) matrices, respectively:
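The sampling-based covariance estimate can be sketched as follows: candidate poses around the converged match are collected, and the empirical covariance of those samples is computed. Only the covariance step is shown here (a hedged illustration; the sampling strategy itself is our assumption):

```python
def sample_covariance(samples):
    """Unbiased covariance of pose samples, each sample a [x, y, theta] list."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[k] for s in samples) / n for k in range(d)]
    return [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / (n - 1)
             for j in range(d)] for i in range(d)]
```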

Switching Estimator Design
During the experiment phase, the scheduling variable θ ∈ [−π, 0] does not yield a good approximation of the nonlinear functions. This can be seen by substituting the values into the elements A_41, A_42, A_51, and A_52 of Equation (5) and considering only these sub-elements: the resulting polytopic bounds are correct for the cos function, whose minimum and maximum are attained at the interval vertices, but not for the sin function, whose extremum lies in the interior of the interval. To accurately model this effect, the θ scheduling variable is defined per quadrant, with sector boundaries ϑ_4,j ∈ {0, π/2, −π, −π/2}, j ∈ {1, ..., 4}. This improvement allows the system vertices to reach all the possible values. For the implementation, each of the four quadrant sectors is considered, and the LMIs for all the quadrants are computed offline, following the steps in Algorithm 1.
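Selecting which quadrant's offline gains to interpolate can be sketched as below (a minimal illustration; the boundary-inclusion convention is ours):

```python
import math

# Quadrant sectors for the heading scheduling variable (boundaries inclusive;
# a heading on a shared boundary resolves to the first matching sector).
SECTORS = [(0.0, math.pi / 2), (math.pi / 2, math.pi),
           (-math.pi, -math.pi / 2), (-math.pi / 2, 0.0)]

def sector_index(theta):
    """Index j of the quadrant sector containing theta in [-pi, pi]."""
    for j, (lo, hi) in enumerate(SECTORS):
        if lo <= theta <= hi:
            return j
    raise ValueError("theta must lie in [-pi, pi]")
```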

The online gain is then selected according to the quadrant in which the current heading θ(k) lies. For example, in Figure 5, θ lies in the first quadrant; the L_off_i1 gains are then weighted using the TS fuzzy interpolation function over the scheduling range [0, π/2] to obtain the current online gain. Likewise, for each quadrant, L_off_ij is interpolated for the current states using the membership functions. Algorithm 2 presents the fuzzy gain interpolation scheme. Figure 5. A defined sector that is designed to capture the kinematic system matrices to obtain the offline gains. Later on, using the same sector approach, the online interpolated gains are obtained.

Algorithm 2: Gain Interpolation Algorithm
L_interpolation(ϑ(k), L_off_ij):
Input: Scheduling variable ϑ(k)
Output: Interpolated gain L
1. ϑ_i^min, ϑ_i^max ← Load scheduling variable bounds;
2. µ_i1(k), µ_i2(k) ← Compute fuzzy sets for all ϑ(k) using Equations (7) and (8);
3. h_i(ϑ(k)) ← Compute fuzzy interpolation function using Equation (11);
4. L ← Σ_i h_i(ϑ(k)) L_off_ij ;

The algorithm for the proposed state estimation technique is given in Algorithm 3. The wrap() function is used to bound the θ value within [−π, π], and the unwrap() function is used to convert the θ measurement to a continuous value. This is required because the yaw measurement from the sensor is provided in [−π, π], which is discontinuous. The measurement discontinuity results in wrong estimates due to the lagging or leading effect of the measurement. Figure 6 depicts this wrong estimation of the states. Due to the abrupt change in the yaw measurement in Figure 6b, the robot position is wrongly estimated in Figure 6a. It can be seen in Figure 6b that, at around 12.5 s, the estimated yaw slightly leads the measurement, so the estimator tries to track the measurement data, which results in a full turn of the robot at location [−1.8, 4]. The same phenomenon occurs at 19 s, when the estimated yaw lags behind the measurement data. Removing the discontinuity using the unwrap() function provides an accurate estimation of all the states, as shown in Figure 6c,d.
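The wrap()/unwrap() pair can be sketched as follows (a minimal illustration; the paper's exact implementation is not given):

```python
import math

def wrap(angle):
    """Map an angle to the interval (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def unwrap(prev, measured):
    """Shift a wrapped yaw measurement so it is continuous with prev."""
    return prev + wrap(measured - prev)
```

Feeding `unwrap(previous_estimate, yaw_measurement)` to the filter removes the ±π jump that otherwise makes the estimator chase a spurious full turn.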

Results
To validate the estimator performance, experiments were performed in the simulator as well as in a real environment. First, the estimator is tested using manual control input; once it is verified to be working, an NMPC controller is utilized to follow a track. The vehicle is 41 cm long and 21 cm wide, and the track is 50 cm wide. The controller is tuned to complete two laps while achieving a longitudinal velocity of 0.8 m/s, keeping the vehicle inside the track and close to the track center, shown as a dotted line in Figure 7. The objectives of the validation are to (i) estimate the full state correctly, including the unmeasured state v_y, in the simulator as well as on the real vehicle, and (ii) validate the real-time performance. For the simulator, the normalized root-mean-square error (NRMSE) metric is used to compare the error between the simulated and estimated states.
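A common definition of the NRMSE, here assuming normalization by the reference signal's range (the paper's exact normalization is not reproduced), is:

```python
import math

def nrmse(estimated, reference):
    """Root-mean-square error normalized by the reference signal's range.

    The range normalization is an assumption; other NRMSE variants
    normalize by the reference mean instead.
    """
    n = len(reference)
    rmse = math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference)) / n)
    return rmse / (max(reference) - min(reference))
```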

Simulation
The vehicle simulator is developed using the full-state vehicle dynamics obtained in Section 2.1. Perturbations and noises are injected into the vehicle model and sensor model, respectively, to simulate the real world. The perturbations and noises are kept slightly higher than the actual ones to ensure that the estimator works even in the worst cases. As Figure 7 shows, the estimated position is very close to the simulator state of the vehicle. The NRMSE for all the estimated states is presented in Table 5. The remaining estimated states are compared in Figure 8. The lateral velocity (v_y), which cannot be measured directly, has an NRMSE value of 0.0323 (Figure 8b). The errors of all the estimated dynamic states are within a small range, which indicates that the estimator is fully working and ready to be tested on the real vehicle.

Real Experiment
Note that the RMSE metric is not used here due to the lack of ground truth measurements. The validation for this experiment is performed visually.
The estimated X-Y position for the real experiment is shown in Figure 9. Some snapshots for visual validation are shown in Figure 10; comparing them with Figure 9, it can be seen that the estimated position of the vehicle matches the real position of the vehicle in the snapshots. For a properly substantiated illustration, the attached media (https://youtu.be/Oey2ZxsxlnY (accessed on 29 March 2022)) shows the estimated states and the NMPC controller performance, validating the performance of the estimator design. Additional estimated states are shown in Figure 11. The velocity obtained from the motor encoder is very noisy and inaccurate (see Figure 11a), but the TS Kalman filter provides quite a clean estimate; the same happens with the angular velocity. The lateral velocity, which is not measured, is successfully estimated. The final corrected map after completing the laps is shown in Figure 12b. Figure 12a represents the environment, which includes fixtures, obstacles, and free space. There are some unoccupied cells, particularly where the LIDAR scan is hindered by the fixtures present in the environment. For this reason, the error between the ground truth and the map is only calculated for properly scanned areas, i.e., the fixture occlusion region is excluded (see Figure 12a). An intersection-over-union (IoU) score of 0.9388 is obtained between the ground truth and the corrected map.

Simulated vs. Real Experiment
It is worth noting that the times taken to complete the two laps in the simulator and real experiments are 38 s and 32 s, respectively.
The real experiment completed the laps faster than the simulation because the real vehicle maintained the longitudinal velocity of 0.8 m/s more strictly. The discrepancies between some estimated states are due to the different control inputs in the simulation and real experiment phases, or to a mismatch between the simulator model and the real vehicle model. Table 6 shows the minimum and maximum absolute differences between the estimated and measured states, as well as their standard deviations, for both the simulator and the real platform experiment. The minimum deviations for the real experiment are similar to those of the simulator, while the real experiment values deviate by about 10 cm from the simulator values; this is because the standard deviation of the simulation measurements is set higher than the real one. This can also be seen from the standard deviation values in the simulator and real experiments. Even though the performance of the estimator in the real experiment cannot be verified due to the lack of ground truth, the values in the real experiment are similar to those in the simulator, suggesting that the estimator performs similarly. Table 6. Evaluation between estimated and measured states for the experiment in the simulator and on the real platform. The min and max headers represent the minimum and maximum absolute differences between estimated and measured states, respectively. The std header represents the standard deviation of the difference between estimated and measured states.

Conclusions
This paper has presented an approach for the full-state estimation of an autonomous vehicle using a TS Kalman filter together with imprecise scan matching and measurements. Furthermore, this technique can also correct the map obtained from the noisy LIDAR scan end-point matching. The proposed approach provides a stable and optimal solution in the presence of model disturbance and measurement noise at a high update rate of 100 Hz for state estimation and simultaneous correction of the obtained map. The update rate could go much higher than 100 Hz; the only limitation is the update rate of the sensor measurements. The computational cost of the proposed approach is fairly low compared with EKF SLAM, whose cost depends on the number of landmarks. The results motivate the usage of TS-based state estimation and mapping in the field of autonomous vehicles and robotics.
The proposed estimation technique depends on the model of the system and on the measurement model, with a certain disturbance and noise, respectively. This requirement is sometimes hard to fulfill; moreover, for some systems, the parameters change within a certain range, for example, the tire coefficients of the vehicle. For such systems, online model learning techniques need to be incorporated [21,22]. The proposed method can also be scaled to a high-DoF system if the system model is known and extra sensors can be installed; for example, a height sensor, such as a barometer, or a range sensor can be used to estimate the z-pose. The proposed scan matching technique can be applied to an unknown environment, but the accuracy may degrade in a highly dynamic environment. For such cases, robust scan matching techniques need to be applied, such as in [23].
As future work, the proposed approach will be benchmarked against competing approaches.
Author Contributions: Conceptualization, S.C. and V.P.; methodology, S.C. and V.P.; software, S.C.; writing-original draft preparation, S.C.; writing-review and editing, V.P.; supervision, V.P. All authors have read and agreed to the published version of the manuscript.