Adaptive Diagnosis for Fault Tolerant Data Fusion Based on α-Rényi Divergence Strategy for Vehicle Localization

When applying a diagnostic technique to complex systems whose dynamics, constraints, and environment evolve over time, the ability to re-evaluate the residuals capable of detecting faults, and to propose the most appropriate ones, quickly becomes essential. For this purpose, the concept of adaptive diagnosis is introduced. In this work, the contributions of information theory are investigated in order to propose a fault-tolerant multi-sensor data fusion framework. This work is part of studies proposing an architecture that combines a stochastic filter for state estimation with a diagnostic layer, with the aim of providing a safe and accurate state estimate from potentially inconsistent or erroneous sensor measurements. From the design of the residuals using the α-Rényi Divergence (α-RD), to the optimization of the decision threshold, through the establishment of a function dedicated to the choice of α at each moment, we detail each step of the proposed automated decision-support framework. We also dwell on: (1) the consequences of the degree of freedom provided by the α parameter; and (2) the application-dictated policy for designing the α tuning function, which drives the overall performance of the system (detection rate, false alarm rate, and missed detection rate). Finally, we present a real application case on which this framework has been tested. The problem of multi-sensor localization, integrating sensors whose operating range varies according to the environment crossed, serves as a case study to illustrate the contributions of such an approach and show its performance.


Introduction
According to a study by the U.S. Department of Transportation's National Highway Traffic Safety Administration, more than 90 per cent of road crashes are the result of driver error [1], and millions of people die from traffic accidents worldwide each year. Other factors often come into play, such as distractions due to weather and/or other vehicles, and physical limitations, such as vision and response time. These factors have created great interest in providing safe driving technologies, starting by helping the human driver through Advanced Driver Assistance Systems (ADAS), toward a fully automated vehicle. The functions to be developed to replace the driver are numerous; to simplify, let us retain four main ones:
• Localization: the ability to define, more or less precisely, and in an absolute or relative way, one's position;
• Perception: the ability to analyze the nearby environment and act accordingly (detection of obstacles, signaling, etc.);
• Control: taking control of the vehicle's actuators (acceleration, braking, steering angle, etc.); and,
• Navigation: the ability to plan and execute a route to a destination.
All of these functions are interdependent and must meet the same requirements: robustness, availability, accuracy, reliability, and, of course, safety for the surroundings and the system itself. The localization function is particularly interesting insofar as it provides information considered critical from a safety point of view and used as input by the other functions. The Global Navigation Satellite System (GNSS) is the most commonly used localization system for autonomous vehicles, as it is cheap and easily accessible. It also provides a first position without prior knowledge. However, in land applications, standalone GNSS suffers from poor reliability due to limited precision, which affects safety-critical Intelligent Transportation Systems (ITS) [2]. Different techniques for enhancing the performance of GNSS exist in the literature. In [3] and [4], a map-matching algorithm is applied. One limit of this technique is that the performance of a map-matching algorithm depends strongly on the resolution of the digital map; moreover, the processing time for the estimation is typically high [2].
Based on the sensor fusion technique, a GNSS and Inertial Measurement Unit (IMU) fusion approach provides centimeter-level accuracy when no GNSS signal loss occurs [5]. When the GNSS signal is lost for three seconds, the performance obviously decreases to meter level. For better performance, [6] uses cameras for lateral positioning based on lane-marker recognition, together with GNSS/IMU for global positioning. To decrease the sensor cost, the authors of [7] rely on a low-cost pulse-based Short Range Radar (SRR) and achieve good accuracy in their application. Additionally, in [8], GNSS, IMU, and an odometer are integrated, and an infrared LiDAR is used to generate high-resolution maps; subsequently, a SLAM relaxation algorithm based on a Particle Filter (PF) localizes the vehicle in the created map. The two previous approaches offer good performance with acceptable mean errors, but the drawbacks of the LiDAR, concerning the power needed, the computational requirements, and its implementation cost, are considered high.
As we have seen, sensors are indispensable elements in ITS, even if they have limitations. These limitations create faults that directly affect system performance. Hence, there must be a way to monitor the system and take an appropriate decision when faults occur, so that problems and faults due to feared events can be detected and corrected while the system is still available and safe. This avoids future failures in the system, as well as the costs that result from repairs. In navigation systems, detecting the fault is not enough to ensure availability; the fault must also be monitored and evaluated in order to isolate or identify the error. Advanced methods of diagnosis, Fault Tolerance (FT), and fault management, called Fault Detection and Isolation (FDI) [9], are valuable tools for satisfying such needs. The existing approaches in the literature are based on duplication and comparison techniques that can be divided into two categories [10]: hardware redundancy approaches and analytical redundancy approaches.
The basic concept of hardware redundancy approaches is to measure a single crucial input parameter with two or more sensors (usually three or more); detection is then performed through consistency checking of the redundant sensor measurements, and the faulty sensor is isolated using a majority voting process. In [11], an FDI method is presented and a feasibility study for fault detection is shown through the temporal analysis of the conflict resulting from combining three data sources based on Smets's Transferable Belief Model (TBM). Accordingly, if the conflict of a source is high compared to the other sources, it is considered faulty and is isolated before the final fusion process. An adaptive technique to weight the sensor outputs is presented in [12], where the authors estimate the standard deviation of each sensor by statistics and a time factor related to the previous data, then mitigate the impact of hardware degradation by calculating the sensor output update factor for the data fusion process. In [13], data fusion approaches are described in order to increase the accuracy and improve the fault tolerance of inertial network systems. These approaches are based on Kalman Filter (KF) techniques for the fusion process and use the IMU redundancy for fault masking, taking one master node for the navigation state estimation and slave nodes for the local states and local inertial vector information. On the other hand, analytical redundancy approaches (also known as functional, inherent, or artificial redundancy) find relations between the measured inputs based on a mathematical model and generate residuals in order to detect and isolate the faulty sensor [14]. After residual generation, a decision is made through a thresholding technique to evaluate the residuals and make the final detection and isolation decision.
For example, based on the innovation calculated by the KF, [15] describes a general approach for error detection, diagnosis, and prognosis in systems that can be described by mathematical models. It is based on System Theory and Statistical Decision Theory. The paper considers the special case of linear dynamic systems with Gaussian random inputs and shows how the statistical properties of the innovation process can be used for error detection and diagnosis. A Fault Detection (FD) algorithm, based on the Extended Kalman Filter (EKF) to track the outputs of the localization methods, followed by the CUmulative SUM (CUSUM) test on the filter's residual, is proposed in [16] to identify any unexpectedly large deviation. The authors in [17] propose an FD method for woodland vehicle localization: data fusion occurs using GPS, encoders, and an IMU in an EKF estimator, after Normalized Innovation Squared (NIS) tests dealing with the GPS measurements.
In all of the articles cited above, the environment in which the system operates is always considered the same, which directly affects the way in which the limitations of the sensors are treated in the diagnostic part. In fact, these limitations are related to different elements that depend on the type of application and the surrounding environment of the system. In this paper, the objective is to develop an Adaptive Fault-Tolerant Fusion (AFTF) localization approach for autonomous vehicles by multi-sensor data fusion, through the integration of a diagnostic layer that allows the detection and isolation of faulty sensors. More specifically, we are interested in developing systems that provide a measure of confidence in the calculated information and improve the accuracy of the positioning system. In the diagnostic part, our developments are oriented towards informational approaches based on filtering techniques and informational metrics for residual generation. These residuals can be created with different existing metrics, starting with the Euclidean metric, also called the Euclidean distance [18], which is the simplest way to compare two distributions using the distance between their means. But taking only the mean as a comparison factor leads to a significant loss of information related to the characteristics of the distributions, such as the volume, the shape, and the orientation. In 1948, Claude Elwood Shannon proposed [19] a new concept dealing with information, called Shannon's entropy. Information theory allows for generating residuals that take the uncertainty, as well as the means, into consideration. Recently, researchers have resorted to information metrics as support for the design of residuals in FDI methods, in order to reach higher integrity and accuracy.
The Mutual Information (MI) is used in [20] and [21] as a similarity measure for the residual, and it shows higher accuracy and better results compared with other methods. In [22], a robust unknown input observer for a class of nonlinear systems is designed for FDI through the Bhattacharyya Distance (BD). The Quantum Jensen-Shannon Divergence (QJSD) is used in pattern recognition [23], chemical physics [24], and other applications [25,26]. Additionally, the Kullback-Leibler Divergence has proved its efficiency in different domains, for instance in Fault Tolerant Fusion (FTF) approaches with an FDI method for multi-robot localization in [27]. A general survey of the various informational divergences and measures used in the literature can be found in [28].
In the works cited above, the hypothesis underlying the residual design, conveyed through the chosen information metric, is that the evolution model is always considered as the reference with respect to the observation model, despite the relevance of cases where the observations are very reliable. Besides, in cases where the uncertainty may be high on both models, this assumption may lead to missed detections in possible faulty cases.
In this work, we propose a new way of generating these residuals by suitably weighting the two covariance matrices (evolution model and observations), using α as a weight in the α-RD. Through the variation of α ∈ [0, ∞], the α-RD generalizes a large number of possible informational measures. Corresponding residuals are then created based on proposed models for the fault-free and faulty cases. After residual creation, an adaptive thresholding method based on the α-Rényi criterion (α-Rc) is proposed in order to make decisions. In the case of error detection, an isolation algorithm based on Unknown Input Observers (UIO) is used to remove the error from the correction step of the chosen estimator.
The main contributions of this paper are: the development of a tightly coupled multi-sensor integration; the guarantee of availability and integrity thanks to an adaptive diagnostic layer able to detect both proprioceptive and exteroceptive sensor faults; the use of an advanced information metric, namely the α-Rényi divergence, to design an adaptive and optimised thresholding strategy; and the validation of the approach with real experimental data.
This paper is organized as follows: Section 2 presents the problem statement and introduces the Key Performance Indicators (KPIs). Section 3 presents the fusion architecture using the Nonlinear Information Filter (NIF). Section 4 introduces the α-RD for fault detection and isolation, and introduces the balanced α as a proposed solution. Adaptive thresholding using the entropy-based criterion is detailed in Section 5. The results with data from a real experiment are shown and discussed in Section 6, followed by a conclusion in Section 7.

Problem Statement
The adoption of a technological solution as a means of localization for an Intelligent Transport System requires the validation of the usual key performance indicators, mainly accuracy, availability, continuity, and safety. Therefore, a sensor that is highly sensitive to the context in which it operates must be monitored with an appropriate diagnostic algorithm that can adapt to the surrounding environment. Indeed, when one thinks not only of the vehicle but also of the environment in which it evolves, it becomes obvious that the expected operational needs of the positioning system are also variable over time: for example, the requirements in terms of expected precision or continuity are not the same in open sky as in an urban canyon. Add to this the relations between some of these KPIs, which can sometimes be contradictory; take, for example, availability and safety, which are two antagonistic objectives. One can see that achieving one KPI may affect the other(s). This link between the KPIs creates important challenges for the design of a robust, safe, and available system. In this section, we highlight this relation by addressing the main operational challenges for any proposed estimation approach; we then deliver the proposed approach in detail.

Diagnostic as a Guarantee of Safety
Safety requirements are expressed in terms of the Tolerable Hazardous Rate (THR), which represents a maximum admissible rate of system failures.
A very conservative diagnostic policy is required to achieve this objective: at the slightest suspicion of error, such a policy prefers to render the localization function unavailable. Here we can see the negative impact that an overly conservative policy can have on the availability and continuity of the system. Having an adaptive diagnosis makes it possible to relax the operational-safety constraints according to the context, while ensuring that the requested THR is not exceeded, in order to improve availability.
In other words, a standard diagnostic pursues a single objective (i.e., the probability of false alarm or the probability of missed detection). An adaptive diagnostic adds a degree of freedom that allows, depending on the change of environment and/or KPIs, reaching a higher level of sensitivity to the various faults and compromising between these two objectives.

Fault Tolerance as an Availability Booster
As we have seen, a conservative diagnostic can affect the availability of the localization function. Hence, in order to mitigate this problem and achieve continuous and safe positioning, a fault-tolerant layer must monitor the health of the sensor measurements and analyze the current situation to detect potentially dangerous, incipient, or suddenly changing situations, so as to provide an appropriate estimate of the system behavior that ensures the desired level of safety with maximum availability. In the literature, well-known FDI algorithms that supervise and tolerate faults are based on Solution Separation (SS) [29]. These architectures are admittedly complex and very demanding in terms of computing resources, but they have the advantage of detecting a fault only if it has an impact on the estimated position.

Proposed Approach Block Diagram
After presenting the main challenges, we detail the proposed approach at its different levels. Figure 1 illustrates the proposed fault-tolerant multi-sensor fusion approach. The algorithm is applied by fusing GNSS observations (pseudo-ranges) and odometer data using a Nonlinear Information Filter (NIF) as the main estimator.
After specifying the value of α thanks to a predefined objective function, all of the received observations are involved in the calculation of the global α-RD. The global residual is then tested against the threshold provided by the α-Rc, which is used to provide the optimal missed detection probability (P_md) and detection probability (P_D) for each case. If the global residual is below the threshold, then using all observations in the correction step of the NIF is safe for estimating the position. Otherwise, the isolation step is activated to remove the erroneous measurements. The isolation method is a hierarchical algorithm based on the UIO: at each level, sets of IF sub-filters are created using the number of observations available at the previous level (n − 1), local α-RD residuals are calculated for each sub-filter, and one erroneous satellite measurement is removed before the next level. This process continues until all erroneous measurements are removed and the final subset of observations has an α-RD below the threshold. Finally, this safe subset is used in the correction step of the global NIF and the estimated position is calculated.
Note that, during the whole process, the number of satellites is taken into consideration: at least four satellites must be available at instant k in order to deliver a position [30].
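The hierarchical isolation loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `global_residual` is a hypothetical stand-in for the α-RD of a sub-filter (a real system would run the NIF on each subset), and the four-satellite floor from the text is kept as a hard constraint.

```python
import itertools

def global_residual(observations):
    # Hypothetical stand-in: in the real system this is the global alpha-Renyi
    # divergence between the prediction and correction pdfs of the sub-filter.
    return sum(obs["fault"] for obs in observations)

def isolate(observations, threshold):
    """Hierarchical isolation: while the residual exceeds the threshold,
    build all (n-1)-observation sub-filters, keep the subset with the
    smallest local residual, and drop its excluded (faulty) member."""
    subset = list(observations)
    # Never go below four satellites: a position needs at least four.
    while len(subset) > 4 and global_residual(subset) > threshold:
        candidates = itertools.combinations(subset, len(subset) - 1)
        subset = list(min(candidates, key=lambda c: global_residual(c)))
    return subset
```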

Nonlinear Information Filter
The information form of the EKF, called the NIF, is used in order to estimate the position of the vehicle [21]. Like the EKF, the NIF consists of two main steps, called the prediction and correction steps, but it works with the information matrix (the inverse of the covariance matrix) and the information vector instead of the covariance matrix and the state vector.
Consider the following non-linear system: where X_k is the state vector, f(.) is the non-linear evolution function, u_k is the input vector, and w_k ∼ N(0, Q_k) is the model noise, considered as white Gaussian noise with zero mean and covariance matrix Q_k.
The non-linear observation model has the following form: where Z_k is the observations vector and ε_k is the observation noise vector, considered as white Gaussian noise with zero mean and covariance matrix R_k = E[ε_k ε_k^T]. The first step of the NIF, called the prediction step, uses the information matrix and the information vector to predict the position. These quantities are then updated in the second step, called the correction step, where gI_i(k) and pI_i(k) are the information contributions of observation i, and N is the number of observations at instant k.
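As a reference point, the recursion that an (extended) information filter such as the NIF follows can be sketched as below. This is the generic textbook form under linearized models, not the paper's exact equations: the Jacobians `F` and `H_i`, and the models `f` and `h_i`, are assumptions of the sketch.

```python
import numpy as np

def nif_step(x, Y, u, f, F, Q, zs, hs, Hs, Rs):
    """One prediction + correction cycle of an extended information filter.
    Y is the information matrix (inverse covariance). Each observation i
    adds H_i^T R_i^-1 H_i to Y (its pI_i contribution) and
    H_i^T R_i^-1 (z_i - h_i(x) + H_i x) to the information vector (gI_i)."""
    # Prediction: propagate the state and covariance through the model.
    x_pred = f(x, u)
    P_pred = F @ np.linalg.inv(Y) @ F.T + Q
    Y_pred = np.linalg.inv(P_pred)
    y_pred = Y_pred @ x_pred
    # Correction: sum the information contributions of the N observations.
    Y_corr, y_corr = Y_pred.copy(), y_pred.copy()
    for z, h, H, R in zip(zs, hs, Hs, Rs):
        Rinv = np.linalg.inv(R)
        Y_corr += H.T @ Rinv @ H
        y_corr += H.T @ Rinv @ (z - h(x_pred) + H @ x_pred)
    x_corr = np.linalg.solve(Y_corr, y_corr)  # recover the state estimate
    return x_corr, Y_corr
```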

Adaptive diagnostic layer based on α-Rényi Divergence
One of the problems faced in diagnosis is the use of a single chosen model while the system evolves in a dynamic environment and under varying conditions. An effective diagnostic layer has to be placed after a robust sensor fusion. In this work, the α-RD is proposed for residual design; the features and advantages of this divergence are then presented in detail.
The α-RD is chosen for the wide range of divergence measures it offers by varying the value of α [31], which makes the divergence more adaptable to the type of application.

α-Rényi Divergence as a Parametric Residual
For two probability distributions P and Q, the α-RD between P and Q is non-decreasing as a function of its order α, and it is continuous on the set of α for which it is finite. For α > 0, α ≠ 1, it is defined as [32]:

D_α(P‖Q) = 1/(α − 1) log ∫ p(x)^α q(x)^{1−α} dx.

The generation of residuals through the α-RD makes the residual flexible and adaptable to the environment, the dynamics, and the kind of errors. Hence, to calculate the divergence between the two probability density functions (pdfs) g(k/k − 1) ∼ N(X_{k/k−1}, Σ_{k/k−1}) and g(k/k) ∼ N(X_{k/k}, Σ_{k/k}), provided by the prediction and correction steps of the chosen NIF estimator, the α-RD admits a closed form [33]. Based on Appendix A, Equation (10) can be written in the form of Equation (11), whose expression decomposes into three kinds of tests that treat the two pdfs in different ways:
1. The first test, a weighted Mahalanobis distance, measures the distance between the means while taking the value of α into account, which weights the distance through its impact on the covariance matrices;
2. The second test, log |Y_{k/k−1} Y_{k/k} / (α Y_{k/k−1} + (1 − α) Y_{k/k})|, can be compared to a weighted Bregman divergence, taking the weight α of each covariance matrix into account;
3. The third test: log
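For concreteness, the widely used closed form of the α-Rényi divergence between two Gaussians, of which the tests above are a decomposition in information (inverse-covariance) form, can be computed as below. The formula is the standard one for Gaussian densities, stated here in covariance form, so its presentation may differ from Equation (11).

```python
import numpy as np

def renyi_divergence_gauss(mu_p, S_p, mu_q, S_q, alpha):
    """alpha-Renyi divergence D_alpha(N(mu_p,S_p) || N(mu_q,S_q)),
    valid for alpha != 1 when S_a = alpha*S_q + (1-alpha)*S_p is
    positive definite (standard Gaussian closed form)."""
    d_mu = mu_p - mu_q
    S_a = alpha * S_q + (1.0 - alpha) * S_p
    # Weighted Mahalanobis term between the two means.
    maha = 0.5 * alpha * d_mu @ np.linalg.solve(S_a, d_mu)
    # Log-determinant term comparing the covariance volumes.
    log_term = (np.log(np.linalg.det(S_a))
                - (1.0 - alpha) * np.log(np.linalg.det(S_p))
                - alpha * np.log(np.linalg.det(S_q)))
    return maha - log_term / (2.0 * (alpha - 1.0))
```

At α = 0.5 this equals twice the Bhattacharyya distance, which is the link exploited later when the fixed-α scenario sets α = 0.5.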

Establish a Residuals Parameterization Policy Based on Operational Requirements and Changes in Navigation Context
Given the different purposes of an adaptive diagnosis explained in the previous sections, the adaptive policy proposed here rests on several observations. First, we are dealing with two models: the evolution model and the observation model. The limitations that restrain the odometer sensor, as well as those of GNSS, due to the many causes listed in [34], lead us to rely completely on neither of the two models.
The infinity of particular cases generalized by the α-RD corresponds to a particular appreciation of the trust placed in the evolution model versus the observations. In the case of the Kullback-Leibler divergence (α → 1), the residuals are generally designed to detect only observation faults, and all of the confidence is given to the evolution model. To be effective, this model must be based on precise mathematical knowledge of the system, which makes it complex and hard to establish. Otherwise, if the model is not very accurate, the uncertainty fixed around it must be large, which reduces the possibility of fault detection. In the case of the Bhattacharyya divergence, the appreciation gauge is placed in the middle and neither of the two models is a priori privileged. However, an imbalance remains, due to the original covariance of each model. For the choice of the α value at each instant k, and knowing that there is no reliable reference between the two models, we propose a solution that weights the two models equally through an α balance. The following equation takes into consideration the covariance matrix Σ_{k/k−1} of the prediction step, representing the evolution model, and the covariance matrix Σ_{k/k} of the correction step, representing the observation model, and weights the matrices equally.
This equation leads to the weighted value of α, which can also be written in informational form. One can notice that, regardless of their original weights, using the equality in Equation (14), the weight is equally distributed between the prediction and correction pdfs. The adaptive α avoids the problem of the model reference faced along the trajectory by proposing a new residual at each instant k.
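A minimal sketch of one possible scalarization of the α balance is given below. The trace-based condition α·tr(Y_{k/k−1}) = (1 − α)·tr(Y_{k/k}) is an assumption of this sketch; the paper balances the prediction and correction matrices themselves through Equation (14), whose exact form is not reproduced here.

```python
import numpy as np

def balanced_alpha(Y_pred, Y_corr):
    """Choose alpha so that the two information matrices contribute
    equally, here enforced on their traces (an assumed scalarization):
    alpha * tr(Y_pred) = (1 - alpha) * tr(Y_corr)."""
    t_pred, t_corr = np.trace(Y_pred), np.trace(Y_corr)
    return t_corr / (t_pred + t_corr)
```

With this choice, whichever model starts with more information receives the smaller weight, so neither pdf dominates the residual.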

An Infinity of Residuals Implies an Infinity of Statistical Characterizations: How to Solve It?
The policy chosen for the adaptive diagnosis leads to an infinity of residuals, which implies an infinity of statistical characterizations. In practice, creating the two pdfs for infinitely many values of α is intractable. Hence, one solution is to create mathematical models able to estimate the two pdfs. The diagram presented in Figure 2 shows the procedure used to create these models. Using real sensor data, residuals are calculated for many values of α, and the two pdfs for faulty and non-faulty cases are created through an unsupervised learning method for each value of α. Subsequently, the means and variances are extracted and plotted with respect to α, and a function approximating these values is fitted. This function constitutes the mathematical model used to estimate the statistics for a new value of α (Figure 3). The mean and variance models take the following mathematical forms:
1. For non-faulty cases: 2. For faulty cases: where a, b, c, and d are constants. Using the above models, one is able to approximate the mean and variance for any new value of α in order to create the two pdfs.
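The offline characterization step can be sketched as below, with synthetic stand-in statistics and a low-order polynomial model; both are assumptions, since the paper's models use constants a, b, c, and d whose functional form is not reproduced in this excerpt.

```python
import numpy as np

# Hypothetical residual means measured offline for sample values of alpha
# (in the real system these come from the unsupervised characterization).
alphas = np.linspace(0.05, 0.95, 19)
means_nf = 0.4 + 0.3 * alphas + 0.1 * alphas**2  # stand-in "non-faulty" means

# Fit mean(alpha) ~ a + b*alpha + c*alpha^2; the fitted model can then
# predict the residual statistics online for any new value of alpha.
coeffs = np.polyfit(alphas, means_nf, deg=2)
mean_model = np.poly1d(coeffs)

predicted = mean_model(0.5)  # estimated non-faulty mean at alpha = 0.5
```

The same fit is repeated for the variances and for the faulty-case statistics, yielding the four curves needed to build the two pdfs at any α.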

α-Rc Design
Based on Equation (9), the informational gain associated with the decision u_j represents the α-Rényi divergence between the a priori and a posteriori distributions. The α-Rényi criterion is then represented by the summation of the informational gains corresponding to the decisions u_0 and u_1, where: γ_0 = 1 − P_F, β_0 = 1 − P_D, γ_1 = P_F, β_1 = P_D, P_F is the probability of false alarm, and P_0 is calculated from the past decisions, with h_i = 0 if the decision is H_0 and h_i = 1 if the decision is H_1. Accordingly, ∑_{i=1}^{n} h_i is computed over the window of the n past hypotheses.
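The window-based P_0 can be sketched as below, under the assumption that P_0 is the empirical frequency of H_0 decisions over the last n epochs; the exact expression of the paper's equation is not reproduced in this excerpt.

```python
from collections import deque

class P0Estimator:
    """Sliding-window estimate of the prior probability of the no-fault
    hypothesis H0. Assumed form: P0 = 1 - mean(h_i) over the n most
    recent decisions, with h_i = 1 when H1 (fault) was decided."""
    def __init__(self, n=10):
        self.window = deque(maxlen=n)  # old decisions fall out automatically

    def update(self, h):
        self.window.append(h)
        return 1.0 - sum(self.window) / len(self.window)
```

With n = 10, as used in the experimental section, each new decision shifts P_0 by at most 0.1, which is what makes the resulting threshold adaptive but smooth.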

Variation of α-Rc
The variation of the α-Rc is studied through the derivative of the α-Rc with respect to a given variable v. This derivative is calculated based on Equation (20):
1. For P_F constant, the derivative of the α-Rc with respect to P_D is considered. Based on Equation (23), we conclude that the α-Rc is a decreasing function of P_D on [0, P_F[ and an increasing one on ]P_F, 1], with its minimum reached at P_D = P_F. Hence, by maximizing the α-Rc, P_D is maximized when P_F is fixed.
2. For P_D constant, the derivative of the α-Rc with respect to P_F is considered. Based on Equation (24), we conclude that the α-Rc is a decreasing function of P_F on [0, P_D[ and an increasing one on ]P_D, 1], with its minimum reached at P_F = P_D. Accordingly, by maximizing the α-Rc, P_F is minimized when P_D is fixed.

In conclusion, minimizing the false alarm probability while maximizing the detection probability is equivalent to maximizing the α-Rc.

Threshold Optimization Algorithm
The threshold corresponding to these conditions is found by maximizing the α-Rc, since we are interested in maximizing the detection probability while minimizing the false alarm probability.
Starting from the Bayes rule, one obtains the likelihood ratio test; then, by setting the derivative of the α-Rc in Equation (22) to zero (∂Rc_α/∂v |_{v=th} = 0), the likelihood ratio linked to the threshold, ∂P_D/∂P_F = Λ, which maximizes the α-Rc, is obtained. The threshold optimization algorithm is given as follows (Algorithm 1):
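A hedged sketch of such a threshold search is given below: residuals under H_0 and H_1 are modeled as Gaussians, P_F and P_D follow from their tail probabilities, and a grid search maximizes a user-supplied criterion. Youden's P_D − P_F is used in the test only as a generic stand-in, since the full α-Rc expression is application-specific; any scalar criterion plugs into `criterion`.

```python
import math

def gaussian_sf(x, mu, sigma):
    """Survival function P(X > x) for X ~ N(mu, sigma^2)."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

def optimize_threshold(mu0, s0, mu1, s1, criterion, n_grid=2000):
    """Grid-search the decision threshold over the residual range.
    P_F comes from the non-faulty model N(mu0, s0^2), P_D from the
    faulty model N(mu1, s1^2); the threshold maximizing
    criterion(P_D, P_F) is returned."""
    best_th, best_score = None, -math.inf
    lo, hi = mu0 - 4.0 * s0, mu1 + 4.0 * s1
    for i in range(n_grid):
        th = lo + (hi - lo) * i / (n_grid - 1)
        p_f = gaussian_sf(th, mu0, s0)  # false alarm: H0 residual above th
        p_d = gaussian_sf(th, mu1, s1)  # detection: H1 residual above th
        score = criterion(p_d, p_f)
        if score > best_score:
            best_th, best_score = th, score
    return best_th
```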

Experimental Results
With the use of a robust filter for sensor fusion that can ensure the availability of the system and, in most cases, its accuracy as well, the safety attribute still needs to be guaranteed. Hence, a diagnostic layer based on the α-RD is proposed to deliver safety and avoid the accuracy problems caused by feared events. In order to validate the proposed approach, experimental results obtained with the equipped autonomous vehicle of the CRIStAL Lab are shown in the following section.
Thus, two scenarios are presented based on the proposed α-RD residual generation. The first scenario uses a fixed α value over the whole trajectory, while the second uses the balanced α. The results are shown and compared at the end of the two scenarios.
Note that both scenarios use the same trajectory C3, shown in Figure 4 and detailed in Table 1, under the same conditions and with the same collected data.

Results without FDI Approach
The results in Figure 5 illustrate the position estimation without any FDI algorithm on the trajectory C3 (Figure 4). In both views, one can see the level of error, especially along the z-axis. These errors are due either to multipath caused by signal reflections, Non-Line-Of-Sight (NLOS) receptions, or to the low elevation of the satellites available at instant k. The elevation of all satellites available on this trajectory can be seen in Figure 6. We provide the results of the proposed FDI algorithm using both the fixed and the balanced α strategies in order to detect and isolate these errors. Before moving to the next section, Figure 7 illustrates the global α-Rényi residuals for the fixed and balanced α. These residuals, calculated without the FDI approach, show erratic excursions that indicate the presence of faults.

Results with FDI Approach
A good residual design reduces the loss of fault information, as mentioned in Section 4. From this point, we deliver the results for two different scenarios. In the first scenario, residuals are generated with a fixed α equal to 0.5, which is related to the Bhattacharyya divergence [35]. The second implements the α-balance strategy.

Residual Design Using Fixed α
The residuals generated by the fixed RD for each satellite during the whole trajectory can be seen in Figure 8. The behavior of each satellite is monitored at each epoch. The detection of any erroneous behavior is carried out using the Rényi criterion (for α = 0.5) by optimizing the threshold value. This threshold is illustrated with the global residuals, without and with the FDI approach, in Figure 9.
Figure 9. RD_0.5 divergence without/with FDI with adaptive threshold.
As we can see, the threshold adapts to the residuals: it is calculated based on a variable related to the decisions. This variable is P_0, which takes into consideration the last 10 decisions over the past epochs and can be calculated using Equation (21). The P_0 behind the threshold of Figure 9 can be seen in Figure 10. The behavior of P_0 indicates the low probability of being in a case where no fault will be detected; its value increases whenever an isolation step is conducted.

Residual Design Using α Balanced
The same structure is followed in this section using the balanced α. The α-Rényi residuals for each satellite are presented in Figure 11. Many of the available satellites carry erroneous information that should be isolated from the final estimation, like satellites 7 (dark blue), 15 (mauve), and 8 (orange). Unlike RD_0.5, these residuals are generated by equally 'reweighting' the prediction and correction covariance matrices. The value of α, calculated using Equation (14) so as to take the two uncertainties equally into consideration, can be seen in Figure 12. Detection then occurs using an adaptive threshold optimized by the α-Rc, where each residual is judged with its corresponding criterion. Hence, referring to the problem discussed in Section 4.3, where an infinity of residuals implies an infinity of statistical characterizations, we used the proposed models to estimate the mean and variance for the corresponding value of α.
In our application, the models for estimating the means and variances of the faulty and non-faulty distributions are created by fitting the real samples (Figure 13).
Fitting the real means and variances leads to the following mathematical models for estimating the two distributions for any other value of α: • For non-faulty cases: • For faulty cases: These models are created by fitting the real values of the means and variances at twenty values of α in [0, 1], where the real means and variances are extracted from real experimental pdfs. After generating the statistical characterization used for the α-Rc optimization, the adaptive threshold together with the global α-Rényi residuals is presented in Figure 14.
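The fitting step above can be sketched generically: estimate the residual mean and variance at twenty sampled values of α, fit low-order polynomial models, and then query the models for any α without re-estimating a pdf. The sample data below is synthetic, purely for illustration, and the polynomial degrees are assumptions (the paper's fitted model forms are not reproduced here):

```python
import numpy as np

# Twenty sample points of alpha in [0, 1], as in the paper's fitting step
alphas = np.linspace(0.0, 1.0, 20)

# Placeholder "empirical" means/variances (synthetic, not experimental data)
emp_means = 1.0 + 0.5 * alphas + 0.2 * alphas**2
emp_vars = 0.3 + 0.1 * alphas

# Fit low-order polynomial models of mean(alpha) and variance(alpha)
mean_model = np.polynomial.Polynomial.fit(alphas, emp_means, deg=2)
var_model = np.polynomial.Polynomial.fit(alphas, emp_vars, deg=1)

# The models then supply the statistical characterization for any alpha,
# avoiding a full pdf estimation per residual design
alpha_query = 0.37
mu_hat = mean_model(alpha_query)
sigma2_hat = var_model(alpha_query)
```

One such pair of models would be fitted per hypothesis (non-faulty and faulty), giving the mean and variance needed to optimize the α-Rc threshold at the current α.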
The P 0 related to the previous threshold is shown in Figure 15. The difference in behavior between the two P 0 curves and the two thresholds, related to the fixed and balanced α, indicates a difference in the decisions, which will surely affect the final estimation.
In the next section, we compare the final results and try to highlight the source of the differences.

Results Comparison
In order to compare the two scenarios, we have to examine the impact of the residual design and the resulting decisions on the estimated position. Let us first look at the effect of the two design architectures on the isolated measurements. Figure 16 illustrates the satellites isolated based on the decisions of the fixed and balanced α. The two figures show many measurements isolated in common, as well as many differences, occurring at different epochs and/or at the same epoch. Figure 17 shows only the differences between the isolated measurements in the two scenarios. For example, in the fixed α scenario, the decision was to isolate satellite 26 from epoch 1039 to 1208 (pink zone), whereas in the balanced α scenario satellite 26 is considered non-faulty and the decision was to isolate satellite 8. The same situation is repeated between epochs 1424 and 1449, with two satellites differing, and at many other epochs. On the other hand, at many epochs the fixed α decides to isolate some satellites where, using the balanced α, no isolation occurred. For example, between epochs 332 and 442 (green zone), satellites 8 and 10 are isolated by RD 0.5 but used in the final estimation by the balanced α.
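The comparison of Figure 17 (keeping only the epochs where the two scenarios disagree) amounts to a per-epoch symmetric difference of the isolated satellite sets. A minimal sketch, with an illustrative data layout (epoch → set of isolated satellite IDs) that is our assumption, not the paper's:

```python
def decision_differences(isolated_fixed, isolated_balanced):
    """Epochs where the fixed-alpha and balanced-alpha diagnostics
    disagree, with the satellites isolated by exactly one of the two
    scenarios (symmetric difference of the isolated sets)."""
    diffs = {}
    for epoch in sorted(set(isolated_fixed) | set(isolated_balanced)):
        fixed = isolated_fixed.get(epoch, set())
        balanced = isolated_balanced.get(epoch, set())
        if fixed != balanced:
            diffs[epoch] = fixed ^ balanced  # isolated by one scenario only
    return diffs
```

For instance, at an epoch where the fixed α isolates satellite 26 and the balanced α isolates satellite 8, the difference set is {26, 8}; epochs with identical decisions are dropped, reproducing the filtering behind Figure 17.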
In order to judge these decisions fairly, we present in Figure 18 their effect on the final position estimation for both scenarios, compared to the reference. Additionally, we zoom in on a selected area (black zone) to show clearly the effect of each decision on the position; note that the fixed α trajectory is plotted in orange, while the blue trajectory refers to the balanced α estimated position. For a deeper analysis, Figure 19 shows the impact of the decisions of both scenarios on the position error, calculated as the difference between the position error of the fixed α diagnostic and that of the balanced α diagnostic. The blue color in the figure means that, at these epochs, the balanced α diagnostic estimates the position better than the fixed α diagnostic, which is reflected by a decrease in the position error. As we can see, this difference can reach 50 m at some epochs. Such a difference means that the decision to use the observations at these epochs with the same weight as the evolution model was right, and it can be seen as an error detection for the evolution model, whereas at the same epochs the fixed α decisions relied on the assumption that the evolution model is better than the observation model, leading to a high error in the position estimation.
In contrast, the appearance of the red color is also logical: at some epochs, giving the observations the same confidence as the prediction, for example under bridges where the observations are distorted, is a mistake that leads to an estimation error.
These results confirm our proposition to create a diagnostic layer that is able to take the changing environment into consideration and to study each moment based on the new changes and state. Hence, even if the results of the balanced α are better than those of the fixed α, one cannot deny that an adaptive α related to the changing environment and the changing KPIs could be better still. Figure 19. The difference in decisions between the fixed and balanced α.
As final results, in order to evaluate the two diagnostic methods numerically and to show the effect of the two FDI approaches on the position estimation, Table 2 presents the mean error and the maximum error over the whole trajectory. These results underline the performance of adaptive diagnosis across the two proposed scenarios. In addition, they show that it is not always a good policy to remove measurements from the final estimation; to decide, adaptive diagnosis must be used, taking into account different factors such as KPIs and the environment. In some cases, adaptive diagnosis can act like a conventional diagnosis, such as when the vehicle moves in the open sky over its entire trajectory: the adaptive diagnosis will then provide a high degree of confidence in the observations at all times, and vice versa.
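The summary statistics of Table 2 (mean and maximum position error against the reference) can be computed as below. The 2-D horizontal-error convention and the function interface are assumptions for illustration:

```python
import numpy as np

def position_error_stats(estimate, reference):
    """Mean and maximum position error (Euclidean distance, in metres)
    of an estimated trajectory against the reference trajectory.
    estimate, reference: arrays of shape (n_epochs, 2) with x/y positions."""
    est = np.asarray(estimate, dtype=float)
    ref = np.asarray(reference, dtype=float)
    errors = np.linalg.norm(est - ref, axis=1)  # per-epoch error
    return errors.mean(), errors.max()
```

Running this once per scenario (fixed α and balanced α) on the same reference yields the two rows of a comparison table like Table 2.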

Conclusions
In this paper, the problem of a fault-tolerant localization system for autonomous vehicles is treated. First, the concept of key performance indicators, which is well known in the diagnostic domain, is introduced and the interaction between these terms is described. Subsequently, the adaptive fault-tolerant fusion was presented. After explaining the relevance of the adaptive diagnostic concept, how the fail-safe design architecture should be related to the changing environment of a vehicle, and how it should be able to provide the KPI requirements at each moment, the goal of designing an adaptive diagnostic layer through residual design was shown and detailed. The adaptive FTF approach was proposed using parametric residuals generated through the α-Rényi divergence. To adapt this parameter α, we proposed a solution called the α balance. This proposal leads to a large number of statistical characterizations needed to create the faulty and non-faulty distributions; a solution to this problem is proposed using a mathematical model based on unsupervised real data. For decision-making through a thresholding method applied to the residuals, a new criterion design based on the α-RD, called the α-Rényi criterion, was developed. This criterion offers a large choice of criteria, where each criterion is linked to the corresponding calculated divergence. For the isolation part of the diagnosis, we showed the efficiency of using unknown input observers as a separation method to isolate erroneous measurements. Finally, the whole algorithm was tested using real GNSS/odometer data on different test trajectories. The results show good performance for the adaptive diagnosis and the decision-making part.
The encouraging results obtained confirm our motivations for the choice of the proposed methodological tools, in particular the adaptive diagnosis in the informational framework. For future research, we consider using the INS with the GNSS/odometer in tight coupling in order to reach a higher level of accuracy and availability. In the diagnostic layer, the way of selecting the parameter α for residual design in the adaptive diagnosis can be designed by integrating Artificial Intelligence (AI) tools. Regarding the decision part, P 0 is calculated on the basis of a windowing method, which has some limitations regarding fast adaptation to each case and the heuristic choice of the window size, which directly affects sudden error detection. The use of an AI tool is considered for the P 0 estimation.