Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase

In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where GPS visibility is often low, the performance of the existing RAIM method may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. The challenging issue, however, is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. Then, the calibrated vision measurements are integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and fault detection rate.

In this paper, a vision-aided RAIM is proposed to improve the performance of integrity monitoring. As the satellite measurements are insufficient due to obstructions, our basic idea is to introduce landmarks, photographed by a vision system, as pseudo-satellites to enrich the navigation measurements. To aid RAIM, however, the vision measurements should be accurate and reliable. Although existing computer vision technologies perform well in vision information extraction, the vision measurements inevitably incur errors due to background interference and platform dithering. Furthermore, the vision errors may be proportional to the length of the line-of-sight, as the vision system is inherently an angle-measuring system [12]. To ensure the accuracy of the vision system, the received GPS signals are utilized to calibrate the vision measurements, so as to reduce the error of the vision system. Then, the calibrated vision measurements are utilized to expand the GPS measurement equation in order to improve the performance of integrity monitoring.
To the best of our knowledge, studies on vision-aided RAIM in the approach and landing phase are quite rare. This paper aims to improve the RAIM performance with a computer vision system in the low GPS visibility condition. The method in this paper bears some similarity to vision navigation; however, as noted by Dusha et al. [13], most emphasis in vision navigation is on positioning where the satellites are unavailable. In fact, there are only a few studies on vision/GPS integration. For example, Won et al. proposed an integrated navigation system that improves the performance of vision-based navigation by integrating the limited GPS measurements in a low GPS visibility condition [14]. Some other applications of the video-based navigation method can be found in [15]. However, these works paid comparatively little attention to utilizing the vision system to augment RAIM.
The remainder of this paper is organized as follows. The details of the proposed VA-RAIM approach are presented in Section 2. Section 3 presents the experiments that demonstrate the practical utility of our approach. Finally, conclusions are drawn in Section 4.

Overview of VA-RAIM
The baseline RAIM algorithm is mainly composed of two parts, i.e., availability and fault detection. The availability performance is a major concern of RAIM, which ascertains whether the conditions exist to perform fault detection with sufficient power. Fault detection is a safeguard of the correct function of the system and ensures that measurements do not contain significant failures [16].
The availability performance relies on the number of visible satellites and the geometrical configuration. Normally, RAIM availability requires at least five satellites in view. Besides, the protection level (PL), which is decided by the geometrical configuration, should be less than the alert limit (AL) [17]. Unfortunately, in the approach and landing phase, interference and obstructions may result in signal loss and a large mask angle, which leads to fewer visible satellites and a poor geometrical configuration. The scenario of an approach and landing phase is shown in Figure 1. In this condition, the availability performance of RAIM decreases dramatically. A typical example is China's LinZhi Airport, which is located at an elevation of 2950 m and flanked by mountains about 5000 m high [5]. In some cases, the number of visible satellites is only four, as the satellites with low elevation are blocked by the high mountains. When RAIM is available, the RAIM method should detect system faults in a timely manner to ensure the integrity of the positioning results. The purpose of our work is to develop a new RAIM method for the approach and landing phase with the aid of a vision system. To overcome the insufficiency of visible satellites, the VA-RAIM introduces landmarks photographed by the vision system as pseudo-satellites for additional measurements, as shown in Figure 1. Then, the vision measurements and the GPS observations are integrated to improve navigation integrity. The framework of our approach is shown in Figure 2. To utilize the landmark information with a vision system, a detection and matching algorithm is applied to locate the landmarks in a given image. However, the image processing will inevitably incur errors due to background interference, platform dithering, etc.
These errors consist of a random time-variant error and a time-invariant error, which may be adverse to the integrity performance of the VA-RAIM. To solve this problem, a vision model with a calibration method is proposed. Specifically, the landmarks are introduced as pseudo-satellites and the vision system is modeled as a measurement equation similar to that of the GPS system. Then, the fine GPS measurements received at a high altitude position (such as x(0) in Figure 1) are applied to calibrate the vision pseudoranges and reduce the time-invariant error. Finally, the VA-RAIM is designed by using the weighted integration of the vision and GPS measurements. The test statistic of the integrated system at discrete time k, SSE_I(k), is calculated and compared with a decision threshold T_SSE. If the test statistic is less than the decision threshold, the output is the location result of the vision/GPS integration. Otherwise, the output is an integrity warning message. In the proposed method, the vision system is applied as an aid to enrich the navigation observations and enhance the geometrical configuration, thus improving the performance of GPS integrity monitoring in the approach and landing phase.

Vision Model with Calibration
Under the framework of our approach, the vision system is applied to extract accurate vision pseudoranges from a given image to aid RAIM. Then, a calibration algorithm based on the GPS measurements is designed to reduce the error of the vision observations.

Vision Model
Inspired by the principle of the GPS positioning model [18], the landmarks are regarded as pseudo-satellites that provide similar navigation measurements, and a vision pseudorange is defined as the estimate of the distance between the user and a landmark. Different from GPS pseudoranges, however, the vision pseudoranges are calculated rather than directly measured. As shown in Figure 3, the vision pseudorange vector is composed of N_V(k) vision pseudoranges. By applying the cosine rule, the vision pseudorange vector is the solution of the over-determined equation set composed of N_C(k) = N_V(k)(N_V(k) − 1)/2 equations:

m_i^2(k) + m_j^2(k) − 2 c_ij(k) m_i(k) m_j(k) = d_ij^2,  (1)

where m_i(k) ∈ R^+ is the vision pseudorange of the i-th landmark p_i ∈ R^3 at time k, and c_ij(k) = cos θ_ij(k) ∈ [−1, 1], in which θ_ij(k) ∈ [0, π] is the angle between the two lines-of-sight of landmarks p_i and p_j ∈ R^3. As for the other term in Equation (1), d_ij = ||p_i − p_j|| ∈ R^+ is the nominal distance between landmarks p_i and p_j, which is time-invariant. As shown in Equation (1), the vision pseudorange vector is decided by d_ij^2 and c_ij(k). In this section, we take N_V(k) = 3 to simplify the following analysis, while our approach can easily be extended to any number of landmarks. The relation between the vision pseudorange error δm(k) = [δm_1(k), δm_2(k), δm_3(k)]^T and the parameter errors δd = [δd_12^2, δd_23^2, δd_31^2]^T and δc(k) = [δc_12(k), δc_23(k), δc_31(k)]^T is obtained by differentiating Equation (1):

A(k) δm(k) = δd + B(k) δc(k),  (2)

where the row of A(k) for the pair (i, j) has entries 2(m_i(k) − c_ij(k) m_j(k)) and 2(m_j(k) − c_ij(k) m_i(k)) in columns i and j, and B(k) = diag(2 m_1(k) m_2(k), 2 m_2(k) m_3(k), 2 m_3(k) m_1(k)). Then we have:

δm(k) = A^(-1)(k) B(k) δc(k) + A^(-1)(k) δd.  (3)

In Equation (3), since δd is fixed by a set of given lengths, it generates the time-invariant error of the vision system; δc(k) is decided by the detection and matching results at time k and generates the random time-variant error. The properties of δd and δc(k) are analyzed as follows.
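Solving the cosine-rule system of Equation (1) for the pseudoranges is a small nonlinear root-finding problem. A minimal Newton-iteration sketch for three landmarks (the solver choice, initial guess, and all names are ours, not the paper's implementation):

```python
import numpy as np

def vision_pseudoranges(d, c, m0=None, iters=50):
    """Solve m_i^2 + m_j^2 - 2*c_ij*m_i*m_j = d_ij^2 for the three
    pseudoranges m = (m1, m2, m3) by Newton iteration.
    d: nominal inter-landmark distances (d12, d23, d31)
    c: cosines of the angles between the lines-of-sight (c12, c23, c31)
    m0: optional initial guess for the pseudoranges
    """
    pairs = [(0, 1), (1, 2), (2, 0)]
    m = np.full(3, float(np.mean(d))) if m0 is None else np.asarray(m0, float).copy()
    for _ in range(iters):
        # Residuals of the three cosine-rule equations
        r = np.array([m[i]**2 + m[j]**2 - 2.0*c[k]*m[i]*m[j] - d[k]**2
                      for k, (i, j) in enumerate(pairs)])
        # Jacobian of the residuals with respect to m
        J = np.zeros((3, 3))
        for k, (i, j) in enumerate(pairs):
            J[k, i] = 2.0*m[i] - 2.0*c[k]*m[j]
            J[k, j] = 2.0*m[j] - 2.0*c[k]*m[i]
        m = m - np.linalg.solve(J, r)
    return m
```

With noise-free distances and cosines the iteration recovers the true ranges; with noisy inputs its output inherits exactly the error relation analyzed in Equations (2) and (3).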
If the distance of each pair of landmarks is accurately obtained, we have δd = 0 and the second term on the right of Equation (3) vanishes. However, it is very challenging to measure the distances directly. In practice, we can measure the position of a landmark to centimeter-level accuracy with real-time kinematic (RTK) positioning, and then calculate the distance. Specifically, the squared distance is calculated as d_ij^2 = ||p_i − p_j||^2, and its error is δd_ij^2 = 2(p_i − p_j)^T (δp_i − δp_j). Suppose the landmark position errors in each degree of freedom obey N(0, σ_g^2) and are independent of each other; then δd_ij^2 obeys a Gaussian distribution with zero mean and variance 8 d_ij^2 σ_g^2, which is proportional to d_ij^2.
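The stated zero-mean Gaussian distribution of δd_ij^2 with variance 8 d_ij^2 σ_g^2 can be checked numerically. A quick Monte Carlo sketch (the landmark coordinates and the σ_g value are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
pi, pj = np.array([500.0, 0.0, 0.0]), np.array([0.0, 300.0, 0.0])
u = pi - pj
d2 = u @ u                                        # d_ij^2
sigma_g = 0.05                                    # 5 cm per-axis landmark error
e_i = rng.normal(0.0, sigma_g, size=(200000, 3))  # position errors of p_i
e_j = rng.normal(0.0, sigma_g, size=(200000, 3))  # position errors of p_j
dd2 = 2.0 * (e_i - e_j) @ u                       # first-order error in d_ij^2
# The empirical variance of dd2 should match the predicted 8 * d_ij^2 * sigma_g^2
```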
With the projective transformation from object to image space [19], the parameter c_ij(k) is calculated as:

c_ij(k) = <v_i(k), v_j(k)> / (||v_i(k)|| ||v_j(k)||),  (4)

where <·,·> is the vector inner product and v_i(k) is the line-of-sight vector back-projected from p^C_i(k) ∈ R^2, the image coordinates of the i-th landmark in the camera frame, as shown in Figure 3. The landmarks in the camera frame are obtained through the detection and matching algorithm, and the detection error δp^C_i(k) is assumed to follow an independent normal distribution with zero mean and covariance matrix Σ_p ∈ R^(2×2).
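In practice this cosine is computed from the two detected image points. A sketch assuming a simple pinhole camera with focal length f, expressed in the same pixel units as the image coordinates (the pinhole model and the names are our assumptions, not the paper's camera model):

```python
import numpy as np

def cos_angle(pc_i, pc_j, f):
    """Cosine of the angle between the lines-of-sight of two landmarks,
    computed from their image-plane coordinates under a pinhole model.
    pc_i, pc_j: 2-D image coordinates; f: focal length in the same units."""
    vi = np.array([pc_i[0], pc_i[1], f])  # back-projected ray of landmark i
    vj = np.array([pc_j[0], pc_j[1], f])  # back-projected ray of landmark j
    return float(vi @ vj / (np.linalg.norm(vi) * np.linalg.norm(vj)))
```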
Linearizing Equation (4) and, as proved in the Appendix, expressing δc_ij(k) as a linear combination of δp^C_i(k) and δp^C_j(k) yields:

δc_ij(k) = μ_ij^T(k) δp^C_i(k) + μ_ji^T(k) δp^C_j(k),  (6)

which follows a normal distribution with zero mean and variance μ_ij^T(k) Σ_p μ_ij(k) + μ_ji^T(k) Σ_p μ_ji(k). Thus, δc(k) follows a normal distribution with zero mean and covariance matrix Σ_c(k).

For the RAIM algorithm, a GPS fault is defined as a pseudorange bias deviating from its nominal behavior [2]. The probabilities of simultaneous satellite faults with different numbers of visible satellites are plotted in Figure 4. Since the scenario of this paper is the approach and landing phase, a low GPS visibility condition (often fewer than 8 visible satellites), the probability of simultaneous multiple faults is below the integrity requirement of 10^-7 and can be ignored in applications. Thus, the GPS threat considered in this paper is a fault in at most one visible satellite. However, if the visible satellites are so numerous that multiple faults cannot be ignored, the faults can be detected using standalone satellite observations [9]. When used for aiding navigation, the vision system may also suffer from faults. With the introduction of the vision pseudorange, a fault of the vision system ultimately causes a vision pseudorange bias similar to a GPS pseudorange bias. As shown in Equation (3), the pseudorange fault is decided by faults in δd and δc(k). In real applications, various types of threats might cause a fault on δd or δc(k). In this paper, two typical examples are considered, i.e., a fault in the landmark location and a fault in the feature detection result during image processing.
(1) Fault on δd. If there is a fault Δp_i in the location of the i-th landmark, the error of the squared distance becomes δd_ij^2 = 2(p_i − p_j)^T (δp_i + Δp_i − δp_j).

(2) Fault on δc(k). If there is a fault Δp^C_i(k) in the feature detection result during image processing, then, combined with Equation (6), δc_ij(k) is updated to δc_ij(k) = μ_ij^T(k) (δp^C_i(k) + Δp^C_i(k)) + μ_ji^T(k) δp^C_j(k).

The probabilities of vision faults can be statistically obtained from real computer vision data; relatively high probabilities often reflect a challenging data collection environment, which will be further researched.

Vision Measurement Equation
Similar to the linear GPS measurement equation [20], the linear vision measurement equation with N_V(k) landmarks can be modeled as:

z_V(k) = H_V(k) x(k) + ε_V(k) + b_V(k),  (8)

where z_V(k) ∈ R^(N_V(k)) is the vector of vision measurements obtained from the vision pseudoranges at time k, x(k) ∈ R^4 is the state vector, which includes three position elements (x(k), y(k) and z(k)) and the receiver clock bias Cb(k), ε_V(k) ∈ R^(N_V(k)) is the measurement noise vector and b_V(k) is the measurement-fault vector. According to Equations (3) and (8), ε_V(k) and b_V(k), with consideration of the vision faults Δd and Δc(k), can be obtained as:

ε_V(k) = A^(-1)(k) B(k) δc(k),  (9)
b_V(k) = A^(-1)(k) (δd + Δd) + A^(-1)(k) B(k) Δc(k),  (10)

where ε_V(k) follows a normal distribution with zero mean and covariance matrix Σ_V(k) = A^(-1)(k) B(k) Σ_c(k) B^T(k) (A^(-1)(k))^T. According to Equation (10), different from a single fault in a GPS pseudorange, one fault in Δd or Δc(k) will impact more than one of the vision pseudorange measurements in Equation (8). In the following section, a calibration method is presented to mitigate the impact of the vision faults.

Calibration Method
In this section, the GPS measurements are applied to calibrate the vision system to reduce the error and fault of the vision measurements. First, the bias caused by the error and fault of the landmark locations, i.e., δd and Δd, is removed. Then a modified vision equation with a single vision fault is obtained.
With N_G(k) GPS satellites in view at discrete time k, the linear measurement equation is described as:

z_G(k) = H_G(k) x(k) + ε_G(k) + b_G(k),  (11)

where z_G(k) ∈ R^(N_G(k)) is the GPS measurement vector, ε_G(k) ∈ R^(N_G(k)) is the measurement noise vector, generally assumed to be normally distributed with zero mean and covariance matrix Σ_G(k), and b_G(k) is the measurement-fault vector to be detected. At the beginning of the approach and landing phase, such as x(0) in Figure 1, the satellite measurements are sufficient due to little obstruction in the high altitude environment. Under this condition, fine GPS observations can be obtained and the traditional RAIM is available to compute trustable positioning results from GPS. Specifically, the GPS positioning solution x̂(0) ∈ R^4 obeys a normal distribution:

x̂(0) = x(0) + δx(0), δx(0) ~ N(0, Σ_x(0)).  (12)

According to Equations (8)-(10), the vision equations at the initial time k = 0 and at time k > 0 are described as:

z_V(0) = H_V(0) x(0) + ε_V(0) + b_V(0)  (13)

and

z_V(k) = H_V(k) x(k) + ε_V(k) + b_V(k).  (14)

Substituting Equation (12) into Equation (13) yields the residual of the initial vision measurement against the GPS fix:

z_V(0) − H_V(0) x̂(0) = A^(-1)(0) (δd + Δd) + A^(-1)(0) B(0) (δc(0) + Δc(0)) − H_V(0) δx(0).  (15)

Since δd and Δd are time-invariant, this residual can be used to calibrate the vision measurements at time k:

z̃_V(k) = z_V(k) − A^(-1)(k) A(0) (z_V(0) − H_V(0) x̂(0)).  (16)

The vision model with calibration is obtained as:

z̃_V(k) = H_V(k) x(k) + ε̃_V(k) + b̃_V(k),  (17)

where b̃_V(k) = A^(-1)(k) B(k) Δc(k) − A^(-1)(k) B(0) Δc(0) and ε̃_V(k) = A^(-1)(k) B(k) δc(k) − A^(-1)(k) B(0) δc(0) + A^(-1)(k) A(0) H_V(0) δx(0). It is easy to obtain that ε̃_V(k) follows a normal distribution with zero mean and covariance matrix

Σ̃_V(k) = A^(-1)(k) [B(k) Σ_c(k) B^T(k) + B(0) Σ_c(0) B^T(0) + A(0) H_V(0) Σ_x(0) H_V^T(0) A^T(0)] (A^(-1)(k))^T.

By comparing the bias vectors in Equations (8) and (17), it can be seen that our method removes the bias caused by the error and fault of the landmark locations. However, the impact of a fault during image processing, i.e., Δc(k), still exists. According to Equation (17), a fault in Δc(k) or Δc(0) will ripple through all the elements of z̃_V(k). To mitigate the impacts of Δc(k) and Δc(0), a modified vision equation with a single fault is obtained as follows.

In this paper, we assume that there is one fault in Δc(k) and that the fault index is invariant over a short period of time, i.e., the fault indices in Δc(k) and Δc(0) are the same. Multiplying Equation (17) by A(k) yields the modified vision equation:

z'_V(k) = A(k) z̃_V(k) = A(k) H_V(k) x(k) + ε'_V(k) + b'_V(k),  (18)

where b'_V(k) = B(k) Δc(k) − B(0) Δc(0) and ε'_V(k) = B(k) δc(k) − B(0) δc(0) + A(0) H_V(0) δx(0), which follows a normal distribution with zero mean and covariance matrix

Σ'_V(k) = B(k) Σ_c(k) B^T(k) + B(0) Σ_c(0) B^T(0) + A(0) H_V(0) Σ_x(0) H_V^T(0) A^T(0).

In Equation (18), since B(k) and B(0) are diagonal, a single fault with the same index in Δc(k) and Δc(0) biases only the corresponding element of z'_V(k).
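The GPS-based calibration step can be sketched as follows (a simplified illustration of our reconstruction of the calibration, with a trusted initial GPS fix x0_hat; all names are ours):

```python
import numpy as np

def calibrate_vision(zV_k, A_k, zV_0, A_0, HV_0, x0_hat):
    """Remove the time-invariant part of the vision error: the residual of
    the initial vision measurement against the trusted GPS fix estimates the
    landmark-location bias in the A(0) frame, which is then mapped to epoch k
    through the inverse of A(k) and subtracted."""
    bias0 = A_0 @ (zV_0 - HV_0 @ x0_hat)       # bias estimate at epoch 0
    return zV_k - np.linalg.solve(A_k, bias0)  # calibrated measurement
```

In the synthetic check below the same constant bias, mapped through each epoch's geometry, is injected at epochs 0 and k; the calibration removes it exactly when the GPS fix is error-free.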

VA-RAIM
This section presents the VA-RAIM method with the calibrated vision model. We begin by showing the integration of the vision/GPS measurement equation. Then the procedure of PL calculation and fault detection of the VA-RAIM is presented.
By combining Equations (11) and (18), the linearized measurement model at time k for the vision/GPS integrated system is obtained as:

z_I(k) = [z_G(k); z'_V(k)] = H_I(k) x(k) + ε_I(k) + b_I(k),  (19)

where z'_V(k) = A(k) z̃_V(k) is the modified vision measurement of Equation (18), H_I(k) stacks H_G(k) and A(k) H_V(k), ε_I(k) is the stacked noise vector with block-diagonal covariance Σ_I(k), and b_I(k) is the stacked fault vector. With the aid of the vision measurements, the over-determined integrated vision/GPS system can improve the integrity performance in the approach and landing phase.

Protection Level and Availability
Protection level computation is in essence a performance safeguard to evaluate the power of fault detection. Let N_I(k) = N_G(k) + N_V(k) denote the total number of satellites and landmarks. The horizontal and vertical slopes Hslope(i) and Vslope(i) (i = 1, 2, …, N_I(k)) of the integrated system are obtained as [21]:

Hslope(i) = sqrt(K_1i^2 + K_2i^2) / sqrt((W S)_ii),  Vslope(i) = |K_3i| / sqrt((W S)_ii),

where K = (H_I^T W H_I)^(-1) H_I^T W is the weighted least-squares estimator matrix, S = I − H_I K, and W = Σ_I^(-1)(k). The HPL/VPL is obtained by multiplying the maximal slope by the pseudorange bias that is just detectable under the missed-detection requirement. If the calculated HPL/VPL is larger than the horizontal/vertical alert limit (HAL/VAL), i.e., HPL > HAL or VPL > VAL, the system is not available and an alert is provided to the user. Otherwise, the fault detection procedure is processed as follows.

Fault Detection
Fault detection methods can be classified as snapshot methods [22,23] or filtering methods [24]. The former generally evaluates the current observations only, while the latter uses both current and historical measurements. Compared with the filtering method, the snapshot scheme is more widely used due to its faster response to sudden failures [25,26]. Given the measurement Equation (19), the weighted least-squares estimate of x(k) is:

x̂(k) = (H_I^T(k) W(k) H_I(k))^(-1) H_I^T(k) W(k) z_I(k),  (20)

where W(k) = Σ_I^(-1)(k). The residual vector is defined as:

r(k) = z_I(k) − H_I(k) x̂(k).  (21)

The test statistic of the snapshot RAIM at discrete time k is:

SSE_I(k) = r^T(k) W(k) r(k).  (22)

As discussed, e.g., in [27], the test statistic SSE_I(k) follows a noncentral chi-squared distribution with N_I(k) − 4 degrees of freedom and noncentrality parameter λ^2(k). An integrity warning is output when SSE_I(k) is larger than the detection threshold T(N_I(k), P_FA); otherwise, the position estimate x̂(k) is output.
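The snapshot detection procedure above can be sketched in a few lines (a minimal illustration; the threshold T(N_I, P_FA), i.e., the (1 − P_FA) quantile of the central chi-square distribution with N_I − 4 degrees of freedom, is passed in rather than computed, and all names are ours):

```python
import numpy as np

def snapshot_raim(z, H, sigma, threshold):
    """Weighted least-squares snapshot fault detection: returns the state
    estimate, the test statistic SSE, and a fault flag.
    z: stacked measurements; H: (n x 4) geometry matrix;
    sigma: per-measurement standard deviations (independent errors assumed);
    threshold: chi-square detection threshold T(N_I, P_FA)."""
    W = np.diag(1.0 / np.asarray(sigma) ** 2)
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)  # weighted LS solution
    r = z - H @ x_hat                                  # residual vector
    sse = float(r @ W @ r)                             # test statistic
    return x_hat, sse, sse > threshold
```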

Numerical Experiments and Discussions
In this paper, three separate numerical experiments were designed to evaluate the performance of the proposed approach. The first experiment assesses the performance of the vision pseudorange with calibration. The second tests the availability result with the aid of the vision system. The third evaluates the fault detection performance under GPS and vision faults.

Vision System
Three landmarks were generated with a fixed height at Chinese LinZhi airport. They were evenly distributed on a circle with center point Op = [90.3359°, 29.3065°, 2950 m]^T in the Longitude Latitude Height (LLH) frame and a radius of 100 m. As shown in Figure 5, the azimuths of the landmarks p1, p2, p3 relative to the center point Op are 0°, 120°, and 240°, respectively. To simulate the obstructions, two mountain chains with a uniform height of 1000 m are symmetrically located 3000 m east and 3000 m west of the center point.
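The landmark layout can be reproduced in a local east-north-up (ENU) frame. A sketch (the ENU simplification and all names are ours; azimuth is measured clockwise from north):

```python
import numpy as np

def landmarks_on_circle(center_enu, radius, azimuths_deg):
    """Landmarks on a horizontal circle around `center_enu` in a local ENU
    frame; azimuth 0 deg points north, increasing clockwise toward east."""
    az = np.deg2rad(np.asarray(azimuths_deg, float))
    offsets = radius * np.stack([np.sin(az), np.cos(az), np.zeros_like(az)], axis=1)
    return np.asarray(center_enu, float) + offsets

# Three landmarks at azimuths 0, 120, 240 deg on a 100 m circle
landmarks = landmarks_on_circle(np.zeros(3), 100.0, [0.0, 120.0, 240.0])
```

The three landmarks form an equilateral triangle with side length 100·sqrt(3) m, which fixes the nominal distances d_ij used by the vision model.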
As shown in Equation (3), the error of the vision pseudorange is decided by the landmark position error (LPE) and the feature detection error (DE). In our experiment, different values of LPE and DE were set to evaluate the performance of the vision system. LPE was assumed to be at centimeter level, varying from 1 cm to 10 cm (zero LPE is an ideal scenario). DE was set as zero-mean white Gaussian noise [30] with a standard deviation of 1 pixel to 4 pixels on images with resolutions of 120 dpi, 300 dpi, 480 dpi and 720 dpi, where dpi is a unit of image resolution denoting the number of pixels per inch.

Approach and Landing Operations
The length of the approach and landing phase was set as 6000 m, starting from the point [90.2539°, 29.3065°, 3650 m]^T and ending at the point [90.3144°, 29.3065°, 3150 m]^T in the LLH frame. The total length of the simulation data is 6 × 10^6 s, composed of 10^5 sorties with 60 s for each sortie.

Vision Pseudorange with Calibration
The mean of the VP error with different LPE is shown in Figure 6. As shown in Figure 6a, the mean of the VP error without calibration is nonzero, ranging from 4.7 m down to 0.2 m. Although the landmark positions are very accurate and the LPE is at centimeter level, the LPE still causes a VP error that cannot be ignored and may raise a false alarm for integrity monitoring. Besides, as the vision system is an angle-measuring system, the error caused by the LPE increases approximately linearly with the length of the line-of-sight. As shown in Figure 6b, with the calibration algorithm, the mean of the calibrated VP error is less than 1 m during the simulation. The results illustrate that our calibration algorithm reduces the mean of the VP errors significantly. Theoretically, the mean of the calibrated VP error is 0, as shown in Equation (17); in practice the error cannot be completely removed due to the linearization error. The standard deviation of the VP error with different image resolutions and DE is shown in Figure 7. The results show that a higher resolution, a shorter line-of-sight and a smaller DE generate more accurate vision pseudoranges, while the error of the vision system is still much larger than that of the GPS system. This situation may improve as feature matching accuracy increases over the years [31]. Although the time-variant error in the vision measurements cannot be protected against directly, we can mitigate its impact by increasing the image resolution and improving the accuracy and robustness of the feature detection algorithm, which is also an important topic in the computer vision community. Furthermore, a consistency check method for the standalone vision system can be applied to reduce the impact of vision faults [12]. Image processing for aviation applications is worth investigating in our future work.

Performance Index
The performance index is defined with respect to the level of service that the system is designed to provide. The performance requirements for typical approach operations are shown in Table 1, including HAL/VAL, horizontal/vertical accuracy (HA/VA (95%)) and time to alert (TTA) [32,33]. To evaluate the availability improvement provided by the vision system, the HPL/VPL was computed during the approach and landing phase. The availability of the two methods was calculated and compared against the service levels in Table 1. If the HPL exceeds the HAL or the VPL exceeds the VAL, the integrity is said to be unavailable for the operation. The HPL/VPL curves of one sortie are shown in Figure 8. The results show that, with the aid of the vision system, the HPL decreases from 41 m to 12-26 m and the VPL decreases from 56 m to 22-40 m, and higher image resolutions yield lower protection levels for the integrated system. The reason is that the proposed VA-RAIM provides additional navigation measurements that are integrated with the GPS measurements to improve availability. Furthermore, since the lines-of-sight of the vision system are below the aircraft, the vision measurements improve the geometrical configuration and decrease the protection level. During approach and landing operations, with an accurate feature detection result on a high resolution image, the VA-RAIM can improve the availability performance for APV-I and LPV-200 applications. However, as the performance requirement is very stringent for APV-II, the method needs to be further investigated in the future.

Position Accuracy
Besides the availability requirements, the HA/VA (95%) requirements for aviation positioning during typical operations are shown in Table 1. The horizontal error (95%) and vertical error (95%) of our method are 7.1 m and 4.3 m, respectively, while those of standalone GPS navigation are 8.0 m and 5.2 m, respectively. The position results show that our method improves the accuracy and meets the HA/VA (95%) requirements of APV-I and APV-II. Recently, vision-aided positioning has been discussed for aviation applications [15], while in this paper we pay more attention to integrity during the approach and landing phase.

Time Cost
TTA is another important performance index, as shown in Table 1. The VA-RAIM takes an average of 10 ms in the simulation, since it only involves some fundamental matrix operations apart from image processing. In real applications, the image processing will be the major part of the time cost. Thus, efficient feature detection methods should be applied in VA-RAIM to meet the TTA requirement, e.g., the scale invariant feature transform (SIFT), which has proved very effective with low time cost for object tracking. For example, the time costs of SIFT for images of size (pixel × pixel) 256 × 256 and 441 × 552 are 1.7 ms and 4.4 ms, respectively [34].

GPS Fault
To evaluate the performance of our approach, the GPS RAIM and the VA-RAIM were compared in terms of fault detection. We randomly selected one visible satellite and added a fault bias to its pseudorange. The fault detection results of the different methods are shown in Figure 9, which shows that the proposed VA-RAIM performs better than the GPS RAIM method, with a higher fault detection rate under the same fault conditions. For example, when the fault bias is 50 m, the fault detection rate of the GPS RAIM is 84.3%, while that of the VA-RAIM with 300 dpi and 1 pixel DE is 97.5%, an increase of 13.2 percentage points over the GPS RAIM. With a detection power of 99%, the minimal detectable bias (MDB) [16] of the VA-RAIM with 300 dpi and 2 pixels DE is 75 m, which is 21.9% less than the GPS RAIM's 96 m. In conclusion, the proposed VA-RAIM outperforms the GPS RAIM with a higher fault detection rate and a lower MDB in the approach and landing phase.

Vision Fault
In addition, the VA-RAIM should also account for potential faults within the vision measurements. In this paper, a vision fault is defined as a fault bias on one of the feature detection results. The fault detection results with different DE on a 300 dpi image are shown in Figure 10. The experimental results show that the vision/GPS integrated system can detect faults of the vision system effectively. With a more accurate DE, the method obtains a higher fault detection rate for the vision system; future research will cover the protection levels associated with vision measurement faults.

Conclusions
In this paper, we have proposed the VA-RAIM for GPS integrity monitoring in the approach and landing phase. To solve the problem that the GPS signals are insufficient for RAIM in the approach and landing phase, the proposed method uses the vision system to enrich the navigation observations and enhance the geometrical configuration. First, a vision model with a calibration method has been presented to reduce the time-invariant error of the vision system. Then, the calibrated vision measurements are integrated with the GPS observations to improve the performance of integrity monitoring in the approach and landing phase. Experimental results have demonstrated the effectiveness of the VA-RAIM over the conventional RAIM in terms of availability and fault detection rate.
In addition, the vision system might be limited at night or in fog, rain, snow, etc., which may have a large negative impact on the performance of the VA-RAIM. To address this problem, more powerful feature detection methods are worth investigating. Since the primary concern of this paper is the measurements provided by the vision system rather than the imaging process itself, the locations of the landmarks in the camera frame were simulated with Gaussian noise. Future work will evaluate the practical utility of the VA-RAIM with real data in various scenarios.