Article

Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase

1 School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
2 School of Civil and Environmental Engineering, UNSW Australia, Sydney NSW 2052, Australia
* Author to whom correspondence should be addressed.
Sensors 2015, 15(9), 22854-22873; https://doi.org/10.3390/s150922854
Submission received: 19 March 2015 / Revised: 11 May 2015 / Accepted: 8 September 2015 / Published: 10 September 2015
(This article belongs to the Section Physical Sensors)

Abstract:
In the 1980s, Global Positioning System (GPS) receiver autonomous integrity monitoring (RAIM) was proposed to provide the integrity of a navigation system by checking the consistency of GPS measurements. However, during the approach and landing phase of a flight path, where low GPS visibility conditions are common, the performance of existing RAIM methods may not meet the stringent aviation requirements for availability and integrity due to insufficient observations. To solve this problem, a new RAIM method, named vision-aided RAIM (VA-RAIM), is proposed for GPS integrity monitoring in the approach and landing phase. By introducing landmarks as pseudo-satellites, the VA-RAIM enriches the navigation observations to improve the performance of RAIM. In the method, a computer vision system photographs and matches these landmarks to obtain additional measurements for navigation. The challenging issue, however, is that such additional measurements may suffer from vision errors. To ensure the reliability of the vision measurements, a GPS-based calibration algorithm is presented to reduce the time-invariant part of the vision errors. The calibrated vision measurements are then integrated with the GPS observations for integrity monitoring. Simulation results show that the VA-RAIM outperforms the conventional RAIM with a higher level of availability and a higher fault detection rate.

1. Introduction

With the wide utilization of satellite navigation technology, the integrity of navigation solutions has become a major issue, especially for safety-of-life applications [1,2,3]. Receiver autonomous integrity monitoring (RAIM), which is essential for a GPS receiver, is designed to assess the integrity of the received GPS signals and provide a timely warning to the user in the presence of a failure. As the approach and landing phase is the part of a flight path most prone to unsafe incidents, designing a suitable RAIM procedure is especially important to guarantee flight safety. Normally, the performance of RAIM relies on a sufficient number of visible satellites and a good geometrical configuration to check the consistency of the measurements. However, since the approach and landing phase is usually associated with low GPS visibility due to obstructions, the larger mask angle leads to insufficient visible satellites and a poor geometrical configuration. Consequently, the availability and fault detection performance of RAIM decrease dramatically [4,5,6]. It is therefore essential to further investigate RAIM methods for the approach and landing phase.
When the satellite measurements are insufficient, the conventional RAIM method cannot meet the stringent navigation performance requirements of the approach and landing phase. A logical way to solve this problem is to bring in other aids that provide additional measurements for RAIM. In recent decades, additional navigation devices such as barometric altimeters, inertial navigation systems, and other satellite constellations have been considered. For example, the vertical ranging of a barometric altimeter has been applied to enrich the navigation measurements for RAIM augmentation [7]. However, the accuracy of a barometric altimeter is susceptible to rapid atmospheric pressure changes, which are very common in the approach and landing phase. Using the velocities and attitudes provided by an inertial navigation system (INS), Bhatti et al. established Kalman filter equations for an integrated INS/GPS system to improve the performance of RAIM [8], but INS errors may drift over time and contaminate the integrity result. Inspired by the traditional random sample consensus (RANSAC) algorithm in pattern recognition, Schroth et al. designed the range consensus (RANCO) algorithm to improve multi-fault detection with the assistance of other constellations [9]. Although the RANCO algorithm performs well with sufficient satellites in simulation, many of its parameters must be set empirically. More recently, augmented RAIM methods that do not rely on additional navigation devices have also been investigated. Shi et al. proposed a receiver clock bias prediction model based on the discrete grey model to design a clock-aided RAIM method [10], although some factors affecting the receiver clock bias series are uncertain. A map-aided integrity method has been investigated for intelligent transportation systems [11]. Unfortunately, it does not improve availability, as the map-aided method simply alerts the user and switches to an alternative navigation system once GPS is unavailable.
Considering the deficiencies above, it is essential to investigate other measurement sources to aid RAIM in the approach and landing phase. In this paper, a vision-aided RAIM (VA-RAIM) method is proposed to improve the performance of integrity monitoring. As the satellite measurements are insufficient due to obstructions, our basic idea is to introduce landmarks, photographed by a vision system, as pseudo-satellites to enrich the navigation measurements. To aid RAIM, however, the vision measurements must be accurate and reliable. Although existing computer vision technologies perform well in visual information extraction, the vision measurements inevitably incur errors due to background interference and platform dithering. Furthermore, the vision errors may be proportional to the length of the line-of-sight, as the vision system is inherently an angle-measuring system [12]. To ensure the accuracy of the vision system, the received GPS signals are utilized to calibrate the vision measurements and thereby reduce the error of the vision system. The calibrated vision measurements are then used to expand the GPS measurement equation in order to improve the performance of integrity monitoring.
To the best of our knowledge, studies on vision-aided RAIM in the approach and landing phase are quite rare. This paper aims to improve RAIM performance with a computer vision system under low GPS visibility conditions. The method in this paper bears some similarity to vision navigation, although, as noted by Dusha et al. [13], most emphasis in vision navigation is on positioning when the satellites are unavailable. In fact, there are only a few studies on vision/GPS integration. For example, Won et al. proposed an integrated navigation system that improves the performance of vision-based navigation by incorporating the limited GPS measurements available under low GPS visibility [14]. Other applications of video-based navigation can be found in [15]. However, these works paid comparatively little attention to utilizing the vision system to augment RAIM.
The remainder of this paper is organized as follows. The details of the proposed VA-RAIM approach are presented in Section 2. Section 3 describes the experiments that demonstrate the practical utility of our approach. Finally, conclusions are drawn in Section 4.

2. The Proposed VA-RAIM

2.1. Overview of VA-RAIM

The baseline RAIM algorithm is mainly composed of two parts, i.e., availability and fault detection. The availability performance is a major concern of RAIM, which ascertains whether the conditions exist to perform fault detection with sufficient power. Fault detection is a safeguard of the correct function of the system and ensures that measurements do not contain significant failures [16].
The availability performance relies on the number of visible satellites and the geometrical configuration. Normally, RAIM availability requires at least five satellites in view. Besides, the protection level (PL), which is determined by the geometrical configuration, should be less than the alert limit (AL) [17]. Unfortunately, in the approach and landing phase, interference and obstructions may result in signal loss and a large mask angle, leading to fewer visible satellites and a poor geometrical configuration. The scenario of an approach and landing phase is shown in Figure 1. Under these conditions, the availability performance of RAIM decreases dramatically. A typical example is Chinese LinZhi Airport, which is located at an elevation of 2950 m and flanked by mountains about 5000 m high [5]. In some cases, the number of visible satellites is only four, as the satellites at low elevation are blocked by the mountains. When RAIM is available, it should detect system faults in a timely manner to ensure the integrity of the positioning results.
Figure 1. An approach and landing phase scenario.
The purpose of our work is to develop a new RAIM method for the approach and landing phase with the aid of a vision system. To overcome the insufficiency of visible satellites, the VA-RAIM introduces landmarks photographed by the vision system as pseudo-satellites providing additional measurements, as shown in Figure 1. The vision measurements and the GPS observations are then integrated to improve navigation integrity. The framework of our approach is shown in Figure 2. To utilize the landmark information, a detection and matching algorithm is applied to locate the landmarks in a given image. However, the image processing inevitably incurs errors due to background interference, platform dithering, etc. These errors consist of a random time-variant error and a time-invariant error, which may degrade the integrity performance of the VA-RAIM. To solve this problem, a vision model with a calibration method is proposed. Specifically, the landmarks are introduced as pseudo-satellites and the vision system is modeled by a measurement equation similar to that of the GPS system. Then, the fine GPS measurements received at a high-altitude position (such as x(0) in Figure 1) are applied to calibrate the vision pseudoranges and reduce the time-invariant error. Finally, the VA-RAIM is designed using the weighted integration of the vision and GPS measurements. The test statistic of the integrated system at discrete time k, $SSE_I(k)$, is calculated and compared with a decision threshold $T_{SSE}$. If the test statistic is less than the decision threshold, the output is the positioning result of the vision/GPS integration; otherwise, the output is an integrity warning message. In the proposed method, the vision system serves as an aid that enriches the navigation observations and enhances the geometrical configuration, thus improving the performance of GPS integrity monitoring in the approach and landing phase.
Figure 2. The framework of VA-RAIM.

2.2. Vision Model with Calibration

Under the framework of our approach, the vision system is applied to extract accurate vision pseudoranges from a given image to aid RAIM. Then, a calibration algorithm based on the GPS measurements is designed to reduce the error of the vision observations.

2.2.1. Vision Model

Inspired by the principle of the GPS positioning model [18], the landmarks are regarded as pseudo-satellites that provide similar navigation measurements, and a vision pseudorange is defined as the estimate of the distance between the user and a landmark. However, unlike GPS pseudoranges, the vision pseudoranges are calculated rather than directly measured. As shown in Figure 3, at discrete time k, denote the vision pseudorange vector $\mathbf{m}(k) = [m_1(k), m_2(k), \ldots, m_{N_V(k)}(k)]^T \in \mathbb{R}^{N_V(k)}$, which is composed of $N_V(k)$ vision pseudoranges. By applying the cosine rule, the vision pseudorange vector is the solution of the over-determined equation set composed of $N_C(k) = N_V(k)(N_V(k)-1)/2$ equations:
$$m_i(k)^2 + m_j(k)^2 - 2\,m_i(k)\,m_j(k)\,c_{ij}(k) - d_{ij}^2 = 0, \quad i \neq j, \quad i, j \in \{1, 2, \ldots, N_V(k)\} \tag{1}$$
where $m_i(k) \in \mathbb{R}^+$ is the vision pseudorange of the $i$th landmark $\mathbf{p}_i \in \mathbb{R}^3$ at time k, and $c_{ij}(k) = \cos\theta_{ij}(k) \in [-1, 1]$, in which $\theta_{ij}(k) \in [0, \pi]$ is the angle between the two lines-of-sight to landmarks $\mathbf{p}_i$ and $\mathbf{p}_j \in \mathbb{R}^3$. As for the remaining term in Equation (1), $d_{ij} = \|\mathbf{p}_i - \mathbf{p}_j\| \in \mathbb{R}^+$ is the nominal distance between landmarks $\mathbf{p}_i$ and $\mathbf{p}_j$, which is time-invariant.
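To make Equation (1) concrete, the following sketch builds a small synthetic scene (a hypothetical aircraft position and three landmarks) and recovers the vision pseudoranges from the cosine-rule system by nonlinear least squares. The geometry and the solver choice are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Equation (1): recovering vision pseudoranges m_i from the
# cosine rule, given inter-landmark distances d_ij and measured angles
# theta_ij. All scene values are hypothetical example data.
import numpy as np
from scipy.optimize import least_squares

user = np.array([0.0, 0.0, 500.0])               # aircraft position (m)
p = np.array([[100.0, 0.0, 0.0],                 # three landmarks p_i
              [-50.0, 86.6, 0.0],
              [-50.0, -86.6, 0.0]])

los = p - user                                   # lines of sight
m_true = np.linalg.norm(los, axis=1)             # true ranges
u = los / m_true[:, None]                        # unit LOS vectors
pairs = [(0, 1), (1, 2), (2, 0)]
c = np.array([u[i] @ u[j] for i, j in pairs])    # c_ij = cos(theta_ij)
d2 = np.array([np.sum((p[i] - p[j])**2) for i, j in pairs])

def residuals(m):
    # m_i^2 + m_j^2 - 2 m_i m_j c_ij - d_ij^2 = 0 for each pair
    return np.array([m[i]**2 + m[j]**2 - 2*m[i]*m[j]*c[k] - d2[k]
                     for k, (i, j) in enumerate(pairs)])

m_hat = least_squares(residuals, x0=np.full(3, 400.0)).x
```

With exact angles and distances the residuals vanish at the true ranges, so the solver recovers them from a rough initial guess; with noisy inputs the same least-squares fit yields the error behavior analyzed next.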
Figure 3. The model of vision pseudoranges.
(A) Error Analysis
As shown in Equation (1), the vision pseudorange vector is determined by $d_{ij}^2$ and $c_{ij}(k)$. In this section, we consider $N_V(k) = 3$ to simplify the analysis; our approach extends readily to any number of landmarks. The relation between the vision pseudorange error $\delta\mathbf{m}(k) = [\delta m_1(k), \delta m_2(k), \delta m_3(k)]^T$ and the parameter errors $\delta\mathbf{d} = [\delta d_{12}^2, \delta d_{23}^2, \delta d_{31}^2]^T$ and $\delta\mathbf{c}(k) = [\delta c_{12}(k), \delta c_{23}(k), \delta c_{31}(k)]^T$ is obtained by differentiating Equation (1):
$$\mathbf{A}(k)\,\delta\mathbf{m}(k) = \delta\mathbf{c}(k) + \mathbf{B}(k)\,\delta\mathbf{d} \tag{2}$$
where
$$\mathbf{A}(k) = \begin{bmatrix} \dfrac{1}{m_2(k)} - \dfrac{c_{12}(k)}{m_1(k)} & \dfrac{1}{m_1(k)} - \dfrac{c_{12}(k)}{m_2(k)} & 0 \\ 0 & \dfrac{1}{m_3(k)} - \dfrac{c_{23}(k)}{m_2(k)} & \dfrac{1}{m_2(k)} - \dfrac{c_{23}(k)}{m_3(k)} \\ \dfrac{1}{m_3(k)} - \dfrac{c_{31}(k)}{m_1(k)} & 0 & \dfrac{1}{m_1(k)} - \dfrac{c_{31}(k)}{m_3(k)} \end{bmatrix}, \quad \mathbf{B}(k) = \frac{1}{2\,m_1(k)\,m_2(k)\,m_3(k)} \begin{bmatrix} m_3(k) & 0 & 0 \\ 0 & m_1(k) & 0 \\ 0 & 0 & m_2(k) \end{bmatrix}$$
Then we have:
$$\delta\mathbf{m}(k) = \mathbf{A}(k)^{-1}\,\delta\mathbf{c}(k) + \mathbf{A}(k)^{-1}\mathbf{B}(k)\,\delta\mathbf{d} \tag{3}$$
In Equation (3), since $\delta\mathbf{d}$ is fixed by a set of given lengths, it generates the time-invariant error of the vision system, whereas $\delta\mathbf{c}(k)$ is determined by the detection and matching results at time k and generates the random time-variant error. The properties of $\delta\mathbf{d}$ and $\delta\mathbf{c}(k)$ are analyzed as follows.
If the distance between each pair of landmarks were known exactly, we would have $\delta\mathbf{d} = 0$ and the second term on the right of Equation (3) would vanish. In practice, however, it is very difficult to measure these distances directly. Instead, we can measure the position of each landmark to centimeter-level accuracy using real-time kinematic (RTK) positioning and then calculate the distance. Specifically, the squared distance is computed as $d_{ij}^2 = \|\mathbf{p}_i - \mathbf{p}_j\|^2$, and its error is $\delta d_{ij}^2 = 2(\mathbf{p}_i - \mathbf{p}_j)^T(\delta\mathbf{p}_i - \delta\mathbf{p}_j)$. Suppose the landmark position errors in each degree of freedom obey $N(0, \sigma_g^2)$ and are independent of each other; then $\delta d_{ij}^2$ follows a Gaussian distribution with zero mean and variance $8 d_{ij}^2 \sigma_g^2$, which is proportional to $d_{ij}^2$.
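The variance claim above can be checked by Monte Carlo simulation; the landmark coordinates and the 5 cm error level below are arbitrary example values.

```python
# Monte Carlo check: with landmark position errors ~ N(0, sigma_g^2) per
# axis, the squared-distance error delta(d_ij^2) = 2 (p_i - p_j)^T (dp_i - dp_j)
# should have zero mean and variance 8 d_ij^2 sigma_g^2.
import numpy as np

rng = np.random.default_rng(0)
pi, pj = np.array([120.0, 40.0, 5.0]), np.array([20.0, -30.0, 8.0])
sigma_g = 0.05                                    # 5 cm RTK-level error
d2 = np.sum((pi - pj)**2)

dpi = rng.normal(0.0, sigma_g, size=(200_000, 3))
dpj = rng.normal(0.0, sigma_g, size=(200_000, 3))
delta_d2 = 2.0 * (dpi - dpj) @ (pi - pj)          # linearized error samples

var_mc = delta_d2.var()
var_theory = 8.0 * d2 * sigma_g**2
```

The empirical variance agrees with $8 d_{ij}^2 \sigma_g^2$ to within Monte Carlo noise, confirming the linearized error model.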
With the projective transformation from object to image space [19], the parameter $c_{ij}(k)$ is calculated by
$$c_{ij}(k) = \left\langle \frac{\mathbf{p}_i^C(k)}{\|\mathbf{p}_i^C(k)\|}, \frac{\mathbf{p}_j^C(k)}{\|\mathbf{p}_j^C(k)\|} \right\rangle \tag{4}$$
where $\langle\cdot,\cdot\rangle$ is the vector inner product, and $\mathbf{p}_i^C(k) \in \mathbb{R}^2$ and $\mathbf{p}_j^C(k) \in \mathbb{R}^2$ are the $i$th and $j$th landmarks in the camera frame, as shown in Figure 3. The landmarks in the camera frame are obtained through the detection and matching algorithm, and the detection error $\delta\mathbf{p}_i^C(k)$ is assumed to follow an independent normal distribution with zero mean and covariance matrix $\Sigma_p \in \mathbb{R}^{2\times 2}$.
Linearizing Equation (4) yields:
$$\delta c_{ij}(k) = \left\langle \frac{\delta\mathbf{p}_i^C(k)}{\|\mathbf{p}_i^C(k)\|}, \frac{\mathbf{p}_j^C(k)}{\|\mathbf{p}_j^C(k)\|} \right\rangle + \left\langle \frac{\mathbf{p}_i^C(k)}{\|\mathbf{p}_i^C(k)\|}, \frac{\delta\mathbf{p}_j^C(k)}{\|\mathbf{p}_j^C(k)\|} \right\rangle \tag{5}$$
As proved in the Appendix, $\delta c_{ij}(k)$ can be written as a linear combination of $\delta\mathbf{p}_i^C(k)$ and $\delta\mathbf{p}_j^C(k)$, i.e.,
$$\delta c_{ij}(k) = \boldsymbol{\mu}_{ij}(k)^T \delta\mathbf{p}_i^C(k) + \boldsymbol{\mu}_{ji}(k)^T \delta\mathbf{p}_j^C(k) \tag{6}$$
which follows a normal distribution with zero mean and variance $\boldsymbol{\mu}_{ij}(k)^T \Sigma_p \boldsymbol{\mu}_{ij}(k) + \boldsymbol{\mu}_{ji}(k)^T \Sigma_p \boldsymbol{\mu}_{ji}(k)$. Thus, $\delta\mathbf{c}(k)$ follows a normal distribution with zero mean and covariance matrix $\Sigma_c(k) \in \mathbb{R}^{N_C(k)\times N_C(k)}$.
(B) Fault Analysis
For the RAIM algorithm, a GPS fault is defined as a pseudorange bias deviating from its nominal behavior [2]. The probabilities of simultaneous satellite faults for different numbers of visible satellites are plotted in Figure 4. Since this paper focuses on the approach and landing phase, where GPS visibility is low (often fewer than eight satellites), the probability of simultaneous multiple faults is less than the integrity requirement of $10^{-7}$ and can be ignored in practice. Thus, the GPS threat considered in this paper is a fault in at most one visible satellite. However, if the visible satellites are numerous enough that multiple faults cannot be ignored, the faults can be detected using standalone satellite observations [9].
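The single-fault argument can be reproduced with a simple binomial calculation; the per-satellite fault probability used here is an assumed illustrative value, not a figure from the paper.

```python
# Back-of-envelope version of the multi-fault argument: with an assumed
# independent per-satellite fault probability p_sat, the chance of two or
# more simultaneous faults among n visible satellites is binomial.

def p_multi_fault(n, p_sat=1e-5):
    # P(>= 2 faults) = 1 - P(0 faults) - P(exactly 1 fault)
    p0 = (1.0 - p_sat)**n
    p1 = n * p_sat * (1.0 - p_sat)**(n - 1)
    return 1.0 - p0 - p1
```

For the low-visibility approach scenario (n ≤ 8 under this assumed rate), the result stays well below a $10^{-7}$ integrity budget, which is why a single-fault hypothesis is reasonable there.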
Figure 4. Probabilities of simultaneous satellite faults.
When used to aid navigation, the vision system may also suffer from faults. With the introduction of vision pseudoranges, a fault in the vision system ultimately causes a vision pseudorange bias similar to a GPS pseudorange bias. As shown in Equation (3), the pseudorange fault is determined by the faults in $\delta\mathbf{d}$ and $\delta\mathbf{c}(k)$. In real applications, various types of threats might cause a fault in $\delta\mathbf{d}$ or $\delta\mathbf{c}(k)$. In this paper, two typical examples are considered: a fault in the landmark location and a fault in the feature detection result during image processing.
(1) Fault on δd
If there is a fault $\Delta\mathbf{p}_i$ in the location of the $i$th landmark, the error of the squared distance becomes $2(\mathbf{p}_i - \mathbf{p}_j)^T(\delta\mathbf{p}_i + \Delta\mathbf{p}_i - \delta\mathbf{p}_j) = \delta d_{ij}^2 + \Delta d_{ij}^2$, where $\Delta d_{ij}^2 = 2(\mathbf{p}_i - \mathbf{p}_j)^T \Delta\mathbf{p}_i$ is a bias caused by the location fault. As the location results are given a priori, this bias is time-invariant during the flight operations. Thus, $\delta\mathbf{d}$ is updated to $\delta\mathbf{d} + \Delta\mathbf{d}$, where $\Delta\mathbf{d} = [\Delta d_{12}^2, \Delta d_{23}^2, \Delta d_{31}^2]^T$.
(2) Fault on δc(k)
If there is a fault $\Delta\mathbf{p}_i^C(k)$ in the feature detection result during image processing, then by Equation (6), $\delta c_{ij}(k)$ is updated to
$$\delta c_{ij}(k) + \boldsymbol{\mu}_{ij}(k)^T \Delta\mathbf{p}_i^C(k) + \boldsymbol{\mu}_{ji}(k)^T \Delta\mathbf{p}_j^C(k) \tag{7}$$
Denoting $\Delta c_{ij}(k) = \boldsymbol{\mu}_{ij}(k)^T \Delta\mathbf{p}_i^C(k) + \boldsymbol{\mu}_{ji}(k)^T \Delta\mathbf{p}_j^C(k)$, $\delta\mathbf{c}(k)$ is updated to $\delta\mathbf{c}(k) + \Delta\mathbf{c}(k)$, where $\Delta\mathbf{c}(k) = [\Delta c_{12}(k), \Delta c_{23}(k), \Delta c_{31}(k)]^T$.
The probabilities of vision faults can be estimated statistically from real computer vision data; relatively high fault probabilities often reflect a challenging data collection environment, which will be investigated in future work.
(C) Vision Measurement Equation
Similar to the linear GPS measurement equation [20], the linear vision measurement equation with $N_V(k)$ landmarks can be modeled as:
$$\mathbf{z}_V(k) = \mathbf{H}_V(k)\,\mathbf{x}(k) + \boldsymbol{\varepsilon}_V(k) + \mathbf{b}_V(k) \tag{8}$$
where $\mathbf{z}_V(k) \in \mathbb{R}^{N_V(k)}$ is the vector of vision measurements obtained from the vision pseudoranges at time k, and $\mathbf{H}_V(k) \in \mathbb{R}^{N_V(k)\times 4}$ is the vision observation matrix. $\mathbf{x}(k) \in \mathbb{R}^4$ is the state vector, which includes three position elements ($x(k)$, $y(k)$, and $z(k)$) and the receiver clock bias $C_b(k)$. $\boldsymbol{\varepsilon}_V(k) \in \mathbb{R}^{N_V(k)}$ and $\mathbf{b}_V(k) \in \mathbb{R}^{N_V(k)}$ are the random error and bias of the vision pseudoranges, respectively.
According to Equations (3) and (8), $\boldsymbol{\varepsilon}_V(k)$ and $\mathbf{b}_V(k)$, taking into account the vision faults $\Delta\mathbf{d}$ and $\Delta\mathbf{c}(k)$, are obtained as:
$$\boldsymbol{\varepsilon}_V(k) = \mathbf{A}(k)^{-1}\,\delta\mathbf{c}(k) \tag{9}$$
$$\mathbf{b}_V(k) = \mathbf{A}(k)^{-1}\mathbf{B}(k)\,(\delta\mathbf{d} + \Delta\mathbf{d}) + \mathbf{A}(k)^{-1}\,\Delta\mathbf{c}(k) \tag{10}$$
where $\boldsymbol{\varepsilon}_V(k)$ follows a normal distribution with zero mean and covariance matrix $\Sigma_V(k) = \mathbf{A}(k)^{-1}\,\Sigma_c(k)\,(\mathbf{A}(k)^{-1})^T$. According to Equation (10), unlike a single fault in a GPS pseudorange, one fault in $\Delta\mathbf{d}$ or $\Delta\mathbf{c}(k)$ affects more than one of the vision pseudorange measurements in Equation (8). In the following section, a calibration method is presented to mitigate the impact of the vision faults.

2.2.2. Calibration Method

In this section, the GPS measurements are applied to calibrate the vision system to reduce the errors and faults of the vision measurements. First, the bias caused by the error and fault of the landmark locations, i.e., $\delta\mathbf{d}$ and $\Delta\mathbf{d}$, is removed. Then a modified vision equation with a single vision fault is obtained.
With NG(k) GPS satellites in view at discrete time k, the linear measurement equation is described as:
$$\mathbf{z}_G(k) = \mathbf{H}_G(k)\,\mathbf{x}(k) + \boldsymbol{\varepsilon}_G(k) + \mathbf{b}_G(k) \tag{11}$$
where $\mathbf{z}_G(k) \in \mathbb{R}^{N_G(k)}$ is the vector of GPS measurements at time k, and $\mathbf{H}_G(k) \in \mathbb{R}^{N_G(k)\times 4}$ is the observation matrix, which reflects the geometrical configuration. $\boldsymbol{\varepsilon}_G(k) \in \mathbb{R}^{N_G(k)}$ is the measurement noise vector, generally assumed to be normally distributed with zero mean and covariance matrix $\Sigma_G(k) \in \mathbb{R}^{N_G(k)\times N_G(k)}$, and $\mathbf{b}_G(k) \in \mathbb{R}^{N_G(k)}$ is the measurement fault vector to be detected.
At the beginning of the approach and landing phase, such as x(0) in Figure 1, the satellite measurements are sufficient because there is little obstruction in the high-altitude environment. Under this condition, fine GPS observations can be obtained and the traditional RAIM is available to compute trustworthy positioning results from GPS. Specifically, the GPS positioning solution $\tilde{\mathbf{x}}(0) \in \mathbb{R}^4$ obeys the normal distribution $N(\mathbf{x}(0), \Delta_G(0))$, where the covariance matrix $\Delta_G(0) = (\mathbf{H}_G(0)^T \Sigma_G(0)^{-1} \mathbf{H}_G(0))^{-1} \in \mathbb{R}^{4\times 4}$.
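The initial-fix covariance $\Delta_G(0)$ is a standard weighted least-squares covariance; a minimal numerical sketch, using a random stand-in for an 8-satellite geometry and hypothetical per-satellite variances:

```python
# Sketch of Delta_G(0) = (H_G(0)^T Sigma_G(0)^-1 H_G(0))^-1.
# The geometry HG (unit-LOS columns plus a clock column) and the noise
# levels are illustrative stand-ins, not real constellation data.
import numpy as np

rng = np.random.default_rng(5)
HG = np.hstack([rng.normal(size=(8, 3)), np.ones((8, 1))])   # 8 x 4 geometry
SigmaG = np.diag(rng.uniform(0.8, 1.5, 8))**2                # per-satellite variances
DeltaG = np.linalg.inv(HG.T @ np.linalg.inv(SigmaG) @ HG)    # 4 x 4 fix covariance
```

The diagonal of `DeltaG` gives the position and clock variances of the initial fix, which enter the calibrated vision covariance later in Section 2.2.2.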
According to Equations (8)–(10), the vision equations at the initial time k = 0 and at time k > 0 are described as:
$$\mathbf{z}_V(0) = \mathbf{H}_V(0)\,\mathbf{x}(0) + \mathbf{A}(0)^{-1}\delta\mathbf{c}(0) + \mathbf{A}(0)^{-1}\mathbf{B}(0)(\delta\mathbf{d}+\Delta\mathbf{d}) + \mathbf{A}(0)^{-1}\Delta\mathbf{c}(0) \tag{12}$$
and
$$\mathbf{z}_V(k) = \mathbf{H}_V(k)\,\mathbf{x}(k) + \mathbf{A}(k)^{-1}\delta\mathbf{c}(k) + \mathbf{A}(k)^{-1}\mathbf{B}(k)(\delta\mathbf{d}+\Delta\mathbf{d}) + \mathbf{A}(k)^{-1}\Delta\mathbf{c}(k) \tag{13}$$
Multiplying Equation (12) by $\mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\mathbf{A}(0)$ and subtracting the result from Equation (13) to eliminate the error and fault of the landmark locations, we have
$$\mathbf{z}_V(k) = \mathbf{H}_V(k)\mathbf{x}(k) + \mathbf{A}(k)^{-1}\left(\delta\mathbf{c}(k)+\Delta\mathbf{c}(k)\right) + \mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\left(\mathbf{A}(0)\mathbf{z}_V(0) - \delta\mathbf{c}(0) - \Delta\mathbf{c}(0)\right) - \mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\mathbf{A}(0)\mathbf{H}_V(0)\mathbf{x}(0) \tag{14}$$
Substituting $\mathbf{x}(0) = \tilde{\mathbf{x}}(0) - (\tilde{\mathbf{x}}(0) - \mathbf{x}(0))$ into Equation (14) yields:
$$\mathbf{z}_V(k) + \mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\mathbf{A}(0)\left(\mathbf{H}_V(0)\tilde{\mathbf{x}}(0) - \mathbf{z}_V(0)\right) = \mathbf{H}_V(k)\mathbf{x}(k) + \mathbf{A}(k)^{-1}\left(\delta\mathbf{c}(k)+\Delta\mathbf{c}(k)\right) + \mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\left(\mathbf{A}(0)\mathbf{H}_V(0)(\tilde{\mathbf{x}}(0)-\mathbf{x}(0)) - \delta\mathbf{c}(0) - \Delta\mathbf{c}(0)\right) \tag{15}$$
Denote
$$\hat{\mathbf{z}}_V(k) = \mathbf{z}_V(k) + \mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\mathbf{A}(0)\left(\mathbf{H}_V(0)\tilde{\mathbf{x}}(0) - \mathbf{z}_V(0)\right) \tag{16}$$
The vision model with calibration is obtained as:
$$\hat{\mathbf{z}}_V(k) = \mathbf{H}_V(k)\,\mathbf{x}(k) + \hat{\boldsymbol{\varepsilon}}_V(k) + \hat{\mathbf{b}}_V(k) \tag{17}$$
where $\hat{\boldsymbol{\varepsilon}}_V(k) = \mathbf{A}(k)^{-1}\left(\delta\mathbf{c}(k) - \mathbf{B}(k)\mathbf{B}(0)^{-1}\delta\mathbf{c}(0)\right) + \mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\mathbf{A}(0)\mathbf{H}_V(0)(\tilde{\mathbf{x}}(0)-\mathbf{x}(0))$ and $\hat{\mathbf{b}}_V(k) = \mathbf{A}(k)^{-1}\left(\Delta\mathbf{c}(k) - \mathbf{B}(k)\mathbf{B}(0)^{-1}\Delta\mathbf{c}(0)\right)$. It is straightforward to show that $\hat{\boldsymbol{\varepsilon}}_V(k)$ follows a normal distribution with zero mean and covariance matrix
$$\hat{\Sigma}_V(k) = \Sigma_V(k) + \mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\Sigma_c(0)\mathbf{B}(0)^{-1}\mathbf{B}(k)(\mathbf{A}(k)^{-1})^T + \mathbf{A}(k)^{-1}\mathbf{B}(k)\mathbf{B}(0)^{-1}\mathbf{A}(0)\mathbf{H}_V(0)\Delta_G(0)\mathbf{H}_V(0)^T\mathbf{A}(0)^T\mathbf{B}(0)^{-1}\mathbf{B}(k)(\mathbf{A}(k)^{-1})^T$$
Comparing the bias vectors in Equations (8) and (17) shows that our method removes the bias caused by the error and fault of the landmark locations. However, the impact of faults during image processing, i.e., $\Delta\mathbf{c}(k)$, still exists. According to Equation (17), a fault in $\Delta\mathbf{c}(k)$ or $\Delta\mathbf{c}(0)$ will ripple through all the elements of $\hat{\mathbf{z}}_V(k)$. To mitigate the impacts of $\Delta\mathbf{c}(k)$ and $\Delta\mathbf{c}(0)$, a modified vision equation with a single fault is obtained as follows.
In this paper, we assume that there is one fault in $\Delta\mathbf{c}(k)$ and that the fault index is invariant over a short period of time, i.e., the fault indices in $\Delta\mathbf{c}(k)$ and $\Delta\mathbf{c}(0)$ are the same. Multiplying Equation (17) by $\mathbf{A}(k)$ yields the modified vision equation:
$$\bar{\mathbf{z}}_V(k) = \bar{\mathbf{H}}_V(k)\,\mathbf{x}(k) + \bar{\boldsymbol{\varepsilon}}_V(k) + \bar{\mathbf{b}}_V(k) \tag{18}$$
where $\bar{\mathbf{z}}_V(k) = \mathbf{A}(k)\hat{\mathbf{z}}_V(k)$, $\bar{\mathbf{H}}_V(k) = \mathbf{A}(k)\mathbf{H}_V(k)$, and $\bar{\boldsymbol{\varepsilon}}_V(k) = \delta\mathbf{c}(k) - \mathbf{B}(k)\mathbf{B}(0)^{-1}\delta\mathbf{c}(0) + \mathbf{B}(k)\mathbf{B}(0)^{-1}\mathbf{A}(0)\mathbf{H}_V(0)(\tilde{\mathbf{x}}(0)-\mathbf{x}(0))$, which follows a normal distribution with zero mean and covariance matrix
$$\bar{\Sigma}_V(k) = \mathbf{A}(k)\Sigma_V(k)\mathbf{A}(k)^T + \mathbf{B}(k)\mathbf{B}(0)^{-1}\Sigma_c(0)\mathbf{B}(0)^{-1}\mathbf{B}(k) + \mathbf{B}(k)\mathbf{B}(0)^{-1}\mathbf{A}(0)\mathbf{H}_V(0)\Delta_G(0)\mathbf{H}_V(0)^T\mathbf{A}(0)^T\mathbf{B}(0)^{-1}\mathbf{B}(k)$$
In Equation (18), $\bar{\mathbf{b}}_V(k) = \Delta\mathbf{c}(k) - \mathbf{B}(k)\mathbf{B}(0)^{-1}\Delta\mathbf{c}(0)$. According to Equation (2), $\mathbf{B}(k)$ and $\mathbf{B}(0)^{-1}$ are diagonal matrices, so the fault indices in $\Delta\mathbf{c}(k)$ and $\mathbf{B}(k)\mathbf{B}(0)^{-1}\Delta\mathbf{c}(0)$ are the same within a short period of time. Hence, if there is one fault in $\Delta\mathbf{c}(k)$, only one fault will exist in the modified observation $\bar{\mathbf{z}}_V(k)$. In the following section, the modified vision equation is integrated with the GPS equation for integrity monitoring.
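The cancellation performed by the calibration can be verified numerically. The sketch below uses random stand-in matrices of the right shapes (diagonal B, invertible A) and a noise-free, perfectly known initial fix, and checks that the time-invariant landmark term drops out after applying Equation (16); it is a shape-level illustration, not the paper's simulation.

```python
# Numerical sketch of the calibration step: the correction term
# A(k)^-1 B(k) B(0)^-1 A(0) (H_V(0) x(0) - z_V(0)) cancels the
# time-invariant landmark-location bias delta_d (error plus fault).
import numpy as np

rng = np.random.default_rng(1)
n = 3
A0 = rng.normal(size=(n, n)) + 3.0 * np.eye(n)   # stand-in A(0), invertible
Ak = rng.normal(size=(n, n)) + 3.0 * np.eye(n)   # stand-in A(k)
B0 = np.diag(rng.uniform(0.5, 1.5, n))           # diagonal B(0)
Bk = np.diag(rng.uniform(0.5, 1.5, n))           # diagonal B(k)
H0, Hk = rng.normal(size=(n, 4)), rng.normal(size=(n, 4))
x0, xk = rng.normal(size=4), rng.normal(size=4)
delta_d = rng.normal(size=n)                     # landmark-location error + fault

# Noise-free vision measurements per Equations (12) and (13)
zV0 = H0 @ x0 + np.linalg.inv(A0) @ B0 @ delta_d
zVk = Hk @ xk + np.linalg.inv(Ak) @ Bk @ delta_d

# Calibration per Equation (16), assuming a perfect initial fix
M = np.linalg.inv(Ak) @ Bk @ np.linalg.inv(B0) @ A0
z_cal = zVk + M @ (H0 @ x0 - zV0)
```

After calibration, `z_cal` equals `Hk @ xk` exactly (up to round-off): the `delta_d` contribution has been removed, while the raw `zVk` still carries it.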

2.3. VA-RAIM

This section presents the VA-RAIM method with the calibrated vision model. We begin by presenting the integrated vision/GPS measurement equation. Then the PL calculation and fault detection procedures of the VA-RAIM are described.
By combining Equations (11) and (18), the linearized measurement model at time k for the vision/GPS integrated system is obtained as:
$$\mathbf{z}_I(k) = \mathbf{H}_I(k)\,\mathbf{x}(k) + \boldsymbol{\varepsilon}_I(k) + \mathbf{b}_I(k) \tag{19}$$
where $\mathbf{z}_I(k) = \begin{bmatrix}\mathbf{z}_G(k)\\ \bar{\mathbf{z}}_V(k)\end{bmatrix}$, $\mathbf{H}_I(k) = \begin{bmatrix}\mathbf{H}_G(k)\\ \bar{\mathbf{H}}_V(k)\end{bmatrix}$, $\boldsymbol{\varepsilon}_I(k) = \begin{bmatrix}\boldsymbol{\varepsilon}_G(k)\\ \bar{\boldsymbol{\varepsilon}}_V(k)\end{bmatrix}$, and $\mathbf{b}_I(k) = \begin{bmatrix}\mathbf{b}_G(k)\\ \bar{\mathbf{b}}_V(k)\end{bmatrix}$. The random error $\boldsymbol{\varepsilon}_I(k)$ follows a normal distribution with zero mean and covariance matrix $\Sigma_I(k) = \begin{bmatrix}\Sigma_G(k) & 0\\ 0 & \bar{\Sigma}_V(k)\end{bmatrix}$.
With the aid of vision measurements, the over-determined integrated vision/GPS system can improve the integrity performance in the approach and landing phase.
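The block structure of Equation (19) amounts to simple concatenation; a minimal sketch with assumed dimensions (five satellites, three landmarks) and random stand-in values:

```python
# Shape sketch of the stacked vision/GPS system: measurements, geometry
# matrices, and covariances concatenate block-wise. Values are random
# placeholders with illustrative dimensions.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)
NG, NV = 5, 3
zG, zV = rng.normal(size=NG), rng.normal(size=NV)
HG, HV = rng.normal(size=(NG, 4)), rng.normal(size=(NV, 4))
SG = np.diag(rng.uniform(1.0, 2.0, NG))           # GPS covariance
SV = np.diag(rng.uniform(4.0, 9.0, NV))           # (calibrated) vision covariance

zI = np.concatenate([zG, zV])                     # stacked measurements
HI = np.vstack([HG, HV])                          # stacked geometry
SI = block_diag(SG, SV)                           # GPS/vision errors uncorrelated
```

The zero off-diagonal blocks of `SI` encode the assumption that GPS noise and calibrated vision noise are mutually independent.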

2.3.1. Protection Level and Availability

Protection level computation is in essence a performance safeguard that evaluates the power of fault detection. Denoting $N_I(k) = N_G(k) + N_V(k)$ as the total number of satellites and landmarks, the slopes $Hslope(i)$ and $Vslope(i)$ ($i = 1, 2, \ldots, N_I(k)$) of the integrated system are obtained as [21]:
$$Hslope(i) = \frac{\sqrt{A_{1i}^2 + A_{2i}^2}}{\sqrt{S_{ii}}} \tag{20}$$
$$Vslope(i) = \frac{|A_{3i}|}{\sqrt{S_{ii}}} \tag{21}$$
where $A_{1i}$, $A_{2i}$, and $A_{3i}$ are elements of the matrix $\mathbf{A}_I(k) = (\mathbf{H}_I(k)^T \Sigma_I(k)^{-1} \mathbf{H}_I(k))^{-1} \mathbf{H}_I(k)^T \Sigma_I(k)^{-1}$, $S_{ii}$ is the $i$th diagonal element of the matrix $\mathbf{S}_I(k) = \mathbf{I}_{N_I(k)} - \mathbf{H}_I(k)\mathbf{A}_I(k)$, and $\mathbf{I}_{N_I(k)}$ is the $N_I(k) \times N_I(k)$ identity matrix.
The horizontal/vertical PL (HPL/VPL) is then calculated as
$$HPL = \max_i\left(Hslope(i)\right)\sqrt{T(N_I(k), P_{FA})} + k(P_{MD})\sqrt{J_{11} + J_{22}} \tag{22}$$
$$VPL = \max_i\left(Vslope(i)\right)\sqrt{T(N_I(k), P_{FA})} + k(P_{MD})\sqrt{J_{33}} \tag{23}$$
where $T(N_I(k), P_{FA})$ is a threshold value, a function of $N_I(k)$ and the probability of false alarm (FA) $P_{FA}$, derived from a chi-square distribution with $N_I(k) - 4$ degrees of freedom; $k(P_{MD})$ is the number of standard deviations corresponding to a specified probability of missed detection (MD) $P_{MD}$; and $J_{11}$, $J_{22}$, and $J_{33}$ are the diagonal elements of the matrix $\Delta_I(k) = (\mathbf{H}_I(k)^T \Sigma_I(k)^{-1} \mathbf{H}_I(k))^{-1}$.
If the calculated HPL/VPL is larger than the horizontal/vertical alert limit (HAL/VAL), i.e., HPL > HAL or VPL > VAL, the system is not available and an alert is issued to the user. Otherwise, the fault detection procedure proceeds as follows.
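The PL computation in Equations (20)–(23) can be sketched as follows; the slope and threshold expressions follow the standard weighted-RAIM form, and the geometry, $P_{FA}$, and $P_{MD}$ values are illustrative assumptions.

```python
# Sketch of the protection-level computation for an n x 4 geometry H
# with measurement covariance Sigma (standard weighted-RAIM form).
import numpy as np
from scipy.stats import chi2, norm

def protection_levels(H, Sigma, p_fa=1e-5, p_md=1e-3):
    n = H.shape[0]
    W = np.linalg.inv(Sigma)
    Delta = np.linalg.inv(H.T @ W @ H)          # Delta_I(k); J_11..J_33 on diagonal
    A = Delta @ H.T @ W                         # A_I(k), 4 x n
    s = np.sqrt(np.diag(np.eye(n) - H @ A))     # sqrt(S_ii)
    hslope = np.sqrt(A[0]**2 + A[1]**2) / s     # Equation (20)
    vslope = np.abs(A[2]) / s                   # Equation (21)
    T = chi2.ppf(1.0 - p_fa, df=n - 4)          # threshold T(N_I, P_FA)
    k_md = norm.ppf(1.0 - p_md)                 # k(P_MD)
    hpl = hslope.max() * np.sqrt(T) + k_md * np.sqrt(Delta[0, 0] + Delta[1, 1])
    vpl = vslope.max() * np.sqrt(T) + k_md * np.sqrt(Delta[2, 2])
    return hpl, vpl

rng = np.random.default_rng(3)
H_demo = np.hstack([rng.normal(size=(8, 3)), np.ones((8, 1))])  # stand-in geometry
hpl, vpl = protection_levels(H_demo, np.eye(8))
```

Note that inflating the measurement covariance leaves the slopes unchanged but grows the $k(P_{MD})$ term, so the protection levels increase, as expected.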

2.3.2. Fault Detection

Fault detection methods can be classified as snapshot methods [22,23] or filtering methods [24]. The former generally uses only the current observations, while the latter uses both current and historical measurements. Compared with the filtering method, the snapshot scheme is more widely used due to its faster response to sudden failures [25,26]. Given the measurement Equation (19), the weighted least-squares estimate of $\mathbf{x}(k)$ is given by:
$$\tilde{\mathbf{x}}(k) = \mathbf{A}_I(k)\,\mathbf{z}_I(k) \tag{24}$$
Then, the residual vector is defined as:
$$\mathbf{r}_I(k) = \mathbf{S}_I(k)\left(\boldsymbol{\varepsilon}_I(k) + \mathbf{b}_I(k)\right) \tag{25}$$
The test statistic of the snapshot RAIM at discrete time k is given by:
$$SSE_I(k) = \mathbf{r}_I(k)^T\,\Sigma_I(k)^{-1}\,\mathbf{r}_I(k) \tag{26}$$
As discussed, e.g., in [27], the test statistic $SSE_I(k)$ follows a noncentral chi-squared distribution with $N_I(k) - 4$ degrees of freedom and noncentrality parameter $\lambda_I(k)^2 = \mathbf{b}_I(k)^T \Sigma_I(k)^{-1} \mathbf{S}_I(k) \mathbf{b}_I(k)$. An integrity warning is output when $SSE_I(k)$ is larger than the detection threshold $T(N_I(k), P_{FA})$; otherwise, the position $\tilde{\mathbf{x}}(k)$ is output.
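A minimal end-to-end sketch of the snapshot test (Equations (24)–(26)): weighted least squares, residual, chi-square statistic, and a deliberately injected 100 m bias on one channel. The satellite geometry and noise levels are illustrative assumptions, not the paper's simulation settings.

```python
# Snapshot fault-detection sketch: the fault-free SSE stays below the
# chi-square threshold, while a large injected bias trips it.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
az = np.radians(np.arange(8) * 45.0)                 # spread azimuths
el = np.radians(np.linspace(15.0, 75.0, 8))          # spread elevations
H = np.column_stack([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el),
                     np.ones(8)])                    # LOS rows + clock column
Sigma = np.eye(8)                                    # unit-variance noise
W = np.linalg.inv(Sigma)
A = np.linalg.inv(H.T @ W @ H) @ H.T @ W             # WLS gain (Equation (24))

def sse(z):
    r = z - H @ (A @ z)                              # residual r_I(k)
    return float(r @ W @ r)                          # SSE_I(k), Equation (26)

T = chi2.ppf(1.0 - 1e-5, df=8 - 4)                   # detection threshold
x_true = np.array([10.0, -5.0, 3.0, 1.0])
z_ok = H @ x_true + rng.normal(0.0, 1.0, 8)          # fault-free epoch
z_bad = z_ok.copy()
z_bad[2] += 100.0                                    # inject a 100 m bias
```

The residual projector annihilates $\mathbf{H}\mathbf{x}$, so the statistic responds only to noise and bias; with the injected bias the noncentrality pushes it far above the threshold.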

3. Numerical Experiments and Discussions

Three separate numerical experiments were designed to evaluate the performance of the proposed approach. The first experiment assesses the performance of the vision pseudoranges with calibration. The second tests the availability achieved with the aid of the vision system. The final experiment evaluates the fault detection performance of the VA-RAIM compared with the conventional GPS RAIM algorithm.

3.1. Simulation Data

Since real flight data for the approach and landing phase are very difficult to obtain, simulation data were used to evaluate the performance of our method. The simulation data are described as follows.

3.1.1. GPS System

In the simulation, the 24-satellite GPS constellation was used to generate the satellite observations. The pseudorange noises follow the same uncorrelated Gaussian distribution, and the diagonal covariance matrix $\Sigma_G(k)$ satisfies
$$\Sigma_G(k)(i,i) = \sigma_{URA(k),i}^2 + \sigma_{tropo(k),i}^2 + \sigma_{user(k),i}^2 \tag{27}$$
where $\Sigma_G(k)(i,i)$ is the $i$th diagonal element of $\Sigma_G(k)$, and $\sigma_{URA(k),i}$ is the user range accuracy (URA) of the $i$th satellite at time k, which is set to 0.75 m [28]. The error model for the tropospheric delay $\sigma_{tropo(k),i}$ is defined by:
$$\sigma_{tropo(k),i} = 0.12 \times \frac{1.001}{\sqrt{0.002001 + \sin^2(\theta_i(k))}} \tag{28}$$
where $\theta_i(k)$ is the elevation angle of the $i$th satellite. As for the remaining term in Equation (27), the user error $\sigma_{user(k),i}$ is modeled according to [29] as:
$$\sigma_{user(k),i} = \frac{\sqrt{f_{L1}^4 + f_{L5}^4}}{f_{L1}^2 - f_{L5}^2}\,\sqrt{\sigma_{MP}^2 + \sigma_{Noise}^2} \tag{29}$$
where $\sigma_{MP} = 0.13 + 0.53\exp(-18\,\theta_i(k)/\pi)$, $\sigma_{Noise} = 0.15 + 0.43\exp(-180\,\theta_i(k)/(6.9\pi))$, $f_{L1} = 1575.42$ MHz, and $f_{L5} = 1176.45$ MHz. The bias of the faulty pseudorange was set manually to assess the performance of integrity monitoring.
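Collecting Equations (27)–(29) into one function gives the per-satellite pseudorange standard deviation as a function of elevation. The negative signs in the exponentials are restored on the assumption that the multipath and noise terms decay with elevation, as in the usual dual-frequency user-error model; treat the exact constants as the paper's settings.

```python
# GPS pseudorange error budget, Equations (27)-(29), as a function of
# the satellite elevation angle (radians).
import numpy as np

F_L1, F_L5 = 1575.42e6, 1176.45e6                 # carrier frequencies (Hz)

def sigma_pseudorange(elev_rad, sigma_ura=0.75):
    sigma_tropo = 0.12 * 1.001 / np.sqrt(0.002001 + np.sin(elev_rad)**2)
    sigma_mp = 0.13 + 0.53 * np.exp(-18.0 * elev_rad / np.pi)        # = exp(-el_deg/10)
    sigma_noise = 0.15 + 0.43 * np.exp(-180.0 * elev_rad / (6.9 * np.pi))
    iono_free = np.sqrt(F_L1**4 + F_L5**4) / (F_L1**2 - F_L5**2)     # L1/L5 factor
    sigma_user = iono_free * np.sqrt(sigma_mp**2 + sigma_noise**2)
    return np.sqrt(sigma_ura**2 + sigma_tropo**2 + sigma_user**2)
```

As expected, the budget is dominated by the fixed URA at high elevation and grows toward the mask angle, where tropospheric and multipath terms inflate.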

3.1.2. Vision System

Three landmarks with a fixed height were generated at Chinese LinZhi Airport. They were evenly distributed on a circle with center point Op = [90.3359°, 29.3065°, 2950 m]T in the Longitude-Latitude-Height (LLH) frame and a radius of 100 m. As shown in Figure 5, the azimuths of the landmarks p1, p2, and p3 relative to the center point Op are 0°, 120°, and 240°, respectively. To simulate obstructions, two mountain chains with uniform heights of 1000 m are located symmetrically 3000 m east and 3000 m west of the center point.
As shown in Equation (3), the error of a vision pseudorange is determined by the landmark position error (LPE) and the feature detection error (DE). In our experiments, different values of LPE and DE were set to evaluate the performance of the vision system. LPE was assumed to be at centimeter level, varying from 1 cm to 10 cm (zero LPE is an ideal scenario). DE was modeled as white Gaussian noise [30] with a standard deviation of 1 pixel to 4 pixels on images with resolutions of 120 dpi, 300 dpi, 480 dpi, and 720 dpi, where dpi (dots per inch) denotes the number of pixels per inch.
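The simulated landmark layout can be written down directly; the sketch below places the three landmarks at azimuths 0°, 120°, and 240° on a 100 m circle in a local East-North-Up frame centered at the runway reference point (a simplification of the LLH description above).

```python
# Landmark layout for the simulation: three landmarks evenly spaced on a
# 100 m circle, expressed in a local East-North-Up (ENU) frame.
import numpy as np

radius = 100.0
azimuths = np.radians([0.0, 120.0, 240.0])
# Azimuth measured clockwise from north: east = r*sin(az), north = r*cos(az)
landmarks_enu = np.stack([radius * np.sin(azimuths),
                          radius * np.cos(azimuths),
                          np.zeros(3)], axis=1)
```

The three landmarks form an equilateral triangle with side length $100\sqrt{3} \approx 173.2$ m, which fixes the nominal distances $d_{ij}$ used in Equation (1).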
Figure 5. The positions of the landmarks.

3.1.3. Approach and Landing Operations

The approach and landing phase was 6000 m long, starting from the point [90.2539°, 29.3065°, 3650 m]T and ending at the point [90.3144°, 29.3065°, 3150 m]T in the LLH frame. The total simulated duration is 6 × 106 s, comprising 105 sorties of 60 s each.
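A single sortie can be sketched as a straight, constant-speed descent between the two endpoints (a simplification — the paper does not state the simulated speed profile, so linear interpolation over the 60 s sortie is our assumption):

```python
START = (90.2539, 29.3065, 3650.0)  # LLH at the start of the approach
END = (90.3144, 29.3065, 3150.0)    # LLH at the end, 6000 m along track
SORTIE_DURATION = 60.0              # seconds per sortie

def aircraft_llh(t):
    """Aircraft LLH at time t within one sortie, assuming a straight,
    constant-speed approach from START to END."""
    frac = min(max(t / SORTIE_DURATION, 0.0), 1.0)
    return tuple(s + frac * (e - s) for s, e in zip(START, END))
```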

3.2. Vision Pseudorange with Calibration

The mean of the VP error with different LPE is shown in Figure 6. As shown in Figure 6a, without calibration the mean of the VP error is nonzero, ranging from 0.2 m to 4.7 m. Even though the landmark positions are very accurate and the LPE is at the centimeter level, the LPE still causes a VP error that cannot be ignored and may raise a false alarm in integrity monitoring. Moreover, because the vision system is an angle-measuring system, the error caused by the LPE increases approximately linearly with the length of the line-of-sight. As shown in Figure 6b, with the calibration algorithm, the mean of the calibrated VP error stays below 1 m throughout the simulation. These results illustrate that our calibration algorithm significantly reduces the mean of the VP error. Theoretically, the mean of the calibrated VP error is 0, as shown in Equation (14); in practice the error cannot be completely removed due to linearization error.
Figure 6. The mean of VP error with different LPE. (a) The mean of VP error without calibration; (b) The mean of calibrated VP error.
The standard deviation of the VP error with different image resolutions and DE is shown in Figure 7. The results show that a higher resolution, a shorter line-of-sight and a smaller DE produce more accurate vision pseudoranges, although the vision system error remains much larger than that of the GPS system. This situation may improve, as feature matching accuracy has been increasing over the years [31].
Figure 7. The standard deviation of VP error with different image resolution and DE. (a) The standard deviation of VP with different image resolutions with 1 pixel DE; (b) The standard deviation of VP with different DE on a 300 dpi image.
Although the time-variant error in the vision measurements cannot be protected against directly, its impact can be mitigated by increasing the image resolution and improving the accuracy and robustness of the feature detection algorithm, which is also an important topic in the computer vision community. Furthermore, a consistency check for the standalone vision system can be applied to reduce the impact of vision faults [12]. Image processing for aviation applications is worth investigating in our future work.

3.3. Performance Index

The performance index is defined with respect to the level of service that the system is designed to provide. The performance requirements for typical approach operations are shown in Table 1, including the horizontal/vertical alert limits (HAL/VAL), the horizontal/vertical accuracy (HA/VA (95%)) and the time to alert (TTA) [32,33].
Table 1. The performance requirements for aviation.
Performance Requirement   APV-I   LPV-200   APV-II
HAL                       40 m    40 m      40 m
VAL                       50 m    35 m      20 m
HA (95%)                  16 m    16 m      16 m
VA (95%)                  20 m    4 m       8 m
TTA                       10 s    6.2 s     6 s
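The availability criterion of Section 3.3.1 follows directly from Table 1: integrity is available for a service level whenever the protection levels stay within the alert limits. A minimal sketch (the dictionary layout and function name are ours):

```python
# Table 1 requirements per service level (meters / seconds)
REQUIREMENTS = {
    "APV-I":   {"HAL": 40.0, "VAL": 50.0, "HA95": 16.0, "VA95": 20.0, "TTA": 10.0},
    "LPV-200": {"HAL": 40.0, "VAL": 35.0, "HA95": 16.0, "VA95": 4.0,  "TTA": 6.2},
    "APV-II":  {"HAL": 40.0, "VAL": 20.0, "HA95": 16.0, "VA95": 8.0,  "TTA": 6.0},
}

def available(hpl, vpl, level):
    """Integrity is available when HPL <= HAL and VPL <= VAL."""
    req = REQUIREMENTS[level]
    return hpl <= req["HAL"] and vpl <= req["VAL"]
```

For instance, the VA-RAIM protection levels reported below (HPL down to 12–26 m, VPL to 22–40 m) pass the APV-I limits, whereas the standalone GPS values (41 m / 56 m) do not.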

3.3.1. Availability

To evaluate the availability improvement provided by the vision system, the HPL/VPL is computed during the approach and landing phase. The availability of the two methods (GPS RAIM and VA-RAIM) is calculated and compared against the service levels in Table 1. If the HPL exceeds the HAL or the VPL exceeds the VAL, the integrity is declared unavailable for the operation. The HPL/VPL curves of one sortie are shown in Figure 8. The results show that with the aid of the vision system, the HPL decreases from 41 m to 12–26 m and the VPL decreases from 56 m to 22–40 m, and that higher image resolutions yield lower protection levels for the integrated system. The reason is that the proposed VA-RAIM provides additional navigation measurements that are integrated with the GPS measurements to improve availability. Furthermore, since the lines-of-sight of the vision system point below the aircraft, the vision measurements improve the geometrical configuration and decrease the protection levels. During approach and landing operations, with accurate feature detection on a high-resolution image, the VA-RAIM can improve the availability performance for APV-I and LPV-200 operations. However, as the performance requirement for APV-II is very stringent, the method needs to be investigated further in the future.
Figure 8. (a) The result of HPL with 1 pixel DE; (b) The result of VPL with 1 pixel DE.

3.3.2. Position Accuracy

Besides the availability requirements, the HA/VA (95%) requirements for aviation positioning during typical operations are shown in Table 1. The horizontal error (95%) and vertical error (95%) of our method are 7.1 m and 4.3 m, respectively, while those of standalone GPS navigation are 8.0 m and 5.2 m, respectively. The position results show that our method improves the accuracy and meets the HA/VA (95%) requirements of APV-I and APV-II. Recently, vision-aided positioning has been discussed for aviation applications [15], whereas this paper focuses on integrity during the approach and landing phase.

3.3.3. Time Cost

TTA is another important performance index, as shown in Table 1. The VA-RAIM takes an average of 10 ms in the simulation, since, apart from the image processing, it only involves fundamental matrix operations. In real applications, the image processing will be the major part of the time cost. Thus, efficient feature detection methods could be applied in VA-RAIM to meet the TTA requirement, e.g., the scale-invariant feature transform (SIFT), which has proved effective and computationally inexpensive for object tracking. For example, the time costs of SIFT for images of size (pixel × pixel) 256 × 256 and 441 × 552 are 1.7 ms and 4.4 ms, respectively [34].

3.4. Fault Detection

3.4.1. GPS Fault

To evaluate the performance of our approach, the GPS RAIM and the VA-RAIM were compared in terms of fault detection. We randomly selected one visible satellite and added a fault bias to its pseudorange. The fault detection results of the two methods are shown in Figure 9, which shows that the proposed VA-RAIM achieves a higher fault detection rate than the GPS RAIM under the same fault conditions. For example, when the fault bias is 50 m, the fault detection rate of the GPS RAIM is 84.3%, while that of the VA-RAIM with 300 dpi and 1 pixel DE is 97.5%, an increase of 13.2 percentage points over the GPS RAIM. With a detection power of 99%, the minimal detectable bias (MDB) [16] of the VA-RAIM with 300 dpi and 2 pixels DE is 75 m, which is 21.9% less than the GPS RAIM's 96 m. In conclusion, the proposed VA-RAIM outperforms the GPS RAIM with a higher fault detection rate and a lower MDB in the approach and landing phase.
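The detection experiment above follows the classical weighted-least-squares residual test used by RAIM: under the fault-free hypothesis the weighted sum of squared residuals is chi-square distributed with n − 4 degrees of freedom, and a bias on one pseudorange inflates it. The sketch below is a generic residual test in that spirit (the geometry, the 20.0 threshold, and the function name are illustrative assumptions, not the paper's exact detector):

```python
import numpy as np

def detect_fault(G, sigmas, y, threshold):
    """Weighted least-squares residual test for fault detection.

    G is the n x 4 geometry matrix, y the vector of measurement residuals
    w.r.t. the predicted pseudoranges. The test statistic is chi-square
    with n - 4 degrees of freedom under the fault-free hypothesis; the
    threshold would be set from the continuity (false-alarm) budget.
    """
    G = np.asarray(G, dtype=float)
    W = np.diag(1.0 / np.asarray(sigmas, dtype=float) ** 2)
    x_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ y)  # WLS state estimate
    r = y - G @ x_hat                                  # post-fit residuals
    t = float(r @ W @ r)                               # test statistic
    return t, t > threshold

# hypothetical geometry: five satellites plus one extra measurement
G = np.array([
    [0.0, 0.0, 1.0, 1.0],
    [0.8, 0.0, 0.6, 1.0],
    [-0.8, 0.0, 0.6, 1.0],
    [0.0, 0.8, 0.6, 1.0],
    [0.0, -0.8, 0.6, 1.0],
    [0.6, 0.6, 0.5, 1.0],
])
sigmas = [1.0] * 6
x_true = np.array([1.0, 2.0, 3.0, 4.0])
y = G @ x_true                       # fault-free measurements
t0, flag0 = detect_fault(G, sigmas, y, threshold=20.0)

y_fault = y.copy()
y_fault[0] += 50.0                   # 50 m bias on one measurement
t1, flag1 = detect_fault(G, sigmas, y_fault, threshold=20.0)
```

Adding vision pseudoranges increases the redundancy (more rows in G), which raises the projection of a given bias into the residuals and hence the detection rate — the mechanism behind the VA-RAIM improvement reported above.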
Figure 9. Fault detection rate of GPS RAIM and VA-RAIM. (a) GPS RAIM and VA-RAIM with different resolutions with 1 pixel DE; (b) GPS RAIM and VA-RAIM with different DE on a 300 dpi image.

3.4.2. Vision Fault

In addition, the VA-RAIM must also consider potential faults within the vision measurements. In this paper, a vision fault is defined as a fault bias on one of the feature detection results. The fault detection results with different DE on a 300 dpi image are shown in Figure 10. The experimental results show that the vision/GPS integrated system can effectively detect faults in the vision system. With a smaller DE, the method achieves a higher fault detection rate for the vision system; future research will cover the protection levels associated with vision measurement faults.
Figure 10. Fault detection rate of vision system with different DE on a 300 dpi image.

4. Conclusions

In this paper, we have proposed the VA-RAIM for GPS integrity monitoring in the approach and landing phase. To address the problem that the GPS signals are insufficient for RAIM in this phase, the proposed method uses a vision system to enrich the navigation observations and improve the geometrical configuration. First, a vision model with a calibration method has been presented to reduce the time-invariant error of the vision system. Then, the calibrated vision measurements were integrated with the GPS observations to improve the performance of integrity monitoring in the approach and landing phase. Experimental results have demonstrated the advantages of the VA-RAIM over the conventional RAIM in availability and fault detection rate.
In addition, the vision system might be limited at night and in fog, rain, snow, etc., which may severely degrade the performance of the VA-RAIM. To address this problem, more robust feature detection methods are worth investigating. Since the primary concern of this paper is the measurements provided by the vision system rather than the imaging process itself, the location of the landmarks in the camera frame was simulated with Gaussian noise. Future work will evaluate the practical utility of the VA-RAIM with real data in various scenarios.

Acknowledgments

The presented research work is supported by the National Basic Research Program of China (Grant No. 2011CB707000), the National Science Fund for Distinguished Young Scholars (Grant No. 61425014) and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 61221061).

Author Contributions

Each co-author has made substantive intellectual contributions to this study. L.F. implemented the algorithm, analyzed the data and performed the experiments. J.Z., R.L., X.C. and J.W. have contributed to the design and geometrical analysis of VA-RAIM.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

Given a vector p, we have

$$\delta\frac{p}{\|p\|} = \frac{\delta p}{\|p\|} - \frac{p\,\delta\|p\|}{\|p\|^2} \tag{A1}$$

Substituting $\delta\|p\| = p^T\delta p / \|p\|$ into Equation (A1) yields

$$\delta\frac{p}{\|p\|} = \frac{\delta p}{\|p\|} - \frac{p\,p^T\delta p}{\|p\|^3} \tag{A2}$$

Denoting $p = r[\cos\theta, \sin\theta]^T \in \mathbb{R}^2$, we obtain

$$\delta\frac{p}{\|p\|} = \frac{1}{r}\delta p - \frac{1}{r}\begin{bmatrix} \cos^2\theta & \cos\theta\sin\theta \\ \cos\theta\sin\theta & \sin^2\theta \end{bmatrix}\delta p = \frac{1}{r}\begin{bmatrix} \sin^2\theta & -\cos\theta\sin\theta \\ -\cos\theta\sin\theta & \cos^2\theta \end{bmatrix}\delta p \tag{A3}$$

According to Equation (A3), denoting $p_i(k)^C = r_i(k)[\cos\theta_i(k), \sin\theta_i(k)]^T$ and $p_j(k)^C = r_j(k)[\cos\theta_j(k), \sin\theta_j(k)]^T$, Equation (5) in Section 2.2.1 can be obtained as

$$\delta c_{ij}(k) = \left(\frac{p_j(k)^C}{\|p_j(k)^C\|}\right)^T \delta\frac{p_i(k)^C}{\|p_i(k)^C\|} + \left(\frac{p_i(k)^C}{\|p_i(k)^C\|}\right)^T \delta\frac{p_j(k)^C}{\|p_j(k)^C\|} = [\cos\theta_j(k), \sin\theta_j(k)]\,\frac{1}{r_i(k)}\begin{bmatrix} \sin^2\theta_i(k) & -\cos\theta_i(k)\sin\theta_i(k) \\ -\cos\theta_i(k)\sin\theta_i(k) & \cos^2\theta_i(k) \end{bmatrix}\delta p_i(k)^C + [\cos\theta_i(k), \sin\theta_i(k)]\,\frac{1}{r_j(k)}\begin{bmatrix} \sin^2\theta_j(k) & -\cos\theta_j(k)\sin\theta_j(k) \\ -\cos\theta_j(k)\sin\theta_j(k) & \cos^2\theta_j(k) \end{bmatrix}\delta p_j(k)^C \tag{A4}$$

Defining

$$\mu_{ij}(k) = \frac{1}{r_i(k)}\begin{bmatrix} \sin^2\theta_i(k) & -\cos\theta_i(k)\sin\theta_i(k) \\ -\cos\theta_i(k)\sin\theta_i(k) & \cos^2\theta_i(k) \end{bmatrix}\begin{bmatrix} \cos\theta_j(k) \\ \sin\theta_j(k) \end{bmatrix}$$

we have

$$\delta c_{ij}(k) = \mu_{ij}(k)^T\,\delta p_i(k)^C + \mu_{ji}(k)^T\,\delta p_j(k)^C \tag{A5}$$
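The perturbation identities (A2) and (A3) can be verified numerically against a finite-difference perturbation of $p/\|p\|$ (the vectors below are arbitrary test values, not from the paper):

```python
import numpy as np

def unit(p):
    """p / ||p||"""
    return p / np.linalg.norm(p)

p = np.array([3.0, 4.0])       # r = 5, cos(theta) = 0.6, sin(theta) = 0.8
dp = np.array([1e-6, -2e-6])   # small perturbation

# Equation (A2): d(p/||p||) = dp/||p|| - p (p^T dp) / ||p||^3
a2 = dp / np.linalg.norm(p) - p * (p @ dp) / np.linalg.norm(p) ** 3

# Equation (A3): the same perturbation via the projection matrix
r, c, s = 5.0, 0.6, 0.8
a3 = (1.0 / r) * np.array([[s * s, -c * s],
                           [-c * s, c * c]]) @ dp

# finite-difference check, accurate to first order in ||dp||
numeric = unit(p + dp) - unit(p)
```

Both analytic forms agree with each other exactly and with the finite difference to first order, confirming the sign pattern of the off-diagonal terms.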

References

  1. Feng, S.; Ochieng, W.; Walsh, D.; Ioannides, R. A measurement domain receiver autonomous integrity monitoring algorithm. GPS Solut. 2006, 10, 85–96. [Google Scholar] [CrossRef]
  2. Hewitson, S.; Wang, J. GNSS receiver autonomous integrity monitoring (RAIM) performance analysis. GPS Solut. 2006, 10, 155–170. [Google Scholar] [CrossRef]
  3. Schuster, W.; Bai, J.; Feng, S.; Ochieng, W. Integrity monitoring algorithms for airport surface movement. GPS Solut. 2012, 16, 65–75. [Google Scholar] [CrossRef]
  4. Abidat, A.; Li, C.; Tan, Z. The study of RAIM performance by simulation. Comput.-Aided Draft. Des. Manuf. 2006, 16, 58–64. [Google Scholar]
  5. Wang, Z.; Macabiau, C.; Zhang, J.; Escher, A. Prediction and analysis of GBAS integrity monitoring availability at LinZhi airport. GPS Solut. 2014, 18, 27–40. [Google Scholar] [CrossRef]
  6. Webb, T.; Groves, P.; Cross, P.; Mason, R.; Harrison, J. A new differential positioning method using modulation correlation of signals of opportunity. In Proceedings of the IEEE/ION Position Location and Navigation Symposium (PLANS), Indian Wells, CA, USA, 4–6 May 2010; pp. 972–981.
  7. Jan, S.; Gebre-Egziabher, D.; Walter, T.; Enge, P. Improving GPS-based landing system performance using an empirical barometric altimeter confidence bound. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 127–146. [Google Scholar]
  8. Bhatti, U.; Ochieng, W.; Feng, S. Integrity of an integrated GPS/INS system in the presence of slowly growing errors. Part I: A critical review. GPS Solut. 2007, 11, 173–181. [Google Scholar] [CrossRef]
  9. Schroth, G.; Enge, A.; Blanch, J.; Walter, T.; Enge, P. Failure detection and exclusion via range consensus. In Proceedings of the European Navigation Conference GNSS Conference, Toulouse, France, 22–25 April 2008.
  10. Shi, Y.; Teng, Y. The clock-aided RAIM method and its application in improving the positioning precision of GPS receiver. Acta Astronaut. 2012, 77, 126–130. [Google Scholar] [CrossRef]
  11. Velaga, N.; Quddus, M.; Bristow, A.; Zheng, Y. Map-aided integrity monitoring of a land vehicle navigation system. IEEE Trans. Intell. Transp. Syst. 2012, 13, 848–858. [Google Scholar] [CrossRef]
  12. Larson, C. An Integrity Framework for Image-Based Navigation Systems. Ph.D. Thesis, Air Force Institute of Technology, Dayton, OH, USA, 2010. [Google Scholar]
  13. Dusha, D.; Mejias, L. Error analysis and attitude observability of a monocular GPS/visual odometry integrated navigation filter. Int. J. Robot. Res. 2012, 31, 714–737. [Google Scholar] [CrossRef] [Green Version]
  14. Won, D.; Lee, E.; Heo, M.; Sung, S.; Lee, J.; Lee, Y. GNSS integration with vision-based navigation for low GNSS visibility conditions. GPS Solut. 2014, 18, 177–187. [Google Scholar] [CrossRef]
  15. Vezinet, J.; Escher, A.C.; Guillet, A.; Macabiau, C. State of the art of image-aided navigation techniques for aircraft approach and landing. In Proceedings of the International Technical Meeting (ITM) of The Institute of Navigation, San Diego, CA, USA, 27–29 January 2013.
  16. Milner, C.; Ochieng, W. Weighted RAIM for APV: The ideal protection level. J. Navig. 2011, 64, 61–73. [Google Scholar] [CrossRef]
  17. Bruckner, D.; Graas, F.; Skidmore, T. Statistical characterization of composite protection levels for GPS. GPS Solut. 2011, 15, 263–273. [Google Scholar] [CrossRef]
  18. Groves, P. Principles of GNSS, Inertial and Multi-Sensor Integrated Navigation Systems, 2nd ed.; Artech House Publishers: Norwood, MA, USA, 2013. [Google Scholar]
  19. Ma, Y.; Soatto, S.; Kosecka, J.; Sastry, S. An Invitation to 3D Vision: From Images to Geometric Models; Springer: Berlin, Germany, 2004. [Google Scholar]
  20. Grewal, M.; Weill, L.; Andrews, A. Global Positioning System, Inertial Navigation, and Integration, 2nd ed.; John Wiley & Sons, Inc.: New York, NY, USA, 2007. [Google Scholar]
  21. Walter, T.; Enge, P. Weighted RAIM for precision approach. In Proceedings of the ION GPS, Palm Springs, CA, USA, 12–15 September 1995.
  22. Lee, Y.C. Analysis of Range and Position Comparison Methods as a Means to Provide GPS Integrity in the User Receiver. In Proceedings of the Global Positioning System, the Institute of Navigation, Seattle, WA, USA, 24–26 June 1986.
  23. Parkinson, B.W.; Axelrad, P. Autonomous GPS Integrity Monitoring Using the Pseudorange Residual. Navigation 1988, 35, 255–274. [Google Scholar] [CrossRef]
  24. Hewitson, S.; Wang, J. GNSS Receiver Autonomous Integrity Monitoring with a Dynamic Model. J. Navig. 2007, 60, 247–263. [Google Scholar] [CrossRef]
  25. Panagiotakopoulos, D.; Majumdar, A.; Ochieng, W. Extreme value theory-based integrity monitoring of global navigation satellite systems. GPS Solut. 2014, 18, 133–145. [Google Scholar] [CrossRef] [Green Version]
  26. Bhatti, U. Improved Integrity Algorithms for the Integrated GPS/INS Systems in the Presence of slowly Growing Errors. Ph.D. Thesis, Imperial College London, London, UK, 2008. [Google Scholar]
  27. Sturza, M. Navigation system integrity monitoring using redundant measurements. J. Inst. Navig. 1988, 35, 483–501. [Google Scholar] [CrossRef]
  28. Blanch, J.; Walter, T.; Enge, P.; Lee, Y.; Pervan, B.; Rippl, M.; Spletter, A. Advanced RAIM user Algorithm Description: Integrity Support Message Processing, Fault Detection, Exclusion, and Protection Level Calculation. In Proceedings of the 25th International Technical Meeting of the Satellite Division of the Institute of Navigation, Nashville, TN, USA, 17–21 November 2012.
  29. Minimum Operational Performance Standards for Airborne Supplemental Navigation Equipment Using Global Positioning System; RTCA/DO-208, RTCA Special Committee 159: Washington, DC, USA, 1991.
  30. Vezinet, J.; Escher, A.C.; Guillet, A.; Macabiau, C. Video integration in a GPS/INS hybridization architecture for approach and landing. In Proceedings of the 2014 IEEE/ION PLANS Technical Program, Monterey, CA, USA, 5–8 May 2014.
  31. Gruen, A. Development and Status of Image Matching in Photogrammetry. Photogramm. Rec. 2012, 27, 36–57. [Google Scholar] [CrossRef]
  32. ICAO Standards and Recommended Practices. Annex 10. In Radio Navigation Aids; ICAO: Montréal, QC, Canada, 2006. [Google Scholar]
  33. Global Positioning System Wide Area Augmentation System (WAAS) Performance Standard, 1st ed.; Department of Transportation and Federal Aviation Administration (FAA): Washington, DC, USA, 2008.
  34. Zhang, M.; Li, Z.; Zhang, C.; Bai, H. Adaptive feature extraction and image matching based on haar wavelet transform and SIFT. Int. J. Digit. Content Technol. Appl. 2012, 6, 1–8. [Google Scholar]

Share and Cite

MDPI and ACS Style

Fu, L.; Zhang, J.; Li, R.; Cao, X.; Wang, J. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase. Sensors 2015, 15, 22854-22873. https://doi.org/10.3390/s150922854
