Sensors 2019, 19(2), 408; https://doi.org/10.3390/s19020408

Article
Infrared-Inertial Navigation for Commercial Aircraft Precision Landing in Low Visibility and GPS-Denied Environments
1 School of Computer Science and Engineering, Northwestern Polytechnical University, Dongda Road, Changan District, Xi’an 710072, China
2 Xi’an Aeronautics Computing Technique Research Institute, Aviation Industry Corporation of China, Jinyeer Road, Yanta District, Xi’an 710065, China
* Author to whom correspondence should be addressed.
Received: 22 December 2018 / Accepted: 17 January 2019 / Published: 20 January 2019

Abstract:
This paper proposes a novel infrared-inertial navigation method for the precise landing of commercial aircraft in low visibility and Global Positioning System (GPS)-denied environments. Within a Square-root Unscented Kalman Filter (SR_UKF), inertial measurement unit (IMU) data, forward-looking infrared (FLIR) images and airport geo-information are integrated to estimate the position, velocity and attitude of the aircraft during landing. The homography between the synthetic image and the real image, which encodes the camera pose deviations, is constructed as the vision measurement. To extract real runway features accurately, the detection result of the current frame is used as prior knowledge for the next frame. To avoid the ambiguity among the multiple solutions of homography decomposition, the homography is directly converted to a vector and fed to the SR_UKF. Moreover, the proposed navigation system is proven to be observable by nonlinear observability analysis. Last but not least, a general aircraft was elaborately equipped with vision and inertial sensors to collect flight data for algorithm verification. The experimental results demonstrate that the proposed method can be used for the precise landing of commercial aircraft in low visibility and GPS-denied environments.
Keywords:
infrared-inertial navigation; homography; runway detection; observability analysis; precise landing; low visibility; GPS-denied

1. Introduction

Landing is the most accident-prone phase of flight for both military and civil aircraft. This is because the manoeuvring sequence must dissipate a large amount of aircraft kinetic energy in a relatively small area. Fixed-wing aircraft usually descend smoothly at a constant angle, pointing in the direction of the runway centerline, and touch down at the beginning of the runway. If low visibility conditions (e.g., fog or haze) are encountered, the pilots have no choice but to fly the aircraft to landing using navigation instruments. If the conventional radio navigation systems are disturbed or disabled, they can mislead the pilots and cause a Controlled Flight Into Terrain (CFIT) accident. In addition, most airports are equipped with simple and coarse radio beacons rather than expensive and precise ground-based guidance systems. Nowadays neither avionics systems nor airport infrastructures are perfectly designed to support precision landing. Faced with these challenges, an autonomous, accurate and affordable landing navigation mechanism is extremely necessary for most fixed-wing aircraft.
The traditional landing aid systems include the Instrument Landing System (ILS) and the Global Positioning System (GPS). However, these systems have deficiencies for fixed-wing aircraft precision landing. ILS can only guide an aircraft to the decision height (DH, usually DH = 100 ft) and cannot guide it onto the runway. Besides, ILS, with its high cost and complicated maintenance, is not suitable for general aviation airports. Although GPS can meet the needs of Class I and II landings for most aircraft, its signal is vulnerable to jamming or disabling [1].
Recently vision-based landing navigation, which has the benefits of accuracy, autonomy and low cost, has become a central research topic [2,3]. Existing studies on vision-based landing for fixed-wing aircraft are classified into two categories, namely ground-based and onboard-based. The ground-based methods [4,5,6,7,8,9,10] often utilize multiocular vision systems arranged on the ground to detect, track, and guide the aircraft to landing. Martínez et al. [4] designed a trinocular system, composed of three Firewire cameras fixed on the ground, to estimate a UAV’s position and orientation by tracking colored land markers on the UAV. Kong et al. [5,6,7] developed a custom-built infrared stereo camera with a large field of view, claimed that their system could operate in all weather conditions, and further improved the detection precision by the Chan-Vese method [8] and the saliency-inspired method [9]. In addition, Yang et al. [10] showed promising results for UAV auto-landing in GPS-denied environments using a ground-based infrared camera array and a near-infrared laser lamp-based cooperative optical imaging method.
Onboard-based vision landing navigation, based on forward-looking images and computer vision algorithms, can be divided into two types, namely moving platform-based and airport runway-based methods. For landing on a moving platform, the core solution is to track a known target, e.g., an aircraft carrier, and compute its relative position and orientation. Coutard et al. at the French INRIA [11,12] proposed a method for visual carrier detection and tracking for landing on the deck. The carrier is detected in the image by a warped patch of a reference image. Ding et al. [13] presented a FLIR/INS/RA integrated landing guidance approach to estimate the aircraft states and carrier dynamics for fixed-wing aircraft landing on the deck in low-visibility weather and high sea states, employing the Newton iterative algorithm, Kalman filtering and wavelet transforms. Jia et al. [14] put forward a carrier landing algorithm based on point and line features for fixed-wing UAVs. This algorithm calculates the attitude according to the sky-sea line and the runway vanishing point and estimates the position parameters on the basis of the landmark and tracklines’ collinear equations by least-squares solutions, but it was only verified by simulation experiments. Recently Muskardin et al. at DLR [15] analyzed and proposed an algorithm for a solar-powered fixed-wing UAV landing on top of a mobile ground vehicle. For landing on an airport runway, a fixed-wing aircraft should descend smoothly at a constant angle, pointed in the direction of the runway centerline, and touch down at the beginning of the runway. Korn et al. at DLR [16] proposed a simple method to estimate the relative position of an aircraft with respect to a runway based only on camera images. Neither a calibrated camera nor any knowledge of special points of the runway is needed. The premise of this method is to accurately detect the horizon, but this is not suitable for all airports. Goncalves et al.
[17] presented a study of vision-based automatic approach and landing for an aircraft using an efficient second-order minimization-based tracking method. In contrast with feature extraction methods, direct methods can achieve ideal accuracy, but they are computationally expensive. Gui et al. [18] proposed an airborne vision-based navigation approach for UAV accuracy landing based on artificial markers. This method needs a visible light camera integrated with a DSP processor installed on the UAV and four infrared lamps placed on the runway. Guo et al. [19] designed a vision-aided landing navigation system based on a fixed-waveband guidance illuminant using a single camera. Bras et al. [20] used the edges and the front corner points of the runway extracted from forward-looking images to implement a visual servo control method for autonomous UAV landings. Fan et al. [21] adopted a spectral residual saliency map to detect regions of interest, then selected sparse coding and spatial pyramid matching to recognize runways, and used orthogonal iteration to estimate position and attitude. Burlion et al. at the French ONERA [22] studied the vision-based flight control problem under field-of-view constraints and proposed a vision-based landing framework for a fixed-wing UAV on an unknown runway. Gibert et al. at Airbus [23,24,25] designed two nonlinear observers based on a high-gain approach and sliding mode theory and applied them to a vision-based solution for civil aircraft landing on an unknown runway. However, this method does not utilize inertial measurements with their high update rates. Ruchanurucks et al. [26] used an Efficient Perspective-n-Point (EPnP) solution to estimate the relative pose for an automatic aided landing system that lands a fixed-wing UAV on a runway. The accuracy of this method is however susceptible to runway detection errors.
Although the above algorithms have achieved remarkable progress in vision-aided landing navigation, there are four main problems which need to be coped with. Firstly, image sensors alone cannot satisfy the requirements of high-speed landing navigation because of their low image update rates, whereas IMUs can measure accelerations and rotational velocities at high update rates. The two types of sensors complement each other in nature and integrate well in an optimized framework [27]. Secondly, fixed-wing aircraft landing scenes are most often characterized by large scale, absence of loop closures, and planar structure. Thus, real-time SLAM algorithms [28,29,30] cannot be adopted directly, and a method based on sparse runway features must be developed. Thirdly, in order to operate smoothly under low visibility conditions, a forward-looking infrared (FLIR) camera can be used to monitor the runway; however, a major problem is that fewer features can be extracted from infrared images, especially in the runway area, due to their low resolution and poor texture [31]. Therefore, it is necessary to improve existing algorithms [32,33,34,35] to meet the robustness and accuracy requirements of runway detection in FLIR images. Finally, considering flight safety, the observability of the proposed visual-inertial navigation system must be analyzed.
The main contributions presented in this paper are as follows: we propose a visual-inertial landing navigation method based on the SR_UKF in which inertial data, infrared images and geo-referenced information are fused to estimate the landing motion states of the aircraft. Firstly, a short-wave infrared (SWIR) camera is used to capture forward-looking infrared (FLIR) images to meet the requirement of precise landing under low visibility. Moreover, the homography that contains the measured pose deviation of the FLIR camera is directly used as the vision observation instead of its decomposition, because it implies the pose deviation between the measured camera and the true camera. Furthermore, an improved runway detection algorithm based on FLIR images is proposed to achieve better robustness and accuracy. In particular, a nonlinear observability analysis based on Lie derivatives [36,37,38] is performed to ensure that the sensor measurements provide sufficient information for motion estimation. Finally, we design a flight data acquisition platform based on a general aircraft and adopt real flight data to verify that the proposed method can be used for precise landing of commercial aircraft in GPS-denied and low visibility environments. This paper is organized as follows: in Section 2, we propose the visual-inertial navigation system for aircraft precise landing and discuss its observability. Section 3 gives the experimental results and discussions. The conclusions are drawn in Section 4.

2. Methodology

This section details the framework of the proposed visual-inertial navigation approach. The vision observation that encodes the camera pose deviations is designed first, then the visual-inertial fusion based on the SR-UKF is constructed, and finally its observability is analyzed.

2.1. Framework of Infrared-Inertial Landing Navigation

In general, a complete landing procedure of a commercial aircraft includes two parts: an instrument flight segment and a natural vision segment. The instrument portion of an instrument landing procedure ends at the Decision Altitude (DA), and the visual segment begins just below the DA and continues to the runway. Prior to reaching the DA, the pilot’s primary references for maneuvering the airplane are the aircraft instruments and the onboard navigation system. As the pilot approaches the DA, he or she looks for the approach lighting system, if there is one, as well as the runway threshold and the touchdown zone lights, markings, and surfaces. These visual references help the pilot align the aircraft with the runway and provide position and distance-remaining information. At 100 feet above the Threshold Elevation (THRE), the visual transition point, the pilot determines whether the flight visibility is sufficient to continue the approach and distinctly identify the required visual references using natural vision. If the requirements identified above are met, the pilot may continue descending below the DA down to 100 feet above the THRE [39]. Otherwise, the pilot should pull up the aircraft at once, as shown in Figure 1. In order to land a commercial airplane safely in GPS-denied and low visibility environments, the pilot needs to obtain accurate navigation information, especially the flight altitude. In the present paper, the proposed method is aimed at the landing phase above 100 feet.
Among the flight parameters, the flight height is one of the most important for the pilot’s decisions during a landing procedure. Usually the height measured by a barometer or radio altimeter is inaccurate, while the altitude provided by GPS is unreliable. In this paper, the FLIR camera and the IMU complement each other in nature and fuse well in a filtering framework. This paper proposes a novel visual-inertial landing navigation approach based on the SR-UKF, in which visual observations and inertial measurements are integrated to estimate the aircraft landing motion. This novel visual-inertial navigation system (VINS) is composed of a FLIR camera, an IMU, a barometer (BARO), a radio altimeter (RALT) and a processing unit that is in charge of the motion estimation of the aircraft.
As shown in Figure 2, the inertial measurements are used to propagate the system states, whereas the homography is chosen as the visual observation. The proposed visual-inertial integration can be used for commercial aircraft precision landing in GPS-denied and low visibility environments.
This method involves three key issues: process modeling, measurement modeling, and its observability. Firstly, this novel vision observation is designed in Section 2.2. Then the visual-inertial navigation based on SR-UKF is proposed in Section 2.3. Finally, the observability of the proposed algorithm is analyzed in Section 2.4.

2.2. Vision Observation

2.2.1. Homography between Synthetic and Real Images

Before proposing the measurement model, we need to analyze the vision measurement mechanism. During the landing, the aircraft descends along the glide slope, and the optical axis of the FLIR camera is aligned with the airport runway. The camera pose is composed of the calibrated IMU/camera relative pose and the measured IMU pose. Ideally the measured camera pose should be equal to the real camera pose. The synthetic image is derived from the terrain data and the measured camera pose, and the real image is captured by the FLIR camera. Therefore, in the image plane the synthetic runway features should coincide accurately with the real detected features. However, the random errors of the inertial sensors introduce a deviation between the measured camera pose and the real camera pose and further lead to a mismatch between the synthetic runway features and the real runway features. The relationships between the measured camera pose $(\hat{\Phi}^n, \hat{P}^n)$ and the real camera pose $(\Phi^n, P^n)$ in the navigation reference frame are described as follows:
$$\begin{cases} \Delta\psi^n = \Phi^n - \hat{\Phi}^n \\ \Delta P^n = P^n - \hat{P}^n \end{cases}$$
where $\hat{\Phi}^n$ and $\hat{P}^n$ denote the measured attitude and position of the FLIR camera in the navigation reference frame, respectively; $\Phi^n$ and $P^n$ represent the real attitude and position of the FLIR camera in the navigation reference frame, respectively; and $\Delta\psi^n$ and $\Delta P^n$ are the attitude and position measurement deviations of the FLIR camera in the navigation reference frame, respectively.
As shown in Figure 3, at time $t$ the transformation from the synthetic image to the real image satisfies the homography $H_M^R(t)$, so the synthetic runway features and the real runway features can be understood as two independent visual projections of the same runway from the geodetic coordinate system to the pixel coordinate system, derived by the measured camera pose and the real pose, respectively. $R_M^R(t)$ and $T_M^R(t)$ represent the relative rotation and translation of the FLIR camera from the measured pose to the real pose, respectively. $N^M(t)$ is the unit normal vector of the airport plane with respect to the FLIR camera in the measured pose, and $d^M(t)$ denotes the distance from the airport plane to the optical center of the FLIR camera in the measured pose.
Note that the matrix $H_M^R(t)$ depends on the motion parameters $\{R_M^R(t), T_M^R(t)\}$ as well as the structure parameters $\{N^M(t), d^M(t)\}$ of the ground plane [40,41]. To increase the readability of the mathematical formulae, the time variable in $H_M^R(t)$, $R_M^R(t)$, $T_M^R(t)$, $N^M(t)$ and $d^M(t)$ will be omitted in the sequel. Then, the homography $H_M^R$ can be expressed as:
$$H_M^R = R_M^R + \frac{1}{d^M} T_M^R \cdot (N^M)^T$$
It is notable that the terms $R_M^R$, $T_M^R$, $N^M$ and $d^M$ can be further written in terms of the VINS states as:
$$R_M^R = {}^R C_b^n \cdot ({}^M C_b^n)^T$$
$$T_M^R = ({}^R C_b^n)^T \cdot ({}^M P^n - {}^R P^n)$$
$$N^M = -1 \cdot ({}^R C_b^n)^T \cdot e_3, \quad \text{with} \ e_3 = [0 \ 0 \ 1]^T$$
$$d^M = -1 \cdot e_3^T \cdot {}^M P^n$$
where ${}^R C_b^n$ is the attitude matrix of the FLIR camera in the real pose, and ${}^M C_b^n$ is the attitude matrix of the FLIR camera in the measured pose. Obviously, the homography matrix contains the deviation between the real camera pose and the measured camera pose, and it can be calculated from the line features of the synthetic runway and the real runway. Furthermore, the synthetic runway features can be derived from the geo-information and the inertial measurements, and the real runway features can be extracted from the FLIR images in real time.
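As a concrete illustration, the homography above can be assembled directly from the two camera poses. The following numpy sketch is only a schematic of this composition; the function name and the assumption of a navigation frame with the z axis up and the runway plane at z = 0 are ours, not the paper's:

```python
import numpy as np

def homography_from_poses(C_R, P_R, C_M, P_M):
    """Compose H_M^R from the real pose (C_R, P_R) and the measured pose
    (C_M, P_M).  C_R, C_M are camera-to-navigation rotation matrices and
    P_R, P_M camera positions in the navigation frame (z up, runway plane
    at z = 0).  Illustrative sketch, not the authors' code."""
    e3 = np.array([0.0, 0.0, 1.0])
    R_MR = C_R @ C_M.T                 # relative rotation
    T_MR = C_R.T @ (P_M - P_R)         # relative translation
    N_M = -C_R.T @ e3                  # runway-plane normal seen by the camera
    d_M = -e3 @ P_M                    # signed distance from plane to camera
    return R_MR + np.outer(T_MR, N_M) / d_M
```

When the measured pose coincides with the real pose, the homography reduces to the identity, which matches the intuition that the synthetic and real runway features then align exactly.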

2.2.2. Synthetic Runway Features

In the proposed VINS, a FLIR camera and an IMU are installed on the aircraft. As shown in Figure 4, these reference frames obey the right-hand rule in this paper.
$\{E\}$ is the Earth-centered Earth-fixed (ECEF) reference frame, and a point $f$ in $\{E\}$ is ${}^E P_f \in \mathbb{R}^3$. $\{G\}$ denotes the geographic reference frame; any point $f$ in $\{G\}$ is ${}^G P_f$. $\{B\}$ represents the body reference frame. Its origin ${}^B O$ is at the center of the IMU, the $X_B$ axis points to the head, the $Y_B$ axis points toward the right, and the $Z_B$ axis is upward. A point $f$ in $\{B\}$ is denoted ${}^B P_f \in \mathbb{R}^3$. $\{C\}$ is the camera reference frame with origin ${}^C O$ at the camera optical center. The $Z_C$ axis coincides with the camera principal axis and points in the forward direction. The $X_C$ axis points along the column scan direction, while the $Y_C$ axis follows the row scan direction. A point $f$ in $\{C\}$ is ${}^C P_f$. $\{P\}$ denotes the pixel reference frame with its origin ${}^P O$ located at the upper-left of the image plane. The $u$ and $v$ axes in $\{P\}$ point in the right and downward directions. A point $f$ in $\{P\}$ is denoted ${}^P P_f \in \mathbb{R}^2$. The runway features in the synthetic image are derived from the runway geographic information and the measured pose of the IMU. This vision projection process involves five coordinate transformations as follows:
(1) Transformation between geodetic and Cartesian coordinates in the ECEF reference frame
The geodetic coordinate that contains latitude $L_i$, longitude $\lambda_i$ and ellipsoidal height $h_i$ of any point can be transformed to the Cartesian coordinate in the ECEF reference frame by the following equation:
$${}^E P_i = [(R_n + h_i)\cos L_i \cos\lambda_i, \ (R_n + h_i)\cos L_i \sin\lambda_i, \ ((1 - e^2) R_n + h_i)\sin L_i]^T$$
where R n is the radius of curvature in the prime vertical, and e is the first eccentricity of the Earth.
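For instance, the conversion above can be written as a small function; the WGS-84 ellipsoid constants below are our assumption, since the paper does not state which reference ellipsoid it uses:

```python
import numpy as np

# WGS-84 constants (assumed; not stated in the paper)
A_WGS = 6378137.0            # semi-major axis [m]
E2 = 6.69437999014e-3        # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Geodetic (lat/lon in radians, ellipsoidal height h in metres)
    to ECEF Cartesian coordinates."""
    Rn = A_WGS / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime-vertical radius
    x = (Rn + h) * np.cos(lat) * np.cos(lon)
    y = (Rn + h) * np.cos(lat) * np.sin(lon)
    z = ((1.0 - E2) * Rn + h) * np.sin(lat)
    return np.array([x, y, z])
```

At zero latitude, longitude and height this returns a point on the equator at one semi-major axis from the Earth's center, which is a quick sanity check of the formula.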
(2) From { E } to { G }
Any known point in the ECEF can be projected into the geographic coordinate system with the IMU center as its origin:
$${}^G P_f = \begin{bmatrix} -\sin L_a \cos\lambda_a & -\sin L_a \sin\lambda_a & \cos L_a \\ -\sin\lambda_a & \cos\lambda_a & 0 \\ \cos L_a \cos\lambda_a & \cos L_a \sin\lambda_a & \sin L_a \end{bmatrix} \cdot ({}^E P_f - {}^E P_a)$$
where ${}^E P_f$ denotes the Cartesian coordinates of any point $f$ on the runway surface, and ${}^E P_a$ represents the Cartesian coordinates of the IMU. In order to facilitate the coordinate transformation, the geographic coordinate system $\{G\}$ is selected as the navigation coordinate system $\{N\}$.
(3) From { N } to { B }
The navigation coordinate system $\{N\}$ has the same origin as the body coordinate system $\{B\}$; the former is rotated into the latter through the yaw, pitch and roll angles in sequence, as follows:
$${}^B P_f = (C_b^n)^T \cdot {}^N P_f$$
where $C_b^n$ denotes the attitude matrix.
(4) From { B } to { C }
The rigid connection between the aircraft body and the camera involves a relative rotation $R_B^C$ and translation $T_B^C$ that have been accurately calibrated before flight:
$${}^C P_f = R_B^C \cdot {}^B P_f + T_B^C$$
(5) From { C } to { P }
According to the pinhole imaging model [42], the homogeneous coordinate projection of any point in the pixel coordinate system is:
$${}^P P_f = \frac{1}{Z_c} \begin{bmatrix} 1/d_x & s & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot {}^C P_f$$
where $Z_c$ is the normalization coefficient, $d_x$ and $d_y$ represent the pixel sizes along the image $u$ and $v$ axes respectively, $(u_0, v_0)$ are the coordinates of the principal point, $s$ is the skew parameter, and $f$ is the focal length of the FLIR camera.
Equations (10)–(14) give a complete transformation from the runway plane to the pixel plane of the airborne FLIR camera, as shown in Figure 5. Therefore, a marking point ${}^E P_f$ on the airport runway can be projected onto the pixel plane as a point ${}^P P_f \in \mathbb{R}^2$.
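To make the chain concrete, the last four transformations can be sketched as one function. This is our illustration, not the paper's implementation: the geodetic-to-ECEF step is assumed to have been applied already, and the single intrinsic matrix K stands for the product of the two matrices in the pinhole model:

```python
import numpy as np

def ecef_to_pixel(Pf_e, Pa_e, lat, lon, C_bn, R_BC, T_BC, K):
    """Project a runway point Pf_e (ECEF) onto the image plane by chaining
    {E}->{G}->{B}->{C}->{P}.  Pa_e is the IMU position in ECEF, (lat, lon)
    its geodetic position in radians, C_bn the attitude matrix, (R_BC, T_BC)
    the calibrated camera-IMU extrinsics and K the combined intrinsics.
    Hypothetical helper for illustration only."""
    sL, cL = np.sin(lat), np.cos(lat)
    sl, cl = np.sin(lon), np.cos(lon)
    # {E} -> {G}: rotation into the geographic (navigation) frame
    C_eg = np.array([[-sL * cl, -sL * sl, cL],
                     [-sl,       cl,      0.0],
                     [ cL * cl,  cL * sl, sL]])
    Pf_g = C_eg @ (Pf_e - Pa_e)
    # {N} -> {B}: attitude matrix transpose
    Pf_b = C_bn.T @ Pf_g
    # {B} -> {C}: calibrated camera-IMU extrinsics
    Pf_c = R_BC @ Pf_b + T_BC
    # {C} -> {P}: pinhole projection and normalization
    p = K @ Pf_c
    return p[:2] / p[2]
```

With identity rotations and zero extrinsic translation, a point displaced along the camera principal axis projects to the principal point, which is a convenient consistency check.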
Line features of the airport runway can be generated by the projection model combining the IMU pose and the runway geo-information. Consequently, the pixel coordinates of a line feature can be described as:
$$l_{st} = \left[1, \ -\frac{r_s - r_t}{c_s - c_t}, \ \frac{r_s - r_t}{c_s - c_t} \cdot c_t - r_t\right]^T$$
where ${}^P P_s = [r_s \ c_s]^T$ is the pixel coordinate projected from the starting point ${}^E P_s$, and ${}^P P_t = [r_t \ c_t]^T$ is the pixel coordinate projected from the terminal point ${}^E P_t$.
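A small sketch of this line construction (a hypothetical helper; it assumes the edge is not parallel to the image rows, i.e. $c_s \neq c_t$):

```python
import numpy as np

def line_through(ps, pt):
    """Homogeneous line l = [1, -m, m*c_t - r_t] through the pixel points
    (r_s, c_s) and (r_t, c_t), with slope m = (r_s - r_t)/(c_s - c_t).
    A point [r, c, 1] lies on the line when l . [r, c, 1] = 0."""
    rs, cs = ps
    rt, ct = pt
    m = (rs - rt) / (cs - ct)
    return np.array([1.0, -m, m * ct - rt])
```

Both endpoints satisfy the incidence relation by construction, so the representation can be checked by evaluating the dot product with the homogeneous endpoints.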

2.2.3. Real Runway Features

Visible images have high spatial resolution and rich texture details, but they are easily influenced by severe conditions, such as poor illumination, fog, and other effects of bad weather. Visible images capture reflected light, whereas infrared images capture thermal radiation. In general, infrared images are resistant to these disturbances. In the present paper, we adopt a SWIR camera to capture FLIR images containing the important airfield features in low visibility. However, infrared images typically have the defects of low resolution and poor texture [31]. Existing runway detection algorithms [32,33,34] cannot extract runway features from airborne FLIR images accurately and robustly. Improvements have been made on the basis of our recently proposed method [35]. In the present paper, the detection result of the previous image is used as prior knowledge for the next image in order to detect and extract four runway edges, instead of only the left and right edges, from the FLIR image, as shown in Figure 6.
This improved method adopts a coarse-to-fine hierarchical idea in which the runway region of interest (ROI) is preliminarily estimated in the FLIR image and the runway edges are finely extracted from the ROI. At the coarse layer, the runway ROI can be calculated from the aircraft pose parameters and airport geo-information in the first few frames. Then, the detected runway is used as the prior knowledge of the next image. Meanwhile, considering the errors of the aircraft pose parameters, the runway ROI based on a specified confidence interval can be estimated. The higher the confidence level is, the larger the runway ROI will be. Therefore, useless surrounding objects and complex background texture can be excluded from the ROI so as to reduce interference and image processing time. In particular, the error transfer equations of the vision projection model can be given as follows:
$$\Delta r = J_r \cdot \bar{x}, \quad \text{with} \ \bar{x} = [\Delta L_a \ \Delta\lambda_a \ \Delta h_a \ \Delta\psi \ \Delta\theta \ \Delta\phi]^T$$
$$\Delta c = J_c \cdot \bar{x}$$
where $\Delta r$ is the error of the pixel row and $\Delta c$ is the error of the pixel column. $J_r$ is the Jacobian of the row pixel $r$ with respect to $\bar{x}$, and $J_c$ is the Jacobian of the column pixel $c$ with respect to $\bar{x}$. $\Delta L_a$, $\Delta\lambda_a$, $\Delta h_a$, $\Delta\psi$, $\Delta\theta$, and $\Delta\phi$ are the measurement deviations of the latitude $L_a$, longitude $\lambda_a$, ellipsoidal height $h_a$, yaw $\psi$, pitch $\theta$, and roll $\phi$ of the IMU, respectively.
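Where the analytic Jacobians $J_r$ and $J_c$ are cumbersome to derive, they can be approximated numerically; the following generic central-difference sketch is our illustration, not the paper's method:

```python
import numpy as np

def numerical_jacobian(f, x0, eps=1e-6):
    """Central-difference Jacobian of a vector function f at x0.
    A generic stand-in for the analytic J_r, J_c of the error-transfer
    equations; f maps pose-error vectors to pixel coordinates."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.atleast_1d(f(x0))
    J = np.zeros((f0.size, x0.size))
    for i in range(x0.size):
        d = np.zeros_like(x0)
        d[i] = eps
        J[:, i] = (np.atleast_1d(f(x0 + d)) - np.atleast_1d(f(x0 - d))) / (2 * eps)
    return J
```

Applied to the projection model, each column of the result bounds how strongly one pose-error component perturbs the projected pixel, which is exactly what the ROI sizing above needs.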
At the fine layer, the EDLines detector [43] is used to extract straight line segments from the ROI, and the fragmented line segments generated by EDLines are then linked into complete runway edge lines based on the morphology of the synthetic runway in the ROI. Due to the poor texture and low resolution of the FLIR image, the detected edges are divided into small segments scattered disorderly over the ROI. However, each synthetic runway line has a neighborhood determined by the pixel errors ($\hat{r} - \Delta r \le r \le \hat{r} + \Delta r$, $\hat{c} - \Delta c \le c \le \hat{c} + \Delta c$) of its endpoints. If a fragmented line segment lies in the neighborhood of a synthetic runway line and the angle between them is less than 3°, it belongs to the candidate set of that synthetic runway line. Therefore, four sets of lines are extracted from the detected line segments in the ROI, and the other lines are discarded. In view of these facts, our method calculates the weight of each line segment according to its length and width. In each set, a number of points are randomly selected from the small line segments according to the line weight values. Obviously, line segments with large weights contribute more to the line fitting. Finally, each set of line segments can be fitted into an edge line by using the RANSAC method. The detection and extraction results of runway features are given in Section 3.2.
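The fine-layer fitting step can be sketched as a weight-biased RANSAC line fit. This is a simplified illustration under our own sampling scheme, not the authors' exact implementation:

```python
import numpy as np

def fit_edge_ransac(points, weights, iters=200, tol=1.5, rng=None):
    """Fit one runway edge to candidate segment points by RANSAC, biasing
    the 2-point sampling toward points from long, heavy segments via
    `weights`.  Returns (point, unit direction) of the best line and its
    inlier count.  Illustrative sketch only."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    best_line, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False, p=w)
        p, q = pts[i], pts[j]
        d = q - p
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue                                  # degenerate sample
        normal = np.array([-d[1], d[0]]) / n          # unit normal of the line
        dist = np.abs((pts - p) @ normal)             # point-to-line distances
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best_inliers = inliers
            best_line = (p, d / n)
    return best_line, best_inliers
```

On a set of collinear points with a single outlier, the fit recovers the line supported by all collinear points while the outlier is rejected by the distance threshold.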

2.3. Visual-Inertial Navigation

The UKF adopts a deterministic sampling technique to estimate the state and covariance of nonlinear models directly. Compared with the EKF, the UKF can predict the state of a nonlinear system more accurately, without calculating the Jacobian and Hessian matrices of the process and measurement models. However, the UKF needs to calculate the square root of the state covariance matrix during the sigma-point update, which may occasionally produce a non-positive-definite state covariance matrix and cause the filter to abort. The SR-UKF requires less numerical computation and achieves better numerical accuracy by propagating a Cholesky factor of the error covariance matrix directly [44]. The proposed visual-inertial navigation approach adopts the SR-UKF to integrate the nonlinear visual observation and the inertial measurements to estimate the aircraft motion.
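The numerical core that distinguishes the SR-UKF is the rank-1 update of the Cholesky factor, which keeps the propagated square root, and hence the covariance, well defined. The following is a textbook sketch of that primitive, not code from the paper:

```python
import numpy as np

def cholupdate(S, u, sign=+1.0):
    """Rank-1 update (sign=+1) or downdate (sign=-1) of an upper-triangular
    Cholesky factor S with S.T @ S = P, returning the factor of
    P + sign * u u^T.  For a downdate, the caller must ensure the result
    stays positive definite.  Textbook implementation, for illustration."""
    S = S.copy()
    u = u.copy().astype(float)
    n = len(u)
    for k in range(n):
        r = np.sqrt(S[k, k] ** 2 + sign * u[k] ** 2)
        c, s = r / S[k, k], u[k] / S[k, k]
        S[k, k] = r
        if k + 1 < n:
            S[k, k + 1:] = (S[k, k + 1:] + sign * s * u[k + 1:]) / c
            u[k + 1:] = c * u[k + 1:] - s * S[k, k + 1:]
    return S
```

Updating and then downdating with the same vector returns the original factor, which is how the SR-UKF folds the (possibly negative) zeroth sigma-point weight into the square root without ever forming the full covariance.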

2.3.1. Process Modeling

Firstly, we define the system state as:
$$x^T = [\psi^T \ \ \delta v^T \ \ \delta p^T \ \ \varepsilon^T \ \ \nabla^T]$$
where $\psi^n \in \mathbb{R}^3$, $\delta v^n \in \mathbb{R}^3$ and $\delta p^n \in \mathbb{R}^3$ are the attitude, velocity and position errors of the INS, respectively; $\varepsilon \in \mathbb{R}^3$ denotes the gyroscope drift, and $\nabla \in \mathbb{R}^3$ represents the accelerometer bias. Then the continuous-time system process model is given by:
$$\dot{x}(t) = A(t) \cdot x(t) + w(t)$$
$$A = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -C_b^n & 0_{3\times3} \\ [f^n \times] & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & C_b^n \\ 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \end{bmatrix}$$
$$w = [\varepsilon^n \ \ \nabla^n \ \ 0_{1\times3} \ \ w_g \ \ w_a]^T$$
In discrete time, the model can be written as follows:
$$x_k = \Phi_{k/k-1} x_{k-1} + w_{k-1}$$
$$\Phi_{k/k-1} = e^{\int_{t_{k-1}}^{t_k} A(\tau) d\tau} \approx e^{A(t_{k-1}) \Delta t} \approx I + A(t_{k-1}) \Delta t, \quad \text{with} \ \Delta t = t_k - t_{k-1}$$
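The first-order discretization above is straightforward to implement; the sketch below (our illustration) also shows an optional second-order term of the matrix exponential for comparison:

```python
import numpy as np

def discretize(A, dt, order=2):
    """Approximate the state transition matrix Phi = expm(A * dt) by a
    truncated matrix-exponential series: I + A*dt (order 1) plus
    0.5 * (A*dt)^2 (order 2).  Illustrative helper, not from the paper."""
    Phi = np.eye(A.shape[0]) + A * dt
    if order >= 2:
        Phi += 0.5 * (A @ A) * dt ** 2
    return Phi
```

For the nilpotent blocks that dominate the INS error model (e.g. the position-velocity coupling), the truncated series is exact, which is why the first-order approximation works well at IMU rates where A*dt is small.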

2.3.2. Vision Measurement Model

Because the homography matrix contains the deviation of the aircraft pose, four groups of possible solutions can be obtained by decomposing the homography matrix according to the traditional method [40,41], and the set of solutions closest to the true value, i.e., the deviation of the aircraft pose, can then be selected by prior knowledge as the UKF measurement. However, the homography matrix decomposition not only increases computation, but also introduces computation errors. In this paper, the measured homography matrix is instead transformed into a one-dimensional column vector, which is used directly as the visual measurement in the UKF.
Suppose that $\hat{H}_M^R \in \mathbb{R}^{3\times3}$ and $H_M^R \in \mathbb{R}^{3\times3}$ are the measurement and the estimate of the homography; then $\hat{H}_M^R$ and $H_M^R$ can be converted into two column vectors $vec\hat{H}_M^R \in \mathbb{R}^9$ and $vecH_M^R \in \mathbb{R}^9$, respectively. Considering the measurement noise of the homography $\hat{H}_M^R$, the nonlinear vision measurement model is formalized as:
$$vec\hat{H}_M^R = vecH_M^R + v_{flir}$$
where $v_{flir} \in \mathbb{R}^9$ is assumed to be zero-mean Gaussian noise.
(1) $vec\hat{H}_M^R$ Calculation
The homography $\hat{H}_M^R \in \mathbb{R}^{3\times3}$ can be calculated from the feature matching between synthetic images and real images, described in Sections 2.2.2 and 2.2.3. The detailed algorithm for the homography calculation refers to [42], which gives the transformation rule for lines. A line transforms as:
$$l^R = (\hat{H}_M^R)^{-T} \cdot l^M$$
where $(l^R, l^M)$ is a line pair between the real infrared image and the synthetic image. The main line features include at least the four edges of the runway, which support the calculation of the homography with its eight degrees of freedom.
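Given at least four such line pairs, the homography can be recovered linearly: since $l^M \propto (\hat{H}_M^R)^T l^R$, a point-style direct linear transform (DLT) can be run on $H^T$ with the line vectors playing the role of points. The following sketch is our illustration of this standard construction, not the paper's solver:

```python
import numpy as np

def homography_from_lines(lines_M, lines_R):
    """Estimate H (mapping synthetic points to real points) from >= 4 line
    correspondences, using l_M ~ H^T l_R.  Lines are homogeneous 3-vectors;
    the null vector of the stacked constraints gives vec(H^T).
    Illustrative sketch only."""
    rows = []
    for lM, lR in zip(lines_M, lines_R):
        lM = lM / np.linalg.norm(lM)
        lR = lR / np.linalg.norm(lR)
        # cross(lM, G @ lR) = 0 yields two independent rows in vec(G), G = H^T
        rows.append(np.concatenate([np.zeros(3), -lM[2] * lR, lM[1] * lR]))
        rows.append(np.concatenate([lM[2] * lR, np.zeros(3), -lM[0] * lR]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    G = Vt[-1].reshape(3, 3)        # least-squares null vector, G = H^T
    H = G.T
    return H / H[2, 2]              # fix the projective scale
```

With exact, non-degenerate line pairs the singular-vector solution recovers the homography up to scale, and the final normalization removes the remaining projective ambiguity.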
(2) $H_M^R$ Estimation
${}^M C_b^n$ is the measured attitude matrix from the body frame to the navigation frame and ${}^M P^n$ is the measured position of the body, while ${}^R C_b^n$ is the real attitude matrix and ${}^R P^n$ is the real position of the body. According to Equations (2)–(6), $H_M^R$ can be calculated as follows:
$$H_M^R = {}^R C_b^n \cdot {}^M C_n^b + \frac{{}^M C_n^b \cdot ({}^R P^n - {}^M P^n) \cdot e_3^T \cdot {}^M C_b^n}{e_3^T \cdot {}^M P^n}$$
So $vecH_M^R$ can be expressed as a function of the attitude error $\psi^n$ and the position error $\delta P^n$ through the conversion $H_M^R \to vecH_M^R$.

2.3.3. Other Observations

Besides the above visual measurements, the proposed landing navigation can integrate other common observations such as the air pressure height and the radio altitude. These measurement models can be written as follows:
$$\hat{h}_{imu} - \hat{h}_{hpr} = \delta h + v_{hpr} = C_{hpr} \cdot x + v_{hpr}$$
$$\hat{h}_{imu} - \hat{h}_{ralt} = \delta h + v_{ralt} = C_{ralt} \cdot x + v_{ralt}$$
$$C_{hpr} = C_{ralt} = [0_{1\times3} \ \ 0_{1\times3} \ \ e_3^T \ \ 0_{1\times3} \ \ 0_{1\times3}] \in \mathbb{R}^{1\times15}$$
where $\hat{h}_{imu}$ is the altitude measured by the IMU, $\hat{h}_{hpr}$ indicates the air pressure height measured by the barometer, and $\hat{h}_{ralt}$ represents the altitude measured by the radio altimeter. $v_{hpr}$ and $v_{ralt}$ are both assumed to be zero-mean Gaussian white noises. By combining the FLIR vision, the air pressure height and the radio altitude, the nonlinear measurement model is presented as:
z ( t ) = C ( x ) + v ( t )
$$z(t) = \begin{bmatrix} vec\hat{H}_M^R \\ \hat{h}_{imu} - \hat{h}_{hpr} \\ \hat{h}_{imu} - \hat{h}_{ralt} \end{bmatrix}, \quad C(x) = \begin{bmatrix} vecH_M^R \\ \delta h \\ \delta h \end{bmatrix}, \quad v(t) = \begin{bmatrix} v_{flir} \\ v_{hpr} \\ v_{ralt} \end{bmatrix}$$
The multi-source information fusion framework based on the SR_UKF consists of the process model and the measurement model, and realizes the integration of inertial measurements, infrared images, airport geo-references, air data and radio altitude.

2.4. Observability

Observability is an inherent characteristic of the proposed VINS; it indicates how well the states of a system can be inferred from its output measurements. There have recently been many works studying the observability of VINSs [36,37,38]. We apply the nonlinear observability analysis proposed by Hermann and Krener [36] and refer to the work of Kelly [37] and Weiss [38] for details on how to apply this method to a system similar to ours. In the following, the observability of the core system is established by studying the rank of the observability matrix based on Lie derivatives.

2.4.1. Nonlinear Observability

Considering the state space as an infinitely smooth manifold $X$ of dimension $n$, the nonlinear system is described by the following model:

$$\dot{\chi} = \sum_{i=0}^{p} f_i(\chi)\, u_i, \qquad y = h(\chi)$$

where $\chi \in \mathbb{R}^n$ is the state vector, $u_i \in \mathbb{R}$, $i = 0, \ldots, p$, denotes the control input with $u_0 = 1$, and $y = [y_1, \ldots, y_m]^T \in \mathbb{R}^m$ is the measurement vector with $y_k = h_k(\chi)$, $k = 1, \ldots, m$. The zeroth-order Lie derivative is the function itself, i.e., $L^0 h(\chi) = h(\chi)$. The first-order Lie derivative of $h$ with respect to $f_i$ at $\chi \in X$ is:
$$L_{f_i} h(\chi) = \frac{\partial h(\chi)}{\partial \chi} f_i(\chi)$$

The recursive Lie derivative is defined as:

$$L_{f_j} L_{f_i} h(\chi) = \frac{\partial \left( L_{f_i} h(\chi) \right)}{\partial \chi} f_j(\chi)$$

The k-th derivative of $h$ along $f_i$ is:

$$L_{f_i}^{k} h(\chi) = \frac{\partial \left( L_{f_i}^{k-1} h(\chi) \right)}{\partial \chi} f_i(\chi)$$

Based on the preceding expressions for the Lie derivative, the observability matrix is defined as:

$$\mathcal{O} = \begin{bmatrix} \nabla L^{0} h(\chi) \\ \nabla L_{f_1} h(\chi) \\ \vdots \\ \nabla L_{f_i \cdots f_j}^{n} h(\chi) \end{bmatrix}$$
If the observability matrix O is full rank, the system is locally weakly observable.
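As an illustration of this rank test, the Lie-derivative machinery can be applied symbolically to a toy two-state system (a generic example to show the mechanics, not the paper's aircraft model):

```python
import sympy as sp

# Toy system: x1' = x2 (drift f0), measurement y = h(x) = x1.
# The Hermann-Krener test stacks gradients of successive Lie derivatives
# and checks the rank of the resulting observability matrix.
x1, x2 = sp.symbols('x1 x2')
chi = sp.Matrix([x1, x2])
f0 = sp.Matrix([x2, 0])           # drift vector field
h = sp.Matrix([x1])               # measurement function

L0 = h                            # zeroth-order Lie derivative: h itself
L1 = L0.jacobian(chi) * f0        # first-order: (dL0/dchi) * f0 = x2

O = sp.Matrix.vstack(L0.jacobian(chi), L1.jacobian(chi))
print(O.rank())  # 2 -> locally weakly observable
```

Here the two gradients already span the state space, so the rank condition is met after one Lie derivative; for the landing system the same procedure requires derivatives up to second order, as developed below.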

2.4.2. Observability Analysis

In order to reveal the observability of our proposed system, we use the motion states instead of the state errors. The error-state equations are approximations in which second- and higher-order terms are omitted under the small-error assumption [38]; performing the observability analysis on the full nonlinear system instead avoids this information loss.
First, we define the system state vector of the core system as follows:
$$\chi(t) = \left[ (q_b^n)^T \;\; (v^n)^T \;\; (p^n)^T \;\; b_g^T \;\; b_a^T \right]^T$$

Then the nonlinear kinematic equations of the core system, rearranged for computing the Lie derivatives, are:

$$\begin{bmatrix} \dot{q}_b^n \\ \dot{v}^n \\ \dot{p}^n \\ \dot{b}_g \\ \dot{b}_a \end{bmatrix} = \begin{bmatrix} -0.5\, \Xi(q_b^n)\, b_g \\ g - C(q_b^n)\, b_a \\ v^n \\ 0_{3\times1} \\ 0_{3\times1} \end{bmatrix} + \begin{bmatrix} 0.5\, \Xi(q) \\ 0_{3\times3} \\ 0_{3\times3} \\ 0_{3\times3} \\ 0_{3\times3} \end{bmatrix} \omega_m + \begin{bmatrix} 0_{3\times3} \\ C(q) \\ 0_{3\times3} \\ 0_{3\times3} \\ 0_{3\times3} \end{bmatrix} a_m = f_0 + f_1\, \omega_m + f_2\, a_m$$

where $C(q_b^n)$ is the rotation matrix corresponding to the quaternion $q_b^n$, $\Xi(q)$ is the quaternion multiplication matrix of the rotation quaternion $q$ with $\dot{q} = 0.5\,\Xi(q)\,\omega$, $\omega_m$ denotes the angular velocity vector, and $a_m$ is the acceleration vector.
A well-known result that we will use in the observability analysis of Equation (31) is the following: when four or more known features are detected in the FLIR image frame, the infrared camera pose is observable. According to Equation (2), the measurements can be summarized as:
$$h_1 = C(q_b^n) \cdot R_0 + R_0 \cdot p^n \cdot N^{nT} / d^n - T_0$$

where $R_0 = {}^{sv}C_n^b$, $p_0 = {}^{sv}p^n$, $N^n = -R_0 \cdot e_3$, $d^n = -e_3^T \cdot p_0$, and $T_0 = R_0 \cdot p_0 \cdot N^{nT} / d^n$. Furthermore, we enforce the unit-quaternion constraint by employing the following additional measurement equation:

$$h_2 = (q_b^n)^T \cdot q_b^n = 1$$
(1) Zeroth-Order Lie Derivatives: the zeroth-order Lie derivatives of $h_1$ and $h_2$ are simply the measurement functions themselves, i.e.:

$$L^0 h_1 = C(q_b^n) \cdot R_0 + R_0 \cdot p^n \cdot N^{nT}/d^n - T_0$$

$$L^0 h_2 = (q_b^n)^T \cdot q_b^n$$

Their gradients are:

$$\nabla L^0 h_1 = \left[ \Gamma_1(q_b^n) \;\; 0_{3\times3} \;\; D_1(p^n)/d^n \;\; 0_{3\times3} \;\; 0_{3\times3} \right]$$

$$\nabla L^0 h_2 = \left[ 2(q_b^n)^T \;\; 0_{3\times3} \;\; 0_{3\times3} \;\; 0_{3\times3} \;\; 0_{3\times3} \right]$$

where $\Gamma_1(q_b^n) = \partial\left( C(q_b^n) \cdot R_0 \right) / \partial q_b^n$ and $D_1(p^n) = \partial\left( R_0 \cdot p^n \cdot N^{nT} \right) / \partial p^n$.
(2) First-Order Lie Derivatives: the first-order Lie derivatives of $h_1$ and $h_2$ with respect to $f_0$ are computed as:

$$L_{f_0}^1 h_1 = \nabla L^0 h_1 \cdot f_0 = -0.5\,\Gamma_1(q_b^n)\,\Xi(q_b^n)\, b_g + D_1(p^n)\, v^n / d^n$$

$$L_{f_0}^1 h_2 = -(q_b^n)^T\, \Xi(q)\, b_g$$

Their gradients are:

$$\nabla L_{f_0}^1 h_1 = \left[ \Gamma_2(q_b^n) \;\; D_1(p^n)/d^n \;\; D_2(p^n)/d^n \;\; -0.5\,\Gamma_1(q_b^n)\,\Xi(q_b^n) \;\; 0_{3\times3} \right]$$

$$\nabla L_{f_0}^1 h_2 = \left[ \Gamma_3(q_b^n) \;\; 0_{3\times3} \;\; 0_{3\times3} \;\; -(q_b^n)^T\,\Xi(q) \;\; 0_{3\times3} \right]$$

where $\Gamma_2(q_b^n) = \partial( L_{f_0}^1 h_1 ) / \partial q_b^n$, $D_2(p^n) = \partial( L_{f_0}^1 h_1 ) / \partial p^n$, and $\Gamma_3(q_b^n) = \partial( L_{f_0}^1 h_2 ) / \partial q_b^n$.
(3) Second-Order Lie Derivatives: the second-order Lie derivative of $h_1$ with respect to $f_0$ is computed as:

$$L_{f_0}^2 h_1 = \nabla L_{f_0}^1 h_1 \cdot f_0 = -0.5\,\Gamma_2(q_b^n)\,\Xi(q_b^n)\, b_g + D_1(p^n)\left( g - C(q_b^n)\, b_a \right)/d^n + D_2(p^n)\, v^n / d^n$$

The gradient is:

$$\nabla L_{f_0}^2 h_1 = \left[ \Gamma_4(q_b^n) \;\; S(v^n) \;\; D_3(p^n) \;\; -0.5\,\Gamma_2(q_b^n)\,\Xi(q_b^n) \;\; -D_1(p^n)\, C(q_b^n)/d^n \right]$$

where $\Gamma_4(q_b^n) = \partial( L_{f_0}^2 h_1 ) / \partial q_b^n$, $S(v^n) = \partial( L_{f_0}^2 h_1 ) / \partial v^n$, and $D_3(p^n) = \partial( L_{f_0}^2 h_1 ) / \partial p^n$.
We obtain the observability matrix $\mathcal{O}$ by stacking the gradient matrices above:

$$\mathcal{O} = \begin{bmatrix} \nabla L^0 h_1 \\ \nabla L^0 h_2 \\ \nabla L_{f_0}^1 h_1 \\ \nabla L_{f_0}^1 h_2 \\ \nabla L_{f_0}^2 h_1 \end{bmatrix} = \begin{bmatrix} \Gamma_1(q_b^n) & 0_{3\times3} & D_1(p^n)/d^n & 0_{3\times3} & 0_{3\times3} \\ 2(q_b^n)^T & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ \Gamma_2(q_b^n) & D_1(p^n)/d^n & D_2(p^n)/d^n & -0.5\,\Gamma_1(q_b^n)\,\Xi(q_b^n) & 0_{3\times3} \\ \Gamma_3(q_b^n) & 0_{3\times3} & 0_{3\times3} & -(q_b^n)^T\,\Xi(q) & 0_{3\times3} \\ \Gamma_4(q_b^n) & S(v^n) & D_3(p^n) & -0.5\,\Gamma_2(q_b^n)\,\Xi(q_b^n) & -D_1(p^n)\,C(q_b^n)/d^n \end{bmatrix}$$

where the complete matrix consists of 5 × 5 blocks. Considering the system state of an aerial vehicle in the landing phase, the attitude is relatively stable without any complex maneuver, i.e., pitch $\theta \in [2°, 4°]$, roll $\phi \in [-1°, 1°]$, and the angular velocity vector $\omega_m$ is small. In the observability matrix $\mathcal{O}$, the matrices $(q_b^n)^T$, $(q_b^n)^T \Xi(q)$, and $D_1(p^n)$ are full rank. After applying block Gaussian elimination and removing any all-zero rows of $\mathcal{O}$, a row-reduced form of the matrix having the same rank is given by:

$$\begin{bmatrix} 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I_{3\times3} \end{bmatrix}$$
which has full column rank, so the proposed system is proven to be observable.
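As a numerical sanity check, the row-reduced matrix, read here as a 5 × 5 arrangement of identity and zero blocks (the block positions are our transcription of the reduced form), is a block permutation of the identity and therefore has full column rank 15:

```python
import numpy as np

# Assemble the row-reduced observability matrix from 3x3 blocks and check
# its rank; full column rank (15) means all 15 states are observable.
I3, Z3 = np.eye(3), np.zeros((3, 3))
O_red = np.block([
    [Z3, Z3, I3, Z3, Z3],
    [I3, Z3, Z3, Z3, Z3],
    [Z3, I3, Z3, Z3, Z3],
    [Z3, Z3, Z3, I3, Z3],
    [Z3, Z3, Z3, Z3, I3],
])
rank = np.linalg.matrix_rank(O_red)
print(rank)  # 15
```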

3. Experimental Section and Discussion

In this section, we describe the flight data acquisition platform and use real flight data to verify the accuracy and robustness of the proposed method.

3.1. Experiments Preparation

The flight data were gathered at a general aviation airport (Pucheng, China) under different weather conditions, including fog, haze, cloudy and sunny conditions. As shown in Figure 7, the general aircraft (Y-12F) was equipped with an image sensing suite (ISS), an INS (Applanix AV510), a flight parameter recorder (FPR, AMPEX miniR 700), a flight video recorder (FVR, VM-4), a barometer (BARO, XSC-6E) and a radio altimeter (RALT, Honeywell KRA405b). The ISS, mounted on the aircraft radome, contains a SWIR camera (NIP PHK03M100CSW0) and a visible-light camera, while the INS, FPR and FVR were installed on the deck of the aircraft cabin. The flight data mainly comprised FLIR video (frame rate 24 Hz), inertial measurements (update rate 100 Hz), air pressure height (16 Hz) and radio altitude (20 Hz), all of which were time-stamped by the recorders to synchronize the measurements. In addition, a DGPS ground station (Trimble R5) was used for DGPS/INS integrated navigation to provide the ground truth.
To obtain accurate motion estimates, precise FLIR camera parameters and the camera/INS relative pose are needed. The classical chessboard-based calibration method [45] is adopted to obtain the intrinsic parameters of the FLIR camera. The world coordinates of the FLIR camera and the INS are individually measured by an electronic total station, and the FLIR camera/INS relative pose is then calculated from the vector relation between them [10]. The calibrated parameters of the INS and FLIR camera are shown in Table 1.
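The relative-pose computation from the total-station survey can be sketched as follows. The function, argument names and frame conventions are assumptions for illustration; see [10] for the authors' actual procedure.

```python
import numpy as np

def relative_pose_from_survey(R_ins_w, p_ins_w, R_cam_w, p_cam_w):
    """Camera/INS extrinsics from survey points: both device origins are
    measured in a common world frame by the electronic total station, and
    the camera pose is re-expressed in the INS body frame (a sketch)."""
    R_rel = R_ins_w.T @ R_cam_w                # camera attitude in INS frame
    t_rel = R_ins_w.T @ (p_cam_w - p_ins_w)    # lever arm in INS frame
    return R_rel, t_rel
```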
The flight data are stored in a flight data simulator (FDS), which can play back the whole flight process for algorithm design and verification. Moreover, the geographic data of the airport and its surroundings have been surveyed accurately. In this paper, experiments are run on an embedded computer board (Nvidia Jetson TX2) with six ARM CPU cores, 256 Pascal GPU cores and 8 GB of memory. The block diagram of the experimental platform is shown in Figure 8. The embedded computer receives the airborne sensor data from the FDS, simultaneously reads the airport geographic information stored on the solid-state disk (SSD), and then outputs the aircraft motion states through multi-source information fusion.

3.2. Runway Detection Experiment

An ideal line segment detector should process any image regardless of its orientation or size, and extract line segments in real time without parameter tuning. Among existing algorithms, the EDLines detector [19] and the Line Segment Detector (LSD) [20] satisfy these requirements. However, EDLines runs up to 11 times faster than LSD [19], which makes it more suitable for real-time runway detection. As shown in Figure 9, line segments are extracted from the ROI by the LSD and EDLines detectors, respectively.
In this paper, a complete landing process in fog is used to verify the proposed algorithm. The experimental results contain two parts: runway detection and motion estimation. Some runway detection results are shown in Figure 10. From top to bottom, the three rows represent three typical scenarios captured at flight altitudes of 200 ft, 100 ft and 60 ft, and the three columns from left to right denote the coarse layer, the fine layer and the final results, respectively.
At the coarse layer, the ROI is marked in red in the left column. At the fine layer of our improved method, the line segments detected in the ROI are highlighted in red and the trapezoid of the runway contour is labeled in green in the middle column. These line segments are fitted into the final runway features, shown in red in the right column. In addition, the statistics of runway detection listed in Table 2 show that the ratio of ROI pixels to total CCD pixels is less than 25%. The proposed method is therefore faster than methods [33,34] that process the whole image, and its robustness is significantly improved.

3.3. Motion Estimation Experiment

As shown in Figure 11, the approach and landing trajectories from the two landing navigation methods are presented. The red curve represents the INS/DGPS data, and the green curve is the motion estimate of the proposed method. The blue pattern denotes the airport runway area. The aircraft descended from 500 feet to 47 feet, passing through the three typical altitudes of 200 feet, 100 feet and 60 feet, flying for 59.45 s. Five recorded time points are marked in this figure. In our experiments, the results of the INS/DGPS integration are taken as ground truth.
The proposed algorithm is compared with three other methods: INS/GPS integration [46], the EPnP-based method [26] and INS/GPS/BARO/RALT integration [47]. To be consistent with the specifications of the sensor manufacturers, the comparison results of position errors, velocity errors and attitude errors are shown in Figure 12. ΔXe, ΔXn and ΔXu denote the measurement errors of the aircraft position in the eastward, northward and upward directions, respectively. Δψ, Δθ and Δϕ represent the measurement errors of the aircraft yaw, pitch and roll, respectively. ΔVe, ΔVn and ΔVu are the east, north and up measurement errors of the aircraft velocity, respectively. As shown in Figure 12, the motion errors of the INS/GPS/BARO/RALT integration are obviously larger than those of the other methods, while the motion errors of the proposed algorithm are the smallest. Because the EPnP-based algorithm uses pure image features to calculate the position and orientation of the camera relative to the runway, its accuracy is strongly limited by the relative distance between the camera and the runway. It is difficult to accurately extract the features of the runway terminal in the 500–200 feet stage. Moreover, the effect of runway-feature errors is greater in the 100–47 feet stage owing to the high ratio of runway features to the image. The accuracy of its motion estimation is higher only in the 200–100 feet stage.
Meanwhile, its data update rate is limited by the camera frame rate, which is lower than the INS update rate. In addition, the accuracy of motion estimation based on INS/GPS/BARO/RALT cannot be further improved because of the larger measurement errors of the barometer and radio altimeter. In contrast, this paper improves the existing runway detection algorithm so that the features of the runway terminal, which are otherwise difficult to detect accurately, can be obtained reliably. After the integration of vision measurements and inertial data, the update rate of motion estimation is also improved. Even in low-visibility environments, the motion estimates of the proposed method remain sufficiently accurate, which benefits from the accurate visual observations. In addition, the RMS errors of the different motion estimation methods are listed in Table 3. The attitude, velocity and position errors of the INS/GPS/BARO/RALT integration are slightly larger than those of the other methods, while those of the proposed algorithm are the smallest.
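The stage-by-stage RMS metric tabulated in Table 3 is the standard root-mean-square error of each channel against the ground-truth trajectory; a minimal sketch (hypothetical helper name, standard definition):

```python
import numpy as np

def rms_error(estimate, truth):
    """Per-channel RMS error of an estimated trajectory segment against the
    INS/DGPS ground truth, computed over one landing stage."""
    e = np.asarray(estimate, dtype=float) - np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))
```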
Among the flight parameters, the height observation is one of the most important for flight safety in the landing phase. The flight altitude from GPS is usually inaccurate and unreliable, while the height channel of the INS tends to diverge owing to the absence of damping. In general, air pressure height or radio altitude is adopted to damp the height channel of the INS, but their accuracy is too low for precision landing. The proposed algorithm, which combines the advantages of vision and inertial sensors, not only improves the estimation accuracy but also guarantees a high update rate.
Figure 13 shows the flight height during landing obtained by the different methods, and the RMS errors of flight height in the landing phase are given in Table 4. The radio altimeter and barometer have not only a low update rate but also poor accuracy, which makes them unsuitable for landing navigation.
Although the INS/GPS mode has a high update rate of height data, its accuracy is poor compared with DGPS/INS. The EPnP-based method has higher accuracy than the INS/GPS mode, but a lower update rate because it relies on pure vision navigation. The height error of the proposed INS/FLIR method is clearly the smallest, so it can replace the INS/DGPS mode to meet precision landing demands.

3.4. Discussions

The proposed method achieves high precision, up to the DGPS/INS level, in low visibility. First, the homography serves as an ideal visual observation without error accumulation. Meanwhile, owing to the improved runway detection method, it can efficiently overcome the defects of infrared images and run smoothly in a landing scene with large scale and little texture. Compared with ILS and GPS, our method merely requires an infrared camera cooperating with airborne navigation sensors, e.g., IMUs, to achieve autonomous motion estimation with low cost, robustness and accuracy. In particular, the accuracy of our method reaches the level of DGPS/INS for precision approach and landing.
In the proposed method, the main factors affecting the accuracy of aircraft motion estimation are sensor calibration errors, terrain database precision, spatiotemporal consistency and runway detection quality. These errors can be partially eliminated by strict sensor calibration [10,45], a high-precision terrain database and time synchronization [38]. However, the accuracy of runway detection has a great influence on the proposed method, and it must be guaranteed by the algorithm itself. The size of the synthetic runway neighborhood directly affects the accuracy of the fitted straight-line features: if the neighborhood is too small, the line features will not be found; if it is too large, interfering features will increase significantly. In this paper, the pixel errors (Δr, Δc) are set to 2σ, which is a trade-off setting.
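The neighborhood construction described above can be sketched as follows. The function name and the bounding-box simplification are assumptions for illustration; the margin plays the role of the 2σ pixel-error setting (Δr, Δc).

```python
import numpy as np

def runway_edge_neighborhood(quad_rc, margin_rc, img_shape):
    """Dilate the synthetic runway quadrilateral by the pixel-error margin
    (delta_r, delta_c), clipped to the image bounds, to obtain the search
    neighborhood in which real runway line segments are fitted."""
    dr, dc = margin_rc
    rmin = max(int(quad_rc[:, 0].min() - dr), 0)
    rmax = min(int(quad_rc[:, 0].max() + dr), img_shape[0] - 1)
    cmin = max(int(quad_rc[:, 1].min() - dc), 0)
    cmax = min(int(quad_rc[:, 1].max() + dc), img_shape[1] - 1)
    return rmin, rmax, cmin, cmax
```

A small margin risks missing the true edges when the INS prediction drifts; a large margin admits interfering segments, which is exactly the trade-off the 2σ setting balances.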

4. Conclusions and Future Works

This paper proposed a novel visual-inertial navigation method to provide drift-free pose estimation for fixed-wing aircraft landing, in which inertial measurements, infrared observations and geo-information are organically fused in the UKF. In addition, the proposed method has been proven observable by nonlinear observability analysis. Comprehensive experiments with real flight data have verified the accuracy and robustness of the proposed method.
In the future, several research tasks remain for further improvement. (1) For stronger adaptability, we will adopt a multispectral image fusion method [48,49] to enhance sensitivity in more weather conditions such as rain, snow or dust. (2) Deep-learning methods [50] can be explored to detect semantic objects with known geo-references around the runway in infrared images, which should not only increase the number of vision features to improve system precision, but also strengthen the robustness of detecting and recognizing different airports. (3) For convenience, online camera-to-inertial calibration techniques [51,52] can be used to replace the complicated hand-eye calibration.

Author Contributions

L.Z. proposed the visual-inertial navigation method and wrote the source code and the manuscript; Z.Z. contributed to the algorithm design and to the writing and revision of the paper; L.H. took part in the algorithm verification; P.W. was responsible for data collection and experiments; W.N. helped with the experiments and paper revision.

Acknowledgments

This work is supported in part by the Aeronautical Science Foundation of China under Grant 2014ZC31004 and 2017ZC31008 and in part by the Technology Innovation Foundation of Aviation Industry Corporation of China under Grant 2014D63130R.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jiyun, L.; Minchan, K. Optimized GNSS Station Selection to Support Long-Term Monitoring of Ionospheric Anomalies for Aircraft Landing Systems. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 236–246. [Google Scholar] [CrossRef]
  2. Alvika, G.; Sujit, P.B.; Srikanth, S. A Survey of Autonomous Landing Techniques for UAVs. In Proceedings of the International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014; pp. 1210–1218. [Google Scholar]
  3. Kong, W.; Zhou, D.; Zhang, D.; Zhang, J. Vision-based Autonomous Landing System for Unmanned Aerial Vehicle: A Survey. In Proceedings of the International Conference on Multisensor Fusion and Information Integration for Intelligent Systems (MFI), Beijing, China, 28–29 September 2014; pp. 1–8. [Google Scholar]
  4. Martínez, C.; Mondragón, I.; Olivares-Méndez, M.; Compoy, P. On-board and Ground Visual Pose Estimation Techniques for UAV Control. J. Intell. Robot. Syst. 2011, 61, 301–320. [Google Scholar] [CrossRef]
  5. Kong, W.; Zhang, D.; Wang, X.; Xian, Z.; Zhang, J. Autonomous Landing of an UAV with a Ground-Based Actuated Infrared Stereo Vision System. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 2963–2970. [Google Scholar]
  6. Kong, W.; Zhou, D.; Zhang, Y.; Zhang, D.; Zhang, J. A Ground-Based Optical System for Autonomous Landing of a Fixed Wing UAV. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, USA, 14–18 September 2014; pp. 4797–4804. [Google Scholar]
  7. Kong, W.; Hu, T.; Zhang, D.; Shen, L.; Zhang, J. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach. Sensors 2017, 17, 1437–1453. [Google Scholar] [CrossRef] [PubMed]
  8. Tang, D.; Hu, T.; Shen, L.; Zhang, D.; Kong, W.; Low, K. Ground Stereo Vision-based Navigation for Autonomous Take-off and Landing of UAVs: A Chan-Vese Model Approach. Int. J. Adv. Robot. Syst. 2016, 13, 67–80. [Google Scholar] [CrossRef]
  9. Ma, Z.; Hu, T.; Shen, L. Stereo Vision Guiding for the Autonomous Landing of Fixed-wing UAVs: A Saliency-inspired Approach. Int. J. Adv. Robot. Syst. 2016, 13, 43–55. [Google Scholar] [CrossRef]
  10. Yang, T.; Li, G.; Li, J.; Zhang, Y.; Zhang, X.; Zhang, Z.; Li, Z. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment. Sensors 2016, 16, 1393–1412. [Google Scholar] [CrossRef] [PubMed]
  11. Coutard, L.; Chaumette, F.; Pflimlin, J. Automatic landing on aircraft carrier by visual servoing. In Proceedings of the IEEE Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 2843–2848. [Google Scholar]
  12. Coutard, L.; Chaumette, F. Visual detection and 3D model-based tracking for landing on an aircraft carrier. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 1746–1751. [Google Scholar]
  13. Ding, Z.; Li, K.; Meng, Y.; Wang, L. FLIR/INS/RA Integrated Landing Guidance for Landing on Aircraft Carrier. Int. J. Adv. Robot. Syst. 2015, 12, 60–68. [Google Scholar] [CrossRef]
  14. Jia, N.; Lei, Z.; Yan, S. An Independently Carrier Landing Method Using Point and Line Features for Fixed-Wing UAVs. Commun. Comput. Inf. Sci. 2016, 634, 176–183. [Google Scholar] [CrossRef]
  15. Muskardin, T.; Balmer, G.; Wlach, S.; Kondak, K.; Laiacker, M.; Ollero, A. Landing of a Fixed-wing UAV on a Mobile Ground Vehicle. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1237–1242. [Google Scholar]
  16. Hans-Ullrich, D.; Bernd, R. Autonomous infrared-based guidance system for approach and landing. In Proceedings of the Enhanced and Synthetic Vision, Orlando, FL, USA, 12–13 April 2004; pp. 140–147. [Google Scholar]
  17. Goncalves, T.; Azinheira, J.; Rives, P. Vision-Based Automatic Approach and Landing of Fixed-Wing Aircraft Using a Dense Visual Tracking. Inform. Control Autom. Robot. 2011, 85, 269–282. [Google Scholar] [CrossRef]
  18. Gui, Y.; Guo, P.; Zhang, H.; Lei, Z.; Zhou, X.; Du, J.; Yu, Q. Airborne Vision-Based Navigation Method for UAV Accuracy Landing Using Infrared Lamps. J. Intell. Robot. Syst. 2013, 72, 197–218. [Google Scholar] [CrossRef]
  19. Guo, P.; Li, X.; Gui, Y.; Zhou, X.; Zhang, H.; Zhang, X. Airborne Vision-Aided Landing Navigation System for Fixed-Wing UAV. In Proceedings of the IEEE International Conference on Signal Processing (ICSP), Hangzhou, China, 19–23 October 2014; pp. 1215–1220. [Google Scholar]
  20. Bras, F.; Hamel, T.; Mahony, R.; Barat, C.; Thadasack, J. Approach Maneuvers for Autonomous Landing Using Visual Servo Control. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 1051–1065. [Google Scholar] [CrossRef]
  21. Fan, Y.; Ding, M.; Cao, Y. Vision algorithms for fixed-wing unmanned aerial vehicle landing system. Sci. China Technol. Sci. 2017, 60, 434–443. [Google Scholar] [CrossRef]
  22. Burlion, L.; Plinval, H. Toward vision-based landing of a fixed-wing UAV on an unknown runway under some fov constrains. In Proceedings of the International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 1824–1832. [Google Scholar]
  23. Gibert, V.; Burlion, L.; Chriette, A.; Boada, J.; Plestan, F. Nonlinear observers in vision system: Application to civil aircraft landing. In Proceedings of the European Control Conference (ECC), Linz, Austria, 15–17 July 2015; pp. 1818–1823. [Google Scholar]
  24. Gibert, V.; Burlion, L.; Chriette, A.; Boada, J.; Plestan, F. New pose estimation scheme in perspective vision system during civil aircraft landing. IFAC-PapersOnLine 2015, 48, 238–243. [Google Scholar] [CrossRef]
  25. Gibert, V.; Plestan, F.; Burlion, L.; Boada, J.; Chriette, A. Visual estimation of deviations for the civil aircraft landing. Control Eng. Pract. 2018, 75, 17–25. [Google Scholar] [CrossRef]
  26. Ruchanurucks, M.; Rakprayoon, P.; Kongkaew, S. Automatic Landing Assist System Using IMU+PnP for Robust Positioning of Fixed-Wing UAVs. J. Intell. Robot. Syst. 2018, 90, 189–199. [Google Scholar] [CrossRef]
  27. Santoso, F.; Garratt, M.; Anavatti, S. Visual Inertial Navigation Systems for Aerial Robotics Sensor Fusion and Technology. IEEE Trans. Autom. Sci. Eng. 2017, 14, 260–275. [Google Scholar] [CrossRef]
  28. Yang, Z.; Gao, F.; Shen, S.J. Real-time Monocular Dense Mapping on Aerial Robots Using Visual-Inertial Fusion. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Marina Bay Sands, Singapore, 29 May–3 June 2017; pp. 4552–4559. [Google Scholar]
  29. Mur-Artal, R.; Tardos, J. Visual-Inertial monocular SLAM with Map Reuse. IEEE Trans. Autom. Lett. 2017, 2, 796–803. [Google Scholar] [CrossRef]
  30. Sun, K.; Mohta, K.; Pfrommer, B.; Watterson, M.; Liu, S.; Mulgaonkar, Y.; Taylor, C.J.; Kumar, V. Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight. IEEE Robot. Autom. Lett. 2018, 3, 965–972. [Google Scholar] [CrossRef][Green Version]
  31. Borges, P.; Vidas, S. Practical Infrared Visual Odometry. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2205–2213. [Google Scholar] [CrossRef]
  32. Liu, C.; Zhao, Q.; Zhang, Y.; Tan, K. Runway Extraction in Low Visibility Conditions Based on Sensor Fusion Method. IEEE Sens. J. 2016, 14, 1980–1987. [Google Scholar] [CrossRef]
  33. Kumar, V.; Kashyao, S.; Kumar, N. Detection of Runway and Obstacles using Electro-optical and Infrared Sensors before Landing. Def. Sci. J. 2014, 64, 67–76. [Google Scholar] [CrossRef][Green Version]
  34. Wu, W.; Xia, R.; Xiang, W.; Hui, B.; Chang, Z.; Liu, Y.; Zhang, Y. Efficient Airport Detection Using Line Segment Detector and Fisher Vector Representation. IEEE Geosci. Remote Sens. 2016, 13, 1079–1083. [Google Scholar] [CrossRef]
  35. Lei, Z.; Cheng, Y.; Zhengjun, Z. Real-time Accurate Runway Detection based on Airborne Multi-sensors Fusion. Def. Sci. J. 2017, 67, 48–56. [Google Scholar] [CrossRef]
  36. Hermann, R.; Krener, A. Nonlinear controllability and observability. IEEE Trans. Autom. Control 1977, 22, 728–740. [Google Scholar] [CrossRef]
  37. Kelly, J.; Sukhatme, G. Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-Calibration. Int. J. Robot. Res. 2010, 5, 56–79. [Google Scholar] [CrossRef]
  38. Weiss, S. Vision Based Navigation for Micro Helicopters. Ph.D. Thesis, ETH Zurich, Zürich, Switzerland, 2012. [Google Scholar]
  39. RTCA DO-315B. Minimum Aviation System Performance Standard (MASPS) for Enhanced Vision Systems, Synthetic Vision Systems, Combine Vision Systems and Enhanced Flight Vision Systems; RTCA: Washington, DC, USA, 2013. [Google Scholar]
  40. Ma, Y.; Soatto, S.; Kosecka, J.; Sastry, S. An Invitation to 3-D Vision: From Images to Geometric Models; Springer: New York, NY, USA, 2004; pp. 131–139. [Google Scholar]
  41. Ezio, M.; Manuel, V. Deeper Understanding of the Homography Decomposition for Vision-Based Control; RR-6303; INRIA: Sophia Antipolis, France, 2007. [Google Scholar]
  42. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003; pp. 325–339. [Google Scholar]
  43. Akinlar, C.; Topal, C. EDLines: A real-time line segment detector with a false detection control. Pattern Recognit. Lett. 2011, 32, 1633–1642. [Google Scholar] [CrossRef]
  44. Rudolph, V.D.; Eric., A.W. The Square-root Unscented Kalman Filter for State and Parameter-estimation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing 2001, Salt Lake City, UT, USA, 7–11 May 2001; pp. 3461–3464. [Google Scholar]
  45. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef][Green Version]
  46. Khalaf, W.; Chouaib, I.; Wainakh, M. Novel adaptive UKF for tightly-coupled INS/GPS integration with experimental validation on an UAV. Gyrosc. Navig. 2017, 8, 259–269. [Google Scholar] [CrossRef]
  47. Gay, R.; Maybeck, P. An Integrated GPS/INS/BARO and Radar Altimeter System for Aircraft Precision Approach Landings. In Proceedings of the IEEE National Aerospace and Electronics Conference, Dayton, OH, USA, 22–26 May 1995; pp. 161–168. [Google Scholar]
  48. Huang, B.; Bi, D.; Wu, D. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain. Sensors 2018, 18, 1169–1182. [Google Scholar] [CrossRef]
  49. Ma, J.; Ma, Y.; Li, C. Infrared and Visible Image Fusion methods and Applications: A Survey. Inform. Fusion 2019, 45, 153–178. [Google Scholar] [CrossRef]
  50. Dong, J.; Fei, X.; Soatto, S. Visual-Inertial-Semantic Scene Representation for 3D Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3567–3577. [Google Scholar]
  51. Yang, Z.F.; Shen, S.J. Monocular Visual-Inertial State Estimation with Online Initialization and Camera-IMU Extrinsic Calibration. IEEE Trans. Autom. Sci. Eng. 2017, 14, 39–51. [Google Scholar] [CrossRef]
  52. Kaiser, J.; Martinelli, A.; Fontana, F.; Scraramuzza, D. Simultaneous State Initialization and Gyroscope Bias Calibration in Visual Inertial Aided Navigation. IEEE Robot. Autom. Lett. 2017, 2, 18–25. [Google Scholar] [CrossRef][Green Version]
Figure 1. Approach and Landing procedure.
Figure 1. Approach and Landing procedure.
Sensors 19 00408 g001
Figure 2. Framework of the proposed landing navigation: the blue box is the core part of the proposed approach
Figure 2. Framework of the proposed landing navigation: the blue box is the core part of the proposed approach
Sensors 19 00408 g002
Figure 3. Homography between synthetic and real images
Figure 3. Homography between synthetic and real images
Sensors 19 00408 g003
Figure 4. Reference frames and runway model
Figure 4. Reference frames and runway model
Sensors 19 00408 g004
Figure 5. Projection of features in synthetic image.
Figure 5. Projection of features in synthetic image.
Sensors 19 00408 g005
Figure 6. Real Runway Detection: the black solid rectangle is the runway ROI, the red lines are the extracted line segments, the blue quadrangle is the synthetic runway contour, the black dashed rectangles are the neighborhoods of runway edges, and the green quadrangle is the fitted runway edge.
Figure 6. Real Runway Detection: the black solid rectangle is the runway ROI, the red lines are the extracted line segments, the blue quadrangle is the synthetic runway contour, the black dashed rectangles are the neighborhoods of runway edges, and the green quadrangle is the fitted runway edge.
Sensors 19 00408 g006
Figure 7. The flight data acquisition platform: (a) ISS; (b) ISS installation; (c) aircraft landing; (d) instruments for flight data acquisition; (e) DGPS ground station.
Figure 7. The flight data acquisition platform: (a) ISS; (b) ISS installation; (c) aircraft landing; (d) instruments for flight data acquisition; (e) DGPS ground station.
Sensors 19 00408 g007
Figure 8. The block diagram of the experimental platform.
Figure 8. The block diagram of the experimental platform.
Sensors 19 00408 g008
Figure 9. Line Segments Extraction from ROI: (a) EDLines: 173 lines, 3.1 ms; (b) LSD: 213 lines, 17.1 ms.
Figure 9. Line Segments Extraction from ROI: (a) EDLines: 173 lines, 3.1 ms; (b) LSD: 213 lines, 17.1 ms.
Sensors 19 00408 g009
Figure 10. Runway detection at typical flight height: (a) 200 ft; (b) 100 ft; (c) 60 ft.
Figure 10. Runway detection at typical flight height: (a) 200 ft; (b) 100 ft; (c) 60 ft.
Sensors 19 00408 g010
Figure 11. Approach and landing trajectory.
Figure 12. Errors of motion estimation: (a) position errors, (b) attitude errors, and (c) velocity errors.
Figure 13. Flight height among 6 modes during landing.
Table 1. The calibrated parameters of INS and FLIR camera.
FLIR Camera Intrinsic Parameters:
  pixel size: 0.025 µm
  focal length: fx = 1010.7 pixel, fy = 1009.5 pixel
  principal point: u0 = 316.376 pixel, v0 = 237.038 pixel
  radial distortion: k1 = −0.3408, k2 = 0.1238
  spectral response: 0.9–1.7 µm
  CCD resolution: 640 × 512
  field of view: 20° (H) × 30° (V)
FLIR Camera Installation:
  position: [−0.002, 0.094, −12.217] m
  attitude: [−0.0181, −0.0824, −0.0049] rad
INS Installation:
  position: [0.0704, −0.4742, −7.2863] m
  attitude: [0.0789, 0.0003, −0.0088] rad
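The intrinsic parameters in Table 1 define the pinhole-plus-radial-distortion model used to render synthetic runway features into the FLIR image plane. A sketch of that projection using the calibrated values (the two-term radial model matches the k1, k2 entries; the 3-D point below is illustrative):

```python
import numpy as np

# Calibrated FLIR intrinsics from Table 1
FX, FY = 1010.7, 1009.5          # focal lengths, pixels
U0, V0 = 316.376, 237.038        # principal point, pixels
K1, K2 = -0.3408, 0.1238         # radial distortion coefficients

def project(Pc):
    """Project a 3-D point in the camera frame to distorted pixel
    coordinates with a pinhole model and two radial distortion terms."""
    x, y = Pc[0] / Pc[2], Pc[1] / Pc[2]   # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + K1 * r2 + K2 * r2 * r2      # radial distortion factor
    return np.array([FX * d * x + U0, FY * d * y + V0])
```

A point on the optical axis must land exactly on the principal point, which gives a quick sanity check of the model.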
Table 2. The statistics of the runway detection at three typical flight altitudes.
| Scenario | Flight Height (ft) | ROI (pixels) | ROI/CCD Ratio | Lines |
|----------|--------------------|--------------|---------------|-------|
| 1        | 200                | 49 × 77      | 0.0115        | 16    |
| 2        | 100                | 106 × 214    | 0.0692        | 58    |
| 3        | 60                 | 164 × 488    | 0.2442        | 173   |
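The ROI/CCD Ratio column is simply the ROI pixel area divided by the full 640 × 512 sensor area from Table 1, which can be checked directly:

```python
# Verify the ROI/CCD Ratio column of Table 2 against the 640 x 512 CCD.
CCD_AREA = 640 * 512

def roi_ratio(width, height):
    """Fraction of the CCD covered by a width x height region of interest,
    rounded to the four decimals reported in Table 2."""
    return round(width * height / CCD_AREA, 4)
```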
Table 3. RMS Errors of attitude, velocity and position.
| Height     | Method | Δθ (deg) | Δϕ (deg) | Δψ (deg) | ΔVe (m/s) | ΔVn (m/s) | ΔVu (m/s) | ΔXe (m) | ΔXn (m) | ΔXu (m) |
|------------|--------|----------|----------|----------|-----------|-----------|-----------|---------|---------|---------|
| 500–200 ft | A      | 0.0222   | 0.0275   | 0.0275   | 0.1384    | 0.2344    | 0.1180    | 0.2544  | 1.0091  | 0.0638  |
|            | B      | 0.5252   | 0.0118   | 0.0478   | 0.1300    | 0.1205    | 0.1118    | 2.5186  | 3.8128  | 3.3936  |
|            | C      | 0.3686   | 0.0345   | 0.0808   | 0.0980    | 0.6693    | 0.1159    | 9.3157  | 4.3762  | 3.2143  |
|            | D      | 0.2056   | 0.1645   | 0.0118   | –         | –         | –         | 2.4090  | 1.3071  | 0.6973  |
| 200–100 ft | A      | 0.0133   | 0.0275   | 0.0151   | 0.1096    | 0.1709    | 0.0748    | 0.3938  | 0.6297  | 0.0413  |
|            | B      | 0.5063   | 0.0151   | 0.0303   | 0.0916    | 0.1164    | 0.0754    | 1.9192  | 3.7881  | 3.2277  |
|            | C      | 0.4934   | 0.0364   | 0.0650   | 0.1001    | 0.4617    | 0.0780    | 12.952  | 8.6355  | 3.2394  |
|            | D      | 0.4415   | 0.3349   | 0.0207   | –         | –         | –         | 1.5185  | 0.7632  | 0.6902  |
| 100–60 ft  | A      | 0.0122   | 0.0268   | 0.0203   | 0.0793    | 0.1228    | 0.0601    | 0.4051  | 0.8981  | 0.0531  |
|            | B      | 0.4869   | 0.0135   | 0.0080   | 0.0802    | 0.1093    | 0.0619    | 2.0229  | 4.0380  | 3.2476  |
|            | C      | 0.4773   | 0.0346   | 0.0375   | 0.1173    | 0.3777    | 0.0631    | 14.617  | 10.401  | 3.3094  |
|            | D      | 0.6914   | 0.5275   | 0.0304   | –         | –         | –         | 2.5647  | 1.3917  | 0.7617  |
| 60–47 ft   | A      | 0.0190   | 0.0379   | 0.0131   | 0.1187    | 0.1282    | 0.1189    | 0.4038  | 0.8795  | 0.0567  |
|            | B      | 0.4762   | 0.0596   | 0.0141   | 0.0769    | 0.1525    | 0.1221    | 1.9263  | 4.1950  | 3.2373  |
|            | C      | 0.4703   | 0.0800   | 0.0486   | 0.1390    | 0.5439    | 0.1224    | 18.051  | 10.948  | 3.2825  |
|            | D      | 0.7734   | 0.4161   | 0.0202   | –         | –         | –         | 3.3010  | 1.7121  | 0.4384  |

Note: A—INS/FLIR, B—INS/GPS, C—INS/GPS/BARO/RALT, D—EPnP. Method D (EPnP) is a pose-only solution and reports no velocity errors.
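The entries of Tables 3 and 4 are root-mean-square (RMS) errors accumulated over each height band. For a sequence of per-frame estimation errors the statistic is:

```python
import numpy as np

def rms(errors):
    """Root-mean-square of a per-frame error sequence, the statistic
    reported in Tables 3 and 4."""
    e = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(e * e)))
```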
Table 4. RMS Errors of flight height in landing phase, unit (m).
| Height     | INS/FLIR | INS/GPS | INS/GPS/BARO/RALT | EPnP   | BARO   | Radio Altimeter |
|------------|----------|---------|-------------------|--------|--------|-----------------|
| 500–200 ft | 0.0638   | 3.3936  | 3.2143            | 0.6973 | 3.0746 | 4.9590          |
| 200–100 ft | 0.0413   | 3.2277  | 3.2394            | 0.6902 | 3.6841 | 4.7333          |
| 100–60 ft  | 0.0531   | 3.2476  | 3.3094            | 0.7617 | 4.0300 | 4.1154          |
| 60–47 ft   | 0.0567   | 3.2373  | 3.2825            | 0.4384 | 4.1483 | 3.6150          |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).