Article

Systematic Error Correction for Geo-Location of Airborne Optoelectronic Platforms

1. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
3. College of Electro-Mechanical Engineering, Changchun University of Science and Technology, Changchun 130022, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(22), 11067; https://doi.org/10.3390/app112211067
Submission received: 13 September 2021 / Revised: 10 November 2021 / Accepted: 11 November 2021 / Published: 22 November 2021
(This article belongs to the Section Aerospace Science and Engineering)

Abstract

In order to improve the geo-location accuracy of the airborne optoelectronic platform and eliminate the influence of assembly systematic error on that accuracy, a systematic geo-location error correction method is proposed. First, the geo-location model was established based on the kinematic characteristics of the airborne optoelectronic platform. Then, the error items that affect the geo-location accuracy were analyzed. The installation error between the platform and the POS was considered, and the installation error of the platform's pitch and azimuth axes was introduced. After ignoring higher-order infinitesimals, a least-squares form of the systematic error is obtained, so the systematic error can be estimated from a series of measurements. Both Monte Carlo simulation analysis and in-flight experiment results show that this method can effectively obtain the systematic error. Through correction, the root-mean-square value of the geo-location error was reduced from 45.65 m to 12.62 m, and the mean error from 16.60 m to 1.24 m. This method can be widely used in systematic error correction of relevant photoelectric equipment.

1. Introduction

Airborne optoelectronic platforms, which can perform a wide range of search, identification, tracking and measurement tasks, are playing an increasingly important role in military and civilian applications such as search, rescue of the wounded and target reconnaissance [1,2,3,4]. The process of obtaining the geographic coordinates of a target using an airborne optoelectronic platform is called geo-location. Accurate geo-location is of great significance to the rescue of the wounded, target reconnaissance, etc. In recent years, scholars have conducted extensive research on geo-location algorithms. The following methods are used to improve geo-location accuracy:
(1)
Build a more accurate model based on the Earth ellipsoid model or a digital elevation model. For example, Stich proposed a geo-location algorithm based on the Earth ellipsoid model from a single image using an aerial camera to reduce the influence of the Earth's curvature on the positioning result [5]. Qiao proposed a geo-location algorithm based on the digital elevation model (DEM) for an airborne wide-area reconnaissance system, and the simulation results show that the proposed algorithm can improve the geo-location accuracy of ground targets in rough terrain areas [6]. The Global Hawk unmanned aerial vehicle (UAV) geo-location system is also based on an Earth ellipsoid model to calculate the geodetic coordinates of the image center [7].
(2)
Image the target multiple times, or use multiple sensors or multiple UAVs, to obtain redundant information and improve geo-location accuracy. Bai proposed an improved two-UAV intersection localization system based on airborne optoelectronic platforms using the crossed-angle localization method to address the limitations of existing UAV photoelectric localization methods for moving objects [8]. Lee proposed an information-sharing strategy that allocates sensors to multiple small UAVs to overcome the drawback that a small unmanned aerial vehicle (UAV) cannot carry many sensors for target localization [9]. Morbidi and Mariottini described an active target-tracking strategy to deploy a team of UAVs along paths that minimize the uncertainty about the position of a moving target [10]. Qu proposed a ground target cooperative geometric localization method based on the locations of several UAVs and their relative distances from a target, and simulation results from the MATLAB/Simulink toolbox show that this method is more effective than a traditional approach [11].
(3)
Use filtering algorithms or video sequences to optimize positioning results. Zhao et al. [12] proposed an adaptive tracking algorithm based on the Kalman filter (KF). Qiao proposed a moving average filter (MAF) to improve the geo-location accuracy of moving ground targets, which adapts to both the constant-velocity motion model and the turn model [6]. Wang proposed a recursive least squares (RLS) filtering method based on UAV dead reckoning to improve the accuracy of multi-target localization [13]. Nonlinear filters [14,15,16,17] and methods based on video sequences [18,19,20,21,22,23] have also been proposed to estimate the locations of targets.
(4)
Analyze the factors causing the error and use calibration methods to make corrections. Liu proposed a system and method for correcting the relative angular displacements between an unmanned aerial vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy [24]. Using lens distortion parameters obtained in the laboratory, Wang proposed a real-time zoom lens distortion correction method to improve the accuracy of multi-target localization [13].
In all the above references, the authors either did not take the effect of the systematic error into account [4,19,24,25] or introduced it only as a fixed bias [6,8]. Eliminating systematic error is the basis for improving the accuracy of target geo-location: it directly affects the geo-location accuracy, and filtering algorithms [6,13,14,15,16,17] cannot remove it. With systematic error present, the geo-location error grows with the distance to the target, and beyond a certain distance it becomes the dominant error affecting geo-location accuracy. Eliminating the systematic error is therefore of great significance for improving geo-location accuracy.
The method proposed here is a kind of boresight calibration, but it differs from existing methods in two respects. First, boresight calibration methods usually solve for the misalignment between the image coordinate system and the body coordinate system established by the inertial instruments [26,27], whereas this method targets airborne optoelectronic platforms, which always have additional rotation axes such as azimuth and pitch axes. Second, the transformations are usually established for various photogrammetric bundle adjustment systems [26,27,28], whereas this method establishes them through the distance measured by the laser rangefinder and one target per measurement. The method proposed in this article is thus a special boresight calibration with more axes and a laser rangefinder to measure the distance between GCPs and the imaging system.
The primary contribution of this paper is that a geo-location systematic error correction method is proposed. This method has the following advantages:
This method can correct the systematic error arising from manufacturing and assembly, which cannot be eliminated with laboratory equipment. For example, after the photoelectric payload is installed on the UAV, it is difficult for laboratory equipment to correct the installation error between the aircraft and the payload due to size and other constraints.
This method is easy to implement. Unlike the laboratory methods [13,24], it does not require special equipment; the systematic error can be obtained through flight experiments and control points on the ground.
The rest of the paper is organized as follows: Section 2.1 briefly presents the overall framework of the geo-location system. Section 2.2 presents the reference frames and transformations required for the geo-location system. Section 2.3 presents the ground target geo-location model. Section 2.4 presents the geo-location error analysis and the method for obtaining the systematic error using control points. Section 3 presents the results of the systematic error correction method from both Monte Carlo analysis and in-flight experiments. Section 4 presents the discussion and our conclusions are given in Section 5.

2. Materials and Methods

2.1. Overall Framework

The geo-location system introduced in this article is composed of a ground control station, a digital and image transmission link and a UAV with an airborne optoelectronic platform, as shown in Figure 1. The UAV was developed by the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences for civilian applications such as rescue of the wounded and forest fire prevention. The ground control station is responsible for the control and status display of the UAV and the airborne optoelectronic platform. The transmission link is responsible for real-time downloading of video data. The airborne optoelectronic platform is composed of a visible-light camera, a thermal imaging camera, a laser rangefinder, stabilized platforms, image trackers, an inertial measurement unit (IMU), etc. The stabilized platforms include the azimuth axis and the pitch axis, and each axis has an encoder that outputs the current azimuth and pitch angle in real time.
The geo-location process is shown in Figure 2. First, an operator selects the target in the real-time video at the ground control station. Then, when the airborne optoelectronic platform is tracking the selected target and the operator turns on the laser rangefinder, the platform starts the geo-location process. The geo-location results of the target are sent to the ground control station for real-time display.
In fact, the geo-location process can locate any position within sight of the camera. In this article, in order to correct the systematic error, only a target at the center of the field of view is considered. Since the axis of the laser rangefinder is made parallel to the optical axis of the cameras during the design and installation of the airborne optoelectronic platform, the laser rangefinder's reading is the distance between the target and the photoelectric payload.

2.2. Establishment of the Basic Coordinates

Four basic coordinate frames are used in the geo-location algorithm: the imaging coordinate frame $V$ ($X_v Y_v Z_v$), the aircraft coordinate frame $B$ ($X_b Y_b Z_b$), the navigation coordinate frame $P$ ($X_p Y_p Z_p$) and the geodetic coordinate frame $G$ ($X_g Y_g Z_g$).

2.2.1. The Imaging Coordinate Frame $V$ ($X_v Y_v Z_v$)

This frame has its origin at the rotation center of the payload. The $X_v$ axis points upwards when the camera is along the vehicle x-axis, the $Z_v$ axis is along the line of sight (LOS) of the imaging system and points to the target, and $Y_v$ completes an orthogonal right-handed set. The imaging system is installed in a two-axis gimbal: the outer gimbal rotates around the $Z_b$ axis and represents the azimuth angle $\beta$, with the initial position pointing forward and positive to the right; the inner gimbal rotates around the $Y_v$ axis and represents the pitch angle $\alpha$, with the initial position such that the LOS points down along the $Z_v$ axis, positive when rotating toward the front. The imaging coordinate frame and the transition between the imaging coordinate frame and the aircraft coordinate frame are shown in Figure 3.

2.2.2. The Aircraft Coordinate Frame $B$ ($X_b Y_b Z_b$)

This frame is the standard ARINC frame with its origin shifted to the center of the optical platform. The $X_b$ axis points to the nose of the aircraft, the $Z_b$ axis points to the bottom of the aircraft, and $Y_b$ completes an orthogonal right-handed set. The POS is mounted in this frame, and the roll, pitch and yaw angles $\phi$, $\theta$, $\psi$ rotate around the $X_b$, $Y_b$ and $Z_b$ axes, respectively. The aircraft coordinate frame and the transition between the aircraft coordinate frame and the navigation coordinate frame are shown in Figure 4.

2.2.3. The Navigation Coordinate Frame $P$ ($X_p Y_p Z_p$)

This frame is the standard NED (north, east, down) reference frame and has its origin at the aircraft's center; the $X_p$ axis points to true north, the $Y_p$ axis points east, and the $Z_p$ axis lies along the local geodetic vertical, positive down. The transition between the geodetic coordinate frame and the navigation coordinate frame is shown in Figure 5.

2.2.4. The Geodetic Coordinate Frame $G$ ($X_g Y_g Z_g$)

This frame is defined in WGS-84 and has its origin at the Earth's geometric center. The $X_g$ axis lies in the equatorial plane at the prime meridian, $Z_g$ points north through the polar axis, and $Y_g$ completes an orthogonal right-handed set. The elliptical Earth model, illustrated in Figure 5, can be expressed as:

$$\frac{x_g^2}{R_E^2} + \frac{y_g^2}{R_E^2} + \frac{z_g^2}{R_P^2} = 1$$

where $R_E = 6{,}378{,}137\ \mathrm{m}$ is the semi-major axis and $R_P = 6{,}356{,}752\ \mathrm{m}$ is the semi-minor axis.
The geographical position of a point can be expressed by its latitude, longitude, and geodetic height ($B$, $L$, and $H$). The point in the geodetic coordinate frame can be expressed as:

$$P_g = \begin{bmatrix} x_g \\ y_g \\ z_g \end{bmatrix} = \begin{bmatrix} (R_N + H)\cos B \cos L \\ (R_N + H)\cos B \sin L \\ \left[R_N(1 - e^2) + H\right]\sin B \end{bmatrix}$$

where $e = \sqrt{R_E^2 - R_P^2}/R_E$ is the first eccentricity of the Earth ellipsoid and $R_N = R_E/\sqrt{1 - e^2\sin^2 B}$ is the radius of curvature in the prime vertical.
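As an illustration, the geodetic-to-ECEF conversion above can be sketched in a few lines (a minimal Python sketch, not the flight software; the constants are the WGS-84 values quoted in the text):

```python
import math

# WGS-84 constants as given in the text
R_E = 6378137.0                      # semi-major axis [m]
R_P = 6356752.0                      # semi-minor axis [m]
E2 = (R_E**2 - R_P**2) / R_E**2      # square of the first eccentricity e

def geodetic_to_ecef(B, L, H):
    """Latitude B, longitude L [rad] and geodetic height H [m] -> ECEF [m],
    following the expression for P_g above."""
    R_N = R_E / math.sqrt(1.0 - E2 * math.sin(B) ** 2)  # prime vertical radius
    return (
        (R_N + H) * math.cos(B) * math.cos(L),
        (R_N + H) * math.cos(B) * math.sin(L),
        (R_N * (1.0 - E2) + H) * math.sin(B),
    )
```

A quick sanity check: a point on the equator at zero height maps to $(R_E, 0, 0)$, and a point at the pole to $z = R_P$.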

2.3. Ground Target Geo-Location

When the airborne optoelectronic platform is in its initial position, the azimuth angle $\beta$ and pitch angle $\alpha$ both equal zero and the imaging coordinate frame coincides with the aircraft coordinate frame. The matrix transforming from frame $V$ ($X_v Y_v Z_v$) to frame $B$ ($X_b Y_b Z_b$) can be expressed as:

$$M_{bv} = \begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}$$
The position and orientation system (POS), which is composed of the global positioning system (GPS) and an inertial measurement unit, can accurately measure the position and attitude of the airborne platform. The position information of the airborne optoelectronic platform includes the latitude, longitude and height $(B, L, H)$, and the attitude information includes the roll, pitch and yaw angles $(\phi, \theta, \psi)$. The matrix transforming from frame $B$ ($X_b Y_b Z_b$) to frame $P$ ($X_p Y_p Z_p$) can be expressed as:

$$M_{pb} = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix}$$
The matrix transforming from frame $P$ ($X_p Y_p Z_p$) to frame $G$ ($X_g Y_g Z_g$) can be expressed as:

$$M_{gp} = \begin{bmatrix} \cos L & -\sin L & 0 \\ \sin L & \cos L & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos(-B - 0.5\pi) & 0 & \sin(-B - 0.5\pi) \\ 0 & 1 & 0 \\ -\sin(-B - 0.5\pi) & 0 & \cos(-B - 0.5\pi) \end{bmatrix}$$
and the matrix transforming from the imaging coordinate frame to the geodetic coordinate frame can be expressed as:

$$M_{gv} = M_{gp} \cdot M_{pb} \cdot M_{bv}$$
The distance $R_{ng}$ between the airborne platform and the target is provided by the laser rangefinder. Since the laser beam is parallel with the LOS, the vector from the imaging system to the target can be expressed in the geodetic coordinate frame as:

$$R_g = M_{gv} \cdot \begin{bmatrix} 0 & 0 & R_{ng} \end{bmatrix}^T$$
Then the target position $T_g$ in the geodetic coordinate frame can be expressed as:

$$T_g = P_g + R_g$$

where $P_g$ is a shift of origin from the center of gravity of the aircraft to the origin of the imaging coordinate frame.
According to the elliptical Earth model, the latitude, longitude and geodetic height of the target can be solved following [6]. Latitude is positive in the northern hemisphere and negative in the southern hemisphere, and the target latitude and geodetic height can be solved by the following iteration equations:
$$\begin{cases} R_0 = R_E \\[2pt] H_0 = \sqrt{x_g^2 + y_g^2 + z_g^2} - \sqrt{R_E R_P} \\[2pt] B_0 = \arctan\left[\dfrac{z_g}{\sqrt{x_g^2 + y_g^2}}\left(1 - \dfrac{e^2 R_0}{R_0 + H_0}\right)^{-1}\right] \end{cases} \qquad \begin{cases} R_{k+1} = \dfrac{R_E}{\sqrt{1 - e^2\sin^2 B_k}} \\[2pt] H_{k+1} = \dfrac{\sqrt{x_g^2 + y_g^2}}{\cos B_k} - R_k \\[2pt] B_{k+1} = \arctan\left[\dfrac{z_g}{\sqrt{x_g^2 + y_g^2}}\left(1 - \dfrac{e^2 R_{k+1}}{R_{k+1} + H_{k+1}}\right)^{-1}\right] \end{cases}$$
According to the elliptical Earth model, longitude is positive in the eastern hemisphere and negative in the western hemisphere, and the target longitude can be solved by:

$$L = \begin{cases} l, & x_g > 0 \\ l + \pi, & x_g < 0,\ l < 0 \\ l - \pi, & x_g < 0,\ l > 0 \end{cases} \qquad l = \arctan\frac{y_g}{x_g}$$
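The whole chain above — rotation matrices, the forward geo-location equation and the iterative inverse — can be sketched as follows. This is a minimal illustration; the paper does not spell out the sign conventions of the flight code, so standard right-handed rotation matrices are assumed:

```python
import numpy as np

R_E, R_P = 6378137.0, 6356752.0      # WGS-84 semi-axes [m]
E2 = (R_E**2 - R_P**2) / R_E**2      # first eccentricity squared

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def geolocate(B, L, H, phi, theta, psi, alpha, beta, R_ng):
    """Forward chain: target ECEF position T_g = P_g + M_gv [0 0 R_ng]^T."""
    M_bv = rot_z(beta) @ rot_y(alpha)              # imaging -> aircraft
    M_pb = rot_z(psi) @ rot_y(theta) @ rot_x(phi)  # aircraft -> navigation (NED)
    M_gp = rot_z(L) @ rot_y(-B - 0.5 * np.pi)      # NED -> geodetic (ECEF)
    R_g = M_gp @ M_pb @ M_bv @ np.array([0.0, 0.0, R_ng])
    R_N = R_E / np.sqrt(1.0 - E2 * np.sin(B) ** 2)
    P_g = np.array([(R_N + H) * np.cos(B) * np.cos(L),
                    (R_N + H) * np.cos(B) * np.sin(L),
                    (R_N * (1.0 - E2) + H) * np.sin(B)])
    return P_g + R_g

def ecef_to_geodetic(x, y, z, iters=10):
    """Inverse: iterate latitude B and height H; longitude by quadrant."""
    p = np.hypot(x, y)
    R, H = R_E, np.sqrt(x * x + y * y + z * z) - np.sqrt(R_E * R_P)
    B = np.arctan(z / p / (1.0 - E2 * R / (R + H)))
    for _ in range(iters):
        R = R_E / np.sqrt(1.0 - E2 * np.sin(B) ** 2)
        H = p / np.cos(B) - R
        B = np.arctan(z / p / (1.0 - E2 * R / (R + H)))
    return B, np.arctan2(y, x), H   # arctan2 resolves the hemisphere cases
```

A convenient sanity check for the conventions: with zero laser range, the round trip through `geolocate` and `ecef_to_geodetic` must return the platform's own geodetic coordinates.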

2.4. Ground Target Geolocation Error Analysis and Systematic Error Correction

2.4.1. Analysis of Errors Affecting Geo-Location

Geo-location accuracy is an important measure of the ability of an airborne optoelectronic platform to obtain target geographic location information. Therefore, it is very important to analyze the individual sources of measurement uncertainty in each part of the geo-location process. Sensor measurement errors and installation errors are the two major factors affecting geo-location accuracy. For this platform, the position and attitude errors of the aircraft are mainly derived from the measurement errors of the integrated navigation system, the azimuth and pitch angle errors of the platform from the measurement errors of the encoders, and the laser ranging errors from the measurement errors of the laser rangefinder. These are normally distributed random errors, shown as sensor measurement error in Figure 4, which cannot be eliminated but only estimated with filters. Here we focus on the systematic error that affects the positioning accuracy, shown in Figure 6 as installation error.
(1)
POS installation error
In order to reduce the impact of aircraft vibration on target positioning, an independent POS can be installed on the base of the airborne optoelectronic platform to directly obtain the position and attitude information of the platform. During installation of the POS, calibration is required to ensure that the aircraft coordinate frame coincides with the navigation coordinate frame in the inertial state. After calibration, the error in the pitch and roll directions is generally about 1 mrad, and the error in the heading direction is about 1.5 mrad. We use $\delta\psi$, $\delta\theta$ and $\delta\phi$ to represent the installation errors in the heading, pitch and roll directions, respectively. Since these errors are all small, the perturbed transformation matrix between the aircraft coordinate frame and the navigation coordinate frame can be approximated by:

$$M'_{pb} = \begin{bmatrix} 1 & -\delta\psi & \delta\theta \\ \delta\psi & 1 & -\delta\phi \\ -\delta\theta & \delta\phi & 1 \end{bmatrix} M_{pb}$$
(2)
Payload installation error
The optical system of the airborne optoelectronic platform is installed in a two-axis frame. Ideally, when the pitch and azimuth angles of the platform are zero, the imaging coordinate frame should coincide with the aircraft frame: the pitch axis of the imaging coordinate frame should be aligned with the pitch axis of the aircraft coordinate frame, and the azimuth axis with the yaw axis of the aircraft coordinate frame. Errors generated during assembly and installation cause the yaw and pitch directions of the two frames to be inconsistent, resulting in an error in the angle measurement of the imaging coordinate frame. Under normal circumstances, the installation error of this frame is about 1.5 mrad. We use $\delta\alpha$ and $\delta\beta$ to represent the installation errors of the pitch and azimuth axes, respectively. Since these two error angles are both small, the transformation matrix from the imaging coordinate frame to the aircraft coordinate frame can be approximated by:

$$M'_{bv} = \begin{bmatrix} 1 & -\delta\beta & \delta\alpha \\ \delta\beta & 1 & 0 \\ -\delta\alpha & 0 & 1 \end{bmatrix} M_{bv}$$
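A quick numerical check (illustrative only) confirms that the first-order approximation is harmless at the quoted milliradian magnitudes: the exact rotation built from the three small POS angles and its linearized form agree to roughly $10^{-6}$:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# POS installation errors at the magnitudes quoted in the text (~1-1.5 mrad)
d_psi, d_theta, d_phi = 1.5e-3, 1.0e-3, 1.0e-3

exact = rot_z(d_psi) @ rot_y(d_theta) @ rot_x(d_phi)   # full rotation
approx = np.array([[1.0, -d_psi, d_theta],             # first-order form
                   [d_psi, 1.0, -d_phi],
                   [-d_theta, d_phi, 1.0]])

max_dev = np.abs(exact - approx).max()   # second-order residual, ~1e-6
```

The residual is on the order of the products of the small angles, which justifies dropping the higher-order infinitesimals in the derivation that follows.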

2.4.2. Systematic Error Correction Using Control Points

Considering the installation error of the integrated navigation system and the installation error of the imaging coordinate frame, the transformation matrices $M'_{pb}$ and $M'_{bv}$ are used instead of $M_{pb}$ and $M_{bv}$, so Equation (6) can be expressed as follows:

$$T_g - P_g = M_{gp} \cdot M'_{pb} \cdot M'_{bv} \begin{bmatrix} 0 \\ 0 \\ R_{ng} \end{bmatrix}$$
$M'_{pb} \cdot M'_{bv}$ can be expressed as:

$$M'_{pb} \cdot M'_{bv} = \begin{bmatrix} 1 & -\delta\psi & \delta\theta \\ \delta\psi & 1 & -\delta\phi \\ -\delta\theta & \delta\phi & 1 \end{bmatrix} M_{pb} \begin{bmatrix} 1 & -\delta\beta & \delta\alpha \\ \delta\beta & 1 & 0 \\ -\delta\alpha & 0 & 1 \end{bmatrix} M_{bv}$$
For convenience of presentation, we define:

$$M_{gp}^{-1} \cdot (T_g - P_g) = \begin{bmatrix} n_1 & n_2 & n_3 \end{bmatrix}^T$$

$$M_{bv} \begin{bmatrix} 0 & 0 & R_{ng} \end{bmatrix}^T = \begin{bmatrix} m_1 & m_2 & m_3 \end{bmatrix}^T$$

$$M_{pb} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
Since $\delta\psi$, $\delta\theta$, $\delta\phi$, $\delta\beta$ and $\delta\alpha$ are small quantities, the second-order small quantities can be ignored in the calculation, giving:

$$\begin{bmatrix} n_1 \\ n_2 \\ n_3 \end{bmatrix} = \begin{bmatrix} a_{11} - a_{21}\delta\psi + a_{31}\delta\theta + a_{12}\delta\beta - a_{13}\delta\alpha & a_{12} - a_{22}\delta\psi + a_{32}\delta\theta - a_{11}\delta\beta & a_{13} - a_{23}\delta\psi + a_{33}\delta\theta + a_{11}\delta\alpha \\ a_{21} + a_{11}\delta\psi - a_{31}\delta\phi + a_{22}\delta\beta - a_{23}\delta\alpha & a_{22} + a_{12}\delta\psi - a_{32}\delta\phi - a_{21}\delta\beta & a_{23} + a_{13}\delta\psi - a_{33}\delta\phi + a_{21}\delta\alpha \\ a_{31} - a_{11}\delta\theta + a_{21}\delta\phi + a_{32}\delta\beta - a_{33}\delta\alpha & a_{32} - a_{12}\delta\theta + a_{22}\delta\phi - a_{31}\delta\beta & a_{33} - a_{13}\delta\theta + a_{23}\delta\phi + a_{31}\delta\alpha \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix}$$
and the equation can be rearranged as:

$$X \begin{bmatrix} \delta\psi & \delta\phi & \delta\theta & \delta\alpha & \delta\beta \end{bmatrix}^T = y$$

where:

$$X = \begin{bmatrix} -(a_{21}m_1 + a_{22}m_2 + a_{23}m_3) & a_{11}m_1 + a_{12}m_2 + a_{13}m_3 & 0 \\ 0 & -(a_{31}m_1 + a_{32}m_2 + a_{33}m_3) & a_{21}m_1 + a_{22}m_2 + a_{23}m_3 \\ a_{31}m_1 + a_{32}m_2 + a_{33}m_3 & 0 & -(a_{11}m_1 + a_{12}m_2 + a_{13}m_3) \\ a_{11}m_3 - a_{13}m_1 & a_{21}m_3 - a_{23}m_1 & a_{31}m_3 - a_{33}m_1 \\ a_{12}m_1 - a_{11}m_2 & a_{22}m_1 - a_{21}m_2 & a_{32}m_1 - a_{31}m_2 \end{bmatrix}^T$$

$$y = \begin{bmatrix} n_1 - a_{11}m_1 - a_{12}m_2 - a_{13}m_3 \\ n_2 - a_{21}m_1 - a_{22}m_2 - a_{23}m_3 \\ n_3 - a_{31}m_1 - a_{32}m_2 - a_{33}m_3 \end{bmatrix}$$
From Equation (19), one measurement of a control point produces three equations, while there are five unknown installation errors. Therefore, a series of measurements can be performed on the control points, and the least squares method can be used to estimate the systematic error. If $n$ measurements are performed on the control points, $X$ becomes a $3n \times 5$ matrix and $y$ a column vector of dimension $3n$. The least squares method then estimates the systematic error as:
$$\begin{bmatrix} \delta\psi & \delta\phi & \delta\theta & \delta\alpha & \delta\beta \end{bmatrix}^T = (X^T X)^{-1} X^T y$$
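The least-squares estimation can be sketched as below. This is a simulation sketch, not the paper's processing code: it fabricates measurements with assumed installation errors and recovers them. The generator matrices encode the first-order error terms derived above, and `numpy.linalg.lstsq` performs the pseudo-inverse solve:

```python
import numpy as np

rng = np.random.default_rng(1)

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Generators of the first-order error terms: dn = K_p A m and A K_v m
K_psi   = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])
K_phi   = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
K_theta = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
K_alpha = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
K_beta  = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])

# Assumed true installation errors [rad]: dpsi, dphi, dtheta, dalpha, dbeta
true_err = np.array([3.0e-3, 2.0e-3, -0.5e-3, -2.0e-3, 1.0e-3])
dpsi, dphi, dtheta, dalpha, dbeta = true_err

rows_X, rows_y = [], []
for _ in range(100):
    # Random attitude A = M_pb and random LOS vector m = M_bv [0 0 R_ng]^T
    A = rot_z(rng.uniform(-np.pi, np.pi)) @ rot_y(rng.uniform(-0.3, 0.3)) \
        @ rot_x(rng.uniform(-0.3, 0.3))
    m = rot_z(rng.uniform(-np.pi, np.pi)) @ rot_y(rng.uniform(0.5, 1.4)) \
        @ np.array([0.0, 0.0, rng.uniform(3e3, 8e3)])
    Ep = np.eye(3) + dpsi * K_psi + dphi * K_phi + dtheta * K_theta
    Ev = np.eye(3) + dalpha * K_alpha + dbeta * K_beta
    n = Ep @ A @ Ev @ m                       # "measured" left-hand side
    Xi = np.column_stack([K_psi @ A @ m, K_phi @ A @ m, K_theta @ A @ m,
                          A @ K_alpha @ m, A @ K_beta @ m])
    rows_X.append(Xi)
    rows_y.append(n - A @ m)

X = np.vstack(rows_X)               # 3n x 5
y = np.concatenate(rows_y)          # 3n
est, *_ = np.linalg.lstsq(X, y, rcond=None)   # recovered installation errors
```

Note that with poses drawn from a single attitude the $\delta\psi$ and $\delta\beta$ columns become nearly collinear (the axis coupling discussed in Section 4), so varied positions and attitudes are essential for a well-conditioned solve.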

3. Results

3.1. Simulation of System Error Correction

3.1.1. Monte Carlo Analysis Method

The Monte Carlo method, also known as the random simulation method, has been applied to geo-location problems by many researchers [6,13,28]. Simulation data are generated by computer and used in place of actual test data, which are difficult to obtain.
The error analysis model is established as:

$$\Delta y = f(x_1 + \Delta x_1, x_2 + \Delta x_2, \ldots, x_n + \Delta x_n) - f(x_1, x_2, \ldots, x_n)$$

where $\Delta y$ is the error of $y$ and $\Delta x_n$ is the error of $x_n$.
The random variable error $\Delta x_n$ obeys the normal distribution, and the error model can be described as:

$$\Delta x_n = R_i \sigma_{x_n}, \quad i = 1, 2, \ldots, N$$

where $R_i$ is a pseudorandom number obeying the standard normal distribution, $N$ is the size of the sample space, and $\sigma_{x_n}$ is the measurement standard deviation of parameter $x_n$.
The nominal value $y$ of the error analysis is calculated from the true value of each parameter $x_1, x_2, \ldots, x_n$. Random error sequences $\Delta x_1, \Delta x_2, \ldots, \Delta x_n$ are then added to each parameter, the function value error $\Delta y$ is calculated according to Equation (23), and the resulting errors are analyzed statistically.
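This procedure can be sketched in a few lines of Python. The scalar model `f`, the nominal values and the standard deviations below are illustrative assumptions standing in for the full geo-location function:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scalar measurement model standing in for the geo-location
# function: y = f(range, angle, offset). Replace with the real model.
def f(x):
    return x[..., 0] * np.cos(x[..., 1]) + x[..., 2]

x_true = np.array([5000.0, 0.3, 120.0])   # nominal (true) parameter values
sigma = np.array([5.0, 0.001, 0.5])       # per-parameter standard deviations
N = 100_000                               # sample-space size

dx = rng.standard_normal((N, 3)) * sigma  # Delta x_n = R_i * sigma_{x_n}
dy = f(x_true + dx) - f(x_true)           # propagated function-value error

rmse = np.sqrt(np.mean(dy ** 2))          # statistic of interest
```

For this particular model the sampled RMSE agrees with first-order analytic propagation of the three standard deviations, which is a useful cross-check of the simulation setup.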

3.1.2. Systematic Error Correction and Results

Assume that the installation error of the integrated navigation system is 0.30° in the heading direction, −0.05° in the pitch direction and 0.20° in the roll direction; that the installation error of the imaging coordinate frame is −0.20° for the azimuth angle and 0.10° for the pitch angle; and that the control point is at (33.980849° N, 107.523239° E, 3132.10 m). In the simulation, the aircraft takes multiple imaging measurements of the control point from different positions and attitudes to recover the installation error. The systematic error and measurement errors are shown in Table 1.
After performing 256 measurements on the control point, we obtained the installation errors shown in Figure 7.
From Figure 7, it is not difficult to see that after 100 imaging measurements on the control point, the five installation errors are calculated as follows: the integrated navigation system heading direction is 0.349°, the pitch direction is −0.047° and the roll direction is 0.199°; the azimuth angle of the imaging coordinate frame is −0.244° and the pitch angle is 0.095°. Compared with the actual installation errors, the residuals are 0.049°, −0.003°, 0.001°, 0.044° and −0.005°, which are only about 1/6 of the original installation errors. After 100 measurements, the calculated installation errors no longer change significantly. Therefore, in practical applications, more than 100 measurements on the control points can be considered sufficient to estimate the installation error of the platform.
Using the measurement error and systematic error data in Table 1, together with the control point and aircraft positions, the geo-location results are shown in Figure 8a,c,e. The root mean square error of the geo-location in this case is 53.28 m, and the mean positioning error is (0.0005126°, 0.0001104°, 4.67 m), equivalent to 22.13 m. Due to the installation error, the mean error of the target positioning result is not zero.
After the installation error is corrected with the systematic error obtained above, the geo-location data are shown in Figure 8b,d,f. The root mean square error of the positioning is now 13.61 m, and the mean geo-location error is (0.0000076°, 0.0000011°, −0.03 m), which corresponds to 6.37 m. The comparison before and after correction is shown in Table 2. After the systematic errors were corrected, the root mean square error was reduced to 1/3 of its original value, and the mean geo-location error moved closer to zero. Methods such as Kalman filtering would be more accurate for geo-location after the systematic errors are corrected.

3.2. Flight Experiments and Results

In order to verify the method described above, a UAV equipped with a POS and the airborne optoelectronic platform was used for flight experiments.

3.2.1. Design of Flight Experiments

Four target points were set as shown in Table 3. The control point targets were laid on the ground and their geographical positions were measured by a DGPS device; after post-processing, the geo-location error of each target was <0.2 m, much smaller than the geo-location error analyzed in Section 3.1. Thus, the target geographical positions can be treated as ground truth.
The UAV's relative flight heights were set to 2500, 3000 and 3500 m. Five flight routes were set to measure those targets from multiple positions and angles, as shown in Table 4. Routes L1, L2 and L3 were used for error solving, and routes L4 and L5 were used to obtain samples for validation. Routes L1, L3 and L5 have the same start and end waypoints with different relative flight heights, and routes L2 and L4 likewise share start and end waypoints at different relative flight heights. The flight routes and target points are shown in Figure 9.

3.2.2. Systematic Error Solving

Figure 10a shows the plane flying on the left side of the target (route L1) and Figure 10b shows the plane flying along the target (route L2). Table 5 shows the samples of flight routes and the relevant target points. A total of 260 sets of measurement data were obtained. Using the method described in Section 2.4, the 260 sets of data were used to solve for the best estimate of the systematic error, and the computation result is shown in Figure 11.
From Figure 11, it is not difficult to see that after 200 imaging measurements on the control points, the five installation errors are calculated as follows: the integrated navigation system heading direction is 0.206°, the pitch direction is −0.198° and the roll direction is −0.098°; the azimuth angle of the imaging coordinate frame is 0.061° and the pitch angle is 0.097°.
The geo-location error before and after systematic error correction of sample 0 is shown in Figure 12, where Figure 12a–c are box plots of latitude, longitude and altitude, respectively. After correction, the root mean square errors of latitude, longitude and altitude are reduced, and the mean geo-location error is closer to zero. Table 6 is more intuitive: the root mean square error of target positioning is reduced from 45.65 m to 12.62 m, and the mean error from 16.60 m to 1.24 m. The error correction effectively improves the geo-location accuracy of the system.

3.2.3. The Flight Experiments Validation of the Systematic Error Correction

Substituting the systematic error into the sample data for verification, we obtain the comparison of geo-location before and after correction. The geo-location accuracy of the verification data is significantly improved. Figure 13 shows the geo-location errors of sample data S1–S5; the geo-location errors reduce from about 55, 45, 80 and 26 m to 10, 6, 8 and 9 m. Figure 14 shows a more direct view of the improvement in geo-location accuracy: the geo-location results and the target points are plotted in the geodetic frame, and after correction the geo-location results are closer to the target points.

4. Discussion

As the simulation analysis and flight experiment verification above show, the method proposed in this paper can effectively solve for the systematic error. Correcting this error not only reduces the root-mean-square value of the geo-location error but also brings the mean error closer to 0, laying the groundwork for future filtering. Two practical points deserve discussion:
(1)
Selection of sample points
In order to avoid singularity of the matrix $X^T X$ in Equation (22), samples in a variety of states should be selected, such as the aircraft at different flying heights, positions and attitudes, so that the installation error can be solved more stably. Since different platforms carry different laser rangefinders, it is difficult to give one suitable calibration flight path. However, based on simulation and flight experiments, it is recommended that the UAV fly on both the left and right sides of the target and take samples when the azimuth angle is near 45, 135, 225 and 325 deg, respectively.
(2)
Solving the azimuth and heading angle errors
It can be seen from the systematic error iteration graph in Figure 8 that the roll and pitch installation errors of the POS and the pitch angle error of the platform generally reach stable results after about 50 measurements. However, the POS heading installation error and the azimuth angle error of the platform require about 200 measurements to stabilize. This is because, when the platform's installation errors are small, the azimuth angle and the heading angle share the same axis, so the two couple during solving. The sum of the errors of the two axes stays at a fixed value, as shown in Figure 10. Calculations show that if the sum of the two errors remains fixed, the final geo-location accuracy is not affected.

5. Conclusions

This paper proposes a systematic error correction method based on flight data. First, the geo-location model was established based on the kinematic characteristics of the airborne optoelectronic platform. Then, the error items that affect the geo-location accuracy were analyzed. The installation error between the platform and the POS was considered, and the installation error of the platform's pitch and azimuth axes was introduced. After ignoring higher-order infinitesimals, a least-squares form of the systematic error is obtained, so the systematic error can be estimated from a series of measurements. Both Monte Carlo simulation analysis and in-flight experiment results show that this method can effectively obtain the systematic error. Correction not only reduces the root-mean-square value of the geo-location error but also brings the mean error closer to 0, preparing the ground for further filtering.
The method proposed here focuses mainly on systematic error correction for airborne optoelectronic platforms with laser rangefinders, which have multiple rotation axes such as azimuth and pitch. Through this method we obtain not only the misalignment between the platform and the body coordinate system established by the inertial instruments, but also the systematic errors of the rotation axes inside the platform. For platforms with more rotation axes, one only needs to add more systematic-error items and expand the equations accordingly. This method differs from common boresight calibration in two respects: first, it introduces more calibration items than just the misalignment between the imaging coordinate system and the body coordinate system of the inertial instruments; second, it applies to platforms whose laser rangefinders measure the distance between the imaging coordinate system and the target, rather than deriving boresight angles from images.
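Extending the linearized least-squares system to a platform with an additional rotation axis amounts to appending one column to the design matrix and one entry to the error vector. The sketch below uses randomly generated sensitivities and the systematic-error values of Table 1; the extra-axis error of 0.15° is a hypothetical value for illustration:

```python
import numpy as np

# Linearized model: residual r ≈ X @ e, where each column of X is the
# sensitivity to one systematic-error item (values here are illustrative).
rng = np.random.default_rng(0)
n_samples = 200
X_base = rng.normal(size=(n_samples, 5))         # 5 items: POS roll/pitch/yaw,
                                                 # platform pitch/azimuth
e_true = np.array([0.2, -0.05, 0.3, 0.1, -0.2])  # deg, as in Table 1

# A platform with one more rotation axis contributes one more column:
X_ext = np.hstack([X_base, rng.normal(size=(n_samples, 1))])
e_ext = np.append(e_true, 0.15)                  # hypothetical extra-axis error

# Noisy residuals observed over many samples, then a standard LS solve:
r = X_ext @ e_ext + rng.normal(scale=1e-3, size=n_samples)
e_hat, *_ = np.linalg.lstsq(X_ext, r, rcond=None)
print(np.round(e_hat, 3))  # recovers all six error items
```

The solver itself is unchanged; only the dimensions of the design matrix and error vector grow with the number of axes, which is what makes the extension straightforward.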
This method can be widely used in the systematic error correction and geo-location of relevant photoelectric equipment. Our next step will focus on filtering and locating both stationary and moving targets in real time after the systematic errors have been corrected.

Author Contributions

H.S., H.J., F.X. and J.L. initiated the research. H.S. designed the experiments and wrote the paper. L.W. contributed analysis tools. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We thank the Academic Editor for carefully revising the language and grammatical structures of this article.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The geo-location system introduced in this article (The top left figure shows the ground control station and the figure on the bottom left shows the inside of the station; the top right figure shows the UAV with the airborne optoelectronic platform, and the bottom right figure shows the detail of the airborne optoelectronic platform).
Figure 2. The geo-location process of system.
Figure 3. The imaging coordinate frame and aircraft coordinate frame.
Figure 4. The aircraft coordinate frame and navigation coordinate frame.
Figure 5. The geodetic coordinate frame and the navigation coordinate frame.
Figure 6. Error items that affect the accuracy of target positioning.
Figure 7. The iteration curve of the installing errors for simulation data.
Figure 8. Simulation result of geo-location before and after correction: (a) Latitude error of geo-location before correction; (b) Latitude error of geo-location after correction; (c) Longitude error of geo-location before correction; (d) Longitude error of geo-location after correction; (e) Altitude error of geo-location before correction; (f) Altitude error of geo-location after correction; (g) Box plot of latitude error of geo-location before and after correction; (h) Box plot of longitude error of geo-location before and after correction; (i) Box plot of altitude error of geo-location before and after correction; (j) Error of geo-location before and after correction.
Figure 9. The flight routes and target points of experiments.
Figure 10. Target positioning flight experiment: (a) The plane flew on the left side of the target; (b) The plane flew on the right side of the target.
Figure 11. The iteration curve of the installing errors for flight data.
Figure 12. The computation results of the installing errors: (a) Box plot of latitude error of geo-location before and after correction; (b) Box plot of longitude error of geo-location before and after correction; (c) Box plot of altitude error of geo-location before and after correction; (d) Error of geo-location before and after correction.
Figure 13. The computation results of the installing errors.
Figure 14. The computation results of the installing errors.
Table 1. Systematic error and random measurement error in the geo-location.

| Error Type | Name of Error Variable | Symbol | Error Value |
|---|---|---|---|
| Systematic error | POS installation error | Roll | 0.2° |
| | | Pitch | −0.05° |
| | | Yaw | 0.3° |
| | Payload installation error | Pitch | 0.1° |
| | | Azimuth | −0.2° |
| Random error | Platform position | Latitude | 0.0001° (10 m) |
| | | Longitude | 0.00012° (10 m) |
| | | Altitude | 10 m |
| | Platform attitude | Roll | 0.02° |
| | | Pitch | 0.02° |
| | | Yaw | 0.05° |
| | Payload angle | Pitch | 0.027° |
| | | Azimuth | 0.027° |
| | Laser range | Laser | 5 m |
Table 2. The comparison of geo-location between before and after correction.

| Conditions | Items | Longitude (deg) | Latitude (deg) | Altitude (m) | Error (m) |
|---|---|---|---|---|---|
| before correction | sigma | 0.0002730 | 0.0001810 | 25.68 | 53.28 |
| | mean | 0.0005126 | 0.0001104 | 4.67 | 22.13 |
| after correction | sigma | 0.0001165 | 0.0000836 | 7.44 | 13.61 |
| | mean | 0.0000076 | 0.0000011 | −0.33 | 6.37 |
Table 3. Position of target points.

| Target ID | Position (B, L, H) | Usage |
|---|---|---|
| P1 | (124.5797389 E, 44.9517639 N, 155 m) | For errors solving |
| P2 | (124.5818472 E, 44.9517556 N, 155 m) | For validation |
| P3 | (124.5809611 E, 44.9523944 N, 155 m) | For validation |
| P4 | (124.5810250 E, 44.9516056 N, 155 m) | For validation |
Table 4. Position of flight routes.

| Route ID | Start Waypoint (B, L, H) | End Waypoint (B, L, H) |
|---|---|---|
| L1 | (124.5152661 E, 44.9594875 N, 2500) | (124.6667839 E, 44.9594875 N, 2500) |
| L2 | (124.5152661 E, 44.9406178 N, 3000) | (124.6667839 E, 44.9406178 N, 3000) |
| L3 | (124.5152661 E, 44.9594875 N, 3500) | (124.6667839 E, 44.9594875 N, 3500) |
| L4 | (124.5152661 E, 44.9406178 N, 3000) | (124.6667839 E, 44.9406178 N, 3000) |
| L5 | (124.5152661 E, 44.9594875 N, 3000) | (124.6667839 E, 44.9594875 N, 3000) |
Table 5. Samples of flight routes and relevant target points.

| Sample ID | Route ID | Target ID | Usage | Number of Samples |
|---|---|---|---|---|
| S0 | L1, L2, L3 | P3 | For errors solving | 260 |
| S1 | L5 | P1 | For validation | 100 |
| S2 | L5 | P2 | For validation | 100 |
| S3 | L4 | P3 | For validation | 50 |
| S4 | L4 | P4 | For validation | 100 |
| S5 | L5 | P1 | For validation | 100 |
Table 6. The comparison of geo-location between before and after correction.

| Conditions | Items | Longitude (deg) | Latitude (deg) | Altitude (m) | Error (m) |
|---|---|---|---|---|---|
| before correction | sigma | 0.0003453 | 0.0002506 | 25.22 | 45.65 |
| | mean | 0.0001412 | 0.0001395 | −5.95 | 16.60 |
| after correction | sigma | 0.0001021 | 0.0000745 | 7.17 | 12.62 |
| | mean | 0.0000044 | 0.0000010 | 0.55 | 1.24 |