Abstract
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods which can improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude was 1140 m. The multi-target localization results are within the range of allowable error. After applying the lens distortion correction method to a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
1. Introduction
Real-time multi-target localization plays an essential role in disaster emergency rescue, border security and other applications. Over the past two decades, considerable research effort has been devoted to multi-target localization. UAV electro-optical stabilized imaging systems are equipped with many kinds of sensors, including visible-light cameras, infrared thermal imagers, laser range finders and angle sensors. Target localization requires measuring the attitude of the UAV, the attitude of the electro-optical stabilized imaging system and the distance between the electro-optical stabilized imaging system and the target.
Target localization methods from UAVs fall into two categories: target localization using a group of UAVs [1,2,3,4,5] and target localization using a single UAV [6,7,8,9,10]. This research aims to improve the effectiveness and efficiency of target localization from a single UAV. In particular, it proposes a new hybrid target localization scheme which integrates both zoom lens distortion correction and an RLS filtering method. The proposed scheme has many unique features designed to geo-locate targets rapidly.
Previous studies on geo-locating targets from a fixed-wing UAV have several limitations. Deming [1] described a probabilistic technique for performing multiple target detection and localization based on data from a swarm of flying optical sensors. Minaeian [2] described a vision-based crowd detection and Geographic Information System (GIS) localization algorithm for a cooperative team of one UAV and a number of unmanned ground vehicles (UGVs). Morbidi [3] described an active target-tracking strategy to deploy a team of UAVs along paths that minimize the uncertainty about the position of a moving target. Qu [4] described a multiple-UAV cooperative localization method using azimuth angle information shared between the UAVs. Kwon [5] described a robust, improved mobile target localization method which incorporates the Out-Of-Order Sigma Point Kalman Filter (O3SPKF) technique.
In [1,2,3,4,5], target location methods based on data fusion technology have to use a group of UAVs. Target localization using a group of UAVs has some issues, including the high computational complexity of data association, complexity of UAV flight plans, difficulties in efficient data communication between UAVs and high maintenance costs due to the use of multiple UAVs. This paper presents a method for determining the location of objects using a gimbaled EO camera on board a fixed-wing UAV. We focus on geo-locating targets using a single fixed-wing UAV due to the low maintenance costs. A single fixed-wing UAV (as opposed to a rotary-wing aircraft) has unique benefits including adaptability to adverse weather, good endurance and high fuel efficiency.
In [6], Yan used the absolute height above sea level of a UAV to geo-locate targets. In contrast, our system focuses on geo-locating targets in the video stream and does not require the absolute height of the UAV above sea level. In [7], Ha used the scale invariant feature transform (SIFT) to extract feature points of the same target in different frames and calculated the relative height between the target and the UAV by three-dimensional reconstruction. The location accuracy depends on the accuracy of this three-dimensional reconstruction, so the method requires a large amount of computation. In contrast, the accuracy of our system is almost the same as that of [7], but our system has a great advantage in reduced computational complexity, so it can geo-locate 50 targets at the same time and improve the efficiency of multi-target localization. This is very important in military reconnaissance and disaster monitoring applications which require good real-time performance.
In [8], Barber introduced a system for vision-based target geo-localization from a fixed-wing micro air vehicle; in flight tests, the UAV geo-locates a stationary target while orbiting it. In [9], the UAV flies in an orbit in order to improve the geo-location accuracy. In [10], the authors assume that the UAV's altitude above the target is known and the target's altitude is obtained from a geo-referenced database made available by the Perspective View Nascent Technology (PVNT) method. In [11], target location needs an accurate geo-referenced terrain database. In [12], all information collected by an aerial camera is accurately geo-located through registration with pre-existing geo-referenced imagery. In contrast, our system focuses on geo-locating a specific object in the video stream and does not require any pre-existing geo-referenced imagery.
In all the above references, the UAVs are equipped with fixed-focal-length lenses, and the authors do not take into account the effect of zoom lens distortion on multi-target localization. Many electro-optical stabilized imaging systems are equipped with zoom lenses, whose focal length is adjusted to track targets at different distances during flight; the zoom lens distortion varies with the changing focal length. Real-time zoom lens distortion cannot be corrected with conventional calibration methods because a large amount of transformation calculation would have to be repeated whenever the focal length is changed.
The primary contributions of this paper are: (1) the accuracy of multi-target localization is improved by combining a real-time zoom lens distortion correction method and an RLS filtering method on embedded hardware (a multi-target geo-location and tracking circuit board); (2) the UAV geo-locates targets using this embedded hardware in real time without orbiting the targets; (3) 50 targets can be located at the same time using only one UAV; (4) the UAV can geo-locate targets without any pre-existing geo-referenced imagery or terrain database; (5) the circuit board is small and can therefore be applied to many kinds of small UAVs; (6) multi-target localization and tracking techniques are combined, so we can geo-locate multiple moving targets in real time and obtain target motion parameters such as velocity and trajectory. This is very important for UAVs performing reconnaissance and attack missions.
The rest of the paper is organized as follows: Section 2 briefly presents the overall framework of the multi-target localization system. Section 3.1 presents the reference frames and transformations required for the multi-target localization system. Section 3.2 presents our multi-target geo-location model. Section 4 presents the methods to improve the accuracy of multi-target localization: Section 4.1 the distortion correction method and Section 4.2 the RLS filter method. Section 5 presents the results of multi-target localization for aerial images captured during a flight test and evaluates their accuracy. Section 6 presents the conclusions.
2. Overall Framework
The real-time multi-target geo-location algorithm in this paper is programmed and implemented on a multi-target geo-location and tracking circuit board (model: THX-IMAGE-PROC-02, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China, see Figure 1a) with the TMS320DM642 (Texas Instruments Incorporated, Dallas, TX, USA) at a 720 MHz clock rate, 32-bit instructions/cycle and 1 GB of double data rate synchronous dynamic random access memory (DDR SDRAM). This circuit board also performs the proposed zoom lens distortion correction and the RLS filtering in real time. The multi-target geo-location and tracking circuit board is mounted in an electro-optical stabilized imaging system (see Figure 1b). This aerial electro-optical stabilized imaging system consists of a visible-light camera, a laser range finder, an inertial measurement unit (IMU), a global positioning system (GPS), and a photoelectric encoder. All of them are mounted on the same gimbal, so they rotate together about every axis.
Figure 1.
(a) Multi-target geo-location and tracking circuit board; (b) Electro-optical stabilized imaging system. The arrows in Figure 1b indicate the installation locations of the main sensors in the electro-optical stabilized imaging system.
The electro-optical stabilized imaging system is mounted on the UAV (model: Changguang 1, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China) to stabilize the video and eliminate jitter caused by UAV motion, thereby greatly reducing the impact of external disturbances.
The UAV system incorporates the electro-optical stabilized imaging system, the UAV, a data transmission module and a ground station, as shown in Figure 2. In the traditional target geo-location algorithms [1,2,3,4,5,6,7,8,9,10,11,12], the image and UAV attitude information are transmitted to a ground station, and the target geo-location is calculated on a computer there. However, the data transmission module sends data in a time-division mode, so the image and the UAV attitude information are transmitted at different times from the UAV to the ground station, and it is not guaranteed that they will be received at the same time. Therefore, the traditional target geo-location algorithm running on the ground station computer has poor real-time ability and unreliable target geo-location accuracy.
Figure 2.
UAV system architecture.
To overcome the shortcomings of traditional target geo-location algorithms such as algorithm complexity, unreliable geo-location accuracy and poor real-time ability, in this paper, the target geo-location algorithm is implemented on a multi-target geo-location and tracking circuit board on the UAV in real-time. Real-time ability is very important for urgent response in applications such as military reconnaissance and disaster monitoring.
The overall framework of the multi-target geo-location method is shown in Figure 3. The detailed workflow of the abovementioned multi-target geo-location method is as follows: the UAV searches for the ground targets, which are selected by an operator at the ground station. The coordinates of the multiple targets in the image are transmitted to the UAV through the data transmission module. Then, all the selected targets are tracked automatically by the multi-target geo-location and tracking circuit board using the improved tracking method based on [13]. The electro-optical stabilized imaging system locks the main target at the field of view (FOV) center. Other targets in the FOV are referred to as sub-targets. The electro-optical stabilized imaging system measures the distance between the main target and the UAV using the laser range finder.
Figure 3.
The overall framework of the multi-target geo-location method.
In order to ensure that the image, UAV attitude information, electro-optical stabilized imaging system’s azimuth and elevation angle, laser range finder value, and camera focal length are obtained at the same time, the frame synchronization signal of the camera is used as the external trigger signal for data acquisition by the above sensors, so we don’t need to implement sensor data interpolation algorithms in the system except for the GPS data. The UAV coordinates interpolation algorithm is shown in Equations (35) and (36).
The multi-target geo-location and tracking circuit board computes the multi-target geo-location after lens distortion correction in real time. The board then applies the moving target detection algorithm [14,15,16,17,18] to the tracked targets. If a tracked target is stationary, the circuit board uses the RLS filter to improve the target geo-location accuracy. The multi-target geo-location results are superimposed on each frame in the UAV and downlinked to a portable image receiver and the ground station.
This research aims to address the issues of real-time multi-target localization in UAVs by developing a hybrid localization model. In detail, the proposed scheme integrates the following improvements:
- (a)
- The multi-target localization accuracy is improved due to the combination of the zoom lens distortion correction method and the RLS filtering method. A real-time zoom lens distortion correction method is implemented on the circuit board in real time. In this paper, we analyse the effect of lens distortion on target geo-location accuracy. Many electro-optical stabilized imaging systems are equipped with zoom lenses. The focal length of a zoom lens can be adjusted to track targets at different distances during the flight. The zoom lens distortion varies with the changing focal length. Real-time distortion correction of a zoom lens is impossible with conventional calibration methods because the tedious calibration process would have to be repeated whenever the focal length changes.
- (b)
- The target geo-location algorithm is implemented on a circuit board in real time. The circuit board is very small and can therefore be applied to many kinds of small UAVs. The target geo-location algorithm has low computational complexity and good real-time performance. The UAV can geo-locate targets without pre-existing geo-referenced imagery, terrain databases or the relative height between the UAV and the targets, and it can do so using the embedded hardware in real time without orbiting the targets.
- (c)
- The multi-target geo-location and tracking circuit board uses the moving target detection algorithm [14,15,16,17,18] on the tracked targets. If a tracked target is stationary, the circuit board uses the RLS filter to automatically improve the target geo-location accuracy.
- (d)
- The multi-target localization, target detection and tracking techniques are combined. Therefore, we can geo-locate multiple moving targets in real-time and obtain target motion parameters such as velocity and trajectory. This is very important for UAVs performing reconnaissance and attack missions.
The real output rate of the geo-location results is 25 Hz. The reasons are as follows:
- (a)
- The data acquisition frequency of all the sensors is 25 Hz: the visible light camera's frame rate is 25 Hz, and the frame synchronization signal of the camera is used as the external trigger signal for all sensors except GPS (the UAV coordinate interpolation algorithm is shown in Equations (35) and (36)).
- (b)
- Lens distortion correction is implemented in real-time, and the output rate of target location results after the lens distortion correction is 25 Hz.
- (c)
- When a new stationary target must be located, the RLS algorithm needs 3–5 s to converge to a stable value (within these 5 s, lens distortion correction runs in real time and the output rate is 25 Hz). After 5 s, the geo-location errors of the target have converged, and we immediately obtain a more accurate location of this stationary target (it is no longer necessary to run RLS). The output rate is likewise 25 Hz.
Our geo-location algorithm can geo-locate at least 50 targets simultaneously. The reasons are as follows:
- (a)
- For a moving target we only use lens distortion correction to improve the target geo-location accuracy. Calculating the geo-location of a single target while correcting zoom lens distortion consumes 0.4 ms on average (Section 5.4), and tracking each target consumes another 0.4 ms (Section 3.3). The image frame rate is 25 fps, so each frame lasts 40 ms; at roughly 0.8 ms per target, 40 ms/0.8 ms = 50 targets can be geo-located simultaneously.
- (b)
- For a stationary target, the RLS algorithm needs 3–5 s to converge to a stable value only when a new stationary target must be located (within these 5 s, lens distortion correction runs in real time and 50 targets can be located simultaneously). After 5 s, the geo-location errors of the target have converged and RLS no longer needs to run, so our geo-location algorithm can still geo-locate at least 50 targets simultaneously after lens distortion correction and RLS.
Therefore, this algorithm has great advantages in geo-location accuracy and real-time performance. The multiple target location method in this paper can be widely applied in many areas such as UAVs and robots.
3. Real-Time Target Geo-Location and Tracking System
3.1. Coordinate Frames and Transformation
Five coordinate frames (camera frame, body frame, vehicle frame, ECEF frame and geodetic frame) are used in this study. The relative relationships between the frames are shown in Figure 4. All coordinate frames follow a right-hand rule.
Figure 4.
The coordinate frames relation: Azimuth-elevation rotation sequence between camera and body frames. (a) camera frame; (b) body frame.
3.1.1. Camera Frame
The origin is the camera projection center. The x_c-axis is parallel to the horizontal column pixels' direction in the CCD sensor (see Figure 4), the y_c-axis is parallel to the vertical row pixels' direction in the CCD sensor (see Figure 4), and the positive z_c-axis represents the optical axis of the camera.
3.1.2. Body Frame
The origin is the mass center of the attitude measuring system. The x_b-axis is the 0° direction of the attitude measuring system, the y_b-axis is the 90° direction, and the z_b-axis completes the right-handed orthogonal axes set. The azimuth angle, elevation angle and distance output by the electro-optical stabilized imaging system are relative to this coordinate frame.
3.1.3. Vehicle Frame
A north-east-down (NED) coordinate frame whose origin is the mass center of the attitude measuring system. The aircraft yaw, pitch and roll angles output by the attitude measuring system are relative to this coordinate frame.
3.1.4. ECEF Frame
The origin is Earth's center of mass. The z-axis points to the Conventional Terrestrial Pole (CTP) defined by the International Time Bureau (BIH) 1984.0, the x-axis is directed to the intersection between the prime meridian (defined in BIH 1984.0) and the CTP equator, and the y-axis completes the right-handed orthogonal axes set.
3.1.5. WGS-84 Geodetic Frame
The origin and the three axes are the same as in the ECEF frame. Geodetic longitude L, geodetic latitude M and geodetic height H are used here to describe spatial positions, and the aircraft coordinates output by GPS are relative to this coordinate frame.
The relation between the camera frame and the body frame is shown in Figure 4. Two steps are required. First, a transformation from the camera frame to an intermediate frame: a rotation through the elevation angle about the elevation axis. The next step is a transformation from the intermediate frame to the body frame: a rotation through the azimuth angle about the azimuth axis. In Figure 4a, the elevation angle shown is 90°.
The relation between the body frame and the vehicle frame is shown in Figure 5. Three steps are required. First, a transformation from the body frame to a first intermediate frame: a rotation through the roll angle about the x-axis. The next step is a transformation from the first intermediate frame to a second intermediate frame: a rotation through the pitch angle about the y-axis. The final step is a transformation from the second intermediate frame to the vehicle frame: a rotation through the yaw angle about the z-axis.
Figure 5.
The coordinate frames relation: roll-pitch-yaw rotation sequence between body and vehicle frames. (a) body frame; (b) intermediate frame; (c) vehicle frame.
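The two rotation chains above can be composed into a single camera-to-vehicle transformation. The following minimal Python sketch assumes elevation is a rotation about the y-axis and azimuth a rotation about the z-axis; the exact axes and signs were lost from the symbols above, so they are assumptions to be checked against the gimbal conventions of Figures 4 and 5.

```python
import numpy as np

def rot_x(a):
    # right-handed rotation about the x-axis by angle a (rad)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def camera_to_body(azimuth, elevation):
    # camera -> intermediate (elevation), then intermediate -> body (azimuth)
    return rot_z(azimuth) @ rot_y(elevation)

def body_to_vehicle(roll, pitch, yaw):
    # body -> vehicle via the roll-pitch-yaw sequence of Figure 5
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

# A camera-frame LOS vector is expressed in the vehicle (NED) frame as:
# v_vehicle = body_to_vehicle(roll, pitch, yaw) @ camera_to_body(az, el) @ v_camera
```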
The relation between vehicle frame and earth centered earth fixed (ECEF) is shown in Figure 6.
Figure 6.
The coordinate frames relation: Vehicle, ECEF and geodetic frames.
3.2. Multi-Target Geo-Location Model
As shown in Figure 4a, the main target is at the camera field of view (FOV) center, so its homogeneous coordinates in the camera frame are (0, 0, R_m, 1), where R_m is the laser-measured distance to the main target. Through the transformations among the five coordinate frames, from the camera frame to the WGS-84 geodetic frame, the geodetic coordinates of the main target in the WGS-84 geodetic frame can be determined, as shown in Figure 7.
Figure 7.
Coordinate transformation process of multi-target geo-location system.
First, we calculate the coordinates of the main target in the ECEF frame through the chain of homogeneous coordinate transformations (Equation (1)):
Then we derive the geodetic coordinates of the main target from the earth-centered earth-fixed to world geodetic system (ECEF-WGS) transformation equations [19]:
In Equations (1)–(5), the semi-major axis of the ellipsoid is a, the semi-minor axis of the ellipsoid is b, the first eccentricity of the spheroid is e, the second eccentricity of the spheroid is e′, and the radius of spheroid curvature in the prime vertical is N.
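As a concrete reference for Equations (1)–(5), the sketch below implements the standard WGS-84 geodetic-to-ECEF mapping and an iterative inverse; the fixed-point latitude iteration is one common method and may differ from the closed-form expressions of [19].

```python
import math

WGS84_A = 6378137.0                    # semi-major axis a (m)
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared e^2

def geodetic_to_ecef(lon_rad, lat_rad, h):
    """(L, M, H) -> (X, Y, Z); N is the prime-vertical curvature radius."""
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat_rad) ** 2)
    x = (n + h) * math.cos(lat_rad) * math.cos(lon_rad)
    y = (n + h) * math.cos(lat_rad) * math.sin(lon_rad)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(lat_rad)
    return x, y, z

def ecef_to_geodetic(x, y, z, iterations=5):
    """(X, Y, Z) -> (L, M, H) by fixed-point iteration on the latitude."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - WGS84_E2))   # initial guess
    for _ in range(iterations):
        n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1.0 - WGS84_E2 * n / (n + h)))
    return lon, lat, h
```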
The key to sub-target geo-location is building a geometrical geo-location model. The coordinates of the sub-targets in the camera frame are solved from their pixel coordinates, and their geodetic coordinates are then calculated with the coordinate transformations of Equations (1)–(5). Suppose the ground area corresponding to a single image is flat and the relative altitudes between the targets and the electro-optical stabilized imaging system are the same. Based on the image-forming principles of single-plane-array charge coupled device (CCD) sensors, a multi-target geo-location model can be established, as shown in Figure 8.
Figure 8.
The model for locating any target in the image.
Suppose that no image distortion exists, the image point of the main target is at the image center, and the three points, namely the projection center, the sub-target and its image point, are on the same line. A pinhole imaging model is then formed and the altitude of a target relative to the electro-optical stabilized imaging system is given by Equation (6):
where h is the relative altitude, R_m is the distance from the electro-optical stabilized imaging system to the main target, and R_s is the distance from the electro-optical stabilized imaging system to a sub-target.
Suppose the line-of-sight (LOS) vectors of the main target, the sub-target and the point K directly beneath the camera are v_m, v_s and v_k, respectively; α1 is the angle between v_m and v_k, and α2 is the angle between v_s and v_k. Then [20]:
The basis vectors of the camera frame span a three-dimensional vector space; the coordinates of the LOS vectors v_m and v_s in the camera frame are given by Equations (9) and (10):
where f is the camera focal length in mm, and the pixel coordinates of the main-target image point and of the sub-target image point enter Equations (9) and (10), respectively.
The basis vectors of the vehicle frame span a three-dimensional vector space. In the vehicle frame, the LOS vector v_k points down the z-axis, so the coordinates of v_k in the vehicle frame are given by Equation (11):
The coordinates of v_k in the camera frame are solved as:
In Equation (12), three matrices appear: the rotation matrix from the vehicle frame to the body frame, the rotation matrix from the body frame to the camera frame, and the rotation matrix from the vehicle frame to the camera frame (the product of the former two).
Let γ be the angle between the z-axis of the vehicle frame and the z-axis of the camera frame. According to the geometric relationship in Figure 6, we obtain:
Using Euler parameters, or quaternions, we have the definition:
It can also be shown that [21]:
This may be manipulated into:
Therefore:
and it follows that:
By substituting the value of γ into Equations (11) and (12), the coordinates of v_k in the camera frame can be obtained. Then v_k is substituted into Equations (7) and (8) to obtain α1 and α2. Finally, according to the known main-target distance and Equation (6), the relative altitude h = R_m cos α1 and the sub-target distance R_s = h/cos α2 can be determined. Based on the sub-target distance and the LOS vector of the sub-target in the camera frame, the coordinates of the sub-target in this frame can be determined:
Finally, the geodetic longitude L, the geodetic latitude M and the geodetic height H of the sub-target can be calculated by substituting its camera-frame coordinates into Equations (1)–(5).
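The whole sub-target ranging chain can be condensed into a few lines. In the sketch below the names (pixel pitches ax_mm/ay_mm, focal length f_mm, rotation R_vc from the vehicle frame to the camera frame) are illustrative rather than the paper's notation, and the flat-ground assumption of the model is retained.

```python
import numpy as np

def angle_between(u, v):
    """Angle (rad) between two vectors, in the spirit of Equations (7)-(8)."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def subtarget_range(r_main_m, pix_sub, pix_center, ax_mm, ay_mm, f_mm, R_vc):
    """Return the sub-target slant range and its camera-frame coordinates."""
    v_main = np.array([0.0, 0.0, 1.0])          # main target LOS: the optical axis
    us, vs = pix_sub
    u0, v0 = pix_center
    # sub-target LOS from its pixel offset, cf. Equation (10)
    v_sub = np.array([(us - u0) * ax_mm, (vs - v0) * ay_mm, f_mm])
    # vehicle "down" axis expressed in the camera frame, cf. Equations (11)-(12)
    v_down = R_vc @ np.array([0.0, 0.0, 1.0])
    a1 = angle_between(v_main, v_down)
    a2 = angle_between(v_sub, v_down)
    h = r_main_m * np.cos(a1)                   # relative altitude, cf. Equation (6)
    r_sub = h / np.cos(a2)                      # sub-target slant range
    return r_sub, r_sub * v_sub / np.linalg.norm(v_sub)  # cf. Equation (21)
```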
3.3. Targets Tracking
The operator selects multiple targets in the first image and then these targets are tracked using the tracking algorithm. In recent years, many excellent tracking algorithms have been proposed [22,23,24,25,26,27,28,29,30,31]. Due to the limited hardware resources of the TMS320DM642 (Texas Instruments Incorporated, Dallas, TX, USA), the tracking algorithm for UAV applications must be simple as well as highly efficient to meet the performance demands of real-time multiple target tracking. We use a simple two-stage method to improve the real-time performance of the correlation tracking algorithm described in [13]. The main improvements are as follows:
In the low resolution stage, we calculate the average of four adjacent pixels in the original image to generate a low resolution image, whose resolution is half that of the original image. The low resolution template is generated in the same way. The formula of the normalized cross correlation (NCC) algorithm is as follows:
where the low-resolution template and the low-resolution search area are square regions, (i, j) is the top-left corner coordinate of the template position within the search area, and the template is moved over the search area during the matching operation. When the NCC value reaches its maximum, the corresponding point is the best matching point in the low-resolution search area.
In the original-resolution stage, we only need to search a small area of the original image around the coarse match; the top-left corner of this search area follows from doubling the coarse match coordinates. Then, the best matching point in the original image is calculated using the same NCC algorithm. In our implementation, the template size is set to 28 pixels and the search area size to 46 pixels.
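A compact sketch of this two-stage matcher, written with OpenCV/NumPy for readability rather than for the DSP; the 28-pixel template and 46-pixel refinement window mirror the sizes quoted above, and border handling is omitted for brevity.

```python
import cv2

def two_stage_match(frame, template):
    """Coarse NCC match at half resolution, then refinement at full resolution."""
    # Stage 1: 2x2 pixel averaging halves the resolution of image and template.
    small_f = cv2.resize(frame, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
    small_t = cv2.resize(template, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
    score = cv2.matchTemplate(small_f, small_t, cv2.TM_CCOEFF_NORMED)
    _, _, _, loc = cv2.minMaxLoc(score)            # best coarse match (x, y)
    cx, cy = 2 * loc[0], 2 * loc[1]                # back to full-resolution pixels
    # Stage 2: search only a 46x46 window around the coarse hit (28x28 template
    # leaves a +/-9 pixel margin, matching the sizes in the text).
    th, tw = template.shape[:2]
    x0, y0 = max(cx - 9, 0), max(cy - 9, 0)
    window = frame[y0:y0 + th + 18, x0:x0 + tw + 18]
    score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, loc = cv2.minMaxLoc(score)
    return x0 + loc[0], y0 + loc[1]                # top-left corner of best match
```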
In [13], template image matching takes 0.62 ms. After the improvement, it consumes 0.4 ms on the multi-target localization circuit board. The improved method can meet the real-time requirements of multi-target tracking.
4. Methods to Improve the Accuracy of Multi-Target Localization
4.1. Distortion Correction
The above multi-target geo-location model is established under the assumption that no image distortion exists. In fact, due to lens design and manufacturing errors of imaging systems, the image will be distorted [32,33,34,35], so the projection rays between image points and object points cannot completely satisfy linear propagation across the total field of view (TFOV) and instead bend to some extent. As shown in Figure 6, the image point of the main target moves from its ideal position to a distorted position, and the three points, namely the projection center, the image point and the object point, are no longer on a straight line, so the geometry does not conform to the ideal pinhole imaging model. Calculating target geo-location data with the ideal pinhole imaging model would therefore lead to a large error, so the lens distortion must be corrected. Distortion correction involves first deriving the ideal image position T of the target from its image point in the distorted image according to the distortion model of the camera, and then calculating the geodetic coordinates of the target using the pixel coordinates of the ideal image point.
Real-time distortion correction of a zoom lens is impossible with conventional calibration methods because the tedious calibration process has to be repeated whenever the focal length changes. In this research, we divide the zoom lens distortion procedure into two steps: lens distortion parameter estimation in the laboratory and real-time zoom lens distortion correction on the UAV.
4.1.1. Lens Distortion Parameter Estimation in the Laboratory
Lens distortion parameter estimation is performed in the laboratory. We use the UAV electro-optical stabilized imaging system to capture images containing a chessboard pattern. The lines of the chessboard pattern are straight in the real world, but the captured images generally contain curved lines caused by lens distortion. We capture many images at different focal lengths of the zoom lens and use them to construct the distortion parameter table, as shown in Figure 3.
We extract the chessboard image edges using the Canny edge detector. The thresholds of the Canny edge detector are provided in terms of percentages of the gradient norm of the image.
For a zoom lens, the typical range of the distortion coefficient is expressed in terms of the diagonal of the image [36]. In pixel coordinates, the image has a fixed size, the distortion center is allowed to differ from the image center, and the search range of the distortion center is set as a neighborhood of the image center; the search range of the distortion coefficient is set accordingly [37,38].
We sample candidate distortion centers within their range and, for each candidate distortion center, sample candidate values of the distortion coefficient within its range, so a set of possible distortion parameters is generated for a given camera focal length. The distortion parameters are defined in Equations (22)–(24).
For each distortion parameter set, the pixel coordinates of the corrected chessboard image's edge points are computed using Equations (25)–(28):
In Equations (25)–(28), the pixel sizes are in µm; the distortion center is in pixels; the pixel coordinates of the distorted image point and of the undistorted (corrected) image point are in pixels; and the projection coordinates of a distorted point are in metric units.
The gradient directions of the corrected chessboard image's edge points are computed using Equations (29)–(31):
where I is the chessboard image brightness, and I_x and I_y are the first-order derivatives of the brightness at the corrected image's edge points.
We compute the Hough transform of the corrected chessboard image. The strongest peaks in the Hough transform correspond to the most distinct lines. The distance between line l_j and the origin is ρ_j and the orientation of the line is θ_j, where j = 1, 2, …, N.
If the angular difference between the edge point orientation and the line orientation is less than a threshold (set to 2° in our implementation, which meets the distortion correction accuracy requirements), we compute the distance from the edge point to line l_j. If this distance is less than a second threshold (set to 2 pixels in our implementation), the edge point votes for line l_j; the vote of the edge point is:
We compute the sum of the votes of all edge points. For this focal length, the best distortion parameters are obtained by maximizing the straightness measure function:
where the summed term is the number of votes received by line l_j in the chessboard image corrected with a given distortion parameter set.
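The grid search can be sketched as follows. The sketch assumes a one-parameter division model in place of Equations (22)–(28) and omits the gradient-direction test of Equations (29)–(31), so it illustrates the structure of the search rather than reproducing the paper's exact model.

```python
import numpy as np
import cv2

def undistort_points(pts, k1, center):
    """Division model (an assumption): undistorted = center + d / (1 + k1 * r^2)."""
    d = pts - center                      # offsets from the distortion center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d / (1.0 + k1 * r2)

def straightness(edge_pts, k1, center, n_lines=20, tol=2.0):
    """Score a candidate (k1, center) by how many undistorted edge points
    lie within `tol` pixels of the strongest Hough lines."""
    und = undistort_points(edge_pts.astype(float), k1, center)
    img = np.zeros((768, 1024), np.uint8)
    ok = (und[:, 0] >= 0) & (und[:, 0] < 1024) & (und[:, 1] >= 0) & (und[:, 1] < 768)
    und = und[ok]
    img[und[:, 1].astype(int), und[:, 0].astype(int)] = 255
    lines = cv2.HoughLines(img, 1, np.pi / 180, threshold=50)
    if lines is None:
        return 0
    score = 0
    for rho, theta in lines[:n_lines, 0]:
        n = np.array([np.cos(theta), np.sin(theta)])
        dist = np.abs(und @ n - rho)      # point-to-line distances
        score += int((dist < tol).sum())  # votes for this line
    return score

def best_parameters(edge_pts, k1_grid, center_grid):
    # exhaustive search over all sampled (k1, center) combinations
    return max(((k1, c) for k1 in k1_grid for c in center_grid),
               key=lambda p: straightness(edge_pts, p[0], np.array(p[1], float)))
```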
We apply the above algorithm to calibrate the best distortion parameters at different lens focal lengths. The best zoom lens distortion parameters over all focal lengths are then obtained through curve fitting using Matlab tools. We store the distortion parameter table for all focal lengths in the flash chip on the multi-target geo-location and tracking circuit board.
4.1.2. Real-Time Lens Distortion Correction on the UAV
The zoom lens is connected to a potentiometer through gears. The relationship between focal length and resistance was calibrated in the laboratory, so we can obtain the focal length by measuring the resistance value of the potentiometer in real time. During UAV flight, we use the focal length measuring sensor (model: S10HP-3 3-turn potentiometer, SAKAE, Nagoya, Japan, resistance error: ±1%) to measure the camera focal length and look up the corresponding distortion parameters in the flash chip on the multi-target localization circuit board. The pixel coordinates of the corrected real-time image are computed using Equations (25)–(28), and the corrected pixel coordinates are used to calculate the geodetic coordinates of the targets.
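A minimal sketch of this runtime path, assuming the same division model as in the previous sketch and a lookup table keyed by the quantized focal length (both assumptions):

```python
def correct_target_pixel(u_d, v_d, focal_mm, table):
    """`table` maps a quantized focal length to (k1, u0, v0) fitted in Section 5.1."""
    k1, u0, v0 = table[round(focal_mm)]    # parameters precomputed in the flash chip
    du, dv = u_d - u0, v_d - v0
    r2 = du * du + dv * dv
    # undistort the measured pixel before it enters the geo-location model
    return u0 + du / (1.0 + k1 * r2), v0 + dv / (1.0 + k1 * r2)
```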
4.2. RLS Filter
For stationary targets on the ground, the location results in different frames should be the same. Therefore, a popular technique to remove the estimation error is recursive least squares (RLS) filtering. The RLS filter minimizes the average squared error of the estimate and requires only a scalar division at each step, making it suitable for real-time implementation, so we use RLS to reduce the standard deviation and improve the accuracy of multiple stationary-target localization.
Suppose the original geo-location data of the images form the measurement sequence. The RLS algorithm flowchart is shown in Figure 9, in which the initial matrix P0 is a unit matrix. After RLS filtration of the original data, the filtered data are obtained; the filtered quantity can be the longitude L, the latitude M or the geodetic height H.
Figure 9.
Flowchart of the RLS algorithm.
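Because each coordinate of a stationary target is a constant observed in noise, the RLS of Figure 9 reduces to a scalar recursion per coordinate. A minimal sketch, with p0 = 1 mirroring the unit-matrix initialization in the flowchart:

```python
class ScalarRLS:
    """Recursive least squares estimate of a constant coordinate."""
    def __init__(self, p0=1.0):
        self.est = None   # current estimate of the coordinate
        self.p = p0       # estimate covariance, shrinks as data accumulate

    def update(self, z):
        if self.est is None:
            self.est = z
            return self.est
        k = self.p / (self.p + 1.0)      # gain: only a scalar division per step
        self.est += k * (z - self.est)   # blend the new measurement in
        self.p *= (1.0 - k)
        return self.est

# Usage: one filter per coordinate (L, M, H), fed with per-frame geo-location data.
lon_filter = ScalarRLS()
# for each frame: lon_hat = lon_filter.update(lon_measured)
```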
The GPS data (coordinates of the UAV) refresh rate is 1 Hz, but the video frame rate is usually above 25 Hz. To raise the convergence rate of the RLS algorithm, when the UAV speed is known, the coordinates of the UAV at the corresponding time can be determined through dead reckoning. In the WGS-84 ECEF frame, the coordinates of the UAV are given by Equations (35) and (36) [39]:
where the initial coordinates are those of the most recent GPS fix and the two speed terms are the UAV speeds along the two horizontal axes. The influence of the UAV geodetic position and speed on the reckoned coordinates of the UAV is analyzed as follows (see also the sketch after this list):
- (1)
- In the WGS-84 geodetic frame, the higher the latitude of the UAV, the smaller the projection of 1° of longitude onto the horizontal direction. Therefore, in high-latitude areas, the measurement accuracy of GPS is high and the accuracy of the reckoned coordinates is high.
- (2)
- The smaller the UAV speed, the smaller the distance the UAV moves in the same time interval, and the higher the accuracy of the reckoned coordinates.
According to Equations (35) and (36), the error resulting from the low update rate of the GPS data can be compensated so that the RLS algorithm converges rapidly to a stable value. Therefore, we can geo-locate multiple stationary ground targets quickly and accurately.
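A minimal sketch of this dead-reckoning interpolation between 1 Hz GPS fixes, following Equations (35) and (36); the axis names and the constant-height assumption are illustrative:

```python
def reckon_uav_position(fix_xyz, vel_xy, dt):
    """Propagate the last GPS fix (X0, Y0, Z0) with horizontal speeds (VX, VY)
    over dt seconds, cf. Equations (35)-(36); height is held constant."""
    x0, y0, z0 = fix_xyz
    vx, vy = vel_xy
    return (x0 + vx * dt, y0 + vy * dt, z0)

# Example: 3 frames after a fix at 25 fps -> dt = 3 / 25.0 s
# pos = reckon_uav_position((x0, y0, z0), (vx, vy), 3 / 25.0)
```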
5. Experiments and Discussion
The target location data were obtained during a UAV flight in real time. The evaluation is based on UAV videos of the Changji highway captured from 9:40 to 11:10. The resolution of the videos is 1024 × 768 and the frame rate is 25 frames per second (fps).
5.1. The Zoom Lens Distortion Parameter Estimation Results
To evaluate the proposed lens distortion parameter estimation approach, we use a plane containing a chessboard pattern and a zoom lens camera, which are shown in Figure 10. We capture images containing the chessboard pattern at several focal lengths across the zoom range and then perform the lens distortion parameter estimation approach of Section 4.1 to estimate the lens distortion parameters.
Figure 10.
(a) A plane containing a chessboard pattern; (b) The zoom lens camera (not yet mounted in the electro-optical stabilized imaging system).
The relationships between the distortion coefficient and the focal length are shown in Figure 11a. The relationships between the distortion center and the focal length are shown in Figure 11b.
Figure 11.
(a) The relationships between distortion coefficient and the focal length (b) The relationships between the distortion center and the focal length .
We fit curves to these data. Below a breakpoint focal length, the relationship between the distortion coefficient and the focal length is given by Equation (37), and above the breakpoint by Equation (38); the coefficients in both equations are constants obtained from the fit. We likewise fit the relationship between the distortion center and the focal length, given by Equations (39) and (40), whose coefficients are also fitted constants.
Based on the above fitting formulas, we calculate the zoom lens distortion parameters at all focal lengths to construct the distortion parameter table in the laboratory. We store the distortion parameter table in the flash chip on the multi-target localization circuit board.
5.2. Targets Location Experimental Design and Instrument Description
This test is divided into the following four parts:
- (1)
- Monte Carlo simulation analysis of multi-target geo-location error. Through this analysis, the expected error of multi-target geo-location can be determined.
- (2)
- Geo-location test of a single aerial image. We substitute an actual aerial image and its position/attitude data into multi-target geo-location program for target location resolution. We use a high-precision GPS receiver to measure the geo-location data of various ground targets as the nominal values for target geo-location. We compare the calculated values with these nominal values to obtain the multi-target geo-location accuracy of the image. We correct the geo-location error arising from lens distortion, and then compare the calculated geodesic coordinates of each target with its nominal geodesic coordinates to determine the multi-target geo-location accuracy after the distortion correction.
- (3)
- Geo-location test of multi-frame aerial images. For many stationary ground targets, we use the RLS algorithm to adaptively estimate the multi-frame image geo-location data and then by comparing the RLS filtration results with nominal values, we determine the target geo-location accuracy after the RLS filtration.
- (4)
- Real-time geo-location and tracking of multiple ground-based moving targets. We derive the motion trail of each target from the geo-location data and time interval of every image. Then we calculate the speed of each target.
Here, a GPS receiver of the Geo Explorer 3000 series is used for ground measurements. This instrument has 14 channels, including 12 L1 code and carrier channels and two satellite-based augmentation system (SBAS) channels. It is integrated with real-time two-channel SBAS tracking technology and supports real-time differential correction. It can achieve real-time sub-meter geo-location accuracy; an accuracy of 50 cm is available through Trimble Delta Phase post-processing.
5.3. Test 1: Monte Carlo Analysis of Multi-Target Geo-Location Accuracy Error
Error analysis is an important step in judging whether a geo-location method is good or not. It is very difficult to analyze the target geo-location error through total differentiation of the measurement equation of the airborne electro-optical stabilized imaging system, so Monte Carlo analysis is introduced to analyze the multi-target geo-location error. The Monte Carlo method is based on the law of large numbers and Bernoulli's theorem [40]. On the basis of this method, a model of multi-target geo-location error can be established:
where ΔL, ΔM and ΔH are the geo-location errors of each target, and ΔX is the geo-location parameter error.
We use five aerial images (32 targets) for the multi-target geo-location and distortion correction tests (eight targets in the 1st image, six in the 2nd image, five in the 3rd image, five in the 4th image, and eight in the 5th image). We use the eight targets in the 1st image in Test 1. The image size is 1024 × 768 pixels and the pixel size is 5.5 μm × 5.5 μm. The position/attitude data of the electro-optical stabilized imaging system collected through GPS, attitude measurement and laser ranging at the time of image capture are shown in Table 1. The root-mean-square error (RMSE) of each parameter is determined in accordance with the maximum nominal error stipulated in the relevant measurement equipment specifications.
Table 1.
Localization and attitude data of UAV electro-optical stabilized imaging system.
Based on the nominal values and errors of the above parameters, a sample model of 10,000 random variable arrays can be established in Matlab. By using Equations (1)–(20) and (41), the geodetic coordinate RMSE of every target in this test can be determined through the Monte Carlo method, as shown in Table 2.
Table 2.
Errors of multi-target expected location.
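A sketch of how such a Monte Carlo analysis can be run: every parameter is perturbed with zero-mean Gaussian noise at its specification RMSE and the geo-location chain is recomputed; `geolocate`, `nominal` and `rmse` below are placeholders for the actual model of Equations (1)–(20) and the Table 1 data.

```python
import numpy as np

def monte_carlo_rmse(geolocate, nominal, rmse, n=10000, seed=0):
    """nominal, rmse: dicts of parameter values and their 1-sigma errors.
    `geolocate(**params)` must return the (L, M, H) of a target."""
    rng = np.random.default_rng(seed)
    base = np.array(geolocate(**nominal))       # error-free solution
    errs = []
    for _ in range(n):
        sample = {k: v + rng.normal(0.0, rmse.get(k, 0.0))
                  for k, v in nominal.items()}  # perturb every parameter
        errs.append(np.array(geolocate(**sample)) - base)  # (dL, dM, dH)
    return np.sqrt((np.array(errs) ** 2).mean(axis=0))     # RMSE of L, M, H
```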
5.4. Test 2: Multi-Target Geo-Location Using a Single Aerial Image with Distortion Correction
The data in Table 1 are substituted into Equations (1)–(20) to calculate the geodesic coordinates of each target in a single image, as shown in Table 3.
Table 3.
Calculated values in the geodesic coordinates of each target.
The calculated geodetic coordinates of each target in Table 3 are compared with the nominal geodetic coordinates measured by the GPS receiver on the ground, in order to obtain the geo-location error of each target in a single image, as shown in Table 4. Comparison of the data in Table 4 with those in Table 2 shows that the geo-location error of each target is within the expected error range. The geodetic height geo-location errors of all the targets are about 18 m, which is basically in line with the assumption in Section 3.2 that "the ground area corresponding to a single image is flat and the relative altitudes between targets and electro-optical stabilized imaging system are the same".
Table 4.
Geo-location error of each target in a single image.
The latitude and longitude geo-location errors of sub-targets are bigger than those of main targets for the following reasons: (1) the error arising from the slope distance difference between a main target and a sub-target. The longer the slope distance of a target to the image border, the bigger the geo-location error [39]; (2) the coordinate transformation error caused by the attitude measurement error and the angle measurement error (measured by the electro-optical stabilized imaging system) during the calculation of altitude and distance of a sub-target relative to the electro-optical stabilized imaging system; (3) the pixel coordinate error of a sub-target found in target detection; and (4) the pixel coordinate error of a sub-target caused by image distortion.
The geo-location errors of a target can be reduced in three ways: a reduction in flight height or an increase in the platform elevation angle (the horizontal forward direction is 0°) to shorten the slope distance, which must take the flight conditions into account; the selection of a high-precision attitude measuring system and an improvement in the angle measurement accuracy of the electro-optical stabilized imaging system, which must take the hardware cost into account; and distortion correction. Therefore, the influence of distortion correction on multi-target geo-location accuracy is mainly discussed here.
Sub-target pixel coordinate error is mainly caused by image distortion. The correction method is discussed in Section 4.1 and Section 5.1. We calculate the corrected pixel coordinates using Equations (25)–(28) and then use them to calculate the geodetic coordinates of each target. The geo-location errors of the multiple targets after distortion correction are shown in Table 5.
Table 5.
Geo-location errors of multi-target after distortion correction.
It can be seen through the comparison between the data in Table 5 and those in Table 4 that, after the distortion correction, the latitude and longitude errors of each target are generally smaller than those before the correction, while the geo-location error of geodetic height remains basically unchanged. For the sub-targets farther from the image center, a more significant reduction in longitude and latitude errors can be obtained. Therefore, lens distortion correction can improve the geo-location accuracy of sub-targets and thus raise the overall accuracy of multi-target geo-location.
Target geo-location accuracy and missile hit accuracy are usually evaluated through the circular error probability (CEP) [41]. CEP is defined as the radius of a circle, centered on the target point, within which the hit probability is 50%. In the ECEF frame, the geo-location error along the x direction is x and the geo-location error along the y direction is y; both can be calculated from the longitude and latitude errors listed in Table 4 and Table 5. Suppose both x and y are subject to the normal distribution; the joint probability density function of (x, y) can be expressed as:

$$ f(x,y)=\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\exp\!\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_x)^2}{\sigma_x^2}-\frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}+\frac{(y-\mu_y)^2}{\sigma_y^2}\right]\right\} $$
where μ_x and μ_y are the mean geo-location errors along the x and y directions, respectively; σ_x and σ_y are the standard deviations of the geo-location errors along the x and y directions, respectively; and ρ is the correlation coefficient of the geo-location errors along the x and y directions.
Suppose ρ = 0 and σ_x = σ_y = σ; the radius R satisfying the following equation is then the CEP:

$$ \iint_{x^2+y^2\le R^2} f(x,y)\,dx\,dy = 0.5 $$
If the mean geo-location errors are unknown, they can be substituted by the sample mean geo-location errors; if the standard deviations of the geo-location errors are unknown, they can be substituted by the sample standard deviations; if the correlation coefficient of the geo-location errors is unknown, it can be substituted by the sample correlation coefficient; if the total mean geo-location error is unknown, it can be substituted by the total sample mean geo-location error; and if the total standard deviation of the geo-location error is unknown, it can be substituted by the total sample standard deviation of the geo-location errors.
Suppose the number of geo-location error samples is n: (x_1, y_1), (x_2, y_2), …, (x_n, y_n). The sample mean geo-location errors along the x and y directions are:

$$ \bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad \bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_i $$
The total sample mean geo-location error is:
The sample standard deviations of the geo-location errors along the x and y directions are:

$$ s_x=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2},\qquad s_y=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(y_i-\bar{y})^2} $$
The total sample standard deviation of the geo-location errors is:
The sample correlation coefficient of the geo-location errors is:

$$ \hat{\rho}=\frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2\sum_{i=1}^{n}(y_i-\bar{y})^2}} $$
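The sample statistics of Equations (43)–(50) and the 50%-containment radius can be computed directly from the error samples. The sketch below uses the sample median of the radial errors as the empirical CEP, which matches the counting argument used in the text.

```python
import numpy as np

def cep_from_samples(dx, dy):
    """Sample statistics and the empirical CEP from error samples (dx_i, dy_i)."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    stats = {
        "mean_x": dx.mean(), "mean_y": dy.mean(),          # sample means
        "std_x": dx.std(ddof=1), "std_y": dy.std(ddof=1),  # sample standard deviations
        "rho": np.corrcoef(dx, dy)[0, 1],                  # sample correlation coefficient
    }
    radii = np.hypot(dx, dy)          # radial error of each sample
    cep = np.median(radii)            # radius containing 50% of the samples
    return cep, stats
```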
When the number of geo-location error samples is more than 30, the confidence of the CEP calculation result can reach 90% [41]. Therefore, through the multi-target geo-location and distortion correction tests on five aerial images, 32 samples of target geo-location error are obtained both before and after the distortion correction: eight in the 1st image, six in the 2nd image, five in the 3rd image, five in the 4th image, and eight in the 5th image. The normality and independence tests of the sample data reveal that the samples conform to the normal distribution but are not independent (the sample correlation coefficients before and after the distortion correction are both nonzero).
Geo-location errors of the eight targets in the 1st image (part of the 32 targets) before the correction are shown in Table 4, and those after the correction are shown in Table 5.
The multi-target geo-location errors before the distortion correction are processed through Equations (43)–(50) to calculate the CEP: the mean geo-location errors along the x and y directions are 17.41 m and 20.34 m, the standard deviations along the x and y directions are 7.77 m and 10.05 m, and the total mean geo-location error and total standard deviation are 28.98 m and 6.08 m, respectively. Numerical integration of Equation (43) finds that, among the 32 samples, 16 lie in a circle with a radius of 28.74 m, i.e., the probability is 50%. The numbers of samples inside and outside the solid circle in Figure 12 also show that the CEP1 of multi-target geo-location before the distortion correction is 28.74 m. The multi-target geo-location errors after the distortion correction are then processed through Equations (43)–(50): the mean geo-location errors along the x and y directions are 17.04 m and 18.63 m, the standard deviations along the x and y directions are 6.86 m and 8.25 m, and the total mean geo-location error and total standard deviation are 26.91 m and 5.31 m, respectively. Numerical integration of Equation (43) finds that 16 of the 32 samples lie in a circle with a radius of 26.80 m, i.e., the probability is 50%. The numbers of samples inside and outside the dotted circle in Figure 12 also show that the CEP2 of multi-target geo-location after the distortion correction is 26.80 m, 7% smaller than before the distortion correction. Note: the CEP circles are computed from the original geo-location errors, whose 32 samples scatter across all four quadrants; for a clearer presentation, the absolute value of each error component is plotted, so all points appear in the first quadrant of Figure 12.
Figure 12.
Sample distribution of target location error before and after distortion correction.
In order to compare the performance of our multi-target geo-location algorithm with that of other algorithms, these localization accuracy results have been compared with the accuracies of the geo-location methods reported in [7,8,9], as shown in Table 6. It can be seen from Table 6 that the target geo-location accuracy in this paper is close to that reported in [7]. However, the geo-location accuracy in [7] depends on the distance between the projection centers of two consecutive images (the baseline length). To obtain higher geo-location accuracy, the baseline must be longer, so in [7] the time interval between the two consecutive images used for geo-location is quite large. Meanwhile, as the SIFT algorithm is needed to extract feature points from multi-frame images for 3D reconstruction, the algorithm in [7] has a heavy computational load that hinders real-time implementation. In [8], the geo-location accuracy is about 20 m before filtering; the real-time geo-location accuracy of the two methods is almost the same, but our UAV flight altitude is much higher than that in [8]. In [9], the geo-location accuracy is about 39.1 m before compensation, so our real-time geo-location accuracy is higher. In [9], after the UAV flies many circuits in an orbit, the geo-location accuracy can be increased to 8.58 m; however, this accuracy cannot be obtained in real time. The algorithms in [7,8,9] are implemented on a computer in the ground station. Since the images are transmitted to, and processed on, a computer in the ground station, a delay occurs between the data capture moment and the completion of processing. In comparison, the geo-location algorithm in this paper is programmed and implemented on the multi-target localization circuit board (model: THX-IMAGE-PROC-02, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China) with a TMS320DM642 at a 720 MHz clock rate, 32-bit instructions/cycle and 1 GB DDR SDRAM. It consumes 0.4 ms on average to calculate the geo-location of a single target while simultaneously correcting zoom lens distortion. Therefore, this algorithm has great advantages in both geo-location accuracy and real-time performance, and the multi-target location method in this paper can be widely applied in areas such as UAVs and robots.
Table 6.
Geo-location accuracy comparison between the proposed algorithm and the algorithms in references [7,8,9].
5.5. Test 3: RLS Filter for Geo-Location Data of Multiple Stationary Ground Targets
The targets in the above tests are all stationary ground targets. After lens distortion correction, we use the 1st aerial image (eight targets) as the initial frame for target tracking. For 150 frames starting from the 1st image, the eight targets are tracked. The coordinates of the UAV are calculated using Equations (35) and (36), and the update rate of the UAV coordinates is synchronized with the camera frame rate. The geo-location data of each target are adaptively estimated by the RLS algorithm. The geo-location results of the eight targets before and after RLS filtration are shown in Figure 13 (the dots “•” in different colors in Figure 13 represent the original geo-location data of the eight targets, while □, ◁, ▷, ☆, ▽, ○, ◇ and △ represent the geo-location data of the main target and sub-targets 1, 2, …, 7 after RLS filtration). It can be observed from Figure 13 that, after RLS filtration, the dispersion of the geo-location data decreases sharply for each target, converging rapidly to a small area adjacent to the true position of the target. Figure 14 shows how the plane geo-location error of sub-target 2 changes with the number of image frames (the corresponding time) in the RLS filtration process. It can be seen from Figure 14 that after filtering the geo-location data of 100–150 images (corresponding to 3–5 s), the geo-location errors of the target have converged to a stable value. We can therefore obtain a more accurate stationary target location immediately after 150 images (it is no longer necessary to run RLS).
Figure 13.
Localization results before and after RLS filtering.
Figure 14.
Plane localization errors after RLS filtering.
By comparing the results after the stabilization of RLS filtration with the nominal geodetic coordinates of each target, the geo-location errors of geodetic coordinates of the targets after RLS filtration can be determined, as shown in Table 7.
Table 7.
Errors of multi-target location after RLS filtering.
Comparison of the data in Table 7 with those in Table 5 shows that, after RLS filtration, the longitude and latitude errors of each target are much smaller than the multi-target geo-location errors of a single image processed only by lens distortion correction. The geo-location error of geodetic height also decreases slightly.
After lens distortion correction, we use the other four aerial images (24 targets) as initial frames for target tracking (six targets in the 2nd image, five in the 3rd image, five in the 4th image, and eight in the 5th image).
For 150 frames starting from each of the 2nd (six targets), 3rd (five targets), 4th (five targets) and 5th (eight targets) images, the targets are tracked and the geo-location data of each target are adaptively estimated by the RLS algorithm.
The multi-target geo-location data after RLS filtration are processed through Equations (43)–(50) to calculate the CEP: the mean geo-location errors along the x and y directions are 13.78 m and 14.53 m, and the standard deviations along the x and y directions are 5.79 m and 7.34 m, respectively. Among the 32 samples, numerical integration of Equation (43) finds 17 in a circle with a radius of 21.52 m, i.e., a probability of 53%. The numbers of samples inside and outside the dotted circle in Figure 15 also show that the CEP3 of multi-target geo-location after RLS filtration is 21.52 m, 25% smaller than the CEP1 of multi-target geo-location from a single image. Note: the CEP circle is computed from the original geo-location errors, whose 32 samples scatter across all four quadrants; for a clearer presentation, the absolute value of each error component is plotted, so all points appear in the first quadrant of Figure 15.
Figure 15.
Sample distribution of target location before and after RLS.
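For reference, the CEP statistics above can be approximated empirically without Equation (43). The sketch below, with assumed function names of our own, takes the per-target plane errors and returns the radius of the circle, centred on the mean error, that contains half of the samples; the paper's numerical integration of Equation (43) may use a different convention.

```python
import numpy as np

def cep_empirical(ex, ey):
    """Empirical circular error probable: the radius, centred on the mean
    error, containing 50% of the samples (median radial miss distance)."""
    ex, ey = np.asarray(ex, dtype=float), np.asarray(ey, dtype=float)
    radii = np.hypot(ex - ex.mean(), ey - ey.mean())
    return np.median(radii)

def error_stats(ex, ey):
    """Mean and standard deviation of the plane errors, as reported above."""
    return (np.mean(ex), np.mean(ey),
            np.std(ex, ddof=1), np.std(ey, ddof=1),
            cep_empirical(ex, ey))
```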
5.6. Test 4: Real-Time Geo-Location and Tracking of Multiple Moving Ground Targets
This test localizes and tracks four targets moving on the Changji highway in video taken by the UAV, as shown in Figure 16 (part of the image). The size of every image is , the pixel size is , and the focal length is . The target in the image center is chosen as the main target, and the three targets at other positions are chosen as sub-targets. The pixel coordinates of each target in the 1st image, and the location and attitude data of the electro-optical stabilized imaging system at the corresponding time, are shown in Table 8.
Figure 16.
Multiple moving targets in aerial video. (a–f): frame 1, frame 3, frame 4, frame 6, frame 7, frame 9.
Table 8.
Location and attitude data of the electro-optical stabilized imaging system.
Nine chronological images are selected from the video to calculate the geo-location data of each target in every image. The resulting spatial position distribution of all the targets is shown in Figure 17a. With the main target's position in the 1st image as the origin of coordinates, the spatial positions of all the targets are projected onto the ground to obtain their planar motion trails, as shown in Figure 17b and sketched after the figure. The time span of these nine frames is 30 s.
Figure 17.
(a) The spatial localization of each target; (b) The target motion trajectory on the ground.
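The earthward projection in Figure 17b amounts to mapping each geodetic fix into a local ground plane anchored at the main target's first position. Below is a minimal sketch under an equirectangular (small-area) approximation; the function name and structure are our assumptions, not the paper's exact formulation.

```python
import math

R_EARTH = 6378137.0  # WGS-84 equatorial radius in metres

def geodetic_to_local_xy(lat_deg, lon_deg, lat0_deg, lon0_deg):
    """Project (lat, lon) onto a local east/north plane whose origin is the
    main target's position in the 1st image. Equirectangular approximation,
    adequate over the few hundred metres these trajectories span."""
    lat0 = math.radians(lat0_deg)
    east = math.radians(lon_deg - lon0_deg) * R_EARTH * math.cos(lat0)
    north = math.radians(lat_deg - lat0_deg) * R_EARTH
    return east, north
```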
Since neither the UAV yaw nor the azimuth of the electro-optical stabilized imaging system is zero in general, the aerial images are somewhat rotated and distorted. The aerial orthographic projection of the abovementioned highway, shown in Figure 18, is taken from Google Maps. Figure 18 shows that the highway actually runs northwest–southeast. In China, cars drive on the right side of the road, so the house in Figure 18 borders the carriageway running from northwest to southeast. This house is visible in Figure 16b, so all the cars are travelling on the road in a northwest-to-southeast direction. This agrees with the localization results in Figure 17.
Figure 18.
Ortho image of the Changji highway in aerial imagery.
On the basis of Figure 17, the motion of each target can be analyzed as follows: in the 1st image the targets, ordered from front to back, are sub-target 3, sub-target 2, sub-target 1, and the main target; in the 3rd–4th images, sub-target 2 catches up with and overtakes sub-target 3; in the 4th–6th images, sub-target 1 catches up with and overtakes sub-target 3; in the 7th–9th images, the main target catches up with and overtakes sub-target 3; finally, the targets, ordered from front to back, are sub-target 2, sub-target 1, the main target, and sub-target 3. This coincides completely with the motion of the targets in the video frames shown in Figure 16, demonstrating that the geo-location algorithm can correctly locate and track multiple moving targets. The speed of each target can also be determined, as sketched below. This test further verifies the correctness of our multi-target geo-location model.
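The paper does not give its speed-estimation formula; a plausible sketch is a finite difference of consecutive planar fixes, with the function name ground_speed being our own hypothetical helper:

```python
import math

def ground_speed(p1, p2, t1, t2):
    """Ground speed in m/s from two planar fixes (east, north) in metres
    and their timestamps in seconds. Illustrative finite difference only."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / (t2 - t1)

# The nine frames span 30 s, so consecutive fixes are about 30/8 ≈ 3.75 s apart.
```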
6. Conclusions
In order to improve the reconnaissance efficiency of UAV electro-optical stabilized imaging systems, a multi-target localization system based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model and ways to improve the accuracy of multi-target localization are studied. Then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods which can improve the precision of target localization are proposed: (1) a lens distortion correction method based on the distortion ratio; (2) an RLS filtering method based on UAV dead reckoning. The localization error model is established using Monte Carlo theory, the multi-target location algorithm is analyzed, and the range of the localization error is obtained. In an actual flight, the UAV flight altitude is 1140 m, and the multi-target localization results are within the range of allowable error. After applying the lens distortion correction method to a single image, the CEP of the multi-target localization is reduced by 7%. The RLS algorithm can adaptively estimate the location data based on multiple image frames. Compared with multi-target localization based on a single frame, the CEP of the multi-target localization using RLS is reduced by 25%.
The average time to calculate the location data and distortion correction for a single target is 0.4 ms. The normal video rate is 25 fps, i.e., a 40 ms frame period; since 50 targets require only 50 × 0.4 ms = 20 ms, the proposed localization algorithm can locate 50 targets simultaneously in real time. The proposed method significantly reduces the image data processing time, which makes it convenient to implement multi-target localization on other embedded systems [42].
However, when a target leaves the field of view and re-enters it, the operator has to identify the target again. Future research will aim to address this problem; we will try to apply tracking-learning-detection (TLD) [43] for automatic target detection when a target re-enters the field of view. Due to the difficulty of implementing TLD on the TMS320DM642 (Texas Instruments Incorporated, Dallas, TX, USA), we leave these problems for our future research.
Acknowledgments
We acknowledge the financial support given under the National Defense Pre-Research Foundation of China (Grant No. 402040203). We thank the Academic Editor for carefully revising the language and grammatical structures of this article.
Author Contributions
Xuan Wang, Qianfei Zhou and Jinghong Liu initiated the research and designed the experiments. Xuan Wang and Qianfei Zhou wrote the paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Deming, R.W.; Perlovsky, L.I. Concurrent multi-target localization, data association, and navigation for a swarm of flying sensors. Inf. Fusion 2007, 8, 316–330.
- Minaeian, S.; Liu, J.; Son, Y.-J. Vision-based target detection and localization via a team of cooperative UAV and UGVs. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 1005–1016.
- Morbidi, F.; Mariottini, G.L. Active target tracking and cooperative localization for teams of aerial vehicles. IEEE Trans. Control Syst. Technol. 2013, 21, 1694–1707.
- Qu, Y.; Wu, J.; Zhang, Y. Cooperative Localization Based on the Azimuth Angles among Multiple UAVs. In Proceedings of the 2013 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 28–31 May 2013; pp. 818–823.
- Kwon, H.; Pack, D.J. A robust mobile target localization method for cooperative unmanned aerial vehicles using sensor fusion quality. J. Intell. Robot. Syst. Theory Appl. 2012, 65, 479–493.
- Yan, M.; Du, P.; Wang, H.L.; Gao, X.J.; Zhang, Z.; Liu, D. Ground multi-target positioning algorithm for airborne optoelectronic system. J. Appl. Opt. 2012, 33, 717–720.
- Han, K.; de Souza, G.N. Multiple Targets Geo-Location Using SIFT and Stereo Vision on Airborne Video Sequences. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 5327–5332.
- Barber, D.B.; Redding, J.; McLain, T.W.; Beard, R.W.; Taylor, C. Vision-based target geo-location using a fixed-wing miniature air vehicle. J. Intell. Robot. Syst. 2006, 47, 361–382.
- Subong, S.; Bhoram, L.; Jihoon, K. Vision-based real-time target localization for single-antenna GPS-guided UAV. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 1391–1401.
- Dobrokhodov, V.N.; Kaminer, I.I.; Jones, K.D.; Ghabcheloo, R. Vision-Based Tracking and Motion Estimation for Moving Targets Using Small UAVs. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006.
- Yap, K.C. Incorporating Target Mensuration System for Target Motion Estimation along a Road Using Asynchronous Filter. Master's Thesis, Naval Postgraduate School, Monterey, CA, USA, December 2006.
- Kumar, R.; Sawhney, H.; Asmuth, J.; Pope, A.; Hsu, S. Registration of Video to Geo-Referenced Imagery. In Proceedings of the Fourteenth International Conference on Pattern Recognition, Queensland, Australia, 20 August 1998; pp. 1393–1400.
- Huang, D.; Wu, Z. The Application of TMS320C64x DSP Assembly Language in Correlation Tracking Algorithms. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; Volume 4, pp. 1529–1532.
- Zhang, Y.; Tong, X.; Yang, T.; Ma, W. Multi-model estimation based moving object detection for aerial video. Sensors 2015, 15, 8214–8231.
- Wang, Z.; Xu, J.; Huang, Z.; Zhang, X.; Xia, X.-G.; Long, T.; Bao, Q. Road-aided ground slowly moving target 2D motion estimation for single-channel synthetic aperture radar. Sensors 2016, 16.
- Danescu, R.; Oniga, F.; Turcu, V.; Cristea, O. Long baseline stereovision for automatic detection and ranging of moving objects in the night sky. Sensors 2012, 12, 12940–12963.
- Steen, K.A.; Villa-Henriksen, A.; Therkildsen, O.R.; Green, O. Automatic detection of animals in mowing operations using thermal cameras. Sensors 2012, 12, 7587–7597.
- Rodriguez-Gomez, R.; Fernandez-Sanchez, E.J.; Diaz, J.; Ros, E. FPGA implementation for real-time background subtraction based on Horprasert model. Sensors 2012, 12, 585–611.
- Bowring, B.R. Transformation from spatial to geographical coordinates. Surv. Rev. 1976, 23, 323–327.
- Li, Y.; Yun, J.; Jun, W. Building of rigorous geometric processing model based on line-of-sight vector of ZY-3 imagery. Geomat. Inf. Sci. Wuhan Univ. 2013, 38, 1451–1455.
- Hughes, P.C. Spacecraft Attitude Dynamics; Courier Corporation: North Chelmsford, MA, USA, 2004.
- Fu, C.; Duan, R.; Kircali, D.; Kayacan, E. Onboard robust visual tracking for UAVs using a reliable global-local object model. Sensors 2016, 16.
- Liu, Z.; Wang, Z.; Xu, M. Cubature information SMC-PHD for multi-target tracking. Sensors 2016, 16.
- Perlovsky, L.I.; Deming, R.W. Maximum likelihood joint tracking and association in strong clutter. Int. J. Adv. Robot. Syst. 2013, 10, 1–9.
- Tian, W.; Wang, Y. Analytic performance prediction of track-to-track association with biased data in multi-sensor multi-target tracking scenarios. Sensors 2013, 13, 12244–12265.
- Ghirmai, T. Distributed particle filter for target tracking: With reduced sensor communications. Sensors 2016, 16.
- Kyristsis, S.; Antonopoulos, A.; Chanialakis, T.; Stefanakis, E.; Linardos, C.; Tripolitsiotis, A.; Partsinevelos, P. Towards autonomous modular UAV missions: The detection, geo-location and landing paradigm. Sensors 2016, 16.
- Enayet, A.; Razzaque, M.A.; Hassan, M.M.; Almogren, A.; Alamri, A. Moving target tracking through distributed clustering in directional sensor networks. Sensors 2014, 14, 24381–24407.
- Boudriga, N.; Hamdi, M.; Iyengar, S. Coverage assessment and target tracking in 3D domains. Sensors 2011, 11, 9904–9927.
- Dellen, B.; Erdal Aksoy, E.; Wörgötter, F. Segment tracking via a spatiotemporal linking process including feedback stabilization in an n-D lattice model. Sensors 2009, 9, 9355–9379.
- Sun, B.; Jiang, C.; Li, M. Fuzzy neural network-based interacting multiple model for multi-node target tracking algorithm. Sensors 2016, 16.
- Drap, P.; Lefèvre, J. An exact formula for calculating inverse radial lens distortions. Sensors 2016, 16.
- Wackrow, R.; Ferreira, E.; Chandler, J.; Shiono, K. Camera calibration for water-biota research: The projected area of vegetation. Sensors 2015, 15, 30261–30269.
- Sanz-Ablanedo, E.; Rodríguez-Pérez, J.R.; Armesto, J.; Taboada, M.F.Á. Geometric stability and lens decentering in compact digital cameras. Sensors 2010, 10, 1553–1572.
- Bosch, J.; Gracias, N.; Ridao, P.; Ribas, D. Omnidirectional underwater camera design and calibration. Sensors 2015, 15, 6033–6065.
- Miklavcic, S.; Cai, J. Automatic curve selection for lens distortion correction using Hough transform energy. IEEE Workshop Appl. Comput. Vis. 2013, 26, 455–460.
- Goljan, M.; Fridrich, J. Estimation of lens distortion correction from single images. Proc. SPIE 2014, 9028.
- Lee, T.Y.; Chang, T.S.; Wei, C.H.; Lai, S.H.; Liu, K.C.; Wu, H.S. Automatic distortion correction of endoscopic images captured with wide-angle zoom lens. IEEE Trans. Biomed. Eng. 2013, 60, 2603–2613.
- Liu, J.; Sun, H.; Zhang, B.; Dai, M.; Jia, P.; Shen, H.; Zhang, L. Target self-determination orientation based on aerial photoelectric imaging platform. Opt. Precis. Eng. 2007, 15, 1305–1309.
- Sheng, W.; Long, Y.; Zhou, Y. Analysis of target location accuracy in space-based optical-sensor network. Acta Opt. Sin. 2011, 31, 0228001.
- Le, Z.; Wuzhou, L.; Yangfeng, J. Localization accuracy evaluation method based on CEP. Command Control Simul. 2013, 35, 111–114.
- Kuo, D.; Gordon, D. Real-time orthorectification by FPGA-based hardware acceleration. Proc. SPIE 2010, 7830.
- Kalal, Z.; Mikolajczyk, K.; Matas, J. Tracking-learning-detection. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1409–1422.
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).