A Cooperative Target Localization Method Based on UAV Aerial Images

Abstract: A passive localization algorithm based on UAV aerial images and Angle of Arrival (AOA) is proposed to solve the target passive localization problem. In this paper, the images are captured using fixed-focus shooting. A target localization factor is defined to eliminate the effect of focal length and simplify calculations. To synchronize the positions of multiple UAVs, a dynamic navigation coordinate system is defined with the leader at its center. The target positioning factor is calculated from image information and the orientation elements within the UAV photoelectric reconnaissance device. The collinearity equation is used to derive the AOA, which is then combined with the UAV swarm's positional information to solve for the target coordinates. The accuracy of the positioning algorithm is verified with actual aerial images. On this basis, an error model is established, a calculation method for the co-localization PDOP is given, and the correctness of the error model is verified through Monte Carlo statistical simulation. Finally, a cubature Kalman filter algorithm is designed to improve positioning accuracy, and simulations are performed for both stationary and moving targets. The experimental results show that the algorithm significantly improves target positioning accuracy and ensures stable tracking of the target.


Introduction
Reconnaissance UAVs are equipped with key capabilities that enable them to locate targets quickly and accurately and to predict target behavior with precision. As technology has advanced, UAVs have become capable of multi-aircraft collaborative operations, thanks to bionic clustering and communication networking technologies. Optoelectronic information technology has also developed significantly, resulting in integrated, miniaturized, and cost-effective airborne optoelectronic detection devices [1,2]. To further improve target localization accuracy, UAVs now employ a clustered approach to execute target localization and situational awareness tasks [3].
In general, there are two types of UAV target localization techniques: active localization and passive localization. Active localization locates a target with a radio instrument, such as a UAV radar: the UAV actively ranges the target while positioning itself, which compromises the UAV's own concealment and survivability [4,5]. Passive localization instead collects target information without actively emitting electromagnetic waves, lasers, etc., to obtain ranging information, which to some extent ensures the safety of the UAV itself. According to the type of observation quantity, passive localization techniques fall into several categories: primarily the collinearity equation, image matching, binocular vision 3D localization, Doppler Rate of Change (DRC), Phase Difference Rate of Change (PDRC), Time Difference of Arrival (TDOA), Frequency Difference of Arrival (FDOA), Angle of Arrival (AOA), and other techniques [6,7]. The UAVs considered in this paper perform clustered localization tasks and distinguish themselves by being small, light, and low-power, with good anti-jamming capability and stealth. To accommodate the UAV platform and its usage needs, the localization techniques need to be improved.
Among the aforementioned passive techniques, the collinearity equation, image matching, and binocular vision 3D localization methods are image-based and can localize the target from a single image, but with significant localization error. The collinearity equation approach rests on a flat-terrain assumption, which does not always hold in real-world applications. Although feature-based image matching is more efficient and gray-scale correlation-based image matching is more widely used, image-matching algorithms are harder to apply, take longer to run, and demand more computing resources, so they cannot be employed where real-time performance is crucial. The key to binocular or multi-camera vision 3D localization is to shoot the target from various angles and acquire local feature points of the object; it cannot satisfy the measurement accuracy for more distant targets due to the restriction of baseline distance. Although direction-finding cross-localization improves the prediction of target maneuvering, it has a significant flaw in multi-target localization and falls short of UAVs' general requirements. Methods such as DRC, PDRC, TDOA, FDOA, and AOA are based on fast-improving localization algorithms in wireless sensor networks, where target information is typically collected by sensors mounted at several observation points [8].
To sum up, this work proposes an enhanced passive localization technique based on the image data obtained from aerial photography, with the following key contributions:

1. The solution technique does not require focal length or elevation information as input;
2. Simultaneous localization of multiple targets is possible;
3. The target localization error can be estimated from the error component of each observation;
4. The proposed cubature Kalman filtering approach significantly improves target localization and tracking accuracy while maintaining good robustness.
The article is organized as follows. Section 2 discusses the multi-UAV cooperative target localization method, including algorithm assumptions and a schematic depiction of the computational flow. Section 3 introduces the multi-UAV target cooperative localization algorithm. Section 4 examines the localization error and constructs a cooperative localization error model based on Section 3. Section 5 describes the cubature Kalman filter's principles and computational methods. Section 6 simulates the algorithm's correctness and highlights its benefits and drawbacks. Section 7 presents the conclusions.

Cooperative Target Localization Process for Multiple UAVs
The scenario consists of multiple UAVs performing real-time reconnaissance and localization missions, as follows: each UAV is equipped with an electro-optic payload that provides wide field-of-view, high-resolution infrared and visible image information, allowing both target localization and target-assisted localization to be performed.
Through mission assembly, the multiple UAVs in the scenario area coordinate to pinpoint the objective. Within the electro-optic payload's action range, the UAVs gather the corresponding target pixel coordinates from image data, synchronize the image data with the corresponding navigation data, determine the target's relative position using the relevant interior orientation data, and then convert it to the target's absolute position. The specific process is shown in Figure 1. A single UAV's positioning process requires preloaded elevation or range information. We design a collaborative target localization approach that works in the absence of elevation information, taking into account the benefits of passive positioning and relative height measurement accuracy. Continuous tracking and gazing at the target is impossible in a complex combat environment; to take advantage of the UAV's wide field of view for efficient reconnaissance in a limited time window, multiple target localization solutions based on multiple images must be completed. Furthermore, because absolute target position information is required, the UAVs should establish spatial relative relationships through position sharing prior to collaborative target localization, and the time-uniformity problem is solved by synchronizing and fusing the respective UAVs' information.

Model Assumptions
The following assumptions are made for the above scenario: (1) because the UAV's camera center coincides with the origin of the navigation coordinate system, any position error between them is ignored; (2) the UAV's own location information is updated without delay; (3) the data link has no latency, a large bandwidth, and anti-interference properties, ensuring that information is transferred correctly; (4) the image's optical distortion is ignored.
The input parameters for cooperative target localization of multiple UAVs fall primarily into the following categories: UAV flight status parameters, navigation data, and image data, among others. The output parameter is the location information of the target, and the specific calculation process is shown in Figure 2.



Multi-UAV Target Co-Location Modeling
The set of UAVs involved in cooperative positioning is denoted by S_n = {S_nj | j = 1, 2, . . ., N}, where N is the total number of UAVs involved in localization; the set of targets that may be scouted and located is denoted by U_T = {U_Ti | i = 1, 2, . . ., K}, where K denotes the total number of such targets.

WGS-84 Earth Ellipsoid Model
The Earth ellipsoid is a mathematically defined Earth surface that approximates the geoid and serves as the reference framework for geodesy and global positioning techniques [9]. Reference [9] also lists the principal parameters of the WGS-84 Earth ellipsoid model.
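As a concrete illustration of how the ellipsoid parameters are used, the sketch below converts geodetic coordinates to Earth-centered Cartesian (ECEF) coordinates with the published WGS-84 constants; the function name and interface are illustrative, not taken from the paper.

```python
import math

# WGS-84 ellipsoid constants (standard published values)
WGS84_A = 6378137.0                    # semi-major axis [m]
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic coordinates (deg, deg, m) to ECEF Cartesian (m)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(lat)
    return x, y, z
```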

Synchronization and Updating of Observational Position
The coordinates in the UAV coordinate system are derived from the image and from the UAV's cooperative positioning of the target; however, because the positions of the UAVs are continually changing, the positions of the multiple UAVs must be synchronized and updated.
It is assumed that each UAV can obtain its own geodetic coordinates and share its position with the others. Localization UAVs are classified into two types: leaders and followers. The mission planning technique ensures that there is always exactly one leader in the system participating in positioning while the remaining UAVs are followers. The method described in [10] is used in this paper to pick the leader aircraft.
This work develops a dynamic navigation coordinate system to ease the calculation. The dynamic navigation coordinate system (O_n − X_n Y_n Z_n) is defined as follows: the coordinate system's origin is rigidly attached to the camera center of the lead aircraft, the X_n axis points positively to the north, the Y_n axis lies in the plumb plane and points positively to the sky, and the Z_n axis follows the right-hand rule. The positions of the other UAVs in the dynamic navigation coordinate system are dynamically updated as the leader's position changes. Figure 3 depicts the position of each UAV in the dynamic navigation coordinate system at a given time.
where C_e^n denotes the transformation matrix from the geocentric Cartesian coordinate system to the navigation coordinate system.
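A minimal sketch of this position-synchronization step, assuming the WGS-84 ellipsoid and the paper's north-up-east (NUE) navigation frame: each follower's geodetic position is converted to ECEF and then rotated by C_e^n into the leader-centered frame. The function names and interface are assumptions for illustration.

```python
import math

A, F = 6378137.0, 1.0 / 298.257223563   # WGS-84 constants
E2 = F * (2.0 - F)

def geodetic_to_ecef(lat, lon, h):
    """Geodetic (deg, deg, m) to ECEF Cartesian (m)."""
    lat, lon = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1.0 - E2) + h) * math.sin(lat))

def ecef_to_nue(target_llh, leader_llh):
    """Position of `target_llh` in the leader-centered N-U-E navigation frame."""
    tx, ty, tz = geodetic_to_ecef(*target_llh)
    lx, ly, lz = geodetic_to_ecef(*leader_llh)
    dx, dy, dz = tx - lx, ty - ly, tz - lz
    lat, lon = math.radians(leader_llh[0]), math.radians(leader_llh[1])
    # Rows of C_e^n for an X-north, Y-up, Z-east right-handed frame
    north = (-math.sin(lat) * math.cos(lon), -math.sin(lat) * math.sin(lon), math.cos(lat))
    up    = ( math.cos(lat) * math.cos(lon),  math.cos(lat) * math.sin(lon), math.sin(lat))
    east  = (-math.sin(lon),                  math.cos(lon),                 0.0)
    dot = lambda r: r[0] * dx + r[1] * dy + r[2] * dz
    return dot(north), dot(up), dot(east)
```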


Image Based Localization Factor Solution Method
The target positioning solution for a single image has to specify the coordinate systems, angles, and coordinate-system conversion matrices. Six coordinate systems are involved, among them the carrier coordinate system, the servo stabilization coordinate system, the electro-optic load coordinate system, and the pixel coordinate system, together with parameters such as the aircraft attitude angle, the electro-optic load installation angle, the servo frame angle, and the look-down angle [11], in addition to the coordinate systems defined in Sections 3.1 and 3.2.
By identifying the targets and using the coaxial image plane, the electro-optic payload can determine the pixel coordinates of each target. Through image detection data and optoelectronic device characteristics, the target information in the image can be determined. Denote this information by Θ_Tij and define it as the target localization factor.
First, build the camera coordinate system as depicted in Figure 4, using the UAV's position S_nj as the coordinate origin. In the navigation coordinate system, the coordinates of S_nj are (X_ns(j), Y_ns(j), Z_ns(j)) and those of the target point A(i), which corresponds to the image point a(i), are (X_nA(i), Y_nA(i), Z_nA(i)); the inverse form of the collinearity equation then follows, where C_c(j)^n, the transformation matrix between the navigation coordinate system and the UAV camera coordinate system, supplies the coefficients.

Let the electro-optic payload's longitudinal and lateral resolution be PxV_max(j) × PxU_max(j), the half field-of-view angles in the longitudinal and lateral directions α_(j)1/2 × β_(j)1/2, the physical dimensions of the image elements in the longitudinal and lateral directions d_v(j) × d_u(j), the focal length f_(j) (from the definition in Figure 4, f_(j) = X_sa(i)), and the principal point of the image in the pixel coordinate system (u_0(j), v_0(j)). Given the target's pixel coordinates, the transformation between the two-dimensional image coordinate system and the camera coordinate system can be written down, and the relations in Equations (4) and (5) follow from Figure 4 and its definitions.
The desired localization factor is the right-hand side of Equation (4), and the organized expression is Equation (7), where ω_nj = [C_11(j) C_12(j) C_13(j)] and ρ_nj = [C_31(j) C_32(j) C_33(j)]. Converting Equation (3) then yields Θ_Tij, which represents the internal orientation components and target pixel coordinates used in UAV S_nj's reconnaissance of target U_Ti. Clearly, for a given type of electro-optic payload, the internal orientation elements α_(j)1/2, β_(j)1/2, PxV_max(j), PxU_max(j), u_0(j), and v_0(j) are constants, so the localization factor varies only with the target pixel coordinates and is unaffected by the physical pixel size and focal length.
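Since the exact form of Equation (7) is not reproduced here, the following is a hypothetical reconstruction of the localization factor consistent with the text: the factor depends only on the pixel offset from the principal point, the half field-of-view angles, and the sensor resolution, with focal length and pixel pitch cancelling out. The function name, argument layout, and precise scaling are assumptions.

```python
import math

def localization_factor(u, v, u0, v0, px_u_max, px_v_max,
                        alpha_half_deg, beta_half_deg):
    """Hypothetical target localization factor Theta from pixel data alone.

    The first component is fixed at 1 (along the optical axis); the other
    two are the tangents of the pixel's angular offsets from the principal
    point, scaled by the half field of view.  Focal length and physical
    pixel size do not appear, as the paper notes.
    """
    ta = math.tan(math.radians(alpha_half_deg))   # longitudinal half-FOV
    tb = math.tan(math.radians(beta_half_deg))    # lateral half-FOV
    theta_v = (v0 - v) / (px_v_max / 2.0) * ta    # longitudinal component
    theta_u = (u0 - u) / (px_u_max / 2.0) * tb    # lateral component
    return (1.0, theta_v, theta_u)
```

A target imaged at the principal point gives the nominal factor (1, 0, 0), matching the approximation Θ_Tij ≈ [1 0 0]^T used later for a target near the image center.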

Image-Based AOA Vector Solution Process
The AOA vector, denoted by the superscript T expression, can be solved once the current position and attitude of the UAV and the target localization factor are known. Applying Equation (8) yields Equation (9); from Equation (9) we obtain Equations (10)-(12), and σ_target(i) is defined by Equation (13). As is evident from Equations (9)-(13), the target localization factor Θ_Tij, the transformation matrix C_c(j)^n, and the target coefficient σ_target(i) have the largest effects on the AOA vector. This paper performs ground reconnaissance operations with σ_target(i) set to −1. The AOA vector for the N UAVs is then assembled by stacking the per-UAV vectors.
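Assuming the reconstructed direction vector lives in the paper's N-U-E navigation frame (X north, Y up, Z east), azimuth and elevation AOA angles can be recovered as sketched below; the sign and axis conventions are assumptions, not taken from the paper.

```python
import math

def aoa_from_direction(d):
    """Azimuth / elevation AOA (degrees) from a navigation-frame direction.

    Assumes an X-north, Y-up, Z-east frame: azimuth is measured from north
    toward east, elevation from the horizontal plane (negative when the
    UAV looks down at a ground target).
    """
    x, y, z = d
    az = math.degrees(math.atan2(z, x))
    el = math.degrees(math.atan2(y, math.hypot(x, z)))
    return az, el
```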

Co-Location Solution Model
The targets' location coordinates are U_Ti = [x_Ti, y_Ti, z_Ti]^T (i = 1, 2, . . ., K). Figure 3 depicts the relationship between the UAV S_nj and the target U_Ti, with R_ij denoting their separation.
According to Figure 3 and Equations (8) and (12), the following equation can be obtained. The procedure suggested in [12] allows us to approximately eliminate R_ij, where Φ_ij is a symmetric matrix with rank(Φ_ij) = 2; a single instance of Equation (17) is therefore rank-deficient, and additional equations from other observations are required to solve for the target position.
For the set of UAVs involved in localization, S_n, the stacked system in Equation (19) exists. The target location coordinates can be determined by using Equations (1), (7), (12), and (19). Converting from the Earth's Cartesian coordinate system to the geodetic coordinate system and iterating until the required accuracy is reached yields the final geodetic coordinates: longitude, latitude, and altitude (λ_Ti, φ_Ti, h_Ti). The result U_Ti is the coordinate of each target in the navigation system, calculated with the coordinates of the UAV S_nj in the navigation coordinate system as the reference point. After converting the measure to degrees, the targets' Earth coordinates are finally obtained.
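The joint solution can be sketched as a standard least-squares ray intersection, which eliminates the unknown ranges R_ij in the same spirit as Equation (17); this is a generic formulation under that assumption, not the paper's exact matrix Φ_ij.

```python
import numpy as np

def triangulate_aoa(stations, directions):
    """Least-squares target position from several AOA sight lines.

    Each UAV position s_j with unit line-of-sight vector d_j defines a ray;
    the estimate minimizes the summed squared distance to all rays, which
    removes the unknown ranges.  Solves the 3x3 normal equations
    sum_j (I - d_j d_j^T) p = sum_j (I - d_j d_j^T) s_j.
    """
    a = np.zeros((3, 3))
    b = np.zeros(3)
    for s, d in zip(stations, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        proj = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        a += proj
        b += proj @ np.asarray(s, float)
    return np.linalg.solve(a, b)
```

Two non-parallel rays already make the normal matrix invertible, which is why at least two UAVs are needed once R_ij is eliminated.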

AOA Error Model Based on Image
The look-down angle of the electro-optic payload is θ_xsj, with error δθ_xsj(k); the installation angles of the payload are [φ_pj(k), ϑ_pj(k), γ_pj(k)], with errors [δφ_pj(k), δϑ_pj(k), δγ_pj(k)]; the yaw, pitch, and roll angles at the camera moment are [φ_bj(k), ϑ_bj(k), γ_bj(k)], with measurement errors [δφ_bj(k), δϑ_bj(k), δγ_bj(k)]; the altitude and azimuth angles of the frame are [θ_cj(k), ψ_cj(k)], with errors [δθ_cj(k), δψ_cj(k)]; and the pixel coordinates of the target observed by UAV S_nj at time k are likewise split into true values and measurement errors. It should be noted that since the navigation device and the electro-optic payload are rigidly coupled to reduce error, the installation angle is 0.
Then the observation vector is: The observation measurement error is: At time k, the measured value of the AOA vector V_SN is V̂_k and the true value is V_k; then we get: where J_k is the Jacobian matrix, σ_Jk the residual vector, and each value in σ_Jk is the residual of a single observation from its standardized value.
The calculation of the matrix J k is performed below.
To facilitate the calculation, set the auxiliary variable M_aux = [X_j, Y_j, Z_j]^T; Θ_Tij is normalized to obtain the unit vector τ_c(i), and τ_c(i) is transformed by C_c(j)^n. Equation (24) is then transformed to obtain a new expression for the AOA vector.
A linearized transformation of Equation (23) yields J_k, whose block entries are defined as follows: where:

Collaborative Positioning Error Model Based on PDOP
Position Dilution of Precision (PDOP) is a measure of positional accuracy; in a satellite positioning system, the magnitude of the PDOP value indicates how well-distributed the satellite terminals are [13]. During the cooperative positioning of several UAVs, the DOP value of each measurement site in the reconnaissance region is affected, with lower DOP values generally yielding higher positioning accuracy [14]. To analyze the error range of cooperative positioning and to suggest optimization ideas, PDOP simulation and analysis of the cooperative positioning of multiple UAVs are employed in this study. The derivation is given below.
Assuming that the target's true position is [x_Ti, y_Ti, z_Ti]^T and the solved position is [x̂_Ti, ŷ_Ti, ẑ_Ti]^T (i = 1, 2, . . ., K), we obtain: where H_k is the Jacobian matrix and σ_Hk is a vector of residuals between the observed values and the standard values; each entry represents the residual of a single observation compared to its standard value.
According to Equation (29), H_k is obtained as follows, and the covariance matrix of the error δX_k then follows. The PDOP solution procedure for cooperative localization of multiple UAV targets is provided by Equations (30)-(33).
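The PDOP computation of Equations (30)-(33) can be sketched as follows: build the stacked Jacobian H of the azimuth/elevation observations with respect to the target position, invert H^T H, and take the square root of the trace of the resulting covariance. The partial derivatives below assume the N-U-E frame and a generic angle-only model, not the paper's exact H_k.

```python
import numpy as np

def aoa_jacobian_rows(station, target):
    """Jacobian rows (azimuth, elevation) of the AOA observation with
    respect to the target position, in an x-north, y-up, z-east frame."""
    dx, dy, dz = np.asarray(target, float) - np.asarray(station, float)
    rho2 = dx * dx + dz * dz           # squared horizontal range
    rho = np.sqrt(rho2)
    r2 = rho2 + dy * dy                # squared slant range
    d_az = np.array([-dz / rho2, 0.0, dx / rho2])
    d_el = np.array([-dx * dy / (r2 * rho), rho / r2, -dz * dy / (r2 * rho)])
    return d_az, d_el

def pdop(stations, target, sigma_rad=1.0):
    """PDOP-style figure: sqrt of the trace of the position covariance for
    a common angular error sigma_rad, in the spirit of Eqs. (30)-(33)."""
    rows = []
    for s in stations:
        rows.extend(aoa_jacobian_rows(s, target))
    h = np.vstack(rows)
    cov = sigma_rad ** 2 * np.linalg.inv(h.T @ h)
    return float(np.sqrt(np.trace(cov)))
```

In this sketch, shrinking the baseline between UAVs at a fixed altitude makes the sight lines nearly parallel and inflates the PDOP, consistent with the paper's observation that smaller spacing yields larger localization error.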

Error Analysis
The error analysis has two parts. One is the external orientation elements, i.e., the UAV's flight status at the moment of image capture, as illustrated in Table 1. The other is the internal orientation elements, represented by the target pixel coordinates at the moment of image capture in Tables 2 and 3, which also display the distribution of the measurement errors for the external orientation elements.
The simulation is run under the conditions indicated in Table 1 to verify that the error model is accurate; the additional parameters are shown in Tables 2 and 3.
The localization factor can be calculated as Θ_Tij ≈ [1 0 0]^T from Equation (7), assuming that the target is close to the center of the image. Under the circumstances outlined in [11], the error when the target is at the edge of the image is greater than when it is at the center, so the four points A-D in Figure 5 were chosen for a Monte Carlo simulation. The look-down angle was set to −90°, and the relative flight altitude was 3000 m. According to the experiments, under ideal flight conditions, the errors of the target's image-based AOA azimuth and elevation angles should not exceed 2.73° and 0.81°, respectively.
Based on the PDOP calculation method proposed in Sections 4.1 and 4.2, the multi-aircraft cooperative localization error analysis is carried out. First, two UAVs are set up for target localization; their position coordinates are shown in Table 5, the position error is 0 m, and the flight altitude is 3000 m. From the parameters in Table 2, the area covered by a captured image is calculated to be about 1400 m × 1200 m when flying at a relative altitude of 3000 m. Within this area, the PDOP value at any position can be calculated and plotted as a contour distribution map, as shown in Figure 10. The results of cooperative localization by two UAVs show that when the distance between the UAVs is 200 m and 100 m, the minimum of the PDOP distribution is 50 m and 100 m, respectively; i.e., the smaller the spacing, the larger the localization error. In addition, target localization accuracy is also related to the distribution of the UAVs; therefore, four UAVs are set up for target localization and their errors analyzed. Assuming the four UAVs fly at a relative height of 3000 m, their position coordinates in the 2D plane are shown in Table 6, and the calculation results in Figures 11 and 12. In summary, using the collaborative positioning model of multiple UAVs, a PDOP-based error computation model was constructed and validated with Monte Carlo simulation. The simulation findings show that the baseline has a significant impact on the collaborative positioning error of two UAVs, with a minimum positioning error of roughly 50 m. Under the same conditions, the collaborative positioning error of four UAVs is less affected by the formation mode, with a minimum positioning error of roughly 20 m. Filtering methods can further improve on the accuracy predicted by the error model.

Target Localization Method Based on Cubature Kalman Filter
Nonlinear filtering is used because the observation functions of both the time-difference and the measured AOA observations are nonlinear [15,16]. Because of its linearization procedure, the Extended Kalman Filter (EKF) achieves good filtering performance only when the linearization error of the system's state and observation equations is small [17,18]. The more recently developed particle filter (PF) algorithm is well suited to the nonlinear estimation problem [19,20]. The improvements in localization error achieved by the EKF and PF algorithms were therefore compared.
Arasaratnam and Haykin proposed the Cubature Kalman Filter (CKF) algorithm [21] to solve the integration problem of nonlinear functions in filtering algorithms. Like the Unscented Kalman Filter (UKF), it first calculates the sampling points (called cubature, or volume, points), then computes a one-step prediction of those points through the state equation, and finally corrects the predicted state via the measurement update and the Kalman gain calculation. In comparison to the UKF, the CKF obtains the volume points from the spherical-radial cubature criterion without linearizing the state equation, transfers the points directly through the nonlinear state equation, and guarantees that the weights are always positive. This improves the algorithm's robustness and accuracy [22-24].
According to the third-order spherical-radial criterion, the number of volume points for an n-dimensional state vector is m = 2n, and the set of volume points is ξ_j = √n [1]_j (j = 1, 2, . . ., m), where [1]_j denotes the j-th volume point, i.e., the j-th column of [1], and [1] is the set of generator columns {e_1, . . ., e_n, −e_1, . . ., −e_n}. The weights of the volume points are equal: ω_j = 1/m. For the target state equation and the measurement equation given below, the Cubature Kalman Filtering algorithm proceeds as follows: Step 1: Calculation of volume points.
where a Cholesky decomposition of P k−1,k−1 gives S k−1 .
Step 2: One-step prediction of volume points.
Step 3: Compute one-step prediction and covariance matrix of state quantities.
Step 4: Calculation of new volume points based on one-step predicted values.
Step 5: Observation prediction for new volume points.
Step 6: Calculate the mean and covariance of the target observations weighted by the observation predictions of the volume points.
Step 7: Calculate the cross-covariance matrix between the state and the observation, and the Kalman gain.
Step 8: Calculate system state update and covariance update.
The flow of the Cubature Kalman Filtering algorithm is shown in Figure 13.
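Steps 1-8 above can be condensed into a generic, model-agnostic sketch of one CKF cycle. The interface (state mean/covariance in, updated pair out) is an assumption for illustration; the motion and measurement functions are supplied by the caller rather than taken from the paper's specific models.

```python
import numpy as np

def ckf_step(x, p, f, h, q, r, z):
    """One predict/update cycle of the Cubature Kalman Filter (Steps 1-8).

    x, p : prior state mean and covariance
    f, h : (possibly nonlinear) state-transition and measurement functions
    q, r : process and measurement noise covariances
    z    : current measurement vector
    """
    n = x.size
    m = 2 * n
    # Step 1: cubature points from the Cholesky factor of p
    s = np.linalg.cholesky(p)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # the set [1] scaled
    pts = x[:, None] + s @ xi
    # Steps 2-3: propagate points, form predicted mean and covariance
    prop = np.stack([f(pts[:, j]) for j in range(m)], axis=1)
    x_pred = prop.mean(axis=1)
    p_pred = (prop - x_pred[:, None]) @ (prop - x_pred[:, None]).T / m + q
    # Steps 4-5: redraw cubature points and push them through h
    s = np.linalg.cholesky(p_pred)
    pts = x_pred[:, None] + s @ xi
    zs = np.stack([h(pts[:, j]) for j in range(m)], axis=1)
    z_pred = zs.mean(axis=1)
    # Step 6: innovation (observation) covariance
    p_zz = (zs - z_pred[:, None]) @ (zs - z_pred[:, None]).T / m + r
    # Step 7: cross covariance and Kalman gain
    p_xz = (pts - x_pred[:, None]) @ (zs - z_pred[:, None]).T / m
    k = p_xz @ np.linalg.inv(p_zz)
    # Step 8: state and covariance update
    x_new = x_pred + k @ (z - z_pred)
    p_new = p_pred - k @ p_zz @ k.T
    return x_new, p_new
```

On a linear system the cubature points reproduce the standard Kalman filter exactly, which makes a constant-velocity track a convenient sanity check.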

Co-Localization Algorithm Verification
An external field test was performed to validate the co-localization algorithm's correctness.For the same ground identifier, twelve groups of UAV aerial photographs were selected under varied working conditions, with two images in each group, and the target location was solved separately, yielding a total of six groups of target coordinate values.An example of a group of aerial images is shown in Figure 14.

Co-Localization Algorithm Verification
An external field test was performed to validate the co-localization algorithm's correctness.For the same ground identifier, twelve groups of UAV aerial photographs were selected under varied working conditions, with two images in each group, and the target location was solved separately, yielding a total of six groups of target coordinate values.An example of a group of aerial images is shown in Figure 14.

Co-Localization Algorithm Verification
An external field test was performed to validate the correctness of the co-localization algorithm. For the same ground identifier, twelve UAV aerial photographs were selected under varied working conditions and paired into six groups of two images each; the target location was solved separately for each group, yielding six sets of target coordinate values. An example of one group of aerial images is shown in Figure 14. Figure 15 shows the target localization results (the states of the UAVs during collaborative target localization are given in Table 7, and the calculated target positions in these states are given in Table 8). The blue solid dots in the figure represent the true position of the ground identifier (target), while the red solid dots represent the results of two-UAV collaborative target localization; there are six sets of data in total. The results show that the algorithm accurately calculates the position of the target in the two-dimensional plane.
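The core of a two-UAV AOA solution is the intersection of two lines of sight. The sketch below is a minimal least-squares version of that idea in a planar east-north frame; the function name, the azimuth convention (clockwise from north), and the planar simplification are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def triangulate_aoa(positions, azimuths):
    """Least-squares intersection of azimuth lines of sight.

    positions : (N, 2) UAV positions (x east, y north), meters
    azimuths  : (N,) azimuth angles in radians, clockwise from north
    Returns the 2-D target position estimate.
    """
    A, b = [], []
    for (x, y), az in zip(positions, azimuths):
        # Line of sight direction is (sin az, cos az); its normal
        # (cos az, -sin az) yields one linear constraint per UAV.
        n = np.array([np.cos(az), -np.sin(az)])
        A.append(n)
        b.append(n @ np.array([x, y]))
    est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return est

# Two UAVs observing a target at (500, 300)
uavs = np.array([[0.0, 0.0], [1000.0, 0.0]])
target = np.array([500.0, 300.0])
az = [np.arctan2(target[0] - x, target[1] - y) for x, y in uavs]
print(triangulate_aoa(uavs, az))  # recovers [500. 300.]
```

With more than two UAVs the same least-squares system over-determines the target position, which is how the swarm case generalizes.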

Multi-UAV Co-Location and Tracking
Simulations for fixed and moving targets are discussed in this section to validate the effectiveness of the Unscented Kalman Filter (UKF). During target discovery and localization, the four UAV observation stations move along specific trajectories and make numerous observations of the target area. The observations from the first measurement are combined with the initial positions of the UAV stations to obtain an initial estimate of the target position and the corresponding covariance matrix of the zero-mean estimation error, which serve as the initial values of the filtering algorithm; the filter then processes the subsequent observations to obtain a progressively more accurate estimate of the target.
Figure 3 shows the NUE (North-Up-East) coordinate system O_n-X_nY_nZ_n, and Table 9 lists the initial state of the ground target, the number of UAVs, their initial states, and the measurement errors. Ref. [25] provides the basic motion models of the target, and the efficiency of the UKF-based target localization approach is validated in this study using the target's constant-velocity linear motion and constant-rate turning motion.
The discrete constant-velocity (CV) linear and constant-turn (CT) motion models of the target are shown in the following equations:

$$x_{k+1}^{cv} = \Phi_k^{cv} x_k^{cv} + W_k^{cv}, \qquad x_{k+1}^{ct} = \Phi_k^{ct} x_k^{ct} + W_k^{ct},$$

where $x_k^{cv}$ and $x_k^{ct} = [x_k \;\; \dot{x}_k \;\; y_k \;\; \dot{y}_k]^T$ are the state vectors of the target's constant linear and turning motion models, $\Phi_k^{cv}$ and $\Phi_k^{ct}$ the state transition matrices, and $W_k^{cv}$, $W_k^{ct}$ the system noise. To model the positioning accuracy and location error of the UAVs, a random normal error with a mean of 0 and a standard deviation of 10 m is added to the UAV positions along their paths; Figure 16 depicts the error between the measured and true positions of the UAVs. Four UAVs approach the target from various directions. Figure 17 depicts the positioning result for a stationary target: the position deviation quickly converges from tens of meters to less than ten meters.
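The two transition matrices can be written out explicitly. The sketch below uses the common textbook forms for the state $[x, \dot{x}, y, \dot{y}]^T$ with sample interval T and turn rate ω; the paper's exact matrices are not reproduced in this extract, so these standard forms are an assumption.

```python
import numpy as np

def phi_cv(T):
    """Constant-velocity transition matrix for state [x, vx, y, vy]."""
    return np.array([[1, T, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 1, T],
                     [0, 0, 0, 1]], dtype=float)

def phi_ct(T, omega):
    """Coordinated-turn transition matrix with known turn rate omega (rad/s)."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1, s / omega,       0, -(1 - c) / omega],
                     [0, c,               0, -s],
                     [0, (1 - c) / omega, 1, s / omega],
                     [0, s,               0, c]], dtype=float)

# One full revolution in 80 s, matching the simulation setup in the text
omega = 2 * np.pi / 80
x = np.array([0.0, 10.0, 0.0, 0.0])   # moving east at 10 m/s
for _ in range(80):                    # propagate 80 one-second steps
    x = phi_ct(1.0, omega) @ x
print(np.round(x, 6))                  # back near the starting state after one turn
```

Because the CT matrix is the exact discretization of constant-rate turning, composing 80 one-second steps closes the circle up to floating-point error.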
Assume that the target first moves in a straight line and then turns, with one revolution of the turning motion taking 80 s. Four UAVs are programmed to proceed toward the target's initial position to accomplish continuous localization and tracking of the target. Figure 18 depicts the motion trajectories of the target and the UAVs and the tracking result at a given time. To account for the effect of random perturbations on the positioning results, 100 Monte Carlo simulations were run, with the RMSE (Root Mean Square Error) employed as the accuracy measure.
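The RMSE accuracy measure over repeated Monte Carlo runs can be computed as follows. The 10 m zero-mean noise level follows the text; the estimator here (true position plus noise) is a hypothetical stand-in for the full localization pipeline, used only to show the metric.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.array([500.0, 300.0])
runs = 100                                  # 100 Monte Carlo runs, as in the text

# Hypothetical estimator: true position corrupted by zero-mean 10 m noise
estimates = true_pos + rng.normal(0.0, 10.0, size=(runs, 2))

# RMSE over the Monte Carlo runs (root of the mean squared position error)
rmse = np.sqrt(np.mean(np.sum((estimates - true_pos) ** 2, axis=1)))
print(f"position RMSE: {rmse:.1f} m")
```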
Figure 19 shows the estimates of the position components when tracking a moving target; the algorithm proposed in this paper converges quickly and tracks the target stably. Figure 20 shows the position and velocity RMS errors when tracking a moving target: tracking accuracy gradually improves over time, with the position error converging to within 12 m and the velocity error to within 0.5 m/s, which shows that the algorithm maintains high accuracy under external interference.
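As a rough illustration of the filtering loop described above, the following is a compact unscented Kalman filter predict/update cycle for azimuth-only (AOA) measurements from several UAVs. The sigma-point parameters, noise levels, and UAV geometry are illustrative choices rather than the paper's; angle wrap-around handling is omitted for brevity, so the example geometry keeps all bearings away from ±π.

```python
import numpy as np

def sigma_points(x, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and weights."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # columns are sigma directions
    pts = np.vstack([x, x + S.T, x - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return pts, wm, wc

def ukf_step(x, P, z, F, Q, R, uav_positions):
    """One predict/update cycle with azimuth measurements from several UAVs."""
    # Predict through the (linear) motion model
    pts, wm, wc = sigma_points(x, P)
    pts = pts @ F.T
    x_pred = wm @ pts
    P_pred = Q + sum(w * np.outer(p - x_pred, p - x_pred) for w, p in zip(wc, pts))

    # Measurement: azimuth from each UAV to the target position (state[0], state[2])
    def h(s):
        return np.array([np.arctan2(s[0] - ux, s[2] - uy) for ux, uy in uav_positions])

    pts, wm, wc = sigma_points(x_pred, P_pred)
    Z = np.array([h(p) for p in pts])
    z_pred = wm @ Z
    Pzz = R + sum(w * np.outer(zz - z_pred, zz - z_pred) for w, zz in zip(wc, Z))
    Pxz = sum(w * np.outer(p - x_pred, zz - z_pred) for w, p, zz in zip(wc, pts, Z))
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

# Four UAVs south of a stationary target, 0.5 deg azimuth noise
rng = np.random.default_rng(1)
uavs = [(-1500.0, -1000.0), (-500.0, -1200.0), (500.0, -1200.0), (1500.0, -1000.0)]
target_xy = np.array([200.0, -150.0])
F = np.eye(4); F[0, 1] = F[2, 3] = 1.0          # CV model, T = 1 s
Q = np.diag([1.0, 0.1, 1.0, 0.1])
R = np.deg2rad(0.5) ** 2 * np.eye(4)
x = np.zeros(4)                                  # poor initial guess at the origin
P = np.diag([300.0**2, 25.0, 300.0**2, 25.0])
for _ in range(30):
    z = np.array([np.arctan2(target_xy[0] - ux, target_xy[1] - uy) for ux, uy in uavs])
    z += rng.normal(0.0, np.deg2rad(0.5), size=4)
    x, P = ukf_step(x, P, z, F, Q, R, uavs)
print(np.round(x[[0, 2]], 1))                    # estimate converges near (200, -150)
```

The state is $[x, \dot{x}, y, \dot{y}]^T$; repeated updates pull the estimate from the deliberately bad initial guess to within a few meters of the true position, mirroring the convergence behavior reported for Figure 17.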

Conclusions
The target co-localization approach in this paper does not rely on elevation or ranging information. It can calculate the positions of many targets at once, considerably improving UAV detection capability. The approach is almost hardware-independent and is thus appropriate for low-cost small-UAV cluster systems. Using the error model, this study shows a minimum target positioning error of 20 m at a relative flight altitude of 3000 m under typical flight conditions. The Unscented Kalman Filter algorithm is used to simulate and verify localization of stationary and moving targets, respectively; target localization accuracy is improved by 40% compared with the unfiltered result, and the target can be tracked continuously with high accuracy in the presence of interference.

Figure 1 .
Figure 1.Schematic diagram of the process of Multi-UAV performing target positioning tasks.

Figure 2 .
Figure 2. Flow chart of target co-location calculation.

Figure 3 .
Figure 3. Schematic diagram of position synchronization and update based on the dynamic navigation coordinate system. The longitude, latitude, and altitude of each UAV in the WGS-84 Earth ellipsoidal geodetic coordinate system are $(\lambda_j, \varphi_j, h_j)$, $(j = 1, 2, \ldots, N)$, and the positions of the UAVs in the dynamic navigation coordinate system are given by Equation (1):
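Equation (1) itself is not reproduced in this extract, but the conversion it encodes, from WGS-84 geodetic coordinates to a local frame centered on the leader, can be sketched as below. The WGS-84 ellipsoid constants are standard; the North-Up-East axis ordering follows the NUE frame named in the text, and the example coordinates are made up.

```python
import numpy as np

A = 6378137.0              # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3      # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """lat, lon in radians, h in meters -> ECEF (x, y, z) in meters."""
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1 - E2) + h) * np.sin(lat)])

def to_leader_frame(leader_llh, uav_llh):
    """Position of a UAV in a North-Up-East frame centered on the leader."""
    lat0, lon0, _ = leader_llh
    d = geodetic_to_ecef(*uav_llh) - geodetic_to_ecef(*leader_llh)
    # Rows: north, up, east unit vectors at the leader's location
    R = np.array([
        [-np.sin(lat0) * np.cos(lon0), -np.sin(lat0) * np.sin(lon0), np.cos(lat0)],
        [ np.cos(lat0) * np.cos(lon0),  np.cos(lat0) * np.sin(lon0), np.sin(lat0)],
        [-np.sin(lon0),                 np.cos(lon0),                0.0],
    ])
    return R @ d

leader = (np.deg2rad(34.0), np.deg2rad(108.0), 3000.0)
wingman = (np.deg2rad(34.01), np.deg2rad(108.0), 3000.0)   # ~1.1 km to the north
print(np.round(to_leader_frame(leader, wingman)))
```

Because the frame moves with the leader, the conversion is re-evaluated whenever the leader's position updates, which is how the swarm positions stay synchronized.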

The localization factors of UAV $S_j^n$'s reconnaissance of target $T_i$ are represented by $\tau_{ij}$ in Equation (7). It is clear that, for a specific type of electro-optic load, the internal orientation elements are fixed, and the localization factor varies only with the target's pixel coordinates; it is unaffected by the actual size and focal length of the image element.
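Since Equation (7) is not reproduced in this extract, the exact definition of $\tau_{ij}$ cannot be restated here; the sketch below is one possible reading of a focal-length-free factor, expressing the line-of-sight angle tangents through the fixed field of view of the electro-optic load, so that only the pixel coordinates appear. All parameter values are hypothetical.

```python
import numpy as np

def localization_factor(u, v, width, height, hfov_deg, vfov_deg):
    """Tangent of the line-of-sight angles to pixel (u, v).

    Written in terms of the (fixed) field of view, so neither the focal
    length nor the physical pixel size appears explicitly.
    """
    tx = (u - width / 2) / (width / 2) * np.tan(np.deg2rad(hfov_deg) / 2)
    ty = (v - height / 2) / (height / 2) * np.tan(np.deg2rad(vfov_deg) / 2)
    return tx, ty

# A target right of center in a 1920x1080 image with a 30 x 17 degree FOV
tx, ty = localization_factor(1440, 540, 1920, 1080, 30.0, 17.0)
print(round(tx, 4), round(ty, 4))  # 0.134 0.0
```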

Figure 4 .
Figure 4. Schematic diagram of single-image based object detection.

Figure 5 .
Figure 5. Points selected for Monte Carlo simulation to obtain the maximum error. A, B, C, and D correspond to the maximum-error positions and E to the minimum-error position. Figures 6-9 display the distribution of AOA errors for the target sites when the parameter values in Table 1 are used. Table 4 displays the computation results and observational parameters.

Figure 9 .
Figure 9. Distribution of AOA errors of point D. (a) shows the distribution of the altitude AOA error, with mean −72.726 and standard deviation 0.808; (b) shows the distribution of the azimuth AOA error, with mean −143.310 and standard deviation 2.727.

Figure 11 .
Figure 11. PDOP and Monte Carlo simulation position error contour map of square formation flying.

Figure 14 .
Figure 14. The ground images captured by the two drones at the same time, with the number "28" in the red box as the ground identifier; their true coordinates are known.

Figure 15 .Table 7 .
Figure 15. The distribution of the actual target position and the localization results in a two-dimensional plane. Table 7. States of the 12 UAVs used for localization.

Figure 16 .
Figure 16.Diagrammatic representation of the noise interference on the drone's position, superimposed on the motion trajectory, with a mean of 0 and a standard deviation of 10 m.

Figure 17 .
Figure 17.The position result for stationary target.

Figure 18 .
Figure 18. The target's and UAVs' respective motion trajectories and tracking at a certain time.

Figure 20 .
Figure 20. RMSE of target position and velocity prediction.

Table 4 .
AOA of measurements at each point under condition 1.

Table 5 .
Coordinate distribution of UAVs.

Table 6 .
Coordinate distribution of UAVs.
The PDOP contour lines are distributed as follows.
