Article

Target Tracking-Based Online Calibration of UAV Electro-Optical Pod Installation Errors

Aerospace Technology Institute of CARDC, Mianyang 621000, China
* Author to whom correspondence should be addressed.
Automation 2026, 7(2), 59; https://doi.org/10.3390/automation7020059
Submission received: 4 February 2026 / Revised: 24 March 2026 / Accepted: 30 March 2026 / Published: 1 April 2026

Abstract

As the “visual perception hub” of unmanned aerial vehicles (UAVs), electro-optical (EO) pods play an increasingly critical role in tasks such as intelligence gathering, situational awareness, target tracking, and localization. With the expanding scope and depth of UAV applications, higher demands are placed on the precision and adaptability of installation error calibration techniques for EO pods. Current mainstream calibration methods typically require specialized procedures under constrained conditions, while few approaches integrate existing UAV system capabilities and mission requirements, which leads to cumbersome, time-consuming processes and suboptimal alignment between calibration outcomes and task objectives. This paper proposes an online calibration method for UAV EO pod installation errors based on target tracking, which can rapidly compute the optimal closed-form solution for installation errors by leveraging UAV tracking missions. First, an observation equation for pod installation errors is established using tracking results. Second, multi-temporal observations are combined to model the calibration problem as an optimal rotation matrix estimation task, and then the optimal closed-form solution for installation errors is derived. Concurrently, a statistics-based approximate calibration method is introduced specifically for tracking missions. Furthermore, an online calibration system compatible with diverse UAV platforms is designed, along with different rapid calibration schemes for emergency response scenarios, fully incorporating existing system capabilities and mission needs. Finally, a fixed-wing UAV experimental platform is developed, with calibration tests conducted under various flight regimes. Experimental results validate the feasibility and robustness of the proposed methodology.

1. Introduction

Driven by rapid advancements in UAV and AI technologies, unmanned aerial systems have expanded beyond traditional military applications into civilian domains such as emergency rescue, agricultural protection, and power line inspection, gradually becoming key nodes in the current low-altitude economy. As core mission payloads for UAVs, the rapid and high-precision calibration of installation errors of electro-optical pods is a fundamental prerequisite for completing various tasks, and the calibration accuracy directly determines mission effectiveness. Ideally, the coordinate system of the pod base should completely align with the coordinate system of the UAV. However, factors including onboard space constraints, manufacturing tolerances, assembly imperfections, and vibration damper deformation introduce non-negligible installation errors. If not calibrated and compensated, these minor discrepancies propagate into substantial inaccuracies in long-range UAV operations. Particularly in target localization, installation errors critically degrade positioning precision. Geometric analysis reveals that merely 1° of yaw-axis misalignment induces over 5 m of target positioning error at a 300 m slant range. Such deviations become especially prominent in long-distance, high-accuracy missions like precision strikes, disaster search-and-rescue, and precise delivery, directly constraining UAV efficacy and mission reliability. Consequently, developing rapid, high-accuracy, online-capable calibration techniques is not only a technical requirement, but also an engineering necessity to avoid “minute initial errors causing massive downstream consequences”, which holds dual strategic value for both national security and civil industry upgrading.
Conventional ground-based calibration methods rely on specialized instruments such as high-precision turntables and laser theodolites, achieving error compensation through iterative mechanical adjustments and optical measurements. Although these approaches achieve high accuracy in laboratory settings, they still suffer from critical limitations. Firstly, cumbersome and time-consuming procedures require strict environmental stability, resulting in poor adaptability. Secondly, the mismatches between ground static/quasi-static calibration conditions and high-dynamic flight scenarios yield suboptimal calibration outcomes for practical UAV tasks. More importantly, when the UAV is transferred or the pod is disassembled for maintenance, it must be recalibrated, which significantly reduces operational availability rates while increasing maintenance costs. This fails to meet the rapid deployment requirements and becomes a critical bottleneck that restricts the emergency response capability of UAVs.
Recent years have witnessed expanding UAV deployment in civilian sectors, including logistics delivery, agricultural protection, and power line inspection, where UAV mission scenarios exhibit highly diversified and dynamic characteristics. This evolution imposes two new demands on electro-optical pod calibration technologies: extreme precision for accuracy-critical operation scenarios, and real-time capability enabling rapid, dynamic, online calibration to support emergency response scenarios, ideally achieving seamless in-flight calibration (“calibrate upon takeoff, compensate immediately post-calibration”).
The limitations of existing calibration methods and the demand for rapid and high-precision calibration motivate this research. Conventional ground-based techniques, while accurate in laboratories, are time-consuming, environment-sensitive, and require recalibration after pod maintenance—rendering them unsuitable for rapid UAV deployment. Prior aerial calibration methods [1,2] either require explicit target localization, involve iterative approximations, or fail to provide optimal closed-form solutions. Moreover, few approaches integrate calibration with routine UAV tasks like target tracking. These gaps highlight the need for an online, tracking-driven calibration framework that delivers analytically optimal solutions while meeting the dynamic demands of modern UAV missions.
To address these challenges, this study proposes a target tracking-based online calibration method for UAV electro-optical pod installation errors, with four primary contributions:
(a)
Fusion of multi-temporal observations to model the installation errors calibration as an optimal rotation matrix estimation problem, and an analytically optimal closed-form solution is derived, which enhances the calibration precision;
(b)
A mission-integrated calibration system compatible with diverse UAV platforms is designed, leveraging existing detection and tracking capabilities of current UAV systems to achieve in-task calibration, which ensures optimal error correction aligned with practical UAV objectives;
(c)
Different rapid-calibration schemes for emergency response scenarios are proposed by embedding the calibration within UAV takeoff or mission execution phases, which eliminate the dedicated calibration procedures and realize “calibrate upon takeoff, compensate immediately post-calibration”;
(d)
We developed a fixed-wing UAV experimental system and conducted calibration comparison experiments across various flight regimes, the results of which can provide actionable guidance for UAV path planning during the calibration process, especially for precise operation scenarios.
The remainder of this paper is structured as follows: Section 2 introduces the relevant research. Section 3 formulates the installation error model for UAV electro-optical pods and develops the novel solution methodology. Section 4 details the design of the online calibration system alongside different rapid calibration schemes for emergency response scenarios. Section 5 implements a fixed-wing UAV system and validates method precision and robustness through extensive flight tests. Section 6 concludes the work and research contributions.

2. Related Works

Recent advancements in in-flight calibration techniques for electro-optical pod installation errors have liberated the process from the environmental constraints of traditional ground-based calibration methods. By leveraging real-time measurement data during UAV operations, these aerial calibration methods primarily fall into two categories: Multi-error joint calibration approaches and Cooperative target-based methods. Regarding multi-error joint calibration, Kalibr [3] serves as a robust multi-sensor calibration toolbox widely implemented in SLAM [4,5,6] and VIO [7,8] systems. It enables simultaneous estimation of camera intrinsics, camera-IMU extrinsics, and the IMU noise parameters. Although Kalibr provides a powerful framework for sensor fusion, it is primarily designed for offline calibration and does not address the specific challenge of in-flight pod installation error calibration under dynamic UAV operations. Additionally, Bundle Adjustment (BA) [9] acts as the core optimization algorithm in SLAM, which simultaneously refines camera poses, 3D map points, and intrinsic parameters during optimization. However, BA typically operates on visual features and does not directly incorporate the gimbal angles, UAV state, and target tracking information required for pod installation error calibration.
Regarding cooperative target-based calibration, ref. [1] models the EO pod installation errors using multiple ground-cooperative-target positioning measurements combined with the airframe and pod state data. The near-optimal solutions of the pod installation errors are obtained via an iterative genetic algorithm optimization of a nonlinear least-squares (NLS) estimation. Similarly, combining the state information of the aircraft and the electro-optical pod, ref. [2] employs the least squares method to perform real-time fitting of the pod installation errors by tracking and locating the ground cooperative target. However, due to the approximation steps performed in [2], it can only handle situations where the initial misalignment angles are within a range of ±2°, which yields suboptimal outcomes. These limitations highlight the need for a method that can achieve optimal closed-form solutions without relying on iterative approximations or restrictive angular ranges. Although both [1,2] share conceptual parallels with our work, critical distinctions exist: mandatory target localization in prior methods vs. our target tracking-based approach; fundamentally distinct mathematical frameworks for installation errors modeling/solving; inherent solution suboptimality in existing techniques vs. our closed-form optimality.
In addition, in terms of image object detection and tracking technologies, advancements in computer vision have made target detection and tracking essential capabilities for UAVs and even for electro-optical pods themselves. The YOLO series algorithms [10,11] have become the dominant detectors for resource-constrained platforms such as UAVs and mobile phones, owing to their balance of detection accuracy and computational efficiency. Traditional correlation filter-based trackers [12,13] remain prevalent in commercial pods and are still the built-in tracking algorithms for most electro-optical pods. Meanwhile, deep neural network trackers (e.g., STARK [14], TransT [15]) are gradually being applied in industry. These advanced tracking methods provide the real-time target localization information that enables our calibration framework to operate during actual tracking missions. Thus, we utilize YOLOv12 [11] for detection and STARK [14] for tracking, and perform pod installation error calibration during the target tracking process. Though categorized as cooperative target-based calibration, our work diverges fundamentally from [1,2]: the proposed method requires only target tracking (no explicit localization) and delivers analytically optimal closed-form solutions for the installation errors. Moreover, we also design rapid calibration schemes for the EO installation errors of various types of UAV systems in emergency response scenarios.

3. Methodology

3.1. Observation Equation Modeling for Installation Errors

This work achieves online calibration of electro-optical pod installation errors during the process of UAV target tracking missions by leveraging the detection and tracking capabilities of UAV systems. To establish reference benchmarks, any object with known geodetic coordinates (longitude, latitude, altitude) qualifies as a valid target, including the ground units in air-ground cooperative systems, aerial units in drone swarms, UAV ground control stations, UAV takeoff points, ground cooperation markers, and other geolocated objects. This section models the installation error of the pod by establishing the geometric constraints between target geodetic coordinates and pod observation vectors, enabling continuous installation error modeling.
The coordinate systems involved are depicted in Figure 1. $O_r X_r Y_r Z_r$ denotes the ground-fixed inertial reference frame, which serves as the navigation coordinate system for the UAV; its origin lies at the takeoff point, and its $X_r$, $Y_r$, $Z_r$ axes point north, east, and down, respectively, so it is also known as the North-East-Down (NED) frame. The aircraft-body frame $O_b X_b Y_b Z_b$ is a follower coordinate system rigidly attached to the UAV airframe; viewed from the tail toward the nose, its $X_b$, $Y_b$, $Z_b$ axes point forward, right, and downward, respectively. $O_d X_d Y_d Z_d$ represents the pod-fixed frame mounted at the belly or nose of the UAV; this frame is likewise a follower coordinate system whose $X_d$, $Y_d$, $Z_d$ axes share the same forward-right-down orientation relative to the mounting datum, with its origin located at the geometric center of the pod. $oxy$ signifies the image coordinate frame established at the center of the image plane, with its $x$-axis pointing right and its $y$-axis pointing down. Finally, $O_c X_c Y_c Z_c$ denotes the camera frame originating at the camera’s optical center; viewed from the rear toward the front, the $X_c$ axis aligns with the optical axis (forward), the $Y_c$ axis parallels the image $x$-axis (right), and the $Z_c$ axis parallels the image $y$-axis (downward).
During pod installation, efforts are made to align the body frame $O_b X_b Y_b Z_b$ with the pod frame $O_d X_d Y_d Z_d$ as closely as possible. However, factors such as airborne space constraints, manufacturing tolerances, and assembly deviations introduce a persistent rigid-body transformation between these two frames, denoted as $(t_b^d, R_d^b)$, where $t_b^d$ represents the position vector of the $d$-frame origin in $b$-frame coordinates, and $R_d^b$ represents the rotation matrix from the $b$-frame to the $d$-frame. The translational installation error $t_b^d$ can be obtained from the UAV’s three-dimensional CAD model or through direct measurement; moreover, its magnitude is much smaller than the distance from the UAV to the target, so it has a negligible impact on actual missions. In contrast, in actual flight the UAV’s altitude can reach hundreds to thousands of meters, under which condition the rotational installation error $R_d^b$ seriously degrades the accuracy of tasks such as pod geocoordinate tracking and target positioning. Therefore, this paper focuses exclusively on the calibration of the rotation matrix $R_d^b$. For brevity, the following text uses “installation error” to refer specifically to the rotational installation error of the pod.
At time $t$, denote the target’s geodetic coordinates (longitude, latitude, altitude) as $Q^t = \left[ Q_l^t, Q_a^t, Q_h^t \right]^T$, its projected image coordinates on the image plane as $I^t = \left[ I_x^t, I_y^t \right]^T$, the gimbal frame angles of the electro-optical pod as $A_s^t = \left[ yaw_s^t, pitch_s^t, roll_s^t \right]^T$, the UAV’s geodetic coordinates as $W^t = \left[ W_l^t, W_a^t, W_h^t \right]^T$, and the UAV’s attitude angles as $A_u^t = \left[ yaw_u^t, pitch_u^t, roll_u^t \right]^T$. Then, the coordinates $P_c^t$ of the projected target point in the camera frame can be expressed as
$$P_c^t = \left[\, f,\; I_x^t \times s_x,\; I_y^t \times s_y \,\right]^T \quad (1)$$
where $f$ denotes the camera focal length, and $s_x$ and $s_y$ represent the pixel pitch along the $x$ and $y$ axes, respectively. The vector $L_r^t = \left[ L_x, L_y, L_z \right]^T$ from the UAV to the target in the North-East-Down (NED) frame is then expressed as
$$\begin{cases} L_x = \left( Q_a^t - W_a^t \right) \times R_e \\ L_y = \left( Q_l^t - W_l^t \right) \times R_e \times \cos\!\left( Q_a^t - W_a^t \right) \\ L_z = W_h^t - Q_h^t \end{cases} \quad (2)$$
with $R_e$ being the average radius of the Earth, taken as 6,371,393 m.
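As an illustrative sketch (function and variable names are our own, not from the paper), Equation (2) can be implemented directly; angles are assumed to be in radians so that the arc-length products with $R_e$ are valid:

```python
import math

R_E = 6_371_393.0  # mean Earth radius [m], the value used with Eq. (2)

def ned_vector(target_lla, uav_lla):
    """UAV-to-target vector in the North-East-Down frame, following Eq. (2).

    Arguments are (longitude_rad, latitude_rad, altitude_m) tuples in the
    (Q_l, Q_a, Q_h) / (W_l, W_a, W_h) ordering of the text.
    """
    q_l, q_a, q_h = target_lla
    w_l, w_a, w_h = uav_lla
    l_x = (q_a - w_a) * R_E                        # north component
    l_y = (q_l - w_l) * R_E * math.cos(q_a - w_a)  # east component, cosine term as in Eq. (2)
    l_z = w_h - q_h                                # down component (NED: down is positive)
    return (l_x, l_y, l_z)
```

A target 0.001 rad of latitude north of the UAV, at the same longitude and altitude, yields a purely northward vector of about 6.37 km.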
Through mathematical derivation combining Equations (1) and (2) with the inter-frame transformation relationships, the coordinates $L_c^t$ of the tracked target in the camera frame satisfy
$$L_c^t = R_c^d R_d^b \left( R_b^r L_r^t - t_b^d \right) = \eta^t P_c^t \quad (3)$$
Here, $R_c^d$ is derived from the gimbal mechanism angles $A_s^t$, $R_b^r$ can be calculated from the UAV attitude angles $A_u^t$, $\eta^t$ denotes a scale factor that varies across observations, and $R_d^b$ represents the installation error to be calibrated. Additionally, Equation (3) excludes the displacement $t_d^c$ between the camera and pod frame origins; this omission is justified because $t_d^c$ is typically on the order of centimeters, which is negligible for long-range UAV tasks.
Let $L_b^t = R_b^r L_r^t - t_b^d$. To eliminate the scale factor $\eta^t$ and establish the relationship between the target observation vector, the target geodetic coordinates, and the UAV geodetic coordinates, first normalize $P_c^t$ and $L_b^t$ to obtain
$$p_c^t \triangleq \frac{P_c^t}{\left\| P_c^t \right\|}, \qquad l_b^t \triangleq \frac{L_b^t}{\left\| L_b^t \right\|} \quad (4)$$
where $p_c^t$ denotes the unit vector from the camera frame origin to the target expressed in camera coordinates, and $l_b^t$ represents the unit vector from the pod frame origin to the target expressed in body coordinates. Substituting Equation (4) into Equation (3) yields the observation constraint equation incorporating the pod installation error
$$p_d^t = R_d^b\, l_b^t \quad (5)$$
with $p_d^t = \left( R_c^d \right)^T p_c^t$ being the unit vector (from the camera frame origin to the target) transformed into pod coordinates.
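The construction of one observation pair $(p_d^t, l_b^t)$ from Equations (1), (4), and (5) can be sketched as follows; the helper names are hypothetical, and the pod-to-camera rotation $R_c^d$ is assumed to be supplied externally from the gimbal angles:

```python
import numpy as np

def unit(v):
    """Normalize a 3-vector, as in Eq. (4)."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def observation_pair(f, s_x, s_y, image_xy, R_cd, L_b):
    """Build one (p_d, l_b) observation pair for Eq. (5).

    f, s_x, s_y : focal length and pixel pitches, as in Eq. (1);
    image_xy    : tracked target's image coordinates (I_x, I_y);
    R_cd        : pod-to-camera rotation derived from the gimbal angles (assumed given);
    L_b         : UAV-to-target vector already expressed in the body frame.
    """
    I_x, I_y = image_xy
    P_c = np.array([f, I_x * s_x, I_y * s_y])  # Eq. (1)
    p_c = unit(P_c)
    p_d = R_cd.T @ p_c                         # transform into pod coordinates
    l_b = unit(L_b)
    return p_d, l_b
```

With the target at the image center and an identity gimbal rotation, both unit vectors point along the frame's forward axis, as Equation (5) demands for a zero installation error.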

3.2. Optimal Estimation of Installation Errors

Equation (5) represents the single-observation model for the pod installation error, where the rotation matrix $R_d^b$ constitutes the sole unknown. Given the time-invariant nature of $R_d^b$, its calibration accuracy can then be enhanced by fusing multiple observations. To circumvent inter-frame multi-target correspondence challenges, this work proposes a temporal fusion calibration framework that tracks a single target across successive observations. Within this framework, we derive an analytically optimal closed-form solution for $R_d^b$ and propose a statistical approximation calibration method for the angular installation errors.

3.2.1. Optimal Closed-Form Solution for $R_d^b$

In practice, temporal discretization is performed according to the camera frame rate, yielding integer time indices from $1$ to $n$. At any discrete time $t$, an observation constraint equation can be derived via Equation (5). By combining the $n$ temporal observations, the calibration problem is formulated as a constrained optimization problem
$$\min_{R_d^b} F\!\left(R_d^b\right) = \sum_{t=1}^{n} \left\| p_d^t - R_d^b l_b^t \right\|^2 \quad \text{subject to} \quad \det\!\left(R_d^b\right) = 1,\; R_d^b \left(R_d^b\right)^T = I_{3\times3} \quad (6)$$
To minimize $F\!\left(R_d^b\right)$, we first expand it to obtain
$$F\!\left(R_d^b\right) = \sum_{t=1}^{n} \left\| p_d^t - R_d^b l_b^t \right\|^2 = \sum_{t=1}^{n} \left( p_d^t \cdot p_d^t + l_b^t \cdot l_b^t - 2\, p_d^t \cdot R_d^b l_b^t \right) = \sum_{t=1}^{n} \left( 2 - 2\, p_d^t \cdot R_d^b l_b^t \right) = 2n - 2 \sum_{t=1}^{n} p_d^t \cdot R_d^b l_b^t \quad (7)$$
where the operator “$\cdot$” denotes the vector inner product. Consequently, minimizing $F\!\left(R_d^b\right)$ is equivalent to maximizing the following expression
$$F'\!\left(R_d^b\right) = \sum_{t=1}^{n} p_d^t \cdot R_d^b l_b^t = \mathrm{Trace}\!\left( \sum_{t=1}^{n} R_d^b\, l_b^t \left( p_d^t \right)^T \right) = \mathrm{Trace}\!\left( R_d^b \sum_{t=1}^{n} l_b^t \left( p_d^t \right)^T \right) \triangleq \mathrm{Trace}\!\left( R_d^b B \right) \quad (8)$$
with $(\cdot)^T$ denoting the matrix transpose, $\mathrm{Trace}(\cdot)$ computing the matrix trace, and $B = \sum_{t=1}^{n} l_b^t \left( p_d^t \right)^T$ representing the measurement information matrix.
Following the literature [16] and the Kabsch algorithm [17], the analytically optimal closed-form solution for the rotation matrix $R_d^b$ is given by
$$B = U \times \mathrm{diag}\!\left( \lambda_1, \lambda_2, \lambda_3 \right) \times V^T, \qquad R_d^b = V U^T \quad (9)$$
where $U$ and $V$ are the orthogonal matrices obtained by performing singular value decomposition (SVD) on the information matrix $B$, and $\mathrm{diag}\!\left( \lambda_1, \lambda_2, \lambda_3 \right)$ is the diagonal matrix of the singular values $\lambda_1, \lambda_2, \lambda_3$. Equation (9) thus constitutes the derived optimal closed-form solution for the installation error.
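A minimal NumPy sketch of the closed-form solution of Equations (6)–(9) follows; note that it adds the standard determinant sign correction from the Kabsch algorithm [17], which guarantees a proper rotation and which Equation (9) leaves implicit:

```python
import numpy as np

def calibrate_rotation(p_d_list, l_b_list):
    """Closed-form estimate of R_d^b from paired unit vectors, Eqs. (6)-(9).

    Accumulates the information matrix B = sum_t l_b^t (p_d^t)^T and solves
    the resulting orthogonal Procrustes problem by SVD (Kabsch). The sign
    correction on the last singular direction keeps det(R) = +1.
    """
    B = np.zeros((3, 3))
    for p_d, l_b in zip(p_d_list, l_b_list):
        B += np.outer(l_b, p_d)            # B = sum l_b (p_d)^T, Eq. (8)
    U, _, Vt = np.linalg.svd(B)
    V = Vt.T
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(V @ U.T))])
    return V @ D @ U.T                     # R_d^b = V U^T, det-corrected
```

Feeding in pairs generated by a known rotation recovers that rotation, which is a quick sanity check before flying the method on real tracking data.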

3.2.2. Approximate Calibration Based on Statistics

From the perspective of pod angular misalignment, the installation error $R_d^b$ can be parameterized by Euler angles $\hat{e}_{yaw}$, $\hat{e}_{pitch}$, and $\hat{e}_{roll}$ following a 3-2-1 rotation sequence (z-y-x axes) with respect to the UAV body frame. For tracking missions, such as pod air-to-ground geocoordinate tracking or tracking-based target positioning, the pod is controlled during tracking to keep the target at the image center. This closed-loop tracking is achieved by adjusting the gimbal angles of the pod (in the 3-2-1 rotation convention), where rotations about the optical axis (roll) preserve target centrality and thus negligibly impact tracking performance. Consequently, only the yaw ($\hat{e}_{yaw}$) and pitch ($\hat{e}_{pitch}$) installation errors require calibration for such missions.
Given the known absolute geodetic coordinates $Q^t$ of the tracked target at any time $t$, combined with the UAV’s absolute geodetic coordinates $W^t$ and attitude angles $A_u^t$, the target’s body-frame coordinates are computed as
$$L_b^t = R_b^r L_r^t - t_b^d = \left[ L_x^t, L_y^t, L_z^t \right]^T \quad (10)$$
where $R_b^r = R_x\!\left( roll_u^t \right) R_y\!\left( pitch_u^t \right) R_z\!\left( yaw_u^t \right)$ represents the kinematic transformation from the ground reference frame to the body frame via 3-2-1 Euler rotations. This enables the calculation of the target’s azimuth angle ($yaw_b^t$) and elevation angle ($pitch_b^t$) in the UAV body frame through
$$yaw_b^t = \operatorname{atan2}\!\left( L_y^t,\, L_x^t \right), \qquad pitch_b^t = -\operatorname{atan2}\!\left( L_z^t,\, \sqrt{ L_x^t L_x^t + L_y^t L_y^t } \right) \quad (11)$$
with the negative sign convention indicating nose-up positive elevation. Simultaneously, let $yaw_s^t$ and $pitch_s^t$ denote the servo-mechanism gimbal angles at time $t$, with servo tracking control deviations $m_{yaw}^t$ and $m_{pitch}^t$. The target’s pod-frame azimuth ($yaw_d^t$) and elevation ($pitch_d^t$) are then expressed as
$$yaw_d^t = yaw_s^t + m_{yaw}^t, \qquad pitch_d^t = pitch_s^t + m_{pitch}^t \quad (12)$$
Under ideal conditions, the actual gimbal angles $yaw_d^t$ and $pitch_d^t$ of the target in the pod frame should equal the computed azimuth ($yaw_b^t$) and elevation ($pitch_b^t$) of the target in the body frame, respectively. During pod installation, deliberate alignment procedures minimize the angular misalignment between the pod and body frames, rendering the installation errors $\hat{e}_{yaw}$ (azimuth) and $\hat{e}_{pitch}$ (elevation) small in magnitude. Hence, these errors can also be approximately calibrated by directly averaging the deviations between the yaw and pitch angles of the target in the pod and body frames through
$$\hat{e}_{yaw} = \frac{1}{n} \sum_{t=1}^{n} \left( yaw_d^t - yaw_b^t \right) \triangleq \frac{1}{n} \sum_{t=1}^{n} e_{yaw}^t, \qquad \hat{e}_{pitch} = \frac{1}{n} \sum_{t=1}^{n} \left( pitch_d^t - pitch_b^t \right) \triangleq \frac{1}{n} \sum_{t=1}^{n} e_{pitch}^t \quad (13)$$
For practical tracking applications, the calibrated angular installation errors $\hat{e}_{yaw}$ and $\hat{e}_{pitch}$ can then be applied to establish the relationship between the line-of-sight angles of the target in the body and pod frames.
By fusing multiple observations of the tracked target, the proposed calibration method leverages temporal redundancy to inherently mitigate the influence of high-frequency disturbances and occasional mismatches.
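The statistical approximation of Equations (10)–(13) reduces to a few lines of code. The sketch below assumes each sample already provides the body-frame target coordinates of Equation (10) together with the gimbal angles and tracking deviations; all function and variable names are our own:

```python
import math

def approx_angular_errors(samples):
    """Statistical approximation of yaw/pitch installation errors, Eqs. (10)-(13).

    Each sample is (L_b, yaw_s, m_yaw, pitch_s, m_pitch):
      L_b            : target coordinates (L_x, L_y, L_z) in the body frame, Eq. (10);
      yaw_s, pitch_s : gimbal servo angles [rad];
      m_yaw, m_pitch : closed-loop tracking control deviations [rad].
    Returns the averaged deviations (e_yaw, e_pitch) of Eq. (13).
    """
    e_yaw_sum = e_pitch_sum = 0.0
    for (L_x, L_y, L_z), yaw_s, m_yaw, pitch_s, m_pitch in samples:
        yaw_b = math.atan2(L_y, L_x)                          # Eq. (11), azimuth
        pitch_b = -math.atan2(L_z, math.hypot(L_x, L_y))      # nose-up positive elevation
        yaw_d = yaw_s + m_yaw                                 # Eq. (12)
        pitch_d = pitch_s + m_pitch
        e_yaw_sum += yaw_d - yaw_b                            # per-sample deviations, Eq. (13)
        e_pitch_sum += pitch_d - pitch_b
    n = len(samples)
    return e_yaw_sum / n, e_pitch_sum / n
```

For a single synthetic sample in which the gimbal angles are offset from the body-frame line-of-sight angles by 0.01 rad, the function returns roughly 0.01 rad for both errors, as expected.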

4. Online Calibration System Architecture and Emergency Calibration Scheme

Rapid advances in AI-enabled computer vision have established image-based target detection and tracking as core capabilities for UAVs and even electro-optical pods. Building upon these capabilities, this section develops an online calibration system for pod installation errors by integrating the calibration with the target tracking missions. Furthermore, we design diverse rapid calibration protocols for emergency response scenarios.

4.1. Design of Online Calibration System

The ultimate objective of pod installation error calibration is to enhance mission capabilities for UAV systems, where target tracking, as a core mission, plays a pivotal role in generating useful perception data. For instance, for targets of interest, UAVs typically engage tracked targets to enable subsequent geolocation operations. Consequently, our methodology integrates calibration with this fundamental tracking task, thereby optimizing calibration outcomes for mission-specific adaptability.
To enable autonomous online calibration of UAV electro-optical pod installation errors, this work develops the calibration system depicted in Figure 2, comprising four key components: (1) the electro-optical pod, (2) the flight controller, (3) the airborne processor, and (4) a geolocated target object. The first three components are deployed on the UAV, while the target may comprise any physical object with known geodetic coordinates. For moving targets such as cars and UAVs, positioning payloads are deployed on the platform to provide real-time location data, which can be transmitted to the airborne processor through the communication link.
Modern UAV systems require multifunctional capabilities including communication management, target detection, and mission-target tracking. This work integrates pod installation error calibration into the UAV tracking workflow, implementing the multithreaded asynchronous architecture illustrated in Figure 2 within the airborne processor. The architecture comprises four concurrent threads: the Communication, Detection, Tracking, and Integration Threads, with the Communication Thread further incorporating three dedicated modules: Pod Communication, Flight Controller Communication, and Link Communication. Specifically, the Pod Communication module acquires imagery and servo state data from the electro-optical pod, relaying images to both the Detection and Tracking Threads while feeding servo information to the Integration Thread; additionally, it relays pod control commands received from the Integration Thread to the pod for closed-loop pod actuation. Simultaneously, the Flight Controller Communication module retrieves UAV telemetry states for the Integration Thread while delivering target information received from that thread to the flight controller. In parallel, the Link Communication module streams, through the data link, the real-time geodetic coordinates of the target and the control commands issued by the ground control software.
Additionally, the Detection Thread performs target identification by employing YOLOv12 [11] as the object detector for autonomous target acquisition, subsequently relaying detection results to the Tracking Thread; the Tracking Thread utilizes STARK [14] as its tracker, which automatically initiates pursuit based on predefined criteria or accepts manually designated targets from the ground control software. The Integration Thread incorporates two modules: Pod Control and Installation Error Calibration. The Pod Control module employs Active Disturbance Rejection Control (ADRC) [18] to generate closed-loop pod commands based on tracking outcomes and pod operational parameters. The Installation Error Calibration module then calculates the installation errors by exploiting the pod gimbal angles, UAV states, target geocoordinates, and closed-loop tracking control outcomes through Equation (9) or (13).
However, the tracking results might affect calibration results in two main aspects. First, the accuracy of tracking directly influences calibration precision. To mitigate this, our method leverages multiple temporal observations of the tracked target and formulates the installation error calibration as an optimization problem over an optimal rotation matrix. This approach is conceptually analogous to bundle adjustment, which effectively suppresses the impact of random tracking errors by fusing multi-frame information, thereby enhancing overall calibration robustness and accuracy. Second, the real-time performance of tracking also plays a critical role. Excessive latency can lead to temporal mismatches between tracking outputs and UAV state data, inevitably degrading the accuracy of the estimated installation errors. To address this, we have accelerated the tracking process in our implementation (by adopting a lightweight model, leveraging TensorRT, and implementing it in C++) to ensure synchronization between tracking results and UAV state information, thereby minimizing the impact of latency on calibration fidelity.
Because the calibration is performed during actual target tracking missions and yields the optimal closed-form solution of the installation error, the results exhibit enhanced adaptability to high-dynamic flight conditions and superior alignment with UAV mission objectives.

4.2. Design of Emergency Calibration Protocol

Emergency response scenarios such as military reconnaissance, counter-terrorism, and disaster relief demand rapid, dynamic, and online-capable calibration methods for UAV electro-optical pod installation errors. Building upon the aforementioned pod installation error resolution methodology and online calibration architecture, this work develops rapid calibration protocols for diverse UAV platforms in emergency response contexts as follows:
(a)
For rotary-wing UAV systems, deploy a cooperative target at the takeoff site or other geolocated positions. Then, during both takeoff and egress maneuver phases, perform pod closed-loop tracking of the cooperative target to achieve pod installation error calibration.
(b)
For fixed-wing UAV systems, position a cooperative target at a strategic location ahead of the takeoff path and enable concurrent online calibration of pod installation errors during the taxi-takeoff roll via continuous target tracking.
(c)
For air–ground cooperative systems and UAV swarms, calibration methods extend beyond depending on cooperative targets to leverage ground units in cooperative systems or aerial units within the swarm, all of which disseminate real-time geocoordinate data via communication links. This capability enables temporally and spatially unconstrained in-mission calibration or recalibration, executed opportunistically during task execution.
The calibration protocols above embed the calibration process within standard takeoff or mission execution workflows, eliminating dedicated procedures to achieve seamless calibrate-upon-takeoff and compensate-immediately-post-calibration functionality, which significantly enhances UAV emergency response capabilities.

5. Experiments

5.1. Fixed-Wing UAV Experimental System Design

To validate the correctness and feasibility of the proposed methodology, an experimental fixed-wing UAV system was constructed as illustrated in Figure 3, comprising five core components: (1) the airframe, (2) the airborne processor, (3) the electro-optical pod, (4) the datalink subsystem, and (5) the flight controller integrated with ancillary sensors. The primary airframe employs lightweight balsa construction with a 2.2 m wingspan and dual-propeller propulsion, while an NVIDIA Orin NX development board serves as the onboard processor featuring 16 GB memory and 100 TOPS AI computing capability. The PLD80A micro-pod is employed as the imaging sensor unit, integrating dual cameras with 7.2 mm and 25 mm focal lengths. This pod exhibits an identical 2.9 μm pixel pitch in horizontal and vertical dimensions, captures native 1920 × 1080 resolution imagery at 60 Hz frame rate, and is nose-mounted to prevent occlusion of the field of view (FOV). The flight controller is deployed proximate to the center of gravity, with built-in sensors including IMU, GPS, and airspeed instrumentation. Additionally, the datalink subsystem is primarily utilized for bidirectional real-time transmission between the UAV and ground station, encompassing three primary data streams: telemetry states from both flight controller and pod, command inputs from ground control interfaces, and processed imagery generated by the airborne processor.
During actual flight operations, the UAV’s cruise speed and altitude were configured at 25 m/s and 200 m, respectively. The airborne processor establishes multicast-based data exchange with the flight controller, electro-optical pod, and ground control software at a unified 60 Hz communication frequency. The processed images are compressed and encoded using H.265 and transmitted via RTSP streams over the data link. Considering computational constraints and real-time calibration requirements, both YOLOv12 and STARK are accelerated with TensorRT during deployment. Furthermore, the detection and tracking threads operate under a temporal decoupling strategy: the object detection thread enters a dormant state upon confirming successful initialization of the tracking thread and is reactivated only upon tracking task completion.
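The temporal decoupling between the detection and tracking threads can be sketched with a simple event flag. This is an illustrative sketch under the description above, not the authors' implementation; all class and method names are hypothetical.

```python
import threading

class DetectTrackScheduler:
    """Sketch of the temporal decoupling strategy: the detection thread
    sleeps while the tracking thread is active and is reactivated when
    the tracking task completes (names are illustrative)."""

    def __init__(self):
        # Set while the tracking thread is confirmed running.
        self._tracking = threading.Event()

    def on_track_init_ok(self):
        # Tracker confirmed initialized: put detection to sleep.
        self._tracking.set()

    def on_track_done(self):
        # Tracking task finished: reactivate detection.
        self._tracking.clear()

    def detection_should_run(self):
        # The detection loop polls this flag each iteration.
        return not self._tracking.is_set()
```

In a real system the detection loop would block on the event rather than poll, but the flag captures the hand-off logic described in the text.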
To facilitate validation of calibration accuracy, without loss of generality, this paper adopts a stationary car located at the UAV’s takeoff point as the cooperative target. The target thus coincides with the origin of the ground inertial reference frame, sharing the same geodetic coordinates as the takeoff point. After calibration, when the pod executes closed-loop geocoordinate tracking of the takeoff point, qualitative accuracy verification can be achieved by checking whether the cooperative target lies at the image center. Furthermore, the target’s static nature and known altitude enable quantitative validation of calibration accuracy by computing its coordinates in the ground reference frame.

5.2. Results and Discussion

Based on the aforementioned UAV system, we conducted calibration experiments across various flight regimes. Below, we present the calibration results for the pod installation errors from both qualitative and quantitative perspectives. We also verify calibration accuracy by analyzing the positioning precision of the cooperative target and provide a comprehensive analysis of the experimental results.

5.2.1. Qualitative Performance Analysis

Figure 4 shows comparative outcomes obtained when the pod performs geocoordinate tracking of the ground cooperative target during the UAV’s flight, before and after calibration. The experimental protocol involved recording the UAV’s geodetic coordinates prior to launch, with a stationary car subsequently positioned at the takeoff origin as denoted in subfigure (b); the car’s positional parameters are therefore equivalent to the recorded geodetic coordinates. Before installation error calibration, upon issuing geocoordinate tracking commands for the cooperative target via the pod ground control software, the Integration Thread computes the target’s body-frame vector g_b^t using the UAV geocoordinates, attitude, and target geocoordinates. Applying Equation (11) then yields the azimuth and elevation angles of the target in the body frame, and gimbal actuation based on these angular references produces the tracking outcome captured in subfigure (a). After calibrating the installation errors, the retrieved body-frame target vector g_b^t undergoes coordinate transformation to the pod frame using the calibrated parameters, followed by calculation of the azimuth and elevation angles in the pod frame via Equation (11), ultimately yielding the geocoordinate tracking outcome shown in subfigure (b).
Figure 4 indicates that the cooperative target lay beyond the pod’s geocoordinate tracking FOV before installation error calibration, whereas post-calibration the target is positioned at the FOV center. This definitive pre-/post-calibration contrast validates both the feasibility and efficacy of the proposed pod installation error calibration methodology. Additionally, Figure 4 shows target identification outcomes from the Detection Thread while the Integration Thread executes gimbal-steered geocoordinate tracking, including target categories and detection confidences.

5.2.2. Quantitative Calibration Metrics

To validate the feasibility of the proposed calibration method and examine the influence of UAV flight regimes and calibration distance on the calibration results, multiple calibrations of the pod installation errors were conducted under both level flight and circling flight conditions. Level flight and circling tracking are two primary flight modes during UAV mission execution; thus, calibrating under real operational scenarios can enhance the mission applicability of the calibration results. Furthermore, to verify the robustness of the proposed calibration method, additional calibration tests were carried out under conditions of significant angular installation errors.
(1) Calibration Experiments at Varying Distances
To investigate the impact of distance on the calibration results, data were collected at varying distance ranges from the target to calibrate the installation errors of the pod. The calibration results are shown in Figure 5 and Figure 6. Based on the collected data, the optimal closed-form solution for the pod installation errors could be derived using Equation (9), while the statistical installation errors were obtained through Equation (13). By utilizing the UAV’s position and attitude, the geographical location of the cooperative target, and the calibration results, the azimuth and elevation of the target in the pod frame can be calculated. Let yaw_o^t and pitch_o^t denote the azimuth and elevation of the target in the pod frame at time t, corrected using the optimal closed-form solution for the installation errors, while yaw_s^t and pitch_s^t represent the corresponding angles at time t compensated using the calibration results from the statistical method. Since the calibration system employs closed-loop tracking of the cooperative target during the calibration process, the ground truth of the target’s azimuth and elevation in the pod frame can be calculated in real time from the gimbal angles of the pod servo mechanism and the servo tracking deviations; these ground-truth values are denoted yaw^t and pitch^t, respectively. In this paper, the accuracy of the calibration method is evaluated based on the deviation between the corrected angles and their corresponding ground-truth values.
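The closed-form estimation step can be sketched as follows. This is a Kabsch-style SVD solution for the best rotation relating paired body-frame and pod-frame unit-vector observations (cf. the Kabsch reference in the bibliography); whether it matches Equation (9) exactly cannot be confirmed from this section, and all function names and frame conventions are illustrative assumptions.

```python
import numpy as np

def optimal_rotation(body_vecs, pod_vecs):
    """Closed-form best-fit rotation R minimizing sum_t ||R b_t - p_t||^2
    over paired unit observations (Kabsch-style SVD; a sketch, not the
    authors' exact code)."""
    B = np.asarray(body_vecs, dtype=float)   # (N, 3) vectors in body frame
    P = np.asarray(pod_vecs, dtype=float)    # (N, 3) same vectors in pod frame
    H = B.T @ P                              # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def az_el_deg(v):
    """Azimuth/elevation (degrees) of a vector in a NED-style frame;
    the sign conventions here are assumptions."""
    x, y, z = v
    yaw = np.degrees(np.arctan2(y, x))
    pitch = np.degrees(np.arctan2(-z, np.hypot(x, y)))
    return yaw, pitch
```

Given three or more non-coplanar observation pairs, the SVD solution is exact in the noiseless case and least-squares optimal under noise.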
The calibration data for the results shown in Figure 5 were collected over a distance range of 200 m to 500 m, while the data for Figure 6 were obtained within a range from 2450 m to 2800 m. In both Figure 5 and Figure 6, the green and red curves represent the accuracy of the two calibration methods proposed in this paper. Specifically, the green curve corresponds to the calibration accuracy of the statistical calibration method, and the red curve depicts the calibration accuracy of the optimal closed-form solution method for installation errors. The left column in the figure displays the deviation between the ground truth and the calculated values at different time points, with the horizontal axis representing time and the vertical axis representing the angular deviation. The right column presents histogram statistics of these deviations, where the horizontal axis denotes the angular deviation, and the vertical axis indicates the corresponding probability density. As shown in the figures, after calibration correction, the deviations between the calculated angles of the target in the pod frame and the ground truth angles are minimal and distributed near zero, thereby verifying the feasibility of the method proposed in this paper.
Specifically, we employed the mean absolute error (MAE) of the deviations to quantify the accuracy of the calibration algorithm. Detailed statistical results are presented in Table 1. The table indicates that the mean deviation of the computed angles after calibration correction is generally less than 0.1 degrees, confirming the high accuracy of the proposed calibration algorithm. The comparison of the two sets of calibration results reveals that long-distance calibration achieves higher precision, with the MAE between the calculated and ground-truth gimbal angles of the target in the pod frame being less than 0.02°. The primary reasons for this phenomenon are as follows:
(a) The 5 Hz positioning frequency of the GPS device affects the UAV’s localization accuracy. At longer distances, the UAV localization error becomes relatively small compared to the UAV-target range, thus exerting a lesser influence on the calibration results.
(b) During calibration, the pod maintains locked tracking on the target. At longer distances, the angular rate of the target relative to the pod is lower, resulting in more stable servo control of the pod, which is more conducive to accurate calibration.
Table 1. Calibration accuracy at varying distances.

| Calibration Distance Range | Statistical Elevation Accuracy | Statistical Azimuth Accuracy | Optimal Solution Elevation Accuracy | Optimal Solution Azimuth Accuracy |
|---|---|---|---|---|
| [200 m, 500 m] | 0.0540° | 0.1014° | 0.0539° | 0.0889° |
| [2450 m, 2800 m] | 0.0087° | 0.0126° | 0.0087° | 0.0124° |
(2) Cross-verification Experiments of Calibration Results Under Different Flight Regimes
To assess the influence of UAV flight conditions on the calibration results, calibration tests were conducted separately under both level flight and circling flight conditions, followed by cross-verification using the calibration results obtained from these different flight states. During the experiment, the UAV first approached the cooperative target in level flight from a distance and then performed a circling flight around the geographical location of the target. Calibration data were collected during both phases. To evaluate the adaptability of calibration results under different UAV flight conditions, five sets of comparative experiments were conducted:
(a) correcting level-flight data using calibration results obtained from level flight;
(b) correcting circling-flight data using calibration results obtained from circling flight;
(c) correcting circling-flight data using calibration results obtained from level flight;
(d) correcting level-flight data using calibration results obtained from circling flight;
(e) calibration and correction based on data from both level flight and circling flight.
In each set of comparative experiments, the term “correction” refers to adjusting the calculated gimbal angles of the target in the pod frame using the calibration results, followed by comparison with the actual gimbal angles of the pod. The deviations between the corrected calculated gimbal angles and the actual values were then statistically analyzed to assess the adaptability of the calibration results. Figure 7 and Figure 8 present box plots of the deviation distribution between the corrected values and the ground truth obtained from the five comparative experiments: Figure 7 shows the distribution of azimuth angle deviations, while Figure 8 displays the distribution of elevation angle deviations.
For brevity, only the cross-verification results from the optimal solution calibration method are provided here; the results from the statistical calibration method are consistent.
The letter σ in the figures denotes the standard deviation of the corresponding error distribution. As shown in Figure 7 and Figure 8, the standard deviations of the error distributions for all five cross-verification experiments are relatively small, on the order of 10⁻² degrees. The mean values of the error distributions for the pod gimbal angles in calibration experiments 1, 2, and 5 are centered near zero. In contrast, the mean values for experiments 3 and 4 deviate from zero, indicating that calibration results exhibit better task adaptability when the calibration conditions match the mission conditions. The primary factors contributing to the aforementioned mean deviation are as follows:
(a) The UAV test system in this study employs a single-GPS positioning device. The yaw angle provided by the flight controller is essentially the yaw angle of the UAV’s velocity direction, which in most cases does not align with the X-axis direction of the body frame.
(b) Typically, the yaw angle provided by the flight controller is accurate during steady, crosswind-free level flight but contains bias during circling flight.
The aforementioned issue of inaccurate UAV yaw angle can be addressed by adopting a dual-GPS equipment setup, though this approach is not explored in depth here. Similar to Table 1, Table 2 provides a quantitative statistical summary of the accuracy for the five cross-verification experiments. Comparing the results in the table reveals that overall, the optimal-solution calibration method achieves higher accuracy than the statistical calibration method. Experiments 1 and 2 yield the highest calibration accuracy, indicating that the UAV flight state during calibration should ideally match the flight state during mission execution. Experiment 5, which used data from both level flight and circling flight for calibration, shows slightly lower accuracy for the optimal solution compared with Experiments 1 and 2, yet still attains a calibration precision of 0.06°, making the results usable. For Experiments 3 and 4, where the calibration flight state differed from the test flight state, the corrected pod gimbal angles exhibit larger deviations from the ground truth. Nevertheless, they remain applicable for tasks that do not demand very high calibration accuracy, such as pod air-to-ground geocoordinate tracking.
In addition, Table 3 presents the pod installation errors obtained through calibration under three scenarios: UAV level flight, circling flight, and a combination of both, corresponding to test groups 1, 2, and 5 in the table. Based on Equation (13), the statistical calibration method can only estimate the angular installation errors of the pod in the yaw and pitch directions, whereas the optimal solution calibration method can decompose the solved rotation matrix R into three sequential rotations about the Z-, Y-, and X-axes, thereby obtaining the installation errors of the pod in the yaw, pitch, and roll directions. From the table, it can be observed that the calibration results of the statistical method and the optimal solution method are consistent in the pitch direction. Furthermore, when calibrating using level-flight data, the results of the two methods in the yaw direction are also largely consistent, mutually verifying the correctness of the two calibration methods proposed in this paper. However, when data from circling flight are included in the calibration, the results of the two methods in the yaw direction differ by approximately 0.8°. Given that Table 2 indicates the optimal solution method achieves higher accuracy, we can conclude that the statistical calibration method is more suitable for calibration during level-flight phases, while the optimal solution calibration method is applicable to all flight conditions.
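The Z-Y-X decomposition described above can be sketched as follows. The rotation ordering matches the text; the exact sign conventions are assumptions, and the helper names are illustrative.

```python
import numpy as np

def zyx_to_rotation(yaw, pitch, roll):
    """Compose R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in degrees."""
    cy, sy = np.cos(np.radians(yaw)), np.sin(np.radians(yaw))
    cp, sp = np.cos(np.radians(pitch)), np.sin(np.radians(pitch))
    cr, sr = np.cos(np.radians(roll)), np.sin(np.radians(roll))
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def rotation_to_zyx_euler(R):
    """Recover yaw (Z), pitch (Y), roll (X) in degrees from R, assuming
    |pitch| < 90 deg (no gimbal lock), which holds for small mount errors."""
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    pitch = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return yaw, pitch, roll
```

Round-tripping the small installation-error angles reported in Table 3 through these two functions recovers them exactly, since the decomposition is unique away from gimbal lock.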
(3) Calibration Experiment with Large-Angle Installation Errors
From the above experimental results, it can be observed that both calibration methods proposed in this paper effectively estimate the installation errors of the pod and achieve high calibration accuracy. However, as shown in Table 3, the installation error angles obtained from the aforementioned calibration are relatively small, all within 1°, primarily due to the alignment and leveling measures taken during the installation process. To verify the effectiveness of the proposed calibration method under conditions of large-angle installation errors, the above installation procedures were intentionally omitted during pod mounting, deliberately introducing substantial angular installation errors between the pod frame and the UAV body frame. The calibration results under UAV level-flight conditions are shown in Figure 9.
The blue curves in Figure 9 represent the deviation between the calculated and the ground-truth gimbal angles of the target in the pod frame without calibration compensation. The meanings of the remaining curves are consistent with those in Figure 5. As can be observed from the figure, the pod exhibits significant installation errors in both the yaw and pitch directions, reaching 4° and 8°, respectively. However, the angular deviation after calibration compensation remains centered near zero, verifying the adaptability of the proposed method to large-angle installation errors. Statistically, the accuracy of the statistical calibration method in the yaw and pitch directions is 0.1249° and 0.0956°, respectively, while the accuracy of the optimal solution calibration method is 0.0679° and 0.0804°, respectively. Comparatively, the optimal solution calibration method demonstrates higher precision. The pod air-to-ground geocoordinate tracking results shown in Figure 4 were obtained from this flight test. It can be seen from the figure that before calibration, the target was not within the pod’s field of view, whereas after calibration, the target appears at the image center. This demonstrates the necessity of calibration and verifies the feasibility of the method proposed in this paper.
(4) Cooperative Target Localization Results
To evaluate the impact of calibration results on target localization accuracy, this paper performs a localization solution for the cooperative target by calculating its coordinates in the ground-fixed inertial reference frame O_r X_r Y_r Z_r. Since the cooperative target is stationary and located at the origin of the reference frame, the ideal 3D coordinates solved at any moment should be (0, 0, 0)^T. Therefore, the coordinates of the target obtained through the solution in the reference frame represent the localization error, which can be used to quantitatively assess the influence of calibration on target localization accuracy.
Without loss of generality, this paper assumes that the target altitude is known, similar to scenarios such as ground target localization over flat terrain or target localization on the sea surface. Localization of the target is achieved through the following four steps. First, based on the image target tracking results, the unit vector p_c^t of the target in the camera coordinate system is calculated. Second, through coordinate transformation (ignoring the positional installation error of the pod), the representation l_r^t of the unit vector from the UAV to the target in the reference frame is obtained. Then, since the altitudes of the UAV and the target are known as W_h^t and Q_h^t respectively, the z-axis component of the vector from the UAV to the target in the reference frame is L_z = W_h^t − Q_h^t, from which the scale factor s = L_z / l_r^t(z) can be calculated, where l_r^t(z) denotes the z-axis component of l_r^t. Consequently, the vector from the UAV to the target can be expressed as s × l_r^t. Finally, the position vector of the UAV in the reference frame is calculated using Equation (2); adding the vector from the UAV to the target yields the coordinates of the target in the reference frame.
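Once l_r^t is available, the remaining steps reduce to a scale-and-add operation. A minimal sketch assuming a NED-style reference frame (z pointing down, position z = −altitude); the frame convention and function name are assumptions.

```python
import numpy as np

def locate_target(l_r, uav_pos_r, uav_alt, target_alt):
    """Scale-factor localization following the four steps in the text.

    l_r        : unit vector from UAV to target in the reference frame
                 (NED-style, down-positive z -- an assumption)
    uav_pos_r  : UAV position in the reference frame
    uav_alt, target_alt : known altitudes W_h^t and Q_h^t
    Returns the estimated target position in the reference frame."""
    l_r = np.asarray(l_r, dtype=float)
    L_z = uav_alt - target_alt        # down-component of the UAV->target vector
    s = L_z / l_r[2]                  # scale factor s = L_z / l_r(z)
    return np.asarray(uav_pos_r, dtype=float) + s * l_r
```

For a UAV at 200 m altitude looking at a ground target at the frame origin, the recovered coordinates are the origin, so any nonzero output directly measures the localization error, as used in Figures 10 to 12.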
Figure 10, Figure 11 and Figure 12 show the localization results of the cooperative target during the UAV’s level-flight phase, which also reflect the distribution of localization errors. Figure 10 presents the localization results without calibrating the pod installation errors, Figure 11 shows the results corrected using the statistical calibration method, and Figure 12 displays the target localization results corrected using the optimal solution calibration method. As can be observed from the figures, the localization error of the target without calibration is on the order of 40 m, whereas after calibration it is reduced to the order of 5 m.
To quantify the localization accuracy under these three scenarios, similar to the quantification of calibration accuracy, we use the mean absolute error of the localization error to represent localization accuracy. Statistically, without calibration, the localization accuracy in the X-axis and Y-axis directions is 2.67 m and 27.83 m, respectively. Using statistical calibration, the accuracy in the X-axis and Y-axis directions is 2.11 m and 1.45 m, respectively. With optimal solution calibration, the accuracy improves to 1.24 m and 1.67 m in the X-axis and Y-axis directions, respectively. The statistical results indicate that after calibration, the target localization accuracy is significantly improved and achieves a high level of precision, further validating the feasibility of the calibration methods proposed in this paper.
Additionally, Figure 13 illustrates the variation of target localization errors with distance under the three scenarios. The red, green, and blue curves in the figure represent the localization results based on optimal solution calibration, statistical calibration, and no calibration, respectively. The horizontal axis denotes the distance to the target, while the vertical axis represents the localization results, with the upper subplot showing the results in the X-axis direction and the lower subplot showing the results in the Y-axis direction. As can be observed from the figure, the target localization error gradually increases with distance, which aligns with theoretical expectations since the localization error caused by angular deviation is proportional to the distance.
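The proportionality noted above can be checked with a one-line model: a fixed angular deviation maps to a cross-range position error of roughly d·tan(angle) at range d. This is an illustrative approximation, not the authors' error model.

```python
import numpy as np

def angular_error_to_position_error(distance_m, angle_err_deg):
    """Cross-range position error (m) induced by a fixed angular
    deviation at a given range; grows linearly with distance."""
    return distance_m * np.tan(np.radians(angle_err_deg))
```

For example, a 0.1° residual angular error corresponds to under 1 m of cross-range error at 500 m, and exactly twice that at 1000 m, consistent with the linear trend in Figure 13.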
Based on the comprehensive experimental results and analysis presented above, the following three conclusions can be drawn:
(a) The optimal solution calibration method proposed in this paper can effectively calibrate the pod installation errors under various distances and flight conditions with high calibration accuracy, making it the preferred calibration method for UAV precision mission scenarios.
(b) The statistical calibration method proposed in this paper is more suitable for rapid calibration during level flight. Although it cannot estimate the installation error in the roll direction, it can be applied to tracking tasks such as pod geocoordinate tracking, where calibration accuracy requirements are not stringent.
(c) To enhance the adaptability of the calibration results to UAV missions, the flight state during calibration should ideally match the flight state during actual task execution.

6. Conclusions

This study addresses the calibration of UAV electro-optical pod installation errors by proposing an efficient target tracking-based online methodology. The calibration challenge was formulated as an optimal rotation matrix estimation problem through temporal multi-observation fusion, yielding a closed-form solution that enhances calibration precision. For tracking-oriented applications, a statistics-driven approximate calibration approach was concurrently developed. Additionally, an online calibration system for electro-optical pod installation errors adaptable to diverse UAV platforms was engineered, incorporating rapid calibration protocols for emergency response scenarios. Furthermore, a fixed-wing UAV experimental system was constructed and calibration comparison experiments were conducted under diverse flight regimes; the comparison results can provide informed guidance for UAV path planning during calibration in precision operation scenarios.
Additionally, experimental results demonstrate that the proposed methodology enables reliable pod installation error calibration across diverse flight regimes, significantly enhancing precision in downstream missions, including geocoordinate tracking and target positioning, thereby substantiating both its feasibility and robustness. The research findings provide theoretical foundations and technical blueprints for rapid high-precision calibration during electro-optical pod deployment and hold critical implications for time-critical UAV precision missions encompassing tactical reconnaissance, emergency response, and precise delivery.
While this study establishes a robust foundation for online calibration of UAV electro-optical pod installation errors, several promising avenues warrant further investigation. First, investigating fast and high-precision tracking techniques could further enhance calibration accuracy. Second, the potential interference of the tracking task itself on calibration accuracy merits deeper exploration. Additionally, future work could focus on exploring calibration techniques for UAV electro-optical pods in GPS-denied environments, reducing the dependence of calibration methods on satellite navigation signals, thereby enhancing the adaptability and intelligence of future UAV systems.

Author Contributions

Conceptualization, Y.X. and H.X.; methodology, Y.X.; software, Y.X.; validation, Y.X., H.X., and J.L.; formal analysis, Y.M. and A.W.; investigation, H.Y. and Y.M.; resources, H.X. and A.W.; data curation, H.Y.; writing—original draft preparation, Y.X.; writing—review and editing, T.Y.; visualization, T.Y.; supervision, J.L.; project administration, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank the editors and anonymous reviewers for their valuable comments and suggestions on the paper. AI tools were used solely for language translation and polishing. The authors take full responsibility for the content of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LOS    Line of Sight
EO    Electro-Optical
NED    North-East-Down
MAE    Mean Absolute Error
UAV    Unmanned Aerial Vehicle
ADRC    Active Disturbance Rejection Control

References

  1. Xu, C.; Wang, D.Z.; Huang, D.Q.; Han, W. A Calibration Method for Installation Errors of UAV Electro-Optical Pods. China Patent CN106871927A, 20 June 2017.
  2. Tan, R.L. An Aerial Calibration Method for Installation Error of Photoelectric Pod. Opt. Optoelectron. Technol. 2024, 22, 28–34.
  3. Visual-Inertial Calibration Toolbox Kalibr. Available online: https://github.com/ethz-asl/kalibr (accessed on 19 March 2026).
  4. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
  5. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
  6. Karrer, M.; Chli, M. Distributed Variable-Baseline Stereo SLAM from Two UAVs. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 82–88.
  7. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
  8. Stumberg, L.v.; Cremers, D. DM-VIO: Delayed Marginalization Visual-Inertial Odometry. IEEE Robot. Autom. Lett. 2022, 7, 1408–1415.
  9. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle Adjustment—A Modern Synthesis. In International Workshop on Vision Algorithms; Springer: Berlin/Heidelberg, Germany, 1999; pp. 298–372.
  10. Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. In Proceedings of the 38th Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 9–15 December 2024.
  11. Tian, Y.; Ye, Q.; Doermann, D. YOLOv12: Attention-Centric Real-Time Object Detectors. arXiv 2025, arXiv:2502.12524.
  12. Danelljan, M.; Bhat, G.; Khan, F.S.; Felsberg, M. ECO: Efficient Convolution Operators for Tracking. In Proceedings of the IEEE CVPR, Honolulu, HI, USA, 21–26 July 2017; pp. 6931–6939.
  13. Danelljan, M.; Häger, G.; Khan, F.S.; Felsberg, M. Discriminative Scale Space Tracking. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1561–1575.
  14. Yan, B.; Peng, H.; Fu, J.L.; Wang, D.; Lu, H.C. Learning Spatio-Temporal Transformer for Visual Tracking. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 10428–10437.
  15. Chen, X.; Yan, B.; Zhu, J.W.; Wang, D.; Yang, X.Y.; Lu, H.C. Transformer Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8126–8135.
  16. Hu, M. Space and Transformation; Science Press: Beijing, China, 2007; pp. 130–133.
  17. Kabsch, W. A Discussion of the Solution for the Best Rotation to Relate Two Sets of Vectors. Acta Crystallogr. Sect. A 1978, 34, 827–828.
  18. Han, J.Q. From PID to Active Disturbance Rejection Control. Control Eng. China 2002, 9, 13–18.
Figure 1. Coordinate frame configuration.
Figure 2. Online calibration architecture for pod installation errors.
Figure 3. Fixed-wing UAV experimental system.
Figure 4. Comparison of pod closed-loop geocoordinate tracking results pre- versus post-calibration.
Figure 5. Calibration accuracy for distances ranging from 200 m to 500 m.
Figure 6. Calibration accuracy for distances ranging from 2450 m to 2800 m.
Figure 7. Distribution of yaw gimbal angle deviations in cross-verification.
Figure 8. Distribution of pitch gimbal angle deviations in cross-verification.
Figure 9. Calibration results for large-angle installation errors.
Figure 10. 2D distribution of localization errors without calibration.
Figure 11. 2D distribution of localization errors based on statistical calibration.
Figure 12. 2D distribution of localization errors based on optimal solution calibration.
Figure 13. Variation of target localization accuracy versus distance.
Table 2. Statistical comparison of cross-verification calibration accuracy.

| Test Group | Statistical Elevation Accuracy | Statistical Azimuth Accuracy | Optimal Solution Elevation Accuracy | Optimal Solution Azimuth Accuracy |
|---|---|---|---|---|
| 1 | 0.0126° | 0.0087° | 0.0124° | 0.0087° |
| 2 | 0.0834° | 0.0288° | 0.0204° | 0.0284° |
| 3 | 0.5792° | 0.1431° | 0.3973° | 0.1395° |
| 4 | 0.5792° | 0.1431° | 0.1230° | 0.1357° |
| 5 | 0.2896° | 0.0717° | 0.0223° | 0.0677° |

Table 3. Calibrated installation errors.

| Test Group | Calibration Method | Yaw Installation Error | Pitch Installation Error | Roll Installation Error |
|---|---|---|---|---|
| 1 | Statistical Calibration | −0.8384° | 0.4994° | -- |
| 1 | Optimal Solution Calibration | −0.7296° | 0.4979° | 0.2487° |
| 2 | Statistical Calibration | −1.4175° | 0.3564° | -- |
| 2 | Optimal Solution Calibration | −0.6889° | 0.3595° | 0.6299° |
| 5 | Statistical Calibration | −1.1280° | 0.4279° | -- |
| 5 | Optimal Solution Calibration | −0.4829° | 0.4291° | 0.8077° |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Xu, Y.; Liu, J.; Yan, H.; Wang, A.; Xu, H.; Ma, Y.; Yao, T. Target Tracking-Based Online Calibration of UAV Electro-Optical Pod Installation Errors. Automation 2026, 7, 59. https://doi.org/10.3390/automation7020059

