1. Introduction
In recent years, advancements in launch vehicle technology have facilitated the deployment of large satellite constellations, such as Starlink, Telesat, and OneWeb. This has led to a rapid increase in the number of on-orbit satellites and a progressively congested orbital environment. Meanwhile, the accumulation of defunct satellites, space debris, and launch vehicle fragments, which persist in near-circular low and medium Earth orbits, poses a growing threat to the safety of operational spacecraft. Against this backdrop, tracking and monitoring near-Earth space objects is crucial for early risk warning and plays a vital role in ensuring spacecraft safety. Video satellites, which offer advantages including real-time monitoring, continuous imaging, agile attitude maneuverability, and cost-effectiveness, have emerged as an effective supplement to ground-based space remote sensing technologies [1,2]. They are widely used in stare-mode observation, space situational awareness, and disaster prevention, thereby significantly improving comprehensive situational awareness [3,4,5]. Consequently, several countries have successfully developed and launched various optical video imaging satellites in recent years [6,7,8,9,10].
Video satellites utilize onboard visible-light cameras to perform visual tracking and continuous imaging of ground or space targets. This requires the satellite's control system to continuously adjust the boresight direction of the spaceborne camera, ensuring it remains aligned with the target throughout the observation process [11]. The target then appears at the center of the camera's imaging plane, ensuring optimal observation results. At the same time, because the target's projection lies far from the image border, the risk of the target moving out of the camera's field of view due to relative motion is minimized, guaranteeing uninterrupted continuous visual tracking. Early research on video-satellite visual tracking mainly adopted position-based methods. Methods of this type calculate the desired control commands in real time from the target's position and the current camera boresight direction [12,13,14,15,16], and therefore depend on comprehensive prior positional information: both the target position and the camera parameters must be precisely known. Even slight deviations in the real-time target position or in the camera's optical parameters can degrade the tracking and observation performance.
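To make this sensitivity concrete, the following sketch (our own illustration with hypothetical function names and geometry, not taken from the cited methods) computes the boresight commanded from a measured target position and shows how a position bias translates directly into a pointing error:

```python
import numpy as np

def desired_boresight(sat_pos, target_pos):
    """Unit line-of-sight vector from the satellite to the target (ECI frame)."""
    los = np.asarray(target_pos, float) - np.asarray(sat_pos, float)
    return los / np.linalg.norm(los)

def pointing_error_deg(sat_pos, true_target, measured_target):
    """Angle between the boresight commanded from the *measured* position
    and the true line of sight; it grows directly with the position bias."""
    cmd = desired_boresight(sat_pos, measured_target)
    true_los = desired_boresight(sat_pos, true_target)
    return np.degrees(np.arccos(np.clip(cmd @ true_los, -1.0, 1.0)))

# A 10 km cross-track position bias at a 500 km range already produces a
# pointing error of roughly 1.1 degrees (illustrative numbers only):
print(pointing_error_deg([0, 0, 0], [500e3, 0, 0], [500e3, 10e3, 0]))
```

Since a position-based law has no image feedback to correct this offset, the error persists for as long as the measurement bias does.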
To overcome the limitation of position-based tracking methods, which require complete prior information, researchers have shifted their focus to image-based visual tracking methods for video satellites [5,17,18,19,20,21]. Unlike position-based methods, once the camera completes initial target acquisition and the onboard image processing system extracts the coordinates of the target's projection point, the generation of tracking instructions no longer relies on precise target position information. Instead, the visual tracking instructions are calculated in real time from the target's projection point on the image plane. This process gradually guides the adjustment of the camera's boresight direction so that the target's projection moves toward the center of the image plane. Consequently, such methods can achieve stable and effective visual tracking without relying on precise target position information, offering enhanced autonomy and robustness. Our team has previously investigated image-based visual tracking for video satellites and designed corresponding tracking methods [4,22,23,24,25]. In these studies, an error quaternion is defined from the deviation between the target's real-time projection coordinates and the desired projection coordinates on the image plane, and the tracking instructions are designed based on this error quaternion. The effectiveness of these approaches relies heavily on a precise onboard camera imaging model, which encompasses both the internal optical parameters of the camera and its installation parameters relative to the satellite body. However, the in-orbit operational environment of video satellites, which involves thermal cycling, particle radiation, and micro-vibrations, can cause deviations in the camera parameters. Since in-orbit calibration of camera parameters is often cumbersome and time-consuming, such deviations significantly compromise mission flexibility.
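As a minimal illustration of the image-based principle described above (a generic pinhole-projection sketch with assumed focal length, pixel pitch, and principal-point values, not the paper's camera model), the feedback signal is simply the pixel deviation of the target's projection from the desired image point:

```python
import numpy as np

F_M = 0.5                           # assumed focal length [m]
PIX_M = 1e-5                        # assumed pixel pitch [m]
CENTER = np.array([512.0, 512.0])   # assumed principal point [px]

def project(p_cam):
    """Pinhole projection of a camera-frame point (z along the boresight)
    to pixel coordinates."""
    x, y, z = p_cam
    return CENTER + F_M / PIX_M * np.array([x / z, y / z])

def image_error(p_cam, desired=CENTER):
    """Pixel deviation that an image-based law feeds back in real time."""
    return project(p_cam) - desired

# A target 1 km off-axis at 500 km range projects 100 px from the center:
print(image_error([1000.0, 0.0, 500e3]))
```

Because the command is driven by this pixel error rather than by the target's absolute position, a bias in the position estimate does not enter the loop directly; errors in F_M, PIX_M, or CENTER, however, still do, which is why camera parameter uncertainty remains a limiting factor.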
To address the challenge of visual tracking by video satellites with an uncalibrated camera, our team investigated target staring observation under camera parameter uncertainties [3,26,27]. Simulations verified that the designed tracking methods effectively overcome the limitations of position-based methods. In these studies, it was assumed that system uncertainties originated solely from the onboard camera. By establishing a linear relationship between the estimated projection error and the camera parameters, appropriate adaptive update laws were designed to estimate the camera parameter uncertainties in real time. Because these methods rely on real-time acquisition of precise target positions, however, they are applicable only to scenarios where the target position is precisely known (e.g., observation of ground targets with known longitude and latitude), and they cannot be applied directly to the visual tracking of space targets whose positions are uncertain.
The commonly used passive orbit determination methods for space targets include optical orbit determination, radar orbit determination, and multi-source fusion orbit determination. These methods are inevitably subject to certain deviations in the orbit determination results due to factors such as weather, sensor noise, and time synchronization [28,29,30,31]. Related studies have shown that the vast majority of spacecraft operate in near-circular orbits [32], and that under short-term purely optical measurements, the measurement deviations of the semi-major axis, eccentricity, and true anomaly of space targets are greater than the deviations of the orbital elements that characterize the spatial orientation of the orbital plane (the orbital inclination and the right ascension of the ascending node are strongly constrained by normal observations of the orbital plane) [33,34,35,36,37]. If radar ranging information is integrated, the measurement accuracy of the semi-major axis can be greatly improved, but the measurement deviation of the phase angle (comprising the argument of perigee and the true anomaly) of the spacecraft within the orbital plane remains relatively significant. This positional uncertainty within the orbital plane can further degrade the tracking performance of traditional methods, leading to tracking failures or decreased accuracy. However, to the best of our knowledge, there is currently no research on visual tracking of space targets by video satellites under simultaneous uncertainties in camera parameters and target position.
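To give a sense of scale for this in-plane phase uncertainty, on a near-circular orbit the resulting along-track position error is, to first order, the arc length subtended by the phase bias (a back-of-envelope sketch of our own; the numbers are illustrative, not results from the cited studies):

```python
import numpy as np

def along_track_error_km(semi_major_axis_km, phase_err_deg):
    """First-order along-track position error on a near-circular orbit:
    arc length = orbit radius * phase-angle bias (in radians)."""
    return semi_major_axis_km * np.radians(phase_err_deg)

# A 0.1 deg phase-angle bias on a 7000 km near-circular orbit corresponds
# to roughly a 12 km along-track position error:
print(along_track_error_km(7000.0, 0.1))
```

Even a small residual phase bias therefore maps to kilometre-level position errors, which is why it can defeat tracking methods that consume the position estimate directly.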
This paper addresses the performance degradation in video satellites' visual tracking of near-circular-orbit targets under simultaneous uncertainties in camera parameters and target position. Building on our team's previous research on the visual tracking of ground targets with known positions (Refs. [3,26,27]), which considered only camera parameter uncertainties, we extend the applicability of the method to space targets on near-circular orbits with phase-angle measurement deviations, and propose an adaptive visual tracking method that simultaneously accounts for uncertainties in the camera parameters and the target position. The main work of this manuscript is as follows. Firstly, the motion equation of a space target operating on a near-circular orbit around the Earth is linearized, and the orbit phase parameters containing uncertainties are separated out. Then, the parameters representing the uncertainties in the camera parameters and the target position are extracted from the camera's observation equation and the visual velocity equation as the variables to be estimated and are linearized, laying the foundation for adaptive parameter estimation. Afterwards, an adaptive visual tracking law and a parameter update law based on image feedback are designed, and the stability of the closed-loop system is rigorously proved using Barbalat's lemma. Finally, simulation verification is conducted: the proposed method is compared with the traditional position-based and image-based tracking methods, and the performance of each method is quantitatively analyzed using a defined image stability index. The simulations verify that when uncertainties exist in both the camera parameters and the target's orbital position, the traditional methods suffer from tracking failure or significant degradation in tracking accuracy, whereas the proposed method overcomes the effects of both uncertainties and achieves higher-precision tracking of the target.
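The first step, linearizing the near-circular motion in the uncertain phase, can be sketched as follows (our simplified planar illustration under an assumed circular-orbit geometry, not the paper's full derivation): a small phase bias enters the target position linearly through the derivative of the position with respect to the phase angle.

```python
import numpy as np

# Planar near-circular orbit: r(u) = a * [cos(u), sin(u)], u = u0 + n*t.
# A small phase bias du enters to first order as r(u + du) ~ r(u) + du * dr/du,
# separating the uncertain phase parameter from the known nominal motion.
def position(a, u):
    return a * np.array([np.cos(u), np.sin(u)])

def position_linearized(a, u, du):
    drdu = a * np.array([-np.sin(u), np.cos(u)])   # derivative w.r.t. phase
    return position(a, u) + du * drdu

a, u, du = 7000.0, 0.3, np.radians(0.1)            # illustrative values [km, rad]
exact = position(a, u + du)
approx = position_linearized(a, u, du)
print(np.linalg.norm(exact - approx))              # small linearization residual
```

The residual is of order a*du**2/2 (about 10 m here for a 0.1 deg bias), so the linear-in-phase model is an accurate basis for adaptive estimation of the phase parameter.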
The main contributions of this article can be summarized as follows:
- (1)
To address the challenge of video satellite visual tracking under simultaneous uncertainties in camera parameters and target positions, this paper proposes a novel adaptive method. Unlike traditional approaches that suffer from performance degradation under such uncertainties, or existing adaptive methods that require precisely known target positions, the proposed technique estimates both types of uncertainties concurrently. It leverages real-time image feedback to update the unknown parameters and compute the visual tracking instructions, with the closed-loop system stability rigorously guaranteed.
- (2)
An image stability index is defined to quantitatively evaluate the tracking accuracy. Simulations comparing the proposed method with traditional position-based and image-based methods reveal that under concurrent uncertainties, the position-based method fails to track the target, while the proposed method reduces the steady-state image error by approximately an order of magnitude compared to the traditional image-based approach. This improvement enables significantly higher tracking precision despite the presence of dual uncertainties.
The remainder of this paper is structured as follows. Section 2 presents the visual tracking modeling, including the tracking observation model and the motion model of the video satellite. Section 3 details the design of the adaptive staring method, which involves formulating appropriate parameters to be estimated for real-time updates of both the target position parameters and the camera parameters, together with a strict stability analysis of the closed-loop system. Section 4 presents the simulation analysis, and Section 5 concludes the study.
4. Results and Discussion
In this section, the designed adaptive visual tracking method (39) is compared with the traditional position-based tracking method and the traditional image-based tracking method.

The position-based tracking method (51) was designed according to Ref. [11]. In its expression, the control gains are positive definite diagonal matrices, and the desired angle and angular velocity of the pan-tilt system are computed from the attitude error defined in Ref. [11], the kinematics of the pan-tilt, and the measured target position.

The image-based tracking method (52) was designed according to Ref. [44]. Please refer to Appendix B for the specific controller design process and parameter meanings.
4.1. Simulation Parameters Setting
The orbit elements of the video satellite at the initial moment are shown in Table 1, and the orbit elements of the target running on a near-circular orbit, together with the initial estimation errors, are shown in Table 2. The theoretical and real parameters of the onboard camera are shown in Table 3. The parameters of the proposed adaptive tracking method are shown in Table 4, the parameters of the position-based tracking method are and , and the parameters of the traditional image-based method (52) are shown in Table 5. The initial attitude quaternion and angular velocity of the video satellite are and . The physical parameters and the initial motion state of the pan-tilt system are shown in Table 6. To better simulate the sensor noise present in practical imaging, a deviation is introduced to the actual projection points of the target on the image plane, with the standard deviation in both the horizontal and vertical directions set to . The image sampling time interval of the onboard camera is 0.05 s. The external disturbances of the pan-tilt system are set to , and the external environment disturbance torque exerted on the satellite body is set to .
In order to quantitatively analyze the image stability of the target tracking performance of each method, an image stability index composed of the pixel deviation between the target projection point and the expected point is defined as follows:

$$ S = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2}\left\|\mathbf{p}(t)-\mathbf{p}_d\right\|\,\mathrm{d}t \quad (53) $$

where $\mathbf{p}(t)$ denotes the target's projection point on the image plane and $\mathbf{p}_d$ the expected projection point. This index measures the average image error during the time period from $t_1$ to $t_2$. In the simulation, $t_2 = 10$ s is set as the end of the simulation, and $t_1$ is set as 6 s.
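A discrete approximation of this average image error can be computed as follows (our own sketch with a synthetic error trajectory; the 0.05 s sampling step matches the camera interval stated above):

```python
import numpy as np

def stability_index(errors_px, t, t1=6.0, t2=10.0):
    """Mean Euclidean pixel error over the evaluation window [t1, t2]."""
    errors_px = np.asarray(errors_px, float)
    t = np.asarray(t, float)
    mask = (t >= t1) & (t <= t2)
    return np.linalg.norm(errors_px[mask], axis=1).mean()

# Synthetic converged trajectory: a small steady-state offset plus Gaussian
# pixel noise, standing in for a real tracking run (illustration only).
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.05)
residual = np.full((t.size, 2), 0.5)             # 0.5 px steady-state offset
noise = rng.normal(0.0, 0.3, size=(t.size, 2))   # Gaussian sensor noise
print(stability_index(residual + noise, t))
```

Restricting the window to the final seconds, as done here, isolates the steady-state accuracy from the initial transient.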
4.2. Simulation Results and Analysis
4.2.1. Simulation Results of the Two Traditional Methods
The simulation results of the position-based tracking method (51), including the variation curves of the angular velocity and the control torque of the pan-tilt system as well as the target's projection coordinates and trajectory on the image plane, are shown in Figure 4, Figure 5, Figure 6 and Figure 7; the corresponding results of the traditional image-based tracking method (52) are shown in Figure 8, Figure 9, Figure 10 and Figure 11.
Because the initial conditions are identical, the coordinates of the target's projection on the image plane are the same at the initial moment (apart from slight differences caused by image measurement noise). Owing to the deviation between the target projection point and the image center point, the image-based tracking method (52) generates real-time instructions from the image deviation between the current image coordinate and the center point. By adjusting the pan-tilt's two axes, the target projection gradually approaches the image center and ultimately stabilizes in the area near the center point (the green center point in Figure 10). For the position-based tracking method (51), it can be seen from Figure 4, Figure 5, Figure 6 and Figure 7 that, although the onboard camera's pointing also reaches a stable state, the target projection gradually moves toward the edge of the image plane and eventually disappears from the camera's field of view, resulting in tracking failure.

The difference in tracking performance between the two methods can be explained as follows. For the position-based tracking method (51), the expected angle and angular velocity of the pan-tilt are computed directly from the target's measured position and the camera parameters. Because both of these critical inputs are biased, the control system generates control instructions with deviations; in this state, a significant directional deviation arises between the camera's optical axis and the target direction, resulting in the loss of the target image. For the image-based tracking method (52), by contrast, the expected state does not depend directly on the target position information but is computed in real time from the deviation between the target's projection coordinates on the image plane and the expected projection point. Its tracking performance is therefore affected only by the uncertainty in the camera parameters, and this influence is smaller than when both uncertainties act simultaneously. Consequently, the target ultimately remains near the center of the camera's field of view, achieving more effective visual tracking.
4.2.2. Simulation Results of the Adaptive Method
The results of the adaptive tracking method (39) proposed in this paper are shown in Figure 12, Figure 13, Figure 14 and Figure 15. It can be seen that under the action of the adaptive tracking method, the target projection eventually converges to the center of the imaging plane, achieving high-precision visual tracking of the target. Comparing Figure 10 and Figure 14 shows that under the adaptive tracking method, the distance between the target's final projection point and the expected projection point is smaller than that of the traditional image-based tracking method, indicating that the adaptive method achieves higher tracking accuracy.
Table 7 lists the values of the image stability index (53) obtained with the different tracking methods under various simulation conditions. In the table, Condition 1 involves uncertainties in both the camera parameters and the target position; Condition 2 involves only camera parameter uncertainty; Condition 3 involves only target position uncertainty; and Condition 4 is free from any uncertainty (i.e., the camera parameters are accurately calibrated and the target position is precisely measured). It can be seen that when the camera is calibrated and the target's position is precisely measured, the stability indices of the three methods are all relatively small, indicating that in the ideal situation without uncertainties, all three methods achieve effective, high-accuracy visual tracking of the target. When there is uncertainty in the target's position, the stability index of the position-based controller increases rapidly, indicating a rapidly growing tracking error, while that of the image-based controller remains approximately unchanged (the slight numerical differences are caused by image noise). This is because the input of the image-based controller comes entirely from the target's projection coordinates, so the measurement deviation of the target's position within the orbital plane does not affect its tracking accuracy. When uncertainties exist in the camera parameters, however, both the position-based method (51) and the image-based method (52) show a significant increase in the image stability index, with the image-based method showing the smaller increase, indicating that it is more robust to camera parameter uncertainties. Nevertheless, the method (39) proposed in this paper yields the smallest image stability index across all four conditions, with minimal variation among the different uncertainty conditions. This indicates that the proposed method possesses strong robustness when confronted with both types of uncertainty.
4.2.3. Robustness Test and Discussion of the Adaptive Method
To further investigate the robustness of the proposed tracking method against the two types of uncertainty, this subsection first examines the impact of varying the target's initial position error on the tracking accuracy by systematically adjusting the magnitude of this deviation. The resulting image stability indices are recorded in Table 8; the camera parameter settings remain consistent with the previous subsection.

A comparison of the data in the table shows that, for the proposed visual tracking method, the image stability index increases as the initial target positioning error grows under camera parameter uncertainty. This indicates that the accuracy of the proposed method degrades to some extent when the initial target positioning error becomes large. However, even when the positioning error angle increases tenfold, the image tracking accuracy remains higher than that of the traditional image-based method, demonstrating the robustness of the proposed method against initial target positioning deviations. This can be attributed to the fact that the initial positioning bias only affects the initial parameter estimates: as the tracking process proceeds, the proposed method adaptively updates the parameters using the target projection information, gradually compensating for the influence of the initial positioning deviation and ultimately achieving high-precision visual tracking of the target.
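This compensation mechanism can be illustrated with a toy gradient-type adaptive law (our simplified stand-in, not the paper's update law): for an error that is linear in the parameter estimate, e = W (theta_true - theta_hat), feeding the measured error back through the regressor washes out a biased initial estimate.

```python
import numpy as np

# Toy illustration of adaptive parameter estimation (not the paper's law):
# the measurable error e = W @ (theta_true - theta_hat) is fed back through
# the regressor W, so the estimate converges despite a biased start.
rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.3])
theta_hat = theta_true + np.array([0.5, 0.4])   # deliberately biased initial estimate
gamma = 0.05                                     # adaptation gain

for _ in range(2000):
    W = rng.normal(size=(2, 2))                  # time-varying, exciting regressor
    e = W @ (theta_true - theta_hat)             # error observable from measurements
    theta_hat = theta_hat + gamma * W.T @ e      # gradient-type update law

print(np.round(theta_hat, 4))                    # close to theta_true
```

As in the tracking results above, the size of the initial bias only lengthens the transient; with a persistently exciting feedback signal, the steady-state estimate is unaffected by where the adaptation started.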
Next, an extended study was conducted to investigate the adaptability of the proposed method under uncertainties in the orientation of the orbital plane. Assuming initial errors of 0.2° in both the orbital inclination and the right ascension of the ascending node, while keeping the other simulation conditions consistent with those in Section 4.2.2, the obtained image stability index was 8.6235. Compared with the results in Section 4.2.2, the tracking accuracy shows a certain degree of degradation, yet it remains higher than that of the traditional image-based tracking method, whose index is 35.1385. The primary reason for the decline in accuracy is that deviations in the orientation of the orbital plane introduce a form of model error, which affects the direction of the parameter updates and the final estimated values. However, since the proposed method is essentially an image-feedback control strategy whose control law explicitly incorporates the image tracking error, the image feedback term can still drive the camera's optical axis in real time toward reducing the tracking error even in the presence of model errors, thereby helping to maintain the system's tracking performance to some extent.
It should be noted that factors such as actual orbital perturbations and non-circular motion may further affect the tracking accuracy of the proposed method. How to maintain high-precision tracking under more complex dynamic scenarios is a key issue to be addressed in future research.