1. Introduction
Video satellites have the characteristics of agile attitude maneuvering, real-time continuous video imaging, and the ability to independently complete tracking and observation [1,2,3]. Compared with traditional remote sensing satellites, they have unique advantages [4]: they can achieve continuous staring observation of ground or space targets, and they have broad application prospects in resource exploration, Earth observation, and other fields. Owing to these features, many countries and agencies have attached great importance to research on video satellites and have launched a number of them in recent years, such as the Skybox [5], LAPAN-TUBSAT [6], SkySat [7], Tiantuo-2 [8], and Jilin-1 [9,10] satellites, as a supplement to traditional remote sensing or ground-based observation methods.
The staring control of a video satellite usually requires the camera's optical axis to point towards the target, which keeps the target projection at the center of the camera's imaging plane and achieves ideal observation results. Maintaining a certain margin between the projection and the edge of the image plane also prevents the target from leaving the camera's field of view, ensuring the continuity of tracking observation. Because of the orbital motion of the video satellite, relative displacement between the target and the satellite exists; to achieve tracking and observation, it is therefore necessary to design attitude controllers for the video satellite.
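For intuition, consider a minimal pinhole-projection sketch (the notation here is illustrative; the full projection model is established in Section 2). A target at camera-frame coordinates $(x_c, y_c, z_c)$ with focal length $f$ projects to

$$
\begin{bmatrix} u \\ v \end{bmatrix} = \frac{f}{z_c} \begin{bmatrix} x_c \\ y_c \end{bmatrix},
$$

so the projection sits at the image center $(u, v) = (0, 0)$ exactly when the target lies on the optical axis ($x_c = y_c = 0$), which is why staring control steers the satellite attitude to point the optical axis at the target.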
Early research on video satellite tracking control required the prior geographic position information of ground targets as the input of the attitude controller, in order to calculate attitude errors and then implement staring control [1,11,12,13,14]. This type of method is a position-based tracking control that does not involve the image information obtained by the camera, so prior position information of the target is required. Consequently, it cannot be applied to the observation of non-cooperative targets, for which such prior information is lacking. Therefore, some scholars have begun to study the staring control of video satellites based on visual image feedback [11,15,16,17,18,19,20,21], which can be applied to the observation of both cooperative and non-cooperative targets.
The methods used in the above studies all assume that the parameters of the onboard camera system are precisely calibrated. However, calibrating cameras in orbit is difficult and time-consuming, making it hard to meet the rapid response requirements of increasingly complex space-based observation tasks. Moreover, over a long period of orbital motion, the camera is affected by factors such as heat and vibration, and its optical properties may deviate from their nominal values. Adopting a method that assumes precise camera calibration will therefore degrade tracking control accuracy. Currently, two main methodologies are used to address this issue. One is the model revision method, which compensates, to a certain extent, for the parameter deviations caused by specific factors (thermal deformation [22], geometric deformation of photosensitive elements [23], etc.), but it is difficult for this approach to meet the demand for fast response. The other method compensates for the impact of parameter deviations through control algorithms. At present, this method has been studied to some extent in the fields of ground robots [24,25,26,27,28,29,30,31,32] and drones [33,34,35,36,37,38]. However, research on target tracking control by video satellites with uncalibrated in-orbit cameras is still relatively scarce.
The image-based control method requires establishing the relationship between the position of the target relative to the video satellite and its projection point coordinates on the camera's imaging plane. The control law is computed from the deviation between the current projection point and the expected point; under the resulting control maneuver, the attitude of the video satellite changes, gradually moving the target projection point to the expected position. The generation of control commands therefore depends on the rate of change of the projection point coordinates, i.e., the visual velocity. In reality, however, visual velocity is difficult to measure directly. To obtain it as a controller input, traditional methods compute it indirectly by time differencing of the pixel coordinates; that is, the differential estimate of the visual velocity is obtained by dividing the change in the target's projected coordinates between adjacent frames by the time interval. The accuracy of this difference operation depends directly on the time interval: the smaller the interval, the higher the visual velocity accuracy obtained. However, the time interval is limited first by the video frame rate of the onboard camera; for example, the video frame rate of Tiantuo-2 and Jilin-1 is 25 frames per second, while that of the Surrey V-1C video satellite can reach 100 frames per second. Hence, even with the same control law, different control effects can occur on video satellites with different frame rates. The difference operation is also influenced by algorithms and hardware, and the significant differences in image processing speed between them limit the rapid response capability of video satellites. In addition, the measured pixel coordinates inevitably contain noise, and the time difference operation amplifies its impact (see the numerical sketch below), which further degrades control accuracy. The visual velocity obtained by differencing is therefore not only unsmooth but also susceptible to noise, which reduces control accuracy and stability. Thus, it is necessary to design adaptive visual tracking control methods that treat the visual velocity as an unknown variable, without resorting to differential methods. However, there is still relatively little research on how to obtain the visual velocity when it is difficult to acquire.
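As a concrete illustration of this noise amplification, the following sketch simulates a drifting projection track at two assumed frame rates and estimates the visual velocity by finite differences. All numbers (noise level, drift rate, frame rates) are hypothetical and chosen only for illustration; the standard deviation of the difference quotient grows roughly as $\sqrt{2}\,\sigma/\Delta t$, so a shorter frame interval amplifies the measurement noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def finite_difference_velocity(pixels, dt):
    """Estimate visual velocity as (p[k+1] - p[k]) / dt for each frame pair."""
    return np.diff(pixels) / dt

t_end = 2.0        # seconds of simulated video
sigma = 0.5        # std of pixel-measurement noise (assumed value)
true_rate = 10.0   # true image-plane drift rate, pixels per second (assumed)

for fps in (25, 100):  # e.g., Tiantuo-2/Jilin-1 vs. Surrey V-1C frame rates
    dt = 1.0 / fps
    t = np.arange(0.0, t_end, dt)
    true_pixels = true_rate * t                        # ideal projection track
    measured = true_pixels + rng.normal(0.0, sigma, t.size)
    v_hat = finite_difference_velocity(measured, dt)
    # Noise std of the difference quotient is roughly sqrt(2)*sigma/dt,
    # so the higher frame rate yields a noisier velocity estimate.
    print(f"{fps:3d} fps: velocity-estimate std = {v_hat.std():6.1f} px/s "
          f"(predicted ~ {np.sqrt(2) * sigma / dt:6.1f} px/s)")
```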
This manuscript comprehensively considers the tracking control of video satellites under the conditions of an uncalibrated camera and unknown visual velocity. First, we redesign the control law proposed in Ref. [39] by constructing an adaptive law to estimate the visual velocity of the target, so that, unlike the reference, it does not rely on differential calculations. Combined with real-time camera parameter estimation, the reference attitude trajectory, the parameter update law, and the tracking controller are all computed from the estimated visual velocity. Simulations demonstrate improved noise robustness and smoother tracking compared with the results of the original reference.
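To convey the general idea (this is not the adaptive law derived in Section 3, only a generic sketch with assumed gains), a visual velocity estimate can be driven by the integrated position error rather than by explicit time differencing, as in the following simple second-order observer:

```python
def observer_step(p_meas, p_hat, v_hat, dt, l1=20.0, l2=100.0):
    """One Euler step of the observer
         p_hat' = v_hat + l1*e,  v_hat' = l2*e,  e = p_meas - p_hat.
    The velocity estimate integrates the position error, so measurement
    noise is filtered rather than amplified by 1/dt as in differencing.
    Gains l1 and l2 are assumed values, not taken from this paper."""
    e = p_meas - p_hat
    return p_hat + dt * (v_hat + l1 * e), v_hat + dt * (l2 * e)

# Minimal usage: a projection drifting at 10 px/s, sampled at 25 fps.
p_hat, v_hat, dt = 0.0, 0.0, 1.0 / 25.0
for k in range(100):
    p_meas = 10.0 * k * dt          # noiseless measurement for brevity
    p_hat, v_hat = observer_step(p_meas, p_hat, v_hat, dt)
print(f"estimated visual velocity: {v_hat:.2f} px/s")  # converges near 10
```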
The remainder of this paper proceeds as follows. In Section 2, the physical models of this article, including the video satellite motion model and the camera projection model, are established. In Section 3, we describe the model for estimating visual velocity, redesign the control law, and rigorously prove the stability of the closed-loop system. In Section 4, we compare the method with the differential calculation of visual velocity through simulation, and we end with some conclusions and open problems in Section 5.