Article

Spacecraft Staring Attitude Control for Ground Targets Using an Uncalibrated Camera

College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Aerospace 2022, 9(6), 283; https://doi.org/10.3390/aerospace9060283
Submission received: 15 March 2022 / Revised: 15 May 2022 / Accepted: 23 May 2022 / Published: 24 May 2022
(This article belongs to the Section Astronautics & Space Science)

Abstract

Previous staring attitude control techniques utilize the geographic location of a ground target to dictate the direction of the camera’s optical axis, while the assembly accuracy and the internal structure of the spaceborne camera are not considered. This paper investigates the image-based staring controller design of a video satellite in the presence of uncertain intrinsic and extrinsic camera parameters. The dynamical projection model of the ground target on the image plane is firstly established, and then we linearly parameterize the defined projection errors. Furthermore, a potential function and a self-updating rule are introduced to estimate the parameters online by minimizing the projection errors. As the parameters are updating constantly, an adaptive control algorithm is developed, so that the errors between the current and the desired projections of the ground target converge to zero. The stability is proved using Barbalat’s lemma. Simulation results show that the designed controller can successfully move the target’s projection to the desired coordinate even though the camera parameters are unknown.

1. Introduction

With the increasing importance of Earth observation projects, staring imaging outperforms other sensing techniques owing to its unique capability of capturing continuous images of a ground target [1,2]. As its name suggests, staring control requires the camera to constantly point to the target, so images can be obtained in which the target is always at the center. To achieve this purpose, the optical axis of the camera is supposed to be aimed at the target throughout the whole observation phase. Many video satellites (e.g., TUBSAT [3], Tiantuo-2, Jilin-1) are equipped with a staring mode and can therefore be applied in various scenarios, such as emergency rescue, target monitoring, and so on [4,5,6]. In a staring control case, the satellite is moving along its orbit while the ground target is not stationary, as it is fixed on the rotating Earth's surface, leading to a time-varying relative motion. Therefore, a dedicated attitude controller should be designed to keep the satellite staring at the target.
Conventional methods for staring control mainly require the relative position between the satellite and the target, normally obtained via orbital data and geographic information, respectively. Furthermore, staring imaging of a single point constrains only the optical axis, so the rotation about the optical axis is free. Refs. [2,7] both propose PD-like controllers to achieve staring imaging, while Refs. [8,9,10] pursue optimality during the attitude maneuver. Ref. [11] realizes a similar real-time optimal control method with an emphasis on pointing accuracy. However, the above studies have not considered image feedback, even though the image error directly and precisely reflects the controller's accuracy. In light of this, it is necessary to make use of the camera's ability to achieve more precise staring.
Besides traditional methods, whose attitude control torques are generated by establishing the inertial geometry and decomposing the relative orientation into different rotation angles, state-of-the-art image-processing technologies [12,13,14,15] have made it possible to develop a novel image-based staring controller relying on the target's projection on the image plane. As cameras play an essential role in many engineering applications [16,17,18,19,20,21,22], various image-based control methods have been developed. As in staring control, some of these control schemes use the image to obtain the orientation of the target point. For example, Refs. [23,24] study positioning control of robots, Ref. [25] conducts research on motion control of an unmanned aerial vehicle (UAV), and Ref. [26] uses images for space debris tracking. However, the above studies either neglect the uncertainties of the camera or do not consider the spacecraft kinematics and dynamics. A spaceborne camera is hard to calibrate in a complicated working environment such as space; therefore, this paper focuses on analyzing an image-based adaptive staring attitude controller for a video satellite using an uncalibrated camera.
To take advantage of the images, the projection model should be analyzed. The target's projection on the image plane is determined by the relative position between the target and the satellite in inertial space, the satellite's attitude and the camera's configuration. The camera configuration consists of the intrinsic structure and the extrinsic mounting position and orientation. The camera parameters are to be properly defined so that they can be linearly parameterized and thereafter estimated online. It is worth noting that the relative orientation in staring imaging is influenced by both the attitude and the orbital motion, which is different from a robotic model. To conclude the discussion, for a video satellite whose camera is uncalibrated, the image-based staring attitude controller is built upon a thorough analysis of the camera structure.
This paper differs from traditional staring attitude control methods by focusing on an image-based adaptive algorithm that accommodates the unknown camera parameters; therefore, the kinematic relationship between the image and the attitude is first established. Through linear parameterization, the negative gradients of the projection errors are chosen as the direction of parameter adjustment. The estimated parameters and the image information are then adopted to formulate the staring controller, which directs the target's projection to the desired coordinates. A potential function is also introduced to guarantee the controller's stability. The convergence of the ground target's projection to its desired location indicates that the optical axis reaches the desired orientation. Finally, the simulation shows the trajectory of the projection on the image plane. As the projection moves along the trajectory, the image errors, as well as the estimated projection errors, approach zero.
The remainder of this paper is organized as follows. The camera modeling is introduced in Section 2, where the projection kinematics are derived for a video satellite. In Section 3, we propose the adaptive controller, including the parameter extraction and estimation. Simulation is presented in Section 4 and the results demonstrate the effectiveness of our controller. Conclusions are drawn in the last section.

2. Problem Formulation

This section starts with a brief introduction of satellite attitude kinematics and dynamics, and then establishes the camera projection model between the target’s position in the Earth-centered inertial (ECI) frame and its pixel coordinates on the image plane. Finally, the projection kinematics in the form of pixel coordinates are derived.

2.1. Attitude Kinematics and Dynamics

A quaternion q, which includes a scalar part q_0 and a vector part q_v = [q_1, q_2, q_3]^T, is adopted to describe the attitude:
q = \cos(\phi/2) + r\,\sin(\phi/2) = q_0 + q_v \qquad (1)
where r is the Euler axis and \phi is the rotation angle. The quaternion avoids the singularity of Euler angles and must satisfy the normalization condition \|q\| = 1. The attitude kinematics and dynamics of the satellite, modeled as a rigid body, are given by
\begin{bmatrix} \dot q_v \\ \dot q_0 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} q_0 E_3 + \mathrm{sk}(q_v) \\ -q_v^T \end{bmatrix}\omega, \qquad J\dot\omega = -\omega \times J\omega + U \qquad (2)
where E_3 is the 3 × 3 identity matrix, J is the inertia matrix of the satellite, and \omega represents the angular velocity of the satellite relative to the inertial frame expressed in the body frame. U is the attitude control torque. The operator \mathrm{sk}(\cdot) is defined as
\mathrm{sk}(q_v) = \begin{bmatrix} 0 & -q_3 & q_2 \\ q_3 & 0 & -q_1 \\ -q_2 & q_1 & 0 \end{bmatrix} \qquad (3)
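As a concrete illustration of Equations (2) and (3), the following minimal sketch (Python/NumPy; the function names and the explicit-Euler integration are our own choices, not taken from the paper) propagates the attitude state by one time step:
```python
import numpy as np

def sk(v):
    """Skew-symmetric matrix of Eq. (3): sk(v) @ w equals np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def attitude_step(q, omega, J, U, dt):
    """One explicit-Euler step of the kinematics/dynamics in Eq. (2).

    q     : quaternion [q0, q1, q2, q3] (scalar part first)
    omega : body angular velocity (rad/s)
    J     : 3x3 inertia matrix
    U     : control torque (N m)
    """
    q0, qv = q[0], q[1:]
    qv_dot = 0.5 * (q0 * np.eye(3) + sk(qv)) @ omega
    q0_dot = -0.5 * qv @ omega
    omega_dot = np.linalg.solve(J, -np.cross(omega, J @ omega) + U)
    q_next = np.concatenate(([q0 + dt * q0_dot], qv + dt * qv_dot))
    q_next /= np.linalg.norm(q_next)   # restore the unit-norm constraint ||q|| = 1
    return q_next, omega + dt * omega_dot
```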
In traditional attitude tracking controllers, including staring control, a desired quaternion q_d is first designed and then the error quaternion q_e is obtained. According to different control strategies, U is calculated based on q_e and the angular velocity error \omega_e. For an image-based staring control case, an alternative attitude representation is needed, as the attitude errors are embodied in the image errors, i.e., the pixel coordinate errors between the current and desired projections. Therefore, we measure the relative attitude via image recognition. Inevitably, an uncalibrated camera introduces extra uncertainties into the images. For this reason, the analysis of a camera model is necessary.

2.2. Earth-Staring Observation

Earth-staring observation requires the camera's optical axis to point towards the ground target for a period of time. The scenario is shown in Figure 1. O_eX_eY_eZ_e is the ECI frame. O_e and O_b are the centers of mass of the Earth and the satellite, respectively. The satellite carrying the camera moves along its orbit, and the ground target T is located on the Earth's surface while rotating around Z_e at the angular velocity \Omega_e. The satellite's position expressed in ECI is {}^iR_{eb}, and M_{ib} represents the rotation matrix from ECI to the body frame. Define the homogeneous transformation matrix from the inertial frame to the body frame, T_h \in R^{4\times4}:
T_h = \begin{bmatrix} M_{ib} & -M_{ib}\,{}^iR_{eb} \\ 0_{1\times3} & 1 \end{bmatrix} \qquad (4)
{}^iR_{eT} is the target's position in ECI, and {}^bR_{bT} is the vector from the satellite to the target expressed in the body frame. Homogeneous coordinates are adopted to better describe the transformation. According to the geometric relationship, we have
\begin{bmatrix} {}^bR_{bT} \\ 1 \end{bmatrix} = T_h \begin{bmatrix} {}^iR_{eT} \\ 1 \end{bmatrix} \qquad (5)
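For illustration, a short sketch (Python/NumPy; the helper names are ours) that assembles T_h of Equation (4) and applies the transform of Equation (5):
```python
import numpy as np

def homogeneous_inertial_to_body(M_ib, R_eb_i):
    """T_h of Eq. (4): maps homogeneous ECI coordinates into the body frame."""
    T_h = np.eye(4)
    T_h[:3, :3] = M_ib
    T_h[:3, 3] = -M_ib @ R_eb_i      # translation: -M_ib times the satellite position in ECI
    return T_h

def target_in_body_frame(T_h, R_eT_i):
    """Eq. (5): target position transformed from ECI into the body frame."""
    return (T_h @ np.append(R_eT_i, 1.0))[:3]
```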

2.3. The Intrinsic Camera Model

Inside the camera, the target is projected onto the image plane through the lens. Assume the camera has a focal length f and a pixel size of d_x \times d_y. Figure 2 depicts the camera frame O_cX_cY_cZ_c and the 2D pixel frame o-uv, whose intersection with the optical axis is (u_0, v_0)^T. \varphi is the angle between the axes u and v. The target expressed in the camera frame is {}^cR_{cT} = (x_c, y_c, z_c)^T, and its projection is y = (u, v)^T. Thus, we have the following projection transformation:
z_c \begin{bmatrix} y \\ 1 \end{bmatrix} = \Pi \begin{bmatrix} {}^cR_{cT} \\ 1 \end{bmatrix} \qquad (6)
where \Pi \in R^{3\times4} is defined as
\Pi = \begin{bmatrix} f/d_x & -(f/d_x)\cot\varphi & u_0 & 0 \\ 0 & f/(d_y\sin\varphi) & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \qquad (7)
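As an illustration, the intrinsic matrix of Equation (7) can be assembled as follows (a sketch with our own helper name; the skew angle φ defaults to 90°, i.e., rectangular pixels):
```python
import numpy as np

def intrinsic_matrix(f, dx, dy, u0, v0, phi=np.pi / 2):
    """Pi of Eq. (7) for focal length f, pixel size dx x dy and principal point (u0, v0).

    For phi = pi/2, np.tan(phi) is a very large finite number, so the skew term is
    effectively zero, as expected for rectangular pixels.
    """
    return np.array([
        [f / dx, -f / (dx * np.tan(phi)), u0, 0.0],
        [0.0,     f / (dy * np.sin(phi)), v0, 0.0],
        [0.0,     0.0,                    1.0, 0.0],
    ])
```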

2.4. The Extrinsic Camera Model

The position and attitude of the camera frame O_cX_cY_cZ_c with respect to the body frame O_bX_bY_bZ_b are displayed in Figure 3. {}^cR_{bc} represents the position of O_c with respect to the body frame origin O_b. Similarly, we define the homogeneous transformation matrix from the body frame to the camera frame, T \in R^{4\times4}:
T = \begin{bmatrix} M_{bc} & -{}^cR_{bc} \\ 0_{1\times3} & 1 \end{bmatrix} \qquad (8)
The target's position in the camera frame is {}^cR_{cT} and in the body frame it is {}^bR_{bT}. The transformation between them is given by
\begin{bmatrix} {}^cR_{cT} \\ 1 \end{bmatrix} = T \begin{bmatrix} {}^bR_{bT} \\ 1 \end{bmatrix} \qquad (9)

2.5. The Projection Kinematics of Staring Imaging

According to Equations (6) and (9), we have
z_c \begin{bmatrix} y \\ 1 \end{bmatrix} = N \begin{bmatrix} {}^bR_{bT} \\ 1 \end{bmatrix} \qquad (10)
where the projection matrix N \in R^{3\times4} is defined by N = \Pi\,T and its elements are denoted n_{ij} (i = 1, 2, 3; j = 1, 2, 3, 4). Combining (5) and (10), we obtain
\begin{bmatrix} y \\ 1 \end{bmatrix} = \frac{1}{z_c}\, N\, T_h \begin{bmatrix} {}^iR_{eT} \\ 1 \end{bmatrix} \qquad (11)
The above equation reveals the mapping relation between the target's position in ECI and its projection coordinates on the image plane. The matrix N reflects the camera's role in the transformation, while T_h contains the effects of the satellite's attitude and orbital motion. Define
N \triangleq \begin{bmatrix} n_1^T \\ n_2^T \\ n_3^T \end{bmatrix} = \begin{bmatrix} P \\ n_3^T \end{bmatrix} \qquad (12)
where P \in R^{2\times4} is the matrix consisting of the first two rows of N, and n_3^T \in R^{1\times4} is the third row. To derive the kinematic equations more clearly, we will explicitly denote the time-varying states. Equation (11) can be rewritten as
y(t) = \frac{1}{z_c(t)}\, P\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix}, \qquad z_c(t) = n_3^T\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix} \qquad (13)
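Putting Equations (10)-(13) together, the pixel coordinates can be computed as in the sketch below (Python/NumPy; the function name is ours, and the full projection matrix N = Π T is assumed to be given):
```python
import numpy as np

def project_target(N, T_h, R_eT_i):
    """Eq. (13): pixel coordinates y and depth z_c of the ground target.

    N      : 3x4 projection matrix (N = Pi @ T)
    T_h    : 4x4 homogeneous inertial-to-body transform of Eq. (4)
    R_eT_i : target position in ECI, shape (3,)
    """
    X = T_h @ np.append(R_eT_i, 1.0)     # homogeneous target position in the body frame
    P, n3 = N[:2, :], N[2, :]            # Eq. (12): first two rows and third row of N
    z_c = n3 @ X                         # depth of the target along the optical axis
    y = (P @ X) / z_c                    # pixel coordinates (u, v)
    return y, z_c
```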
For simplicity, let n_{3(3)}^T denote the vector formed by the first three elements of n_3^T, and let P_3 denote the matrix formed by the first three columns of P. Differentiating the depth z_c(t) of the target in the camera frame, we obtain
\dot z_c(t) = n_3^T \begin{bmatrix} \mathrm{sk}\big(M_{ib}(t)\,({}^iR_{eT}(t) - {}^iR_{eb}(t))\big) & -M_{ib}(t) \\ 0_{1\times3} & 0_{1\times3} \end{bmatrix} \begin{bmatrix} \omega(t) \\ {}^iV_{eb}(t) \end{bmatrix} + n_3^T\, T_h(t) \begin{bmatrix} {}^iV_{eT}(t) \\ 0 \end{bmatrix} = n_{3(3)}^T\, \mathrm{sk}\big(M_{ib}(t)\,({}^iR_{eT}(t) - {}^iR_{eb}(t))\big)\,\omega(t) + n_{3(3)}^T\, M_{ib}(t)\,\big({}^iV_{eT}(t) - {}^iV_{eb}(t)\big) \qquad (14)
To simplify the expression, we define
a(t) = n_{3(3)}^T\, \mathrm{sk}\big(M_{ib}(t)\,({}^iR_{eT}(t) - {}^iR_{eb}(t))\big), \qquad a_v(t) = n_{3(3)}^T\, M_{ib}(t)\,\big({}^iV_{eT}(t) - {}^iV_{eb}(t)\big) \qquad (15)
Thus we have
\dot z_c(t) = a(t)\,\omega(t) + a_v(t) \qquad (16)
Similarly, we define
A(t) = \big(P_3 - y(t)\, n_{3(3)}^T\big)\, \mathrm{sk}\big(M_{ib}(t)\,({}^iR_{eT}(t) - {}^iR_{eb}(t))\big), \qquad A_v(t) = \big(P_3 - y(t)\, n_{3(3)}^T\big)\, M_{ib}(t)\,\big({}^iV_{eT}(t) - {}^iV_{eb}(t)\big) \qquad (17)
Then the derivative of the image coordinates is given by
\dot y(t) = \frac{1}{z_c(t)}\, P\, \dot T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix} - \frac{\dot z_c(t)}{z_c(t)}\, y(t) + \frac{1}{z_c(t)}\, P\, T_h(t) \begin{bmatrix} {}^iV_{eT}(t) \\ 0 \end{bmatrix} = \frac{1}{z_c(t)}\big(A(t)\,\omega(t) + A_v(t)\big) \qquad (18)
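The terms of Equations (15) and (17) can be evaluated directly from quantities already introduced; the sketch below (Python/NumPy, names ours) does so, with sk(.) being the skew operator of Equation (3):
```python
import numpy as np

def sk(v):
    """Skew-symmetric matrix of Eq. (3)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def staring_kinematics_terms(P3, n3_3, y, M_ib, R_eT_i, R_eb_i, V_eT_i, V_eb_i):
    """a(t), a_v(t) of Eq. (15) and A(t), A_v(t) of Eq. (17).

    P3   : 2x3, first three columns of P
    n3_3 : shape (3,), first three elements of n_3
    y    : shape (2,), current pixel coordinates
    """
    rel_pos_b = M_ib @ (R_eT_i - R_eb_i)       # relative position mapped into the body frame
    rel_vel_b = M_ib @ (V_eT_i - V_eb_i)       # relative velocity mapped into the body frame
    a = n3_3 @ sk(rel_pos_b)                   # row vector a(t)
    a_v = n3_3 @ rel_vel_b                     # scalar a_v(t)
    G = P3 - np.outer(y, n3_3)                 # common factor P3 - y * n_3(3)^T
    return a, a_v, G @ sk(rel_pos_b), G @ rel_vel_b

# Eqs. (16) and (18):  z_c_dot = a @ omega + a_v ;  y_dot = (A @ omega + A_v) / z_c
```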
Equations (16) and (18) are the staring imaging kinematics. From the prior information of the target point, the position {}^iR_{eT}(t) and velocity {}^iV_{eT}(t) of the ground point are already known. Moreover, noting that attitude and orbit determination provide the rotation matrix M_{ib}(t), the orbital position {}^iR_{eb}(t) and the velocity {}^iV_{eb}(t), the only uncertain parameters are P_3 and n_{3(3)}^T. A few characteristics of the kinematic equations are worth analyzing. First, the depth z_c(t) is not observable through the images, so the controller cannot access depth information. Second, the terms a_v(t) and A_v(t) contain the relative orbital motion and are not controllable, and we do not perform orbit maneuvers. This raises the problem that, if the relative motion between the target and the satellite is too fast, tracking the target may exceed the satellite's attitude maneuver capability, and the target can be lost from the field of view. Hence, the satellite should have sufficient angular maneuver capability to keep up with the relative rotation; refer to [2] for more analysis of the relative angular velocity. Third, the image-based kinematics (16) and (18) share the same feature as the quaternion-based kinematics (2), in that they are all linear with respect to \omega. Fourth, the unknown camera parameters appear in a(t), a_v(t), A(t) and A_v(t), so no accurate projection change rate can be derived from the kinematics, and a self-updating rule is to be proposed to estimate them online.

2.6. Control Objective

The control objective is to guarantee that the projection coordinate y(t) approaches its desired location y_d on the image plane. The coordinate y(t) is extracted from the real-time images, and y_d is predetermined. Normally we expect to fix the target point at the center of the image to gain a better view; thus, without loss of generality, y_d = (u_0, v_0)^T is selected. To realize this purpose, an adaptive controller is to be designed to specifically address the camera parameters. Define the image error \Delta y(t) = y(t) - y_d; the control objective is that \Delta y(t) asymptotically converges to 0.
Figure 4 shows the control framework. The camera captures the target when it appears in the field of view, and the image is then processed so that the corresponding target coordinate y(t) is obtained. With y(t), the camera parameters are estimated online and are thereafter applied to the attitude controller. According to the kinematics and dynamics, the satellite finally accomplishes the staring attitude maneuver.

3. Controller Design

3.1. Parameter Definition

To analyze the exact parameters that need to be estimated, we rewrite Equation (13) as
z_c(t) \begin{bmatrix} y(t) \\ 1 \end{bmatrix} = N\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix} = N \begin{bmatrix} {}^bR_{bT}(t) \\ 1 \end{bmatrix} = \begin{bmatrix} P_3 \\ n_{3(3)}^T \end{bmatrix} {}^bR_{bT}(t) + \begin{bmatrix} n_{14} \\ n_{24} \\ n_{34} \end{bmatrix} \qquad (19)
Comparing the imaging kinematics (16) and (18) with (19), we notice that all the elements of N influence the projection coordinates, while the projection change rate is only affected by n_{3(3)}^T and P_3. We take all 12 elements of N as the parameters to be estimated and form a parameter vector:
\theta = \big[\, n_{ij} \,\big]^T, \quad i = 1, 2, 3;\ j = 1, 2, 3, 4 \qquad (20)
Theorem 1.
For any \rho \in R that is not 0, \theta and \rho\theta correspond to the same projection y(t).
Proof. 
Substituting the depth z_c(t) into the projection expression in (13), we have
y(t) = \frac{1}{z_c(t)}\, P\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix} = \frac{P\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix}}{n_3^T\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix}} = \frac{\rho\, P\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix}}{\rho\, n_3^T\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix}} \qquad (21)
It is clear that \theta and \rho\theta result in the same y(t). A similar property can be obtained for visual servoing applications [27]. □
Due to Theorem 1, the parameters defined in (20) are determined only up to a common scale factor with respect to the real camera parameters. Without loss of generality, we can fix n_{33} = 1 and estimate only the remaining parameters in \theta, which reduces the estimation complexity. We then define a new parameter vector with 11 elements:
\theta_p = \big[\, n_{11}, n_{12}, n_{13}, n_{14}, n_{21}, n_{22}, n_{23}, n_{24}, n_{31}, n_{32}, n_{34} \,\big]^T \qquad (22)
Theorem 2.
Assume n represents the number of elements in \theta_p. For any vector p \in R^{3\times1} (e.g., the angular velocity \omega), matrices Y(t, p) \in R^{2\times n}, y(t, p) \in R^{2\times1}, Z(t, p) \in R^{1\times n} and z(t, p) \in R can be found such that the matrices A(t) \in R^{2\times3} and a(t) \in R^{1\times3} have the following property:
A(t, \theta_p)\, p = Y(t, p)\,\theta_p + y(t, p), \qquad a(t, \theta_p)\, p = Z(t, p)\,\theta_p + z(t, p) \qquad (23)
Similarly, matrices Y_v(t) \in R^{2\times n}, y_v(t) \in R^{2\times1}, Z_v(t) \in R^{1\times n} and z_v(t) \in R can be found such that the matrices A_v(t) \in R^{2\times1} and a_v(t) \in R have the following property:
A_v(t, \theta_p) = Y_v(t)\,\theta_p + y_v(t), \qquad a_v(t, \theta_p) = Z_v(t)\,\theta_p + z_v(t) \qquad (24)
The proof is omitted here, since it is straightforward from Equations (15) and (17). Theorem 2 demonstrates that the imaging kinematics can be expressed in a form that is linear in \theta_p, which is the basis of the feasibility of estimation. The estimated parameters are denoted \hat\theta_p(t), where the hat indicates that the variable is an estimate rather than a real value. Correspondingly, this notation applies to all the estimated variables in the remainder of this paper.

3.2. Reference Attitude Trajectory

A PD-like controller requires the convergence of both \Delta y(t) and the angular velocity error \omega_e(t) = \omega(t) - \omega_d(t), where \omega_d(t) is the desired angular velocity. y_d is predetermined and time-invariant, while \omega_d(t) is time-varying and not as straightforward to obtain; it is therefore troublesome to directly design the desired angular velocity trajectory. Instead, a reference attitude trajectory, i.e., y_r(t) and \omega_r(t), is designed to cope with this problem. Define y_r(t) so that it meets the following condition:
\dot y_r(t) = \dot y_d - \lambda\,\Delta y(t) = -\lambda\,\Delta y(t) \qquad (25)
where \lambda is a positive scalar. We further define the reference image tracking error:
\delta\dot y(t) = \dot y(t) - \dot y_r(t) = \Delta\dot y(t) + \lambda\,\Delta y(t) \qquad (26)
Theorem 3.
If \delta\dot y(t) asymptotically converges to 0, then \Delta y(t) and \Delta\dot y(t) asymptotically converge to 0; if \delta\dot y(t) exponentially converges to 0, then \Delta y(t) and \Delta\dot y(t) exponentially converge to 0.
The theorem is obvious, so the proof is not listed here. Now the control objective is converted to guaranteeing the stability of \delta\dot y(t). Meanwhile, the reference angular velocity tracking error is obtained by
\delta\omega(t) = \omega(t) - \omega_r(t) \qquad (27)
The reference angular velocity trajectory \omega_r(t) is defined as
\omega_r(t) = \hat A^{+}(t)\big({-\hat z_c(t)\,\lambda\,\Delta y(t)} - \hat A_v(t)\big) \qquad (28)
where \hat A^{+}(t) is the pseudo-inverse of \hat A(t), the estimate of A(t). \hat A^{+}(t) is defined as
\hat A^{+}(t) = \hat A^{T}(t)\,\big(\hat A(t)\,\hat A^{T}(t)\big)^{-1} \qquad (29)
This pseudo-inverse exists only if \hat A(t) has a rank of 2. Since the values of \hat A(t) depend on \hat\theta_p(t), the rank of \hat A(t) is affected by the way \hat\theta_p(t) is updated.
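A minimal sketch of Equations (28) and (29) (Python/NumPy; names ours), assuming the estimated quantities are available and Â(t) has rank 2:
```python
import numpy as np

def reference_angular_velocity(A_hat, A_v_hat, z_c_hat, delta_y, lam):
    """omega_r of Eq. (28), using the right pseudo-inverse of Eq. (29).

    A_hat   : 2x3 estimate of A(t)
    A_v_hat : shape (2,) estimate of A_v(t)
    z_c_hat : estimated depth
    delta_y : shape (2,) image error y(t) - y_d
    lam     : positive scalar lambda of Eq. (25)
    """
    A_pinv = A_hat.T @ np.linalg.inv(A_hat @ A_hat.T)   # Eq. (29); requires rank(A_hat) = 2
    y_r_dot = -lam * delta_y                            # Eq. (25)
    return A_pinv @ (z_c_hat * y_r_dot - A_v_hat)
```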

3.3. Potential Function Design

Assume the 3 × 3 sub-matrix of the estimated projection matrix \hat N(t) is defined as
\hat N_3(t) = \begin{bmatrix} \hat P_3 \\ \hat n_{3(3)}^T \end{bmatrix} \qquad (30)
According to Ref. [28], if \hat N_3(t) has a rank of 3, the matrix \hat A(t) has a rank of 2. Hence, a potential function is designed as
U\big(\hat\theta_p(t)\big) = \frac{1}{e^{\,a\,|\hat N_3(t)|^2} - 1 + b} \qquad (31)
where |\cdot| denotes the determinant. Apparently, U(\hat\theta_p(t)) is always positive. The coefficients a and b are both positive as well, but b is very small, which is designed to avoid the singular situation where the denominator is 0. The potential function reaches its maximum 1/b when |\hat N_3(t)| = 0, i.e., when \hat N_3(t) has a rank less than 3. U(\hat\theta_p(t)) approaches 0 when \hat\theta_p(t) stays far from the neighborhood of |\hat N_3(t)| = 0. To ensure this, we should enhance the resistance of \hat\theta_p(t) to getting close to the singular region. In the parameter estimation subsection, the self-updating rule will incorporate the gradient of U(\hat\theta_p(t)) with respect to \hat\theta_p(t), which is given by
\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)} = -\frac{2a\,|\hat N_3(t)|\, e^{\,a\,|\hat N_3(t)|^2}}{\big(e^{\,a\,|\hat N_3(t)|^2} - 1 + b\big)^2}\,\frac{\partial |\hat N_3(t)|}{\partial\hat\theta_p(t)} \qquad (32)
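The potential function and its gradient can be evaluated numerically; the sketch below (Python/NumPy, names ours) reads |N̂_3(t)| as the determinant and, instead of the analytic derivative in Equation (32), approximates the gradient by finite differences — a simplification on our part:
```python
import numpy as np

def potential(N3_hat, a=1.0, b=1e-3):
    """U of Eq. (31): near its maximum 1/b when det(N3_hat) -> 0, near 0 far from singularity."""
    d = np.linalg.det(N3_hat)
    return 1.0 / (np.exp(a * d**2) - 1.0 + b)

def potential_gradient(theta_p_hat, build_N3_hat, a=1.0, b=1e-3, eps=1e-6):
    """Finite-difference approximation of dU/dtheta_p_hat in Eq. (32).

    build_N3_hat : callable mapping the 11-element parameter vector to the 3x3 matrix N3_hat
                   (an assumed helper, since the mapping depends on the chosen parameter layout).
    """
    grad = np.zeros(theta_p_hat.size)
    U0 = potential(build_N3_hat(theta_p_hat), a, b)
    for i in range(theta_p_hat.size):
        perturbed = np.asarray(theta_p_hat, dtype=float).copy()
        perturbed[i] += eps
        grad[i] = (potential(build_N3_hat(perturbed), a, b) - U0) / eps
    return grad
```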

3.4. Parameter Estimation

The estimation error is defined as \Delta\theta_p(t) = \hat\theta_p(t) - \theta_p. Equation (11) always holds for an ideal camera whose parameters are all calibrated. However, the real parameters may deviate from the ideal ones. Adopting the definition in [27], we define the following estimated projection error e(t) to represent the image deviation caused by \Delta\theta_p:
e(t) = \hat z_c(t)\, y(t) - \hat P(t)\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix} \qquad (33)
where y(t) is obtained from the image, and \hat z_c(t) and \hat P(t) are the estimated depth and parameters.
Theorem 4.
Given the estimated projection error e(t), there exists a matrix W_p(t) \in R^{2\times n} such that
e(t) = W_p(t)\,\Delta\theta_p(t) \qquad (34)
Proof. 
According to the projection Equation (11), with the real parameters we have
z_c(t)\, y(t) - P\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix} = 0 \qquad (35)
Thus, e(t) can be rewritten as
e(t) = \left(\hat z_c(t)\, y(t) - \hat P(t)\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix}\right) - \left(z_c(t)\, y(t) - P\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix}\right) = \big(\hat z_c(t) - z_c(t)\big)\, y(t) - \big(\hat P(t) - P\big)\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix} = \Big( y(t)\,\big(\hat n_3^T(t) - n_3^T\big) - \big(\hat P(t) - P\big) \Big)\, T_h(t) \begin{bmatrix} {}^iR_{eT}(t) \\ 1 \end{bmatrix} \qquad (36)
The terms \hat n_3^T(t) - n_3^T and \hat P(t) - P appear linearly in the last expression, so \Delta\theta_p(t) can be factored out of e(t). Considering that n_{33} and \hat n_{33} in \theta and \hat\theta are both fixed to 1, the difference n_{33} - \hat n_{33} can be eliminated in Equation (36). We can then conclude that e(t) is linear with respect to \Delta\theta_p(t), i.e., such a matrix W_p(t) exists. Apparently, W_p(t) consists of y(t), T_h(t) and {}^iR_{eT}(t). □
We propose the following self-updating rule for the parameters:
\dot{\hat\theta}_p(t) = -\Gamma^{-1}\left( Y_p^T(t)\,\delta\dot y(t) + W_p^T(t)\, K_1\, e(t) + K_2\,\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\,\|\delta\omega(t)\|^2 \right) \qquad (37)
\Gamma, K_1, K_2 and K_3 are all positive coefficient matrices, and Y_p^T(t) is a regressor matrix which does not contain any camera parameters. It is defined via the following equation:
\Delta\theta_p^T(t)\, Y_p^T(t) = \Big( \big(\hat z_c(t) - z_c(t)\big)\,\dot y_r(t) + \big(A_v(t) - \hat A_v(t)\big) + \big(A(t) - \hat A(t)\big)\,\omega(t) \Big)^T K_3 \qquad (38)
There are three components in the brackets of Equation (37). The first one involves the reference image tracking error, which will play a part in the stability proof. The second is the negative gradient of the estimated projection error, which drives e(t) down by updating \hat\theta_p(t). The last term is the negative gradient of the potential function, which mainly takes effect when \hat\theta_p(t) approaches a region that leads to \mathrm{rank}(\hat A(t)) < 2.
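One explicit-Euler step of the update law (37) could look like the sketch below (Python/NumPy; the regressor Y_p(t) of Equation (38) and the matrix W_p(t) of Equation (34) are assumed to have been assembled beforehand):
```python
import numpy as np

def parameter_update_step(theta_p_hat, Gamma, Y_p, W_p, K1, K2, e, delta_y_dot,
                          delta_omega, dU_dtheta, dt):
    """One explicit-Euler step of the self-updating rule in Eq. (37).

    theta_p_hat : shape (11,) current parameter estimate
    Gamma       : 11x11 positive-definite gain matrix
    Y_p         : 2x11 regressor of Eq. (38);  W_p : 2x11 matrix of Eq. (34)
    dU_dtheta   : gradient of the potential function, Eq. (32)
    """
    rhs = (Y_p.T @ delta_y_dot
           + W_p.T @ (K1 @ e)
           + K2 * dU_dtheta * np.dot(delta_omega, delta_omega))
    theta_p_hat_dot = -np.linalg.solve(Gamma, rhs)       # -Gamma^{-1} (...)
    return theta_p_hat + dt * theta_p_hat_dot
```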

3.5. Adaptive Staring Imaging Controller

The controller is given by
U(t) = \omega(t)\times J\omega(t) + J\,\dot\omega_r(t) - K_4\,\delta\omega(t) - \hat A^T(t)\, K_3\,\delta\dot y(t) - K_5\left\|\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\right\|\delta\omega(t) \qquad (39)
where K_4 and K_5 are positive coefficient matrices. For simplicity, we define two non-negative functions:
V_1(t) = \frac{1}{2}\,\delta\omega^T(t)\, J\,\delta\omega(t), \qquad V_2(t) = \frac{1}{2}\,\Delta\theta_p^T(t)\,\Gamma\,\Delta\theta_p(t) \qquad (40)
The derivatives of V_1(t) and V_2(t) are given respectively as follows
\dot V_1(t) = \delta\omega^T(t)\, J\,\delta\dot\omega(t) = \delta\omega^T(t)\, J\big(\dot\omega(t) - \dot\omega_r(t)\big) = \delta\omega^T(t)\big({-\omega(t)\times J\omega(t)} + U(t) - J\dot\omega_r(t)\big) = \delta\omega^T(t)\left({-K_4\,\delta\omega(t)} - \hat A^T(t) K_3\,\delta\dot y(t) - K_5\left\|\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\right\|\delta\omega(t)\right) = -\delta\omega^T(t) K_4\,\delta\omega(t) - \delta\omega^T(t)\,\hat A^T(t) K_3\,\delta\dot y(t) - \left\|\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\right\|\delta\omega^T(t) K_5\,\delta\omega(t) \qquad (41)
\dot V_2(t) = \Delta\theta_p^T(t)\,\Gamma\,\Delta\dot\theta_p(t) = -\Delta\theta_p^T(t)\left( Y_p^T(t)\,\delta\dot y(t) + W_p^T(t) K_1\, e(t) + K_2\,\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\,\|\delta\omega(t)\|^2 \right) = -\Delta\theta_p^T(t)\, Y_p^T(t)\,\delta\dot y(t) - e^T(t) K_1\, e(t) - \Delta\theta_p^T(t)\, K_2\,\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\,\|\delta\omega(t)\|^2 \qquad (42)
The term \delta\omega^T(t)\,\hat A^T(t)\, K_3\,\delta\dot y(t) in the expression of \dot V_1(t) can be decomposed into \omega^T(t)\,\hat A^T(t)\, K_3\,\delta\dot y(t) and -\omega_r^T(t)\,\hat A^T(t)\, K_3\,\delta\dot y(t). We have
\omega^T(t)\,\hat A^T(t) = \omega^T(t)\, A^T(t) + \omega^T(t)\big(\hat A^T(t) - A^T(t)\big) = \big(z_c(t)\,\dot y(t) - A_v(t)\big)^T + \omega^T(t)\big(\hat A^T(t) - A^T(t)\big) = \big(z_c(t)\,\dot y_r(t) + z_c(t)\,(\dot y(t) - \dot y_r(t)) - A_v(t)\big)^T + \omega^T(t)\big(\hat A^T(t) - A^T(t)\big) = \big(z_c(t)\,\dot y_r(t) - A_v(t) + z_c(t)\,\delta\dot y(t)\big)^T + \omega^T(t)\big(\hat A^T(t) - A^T(t)\big), \qquad \omega_r^T(t)\,\hat A^T(t) = \big(\hat z_c(t)\,\dot y_r(t) - \hat A_v(t)\big)^T \qquad (43)
Thus, with the definition of Y_p^T(t) in Equation (38), \delta\omega^T(t)\,\hat A^T(t)\, K_3\,\delta\dot y(t) is decomposed into
\delta\omega^T(t)\,\hat A^T(t)\, K_3\,\delta\dot y(t) = \big(\omega(t) - \omega_r(t)\big)^T \hat A^T(t)\, K_3\,\delta\dot y(t) = \Big( \big(z_c(t) - \hat z_c(t)\big)\,\dot y_r(t) - \big(A_v(t) - \hat A_v(t)\big) + \big(\hat A(t) - A(t)\big)\,\omega(t) + z_c(t)\,\delta\dot y(t) \Big)^T K_3\,\delta\dot y(t) = z_c(t)\,\delta\dot y^T(t)\, K_3\,\delta\dot y(t) + \Big( \big(z_c(t) - \hat z_c(t)\big)\,\dot y_r(t) - \big(A_v(t) - \hat A_v(t)\big) + \big(\hat A(t) - A(t)\big)\,\omega(t) \Big)^T K_3\,\delta\dot y(t) = z_c(t)\,\delta\dot y^T(t)\, K_3\,\delta\dot y(t) - \Delta\theta_p^T(t)\, Y_p^T(t)\,\delta\dot y(t) \qquad (44)
Now we define the Lyapunov function V(t) = V_1(t) + V_2(t). Substituting Equation (44) into Equation (41), we obtain the derivative of V(t):
\dot V(t) = \dot V_1(t) + \dot V_2(t) = -\delta\omega^T(t)\, K_4\,\delta\omega(t) - z_c(t)\,\delta\dot y^T(t)\, K_3\,\delta\dot y(t) - e^T(t)\, K_1\, e(t) - \left\|\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\right\|\delta\omega^T(t)\, K_5\,\delta\omega(t) - \Delta\theta_p^T(t)\, K_2\,\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\,\|\delta\omega(t)\|^2 \le -\delta\omega^T(t)\, K_4\,\delta\omega(t) - z_c(t)\,\delta\dot y^T(t)\, K_3\,\delta\dot y(t) - e^T(t)\, K_1\, e(t) - \big(k_{5\min} - k_{2\max}\,\|\Delta\theta_p(t)\|\big)\left\|\frac{\partial U(\hat\theta_p(t))}{\partial\hat\theta_p(t)}\right\|\|\delta\omega(t)\|^2 \qquad (45)
where k_{5\min} is the smallest eigenvalue of K_5, k_{2\max} is the largest eigenvalue of K_2, and \tau_{\min} is the smallest eigenvalue of \Gamma. Select proper values so that
k_{5\min} \ge k_{2\max}\,\sqrt{\frac{2V(0)}{\tau_{\min}}} \qquad (46)
In this way, we can guarantee
\dot V(t) \le 0 \qquad (47)
According to Equations (45) and (47), V(t) is bounded and its upper bound is V(0). Taking the expression of V(t) in (40) into account, \omega(t), \omega_r(t) and \hat\theta_p(t) are all bounded. Due to the definitions of \omega_r(t) and y_r(t), the boundedness of \omega_r(t) implies the boundedness of y(t) and y_r(t). The bounded \hat\theta_p(t) also indicates that the estimated projection error e(t) is bounded, as is obvious from Equation (34). Owing to the boundedness of the aforementioned variables, we can infer that \ddot V(t) is also bounded. According to Barbalat's Lemma, we can conclude that
\lim_{t\to\infty} \dot V(t) = 0 \qquad (48)
i.e.,
\lim_{t\to\infty} \delta\omega(t) = 0, \qquad \lim_{t\to\infty} \delta\dot y(t) = 0, \qquad \lim_{t\to\infty} e(t) = 0 \qquad (49)
As Theorem 3 suggests, we have
\lim_{t\to\infty} \Delta y(t) = 0, \qquad \lim_{t\to\infty} \Delta\dot y(t) = 0 \qquad (50)
Hence, the stability of the proposed adaptive staring imaging controller, given by (37) and (39), is proved. Although the camera is uncalibrated, the estimated projection error e(t) is defined to help estimate the parameters. As the parameters are updated online, the controller uses the extracted image coordinate y(t) and the angular velocity \omega(t) as inputs and computes the errors with respect to the reference trajectory. The control torques finally direct the projection of the target point to its desired location on the image plane and achieve staring observation.
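To summarize the loop, one possible implementation of a single control step of the controller (39) is sketched below (Python/NumPy, names ours; ω̇_r is approximated here by finite differencing of ω_r, which is our own implementation choice):
```python
import numpy as np

def control_torque(J, omega, omega_r, omega_r_prev, dt,
                   A_hat, K3, K4, K5, delta_y_dot, dU_dtheta_norm):
    """Control torque U(t) of Eq. (39).

    omega_r, omega_r_prev : current and previous reference angular velocity,
                            used to approximate omega_r_dot by finite differences
    dU_dtheta_norm        : norm of the potential-function gradient of Eq. (32)
    """
    omega_r_dot = (omega_r - omega_r_prev) / dt
    delta_omega = omega - omega_r
    return (np.cross(omega, J @ omega)              # gyroscopic term
            + J @ omega_r_dot                       # feedforward of the reference acceleration
            - K4 @ delta_omega                      # angular-velocity tracking feedback
            - A_hat.T @ (K3 @ delta_y_dot)          # image-space feedback
            - dU_dtheta_norm * (K5 @ delta_omega))  # damping near the singular region
```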

4. Simulation and Discussion

In this section, the proposed image-based adaptive staring controller and a conventional position-based controller are applied to realize the ground target observation.
At the initial time (12 Jul 2021 04:30:00.000 UTC), the ground target's location is given in Table 1 and the orbital elements of the satellite are listed in Table 2. The target is initially near the sub-satellite point. The real and theoretical camera parameters are listed in Table 3. M_321(·) denotes the rotation matrix with a 3-2-1 rotation sequence. The theoretical parameters describe the original state of the camera and are used as the initial estimates. Due to various causes, e.g., long-term oscillation, the real parameters deviate from the theoretical ones and reflect the current camera state. The image plane of the camera consists of 752 × 582 pixels. The desired projection location is the center of the plane, i.e., (u_0, v_0)^T. The initial attitude is presented in Table 4, where the camera initially points roughly toward the Earth's center.
In the given initial conditions, the ground target already appears in the image. The control torques are bounded by the maximum output U_max of the attitude actuators, i.e., reaction flywheels, which in our simulation is 0.1 N·m, so the inequality |U_i(t)| ≤ U_max holds for i = 1, 2, 3. The following two cases are simulated using the same uncalibrated camera and the same initial attitude and orbit conditions. In case 1, the conventional position-based controller only utilizes the target's location information, without taking advantage of the images. In case 2, we suppose the image processing algorithm detects the target's pixel coordinates, and the image-based adaptive controller outputs the control torques incorporating both the image and the location information.
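The actuator limit can be applied component-wise; a one-line sketch (NumPy) of the saturation used in both cases:
```python
import numpy as np

def saturate_torque(U, U_max=0.1):
    """Component-wise saturation |U_i| <= U_max (reaction-flywheel limit of 0.1 N m)."""
    return np.clip(U, -U_max, U_max)
```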

4.1. Case 1: Conventional Position-based Staring Controller

The conventional staring control methods are normally based on the location of the ground target. By designing the desired orientation and angular velocity, the optical axis of the camera is supposed to be aimed at the target in the ideally calibrated camera case. Using the uncalibrated camera and the initial conditions presented in this section, we adopt a position-based staring controller from [2]. The controller is
U(t) = -K_\delta\,\delta - K_\omega\,\Delta\omega
where K_δ and K_ω are coefficient matrices. Let δ = (α, β)^T, where α and β are the two rotation angles between the camera's optical axis and its desired orientation aiming at the target, and let Δω = ω − ω_d, where ω_d is the desired angular velocity. α, β and ω_d are designed based on the target's position; refer to the original article for more detailed definitions. Table 5 shows the coefficient values adopted in the simulation.
The target's trajectory on the image plane is shown in Figure 5, where the black box is the field of view. Figure 6 depicts the changes of the two rotation angles. The initial location of the ground point is at the upper right of the plane, marked by the start point. As the controller starts working, the target gradually moves out of the field of view, which means the camera temporarily cannot see the target. Since this controller depends only on position, it can still work without sight of the target. However, the end point shown in Figure 5 demonstrates that when the satellite finishes the attitude maneuver and is in the stable staring stage, the target is still absent from the image. This failure to observe the ground target results from the uncalibrated camera. According to the transformation matrix from the body frame to the camera frame, M_bc, in Table 3, the optical axis of the camera deviates by over 1° from its ideal orientation in the body frame. Hence, when the position-based controller considers the optical axis to be aimed at the target, the target is actually outside the field of view.

4.2. Case 2: Image-Based Adaptive Staring Controller

Table 6 shows the control coefficients used in the controller. The operator diag(·) denotes a block-diagonal matrix with the listed elements placed consecutively along the diagonal.
Figure 7 depicts the trace of the target's projection on the image plane. Initially, the target appears at the same location on the image plane as in the case of Section 4.1. With the staring controller working properly, the projection moves along the trajectory and finally reaches the end point, which is the image center (376, 291). The trajectory indicates that it takes some time for the controller to find the proper direction toward the desired destination, because the initial guess of the parameters is the primary factor affecting the accuracy of the controller at the starting stage. Furthermore, the initial angular velocity of the satellite also determines the initial moving direction of the target. Figure 8 shows the differences between the current and the desired coordinates, and it reflects the same trend as Figure 7.
Figure 9 shows the time evolution of the estimated projection errors. The adaptive rule continuously updates the parameters in the negative gradient direction of e(t), so the estimated projection errors are reduced by the parameter estimation.
Figure 10 shows the angular velocities of the satellite. At first, ω(t) adjusts very quickly and is then gradually stabilized. It is worth noting that ω(t) does not converge to 0, because the ground target is a moving point in the inertial space, and the satellite is required to rotate at a certain rate to keep staring at it. Moreover, from Figure 10 we can see that the final angular velocity is not constant but varies very slowly because of the relative motion between the ground point and the satellite. Figure 11 shows the control torques generated by the attitude actuators. In the starting process, U_2 reaches its upper bound, which is the joint result of the parameter estimation, the initial image errors and the initial angular velocity. As the target projection approaches its desired location, the control torques decrease and are eventually kept within a narrow range to meet the need for the aforementioned minor adjustment of the angular acceleration.
Here we sum up the two cases. The position-based staring controller fails to cope with the pointing deviation in the presence of an uncalibrated camera. For the proposed image-based adaptive staring controller, although the camera parameters are unknown, the online estimation can reduce the estimated projection errors. With this technique, control torques are generated to drive the target projection to its desired location. The simulation demonstrates that the adaptive controller achieves the goal of keeping the target's projection at the center, and high precision can be expected for better ground target staring observation. Only small control torques are needed to maintain the constant tracking in the stable staring process.

5. Conclusions and Outlook

For a video satellite, staring attitude control has been its main working mode and has enabled many promising applications. This paper proposes an adaptive controller that takes the camera's model into account. First, the projection kinematics are established based on the staring imaging scenario, where constant relative motion exists. Second, an attitude reference trajectory is introduced to avoid designing the desired angular velocity, and a potential function is introduced to guarantee that the reference angular velocity is well defined. Third, we define the parameters that need to be estimated, and a corresponding parameter updating rule is proposed. Finally, the image and attitude information is incorporated to form the adaptive staring controller, which is constructed using the estimated variables. Stability is proved, and in the simulation the projection is successfully controlled to the predetermined desired location on the image plane. Thus, an image-based staring controller for an uncalibrated camera is formulated.
While the information of ground targets can be obtained in advance, it is hard to predict the motion of many moving targets such as planes and ships. In future work, non-cooperative targets, for which the relative motion is unknown, should be addressed.

Author Contributions

Conceptualization, C.S.; data curation, C.S. and M.W.; formal analysis, C.S. and H.S.; funding acquisition, C.F.; investigation, C.S.; methodology, C.S., C.F. and M.W.; project administration, C.F.; resources, C.F.; software, C.S.; supervision, C.F.; visualization, C.S.; writing—original draft, C.S.; writing—review and editing, C.F. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under the Grant No. 11702321.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, X.Y.; Xiang, J.H. Tracking imaging feedback attitude control of video satellite. In Electrical Engineering and Automation: Proceedings of the International Conference on Electrical Engineering and Automation (EEA2016), Hong Kong, China, 24–26 June 2016; World Scientific: Singapore, 2017; pp. 729–737.
2. Lian, Y.; Gao, Y.; Zeng, G. Staring imaging attitude control of small satellites. J. Guid. Control Dyn. 2017, 40, 1278–1285.
3. Buhl, M.; Segert, T.; Danziger, B. TUBSAT—A Reliable and Cost Effective Micro Satellite Platform. In Proceedings of the 61st International Astronautical Congress, International Astronautical Federation Paper IAC-10-B4, Prague, Czech Republic, 27 September–1 October 2010; Volume 6.
4. Luo, Y.; Zhou, L.; Wang, S.; Wang, Z. Video satellite imagery super resolution via convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2398–2402.
5. Xiao, A.; Wang, Z.; Wang, L.; Ren, Y. Super-resolution for "Jilin-1" satellite video imagery via a convolutional network. Sensors 2018, 18, 1194.
6. Jiang, K.; Wang, Z.; Yi, P.; Jiang, J. A progressively enhanced network for video satellite imagery superresolution. IEEE Signal Process. Lett. 2018, 25, 1630–1634.
7. Chen, X.; Ma, Y.; Geng, Y.; Feng, W.; Dong, Y. Staring imaging attitude tracking control of agile small satellite. In Proceedings of the 2011 6th IEEE Conference on Industrial Electronics and Applications, Beijing, China, 21–23 June 2011; pp. 143–148.
8. Geng, Y.; Li, C.; Guo, Y.; Biggs, J.D. Hybrid robust and optimal control for pointing a staring-mode spacecraft. Aerosp. Sci. Technol. 2020, 105, 105959.
9. Li, C.; Geng, Y.; Guo, Y.; Han, P. Suboptimal repointing maneuver of a staring-mode spacecraft with one DOF for final attitude. Acta Astronaut. 2020, 175, 349–361.
10. Cui, K.; Xiang, J.; Zhang, Y. Mission planning optimization of video satellite for ground multi-object staring imaging. Adv. Space Res. 2018, 61, 1476–1489.
11. Li, P.; Dong, Y.; Li, H. Staring imaging real-time optimal control based on neural network. Int. J. Aerosp. Eng. 2020, 2020, 8822223.
12. Zhang, X.; Xiang, J. Moving object detection in video satellite image based on deep learning. In Proceedings of the LIDAR Imaging Detection and Target Recognition 2017, International Society for Optics and Photonics, Changchun, China, 23–25 July 2017; Volume 10605, p. 106054H.
13. Zhang, X.; Xiang, J.; Zhang, Y. Space object detection in video satellite images using motion information. Int. J. Aerosp. Eng. 2017, 2017, 1024529.
14. Yan, Z.; Song, X.; Zhong, H.; Jiang, F. Moving object detection for video satellite based on transfer learning deep convolutional neural networks. In Proceedings of the 10th International Conference on Pattern Recognition Systems (ICPRS-2019), Tours, France, 8–10 July 2019; pp. 106–111.
15. Yu, S.; Yuanbo, Y.; He, X.; Lu, M.; Wang, P.; An, X.; Fang, X. On-board fast and intelligent perception of ships with the "Jilin-1" Spectrum 01/02 satellites. IEEE Access 2020, 8, 48005–48014.
16. Zhao, X.; Emami, M.R.; Zhang, S. Robust image-based control for spacecraft uncooperative rendezvous and synchronization using a zooming camera. Acta Astronaut. 2021, 184, 128–141.
17. Zhang, H.; Jiang, Z.; Elgammal, A. Vision-based pose estimation for cooperative space objects. Acta Astronaut. 2013, 91, 115–122.
18. Huang, Y.; Zhu, M.; Zheng, Z.; Low, K.H. Linear velocity-free visual servoing control for unmanned helicopter landing on a ship with visibility constraint. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 2979–2993.
19. Shirzadeh, M.; Asl, H.J.; Amirkhani, A.; Jalali, A.A. Vision-based control of a quadrotor utilizing artificial neural networks for tracking of moving targets. Eng. Appl. Artif. Intell. 2017, 58, 34–48.
20. Jabbari Asl, H.; Yoon, J. Robust image-based control of the quadrotor unmanned aerial vehicle. Nonlinear Dyn. 2016, 85, 2035–2048.
21. Prabowo, Y.A.; Trilaksono, B.R.; Triputra, F.R. Hardware in-the-loop simulation for visual servoing of fixed wing UAV. In Proceedings of the 2015 International Conference on Electrical Engineering and Informatics (ICEEI), Denpasar, Indonesia, 10–11 August 2015; pp. 247–252.
22. Zheng, D.; Wang, H.; Wang, J.; Zhang, X.; Chen, W. Toward visibility guaranteed visual servoing control of quadrotor UAVs. IEEE/ASME Trans. Mechatron. 2019, 24, 1087–1095.
23. Liang, X.; Wang, H.; Liu, Y.H.; Chen, W.; Jing, Z. Image-based position control of mobile robots with a completely unknown fixed camera. IEEE Trans. Autom. Control 2018, 63, 3016–3023.
24. Xu, F.; Wang, H.; Liu, Z.; Chen, W. Adaptive visual servoing for an underwater soft robot considering refraction effects. IEEE Trans. Ind. Electron. 2019, 67, 10575–10586.
25. Xie, H.; Low, K.H.; He, Z. Adaptive visual servoing of unmanned aerial vehicles in GPS-denied environments. IEEE/ASME Trans. Mechatron. 2017, 22, 2554–2563.
26. Felicetti, L.; Emami, M.R. Image-based attitude maneuvers for space debris tracking. Aerosp. Sci. Technol. 2018, 76, 58–71.
27. Wang, H.; Liu, Y.H.; Zhou, D. Adaptive visual servoing using point and line features with an uncalibrated eye-in-hand camera. IEEE Trans. Robot. 2008, 24, 843–857.
28. Wang, H.; Liu, Y.H.; Chen, W. Uncalibrated visual tracking control without visual velocity. IEEE Trans. Control Syst. Technol. 2010, 18, 1359–1370.
Figure 1. The ground-target-staring observation geometry.
Figure 2. The intrinsic camera model.
Figure 3. The extrinsic camera model.
Figure 4. The framework of the image-based staring control.
Figure 5. The projection trajectory on the image plane for the position-based controller.
Figure 6. The evolution of rotation angles α and β .
Figure 7. The projection trajectory on the image plane.
Figure 8. The evolution of image errors.
Figure 9. The evolution of estimated projection error.
Figure 10. The evolution of angular velocities.
Figure 11. The evolution of control torques.
Table 1. Ground target location.
Longitude (°) | Latitude (°) | Height (km)
128.271 | 64.72 | 0
Table 2. Orbital elements.
Semimajor Axis (km) | Eccentricity | Inclination (°) | Argument of Perigee (°) | Right Ascension of the Ascending Node (°) | True Anomaly (°)
6868.14 | 0 | 97.2574 | 59.3884 | 290.0175 | 4.8163
Table 3. Camera parameters.
Camera Parameters | Theoretical Values | Real Values
f | 1 m | 1.1 m
u_0 | 376 | 396
v_0 | 291 | 276
d_x | 8.33 × 10^-6 m | 8.43 × 10^-6 m
d_y | 8.33 × 10^-6 m | 8.43 × 10^-6 m
^c R_bc | (0.2682, 0.0408, 0.0671)^T m | (0.2582, 0.0358, 0.0771)^T m
M_bc | M_321(30°, 40°, 20°) | M_321(29°, 39.6°, 18.9°)
Table 4. Initial attitude.
Quaternion | Angular Velocity (°/s)
(0.4228, 0.6600, 0.5414, 0.3040)^T | (0.0178, 0.0587, 0.0167)^T
Table 5. Position-based control parameters.
K_δ | K_ω
[4, 0; 0, 1.6; 0, 0] | 2.5 × E_3
Table 6. Image-based control parameters.
Control Parameters | Values
K_1 | 10^-15 × E_2
K_2 | 10^-3
K_3 | 2.5 × 10^-15 × E_2
K_4 | 2 × E_3
K_5 | 30 × E_3
Γ | diag(8 × 10^4 × E_8, 1000 × E_3)
a | 1
b | 0.001
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
