Article

A Study on Robust Finite-Time Visual Servoing with a Gyro-Stabilized Surveillance System

Department of Mechanical System Engineering, Division of Energy Transport System Engineering, Pukyong National University, Busan 48513, Republic of Korea
* Author to whom correspondence should be addressed.
Actuators 2024, 13(3), 82; https://doi.org/10.3390/act13030082
Submission received: 24 January 2024 / Revised: 15 February 2024 / Accepted: 19 February 2024 / Published: 21 February 2024
(This article belongs to the Section Control Systems)

Abstract

This article presents the design and validation of a novel visual servoing scheme for a surveillance system. In this system, a two-axis gimbal mechanism rotates a camera that provides visual information on the tracked target for the control system. The control objective is to bring the target's projection to the center of the image plane with the smallest steady-state error and a smooth transient response, even under the unpredictable motion of the target and the influence of external disturbances. To fulfill these tasks, the proposed control scheme consists of two parts: (1) an observer that simultaneously estimates the matched and unmatched disturbances; and (2) a motion control law that guarantees finite-time stability and visual servoing performance. Finally, experiments are conducted for validation and evaluation. The proposed control system demonstrates its consistency and performs favorably compared with previous approaches.

1. Introduction

A gyro-stabilized system uses measurements from a gyroscope to mechanically stabilize a camera assembly, typically by means of a gimbal mechanism that rotates the camera. The captured images can also be used to generate a reference for autonomous tracking. Similar mechanisms find applications in gun-turret control, missile guidance, drone gimbal photography, handheld cameras, etc. It is worth noting that while specific applications may use other optical sensors or weapon systems instead of vision cameras, controlling their line of sight (LOS) remains essential [1,2,3]. In particular, controlling the LOS of the vision camera by the gimbal using measurements from the camera itself is a type of visual servoing problem. Some fundamental studies on this problem have been established [4,5,6], and many questions remain open.
Locating the target of interest in the optical imagery is the first control action to be performed. Here, an image tracker, implemented with image processing and computer vision algorithms, follows the target's projection in the image plane frame by frame [7,8]. The projection reflects the location of the target in the coordinate frame fixed to the camera or, in other words, the deviation angle between the camera's LOS and a vector pointing at the target. The control requirement of pointing the camera's LOS at the target can then be expressed as bringing the target's projection to the center of the image plane with zero steady-state error and a smooth transient response. However, the inevitable constraints of an image tracker significantly affect the efficiency of the system. A minimal sketch of this first stage is given below.
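To make the tracker-to-controller interface concrete, the following Python sketch assumes OpenCV's contributed KCF tracker; the function name and frame/bounding-box handling are illustrative assumptions, not the implementation used in this paper:

```python
# Minimal sketch: frame-by-frame tracking of the target's projection and
# computation of its pixel deviation e_xy from the image center.
import cv2
import numpy as np

def projection_errors(frames, init_bbox):
    """Yield the deviation of the tracked target from the image center."""
    tracker = cv2.TrackerKCF_create()       # requires opencv-contrib-python
    tracker.init(frames[0], init_bbox)      # init_bbox = (x, y, w, h)
    h, w = frames[0].shape[:2]
    center = np.array([w / 2.0, h / 2.0])
    for frame in frames[1:]:
        ok, (x, y, bw, bh) = tracker.update(frame)
        if not ok:
            yield None                      # tracker lost the target
            continue
        proj = np.array([x + bw / 2.0, y + bh / 2.0])
        yield proj - center                 # e_xy in [pixel]
```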
The gimbal motion control is the second action taken. Most of the existing works using visual data to control the gimbal are rather simple, despite the complex characteristics of the gimbal and camera. A straightforward approach is the proportional controller mentioned in [4,9]. Hurak et al. [8] proposed a controller based on the classical image-based pointing and tracking approach. The study considered the couplings and under-actuation of the two-axis gimbal to control the gimbal's angular rate. Unfortunately, the stability of the designed control system was not proven, and the complete form of the control law requires numerous measurements. The image-based tracking control approach for the gimbal camera was also adopted in [10,11]. In [10], the image-based tracking control of a gimbal mounted on an unmanned aerial vehicle was compared to another classical approach, position-based tracking. The study showed that the former provided higher tracking accuracy, while the latter was more robust but required sufficient measurements. A similar remark was made for the image-based control of a gimbal in flying-wing wildfire tracking [11], where a noticeable deviation between the position of the tracked target in the image and the image center appeared during the gliding-descent trajectory task. F. Guo et al. [12] proposed a feedback linearization control scheme with command planning for the problem of LOS tracking. J. Guo et al. [13] introduced a cascade vision-based tracking control consisting of a proportional lag compensator in the inner loop and a feedback linearization-based sliding mode controller in the outer loop. However, in the presence of carrier disturbances and target movement, large tracking errors arise. T. Pham et al. [14] adopted reinforcement learning to optimally train the parameters of an adaptive proportional-integral-derivative image-based tracking control for a two-axis gimbal system. Unfortunately, this approach is preferable for repetitive tasks and was only validated in simulations. It is also worth noting that the above studies deal with regulator problems; that is, the tracking target is stationary and its projection error should decay at the desired rate.
Tracking a moving target, i.e., visual servoing, was considered by X. Liu et al. [15]. They combined a model predictive controller with a disturbance observer to reduce the number of required measurements and obtain both disturbance rejection and good tracking performance. However, the model predictive control technique imposes a high computational load and high complexity, making it difficult to implement. In addition to the constraints in image streaming, the gimbal's dynamics, vehicle maneuvers, and other disturbances are also worthy of concern. In their later studies [16,17], different control approaches were proposed for the visual servoing of gyro-stabilized systems. In [16], a time-delay disturbance observer-based sampled-data control was developed to deal with the measurement delay generated by the acquisition and processing of the image information. On the other hand, an active disturbance compensation and variable gain function technique was proposed in [17] for the two-axis system with output constraints and disturbances. Notably, all of these studies [15,16,17] dealt with kinematic control only, i.e., the control inputs are the speeds of the gimbal channels and their dynamics are not considered. Thus, all disturbances and constraints act as matched disturbances. Meanwhile, the complete representation of these systems contains both matched and unmatched disturbances, which is more challenging.
Considering the dynamic control of gimbal systems, classical control strategies use an additional rate control loop to stabilize the LOS and isolate it from the above-mentioned influences; hence, a cascade control arrangement is usually used [3,18]. Recent works rely on robust control techniques such that both target tracking and disturbance rejection are achieved simultaneously. Sliding mode control (SMC) [19,20], $H_\infty$ control [21], and active disturbance rejection control [22] are popular schemes, thanks to their robustness and effectiveness. Nevertheless, the preferable approach for controlling a gimbal system is the combination of a disturbance observer and a robust control law, thanks to its capability of active disturbance rejection. Li et al. [23] incorporated an integral SMC and a reduced-order cascade extended-state observer. A disturbance/uncertainty estimator-based integral SMC was introduced by Kurkcu et al. [24]. The newest studies take advantage of the SMC's robustness to design both the observer and the controller: higher-order sliding mode observers were incorporated with a terminal SMC [19,25] or a super-twisting SMC [26]. Finite-time stability, i.e., a convergence time bounded by a predefined constant regardless of initial conditions, was desired and achieved. However, unmatched disturbances cannot be compensated, and the so-called chattering effect caused by high-frequency switching control still appears.
Therefore, this paper introduces a novel method for the finite-time visual servoing problem with the surveillance system. The proposed control system consists of two main components. Firstly, a single observer estimates both unmatched disturbances in the tracking target’s projection dynamics and matched disturbances in the gimbal dynamics. Then, a control law is designed to bring the projection to the center of the image plane and ensure the system’s finite-time stability, regardless of the target motion and the disturbances. Experiments are conducted for validation. The comparison to available approaches shows the superiority of the proposed control system. In short, the contributions of the paper can be summarized as follows:
- An observer is proposed that estimates both the states and the disturbances of the gyro-stabilized surveillance system.
- A new visual servoing scheme ensures the boundedness and finite-time stability of the visual servoing system.
- Proof of the finite-time stability of the closed-loop system is provided, and experimental results evaluate the effectiveness of the proposed system.
Accordingly, the remainder of this paper is organized as follows. The complete model of the system with a vision camera is introduced in Section 2. The observer is presented in Section 3. The control law design and proof of stability are given in Section 4. Experimental studies and their results are discussed in Section 5, where the proposed system is compared with others. Finally, conclusions are drawn.

2. System Modeling

The structure of the gyro-stabilized system to be controlled is shown in Figure 1. A complete representation of the system, combining the dynamics of the two-axis gimbal and the kinematics of a projection on the image plane, is as follows [27]:
$$\dot{e}_{xy} = L_\omega \omega + L_x e_{xy} + L_v \nu_t + e_d, \qquad \dot{\omega} = B_\omega u - K\omega + d \quad (1)$$
The vector $e_{xy} = [x\ y]^T$ indicates the location on the image plane of the projection of a given target in the three-dimensional camera frame. $\omega = [\omega_{ty}\ \omega_{tz}]^T$ is the vector of the angular rates about the tilt and pan axes of the inner gimbal, respectively, while $\omega_{tx}$ is the rate of rotation about the roll axis, which is uncontrollable by the two-axis gimbal mechanism. The instantaneous linear velocity of the camera frame origin is $\nu_t$, and $u$ is the command vector for the gimbal's actuators. Additionally, $e_d$ and $d$ are the vectors of kinematic and dynamic disturbances, respectively. In detail, $e_d$ is the effect of the unpredictable movement of the tracked target on its projection's location, while $d$ contains external torques acting on the gimbal mechanism due to the motion of the carrying vehicle, the imbalanced mass of the gimbal, the nonlinear friction between channels, etc. The time variable is omitted for the sake of simplicity.
Meanwhile, from the so-called pinhole camera model, the interaction matrices between the system motions and the projected point's velocity, $L_\omega$, $L_x$, and $L_v$, are written as follows:
$$L_\omega = \frac{1}{f}\begin{bmatrix} xy & -(f^2 + x^2) \\ f^2 + y^2 & -xy \end{bmatrix}, \quad L_x = \begin{bmatrix} 0 & \omega_{tx} \\ -\omega_{tx} & 0 \end{bmatrix}, \quad L_v = \frac{1}{Z_c}\begin{bmatrix} x & -f & 0 \\ y & 0 & -f \end{bmatrix} \quad (2)$$
with $f$ being the camera's focal length and $Z_c$ the distance between the camera center and the tracked target. On the other hand, the system matrices of the gimbal mechanism are given by the following:
$$K = \mathrm{diag}\{K_1, K_2\}, \qquad B_\omega = \mathrm{diag}\{B_1, B_2\cos\theta\} \quad (3)$$
with $K_1$, $K_2$, $B_1$, and $B_2$ being constant values and $\theta$ the relative angle between the inner and outer channels. Mechanical locks keep $|\theta| < \pi/2$ [rad], so the gimbal lock phenomenon is avoided.
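As a concrete reference for the model above, the following Python sketch assembles the matrices in (2) and (3); the numerical values are those identified later in Table 1, and the helper names are ours:

```python
# Interaction matrices (2) and gimbal matrices (3) of the pinhole/gimbal model.
import numpy as np

def interaction_matrices(x, y, f, Zc, w_tx):
    L_w = (1.0 / f) * np.array([[x * y,       -(f**2 + x**2)],
                                [f**2 + y**2, -x * y        ]])
    L_x = np.array([[0.0,   w_tx],
                    [-w_tx, 0.0]])
    L_v = (1.0 / Zc) * np.array([[x, -f, 0.0],
                                 [y, 0.0, -f]])
    return L_w, L_x, L_v

def gimbal_matrices(theta, K1=6.146, K2=5.207, B1=7.232, B2=1.402):
    # theta: relative angle between the inner and outer channels, |theta| < pi/2
    K = np.diag([K1, K2])
    B_w = np.diag([B1, B2 * np.cos(theta)])
    return K, B_w
```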
Moreover, let the system model (1) be rewritten as follows:
$$\dot{X} = A(X)X + Bu + Dw, \qquad Y = CX \quad (4)$$
with
$$X = [x\ y\ \omega_{ty}\ \omega_{tz}]^T, \quad w = \begin{bmatrix} (L_v\nu_t + e_d)^T & d^T \end{bmatrix}^T, \quad A(X) = \begin{bmatrix} L_x & L_\omega \\ O_2 & -K \end{bmatrix}, \quad B = \begin{bmatrix} O_2 \\ B_\omega \end{bmatrix}, \quad C = I_4, \quad D = I_4 \quad (5)$$
$O_n$ and $I_n$ are the $n$-by-$n$ null matrix and identity matrix, respectively. Then, one can see the following properties:
- The system satisfies the observer matching condition [28]:
$$\mathrm{rank}(CD) = \mathrm{rank}(D) \quad (6)$$
- The nonlinear term satisfies the local Lipschitz condition with respect to $X$ in a region $\Omega$ including the origin. That is, for any $X_1$ and $X_2$ in $\Omega$, there exists a positive $\kappa$ such that [29]
$$\left\|A(X_1)X_1 - A(X_2)X_2\right\| \le \kappa\left\|X_1 - X_2\right\| \quad (7)$$
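Before proceeding to the observer design, a quick open-loop sanity check of model (1)/(4) can be run with a forward Euler step. This is an illustrative sketch reusing the helper functions above, with the disturbances set to zero and the step size taken from the sampling time in Table 1; it is not part of the experimental code:

```python
# Forward Euler simulation of the open-loop model (1) with e_d = d = 0.
import numpy as np

def simulate(u_seq, X0, f, Zc, theta, w_tx=0.0, dt=0.05):
    X = np.array(X0, dtype=float)          # X = [x, y, w_ty, w_tz], see (5)
    K, B_w = gimbal_matrices(theta)
    traj = [X.copy()]
    for u in u_seq:
        L_w, L_x, _ = interaction_matrices(X[0], X[1], f, Zc, w_tx)
        e_dot = L_w @ X[2:] + L_x @ X[:2]  # projection kinematics
        w_dot = B_w @ u - K @ X[2:]        # gimbal dynamics
        X = X + dt * np.concatenate([e_dot, w_dot])
        traj.append(X.copy())
    return np.array(traj)
```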

3. State and Input Observer Design

To estimate the value of $w$, which consists of both unmatched and matched disturbances acting on the system, an observer is proposed such that its states match the system states and, subsequently, the disturbances are asymptotically estimated. Assume that the disturbances are bounded and generated by a vector of exogenous signals $\upsilon$ such that
$$\dot{w} = -Hw + \upsilon \quad (8)$$
where $H \in \mathbb{R}^{4\times4}$ is a diagonal positive definite matrix. Then, the observer is proposed in the form
$$\dot{\hat{X}} = A(\hat{X})\hat{X} + Bu + \hat{w} + L(Y - C\hat{X}), \qquad \dot{\hat{w}} = -H\hat{w} + \lambda Q(Y - C\hat{X}), \qquad \hat{Y} = C\hat{X} \quad (9)$$
where $\hat{X}$ and $\hat{w}$ are the estimated states and disturbances, respectively, and $\hat{Y}$ is the observer output. The matrices $L$ and $Q$ and the positive scalar $\lambda$ are the gains of the observer.
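In discrete time, the observer (9) can be implemented with a simple Euler update, as in the sketch below; the function and argument names are illustrative, reusing the model helpers from Section 2, with $C = I_4$ so that $C\hat{X} = \hat{X}$:

```python
# One Euler step of the state-and-disturbance observer (9).
import numpy as np

def A_of(X, f, Zc, theta, w_tx=0.0):
    # State-dependent system matrix A(X) from (5).
    L_w, L_x, _ = interaction_matrices(X[0], X[1], f, Zc, w_tx)
    K, _ = gimbal_matrices(theta)
    return np.block([[L_x, L_w],
                     [np.zeros((2, 2)), -K]])

def observer_step(X_hat, w_hat, Y, Bu, L, Q, lam, H, dt, f, Zc, theta):
    innov = Y - X_hat                      # Y - C @ X_hat with C = I4
    X_hat_dot = A_of(X_hat, f, Zc, theta) @ X_hat + Bu + w_hat + L @ innov
    w_hat_dot = -H @ w_hat + lam * (Q @ innov)
    return X_hat + dt * X_hat_dot, w_hat + dt * w_hat_dot
```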
Theorem 1. 
The $\mathcal{L}_2$ gain of the observer is bounded by a positive $\gamma$ if there exist matrices $L$, $P$, and $Q \in \mathbb{R}^{4\times4}$ and a positive scalar $\lambda$ such that
$$\begin{bmatrix} -PLC - C^T L^T P + (\kappa+1)I_4 & P & (1-\lambda)P \\ P & -I_4 & O_4 \\ (1-\lambda)P & O_4 & -2H + \left(\tfrac{1}{\gamma}+1\right)I_4 \end{bmatrix} \preceq 0, \qquad P > 0, \qquad QC = P \quad (10)$$
holds.
Proof of Theorem 1. 
 
A Lyapunov function candidate is considered as follows:
$$V_1 = \tilde{X}^T P \tilde{X} + \tilde{w}^T \tilde{w} \quad (11)$$
where $\tilde{X} = X - \hat{X}$ and $\tilde{w} = w - \hat{w}$ are the observation errors. Taking the time derivative of $V_1$ results in
$$\dot{V}_1 = 2\tilde{X}^T P\left[A(X)X - A(\hat{X})\hat{X}\right] - \tilde{X}^T\left(PLC + C^T L^T P\right)\tilde{X} + 2(1-\lambda)\tilde{w}^T P\tilde{X} - 2\tilde{w}^T H\tilde{w} + 2\tilde{w}^T \upsilon \quad (12)$$
Applying Young's inequality and inequality (7) to the non-quadratic terms of $\dot{V}_1$ yields the following:
$$2\tilde{X}^T P\left[A(X)X - A(\hat{X})\hat{X}\right] \le \tilde{X}^T\left(PP + \kappa I_4\right)\tilde{X}, \qquad 2\tilde{w}^T\upsilon \le \frac{1}{\gamma}\tilde{w}^T\tilde{w} + \gamma\upsilon^T\upsilon \quad (13)$$
Therefore, the derivative in (12) reduces to $\dot{V}_1 \le \bar{\dot{V}}_1$, where
$$\bar{\dot{V}}_1 = \tilde{X}^T\left(-PLC - C^T L^T P + PP + \kappa I_4\right)\tilde{X} + 2(1-\lambda)\tilde{w}^T P\tilde{X} - 2\tilde{w}^T H\tilde{w} + \frac{1}{\gamma}\tilde{w}^T\tilde{w} + \gamma\upsilon^T\upsilon \quad (14)$$
Additionally, consider the following inequality:
$$\bar{\dot{V}}_1 + \tilde{X}^T\tilde{X} + \tilde{w}^T\tilde{w} - \gamma\upsilon^T\upsilon \le 0 \quad (15)$$
By substituting (14) into (15) and applying the Schur complement, inequality (10) is derived. Since $\dot{V}_1 \le \bar{\dot{V}}_1$, (15) still holds if $\bar{\dot{V}}_1$ is replaced with $\dot{V}_1$. The resultant inequality is equivalent to the boundedness of the $\mathcal{L}_2$ gain of the observer as follows [30]:
$$\sup_{\|\upsilon\|_2 \neq 0} \frac{\left\|\left[\tilde{X}^T\ \tilde{w}^T\right]^T\right\|_2}{\|\upsilon\|_2} \le \gamma \quad (16)$$
One can also see from (16) that the smallest upper bound of the $\mathcal{L}_2$ gain is obtained by minimizing $\gamma$ in (10). Hence, the influence of $\upsilon$ is also minimized, and the estimated states and disturbances closely follow their actual values. □
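In practice, condition (10) can be checked numerically. The sketch below uses CVXPY: since the product $PL$ makes (10) bilinear, it substitutes $M = PL$ (recovering $L = P^{-1}M$ afterwards), fixes $\lambda$, and treats $\mu = 1/\gamma$ as a decision variable; with $C = I_4$, the condition $QC = P$ simply gives $Q = P$. The value of $\kappa$ is an assumption for illustration:

```python
# Feasibility/optimization sketch for the observer LMI (10) with CVXPY.
import cvxpy as cp
import numpy as np

n, lam, kappa = 4, 100.0, 1.0            # lambda from Table 1; kappa assumed
H = 10.0 * np.eye(n)                     # disturbance model matrix, Table 1
P = cp.Variable((n, n), symmetric=True)
M = cp.Variable((n, n))                  # M = P L
mu = cp.Variable(nonneg=True)            # mu = 1/gamma

blk = cp.bmat([
    [-M - M.T + (kappa + 1) * np.eye(n), P,                (1 - lam) * P],
    [P,                                  -np.eye(n),       np.zeros((n, n))],
    [(1 - lam) * P,                      np.zeros((n, n)), -2 * H + (mu + 1) * np.eye(n)],
])
constraints = [0.5 * (blk + blk.T) << 0, P >> 1e-6 * np.eye(n)]
cp.Problem(cp.Maximize(mu), constraints).solve()   # min gamma = max 1/gamma
L_gain = np.linalg.solve(P.value, M.value)         # L = P^{-1} M
```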

4. Finite-Time Visual Servoing Controller Design

Recall that the control objective is to bring the target’s projection to the center of the image frame in a finite time regardless of its initial location. This objective is reflected in two control errors defined by
$$s_{xy} = e_{xy} + \Lambda_f f_{xy}, \qquad e_\omega = \omega - \omega_{df} \quad (17)$$
where $\dot{f}_{xy} = e_{xy}$ and $\Lambda_f \in \mathbb{R}^{2\times2}$ is a diagonal positive definite matrix such that the dynamics on $s_{xy} = 0$ are Hurwitz. Hence, the convergence of $s_{xy}$ leads to the convergence of both $e_{xy}$ and $f_{xy}$. $\omega_{df}$ is the vector of filtered desired angular rates about the tilt and pan axes of the inner gimbal and is designed as follows:
$$\tau\dot{\omega}_{df} = -e_f, \qquad e_f = \omega_{df} - \omega_d, \qquad \omega_d = -L_\omega^{-1}\left(\Lambda_1 s_{xy} + \Lambda_3 s_{xy}^\beta + L_x e_{xy} + \Lambda_{w1}\hat{w} + \Lambda_f e_{xy}\right) \quad (18)$$
where $\tau$ is a positive scalar and $L_\omega^{-1}$ is the inverse of $L_\omega$ from (2). The controller gain $\Lambda_{w1} = [I_2\ O_2]$, so that $\Lambda_{w1}\hat{w}$ is the vector of estimated disturbances corresponding to those in the image plane. $\Lambda_1$ and $\Lambda_3 \in \mathbb{R}^{2\times2}$ are diagonal positive definite matrices, $\beta \in (0,1)$, and the operation $(\cdot)^\beta$ is defined by
$$(\cdot)^\beta = \left[\,|\cdot_1|^\beta\,\mathrm{sgn}(\cdot_1)\quad |\cdot_2|^\beta\,\mathrm{sgn}(\cdot_2)\quad \cdots\quad |\cdot_n|^\beta\,\mathrm{sgn}(\cdot_n)\,\right]^T \quad (19)$$
with
$$(\cdot) = [\cdot_1\ \cdot_2\ \cdots\ \cdot_n]^T \quad (20)$$
and $\mathrm{sgn}(\cdot_i)$ being the signum function of $\cdot_i$.
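The elementwise operation (19)–(20) is the standard "sig" function of finite-time control; a short Python version used by the sketch after (22):

```python
# (.)^beta from (19)-(20): elementwise |v_i|^beta * sgn(v_i).
import numpy as np

def sig_beta(v, beta):
    v = np.asarray(v, dtype=float)
    return np.abs(v) ** beta * np.sign(v)
```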
The dynamics of the control errors are then written as follows:
$$\dot{s}_{xy} = -\Lambda_1 s_{xy} - \Lambda_3 s_{xy}^\beta + L_\omega(e_\omega + e_f) + L_v\nu_t + e_d - \Lambda_{w1}\hat{w}, \qquad \dot{e}_\omega = B_\omega u - K\omega + d + \frac{1}{\tau}e_f \quad (21)$$
From (21), the control is designed as in (22) to ensure the convergence of both control errors:
$$u = B_\omega^{-1}\left(K\omega - \frac{1}{\tau}e_f - \alpha L_\omega^T s_{xy} - \Lambda_2 e_\omega - \Lambda_4 e_\omega^\beta - \Lambda_{w2}\hat{w}\right) \quad (22)$$
Here, $\Lambda_{w2} = [O_2\ I_2]$, $\alpha$ is a positive scalar, and $\Lambda_2$, $\Lambda_4 \in \mathbb{R}^{2\times2}$ are diagonal positive definite matrices.
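Combining the filter (18) and the control law (22), one control update can be sketched as follows; the function and its argument grouping are illustrative assumptions, and $L_\omega^T$ appears in the cross-term as required by the cancellation in the proof below:

```python
# One update of the finite-time visual servoing controller (18) and (22).
import numpy as np

def control_step(e_xy, f_xy, omega, omega_df, w_hat, gains, mats, dt):
    L_w, L_x, K, B_w = mats                 # from (2)-(3) at the current state
    Lf, L1, L3, L2, L4, beta, alpha, tau = gains
    s_xy = e_xy + Lf @ f_xy                 # control error (17)
    w_img, w_dyn = w_hat[:2], w_hat[2:]     # Lw1 @ w_hat and Lw2 @ w_hat
    omega_d = -np.linalg.solve(L_w, L1 @ s_xy + L3 @ sig_beta(s_xy, beta)
                               + L_x @ e_xy + w_img + Lf @ e_xy)   # (18)
    e_f = omega_df - omega_d
    omega_df = omega_df - dt * e_f / tau    # first-order filter (18)
    e_w = omega - omega_df
    u = np.linalg.solve(B_w, K @ omega - e_f / tau - alpha * (L_w.T @ s_xy)
                        - L2 @ e_w - L4 @ sig_beta(e_w, beta) - w_dyn)  # (22)
    return u, omega_df
```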
Theorem 2. 
When the control law (22) is applied to the gyro-stabilized surveillance system whose dynamics are represented by (1), the finite-time stability of the visual servoing system is preserved with a proper choice of the controller gains.
Proof of Theorem 2. 
 
The finite-time convergence of the proposed system is derived from the following Lyapunov function candidate:
$$V_2 = \frac{1}{2}s_{xy}^T s_{xy} + \frac{1}{2\alpha}e_\omega^T e_\omega \quad (23)$$
With the dynamics (21) and the proposed control law (22), the time derivative of  V 2  is obtained as follows:
$$\begin{aligned}\dot{V}_2 &= s_{xy}^T\dot{s}_{xy} + \frac{1}{\alpha}e_\omega^T\dot{e}_\omega\\ &= -s_{xy}^T\Lambda_1 s_{xy} - s_{xy}^T\Lambda_3 s_{xy}^\beta + s_{xy}^T\left(L_\omega(e_\omega + e_f) + L_v\nu_t + e_d - \Lambda_{w1}\hat{w}\right)\\ &\quad + \frac{1}{\alpha}e_\omega^T\left(K\omega - \frac{1}{\tau}e_f - \alpha L_\omega^T s_{xy} - \Lambda_2 e_\omega - \Lambda_4 e_\omega^\beta - \Lambda_{w2}\hat{w} - K\omega + d + \frac{1}{\tau}e_f\right)\\ &= -s_{xy}^T\Lambda_1 s_{xy} - s_{xy}^T\Lambda_3 s_{xy}^\beta - \frac{1}{\alpha}e_\omega^T\Lambda_2 e_\omega - \frac{1}{\alpha}e_\omega^T\Lambda_4 e_\omega^\beta + s_{xy}^T\left(L_\omega e_f + \Lambda_{w1}\tilde{w}\right) + \frac{1}{\alpha}e_\omega^T\Lambda_{w2}\tilde{w}\end{aligned} \quad (24)$$
Based on Young’s inequality for products,
$$2s_{xy}^T\left(L_\omega e_f + \Lambda_{w1}\tilde{w}\right) \le s_{xy}^T s_{xy} + \left\|L_\omega e_f + \Lambda_{w1}\tilde{w}\right\|_2^2, \qquad 2e_\omega^T\Lambda_{w2}\tilde{w} \le e_\omega^T e_\omega + \left\|\Lambda_{w2}\tilde{w}\right\|_2^2 \quad (25)$$
Thus, the derivative of the Lyapunov function candidate is bounded as follows:
$$\dot{V}_2 \le -s_{xy}^T\left(\Lambda_1 - \tfrac{1}{2}I_2\right)s_{xy} - s_{xy}^T\Lambda_3 s_{xy}^\beta - \frac{1}{\alpha}e_\omega^T\left(\Lambda_2 - \tfrac{1}{2}I_2\right)e_\omega - \frac{1}{\alpha}e_\omega^T\Lambda_4 e_\omega^\beta + \frac{1}{2}\left\|L_\omega e_f + \Lambda_{w1}\tilde{w}\right\|_2^2 + \frac{1}{2\alpha}\left\|\Lambda_{w2}\tilde{w}\right\|_2^2 \le -\kappa_1 V_2 - \kappa_2 V_2^{\frac{1+\beta}{2}} + \varepsilon \quad (26)$$
where
$$\kappa_1 = 2\min\left(\lambda_{\min}\left(\Lambda_1 - \tfrac{1}{2}I_2\right),\ \lambda_{\min}\left(\Lambda_2 - \tfrac{1}{2}I_2\right)\right), \quad \kappa_2 = 2\min\left(\lambda_{\min}(\Lambda_3),\ \lambda_{\min}(\Lambda_4)\right), \quad \varepsilon = \max\left(\frac{1}{2}\left\|L_\omega e_f + \Lambda_{w1}\tilde{w}\right\|_2^2 + \frac{1}{2\alpha}\left\|\Lambda_{w2}\tilde{w}\right\|_2^2\right) \quad (27)$$
One can see that only the errors of the observer and the filter remain in the residual term $\varepsilon$. A proper choice of the controller gains is then one for which $\kappa_1$ and $\kappa_2$ in (27) are positive. Based on Theorem 2 in [31], there exists a scalar $\delta \in (0,1)$ such that the control errors converge to the following region:
$$\lim_{t\to T_f} V_2 \le \min\left(\frac{\varepsilon}{(1-\delta)\kappa_1},\ \left(\frac{\varepsilon}{(1-\delta)\kappa_2}\right)^{\frac{2}{1+\beta}}\right) \quad (28)$$
in a finite time $T_f$ given by
$$T_f \le \frac{2}{1-\beta}\max\left(\frac{1}{\delta\kappa_1}\ln\frac{\delta\kappa_1 V_2^{\frac{1-\beta}{2}}(e_0) + \kappa_2}{\kappa_2},\ \frac{1}{\kappa_1}\ln\frac{\kappa_1 V_2^{\frac{1-\beta}{2}}(e_0) + \delta\kappa_2}{\delta\kappa_2}\right) \quad (29)$$
which can be seen as a function of $\kappa_1$ and $\kappa_2$. In other words, the finite-time stability of the proposed control system is preserved. □
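For a sense of scale, the bound (29) can be evaluated numerically from the gains in Table 1 via (27); $\delta$ and the initial value $V_2(e_0)$ below are illustrative assumptions:

```python
# Numeric evaluation of the settling-time bound (29).
import numpy as np

def settling_time_bound(k1, k2, beta, delta, V0):
    p = V0 ** ((1.0 - beta) / 2.0)
    a = np.log((delta * k1 * p + k2) / k2) / (delta * k1)
    b = np.log((k1 * p + delta * k2) / (delta * k2)) / k1
    return 2.0 / (1.0 - beta) * max(a, b)

# kappa_1 = 2*min(45.376 - 0.5, 10.427 - 0.5), kappa_2 = 2*min(10.086, 2.068)
print(settling_time_bound(k1=19.854, k2=4.136, beta=0.667, delta=0.5, V0=100.0))
```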
On another note, the integral $f_{xy}$ in (17) guarantees zero steady-state error of the projection; however, its accumulation can lead to a large overshoot and a long settling time. Therefore, an approximate integral $\hat{f}_{xy}$ and the correspondingly modified control error $s_{xy}$ are adopted as follows [32,33]:
$$\dot{\hat{f}}_{xy} = -\Lambda_f\hat{f}_{xy} + \eta\,\mathrm{sat}\!\left(\frac{s_{xy}}{\eta}\right), \qquad s_{xy} = e_{xy} + \Lambda_f\hat{f}_{xy} \quad (30)$$
where $\eta$ is a positive scalar and $\mathrm{sat}(\cdot)$ denotes the saturation function. One can see that inside the saturation boundary, $\dot{\hat{f}}_{xy} = e_{xy}$; above the positive boundary, $\dot{\hat{f}}_{xy} < e_{xy}$; and otherwise, $\dot{\hat{f}}_{xy} > e_{xy}$. In other words, the integral action takes place fully only inside the saturation function's boundary.
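The conditional integrator (30) is straightforward to implement; inside the $\eta$-boundary the saturation is inactive and the state integrates $e_{xy}$ exactly, which is what the following sketch (with illustrative names) does:

```python
# One Euler step of the conditional (anti-windup) integrator (30).
import numpy as np

def conditional_integrator_step(f_hat, e_xy, Lf, eta, dt):
    s_xy = e_xy + Lf @ f_hat
    sat = np.clip(s_xy / eta, -1.0, 1.0)    # sat(s_xy / eta)
    f_hat_dot = -Lf @ f_hat + eta * sat
    return f_hat + dt * f_hat_dot, s_xy
```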

5. Experimental Studies

The experiments were conducted to validate the proposed control system. Figure 2 depicts the experimental apparatus used with the proposed control scheme and the experiment scenarios. In detail, the experimental system consists of a two-channel gimbal prototype that carries a Hanwha SNZ-6320 camera on its inner channel. Given a captured image containing the tracked object, an image tracker based on the KCF algorithm [34], working with the histogram of oriented gradients (HOG) descriptor and a Gaussian kernel, returns the location of the object's projection in this image. The rotational motions of the two channels are operated by two CM1-C-23S30C servo systems from Muscle Corporation. The inner channel is directly driven by its servo, while the outer one adopts a belt drive with a gear ratio of 3:1. The orientation of the camera and the inner gimbal is sensed by an attitude heading reference system, MW-AHRS v1. The image tracker, observer, and controller were executed in Matlab/Simulink on a desktop computer that interacted with the experimental apparatus. Their gains are listed in Table 1.
The experiments involved three scenarios, as illustrated in Figure 2. In scenario A, the surveillance system had to track a stationary target whose projection was initially located away from the center of the image plane. In the other two scenarios, the system was required to track a moving target, i.e., a visual servoing problem. The different characteristics of these scenarios help validate different aspects of the proposed control system. In these experiments, unmatched disturbances arise from the unpredictable movement of the tracked target and the constraints of the tracker, such as low-resolution measurement and long processing and communication times. Matched disturbances come from the imbalanced mass of the inner gimbal, nonlinear frictions, and the actuator's dead zone and saturation. For comparison, the simple proportional controller from [2], the image-based pointing controller proposed by Hurak et al. [8], and the vision-based backstepping control from [27] were also included in the tests. The results are shown in Figure 3, Figure 4 and Figure 5, respectively. Note that the displayed unit of the variables x and y is [pixel], instead of [m], to correspond to the images displayed in user interfaces. Additionally, the control action was initiated at the 5th [s] of each test.

5.1. Experiment A

The initial location of the target's projection is at $[105\ 190]^T$ [pixel] in the image plane. The tilt angle of the inner channel carrying the camera was initialized at 04 [deg]. The tracking path depicted in Figure 3a shows that the backstepping control system struggled to bring the projection to zero. The proportional control system was able to do so but had difficulty following a linear path to the image center. The main reason is the kinematic interaction of all three rotations of the gimbal mechanism in each direction of the projection's movement; these couplings act as additive disturbances and result in a curved tracking path. Meanwhile, in both the proposed system and Hurak's, the interaction matrices are taken into the control law. Thus, the projections were able to follow linear paths, which are the shortest paths from the initial location to the image center. The responses in both coordinates also converge smoothly to zero at the same time without any overshoot, as clearly seen in Figure 3b,d. Additionally, the control signals in Figure 3c indicate that the proposed control law requires less effort in the tilt direction than the others. After the settling time, all controllers effectively stabilize the system at a steady state. However, the backstepping controller failed to compensate for the steady-state error, while the others were able to correct it. This is because of the input-to-state stability of the backstepping control system, i.e., the system is bounded by a function of the size of the input disturbances that exist in the gimbal camera.

5.2. Experiment B

In this experiment, the target's projection was initially at $[180\ 80]^T$ [pixel] in the image plane. From the 5th [s], the target moved continuously along an ellipse in a vertical plane facing the camera from a distance of 600 [mm]. The horizontal and vertical motions of the target were given by sinusoidal trajectories with a 0.01 [Hz] frequency and 410 [mm] and 210 [mm] amplitudes, respectively. The tracking paths in Figure 4a show that all the controllers quickly brought the target's projection to the center of the image plane and then kept it there, even with the motion of the target. However, the large variation around the image center in Figure 4a demonstrates that the comparison controllers could not track the moving target effectively. In particular, in the x-coordinate, the proportional control system and Hurak's system result in the largest variations; in the y-coordinate, the proportional and backstepping control systems have the largest tracking errors, followed by the control system designed by Hurak et al. In contrast, the proposed control system guarantees the boundedness of the control errors in finite time, resulting in a small variation in the projection's location in both coordinates throughout the test, as can easily be seen in Figure 4a,b. The control inputs in Figure 4c show the adjustments that the proposed controller made. Since the target followed an ellipse, the tilt and pan rotations of the inner gimbal also followed sinusoidal trajectories, as seen in Figure 4d, so that the LOS of the carried camera could track the desired target.

5.3. Experiment C

For this test, the target moved in the same plane as in the previous scenario. It departed from the intersection point of the camera's LOS and this plane, so the initial location of its projection was at the center of the image. The target followed a rectangular trajectory with a total distance of 480 [mm] horizontally and 250 [mm] vertically. This required the inner gimbal to rotate with trapezoidal trajectories, as seen in Figure 5d. The time response in the x-coordinate of the image plane shows that the backstepping control achieved the smallest tracking error in the first half of the test period. However, it then required considerable adjustment and produced a large vibration in the second half of the test; it was also the worst performer in the y-coordinate. The proportional approach likewise led to significant errors in the image plane. In particular, the tracking movement of the camera's LOS visibly lagged behind that of the target, since this simple control strategy cannot effectively compensate for the nonlinearities and disturbances in the system. Hurak's system and the proposed one were quite comparable in the x-coordinate, but the latter showed its superiority in the other direction.
For quantitative evaluation, the root-mean-square errors (RMSEs) of the projection were computed and are listed in Table 2. The results of Experiment A imply that the proposed system is only the second best. However, the time responses in Figure 3b indicate that these larger RMSE values are due to the longer rise time, especially in the x-coordinate, though the settling times are almost the same. The reason is the lower peak velocities of the gimbal during the transient period in the proposed system, which is preferable from the perspective of the image-tracking algorithm. Similarly, in Experiments B and C, the proposed system achieves a significantly better RMSE in the y-coordinate, but only the third best in the x-coordinate, because of the long transient period in this coordinate. If the transient period at the beginning of Experiment B is neglected, i.e., the zoomed view in Figure 4b, the RMSE of the proposed system in the x-coordinate is 0.1107 [pixel], only slightly larger than the 0.0934 [pixel] of the backstepping control system. In Experiment C, apart from the transient period, the proposed system achieves the smallest steady-state error. Generally, the proposed controller shows its consistency and, overall, performs better than the others. On another note, the proposed control law is rather complicated compared with the comparative controllers, though it does not need more measurements than the others. The results of the three experiments also reveal that the proposed system requires more control adjustments, and thus more control effort, especially in comparison with the proportional control and the control proposed by Hurak. Finally, although the proposed system achieves the smallest steady-state errors in most cases, finite-time boundedness is preserved rather than asymptotic stability. A sketch of the RMSE computation is given below.
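For reference, the RMSE entries in Table 2 can be computed as below; the optional start time allows excluding the initial transient window, as in the discussion above:

```python
# RMSE of a projection-error sequence over a chosen time window.
import numpy as np

def rmse(err, t, t_start=0.0):
    err, t = np.asarray(err, dtype=float), np.asarray(t, dtype=float)
    return np.sqrt(np.mean(err[t >= t_start] ** 2))
```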

6. Conclusions

In this paper, a novel visual servoing controller was designed and implemented for a gyro-stabilized surveillance system. The proposed control scheme consists of a disturbance observer and a continuous control law. The control system design was presented in detail, and the mathematical proofs show that the proposed scheme guarantees finite-time stability of the system. With this approach, effective visual servoing performance was obtained. In experimental studies, the proposed control system was challenged with different tracking scenarios and compared with validated approaches. The results show the stability, consistency, and effectiveness of the proposed system. In future works, a more robust tracking algorithm and control technique will be considered to achieve fast and continuous tracking performance.

Author Contributions

Conceptualization, T.H. and Y.-B.K.; methodology, T.H.; software, Y.-B.K.; validation, T.H. and Y.-B.K.; formal analysis, T.H.; investigation, T.H.; resources, Y.-B.K.; data curation, T.H.; writing—original draft preparation, T.H.; writing—review and editing, T.H. and Y.-B.K.; visualization, T.H.; supervision, Y.-B.K.; project administration, Y.-B.K.; funding acquisition, Y.-B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2022R1A2C1003486).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

This work was also supported by the National Research Foundation (NRF), South Korea under Project BK21 FOUR (Smart Convergence and Application Education Research Center).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, F. Simultaneous Self-Calibration of Nonorthogonality and Nonlinearity of Cost-Effective Multiaxis Inertially Stabilized Gimbal Systems. IEEE Robot. Autom. Lett. 2018, 3, 132–139. [Google Scholar] [CrossRef]
  2. Wang, Y.; Yu, J.; Wang, X.; Fangxiu, J. A guidance and control design with reduced information for a dual-spin stabilized projectile. Def. Technol. 2023; in press. [Google Scholar] [CrossRef]
  3. Kennedy, P.J.; Kennedy, R.L. Direct versus indirect line of sight (LOS) stabilization. IEEE Trans. Control Syst. Technol. 2003, 11, 3–15. [Google Scholar] [CrossRef]
  4. Hilkert, J.M. Inertially stabilized platform technology: Concepts and principles. IEEE Control Syst. 2008, 28, 26–46. [Google Scholar] [CrossRef]
  5. Masten, M.K. Inertially stabilized platforms for optical imaging systems. IEEE Control Syst. 2008, 28, 47–64. [Google Scholar] [CrossRef]
  6. Osborne, J.M.; Fuentes, R. Global Analysis of the Double-Gimbal Mechanism. IEEE Control Syst. 2008, 28, 44–64. [Google Scholar] [CrossRef]
  7. Chaumette, F.; Hutchinson, S. Visual servo control. I. Basic approaches. IEEE Robot. Autom. Mag. 2006, 13, 82–90. [Google Scholar] [CrossRef]
  8. Hurak, Z.; Rezac, M. Image-Based Pointing and Tracking for Inertially Stabilized Airborne Camera Platform. IEEE Trans. Control Syst. Technol. 2012, 20, 1146–1159. [Google Scholar] [CrossRef]
  9. Bibby, C.; Reid, I. Visual tracking at sea. In Proceedings of the IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 2005, pp. 1841–1846. [Google Scholar]
  10. Liu, X.; Zhou, H.; Chang, Y.; Xiang, X.; Zhao, K.; Tang, D. Visual-based Online Control of Gimbal on the UAV for Target Tracking. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 5754–5759. [Google Scholar] [CrossRef]
  11. Nunes, A.P.; Moutinho, A.; Azinheira, J.R. Flying Wing Wildfire Tracking Using Visual Servoing and Gimbal Control. In Proceedings of the Pattern Recognition, Computer Vision, and Image Processing, ICPR 2022 International Workshops and Challenges, Montreal, QC, Canada, 21–25 August 2022; Lecture Notes in Computer Science. Rousseau, J.J., Kapralos, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2023; Volume 13644. [Google Scholar] [CrossRef]
  12. Guo, F.; Xie, C.; Chen, S. Line of Sight Tracking Method Based on Feedback Linearization and Command Planning. J. Phys. Conf. Ser. 2023, 2456, 012027. [Google Scholar] [CrossRef]
  13. Guo, J.; Yuan, C.; Zhang, X.; Chen, F. Vision-Based Target Detection and Tracking for a Miniature Pan-Tilt Inertially Stabilized Platform. Electronics 2021, 10, 2243. [Google Scholar] [CrossRef]
  14. Pham, T.; Bui, M.; Mac, P.; Tran, H. Image Based Visual Servo Control with a Two-Axis Inertially Stabilized Platform Using Deep Deterministic Policy Gradient. In Proceedings of the 2022 7th International Conference on Robotics and Automation Engineering (ICRAE), Singapore, 18–20 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 31–37. [Google Scholar] [CrossRef]
  15. Liu, X.; Mao, J.; Yang, J.; Li, S.; Yang, K. Robust predictive visual servoing control for an inertially stabilized platform with uncertain kinematics. ISA Trans. 2021, 114, 347–358. [Google Scholar] [CrossRef]
  16. Yang, J.; Liu, X.; Sun, J.; Li, S. Sampled-data robust visual servoing control for moving target tracking of an inertially stabilized platform with a measurement delay. Automatica 2022, 137, 110105. [Google Scholar] [CrossRef]
  17. Liu, X.; Yang, J.; Qiao, P. Gain Function-Based Visual Tracking Control for Inertial Stabilized Platform with Output Constraints and Disturbances. Electronics 2022, 11, 1137. [Google Scholar] [CrossRef]
  18. Reis, M.F.; Monteiro, J.C.; Costa, R.R.; Leite, A.C. Super-Twisting Control with Quaternion Feedback for a 3-DoF Inertial Stabilization Platform. Proc. IEEE Conf. Decis. Control 2019, 2018, 2193–2198. [Google Scholar] [CrossRef]
  19. Mao, J.; Yang, J.; Liu, X.; Li, S.; Li, Q. Modeling and Robust Continuous TSM Control for an Inertially Stabilized Platform with Couplings. IEEE Trans. Control Syst. Technol. 2020, 28, 2548–2555. [Google Scholar] [CrossRef]
  20. Suoliang, G.; Lei, Z.; Zhaowu, P.; Shuang, Y. Finite-time Robust Control for Inertially Stabilized Platform Based on Terminal Sliding Mode. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 483–488. [Google Scholar]
  21. Lee, D.H.; Tran, D.Q.; Kim, Y.B.; Chakir, S. A robust double active control system design for disturbance suppression of a two-axis gimbal system. Electronics 2020, 9, 1638. [Google Scholar] [CrossRef]
  22. Fu, T.; Gao, Y.; Guan, L.; Qin, C. An LADRC Controller to Improve the Robustness of the Visual Tracking and Inertial Stabilized System in Luminance Variation Conditions. Actuators 2022, 11, 118. [Google Scholar] [CrossRef]
  23. Li, H.; Yu, J. Anti-Disturbance Control Based on Cascade ESO and Sliding Mode Control for Gimbal System of Double Gimbal CMG. IEEE Access 2020, 8, 5644–5654. [Google Scholar] [CrossRef]
  24. Kurkcu, B.; Kasnakoglu, C.; Efe, M.O. Disturbance/Uncertainty Estimator Based Integral Sliding-Mode Control. IEEE Trans. Autom. Control 2018, 63, 3940–3947. [Google Scholar] [CrossRef]
  25. Zhou, X.; Li, X. Trajectory tracking control for electro-optical tracking system based on fractional-order sliding mode controller with super-twisting extended state observer. ISA Trans. 2021, 117, 85–95. [Google Scholar] [CrossRef]
  26. Chalanga, A.; Kamal, S.; Fridman, L.M.; Bandyopadhyay, B.; Moreno, J.A. Implementation of Super-Twisting Control: Super-Twisting and Higher Order Sliding-Mode Observer-Based Approaches. IEEE Trans. Ind. Electron. 2016, 63, 3677–3685. [Google Scholar] [CrossRef]
  27. Huynh, T.; Tran, M.-T.; Lee, D.-H.; Chakir, S.; Kim, Y.-B. A Study on Vision-Based Backstepping Control for a Target Tracking System. Actuators 2021, 10, 105. [Google Scholar] [CrossRef]
  28. Kudva, P.; Viswanadham, N.; Ramakrishna, A. Observers for linear systems with unknown inputs. IEEE Trans. Autom. Control 1980, 25, 113–115. [Google Scholar] [CrossRef]
  29. Searcóid, M.Ó. Uniform Continuity. In Metric Spaces; Springer: London, UK, 2006; pp. 147–163. [Google Scholar]
  30. Boyd, S.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; Society for Industrial and Applied Mathematics, 3600 University City Science Center: Philadelphia, PA, USA, 1994. [Google Scholar]
  31. Tran, D.T.; Truong, H.V.A.; Jin, M.; Ahn, K.K. Finite-Time Output Control for Uncertain Robotic Manipulators with Time-Varying Output Constraints. IEEE Access 2022, 10, 119119–119131. [Google Scholar] [CrossRef]
  32. Seshagiri, S.; Khalil, H.K. On introducing integral action in sliding mode control. In Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, NV, USA, 10–13 December 2002; IEEE: Piscataway, NJ, USA, 2002; Volume 2, pp. 1473–1478. [Google Scholar]
  33. Li, P.; Ma, J.; Li, W.; Zheng, Z. Adaptive conditional integral sliding mode control for fault tolerant flight control system. In Proceedings of the 2008 Asia Simulation Conference—7th International Conference on System Simulation and Scientific Computing, Beijing, China, 10–12 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 638–642. [Google Scholar]
  34. Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. High-Speed Tracking with Kernelized Correlation Filters. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 583–596. [Google Scholar] [CrossRef]
Figure 1. Configuration of the controlled system.
Figure 2. The proposed control system and experiment scenarios.
Figure 3. Experimental results of tracking a stationary target: (a) Projection trajectories on the image plane; (b) Time responses of the target's projection; (c) Control inputs; (d) Time responses of the inner gimbal.
Figure 4. Experimental results of tracking a target following a sinusoidal trajectory: (a) Projection trajectories on the image plane; (b) Time responses of the target's projection; (c) Control inputs; (d) Time responses of the inner gimbal.
Figure 5. Experimental results of tracking a target following a rectangular trajectory: (a) Projection trajectories on the image plane; (b) Time responses of the target's projection; (c) Control inputs; (d) Time responses of the inner gimbal.
Table 1. Control system specifications.

| Parameter | Value |
| --- | --- |
| System matrices | $K = \mathrm{diag}\{6.146, 5.207\}$, $H = 10 I_4$, $B_\omega = \mathrm{diag}\{7.232, 1.402\cos\theta\}$ |
| Observer's gains | $L = \mathrm{diag}\{33.143, 33.143, 41.429, 41.429\}$, $Q = 1.671 I_4$, $\lambda = 100$ |
| Finite-time controller's gains | $\Lambda_f = \mathrm{diag}\{0.058, 0.022\}$, $\Lambda_1 = \mathrm{diag}\{45.376, 50.218\}$, $\Lambda_3 = \mathrm{diag}\{10.086, 15.731\}$, $\Lambda_2 = \mathrm{diag}\{10.427, 10.739\}$, $\Lambda_4 = \mathrm{diag}\{2.068, 2.077\}$, $\beta = 0.667$, $\alpha = 0.0001$ |
| Sampling time | 0.05 [s] |
Table 2. Comparison of performance indices (RMSE).

| Experiment | Coordinate | Proportional | Hurak's | Backstep | Proposed |
| --- | --- | --- | --- | --- | --- |
| A | x [pixel] | 5.1228 | 5.0181 | 5.0935 | 5.0979 |
| A | y [pixel] | 2.9051 | 2.8063 | 3.3195 | 2.8124 |
| B | x [pixel] | 0.3269 | 0.2607 | 0.2334 | 0.2893 |
| B | y [pixel] | 0.5041 | 0.3590 | 0.5344 | 0.2244 |
| C | x [pixel] | 0.6422 | 0.4906 | 0.4491 | 0.5515 |
| C | y [pixel] | 0.5520 | 0.3884 | 0.5528 | 0.3350 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
