Article

Prediction and Control of Small Deviation in the Time-Delay of the Image Tracker in an Intelligent Electro-Optical Detection System

1 College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
2 Laboratory of Aerospace Servo Actuation and Transmission, Beijing Institute of Precision Mechatronics and Controls, Beijing 100076, China
* Author to whom correspondence should be addressed.
Actuators 2023, 12(7), 296; https://doi.org/10.3390/act12070296
Submission received: 16 May 2023 / Revised: 4 July 2023 / Accepted: 11 July 2023 / Published: 21 July 2023

Abstract: A small deviation in the time-delay of the image tracker is essential for improving the tracking precision of an electro-optical system, and for future advances in actuator technology. The core goal of this manuscript is to address issues such as tracking controller time-delay compensation and the precision of an electro-optical detection system using an advanced filter design, fire-control modeling, and anti-occlusion target detection. To address this problem, a small deviation time-delay prediction and control method for the image tracker is proposed based on the principle of linear motion transformation. The time-delay error formation is analyzed in detail to reveal the mechanism linking the tracking controller feedback and the line-of-sight position correction. An advanced N-step Kalman filtering controller model is established by combining a line-of-sight firing control judgment with a single-sample training anti-occlusion DSST target tracking strategy. Finally, an actuator platform with three degrees of freedom is used to test the optical-mechatronics system. The results show that the distribution probability of the line-of-sight measuring error within a circle with a radius of 0.15 mrad is 72%. Compared with the traditional control method, the tracking precision of the optimal method is improved by 58.3%.

1. Introduction

Unmanned equipment, which denotes intelligent precision devices with the specific functions of reconnaissance, positioning, aiming, and target tracking [1,2,3,4,5], plays an increasingly important role in actuator technology and industry. With the capability of target imaging, labeling, tracking, and measuring, an electro-optical detection system (EODS) is an essential component in unmanned equipment for conducting autonomous target recognition, aiming, and tracking.
An EODS is generally equipped with a tracking controller, which calculates, in real time, the miss-distance between the center of the target position and the center of the lens field of view. It is also equipped with a judgment controller to select the firing time based on the miss-distance [6]. The core performance measure of an EODS is its targeting precision, especially for “low, slow and small (LSS)” fast-moving targets [7]. The specific performance indicators of such a target are a distance of less than 500 m, an area of less than 2 m², and a speed of 30~50 m/s [8]. In this case, the targeting precision of the EODS should be better than 0.15 mrad to accurately hit the target. To satisfy such a requirement, the dynamic performance of the line-of-sight (LOS) of an EODS, i.e., the stability response time, must be ensured. However, due to the intrinsic properties of the digital imaging systems of an EODS, time delay is one of the main obstacles to improving EODS dynamics, thus degrading the tracking and aiming performance of the electro-optical equipment.
The miss-distance can only be measured after the establishment, processing, and transmission of the image signals, so the target miss-distance obtained by the controller lags behind the actual target imaging time. Therefore, there is a small deviation between the measured value and the true value caused by the tracker time-delay, which can lead to a decrease in tracking precision. According to the definition of the Society of Photo-Optical Instrumentation Engineers (SPIE), a small target occupies an area of less than 80 pixels in a 256 × 256 image, i.e., less than 0.12% of the image resolution [9]. Moreover, the EODS needs to control this deviation within 1~3 pixels in a lens of 1280 × 720 pixels for long-distance precision shooting. Thus, the deviation to be corrected in target tracking and aiming is extremely small.
However, as small deviation detection elements, existing image trackers are not precise enough. The time-delay error of a tracker is also affected by the frame rate, lens resolution, and hardware circuit. It can thus delay the best firing time and even result in lower tracking precision.
There are two traditional ways to improve precision. The first involves improving the performance of the tracking algorithm. Tomar et al. [10] proposed a dynamic kernel convolutional neural network linear regression model to track people under dense occlusion. Lin et al. [11] proposed an intelligent hybrid strategy based on machine vision for detecting a mechanical workpiece; their test showed a tracking error of less than 0.06 mm. However, most deep learning methods require high image quality, and a large amount of sample data and training time is needed to obtain even a barely suitable detection model. In actual gun firing, the tracking data must be obtained quickly, and image detection is affected by adverse conditions such as light intensity, lens dust, scratches, and target occlusion. Thus, these methods are not suitable for EODS tracking and aiming. Wu et al. [12] proposed a gray-level feature dual-neighbor gradient method. Han et al. [13] proposed a multi-scale three-layer local gray-level contrast metric tracking detection framework. The advantage of these template matching methods is that they can complete tracking without a large amount of sample data, but the target position is easily lost after occlusion or a change in direction.
The second type involves keeping the tracking algorithm unchanged and introducing advanced filter prediction technology. The US Air Force designed a three-state filter for the “AH-64 Apache” helicopter to obtain the predicted speed value. A good filter prediction method can break through the limits of the image algorithm and improve the tracking precision. Wen et al. [14] proposed a direct integration method for time-delay control. Malviya et al. [15] proposed a particle filter for a robotic arm. Zhong et al. [16] established a passive error feature prediction equation based on the geometric dynamics of a spherical camera. Wu et al. [17] proposed a nonlinear Gaussian iterative prediction model to achieve visual servo stability control of wheeled robots. Zhang et al. [18] fused an inertial measurement unit and a monocular camera and adopted an iterative extended Kalman filter; tests showed that the accuracy was improved by 15–30 times. However, these filter prediction methods are mainly used for servo mechanisms or long-range rotation, and their overall structural weight is too large. Moreover, manual aiming and tracking actions are low-frequency, small-amplitude, short-range motions within 1.5 Hz. Thus, these methods are not convenient for operators to deploy and carry quickly.
According to the above representative sample of recent studies on image processing and filter prediction, there are still only a few small deviation time-delay prediction methods for image trackers in EODS firing, and control methods with high tracking precision are still at the exploratory stage.
The core goal of this paper is to address issues such as tracking controller time-delay compensation and the tracking precision of an electro-optical detection system using an advanced filter design, fire-control modeling, and anti-occlusion target detection. In response to this problem, a small deviation time-delay prediction and control method for an image tracker is proposed based on the principle of linear motion transformation. The time-delay error formation is analyzed in detail to reveal the mechanism linking the tracking controller feedback and the line-of-sight position correction. An advanced N-step Kalman filtering controller model is established by combining a line-of-sight firing control judgment with a single-sample training anti-occlusion DSST target tracking strategy. The experiment shows that the distribution probability of the line-of-sight measured error within a circle with a radius of 0.15 mrad is 72%. Compared with the traditional control method, the tracking precision of the optimal method is improved by 58.3%.
This manuscript will be valuable to researchers interested in electro-optical systems, small deviation control, image trackers, and time-delay prediction.

2. Materials and Methods

2.1. Composition and Framework of EODS

The optical-mechatronics composition of an EODS is shown in Figure 1. As seen in the figure, the EODS comprises a white light lens, an infrared lens, a laser range finder lens, an eyepiece, and a collimated beam. The EODS is installed on a precision rotating platform. A Cassegrain collimator with a reticle is provided to simulate a target at infinity, and it also provides a reticle imaging observation point for the EODS.
The framework of the optical-mechatronics system of an EODS, along with a Cassegrain collimator, a light source detector and a target manipulator is shown in Figure 2. The light source detector is used to detect the laser offset distance on the target board.

2.2. Traditional Tracking Controller Model of an EODS

Figure 3 shows the traditional tracking controller model of an EODS. The tracker obtains the target's resolution coordinate after collecting the image. Then, it calculates the miss-distance Δ between the target coordinate and the center coordinate of the field of view. The miss-distance is the angular deviation obtained by converting the number of pixels K using the lens resolution E × F and the lens field angle α × β. The unit of the miss-distance Δ is mrad. The qualitative relationship of error correction can thus be obtained.
(1) Under the ideal condition, without the tracker's time-delay error, the LOS position equals the miss-distance Δ.
(2) Under the actual condition, with the tracker's time-delay error, the miss-distance Δ after tracker processing is the measured value of the LOS. There is a small deviation δ between the measured value and the true value of the LOS.
(3) The core goal of this manuscript is to reduce the adverse impact of the small deviation δ on the EODS and improve the tracking precision.
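The pixel-to-angle conversion behind the miss-distance Δ can be sketched as follows; the function name and the use of a per-axis field angle are illustrative assumptions, not the paper's implementation:

```python
import math

def miss_distance_mrad(k_pixels, pixels_axis, fov_deg_axis):
    """Convert a pixel offset into an angular miss-distance in mrad.

    k_pixels     : pixel count between the target and the field-of-view center
    pixels_axis  : lens resolution along this axis (e.g. 1280 or 720)
    fov_deg_axis : lens field angle along this axis, in degrees
    """
    fov_mrad = math.radians(fov_deg_axis) * 1000.0  # degrees -> mrad
    return k_pixels * fov_mrad / pixels_axis

# Example with the lens parameters used later in the paper:
# 1280 x 720 pixels, 3.6125 deg x 2.034 deg field angle.
dx = miss_distance_mrad(3, 1280, 3.6125)  # about 0.148 mrad for a 3-pixel offset
```

With these parameters, a 3-pixel deviation corresponds to roughly the 0.15 mrad targeting requirement stated in the Introduction.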

2.3. Optimized Model of the Tracking Controller in an EODS with Small Deviation in Time-Delay

As shown in Figure 4, the optimized tracking controller model of an EODS is established in this paper.
(1) Model composition: lens assembly, image tracker, prediction filter, and firing controller. The model inputs the target image and outputs the LOS position.
(2) Lens assembly: it collects the target image and generates a video stream with a frame rate of 25 Hz and a resolution of 1280 × 720 pixels (M × N). The stream is then input to the image tracker to extract the target's features.
(3) Image tracker: the DSST tracker inputs the video stream and outputs a gray-level feature response value Z_max. The resolution coordinate Δ_Z(x, y) corresponding to the response value Z_max is the target-tracking area of the current frame.
(4) Prediction filter: an advanced N-step Kalman prediction filter is used to correct the small deviations in the time-delays τ_T and τ_G. T is the sampling period and N is the fixed step. The estimated value at the next NT moment can be predicted after the correction. The LOS predicted value X̂_f(t) is obtained via continuous recursive iteration.
(5) Firing controller: it judges whether the LOS predicted value X̂_f(t) coincides with the target's actual position X(t). The small deviation δ is limited so that Δ ≤ Δ_0.
Assume that the LOS true value is X(t). If the image tracker has a time-delay τ_T and noise N, the quantitative relationship of the LOS measured value X_T(t) can be obtained.

X_T(t) = X(t − τ_T) + N    (1)
The time-delay of the tracker is generally about 1~3 frames. Suppose the noise N follows a zero-mean normal distribution. There are two main noise sources: the sensor's inherent noise and the ambient noise of the tracked target.
Assume that the firing threshold value is Δ_0. When the measured value meets X_T(t) > Δ_0, it is judged that the LOS does not coincide with the target at this time. When X_T(t) ≤ Δ_0, it is judged that the LOS coincides with the target. The coincidence time is recorded as t_s, so the currently measured value is marked as X_T(t_s). There is a time-delay τ_G between the tracker receiving the coinciding signal and completing the firing. The firing time is recorded as t_e. In Equation (2), the currently measured value is marked as X_T(t_e).

X_T(t_e) = X_T(t_s + τ_G)    (2)
The measured value X_T(t_s) is set as the standard when the miss-distance Δ reaches the given firing threshold value Δ_0, and time t_s is set as the reference time. By comparing this standard with the true value X(t_e), the quantitative relationship of the small deviation δ can be obtained.

δ = X(t_e) − X_T(t_s) = [X(t_e) − X_T(t_e)] + [X_T(t_e) − X_T(t_s)] = δ_1 + δ_2    (3)

Then, the quantitative relationships of the small deviations δ_1 and δ_2 can be obtained.

δ_1 = X(t_e) − X_T(t_e),  δ_2 = X_T(t_e) − X_T(t_s)    (4)
Therefore, the small deviation in time-delay τ consists of two parts:
(1) Time-delay τ_T: the tracker follows the target according to the miss-distance Δ at a certain time in the past. When studying the small deviation δ_1 of the image tracker at time t_e, the time-delay τ_T should be subtracted. In Equation (1), the small deviation δ_1 generated at time t_s − τ_T is corrected to reduce the adverse impact of the time-delay on EODS firing.
(2) Time-delay τ_G: the tracker follows the target according to the miss-distance Δ at a certain time in the future. The image tracker has a signal time-delay between the coinciding time t_s and the shooting time t_e, and the LOS is still shaky within the time-delay τ_G. When studying the small deviation δ_2 of the image tracker at time t_e, the time-delay τ_G should be added. In Equation (2), the small deviation δ_2 generated at time t_s + τ_G is corrected to reduce the adverse impact of the time-delay on EODS firing.

3. Control Design

3.1. Miss-Distance Advanced Kalman Prediction Filtering Controller

In order to capture the change in the LOS characteristics during target tracking and aiming of an EODS, an aiming linear motion transformation model is established. In actual shooting, the aiming and tracking actions are low-frequency, small-amplitude, short-range motions within 1.5 Hz, and the LOS motion in the pitch and azimuth directions is essentially the same. Therefore, the LOS jitter motion can be approximated by a linear motion transformation model.
The signal collection of a tracking controller is a discrete process. According to the general model of the random linear discrete system, the mathematical equations for an LOS’s motion state and the image tracker’s measured value can be obtained.
X_k = Φ_{k|k−1} X_{k−1} + Γ_{k|k−1} W_{k−1}
Y_k = H_k X_k + V_k    (5)
In Equation (5), X_k is the n-dimensional state vector at time k, Φ_{k|k−1} is the n × n state transition matrix, Γ_{k|k−1} is the n × p noise input matrix, W_{k−1} is the p-dimensional state noise sequence, Y_k is the m-dimensional observation sequence, H_k is the m × n observation matrix, and V_k is the m-dimensional observation noise sequence.
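Equation (5) can be simulated directly to generate synthetic LOS states and tracker measurements; the noise covariances, seed, and function name below are placeholders, not the identified values from the paper:

```python
import numpy as np

def simulate_system(Phi, Gamma, H, Q, R, x0, steps, rng):
    """Propagate X_k = Phi X_{k-1} + Gamma W_{k-1}, Y_k = H X_k + V_k."""
    xs, ys = [], []
    x = x0.copy()
    for _ in range(steps):
        w = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)  # state noise W_{k-1}
        x = Phi @ x + Gamma @ w
        v = rng.normal(0.0, np.sqrt(R))                       # observation noise V_k
        ys.append(float(H @ x + v))
        xs.append(x.copy())
    return np.array(xs), np.array(ys)

# Illustrative run with the 25 Hz sampling period and a constant-acceleration model:
T = 1 / 25.0
Phi = np.array([[1, T, T**2 / 2], [0, 1, T], [0, 0, 1]])
rng = np.random.default_rng(0)
xs, ys = simulate_system(Phi, np.eye(3), np.array([1.0, 0.0, 0.0]),
                         np.eye(3) * 1e-6, 0.0019, np.zeros(3), 100, rng)
```

Here the scalar measurement picks off the position component only, matching an observation matrix of the form [1, 0, 0].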
Figure 5 shows the optimized Kalman prediction filter of the EODS. Traditional signal fusion estimation does not require very high precision, so it is usually limited to a one-step prediction. However, for practical engineering applications such as the tracking and aiming of an EODS, the traditional filter should be improved.
The optimized model satisfies the following two assumptions.
Assumption 1.
State noise W_k and observation noise V_k are zero-mean white noises and are uncorrelated. Their variances are Q and R, respectively.

E[W_k] = 0, E[W_k W_j^T] = Q_k δ_kj
E[V_k] = 0, E[V_k V_j^T] = R_k δ_kj
E[W_k V_j^T] = 0    (6)

In Equation (6), for all k, j, δ_kj is the Kronecker delta function.

δ_kj = 1 (k = j); δ_kj = 0 (k ≠ j)    (7)
Assumption 2.
The initial value x_0 is uncorrelated with the state noise W_k and the observation noise V_k.

E[x_0] = μ_0, E[(x_0 − μ_0)(x_0 − μ_0)^T] = P_0    (8)
On the basis of Assumptions 1 and 2, and according to the estimated value at the last NT moment, the optimized equation at the current time can be deduced.

X̂_{k|k−n} = Φ_{k|k−n} X̂_{k−n|k−n} + Γ_{k−n} W_{k−n}    (9)
Similarly, according to the mean square error at the last NT moment, the prediction mean square error equation at the current time can be deduced.

P_{k|k−n} = Φ_{k|k−n} P_{k−n|k−n} Φ_{k|k−n}^T + Γ_{k|k−n} Q Γ_{k|k−n}^T    (10)
The Kalman prediction gain matrix equation can then be obtained.

K_k = P_{k|k−n} H_k^T (H_k P_{k|k−n} H_k^T + R_k)^{−1}    (11)
Then, using the data measured by the image tracker to correct the current state value, the current time-optimal prediction estimation equation can be deduced.

X̂_{k|k} = Φ_{k|k−n} X̂_{k|k−n} + K_k (Y_k − H_k X̂_{k|k−n})    (12)
Finally, the advanced N-step optimal filter prediction mean square error equation after the data update can be deduced.

P_{k|k} = (P_{k|k−n}^{−1} + H_k^T R_k^{−1} H_k)^{−1}    (13)
The EODS image tracker uses an optimal prediction filter structure based on the LOS linear motion transformation, so the state transition matrix in Equation (5) can be deduced.

Φ_{k|k−n} =
| 1  T  T²/2 |
| 0  1   T   |
| 0  0   1   |    (14)
If the initial values X_0 and P_0 are known, the state estimation vector X̂_{k|k} at time k can be calculated recursively from the tracker observation value Y_k at time k. If the observation value Y_k has a time-delay τ, the actual observation received at time k is Y_{k−n}. Therefore, the current state estimate X̂_{k|k} at time k is actually the predicted LOS value x_{k−n} at time k − n in the past.
The miss-distance (MD) signal differs from the angular velocity signal of an incremental encoder. The sampling period T_s of the encoder is generally in the range of 10 μs to 500 μs, and the variation in T_s is relatively small. The MD signal, by contrast, is affected by the frame rate, lens resolution, and hardware computing power, and the sampling period T_k of the tracker is generally in the range of 1 ms to 100 ms. The longer the tracking time, the more the amount of running data grows, roughly exponentially. This leads to a phenomenon where the period T_k starts rapidly and then slows down. If the hardware performance is poor, the tracker will gradually deteriorate from MD time-delay to stagnation in the later stage. Therefore, compared with the speed-loop incremental encoder, the MD signal of the position-loop tracker can be regarded as a non-uniformly sampled discrete signal.
By adjusting the step size n, an optimal Kalman algorithm suitable for different systems can be obtained. The parameters of this paper include a frame rate of 25 Hz and a lens resolution of 1280 × 720, with good hardware computing power. Based on this parameter configuration of the EODS, a simulation was conducted using n = 3 as an example to demonstrate the effectiveness of the algorithm. According to the single-stage Kalman filter equation, the state vector X̂_{k|k−3} is estimated from the observation value Y_{k−3} available at time k. Then, X̂_{k|k−3} is used to develop a three-step prediction to obtain the estimated LOS value X̂_{k+3|k} at time k. Finally, the following mathematical equation is obtained.
X̂_{k+3|k} = Φ_{k+3|k} X̂_{k|k}
Ŷ_{k+3} = H_{k+3} X̂_{k+3|k}    (15)
According to the statistical results of the image tracker, the observation noise variance is R = 0.0019 and the time-delay is about τ_T = 40 ms. Suppose the filter initial value is X_0 = [0, 0, 0]^T, the observation matrix is H = [1, 0, 0], the mean square error of the initial value is P_0 = I_{3×3}, and the filter gain matrix's initial value is K_0 = [0, 0, 0]^T. The model process noise Q is mainly obtained through comparative experiments.
Q =
|   0        0      0.0002 |
|   0     0.0005    0.0186 |
| 0.0002  0.0186    0.9299 |    (16)
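The recursion of Equations (9)–(16) can be sketched as a single-stage Kalman update followed by an n-step forward prediction; this is an illustrative reconstruction (the measurement sequence and helper names are assumed), not the authors' exact implementation:

```python
import numpy as np

def n_step_kalman_predict(ys, Phi, H, Q, R, x0, P0, n_steps):
    """Single-stage Kalman update followed by an n-step forward prediction.

    ys      : sequence of (delayed) miss-distance measurements Y_k
    returns : list of n-step-ahead LOS position predictions
    """
    x, P = x0.copy(), P0.copy()
    H = H.reshape(1, -1)
    preds = []
    for y in ys:
        # time update (one step)
        x = Phi @ x
        P = Phi @ P @ Phi.T + Q
        # measurement update with gain K (Eqs. 11-13)
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + (K * (y - (H @ x).item())).ravel()
        P = (np.eye(len(x)) - K @ H) @ P
        # n-step forward prediction to compensate the tracker delay
        x_pred = np.linalg.matrix_power(Phi, n_steps) @ x
        preds.append(float(x_pred[0]))
    return preds

# Parameters from the text: T = 1/25 s, n = 3, R = 0.0019, Q from Equation (16).
T = 1 / 25.0
Phi = np.array([[1, T, T**2 / 2], [0, 1, T], [0, 0, 1]])
Q = np.array([[0, 0, 0.0002], [0, 0.0005, 0.0186], [0.0002, 0.0186, 0.9299]])
preds = n_step_kalman_predict([0.1, 0.12, 0.14], Phi, np.array([1.0, 0.0, 0.0]),
                              Q, 0.0019, np.zeros(3), np.eye(3), n_steps=3)
```

With n = 3 and a 25 Hz frame rate, the forward prediction spans 120 ms, enough to cover the roughly 40 ms tracker delay plus the firing-path delay discussed in Section 3.2.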
The model adopts a fourth-order Runge-Kutta simulation with a step length of 0.001 s. According to the linear motion transformation model, the LOS statistical data in the X (azimuth) direction are fitted as a frequency spectrum function, and the filter model takes this function as the true-value input for testing. As shown in Figure 6, the black curve S0 represents the true value of the LOS, the red curve S1 represents the tracker measuring method, the green curve S2 represents the traditional moving-average filter method, and the blue curve S3 represents the optimized advanced N-step Kalman filter prediction method.
The black curve is set as the standard. In Figure 6, the red, green, and blue curves have the same trend as the standard black curve, which shows that these methods can basically reflect the dynamic change in the LOS true value. In the locally expanded area (7.8~9 s) of Figure 6, the blue curve is closest to the black curve, which shows that the optimized method has the highest test accuracy of the three.
In Figure 7, the red curve L0 represents the inherent measured error between the image tracker’s measured value and the LOS’s true value. The blue curve L1 represents the traditional method’s measured error between the moving-average filtering value and the LOS’s true value. And the black curve L2 represents the optimal method’s measured error between the advanced N-step Kalman predicted value and the LOS’s true value.
Comparing the three curves in Figure 7, the red curve has the largest peak value, the black curve has the smallest, and the blue curve is in between. Detailed data are shown in Table 1. The inherent measured error is 0.49 mrad (10~15 s), and the traditional method's measured error is 0.21 mrad (0~5 s), so the traditional method reduces the error ratio by 57.1%. This shows that the traditional method can reduce the tracker's inherent measured error to a certain extent. In the locally expanded area (13~13.5 s) of Figure 7, the black curve has two peaks: the upper bound is 0.051 mrad and the lower bound is −0.053 mrad, so the optimal method's measured error is 0.053 mrad, and its error ratio is reduced by 89.2%. Both the traditional and optimal methods can therefore reduce the tracker's measured error; however, compared with the traditional method, the error correction effect of the optimal method is improved by 74.8%. This shows that the advanced N-step Kalman filter prediction controller can effectively correct the small deviation in the time-delay of a tracker and improve the shooting accuracy of an EODS.

3.2. Miss-Distance Judgment of LOS Firing Controller

The optimized filter controller can reduce the tracking time-delay error between the measured value and the true value and then output an accurate LOS predicted value, i.e., the miss-distance Δ in Figure 8. To improve the tracking precision, it is also necessary to make the LOS predicted position coincide with the target's actual position within the firing threshold value, so as to reduce the adverse impact of the tracker time-delay error on EODS firing.
The time-delay of the miss-distance Δ is about τ_G = 35 ms. The aiming and tracking actions of an EODS are low-frequency, small-amplitude, short-range motions within 1.5 Hz. According to the linear motion transformation model, the mathematical equation of the LOS firing control judgment correction can be obtained.
X_s(t) = X̂_f(t) + ω_g(t)·τ_G < Δ_0    (17)
In Equation (17), X̂_f(t) is the LOS predicted value, τ_G is the time-delay, ω_g(t) is the LOS angular velocity, Δ_0 is the firing judgment threshold, and X_s(t) is the LOS fusion value.
Ideally, when the LOS fusion value X_s(t) coincides with the actual target position, X_s(t) equals the shooting accuracy ε, so the firing threshold value Δ_0 should be less than ε. However, the actual manual tracking and aiming process is complex, and the LOS linear motion model will be affected by external factors. Therefore, the composite constraint in Equation (18) is added on top of the LOS judgment correction in Equation (17).
X̂_f(t) ≤ ε/2    (18)
Combining Equations (17) and (18), a new mathematical equation can be obtained.
X̂_f(t) < Δ_0/2 ≤ ε/2
X_s(t) = X̂_f(t) + ω_g(t)·τ_G < Δ_0 ≤ ε    (19)
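The combined judgment of Equation (19) reduces to a simple gate on the predicted and fusion values; the function name and the numeric inputs below are illustrative:

```python
def fire_decision(x_pred, omega_g, tau_G, delta0, epsilon):
    """Return True when the LOS satisfies the combined firing judgment.

    x_pred  : N-step Kalman LOS prediction X_hat_f(t), mrad
    omega_g : LOS angular velocity, mrad/s
    tau_G   : firing-path time-delay, s
    delta0  : firing judgment threshold, mrad (delta0 <= epsilon)
    epsilon : preset shooting accuracy, mrad
    """
    x_fused = x_pred + omega_g * tau_G            # delay-compensated fusion value
    ok_pred = abs(x_pred) < delta0 / 2 <= epsilon / 2   # composite constraint
    ok_fused = abs(x_fused) < delta0 <= epsilon         # fusion-value judgment
    return ok_pred and ok_fused

# Example with tau_G = 35 ms and the 0.15 mrad preset accuracy (delta0 assumed 0.12):
fire = fire_decision(x_pred=0.03, omega_g=0.5, tau_G=0.035, delta0=0.12, epsilon=0.15)
```

The gate rejects both a prediction that is already too far off center and a prediction that would drift past the threshold during the firing delay.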
The lens resolution is N = 1280 × 720, and the lens field angle is β = 3.6125° × 2.034°. When the number of pixels between the tracker's measured value and the center of the field of view is n, the quantitative relationship of the miss-distance Δ can be obtained.
Δ = nφ = nβ/N    (20)
Then, the deviation value δ_0 of a single pixel is converted to 0.0493 mrad using Equation (20). The EODS needs to control the small deviation δ within 1~3 pixels in the 1280 × 720 lens for long-distance precision shooting [6,7]. Suppose the small deviation δ caused by the tracker time-delay is n = ±3 pixels. The tracker accuracy in the X (azimuth) and Y (pitch) directions is then ±0.14775 mrad and ±0.1479 mrad, respectively. The mathematical equation of the preset accuracy ε can be obtained.
ε = Δ = 0.1479 ≈ 0.15 mrad    (21)
As shown in Figure 9, the black curve P1 represents the LOS true value, and the red curve P2 represents the LOS predicted value. The green curve P0 represents the LOS firing judgment threshold, i.e., the optimized design method proposed in this paper.
The black curve is set as the standard. In Figure 9, the green curve peaks three times in 0~15 s, which shows that the firing control judgment was met three times in this period. The third peak of the green curve occurs in 11.34~11.46 s. In the locally expanded area of Figure 9, the coincidence time of the LOS predicted value (red curve) and the LOS true value (black curve) is 11.4 s. At this time, the LOS true value on the black curve is about 0.01 mrad, and the LOS predicted value on the red curve is about 0.03 mrad. This shows that the optimized method can effectively select the firing time.
As shown in Table 2, the test accuracy is 0.02 mrad. Compared with the preset accuracy of 0.15 mrad, the test accuracy is improved by 86.7%. It shows that the optimized method can control the tracker time-delay error within 1~3 pixels. The LOS firing time is effectively judged to identify the target in the field view center of the EODS lens, so as to improve the tracking precision.

3.3. Miss-Distance Anti-Occlusion Detection and Tracking Controller

The precondition for the correct implementation of the LOS firing controller model is that the image tracker stably outputs the miss-distance Δ signal. In Figure 10, the tracker time-delay can be compensated via filter prediction. However, once the miss-distance Δ is lost during the tracking and aiming process, the LOS firing controller cannot be implemented, which leads to a decrease in the tracking precision of the EODS. As shown in Figure 10, the basic principle of the optimized image tracker is to obtain the target's resolution coordinate position through image processing with a DSST filter; the DSST filter then stably outputs the pixel coordinate position of the target in the next frame.
The DSST filter extracts image blocks f_1, f_2, …, f_n with gray-level features from the single-sample detection area with resolution M × N [19,20,21]. Then, the filter h_t is solved to obtain the gray-level response values g_1, g_2, …, g_n corresponding to the image blocks f_1, f_2, …, f_n. A Gaussian function is selected as the expected response function and marked as g_i; the function's peak value is located at the center of the corresponding sample f_i. Finally, the mathematical equation of the DSST filter h_t is obtained.
σ = Σ_{i=1}^{n} ‖h_t ∗ f_i − g_i‖² = (1/MN) Σ_{i=1}^{n} ‖H̄_t F_i − G_i‖²    (22)
In Equation (22), ∗ represents convolution and σ is the minimum mean square error. h_t, g_n, and f_n are parameters extracted from the M × N detection area: f is the gray-level feature of different image blocks from the previous frame, g is the response value constructed using the Gaussian function, and h is the template updated at each iteration. The response values g_1, g_2, …, g_n follow the Gaussian distribution, and the response maximum g_max = g_i is located at the center of the corresponding gray-level image block. F_i, G_i, and H̄_t are the discrete Fourier transforms corresponding to the image block f_i, the response value g_i, and the DSST filter h_t; the bar indicates the complex conjugate. Then, Equation (23) can be obtained.
H_t = (Σ_{i=1}^{n} Ḡ_i F_i) / (Σ_{i=1}^{n} F̄_i F_i)    (23)
In order to simplify and reduce the calculation amount of the DSST target tracker, the numerator and denominator of Equation (23) are recorded as A_j and B_j, respectively, and updated recursively.
A_j = (1 − η) A_{j−1} + η Ḡ_j F_j
B_j = (1 − η) B_{j−1} + η F̄_j F_j    (24)
In Equation (24), η is the adjustment coefficient, which represents the learning rate of the optimal filter. A_j, B_j and A_{j−1}, B_{j−1} are the parameters of the current frame and the previous frame, respectively.
If the sample image Y resolution of the next frame is M × N, the updated response value Z can be obtained through Equation (25).
Z = F^{−1}[ Ā Y / (B + λ) ]    (25)
In Equation (25), F^{−1} is the inverse discrete Fourier transform, and λ is an adjustment (regularization) coefficient.
The area with the maximum gray-level response is the target tracking position of the current frame. If the response value Z reaches its maximum Z_max, the center-point resolution coordinate Δ_Z(x, y) of the corresponding image block is the new target tracking position. The target coordinate Δ_Z(x, y) is converted into the miss-distance Δ and stably output to the optimized prediction filter.
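The response computation of Equation (25) and the running update of Equation (24) can be sketched with a single-channel, MOSSE/DSST-style correlation filter; the multi-scale estimation and feature extraction of the full DSST are omitted, and all names are illustrative:

```python
import numpy as np

def gaussian_response(shape, center, sigma=2.0):
    """2-D Gaussian expected response g, peaked at the sample center."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * sigma ** 2))

def init_filter(f, g):
    """Initial numerator A_0 and denominator B_0 of Equation (23)."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    return np.conj(G) * F, np.conj(F) * F

def track_step(A, B, patch, eta=0.025, lam=1e-2):
    """Correlate a new patch (Eq. 25), locate the peak, and update A, B (Eq. 24)."""
    Y = np.fft.fft2(patch)
    Z = np.real(np.fft.ifft2(np.conj(A) * Y / (B + lam)))  # response map
    peak = np.unravel_index(int(np.argmax(Z)), Z.shape)    # Delta_Z(x, y)
    G = np.fft.fft2(gaussian_response(Z.shape, peak))      # re-centered expected response
    A = (1 - eta) * A + eta * np.conj(G) * Y               # numerator update
    B = (1 - eta) * B + eta * np.conj(Y) * Y               # denominator update
    return peak, A, B

# Illustrative: train on a synthetic 16 x 16 Gaussian blob and relocate its peak.
f = gaussian_response((16, 16), (8, 8), sigma=1.5)
A, B = init_filter(f, gaussian_response((16, 16), (8, 8)))
peak, A, B = track_step(A, B, f)
```

The running average in `track_step` is what gives the tracker its single-sample character: only the first frame is needed for training, and each later frame refines A and B with learning rate η.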
As shown in Figure 11a, an aerial view with a complex background is used as the single-sample training data for anti-occlusion DSST target tracking. Figure 12a,b shows the response value distributions of the two methods. In Figure 12, the X-Y axes represent the resolution coordinates of the examination image, and the Z axis represents the response value. There are multiple points in the black circle, but the highest point on the Z axis is the response maximum Z_max. Apart from the maximum scatter point, the more interference scatter points there are in the black circle, the more likely false detection becomes. Compared with the traditional template matching method, the number of interference scatter points of the optimized DSST tracking method is significantly decreased. The traditional method's response maximum is 0.4, while the optimized method's is 0.7; both can detect the cross-target position, but the response ratio of the optimized method increases by 42.9%. This shows that the target tracking stability of the optimized method is significantly improved, effectively avoiding target detection failure.
Figure 11b displays the target manipulator. As shown in Figure 2 and Figure 11b, the length of the connecting rod O 1 O 2 is 70 cm, and the target board rotates clockwise at an angular speed of 10°/s. Figure 13 and Figure 14 show the target tracking test results for four groups of image sequences. The tracking distance of the rotated target is 50 m; the moving speed of the occlusion object is about 1.2 m/s, and the tracking distance of the occluded target is 100 m.
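As a quick plausibility check on this geometry (assuming the board center sits at the rod tip, which the text does not state explicitly), the apparent line-of-sight rates can be worked out directly:

```python
import math

rod_len = 0.70                      # connecting rod O1O2 length (m)
omega = math.radians(10.0)          # board rotation rate, 10 deg/s in rad/s

v_tip = rod_len * omega             # linear speed of a point at the rod tip (m/s)
los_rate_rot = v_tip / 50.0 * 1e3   # apparent LOS rate of the rotated target at 50 m (mrad/s)
los_rate_occ = 1.2 / 100.0 * 1e3    # apparent LOS rate of the occluder at 100 m (mrad/s)

print(f"tip speed {v_tip:.3f} m/s, rotated target {los_rate_rot:.2f} mrad/s, "
      f"occluder {los_rate_occ:.1f} mrad/s")
```

Under these assumptions the occluder sweeps across the line of sight several times faster than the rotated target itself, which is what makes the occlusion episodes demanding for the tracker.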
In Figure 13a–c and Figure 14a–c, the traditional method produces a tracking drift error at the 45th frame and completely loses the target at the 75th frame, whereas the optimized method tracks the target's motion accurately and stably. In Figure 13d–g and Figure 14d–g, unlike the traditional method, the optimized method can resist target occlusion: the occlusion object appears from the left at the 30th frame, and the traditional method drifts at the 50th frame and completely loses the target at the 70th frame. This shows that the anti-occlusion DSST tracking method can stably output the miss-distance Δ and improve the tracking precision.
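The paper's anti-occlusion logic is not restated in detail here, but a common way to build such a gate into a DSST-style correlation tracker is a peak-to-sidelobe ratio (PSR) check: when tracking confidence collapses under occlusion, the template update is frozen and the last reliable miss-distance output is held. The sketch below is illustrative only; the gate value of 8.0 and the exclusion window are assumed placeholders, not the paper's parameters.

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio of a correlation response map: peak height
    relative to the mean/std of the map outside a small window around the
    peak. A sharp drop in PSR typically signals occlusion."""
    peak = float(response.max())
    py, px = np.unravel_index(int(response.argmax()), response.shape)
    mask = np.ones(response.shape, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = response[mask]
    return (peak - side.mean()) / (side.std() + 1e-12)

def update_allowed(response, gate=8.0):
    """Allow the template update only while tracking confidence is high;
    otherwise hold the last miss-distance output instead of drifting."""
    return psr(response) >= gate
```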

4. Experimental Verification

The purpose of this experiment is to analyze the prediction and control mechanisms of the research object, an EODS, with the LOS's shooting accuracy as the core index. The experimental setup comprises an EODS, a Cassegrain collimator, a Hikvision light-source detector, a collimated laser, a precision rotating platform, and an X-Y precision rotation actuator platform. Figure 15 shows the EODS's experimental setup, and Table 3 presents the performance parameters.
(1)
The traditional method is set as the control group, with no filter prediction added; its image tracker uses a template matching method.
(2)
The optimal method is set as the test object, with the advanced N-step Kalman filter prediction and the LOS coincidence judgment correction added; its image tracker uses the anti-occlusion DSST target tracking method.
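The advanced N-step Kalman filtering controller itself is specified earlier in the paper; as a rough, self-contained sketch of the underlying idea (a scalar constant-velocity LOS channel; the noise covariances Q, R and the model order here are assumptions, not the paper's values), a filter can offset an N-frame tracker delay by propagating its posterior N steps ahead after every measurement update:

```python
import numpy as np

class NStepKalmanPredictor:
    """Constant-velocity Kalman filter that, after each update with the
    delayed miss-distance measurement, propagates the state N extra steps
    to compensate an N-frame tracker time-delay. Illustrative sketch only."""

    def __init__(self, dt, n_steps, q=1e-3, r=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
        self.H = np.array([[1.0, 0.0]])              # only position is measured
        self.Q = q * np.eye(2)                       # assumed process noise
        self.R = np.array([[r]])                     # assumed measurement noise
        self.x = np.zeros((2, 1))
        self.P = np.eye(2)
        self.n = n_steps

    def step(self, z):
        # predict one frame
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the (delayed) measurement z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([[z]]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        # propagate N steps ahead (no update) to cancel the delay
        x_ahead = np.linalg.matrix_power(self.F, self.n) @ self.x
        return float(x_ahead[0, 0])
```

On a steady ramp the converged output leads the delayed measurement by N·dt times the estimated velocity, which is exactly the lag the delay introduces.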
Manual aiming and tracking is a low-frequency, small-amplitude, small-range motion below 1.5 Hz. Moreover, the frequency–amplitude characteristics of the pitch and azimuth directions are essentially similar, so, according to the linear motion transformation model, the LOS motion in the X and Y directions is assumed to be identical. A Cassegrain collimator with a reticle simulates a target at infinity. The precision rotating platform is then operated to test the LOS's tracking precision in the X (azimuth) direction, and the light source detector collects the laser offset data on the target board over frames 0~50 (about 3 s).
As shown in Figure 16, the LOS trends and error values of the two methods are compared using two presentations of the same data. Figure 16a shows the line chart: the red solid line represents the LOS error measured via the traditional method, and the blue dotted line represents the error measured via the optimal method. Figure 16b shows the stacked bar chart, in which the red and blue areas represent the traditional method and the optimal method, respectively. The red curve is the control group. In Figure 16a, the blue curve follows the same trend as the reference red curve, showing that the optimized method can reflect the dynamic change in the LOS's position. However, the blue curve lies closer to Y = 0 mrad than the red curve, indicating that the optimized method has a higher test accuracy.
As shown in Figure 16b, the LOS error measured via the traditional method (red area) reaches about −0.98 mrad (frames 40~50), while the error measured via the optimized method (blue area) reaches about −0.52 mrad (frames 0~10); the error is thus reduced by 46.9%. For long-distance precision shooting, the EODS needs to keep the small deviation within 1~3 pixels at a lens resolution of 1280 × 720 [6,7,8,9]. From the lens field angle of 3.6125° × 2.034° and Equations (20) and (21), the LOS's preset accuracy should lie within a circle with a radius of 0.15 mrad. As shown in Table 4 and Table 5, the distribution probability of the LOS's measured error within this circle is 72% for the optimized method and 30% for the traditional method. As shown in Table 6, the LOS's shooting accuracy of the optimized method is improved by 58.3% relative to the traditional method. This shows that the optimized method can reduce the adverse effect of the tracker time-delay error on the EODS and significantly improve the LOS tracking precision.
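The pixel-to-angle bookkeeping here can be checked directly. The worked sketch below converts the quoted horizontal field angle and resolution into an angular pixel size (which matches the 0.0493 mrad deviation value of a pixel unit in Table 2) and takes the upper end of the 1~3-pixel requirement as the accuracy bound; the intra-circle helper, applied to the frame data of Tables 4 and 5 over frames 1~50, reproduces the reported 30% and 72% probabilities.

```python
import math

# Horizontal field angle and resolution quoted in the text
fov_x_deg, width_px = 3.6125, 1280
mrad_per_px = math.radians(fov_x_deg) * 1e3 / width_px   # angular size of one pixel (mrad)
preset_radius = 3 * mrad_per_px                          # 3-pixel deviation bound (mrad)

print(f"{mrad_per_px:.4f} mrad/px; 3 px = {preset_radius:.3f} mrad "
      f"(preset circle radius ~0.15 mrad)")

def intra_circle_probability(errors_mrad, radius=0.15):
    """Fraction of LOS error samples falling inside a circle of the
    given radius."""
    inside = [e for e in errors_mrad if abs(e) <= radius]
    return len(inside) / len(errors_mrad)
```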

5. Conclusions

This paper presents a new method for the prediction and control of small deviations in the time-delay of the tracker in an intelligent EODS. A miss-distance advanced Kalman prediction filtering controller is designed, a miss-distance judgment of the LOS firing controller is established, and a miss-distance anti-occlusion detection and tracking controller is applied. The tests show that the distribution probability of the LOS's measured error in a circle with a radius of 0.15 mrad is 72%. Compared with the traditional method, the LOS's tracking precision of the optimized method is improved by 58.3%.
In conclusion, the new prediction and control method presented in this paper can effectively reduce the adverse impact of small deviations in the tracker time-delay, improving the tracking precision and shooting accuracy of an EODS.

Author Contributions

C.S. designed the parameters optimization method and controller and carried out experimental research on the effect of the modeling and algorithm. D.F. and Z.W. guided the research and proposed the ideas and revisions of the manuscript. W.Z. provided help with simulation. Y.C. and Z.Z. provided the actuator model. C.S. and Z.W. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

The present work is funded by the Provincial Department of Education “Postgraduate Scientific Research Innovation Project of Hunan Province (No. QL20210007)” and “Ministerial Level Postgraduate Funding (No. JY2021A007)”.

Data Availability Statement

The data presented in this study are available on request from the first author.

Acknowledgments

The authors would like to thank all the teachers and colleagues who encouraged us and provided the equipment for the experiment. The authors would also like to thank the anonymous reviewers for their meticulous comments and helpful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

EODS (electro-optical detection system)
LOS (line-of-sight)
LSS (low, slow and small)
SPIE (Society of Photo-Optical Instrumentation Engineers)
MD (miss-distance)

References

  1. Zhou, X.Y.; Ma, D.X.; Fan, D.P.; Zhang, Z.Y. Error analysis of mast mounted electro-optical stabilized platform based on multi-body kinematics theory. In Proceedings of the Sixth International Symposium on Precision Engineering Measurements and Instrumentation, Hangzhou, China, 28 December 2010; Volume 7544.
  2. Liu, Z.F.; Wei, W.; Liu, X.D.; Han, S.W. Target tracking of snake robot with double-sine serpentine gait based on adaptive sliding mode control. Actuators 2023, 12, 38.
  3. Shen, C.; Fan, S.X.; Jiang, X.L.; Tan, R.Y.; Fan, D.P. Dynamics modeling and theoretical study of the two-axis four-gimbal coarse-fine composite UAV electro-optical pod. Appl. Sci. 2020, 10, 1923.
  4. Zhang, B.; Nie, K.; Chen, X.L.; Mao, Y. Development of sliding mode controller based on internal model controller for higher precision electro-optical tracking system. Actuators 2022, 11, 16.
  5. Jeong, Y.H. Integrated vehicle controller for path tracking with rollover prevention of autonomous articulated electric vehicle based on model predictive control. Actuators 2023, 12, 41.
  6. Liu, H.; Fan, D.P.; Li, S.P.; Zhou, Q.K. Design and analysis of a novel electric firing mechanism for sniper rifles. Acta Armamentarii 2017, 37, 1111–1116.
  7. Musa, S.A.; Abdullah, R.S.A.R.; Sali, A.; Ismail, A.; Rashid, N.E.A. Low-slow-small (LSS) target detection based on micro Doppler analysis in forward scattering radar geometry. Sensors 2019, 19, 3332.
  8. Lin, D.; Wu, Y.M. Tracing and implementation of IMM Kalman filtering feed-forward compensation technology based on neural network. Optik 2020, 202, 163574.
  9. Liu, Y.; Sun, P.; Wergeles, N.; Shang, Y. A survey and performance evaluation of deep learning methods for small object detection. Expert Syst. Appl. 2021, 172, 114602.
  10. Tomar, A.; Kumar, S.; Pant, B.; Tiwari, U.K. Dynamic kernel CNN-LR model for people counting. Appl. Intell. 2022, 52, 55–70.
  11. Lin, X.K.; Wang, X.; Li, L. Intelligent detection of edge inconsistency for mechanical workpiece by machine vision with deep learning and variable geometry model. Appl. Intell. 2020, 50, 2105–2119.
  12. Wu, L.; Ma, Y.; Fan, F.; Wu, M.H.; Huang, J. A double-neighborhood gradient method for infrared small target detection. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1476–1480.
  13. Han, J.H.; Moradi, S.; Faramarzi, I.; Liu, C.Y.; Zhang, H.H.; Zhao, Q. A local contrast method for infrared small-target detection utilizing a tri-layer window. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1822–1826.
  14. Wen, Z.J.; Ding, Y.; Liu, P.K.; Ding, H. Direct integration method for time-delayed control of second-order dynamic systems. J. Dyn. Syst. Meas. Control 2017, 139, 61001–61010.
  15. Malviya, V.; Kala, R. Trajectory prediction and tracking using a multi-behaviour social particle filter. Appl. Intell. 2022, 52, 7158–7200.
  16. Zhong, H.; Miao, Z.Q.; Wang, Y.N.; Mao, J.X.; Li, L.; Zhang, H.; Chen, Y.J.; Fierro, R. A practical visual servo control for aerial manipulation using a spherical projection model. IEEE Trans. Ind. Electron. 2020, 67, 10564–10574.
  17. Wu, J.H.; Jin, Z.H.; Liu, A.D.; Yu, L. Non-linear model predictive control for visual servoing systems incorporating iterative linear quadratic Gaussian. IET Control Theory Appl. 2020, 14, 1989–1994.
  18. Zhang, S.K.; Chirarattananon, P. Direct visual-inertial ego-motion estimation via iterated extended Kalman filter. IEEE Robot. Autom. Lett. 2020, 5, 1476–1483.
  19. Zhao, H.; Wen, K.; Lei, T.J.; Xiao, Y.N.; Pan, Y. Automatic aluminum alloy surface grinding trajectory planning of industrial robot based on weld seam recognition and positioning. Actuators 2023, 12, 170.
  20. Hsu, M.H.; Nguyen, P.T.T.; Nguyen, D.D.; Kuo, C.H. Image servo tracking of a flexible manipulator prototype with connected continuum kinematic modules. Actuators 2022, 11, 360.
  21. Wu, D.; Lu, Q.J. Secure control of networked inverted pendulum visual servo systems based on active disturbance rejection control. Actuators 2022, 11, 355.
Figure 1. Electro-optical detection system (EODS): front and back view.
Figure 2. Framework of the optical-mechatronics system.
Figure 3. Traditional tracking controller model of an EODS.
Figure 4. Optimized tracking controller model of an EODS for prediction of small deviation in time-delay.
Figure 5. Miss-distance advanced Kalman prediction filtering controller.
Figure 6. LOS's tracking position comparison test.
Figure 7. LOS's tracking measured error comparison test.
Figure 8. Miss-distance judgment of LOS firing controller.
Figure 9. LOS firing judgment and correction comparison test.
Figure 10. Miss-distance anti-occlusion detection and tracking controller.
Figure 11. Anti-occlusion target detection and tracking test.
Figure 12. Offline detection results of cross target single image.
Figure 13. Tracking of rotated target and anti-occlusion test via the optimized method.
Figure 14. Tracking of rotated target and anti-occlusion test via the traditional method.
Figure 15. Experimental setup diagram.
Figure 16. Comparison test of LOS's tracking precision.
Table 1. Comparison of tracking controller filter prediction error.

No | Index | Parameter
1 | Inherent measured error of tracker δ1 | 0.49 mrad
2 | Traditional method measured error δ2 | 0.21 mrad
3 | Optimal method measured error δ3 | 0.053 mrad
4 | Traditional method error ratio λ1 = 1 − δ2/δ1 | 57.1%
5 | Optimal method error ratio λ2 = 1 − δ3/δ1 | 89.2%
6 | Optimal/traditional method error ratio λ3 = 1 − δ3/δ2 | 74.8%
Table 2. Comparison of miss-distance judgment of LOS firing controller.

No | Index | Parameter
1 | Deviation value of pixel unit δ0 | 0.0493 mrad
2 | Actual LOS value x1 at coincidence | 0.01 mrad
3 | Predicted LOS value x2 at coincidence | 0.03 mrad
4 | Preset accuracy ε | 0.15 mrad
5 | Test accuracy b = |x1 − x2| | 0.02 mrad
6 | Error ratio μ = 1 − b/ε | 86.7%
Table 3. Parameters of LOS's tracking precision test.

No | Index | Parameter
1 | Field angle of light source detector α × β | 44.9° × 33.9°
2 | Resolution of light source detector E × F | 2592 × 2048
3 | Frame frequency of light source detector K | 19 Hz
4 | Focal length of collimator objective lens F | 1000 mm
5 | Aperture of collimator d | 140 mm
Table 4. LOS's tracking test data via the traditional method.

Frame | LOS (mrad) | Frame | LOS (mrad) | Frame | LOS (mrad) | Frame | LOS (mrad)
0 | 0 | 13 | 0.233436 | 26 | 0.596136 | 39 | −0.366864
1 | −0.011064 | 14 | 0.049536 | 27 | 0.573036 | 40 | −0.313164
2 | 0.142236 | 15 | 0.335736 | 28 | 0.613236 | 41 | −0.324864
3 | −0.023664 | 16 | 0.331836 | 29 | 0.491436 | 42 | −0.219864
4 | −0.009864 | 17 | 0.246636 | 30 | 0.135636 | 43 | −0.433764
5 | −0.001764 | 18 | 0.263736 | 31 | −0.097764 | 44 | −0.345264
6 | −0.053064 | 19 | 0.411336 | 32 | −0.085464 | 45 | −0.528864
7 | 0.002436 | 20 | 0.600936 | 33 | −0.104364 | 46 | −0.708264
8 | 0.067236 | 21 | 0.495336 | 34 | −0.365364 | 47 | −0.421464
9 | −0.023364 | 22 | 0.473136 | 35 | −0.409164 | 48 | −0.722964
10 | 0.052236 | 23 | 0.510936 | 36 | −0.386064 | 49 | −0.607464
11 | 0.202236 | 24 | 0.612036 | 37 | −0.422964 | 50 | −0.978864
12 | 0.347436 | 25 | 0.531636 | 38 | −0.353964 | |
Table 5. LOS's tracking test data via the optimized method.

Frame | LOS (mrad) | Frame | LOS (mrad) | Frame | LOS (mrad) | Frame | LOS (mrad)
0 | 0 | 13 | 0.108384 | 26 | −0.032316 | 39 | −0.039516
1 | −0.107316 | 14 | −0.032616 | 27 | 0.008484 | 40 | 0.308184
2 | 0.066684 | 15 | −0.032916 | 28 | −0.049116 | 41 | 0.283884
3 | −0.113616 | 16 | −0.106416 | 29 | −0.081816 | 42 | 0.435984
4 | −0.310416 | 17 | 0.080184 | 30 | 0.003984 | 43 | 0.343584
5 | −0.368616 | 18 | −0.131016 | 31 | −0.056916 | 44 | 0.315384
6 | −0.479616 | 19 | 0.031584 | 32 | −0.073416 | 45 | 0.481884
7 | −0.318516 | 20 | 0.070584 | 33 | −0.011316 | 46 | 0.354984
8 | 0.072084 | 21 | −0.083016 | 34 | −0.046416 | 47 | 0.139284
9 | 0.059484 | 22 | −0.076716 | 35 | −0.045516 | 48 | −0.012816
10 | 0.053784 | 23 | −0.070116 | 36 | −0.045516 | 49 | 0.314184
11 | 0.130284 | 24 | −0.038616 | 37 | −0.000816 | 50 | 0.455184
12 | −0.335016 | 25 | 0.027684 | 38 | 0.124284 | |
Table 6. Comparison of LOS's tracking test accuracy.

No | Index | Parameter
1 | Tracking accuracy f | ≤0.15 mrad
2 | Actual physical significance of tracking accuracy f | Within 1~3 pixels
3 | Traditional method 0.15 mrad intra-circle probability Q1 | 30%
4 | Optimal method 0.15 mrad intra-circle probability Q2 | 72%
5 | Tracking accuracy ratio K = 1 − Q1/Q2 | 58.3%
Shen, C.; Wen, Z.; Zhu, W.; Fan, D.; Chen, Y.; Zhang, Z. Prediction and Control of Small Deviation in the Time-Delay of the Image Tracker in an Intelligent Electro-Optical Detection System. Actuators 2023, 12, 296. https://doi.org/10.3390/act12070296