Real-Time Image Stabilization Method Based on Optical Flow and Binary Point Feature Matching

Abstract: The strap-down missile-borne image guidance system is easily affected by unwanted jitters of the camera motion, which also degrade the subsequent recognition and tracking functions and thus severely affect the navigation accuracy of the image guidance system. A real-time image stabilization technology is therefore needed to improve the image quality of the image guidance system. To satisfy the real-time and accuracy requirements of image stabilization in the strap-down missile-borne image guidance system, an image stabilization method based on optical flow and image matching with binary feature descriptors is proposed. The global motion of consecutive frames is estimated by the pyramid Lucas-Kanade (LK) optical flow algorithm, and interval-frame image matching based on the fast retina keypoint (FREAK) algorithm is used to reduce the cumulative trajectory error. A Kalman filter is designed to smooth the trajectory, which helps fit the main motion of the guidance system. Simulations have been carried out, and the results show that the proposed algorithm improves accuracy and real-time performance simultaneously compared to state-of-the-art algorithms.


Introduction
The strap-down missile-borne image guidance system is a real-time guidance system based on computer vision which has been applied in practice, for example in the image-guided miniature munition Spike [1]. The Spike munition has the advantages of light weight, low cost, and no gimbaled system, which makes the strap-down seeker's mechanical configuration simpler to design [2]. Spike is one of the smallest and cheapest guided missiles, with outstanding maneuverability: it can reach a speed of 600 miles per hour within 1.5 s after launch, and the onboard guidance system can manipulate the control surfaces to complete movements of the missile on the pitch, roll, and yaw axes within a predefined time.
The general working process of the strap-down missile-borne image guidance system consists of three steps. First, after the missile is launched, the onboard camera turns on to search for, capture, and track the target during the flight. Then, when the target is locked, the onboard computer calculates the angular deviation between the center of the target and the visual central axis. Finally, the deviation is sent to the autopilot, which manipulates the control surfaces of the vehicle and guides the missile to the target automatically until it hits the target.
During this process, the quality of the image sequences is an important factor affecting the navigation accuracy of the image guidance system. Since the strap-down seekers do not require any moveable […] to eliminate false matching [26,27] is proposed to meet the requirements of image processing accuracy and real-time performance. Moreover, a Kalman filter is designed to smooth the motion trajectory so that the video fits the intentional motion as closely as possible.
The structure of this paper is as follows. In Section 2, the overall framework of the proposed algorithm is analyzed and illustrated. In Section 3, the global motion model of the missile-borne image guidance system is established, and the motion is estimated by the pyramid LK optical flow algorithm. In Section 4, the computational costs of different feature point matching algorithms are analyzed, the FREAK algorithm is selected to correct the trajectory generated by the optical flow algorithm, and the equations of the Kalman filter are established based on the global motion model; the specific framework of the proposed algorithm is given at the end of that section. Experiments verifying the feasibility and effectiveness of the proposed algorithm are reported in Section 5, and conclusions are presented in Section 6.

Proposed Framework
The block diagram of the overall proposed framework is shown in Figure 1.
Electronics 2020, 9, 198
As shown in Figure 1, the global motion trajectory between adjacent frames of the image sequence is obtained by the optical flow algorithm. Then, the image matching algorithm is conducted periodically to correct the cumulative error of the optical flow algorithm. The trajectory generated by optical flow is filtered by a Kalman filter in every frame, and the measurements corrected by the point feature matching algorithm are taken as input of the Kalman filter to correct the cumulative error of the optical flow algorithm as well.
The motion trajectory of the image sequence obtained by the strap-down image guidance system consists of intentional motion and shaky motion components: the intentional motion components indicate the main motion of the guidance system, while the shaky motion components are noises produced by jittery control and atmospheric turbulence during the flight of the missile. Therefore, a Kalman filter is introduced to filter out the shaky components of the image sequences. Because the shaky components of the strap-down camera are Gaussian-noise-like high-frequency jitters, the low-frequency intentional motion components can be well preserved by the Kalman filter while the high-frequency jitters are removed.

Global Motion Estimation
In this section, the global motion model of the missile-borne strap-down image guidance system is established based on the four-parameter similarity transformation model, and the global motion of the video is then estimated by the optical flow algorithm.
The main motion of the image sequences captured by the strap-down image guidance system mainly consists of rotation, translation, and scale transformation, while jitters of the projectile mainly contribute to high-frequency rotations and translations. Therefore, the four-parameter similarity transformation model is chosen to represent the inter-frame global motion of the missile-borne image guidance system. The similarity transformation matrix T is given by

$$T = \begin{bmatrix} S\cos\theta & -S\sin\theta & d_x \\ S\sin\theta & S\cos\theta & d_y \\ 0 & 0 & 1 \end{bmatrix}$$

where θ is the rotation angle, S is the scale factor, and d_x and d_y are the translations in the horizontal and vertical directions, respectively. Thus, the corresponding feature points in two adjacent frames are related by

$$\begin{bmatrix} x_i^n \\ y_i^n \\ 1 \end{bmatrix} = T \begin{bmatrix} x_i^{n-1} \\ y_i^{n-1} \\ 1 \end{bmatrix}$$

where (x_i^{n−1}, y_i^{n−1}) is the location of feature point i in the previous frame and (x_i^n, y_i^n) is the location of the corresponding feature point in the current frame.
Then, the global motion from the first frame to the current frame can be expressed as

$$X_n = T_n X_{n-1} = T_n T_{n-1} \cdots T_1 X_0$$

where X_n = (x_i^n, y_i^n)^T represents the location of a feature point in the current frame, X_{n−1} = (x_i^{n−1}, y_i^{n−1})^T is the location of a feature point in the previous frame, and T_n represents the transformation matrix from X_{n−1} to X_n. There are many methods to extract optical flow from image sequences, among which the Lucas-Kanade (LK) optical flow algorithm [28] is one of the most widely applied, as it has low computational cost with acceptable accuracy [29,30]. In this application, however, the LK optical flow algorithm can fail in the case of fast movement, and its error accumulates rapidly if the objects in the video move too fast. To track fast-moving targets and reduce the cumulative error of the LK optical flow algorithm, the pyramid LK optical flow algorithm [31] is introduced.
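To make the four-parameter model concrete, the sketch below (an illustration, not the authors' code; function and variable names are our own) builds the similarity matrix T from (θ, S, dx, dy) and shows that composing the per-frame transforms is equivalent to applying them frame by frame:

```python
import numpy as np

def similarity_matrix(theta, s, dx, dy):
    """Four-parameter similarity transform in homogeneous form."""
    c, si = s * np.cos(theta), s * np.sin(theta)
    return np.array([[c, -si, dx],
                     [si,  c,  dy],
                     [0.0, 0.0, 1.0]])

# Per-frame transforms T_1 ... T_n; the global motion is their product.
T1 = similarity_matrix(0.02, 1.01, 1.5, -0.8)
T2 = similarity_matrix(-0.01, 0.99, 0.4, 0.6)

x0 = np.array([100.0, 50.0, 1.0])   # feature point in frame 0 (homogeneous)
x2_composed = (T2 @ T1) @ x0        # X_n = T_n ... T_1 X_0
x2_stepwise = T2 @ (T1 @ x0)        # applying each frame's transform in turn
assert np.allclose(x2_composed, x2_stepwise)
```

With θ = 0, S = 1, and zero translation, the matrix reduces to the identity, as expected for a motionless frame pair.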
With the pyramid LK optical flow algorithm, keypoint matching between adjacent frames can be conducted. The RANSAC algorithm is used to eliminate mismatched keypoints and the matching pairs belonging to foreground objects. To verify the tracking accuracy of the optical flow method, experiments were carried out with a three-level pyramid, and the results are shown in Figures 2 and 3. Jitters were manually added to the original image sequence so that the trajectory tracking results of optical flow can be compared with the real trajectory of the image sequence, as shown in Figure 3. Three consecutive frames of the image sequence are presented to compare the images before and after stabilization. The image sequence in Figure 2, with 450 frames at 360 × 360 resolution, was filmed by a handheld Daheng Mercury USB3 VISION digital camera. Images in Figure 2a are the sequence with jitters, and images in Figure 2b show the stabilized image sequence. Figure 3 shows the real trajectory, the trajectory obtained by optical flow, and the optical flow trajectory filtered by the Kalman filter. As shown in Figure 3, the trajectory obtained by optical flow gradually deviates from the real trajectory, and the tracking error of optical flow grows when it is not compensated or corrected. To reduce the accumulative error of optical flow, an image matching algorithm with higher tracking accuracy is introduced to correct it.

Motion Trajectory Correction and Filtering Based on Binary Feature Descriptors Matching
To achieve accurate image matching with minimum computational cost, multiple image matching algorithms are compared, and the FREAK algorithm is applied to obtain the transformation matrix between two images separated by a constant number of frames. The Kalman filter is designed based on the transformation matrix of the missile-borne image guidance system to filter out the high-frequency jitters from the intentional motion trajectory.


Trajectory Correction Based on FREAK Feature Descriptor
Point feature matching algorithms are state-of-the-art image matching algorithms for video stabilization, though their computational costs are comparatively higher. Schaeffer [32], Jared [33], and Bekele [34] compared and evaluated different binary keypoint descriptors such as BRIEF, BRISK, SURF, and FREAK, and concluded that the FREAK algorithm offers comparably better accuracy with less computational cost. A number of videos were tested with different feature matching algorithms, and the time consumptions are compared in milliseconds. The tests were carried out on a computer with a 2.4 GHz Core i5 and 8 GB of RAM, using Visual Studio 2015 C++ and OpenCV. The results are shown in Table 1, where the feature points for the FREAK algorithm are selected by the SURF algorithm. It can be seen from Table 1 that the efficiency of the FREAK algorithm is better than that of the other binary feature point matching algorithms, which is consistent with the studies above. The computational cost of the SIFT algorithm is the highest, at more than 300 ms per frame even at a resolution of 360 × 360, which is not suitable for real-time implementation. The SURF algorithm has a slightly higher computational cost than the FREAK algorithm. In addition, the FREAK algorithm has better scale invariance [32], and scale variation is the dominant factor as the missile approaches the target. Therefore, the FREAK algorithm is selected to correct the tracking error of optical flow.
The accuracies of the three-level pyramid LK optical flow algorithm and of image matching based on the FREAK descriptors were tested and compared using an image sequence with manually added jitters as well. The image sequence in Figure 4, with 120 frames at 1280 × 720 resolution, was filmed by the Daheng Mercury USB3 VISION digital camera, and false matches were detected by the RANSAC algorithm. The stabilized image sequences are shown in Figure 4, and the tracking errors are given in Figure 5.
As shown in Figure 5, image matching with FREAK descriptors achieves much better image stabilization accuracy than optical flow, which indicates that it can be used to correct the trajectory generated by the optical flow algorithm. To determine the period at which point feature matching should correct the optical flow trajectory, simulations were carried out to test the optical flow and FREAK algorithms. Since the missile-borne image guidance system moves fast in the air, image sequences with moving objects were tested. The motion trajectories generated by optical flow and the FREAK algorithm are compared in Figure 6.
It can be seen from Figure 6 that the deviation between the trajectories obtained by optical flow and FREAK grows from about the 10th frame, while there is no obvious difference between them before that point. To ensure high trajectory tracking accuracy with as low a computational cost as possible, the trajectory tracking results of the optical flow method are corrected by FREAK once every 10 frames, and the trajectory of each frame is then filtered by a Kalman filter.
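The hybrid tracking loop above can be summarized in a few lines (a schematic with stand-in measurements: a drifting per-frame optical-flow step, and an accurate but slower matcher consulted every 10 frames; names and the toy data are our own):

```python
import numpy as np

CORRECTION_INTERVAL = 10  # FREAK correction period chosen from Figure 6

def track(flow_steps, matcher_positions):
    """Integrate per-frame optical-flow steps; re-anchor to the (more accurate
    but slower) feature-matching result every CORRECTION_INTERVAL frames."""
    trajectory, pos = [], 0.0
    for k, step in enumerate(flow_steps, start=1):
        pos += step                        # cumulative optical-flow trajectory
        if k % CORRECTION_INTERVAL == 0:
            pos = matcher_positions[k]     # periodic FREAK-style correction
        trajectory.append(pos)
    return np.array(trajectory)

# Toy 1-D example: the true motion is a unit ramp, while the optical-flow
# step carries a +0.05 bias that would otherwise accumulate without bound.
n = 50
true_pos = np.arange(1, n + 1, dtype=float)
flow_steps = np.ones(n) + 0.05
matcher = {k: true_pos[k - 1] for k in range(10, n + 1, 10)}
traj = track(flow_steps, matcher)          # drift stays bounded by the interval
```

Without the periodic re-anchoring, the bias would accumulate to 2.5 over 50 frames; with it, the drift is bounded by at most 9 frames of bias.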

Motion Trajectory Filtering Based on Kalman Filter
To filter out the high-frequency noises of the motion trajectory, a Kalman filter is designed to extract the intentional motion of the trajectory, since the motion of the missile-borne image guidance system is predictable, which suits the application of a Kalman filter. According to the global motion model in Section 3, the state model of the trajectory can be expressed as

$$\mathbf{x}(k) = \left[\,\theta(k),\ d_x(k),\ d_y(k),\ v_\theta(k),\ dv_x(k),\ dv_y(k)\,\right]^T$$

where v_θ(k) is the velocity of the rotation angle, and dv_x(k) and dv_y(k) are the pixel velocities in the x- and y-directions. The measurements of the Kalman filter obtained by optical flow and the image matching algorithm are represented by

$$\mathbf{z}(k) = \left[\,\theta(k),\ d_x(k),\ d_y(k)\,\right]^T$$

Assuming that the motion between adjacent frames is uniformly varying, according to Equations (4) and (5), the dynamic model A and the observation model H are given by

$$A = \begin{bmatrix} I_3 & I_3 \\ 0 & I_3 \end{bmatrix}, \qquad H = \begin{bmatrix} I_3 & 0 \end{bmatrix}$$

With the equations above, the Kalman filter proceeds as follows. The prediction phase from k−1 to k is given by

$$\hat{\mathbf{x}}(k\,|\,k-1) = A\,\hat{\mathbf{x}}(k-1\,|\,k-1)$$
$$P(k\,|\,k-1) = A\,P(k-1\,|\,k-1)\,A^T + Q$$

where Q is the covariance matrix of the process noise and R is the covariance matrix of the measurement noise of the Kalman filter. The update equations from k−1 to k are given as follows:

$$K(k) = P(k\,|\,k-1)\,H^T\left(H\,P(k\,|\,k-1)\,H^T + R\right)^{-1}$$
$$\hat{\mathbf{x}}(k\,|\,k) = \hat{\mathbf{x}}(k\,|\,k-1) + K(k)\left(\mathbf{z}(k) - H\,\hat{\mathbf{x}}(k\,|\,k-1)\right)$$
$$P(k\,|\,k) = \left(I - K(k)\,H\right)P(k\,|\,k-1)$$

The motion trajectories obtained by FREAK, which has higher accuracy, are fed into the Kalman filter as measurements to update the state every 10 frames, while the optical flow trajectory is taken as the input of the Kalman filter at other times.
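Under the constant-velocity assumption above, the filter can be sketched directly (a minimal numpy illustration with our own choices of Q, R, and the toy trajectory, not the flight code); the state stacks the three pose parameters and their velocities, and the measurement is the pose from optical flow or FREAK:

```python
import numpy as np

I3 = np.eye(3)
A = np.block([[I3, I3], [np.zeros((3, 3)), I3]])   # constant-velocity model, dt = 1 frame
H = np.hstack([I3, np.zeros((3, 3))])              # we observe (theta, dx, dy)
Q = np.eye(6) * 1e-4                               # process noise (assumed values)
R = np.eye(3) * 1e-2                               # measurement noise (assumed values)

def kalman_step(x, P, z):
    """One predict/update cycle for the 6-state trajectory filter."""
    x_pred = A @ x                                 # prediction phase
    P_pred = A @ P @ A.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)          # update with measurement z
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new

# Smooth a jittery linear trajectory in (theta, dx, dy).
rng = np.random.default_rng(2)
x, P = np.zeros(6), np.eye(6)
est = []
for k in range(1, 101):
    z = np.array([0.001 * k, 0.5 * k, -0.2 * k]) + rng.normal(0, 0.1, 3)
    x, P = kalman_step(x, P, z)
    est.append(x[:3].copy())
est = np.array(est)
```

Because the model matches a constant-velocity ramp exactly, the filter tracks the intentional motion without steady-state lag while averaging out the high-frequency jitter.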
From the analysis above and the overall framework in Figure 1, the specific framework of the proposed algorithm is shown in Figure 7. By using this method, fast image stabilization can be realized. Experiments were carried out to verify the proposed method.

Experimental Results
Experiments were conducted on a computer with an i5-8265U CPU and 8 GB of RAM. The algorithms were implemented in Visual Studio 2015 C++ with OpenCV. To examine the accuracy of the proposed algorithm, the inter-frame transformation fidelity (ITF), which is based on the peak signal-to-noise ratio (PSNR), is used to evaluate the image stabilization results. The PSNR between two consecutive frames measures how similar one image is to the other, and is defined as follows:

$$PSNR(I_{k-1}, I_k) = 10 \log_{10}\!\left(\frac{255^2}{MSE(I_{k-1}, I_k)}\right)$$

where I_{k−1} and I_k refer to the two adjacent frames and MSE(I_{k−1}, I_k) refers to the mean square error of the two frames, defined as

$$MSE(I_{k-1}, I_k) = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[I_{k-1}(i,j) - I_k(i,j)\right]^2$$

where M and N refer to the numbers of rows and columns, respectively.
The ITF is then defined as

$$ITF = \frac{1}{N_{frame} - 1}\sum_{k=2}^{N_{frame}} PSNR(I_{k-1}, I_k)$$

where N_frame is the total number of video frames. By comparing the ITF values of video sequences before and after stabilization, the performance of the video stabilization algorithm can be evaluated; the higher the ITF value, the better the image stabilization.
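These metrics are straightforward to compute (a sketch for 8-bit grayscale frames; function names are our own):

```python
import numpy as np

def psnr(f0, f1):
    """PSNR between two 8-bit frames, in dB."""
    mse = np.mean((f0.astype(float) - f1.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def itf(frames):
    """Inter-frame transformation fidelity: mean PSNR over consecutive pairs."""
    pairs = zip(frames[:-1], frames[1:])
    return sum(psnr(a, b) for a, b in pairs) / (len(frames) - 1)

# Two frames differing by a constant offset of 10 gray levels -> MSE = 100.
a = np.full((8, 8), 100, dtype=np.uint8)
b = np.full((8, 8), 110, dtype=np.uint8)
print(round(psnr(a, b), 2))  # 10*log10(255^2/100) ≈ 28.13
```

A stabilized sequence has more similar consecutive frames, so its mean inter-frame PSNR (the ITF) rises, which is exactly how Tables 2 and 4 score the methods.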
To evaluate the performance of the proposed method objectively and show its advantages intuitively, the proposed method is compared with two state-of-the-art methods, Deshaker [35] and the real-time video stabilization system presented by Hu et al. [27], in terms of stabilization accuracy and computational cost. Two publicly available video sequences, 0004TU and 2WL, which can be downloaded from the website [36], are used for the tests. The sequences 0004TU and 2WL have 450 and 500 frames, respectively, at a resolution of 1280 × 720 pixels. Figures 8 and 9 show the video stabilization results of the different methods; the frames in Figures 8 and 9 were cropped to remove the black peripheries introduced by stabilization. Table 2 shows the ITF results of the stabilized image sequences, and Table 3 shows the computational cost of video stabilization for the various methods.
As shown in Table 2, Deshaker has the best video stabilization performance on both the 0004TU and 2WL image sequences, improving the ITF values by 2.54 dB and 3.56 dB, respectively. The ITF values of the image sequences stabilized by Hu et al.'s method improved by 0.89 dB on 0004TU and 1.68 dB on 2WL compared to the original image sequences, while the proposed method improved the ITF values by 2.43 dB on 0004TU and 2.35 dB on 2WL. As a result, the proposed method has better accuracy than Hu et al.'s method in both cases, but slightly worse than Deshaker. It can also be noticed from Table 2 that the proposed method performs comparably to Deshaker on 0004TU, with about 0.1 dB less improvement, while its ITF value on 2WL is 1.2 dB less improved than Deshaker's.
Because the experimental environments of the proposed method and of Hu et al.'s method differ, it is not fair to compare their computational costs directly. Therefore, Deshaker is taken as the reference for examining the computational cost of the different methods, since it behaves consistently across different computers. Table 3 shows that the computational costs of Deshaker are 55.5 ms/frame and 54.25 ms/frame in our hardware environment, which is 1.2 times higher than our proposed method. Meanwhile, the experimental results of Hu et al. show that the computational costs of Deshaker are 34.51 ms/frame and 34.20 ms/frame in their hardware environment, which is 1.1 times higher than Hu et al.'s method. Thus, the proposed method has the best computational cost performance among these methods.

To ensure that the algorithm can be applied to various harsh application environments, and to validate its robustness, the algorithm is tested on another 15 image sequences as well, which are publicly available on the website [37].
The first frames of these stabilized image sequences are shown in Figure 10, and their features and tracking results are given in Table 4.
Analyzing the ITF values of the stabilized videos in Table 4, the ITF values have been improved by 2-5 dB compared to the original videos in the great majority of cases. However, the ITF value of Video 4 improved by only about 0.52 dB. This is because the original image sequence already has a comparably high ITF value of 26.47 dB. In addition, the proposed algorithm is designed to distinguish and maintain the low-frequency motion in the image sequence and eliminate the high-frequency jitters caused by the shaky movements of the missile. So, the low-frequency movements in Video 4 are preserved after processing, while only a small number of high-frequency jitters are removed, and the ITF value is not improved significantly.
Overall, the proposed method is verified with multiple video cases and compared with existing methods. The experimental results in Tables 2 and 3 show that the accuracy of the proposed method is close to that of Deshaker and that it performs best in computational cost. Considering the background of this technology, the Deshaker method is the most difficult to apply in practice owing to its poor real-time performance, while the method of Hu et al. has lower accuracy and higher time cost than the proposed method. In addition, the additional simulations reported in Figure 10 and Table 4 show that the proposed method performs well in terms of stability and robustness under various conditions. As a result, the proposed method offers better real-time performance than previous methods with favorable accuracy, and it has been shown to be steady and robust, which makes it applicable to the strap-down image guidance system.

Conclusions
Aiming at the imaging dithering problem and the real-time requirements of the strap-down missile-borne image guidance system, a real-time electronic image stabilization algorithm combining optical flow and binary feature matching is proposed. The overall framework of the algorithm is presented first. Then, the global motion of the image sequences is modeled and estimated by the pyramid LK optical flow algorithm. The efficiency of different image matching algorithms is analyzed, and the FREAK binary feature matching algorithm is selected to correct the cumulative error of optical flow at an interval of 10 frames. A Kalman filter is introduced to filter out the high-frequency jitters of the motion trajectory. The experimental results show that image sequences can be stabilized by the proposed algorithm with comparably low computational cost and good accuracy.

The results of the comparative simulations show that the proposed method is competitive with existing state-of-the-art image stabilization methods in terms of both accuracy and real-time performance, making it suitable for meeting the real-time and high-accuracy requirements of a strap-down image guidance system. The proposed algorithm has not yet been implemented on an image guidance system because of immature test conditions; its on-board implementation will be investigated in the future.

Funding: This research received no external funding. The APC was funded by Qiang Shen.