Article

Infrared Small Target Detection Method with Trajectory Correction Fuze Based on Infrared Image Sensor

Science and Technology on Electromechanical Dynamic Control Laboratory, Beijing Institute of Technology, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(13), 4522; https://doi.org/10.3390/s21134522
Submission received: 19 April 2021 / Revised: 22 May 2021 / Accepted: 28 June 2021 / Published: 1 July 2021
(This article belongs to the Special Issue Camera as a Smart-Sensor (CaaSS))

Abstract

Due to the complexity of the background and the diversity of small targets, robust detection of infrared small targets for the trajectory correction fuze has become a challenge. To solve this problem, unlike traditional methods, a novel detection method based on density-distance space is proposed for the trajectory correction fuze. First, the parameters of the infrared image sensor on the fuze are calculated to set boundary limitations for the target detection method. Second, the density-distance space method is proposed to detect candidate targets. Finally, the adaptive pixel growth (APG) algorithm is used to suppress clutter and thereby detect the real targets. Three experiments, including equivalent detection, simulation and hardware-in-loop, were implemented to verify the effectiveness of this method. The results illustrated that the infrared image sensor on the fuze has a stable field of view under rotation of the projectile and can clearly observe the infrared small target. The proposed method has superior capabilities in noise resistance, detection of targets of different sizes, multi-target detection and suppression of various clutters. Compared with six recent algorithms, our algorithm shows excellent detection performance with acceptable time consumption.

1. Introduction

Detecting infrared small targets on the ground at long distance is a crucial mission in many defense and anti-terrorist applications. Wheeled and tracked vehicles represent typical classes of small militant ground targets. Recently, detection sensors based on the trajectory correction fuze have received much attention [1]. In previous research, traditional detection methods could not effectively detect targets in complex environments [2]. Therefore, this paper mainly aims to improve the detection accuracy of infrared small targets on the trajectory correction fuze.
Among various detection methods, thermal infrared imaging is a passive modality that can effectively detect small targets at long distance. Compared with the visible image sensor, radar [3] and laser [4], the infrared image sensor has strong anti-interference capability, good concealment and all-day operation. At present, the forward-looking infrared sensor (FLIR) adopts a high-performance microbolometer with a noise equivalent temperature difference (NETD) of 50 mK, a pixel pitch of 12 μm and a wavelength range of 3–14 μm. In previous research, the authors of [5] designed a correction fuze and reserved space for an infrared image sensor. However, the important parameters of the sensor were not calculated. We aim to integrate the infrared image sensor and the fuze in a way that benefits target detection.
The trajectory correction fuze could reduce the circular error probable (CEP) by adding the correction function to the mortar without any modification of the projectile. Generally, the onboard computer calculates the correction value by the detected target’s azimuth. Then, the original terminal trajectory is changed by a pair of rudders which could produce the aerodynamics. Existing research on the correction fuze combined with the infrared image sensor is still in the initial stage. For example, the authors of [6,7] researched the influences of the velocity, rotation rate and pitch on the dynamic response during terminal correction. However, the target detection method was ignored when the aerodynamic feature and correction strategy were analyzed. Therefore, accurately detecting the infrared small target is essential for realizing the fuze’s function [8].
The infrared small target has three characteristics: small physical dimensions, a low thermal signature and a low visible signature [9]. Generally, the small target size does not exceed 0.15% of the image resolution. Likewise, without dissipating any thermal energy, the vehicle body itself only reflects some incident energy, which is almost equal to the energy reflected by the surrounding environment. Most of the energy difference is generated by the motor, and its infrared band is basically in the long-wave range (8–14 μm). Such low thermal energy produces a minuscule signal in the infrared image sensor. As the detection distance increases, the signal-to-noise ratio (SNR) of the target degrades drastically. These challenges make infrared small target detection an unusually difficult task.
Existing traditional infrared small target detection approaches mainly use either spatial-only information in individual frames or spatial-temporal information in image sequences, where the spatial-only methods can be divided into two categories: filter-based methods and human vision system (HVS) methods. The former design filters to model noise according to the statistics of local pixels, such as the max-median [10], max-mean [11] and morphological [12] transformations. Although these typical filters are computationally lightweight, they are highly susceptible to noise interference. Correspondingly, improved median [13,14], mean [15] and Gaussian [16] filter methods were proposed to suppress clutters effectively. In addition, improved morphological methods such as Top-Hat [17] and region growing [18] were studied. However, the structuring elements selected by these methods must closely match the various clutter shapes, otherwise performance degrades. To summarize, these methods partly improve robustness, but still have a high false detection rate under complex backgrounds and variable target sizes.
In contrast, the HVS [19] methods are applied directly to the target itself. The local contrast method (LCM) [20,21], simulating the attention mechanism of the HVS, is often used to detect targets. Later, a series of improved algorithms based on it were proposed, such as improved LCM (ILCM) [22], novel LCM (NLCM) [23] and relative LCM (RLCM) [24]. By redefining the contrast parameters of the central region, pixel-sized noises with high brightness (PNHB) are suppressed and the target is enhanced. Moreover, other methods optimize the rectangular structure of LCM to detect targets with different sizes, such as double-neighborhood (DLCM) [25], tri-layer (TLCM) [26], multiscale patch-based (MPCM) [27] and high-boost-based multiscale (HBMLCM) [28]. Although local contrast is further enhanced, sparse clutter elements are also highlighted [29]. Besides, the above methods require complex, iterative pixel-by-pixel calculations, which leads to high computational costs and cannot meet the real-time requirements of target detection.
Recently, convolutional neural networks (CNN) [30,31] have been widely used in target detection and recognition. Derived methods consist of one-stage methods, such as single-shot multi-box detector (SSD) [32] and you only look once (YOLO v1-v5) [33,34], and two-stage methods such as Fast-RCNN [35] and Faster RCNN [36], providing a new backbone, activation function, loss function, etc. These methods all require a large number of datasets for training to obtain model parameters. However, the fuzes often lack prior knowledge and cannot obtain huge datasets, and the size of the target is too small to extract features.
As described, our main goal in this paper is to effectively detect small infrared targets in the application of the trajectory correction fuze, and experimentally demonstrate that our method could deal with the challenges associated with long distance, small variable size and minimal thermal signature in a real infrared scene. The comprehensive performance is better than the existing methods. Our main contributions include:
(1)
On the basis of [5], the main parameters of the infrared image sensor were selected, and then the correctness was verified through outdoor experiments.
(2)
Inspired by filtering methods, we proposed a novel two-dimensional density-distance space to obtain the density peak pixels by full use of image information.
(3)
A new pixel growth method was presented to effectively suppress clutters. Then, the real targets were selected from the density peak pixels.
(4)
Three experiments proved the robustness and effectiveness of the algorithm and the applicability of the trajectory correction fuze. Especially, our method maintained a good detection performance without increasing the processing time compared with the previous existing methods.

2. Application Background

The infrared image sensor is located on the front of the fuze. Its detection ability is the premise for completing the correction function. Therefore, the parameters of the infrared image sensor should be calculated based on the trajectory feature of the mortar.

2.1. Design of the Fuze

In this paper, the fuze is used in the tail-stabilized mortar with three characteristics. First, the projectile rotates at a speed of no more than 2 r/s. Second, the pitch and yaw angle of the projectile changes at a high frequency because of the curvature of the trajectory. The above problems lead to the frequent changes of the image background. Finally, the conventional terminal trajectory distance is 1.5 km. With the mortar flying at an initial speed of 272 m/s, the total time is within 8 s. The rest of the time for target detection is extremely short, excluding the correction and solution time. Therefore, the number and function of sensors should be balanced due to the limitation of response time and the space size of the fuze.
As shown in Figure 1, the organization is composed of three parts. The aft part, shown in purple, connects the fuze and the projectile with threads. The mid part, indicated by blue, is used to externally fix the rudder blade and internally contain the sensors for control. When the mortar rotates, the fuze can rotate in the opposite direction to the projectile to ensure a stable field of view for the image sensor. The front part, equipped with the infrared image sensor, is used for navigation. In addition, the detail of the infrared image sensor is shown at the top of Figure 1, where the fairing is at the top. The green base is used to fix the sensor and lens. Therefore, the infrared image sensor is not affected by the rotation of the projectile due to the reverse function of this structure. Correspondingly, this design makes the field of view stable and ensures the detection capability.

2.2. Connection between Infrared Image Sensor and Fuze

When the mortar flight is in the terminal trajectory, the parameters of the trajectory and the infrared image sensor will affect the detection effect directly. Therefore, the detection algorithm should be based on the workflow of the correction fuze and the infrared image sensor. Table 1 lists the trajectory parameters of conventional mortar and parameters of the infrared image sensor selected in this paper.
According to the trajectory equation of mortars, the terminal flight time is generally within 8 s when the detection distance is 1500 m and the launch angle is 53°. Obviously, this time is sufficient for algorithms with high FPS. However, most of the time is spent on the onboard computer calculation and trajectory correction, leaving a short amount of time for target detection. Simultaneously, at the beginning of the terminal trajectory, the pitch angle of the projectile changes rapidly within the integration time. This causes the change of the background on the field of view. Therefore, the classic frame difference method using time domain should not be applied. The appropriate method is to detect the target in a single image.
Notably, in this paper, the parameters of the infrared image sensor were selected based on the target’s size, under the premise that the temperature difference between the target and background can be detected. Due to the fact that the detection range of most current uncooled staring VOx microbolometers is far greater than 1500 m, the infrared small target is clearly visible on the field of view on the terminal trajectory. However, unlike missiles, overload caused by mortars’ launching phase will damage the lens assembly. Therefore, a stable coaxial spherical lens assembly is used. The transformation relation between the infrared image sensor and the target is calculated as follows:
$$\frac{R}{f} = \frac{H}{h} \tag{1}$$
$$Fov_{col} = 2\arctan\left(\frac{nd}{2f}\right) \tag{2}$$
$$Fov_{row} = 2\arctan\left(\frac{md}{2f}\right) \tag{3}$$
where f is the focal length of the infrared imager, R and H are the detection distance and the size of the target, and h is the size of the detectable target's image on the pixel array. The target size is set to 1 line pair according to the detection level of the Johnson criteria. The long-wave sensor with the spectral band of 7.5–13.5 μm is selected, because the thermal wavelength emitted by the target is about 8–14 μm. $Fov_{col}$ and $Fov_{row}$ are the vertical and horizontal fields of view, respectively. n, m and d represent the array format and the single pixel size. These parameters are used in the equivalent detection experiment of the infrared image sensor.
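As an illustration, the pinhole-model relations above can be evaluated numerically. This is a minimal Python sketch; the specific values used below (50 mm focal length, 640 × 512 array, 12 μm pitch) are plausible examples only, not necessarily the Table 1 values.

```python
import math

def sensor_geometry(f_mm, R_m, H_m, n_cols, m_rows, d_um):
    """Pinhole-model relations: R/f = H/h gives the target's image
    size h on the focal plane, and Fov = 2*arctan(N*d / (2f)) gives
    the field of view per axis (returned in degrees)."""
    f = f_mm * 1e-3            # focal length [m]
    d = d_um * 1e-6            # single pixel size [m]
    h = f * H_m / R_m          # image size of the target [m]
    pixels = h / d             # target extent on the array [pixels]
    fov_col = 2 * math.degrees(math.atan(n_cols * d / (2 * f)))
    fov_row = 2 * math.degrees(math.atan(m_rows * d / (2 * f)))
    return pixels, fov_col, fov_row

# Hypothetical case: 2.3 m target at 1500 m, 50 mm lens, 640x512 @ 12 um.
px, fov_c, fov_r = sensor_geometry(50, 1500, 2.3, 640, 512, 12)
```

Under these assumed parameters the target spans only a handful of pixels, consistent with the small-target premise of Section 1.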
Generally speaking, the fuze needs to detect multi-targets with different sizes. Moreover, due to the long detection distance, the target usually contains only a few pixels and does not exceed 0.15% of the size of the image. Therefore, an infrared small target detection algorithm based on a single image is proposed. The algorithm should not be affected by target size and background.

3. Density-Distance Space

The discussion in Section 2 shows that the algorithm should have three features: adaptive infrared small target detection capability, adaptability to changing target scales and low running time. In an infrared scene, most small targets exist in relatively open areas and have higher grayscale than their surroundings. Correspondingly, infrared small targets should have two features: the maximum grayscale in the local region and a long distance from any pixel with a higher gray value. Figure 2 shows the flowchart of the density-distance space method for infrared small target detection. First, the two main parameters, density ρ and distance σ, are generated from the original infrared image. Then, the ρ and σ values of all pixels are converted into a special two-dimensional ρ-σ space. The green candidate targets, including real multiple targets and false targets, are selected from this space through an adaptive threshold.

3.1. Definition of Density and Distance

Infrared small targets usually have a high gray value, a small number of pixels, a certain contrast and other features. This can be summarized as a higher gray value in a limited area compared with neighboring areas. According to this definition, infrared small targets can be regarded as having a higher "density". In detail, the gray value is regarded as the "mass", and the number of pixels as the "volume". Correspondingly, a simple but effective definition of the density of each pixel is shown in Equation (4):
$$\rho(i,j) = \frac{1}{n}\sum_{(i,j)\in\varepsilon} G(i,j) \tag{4}$$
where ρ(i,j) is the density of the pixel at the (i,j) position. G(i,j) is the gray value, and n represents the number of pixels in the area of ε. The density value reflects the average gray features of the target and concentrates features on one pixel. Furthermore, by detecting a single pixel, the size and shape of the target will not affect the detection result.
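A direct way to evaluate Equation (4) over a whole image is a box average over the neighborhood ε. The NumPy sketch below is illustrative only (a 3 × 3 ε and edge padding are assumptions, not details given in the text).

```python
import numpy as np

def density_map(img, k=3):
    """Eq. (4): density rho(i,j) = mean gray value over a k x k
    neighborhood epsilon, computed with edge padding so that border
    pixels are averaged over replicated values."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    # Sum the k*k shifted copies of the image, then normalize.
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)
```

A single bright pixel on a dark background yields a local density plateau whose peak value is the pixel's gray value averaged over the ε window.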
The parameter σ is defined as the distance feature of a pixel, meaning the minimum distance between pixel i and any other pixel j with a higher density. Equations (5) and (6) show the calculation of the distance, where d_ij represents the Euclidean distance between pixel i and pixel j, and x and y are the coordinate values of the pixels. Figure 3 shows the calculation process of density and distance. The light blue area is the original image, and the deep blue is ε, which has a 3 × 3 size. The order of density is ρ2 < ρ0 < ρ1, ρ3. For pixel ρ0, σ is the distance between ρ0 and ρ1, shown in yellow. Although ρ2 is at a shorter distance, its density is less than that of ρ0. Correspondingly, the density of ρ3 is higher than that of ρ0, but the distance is longer than that to ρ1. Therefore, the two red pixels cannot be selected as the correct distance value. It should be noted that when pixel i has the maximum density, there is no σ. We temporarily set its σ to the maximum value among all d_ij, because it may be the true target and cannot be ignored. It will then be compared with other pixels through the subsequent adaptive threshold.
$$\sigma_i = \min_{j:\,\rho_j > \rho_i} (d_{ij}) \tag{5}$$
$$d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \tag{6}$$
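Following Equations (5) and (6), σ can be computed by brute force for small patches. The quadratic-time reference sketch below is for illustration only and is not claimed to be the paper's implementation.

```python
import numpy as np

def distance_map(rho):
    """Eqs. (5)-(6): for each pixel, the minimum Euclidean distance
    to any pixel of strictly higher density; the global density
    maximum is assigned the largest d_ij, as described in the text."""
    h, w = rho.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], 1).astype(np.float64)
    flat = rho.ravel()
    # Pairwise Euclidean distances, O(n^2) -- fine for small patches.
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sigma = np.empty(flat.size)
    for i in range(flat.size):
        higher = flat > flat[i]
        sigma[i] = d[i, higher].min() if higher.any() else d[i].max()
    return sigma.reshape(h, w)
```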

3.2. Density-Distance Space

If a pixel has a maximum ρ-density in the local region and a long σ-distance at the same time, we call it “density peak”. Obviously, the small infrared target meets these features. Generally, pixels in a continuous background have low grayscale and are close to pixels with higher grayscale. Correspondingly, the number of density peaks is small. Therefore, a certain number of density peaks could be directly selected as candidate targets. This brings two benefits. One is that the detection accuracy will not be affected by target size and shape. The other is that the detection difficulty is reduced. Therefore, evaluating the density peak by two-dimensional ρ-σ space and adaptive threshold α, we select a certain number of candidate targets. The calculation of α is shown in Equation (7):
$$\alpha = \rho \times \sigma \tag{7}$$
where α is mainly used to evaluate whether the pixel is a density peak. By multiplying ρ and σ, the difference of α between the real target and the background is enlarged. Then, all the pixels are sorted by α. The number of real targets is n. To ensure that the real targets will not be lost within the minimum calculation amount, we take at least 4n as candidate targets from density peaks with a larger α. Notably, it is almost impossible that a real target has two pixels with the same density, and the σ-distance of the lower-density pixel is extremely small. Therefore, the target region only has one density peak.
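The candidate selection described above (rank all pixels by α and keep at least 4n density peaks) can be sketched as follows; the function name and interface are illustrative assumptions.

```python
import numpy as np

def select_candidates(rho, sigma, n_targets=5, factor=4):
    """Rank all pixels by alpha = rho * sigma (Eq. 7) and keep the
    top factor*n_targets density peaks as candidate targets."""
    alpha = (rho * sigma).ravel()
    top = np.argsort(alpha)[::-1][:factor * n_targets]
    return [tuple(int(v) for v in np.unravel_index(i, rho.shape))
            for i in top]
```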
Figure 4 shows the establishment and preliminary detection of the ρ-σ space for two targets of different sizes at the same position. The sizes of the two targets in Figure 4a,b are 2 × 2 and 3 × 5, respectively. We extracted a region of 25 × 25 pixels containing the target. As shown, even though the target sizes differ in the original image, the targets at the same position occupy approximately similar positions in the ρ-σ space. This test shows that the ρ-σ space has the ability to detect multi-scale targets.

3.3. Adaptive Pixel Growth

Notably, it is easy to confuse the detection of real targets, where some density peaks are located at the edges of some background regions. Therefore, an adaptive pixel growth (APG) algorithm is used to eliminate false targets so as to retain the real targets.
The APG method is similar to that of region growth. Each candidate target (pixel) is regarded as a “seed”, and represented by Tk(x,y), where k is the number of candidate targets and (x,y) is the coordinate of the pixel. The seed grows in the direction of its eight neighboring connected pixels. The growth condition depends on the grayscale difference between the seed and its neighboring pixels. When the difference is less than the threshold, the seed grows one step, otherwise it stops growing. We use Th to denote the adaptive threshold to limit the growth of the seed, as shown in Equations (8) and (9):
$$T_h = \eta \times (G_{T_k} - \bar{G}_{P_k}) \tag{8}$$
$$P_k(i,j) = \left\{(i,j) \mid x-m \le i \le x+m,\; y-n \le j \le y+n,\; m^2+n^2 \neq 0\right\} \tag{9}$$
where $P_k$ denotes the pixels around the seed $T_k$, $G_{T_k}$ and $G_{P_k}$ represent the grayscale of the seed $T_k$ and pixel $P_k$, and $\bar{G}_{P_k}$ is the average gray value of the pixels $P_k(i,j)$. The range of $P_k$ is the set of pixels in a certain area around, but not containing, the seed, as shown in Equation (9). m and n are positive integers, and η is the threshold coefficient in the range 0–1, obtained through a large number of experiments. After one step of pixel growth, the seed $T_k$ extends to one or more pixels in the set $P_k$. Equally, the pixels in the set $P_k$ which meet the threshold $T_h$ become new seeds. All seeds follow this strategy in each step of the algorithm. When the growth of the seeds can no longer meet the threshold $T_h$, the growth stops. The total number of pixels after the final growth of each seed is regarded as the area $S_k$.
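One possible reading of the APG procedure is sketched below: the seed's threshold $T_h$ comes from Equation (8), growth proceeds over 8-connected neighbors, and the grown pixel count is the area $S_k$. The window size m = n = 1 and the exact growth test are assumptions made for illustration.

```python
import numpy as np

def apg_area(img, seed, eta=0.4, m=1, n=1):
    """Sketch of adaptive pixel growth (Eqs. 8-9): grow from the seed
    over 8-connected neighbors while the gray difference to the current
    seed stays below Th = eta * (G_seed - mean gray of the surrounding
    window P_k, seed excluded); return the grown area S_k."""
    img = img.astype(np.float64)
    h, w = img.shape
    y0, x0 = seed
    # Surrounding window P_k (Eq. 9), excluding the seed itself.
    ys = slice(max(0, y0 - m), min(h, y0 + m + 1))
    xs = slice(max(0, x0 - n), min(w, x0 + n + 1))
    window = img[ys, xs].ravel()
    mean_p = (window.sum() - img[y0, x0]) / max(window.size - 1, 1)
    th = eta * (img[y0, x0] - mean_p)
    grown = {(y0, x0)}
    frontier = [(y0, x0)]
    while frontier:
        cy, cx = frontier.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = cy + dy, cx + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                        and (ny, nx) not in grown \
                        and abs(img[cy, cx] - img[ny, nx]) < th:
                    grown.add((ny, nx))
                    frontier.append((ny, nx))
    return len(grown)
```

In this sketch a compact bright target yields a small $S_k$, while a seed on a long bright edge grows along the whole structure, matching the comparison illustrated in Figure 6.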
Figure 5 shows the process of APG, where seed, Tk, is shown in yellow, and the surrounding pixels, Pk, are shown in blue. The red dotted frame indicates the range after each step of APG. At the beginning, there are three pixels around the seed, T1(x,y), within the growth threshold, Th, which are P1(x + 1,y + 1), P1(x,y − 1) and P1(x − 1,y), respectively. After the first step, three new seeds marked with number 2 are added. Similarly, after the second step, the new seeds are marked with number 3. For this seed, the area S1 contains 10 pixels after two steps of APG.
The growth feature of seeds in the target region, edge region and clutter region is shown in Figure 6, and 20 candidate targets including 2 real targets are detected in Figure 6a. The seed 1 is in the target region, and the number of growth steps is small, as shown in Figure 6b. Correspondingly, the area Sk of the seed is small. Figure 6c,d show the growth results of seeds 2 and 3. For a seed in the edge region, such as buildings, pixel set Tk will grow along the high grayscale edge under the premise of not exceeding Th. Therefore, the entire edge region will be covered by the area of seed growth. Especially, when the seed is located at the junction of two connecting regions, the growth will contain both regions. Obviously, the growth areas of seeds in the edge region have high density. Figure 6e,f show that the growth areas of seeds 4 and 5 are located in clutter regions, such as clouds, sea–sky line and other float backgrounds. Clearly, the growth areas of clutters are sparse. Through the comparison of the above three conditions, the target region has the smallest area Sk. Therefore, the APG method could effectively remove the clutters and background, and detect real targets.

4. Experimental and Analysis

In this section, the parameters of the infrared image sensor are first verified to ensure that the correction fuze can clearly capture infrared targets. Then, our proposed algorithm is tested to verify its effectiveness, robustness and real-time performance by simulation and the hardware-in-the-loop experiment. There are two key parameters that should be set in the following experiments. First, n is set to 5 due to the fact that the number of targets we detected generally did not exceed 5. Then, for a small target with several pixels, the growth directions of its neighboring pixels surrounding the seed are usually 3–5, which occupy 40–60% of the 8 directions. Therefore, η is set to 0.4 to limit the growth area of real targets. We do not need any further parameter changes in all of the following experiments.

4.1. Equivalent Experiment about Detection Capability of the Infrared Image Sensor

The prerequisite for the algorithm verification was that the infrared imager could clearly observe infrared small targets. For this purpose, both equivalent size and temperature difference should be considered. We used FLIR's Tau2 infrared imager with the same parameters as in Table 1 for the experiments. A vehicle model connected to an electrically conductive board was used to stand in for the infrared target. A battery-powered temperature sensor was connected to the electrically conductive board through its own thermometer. When the surrounding environment temperature T0 is known, the temperature sensor can control the temperature T1 of the electrically conductive board to maintain the temperature difference (T1 − T0). The infrared image sensor was carried by a UAV to achieve long-distance detection. The experimental conditions are shown in Table 2.
The characteristic size of the tank was 2.3 m according to the data, and the size of the infrared target model was 0.1 m. Correspondingly, when the real detection distance is 1500 m, the equivalent distance should not be less than 65 m. Similarly, the temperature difference should also be made equivalent, but it is not linear with the detection distance; it must be multiplied by an atmospheric transmittance coefficient, τa. In Reference [37], the range of τa is 0.3–0.7 under the unfavorable condition of cloud cover. Therefore, τa is set to 0.5, the middle of this range. For the ground environment, when the temperature difference between the vehicle target and the surrounding environment is 15 K on average, the equivalent temperature difference is about 7 K.
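The equivalence arithmetic above can be checked directly. The sketch assumes simple linear scaling for the size (equal angular extent) and a single multiplicative τa for the temperature difference.

```python
# Scale-model equivalence used in the outdoor experiment.
real_size, model_size = 2.3, 0.1      # tank vs. model size [m]
real_distance = 1500.0                 # terminal detection range [m]
# Equal angular size => distance scales with the size ratio.
equiv_distance = real_distance * model_size / real_size
# Temperature difference is attenuated by atmospheric transmittance.
tau_a = 0.5                            # mid-value of the 0.3-0.7 range [37]
delta_T_real = 15.0                    # target-background difference [K]
delta_T_equiv = tau_a * delta_T_real   # ~7 K, as used in the text
```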
The experimental equipment, arrangement and detection results are expressed in Figure 7. The highlighted region in the image is the small infrared target. Notably, the target is vignetted due to the diffusion of its own thermal radiation, leading to the changes of the target’s shape and size. However, the density-space method will not be affected. We only detected one pixel with the highest density in the target region. The experiment proved that the infrared image sensor could clearly observe the target at long distance, and simultaneously provided a guarantee for subsequent simulation experiments.

4.2. Simulation of Algorithms

Simulation experiments were mainly used to prove the effectiveness of the proposed algorithm. Notably, the simulation was run on a laptop with a 2.8 GHz Intel i7-7700HQ CPU and 8 GB RAM. In order to evaluate our algorithm objectively, only one key parameter needs to be preset in this simulation: the number of candidate targets (seeds). According to the analysis in Section 3, we set the number of candidate targets to 20, since the infrared image contains no more than 5 real targets.
The detection result is easily affected by noise; therefore, the anti-noise ability should be tested. In order to show the effect intuitively, an image with one infrared target was selected, and different levels of Gaussian noise were added to this image. SNR is defined as in Equation (10), where $\sigma_t^2$ is the grayscale variance of the target region and $\sigma_n^2$ is the variance of the Gaussian noise. Figure 8 shows the noise at different SNRs and the target detection results. Obviously, the target becomes harder to detect as the SNR decreases. When the SNR was between 44.1 and 45.4, we obtained a perfect detection result. The target was missed when the SNR was reduced to 43.9. The changes in position of the remaining candidate targets are caused by the randomness of the Gaussian noise. Consequently, the lower SNR limit at which the algorithm can detect the target is around 44. When the SNR was higher than 44, the algorithm could detect the real target; otherwise, depending on the distribution of the noise, it might not. The test clearly showed a certain degree of anti-noise ability of the algorithm based on density-distance space and the APG method.
$$\mathrm{SNR} = 10\lg\frac{\sigma_t^2}{\sigma_n^2} \tag{10}$$
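Equation (10) is a ratio of variances in decibels; a minimal helper for reproducing the SNR figures is sketched below (the function name and interface are assumptions).

```python
import numpy as np

def snr_db(target_patch, noise_sigma):
    """Eq. (10): SNR = 10*lg(sigma_t^2 / sigma_n^2), where sigma_t^2
    is the grayscale variance of the target region and sigma_n^2 the
    variance of the added Gaussian noise."""
    sigma_t2 = np.var(np.asarray(target_patch, dtype=np.float64))
    return 10.0 * np.log10(sigma_t2 / noise_sigma ** 2)
```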
In addition, the multi-target detection ability of the algorithm in different environments was tested by simulation. We chose six typical images with different backgrounds, target number and target size to accurately test the robustness of the algorithm. Figure 9 shows the ρ-σ space and the detection results. The targets are marked with red rectangles. Two pairs of ground and sky images, respectively containing 3 and 4 infrared targets with different size, grayscale and regions, were selected. Besides, the limit of the detection ability of the algorithm was tested by the last image with 6 targets. All backgrounds in the image sequences include clutters with different types and shapes. The details of the original images are listed in Table 3.
In Figure 9, the red rectangles represent the real targets, and the green ones represent the candidate targets. In the first four images, all real targets were well-detected under the interference of various clutters. No false detections or missed detections occurred. Two extreme conditions should be considered. In the fifth image, only three of the four real targets were detected because the missing target has a minuscule grayscale. In the sixth image, all targets were detected by setting n to 6 individually. The results proved that our method had a superior robustness, adaptability and efficiency for the detection of multiple infrared targets in most conditions. However, the target with a low α value will still be missed when its thermal signature is lower than clutters.
To further evaluate the comprehensive effectiveness of the proposed algorithm, six novel algorithms were used for comparison, namely the Top-Hat method [17], the least squares support vector machine method (LS-SVM) [38], the high-boost-based multiscale local contrast measure (HBMLCM) [28], the multiscale relative local contrast measure (RLCM) [24], the multiscale patch-based contrast measure (MPCM) [27] and the minimum local Laplace of Gaussian (MinLocalLoG) [13]. We used the receiver operator characteristic (ROC) curve to compare the effects of the algorithm. The ROC is calculated by the true positive rate (TPR) and false positive rate (FPR), as shown below:
$$\mathrm{TPR} = \frac{\mathrm{True\ Positive}}{\mathrm{True\ Positive} + \mathrm{False\ Negative}} \tag{11}$$
$$\mathrm{FPR} = \frac{\mathrm{False\ Positive}}{\mathrm{False\ Positive} + \mathrm{True\ Negative}} \tag{12}$$
The ROC curve plots FPR on the abscissa against TPR on the ordinate so as to provide a quantitative comparison of detection performance. Meanwhile, the area under the curve (AUC) is the area enclosed by the ROC curve and the coordinate axes. We prepared five different datasets of 200 frames for evaluation. Every frame has only one true target with a corresponding coordinate. To be counted as a correct result, the detected target must have the same coordinate as the real target. The detection rate can be regarded as the AUC value, i.e., the proportion of real targets detected in the 200 frames. Meanwhile, a higher AUC value indicates a better ability of the method to detect the real target in one frame.
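Equations (11) and (12) come straight from a confusion matrix; the tiny helper below sketches how each (FPR, TPR) point of the ROC curve would be computed.

```python
def tpr_fpr(tp, fp, tn, fn):
    """Eqs. (11)-(12): rates from confusion-matrix counts; sweeping a
    detection threshold yields (FPR, TPR) points of the ROC curve,
    and AUC is the area under those points."""
    return tp / (tp + fn), fp / (fp + tn)
```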
Figure 10 shows the ROC curves and AUC values. The red dotted line represents the proposed algorithm. It can be seen from all ROC curves that the proposed algorithm generally has superior detection accuracy. In sequences 3–5, the AUC value of our algorithm is almost equal to 1. However, the detection accuracy on sequence 2 is not ideal, owing to heavier clutter and the low target grayscale. For example, the second sequence, shown in Figure 10f, has many small-area clutters, which seriously interfere with the pixel growth step. The AUC value of our method was 6.02 × 10⁻⁵ and 9.03 × 10⁻⁵ lower than that of HBMLCM and LS-SVM, respectively. Thus, the detection rate of our method is slightly worse than that of LS-SVM and HBMLCM but better than that of the other four algorithms.
Then, the running time test was performed. We compared the time consumption of all algorithms by averaging the per-frame running time over the 200 frames of each sequence, as listed in Table 4. Although our algorithm ran slower than four of the other algorithms, the gap stayed within the same order of magnitude, and our algorithm offered better detection performance. The total average running time is listed at the bottom of the table. The running time of the proposed method is almost equal to that of MPCM; it exceeds those of MinLocalLoG, LS_SVM, HBMLCM and MPCM by 0.0057, 0.0096, 0.0031 and 0.0006 s, respectively, but remains in the same order of magnitude. Top-Hat had the shortest running time, about 3 times faster than the proposed algorithm. RLCM has the largest amount of computation, exceeding most methods by two orders of magnitude.
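The per-frame averages in Table 4 correspond to a benchmark of the following form. This is a hedged sketch of our own; `average_frame_time` and the wall-clock timing approach are assumptions, not taken from the paper:

```python
import time

def average_frame_time(detector, frames):
    """Average wall-clock running time per frame: run the detector on
    every frame of a sequence and divide the total elapsed time by the
    number of frames (e.g., 200 frames per sequence in Table 4)."""
    start = time.perf_counter()
    for frame in frames:
        detector(frame)
    return (time.perf_counter() - start) / len(frames)
```

For each of the seven algorithms, calling this once per 200-frame sequence would reproduce one row of Table 4.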
The comprehensive performance and running time of HBMLCM were good, but its adaptability to clutter is weaker than that of our algorithm. For the fuze, however, the target detection task must first ensure detection accuracy, and only then improve real-time performance as much as possible. Therefore, compared with the other algorithms, our method is more applicable to the fuze.
To summarize, our algorithm had better capability and robustness than the other algorithms, with acceptable running time. Compared with the six existing methods, the proposed method has the best detection accuracy. Although Top-Hat has the best real-time performance, its detection accuracy is significantly lower than that of our method. In terms of comprehensive performance, HBMLCM is closest to our method; however, the ROC curve in Figure 10a shows that the detection ability of our method is much better than that of HBMLCM for low-thermal targets. Moreover, the simulation also provides a theoretical basis for the hardware-in-loop (HIL) experiments.

4.3. HIL Experiment

The HIL experiment was used to test the effectiveness of our algorithm on the fuze. To simulate the outdoor environment, we used a turntable to fix the fuze prototype. Four infrared small targets were placed in different directions within the field of view. The detection distance was set to 5 m, the equivalent of the distance between the mortar and the target. Moreover, considering the field of view, the diameter of the target placement range was 1.5 m, calculated from the detection distance and the parameters of the infrared imager.
Figure 11a shows the prototype of the infrared image sensor and its strapdown design with the correction fuze. Figure 11b shows the fuze prototype fixed on the 3-DOF turntable. First, the experimental system was powered on. The turntable simulated the rotation of the mortar around the projectile's axis at a speed of 2 r/s, while the fuze was turned on to keep the field of view stable. In addition, a random swing within 5 degrees in the pitch and yaw directions was added to simulate aerodynamic disturbance. The detection result of one frame is shown in Figure 11c. Although the disturbance deformed the infrared targets, the proposed method detected all the targets successfully.
Through the simulation and the HIL experiment, our method was proven robust and suitable for the trajectory correction fuze. In terms of both subjective and objective evaluation, the performance of our algorithm is satisfactory compared with other state-of-the-art algorithms for small infrared target detection.

5. Conclusions

In this paper, a simple and effective method for small infrared target detection was proposed for the trajectory correction fuze. Three challenges, namely long detection distance, small variable target size and minimal thermal signature, were addressed. This research contributes to the application of infrared cameras to intelligent fuzes.
First, the characteristics of the fuze and trajectory were analyzed, and the infrared image sensor was selected by calculating its parameters. Second, a spatial-only target detection method was adopted: inspired by filtering methods, a density-distance space was proposed to detect candidate targets in the original infrared image. Finally, an adaptive pixel growth method was presented to select the real targets from the seeds (candidate targets) by suppressing various kinds of clutter.
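The density-distance idea can be illustrated with a toy sketch. The definitions below (a Gaussian-weighted local density ρ, and a distance σ to the nearest pixel of higher density) are simplified stand-ins of our own, not the paper's exact formulation; a small, bright, isolated target stands out with both large ρ and large σ, so candidates can be taken from the high ρ·σ corner of the ρ-σ space:

```python
import numpy as np

def density_distance(img, radius=2, sig=1.0):
    """Toy rho-sigma (density-distance) space: rho is a Gaussian-weighted
    local gray sum; sigma is the Euclidean distance to the nearest pixel
    with a strictly higher density (image diagonal for the global peak)."""
    h, w = img.shape
    ax = np.arange(-radius, radius + 1)
    kern = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sig**2))
    pad = np.pad(img.astype(float), radius, mode='constant')
    rho = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            rho[y, x] = (pad[y:y + 2*radius + 1, x:x + 2*radius + 1] * kern).sum()
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
    flat = rho.ravel()
    sigma = np.zeros(flat.size)
    for i in range(flat.size):
        higher = flat > flat[i]
        if higher.any():
            d = np.sqrt(((coords[higher] - coords[i]) ** 2).sum(axis=1))
            sigma[i] = d.min()
        else:
            sigma[i] = np.sqrt(h * h + w * w)   # global density peak
    return rho, sigma.reshape(h, w)

# Candidate targets: pixels with large rho * sigma
img = np.zeros((16, 16))
img[5, 9] = 255                                  # one bright point target
rho, sigma = density_distance(img)
cand = np.unravel_index(np.argmax(rho * sigma), img.shape)
```

In the paper's pipeline, such candidates then serve as the seeds for the adaptive pixel growth (APG) step, which suppresses clutter regions.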
Three experiments proved the robustness, effectiveness and applicability of the method, respectively. First, the outdoor equivalent detection experiment verified the correctness of the parameters of the infrared image sensor, overcoming the challenge of long-distance detection. Second, the simulation proved that our method has superior capabilities of anti-noise detection, multiple-target detection and detection of targets of various small sizes. Meanwhile, compared with six state-of-the-art algorithms in terms of ROC curves and running time over five different sequences, our method showed excellent detection accuracy and acceptable time consumption. Compared with Top-Hat, our method had better detection accuracy at the cost of a 0.0151 s longer running time. HBMLCM has a comprehensive performance similar to that of our method; however, our method has better capabilities of clutter suppression and low-thermal-target detection. Although the average running time of our method was 0.0057, 0.0096, 0.0031 and 0.0006 s slower than MinLocalLoG, LS_SVM, HBMLCM and MPCM, respectively, the order of magnitude was the same. Therefore, the challenge of minimal thermal signature was solved. Finally, the HIL experiment produced an excellent detection result by applying our method to the correction fuze based on the infrared image sensor.
As for future work, it should be considered that the apparent size of the target grows during the flight of the mortar, so a small-target detection method will no longer apply. Therefore, it will be necessary to detect the key parts of large-area infrared targets using deep learning methods.

Author Contributions

Conceptualization, C.Z. and D.L.; methodology, C.Z.; software, C.Z.; validation, C.Z. and Y.W.; formal analysis, C.Z. and J.L.; investigation, C.Z. and J.Q.; resources, C.Z.; data curation, C.Z.; writing—original draft preparation, C.Z.; writing—review and editing, C.Z. and D.L.; visualization, C.Z. and Y.W.; supervision, D.L. and J.L.; project administration, D.L.; funding acquisition, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, Z.; Shen, Q.; Deng, Z.; Cheng, J. Real-Time Estimation for Roll Angle of Spinning Projectile Based on Phase-Locked Loop on Signals from Single-Axis Magnetometer. Sensors 2019, 19, 839.
  2. Theodoulis, S.; Sève, F.; Wernert, P. Robust gain-scheduled autopilot design for spin-stabilized projectiles with a course-correction fuze. Aerosp. Sci. Technol. 2015, 42, 477–489.
  3. He, C.; Xiong, D.; Zhang, Q.; Liao, M. Parallel Connected Generative Adversarial Network with Quadratic Operation for SAR Image Generation and Application for Classification. Sensors 2019, 19, 871.
  4. Yue, R.; Wang, H.; Jin, T.; Gao, Y.; Sun, X.; Yan, T.; Zang, J.; Yin, K.; Wang, S. Image Motion Measurement and Image Restoration System Based on an Inertial Reference Laser. Sensors 2021, 21, 3309.
  5. Li, R.; Li, D.; Fan, J. Correction Strategy of Mortars with Trajectory Correction Fuze Based on Image Sensor. Sensors 2019, 19, 1211.
  6. Fresconi, F. Guidance and Control of a Projectile with Reduced Sensor and Actuator Requirements. J. Guid. Control. Dyn. 2011, 34, 1757–1766.
  7. Li, R.; Li, N.; Fan, J. Dynamic Response Analysis for a Terminal Guided Projectile with a Trajectory Correction Fuze. IEEE Access 2019, 7, 94994–95007.
  8. Zhang, C.; Li, D. Mechanical and Electronic Video Stabilization Strategy of Mortars with Trajectory Correction Fuze Based on Infrared Image Sensor. Sensors 2020, 20, 2461.
  9. Uzair, M.; Brinkworth, R.; Finn, A. Detecting Small Size and Minimal Thermal Signature Targets in Infrared Imagery Using Biologically Inspired Vision. Sensors 2021, 21, 1812.
  10. Chan, R.H.; Kan, K.K.; Nikolova, M.; Plemmons, R.J. A two-stage method for spectral–spatial classification of hyperspectral images. J. Math. Imaging Vis. 2020, 62, 790–807.
  11. Deshpande, S.D.; Er, M.H.; Venkateswarlu, R.; Chan, P. Max-mean and max-median filters for detection of small-targets. Proc. SPIE 1999, 3809, 74–83.
  12. Deng, L.; Zhu, H.; Zhou, Q.; Li, Y. Adaptive top-hat filter based on quantum genetic algorithm for infrared small target detection. Multimedia Tools Appl. 2017, 77, 10539–10551.
  13. Zhang, K.; Yang, K.; Li, S.; Chen, H.-B. A Difference-Based Local Contrast Method for Infrared Small Target Detection Under Complex Background. IEEE Access 2019, 7, 105503–105513.
  14. Lu, Y.; Dong, L.; Zhang, T.; Xu, W. A Robust Detection Algorithm for Infrared Maritime Small and Dim Targets. Sensors 2020, 20, 1237.
  15. Chen, F.; Huang, M.; Ma, Z.; Li, Y.; Huang, Q. An Iterative Weighted-Mean Filter for Removal of High-Density Salt-and-Pepper Noise. Symmetry 2020, 12, 1990.
  16. Kim, S.; Yang, Y.; Lee, J.; Park, Y. Small Target Detection Utilizing Robust Methods of the Human Visual System for IRST. J. Infrared Millimeter Terahertz Waves 2009, 30, 994–1011.
  17. Ming, Z.; Li, J.; Zhang, P. The Design of Top-hat Morphological Filter and Application to Infrared Target Detection. Infrared Phys. Technol. 2005, 48, 67–76.
  18. Huang, S.; Peng, Z.; Wang, Z.; Wang, X.; Li, M. Infrared Small Target Detection by Density Peaks Searching and Maximum-Gray Region Growing. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1919–1923.
  19. Wang, X.; Lv, G.; Xu, L. Infrared dim target detection based on visual attention. Infrared Phys. Technol. 2012, 55, 513–521.
  20. Chen, C.L.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A Local Contrast Method for Small Infrared Target Detection. IEEE Trans. Geosci. Remote Sens. 2014, 52, 574–581.
  21. Deng, L.; Zhang, J.; Xu, G.; Zhu, H. Infrared small target detection via adaptive M-estimator ring top-hat transformation. Pattern Recognit. 2021, 112, 107729.
  22. Han, J.; Ma, Y.; Zhou, B.; Fan, F.; Liang, K.; Fang, Y. A Robust Infrared Small Target Detection Algorithm Based on Human Visual System. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2168–2172.
  23. Qin, Y.; Li, B. Effective Infrared Small Target Detection Utilizing a Novel Local Contrast Method. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1890–1894.
  24. Han, J.; Liang, K.; Zhou, B.; Zhu, X.; Zhao, J.; Zhao, L. Infrared Small Target Detection Utilizing the Multiscale Relative Local Contrast Measure. IEEE Geosci. Remote Sens. Lett. 2018, 15, 612–616.
  25. Wu, L.; Ma, Y.; Fan, F.; Wu, M.; Huang, J. A Double-Neighborhood Gradient Method for Infrared Small Target Detection. IEEE Geosci. Remote Sens. Lett. 2020, 1–5.
  26. Han, J.; Moradi, S.; Faramarzi, I.; Liu, C.; Zhang, H.; Zhao, Q. A Local Contrast Method for Infrared Small-Target Detection Utilizing a Tri-Layer Window. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1822–1826.
  27. Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. 2016, 58, 216–226.
  28. Shi, Y.; Wei, Y.; Yao, H.; Pan, D.; Xiao, G. High-Boost-Based Multiscale Local Contrast Measure for Infrared Small Target Detection. IEEE Geosci. Remote Sens. Lett. 2017, 15, 33–37.
  29. Liu, J.; He, Z.; Chen, Z.; Shao, L. Tiny and Dim Infrared Target Detection Based on Weighted Local Contrast. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1780–1784.
  30. Wang, L.; Li, R.; Shi, H.; Sun, J.; Zhao, L.; Seah, H.S.; Quah, C.K.; Tandianus, B. Multi-Channel Convolutional Neural Network Based 3D Object Detection for Indoor Robot Environmental Perception. Sensors 2019, 19, 893.
  31. Gao, X.; Luo, H.; Wang, Q.; Zhao, F.; Ye, L.; Zhang, Y. A Human Activity Recognition Algorithm Based on Stacking Denoising Autoencoder and LightGBM. Sensors 2019, 19, 947.
  32. Ding, L.; Xu, X.; Cao, Y.; Zhai, G.; Yang, F.; Qian, L. Detection and tracking of infrared small target by jointly using SSD and pipeline filter. Digit. Signal Process. 2020, 110, 102949.
  33. Yang, X.; Wang, F.; Bai, Z.; Xun, F.; Zhang, Y.; Zhao, X. Deep Learning-Based Congestion Detection at Urban Intersections. Sensors 2021, 21, 2052.
  34. Wang, K.; Li, S.; Niu, S.; Zhang, K. Detection of Infrared Small Targets Using Feature Fusion Convolutional Network. IEEE Access 2019, 7, 146081–146092.
  35. López-Sastre, R.J.; Herranz-Perdiguero, C.; Guerrero-Gómez-Olmedo, R.; Oñoro-Rubio, D.; Maldonado-Bascón, S. Boosting Multi-Vehicle Tracking with a Joint Object Detection and Viewpoint Estimation Sensor. Sensors 2019, 19, 4062.
  36. Shi, J.; Chang, Y.; Xu, C.; Khan, F.; Chen, G.; Li, C. Real-time leak detection using an infrared camera and Faster R-CNN technique. Comput. Chem. Eng. 2020, 135, 106780.
  37. Srivastava, A.; Rodriguez, J.; Saco, P.; Kumari, N.; Yetemen, O. Global Analysis of Atmospheric Transmissivity Using Cloud Cover, Aridity and Flux Network Datasets. Remote Sens. 2021, 13, 1716.
  38. Moradi, S.; Moallem, P.; Sabahi, M.F. Fast and robust small infrared target detection using absolute directional mean difference algorithm. Signal Process. 2020, 177, 107727.
Figure 1. Model of the trajectory correction fuze and the infrared image sensor.
Figure 2. Flowchart of the density-distance space method.
Figure 3. Calculation process of density and distance of pixels.
Figure 4. Detection results of the two targets with different sizes in the same condition. Both (a) and (b) contain 4 images, which are the original image, detection result, target region mesh and ρ-σ space, respectively.
Figure 5. Process of the APG method. From left to right is the two-step growth process of the No. 1 seed.
Figure 6. Growth features of seeds: (a) contains 5 seeds in different regions, (b) is the seed of the target region and (c–f) are the seeds in clutter regions.
Figure 7. Experiment of detection capability of the infrared imager. The target observation result is shown on the left. The equipment is shown on the right.
Figure 8. Simulation of the anti-noise ability of the algorithm. The SNR decreases from left to right.
Figure 9. Simulation of the detection ability of multi-target for six images. The first row is the original images, the second row is the corresponding density-distance space and the bottom row is the detection results.
Figure 10. Simulation of the ROC curves of different algorithms and corresponding AUC values: (a–e) are the comparison results of 5 different sequences, where each sequence has 200 frames, and (f) is a frame of image in sequence 2, which is mainly used to explain the reason for the low AUC value of our method.
Figure 11. Experiment of HIL.
Table 1. Parameters of mortar and infrared image sensor.

Trajectory Parameters                Infrared Imager
Detection distance (m)   1500        Focal length (mm)    19
Launch angle (degree)    53          Pixel pitch (μm)     17
Pitch range (degree)     60.3–65.2   Fov (degree)         17 × 13
Time (s)                 8           Array format         320 × 240
Initial velocity (m/s)   272         Spectral band (μm)   7.5–13.5
Table 2. Experimental conditions.

Experimental Conditions   Detection Distance (m)   Target Size (m)   Temperature Difference (K)
Real Conditions           1500                     2.3               15
Equivalent Conditions     65                       0.1               7
Table 3. Details of the six original images.

No.   Background     Target Number   Image Size
1     Sea Building   2               284 × 213
2     Ground         3               220 × 140
3     Cloud Sky      3               281 × 240
4     Ground         4               220 × 140
5     Cloud Sky      4               250 × 200
6     Ground         6               220 × 140
Table 4. Average running time for a frame, of seven algorithms in five sequences (Seq.).

          Top-Hat (s)   MinLocalLoG (s)   LS_SVM (s)   HBMLCM (s)   RLCM (s)   MPCM (s)   Proposed (s)
Seq.1     0.0058        0.0155            0.0115       0.0181       1.9756     0.0213     0.0214
Seq.2     0.0059        0.0160            0.0120       0.0184       1.9616     0.0221     0.0214
Seq.3     0.0059        0.0154            0.0115       0.0179       1.9817     0.0213     0.0214
Seq.4     0.0061        0.0152            0.0113       0.0177       1.9799     0.0214     0.0211
Seq.5     0.0061        0.0149            0.0112       0.0178       1.9691     0.0217     0.0202
Average   0.0060        0.0154            0.0115       0.0180       1.9735     0.0216     0.0211
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Zhang, C.; Li, D.; Qi, J.; Liu, J.; Wang, Y. Infrared Small Target Detection Method with Trajectory Correction Fuze Based on Infrared Image Sensor. Sensors 2021, 21, 4522. https://doi.org/10.3390/s21134522
