Article

A Combined Approach to Infrared Small-Target Detection with the Alternating Direction Method of Multipliers and an Improved Top-Hat Transformation

1
Key Laboratory of Nondestructive Testing (Ministry of Education), Nanchang Hang Kong University, Nanchang 330031, China
2
School of Computing and Digital Technology, Birmingham City University, Birmingham B5 5JU, UK
*
Authors to whom correspondence should be addressed.
Sensors 2022, 22(19), 7327; https://doi.org/10.3390/s22197327
Submission received: 6 August 2022 / Revised: 13 September 2022 / Accepted: 15 September 2022 / Published: 27 September 2022
(This article belongs to the Special Issue Intelligent Monitoring, Control and Optimization in Industries 4.0)

Abstract

In infrared small-target detection, methods based on the infrared patch image (IPI) model produce better results than other popular approaches (such as max-mean, top-hat, and the human visual system), but in some extreme cases they suffer from long processing times and inconsistent performance. To overcome these issues, we propose a novel approach that divides the traditional target detection process into two steps: suppression of background noise and elimination of clutter. The workflow consists of four steps: after the images are imported, the second step applies the alternating direction method of multipliers (ADMM) to preliminarily remove the background. Unlike the IPI model, this step does not require sliding patches, resulting in a significant reduction in processing time. To eliminate residual noise and clutter, the interim results are then processed in step 3 by morphological filtering with an improved new top-hat transformation that uses a threefold structuring element. The final step is threshold segmentation using an adaptive threshold algorithm. Compared with the IPI and new top-hat methods, as well as other widely used methods, our approach detected infrared targets more efficiently (90% less computational time) and more consistently (no sudden performance drops).

1. Introduction

Infrared small-target detection is a key technology in infrared search-and-track systems and has been widely used in many areas, such as aerial reconnaissance, early warning, military surveillance, and reconnaissance [1,2,3], for its advantages of long-range detection, full-time operation, and anti-interference. However, due to the nature of light scattering and energy loss in long-distance imaging, the targets are captured in small sizes and often with clutter and noise, resulting in images with low signal-to-noise ratios [4,5,6]. Despite many efforts made in the past decades (as outlined below), the detection of small infrared targets remains a challenge, especially in an environment of complex backgrounds or when detecting extra-small targets.
Generally, infrared small-target detection methods fall into two categories: multiframe image detection and single-frame image detection. Multiframe image detection utilizes prior information about the target and background from the previous frame(s) to detect moving targets. While early studies were mostly implemented through dynamic programming [7], three-dimensional matched filtering [8], and multilevel hypothesis testing [9], studies after the 2000s often used combined methods, e.g., a direction-matched filter based on spatiotemporal information to suppress clutter [10], or a combination of a bilateral filter and a temporal cross product (BF-TCP) [11] with temporal cross-entropy to generate spatiotemporal images for target detection. In recent years, methods such as the Markov random field guided noise model [12], spatial-temporal local contrast filtering (STLCF) [13], the spatial-temporal features measure (STFM) [14], and guided filtering with convolutional neural networks [15] were used in many studies. However, those approaches often required prior information about the image and suffered from high time complexity, caused by the numerous calculations on the motion trajectories of all candidate targets. Furthermore, their performance still depended on the detection performance on a single frame, which limited their application in practice. As a result, more studies now focus on the single-frame approach [16].
Early studies on single-frame images aimed to improve the contrast and signal-to-noise ratio by enhancing targets and suppressing noise, either through linear or nonlinear filtering or by estimating the background component with preprocessing. Popular algorithms include max-median, max-mean [17], and wavelet transformation [18], which work well when the background in the image is simple. However, a significant change in the background, where the grey-value gradient of the corresponding area changes greatly, often results in false detections, and complex backgrounds cause high false alarm rates for similar reasons. To tackle such issues, some studies took a morphological approach. In morphology, the scale of the structuring element is a critical parameter whose value determines the accuracy of feature extraction [19]. These methods utilize prior knowledge of the size and shape of the target to construct appropriate structuring elements and then obtain the detection image through a differential operation between the original image and the filtered image, as in the traditional top-hat [20] and adaptive top-hat methods [21,22]. The structuring element has a great impact on the top-hat operation; the new white top-hat transformation (NWTH) [23] is often considered the best of the morphological approach, along with some improved ring top-hat transformations [24,25,26].
Alternatively, the human visual system (HVS) approach was introduced to infrared small-target detection in 2013, when the local contrast measure (LCM) method [27] was proposed. This approach considers that the target and the background can be separated by constructing different local contrasts, as the grey values of the small target are higher than those of the background. It provided both good detection performance and short processing times, so many further studies were carried out, such as improved LCM (ILCM) [28], relative LCM (RLCM) [29], and novel LCM (NLCM) [30]. In addition, local contrast has been combined with other modules, such as weighted double-layer local contrast with a multidirectional map [31], which detects small targets through local contrast and target gradients, and the combination of a Laplacian of Gaussian (LoG) filter and a negative LoG filter with local contrast [32]. However, such methods show mediocre background suppression when dealing with bright backgrounds and are also known for enhancing noise points.
Based on the characteristics of the infrared small-target image, Denney et al. [33] first suggested that the target detection problem could be converted into a robust principal component analysis (RPCA) problem. Gao et al. [34] further proposed the infrared patch image (IPI) model, which considered the infrared background a single low-rank subspace, so that the small target should be regarded as an outlier; i.e., the small-target detection problem became an optimization problem of low-rank- and sparse-matrix recovery. In IPI, the infrared image was first processed and transformed into smaller image patches. Then, the accelerated proximal gradient (APG) algorithm was used to solve the low-rank- and sparse-matrix separation problem. Such an approach showed superior results but required significantly longer processing times, normally seconds or tens of seconds (for comparison, the processing times of the methods based on the aforementioned approaches were under 0.1 s). In addition, its performance could be affected by serious clutter and noise. To address such issues, some studies considered replacing APG with the alternating direction method of multipliers (ADMM), such as the weighted IPI model (WIPI) [35] and the non-negative IPI model via partial sum minimization of singular values (NIPPS) [36]. Despite the excellent detection performance, the high time complexity of this approach is yet to be resolved.
With the development of deep learning in the field of computer vision, some studies applied this approach to infrared small-target detection [37,38,39,40]. It provides comparable performance but requires training the model with a large amount of data in advance. Furthermore, such models rely on the type of training data, i.e., the detection performance on varied or new backgrounds/scenarios may vary. Although deep learning models and network structures are becoming ever lighter [41,42], applying them to infrared small-target detection remains a challenge because an infrared small target occupies only a few pixels and has fuzzy texture.
To address such issues, we propose a novel combined approach to further improve the excellent detection performance of IPI models, as well as to significantly reduce the processing time by incorporating an improved NWTH transformation with a specifically designed threefold structuring element. Our method produced a consistent performance across all five testing image sequences. It used 90% less time than the IPI method and scored the best in all metrics overall as compared to the methods using a single approach.
The paper is organized as follows: Section 2 presents the RPCA and morphological approaches and their recent developments, which underpin our study. Section 3 describes the combined approach and the workflow of the proposed method. Section 4 shows the experimental results and the evaluation against six state-of-the-art methods. The conclusion is presented in Section 5.

2. Related Work

2.1. Robust Principal Component Analysis (RPCA)

In infrared small-target detection, on one hand, the infrared images normally have a characteristic of nonlocal autocorrelation [43]; thus, the background can be represented by a low-rank matrix. For example, Figure 1 shows four classic infrared images and their corresponding singular value curves. Although the images have different backgrounds, their singular values converge to zero quickly.
On the other hand, the small target can be considered a sparse matrix when the target area is less than 15% of the total image and the signal-to-clutter ratio between the target and the background is less than 4 dB [27]. As a result, the original infrared image is a composite of a background image with low-rank characteristics and a foreground image containing a small target with sparse characteristics, plus noise. Therefore, RPCA can be used to separate the background and the foreground.
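The low-rank premise is easy to verify numerically. In the sketch below (a synthetic toy image, not one of the paper's datasets), a smooth background is built as a rank-1 outer product and a small bright blob plays the role of the sparse target; the singular-value spectrum of the composite image collapses after the first few values, just as in Figure 1:

```python
import numpy as np

# Smooth synthetic "background": the outer product of two slowly varying
# 1-D profiles, so it is exactly rank 1 before the target is added.
rows = np.linspace(0, 1, 128)
cols = np.linspace(0, 1, 128)
background = np.outer(100 + 50 * rows, 1 + 0.5 * cols)

# Sparse "target": a single bright 3x3 blob (itself rank 1).
target = np.zeros_like(background)
target[60:63, 60:63] = 200.0

image = background + target

# Nearly all spectral energy sits in the first few singular values --
# the nonlocal-autocorrelation (low-rank) property that RPCA exploits.
s = np.linalg.svd(image, compute_uv=False)
ratio = s[:3].sum() / s.sum()
print(f"top-3 singular values carry {ratio:.1%} of the spectrum")
```

Real infrared backgrounds are not exactly low rank, but their singular values decay just as quickly, which is what makes the RPCA separation viable.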

2.2. Alternating Direction Method of Multipliers (ADMM)

The ADMM algorithm is a method to solve the RPCA convex optimization problem [35,36], in which the infrared image is taken as the low-rank data observation matrix $D$. However, when $D$ is affected by random noise, its low-rank characteristic disappears and $D$ becomes full rank. One solution is to convert the constrained optimization into an unconstrained optimization through convex optimization, i.e., to recover the real structure of $D$ by decomposing it into a low-rank matrix and a sparse matrix. As a result, the RPCA problem can be represented in the following form:
$$\min_{A,E,N} \; \|A\|_* + \lambda \|E\|_1 + \gamma \|N\|_F^2 \quad \text{s.t.} \quad D = A + E + N \quad (1)$$
where $\|A\|_* = \sum_{i=1}^{m} \delta_i(A)$ denotes the nuclear norm of matrix $A$; $\delta_i(A)$ denotes the $i$-th singular value of $A$; $\lambda$ is the weight of the sparse target term; $\|E\|_1 = \sum_{ij} |E_{ij}|$ denotes the sum of the absolute values of all elements of matrix $E$; $A$ is the background; $E$ is the target; $N$ is the random noise; and $\gamma$ is the weight of the random noise, with a small value. The augmented Lagrange function of Equation (1) is defined as follows:
$$L_\rho(A, E, N, G) = \|A\|_* + \lambda \|E\|_1 + \gamma \|N\|_F^2 + \langle G,\, D - A - E - N \rangle + \frac{\rho}{2} \|D - A - E - N\|_F^2 \quad (2)$$
where $\rho$ is the penalty factor; $G$ is the Lagrange multiplier; and $\frac{\rho}{2}\|D - A - E - N\|_F^2$ is the quadratic penalty term, the additional constraint compared with Equation (1).

2.3. Top-Hat and NWTH

The traditional top-hat method [20] separates the target by subtracting the opening of the original image from the original image. In morphology, opening is used to eliminate bright pixels (i.e., the target) from an infrared image via a carefully constructed structuring element. The opening operation is defined as follows:
$$f(x, y) \circ B = (f(x, y) \ominus B) \oplus B \quad (3)$$
where $f(x, y)$ represents the original image; $B$ is the structuring element; $\circ$ represents the opening operation; $\oplus$ is the dilation operation; and $\ominus$ is the erosion operation.
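As an illustration of Equation (3), the opening-based top-hat can be sketched with SciPy's grey-scale morphology; the toy image and the 5 × 5 structuring element below are assumptions for demonstration, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import grey_opening

# Flat background (grey value 10) with one small 3x3 bright target.
image = np.full((32, 32), 10.0)
image[15:18, 15:18] = 200.0

# Opening with a structuring element larger than the target removes it:
# erosion shrinks the bright blob away, and the following dilation can
# only restore structures that survived the erosion.
opened = grey_opening(image, size=(5, 5))

# Classic top-hat: original minus its opening keeps only the target.
top_hat = image - opened
print(top_hat.max())  # -> 190.0 (target minus background level)
```

On cluttered backgrounds, bright clutter also survives this subtraction, which is exactly the weakness the NWTH variants below address.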
To better tackle the heavy clutter and noise, the new white top-hat method (NWTH) [23] proposed a new operation by swapping the order of erosion and dilation as below:
$$f(x, y) \odot B_{oi} = (f(x, y) \oplus \Delta B) \ominus B_b \quad (4)$$
where $\odot$ represents the new operation, and $\Delta B$ and $B_b$ represent the structuring elements shown in Figure 2. Note that different but correlated structuring elements (see Figure 2) are used for dilation and erosion.

3. The Proposed Method

3.1. Overview

On one hand, although the RPCA-based IPI methods generally produced better results, they suffered from long processing times, in most cases tens of seconds. This was due to the reconstruction of the matrix from patch images using nonlocal autocorrelation and cannot be easily improved without fundamentally changing the algorithms. On the other hand, methods using the morphological or HVS approaches could run much faster (under one second) and were still able to produce good results. Our motivation was to explore whether a combined approach could achieve both a better result and a short processing time.
For a given infrared image, it can be decomposed as target, background, and noise [34]:
$$f_D(x, y) = f_T(x, y) + f_B(x, y) + f_N(x, y) \quad (5)$$
where $(x, y)$ is the coordinate of the pixel; $f_D$ is the original infrared image; and $f_T$, $f_B$, and $f_N$ represent the target, the background, and the noise, respectively.
Rather than using a single step to separate the target matrix $f_T$ directly, we proposed to first separate the background $f_B$ from the image, which can be described as a typical RPCA problem (where ADMM can be used without IPI). In the next step, with most of the background removed from the new image, morphological filtering can be performed. We considered NWTH a good choice, as it was specifically designed to tackle noise and clutter. It is worth pointing out that although the morphological approach is not good at dealing with complex backgrounds, this weakness is not exposed in our combined approach, wherein the background is already preliminarily removed in the first step. We did not choose the HVS-based approach because it has issues with bright backgrounds, a characteristic that cannot be mitigated in the first step. The resulting proposed method is described in Figure 3.

3.2. Image Decomposition—ADMM

The goal of this step was to separate the background from the image, an RPCA-related problem as discussed in Section 2. Compared to the traditional RPCA-based methods, wherein both the background and the noise were suppressed in a single process, the image decomposition step in our approach had a much higher tolerance and allowed partially residual background and noise, which would be suppressed at the next step of morphological filtering; i.e., the decomposition aimed mainly to separate $f_B$ in Equation (5). Therefore, this step was considered a preliminary suppression, and ADMM could be applied to the whole image directly. Unlike the IPI-based methods, our approach did not process patch images with a sliding window and thus could significantly improve the processing time.
To solve the problem defined by Equation (2), only one of $A$, $E$, and $G$ was updated via its proximity function at each iteration, while the other two were fixed. The pseudocode of the algorithm is shown in Algorithm 1.
Algorithm 1 The pseudocode of ADMM.
Input: observation matrix $D \in \mathbb{R}^{m \times n}$
Output: low-rank matrix $A_k$ and sparse matrix $E_k$
let $\lambda = 1/\sqrt{\max(m, n)}$, $E_0 = 0$, $N_0 = 0$, $G_0 = 0$, $k = 1$, $\sigma = 2$;
while (not converged)
        $(U_k, \Sigma_k, V_k) = \mathrm{SVD}(D - E_k - N_k + \rho_k^{-1} G_k)$;
        $A_{k+1} = U_k S_{\rho_k^{-1}}(\Sigma_k) V_k^T$;
        $E_{k+1} = S_{\lambda/\rho_k}(D - A_{k+1} - N_k + \rho_k^{-1} G_k)$;
        $N_{k+1} = \frac{\rho_k}{\rho_k + 2\gamma}(D - A_{k+1} - E_{k+1} + \rho_k^{-1} G_k)$;
        $G_{k+1} = G_k + \rho_k(D - A_{k+1} - E_{k+1} - N_{k+1})$;
        $\rho_{k+1} = \sigma \rho_k$;
        $k = k + 1$;
end
where $k$ represents the number of iterations; $\sigma$ represents the growth coefficient of the penalty factor at each iteration; $\mathrm{SVD}(\cdot)$ represents the singular value decomposition of matrix $D - E_k - N_k + \rho_k^{-1} G_k$, with $U_k$ and $V_k$ the left and right orthogonal matrices and $\Sigma_k$ the diagonal matrix of singular values; and $S_\tau(\cdot)$ represents the contraction (soft-thresholding) operator with threshold $\tau$.
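A minimal NumPy sketch of Algorithm 1 is given below. The initial penalty factor, the growth coefficient $\sigma$, the noise weight $\gamma$, and the stopping rule are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrinkage operator S_tau applied element-wise."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def admm_rpca(D, gamma=0.05, rho=1e-3, sigma=1.5, max_iter=200, tol=1e-7):
    """Decompose D into low-rank A, sparse E, and noise N (Algorithm 1 sketch)."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    N = np.zeros_like(D)
    G = np.zeros_like(D)
    for _ in range(max_iter):
        # A-step: singular value thresholding of D - E - N + G/rho.
        U, s, Vt = np.linalg.svd(D - E - N + G / rho, full_matrices=False)
        A = U @ np.diag(soft_threshold(s, 1.0 / rho)) @ Vt
        # E-step: element-wise shrinkage with threshold lambda/rho.
        E = soft_threshold(D - A - N + G / rho, lam / rho)
        # N-step: closed-form ridge update for the Frobenius penalty.
        N = (rho / (rho + 2.0 * gamma)) * (D - A - E + G / rho)
        # Dual ascent on the multiplier, then grow the penalty factor.
        G = G + rho * (D - A - E - N)
        rho *= sigma
        if np.linalg.norm(D - A - E - N) <= tol * max(np.linalg.norm(D), 1.0):
            break
    return A, E, N
```

Applied to a full infrared frame, $A$ estimates the background $f_B$, and the residual $E + N$ is what the morphological filtering step then cleans up.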
After decomposing the low-rank matrix (i.e., the background f B ), the remaining components of the image consisted of the target f T and the noise, which included the original noise f N plus the residual background.

3.3. Morphological Filtering—An Improved NWTH Transformation

To better identify small targets in the decomposed sparse-matrix image, we constructed a threefold structuring element when adapting the top-hat-based method NWTH [23]. The structuring element is shown in Figure 4. $S_p$ is the structuring element for dilation, formed by subtracting $S_i$ from $S_o$. $S_o$ (a square) represents the outer shape of $S_p$, whose size is slightly larger than the target, while $S_i$ (a diamond) represents the inner shape of $S_p$, whose size is slightly smaller than the target. $S_f$ (a circle) is the structuring element for erosion, whose size should be between those of $S_i$ and $S_o$. The matrices of $S_p$ and $S_f$ are shown in Figure 5, where "1" represents the structuring elements.
To demonstrate how this threefold structuring element works, the process of the improved NWTH transformation is shown in Figure 6, wherein the images of the target region at each step are at the top and their corresponding matrices are at the bottom. The target has a shape of 5 × 3, and its corresponding pixels are highlighted in blue in Figure 6a. With the specifically constructed structuring element $S_p$, the pixels in the area surrounding the target all gained the local maximum values via dilation. Figure 6b shows the target was successfully highlighted by a rectangle (of bright pixels) while the target itself was restrained in a smaller diamond shape (of grey pixels). The result of erosion is shown in Figure 6c, wherein the target was enlarged into a rectangle (of grey pixels) with a highlighted outer boundary (of bright pixels). This was because the circular structuring element $S_f$ had a size between the outer and inner boundaries of $S_p$. The final result was obtained by subtracting Figure 6c from Figure 6a, wherein all background and noise were eliminated and the target was successfully obtained, as shown in Figure 6d. During the subtraction, pixels in the nontarget regions might yield negative values, which should be set to 0. As a result, our improved NWTH transformation was defined as below:
$$T_{iNW} = f(x, y) - \min\big((f(x, y) \oplus S_p) \ominus S_f,\; f(x, y)\big) \quad (6)$$
where $f(x, y)$ represents the original image; $\min$ is the element-wise minimum operation; $\oplus$ is dilation; $\ominus$ is erosion; and $S_p$ and $S_f$ are defined in Figure 4. To better eliminate the clutter and noise in the target region, the sizes of $S_i$ and $S_o$ for $S_p$ should be adjusted accordingly.
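The transformation in Equation (6) can be sketched with SciPy's grey-scale morphology. For simplicity, the square/diamond/circle shapes of Figure 4 are approximated here by a square ring for $S_p$ and a solid square for $S_f$; the sizes are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def improved_nwth(image, outer=9, inner=5, erode=7):
    """Improved NWTH of Equation (6): dilate with the ring element S_p,
    erode with S_f, take the element-wise minimum with the original
    image, subtract, and clip negative residues to zero."""
    # S_p: ring footprint -- an outer square with the inner part removed.
    sp = np.ones((outer, outer), dtype=bool)
    lo = (outer - inner) // 2
    sp[lo:lo + inner, lo:lo + inner] = False
    # S_f: solid footprint whose size lies between S_i and S_o.
    sf = np.ones((erode, erode), dtype=bool)
    dilated = grey_dilation(image, footprint=sp)
    eroded = grey_erosion(dilated, footprint=sf)
    out = image - np.minimum(eroded, image)
    return np.maximum(out, 0.0)  # negative values are set to 0
```

On a flat background, dilation with the ring assigns target pixels the surrounding background level, so the final subtraction leaves only the target.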

3.4. Adaptive Thresholding Segmentation

After the above steps, there might be a few false alarm points which generally occupied only one or two pixels. To eliminate them, the image was binarized via adaptive thresholding segmentation. Threshold T was defined as [24,44]:
$$T = M + k \times S \quad (7)$$
where M and S represent the average value and standard deviation of the image after background suppression, respectively, and k is an empirical constant which has its value set to 40 in this study.
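Equation (7) translates directly into code; the toy background-suppressed image below is an assumption for illustration:

```python
import numpy as np

def adaptive_threshold(image, k=40):
    """Binarize with T = M + k * S, where M and S are the mean and
    standard deviation of the background-suppressed image (k = 40 here,
    as in this study)."""
    T = image.mean() + k * image.std()
    return (image > T).astype(np.uint8)

# A background-suppressed frame: near-zero residue plus one bright target.
frame = np.zeros((100, 100))
frame[5, 5] = 255.0
mask = adaptive_threshold(frame)
print(mask.sum())  # -> 1: only the target pixel survives
```

Because the threshold adapts to the statistics of each frame, isolated one- or two-pixel false alarm points fall below $T$ and are removed.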

4. Experimental Results and Analysis

4.1. Experimental Setup

4.1.1. Hardware and Software

The simulation experiment was carried out in a MATLAB R2020b environment. The experimental hardware was an HPS-P18C32GB workstation with an Intel Xeon Platinum 8124M 3.0 GHz CPU and 32 GB DDR RAM.

4.1.2. Datasets

To test and verify the performance of the proposed method, five sequences of images were selected from open-sourced infrared image datasets [45,46] and their properties are shown in Table 1.

4.1.3. Baseline Methods

Six methods were selected for comparison, including the traditional max-mean algorithm [17] as a baseline method, two morphological filtering methods (the classical top-hat transform [20] and the new top-hat method NWTH [23]), two HVS-based methods (the LCM algorithm [27] and the RLCM algorithm [29]), and one RPCA-based model (IPI [34]). The parameters of such methods are shown in Table 2.

4.1.4. Evaluation Metrics

To quantitatively measure the effectiveness of target highlighting and background compression, signal-to-clutter ratio (SCR) and background suppression factor (BSF) [12,13,34,35] are the two commonly used metrics:
$$SCR = \frac{|\mu_t - \mu_b|}{\sigma_b} \quad (8)$$
$$BSF = \frac{\sigma_{in}}{\sigma_{out}} \quad (9)$$
where $\mu_t$ represents the mean value of the target pixels; $\mu_b$ represents the mean of the background pixels around the target; $\sigma_b$ represents the standard deviation of that background; and $\sigma_{in}$ and $\sigma_{out}$ represent the standard deviations of the input image and the output image, respectively. BSF and SCR are calculated on the images before threshold segmentation. The larger the values, the better the algorithm's target detection and background suppression effects.
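The two metrics can be sketched as follows; the masks selecting the target pixels and the surrounding background region are assumed to be supplied by the evaluation protocol:

```python
import numpy as np

def scr(image, target_mask, background_mask):
    """Signal-to-clutter ratio |mu_t - mu_b| / sigma_b (Equation (8))."""
    mu_t = image[target_mask].mean()
    bg = image[background_mask]
    return abs(mu_t - bg.mean()) / bg.std()

def bsf(input_image, output_image):
    """Background suppression factor sigma_in / sigma_out (Equation (9))."""
    return input_image.std() / output_image.std()
```

Both functions operate on the images before threshold segmentation; larger values indicate stronger target enhancement and background suppression.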
The false alarm rate ($F_a$) [34,46,47] is used to describe an algorithm's susceptibility to false detections. The calculation uses the result after threshold segmentation. In this paper, we adopted the definition based on falsely detected pixels:
$$F_a = \frac{N_f}{N_w} \quad (10)$$
where N f represents the number of pixels that are falsely detected and N w represents the total number of pixels of the whole image.
To evaluate an algorithm’s effect on a sequence of infrared frames, the average values ( S C R ¯ , B S F ¯ , and F a ¯ ) are defined as below:
$$\overline{SCR} = \frac{1}{N} \sum_{i=1}^{N} SCR_i \quad (11)$$
$$\overline{BSF} = \frac{1}{N} \sum_{i=1}^{N} BSF_i \quad (12)$$
$$\overline{F_a} = \frac{1}{N} \sum_{i=1}^{N} F_{a,i} \quad (13)$$
where $N$ represents the total number of frames in the sequence, and $SCR_i$, $BSF_i$, and $F_{a,i}$ represent the values of SCR, BSF, and $F_a$ for the $i$-th frame.
The processing time of each frame was recorded to calculate the average processing time of each image sequence.

4.2. Experimental Results: Results at Each Stage, in Four Typical Backgrounds

To better illustrate the workflow of the proposed method, four typical backgrounds, i.e., (a) sky, (b) cloud, (c) land, and (d) sea, were selected from the SIRST dataset [48]. The simulation results of each of the four steps are shown in Figure 7. Row one shows the original infrared images, wherein the targets are circled with a blue rectangle. Row two shows the low-rank-matrix images after image decomposition (i.e., ADMM), wherein the background was preliminarily suppressed. Row three displays the images after the morphological filtering stage (i.e., the improved top-hat transformation), wherein the targets were successfully separated. At this stage, the background suppression was complete and most of the background noise had been eliminated. While scattered noise might still exist, it was further removed at the adaptive threshold segmentation step. The results are shown in row four, wherein the targets were identified without any false alarm points.
Figure 8 shows the images after the morphological filtering step in three-dimensional diagrams, in which the backgrounds were effectively suppressed. In all images, there was no false alarm point observed.

4.3. Experimental Results: Comparison to State-of-the-Art Algorithms

To evaluate the effectiveness and adaptability of the proposed method, the comparisons were carried out in three aspects: visual observation, quantitative measurement, and overall performance.

4.3.1. Visual Observation

The visual comparisons are shown in Figure 9 (the results before the adaptive threshold segmentation step were used). In terms of background suppression, the effect of max-mean was mediocre and most of the high-frequency background remained in all five sequences. For the classic top-hat and the LCM methods, both effects were insufficient as a large amount of continuous background clutter was left in every image. The RLCM method produced much better results as compared to the three. However, due to the nature of contrast enhancement, some noise points became more prominent, which were observed in sequences 1, 2, and 5. For NWTH, IPI, and our method, the background suppression effects were excellent.
In terms of detecting the correct targets, all six methods were able to highlight the target regions, despite many also falsely highlighting noise points (however, those noise points did not necessarily become false alarm points after adaptive threshold segmentation). For the max-mean, the traditional top-hat, and both LCM methods, their detecting capacities were restricted by their background suppression effects, i.e., the residual bright background would mostly result in false alarm points. The NWTH method had bright points left in all five sequences, which indicated the structuring element of the top-hat transformation could be refined for the selected image sequences with complex backgrounds. The IPI method produced the second-best outcomes with some potential false detections in sequences 4 and 5 which had fast-changing backgrounds with a tracking camera position. Our method had the best visual representations with two minor noise points in sequence 5.
Figure 9 shows the most representative frame from each image sequence, aiming to demonstrate the method’s overall performance on the given sequence. However, it is worth pointing out that some methods did perform inconsistently on certain frames, which resulted in a false alarm or nondetection. For example, the NWTH method did not detect any target from frames 11 to 13 in sequence 3 (Figure 10); the RLCM method was not able to detect any target from frames 11 to 15 in sequence 3 (Figure 10); the IPI method retained the background components in nearly half of the frames (i.e., 1–6, 10–12, and 16) in sequence 1 (see Figure 9, column 1, row 7 for example), which resulted in low SCR values. Since the image datasets did not provide any timestamp on each frame, we were not able to identify the real time elapsed in those frames. Therefore, we could not evaluate to what extent those inconsistent performances would affect the accuracy of monitoring or surveillance in the real world.

4.3.2. Quantitative Comparison

The results of the average SCR are shown in Table 3. Our method had three best scores (seq 1, 2, and 4) and two second-best scores (seq 3, 5), which demonstrates that our method was superior in highlighting the target. NWTH, RLCM, and IPI had similar overall rankings, which were considerably better than max-mean, top-hat, and LCM. More specifically, our method produced much better SCR values (240+) than all methods in sequences 1 and 2. In sequence 3, the LCM had the best SCR value. Although our score (490.00) came second, it was considered sufficient and showed a clear advantage over both the NWTH and IPI results. In sequence 4, our method scored the best (35.26), while most other methods had values around 10. In sequence 5, the IPI had a distinct SCR value over others, but our result (20.68) was still much larger than the results (around 10) of the remaining five methods.
The results of the average BSF are shown in Table 4. NWTH, IPI, and our method had much better values than max-mean, top-hat, LCM, and RLCM in all five sequences, while IPI and our method showed superior results to NWTH, especially in sequences 1 and 4. As compared to the IPI algorithm, our method scored one best (seq 3) and four second-best (seq 1, 2, 4, and 5). Such results were considered on par with IPI (four best and one second best) as the differences in sequence 2 and sequence 5 were merely 2%.
For the processing time, the results of three methods (NWTH, IPI, and our method) are shown in Table 5. NWTH was an algorithm using the morphological approach that had a very short processing time due to the simplicity of matrix operations. It yielded the best processing times in milliseconds, with an average time of 0.016 s across all five sequences. IPI was an algorithm adapting RPCA, which required a significant amount of processing time due to the complexity of the sliding steps. It had the worst processing time among the three, with an average of 29.28 s. Although our method combined both the RPCA and morphological approach, it did not require a small patch in image decomposition, i.e., it had a much smaller time complexity than IPI. Our method had an average processing time of 2.25 s, which was a huge improvement (92% less) from IPI and could be considered as near-real-time detection.
The false alarm rates over threshold levels are shown in Figure 11, which indicates that NWTH, IPI, and our method were superior to max-mean, top-hat, LCM, and RLCM in all sequences (except IPI in sequence 1). More specifically, compared to NWTH, our method achieved negligibly better false alarm rates in sequences 1–3 but clearly better results in sequences 4 and 5, in which the background became more complex and/or the targets were smaller. Compared to IPI, our method had better rates in sequences 1–3 and 5, while IPI was better in sequence 4. It is worth pointing out that the IPI method showed poor false alarm rates at low thresholds in sequences 1–3; such results were attributed to its inconsistent background suppression performance, as discussed in the previous visual observation section.

4.3.3. Overall Comparison

Based on the comparisons above, NWTH, IPI, and our method all exhibited clearly superior performance to max-mean, top-hat, LCM, and RLCM in both visual observations and quantitative (SCR, BSF, and $F_a$) comparisons. Despite showing clutter and noise in the visual observation, NWTH was able to eliminate most of them via threshold segmentation, thus achieving much better false alarm rates than the other four algorithms. However, NWTH was inferior to both IPI and our method in almost every aspect except processing time. Compared to IPI, our method showed better visual and SCR results and similar BSF results, which suggests that both methods possess similar target detection performance. However, the IPI method showed poor $F_a$ rates at low thresholds in sequences 1–3 due to its unstable background suppression in certain situations. In addition, the main drawback of IPI is its high processing time, whereas our method had a clear advantage, using approximately 8% of IPI's processing time on average. In a nutshell, our method showed the best overall performance and has wider applicability than IPI thanks to its relatively low processing time (2.25 s on average).

4.4. Additional Experimental Results: Effectiveness of Our New Top-Hat Structuring Element

In the morphological filtering step of our method, we proposed a threefold structuring element (see Figure 4) as an improvement to NWTH. To further evaluate its effectiveness, an additional experiment was run using the improved transformation T i N W (see Equation (6)) on all five sequences, in which both BSF and SCR were captured. Figure 12 and Figure 13 show the ratios of the differences between our improved transformation and the original NWTH for SCR and BSF, respectively.
For SCR, our new structuring element showed overall better results in sequences 1–3, except in a few frames (i.e., frames 17–22 of sequences 2 and 3), and much better results (mostly 100% better) in sequences 4 and 5 for all frames, which indicates that our solution is superior for dealing with complex backgrounds. For BSF, our structuring element did yield better performance in sequences 1–3, but the differences were small (around 10%). In sequences 4 and 5, the performances were mixed, and no clear winner could be identified. In other words, when used on its own, our specifically designed structuring element held no clear advantage over the original element from NWTH. However, it became more effective when combined with image decomposition in the proposed algorithm: as shown in Table 3 and Table 4, the combined approach achieved better scores in both SCR and BSF for all sequences.
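To make the morphological step concrete, here is a minimal pure-NumPy sketch of a ring-element top-hat in the spirit of NWTH, following the general form f − min((f ⊕ ΔB) ⊖ B, f). It uses a single ring plus an inner block rather than the paper's full threefold S_p/S_i/S_o/S_f element, and the radii are illustrative, not the tuned values from Table 2.

```python
import numpy as np

def ring_footprint(r_out, r_in):
    """Boolean ring: True between the inner and outer radii."""
    y, x = np.ogrid[-r_out:r_out + 1, -r_out:r_out + 1]
    d2 = x * x + y * y
    return (d2 <= r_out ** 2) & (d2 > r_in ** 2)

def grey_dilate(img, fp):
    """Naive grey-scale dilation: max over the footprint neighbourhood."""
    r = fp.shape[0] // 2
    pad = np.pad(img, r, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + fp.shape[0], j:j + fp.shape[1]][fp].max()
    return out

def grey_erode(img, fp):
    """Naive grey-scale erosion: min over the footprint neighbourhood."""
    r = fp.shape[0] // 2
    pad = np.pad(img, r, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + fp.shape[0], j:j + fp.shape[1]][fp].min()
    return out

def ring_tophat(img, r_out, r_in):
    """Ring-element top-hat: f - min((f dilated by ring) eroded by inner block, f)."""
    ring = ring_footprint(r_out, r_in)
    inner = np.ones((2 * r_in + 1, 2 * r_in + 1), dtype=bool)
    background = grey_erode(grey_dilate(img, ring), inner)
    return np.clip(img - np.minimum(background, img), 0.0, None)

# A lone bright pixel on a flat background survives the transform intact.
img = np.zeros((9, 9))
img[4, 4] = 1.0
out = ring_tophat(img, r_out=3, r_in=1)
```

The ring-shaped dilation estimates the local background from the target's surroundings while excluding the target region itself, which is what lets a small bright blob pass through the subtraction unattenuated.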

5. Conclusions

In this paper, we presented a combined approach to detecting small targets in infrared images, consisting of two key steps. The first converts the small-target detection problem into an optimization problem of low-rank sparse matrix recovery, where ADMM (without sliding patches) is used for preliminary background suppression. The second processes the resulting interim images via an improved NWTH transformation with a specifically designed threefold structuring element, further separating the targets from noise and clutter. The method was described as a workflow of four stages: (1) image input; (2) image decomposition; (3) morphological filtering; and (4) thresholding segmentation. The outcomes at each stage, demonstrated on four images with typical backgrounds (i.e., sky, cloud, land, and sea), showed the effectiveness of the combined approach. In the evaluation against other state-of-the-art methods, our method produced superior results in both visual and quantitative comparisons: it outperformed the baseline max-mean method, two top-hat-based methods (i.e., classical top-hat and NWTH), and two HVM-based methods (i.e., LCM and RLCM) in all SCR, BSF, and false alarm rate results. The effectiveness of the improved threefold structuring element over the one from NWTH was further demonstrated in head-to-head comparisons of SCR and BSF. Finally, when compared to IPI, our method required significantly less processing time and delivered more consistent detection performance across the five different types of image sequences.
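The first key step above (image decomposition) can be illustrated with a compact solver for the underlying robust PCA problem, min ‖A‖* + λ‖E‖₁ s.t. D = A + E, where A is the low-rank background and E holds the sparse target candidates. This is a generic inexact-ALM/ADMM sketch, not the authors' exact Algorithm 1; the initialization of μ, its growth rate ρ, and λ = 1/√max(m, n) are common defaults from the RPCA literature.

```python
import numpy as np

def rpca_admm(D, lam=None, tol=1e-7, max_iter=200):
    """Decompose D into a low-rank background A and a sparse component E
    (candidate small targets) via an inexact-ALM / ADMM iteration."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    Y = np.zeros_like(D, dtype=float)    # Lagrange multiplier
    E = np.zeros_like(D, dtype=float)
    mu = 1.25 / np.linalg.norm(D, 2)     # spectral-norm-based init (common default)
    rho, mu_max = 1.5, mu * 1e7
    d_norm = np.linalg.norm(D, "fro")
    for _ in range(max_iter):
        # A-step: singular-value thresholding of (D - E + Y/mu)
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # E-step: elementwise soft thresholding
        T = D - A + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # Dual update and penalty growth
        R = D - A - E
        Y += mu * R
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(R, "fro") <= tol * d_norm:
            break
    return A, E

# Toy example: a smooth rank-1 "background" plus one bright "target" pixel.
u = np.linspace(1.0, 2.0, 20)
D = np.outer(u, u)
D[5, 5] += 10.0
A, E = rpca_admm(D)
```

Because the whole image is treated as one matrix, there is no patch sliding; this is the source of the processing-time advantage over IPI reported in Table 5.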

Author Contributions

Conceptualization, T.X.; methodology, T.X. and Q.S.; software, T.X.; validation, T.X.; formal analysis, T.X.; investigation, T.X.; resources, L.Y.; data curation, T.X.; writing—original draft preparation, T.X.; writing—review and editing, Q.S. and L.Y.; visualization, T.X.; supervision, L.Y.; project administration, L.Y.; funding acquisition, L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (51865038).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yavari, M.; Moallem, P.; Kazemi, M.; Moradi, S. Small Infrared Target Detection Using Minimum Variation Direction Interpolation. Digit. Signal Process. A Rev. J. 2021, 117, 103174. [Google Scholar] [CrossRef]
  2. Xu, L.; Wei, Y.; Zhang, H.; Shang, S. Robust and Fast Infrared Small Target Detection Based on Pareto Frontier Optimization. Infrared Phys. Technol. 2022, 123, 104192. [Google Scholar] [CrossRef]
  3. Bai, X.; Zhou, F.; Jin, T. Enhancement of Dim Small Target through Modified Top-Hat Transformation under the Condition of Heavy Clutter. Signal Process. 2010, 90, 1643–1654. [Google Scholar] [CrossRef]
  4. Li, Y.; Zhang, Y. Robust Infrared Small Target Detection Using Local Steering Kernel Reconstruction. Pattern Recognit. 2018, 77, 113–125. [Google Scholar] [CrossRef]
  5. Li, Z.Z.; Chen, J.; Hou, Q.; Fu, H.X.; Dai, Z.; Jin, G.; Li, R.Z.; Liu, C.J. Sparse Representation for Infrared Dim Target Detection via a Discriminative Over-Complete Dictionary Learned Online. Sensors 2014, 14, 9451–9470. [Google Scholar] [CrossRef] [PubMed]
  6. Wan, M.; Gu, G.; Cao, E.; Hu, X.; Qian, W.; Ren, K. In-Frame and Inter-Frame Information Based Infrared Moving Small Target Detection under Complex Cloud Backgrounds. Infrared Phys. Technol. 2016, 76, 455–467. [Google Scholar] [CrossRef]
  7. Tonissen, S.M.; Evans, R.J. Performance of Dynamic Programming Techniques for Track-Before-Detect. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1440–1451. [Google Scholar] [CrossRef]
  8. Reed, I.S.; Gagliardi, R.M.; Shao, H.M. Application of Three-Dimensional Filtering to Moving Target Detection. IEEE Trans. Aerosp. Electron. Syst. 1983, 6, 898–905. [Google Scholar] [CrossRef]
  9. Blostein, S.D. A Sequential Detection Approach to Target Tracking. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 197–212. [Google Scholar] [CrossRef]
  10. Tartakovsky, A.G.; Brown, J. Adaptive Spatial-Temporal Filtering Methods for Clutter Removal and Target Tracking. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 1522–1537. [Google Scholar] [CrossRef]
  11. Bae, T.W. Small Target Detection Using Bilateral Filter and Temporal Cross Product in Infrared Images. Infrared Phys. Technol. 2011, 54, 403–411. [Google Scholar] [CrossRef]
  12. Gao, C.; Wang, L.; Xiao, Y.; Zhao, Q.; Meng, D. Infrared Small-Dim Target Detection Based on Markov Random Field Guided Noise Modeling. Pattern Recognit. 2018, 76, 463–475. [Google Scholar] [CrossRef]
  13. Deng, L.; Zhu, H.; Tao, C.; Wei, Y. Infrared Moving Point Target Detection Based on Spatial-Temporal Local Contrast Filter. Infrared Phys. Technol. 2016, 76, 168–173. [Google Scholar] [CrossRef]
  14. Mu, J.; Rao, J.; Chen, R.; Li, F. Low-Altitude Infrared Slow-Moving Small Target Detection via Spatial-Temporal Features Measure. Sensors 2022, 22, 5136. [Google Scholar] [CrossRef]
  15. Qian, K.; Zhou, H.; Qin, H.; Rong, S.; Zhao, D.; Du, J. Guided Filter and Convolutional Network Based Tracking for Infrared Dim Moving Target. Infrared Phys. Technol. 2017, 85, 431–442. [Google Scholar] [CrossRef]
  16. Jiang, Y.; Dong, L.; Chen, Y.; Xu, W. An Infrared Small Target Detection Algorithm Based on Peak Aggregation and Gaussian Discrimination. IEEE Access 2020, 8, 106214–106225. [Google Scholar] [CrossRef]
  17. Deshpande, S.D.; Er, M.H.; Venkateswarlu, R.; Chan, P. Max-Mean and Max-Median Filters for Detection of Small-Targets. Signal Data Process. Small Targets 1999, 3809, 74–83. [Google Scholar] [CrossRef]
  18. Gregoris, D.J.; Yu, S.K.; Tritchew, S.; Sevigny, L. Wavelet Transform-Based Filtering for the Enhancement of Dim Targets in FLIR Images. Wavelet Appl. 1994, 2242, 573. [Google Scholar] [CrossRef]
  19. Yao, R.; Guo, C.; Deng, W.; Zhao, H. A Novel Mathematical Morphology Spectrum Entropy Based on Scale-Adaptive Techniques. ISA Trans. 2021, 126, 691–702. [Google Scholar] [CrossRef]
  20. Zeng, M.; Li, J.; Peng, Z. The Design of Top-Hat Morphological Filter and Application to Infrared Target Detection. Infrared Phys. Technol. 2006, 48, 67–76. [Google Scholar] [CrossRef]
  21. Deng, L.; Zhu, H.; Zhou, Q.; Li, Y. Adaptive Top-Hat Filter Based on Quantum Genetic Algorithm for Infrared Small Target Detection. Multimed. Tools Appl. 2018, 77, 10539–10551. [Google Scholar] [CrossRef]
  22. Ai, J. The Application of SVD-Based Speckle Reduction and Tophat Transform in Preprocessing of Ship Detection. IET Conf. Publ. 2015, 2015, 9–12. [Google Scholar] [CrossRef]
  23. Bai, X.; Zhou, F. Analysis of New Top-Hat Transformation and the Application for Infrared Dim Small Target Detection. Pattern Recognit. 2010, 43, 2145–2156. [Google Scholar] [CrossRef]
  24. Deng, L.; Zhang, J.; Xu, G.; Zhu, H. Infrared Small Target Detection via Adaptive M-Estimator Ring Top-Hat Transformation. Pattern Recognit. 2021, 112, 107729. [Google Scholar] [CrossRef]
  25. Wang, C.; Wang, L. Multidirectional Ring Top-Hat Transformation for Infrared Small Target Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8077–8088. [Google Scholar] [CrossRef]
  26. Zhu, H.; Zhang, J.; Xu, G.; Deng, L. Balanced Ring Top-Hat Transformation for Infrared Small-Target Detection with Guided Filter Kernel. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 3892–3903. [Google Scholar] [CrossRef]
  27. Chen, C.L.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A Local Contrast Method for Small Infrared Target Detection. IEEE Trans. Geosci. Remote Sens. 2014, 52, 574–581. [Google Scholar] [CrossRef]
  28. Han, J.; Ma, Y.; Zhou, B.; Fan, F.; Liang, K.; Fang, Y. A Robust Infrared Small Target Detection Algorithm Based on Human Visual System. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2168–2172. [Google Scholar] [CrossRef]
  29. Han, J.; Liang, K.; Zhou, B.; Zhu, X.; Zhao, J.; Zhao, L. Infrared Small Target Detection Utilizing the Multiscale Relative Local Contrast Measure. IEEE Geosci. Remote Sens. Lett. 2018, 15, 612–616. [Google Scholar] [CrossRef]
  30. Qin, Y.; Li, B. Effective Infrared Small Target Detection Utilizing a Novel Local Contrast Method. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1890–1894. [Google Scholar] [CrossRef]
  31. Ren, L.; Pan, Z.; Ni, Y. Double Layer Local Contrast Measure and Multi-Directional Gradient Comparison for Small Infrared Target Detection. Optik 2022, 258, 168891. [Google Scholar] [CrossRef]
  32. Li, Q.; Nie, J.; Qu, S. A Small Target Detection Algorithm in Infrared Image by Combining Multi-Response Fusion and Local Contrast Enhancement. Optik 2021, 241, 166919. [Google Scholar] [CrossRef]
  33. Denney, B.S.; de Figueiredo, R.J. Optimal Point Target Detection Using Adaptive Auto Regressive Background Prediction. In Signal and Data Processing of Small Targets 2000; SPIE: Bellingham, WA, USA, 2000; Volume 4048, pp. 46–57. [Google Scholar]
  34. Gao, C.; Meng, D.; Yang, Y.; Wang, Y.; Zhou, X.; Hauptmann, A.G. Infrared Patch-Image Model for Small Target Detection in a Single Image. IEEE Trans. Image Process. 2013, 22, 4996–5009. [Google Scholar] [CrossRef]
  35. Dai, Y.; Wu, Y.; Song, Y. Infrared Small Target and Background Separation via Column-Wise Weighted Robust Principal Component Analysis. Infrared Phys. Technol. 2016, 77, 421–430. [Google Scholar] [CrossRef]
  36. Dai, Y.; Wu, Y.; Song, Y.; Guo, J. Non-Negative Infrared Patch-Image Model: Robust Target-Background Separation via Partial Sum Minimization of Singular Values. Infrared Phys. Technol. 2017, 81, 182–194. [Google Scholar] [CrossRef]
  37. Fan, Z.; Bi, D.; Xiong, L.; Ma, S.; He, L.; Ding, W. Dim Infrared Image Enhancement Based on Convolutional Neural Network. Neurocomputing 2018, 272, 396–404. [Google Scholar] [CrossRef]
  38. Liu, Q.; Lu, X.; He, Z.; Zhang, C.; Chen, W.S. Deep Convolutional Neural Networks for Thermal Infrared Object Tracking. Knowl.-Based Syst. 2017, 134, 189–198. [Google Scholar] [CrossRef]
  39. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  40. Zhao, B.; Wang, C.; Fu, Q.; Han, Z. A Novel Pattern for Infrared Small Target Detection with Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4481–4492. [Google Scholar] [CrossRef]
  41. Che, J.; Wang, L.; Bai, X.; Liu, C.; Zhou, F. Spatial—Temporal Hybrid Feature Extraction Network for Few—Shot Automatic Modulation Classification. IEEE Trans. Veh. Technol. 2022, 1–6. [Google Scholar] [CrossRef]
  42. Liu, L.; Ma, B.; Zhang, Y.; Yi, X.; Li, H. AFD-Net: Adaptive Fully-Dual Network for Few-Shot Object Detection. In Proceedings of the 29th ACM International Conference on Multimedia, Chengdu, China, 20–24 October 2021; pp. 2549–2557. [Google Scholar] [CrossRef]
  43. Wright, J.; Peng, Y.; Ma, Y.; Ganesh, A.; Rao, S. Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices by Convex Optimization. In Proceedings of the Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 7–10 December 2009; pp. 2080–2088. [Google Scholar]
  44. Shi, Y.; Wei, Y.; Yao, H.; Pan, D.; Xiao, G. High-Boost-Based Multiscale Local Contrast Measure for Infrared Small Target Detection. IEEE Geosci. Remote Sens. Lett. 2018, 15, 33–37. [Google Scholar] [CrossRef]
  45. Moradi, S.; Moallem, P.; Sabahi, M.F. Fast and Robust Small Infrared Target Detection Using Absolute Directional Mean Difference Algorithm. Signal Process. 2020, 177, 107727. [Google Scholar] [CrossRef]
  46. Moradi, S.; Moallem, P.; Sabahi, M.F. A False-Alarm Aware Methodology to Develop Robust and Efficient Multi-Scale Infrared Small Target Detection Algorithm. Infrared Phys. Technol. 2018, 89, 387–397. [Google Scholar] [CrossRef]
  47. Aghaziyarati, S.; Moradi, S.; Talebi, H. Small Infrared Target Detection Using Absolute Average Difference Weighted by Cumulative Directional Derivatives. Infrared Phys. Technol. 2019, 101, 78–87. [Google Scholar] [CrossRef]
  48. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Asymmetric Contextual Modulation for Infrared Small Target Detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Virtual, 5–9 January 2021; pp. 949–958. [Google Scholar] [CrossRef]
Figure 1. Infrared image and its corresponding singular value curve.
Figure 2. Relationship of the structuring elements in NWTH [23]. ΔB consists of B_i and B_o; ΔB and B_b represent the structuring elements for the dilation and erosion operations, respectively.
Figure 3. The workflow of the proposed method. The red squares represent the small target.
Figure 4. Relationship of the threefold structuring elements in our improved NWTH transformation. S_p consists of S_i and S_o; S_p and S_f represent the structuring elements for the dilation and erosion operations, respectively.
Figure 5. The matrix representations of the structuring elements.
Figure 6. The process of morphological filtering: (a) Small target area in the original image; (b) dilation; (c) erosion; (d) the results of our improved NWTH. A 9 × 9 matrix of the target region is shown at each step.
Figure 7. The simulation results at each stage for four typical backgrounds: (a1) sky, (b1) cloud, (c1) land, and (d1) sea; (a2–d2) show the low-rank-matrix images after image decomposition; (a3–d3) display the images after the morphological filtering stage; (a4–d4) show the final results.
Figure 8. Three-dimensional views of the improved top-hat operation results: (a) sky, (b) cloud, (c) land, and (d) sea.
Figure 9. Background suppression results of the six methods (Table 2). # indicates the frame number. The blue rectangle represents the target, while the yellow box shows the false alarm point or the unsuppressed background.
Figure 10. Frames of sequence 3 in which no target was detected by RLCM and NWTH.
Figure 11. False alarm rates over threshold levels. F a represents the average over all 30 frames of each sequence.
Figure 12. The ratio of the SCR difference relative to NWTH, capped at 200% for better presentation.
Figure 13. The ratio of the BSF difference relative to NWTH, capped at 20% for better presentation.
Table 1. The properties of the image sequences. “*” indicates that the sequence was converted to grey-scale images in the “Image Input” step.

| Sequence | Target Region | Frames | Frame Size | Avg SCR | Image Description |
|---|---|---|---|---|---|
| Seq 1 | 5 × 5 | 30 | 256 × 200 | 0.62 | Sky background, mostly covered by scattered clouds; fixed camera; target moving from left to right. |
| Seq 2 | 5 × 5 | 30 | 238 × 158 | 0.65 | Sky and sea background with a clear horizontal boundary; fixed camera; target moving from top to bottom. |
| Seq 3 | 5 × 5 | 30 | 302 × 202 | 0.51 | Sky background, partially covered by thick cloud; fixed camera; target moving from right to left. |
| Seq 4 * | 4 × 4 | 30 | 256 × 256 | 3.81 | Rapidly changing land background; tracking camera; small target (16 pixels). |
| Seq 5 * | 3 × 3 | 30 | 256 × 256 | 1.63 | Land background; tracking camera; ultra-small target (9 pixels). |
Table 2. The parameters of the comparison methods.

| Method | Parameter Settings |
|---|---|
| Max-mean | Sliding window size = 21 × 21 |
| Top-hat | Structuring element size = 5 × 5 |
| NWTH | R_o = 9, R_i = 4 for sequences 1–4; R_o = 8, R_i = 3 for sequence 5 |
| LCM | Cell size v = 3; h = 3, 5, 7, 9 |
| RLCM | Scale = 3; k_1 = 2, 5, 9; k_2 = 4, 9, 16 |
| IPI | Patch size = 80 × 80, sliding step = 5, λ = 1/max(m, n) |
| Ours | S_o = 7, S_i = 3 for sequences 1–4; S_o = 5, S_i = 2 for sequence 5; other parameters are shown in Algorithm 1. |
Table 3. Average SCR (notation: best result, second-best result).

| Method | Sequence 1 | Sequence 2 | Sequence 3 | Sequence 4 | Sequence 5 |
|---|---|---|---|---|---|
| Max-mean | 10.48 | 26.91 | 4.10 | 9.67 | 14.87 |
| Top-hat | 6.77 | 6.89 | 24.96 | 7.28 | 10.27 |
| NWTH | 79.46 | 137.34 | 66.40 | 22.52 | 13.63 |
| LCM | 3.17 | 1.23 | 3.36 | 5.50 | 4.92 |
| RLCM | 65.23 | 136.16 | 886.35 | 14.49 | 12.72 |
| IPI | 21.24 | 146.91 | 142.91 | 9.95 | 72.37 |
| Ours | 265.84 | 240.54 | 490.00 | 35.26 | 20.68 |
Table 4. Average BSF (notation: best result, second-best result).

| Method | Sequence 1 | Sequence 2 | Sequence 3 | Sequence 4 | Sequence 5 |
|---|---|---|---|---|---|
| Max-mean | 3.96 | 12.75 | 6.69 | 4.26 | 1.48 |
| Top-hat | 1.05 | 3.43 | 8.39 | 2.05 | 0.81 |
| NWTH | 5.65 | 13.20 | 20.07 | 11.92 | 3.29 |
| LCM | 0.92 | 0.93 | 0.95 | 1.54 | 0.54 |
| RLCM | 2.02 | 3.98 | 6.87 | 6.97 | 1.27 |
| IPI | 10.96 | 14.41 | 22.92 | 17.76 | 3.99 |
| Ours | 7.17 | 14.15 | 23.69 | 14.09 | 3.98 |
Table 5. Average processing time of a single frame in seconds (notation: best result, second-best result).

| Method | Sequence 1 | Sequence 2 | Sequence 3 | Sequence 4 | Sequence 5 | Average |
|---|---|---|---|---|---|---|
| NWTH | 0.015 | 0.014 | 0.016 | 0.017 | 0.016 | 0.016 |
| IPI | 17.39 | 13.03 | 60.62 | 31.93 | 23.45 | 29.28 |
| Ours | 1.32 | 0.77 | 2.60 | 3.70 | 2.86 | 2.25 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Xi, T.; Yuan, L.; Sun, Q. A Combined Approach to Infrared Small-Target Detection with the Alternating Direction Method of Multipliers and an Improved Top-Hat Transformation. Sensors 2022, 22, 7327. https://doi.org/10.3390/s22197327

