Article

Adaptive Guided Filtering and Spectral-Entropy-Based Non-Uniformity Correction for High-Resolution Infrared Line-Scan Images

1 University of Chinese Academy of Sciences, Beijing 100049, China
2 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
3 School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(14), 4287; https://doi.org/10.3390/s25144287
Submission received: 11 May 2025 / Revised: 8 July 2025 / Accepted: 9 July 2025 / Published: 9 July 2025
(This article belongs to the Section Optical Sensors)

Abstract

Stripe noise along the scanning direction significantly degrades the quality of high-resolution infrared line-scan images and impairs downstream tasks such as target detection and radiometric analysis. This paper presents a lightweight, single-frame, reference-free non-uniformity correction (NUC) method tailored for such images. The proposed approach enhances the directionality of stripe noise by projecting the 2D image into a 1D row-mean signal, followed by adaptive guided filtering driven by local median absolute deviation (MAD) to ensure spatial adaptivity and structure preservation. A spectral-entropy-constrained frequency-domain masking strategy is further introduced to suppress periodic and non-periodic interference. Extensive experiments on simulated and real datasets demonstrate that the method consistently outperforms six state-of-the-art algorithms across multiple metrics while maintaining the fastest runtime. The proposed method is highly suitable for real-time deployment in airborne, satellite-based, and embedded infrared imaging systems. It provides a robust and interpretable framework for future infrared enhancement tasks.

1. Introduction

1.1. Background

With advancements in infrared detection technologies, scanning imaging systems, and cooling materials, infrared imaging has wide applications in military reconnaissance [1], aerospace remote sensing [2,3], night-vision surveillance [4,5], and industrial inspection [6,7]. However, line-scan infrared systems inherently suffer from non-uniformity noise due to cumulative effects such as detector integration drift, temperature variation, electronic readout instability, and scene-dependent radiation fluctuations [8,9,10,11]. These factors often produce structured horizontal stripe noise along the scanning direction, significantly degrading image quality and impacting downstream tasks such as target detection [12,13,14], edge extraction [15], and temperature inversion [16,17].

1.2. Related Work

Non-uniformity correction (NUC) methods for infrared images are generally classified into reference-based [18] and scene-based approaches [19,20,21]. Reference-based methods rely on external standard targets, such as blackbody sources and temperature-controlled references [22], and typically calibrate the system using two-point [23,24] or multi-point response models [25]. While these methods achieve high accuracy in laboratory settings and are suitable for system initialization or periodic recalibration, their strong dependence on external references limits their applicability in real-time and adaptive processing scenarios [26,27,28].
In contrast, scene-based methods are more flexible and practical. They utilize the image’s structural, statistical, or temporal characteristics to estimate non-uniformity noise. These techniques have gained increasing attention recently due to their real-time adaptability [29]. Based on modeling strategies, scene-based NUC methods can be divided into four main types:
(1)
Filtering-based methods: These extract low-frequency bias components using spatial or frequency domain techniques such as mean [30], Gaussian [31], or guided filtering [32,33,34]. The estimated noise is subtracted from the original image. Although computationally efficient, these methods often lead to over-smoothing or texture loss in regions with varying stripe intensities or complex backgrounds [35,36,37,38].
(2)
Statistical modeling methods: These use statistical priors such as brightness distributions, row/column means, or local statistics to estimate noise [39]. While training-free and straightforward, their adaptability to non-stationary noise is limited [40,41].
(3)
Model optimization methods: These construct priors and regularization terms using approaches like total variation [42,43], wavelet transforms [44], curvelet transforms [45], or low-rank decomposition [46]. Although capable of preserving details and separating structured noise, they often suffer from high computational costs and sensitivity to parameter tuning [47,48,49].
(4)
Neural-network-based methods: End-to-end learning models [50], including convolutional neural networks [51], residual networks [52], and autoencoders [53], have shown strong performance on labeled datasets. However, their reliance on extensive training data, poor generalization to unseen scenes, and limited interpretability constrain their practical deployment [54,55,56].
Recently, lightweight deep learning architectures—such as MobileNet-based UNet variants [57] or attention-enhanced residual networks [58]—have been proposed to mitigate the computational burden of full-scale CNNs. However, these models still require substantial offline training, lack robustness to unseen noise distributions, and remain challenging to interpret physically [59]. Moreover, their deployment on embedded or hardware-limited systems remains difficult due to memory and latency constraints. Therefore, a critical need remains for training-free, interpretable, and computationally efficient NUC algorithms suitable for real-time applications.

1.3. Our Contributions

To overcome existing limitations, an adaptive, single-frame non-uniformity correction (NUC) algorithm is proposed for high-resolution infrared line-scan images. Compared to our previous work using one-dimensional guided filtering and linear modeling [10], the proposed method introduces three main improvements: a row-mean projection strategy to highlight stripe directionality, an MAD-based adaptive smoothing scheme for better structural preservation, and a spectral-entropy-based frequency-masking mechanism for effective suppression of both periodic and aperiodic noise.
The main contributions of this work are summarized as follows:
(1)
Row-mean-based 1D modeling: Projects 2D images into 1D sequences through row averaging, which improves stripe directionality, simplifies modeling, and boosts sensitivity to directional noise.
(2)
MAD-driven adaptive guided filtering: A fusion framework combines global background trends with local structural features. Filter scales are adaptively chosen based on local median absolute deviation (MAD), allowing spatially adaptive smoothing while maintaining structural accuracy.
(3)
Spectral-entropy-based frequency masking: A frequency-domain suppression method is introduced that uses spectral entropy to build adaptive thresholds, enabling the isolation and suppression of both periodic and aperiodic interference without requiring iterative optimization or high-order reconstruction.
(4)
Lightweight and efficient implementation: The complete algorithm is streamlined for real-time applications. It requires only a single pass of guided filtering and two FFT operations, making it suitable for embedded and resource-constrained platforms.
Extensive experiments on synthetic and real infrared datasets demonstrate that the proposed method consistently outperforms state-of-the-art approaches in correction accuracy, robustness, and processing speed. These results indicate strong potential for real-time deployment in airborne, satellite, and industrial infrared imaging systems.

2. Materials and Methods

Figure 1 presents the overall workflow of the proposed algorithm, which consists of four main stages:
(1)
Row-mean signal modeling: The input 2D infrared image is averaged along each row to generate a 1D signal representing the stripe pattern along the horizontal direction [60].
(2)
Multi-scale guided filtering and adaptive fusion: Guided filtering is performed at multiple preset spatial scales. A local median absolute deviation is calculated to determine adaptive weights, which fuse the filtered results into a 1D background estimation signal.
(3)
Frequency-domain processing and stripe suppression: A one-dimensional Fourier transform is applied to the residual signal. Spectral entropy and global statistics are used to calculate an adaptive threshold, and a frequency-domain mask is constructed to suppress dominant frequency components.
(4)
Stripe compensation and image restoration: The 1D filtered residual is replicated along the column direction to form a 2D correction map, which is subtracted from the original image to produce the corrected output.
Figure 2 presents the non-uniformity correction results of our method on real high-resolution infrared line-scan images.

2.1. One-Dimensional Modeling and Orientation Feature Extraction

Let the input infrared image be denoted as I(i, j), where i ∈ [1, H] and j ∈ [1, W] represent the row and column indices, respectively. The row-mean signal r(i) is calculated by averaging all pixel values along each row:

$$r(i) = \frac{1}{W} \sum_{j=1}^{W} I(i,j) \tag{1}$$

The resulting one-dimensional signal r(i) ∈ ℝ^{H×1} represents the average radiation intensity across each scan line. Figure 3 shows the original input image and the corresponding row-mean signal plot, which highlights the stripe structure along the horizontal direction. This 1D signal is subsequently used as the input for the guided filtering process in the next stage.
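To make the projection concrete, the following sketch implements Equation (1) in Python/NumPy (the experiments themselves were run in MATLAB; the function name row_mean_signal is ours, not from the original implementation):

```python
import numpy as np

def row_mean_signal(img: np.ndarray) -> np.ndarray:
    """Row-mean projection of Eq. (1): collapse an (H, W) image to r(i).

    Averaging along each scan line concentrates the row-correlated
    stripe bias into a single H-sample sequence.
    """
    return img.mean(axis=1)
```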

2.2. Multi-Scale Guided Filtering with MAD-Based Adaptation

2.2.1. Principle of Guided Filtering

Guided filtering is an edge-preserving image smoothing technique based on a local linear model, introduced by He et al. [61]. Due to its efficiency and simplicity, it has been widely applied in image denoising, detail enhancement, and structure-preserving filtering. Unlike traditional mean or Gaussian filters, the guided filter incorporates structural information from a guidance image, producing smoothed results that better preserve edge boundaries. In the context of this work, both the input and guidance signals are set to the row-mean signal r(i).
Within a local window ω_k, the guided filter assumes a linear relationship between the guidance signal I_i and the filtering output q_i:

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k \tag{2}$$

where a_k and b_k are linear coefficients assumed to be constant within the window. These coefficients are estimated by minimizing the following cost function, which enforces fidelity to the input while preventing overfitting through regularization:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k I_i + b_k - p_i)^2 + \varepsilon a_k^2 \right) \tag{3}$$

where ε is a regularization parameter that penalizes large values of a_k to prevent overfitting. Solving the above minimization problem yields

$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon} \tag{4}$$

$$b_k = \bar{p}_k - a_k \mu_k \tag{5}$$

where μ_k and σ_k² denote the mean and variance of the guidance signal I_i within window ω_k, and p̄_k is the mean of the input signal p_i in the same window. The final output q_i is obtained by averaging the results of all overlapping windows that include pixel i:

$$q_i = \frac{1}{|\omega|} \sum_{k \mid i \in \omega_k} (a_k I_i + b_k) \tag{6}$$

Assuming uniform window overlap and symmetric filtering, this simplifies to

$$q_i = \bar{a}_i I_i + \bar{b}_i \tag{7}$$

where ā_i and b̄_i are the averaged coefficients over all windows containing pixel i. This formulation ensures that the output remains edge-aware and less noise-sensitive, which is particularly important for preserving directional structures in the subsequent fusion process.
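In one dimension, Equations (2)–(7) reduce to a handful of running means. The sketch below is a minimal self-guided 1D filter in Python/NumPy; the helper names and edge-padding choice are our own assumptions, not details specified in the paper:

```python
import numpy as np

def box_mean_1d(x: np.ndarray, radius: int) -> np.ndarray:
    """Sliding mean over windows of length 2*radius + 1 (edge-padded)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(np.pad(x, radius, mode="edge"), kernel, mode="valid")

def guided_filter_1d(p: np.ndarray, I: np.ndarray,
                     radius: int, eps: float) -> np.ndarray:
    """1D guided filter, Eqs. (2)-(7); here p = I = r(i) (self-guided)."""
    mu = box_mean_1d(I, radius)                 # window means mu_k
    p_bar = box_mean_1d(p, radius)              # input means p_bar_k
    var = box_mean_1d(I * I, radius) - mu**2    # window variances sigma_k^2
    cov = box_mean_1d(I * p, radius) - mu * p_bar
    a = cov / (var + eps)                       # Eq. (4)
    b = p_bar - a * mu                          # Eq. (5)
    # average coefficients over all windows covering each sample, Eq. (7)
    return box_mean_1d(a, radius) * I + box_mean_1d(b, radius)
```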

2.2.2. MAD-Driven Dynamic Fusion Strategy

To improve robustness to local intensity variations and stripe strength fluctuations, the global statistical characteristics of the row-mean signal r(i) are utilized to guide the smoothing scale. A multi-scale guided filtering strategy is employed, where the smoothing radius is dynamically adjusted based on the local MAD.
First, a large-window guided filter is applied to r(i) using a fixed radius R_large and smoothing coefficient ε. The result is denoted as

$$r_{\mathrm{large}}(i) = \mathrm{GuidedFilter}\big(r(i),\, r(i),\, R_{\mathrm{large}},\, \varepsilon\big) \tag{8}$$

To enable adaptive smoothing across varying regions, additional filtered results r_R(i) are precomputed using a set of neighborhood radii R ∈ [R_min, R_large]. For each index i, a local window centered at i is selected, and the MAD is computed as follows:

$$\mathrm{MAD}_{\mathrm{local}}(i) = \mathrm{median}\big( \lvert r(k) - \mathrm{median}(r(k)) \rvert \big), \quad k \in [i - r,\ i + r] \tag{9}$$

Based on the ratio of the local MAD to the global MAD, an inverse-proportionality strategy is used to adjust the local filtering radius R(i), as shown in Equation (10):

$$R(i) = \max\left( R_{\mathrm{min}},\ \min\left( R_{\mathrm{large}},\ \frac{\alpha R_{\mathrm{large}}}{\mathrm{MAD}_{\mathrm{local}}(i)/\mathrm{MAD}_{\mathrm{global}} + \delta} \right) \right) \tag{10}$$

where α is the scaling factor, and δ is a small constant to avoid division by zero.
As illustrated in Figure 4, the filtering radius determined by this strategy is significantly negatively correlated with the local MAD. The blue curve shows the variation of local MAD across image rows, while the red curve represents the corresponding dynamic filtering radius.
The adaptive filter result r_{R(i)}(i) is then combined with the large-window result r_large(i) through soft weighting:

$$\omega(i) = \frac{1 - \exp\big(-\alpha\, \mathrm{MAD}_{\mathrm{local}}(i)\big)}{1 - \exp\big(-\alpha\, \mathrm{MAD}_{\mathrm{global}}\big) + \gamma} \tag{11}$$

Finally, the background estimate for adaptive fusion r̂(i) is calculated as follows:

$$\hat{r}(i) = \omega(i)\, r_{R(i)}(i) + \big(1 - \omega(i)\big)\, r_{\mathrm{large}}(i) \tag{12}$$

The result r̂(i) preserves the overall trend of the row-mean signal while adapting to local structures and is used for residual computation in the subsequent frequency-domain stage.
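The radius selection and fusion of Equations (8)–(12) can be sketched as follows, reusing guided_filter_1d from the previous sketch. The parameter defaults are illustrative rather than the tuned values used in our experiments, and Equation (10) is implemented as reconstructed above:

```python
import numpy as np

def mad(x: np.ndarray) -> float:
    """Median absolute deviation, as in Eq. (9)."""
    return float(np.median(np.abs(x - np.median(x))))

def adaptive_fusion(r: np.ndarray, R_large: int = 30, R_min: int = 5,
                    eps: float = 0.16, alpha: float = 1.0, win: int = 15,
                    delta: float = 1e-6, gamma: float = 1e-6) -> np.ndarray:
    H = len(r)
    r_lg = guided_filter_1d(r, r, R_large, eps)              # Eq. (8)
    mad_loc = np.array([mad(r[max(0, i - win): i + win + 1])
                        for i in range(H)])                  # Eq. (9)
    mad_glob = mad(r)
    # inverse-proportional radius, clipped to [R_min, R_large], Eq. (10)
    R_i = np.clip(alpha * R_large / (mad_loc / (mad_glob + delta) + delta),
                  R_min, R_large).astype(int)
    # precompute one filtered signal per distinct radius, then gather
    bank = {R: guided_filter_1d(r, r, int(R), eps) for R in np.unique(R_i)}
    r_ad = np.array([bank[R_i[i]][i] for i in range(H)])
    # soft fusion weight, Eq. (11), clipped to [0, 1]
    w = (1 - np.exp(-alpha * mad_loc)) / (1 - np.exp(-alpha * mad_glob) + gamma)
    w = np.clip(w, 0.0, 1.0)
    return w * r_ad + (1 - w) * r_lg                         # Eq. (12)
```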
As can be seen in Figure 5, small-scale guided filtering with radius R = 5 (Figure 5a) retains more detail but also more noise and is not smooth enough, whereas large-scale guided filtering with R = 30 (Figure 5b) is too smooth: the stripes are suppressed, but detail is lost. The MAD-driven fusion strategy preserves both the smoothness and the details of the image and is therefore both robust and adaptive.

2.3. Frequency-Domain Spectral Entropy Gating Mechanism

A frequency-domain filtering scheme is introduced to further suppress the residual periodic stripe noise in the row-mean domain. The residual signal is first obtained by subtracting the smoothed background estimate r̂(i) from the original row-mean signal:

$$\delta(i) = r(i) - \hat{r}(i) \tag{13}$$

The residual signal δ(i) is transformed with the fast Fourier transform (FFT) to obtain its spectral representation:

$$F(i) = \mathcal{F}\{\delta(i)\} = A(i)\, e^{j\phi(i)} \tag{14}$$

where A(i) = |F(i)| is the amplitude spectrum and φ(i) = arg F(i) is the phase spectrum. The spectral entropy H is introduced to measure the residual signal's spectral complexity. The probability distribution of the normalized amplitude is defined as

$$P(i) = \frac{A(i)}{\sum_i A(i)}, \qquad H = -\sum_i P(i) \log\big(P(i) + \varepsilon\big) \tag{15}$$

where ε is a small constant to avoid numerical singularities. Spectral entropy reflects the energy dispersion and structural disorder in the frequency domain. Based on the spectral entropy, an adaptive threshold T for amplitude suppression is defined as

$$T = \mu + \beta H \sigma \tag{16}$$

where μ = mean(A(i)), σ = std(A(i)), and β is a scaling coefficient that controls the impact of entropy on the threshold.
A binary suppression mask is then constructed:

$$\tilde{F}(i) = \begin{cases} 0, & A(i) \geq T \\ F(i), & A(i) < T \end{cases} \tag{17}$$

Equation (17) zeros the frequency components whose amplitudes reach the threshold, preventing leakage of the dominant stripe frequencies. The suppressed frequency components are then recovered as

$$F_{\mathrm{removed}}(i) = F(i) - \tilde{F}(i) \tag{18}$$

An inverse Fourier transform is applied to F_removed(i) to obtain the corresponding spatial-domain stripe estimate:

$$\tilde{\delta}(i) = \mathcal{F}^{-1}\{F_{\mathrm{removed}}(i)\} \tag{19}$$

The resulting signal δ̃(i) represents the estimated stripe noise component in the row-mean domain.
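Operationally, Equations (13)–(19) amount to one FFT, an entropy-scaled threshold, and one inverse FFT. A minimal sketch follows; preserving the DC bin is our own implementation assumption (so that the residual's mean level is not treated as stripe noise), not a step stated in the text:

```python
import numpy as np

def spectral_entropy_gate(residual: np.ndarray, beta: float = 0.003,
                          eps: float = 1e-12) -> np.ndarray:
    """Estimate the 1D stripe component from the residual, Eqs. (13)-(19)."""
    F = np.fft.fft(residual)                # Eq. (14)
    A = np.abs(F)                           # amplitude spectrum A(i)
    P = A / (A.sum() + eps)                 # normalized amplitude distribution
    H = -np.sum(P * np.log(P + eps))        # spectral entropy, Eq. (15)
    T = A.mean() + beta * H * A.std()       # adaptive threshold, Eq. (16)
    F_kept = np.where(A >= T, 0.0, F)       # suppression mask, Eq. (17)
    F_removed = F - F_kept                  # suppressed components, Eq. (18)
    F_removed[0] = 0.0                      # assumption: keep the DC level
    return np.real(np.fft.ifft(F_removed))  # stripe estimate, Eq. (19)
```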

2.4. Stripe Expansion and Image Restoration

The estimated 1D residual noise signal δ̃(i) obtained through frequency-domain processing is expanded into a 2D spatial pattern for correction. This is achieved by replicating the 1D signal across all columns to construct a redundant 2D noise map:

$$\Delta(i,j) = \tilde{\delta}(i), \quad \forall j \in [1, W] \tag{20}$$

This map Δ(i, j) represents the estimated horizontal stripe interference and is subtracted from the original image to generate the corrected image:

$$\hat{I}(i,j) = I(i,j) - \Delta(i,j) \tag{21}$$
Figure 6 illustrates the image restoration results. The estimated stripe noise pattern, generated by extending the 1D residual along the column direction, is shown in Figure 6b. The final corrected image, obtained by subtracting this pattern from the original input, is displayed in Figure 6a. Figure 6c compares the row-wise intensity curves before and after correction, demonstrating a significant reduction in horizontal non-uniformity.
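Chaining the sketches above gives an end-to-end single-frame correction in the order of Figure 1; as before, the function names are ours and the defaults are illustrative:

```python
import numpy as np

def correct_stripes(img: np.ndarray, **fusion_kwargs) -> np.ndarray:
    """Single-frame NUC sketch chaining Eqs. (1), (8)-(19), and (20)-(21)."""
    r = row_mean_signal(img)                      # stage 1, Eq. (1)
    r_hat = adaptive_fusion(r, **fusion_kwargs)   # stage 2, Eqs. (8)-(12)
    d_tilde = spectral_entropy_gate(r - r_hat)    # stage 3, Eqs. (13)-(19)
    # stage 4, Eq. (20): replicate the 1D estimate across all W columns
    Delta = np.tile(d_tilde[:, None], (1, img.shape[1]))
    return img - Delta                            # Eq. (21)
```

For example, corrected = correct_stripes(noisy.astype(np.float64)) applies the full chain with the illustrative defaults.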

2.5. Experimental Setup

The experiments are conducted on five datasets: OSU [62], KAIST [63], LLVIP [64], Tendero's dataset [40], and a proprietary long-wave infrared weekly scanning line-scan dataset. All methods are executed under the same hardware and software environment. The computing platform includes a 12th-generation Intel Core i7-12700H CPU (Intel Corporation, Santa Clara, CA, USA) @ 3.61 GHz, 32 GB RAM, and a 64-bit Windows operating system. All algorithms are implemented and tested using MATLAB 2024a.

3. Results

To evaluate the non-uniformity correction performance of the proposed algorithm, we conduct a comparative study with several representative state-of-the-art methods developed in recent years for infrared detectors. These methods can be broadly categorized into four groups:
(1)
Frequency-domain filtering-based methods: the two-stage filtering (TSF) approach proposed by Zeng in 2018 [36] combines frequency-domain filtering with one-dimensional row-guided filtering, aiming to remove stripe artifacts while preserving image structures.
(2)
Spatial-domain guided filtering approaches: the guided filtering with linear fitting (GFLF) method proposed by Li in 2023 [32] performs non-uniformity correction via one-dimensional guided filtering and regression modeling. The ASNR method proposed by Hamadouche in 2024 [33] further integrates frequency mask extraction with guided filtering to enhance stripe suppression capability.
(3)
Optimization-based methods: the ADOM model [49] incorporates weighted norm regularization and a momentum update mechanism within an ADMM optimization framework to correct non-uniformity artifacts adaptively.
(4)
Traditional methods: the Median-Histogram-Equalization-based Non-uniformity Correction Algorithm (MIRE) proposed by Tendero in 2012 [40] and the Estimating Bias by Minimizing the Differences Between Neighboring Columns (MDBC) method introduced by Wang in 2016 [47] serve as classical baselines in non-uniformity correction, relying on histogram statistics and local column differences, respectively.

3.1. Noise Modeling and Analysis

Unlike conventional infrared focal plane arrays, where each pixel operates independently, line-scan infrared detectors utilize a single column of pixels that sweeps horizontally across the scene to build the image line by line. Since each image row is acquired by the same detector element, any gain drift or bias error in a specific pixel will manifest consistently across the corresponding image row. This results in the structured horizontal stripe artifacts characteristic of line-scan infrared systems.
To model this non-uniformity, a hybrid gain–bias–white noise model is adopted, which can be expressed as
$$I_{\mathrm{noise}}(i,j) = g(i)\, I_{\mathrm{ideal}}(i,j) + b(i) + n(i,j) \tag{22}$$

where I_noise(i, j) denotes the observed image, I_ideal(i, j) denotes the ideal noise-free image, g(i) and b(i) denote the gain and bias terms of the i-th line, respectively, obeying the Gaussian distributions g(i) ~ N(1, σ_g²) and b(i) ~ N(0, σ_b²), and n(i, j) ~ N(0, σ_white²) is zero-mean white noise.
In addition to these structured distortions, periodic noise is frequently observed in infrared imaging systems, typically caused by power supply instability, clock jitter, or electronic readout nonlinearities. This type of noise exhibits periodic oscillations along the column direction and is modeled as
$$P(i,j) = A \cos(2\pi f_0 j + \phi) \tag{23}$$
where A denotes the interference amplitude, f 0 is the normalized spatial frequency, and ϕ is the phase offset. Periodic noise appears as sharp peaks in the frequency domain, which differ from natural broadband signals. The proposed masking strategy removes structural stripes and filters periodic interference, improving overall robustness.
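For simulation, Equations (22) and (23) can be combined into a single generator, sketched below in Python/NumPy. The parameters are standard deviations (so sigma_g**2 corresponds to σ_g² in Equation (22)), and the defaults are placeholders rather than the values of Table 3:

```python
import numpy as np

def add_line_scan_noise(ideal: np.ndarray, sigma_g: float = 0.1,
                        sigma_b: float = 0.1, sigma_w: float = 0.0,
                        A: float = 0.0, f0: float = 0.05, phi: float = 0.0,
                        seed: int = 0) -> np.ndarray:
    """Hybrid gain-bias-white noise (Eq. (22)) plus a periodic term (Eq. (23))."""
    rng = np.random.default_rng(seed)
    H, W = ideal.shape
    g = rng.normal(1.0, sigma_g, (H, 1))      # per-line gain, g(i) ~ N(1, sigma_g^2)
    b = rng.normal(0.0, sigma_b, (H, 1))      # per-line bias, b(i) ~ N(0, sigma_b^2)
    n = rng.normal(0.0, sigma_w, (H, W))      # white noise n(i, j)
    j = np.arange(W)
    P = A * np.cos(2 * np.pi * f0 * j + phi)  # periodic interference, Eq. (23)
    return g * ideal + b + n + P
```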

3.2. Evaluation Indicators

This study employs a combination of reference-based and no-reference evaluation metrics to comprehensively assess the effectiveness of non-uniformity correction. These metrics evaluate the algorithm’s performance, including restoration quality, structure preservation, and stripe suppression. The reference-based metrics include peak signal-to-noise ratio (PSNR), roughness, and structural similarity index (SSIM). In contrast, the no-reference metrics include gradient consistency (GC) and frequency-domain noise ratio (NR).
PSNR quantifies image quality by evaluating the mean squared error (MSE) between the restored image and a reference. A higher PSNR value typically reflects higher reconstruction accuracy. It is defined as

$$\mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}} \tag{24}$$

where MAX represents the maximum possible pixel intensity in the image.
Roughness assesses spatial intensity variation and helps identify residual artifacts. It is calculated from the cumulative gradient magnitude in the horizontal and vertical directions. To evaluate structural consistency, the roughness of the processed image is compared to that of the original: the closer the two values, the more consistent the image structure. The formula is given by

$$R = \frac{\lVert \nabla_x * I \rVert_1 + \lVert \nabla_y * I \rVert_1}{\lVert I \rVert_1} \tag{25}$$

where ∇_x = [1, −1] and ∇_y = [1, −1]ᵀ are first-order difference operators, * denotes convolution, and I is the input image.
SSIM evaluates structural similarity between the corrected image and the reference, focusing on brightness, contrast, and texture. A value closer to 1 indicates higher structural consistency. SSIM is computed as

$$\mathrm{SSIM} = \frac{(2\mu_{\hat{I}}\mu_I + C_1)(2\sigma_{\hat{I}I} + C_2)}{(\mu_{\hat{I}}^2 + \mu_I^2 + C_1)(\sigma_{\hat{I}}^2 + \sigma_I^2 + C_2)} \tag{26}$$

The symbols μ_I, μ_Î, σ_I², and σ_Î² represent the means and variances of images I and Î, respectively. The term σ_ÎI corresponds to the covariance between the two images. Constants C_1 and C_2 are introduced to avoid instability caused by small denominators.
The GC metric quantifies differences in gradient strength between the enhanced image and the original one affected by noise. A lower GC score implies better structural preservation. It is defined as

$$\mathrm{GC} = \frac{\sum \lvert \nabla I_n - \nabla I_c \rvert}{\sum \lvert \nabla I_n \rvert} \tag{27}$$

where I_n is the noisy image, I_c is the corrected image, and ∇ is the gradient operator.
NR evaluates the change in spectral energy caused by noise suppression. It reflects the level of frequency-domain filtering. A higher NR indicates more effective noise removal. The metric is computed as

$$\mathrm{NR} = \frac{\lVert \mathcal{F}(I_n) \rVert^2}{\lVert \mathcal{F}(I_c) \rVert^2} \tag{28}$$

where 𝓕 denotes the 2D Fourier transform, I_n is the noisy image, and I_c is the corrected image.
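Under the reconstructed definitions above, the metrics reduce to a few array operations. The sketch below follows Equations (24)–(28); the single-window (global) SSIM and the gradient-magnitude reading of Equation (27) are our interpretations, not the paper's stated implementation:

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Eq. (24)."""
    return 10 * np.log10(max_val**2 / np.mean((ref - img) ** 2))

def roughness(img):
    """Eq. (25): L1 norm of first differences over the L1 norm of the image."""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return (dx + dy) / np.abs(img).sum()

def ssim_global(ref, img, C1=1e-4, C2=9e-4):
    """Eq. (26), computed over the whole image rather than local windows."""
    mu_r, mu_i = ref.mean(), img.mean()
    cov = ((ref - mu_r) * (img - mu_i)).mean()
    num = (2 * mu_r * mu_i + C1) * (2 * cov + C2)
    den = (mu_r**2 + mu_i**2 + C1) * (ref.var() + img.var() + C2)
    return num / den

def gradient_consistency(noisy, corrected):
    """Eq. (27): relative change in gradient magnitude (lower is better)."""
    gn = np.abs(np.gradient(noisy)).sum(axis=0)
    gc = np.abs(np.gradient(corrected)).sum(axis=0)
    return np.abs(gn - gc).sum() / gn.sum()

def noise_ratio(noisy, corrected):
    """Eq. (28): ratio of spectral energies before and after correction."""
    return (np.abs(np.fft.fft2(noisy)) ** 2).sum() / \
           (np.abs(np.fft.fft2(corrected)) ** 2).sum()
```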

3.3. Ablation Experiments

Five experimental configurations are designed to evaluate each core sub-module’s contribution to the overall performance of non-uniformity correction, as shown in Table 1. The tested modules include DR (row-mean dynamic radius guided filtering), FM (frequency-domain mask filtering), and ET (spectral-entropy-based adaptive thresholding). The baseline model, B0, uses only large-window guided filtering without any adaptive components. B1 adds the DR module to assess the effect of local radius adaptation. B2 incorporates only the FM module to verify its ability to suppress periodic stripe noise. B3 combines DR and FM to examine their complementarity. Finally, B4 represents the complete model, integrating all three modules.
In these experiments, except for the tested modules, other parameters are fixed: the large-window guided filter radius is set to 30, the smoothing coefficient ε = 0.16 , and the spectral entropy threshold scaling factor β = 0.003 . All evaluations are conducted on real high-resolution infrared line-scan images. The original image size is 1024 × 55,000, and a 1024 × 1434 region is used for testing.
The quantitative results of the five configurations are presented in Table 2. Compared with the baseline B0, adding DR in B1 improves PSNR and SSIM, indicating enhanced restoration and structural consistency. However, the roughness value remains far from the original image’s 0.036, suggesting that DR alone has limited ability to suppress horizontal undulating noise. GC is notably reduced in B2, where FM is used alone, confirming that frequency-domain masking effectively attenuates periodic interference. Yet, the roughness increases to 0.027, implying a risk of over-smoothing. When DR and FM are jointly applied in B3, all metrics show noticeable improvement, indicating that the combination achieves a better trade-off between suppression and structure preservation. With the addition of the ET module, the whole model B4 achieves the best performance across all five metrics. Its roughness and NR values are closest to the original image, demonstrating that entropy-based thresholding improves robustness while avoiding excessive smoothing.
Figure 7 provides a visual comparison of the different configurations. B0 exhibits prominent horizontal stripes. B1 and B2 partially suppress noise, leaving residual patterns or smoothing artifacts. B3 improves the clarity of cloud structures and building edges. In contrast, B4 eliminates structural and periodic artifacts and more effectively restores natural details. These visual observations align well with the quantitative trends in GC and roughness, further confirming the role of the ET module in balancing detail preservation and noise suppression.

3.4. Parameter Sensitivity Analysis

To evaluate the robustness of the proposed algorithm concerning key hyperparameters, we conduct a sensitivity analysis on two categories of representative parameters in the spatial and frequency domains. Three hyperparameters are tested: (1) large window guided filter radius r , (2) guided filter smoothing coefficient ϵ , (3) spectral entropy threshold scaling factor β .
We select images from the KAIST dataset and systematically vary each parameter while keeping the others at their default values. Noise is added based on the gain–bias model defined in Equation (22), with gain g(i) ~ N(1, 0.02) and bias b(i) ~ N(0, 0.02) and no additional white noise included.
Figure 8 shows the effect of varying the large-window radius r . When r = 15 , the corrected image still exhibits high-frequency jitter, and residual stripe noise remains visible. As the radius increases to 25–35, the streaks are effectively suppressed, and local contrast is well maintained. However, when r = 45 , the image becomes over-smoothed, and fine details in low-contrast regions are noticeably attenuated. Therefore, a recommended range for r is between 25 and 35.
Figure 9 presents the results under different smoothing coefficients ϵ. When ϵ = 0.12 or ϵ = 0.22, stripe noise is still visible in the red-box region. Increasing ϵ to 0.42 improves suppression while preserving edge details. However, at ϵ = 0.82, excessive smoothing occurs, weakening vehicle contours and road textures. Thus, a moderate ϵ value offers a good trade-off between smoothing and detail preservation.
Figure 10 evaluates the influence of the spectral threshold coefficient β . As it increases, the adaptive threshold rises, which enhances stripe suppression by treating more frequency components as noise; however, an excessively large value may lead to over-smoothing, blurring edge structures, and flattening row-mean profiles. In contrast, spectral suppression is insufficient if β is too small, and residual noise remains. Considering these effects, we recommend setting β ∈ [0.001, 0.003] for optimal performance.

3.5. Method Performance Comparison and Analysis

3.5.1. Comparison Algorithm and Experimental Setup

We compare our algorithm with six state-of-the-art methods: MIRE [40], MDBC [47], TSF [36], ADOM [49], GFLF [32], and ASNR [33]. To ensure a fair comparison, all competing algorithms are configured according to the parameter settings recommended in their original papers and evaluated on the same datasets. For our method, the guided filtering window size is set to [30 × 1], the smoothing parameter ε = 0.16, and the spectral entropy threshold scaling factor β = 0.003. The MIRE [40] algorithm uses a Gaussian-weighted sliding midway histogram matching strategy; the Gaussian window's standard deviation is automatically searched over the range [0, 8] with a step size of 0.5. The MDBC [47] algorithm sets the regularization weight m_k = 0.1 and the iterative step size Δt = 0.1, with a maximum of 200 iterations. The TSF [36] method employs a striped band-stop filter in the frequency domain with a bandwidth K = 2, followed by alternating 1 × 5 mean and Gaussian filtering in the spatial domain; the spatial-domain standard deviation is set to 1.2, and the number of iterations is capped at 10. For the ADOM [49] algorithm, the momentum coefficient is initialized to 1, the step-size adjustment coefficient δ = 0.1, the penalty parameters ρ_1, ρ_2, and ρ_3 are empirically set to 5, and the regularization parameters λ_1 = λ_2 = 0.01; the maximum number of iterations is 300, with a convergence threshold of tol = 1 × 10⁻⁴. The GFLF [32] algorithm is applied to the OSU, LLVIP, KAIST, and Tendero datasets without cropping, while 1600 columns are cropped for the experiments on the long-wave infrared weekly scanning dataset; its horizontal filtering window is [8 × 1] with ε = 0.04, and its vertical filtering window is [1 × 100] with ε = 0.16. In the ASNR [33] algorithm, the guided filter window size is set to [5 × 5], and the smoothing coefficient is ε = 0.01.

3.5.2. Quantitative Testing of Simulated Datasets

To comprehensively evaluate the robustness and adaptability of each non-uniformity correction algorithm under different imaging resolutions and noise conditions, we construct simulated test sets using three publicly available infrared image datasets: OSU, KAIST, and LLVIP. For each dataset, 100 images are randomly selected. These datasets represent diverse application scenarios and resolutions:
(1)
OSU Dataset [62]: Provided by Ohio State University, this dataset captures human activity scenes in natural outdoor environments. With a resolution of 240 × 320, it reflects low-resolution infrared imaging scenarios and is suitable for evaluating algorithm performance under small-scale conditions.
(2)
KAIST Dataset [63]: Released by the Korea Advanced Institute of Science and Technology, this dataset includes city streets, parking lots, and varying illumination conditions across day, dusk, and night. The image resolution is 512 × 640, which provides a moderately complex environment for evaluating detail preservation and mid-scale stripe correction.
(3)
LLVIP Dataset [64]: Developed by Beijing University of Posts and Telecommunications, this dataset consists of indoor and outdoor scenes captured under low-light and nighttime conditions. The images include fine-grained thermal signatures from pedestrians, and the resolution is 1024 × 1280, making it ideal for high-resolution correction analysis.
Simulated noise is generated using the hybrid gain–bias–white noise model (Equation (22)) and the periodic noise model (Equation (23)). Equation (22) simulates line-wise gain and bias distortions, while Equation (23) introduces stripe-like periodic interference along the scanning direction. To ensure disturbance significance and representativeness, the variance of all added noise components is no less than 0.01.
Table 3 lists five representative noise groups for each dataset, ranging from mild to severe non-uniformity. The parameter combinations include variations in gain variance σ g 2 , bias variance σ b 2 , white noise variance σ w h i t e 2 , periodic amplitude A , normalized frequency f 0 , and phase offset ϕ . These synthetic conditions form a unified benchmark for evaluating correction performance under varying interference levels.
Each correction algorithm is tested on every image set under the corresponding noise group using these settings. The evaluation metrics include PSNR, SSIM, and roughness.
The results for the OSU dataset are presented in Table 4. Our method consistently achieves the highest PSNR and SSIM values across all noise groups. In contrast to MDBC and TSF, which tend to cause excessive smoothing or structural distortion, the proposed approach preserves finer textures while maintaining effective stripe suppression. For instance, in OSU-2, our method achieves a PSNR of 33.89 dB, outperforming MDBC (30.33 dB) and TSF (30.56 dB). Moreover, the resulting roughness is closer to the original noisy image, suggesting better structural preservation.
Table 5 presents results on the KAIST dataset, which features more complex scenes and high-frequency texture. Algorithms such as MDBC, ADOM, and ASNR perform poorly in the presence of cyclic and high-frequency noise. Notably, ASNR yields an SSIM of only 0.5774 in KAIST-5, suggesting significant detail loss. In contrast, our method maintains the best overall performance across all noise conditions, effectively balancing suppression and detail retention.
The results on the LLVIP dataset, shown in Table 6, further confirm the superiority of our algorithm under high-resolution and multi-source interference conditions. Traditional algorithms such as TSF and MDBC fail to effectively suppress fine-structured noise, especially in LLVIP-2 and LLVIP-5. Although GFLF and ASNR obtain moderate improvements, they still face limitations in managing complex noise patterns. In contrast, our method achieves PSNR values of 44.29 dB, 41.76 dB, and 37.05 dB in LLVIP-1, LLVIP-2, and LLVIP-4, respectively. Additionally, SSIM values exceed 0.97 across multiple groups, demonstrating the adaptability and robustness of our approach under extreme scenarios.

3.5.3. Quantitative Testing on Real Datasets

To further validate the effectiveness of the proposed algorithm under real imaging conditions, we evaluate it on two real-world infrared datasets with different sources and scene characteristics. The first is a publicly available benchmark dataset (Tendero’s dataset), while the second is a self-collected long-wave infrared scanning dataset acquired in our laboratory. These datasets contain significant non-uniformity and structural complexity and are used to assess the robustness and effectiveness of the algorithms in scenarios without ideal reference images.
The first dataset is Tendero’s, which contains typical fixed-direction stripe noise and is widely used for benchmarking non-uniformity correction algorithms. All available images are used in the test, and the average values of all evaluation metrics are shown in Table 7.
Compared with other methods, our algorithm achieves the highest PSNR and SSIM while maintaining low roughness. MDBC and TSF show poor performance in GC and SSIM due to their reliance on global or frequency-domain statistical modeling and limited adaptability to local texture variations. ADOM effectively suppresses stripe components through energy reallocation, but its coupling of terms tends to cause over-smoothing in textured regions. In contrast, our method achieves stable suppression and structure preservation using spatial modeling and locally adaptive thresholding, resulting in the best performance across all indicators.
The second dataset is the long-wave infrared weekly scanning dataset, collected using a line-array scanning detector with a resolution of 1024 × 55,000. This dataset contains high-frequency fixed-pattern noise and complex background structures. For evaluation, 20 images are selected and cropped to 1024 × 8192. The average performance across these images is shown in Table 8.
As shown in Table 8, MIRE’s global sliding-window estimation strategy fails to address local gain fluctuations, resulting in residual stripe noise or structural artifacts. Although TSF partially suppresses cyclic interference, it frequently introduces boundary artifacts and struggles to accurately localize non-periodic distortions. ADOM and GFLF exhibit moderate performance but show significant deviations in roughness and NR, indicating challenges in preserving texture continuity. In contrast, our method effectively adapts to gain transitions and structural mutation regions, maintains low GC and roughness values, and consistently outperforms others in terms of PSNR and SSIM. These results confirm the strong generalization capability and robustness of the proposed approach in high-resolution infrared imaging scenarios.

3.5.4. Qualitative Visualization Comparison

To further validate the proposed method’s visual performance, we conduct qualitative comparison experiments using three simulated noisy images (OSU-5, KAIST-3, and LLVIP-4) and two real infrared images (from Tendero’s dataset and a self-collected long-wave weekly infrared scanning dataset). These samples are selected to represent various interference patterns, structural complexities, and imaging resolutions.
In the simulated set, OSU-5 includes combined gain, bias, and periodic interference, making it suitable for evaluating overall denoising performance. KAIST-3 exhibits strong bias-enhanced stripes and low-contrast textures, emphasizing the need for structure preservation. LLVIP-4 comes from a high-resolution dataset with dominant periodic noise, which is ideal for testing frequency-domain robustness and edge detail recovery.
Figure 11 presents the visual results for OSU-5. In the low-contrast region marked by the red box, MIRE and MDBC exhibit limited suppression, with residual streaks and noticeable structural blur. TSF provides only weak stripe removal as periodic patterns remain visible. ADOM introduces over-smoothing due to global regularization, while GFLF preserves more structure but fails to eliminate periodic textures. ASNR effectively reduces interference but causes slight blurring of image details. In contrast, our method successfully eliminates stripe noise while preserving background textures and target clarity, demonstrating superior visual consistency and balance between denoising and detail retention.
As shown in Figure 12, for KAIST-3, MIRE and MDBC suffer from severe blurring and edge ambiguity. TSF suppresses some cyclic components but destroys structural texture, leading to an unnatural “oil painting” effect. ADOM results in over-compression and loss of weak texture, while GFLF and ASNR maintain moderate detail but leave residual stripe artifacts. Our method best preserves scene hierarchy and edge sharpness, balancing denoising and structure retention even in bias-enhanced scenarios.
Figure 13 displays results for LLVIP-4. This high-resolution image with prominent periodic stripes poses challenges for frequency-domain methods. MIRE and MDBC fail to estimate stripe frequencies precisely, resulting in incomplete suppression. TSF focuses on band-stop filtering but induces over-smoothing. ADOM and GFLF preserve some texture but cannot entirely suppress periodic components. ASNR removes significant interference but introduces local luminance distortion. Our method shows strong frequency adaptability, suppressing periodic noise while retaining fine details and local contrast, especially in text and edge regions.
Figure 14 presents the result from Tendero’s dataset, which contains dense stripe patterns and weak structural texture for real images. Most methods can remove stripes to varying degrees, but MIRE, MDBC, and TSF compromise luminance uniformity. GFLF and ASNR achieve a better balance but may reduce detail sharpness. Our method effectively suppresses stripes while preserving edge clarity and natural brightness transitions, maintaining visual harmony and detail integrity in low-texture regions.
Figure 15 shows the results of the long-wave infrared weekly scanning dataset, which our laboratory captured. The original image contains low-frequency periodic noise across the sky and cloud regions. TSF effectively suppresses interference but introduces uneven brightness transitions. MDBC yields smoother results but blurs detail. ADOM and GFLF retain some texture but suffer from over-suppression or artifacts. ASNR removes high-frequency noise but causes luminance inconsistency. Our method maintains brightness uniformity, clearly preserves cloud structures and pole contours, and avoids artifacts, demonstrating strong generalization to real long-format infrared scenes.

3.6. Runtime Comparison

To further evaluate the computational efficiency of the algorithms in practical applications, we measure the average runtime of seven algorithms when processing 30 real long-wave infrared weekly scanning images with a resolution of 1024 × 55,000. These high-resolution, long-width images place high demands on both algorithmic complexity and memory efficiency.
Table 9 summarizes the runtime comparison. Among traditional methods, MIRE exhibits the longest average runtime of 492.15 s, primarily due to its sliding-window median matching and global statistical fusion strategy, which require repeated block search and sorting across large-scale images. ADOM records an average runtime of 192.01 s as its ADMM-based framework involves iterative gradient computations and threshold updates, making it unsuitable for real-time applications. Although TSF features a relatively simple structure, its reliance on multiple FFT-based frequency-domain operations results in considerable runtime overhead for long-row images.
MDBC, GFLF, and ASNR achieve faster processing under certain conditions; however, they still incorporate region filtering, pyramid decomposition, or structure-tensor-based modeling. For example, ASNR attempts to balance edge preservation and interference suppression, but its multi-branch filtering leads to a runtime of 7.17 s.
In contrast, our method achieves the shortest average runtime of 0.1815 s, demonstrating excellent computational efficiency. The proposed algorithm adopts a lightweight structure comprising single-channel row-mean extraction, global guided filtering, and frequency-domain suppression. It requires only one global guided filtering operation and two FFTs—without involving image block search, iteration, or multi-scale reconstruction—thereby significantly reducing complexity.
Overall, the method ensures effective stripe removal and structural preservation while delivering outstanding runtime performance. Its low memory footprint, strong portability, and real-time capability make it particularly well suited for large-scale high-resolution column-scan image processing and embedded deployment in practical infrared imaging systems.

4. Discussion

Comprehensive experiments on three public datasets (OSU, KAIST, and LLVIP) and two real datasets (Tendero and a self-acquired long-wave infrared weekly scanning dataset) confirm that the proposed algorithm delivers superior performance in quantitative metrics and visual quality. Compared with recent state-of-the-art methods, it consistently achieves higher PSNR and SSIM, and lower roughness and GC, and demonstrates robustness across diverse noise patterns and resolutions. It notably achieves the shortest runtime among all tested methods, enabling real-time processing on large-scale images.
From a technical perspective, the strength of this algorithm stems from its three complementary components: (1) 1D row-mean modeling reduces data dimensionality while enhancing stripe orientation features; (2) MAD-driven adaptive guided filtering introduces spatial adaptivity, enabling fine-scale noise suppression without over smoothing; and (3) frequency-domain filtering with spectral entropy thresholding allows precise isolation of dominant interference components. These modules collectively form a spatial-frequency collaborative filtering framework that is both accurate and lightweight.
In addition to algorithmic effectiveness, the method is designed with engineering deployment in mind. Its reliance on only one guided filtering operation and two FFTs, without iterative refinement or patch-based processing, minimizes memory usage and computational load. This design ensures high portability and real-time applicability on hardware-constrained platforms such as embedded DSPs or FPGAs.
Nonetheless, certain limitations remain. The current method is optimized for horizontally oriented noise and does not yet generalize to arbitrary or oblique stripe patterns. Future work could explore direction-invariant models using log-Gabor filters or structure-tensor-guided adaptive filtering. The algorithm also does not actively compensate for long-term detector drift; incorporating temporal frame sequences or on-chip calibration metadata may enable dynamic gain correction. Furthermore, integrating lightweight neural modules, such as temporal convolutional networks (TCNs) or spiking neural networks (SNNs), into the residual estimation stage may enhance accuracy while preserving interpretability and real-time performance.

5. Conclusions

This study proposes a lightweight and interpretable non-uniformity correction method for high-resolution infrared line-scan images affected by structural and periodic stripe noise. The technique achieves accurate stripe suppression by integrating 1D row-mean modeling, MAD-driven adaptive guided filtering, and spectral-entropy-constrained frequency masking while preserving texture and structural consistency.
Experimental validation across five datasets, including synthetic and real-world data, demonstrates that the proposed algorithm outperforms six representative state-of-the-art methods regarding PSNR, SSIM, roughness, GC, and NR. Moreover, it achieves sub-second runtime performance (0.18 s) on 1024 × 55,000 images, significantly surpassing traditional and optimization-based algorithms in computational efficiency.
Beyond its current design, the algorithm offers a scalable and robust framework for future enhancement. Potential extensions include handling arbitrary noise orientations, incorporating temporal modeling for long-term drift correction, and embedding explainable learning-based components for hybrid correction. The method holds strong potential for real-time infrared image correction in embedded, airborne, and satellite-based systems.

Author Contributions

Conceptualization, M.H.; methodology, M.H.; software, M.H.; validation, M.H. and J.J.; resources, Y.Z. (Yaohua Zhu); data curation, Y.Z. (Yanghang Zhu); writing—original draft preparation, M.H.; writing—review and editing, Q.D. and Y.Z. (Yong Zhang). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

An infrared long-wave cooled linear scan detector generated the real infrared image dataset; it is not a public dataset. The publicly available dataset OSU was analyzed in this study and can be found here: http://vcipl-okstate.org/pbvs/bench/Data/01/download.html, accessed on 20 April 2025. The publicly available dataset KAIST was analyzed in this study and can be found here: https://soonminhwang.github.io/rgbt-ped-detection/, accessed on 20 April 2025. The publicly available dataset LLVIP was analyzed in this study and can be found here: https://github.com/bupt-ai-cz/LLVIP, accessed on 20 April 2025. The publicly available Tendero dataset was analyzed in this study and can be found here: https://ipolcore.ipol.im/demo/clientApp/demo.html?id=129, accessed on 22 April 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Strickland, R.N. Infrared Techniques for Military Applications. In Infrared Methodology and Technology; CRC Press: Boca Raton, FL, USA, 2023; pp. 397–427.
2. Kuenzer, C.; Dech, S. Thermal Infrared Remote Sensing. Remote Sens. Digit. Image Process. 2013, 10, 978–994.
3. Huang, Z.; Zhang, Y.; Li, Q.; Zhang, T.; Sang, N.; Hong, H. Progressive Dual-Domain Filter for Enhancing and Denoising Optical Remote-Sensing Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 759–763.
4. Raza, A.; Chelloug, S.A.; Alatiyyah, M.H.; Jalal, A.; Park, J. Multiple pedestrian detection and tracking in night vision surveillance systems. Comput. Mater. Contin. 2023, 75, 3275–3289.
5. Patel, H.; Upla, K.P. Night Vision Surveillance: Object Detection Using Thermal and Visible Images. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 5–7 June 2020; pp. 1–6.
6. Venegas, P.; Ivorra, E.; Ortega, M.; Sáez de Ocáriz, I. Towards the Automation of Infrared Thermography Inspections for Industrial Maintenance Applications. Sensors 2022, 22, 613.
7. Zhang, D.; Zhan, C.; Chen, L.; Wang, Y.; Li, G. Review of unmanned aerial vehicle infrared thermography (UAV-IRT) applications in building thermal performance: Towards the thermal performance evaluation of building envelope. Quant. InfraRed Thermogr. J. 2025, 22, 266–296.
8. Hua, W.; Zhao, J.; Cui, G.; Gong, X.; Ge, P.; Zhang, J.; Xu, Z. Stripe Nonuniformity Correction for Infrared Imaging System Based on Single Image Optimization. Infrared Phys. Technol. 2018, 91, 250–262.
9. Chen, W.; Li, B. Overcoming Periodic Stripe Noise in Infrared Linear Array Images: The Fourier-Assisted Correlative Denoising Method. Sensors 2023, 23, 8716.
10. Huang, M.; Chen, W.; Zhu, Y.; Duan, Q.; Zhu, Y.; Zhang, Y. An Adaptive Weighted Residual-Guided Algorithm for Non-Uniformity Correction of High-Resolution Infrared Line-Scanning Images. Sensors 2025, 25, 1511.
11. Cao, Y.; Tisse, C.L. Solid state temperature-dependent NUC (non-uniformity correction) in uncooled LWIR (long-wave infrared) imaging system. In Proceedings of the Infrared Technology and Applications XXXIX, Bellingham, WA, USA, 18 June 2013; Volume 8704, pp. 838–845.
12. Liu, S.; Cui, H.; Li, J.; Yao, M.; Wang, S.; Wei, K. Low-Contrast Scene Feature-Based Infrared Nonuniformity Correction Method for Airborne Target Detection. Infrared Phys. Technol. 2023, 133, 104799.
13. He, Y.; Zhang, C.; Tang, Q. Non-uniformity correction based on sparse directivity and low rankness for infrared maritime imaging. In Proceedings of the 6th International Conference on Information Communication and Signal Processing (ICICSP), Xi'an, China, 23–25 September 2023; pp. 154–158.
14. Ding, S.; Wang, D.; Zhang, T. A median-ratio scene-based non-uniformity correction method for airborne infrared point target detection system. Sensors 2020, 20, 3273.
15. Boutemedjet, A.; Deng, C.; Zhao, B. Edge-aware unidirectional total variation model for stripe non-uniformity correction. Sensors 2018, 18, 1164.
16. Wolf, A.; Pezoa, J.E.; Figueroa, M. Modeling and compensating temperature-dependent non-uniformity noise in IR microbolometer cameras. Sensors 2016, 16, 1121.
17. Liu, C.; Sui, X.; Gu, G.; Chen, Q. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera. Meas. Sci. Technol. 2018, 29, 025402.
18. Chang, S.; Li, Z. Single-reference-based solution for two-point nonuniformity correction of infrared focal plane arrays. Infrared Phys. Technol. 2019, 101, 96–104.
19. Averbuch, A.; Liron, G.; Bobrovsky, B.Z. Scene-based non-uniformity correction in thermal images using Kalman filter. Image Vis. Comput. 2007, 25, 833–851.
20. Liu, Y.; Qiu, B.; Tian, Y.; Cai, J.; Sui, X.; Chen, Q. Scene-based dual domain non-uniformity correction algorithm for stripe and optics-caused fixed pattern noise removal. Opt. Express 2024, 32, 16591–16610.
21. Lian, X.; Li, J.; Sun, D. Adaptive nonuniformity correction methods for push-broom hyperspectral cameras. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 7253–7263.
22. Song, S.; Zhai, X. Research on non-uniformity correction based on blackbody calibration. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; Volume 1, pp. 2146–2150.
23. Wang, H.; Ma, C.; Cao, J.; Zhang, H. An adaptive two-point non-uniformity correction algorithm based on shutter and its implementation. In Proceedings of the 5th International Conference on Measuring Technology and Mechatronics Automation, Hong Kong, China, 16–17 January 2013; pp. 174–177.
24. Kim, S. Two-point correction and minimum filter-based nonuniformity correction for scan-based aerial infrared cameras. Opt. Eng. 2012, 51, 106401.
25. Hu, J.; Xu, Z.; Wan, Q. Non-uniformity correction of infrared focal plane array in point target surveillance systems. Infrared Phys. Technol. 2014, 66, 56–69.
26. Wang, J.; Hong, W. Non-uniformity correction for infrared cameras with variable integration time based on two-point correction. In Proceedings of the AOPC 2021: Infrared Device and Infrared Technology 2021, Beijing, China, 23–25 July 2021; Volume 12061, pp. 258–263.
27. Zhai, G.; Cheng, Y.; Han, Z.; Wang, D. The implementation of non-uniformity correction in multi-TDICCD imaging system. In Proceedings of the AOPC 2015: Image Processing and Analysis, Beijing, China, 5–7 May 2015; Volume 9675, pp. 66–70.
28. Njuguna, J.C.; Alabay, E.; Çelebi, A.; Çelebi, A.T.; Güllü, M.K. Field programmable gate arrays implementation of two-point non-uniformity correction and bad pixel replacement algorithms. In Proceedings of the 2021 International Conference on Innovations in Intelligent Systems and Applications (INISTA), Kocaeli, Turkey, 25–27 August 2021; pp. 1–6.
29. Ashiba, H.I.; Sadic, N.; Hassan, E.S.; El-Dolil, S.; Abd El-Samie, F.E. New proposed algorithms for infrared video sequences non-uniformity correction. Wirel. Pers. Commun. 2022, 126, 1051–1073.
30. Scribner, D.A.; Sarkady, K.A.; Kruer, M.R.; Caulfield, J.T. Adaptive nonuniformity correction for IR focal plane arrays using neural networks. In Proceedings of the SPIE, San Diego, CA, USA, 21 July 1991; Volume 1541, pp. 100–109.
31. Sghaier, A.; Douik, A.; Machhout, M. FPGA implementation of filtered image using 2D Gaussian filter. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 7.
32. Li, B.; Chen, W.; Zhang, Y. A nonuniformity correction method based on 1D guided filtering and linear fitting for high-resolution infrared scan images. Appl. Sci. 2023, 13, 3890.
33. Hamadouche, S.A.; Boutemedjet, A.; Bouaraba, A. Destriping model for adaptive removal of arbitrary oriented stripes in remote sensing images. Phys. Scr. 2024, 99, 095130.
34. Zhang, Y.; Li, X.; Zheng, X.; Wu, Q. Adaptive temporal high-pass infrared non-uniformity correction algorithm based on guided filter. In Proceedings of the 2021 7th International Conference on Computing and Artificial Intelligence, Tianjin, China, 23–26 April 2021; pp. 459–464.
35. Li, M.; Wang, Y.; Sun, H. Single-frame infrared image non-uniformity correction based on wavelet domain noise separation. Sensors 2023, 23, 8424.
36. Zeng, Q.; Qin, H.; Yan, X.; Yang, S.; Yang, T. Single infrared image-based stripe nonuniformity correction via a two-stage filtering method. Sensors 2018, 18, 4299.
37. Song, H.; Zhang, K.; Tan, W.; Guo, F.; Zhang, X.; Cao, W. A non-uniform correction algorithm based on scene nonlinear filtering residual estimation. Curr. Opt. Photonics 2023, 7, 408–418.
38. Hamadouche, S.A. Effective three-step method for efficient correction of stripe noise and non-uniformity in infrared remote sensing images. Phys. Scr. 2024, 99, 065539.
39. Cao, B.; Du, Y.; Xu, D.; Li, H.; Liu, Q. An improved histogram matching algorithm for the removal of striping noise in optical remote sensing imagery. Optik 2015, 126, 4723–4730.
40. Tendero, Y.; Landeau, S.; Gilles, J. Non-uniformity correction of infrared images by midway equalization. Image Process. Line 2012, 2, 134–146.
41. Li, D.; Ding, X.; Chai, M.; Ma, C.; Sun, D. Nonuniformity correction method of infrared detector based on statistical properties. IEEE Photonics J. 2024, 16, 7800408.
42. Liang, S.; Yan, J.; Chen, M.; Zhang, Y.; Sang, D.; Kang, Y. Non-uniformity correction method of remote sensing images based on improved total variation model. J. Electron. Imaging 2025, 34, 013006.
43. Boutemedjet, A.; Hamadouche, S.A.; Belghachem, N. Joint first and second order total variation decomposition for remote sensing images destriping. Imaging Sci. J. 2025, 73, 135–149.
44. Lv, X.G.; Song, Y.Z.; Li, F. An efficient nonconvex regularization for wavelet frame and total variation based image restoration. J. Comput. Appl. Math. 2015, 290, 553–566.
45. Dong, L.Q.; Jin, W.Q.; Zhou, X.X. Fast curvelet transform based non-uniformity correction for IRFPA. In Proceedings of the Infrared Materials, Devices, and Applications, Beijing, China, 11–15 November 2008; Volume 6835, pp. 331–338.
46. Wu, X.; Zheng, L.; Liu, C.; Gao, T.; Zhang, Z.; Yang, B. Single-image simultaneous destriping and denoising: Double low-rank property. Remote Sens. 2023, 15, 5710.
47. Wang, S.P. Stripe noise removal for infrared image by minimizing difference between columns. Infrared Phys. Technol. 2016, 77, 58–64.
48. Chen, Y.; He, W.; Yokoya, N.; Huang, T.-Z. Hyperspectral image restoration using weighted group sparsity-regularized low-rank tensor decomposition. IEEE Trans. Cybern. 2020, 50, 3556–3570.
49. Kim, N.; Han, S.S.; Jeong, C.S. ADOM: ADMM-based optimization model for stripe noise removal in remote sensing image. IEEE Access 2023, 11, 106587–106606.
50. Silver, D.; Hasselt, H.; Hessel, M.; Schaul, T.; Guez, A.; Harley, T.; Dulac-Arnold, G.; Reichert, D.; Rabinowitz, N.; Barreto, A.; et al. The predictron: End-to-end learning and planning. In Proceedings of the International Conference on Machine Learning (ICML), Sydney, Australia, 6–11 August 2017; pp. 3191–3199.
51. Chen, S.; Deng, F.; Zhang, H.; Lyu, S.; Kou, Z.; Yang, J. Infrared non-uniformity correction model via deep convolutional neural network. In Proceedings of the IET Conference CP820, Beijing, China, 11–13 November 2022; pp. 178–184.
52. Jiang, C.; Li, Z.; Wang, Y.; Chen, T. Non-uniformity correction of spatial object images using multi-scale residual cycle network (CycleMRSNet). Sensors 2025, 25, 1389.
53. Shi, M.; Wang, H. Infrared dim and small target detection based on denoising autoencoder network. Mob. Netw. Appl. 2020, 25, 1469–1483.
  54. Li, T.; Zhao, Y.; Li, Y.; Zhou, G. Non-uniformity correction of infrared images based on improved CNN with long-short connections. IEEE Photonics J. 2021, 13, 7800213. [Google Scholar] [CrossRef]
  55. Ma, Z.; Zhang, S.; Wang, L.; Rao, Q. Global context-aware method for infrared image non-uniformity correction. In Proceedings of the 3rd International Conference on Artificial Intelligence and Computer Information Technology (AICIT), Yichang, China, 20–22 September 2024; pp. 1–5. [Google Scholar]
  56. Zhang, S.; Sui, X.; Yao, Z.; Gu, G.; Chen, Q. Research on nonuniformity correction based on deep learning. In Proceedings of the AOPC 2021: Infrared Device and Infrared Technology, Beijing, China, 23–25 July 2021; Volume 12061, pp. 97–102. [Google Scholar]
  57. Zhang, A.; Li, Y.; Wang, S. 2DDSRU-MobileNet: An end-to-end cloud-noise-robust lightweight convolution neural network. J. Appl. Remote Sens. 2024, 18, 024511. [Google Scholar] [CrossRef]
  58. Zhang, Y.; Gu, Z. DMANet: An image denoising network based on dual convolutional neural networks with multiple attention mechanisms. Circuits Syst. Signal Process. 2025; in press. [Google Scholar]
  59. Zhao, M.; Cao, G.; Huang, X.; Yang, L. Hybrid transformer-CNN for real image denoising. IEEE Signal Process. Lett. 2022, 29, 1252–1256. [Google Scholar] [CrossRef]
  60. Hsiao, T.Y.; Sfarra, S.; Liu, Y.; Yao, Y. Two-dimensional Hilbert–Huang transform-based thermographic data processing for non-destructive material defect detection. Quant. InfraRed Thermogr. J. 2024; in press. [Google Scholar]
  61. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  62. Davis, J.; Keck, M. A two-stage approach to person detection in thermal imagery. In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), Breckenridge, CO, USA, 5–7 January 2005. [Google Scholar]
  63. Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1037–1045. [Google Scholar]
  64. Jia, X.; Zhu, C.; Li, M.; Tang, W.; Zhou, W. LLVIP: A visible-infrared paired dataset for low-light vision. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021. [Google Scholar]
Figure 1. Overview of the proposed non-uniformity correction framework.
Figure 2. Correction results on real high-resolution infrared line-scan imagery.
Figure 3. (a) Original infrared image with an 8-level gray-scale color bar; (b) row-mean signal.
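To make the projection step in Figure 3 concrete, a minimal sketch is given below (Python/NumPy; `row_mean_signal` is an illustrative name, not from the paper, and it assumes stripes run along rows so that averaging over columns isolates the per-row offset):

```python
import numpy as np

def row_mean_signal(image: np.ndarray) -> np.ndarray:
    """Project a 2D line-scan frame onto a 1D row-mean signal.

    Stripe noise is nearly constant along each scan row, so averaging
    across columns concentrates the stripe component into one sample
    per row while averaging out most scene texture.
    """
    return image.astype(np.float64).mean(axis=1)
```

Because scene texture is largely decorrelated along a long row, the residual variation of this 1D signal is dominated by the stripe profile, which is what the subsequent filtering operates on.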
Figure 4. Variation of local MAD and corresponding dynamic filter radius.
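The MAD-to-radius behavior shown in Figure 4 can be sketched as follows; the window length, the radius bounds, and the linear mapping are illustrative assumptions rather than the paper's exact rule:

```python
import numpy as np

def local_mad(signal: np.ndarray, win: int = 15) -> np.ndarray:
    """Sliding-window median absolute deviation (MAD) of a 1D signal."""
    half = win // 2
    padded = np.pad(signal, half, mode="reflect")
    mad = np.empty(signal.size)
    for i in range(signal.size):
        w = padded[i:i + win]
        mad[i] = np.median(np.abs(w - np.median(w)))
    return mad

def dynamic_radius(mad: np.ndarray, r_min: int = 5, r_max: int = 30) -> np.ndarray:
    """Map normalized local MAD to a per-row filter radius: high MAD
    (scene structure) gets a small, detail-preserving window; low MAD
    (flat background) gets a large, strongly smoothing window."""
    norm = (mad - mad.min()) / (mad.max() - mad.min() + 1e-12)
    return np.rint(r_max - norm * (r_max - r_min)).astype(int)
```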
Figure 5. MAD-driven fusion strategy. (a) Small-window filtering result (R = 5); (b) large-window filtering result (R = 30); (c) MAD-driven fusion result.
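One plausible realization of the fusion in Figure 5 is sketched below: the 1D self-guided filter of He et al. [61] applied at the two radii, blended per row by the normalized MAD weight. The specific blending rule is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def guided_filter_1d(p: np.ndarray, r: int, eps: float) -> np.ndarray:
    """Self-guided 1D guided filter (guide = input), after He et al. [61]."""
    p = p.astype(np.float64)
    size = 2 * r + 1
    mean_p = uniform_filter1d(p, size, mode="reflect")
    var_p = uniform_filter1d(p * p, size, mode="reflect") - mean_p ** 2
    a = var_p / (var_p + eps)      # edge-aware gain: ~1 at strong structure
    b = (1.0 - a) * mean_p         # offset so flat regions tend to local mean
    return (uniform_filter1d(a, size, mode="reflect") * p
            + uniform_filter1d(b, size, mode="reflect"))

def fuse_by_mad(signal, mad, r_small=5, r_large=30, eps=1e-3):
    """Blend small- and large-window results with MAD-based weights."""
    w = (mad - mad.min()) / (mad.max() - mad.min() + 1e-12)  # ~1 near structure
    return (w * guided_filter_1d(signal, r_small, eps)
            + (1.0 - w) * guided_filter_1d(signal, r_large, eps))
```

The small window preserves genuine scene transitions while the large window flattens slowly varying stripe residue; the MAD weight decides, row by row, which behavior dominates.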
Figure 6. Image restoration results. (a) Corrected image; (b) stripe noise map; (c) comparison of row-mean results.
Figure 7. Visual comparison of correction results under different module combinations.
Figure 8. Sensitivity to the large-window radius r.
Figure 9. Sensitivity to the smoothing coefficient ε.
Figure 10. Sensitivity to the spectral threshold β.
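For the spectral threshold β examined in Figure 10, the sketch below shows one way an entropy-constrained frequency mask can be realized: the row-mean spectrum is notched only when its normalized entropy indicates concentrated (periodic) interference, and only at bins whose magnitude exceeds β times the median magnitude. Both the entropy gate and the median-based threshold are illustrative assumptions, not the paper's published criterion:

```python
import numpy as np

def spectral_entropy(mag: np.ndarray) -> float:
    """Normalized Shannon entropy of a magnitude spectrum, in [0, 1]."""
    p = mag / (mag.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(mag.size))

def entropy_constrained_mask(signal: np.ndarray, beta: float = 3.0,
                             h_max: float = 0.9) -> np.ndarray:
    """Notch dominant spectral peaks of the row-mean signal unless the
    spectrum is near-flat (high entropy), which suggests no periodic
    interference worth removing."""
    mean = signal.mean()
    spec = np.fft.rfft(signal - mean)
    mag = np.abs(spec)
    if spectral_entropy(mag) > h_max:
        return signal.copy()                 # flat spectrum: leave untouched
    keep = mag <= beta * np.median(mag)      # suppress outlier peaks only
    keep[0] = True                           # always keep the DC bin
    return np.fft.irfft(spec * keep, n=signal.size) + mean
```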
Figure 11. Visual comparison of correction results on OSU-5.
Figure 12. Visual comparison of correction results on KAIST-3.
Figure 13. Visual comparison of correction results on LLVIP-4.
Figure 14. Visual comparison on Tendero’s dataset.
Figure 15. Visual comparison of correction results on the long-wave infrared circumferential-scan dataset.
Table 1. Five experimental configurations (✓ = module enabled; × = module disabled).

| Combination | DR | FM | ET | Description |
|---|---|---|---|---|
| B0 (Baseline) | × | × | × | Guided filtering using only large windows |
| B1 (B0 + DR) | ✓ | × | × | Validates the locally adaptive role of DR |
| B2 (B0 + FM) | × | ✓ | × | Verifies periodic stripe suppression by FM |
| B3 (B0 + DR + FM) | ✓ | ✓ | × | Tests the complementarity of DR and FM |
| B4 (Full) | ✓ | ✓ | ✓ | Full model |
Table 2. Quantitative performance metrics of each module combination on real IR images.

| Combination | PSNR (dB) | SSIM | Roughness | GC | NR |
|---|---|---|---|---|---|
| B0 (Baseline) | 38.59 | 0.9164 | 0.0218 | 0.5478 | 0.999 |
| B1 (B0 + DR) | 38.88 | 0.9168 | 0.0220 | 0.5488 | 1.000 |
| B2 (B0 + FM) | 39.49 | 0.9341 | 0.0270 | 0.4925 | 0.999 |
| B3 (B0 + DR + FM) | 42.20 | 0.9652 | 0.0319 | 0.3558 | 1.000 |
| B4 (Full) | 42.44 | 0.9669 | 0.0324 | 0.3447 | 1.001 |
Table 3. Noise parameter settings for the OSU, KAIST, and LLVIP simulated datasets.

| Noise Group | σg² | σb² | σwhite² | A | f₀ | φ | Characterization |
|---|---|---|---|---|---|---|---|
| OSU-1 | 0.01 | 0.01 | — | — | — | — | Mild column bias and gain error |
| OSU-2 | 0.015 | 0.035 | — | — | — | — | Offset-dominant, enhanced stripe structure |
| OSU-3 | 0.04 | 0.015 | — | — | — | — | Gain-dominant, strongly varying response |
| OSU-4 | — | — | — | 0.05 | 0.08 | π/3 | Isolated simulation of periodic disturbance |
| OSU-5 | 0.025 | 0.025 | 0.01 | 0.06 | 0.06 | π/2 | Compound stripe and periodic interference |
| KAIST-1 | 0.02 | 0.02 | — | — | — | — | Moderately balanced non-uniform structure |
| KAIST-2 | 0.04 | 0.015 | — | — | — | — | Gain dominance with texture perturbation |
| KAIST-3 | 0.025 | 0.045 | — | — | — | — | Bias enhancement with distinct streaks |
| KAIST-4 | — | — | — | 0.07 | 0.07 | π/4 | Purely periodic mains-frequency interference |
| KAIST-5 | 0.03 | 0.03 | 0.01 | 0.08 | 0.05 | π/2 | Structural streaks coupled with periodic noise |
| LLVIP-1 | 0.02 | 0.015 | — | — | — | — | Slightly non-uniform structure at high resolution |
| LLVIP-2 | 0.035 | 0.025 | — | — | — | — | Gain-dominant band microtexture interference |
| LLVIP-3 | 0.03 | 0.04 | 0.02 | — | — | — | Bias enhancement with mild white noise |
| LLVIP-4 | — | — | — | 0.09 | 0.04 | π/2 | High-resolution periodic stripe jitter |
| LLVIP-5 | 0.04 | 0.04 | 0.01 | 0.12 | 0.03 | π | Multi-source joint extreme interference |
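The parameter names in Table 3 suggest a row-wise gain and bias model with optional white noise and a sinusoidal stripe term. A hedged generator along those lines is sketched below; the multiplicative-gain form and the cycles-per-row convention for f₀ are assumptions, not the paper's published model:

```python
import numpy as np

def simulate_line_scan_noise(img: np.ndarray, sigma_g2: float = 0.0,
                             sigma_b2: float = 0.0, sigma_w2: float = 0.0,
                             A: float = 0.0, f0: float = 0.0,
                             phi: float = 0.0, seed: int = 0) -> np.ndarray:
    """Apply row-wise non-uniformity to a clean frame:
    per-row gain ~ N(1, sigma_g2), per-row bias ~ N(0, sigma_b2),
    pixel-wise white noise ~ N(0, sigma_w2), plus a periodic stripe
    A * sin(2*pi*f0*row + phi) shared along each row."""
    rng = np.random.default_rng(seed)
    rows, _ = img.shape
    gain = rng.normal(1.0, np.sqrt(sigma_g2), (rows, 1))
    bias = rng.normal(0.0, np.sqrt(sigma_b2), (rows, 1))
    stripe = A * np.sin(2 * np.pi * f0 * np.arange(rows) + phi)[:, None]
    white = rng.normal(0.0, np.sqrt(sigma_w2), img.shape)
    return gain * img.astype(np.float64) + bias + stripe + white
```

Under these assumptions, OSU-5 would correspond to `simulate_line_scan_noise(img, 0.025, 0.025, 0.01, 0.06, 0.06, np.pi / 2)`.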
Table 4. Quantitative results on the OSU simulated dataset.

| Image | Metric | Noise | MIRE | MDBC | TSF | ADOM | GFLF | ASNR | OURS |
|---|---|---|---|---|---|---|---|---|---|
| OSU-1 | PSNR | — | 35.62 | 32.36 | 31.18 | 22.46 | 33.27 | 29.11 | 36.39 |
| | SSIM | — | 0.9641 | 0.9381 | 0.9340 | 0.8782 | 0.9397 | 0.8895 | 0.9686 |
| | Roughness | 0.1275 | 0.1179 | 0.1179 | 0.1164 | 0.1059 | 0.1142 | 0.0782 | 0.1202 |
| OSU-2 | PSNR | — | 31.68 | 30.33 | 30.56 | 22.34 | 32.21 | 28.15 | 33.89 |
| | SSIM | — | 0.9211 | 0.9186 | 0.9208 | 0.8512 | 0.9274 | 0.8199 | 0.9361 |
| | Roughness | 0.1898 | 0.1229 | 0.1205 | 0.1175 | 0.1032 | 0.1184 | 0.0956 | 0.1273 |
| OSU-3 | PSNR | — | 33.06 | 31.73 | 30.77 | 22.34 | 32.57 | 28.59 | 34.83 |
| | SSIM | — | 0.9419 | 0.9243 | 0.9221 | 0.8551 | 0.9279 | 0.8567 | 0.9501 |
| | Roughness | 0.1531 | 0.1233 | 0.1222 | 0.1202 | 0.1087 | 0.1188 | 0.0864 | 0.1274 |
| OSU-4 | PSNR | — | 29.49 | 30.51 | 30.66 | 22.08 | 31.66 | 28.24 | 32.18 |
| | SSIM | — | 0.9047 | 0.9045 | 0.9239 | 0.8422 | 0.9272 | 0.8294 | 0.9305 |
| | Roughness | 0.1311 | 0.1237 | 0.1184 | 0.1156 | 0.1031 | 0.1135 | 0.0910 | 0.1288 |
| OSU-5 | PSNR | — | 28.30 | 29.23 | 29.35 | 22.42 | 31.18 | 26.51 | 31.82 |
| | SSIM | — | 0.8592 | 0.8648 | 0.8768 | 0.8142 | 0.8918 | 0.7661 | 0.9085 |
| | Roughness | 0.1836 | 0.1428 | 0.1416 | 0.1390 | 0.1164 | 0.1374 | 0.1135 | 0.1453 |
Table 5. Quantitative results on the KAIST simulated dataset.

| Image | Metric | Noise | MIRE | MDBC | TSF | ADOM | GFLF | ASNR | OURS |
|---|---|---|---|---|---|---|---|---|---|
| KAIST-1 | PSNR | — | 40.87 | 34.64 | 41.45 | 35.24 | 41.58 | 39.90 | 42.25 |
| | SSIM | — | 0.9271 | 0.9112 | 0.9299 | 0.8792 | 0.9274 | 0.9153 | 0.9302 |
| | Roughness | 0.2204 | 0.0631 | 0.0825 | 0.0705 | 0.0591 | 0.0676 | 0.0607 | 0.0910 |
| KAIST-2 | PSNR | — | 41.73 | 34.80 | 42.27 | 39.26 | 42.28 | 41.02 | 42.64 |
| | SSIM | — | 0.9393 | 0.9194 | 0.9405 | 0.9178 | 0.9382 | 0.9327 | 0.9431 |
| | Roughness | 0.1875 | 0.0645 | 0.0840 | 0.0727 | 0.0695 | 0.0695 | 0.0561 | 0.1020 |
| KAIST-3 | PSNR | — | 36.84 | 33.10 | 36.27 | 32.54 | 36.90 | 34.81 | 37.27 |
| | SSIM | — | 0.8687 | 0.8321 | 0.8638 | 0.7998 | 0.8623 | 0.8243 | 0.8755 |
| | Roughness | 0.4340 | 0.0704 | 0.1076 | 0.0866 | 0.0797 | 0.0877 | 0.0871 | 0.1119 |
| KAIST-4 | PSNR | — | 38.56 | 31.89 | 36.80 | 30.17 | 38.56 | 29.81 | 39.57 |
| | SSIM | — | 0.8757 | 0.7785 | 0.8488 | 0.7423 | 0.8753 | 0.6493 | 0.8938 |
| | Roughness | 0.1785 | 0.0518 | 0.0916 | 0.0686 | 0.0901 | 0.0563 | 0.1232 | 0.1320 |
| KAIST-5 | PSNR | — | 34.51 | 27.88 | 30.47 | 32.25 | 34.26 | 27.18 | 36.60 |
| | SSIM | — | 0.7862 | 0.6646 | 0.7098 | 0.7447 | 0.7817 | 0.5774 | 0.8548 |
| | Roughness | 0.3886 | 0.1920 | 0.2235 | 0.2097 | 0.2215 | 0.1985 | 0.2194 | 0.2406 |
Table 6. Quantitative results on the LLVIP simulated dataset.

| Image | Metric | Noise | MIRE | MDBC | TSF | ADOM | GFLF | ASNR | OURS |
|---|---|---|---|---|---|---|---|---|---|
| LLVIP-1 | PSNR | — | 41.45 | 40.33 | 42.47 | 32.36 | 43.73 | 39.88 | 44.29 |
| | SSIM | — | 0.9804 | 0.9621 | 0.9726 | 0.8917 | 0.9798 | 0.9706 | 0.9827 |
| | Roughness | 0.0675 | 0.0280 | 0.0356 | 0.0330 | 0.0326 | 0.0307 | 0.0276 | 0.0378 |
| LLVIP-2 | PSNR | — | 39.32 | 36.45 | 39.22 | 32.07 | 41.04 | 37.42 | 41.76 |
| | SSIM | — | 0.9738 | 0.9302 | 0.9521 | 0.8807 | 0.9682 | 0.9474 | 0.9747 |
| | Roughness | 0.1004 | 0.0292 | 0.0424 | 0.0380 | 0.0365 | 0.0344 | 0.0342 | 0.0478 |
| LLVIP-3 | PSNR | — | 33.53 | 32.54 | 33.90 | 31.13 | 34.50 | 33.80 | 35.71 |
| | SSIM | — | 0.8055 | 0.7891 | 0.8051 | 0.7909 | 0.8119 | 0.8254 | 0.8470 |
| | Roughness | 0.1613 | 0.0785 | 0.0822 | 0.0800 | 0.0702 | 0.0790 | 0.0565 | 0.0827 |
| LLVIP-4 | PSNR | — | 36.10 | 27.91 | 28.17 | 26.24 | 34.74 | 25.17 | 37.05 |
| | SSIM | — | 0.9625 | 0.8207 | 0.0832 | 0.7776 | 0.9527 | 0.7230 | 0.9712 |
| | Roughness | 0.0509 | 0.0272 | 0.0392 | 0.0381 | 0.0416 | 0.0278 | 0.0454 | 0.0470 |
| LLVIP-5 | PSNR | — | 28.30 | 29.23 | 29.35 | 22.42 | 31.18 | 26.51 | 31.82 |
| | SSIM | — | 0.8592 | 0.8648 | 0.8768 | 0.8142 | 0.8918 | 0.7661 | 0.9085 |
| | Roughness | 0.1836 | 0.1428 | 0.1416 | 0.1390 | 0.1164 | 0.1374 | 0.1135 | 0.1453 |
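Among the metrics in Tables 4, 5 and 6, PSNR and SSIM are standard. The roughness index is commonly defined in the destriping literature as ρ = (‖h ∗ I‖₁ + ‖hᵀ ∗ I‖₁)/‖I‖₁ with h = [1, −1]; a sketch under that assumption is given below (whether the paper uses precisely this form is not confirmed here):

```python
import numpy as np

def roughness(img: np.ndarray) -> float:
    """Roughness index: L1 norm of horizontal and vertical first
    differences, normalized by the L1 norm of the image. Lower values
    indicate a smoother (less striped) image."""
    f = img.astype(np.float64)
    gx = np.abs(np.diff(f, axis=1)).sum()   # ||h * I||_1, h = [1, -1]
    gy = np.abs(np.diff(f, axis=0)).sum()   # ||h^T * I||_1
    return float((gx + gy) / (np.abs(f).sum() + 1e-12))
```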
Table 7. Quantitative results on Tendero’s dataset.

| Image | Metric | Noise | MIRE | MDBC | TSF | ADOM | GFLF | ASNR | OURS |
|---|---|---|---|---|---|---|---|---|---|
| Tendero’s data | PSNR | — | 25.63 | 21.71 | 25.65 | 23.72 | 27.84 | 28.34 | 29.74 |
| | SSIM | — | 0.6043 | 0.6219 | 0.6048 | 0.5761 | 0.7811 | 0.7450 | 0.8296 |
| | Roughness | 0.3357 | 0.1419 | 0.1515 | 0.1429 | 0.1298 | 0.2118 | 0.1105 | 0.2291 |
| | GC | — | 0.5014 | 0.4180 | 0.4287 | 0.4204 | 0.3663 | 0.4974 | 0.3322 |
| | NR | — | 1.0410 | 1.0318 | 1.0420 | 1.0580 | 1.0344 | 1.0428 | 1.0671 |
Table 8. Quantitative results on the long-wave infrared circumferential-scan dataset.

| Image | Metric | Noise | MIRE | MDBC | TSF | ADOM | GFLF | ASNR | OURS |
|---|---|---|---|---|---|---|---|---|---|
| Circumferential-scan dataset | PSNR | — | 36.55 | 37.05 | 36.96 | 30.85 | 36.63 | 37.27 | 37.78 |
| | SSIM | — | 0.8372 | 0.8601 | 0.8440 | 0.8070 | 0.8415 | 0.8721 | 0.8784 |
| | Roughness | 0.0393 | 0.0305 | 0.0361 | 0.0337 | 0.0215 | 0.0315 | 0.0189 | 0.0372 |
| | GC | — | 0.8638 | 0.7728 | 0.8245 | 0.8462 | 0.8286 | 0.8428 | 0.5748 |
| | NR | — | 1.0097 | 1.0115 | 1.0089 | 1.0367 | 1.0125 | 1.0092 | 1.0417 |
Table 9. Comparison of the runtime of different algorithms.

| Algorithm | MIRE | MDBC | TSF | ADOM | GFLF | ASNR | OURS |
|---|---|---|---|---|---|---|---|
| Time/s | 492.1527 | 0.2744 | 5.9356 | 192.0058 | 1.5345 | 7.1714 | 0.1815 |