Article

An Adaptive Weighted Residual-Guided Algorithm for Non-Uniformity Correction of High-Resolution Infrared Line-Scanning Images

1. University of Chinese Academy of Sciences, Beijing 100049, China
2. Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
3. School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(5), 1511; https://doi.org/10.3390/s25051511
Submission received: 2 January 2025 / Revised: 23 February 2025 / Accepted: 27 February 2025 / Published: 28 February 2025
(This article belongs to the Section Electronic Sensors)

Abstract

Gain and bias non-uniformities in infrared line-scanning detectors often produce horizontal streak noise that degrades image quality. This paper introduces a novel non-uniformity correction algorithm combining residual guidance and adaptive weighting. A dual-guidance mechanism fuses the residual and original images, and iterative compensation together with locally weighted linear regression significantly enhances both denoising performance and detail preservation. Additionally, the algorithm employs local variance to adjust weights dynamically, achieving efficient correction in complex scenes while keeping computational complexity low enough for real-time applications. Experimental results on both simulated and real infrared datasets demonstrate that the proposed method outperforms mainstream algorithms on peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics, achieving an optimal balance between detail preservation and noise suppression. The algorithm is robust in complex scenes, making it suitable for real-time applications in high-resolution infrared imaging systems.

1. Introduction

In recent years, significant advancements in infrared imaging technology and manufacturing processes have led to widespread application in thermal imaging, environmental monitoring, and military operations [1,2]. However, infrared detectors are susceptible to environmental temperature and voltage fluctuations, which cause variations in response coefficients. These variations generate fixed-pattern streak noise that is difficult to eliminate through hardware solutions, thereby degrading image quality and adversely affecting subsequent image processing [3,4,5,6].
Researchers have developed non-uniformity correction methods to address these challenges, mitigate streak noise, and improve image quality. Non-uniformity correction methods can be broadly classified into reference-based [7] and scene-based approaches [8,9]. Reference-based correction methods use blackbody images to calculate correction coefficients [10,11,12,13,14,15,16]. While straightforward and easy to implement, these methods have notable limitations. They require periodic calibration, which is particularly inconvenient in dynamic environments where calibration coefficients can drift over time as operating conditions change [13]. Recent studies have attempted to address thermal drift via a semi-transparent shutter [17] or camera housing stabilization [18]. However, such hardware-based solutions can still be challenging for real-time or large-scale deployments. Moreover, standard global calibrations may be inadequate in fine-detail scenarios, as Pron and Bouache [19] show that localized or pixel-level calibration can outperform manufacturer-provided corrections, albeit with increased computational overhead. Furthermore, reference-based methods struggle with complex noise patterns, especially in variable environments, and fail to provide effective real-time solutions.
Unlike reference-based methods, scene-based correction methods do not rely on external reference images and include correction algorithms based on filtering, statistics, model optimization, and neural networks. Despite their promise, scene-based methods face several challenges. Filter-based methods effectively remove low-frequency noise but often cause significant detail loss, particularly in high-resolution images [20,21,22,23,24,25,26,27]. Optimization-based methods, which leverage techniques such as total variation regularization or low-rank matrix decomposition, can effectively separate noise but suffer from high computational complexity, making them less practical for real-time high-resolution image processing [28,29,30,31,32,33]. Neural network-based methods excel in noise suppression and image enhancement but require extensive training datasets and computational resources, limiting their applicability [34,35,36,37]. Statistical methods, such as histogram and moment matching, infer noise distribution based on grayscale statistics [38,39,40,41,42]. However, their performance diminishes when noise deviates from assumed distributions, and they often fail to preserve image details and structural features.
High-resolution infrared line-scanning images present unique challenges due to their directional and structured noise, which arises from the same detector element generating each row of pixels. To tackle these challenges, this paper introduces an innovative non-uniformity correction algorithm based on residual guidance and adaptive weighting. The key contributions of this work are as follows:
(1) Residual-guided filtering and dual-guidance mechanism: The proposed method combines residual and original images as guidance for detail preservation and global smoothing, respectively. Through weighted fusion, it addresses the edge-blurring issues inherent in traditional guided filtering, significantly enhancing detail preservation, particularly in high-resolution images.
(2) Iterative residual compensation scheme: A dynamic residual compensation mechanism is introduced to optimize the correction results by gradually smoothing the Gaussian filtering, eliminating the residual noise while avoiding introducing artifacts by over-compensation. Compared with the static compensation scheme in the traditional method, the dynamic compensation can adaptively adjust the compensation intensity according to the distribution of noise, which significantly improves the robustness of the algorithm.
(3) Weighted linear regression based on local variance: Local variance is used to adjust the weights of each region dynamically, and differentiated correction is implemented for areas with different noise intensities, improving global correction accuracy. Unlike the traditional method that uses uniform weights, this algorithm can adaptively adjust according to the local features of the image, which is especially suitable for infrared image processing in complex scenes.
(4) Efficiently adapting to complex scenes: A local image interception strategy based on scene complexity analysis is proposed to optimize computational efficiency, enabling the algorithm to meet the real-time processing requirements in high-resolution images. Compared with the correction method based on region segmentation, this algorithm can characterize the noise distribution of the whole image more accurately without introducing apparent deviations.
Experimental results show that the proposed algorithm achieves excellent denoising and detail preservation under varying noise intensities and complex scenes. Compared with mainstream methods such as those implemented by Cao [43], Li [23], and Ahmed [24], the algorithm significantly improves PSNR and SSIM metrics and demonstrates robust performance in high-resolution infrared line-scanning images. By effectively addressing the limitations of detail preservation and the high computational costs of existing methods, this algorithm provides an efficient and practical solution for non-uniformity correction in complex scenes within infrared line-scanning systems.

2. Materials and Methods

Infrared line-scanning imaging is characterized by its one-dimensional progressive scanning mechanism, making horizontal stripe noise one of its significant challenges. This noise, caused by the non-uniform response of the detector array, manifests as stripe-like intensity fluctuations along the scanning direction, severely degrading image quality. To address this issue, we propose an innovative residual-guided adaptive correction algorithm. The overall flow of the method is illustrated in Figure 1.
The algorithm comprises four main stages: (1) Row-mean calculation and residual generation: a portion of the image columns is extracted, and the row mean is calculated to generate a residual image. (2) One-dimensional guided filtering and result fusion: The residual and the original images are utilized in a dual-guidance mechanism to perform one-dimensional guided filtering on the row-mean image. Local variance information is then used to fuse the filtering results, producing a preliminary corrected image. (3) Iterative residual compensation: residuals from the filtered image are iteratively compensated using a dynamic optimization approach, refining the corrected result. (4) Linear correction coefficient application: linear correction coefficients are calculated based on the optimized portion of the image and applied globally to correct the entire image. The non-uniformity correction results of the proposed algorithm for real infrared scanning images are shown in Figure 2.

2.1. Row-Mean Calculation and Residual Generation

Let the input 2D grayscale image be denoted as $I(i,j)$. A subset of columns is extracted, forming a smaller grayscale image containing $M$ rows and $N$ columns. The row mean, which represents the average pixel value of all columns in each row, is defined as follows:

$$\bar{I}(i) = \frac{1}{N}\sum_{j=1}^{N} I(i,j)$$

where $i \in [1, M]$ is the row index. The row-mean image $\bar{I}(i,j)$ is obtained by expanding the mean value of each row across the entire row:

$$\bar{I}(i,j) = \bar{I}(i), \quad \forall j \in [1, N]$$

The row-mean image $\bar{I}(i,j)$ captures the low-frequency components of stripe noise, while high-frequency details are retained in the residual image, which is defined as follows:

$$R(i,j) = I(i,j) - \bar{I}(i,j)$$
This operation separates low-frequency noise from high-frequency details, with the residual image as a guide for subsequent filtering. Figure 3 illustrates the residual generation process.
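The decomposition above can be sketched in a few lines of NumPy (an illustrative sketch; the function name and array conventions are ours, not the paper's):

```python
import numpy as np

def row_mean_residual(img):
    """Split an image into its row-mean and residual components.

    The row mean captures the low-frequency stripe pattern of a
    line-scanning detector; the residual retains high-frequency detail.
    """
    row_mean = img.mean(axis=1, keepdims=True)           # per-row mean
    row_mean_img = np.broadcast_to(row_mean, img.shape)  # expand across columns
    residual = img - row_mean_img                        # high-frequency residual
    return row_mean_img, residual
```

By construction, each row of the residual has zero mean, so any row-wise offset (the stripe component) is pushed entirely into the row-mean image.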

2.2. One-Dimensional Guided Filtering and Fusion

2.2.1. Principle of One-Dimensional Guided Filtering

Kaiming He [44] proposed the guided filtering model in 2013. This model structurally optimizes the input image through a local linear model. The key formulas are shown in Equations (4)–(7); the full derivation is detailed in Reference [44].

$$q_i = a_k I_i + b_k, \quad \forall i \in w_k \qquad (4)$$

In Equation (4), $q_i$ is the output image pixel value, $(a_k, b_k)$ are the linear coefficients within a small window, $I_i$ is the guide image pixel value, and $w_k$ is the window of the guided filter. The differences between the output image $q$ and the input image $p$ represent noise or other textures in the image that need to be eliminated. We define the loss function between image $p$ and image $q$ within the window $w_k$ as shown in Equation (5):

$$E(a_k, b_k) = \sum_{i \in w_k} \left[ \left(a_k I_i + b_k - p_i\right)^2 + \varepsilon a_k^2 \right] \qquad (5)$$

In Equation (5), $\varepsilon$ is the penalty parameter. The main goal of guided filtering is to minimize the difference between the output image $q$ and the input image $p$. The optimal coefficients $a_k$ and $b_k$ are given by the following equations:

$$a_k = \frac{\frac{1}{|w|}\sum_{i \in w_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon} \qquad (6)$$

$$b_k = \bar{p}_k - a_k \mu_k \qquad (7)$$

where $\mu_k$ and $\sigma_k^2$ are the mean and variance of $I$ within $w_k$, $|w|$ is the number of pixels within the window $w_k$, and $\bar{p}_k$ is the mean of the input image $p$ within the window $w_k$.
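A minimal one-dimensional guided filter implementing Equations (4)–(7) can be written as follows (an illustrative NumPy sketch with edge padding; window handling details may differ from the authors' implementation):

```python
import numpy as np

def box1d(x, r):
    """Moving average over a window of 2r+1 samples (edge-padded)."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode="edge")
    c = np.cumsum(np.concatenate(([0.0], xp)))
    return (c[k:] - c[:-k]) / k

def guided_filter_1d(guide, src, r, eps):
    """1D guided filter: local linear model q = a*I + b (Eqs. (4)-(7))."""
    mu_I = box1d(guide, r)                        # window means of the guide
    mu_p = box1d(src, r)                          # window means of the input
    var_I = box1d(guide * guide, r) - mu_I ** 2   # guide variance per window
    cov_Ip = box1d(guide * src, r) - mu_I * mu_p  # guide/input covariance
    a = cov_Ip / (var_I + eps)                    # Eq. (6)
    b = mu_p - a * mu_I                           # Eq. (7)
    # average coefficients over overlapping windows, then apply Eq. (4)
    return box1d(a, r) * guide + box1d(b, r)
```

With the signal used as its own guide and a small $\varepsilon$, the filter is nearly edge-preserving; a flat guide reduces it to plain box smoothing of the input.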

2.2.2. Image-Guided Process

Guided filtering typically uses a single guide image, which can force a trade-off between preserving details and achieving smoothness [44]. To overcome this, our method introduces a dual-guidance approach, using both the residual and original images as guides. The residual image $R(i,j)$ isolates high-frequency details, enabling targeted optimization of low-frequency noise, while the original image contributes global structural and intensity information that enhances large-scale smoothing and edge preservation. By combining these two guides, the algorithm achieves enhanced detail preservation and noise suppression.
Based on the principle of guided filtering described in Section 2.2.1, the residual image $R(i,j)$ is first used as the guide image. The residual image, which contains the detailed signal after stripe-noise removal, is applied to optimize the row-mean image $\bar{I}(i,j)$ in a targeted manner. The result of guided filtering based on the residual image is expressed as follows:

$$\hat{I}_R(i,j) = a_R(i)\, R(i,j) + b_R(i)$$

where $a_R(i)$ and $b_R(i)$ are the local linear coefficients calculated from the residual image, and $\hat{I}_R(i,j)$ represents the output image obtained after guided filtering of the row-mean image using the residual image. The results of $\hat{I}_R(i,j)$ and the corresponding row-mean signal are shown in Figure 4a,b.

Then, the intercepted original image $I(i,j)$ is used as the guide image. The original image, which contains global structural and intensity information, enhances global consistency and edge preservation during the guided filtering of the row-mean image $\bar{I}(i,j)$. The result of guided filtering based on the original image is expressed as follows:

$$\hat{I}_I(i,j) = a_I(i)\, I(i,j) + b_I(i)$$

where $a_I(i)$ and $b_I(i)$ are the local linear coefficients calculated from the original image, and $\hat{I}_I(i,j)$ represents the output image obtained after guided filtering of the row-mean image using the original image. The results of $\hat{I}_I(i,j)$ and the corresponding row-mean signal are shown in Figure 4c,d.
As observed in Figure 4b, the image exhibits more detailed changes and local fluctuations. In contrast, Figure 4d demonstrates relatively smoother curves, highlighting the preservation of the global structure.

2.2.3. Fusion of Weights

Local variance is employed as the fusion weight to balance the contributions of the two guide images. The local variance reflects the degree of variation in pixel values within a region, representing the texture complexity of the image. The fusion result, denoted as $\hat{I}_{fused}(i,j)$, is expressed as follows:

$$\hat{I}_{fused}(i,j) = \omega(i,j)\, \hat{I}_R(i,j) + \left(1 - \omega(i,j)\right) \hat{I}_I(i,j)$$

where the weight function $\omega(i,j)$ is defined as follows:

$$\omega(i,j) = \frac{1}{1 + e^{-k\left(\sigma(i,j) - t\right)}}$$

where $\sigma(i,j)$ is the local variance measure, $k$ controls the steepness of the mapping, and $t$ is the threshold at which the local features cause the weights to change rapidly. The computation of $\omega(i,j)$ depends on the local variance derived from the input image. During the fusion of the two guided-filter outputs, this nonlinear sigmoid function maps the local variance to dynamically adjust the fusion weights. Figure 1 illustrates the overall workflow of the residual-guided filtering algorithm.
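The weighted fusion of the two filtered outputs can be sketched as follows (the default `k` and `t` values are illustrative, not the paper's tuned settings):

```python
import numpy as np

def fuse_outputs(out_res, out_org, local_var, k=50.0, t=0.01):
    """Fuse residual-guided and original-guided filter outputs.

    A sigmoid of the local variance sets the weight: textured
    (high-variance) regions favour the detail-preserving residual-guided
    result; flat regions favour the smoother original-guided result.
    """
    w = 1.0 / (1.0 + np.exp(-k * (local_var - t)))  # fusion weight omega
    return w * out_res + (1.0 - w) * out_org
```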

2.3. Nonlinear Scaling and Residual-Based Iterative Compensation

2.3.1. Nonlinear Scaling

The algorithm incorporates a nonlinear scaling strategy before the iterative process to optimize correction results and provide better initial conditions for subsequent iterative residual compensation. This nonlinear scaling is based on the statistical properties of local regions, dynamically adjusting the contribution of residuals to the correction results. The scaling factor is decreased for high-variance regions to prevent over-enhancement of existing details and increased for low-variance regions to enhance potential information. This adjustment refines the distribution of the corrected image, improving both the stability and convergence speed of the iterative compensation.
The Tanh function calculates the final nonlinear scaling factor $S(i,j)$. Characterized by its steep response within the output range [−1, 1], the Tanh function is sensitive to changes in local variance, making it well suited as a scaling factor. After transformation, it smooths transitions within the image to prevent artifacts caused by abrupt weight changes. The expression for the nonlinear scaling factor $S(i,j)$ is given as follows:

$$S(i,j) = 1 - \tanh\left(\gamma \left(V(i,j) - t\right)\right)$$

where $V(i,j)$ represents the local variance, which reflects the intensity of details in the local region of the image. The parameter $t$ is the threshold for local variance, and $\gamma$ controls the steepness of the scaling function.
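The scaling step can be expressed directly (the `gamma` and `t` defaults are illustrative):

```python
import numpy as np

def scale_factor(local_var, gamma=10.0, t=0.05):
    """Nonlinear residual scaling S = 1 - tanh(gamma * (V - t)).

    High-variance regions receive a smaller factor (avoiding
    over-enhancement of existing detail); low-variance regions a larger one.
    """
    return 1.0 - np.tanh(gamma * (np.asarray(local_var, dtype=float) - t))
```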

2.3.2. Iteration Based on Residual Compensation

The residual $R_k(i,j)$, representing the difference between the input image and its corrected version after the $k$-th iteration, is calculated as follows:

$$R_k(i,j) = I(i,j) - I_k(i,j)$$

where $I(i,j)$ is the original input image and $I_k(i,j)$ is the corrected image at iteration $k$.

To achieve optimal calibration, the compensation factor $\alpha_k$ is dynamically adjusted. The initial factor $\alpha_0$ sets the intensity of the first correction, while subsequent factors decrease with the residual standard deviation $\sigma_k$, ensuring stable convergence and avoiding artifacts. The compensation factor is expressed as follows:

$$\alpha_k = \alpha_0 \frac{\sigma_k}{\sigma_0} \beta^{k-1}$$

where $\sigma_0$ and $\sigma_k$ are the standard deviations of the initial and current residuals, and $\beta \in (0, 1)$ controls the reduction rate.

To suppress high-frequency noise, the residuals $R_k(i,j)$ are smoothed using a Gaussian filter, and the corrected image is updated as follows:

$$I_{k+1}(i,j) = I_k(i,j) + \alpha_k R_k^{Smooth}(i,j)$$

where $R_k^{Smooth}(i,j)$ denotes the smoothed residuals. The iteration stops when the residual standard deviation $\sigma_k$ drops below $\varepsilon \sigma_0$ (with $\varepsilon$ as a predefined scale factor) or when the maximum iteration count $K_{max}$ is reached.
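The compensation loop might look as follows (a simplified sketch: the Gaussian smoothing is applied row-wise with a small fixed kernel, and the default parameters are illustrative rather than the paper's tuned values):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=3):
    """Normalized 1D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def iterative_compensation(original, corrected, alpha0=0.05, beta=0.8,
                           eps=0.1, k_max=5, g_sigma=1.5):
    """Iteratively feed smoothed residuals back into the corrected image."""
    kern = gaussian_kernel1d(g_sigma)
    I_k = corrected.astype(float).copy()
    sigma0 = None
    for k in range(k_max):
        R = original - I_k                      # residual at iteration k
        s = R.std()
        if sigma0 is None:
            sigma0 = s if s > 0 else 1.0
        if s < eps * sigma0 and k > 0:          # convergence criterion
            break
        alpha_k = alpha0 * (s / sigma0) * beta ** k   # decaying factor
        # row-wise Gaussian smoothing suppresses high-frequency noise
        R_s = np.apply_along_axis(
            lambda row: np.convolve(row, kern, mode="same"), 1, R)
        I_k = I_k + alpha_k * R_s
    return I_k
```

Because each update adds only a small, smoothed fraction of the residual, the corrected image moves toward the input without re-introducing high-frequency noise.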

2.4. Weighted Linear Regression and Full Map Correction

The algorithm effectively reduces cross-stripe non-uniformity noise in infrared line-scanning images through the above process. However, the high imaging speed of line-scanning detectors—capable of producing tens of thousands of image columns per second—introduces computational challenges. Specifically, the need to calculate linear coefficients for each window a k , b k and perform iterative residual compensation increases the algorithm’s runtime, making real-time performance challenging to achieve.
The noise exhibits strong row-wise correlations since the same detector element generates each row in a line-scanning image. To address this, linear correction coefficients a ( i ) and b ( i ) are computed for each row to correct the image globally. Traditional uniform-weighted linear regression often leads to under-correction or over-smoothing in regions with varying noise intensities. To mitigate this, the proposed method employs locally weighted linear regression, which dynamically adjusts weights based on local variance.
For a particular row $i$, the corrected pixel values in the intercepted image can be modeled linearly from the input as follows:

$$I_{k+1}(i,j) = a(i)\, I(i,j) + b(i) + \varepsilon_j$$

where $a(i)$ and $b(i)$ are the linear correction coefficients and $\varepsilon_j$ is the fitting error. To reflect the contribution of different pixels to the correction, the weights $\omega(i,j)$ are defined as follows:

$$\omega(i,j) = \frac{1}{1 + V(i,j)}$$

where $V(i,j)$ represents the local variance, reflecting the noise level.

The linear correction parameters $a(i)$ and $b(i)$ are calculated as Equations (18) and (19), respectively:

$$a(i) = \frac{\sum_j \omega(i,j)\, I(i,j)\, I_{k+1}(i,j) - \bar{\omega}(i)\, \bar{I}(i)\, \bar{I}_{k+1}(i)}{\sum_j \omega(i,j)\, I(i,j)^2 - \bar{\omega}(i)\, \bar{I}(i)^2} \qquad (18)$$

$$b(i) = \bar{I}_{k+1}(i) - a(i)\, \bar{I}(i) \qquad (19)$$

where $\bar{\omega}(i) = \sum_j \omega(i,j)$ is the sum of weights, and $\bar{I}(i)$ and $\bar{I}_{k+1}(i)$ are the corresponding weighted mean values. After calculating the correction coefficients $a(i)$ and $b(i)$ for each row, the corrected whole image $I_{corrected}(i,j)$ can be expressed as follows:

$$I_{corrected}(i,j) = a(i)\, I(i,j) + b(i)$$

where $I(i,j)$ is the high-resolution input image and $I_{corrected}(i,j)$ is the final corrected image. The corrected result is shown in Figure 5b.
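The per-row weighted fit and full-map correction can be sketched as follows (variable names and the interface are ours; `I_ref` plays the role of the iteratively corrected sub-image $I_{k+1}$):

```python
import numpy as np

def weighted_row_correction(I_full, I_sub, I_ref, local_var):
    """Per-row weighted linear fit, applied to the full image.

    I_sub: intercepted columns of the raw image; I_ref: their iteratively
    corrected counterpart; local_var: per-pixel local variance of I_sub.
    """
    w = 1.0 / (1.0 + local_var)                  # down-weight noisy pixels
    sw = w.sum(axis=1)
    Ibar = (w * I_sub).sum(axis=1) / sw          # weighted row means
    Rbar = (w * I_ref).sum(axis=1) / sw
    num = (w * I_sub * I_ref).sum(axis=1) - sw * Ibar * Rbar
    den = (w * I_sub ** 2).sum(axis=1) - sw * Ibar ** 2
    a = num / den                                # per-row gain correction
    b = Rbar - a * Ibar                          # per-row bias correction
    return a[:, None] * I_full + b[:, None]      # apply to the full rows
```

When the sub-image and its corrected counterpart are exactly linearly related row by row, the fit recovers that relation and the full-map correction reproduces it.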

3. Results

Our algorithm was compared with several state-of-the-art methods for infrared detectors to evaluate the effectiveness of the proposed non-uniformity correction algorithm. The compared algorithms included the one-dimensional guided filtering algorithm for low-texture infrared images (1DGF) proposed by Cao in 2016 [22], the multi-stage wavelet transform and guided filtering-based denoising algorithm (MSGF) proposed by Cao in 2018 [43], the improved mixed-noise removal method based on non-local means (CNLM) proposed by Li in 2019 [39], the non-uniformity correction method combining one-dimensional guided filtering and linear fitting (GFLF) proposed by Li in 2023 [23], and the stripe noise removal algorithm for one-dimensional signals based on 2D-to-1D image conversion (ENSI) proposed in 2023 [24]. The dataset used in this study comprises six parts: the FLIRADAS dataset [45], the MassMIND dataset [46], Tendero's dataset [38], the KAIST dataset [47], the LLVIP dataset [48], and long-wave infrared weekly swept real line-scanning images. Experiments were conducted on a system with a 12th-generation Intel® Core™ i7-12700H CPU @ 3.61 GHz, 32 GB RAM, and a 64-bit Windows operating system. The algorithms were implemented in MATLAB R2024a.

3.1. Noise Modeling Analysis

Unlike array detectors, where each pixel operates independently, infrared line-scanning detectors use the same pixel to scan each row, resulting in each image being generated by the same detector element. This imaging mechanism introduces unique noise characteristics, mainly horizontal streak artifacts. The primary sources of non-uniformity include variations in pixel response and random noise during signal readout.
A noise model comprising gain, bias, and random noise is typically constructed to model non-uniformity and evaluate the effectiveness of the correction algorithm. Changes in gain and bias are modeled as Gaussian random variables: the gain $g(i)$ at row $i$ follows a Gaussian distribution with a mean of 1 and variance $\sigma_g^2$, and the bias $b(i)$ follows a Gaussian distribution with a mean of 0 and variance $\sigma_b^2$. In addition, random white noise $n(i,j)$ is modeled as a Gaussian distribution with a mean of 0 and variance $\sigma_{white}^2$. Combining these factors, the simulated noisy image can be represented as follows:

$$I_{noise}(i,j) = g(i)\, I_1(i,j) + b(i) + n(i,j)$$

where $I_1(i,j)$ denotes the ideal noise-free image and $n(i,j)$ represents the random white noise affecting each pixel.
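The noise model can be simulated directly (an illustrative sketch; note that the distributions are parameterized by variance, so the standard deviation passed to the generator is its square root):

```python
import numpy as np

def add_stripe_noise(clean, var_g=0.02, var_b=0.02, var_w=0.0, seed=0):
    """Apply the line-scan noise model: I_noise = g(i)*I + b(i) + n(i,j).

    Per-row gain g(i) ~ N(1, var_g) and bias b(i) ~ N(0, var_b) create
    horizontal stripes; n(i,j) ~ N(0, var_w) is optional white noise.
    """
    rng = np.random.default_rng(seed)
    rows = clean.shape[0]
    g = rng.normal(1.0, np.sqrt(var_g), size=(rows, 1))
    b = rng.normal(0.0, np.sqrt(var_b), size=(rows, 1))
    noisy = g * clean + b
    if var_w > 0:
        noisy = noisy + rng.normal(0.0, np.sqrt(var_w), size=clean.shape)
    return noisy
```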

3.2. Effect of Image Size on Non-Uniformity Correction

This experiment analyzes how the number of intercepted columns affects non-uniformity correction and provides guidance for selecting an optimal column count.
The experimental data consist of 40 frames of noise-free high-resolution infrared images with an original image size of 1024 × 55,000. Intercepted images of size 1024 × 8192 were used for evaluation. Streak noise was added, with the gain coefficient g ( i ) following a Gaussian distribution (mean = 1, variance = 0.02) and the bias coefficient b ( i ) following a Gaussian distribution (mean = 0, variance = 0.02). No additional Gaussian white noise was introduced.
The number of intercepted columns was incrementally increased from 400 to 3200, and correction parameters were calculated and applied to the full map. The correction effect was evaluated using the mean PSNR, as shown in Table 1.
As shown in Table 1, the PSNR value improves as the number of intercepted columns increases. Beyond 1600 columns, the PSNR stabilizes, indicating that the correction parameters effectively capture the noise characteristics of the entire map. Increasing the number of columns beyond 1600 yields less improvement (less than 0.1 in PSNR) while increasing computational overhead. Thus, selecting around 1600 columns for practical applications balances correction accuracy and computational efficiency.

3.3. Effect of Image Scene Complexity on Non-Uniformity Correction

To study the impact of scene complexity on calibration performance, this experiment simulates infrared line-scanning images by adding Gaussian white noise with varying variances. Simulated images of size 1024 × 8192 were used, with 1600 columns intercepted for correction parameter calculation. The gain g ( i ) and bias b ( i ) follow Gaussian distributions with means of 1 and 0 and variances of 0.02, respectively. The variance of the Gaussian white noise n ( i , j ) was gradually increased from 0.02 to 0.2 to simulate scenes of varying complexity. The experiment was conducted on 40 frames of noise-free, 14-bit infrared line-scanning images. An example of the noisy image after adding Gaussian noise is shown in Figure 6, and the PSNR values for corrected images under different noise variances are presented in Table 2.
From Table 2, the PSNR increases as the noise variance grows, reaching a peak at a variance of 0.1. This indicates that the algorithm effectively separates streak noise from the background while maintaining image structure under moderate noise levels. However, when the noise variance exceeds 0.1, the PSNR declines due to the increased noise intensity destroying local structural information, making it challenging for the algorithm to fully recover the original image details.
These results demonstrate that the algorithm is robust in varying scene complexities. It handles non-uniformity correction in complex scenarios, making it suitable for infrared line-scanning detectors under diverse conditions.

3.4. Quantitative Analysis of Algorithmic Non-Uniformity Correction

The algorithms in this paper are compared experimentally with state-of-the-art non-uniformity correction methods, including MSGF, 1DGF, GFLF, ENSI, and CNLM. To evaluate performance comprehensively, reference evaluation metrics such as PSNR, SSIM, and roughness are used in the simulated images. In contrast, non-reference metrics such as ICV and GC assess the correction effect on real images.

3.4.1. Experimental Datasets

This study utilizes two types of experimental datasets: simulated and real.
Simulated datasets were created to compare streak noise removal models under controlled conditions. The FLIRADAS and MassMIND datasets serve as reference datasets. FLIRADAS is a thermal imaging dataset captured with a vehicle-mounted RGB thermal camera, containing 4224 infrared images at a resolution of 480 × 640. MassMIND is a long-wave infrared oceanographic dataset comprising 2916 images with a resolution of 512 × 640. From these datasets, 100 images were randomly selected, and five types of streak noise were added using Equation (21):
  • Case 1: g(i) follows a Gaussian distribution (mean = 1, variance = 0.02) and b(i) follows a Gaussian distribution (mean = 0, variance = 0.02). No Gaussian white noise is added, simulating weak streak noise to test the algorithm's correction ability under mild conditions.
  • Case 2: g(i) and b(i) follow Gaussian distributions as in Case 1, but with variance = 0.05. Moderate streak noise is simulated to evaluate correction under intermediate conditions.
  • Case 3: g(i) and b(i) follow Gaussian distributions as in Case 1, but with variance = 0.08. This case simulates strong streak noise, testing the algorithm's performance under extreme conditions.
  • Case 4: gain noise varies linearly and periodically in the horizontal direction, simulating complex, non-uniform streak noise.
  • Case 5: g(i) and b(i) follow Gaussian distributions as in Case 1 (variance = 0.02), with added Gaussian white noise n(i,j) (variance = 0.04) to simulate environmental interference and test robustness.
Two real datasets containing streak noise were also used for comparison: Tendero's dataset, which consists of 20 infrared images of varying resolutions, and the long-wave infrared weekly scanning dataset, which contains 40 images with a resolution of 1024 × 55,000. For the experiments, 1024 × 8192 regions were intercepted from the long-wave dataset.

3.4.2. Parameter Setting

All algorithms were run on the same set of infrared images to ensure a fair comparison under consistent pre-processing steps. Parameters were drawn from each method's original references whenever possible, with minor modifications to accommodate this study's resolutions and noise characteristics. Specifically, for our proposed method, we adopted a [15 × 1] window in the guided filtering stage to capture horizontal streak noise effectively, with a smoothing parameter r = 0.16 balancing noise suppression and detail retention. The iterative compensation was capped at five cycles after pilot tests showed negligible PSNR/SSIM improvements beyond five iterations, and an initial compensation coefficient of α = 0.05 was chosen to avoid over-smoothing. In contrast, MSGF [43] employed wavelet decomposition (sym8, three levels), followed by a guided filter of size [5 × 5] and a smoothing parameter ε = 0.01. The 1DGF [22] used a one-dimensional guided filter with a [1 × 100] window and ε = 0.01, reflecting its emphasis on row-wise filtering in low-texture infrared settings. For GFLF [23], we set an [8 × 1] horizontal filtering window and a [1 × 100] vertical filtering window, applying smoothing parameters of 0.04 horizontally and 0.16 vertically to address both row- and column-wise artifacts. ENSI [24] employed a Gaussian filter window of [1 × 5] after converting the image from 2D to 1D, while CNLM [39] used a [7 × 7] search window for identifying similar patches and a similarity threshold of h² = 0.1, thereby balancing fine-detail preservation with effective noise removal. Notably, for the large-scale long-wave infrared weekly scanning dataset, our algorithm and GFLF extracted 1600 columns to capture the global noise distribution efficiently. All other parameters were consistent with their respective references, ensuring each algorithm operated near its recommended conditions for a fair and transparent performance evaluation.

3.4.3. Parameter Sensitivity Analysis

To further address the choice of multiple parameters in the algorithm, we performed an ablation study on four key hyperparameters: (1) the guided-filtering window size w, (2) the smoothing parameter r, (3) the maximum iteration count n, and (4) the initial compensation coefficient α. We selected 30 representative images from the FLIRADAS dataset (Case 1 noise scenario) and systematically varied each parameter while keeping the others fixed at their default values.
The zoomed-in area in Figure 7 shows that a smaller window size [5 × 1] leaves noticeable streaks, while a larger window size [35 × 1] causes blurred edges. A medium-sized window size [15 × 1] can balance detail preservation and denoising.
From the enlarged area of Figure 8, we can see that when r = 0.12 or r = 0.22, the image is relatively smooth, and the details are relatively clear, but more noise is left. When r = 0.42 or 0.82, the noise removal effect is noticeable, but excessive smoothing leads to a loss of image details, especially when r = 0.82, when the texture of the building becomes blurred.
According to Figure 9, the PSNR gradually increases with the iteration count n, but the gains diminish while the runtime grows linearly. Balancing performance and computing time, choosing n = 5 to n = 7 is ideal: in this range, the PSNR improvement has stabilized and the running time remains reasonable.
As can be seen from Figure 10, when α = 0.2, the correction will overshoot, and when α is small, more iterations are required to achieve the same effect. Between 0.05 and 0.1, the convergence speed and correction quality are better balanced.
Based on these observations, we suggest a [15 × 1] guide window, r = 0.16, up to five iterations, and α = 0.05 as defaults. Nevertheless, users can fine-tune these parameters when confronted with significantly different imaging conditions or noise intensities.
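The recommended defaults above (n, α) can be read as the knobs of an iterative compensation loop. The sketch below shows an assumed form of such a loop; `remove_row_bias` is a hypothetical stand-in for the paper’s residual-guided correction step, used only to make the loop runnable.

```python
import numpy as np

def remove_row_bias(img):
    """Toy one-pass corrector (hypothetical stand-in for the residual-guided
    filtering step): subtract each row's offset from the global mean."""
    row_offset = img.mean(axis=1, keepdims=True) - img.mean()
    return img - row_offset

def iterative_compensation(img, correct_once, n=5, alpha=0.05, tol=1e-6):
    """Assumed form of the iterative compensation: apply a fraction alpha of
    the estimated streak residual per cycle, for up to n cycles, stopping
    early once the update becomes negligible."""
    out = img.astype(float).copy()
    for _ in range(n):
        residual = out - correct_once(out)  # estimated streak component
        out = out - alpha * residual
        if np.abs(alpha * residual).max() < tol:
            break
    return out
```

With α = 0.05 each cycle removes 5% of the estimated streak component, which is why too small an α needs more iterations while a large α risks overshooting.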

3.4.4. Evaluation Indicators

Several evaluation metrics are employed to assess the performance of the algorithms:
PSNR evaluates the similarity between the corrected image and the original image. A higher PSNR indicates better correction and detail retention. The formula is as follows:
$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{MAX^{2}}{MSE}\right)$$
where $MAX$ is the maximum possible pixel value of the image and $MSE$ represents the mean squared error between the corrected and reference images.
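A minimal sketch of this metric, assuming floating-point images and a default 8-bit dynamic range:

```python
import numpy as np

def psnr(reference, corrected, max_val=255.0):
    """PSNR in dB between a reference image and a corrected image."""
    mse = np.mean((reference.astype(float) - corrected.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```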
SSIM measures the brightness, contrast, and structural similarity between the corrected and original images. The closer the SSIM value is to 1, the higher the subjective quality. The formula is as follows:
$$\mathrm{SSIM} = \frac{(2\mu_{\hat{I}}\mu_{I} + C_1)(2\sigma_{\hat{I}I} + C_2)}{(\mu_{\hat{I}}^{2} + \mu_{I}^{2} + C_1)(\sigma_{\hat{I}}^{2} + \sigma_{I}^{2} + C_2)}$$
where $\mu_{I}$, $\mu_{\hat{I}}$, $\sigma_{I}^{2}$, and $\sigma_{\hat{I}}^{2}$ are the means and variances of $I$ and $\hat{I}$, respectively; $\sigma_{\hat{I}I}$ is the covariance between $I$ and $\hat{I}$; and $C_1$ and $C_2$ are constants to avoid division by zero.
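The statistic can be sketched as follows. Note this computes a single global SSIM over the whole image; the standard metric averages the same statistic over local sliding windows, omitted here for brevity.

```python
import numpy as np

def ssim_global(ref, corrected, max_val=255.0):
    """Single-window (global) SSIM with the usual stabilizing constants."""
    C1 = (0.01 * max_val) ** 2
    C2 = (0.03 * max_val) ** 2
    x = ref.astype(float)
    y = corrected.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # covariance between the two images
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```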
Roughness is used to measure the overall frequency characteristics of an image. The closer the roughness value of the corrected image is to that of the original image, the better the correction effect. The calculation formula is as follows:
$$\rho = \frac{\| h * I \|_1 + \| h^{T} * I \|_1}{\| I \|_1}$$
where $\|\cdot\|_1$ denotes the L1 norm, $*$ denotes the convolution operation, $h$ is the horizontal gradient operator, $h^{T}$ is the vertical gradient operator, and $I$ is the image under evaluation (the metric is computed for both the corrected image $\hat{I}$ and the input image).
ICV assesses uniformity and contrast enhancement by measuring the ratio of the mean to the standard deviation of the pixel values in selected regions. A higher ICV indicates better uniformity. The formula is as follows:
$$\mathrm{ICV} = \frac{\hat{I}_m}{\hat{I}_S}$$
where $\hat{I}_m$ and $\hat{I}_S$ are the mean and standard deviation of the selected region, respectively.
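A one-line sketch of this ratio over a selected region:

```python
import numpy as np

def icv(region):
    """Ratio of the mean to the standard deviation of a region's pixels;
    higher values indicate a more uniform corrected region."""
    region = np.asarray(region, dtype=float)
    return region.mean() / region.std()
```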
GC reflects the retention of image details by comparing the gradients of the corrected and original images. A smaller GC value indicates higher detail preservation. The formula is as follows:
$$\mathrm{GC} = \frac{\sum \left| \mathrm{Grad}(I) - \mathrm{Grad}(\hat{I}) \right|}{\sum \left| \mathrm{Grad}(I) \right|}$$
where $\mathrm{Grad}(\cdot)$ denotes the gradient calculation of the image, $I$ is the input image, and $\hat{I}$ is the output image.
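A sketch of this metric using central-difference gradient magnitudes, one reasonable choice since the text does not specify the exact gradient operator:

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude via central differences (assumed operator)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def gradient_change(inp, out):
    """GC: summed absolute difference of gradient magnitudes between the
    input and output images, normalized by the input's total gradient."""
    g_in = grad_mag(inp)
    return np.abs(g_in - grad_mag(out)).sum() / g_in.sum()
```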

3.4.5. Quantitative Testing of Simulated Datasets

The proposed algorithm’s performance was evaluated using simulated datasets derived from FLIRADAS and MassMIND, incorporating five types of streak noise. The proposed method was compared against the MSGF, 1DGF, GFLF, ENSI, and CNLM algorithms in terms of PSNR, SSIM, and roughness. The results are summarized in Table 3. To provide a clearer and more focused comparison, the row-mean images from these algorithms are shown in Figure 11 and Figure 12, with each case displayed separately for better clarity.
In Figure 11a–g and Figure 12a–g, each row represents a specific noise level and each column corresponds to a different algorithm, showing the row-mean images under various noise conditions. These figures demonstrate that the proposed method consistently performs best across all noise levels, suppressing noise without losing key image details.
In contrast, existing methods exhibited clear limitations. MSGF and 1DGF could not handle complex noise patterns effectively, often losing high-frequency details or introducing ripple artifacts. GFLF balanced denoising and detail retention but encountered artifacts due to its region division strategy. ENSI performed adequately for simple noise but failed in scenarios with higher noise intensity, while CNLM managed small-scale noise but struggled to recover details in larger-scale noise regions. The proposed method demonstrated superior adaptability and robustness in scenarios with significant challenges, such as Case4 and Case5, making it a reliable choice for complex noise conditions.
For further illustration, we zoom in on specific regions of the row-mean images obtained by different algorithms. In addition, the overall comparison in Table 3 provides a quantitative analysis of PSNR, SSIM, and roughness metrics, highlighting the superior performance of our method.

3.4.6. Quantitative Testing of Real Datasets

To further evaluate the effectiveness of the proposed algorithm, experiments were conducted on real datasets, including Tendero’s dataset and the long-wave infrared weekly scanning dataset. These datasets contain more complex noise characteristics and inhomogeneities than simulated datasets, providing a more realistic assessment of the algorithm’s performance in practical applications. Table 4 summarizes the results for Tendero’s dataset, while Table 5 presents the long-wave infrared weekly scanning dataset results.
For Tendero’s dataset, the proposed algorithm achieves an ICV value of 2.3522, which is significantly higher than other algorithms, indicating superior contrast uniformity and effective suppression of streak noise. Additionally, the GC value of 0.0015, the lowest among all of the algorithms, reflects better retention of gradient details and avoidance of over-smoothing. By comparison, the MSGF and GFLF algorithms perform poorly in complex noise regions, losing details and resulting in lower ICV and higher GC values.
The proposed algorithm consistently outperforms alternatives in the long-wave infrared weekly scanning dataset. It achieves a PSNR of 43.21 dB and an SSIM of 0.9439, indicating enhanced denoising and structure preservation. While GFLF and 1DGF demonstrate reasonable performance, they suffer from detail loss and insufficient correction in high-noise regions, resulting in slightly lower PSNR and SSIM values. The algorithm’s ability to achieve an ICV value of 2.8053, the highest among all of the methods, further underscores its capability to provide uniform and high-quality correction for complex real-world infrared images.

3.4.7. Analysis of the Quantitative Results of the Simulated Dataset

Four simulated datasets were used to evaluate the proposed algorithm’s non-uniformity correction capability. The KAIST dataset contains 8995 images at a resolution of 512 × 640, covering rich urban dynamic environments with both day and night scenes, complementing the FLIRADAS dataset. The LLVIP dataset provides numerous low-light night scenes, consisting of 3463 images at a resolution of 1024 × 1280. One hundred images were randomly selected from each of the four simulated datasets, and noise was added to these images using Formula (21) for further testing.
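For readers who want to reproduce this style of test, the sketch below adds row-wise gain/bias streaks to a clean image. This is an illustrative model only, consistent with the gain/bias non-uniformity described in the abstract; it is not claimed to be identical to Formula (21), whose exact form appears earlier in the paper.

```python
import numpy as np

def add_streak_noise(img, gain_sigma=0.03, bias_sigma=4.0, seed=0):
    """Illustrative horizontal-streak model (assumed form): every detector
    row receives its own multiplicative gain and additive bias."""
    rng = np.random.default_rng(seed)
    rows = img.shape[0]
    gain = 1.0 + gain_sigma * rng.standard_normal((rows, 1))
    bias = bias_sigma * rng.standard_normal((rows, 1))
    return gain * img.astype(float) + bias
```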
As evident from Figure 13, the proposed algorithm consistently achieves higher PSNR and SSIM values across all noise scenarios, particularly excelling in high-noise conditions. This indicates its robust ability to suppress noise while preserving structural details and overall image quality. By leveraging residual-based compensation and weighted linear regression, the algorithm addresses the challenges of detail loss and uneven correction faced by traditional methods.
Overall, the algorithm demonstrates superior adaptability and effectiveness across both simulated and real datasets, establishing itself as a reliable solution for denoising high-resolution infrared images under complex noise conditions.

3.5. Qualitative Analysis of the Effect of Algorithmic Non-Uniformity Correction

3.5.1. Qualitative Analysis of Simulated Data

Under varying noise conditions, different algorithms’ denoising performance and detail preservation were evaluated qualitatively. As shown in Figure 14, under weaker noise conditions, the MSGF algorithm leaves behind noticeable streak noise in certain regions, failing to preserve finer image details. The 1DGF algorithm effectively suppresses noise but introduces a degree of over-smoothing, particularly evident in the blurring of image edges and structural information. Conversely, the GFLF algorithm provides better noise suppression but sacrifices some texture fidelity due to excessive smoothing in complex structural areas. The ENSI algorithm exhibits high background smoothness but struggles with local uniformity and fails to address noise consistently. The CNLM method retains more texture details but leaves residual streak noise, reflecting an imbalance between noise suppression and detail retention. In comparison, the proposed algorithm achieves a superior balance, effectively removing streak noise while maintaining texture integrity, particularly in regions of structural complexity.
For simulated datasets, including the FLIRADAS and MassMIND datasets, the proposed algorithm demonstrated robustness in addressing non-uniformity and preserving essential image features under various noise intensities. As depicted in Figure 15, our algorithm effectively removes horizontal stripe noise under moderate noise conditions while maintaining detailed texture information, such as structural contours and edge clarity. In contrast, the MSGF and CNLM algorithms leave significant noise artifacts, particularly in low-contrast areas, while the 1DGF algorithm struggles with over-smoothing. The red-boxed areas show that the GFLF and ENSI methods provide acceptable denoising performance but lose details in complex image regions.
Figure 16 illustrates the calibration results for Case3 noise intensity using the MassMIND dataset. Here, the GFLF algorithm exhibits a more apparent trade-off between stripe noise removal and texture preservation, with noticeable blurring in high-noise regions. The CNLM algorithm fails to adapt to the complex noise distribution fully, leaving residual noise streaks. However, the proposed algorithm achieves a robust balance by leveraging local weight fusion and adaptive smoothing mechanisms, delivering visually consistent results with improved detail retention.

3.5.2. Qualitative Analysis of Real Data

As observed in Figure 17, the MSGF method shows poor denoising performance, with prominent horizontal stripe noise remaining due to the loss of structural information caused by the wavelet transform. The 1DGF algorithm exhibits blurred details in the tree region on the right side and slight brightness unevenness in the transition between the left and right areas. The GFLF method leaves horizontal texture residues on the building wall, where noise blends with structural details. ENSI displays noticeable horizontal noise in the sky region on the left, attributed to inadequate handling of low-frequency backgrounds during smoothing, resulting in residual noise and a blurred tree texture. Similarly, the CNLM method leaves residual noise in the sky and flat building areas, with uneven brightness at the boundary between the sky and trees. In contrast, our proposed algorithm effectively suppresses horizontal stripe noise while preserving image details, particularly along building edges and tree contours. The noise is smoothed while retaining clear structural information without introducing significant blurring.

3.6. Algorithm Time Complexity Analysis

To analyze the time efficiency of our proposed algorithm, experiments were conducted on real high-resolution infrared line-scanning images with dimensions of 1024 × 55,000 pixels and 14-bit acquisition accuracy. Table 6 compares the average runtime of 10 executions for our algorithm against WAGE [21], 1DGF [22], GFLF [23], ADOM [28], TV-STV [29], CNLM [39], and MSGF [43].
As shown in Table 6, our algorithm achieves a runtime of 1.6821 s, which is competitive with GFLF (1.5114 s) and significantly faster than methods like ADOM and TV-STV, which require over 180 and 887 s, respectively. The lightweight iterative compensation and efficient guided filtering contribute to this advantage. By comparison, the 1DGF method also demonstrates reasonable performance with a runtime of 5.2821 s, but its processing incurs computational overhead for row and column guided filtering. Other methods, such as MSGF and WAGE, rely on multiscale or high-dimensional processing, resulting in significantly longer runtimes.
To further evaluate the efficiency of our algorithm, Table 7 and Table 8 present detailed performance metrics for varying numbers of intercepted columns during the correction process. Our algorithm demonstrates a steady increase in runtime yet maintains high efficiency with less than 2 s required for up to 6000 columns. Similarly, the PSNR and SSIM metrics remain consistent and robust across different column intercepts, achieving average values of 36.97 dB and 0.8685 at the final iteration.
Table 9 and Table 10 detail the performance for selected scenarios with 5000 and 6000 columns, where N represents the number of iterations. Our algorithm consistently achieves superior PSNR and SSIM metrics compared to the GFLF algorithm while maintaining minimal computational overhead. For instance, our algorithm reports a PSNR of 36.84 dB and an SSIM of 0.8642 for 6000 columns, clearly outperforming GFLF’s performance under the same conditions.
The proposed algorithm effectively balances computational efficiency and correction quality, demonstrating significant advantages in real-time processing scenarios, particularly for high-resolution infrared line-scanning images.

4. Discussion

This study introduces a non-uniformity correction method tailored for high-resolution infrared line-scanning images, leveraging residual guidance and adaptive weighting. The proposed framework addresses challenges such as directional stripe noise, loss of detail, and stringent real-time processing requirements. Our method demonstrates significant improvements in denoising performance while maintaining high levels of detail preservation, which is crucial for infrared imaging applications, especially those involving complex, non-uniform noise patterns.
The proposed algorithm effectively combines residual-guided filtering and adaptive weighting techniques. Utilizing both the residual and original images preserves fine details while smoothing the noise in a controlled manner. This dual-guidance mechanism helps overcome the edge-blurring issues typically associated with traditional guided filtering methods. The adaptive compensation mechanism further enhances performance by dynamically adjusting the compensation strength, thus preventing over-correction and ensuring that critical image details are retained during noise suppression.
Moreover, using weighted linear regression based on local variance allows the algorithm to adapt to varying noise intensities across different image regions. This localized correction ensures a more accurate global correction, especially in complex and noisy regions. In contrast, conventional methods that rely on uniform correction parameters often struggle with maintaining detail in areas with varying noise levels.
Extensive experimental results, both quantitative and qualitative, confirm that our algorithm outperforms existing methods in terms of key performance metrics, such as PSNR, SSIM, and roughness. Our method achieves superior noise suppression and preserves the image’s structural integrity across multiple datasets, including simulated and real infrared images. In particular, the algorithm excels in scenarios with higher noise intensities and more complex noise patterns, demonstrating its robustness and adaptability to challenging conditions.
One key advantage of our approach is its ability to meet real-time processing requirements without compromising the quality of the correction. The proposed algorithm achieves efficient correction with minimal computational overhead, making it suitable for applications in real-time infrared imaging systems. This efficiency, combined with the high-quality denoising performance, ensures that the algorithm can be effectively used in dynamic environments where high resolutions and fast processing are essential.
Despite the strengths of our method, some areas could benefit from further improvement. The current method assumes a Gaussian noise model, which may not fully capture the complexities of real-world noise, particularly in more dynamic environments. Future work could incorporate more advanced noise models to handle a broader range of noise types. Additionally, using parallel computing techniques or hardware accelerators, such as GPUs or FPGAs, could further enhance the real-time processing capabilities of the algorithm, allowing it to handle even larger datasets more efficiently.
In conclusion, the proposed algorithm represents a robust and efficient solution for non-uniformity correction in high-resolution infrared line-scanning images. It significantly improves existing methods by addressing the challenges of directional stripe noise and maintaining fine detail. Its adaptability to complex noise conditions and real-time performance make it a promising tool for various infrared imaging applications, including environmental monitoring, military surveillance, and thermal imaging.

Author Contributions

Conceptualization, M.H.; methodology, M.H.; software, M.H.; validation, M.H., Y.Z. (Yaohua Zhu) and W.C.; resources, W.C.; data curation, Y.Z. (Yanghang Zhu); writing—original draft preparation, M.H.; writing—review and editing, Q.D. and Y.Z. (Yong Zhang). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

An infrared long-wave cooled linear scan detector generated the real infrared image dataset. It is not a public dataset. The publicly available dataset FLIRADAS was analyzed in this study and can be found here: https://camel.ece.gatech.edu/, accessed on 12 December 2024. The publicly available dataset MassMIND was analyzed in this study and can be found here: https://github.com/uml-marine-robotics/MassMIND, accessed on 12 December 2024. The publicly available Tendero dataset was analyzed in this study and can be found here: https://ipolcore.ipol.im/demo/clientApp/demo.html?id=129, accessed on 12 December 2024. The publicly available dataset LLVIP was analyzed in this study and can be found here: https://github.com/bupt-ai-cz/LLVIP, accessed on 18 February 2025. The publicly available dataset KAIST was analyzed in this study and can be found here: https://soonminhwang.github.io/rgbt-ped-detection/, accessed on 18 February 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, C.; Sui, X.; Gu, G.; Chen, Q. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera. Meas. Sci. Technol. 2018, 29, 025402. [Google Scholar] [CrossRef]
  2. Cao, Y.; He, Z.; Yang, J.; Cao, Y.; Yang, M.Y. Spatially adaptive column fixed-pattern noise correction in infrared imaging system using 1D horizontal differential statistics. IEEE Photonics J. 2017, 9, 7803513. [Google Scholar] [CrossRef]
  3. Hao, X.; Liu, L.; Yang, R.; Yin, L.; Zhang, L.; Li, X. A review of data augmentation methods of remote sensing image target recognition. Remote Sens. 2023, 15, 827. [Google Scholar] [CrossRef]
  4. Lu, C. Stripe non-uniformity correction of infrared images using parameter estimation. Infrared Phys. Technol. 2020, 107, 103313. [Google Scholar] [CrossRef]
  5. Chen, B.Y.; Feng, X.; Wu, R.H.; Guo, Q.; Wang, X.; Ge, S.M. Adaptive wavelet filter with edge compensation for remote sensing image denoising. IEEE Access 2019, 7, 91966–91979. [Google Scholar] [CrossRef]
  6. Ball, J.E.; Anderson, D.T.; Chan, C.S. Comprehensive survey of deep learning in remote sensing: Theories, tools, and challenges for the community. J. Appl. Remote Sens. 2017, 11, 042609. [Google Scholar] [CrossRef]
  7. Song, S.; Zhai, X. Research on non-uniformity correction based on blackbody calibration. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (IT-NEC), Chongqing, China, 12–14 June 2020; Volume 1, pp. 2146–2150. [Google Scholar]
  8. Vollmer, M.; Klaus-Peter, M.A. Infrared Thermal Imaging: Fundamentals, Research and Applications; Wiley-VCH: Hoboken, NJ, USA, 2017. [Google Scholar]
  9. Teena, M.; Manickavasagan, A. Thermal infrared imaging. In Imaging with Electromagnetic Spectrum; Springer: Berlin/Heidelberg, Germany, 2014; pp. 147–173. [Google Scholar]
  10. Schulz, M.; Caldwell, L. Nonuniformity correction and correctability of infrared focal plane arrays. Infrared Phys. Technol. 1995, 36, 763–777. [Google Scholar] [CrossRef]
  11. Scribner, D.A.; Kruer, M.R.; Gridley, J.C.; Sarkady, K. Physical limitations to nonuniformity correction in focal plane arrays. Proc. SPIE 1988, 865, 185–201. [Google Scholar]
  12. Kim, S. Two-point correction and minimum filter-based nonuniformity correction for scan-based aerial infrared cameras. Opt. Eng. 2012, 51, 106401. [Google Scholar] [CrossRef]
  13. Zhu, R.; Wang, C.; Wei, Q.; Jia, H.; Zhou, W. Development of nonuniformity correction system for infrared detector. Infrared Laser Eng. 2013, 42, 1669–1673. [Google Scholar]
  14. Sheng, M.; Xie, J.; Fu, Z. Calibration-based NUC method in real-time based on IRFPA. Phys. Procedia 2011, 22, 372–380. [Google Scholar] [CrossRef]
  15. Shi, Y.; Zhang, T.; Cao, Z. A new piecewise approach for nonuniformity correction in IRFPA. Int. J. Infrared Millim. Waves 2004, 25, 959–972. [Google Scholar] [CrossRef]
  16. Boutemedjet, A.; Deng, C.; Zhao, B. Robust approach for nonuniformity correction in infrared focal plane array. Sensors 2016, 16, 1890. [Google Scholar] [CrossRef]
  17. Olbrycht, R.; Więcek, B. New Approach to Thermal Drift Correction in Microbolometer Thermal Cameras. Quant. InfraRed Thermogr. J. 2015, 12, 184–195. [Google Scholar] [CrossRef]
  18. Brazane, S.; Riou, O.; Delaleux, F.; Ibos, L.; Durastanti, J.F. Management of Thermal Drift of Bolometric Infrared Cameras: Limits and Recommendations. Quant. InfraRed Thermogr. J. 2025, 22, 54–69. [Google Scholar] [CrossRef]
  19. Pron, H.; Bouache, T. Alternative Thermal Calibrations of Focal Plane Array Infrared Cameras. Quant. InfraRed Thermogr. J. 2016, 13, 94–108. [Google Scholar] [CrossRef]
  20. Wang, E.; Jiang, P.; Hou, X.; Zhu, Y.; Peng, L. Infrared stripe correction algorithm based on wavelet analysis and gradient equalization. Appl. Sci. 2019, 9, 1993. [Google Scholar] [CrossRef]
  21. Wang, E.; Jiang, P.; Li, X.; Cao, H. Infrared stripe correction algorithm based on wavelet decomposition and total variation-guided filtering. J. Eur. Opt. Soc.-Rapid Publ. 2020, 16, 1. [Google Scholar] [CrossRef]
  22. Cao, Y.; Yang, M.Y.; Tisse, C.-L. Effective strip noise removal for low-textured infrared images based on 1-D guided filtering. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 2176–2188. [Google Scholar] [CrossRef]
  23. Li, B.; Chen, W.; Zhang, Y. A nonuniformity correction method based on 1D guided filtering and linear fitting for high-resolution infrared scan images. Appl. Sci. 2023, 13, 3890. [Google Scholar] [CrossRef]
  24. Ahmed Hamadouche, S.; Boutemedjet, A.; Bouaraba, A. Efficient and robust techniques for infrared imaging system correction. Imaging Sci. J. 2024, 1–20. [Google Scholar] [CrossRef]
  25. Shao, Y.; Sun, Y.; Zhao, M.; Chang, Y.; Zheng, Z.; Tian, C.; Zhang, Y. Infrared image stripe noise removing using least squares and gradient domain guided filtering. Infrared Phys. Technol. 2021, 119, 103968. [Google Scholar] [CrossRef]
  26. Jia, J.; Wang, Y.; Cheng, X.; Yuan, L.; Zhao, D.; Ye, Q.; Zhuang, X.; Shu, R.; Wang, J. Destriping algorithms based on statistics and spatial filtering for visible-to-thermal infrared pushbroom hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4077–4091. [Google Scholar] [CrossRef]
  27. Cui, H.; Jia, P.; Zhang, G.; Jiang, Y.H.; Li, L.T.; Wang, J.Y.; Hao, X.Y. Multiscale intensity propagation to remove multiplicative stripe noise from remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2308–2323. [Google Scholar] [CrossRef]
  28. Kim, N.; Han, S.-S.; Jeong, C.-S. ADOM: ADMM-based optimization model for stripe noise removal in remote sensing image. IEEE Access 2023, 11, 106587–106606. [Google Scholar] [CrossRef]
  29. Ahmed, H.S.; Boutemedjet, A.; Azzedine, B. Infrared image stripe noise removal by solving first- and second-order total variation inverse problem. In Proceedings of the 2023 2nd International Conference on Electronics, Energy and Measurement (IC2EM), Medea, Algeria, 28–29 November 2023; pp. 1–6. [Google Scholar]
  30. Zhang, Y.; Shao, Y.; Shen, J.; Lu, Y.; Zheng, Z.; Sidib, Y.; Yu, B. Infrared image impulse noise suppression using tensor robust principal component analysis and truncated total variation. Appl. Opt. 2021, 60, 4916–4929. [Google Scholar] [CrossRef] [PubMed]
  31. Song, Q.; Huang, Z.; Ni, H.; Bai, K.; Li, Z. Remote sensing images destriping with an enhanced low-rank prior and total variation regulation. Signal Image Video Process. 2022, 16, 1895–1903. [Google Scholar] [CrossRef]
  32. Yang, J.H.; Zhao, X.L.; Ma, T.H.; Chen, Y.; Huang, T.Z.; Ding, M. Remote sensing images destriping using unidirectional hybrid total variation and nonconvex low-rank regularization. J. Comput. Appl. Math. 2020, 363, 124–144. [Google Scholar] [CrossRef]
  33. Xu, J.; Wang, N.; Xu, Z.; Xu, K. Weighted lp norm sparse error constraint based ADMM for image denoising. Math. Probl. Eng. 2019, 2019, 1262171. [Google Scholar] [CrossRef]
  34. Guan, J.; Lai, R.; Xiong, A. Wavelet deep neural network for stripe noise removal. IEEE Access 2019, 7, 44544–44554. [Google Scholar] [CrossRef]
  35. Huang, Z.; Zhu, Z.; Wang, Z.; Li, X.; Xu, B.; Zhang, Y.; Fang, H. D3CNNs: Dual denoiser driven convolutional neural networks for mixed noise removal in remotely sensed images. Remote Sens. 2023, 15, 443. [Google Scholar] [CrossRef]
  36. Islam, M.R.; Xu, C.; Han, Y.; Ashfaq, R.A.R. A novel weighted variational model for image denoising. Int. J. Pattern Recognit. Artif. Intell. 2017, 31, 1754022. [Google Scholar] [CrossRef]
  37. Ashiba, H.I.; Sadic, N.; Hassan, E.S.; El-Dolil, S.; Abd El-Samie, F.E. New proposed algorithms for infrared video sequences non-uniformity correction. Wirel. Pers. Commun. 2022, 126, 1051–1073. [Google Scholar] [CrossRef]
  38. Tendero, Y.; Landeau, S.; Gilles, J. Non-uniformity correction of infrared images by midway equalization. Image Process. Line 2012, 2012, 134–146. Available online: https://www.ipol.im/pub/art/2012/glmt-mire/ (accessed on 12 July 2012). [CrossRef]
  39. Li, F.; Zhao, Y.; Xiang, W.; Liu, H. Infrared image mixed noise removal method based on improved NL-means. Infrared Laser Eng. 2019, 48, 163–173. [Google Scholar]
  40. Yan, J.; Kang, Y.; Ni, Y.; Zhang, Y.; Fan, J.; Hu, X. Non-uniformity correction method of remote sensing images based on adaptive moving window moment matching. J. Imaging Sci. Technol. 2022, 66, 50502. [Google Scholar] [CrossRef]
  41. Geng, L.; Chen, Q.; Shi, F.; Wang, C.; Yu, X. An improvement for scene-based nonuniformity correction of infrared image sequences. In Proceedings of the International Symposium on Photoelectronic Detection and Imaging 2013: Infrared Imaging and Applications, Beijing, China, 25–27 June 2013; SPIE: Bellingham, WA, USA, 2013; Volume 8907, pp. 669–677. [Google Scholar]
  42. Gao, H.T.; Liu, W.; He, H.Y.; Zhang, B.X.; Jiang, C. De-striping for TDICCD remote sensing image based on statistical features of histogram. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 311–316. [Google Scholar] [CrossRef]
  43. Cao, Y.; He, Z.; Yang, J.; Ye, X.; Cao, Y. A multi-scale non-uniformity correction method based on wavelet decomposition and guided filtering for uncooled long wave infrared camera. Signal Process. Image Commun. 2018, 60, 13–21. [Google Scholar] [CrossRef]
  44. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [Google Scholar] [CrossRef]
  45. Gebhardt, E.; Wolf, M. CAMEL dataset for visual and thermal infrared multiple object detection and tracking. In Proceedings of the IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS), Christchurch, New Zealand, 27–30 November 2018. [Google Scholar]
  46. Nirgudkar, S.; DeFilippo, M.; Sacarny, M.; Benjamin, M.; Robinette, P. MassMIND: Massachusetts Marine INfrared Dataset. arXiv 2023, arXiv:2209.04097. [Google Scholar]
  47. Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral Pedestrian Detection: Benchmark Dataset and Base-line. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1037–1045. [Google Scholar]
  48. Jia, X.; Zhu, C.; Li, M.; Tang, W.; Zhou, W. LLVIP: A Visible-Infrared Paired Dataset for Low-Light Vision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021. [Google Scholar]
Figure 1. Overall workflow of proposed algorithm.
Figure 2. The effect of non-uniform correction of real IR scanned images.
Figure 3. Residual image generation (a) original image; (b) intercepted image; (c) extended row-mean image; (d) residual image of an intercepted image.
Figure 4. Results of one-dimensional guided filtering using dual guidance. (a) Corrected image using residual image as guide; (b) signal map derived from (a) shows preserved local variations and improved texture fidelity; (c) corrected image using original image as guide; (d) signal map derived from (c) highlights smoother transitions and large-scale noise suppression.
Figure 5. Linear weighted fitting plot. (a) Image after residual iteration; (b) fitted corrected image.
Figure 6. Long-wave infrared weekly scanning image with Gaussian white noise added to the left side, indicated by the red line.
Figure 7. Comparison chart of correction for different guided filtering windows w .
Figure 8. Comparison chart of correction for different smoothing parameters r .
Figure 9. Time and PSNR under different iteration counts: (a) time vs. iteration count; (b) PSNR vs. iteration count.
Figure 10. Comparison chart of correction for different initial compensation coefficient α .
Figure 11. Row-mean images of different algorithms on the FLIRADAS dataset.
Figure 12. Row-mean images of different algorithms on the MassMIND dataset.
Figure 13. Test results for four datasets.
Figure 14. Correction results of different algorithms on the FLIRADAS dataset under Case 1 noise.
Figure 15. Correction results of different algorithms on the FLIRADAS dataset under Case 2 noise.
Figure 16. Correction results of different algorithms on the MassMIND dataset under Case 3 noise.
Figure 17. Correction results of different algorithms on the long-wave infrared panoramic-scanning image.
Table 1. Mean PSNR values of corrected images with different numbers of intercepted columns.
| Columns | 400 | 800 | 1200 | 1600 | 2000 | 2400 | 2800 | 3200 |
| PSNR | 42.02 | 42.13 | 42.28 | 42.46 | 42.47 | 42.47 | 42.49 | 42.50 |
Table 2. Effect of different Gaussian white noise variances on correction performance.
| Variance | 0.02 | 0.04 | 0.06 | 0.08 | 0.10 | 0.12 | 0.14 | 0.16 | 0.18 | 0.20 |
| PSNR | 42.86 | 43.28 | 44.06 | 44.04 | 44.18 | 43.95 | 43.59 | 43.27 | 43.09 | 42.62 |
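The PSNR figures in Tables 1-3 follow the standard definition; a reference implementation, assuming intensities normalized to a known data range, is:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the assumed maximum
    intensity (1.0 for normalized images)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```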
Table 3. Quantitative test results for the simulated dataset.
| Data | Case | Metric | Noise | MSGF | 1DGF | GFLF | ENSI | CNLM | OURS |
| FLIRADAS | Case 1 | PSNR | 33.43 | 34.94 | 40.02 | 39.52 | 40.37 | 36.23 | 41.84 |
| | | SSIM | 0.8035 | 0.8918 | 0.9871 | 0.9859 | 0.9876 | 0.8442 | 0.9905 |
| | | Roughness | 0.1336 | 0.1312 | 0.1301 | 0.1264 | 0.1314 | 0.1312 | 0.1322 |
| | Case 2 | PSNR | 25.50 | 29.62 | 36.33 | 35.78 | 36.10 | 30.52 | 37.87 |
| | | SSIM | 0.5005 | 0.6933 | 0.9655 | 0.9616 | 0.9629 | 0.6652 | 0.9691 |
| | | Roughness | 0.1403 | 0.1324 | 0.1302 | 0.1264 | 0.1313 | 0.1338 | 0.1349 |
| | Case 3 | PSNR | 21.50 | 24.85 | 31.71 | 31.48 | 32.33 | 24.97 | 34.58 |
| | | SSIM | 0.2747 | 0.4896 | 0.9164 | 0.8572 | 0.9182 | 0.4791 | 0.9396 |
| | | Roughness | 0.1506 | 0.1368 | 0.1307 | 0.1278 | 0.1315 | 0.1395 | 0.1426 |
| | Case 4 | PSNR | 34.01 | 35.21 | 40.36 | 39.72 | 41.05 | 35.52 | 41.98 |
| | | SSIM | 0.8206 | 0.8992 | 0.9881 | 0.9867 | 0.9903 | 0.8523 | 0.9948 |
| | | Roughness | 0.1336 | 0.1313 | 0.1303 | 0.1265 | 0.1316 | 0.1313 | 0.1323 |
| | Case 5 | PSNR | 26.94 | 28.13 | 27.84 | 28.16 | 27.88 | 27.94 | 28.72 |
| | | SSIM | 0.4570 | 0.5186 | 0.5006 | 0.5208 | 0.5018 | 0.5035 | 0.5318 |
| | | Roughness | 0.1394 | 0.1356 | 0.1360 | 0.1322 | 0.1372 | 0.1293 | 0.1382 |
| MassMIND | Case 1 | PSNR | 32.85 | 33.80 | 37.07 | 35.93 | 37.57 | 34.78 | 39.01 |
| | | SSIM | 0.7961 | 0.8749 | 0.9827 | 0.9811 | 0.9826 | 0.8505 | 0.9854 |
| | | Roughness | 0.1861 | 0.1840 | 0.1828 | 0.1795 | 0.1841 | 0.1846 | 0.1847 |
| | Case 2 | PSNR | 25.91 | 27.59 | 34.26 | 33.84 | 34.34 | 28.91 | 35.85 |
| | | SSIM | 0.4974 | 0.6501 | 0.9626 | 0.9567 | 0.9637 | 0.6282 | 0.9676 |
| | | Roughness | 0.1907 | 0.1842 | 0.1833 | 0.1802 | 0.1849 | 0.1852 | 0.1865 |
| | Case 3 | PSNR | 20.94 | 23.87 | 30.68 | 30.27 | 31.19 | 24.45 | 32.51 |
| | | SSIM | 0.2848 | 0.4661 | 0.9270 | 0.8339 | 0.9332 | 0.4556 | 0.9488 |
| | | Roughness | 0.1972 | 0.1867 | 0.1832 | 0.1807 | 0.1847 | 0.1869 | 0.1914 |
| | Case 4 | PSNR | 34.00 | 34.52 | 37.38 | 36.03 | 38.01 | 35.66 | 39.18 |
| | | SSIM | 0.8222 | 0.9001 | 0.9839 | 0.9818 | 0.9852 | 0.8696 | 0.9885 |
| | | Roughness | 0.1859 | 0.1840 | 0.1829 | 0.1797 | 0.1846 | 0.1846 | 0.1848 |
| | Case 5 | PSNR | 26.75 | 27.83 | 27.48 | 27.96 | 27.50 | 27.72 | 28.51 |
| | | SSIM | 0.4483 | 0.5046 | 0.4918 | 0.5198 | 0.4931 | 0.4948 | 0.5462 |
| | | Roughness | 0.1903 | 0.1872 | 0.1871 | 0.1839 | 0.1889 | 0.1832 | 0.1891 |
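The roughness values in Table 3 are consistent with the roughness index commonly used in non-uniformity correction work: the L1 norm of the image's horizontal and vertical first differences divided by the L1 norm of the image itself (lower is smoother). The paper's exact definition is not reproduced here, so this is a sketch of the conventional form:

```python
import numpy as np

def roughness(img):
    """Roughness index: L1 norm of first differences over the L1 norm of
    the image (common NUC metric; the paper's exact variant may differ)."""
    img = img.astype(float)
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    return (dh + dv) / (np.abs(img).sum() + 1e-12)
```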
Table 4. Test results for Tendero’s dataset.
| Data | Metric | Noise | MSGF | 1DGF | GFLF | ENSI | CNLM | OURS |
| Tendero's data | ICV | 1.9189 | 1.9447 | 2.2653 | 2.1321 | 2.3026 | 2.2831 | 2.3522 |
| | GC | — | 0.2383 | 0.0156 | 0.0434 | 0.0054 | 0.6664 | 0.0015 |
Table 5. Test results for the long-wave infrared panoramic-scanning dataset.
| Data | Metric | Noise | MSGF | 1DGF | GFLF | ENSI | CNLM | OURS |
| Long-wave infrared panoramic-scanning dataset | PSNR | — | 39.75 | 40.19 | 39.81 | 40.10 | 39.46 | 43.21 |
| | SSIM | — | 0.9053 | 0.9176 | 0.9149 | 0.9161 | 0.9122 | 0.9439 |
| | Roughness | 0.0932 | 0.0924 | 0.0925 | 0.0923 | 0.0925 | 0.0923 | 0.0929 |
| | ICV | 2.6684 | 2.6954 | 2.7078 | 2.7211 | 2.7072 | 2.7032 | 2.8053 |
| | GC | — | 0.2641 | 0.0006 | 0.0163 | 0.0005 | 0.0184 | 0.0002 |
Table 6. Comparison of time complexity of different algorithms.
| Algorithm | MSGF | 1DGF | GFLF | ADOM | CNLM | WAGEF | TV-STV | OURS |
| Time/s | 6.8603 | 5.2821 | 1.5114 | 185.0071 | 415.1505 | 313.1802 | 887.5803 | 1.6821 |
Table 7. Statistics for GFLF with different numbers of intercepted columns.
| Columns | 5000 | 6000 | 7000 | 8000 | 9000 | 10,000 |
| Time/s | 1.5114 | 1.6827 | 1.8745 | 2.0271 | 2.2164 | 2.3832 |
| PSNR | 34.17 | 34.18 | 34.26 | 34.26 | 34.25 | 34.25 |
| SSIM | 0.7652 | 0.7658 | 0.7670 | 0.7670 | 0.7670 | 0.7672 |
Table 8. Statistics for our algorithm with different numbers of intercepted columns.
| Columns | 5000 | 6000 | 7000 | 8000 | 9000 | 10,000 |
| Time/s | 1.6821 | 1.8783 | 2.0713 | 2.2639 | 2.4212 | 2.6001 |
Table 9. Statistics for our algorithm with 5000 intercepted columns under different iteration counts N.
| N | 1 | 2 | 3 | 4 | 5 |
| Time/s | 1.6209 | 1.6334 | 1.6447 | 1.6647 | 1.6821 |
| PSNR | 36.32 | 36.45 | 36.57 | 36.68 | 36.77 |
| SSIM | 0.8462 | 0.8510 | 0.8551 | 0.8586 | 0.8617 |
Table 10. Statistics for our algorithm with 6000 intercepted columns under different iteration counts N.
| N | 1 | 2 | 3 | 4 | 5 |
| Time/s | 1.7661 | 1.8056 | 1.8201 | 1.8458 | 1.8783 |
| PSNR | 36.31 | 36.47 | 36.61 | 36.73 | 36.84 |
| SSIM | 0.8462 | 0.8518 | 0.8566 | 0.8607 | 0.8642 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Huang, M.; Chen, W.; Zhu, Y.; Duan, Q.; Zhu, Y.; Zhang, Y. An Adaptive Weighted Residual-Guided Algorithm for Non-Uniformity Correction of High-Resolution Infrared Line-Scanning Images. Sensors 2025, 25, 1511. https://doi.org/10.3390/s25051511
