Article

Destriping of Remote Sensing Images by an Optimized Variational Model

1 School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
2 Jilin Provincial Science and Technology Innovation Center of Intelligent Perception and Information Processing, Changchun 130022, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(17), 7529; https://doi.org/10.3390/s23177529
Submission received: 26 July 2023 / Revised: 26 August 2023 / Accepted: 28 August 2023 / Published: 30 August 2023
(This article belongs to the Section Sensing and Imaging)

Abstract:
Satellite sensors often capture remote sensing images that contain various types of stripe noise. The presence of these stripes significantly reduces the quality of the remote sensing images and severely limits their subsequent applications in other fields. Although many stripe noise removal methods exist in the literature, they often lose fine details during the destriping process, and some even generate artifacts. In this paper, we propose a new unidirectional variational model to remove horizontal stripe noise. The proposed model fully considers the directional characteristics and structural sparsity of the stripe noise, as well as the prior features of the underlying image, to design different sparse constraints, and the $\ell_p$ quasinorm is introduced in these constraints to better describe the sparse characteristics, thus achieving a better destriping effect. Moreover, we employ the fast alternating direction method of multipliers (ADMM) to solve the proposed non-convex model, which significantly improves the efficiency and robustness of the method. Qualitative and quantitative results from simulated and real data experiments confirm that our method outperforms existing destriping approaches in stripe noise removal and the preservation of image details.

1. Introduction

In recent years, remote sensing data have been widely applied in various fields. For instance, MODIS data include information on vegetation coverage, atmospheric conditions, and surface temperature, providing researchers with a vast amount of spectral data for Earth studies. However, it has been observed that certain bands in the level 1 and 2 MODIS data products publicly available on the official website of NASA exhibit prominent stripe noise. These stripe noise patterns can be broadly classified into two categories: periodic and non-periodic. Periodic stripes primarily arise from the need to stitch together data obtained from multiple detectors during the sensing process to acquire a sufficiently large focal plane, resulting in radiometric response discrepancies [1]. Additionally, the mechanical movements of the sensor itself introduce inevitable interferences and errors. Detector-to-detector and mirror-side stripes serve as two representative examples of periodic stripe noise [2]. On the other hand, non-periodic stripe noise manifests as random patterns, with uncertain lengths and occurrence positions. It is greatly influenced by spectral distribution and temperature factors, thus demanding higher post-processing requirements for the data. Apart from these factors, numerous other contributors to stripe noise in remote sensing imagery exist, making the effective removal of such noise a challenging task that researchers strive to address.
Hence, many researchers have taken up the task of removing stripe noise in order to improve the quality of remote sensing images, and numerous effective destriping methods have been proposed. These destriping methods can be mainly categorized into the following groups: filter-based, statistical-based, model-optimization-based, and deep-learning-based methods. Filter-based methods primarily involve the use of filters of different sizes to eliminate stripe noise [3,4,5,6,7]. For example, in [3], Beat et al. proposed a filtering method for the removal of horizontal or vertical stripes based on a joint analysis of wavelets and the Fourier transform. In [5], Cao et al. utilized wavelet decomposition to separate remote sensing images into different scales and applied one-dimensional guided filtering for destriping. Statistical-based methods focus on repairing images with stripe noise from the perspective of detector response and rely on the assumption of data similarity [8,9,10,11]. In [10], Carfantan et al. established a linear relationship through the gain estimation of detectors and proposed a statistical linear destriping (SLD) method for push-broom satellite imaging systems. Additionally, to remove irregular stripes from MODIS data, Shen et al. introduced a method based on local statistics and expected information [11]. In recent years, with its continuous development, deep learning has been widely applied in various domains of image processing, and deep-learning-based destriping algorithms have also made progress [12,13,14,15,16]. Chang et al. designed the CNN-based HSI-DeNet to extract spectral and spatial information for removing stripe noise from hyperspectral images [13]. Huang et al. proposed a dual-network fusion denoising convolutional neural network (D3NNs) that can simultaneously remove both random noise and stripe noise [16].
Model-optimization-based methods are widely regarded as among the most effective approaches at present. These methods mainly utilize the prior and texture features of images to construct energy functionals with different regularization terms and obtain the final clean image by minimizing them mathematically. In 2010, Bouali et al. proposed the unidirectional total variation (UTV) method, which utilized the directional characteristics of stripe noise [2]. This method achieved significant results in the field of stripe noise removal for remote sensing images; however, it was prone to losing some fine details. Consequently, researchers started exploring the sparsity [17] and low rank [18] of the image itself and began addressing the limitations of this method from a mathematical perspective. As a result, many improved variants of UTV were studied. In [19], Zhou et al. proposed an adaptive-coefficient unidirectional variational optimization model by replacing the $\ell_2$ norm in the regularization term with the $\ell_1$ norm. In [20], Liu et al. utilized the $\ell_0$ norm to characterize the global sparsity of stripe noise, effectively separating the stripes and achieving better removal results. Chang et al. introduced the low-rank and sparse image decomposition (LRSID) model based on the low rank of a single image, extending the algorithm from 2D images to 3D hyperspectral images [21]. Compared to the original UTV method, the improved algorithms demonstrated significantly enhanced destriping performance.
In summary, destriping methods in the research have been able to effectively remove stripe noise; however, there is still significant room for improvement in this area. Filtering-based and statistical-based methods have achieved good results in removing periodic stripes. However, they have often failed to deliver satisfactory results when dealing with complex non-periodic stripes. Deep-learning-based methods have high requirements for dataset preparation and choice of loss functions, and their applicability is limited to specific scenarios. In comparison, model-optimization-based methods have broader applicability; however, they still have their limitations. Many optimization models, for instance, consider the sparse characteristics of stripe noise structure. However, their exploration of these sparse characteristics is not sufficiently thorough. This deficiency leads to the loss of detailed information from the underlying image during the process of stripe removal. As a result, it is imperative to select appropriate norms that better characterize the relevant sparsity. Additionally, some optimization models involve overly intricate regularization constraints. Although they can yield improvements in destriping, the high number of parameters makes the process of parameter adjustment remarkably difficult. Hence, for more effective parameter configuration, the selection of simpler and more reasonable regularization constraints is of paramount importance.
In this paper, we propose a new destriping model based on the $\ell_p$ quasinorm and unidirectional variation to overcome the limitations of previous methods. The model fully considers the prior characteristics of remote sensing images, as well as the directional and structural properties of stripe noise. We introduce the $\ell_p$ quasinorm to characterize the global sparsity of the stripe noise and the local sparsity of the vertical image gradients. This quasinorm yields sparser solutions than the $\ell_1$ and $\ell_2$ norms, leading to superior stripe noise removal results. Additionally, the model employs the $\ell_0$ norm to capture the local sparsity of the horizontal gradient of the stripe noise, further preventing the loss of image details during the destriping process. To solve the model, we adopt the fast alternating direction method of multipliers (ADMM) algorithm [22], which transforms the complex non-convex problem into simple subproblems. The fast ADMM algorithm converges more rapidly and requires less computation time than the traditional ADMM algorithm. The overall framework of the algorithm is presented in Figure 1. Finally, we conducted extensive experiments comparing our proposed method with six classical methods. The proposed method achieves superior stripe noise removal results and demonstrates a certain robustness. The contributions and innovations of this work can be summarized as follows:
(1)
We utilize the gradient information obtained from remote sensing image decomposition to design regularization constraints in different directions, effectively avoiding the ripple effect during the destriping process.
(2)
The $\ell_p$ quasinorm is introduced into the proposed model to better capture the relevant sparsity properties, thereby preserving a greater amount of fine detail in the underlying image.
(3)
The fast ADMM algorithm is employed to solve the destriping model. It reduces the computational time, enabling efficient processing of large-scale data.
The subsequent sections are organized as follows: in Section 2, we introduce the relevant knowledge and research related to the proposed method. Section 3 provides a detailed explanation of the proposed stripe noise removal model and its solution. In Section 4, we conduct extensive experiments and compare our proposed method with six different approaches. The experimental results are analyzed and discussed in Section 5. Finally, Section 6 presents the conclusion of this study.

2. Related Work

2.1. Characteristics of Stripe Noise and UTV Model

It is commonly believed in the literature that the stripe noise present in remote sensing images is additive. Assuming a remote sensing image $g \in L^2(\Omega)$, its degradation model can be expressed as
$g(x, y) = u(x, y) + n(x, y)$
where $g(x, y)$ represents the actual observed data obtained from the sensor, $u(x, y)$ denotes the underlying image, and $n(x, y)$ denotes the stripe noise. Based on this characteristic, many researchers in the field of stripe noise removal have focused their attention on variational model optimization methods.
The classical total variation (TV) model was initially proposed by Rudin et al. [23]. It plays a significant role in the field of image restoration. The model expression is as follows:
$E(u) = \frac{1}{2} \| u - g \|_F^2 + \lambda \int_\Omega \sqrt{u_x^2 + u_y^2} \, dx \, dy$
in which λ is the coefficient of the regularization term, and its value plays a crucial role in the effectiveness of destriping methods. However, this model does not consider the correlated characteristics of stripe noise, resulting in limited effectiveness in removing stripe noise. In general, stripe noise tends to appear along the same direction. In this study, we focus on investigating horizontal stripe noise. To better analyze relevant prior characteristics, we extracted the image from the 27th band of MODIS data and computed the gradient information in different directions, as shown in Figure 2.
It can be observed that the stripe noise mainly affects the gradients perpendicular to the stripe direction, while the gradients along the stripe direction are minimally affected. This implies that the vertical gradient of the stripe noise is much larger than its horizontal gradient. Based on these observations, we can draw the following conclusion:
$\left\| \frac{\partial n(x, y)}{\partial x} \right\| \ll \left\| \frac{\partial n(x, y)}{\partial y} \right\|$
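This directional anisotropy is easy to verify numerically. The following NumPy sketch (our own illustration; the paper's experiments used MATLAB) builds a toy image corrupted by horizontal stripes and compares finite-difference energies in the two directions:

```python
import numpy as np

def directional_gradient_energy(img):
    """Mean absolute finite differences along rows (dx) and across rows (dy).

    For an image with horizontal stripes, the across-stripe (dy) energy
    dominates the along-stripe (dx) energy, matching the inequality above.
    """
    dx = np.diff(img, axis=1)  # horizontal (along-stripe) differences
    dy = np.diff(img, axis=0)  # vertical (across-stripe) differences
    return np.abs(dx).mean(), np.abs(dy).mean()

# Toy example: a smooth vertical ramp plus horizontal stripe noise.
rows = np.linspace(0, 1, 64)[:, None] * np.ones((1, 64))
stripes = np.zeros((64, 64))
stripes[::4, :] = 0.5            # every 4th row is a stripe
ex, ey = directional_gradient_energy(rows + stripes)
# ey >> ex here: the stripes only perturb the vertical differences
```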
In this regard, some researchers have improved the total variation model by utilizing the directional characteristics of stripe noise and proposed the unidirectional total variation model. The mathematical formulation of the UTV model can be represented as
$E(u) = \int_\Omega \left| \partial_x (u - g) \right|^2 + \lambda \left| \partial_y u \right|^2 \, dx \, dy$
The UTV model can be solved using the Euler–Lagrange equation, and it is highly effective in removing stripe noise from remote sensing images. However, it only considers the local sparsity of gradients in different directions and overlooks the global sparsity of the stripe noise. This can lead to ripple artifacts when destriping remote sensing images heavily contaminated by stripe noise. Additionally, sparse constraints based on the $\ell_2$ norm can lead to the loss of details in striped images.

2.2. The Sparsity Analysis of the $\ell_p$ Quasinorm

Selecting an appropriate norm to characterize sparsity is crucial, as it better emphasizes the structural sparsity and low-rank properties of the image and stripe components. In the field of image processing, the $\ell_p$ norm is typically defined as $\|G\|_p = \left( \sum_{i=1}^{N} \sum_{j=1}^{N} |G_{ij}|^p \right)^{1/p}$, while the $\ell_p$ quasinorm is defined as $\|G\|_p^p = \sum_{i=1}^{N} \sum_{j=1}^{N} |G_{ij}|^p$. The value of $p$ chosen for the $\ell_p$ quasinorm in this paper satisfies $0 < p < 1$. Compared to the $\ell_2$ and $\ell_1$ norms, the $\ell_p$ quasinorm offers greater flexibility and degrees of freedom [24]. Figure 3, taking the gradient information in the vertical direction of the underlying image $u$ as an example, demonstrates the sparsity characterization capabilities of different norms. We can observe that the contours of the $\ell_p$ quasinorm are more likely to approach the coordinate axes, thereby inducing sparser solutions.
In recent years, some researchers have made substantial progress in the field of image restoration by employing the $\ell_p$ quasinorm [25,26]. Therefore, to better describe the correlated sparsity characteristics of the underlying image and the stripe noise, we introduce the $\ell_p$ quasinorm into our destriping method, with the goal of effectively removing stripes while preserving as much detail in remote sensing images as possible.

3. Proposed Method

3.1. The Proposed Model

3.1.1. Global Sparsity Constraint

Based on the previous analysis of stripe noise characteristics, we observed that stripe noise in remote sensing images typically appears as entire rows or columns. In practical scenarios, the stripes may also vary in length and appear at random positions. Overall, however, stripe noise is sparse due to factors such as sensor offset. Many destriping algorithms [27,28,29,30,31] utilize the $\ell_1$ norm to characterize the sparse nature of the stripe noise. Considering the superior sparse representation capability of the $\ell_p$ quasinorm, we propose a new global sparsity term:
$R_s = \| u - g \|_p^p$
Even in the case of severe stripe noise pollution, Equation (5) exhibits a certain level of robustness. It ensures the similarity between the denoised and original images, thereby preventing the loss of excessive detail information.

3.1.2. Local Sparsity Constraint

As shown in Figure 2, the gradients along the stripe direction and the vertical direction exhibit different characteristics. Therefore, we need to establish separate sparse representation terms for each directional gradient. Firstly, from Figure 2b, we can observe that the gradient along the direction of the stripes in the remote sensing image is less affected by the stripe noise. This suggests that, in addition to the inherent sparse distribution of the stripe noise, its horizontal gradient should also possess a certain level of sparsity. Furthermore, in [32], the researchers extracted features of the stripe noise using singular value decomposition and observed that the stripe noise exhibited a certain degree of low-rank property. Therefore, the gradient matrix of the stripe noise along the horizontal direction consists mostly of zero elements. This conclusion remains valid regardless of the density and location of the stripe noise. Considering that the $\ell_0$ norm is capable of discerning zero and non-zero elements within a matrix, the regularization term for the horizontal direction can be expressed as:
$R_h = \| \partial_x (u - g) \|_0$
Furthermore, considering the prior information of the image itself, the stripe noise disrupts the local continuity of the underlying image, resulting in significant variations in the vertical gradients. To ensure the local continuity of the underlying image $u$, it is desirable to minimize the variations in $\partial_y u$. This also indicates that the gradient along the vertical direction of the image should possess a certain degree of sparsity. Introducing the $\ell_p$ quasinorm, the sparse representation term for the vertical direction can be expressed as:
$R_v = \| \partial_y u \|_p^p$
Finally, considering the stripe noise and the intrinsic characteristics of the underlying image, in combination with the aforementioned three sparse constraints, we propose a new model for remote sensing image destriping:
$u = \arg\min_u \; \| u - g \|_p^p + \lambda_1 \| \partial_x (u - g) \|_0 + \lambda_2 \| \partial_y u \|_p^p$
where $\| u - g \|_p^p$ is referred to as the fidelity term, while $\| \partial_x (u - g) \|_0$ and $\| \partial_y u \|_p^p$ are referred to as the regularization terms. $\lambda_1$ and $\lambda_2$ represent the regularization coefficients, which determine the weighting between the fidelity and regularization terms.
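For concreteness, the objective of Equation (8) can be evaluated directly on a candidate image. The following NumPy sketch (our own helper, using forward differences for the gradients; not the paper's code) computes the three terms:

```python
import numpy as np

def destriping_objective(u, g, lam1, lam2, p):
    """Value of Equation (8):
    ||u - g||_p^p + lam1 * ||dx(u - g)||_0 + lam2 * ||dy u||_p^p."""
    n = u - g                                   # estimated stripe component
    dx = np.diff(n, axis=1)                     # horizontal gradient of the stripes
    dy = np.diff(u, axis=0)                     # vertical gradient of the image
    lp = lambda G: np.sum(np.abs(G) ** p)       # l_p quasinorm (0 < p < 1)
    l0 = np.count_nonzero(dx)                   # l_0 "norm": count of non-zeros
    return lp(n) + lam1 * l0 + lam2 * lp(dy)
```

When `u == g` and the image is flat, every term vanishes; adding a constant horizontal stripe to one row of `u` increases the fidelity and vertical-gradient terms but leaves the $\ell_0$ term at zero, as the analysis above predicts.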

3.2. The Solution Based on Fast ADMM

Due to the non-convex and non-differentiable nature of the proposed model, some classical optimization algorithms are not suitable for solving it, and the ADMM algorithm is commonly employed for such problems. Distinguished from other methods, this paper utilizes the fast ADMM algorithm to solve the proposed model. The convergence rate of the original ADMM algorithm is $O(1/k)$, while the fast ADMM algorithm improves it to $O(1/k^2)$ [22]. This significantly reduces the overall computation time of the algorithm, making it more efficient for handling large-scale datasets. The main idea is to introduce three intermediate variables, $M_1$, $M_2$, and $M_3$, to transform the unconstrained extremum problem of Equation (8) into a constrained one.
$u = \arg\min_{u, M_1, M_2, M_3} \; \| M_1 \|_p^p + \lambda_1 \| M_2 \|_0 + \lambda_2 \| M_3 \|_p^p \quad \mathrm{s.t.} \quad M_1 = u - g, \; M_2 = \partial_x (u - g), \; M_3 = \partial_y u$
Then, we can obtain the augmented Lagrange function of the problem as follows:
$L = \| M_1 \|_p^p + \lambda_1 \| M_2 \|_0 + \lambda_2 \| M_3 \|_p^p + \langle Q_1, (u - g) - M_1 \rangle + \frac{\alpha_1}{2} \| M_1 - (u - g) \|_2^2 + \langle Q_2, \partial_x (u - g) - M_2 \rangle + \frac{\alpha_2}{2} \| M_2 - \partial_x (u - g) \|_2^2 + \langle Q_3, \partial_y u - M_3 \rangle + \frac{\alpha_3}{2} \| M_3 - \partial_y u \|_2^2$
where $Q_1$, $Q_2$, and $Q_3$ are the Lagrange multipliers, and $\alpha_1$, $\alpha_2$, and $\alpha_3$ are the coefficients of the penalty terms. We need to solve the subproblem for each variable individually. Additionally, auxiliary variables $M_1^a$, $M_2^a$, $M_3^a$, $Q_1^a$, $Q_2^a$, $Q_3^a$, and $\eta_i$ were introduced to accelerate the iteration process. Among them, $M_1^a$ represents the intermediate variable in the iteration of $M_1$ and is equivalent to $M_1$ when solving the subproblem; the other variables are defined similarly.
(1)
The subproblem related to u is
$u = \arg\min_u \; \langle Q_1, (u - g) - M_1 \rangle + \frac{\alpha_1}{2} \| M_1 - (u - g) \|_2^2 + \langle Q_2, \partial_x (u - g) - M_2 \rangle + \frac{\alpha_2}{2} \| M_2 - \partial_x (u - g) \|_2^2 + \langle Q_3, \partial_y u - M_3 \rangle + \frac{\alpha_3}{2} \| M_3 - \partial_y u \|_2^2$
Considering the decoupling of other variables from u , Equation (11) can be represented using the convolution as follows:
$u = \arg\min_u \; \frac{\alpha_1}{2} \left\| M_1 - (u - g) - \frac{Q_1}{\alpha_1} \right\|_2^2 + \frac{\alpha_2}{2} \left\| M_2 - K_x * (u - g) - \frac{Q_2}{\alpha_2} \right\|_2^2 + \frac{\alpha_3}{2} \left\| M_3 - K_y * u - \frac{Q_3}{\alpha_3} \right\|_2^2$
where $*$ represents the convolution operation, $K_x = [1, -1]$ represents the convolution kernel for horizontal differencing, and $K_y = [1, -1]^T$ represents the convolution kernel for vertical differencing.
Then, Equation (12) can be solved using the convolution theorem and fast Fourier transform (FFT). After introducing acceleration auxiliary variables, the iterative formula for u can be obtained as follows:
$u^{(k+1)} = \mathcal{F}^{-1} \left( \frac{\varphi}{\alpha_1 + \alpha_2 \overline{\mathcal{F}(K_x)} \circ \mathcal{F}(K_x) + \alpha_3 \overline{\mathcal{F}(K_y)} \circ \mathcal{F}(K_y)} \right)$
in which
$\varphi = \alpha_1 \left( \mathcal{F}(M_1^{a(k)}) - \mathcal{F}\!\left( \frac{Q_1^{a(k)}}{\alpha_1} \right) \right) + \alpha_2 \overline{\mathcal{F}(K_x)} \circ \left( \mathcal{F}(M_2^{a(k)}) - \mathcal{F}\!\left( \frac{Q_2^{a(k)}}{\alpha_2} \right) \right) + \alpha_3 \overline{\mathcal{F}(K_y)} \circ \left( \mathcal{F}(M_3^{a(k)}) - \mathcal{F}\!\left( \frac{Q_3^{a(k)}}{\alpha_3} \right) \right) + \left( \alpha_1 + \alpha_2 \overline{\mathcal{F}(K_x)} \circ \mathcal{F}(K_x) \right) \circ \mathcal{F}(g)$
where $\circ$ denotes component-wise multiplication, $\mathcal{F}$ denotes the Fourier transform operator, $\mathcal{F}^{-1}$ denotes the inverse Fourier transform operator, and $\overline{\mathcal{F}(K_i)}$ denotes the complex conjugate of $\mathcal{F}(K_i)$.
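Assuming periodic boundary conditions (required for the FFT diagonalization), the u-update of Equations (13) and (14) can be sketched as follows. This is our own NumPy illustration (the helper `psf2otf` and all variable names are ours, not the paper's):

```python
import numpy as np

def psf2otf(kernel, shape):
    """Zero-pad a small kernel to `shape` and take its 2-D FFT (the OTF)."""
    pad = np.zeros(shape)
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.fft.fft2(pad)

def update_u(g, M1, M2, M3, Q1, Q2, Q3, a1, a2, a3):
    """One u-update in the spirit of Equations (13)-(14)."""
    Kx = np.array([[1.0, -1.0]])      # horizontal difference kernel
    Ky = np.array([[1.0], [-1.0]])    # vertical difference kernel
    Fx, Fy = psf2otf(Kx, g.shape), psf2otf(Ky, g.shape)
    F = np.fft.fft2
    phi = (a1 * F(M1 - Q1 / a1)
           + a2 * np.conj(Fx) * F(M2 - Q2 / a2)
           + a3 * np.conj(Fy) * F(M3 - Q3 / a3)
           + (a1 + a2 * np.conj(Fx) * Fx) * F(g))
    denom = a1 + a2 * np.conj(Fx) * Fx + a3 * np.conj(Fy) * Fy
    return np.real(np.fft.ifft2(phi / denom))
```

As a sanity check, with all intermediate variables and multipliers set to zero and a constant input, the update returns the input unchanged, since the difference kernels have zero response at the DC frequency.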
(2)
The subproblem related to M 1 is
$M_1 = \arg\min_{M_1} \; \| M_1 \|_p^p + \langle Q_1, (u - g) - M_1 \rangle + \frac{\alpha_1}{2} \| M_1 - (u - g) \|_2^2$
Considering the decoupling of M 1 from the other variables, the subproblem for M 1 can be represented as follows:
$M_1 = \arg\min_{M_1} \; \| M_1 \|_p^p + \frac{\alpha_1}{2} \left\| M_1 - (u - g) - \frac{Q_1}{\alpha_1} \right\|_2^2$
It can be solved using the shrinkage operator with soft thresholding [33,34], resulting in the accelerated iterative formula for M 1 :
$M_1^{(k+1)} = \mathrm{shrink}_p \left( u^{(k+1)} - g + \frac{Q_1^{a(k)}}{\alpha_1}, \; \frac{1}{\alpha_1} \right)$
where
$\mathrm{shrink}_p(m, n) = \max \left( |m| - n^{2-p} |m|^{p-1}, \; 0 \right) \cdot \frac{m}{|m|}$
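Note that for $p = 1$ this operator reduces to classical soft thresholding with threshold $n$. An illustrative NumPy version (our own sketch, guarding the $|m|^{p-1}$ term at zero) might look like:

```python
import numpy as np

def shrink_p(m, n, p):
    """Generalized p-shrinkage of Equation (18):
    max(|m| - n**(2-p) * |m|**(p-1), 0) * m/|m|, elementwise."""
    mag = np.abs(m)
    safe = np.where(mag > 0, mag, 1.0)        # avoid 0**(p-1) blow-up
    thresh = n ** (2 - p) * safe ** (p - 1)
    out = np.maximum(mag - thresh, 0.0) * np.sign(m)
    return np.where(mag > 0, out, 0.0)        # zero input stays zero
```

With `p = 1` the threshold is exactly `n` (soft thresholding); for `p < 1` large entries are shrunk less, which is one reason the $\ell_p$ quasinorm preserves strong image structure better than the $\ell_1$ norm.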
(3)
The subproblem related to M 2 is
$M_2 = \arg\min_{M_2} \; \lambda_1 \| M_2 \|_0 + \langle Q_2, \partial_x (u - g) - M_2 \rangle + \frac{\alpha_2}{2} \| M_2 - \partial_x (u - g) \|_2^2$
Similarly, due to the decoupling of M 2 from the other variables, we can obtain the following expression:
$M_2 = \arg\min_{M_2} \; \lambda_1 \| M_2 \|_0 + \frac{\alpha_2}{2} \left\| M_2 - \partial_x (u - g) - \frac{Q_2}{\alpha_2} \right\|_2^2$
Then, according to the hard thresholding shrinkage theorem [35,36], M 2 can be updated in an accelerated manner:
$M_2^{(k+1)} = \mathrm{hard} \left( \partial_x (u^{(k+1)} - g) + \frac{Q_2^{a(k)}}{\alpha_2}, \; \sqrt{\frac{2 \lambda_1}{\alpha_2}} \right)$
where
$\mathrm{hard}(m, n) = \begin{cases} 0, & |m| < n \\ m, & |m| \geq n \end{cases}$
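Unlike the shrinkage operator, hard thresholding leaves surviving entries untouched and simply zeroes the rest; a one-line NumPy illustration (our own sketch):

```python
import numpy as np

def hard(m, n):
    """Hard thresholding of Equation (22): keep m where |m| >= n, else zero."""
    return np.where(np.abs(m) >= n, m, 0.0)
```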
(4)
The subproblem related to M 3 is
$M_3 = \arg\min_{M_3} \; \lambda_2 \| M_3 \|_p^p + \langle Q_3, \partial_y u - M_3 \rangle + \frac{\alpha_3}{2} \| M_3 - \partial_y u \|_2^2$
In a similar manner to M 1 , we can derive the accelerated iterative formula for M 3 as follows:
$M_3^{(k+1)} = \mathrm{shrink}_p \left( \partial_y u^{(k+1)} + \frac{Q_3^{a(k)}}{\alpha_3}, \; \frac{\lambda_2}{\alpha_3} \right)$
Finally, utilizing the gradient ascent method, the Lagrange multipliers Q 1 , Q 2 , and Q 3 can be updated by
$Q_1^{(k+1)} = Q_1^{a(k)} + \alpha_1 \left( u^{(k+1)} - g - M_1^{(k+1)} \right)$
$Q_2^{(k+1)} = Q_2^{a(k)} + \alpha_2 \left( \partial_x (u^{(k+1)} - g) - M_2^{(k+1)} \right)$
$Q_3^{(k+1)} = Q_3^{a(k)} + \alpha_3 \left( \partial_y u^{(k+1)} - M_3^{(k+1)} \right)$
Following the determination of the iterative formulas for each variable, the accelerated iteration process can be initiated. The calculation formula for the primal-dual residual, which is crucial for accelerated iterations, is as follows:
$Z_i^{(k)} = \alpha_i^{-1} \left\| Q_i^{(k)} - Q_i^{a(k)} \right\|_2^2 + \alpha_i \left\| M_i^{(k)} - M_i^{a(k)} \right\|_2^2$
If the condition $Z_i^{(k+1)} < \mu Z_i^{(k)}$ is satisfied, the iteration step length is updated. Otherwise, the algorithm is restarted using the result of the previous iteration as the initial value. The auxiliary variables are updated by the following expressions:
$\eta_i^{(k+1)} = \frac{1 + \sqrt{1 + 4 (\eta_i^{(k)})^2}}{2}$
$M_i^{a(k+1)} = M_i^{(k+1)} + \frac{\eta_i^{(k)} - 1}{\eta_i^{(k+1)}} \left( M_i^{(k+1)} - M_i^{(k)} \right)$
$Q_i^{a(k+1)} = Q_i^{(k+1)} + \frac{\eta_i^{(k)} - 1}{\eta_i^{(k+1)}} \left( Q_i^{(k+1)} - Q_i^{(k)} \right)$
where $i = 1, 2, 3$, and $\mu$ is a scaling coefficient close to 1.
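The acceleration step of Equations (29)–(31) is a Nesterov-style extrapolation applied to both the auxiliary variables and the multipliers. A minimal sketch for a single index $i$ (our own variable names, not the paper's code):

```python
import numpy as np

def accel_step(M_new, M_old, Q_new, Q_old, eta):
    """One acceleration update: extrapolated (M^a, Q^a) and the new eta."""
    eta_new = (1.0 + np.sqrt(1.0 + 4.0 * eta ** 2)) / 2.0   # Equation (29)
    w = (eta - 1.0) / eta_new                               # momentum weight
    Ma = M_new + w * (M_new - M_old)                        # Equation (30)
    Qa = Q_new + w * (Q_new - Q_old)                        # Equation (31)
    return Ma, Qa, eta_new
```

Note that when a restart resets $\eta_i$ to 1, the momentum weight becomes 0 at the next step, so the scheme falls back to plain ADMM until the residual decreases again.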
Thus far, with all subproblems solved, the overall algorithm can be summarized as presented in Algorithm 1.
Algorithm 1: The proposed destriping model with Fast ADMM
Input: Degraded image g and related parameter λ 1 , λ 2 , α 1 , α 2 , and α 3
1: Initialize: Set $u^{(0)} = M_i^{(0)} = M_i^{a(0)} = Q_i^{(0)} = Q_i^{a(0)} = 0$ $(i = 1, 2, 3)$, $\eta_i^{(0)} = 1$, $\varepsilon = 10^{-4}$, $n_{max} = 200$.
2: While: $\| u^{(k+1)} - u^{(k)} \|_2^2 / \| u^{(k)} \|_2^2 > \varepsilon$ and $n < n_{max}$
3: update u ( k + 1 ) by using Equation (13)
4: update M 1 ( k + 1 ) , M 2 ( k + 1 ) , and M 3 ( k + 1 ) by using Equations (17), (21) and (24)
5: update Q 1 ( k + 1 ) , Q 2 ( k + 1 ) , and Q 3 ( k + 1 ) by using Equations (25)–(27)
6: update Z i ( k + 1 ) by using Equation (28)
7: if  Z i ( k + 1 ) < μ Z i ( k ) , i = 1 , 2 , 3 then
8: update η i ( k + 1 ) , M i a ( k + 1 ) and Q i a ( k + 1 ) by using Equations (29)–(31)
9: else
10: $\eta_i^{(k+1)} = 1$, $M_i^{a(k+1)} = M_i^{(k+1)}$, $Q_i^{a(k+1)} = Q_i^{(k+1)}$, $Z_i^{(k+1)} = \mu^{-1} Z_i^{(k)}$, $i = 1, 2, 3$
11: end if
12: n = n + 1
13: End While
Output: Destriped image u

4. Experimental Results

To validate the effectiveness and generalizability of the proposed method, we conducted separate tests on simulated and real stripe noise. The experiments involved a significant number of comparisons to evaluate the performance of the proposed method on different types of stripe noise, ensuring the rigor of the experiments. For the comparative experiments, we analyzed and compared six typical destriping methods. Among them, WAFT [3] and WLS [37] are filtering-based and statistical-based methods, respectively. UTV [2] is a model optimization method based on the $\ell_2$ norm. SAUTV [19] is an earlier model optimization algorithm based on the $\ell_1$ norm, while GSLV [27] and RBSUTV [38] are more advanced model optimization methods proposed in recent years. Furthermore, all our experiments were conducted on a personal computer with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz and 16 GB RAM, using MATLAB R2022b.
In the evaluation of the method, this study employed a comprehensive approach that combined subjective and objective assessments, resulting in more persuasive evaluation results. In terms of subjective evaluation, the effectiveness of different methods in removing stripes can be directly observed by examining the restored remote sensing images and their corresponding stripe noise maps. We also zoomed in on some areas with noticeable differences and marked them with red boxes in the images. Furthermore, we plotted the mean cross-track profiles and mean column power spectra of the restored remote sensing images to better demonstrate the differences in the stripe noise removal results among the different methods. In terms of objective evaluation, we used different reference metrics for the simulated and real data. For the simulated data, since the original images are available, we used peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [39] as reference metrics. For the real data, we employed two commonly used no-reference evaluation metrics in the field of stripe noise removal: mean relative deviation (MRD) [40,41] and inverse coefficient of variation (ICV) [20,42].

4.1. Simulated Data Experiments

In this section, we conducted extensive simulation experiments to validate the superiority of the algorithm. In Figure 4, we selected six remote sensing images captured by different sensors for conducting simulated stripe noise experiments. Among them, Figure 4a,b,f are MODIS data products obtained from 1B-level calibrated radiance, which can be downloaded from the official website of NASA [43]. Figure 4c is a hyperspectral image captured by the Tiangong-1 satellite. Figure 4d is a hyperspectral image of Washington DC Mall, which can be obtained from the relevant website [44]. Figure 4e is captured by the VIIRS sensor and is included in the “Earth at Night” collection, which can also be obtained from the official website of NASA [45].
In the experiment, the simulated stripe noise can be mainly classified into two categories: periodic and non-periodic. Inspired by the ideas presented in [31], we verified the effectiveness and robustness of the proposed method by adding stripes of different intensities and ratios. During the destriping process, the remote sensing image was handled in MATLAB as an eight-bit encoded matrix; therefore, the intensity of the stripe noise could be selected within the range of [0, 255]. The ratio refers to the proportion of rows in the remote sensing image matrix that contain stripe noise and can be selected within the range of [0, 1]. Additionally, to facilitate a better comparison of the stripe noise removal effects of different methods, we normalized the stripe noise. Moreover, in the simulated experiments, we employed two important objective reference metrics, PSNR and SSIM, calculated as follows:
$\mathrm{PSNR}(I, R) = 10 \log_{10} \frac{255 \times 255}{\frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} (I_{ij} - R_{ij})^2}$
where $I$ represents the original image, $R$ represents the destriped image, and $m \times n$ is the total number of pixels in the image.
$\mathrm{SSIM}(I, R) = \frac{\left( 2 M_I M_R + (255 k_1)^2 \right) \left( 2 \sigma_{IR} + (255 k_2)^2 \right)}{\left( M_I^2 + M_R^2 + (255 k_1)^2 \right) \left( \sigma_I^2 + \sigma_R^2 + (255 k_2)^2 \right)}$
where $M_I$ and $M_R$ respectively denote the pixel means of images $I$ and $R$, $\sigma_{IR}$ denotes the covariance between $I$ and $R$, and $\sigma_I^2$ and $\sigma_R^2$ respectively denote the variances of $I$ and $R$. $k_1$ and $k_2$ are constants used in the calculations.
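For reference, the two metrics of Equations (32) and (33) can be computed directly; the following NumPy sketch implements the global (single-window) SSIM form given above rather than the windowed SSIM of [39], and the function names are our own:

```python
import numpy as np

def psnr(I, R):
    """PSNR of Equation (32) for 8-bit images (peak value 255)."""
    mse = np.mean((I.astype(np.float64) - R.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(I, R, k1=0.01, k2=0.03):
    """Global (single-window) SSIM of Equation (33)."""
    I, R = I.astype(np.float64), R.astype(np.float64)
    c1, c2 = (255 * k1) ** 2, (255 * k2) ** 2
    mi, mr = I.mean(), R.mean()
    vi, vr = I.var(), R.var()
    cov = np.mean((I - mi) * (R - mr))
    return ((2 * mi * mr + c1) * (2 * cov + c2)) / \
           ((mi ** 2 + mr ** 2 + c1) * (vi + vr + c2))
```

Identical images give an SSIM of exactly 1, and a uniform error of 1 gray level gives a PSNR of about 48.13 dB, which is a handy sanity check before running full comparisons.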

4.1.1. Periodic Stripe Noise

For the assessment of periodic stripe noise, we selected three different remote sensing images and added stripe noise of varying intensities and ratios for experimentation. In Figure 5, through the restored remote sensing images, we observed that all the methods effectively removed the stripe noise. However, the WAFT method introduced some ripple artifacts in the denoised image, while UTV and SAUTV caused the blurring of the edge details in the recovered image. The other three methods and the proposed method performed well, with no significant visual differences observed.
In Figure 6, the denoising results of most methods are similar to those in Figure 5, except for the WLS method, which still exhibits some noticeable stripes in the denoised images. This is because we added stripe noise with different ratios to the two remote sensing images, resulting in different frequencies. The WLS method was unable to accurately compute the local linear relationship between the image and the stripe noise through guided filtering, leading to incomplete removal of the stripe noise. This indicates that filter-based and statistical-based stripe noise removal algorithms are not universally applicable. In Figure 7, we present the stripe noise estimates for Figure 6. For the model optimization algorithms, it can be observed that whether it is UTV based on the $\ell_2$ norm, or SAUTV, GSLV, and RBSUTV based on the $\ell_1$ norm, a relatively large amount of underlying image information is removed during destriping. In contrast, the proposed method removed the least amount of underlying image detail. This indicates that the $\ell_p$ quasinorm has a better capability to characterize the sparse nature of stripe noise than the $\ell_1$ and $\ell_2$ norms.
To distinguish the subtle differences between the methods, we increased the ratio of periodic stripes in Figure 8 and plotted the corresponding metric curves. Figure 9 presents the mean cross-track profiles of the stripe removal results for each method. Although some methods showed no significant visual differences from the proposed method, the mean cross-track profile of the proposed method is noticeably superior. It can be observed that, except for the proposed method, the mean cross-track profiles of the other methods deviate from the original image at the peaks. This also indicates that the $\ell_1$ and $\ell_2$ norms inadequately capture the sparse characteristics of stripe noise, resulting in the loss of detail information during destriping. Figure 10 displays the mean column power spectrum for each method. The main frequencies of the stripe noise are concentrated around 0.1, 0.2, 0.3, and 0.4, which aligns with the actual situation. Around the frequency of 0.05, the curve of the proposed method fits the original curve more closely than those of the other methods. This indicates that the proposed method outperforms the others in preserving details.
Furthermore, from the results presented in Table 1, it can be observed that WAFT and WLS perform well in removing low-level stripes. However, as the level of stripe noise increases, their removal ability deteriorates rapidly. Consistent with the results presented in Figure 6 and Figure 7, UTV, SAUTV, and GSLV all lose some details of the underlying image when removing stripe noise, leading to lower PSNR and SSIM values than the proposed method. The RBSUTV method, owing to its specific characteristics, performs better than the proposed method on low-level stripe noise but poorly on high-level stripe noise. In comparison, the proposed method is superior to the other methods in most cases and demonstrates robustness. Although it may not surpass RBSUTV in some situations, the difference between the two is minimal.
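Of the two reference metrics in Table 1, PSNR can be computed in a few lines, while SSIM [39] involves windowed statistics and is usually taken from an image-processing library. A minimal PSNR sketch on a toy striped image:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the destriped
    result is closer to the clean reference."""
    ref = np.asarray(reference, float)
    est = np.asarray(restored, float)
    mse = np.mean((ref - est) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref.copy()
noisy[::2, :] += 10.0            # a stripe on every other row
value = psnr(ref, noisy)         # mse = 50, so about 31.14 dB
```

A perfect restoration gives infinite PSNR, since the mean squared error is zero.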

4.1.2. Nonperiodic Stripe Noise

For the experiments on non-periodic stripe noise, we also selected three examples for demonstration, with the relevant results shown in Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16. Among them, Figure 11, Figure 12 and Figure 14 show the results of removing different levels of non-periodic stripe noise. Overall, the results are similar to those for periodic stripe noise; however, some points are worth noting. In Figure 11, the horizontal streets are mistakenly treated as stripe noise and removed in the results of SAUTV and RBSUTV. This is because these edge structures closely resemble stripe noise and are prone to being misprocessed; therefore, considering global sparsity during destriping is necessary. Additionally, because the simulated remote sensing image in Figure 12 is smooth overall, the stripe noise estimates in Figure 13 contain less detail information from the underlying image. This suggests that the image itself also affects the stripe noise removal capability of an algorithm.
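The distinction between the two noise types can be made concrete with a small simulator (a sketch; the paper's exact noise ratios and intensities may differ): periodic stripes recur at a fixed row spacing with a fixed intensity, while non-periodic stripes hit a random fraction of rows with random signed intensities.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def add_periodic_stripes(img, period=8, amplitude=20.0):
    """Periodic case: a stripe of fixed intensity on every `period`-th row."""
    noisy = np.asarray(img, float).copy()
    noisy[::period, :] += amplitude
    return noisy

def add_nonperiodic_stripes(img, ratio=0.2, max_amplitude=20.0):
    """Non-periodic case: a random fraction `ratio` of rows, each with a
    random signed intensity (random stripe length could be added similarly)."""
    noisy = np.asarray(img, float).copy()
    rows = noisy.shape[0]
    picked = rng.choice(rows, size=int(ratio * rows), replace=False)
    amps = rng.uniform(5.0, max_amplitude, size=picked.size)
    amps *= rng.choice([-1.0, 1.0], size=picked.size)
    noisy[picked, :] += amps[:, None]
    return noisy

base = np.full((64, 64), 128.0)
periodic = add_periodic_stripes(base)
nonperiodic = add_nonperiodic_stripes(base)   # corrupts 12 of the 64 rows
```

The random location and intensity of the non-periodic case is what scatters its energy across the column power spectrum instead of concentrating it at a few frequencies.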
In Figure 15, it can be observed that, unlike periodic stripes, the mean cross-track profiles of non-periodic stripe noise appear more chaotic. Additionally, the mean column power spectrum in Figure 16 shows that the frequencies of non-periodic stripe noise are not concentrated but dispersed. As a result, non-periodic stripe noise has a significant impact on filtering-based and statistics-based destriping methods. Furthermore, from the PSNR and SSIM values of the recovered images presented in Table 2, we can observe that the performance of WAFT and WLS decreases noticeably compared to removing periodic stripes, and the other algorithms also show slight decreases. This indicates that destriping becomes more difficult in the presence of non-periodic stripe noise. However, the overall trends remain consistent with those for periodic stripe noise, and the proposed method still outperforms the other algorithms.

4.2. Real Data Experiments

In the experiments conducted with the use of real data, we selected four remote sensing images from the MODIS dataset. Figure 17a was obtained from the 33rd band of the MODIS data, primarily affected by non-periodic stripe noise. Figure 18a and Figure 19a were both obtained from the 27th band of the MODIS data. Figure 18a is primarily contaminated by periodic stripes, while Figure 19a is primarily contaminated by non-periodic stripes. This indicates that the stripe noise in the same band is not fixed. Figure 20a was obtained from the 31st band of the MODIS data, and it exhibits a high ratio of stripe noise but with a relatively low intensity level.
Furthermore, since no original images are available for the real data, we selected two commonly used no-reference metrics, MRD and ICV, to quantitatively assess the performance of the different methods. They are calculated as follows:
$$\mathrm{MRD} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\frac{\left|X_{ij}-Y_{ij}\right|}{X_{ij}}\times 100\%$$
where mn represents the number of pixels in the region unaffected by stripes, X_{ij} represents the pixel value of the original image, and Y_{ij} represents the pixel value of the destriped image. Considering the characteristics of stripe noise, we selected individual rows without stripe noise as the stripe-free region for calculating the MRD index. Furthermore, we took the mean of multiple calculations as the final result to avoid randomness in the results.
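A direct implementation of the MRD formula over a stripe-free region is a one-liner (a sketch; X and Y here are small synthetic arrays standing in for the selected rows):

```python
import numpy as np

def mrd(x, y):
    """Mean relative deviation (percent): x holds original pixels from a
    stripe-free region, y the corresponding destriped pixels."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return float(np.mean(np.abs(x - y) / x) * 100.0)

# Toy stripe-free region: per-pixel relative deviations are
# 0.01, 0.01, 0.0, 0.05, so MRD = 1.75%.
x = np.array([[100.0, 200.0], [50.0, 80.0]])
y = np.array([[ 99.0, 202.0], [50.0, 76.0]])
```

A lower MRD means the destriping changed the stripe-free rows less, i.e., better fidelity.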
$$\mathrm{ICV} = \frac{Y_m}{Y_{\mathrm{std}}}$$
where Y_m represents the mean of the pixel values, and Y_{std} represents their standard deviation. As suggested in [2], we selected a 10 × 10 pixel window in homogeneous regions of the destriped image to compute the ICV index. To better differentiate the algorithms, we performed multiple calculations in the experiment and selected the set of data with the highest value as the final result.
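The ICV of a homogeneous window follows the same pattern (a sketch; in practice the window position would be chosen inside a visually uniform region):

```python
import numpy as np

def icv(destriped, top, left, size=10):
    """Inverse coefficient of variation (mean / std) of a size x size
    window taken from a homogeneous region of the destriped image."""
    window = np.asarray(destriped, float)[top:top + size, left:left + size]
    return float(window.mean() / window.std())

# Toy homogeneous region: columns alternate 120 and 121, so any 10 x 10
# window has mean 120.5 and standard deviation 0.5, giving ICV = 241.
flat = np.full((32, 32), 120.0) + np.tile([0.0, 1.0], (32, 16))
```

A higher ICV means a flatter window, i.e., fewer residual stripes in the homogeneous region.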
Figure 17, Figure 18, Figure 19 and Figure 20 depict the restored images after applying the different methods to remove stripe noise from the real data. It is worth noting that Figure 17 contains extremely dark regions, which are highly challenging for destriping. As a result, most methods exhibited artifacts in that area. However, the proposed method effectively avoided generating artifacts by incorporating a vertical sparsity constraint with the l_p quasinorm. Furthermore, Figure 19 reveals that, in practical scenarios, non-periodic stripe noise appears randomly in location, length, and intensity, significantly increasing the difficulty of removal. As observed in the locally magnified images, both WAFT and WLS exhibit residual stripe noise in their restored images. Even the RBSUTV method, which performed well on destriping in the simulated experiments, failed to achieve satisfactory results on this type of stripe. In this case, the proposed method still achieved excellent destriping results. This strongly indicates the superior capability of the l_p quasinorm in characterizing the sparse nature of the underlying image, enabling better separation of stripe information from the underlying image.
In Figure 21, we can observe that the mean cross-track profiles of the destriping results obtained from WAFT, UTV, and SAUTV align with the trend of the original image. However, their curves appear overly smooth, indicating a loss of detailed information. In Figure 21c, some prominent spikes are present, indicating the presence of residual stripe noise in the denoised result of WLS. The RBSUTV method shows significant deviations in certain row mean values compared to the original image, which could be attributed to the choice of regularization terms. In comparison, the curves of GSLV and the proposed method align most closely with the trend of the original image. Due to the relatively low intensity of the stripe noise, all methods effectively removed the stripe noise. Consequently, in Figure 22, there is minimal difference in the normalized power spectrum of the stripe removal results among the various methods.
Additionally, Table 3 presents the objective evaluation metrics, MRD and ICV, for Figure 17, Figure 18, Figure 19 and Figure 20, with the best results highlighted. From the table, we can observe that the proposed method outperforms the other methods in most cases. Although it does not achieve the best results in every case, the gap to the optimal results remains small. This further demonstrates the effectiveness and robustness of the proposed method.

5. Discussion

5.1. Discussion of Experiment Results

In Section 4, we conducted extensive simulated and real experiments. The results show that the proposed method outperforms the other methods in most cases; however, some details are worth discussing. From the visual destriping results, we observed that WAFT and WLS performed poorly, as their restored images still contained residual stripe noise. Among the model optimization methods, both UTV and SAUTV tend to blur edge structures and even introduce artifacts during stripe removal. Although there is no significant visual difference between the stripe removal results of GSLV and RBSUTV, the loss of detail information can still be detected from the corresponding curve graphs. It is evident that the ability of the l_1 or l_2 norm to characterize sparse properties is limited; therefore, introducing the l_p quasinorm into the sparse constraints proves beneficial. Although the proposed algorithm achieved good overall results, there are still some gaps in capturing fine details compared to the original image. From Table 1 and Table 2, we can observe that the proposed method shows some differences from the other methods when dealing with low-intensity stripe noise. This indicates that our sparse constraints on the image and stripe components may not be comprehensive enough. Additionally, in Table 3, there are differences between the ICV metric of the proposed method and those of some other methods. Part of the reason could be that the pixels of the selected region were not evenly distributed, leading to less accurate computation. Therefore, when evaluating the performance of an algorithm, it is important to consider multiple metrics for a comprehensive assessment.
Modern computers are powerful, and algorithm runtime is often not the primary concern. However, with the advent of the big data era, runtime has become a noteworthy factor in research. In [22], the fast ADMM algorithm was proposed to speed up the entire iteration process by adjusting the step size. We therefore adopted the fast ADMM algorithm to solve the proposed optimization model, greatly reducing the overall runtime of the proposed method. Table 4 presents the runtimes for destriping six images of different sizes using both ADMM and fast ADMM. It can be observed that the advantage of the fast ADMM algorithm grows as the image size increases. Additionally, during the experiments, we found that the fast ADMM algorithm sometimes also yields slight improvements in the results.
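The key ingredients of the accelerated scheme in [22] are a Nesterov-style extrapolation of the split variable and the scaled dual, plus a residual-based restart that keeps the iteration stable. The sketch below applies them to a toy l_1 least-squares problem rather than the paper's destriping model (the x- and z-updates are stand-ins for its subproblem solvers):

```python
import numpy as np

def soft(v, t):
    """l_1 shrinkage: the z-subproblem of this splitting."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fast_admm_lasso(A, b, lam, rho=1.0, eta=0.999, iters=300):
    """Fast ADMM with restart, in the style of [22], on the toy problem
    min 0.5*||Ax - b||^2 + lam*||x||_1 with the splitting x = z."""
    n = A.shape[1]
    Atb = A.T @ b
    inv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-update solve
    z = z_hat = np.zeros(n)
    u = u_hat = np.zeros(n)                           # scaled dual variable
    alpha, c_prev = 1.0, np.inf
    for _ in range(iters):
        z_old, u_old = z, u
        x = inv @ (Atb + rho * (z_hat - u_hat))       # x-subproblem
        z = soft(x + u_hat, lam / rho)                # z-subproblem
        u = u_hat + x - z                             # dual update
        c = rho * np.sum((u - u_hat) ** 2) + rho * np.sum((z - z_hat) ** 2)
        if c < eta * c_prev:                          # accelerate: extrapolate z and u
            alpha_new = (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0
            w = (alpha - 1.0) / alpha_new
            z_hat, u_hat = z + w * (z - z_old), u + w * (u - u_old)
            alpha, c_prev = alpha_new, c
        else:                                         # restart: fall back to a plain step
            alpha, z_hat, u_hat, c_prev = 1.0, z, u, c
    return x

# For A = I the exact solution is soft(b, lam) = [1.5, -0.5, 0, 0].
x = fast_admm_lasso(np.eye(4), np.array([2.0, -1.0, 0.3, 0.0]), lam=0.5)
```

When the combined residual fails to decrease, the extrapolation is reset and the method degrades gracefully to plain ADMM, which is what makes the acceleration safe on non-convex models.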

5.2. Analysis of the Parameters

The regularization coefficients have a decisive impact on the destriping performance of the optimization model, so selecting suitable coefficients is crucial. In practice, it is difficult to design universal coefficients for different types of stripe noise, and they are often determined through extensive experimentation. The main parameters of the proposed method are the two regularization parameters λ_1 and λ_2, and their choice depends on the level of the stripe noise. Based on the analysis of the proposed model and extensive experiments, the following conclusions can be drawn:
(1)
When the stripe noise is weak, it is generally recommended to select a larger value for λ_1, which increases the weight of the horizontal stripe component and better preserves the details of the underlying image.
(2)
When the stripe noise is strong, it is generally recommended to select a larger value for λ_2, which increases the weight of the vertical image component and enhances the destriping capability.
However, the interaction between these two parameters should not be ignored. Obtaining the optimal values for both parameters would require exhaustive testing over a two-dimensional space, which is a formidable task. Therefore, following the recommendation in [31], we employed a greedy algorithm combined with extensive experiments to rapidly determine parameter settings that yield good destriping effects. Specifically, the suggested range for λ_1 is [1, 100], and that for λ_2 is [0.01, 1].
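One common greedy strategy of this kind is a coordinate-wise sweep: fix one parameter, sweep the other over its suggested range, keep the best-scoring value, and alternate. The sketch below (an assumption about the search structure, not the paper's exact procedure) scores candidates by PSNR against a simulated ground truth; `destripe(img, lam1, lam2)` is a hypothetical placeholder for the model solver, and the grids follow the ranges suggested above.

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def greedy_parameter_search(destripe, degraded, reference,
                            lam1_grid=np.geomspace(1.0, 100.0, 9),
                            lam2_grid=np.geomspace(0.01, 1.0, 9),
                            sweeps=2):
    """Alternate one-dimensional sweeps over lam1 in [1, 100] and
    lam2 in [0.01, 1], keeping the PSNR-best value at each step."""
    lam1, lam2 = lam1_grid[0], lam2_grid[0]
    for _ in range(sweeps):
        lam1 = max(lam1_grid, key=lambda v: psnr(reference, destripe(degraded, v, lam2)))
        lam2 = max(lam2_grid, key=lambda v: psnr(reference, destripe(degraded, lam1, v)))
    return lam1, lam2

# Hypothetical stand-in solver whose best parameters are known to be (10, 0.1):
ref = np.zeros((4, 4))
fake_destripe = lambda img, l1, l2: ref + abs(l1 - 10.0) + abs(l2 - 0.1)
best1, best2 = greedy_parameter_search(fake_destripe, ref, ref)
```

Two sweeps over 9-point grids cost 36 solver runs instead of the 81 of a full two-dimensional grid, and the gap widens quickly as the grids grow.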

5.3. Limitation

The proposed method achieved good destriping results in many cases; however, it still has certain limitations. Remote sensing images obtained from sensors are often subject to various interferences, resulting in complex stripe noise. The proposed method does not incorporate adaptive parameter selection; therefore, selecting suitable parameters for different types of stripe noise is challenging and requires significant time and computational effort. Additionally, as shown in Figure 23a, some small-scale stripe noise disrupts the overall sparsity and low-rank properties of the stripe component. Consequently, the proposed method was unable to completely remove such noise, and similar situations are observed with other methods [46] as well. In the experiments, we mitigated this limitation by adjusting the coefficients of the regularization terms; however, the overall restoration quality was significantly compromised. Therefore, a direction for future work is to develop methods that effectively remove such small-scale random stripe noise while preserving the overall restoration quality.

6. Conclusions

In this paper, we proposed a unidirectional variational model based on the l_p quasinorm for removing stripe noise and applied it to different types of stripes in remote sensing data. The model considers the sparsity and low-rank properties of the stripe and image components separately. By introducing the l_p quasinorm, it effectively avoids the loss of details during destriping. In the model-solving process, a fast ADMM algorithm was employed to speed up convergence. Finally, extensive simulated and real experiments were conducted to compare the proposed method with six other methods. Both the subjective and objective results show that the proposed method not only effectively removes stripes but also preserves the details of the original image better than the other methods. Even under severe stripe contamination, the proposed method achieved good removal performance and demonstrated strong robustness. Furthermore, the fast ADMM algorithm reduces the running time of the proposed method, giving it better prospects in the era of big data.
In addition, although this paper focused on applying the proposed method to remote sensing data, its applicability is not limited to this domain; the method can be extended to the entire field of stripe noise removal. Therefore, to adapt to a wider range of application scenarios, we will address the limitations of the proposed method and make improvements in future work. We will also continue to explore additional characteristics of the stripe and image components to better remove stripe noise. Moreover, with the continuous development of neural network methods, we will consider integrating deep learning with traditional methods to obtain more effective destriping approaches.

Author Contributions

All authors participated in the research and discussion of the algorithm. F.Y. verified the algorithm and conducted a large number of experiments. S.W. designed the framework of the algorithm and wrote the manuscript. Q.Z. programmed the algorithm and analyzed the results. Y.L. and H.S. assisted with the experimental work and calibrated the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Department Project of Jilin Province, grant number 20210203039SF, and by the National Natural Science Foundation of China, grant number 42204144.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers and the editor for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

Correction Statement

This article has been republished with a minor correction to resolve typographical errors. This change does not affect the scientific content of the article.

Abbreviations

The following abbreviations are used in this manuscript:
MODIS: Moderate Resolution Imaging Spectroradiometer
VIIRS: Visible Infrared Imaging Radiometer Suite
FFT: Fast Fourier Transform
PSNR: Peak Signal-to-Noise Ratio
SSIM: Structural Similarity
MRD: Mean Relative Deviation
ICV: Inverse Coefficient of Variation

References

  1. Pande-Chhetri, R.; Abd-Elrahman, A. De-striping hyperspectral imagery using wavelet transform and adaptive frequency domain filtering. ISPRS J. Photogramm. Remote Sens. 2011, 66, 620–636. [Google Scholar] [CrossRef]
  2. Bouali, M.; Ladjal, S. Toward optimal destriping of MODIS data using a unidirectional variational model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2924–2935. [Google Scholar] [CrossRef]
  3. Münch, B.; Trtik, P.; Marone, F.; Stampanoni, M. Stripe and ring artifact removal with combined wavelet—Fourier filtering. Opt. Express 2009, 17, 8567–8591. [Google Scholar] [CrossRef]
  4. Chen, J.; Shao, Y.; Guo, H.; Wang, W.; Zhu, B. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124. [Google Scholar] [CrossRef]
  5. Cao, Y.; Yang, M.Y.; Tisse, C.L. Effective strip noise removal for low-textured infrared images based on 1-D guided filtering. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 2176–2188. [Google Scholar] [CrossRef]
  6. Torres, J.; Infante, S.O. Wavelet analysis for the elimination of striping noise in satellite images. Opt. Eng. 2001, 40, 1309–1314. [Google Scholar]
  7. Gadallah, F.; Csillag, F.; Smith, E. Destriping multisensor imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511. [Google Scholar] [CrossRef]
  8. Shen, H.; Zhang, L. A MAP-based algorithm for destriping and inpainting of remotely sensed images. IEEE Trans. Geosci. Remote Sens. 2008, 47, 1492–1502. [Google Scholar] [CrossRef]
  9. Rakwatin, P.; Takeuchi, W.; Yasuoka, Y. Restoration of Aqua MODIS band 6 using histogram matching and local least squares fitting. IEEE Trans. Geosci. Remote Sens. 2008, 47, 613–627. [Google Scholar] [CrossRef]
  10. Carfantan, H.; Idier, J. Statistical linear destriping of satellite-based pushbroom-type images. IEEE Trans. Geosci. Remote Sens. 2009, 48, 1860–1871. [Google Scholar] [CrossRef]
  11. Shen, H.; Jiang, W.; Zhang, H.; Zhang, L. A piece-wise approach to removing the nonlinear and irregular stripes in MODIS data. Int. J. Remote Sens. 2014, 35, 44–53. [Google Scholar] [CrossRef]
  12. Xiao, P.; Guo, Y.; Zhuang, P. Removing stripe noise from infrared cloud images via deep convolutional networks. IEEE Photonics J. 2018, 10, 1–14. [Google Scholar] [CrossRef]
  13. Chang, Y.; Yan, L.; Fang, H.; Zhong, S.; Liao, W. HSI-DeNet: Hyperspectral image restoration via convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2018, 57, 667–682. [Google Scholar] [CrossRef]
  14. Huang, Z.; Zhang, Y.; Li, Q.; Li, Z.; Zhang, T.; Sang, N.; Xiong, S. Unidirectional variation and deep CNN denoiser priors for simultaneously destriping and denoising optical remote sensing images. Int. J. Remote Sens. 2019, 40, 5737–5748. [Google Scholar] [CrossRef]
  15. Guan, J.; Lai, R.; Xiong, A. Learning spatiotemporal features for single image stripe noise removal. IEEE Access 2019, 7, 144489–144499. [Google Scholar] [CrossRef]
  16. Huang, Z.; Zhu, Z.; Wang, Z.; Li, X.; Xu, B.; Zhang, Y.; Fang, H. D3CNNs: Dual Denoiser Driven Convolutional Neural Networks for Mixed Noise Removal in Remotely Sensed Images. Remote Sens. 2023, 15, 443. [Google Scholar] [CrossRef]
  17. Chen, Y.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Wang, M. Group sparsity based regularization model for remote sensing image stripe noise removal. Neurocomputing 2017, 267, 95–106. [Google Scholar] [CrossRef]
  18. Yang, J.H.; Zhao, X.L.; Ma, T.H.; Chen, Y.; Huang, T.Z.; Ding, M. Remote sensing images destriping using unidirectional hybrid total variation and nonconvex low-rank regularization. J. Comput. Appl. Math. 2020, 363, 124–144. [Google Scholar] [CrossRef]
  19. Zhou, G.; Fang, H.; Yan, L.; Zhang, T.; Hu, J. Removal of stripe noise with spatially adaptive unidirectional total variation. Optik 2014, 125, 2756–2762. [Google Scholar] [CrossRef]
  20. Dou, H.X.; Huang, T.Z.; Deng, L.J.; Zhao, X.L.; Huang, J. Directional 0 Sparse Modeling for Image Stripe Noise Removal. Remote Sens. 2018, 10, 361. [Google Scholar] [CrossRef]
  21. Chang, Y.; Yan, L.; Wu, T.; Zhong, S. Remote sensing image stripe noise removal: From image decomposition perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031. [Google Scholar] [CrossRef]
  22. Goldstein, T.; O’Donoghue, B.; Setzer, S.; Baraniuk, R. Fast alternating direction optimization methods. SIAM J. Imaging Sci. 2014, 7, 1588–1623. [Google Scholar] [CrossRef]
  23. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  24. Wang, L.; Chen, Y.; Lin, F.; Chen, Y.; Yu, F.; Cai, Z. Impulse noise denoising using total variation with overlapping group sparsity and Lp-pseudo-norm shrinkage. Appl. Sci. 2018, 8, 2317. [Google Scholar] [CrossRef]
  25. Liu, X.; Chen, Y.; Peng, Z.; Wu, J.; Wang, Z. Infrared image super-resolution reconstruction based on quaternion fractional order total variation with Lp quasinorm. Appl. Sci. 2018, 8, 1864. [Google Scholar] [CrossRef]
  26. Liu, Q.; Sun, L.; Gao, S. Non-convex fractional-order derivative for single image blind restoration. Appl. Math. Model. 2022, 102, 207–227. [Google Scholar] [CrossRef]
  27. Liu, L.; Xu, L.; Fang, H. Simultaneous intensity bias estimation and stripe noise removal in infrared images using the global and local sparsity constraints. IEEE Trans. Geosci. Remote Sens. 2019, 58, 1777–1789. [Google Scholar] [CrossRef]
  28. Chen, Y.; He, W.; Yokoya, N.; Huang, T.Z. Hyperspectral image restoration using weighted group sparsity-regularized low-rank tensor decomposition. IEEE Trans. Cybern. 2019, 50, 3556–3570. [Google Scholar] [CrossRef]
  29. Yuan, G.; Ghanem, B. l0tv: A new method for image restoration in the presence of impulse noise. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5369–5377. [Google Scholar]
  30. Shen, H.; Li, X.; Cheng, Q.; Zeng, C.; Yang, G.; Li, H.; Zhang, L. Missing information reconstruction of remote sensing data: A technical review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 61–85. [Google Scholar] [CrossRef]
  31. Liu, X.; Lu, X.; Shen, H.; Yuan, Q.; Jiao, Y.; Zhang, L. Stripe noise separation and removal in remote sensing images by consideration of the global sparsity and local variational properties. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3049–3060. [Google Scholar] [CrossRef]
  32. Wu, X.; Qu, H.; Zheng, L.; Gao, T.; Zhang, Z. A remote sensing image destriping model based on low-rank and directional sparse constraint. Remote Sens. 2021, 13, 5126. [Google Scholar] [CrossRef]
  33. Woodworth, J.; Chartrand, R. Compressed sensing recovery via nonconvex shrinkage penalties. Inverse Probl. 2016, 32, 075004. [Google Scholar] [CrossRef]
  34. Chen, Y.; Peng, Z.; Gholami, A.; Yan, J.; Li, S. Seismic signal sparse time–frequency representation by Lp-quasinorm constraint. Digit. Signal Process. 2019, 87, 43–59. [Google Scholar] [CrossRef]
  35. Jiao, Y.; Jin, B.; Lu, X. A primal dual active set with continuation algorithm for the 0-regularized optimization problem. Appl. Comput. Harmon. Anal. 2015, 39, 400–426. [Google Scholar] [CrossRef]
  36. Wang, J.; Xia, Q.; Xia, B. Fast image restoration method based on the L0, L1, and L1 gradient minimization. Mathematics 2022, 10, 3107. [Google Scholar] [CrossRef]
  37. Li, F.; Zhao, Y.; Xiang, W. Single-frame-based column fixed-pattern noise correction in an uncooled infrared imaging system based on weighted least squares. Appl. Opt. 2019, 58, 9141–9153. [Google Scholar] [CrossRef] [PubMed]
  38. Wang, J.L.; Huang, T.Z.; Zhao, X.L.; Huang, J.; Ma, T.H.; Zheng, Y.B. Reweighted block sparsity regularization for remote sensing images destriping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4951–4963. [Google Scholar] [CrossRef]
  39. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  40. Wang, J.L.; Huang, T.Z.; Ma, T.H.; Zhao, X.L.; Chen, Y. A sheared low-rank model for oblique stripe removal. Appl. Math. Comput. 2019, 360, 167–180. [Google Scholar] [CrossRef]
  41. Zeng, Q.; Qin, H.; Yan, X.; Yang, T. Fourier domain anomaly detection and spectral fusion for stripe noise removal of TIR imagery. Remote Sens. 2020, 12, 3714. [Google Scholar] [CrossRef]
  42. Nichol, J.E.; Vohora, V. Noise over water surfaces in Landsat TM images. Int. J. Remote Sens. 2004, 25, 2087–2093. [Google Scholar] [CrossRef]
  43. LAADS DAAC. Available online: https://ladsweb.modaps.eosdis.nasa.gov/ (accessed on 12 May 2023).
  44. MAXAR Resources. Available online: http://www.digitalglobe.com/product-samples (accessed on 12 May 2023).
  45. Earth at Night. Available online: https://earthobservatory.nasa.gov/images/event/79869/earth-at-night (accessed on 12 May 2023).
  46. Song, Q.; Wang, Y.; Yan, X.; Gu, H. Remote sensing images stripe noise removal by double sparse regulation and region separation. Remote Sens. 2018, 10, 998. [Google Scholar] [CrossRef]
Figure 1. The schematic diagram of the proposed destriping method.
Figure 2. Gradient properties in different directions of MODIS band 27. (a) Original striped image. (b) Horizontal gradient property. (c) Vertical gradient property.
Figure 3. Description of sparsity for various norms. (a) p = 2. (b) p = 1. (c) 0 < p < 1.
Figure 4. Original images for simulated experiments. (a) MODIS band 31 data D1. (b) MODIS band 20 data D2. (c) Tiangong-1 satellite hyperspectral data D3. (d) Washington DC Mall hyperspectral data D4. (e) VIIRS hyperspectral data D5. (f) MODIS band 31 data D6.
Figure 5. The destriping results of simulated periodic stripe noise in MODIS data band 31 D1. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 6. The destriping results of simulated periodic stripe noise in MODIS data band 20 D2. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 7. Noise estimation comparison results of Figure 6 images. (a) Added stripe noise; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 8. The destriping results of simulated periodic stripe noise in Tiangong-1 satellite hyperspectral data D3. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 9. Mean cross-track profiles of Figure 8 images. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 10. Mean column power spectrum of Figure 8 images. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 11. The destriping results of simulated nonperiodic stripe noise in Washington DC Mall hyperspectral data D4. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 12. The destriping results of simulated nonperiodic stripe noise in VIIRS hyperspectral data D5. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 13. Noise estimation comparison results of Figure 12 images. (a) Added stripe noise; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 14. The destriping results of simulated nonperiodic stripe noise in MODIS data band 22 D6. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 15. Mean cross-track profiles of Figure 14 images. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 16. Mean column power spectrum of Figure 14 images. (a) Degraded image; (b) WAFT; (c) WLS; (d) UTV; (e) SAUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 17. The destriping results of different method in MODIS data band 27 D7. (a) Original image; (b) WAFT; (c) WLS; (d) UTV; (e) SUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 18. The destriping results of different methods in MODIS data band 27 D8. (a) Original image; (b) WAFT; (c) WLS; (d) UTV; (e) SUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 19. The destriping results of different methods in MODIS data band 33 D9. (a) Original image; (b) WAFT; (c) WLS; (d) UTV; (e) SUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 20. The destriping results of different methods in MODIS data band 30 D10. (a) Original image; (b) WAFT; (c) WLS; (d) UTV; (e) SUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 21. Mean cross-track profiles of Figure 20 images. (a) Original image; (b) WAFT; (c) WLS; (d) UTV; (e) SUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 22. Mean column power spectrum of Figure 20 images. (a) Original image; (b) WAFT; (c) WLS; (d) UTV; (e) SUTV; (f) GSLV; (g) RBSUTV; (h) The proposed method.
Figure 23. An example in which the proposed model fails to completely remove stripes. (a) Original image. (b) Result of the proposed method.
Table 1. The PSNR and SSIM results of MODIS data band 20 D2 with periodic noise. Each column gives the stripe ratio r and the stripe intensity (20, 50, or 80).

| Index | Method | r = 0.2, 20 | r = 0.2, 50 | r = 0.2, 80 | r = 0.5, 20 | r = 0.5, 50 | r = 0.5, 80 | r = 0.8, 20 | r = 0.8, 50 | r = 0.8, 80 |
|---|---|---|---|---|---|---|---|---|---|---|
| PSNR | WAFT | 43.4022 | 41.0526 | 36.4039 | 38.7461 | 34.4116 | 31.1280 | 36.7882 | 33.7920 | 26.0641 |
| PSNR | WLS | 46.0724 | 39.4454 | 35.3322 | 41.6714 | 36.1614 | 30.6251 | 35.0809 | 31.1331 | 27.3167 |
| PSNR | UTV | 39.2602 | 38.1358 | 34.5581 | 38.3960 | 34.8853 | 32.3572 | 37.7028 | 33.2023 | 30.5506 |
| PSNR | SAUTV | 41.0473 | 38.9131 | 37.4996 | 39.3513 | 35.3723 | 34.7114 | 36.0768 | 35.1770 | 29.5822 |
| PSNR | GSLV | 43.1521 | 40.6223 | 38.0372 | 41.2094 | 38.5517 | 33.7158 | 39.8349 | 37.9538 | 30.5086 |
| PSNR | RBSUTV | 48.6146 | 43.7358 | 37.4996 | 41.4725 | 35.2497 | 32.3088 | 33.3342 | 29.8573 | 27.9977 |
| PSNR | Proposed | 46.5692 | 43.3382 | 39.6116 | 44.8139 | 41.0586 | 35.5250 | 41.2979 | 38.6075 | 32.6568 |
| SSIM | WAFT | 0.9920 | 0.9909 | 0.9809 | 0.9853 | 0.9676 | 0.9351 | 0.9792 | 0.9654 | 0.9361 |
| SSIM | WLS | 0.9968 | 0.9956 | 0.9922 | 0.9976 | 0.9958 | 0.9881 | 0.9956 | 0.9914 | 0.9309 |
| SSIM | UTV | 0.9901 | 0.9878 | 0.9755 | 0.9883 | 0.9774 | 0.9661 | 0.9864 | 0.9697 | 0.9497 |
| SSIM | SAUTV | 0.9982 | 0.9925 | 0.9918 | 0.9949 | 0.9905 | 0.9885 | 0.9902 | 0.9879 | 0.9585 |
| SSIM | GSLV | 0.9984 | 0.9945 | 0.9932 | 0.9968 | 0.9943 | 0.9852 | 0.9974 | 0.9936 | 0.9644 |
| SSIM | RBSUTV | 0.9995 | 0.9986 | 0.9882 | 0.9966 | 0.9787 | 0.9742 | 0.9696 | 0.9381 | 0.9155 |
| SSIM | Proposed | 0.9993 | 0.9983 | 0.9949 | 0.9987 | 0.9982 | 0.9936 | 0.9975 | 0.9946 | 0.9741 |
Table 2. The PSNR and SSIM results of VIIRS hyperspectral data D5 with non-periodic noise. Each column gives the stripe ratio r and the stripe intensity (20, 50, or 80).

| Index | Method | r = 0.2, 20 | r = 0.2, 50 | r = 0.2, 80 | r = 0.5, 20 | r = 0.5, 50 | r = 0.5, 80 | r = 0.8, 20 | r = 0.8, 50 | r = 0.8, 80 |
|---|---|---|---|---|---|---|---|---|---|---|
| PSNR | WAFT | 34.2481 | 31.4194 | 30.6256 | 33.2016 | 30.7178 | 29.0722 | 32.7578 | 27.5066 | 24.6842 |
| PSNR | WLS | 39.3885 | 33.8543 | 29.9705 | 38.9246 | 32.4222 | 28.1826 | 36.4057 | 26.0033 | 24.2943 |
| PSNR | UTV | 36.3369 | 32.9826 | 31.8258 | 33.7618 | 30.4517 | 29.9246 | 32.2136 | 28.7507 | 25.9453 |
| PSNR | SAUTV | 38.2431 | 33.1458 | 32.0741 | 35.7210 | 32.6039 | 30.5993 | 33.7555 | 30.0501 | 27.2049 |
| PSNR | GSLV | 41.0957 | 40.5213 | 36.9920 | 39.2048 | 36.6313 | 32.6001 | 38.8471 | 34.5522 | 28.9274 |
| PSNR | RBSUTV | 43.0943 | 38.4611 | 35.2614 | 38.8061 | 36.9439 | 32.7291 | 33.4630 | 27.5817 | 24.9959 |
| PSNR | Proposed | 42.4628 | 40.6482 | 37.5273 | 42.0398 | 37.6390 | 33.5871 | 41.0827 | 35.0869 | 30.6429 |
| SSIM | WAFT | 0.9882 | 0.9813 | 0.9660 | 0.9863 | 0.9679 | 0.9049 | 0.9858 | 0.9243 | 0.8919 |
| SSIM | WLS | 0.9974 | 0.9794 | 0.9448 | 0.9950 | 0.9635 | 0.9290 | 0.9921 | 0.9245 | 0.8717 |
| SSIM | UTV | 0.9758 | 0.9617 | 0.9551 | 0.9628 | 0.9471 | 0.9339 | 0.9567 | 0.9388 | 0.9004 |
| SSIM | SAUTV | 0.9885 | 0.9753 | 0.9629 | 0.9803 | 0.9723 | 0.9606 | 0.9774 | 0.9616 | 0.9246 |
| SSIM | GSLV | 0.9987 | 0.9963 | 0.9798 | 0.9982 | 0.9855 | 0.9699 | 0.9968 | 0.9686 | 0.9102 |
| SSIM | RBSUTV | 0.9997 | 0.9952 | 0.9756 | 0.9984 | 0.9911 | 0.9722 | 0.9801 | 0.9028 | 0.8692 |
| SSIM | Proposed | 0.9995 | 0.9967 | 0.9862 | 0.9991 | 0.9903 | 0.9784 | 0.9976 | 0.9785 | 0.9539 |
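Tables 1 and 2 rank the methods by PSNR and SSIM against the clean reference. PSNR follows directly from the mean squared error (SSIM is typically computed with a library routine such as scikit-image's structural_similarity); a minimal sketch, assuming 8-bit-style data with dynamic range 255:

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means the destriped
    image is closer to the reference."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

clean = np.full((64, 64), 100.0)
striped = clean.copy()
striped[::4, :] += 20.0        # one +20 stripe every 4 rows -> MSE = 100
print(round(psnr(clean, striped), 2))  # 10*log10(255^2/100) = 28.13
```

The `data_range` argument must match the image's actual dynamic range, or PSNR (and SSIM) values computed by different tools will not be comparable.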
Table 3. The MRD and ICV results of the different methods on real data.

| Image | Index | WAFT | WLS | UTV | SAUTV | GSLV | RBSUTV | Proposed |
|---|---|---|---|---|---|---|---|---|
| MODIS data D7 | MRD (%) | 3.9013 | 5.7286 | 4.3230 | 3.9782 | 2.9361 | 6.9148 | 3.3134 |
| MODIS data D7 | ICV | 48.49 | 49.94 | 63.37 | 54.09 | 73.11 | 41.17 | 56.10 |
| MODIS data D8 | MRD (%) | 3.6293 | 1.6754 | 2.8951 | 1.2567 | 3.7674 | 2.0363 | 1.1608 |
| MODIS data D8 | ICV | 83.97 | 73.80 | 80.15 | 88.23 | 74.29 | 84.12 | 89.74 |
| MODIS data D9 | MRD (%) | 2.3294 | 1.4604 | 2.0018 | 1.1688 | 1.8633 | 2.2361 | 1.0299 |
| MODIS data D9 | ICV | 79.15 | 72.47 | 107.47 | 112.93 | 90.29 | 76.21 | 101.63 |
| MODIS data D10 | MRD (%) | 6.2981 | 5.1454 | 7.1068 | 6.2015 | 4.8334 | 5.5062 | 2.3385 |
| MODIS data D10 | ICV | 101.13 | 109.64 | 163.37 | 127.50 | 137.58 | 114.04 | 172.35 |
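Because the real data in Table 3 have no clean reference, MRD and ICV serve as no-reference indices: MRD (lower is better) measures how much distortion destriping introduces, while ICV (higher is better) is the mean-to-standard-deviation ratio of a homogeneous window, so residual stripes depress it. A sketch under these common definitions; the paper's exact evaluation regions and window choices are not reproduced here:

```python
import numpy as np

def mrd(original, destriped, eps=1e-6):
    """Mean relative deviation (%): average relative change introduced
    by destriping; lower means better radiometric preservation.
    Computed here over the whole image (papers often restrict it to
    stripe-free regions)."""
    rel = np.abs(destriped - original) / (np.abs(original) + eps)
    return 100.0 * rel.mean()

def icv(window):
    """Inverse coefficient of variation of a homogeneous window:
    mean divided by standard deviation; residual striping inflates
    the standard deviation and lowers this score."""
    return window.mean() / window.std()

orig = np.full((8, 8), 200.0)
dest = orig * 1.02                         # uniform 2% change
print(round(mrd(orig, dest), 1))           # 2.0
print(round(icv(np.array([100.0, 100.0, 100.0, 110.0])), 2))
```

Since MRD penalizes any change, including the removal of true stripes, it is read jointly with ICV: a good method keeps MRD low while pushing ICV high, as the proposed method does on D8-D10.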
Table 4. Computational cost of solving the proposed model with ADMM and Fast ADMM.

| Image Size | 200 × 200 | 300 × 300 | 400 × 400 | 500 × 500 | 600 × 600 | 700 × 700 |
|---|---|---|---|---|---|---|
| ADMM | 1.1871 | 2.9263 | 6.0689 | 9.2549 | 13.3545 | 18.2670 |
| Fast ADMM | 0.3352 | 0.9538 | 1.8967 | 3.0324 | 4.1232 | 5.8862 |
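Table 4 shows the fast ADMM variant giving roughly a threefold speedup at every image size. In "fast ADMM" schemes this gain typically comes from a Nesterov-type extrapolation of the auxiliary and dual variables combined with a restart test on the residuals (as in Goldstein et al.'s accelerated ADMM). A toy sketch of that mechanism on a 1-D l1 problem with a known closed-form solution; this illustrates the acceleration skeleton only and is not the paper's actual solver or subproblems:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fast_admm_lasso(b, lam, rho=1.0, iters=200, eta=0.999):
    """Accelerated ADMM on min_x 0.5*||x-b||^2 + lam*||x||_1,
    whose exact solution is soft(b, lam)."""
    z = np.zeros_like(b)
    u = np.zeros_like(b)
    z_hat, u_hat = z.copy(), u.copy()
    alpha, c_prev = 1.0, np.inf
    for _ in range(iters):
        x = (b + rho * (z_hat - u_hat)) / (1.0 + rho)   # quadratic subproblem
        z_new = soft(x + u_hat, lam / rho)              # l1 subproblem
        u_new = u_hat + x - z_new                       # dual ascent
        # combined residual used by the restart test
        c = rho * np.sum((u_new - u_hat) ** 2) + rho * np.sum((z_new - z_hat) ** 2)
        if c < eta * c_prev:
            # residuals shrank: extrapolate z and u with Nesterov momentum
            alpha_new = (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0
            w = (alpha - 1.0) / alpha_new
            z_hat = z_new + w * (z_new - z)
            u_hat = u_new + w * (u_new - u)
            alpha = alpha_new
        else:
            # restart: drop the momentum and fall back to a plain step
            alpha, z_hat, u_hat = 1.0, z_new.copy(), u_new.copy()
            c = c_prev / eta
        z, u, c_prev = z_new, u_new, c
    return z

b = np.array([3.0, -0.5, 1.5])
x = fast_admm_lasso(b, lam=1.0)
print(np.round(x, 3))   # close to soft(b, 1) = [2, 0, 0.5]
```

The per-iteration cost is essentially unchanged, so the measured speedup in Table 4 reflects the smaller number of effective iterations the accelerated sequence needs, which is consistent with the near-constant ratio across image sizes.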

Share and Cite

MDPI and ACS Style

Yan, F.; Wu, S.; Zhang, Q.; Liu, Y.; Sun, H. Destriping of Remote Sensing Images by an Optimized Variational Model. Sensors 2023, 23, 7529. https://doi.org/10.3390/s23177529

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
