Article

A New Sparse Collaborative Low-Rank Prior Knowledge Representation for Thick Cloud Removal in Remote Sensing Images

1 School of Science, Chang’an University, Xi’an 710064, China
2 School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an 710072, China
3 School of Mathematics, Southwest Jiaotong University, Chengdu 611756, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(9), 1518; https://doi.org/10.3390/rs16091518
Submission received: 26 March 2024 / Revised: 22 April 2024 / Accepted: 22 April 2024 / Published: 25 April 2024

Abstract

Efficiently removing clouds from remote sensing imagery presents a significant challenge, yet it is crucial for a variety of applications. This paper introduces a novel sparse function, named the tri-fiber-wise sparse function, meticulously engineered for the targeted tasks of cloud detection and removal. This function is adept at capturing cloud characteristics across three dimensions, leveraging the sparsity of mode-1, -2, and -3 fibers simultaneously to achieve precise cloud detection. By incorporating the concept of tensor multi-rank, which describes the global correlation, we have developed a tri-fiber-wise sparse-based model that excels in both detecting and eliminating clouds from images. Furthermore, to ensure that the cloud-free information accurately matches the corresponding areas in the observed data, we have enhanced our model with an extended box-constraint strategy. The experiments showcase the notable success of the proposed method in cloud removal. This highlights its potential and utility in enhancing the accuracy of remote sensing imagery.

1. Introduction

Remote sensing technology has been widely employed across various fields, including for unmixing [1,2], fusion [3,4,5,6], and classification [7]. However, these images often suffer from inevitable cloud contamination, resulting in significant information loss and constraining the further analysis of remote sensing data [8,9]. Consequently, advancing cloud removal techniques is critical for enhancing the practical utility of remote sensing images.
Mathematically, multitemporal images contaminated by clouds can be represented by a tensor $\mathcal{O} \in \mathbb{R}^{a \times b \times d \times t}$ ($a$ and $b$ denote the spatial dimensions, $d$ the spectral dimension, and $t$ the temporal dimension); the clean image component is denoted by $\mathcal{U}$, the cloud component by $\mathcal{C}$, and the model noise by $\mathcal{E}$. The degradation process is then formulated as follows:
$$\mathcal{O} = \mathcal{U} + \mathcal{C} + \mathcal{E}. \qquad (1)$$
Numerous cloud removal techniques have been developed by researchers [10,11]. These methods are generally classified into two main approaches depending on whether a cloud mask is available, offering distinct strategies for addressing the challenge of cloud removal.
The first approach comprises nonblind methods, which use a given cloud mask as prior knowledge to reconstruct the information obscured by clouds. Traditional methods for this problem are spatial-based, exploiting only the interrelations between pixels across the spatial dimension [12,13,14,15]. To more effectively harness the correlations across spectral bands, researchers [16,17] have developed spectral-based methods aimed at improving the reconstruction of missing data. However, these methods often fail to produce promising reconstruction results when the remote sensing imagery is obscured by thick clouds. Methods that take advantage of multiple images have therefore been developed, classified as either multitemporal [18,19,20,21,22] or multisource [23,24]. Wang et al. [20] proposed a scene-reconstruction method that employs robust matrix completion with temporal contiguity and presented an efficient algorithm based on the augmented Lagrangian method (ALM) with inexact proximal gradients (IPGs) to solve the resulting optimization problem. Zhang et al. [21] proposed a cloud and shadow removal technique based on the learning of spatial–temporal patches. Li et al. [23] developed a cloud removal methodology employing a convolutional-mapping-deconvolutional (CMD) network that combines optical and SAR imagery. Gao et al. [24] utilized a generative adversarial network to fuse optical and SAR images in order to reconstruct information obscured by clouds. To effectively leverage image prior knowledge, hybrid methods have been proposed that exploit two or all three of an image’s spatial, spectral, and temporal features. For instance, Chen et al. [25] proposed a Spatially and Temporally Weighted Regression (STWR) method that fully leverages cloud-free information from input Landsat scenes. Melgani [26] introduced contextual multiple linear and nonlinear prediction models, which assume that the image’s spectral and spatial characteristics remain relatively consistent across the image sequence. While nonblind methods are effective in cloud removal, their heavy reliance on cloud masks limits their applicability in comprehensively addressing cloud removal challenges.
The second approach comprises blind methods, which remove clouds without a given cloud mask. Wen et al. [27] used Robust Principal Component Analysis (RPCA) to first identify cloud cover and then reconstructed the missing information using discriminant RPCA to eliminate thick clouds. Chen et al. [28] proposed TVLRSDC, a method based on total-variation-regularized low-rank sparsity decomposition. Meraner et al. [29] created a deep residual neural network model focused on the effective removal of clouds from Sentinel-2 multispectral satellite images; by fusing SAR and optical data, the synergistic characteristics of both imaging systems were exploited to guide the image reconstruction. Wang et al. [30] developed an unsupervised domain factorization network aimed at eliminating thick clouds from multitemporal remote sensing images. These blind methods typically perform cloud detection and removal separately and independently, which often alters the information in cloud-free regions.
To remove clouds more effectively, it is necessary to study prior knowledge about the clouds themselves. It is widely recognized that cloud components can be effectively characterized by sparse functions. The adoption of element-wise sparse functions, such as the $l_1$-norm, has become prevalent [27,28,31] owing to their concise form and convexity. However, an element-wise sparse function ignores the correlations across the spectrum, so investigating an appropriate sparse function to characterize the cloud properties becomes crucial. Recently, Ji et al. [32] introduced a model that combines box-constrained low-rank and group sparse techniques for the specific purpose of detecting and removing clouds. This approach characterizes cloud properties using group sparsity in the spectral dimension. Nonetheless, it falls short of fully exploring the characteristics of the cloud component.
To further refine the sparsity representation of the cloud component, we introduce the tri-fiber-wise sparse (TriSps) function, which utilizes the sparsity of mode-1, -2, and -3 fibers to capture more cloud information. Our development of TriSps is inspired by the inherent structural sparsity of cloud components: when each tube is considered as a whole, the fibers exhibit sparse characteristics, termed fiber-wise sparsity. Figure 1a illustrates an image affected by cloud cover, and Figure 1b shows the corresponding cloud imagery. Figure 1c–e depict histograms of the $l_2$-norms of the mode-1, -2, and -3 fibers extracted from the cloud component. It is evident from Figure 1c–e that the majority of $l_2$-norm values for mode-1, -2, and -3 fibers are zero, indicating significant sparsity. Additionally, we incorporate the global prior of the image component, characterized by the tensor multi-rank. Leveraging the proposed sparse function and the global low-rank prior, we devise a novel cloud removal model and take advantage of a proximal alternating minimization (PAM)-based approach [33] to solve it efficiently.
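To make the fiber-wise sparsity observation concrete, the following sketch computes the $l_2$-norms of the mode-n fibers of a synthetically generated (hypothetical) cloud component, mirroring the statistics plotted in Figure 1c–e. The paper's experiments were implemented in MATLAB; NumPy is used here purely as an illustrative assumption.

```python
import numpy as np

def fiber_l2_norms(C, mode):
    """l2-norms of the mode-`mode` fibers of a 3-D array C.

    Unfolds C along `mode` (fibers become columns) and returns
    one norm per fiber, as plotted in Figure 1c-e.
    """
    C_unf = np.moveaxis(C, mode, 0).reshape(C.shape[mode], -1)
    return np.linalg.norm(C_unf, axis=0)

# Hypothetical cloud component: mostly zero, with one small cloudy blob.
C = np.zeros((64, 64, 4))
C[20:30, 40:52, :] = np.random.rand(10, 12, 4)

for n in range(3):
    norms = fiber_l2_norms(C, n)
    print(f"mode-{n + 1}: {np.mean(norms == 0):.1%} of fibers are exactly zero")
```

As in Figure 1c–e, most fiber norms of a localized cloud are exactly zero along all three modes, which is precisely the structure TriSps exploits.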
The contributions of this paper are threefold:
  • To leverage the inherent structural sparsity of cloud components, we propose the TriSps function specifically for cloud removal purposes. Unlike element-wise and tube-wise sparsity, the TriSps function is designed to capture the properties of clouds in three-dimensional directions more effectively.
  • Building upon the TriSps function, we propose a novel cloud removal model that simultaneously estimates both the image and cloud aspects.
  • We devised an effective algorithm based on PAM to solve the proposed model. Experiments with synthetic and real datasets highlight the proposed method’s prowess in cloud removal, outperforming other advanced methods currently available in the field.
This paper unfolds as follows. Section 2 introduces fundamental notations and definitions essential for understanding. In Section 3, we introduce a tri-fiber-wise sparse function to characterize the cloud component’s properties. Furthermore, we have devised a method for detecting and removing clouds. In Section 4, we validate the effectiveness of our method through synthetic and real experiments. Lastly, Section 5 provides some conclusions.

2. Notations and Preliminaries

We present the key notations and definitions [34] that are fundamental to our study. The primary notations employed in this paper are outlined in Table 1 for clarity and reference.
Furthermore, some definitions are presented below.
Definition 1
(Tensor mode-n product [35]). Given a tensor $\mathcal{A} \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_N}$ and a matrix $U \in \mathbb{R}^{J \times R_n}$, the mode-n product of $\mathcal{A}$ by $U$ is defined as
$$\mathcal{A} \times_n U = \mathrm{Fold}_n\left(U A_{(n)}\right). \qquad (2)$$
In this context, $A_{(n)} \in \mathbb{R}^{R_n \times \prod_{d \neq n} R_d}$ stands for the matricization of tensor $\mathcal{A}$ in the nth mode, while $\mathrm{Fold}_n(\cdot)$ represents the operator that, in the nth mode, reshapes the matrix back into a tensor.
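Definition 1 can be realized with a few lines of array manipulation. The sketch below is a minimal NumPy illustration (the helper names `unfold`, `fold`, and `mode_n_product` are hypothetical, not from the paper): the mode-n product is the fold of $U A_{(n)}$.

```python
import numpy as np

def unfold(A, n):
    """Mode-n unfolding A_(n): the mode-n fibers become columns."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: reshape matrix M back into a tensor of `shape`."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(A, U, n):
    """A x_n U = Fold_n(U @ A_(n)), as in Definition 1."""
    shape = list(A.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(A, n), n, shape)

A = np.random.rand(3, 4, 5)
U = np.random.rand(6, 4)
print(mode_n_product(A, U, 1).shape)  # (3, 6, 5)
```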
Definition 2
(TNN [36]). The tensor nuclear norm of a tensor $\mathcal{A} \in \mathbb{R}^{R_1 \times R_2 \times R_3}$ is characterized as follows:
$$\|\mathcal{A}\|_{\mathrm{TNN}} = \sum_{k=1}^{R_3} \|Z_k\|_*. \qquad (3)$$
Here, $Z_k$ represents the kth frontal slice obtained from the Fourier-transformed tensor $\mathcal{Z} = \mathcal{A} \times_3 Q_{R_3}$, where $Q_{R_3}$ denotes the $R_3 \times R_3$ discrete Fourier transform matrix.
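A direct way to evaluate Definition 2 is to apply the Fourier transform along the third mode and sum the matrix nuclear norms of the frontal slices. The sketch below follows the definition above literally; note, as an assumption, that some TNN variants additionally scale the sum by $1/R_3$, so this is an illustration rather than the paper's exact implementation.

```python
import numpy as np

def tnn(A):
    """Tensor nuclear norm per Definition 2: FFT along mode 3, then
    sum the nuclear norms (singular value sums) of all frontal slices."""
    Z = np.fft.fft(A, axis=2)  # Fourier transform along the tubes
    return sum(np.linalg.svd(Z[:, :, k], compute_uv=False).sum()
               for k in range(A.shape[2]))

A = np.random.rand(10, 10, 5)
print(tnn(A))
```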
Based on the TNN, the tensor completion model can be written as follows:
$$\min_{\mathcal{A}} \ \|\mathcal{A}\|_{\mathrm{TNN}}, \quad \mathrm{s.t.} \quad \mathcal{A}_{\Omega} = \mathcal{B}_{\Omega}, \qquad (4)$$
where the subscript $\Omega$ denotes the operator that keeps the entries on the index set $\Omega$ while setting all other entries to zero, and $\mathcal{B}$ is the observed tensor.

3. The Proposed Method

In this section, we present the proposed sparse function, TriSps. Building on the TriSps function, we introduce a cloud removal model that incorporates the tensor multi-rank prior. We then devise a PAM algorithm to solve the proposed model.

3.1. The Tri-Fiber-Wise Sparse Function

Existing sparse functions, such as element-wise and tube-wise sparsity, fail to fully exploit the correlations across the spectrum when characterizing cloud properties in remotely sensed images. To overcome this deficiency, we introduce a novel sparse function designed to efficiently capture cloud characteristics across three dimensions. The presence of clouds in contaminated remotely sensed images is relatively sparse compared to the entire image, as shown in Figure 1b. This observation implies that the cloud component exhibits fiber-wise sparsity, aligning with the fundamental nature of clouds. Figure 1 shows the sparsity within a cloud-contaminated image, where Figure 1c–e particularly highlight that most of the $l_2$-norms of the mode-n ($n = 1, 2, 3$) fibers derived from the cloud component are zero. In other words, the mode-1, -2, and -3 fibers of the cloud component exhibit clear fiber-wise sparsity. This realization allows us to consider the sparsity across mode-1, -2, and -3 fibers simultaneously rather than focusing on a single fiber direction. Leveraging this insight, we introduce the tri-fiber-wise sparse (TriSps) function, which adeptly captures the sparse structures of mode-1, -2, and -3 fibers simultaneously. Utilizing the TriSps function, we can more thoroughly characterize cloud properties and significantly enhance the accuracy of cloud detection in remotely sensed imagery. Mathematically, the TriSps function of $\mathcal{C}$ is defined as
$$J_{\lambda}(\mathcal{C}) = \sum_{n=1}^{3} \lambda_n S_n(\mathcal{C}), \qquad (5)$$
with
$$S_n(\mathcal{C}) = \sum_{i} \left\|C_{(n)}^{i}\right\|_2, \qquad (6)$$
where $\lambda_n$ ($n = 1, 2, 3$) denotes positive weights, and $C_{(n)}^{i}$ denotes the ith column of the mode-n unfolding $C_{(n)}$, i.e., the ith mode-n fiber of $\mathcal{C}$.
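A direct implementation of Equations (5) and (6) only requires the three mode-n unfoldings. The following NumPy sketch is illustrative (the weights passed to `lambdas` are placeholders, not values from the paper):

```python
import numpy as np

def trisps(C, lambdas=(1.0, 1.0, 1.0)):
    """TriSps value J_lambda(C) from Eqs. (5)-(6): a weighted sum, over
    the three modes, of the l2-norms of all mode-n fibers of C."""
    val = 0.0
    for n, lam in enumerate(lambdas):
        C_unf = np.moveaxis(C, n, 0).reshape(C.shape[n], -1)  # fibers as columns
        val += lam * np.linalg.norm(C_unf, axis=0).sum()
    return val

C = np.zeros((32, 32, 4))
C[5:9, 10:20, :] = 1.0  # a hypothetical compact cloud
print(trisps(C))
```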
Remark 1.
1. Equation (6) characterizes the sparsity of the mode-n ($n = 1, 2, 3$) fibers, since it approximates the following $l_0$-quasi-norm of $\mathcal{C}$ with respect to the mode-n fibers:
$$\|\mathcal{C}\|_{f_n,0} = \#\left\{\, i \mid C_{(n)}^{i} \neq 0 \,\right\}, \qquad (7)$$
which counts the number of non-zero mode-n fibers. Therefore, the proposed TriSps function in Equation (5) can sufficiently exploit the sparsity property of the cloud component.
2. The proposed TriSps function reduces to the sparse function proposed by Ji et al. [32] if the weights are $\lambda_1 = \lambda_2 = 0$ and $\lambda_3 = 1$, in which case the function only considers the sparsity along the tubes. Unlike the sparse function in [32], our TriSps function is more general and can capture the properties of clouds in all three dimensions.

3.2. Proposed Model

The multitemporal images have high correlations among their mode-1, -2, and -3 fibers. The tensor rank function [37,38] serves as an effective tool for characterizing the global correlation of the image component $\mathcal{U}$, adeptly capturing the inherent low-rank structure of images. In our study, we employ a multi-rank regularization function for $\mathcal{U}$: we reshape $\mathcal{U}$ into a third-order tensor and establish the regularization function as
$$\|\mathcal{U}\|_{\mathrm{TNN}} = \sum_{r=1}^{dt} \|X_r\|_*, \qquad (8)$$
where $X_r$ represents the rth frontal slice of $\mathcal{X} = T(\mathcal{U}) \times_3 Q^T$. Here, $T(\mathcal{U})$ restructures $\mathcal{U}$ into a third-order tensor, and $Q$ satisfies $Q^T Q = I$ ($I$ denotes the identity matrix).
Additionally, we observe that the images exhibit similarity during adjacent temporal periods, which can be described by
$$\|D(\mathcal{U})\|_F^2 = \|D U_{(4)}\|_F^2. \qquad (9)$$
Here, $D(\cdot)$ represents a difference operator along the temporal mode, $D$ is the corresponding difference matrix, and $U_{(4)}$ is the unfolding of $\mathcal{U}$ in its fourth mode.
Using the proposed TriSps function and the prior knowledge of the image component, we propose the following cloud removal model:
$$\begin{aligned} &\min_{\mathcal{C},\, \mathcal{X},\, Q^T Q = I,\, \mathcal{U} = f_{bc}(\mathcal{O}, \mathcal{C}, \tilde{\mathcal{U}})} \ J_{\lambda}(\mathcal{C}) + \sum_{r=1}^{dt} \|X_r\|_* + \frac{\gamma}{2}\|D U_{(4)}\|_F^2, \\ &\mathrm{s.t.} \quad T(\mathcal{U}) = \mathcal{X} \times_3 Q, \quad \mathcal{O} = \mathcal{U} + \mathcal{C}. \end{aligned} \qquad (10)$$
The first term in the objective function encodes the prior knowledge of the cloud component $\mathcal{C}$, while the last two terms encode the prior knowledge of the image component $\mathcal{U}$; $\lambda$ and $\gamma$ denote positive regularization parameters. $\mathcal{U} = f_{bc}(\mathcal{O}, \mathcal{C}, \tilde{\mathcal{U}})$ represents a box constraint designed to preserve the cloud-free details of the image component, keeping them consistent with the observed data. The box-constraint strategy is motivated by the need to maintain the integrity of the cloud-free portions of $\mathcal{U}$: without this constraint, these portions may deviate from the observed data, degrading the reconstruction quality. In this paper, we therefore adopt the following box-constraint strategy and extend it to the proposed model. The box-constraint function $\mathcal{U} = f_{bc}(\mathcal{O}, \mathcal{C}, \tilde{\mathcal{U}})$ is defined fiber-wise; its ith mode-3 fiber is
$$U_{(3)}^{i} = \begin{cases} O_{(3)}^{i}, & \text{if } \mathrm{avg}\left(C_{(3)}^{i}\right) < \epsilon, \\ \tilde{U}_{(3)}^{i}, & \text{otherwise}, \end{cases} \qquad (11)$$
where $\epsilon \geq 0$ is a given threshold, and $\mathrm{avg}(x)$ denotes the average value of the vector $x$.
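Equation (11) amounts to a per-fiber selection between the observation and the current estimate. The following is a minimal sketch, assuming the tensors have been reshaped to third order with the mode-3 fibers along the last axis; the threshold value is illustrative.

```python
import numpy as np

def f_bc(O, C, U_tilde, eps=1e-3):
    """Box-constraint f_bc of Eq. (11): a mode-3 fiber of U is copied
    from the observation O when the corresponding cloud fiber is
    (nearly) empty, and from the current estimate U_tilde otherwise.

    O, C, U_tilde: arrays of shape (a, b, d*t); eps: threshold."""
    cloud_free = C.mean(axis=2) < eps  # avg over each mode-3 fiber
    U = U_tilde.copy()
    U[cloud_free, :] = O[cloud_free, :]
    return U
```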

3.3. Optimization Algorithm

This subsection develops the PAM algorithm [33] used to solve the proposed model. To facilitate this, we introduce the auxiliary variables $\mathcal{M} = \mathcal{X}$, $\mathcal{N} = \mathcal{C}$, and $\mathcal{S} = \mathcal{C}$ and reformulate the model (10) as
$$\begin{aligned} &\min_{\mathcal{C},\, \mathcal{X},\, \mathcal{N},\, Q^TQ=I,\, \mathcal{S},\, \mathcal{M},\, \mathcal{U}=f_{bc}(\mathcal{O},\mathcal{C},\tilde{\mathcal{U}})} \ \sum_{r=1}^{dt}\|M_r\|_* + \lambda_1 \sum_i \|N_{(1)}^{i}\|_2 + \lambda_2 \sum_i \|S_{(2)}^{i}\|_2 + \lambda_3 \sum_i \|C_{(3)}^{i}\|_2 + \frac{\gamma}{2}\|DU_{(4)}\|_F^2, \\ &\mathrm{s.t.} \quad T(\mathcal{U}) = \mathcal{X}\times_3 Q, \quad \mathcal{O} = \mathcal{U} + \mathcal{C}, \quad \mathcal{M} = \mathcal{X}, \quad \mathcal{N} = \mathcal{C}, \quad \mathcal{S} = \mathcal{C}, \end{aligned} \qquad (12)$$
where $M_r$ denotes the rth frontal slice of $\mathcal{M}$.
The constrained problem above can be relaxed into the following penalized form:
$$\begin{aligned} \min \ &\sum_{r=1}^{dt}\|M_r\|_* + \lambda_1 \sum_i \|N_{(1)}^{i}\|_2 + \lambda_2 \sum_i \|S_{(2)}^{i}\|_2 + \lambda_3 \sum_i \|C_{(3)}^{i}\|_2 + \frac{\gamma}{2}\|DU_{(4)}\|_F^2 \\ &+ \frac{\eta_1}{2}\|T(\mathcal{U}) - \mathcal{X}\times_3 Q\|_F^2 + \frac{\eta_2}{2}\|\mathcal{M}-\mathcal{X}\|_F^2 + \frac{\eta_3}{2}\|\mathcal{O}-\mathcal{U}-\mathcal{C}\|_F^2 + \frac{\eta_4}{2}\|\mathcal{N}-\mathcal{C}\|_F^2 + \frac{\eta_5}{2}\|\mathcal{S}-\mathcal{C}\|_F^2, \end{aligned} \qquad (13)$$
where the minimization is over $\mathcal{C}, \mathcal{X}, \mathcal{N}, \mathcal{S}, \mathcal{M}$ with $Q^TQ=I$ and $\mathcal{U}=f_{bc}(\mathcal{O},\mathcal{C},\tilde{\mathcal{U}})$, and $\eta_i > 0$ ($i = 1, \ldots, 5$) are penalty parameters.
We denote the objective function above by $g(\mathcal{U}, \mathcal{C}, \mathcal{X}, \mathcal{M}, Q, \mathcal{N}, \mathcal{S})$. Within the PAM-based framework, the individual variables are updated in an alternating fashion:
$$\begin{aligned} \mathcal{U}^{s+1} &= \arg\min_{\mathcal{U}=f_{bc}(\mathcal{O},\mathcal{C}^s,\tilde{\mathcal{U}})} g(\mathcal{U}, \mathcal{C}^s, \mathcal{X}^s, \mathcal{M}^s, Q^s, \mathcal{N}^s, \mathcal{S}^s) + \frac{\mu}{2}\|\mathcal{U}-\mathcal{U}^s\|_F^2, \\ \mathcal{C}^{s+1} &= \arg\min_{\mathcal{C}} g(\mathcal{U}^{s+1}, \mathcal{C}, \mathcal{X}^s, \mathcal{M}^s, Q^s, \mathcal{N}^s, \mathcal{S}^s) + \frac{\mu}{2}\|\mathcal{C}-\mathcal{C}^s\|_F^2, \\ \mathcal{X}^{s+1} &= \arg\min_{\mathcal{X}} g(\mathcal{U}^{s+1}, \mathcal{C}^{s+1}, \mathcal{X}, \mathcal{M}^s, Q^s, \mathcal{N}^s, \mathcal{S}^s) + \frac{\mu}{2}\|\mathcal{X}-\mathcal{X}^s\|_F^2, \\ \mathcal{M}^{s+1} &= \arg\min_{\mathcal{M}} g(\mathcal{U}^{s+1}, \mathcal{C}^{s+1}, \mathcal{X}^{s+1}, \mathcal{M}, Q^s, \mathcal{N}^s, \mathcal{S}^s) + \frac{\mu}{2}\|\mathcal{M}-\mathcal{M}^s\|_F^2, \\ Q^{s+1} &= \arg\min_{Q^TQ=I} g(\mathcal{U}^{s+1}, \mathcal{C}^{s+1}, \mathcal{X}^{s+1}, \mathcal{M}^{s+1}, Q, \mathcal{N}^s, \mathcal{S}^s) + \frac{\mu}{2}\|Q-Q^s\|_F^2, \\ \mathcal{N}^{s+1} &= \arg\min_{\mathcal{N}} g(\mathcal{U}^{s+1}, \mathcal{C}^{s+1}, \mathcal{X}^{s+1}, \mathcal{M}^{s+1}, Q^{s+1}, \mathcal{N}, \mathcal{S}^s) + \frac{\mu}{2}\|\mathcal{N}-\mathcal{N}^s\|_F^2, \\ \mathcal{S}^{s+1} &= \arg\min_{\mathcal{S}} g(\mathcal{U}^{s+1}, \mathcal{C}^{s+1}, \mathcal{X}^{s+1}, \mathcal{M}^{s+1}, Q^{s+1}, \mathcal{N}^{s+1}, \mathcal{S}) + \frac{\mu}{2}\|\mathcal{S}-\mathcal{S}^s\|_F^2. \end{aligned} \qquad (14)$$
Here, the superscript $s$ denotes the result after the sth iteration, and $\mu > 0$ is a proximal parameter. Each variable is then updated according to the following procedure.
  • Updating the $\mathcal{U}$-subproblem
The $\mathcal{U}$-subproblem is
$$\min_{\mathcal{U}=f_{bc}(\mathcal{O},\mathcal{C},\tilde{\mathcal{U}})} \ \frac{\eta_1}{2}\|T(\mathcal{U}) - \mathcal{X}^s \times_3 Q^s\|_F^2 + \frac{\gamma}{2}\|DU_{(4)}\|_F^2 + \frac{\eta_3}{2}\|\mathcal{O}-\mathcal{U}-\mathcal{C}^s\|_F^2 + \frac{\mu}{2}\|\mathcal{U}-\mathcal{U}^s\|_F^2.$$
Since $T(\cdot)$ is a reversible reshaping operator, the problem above can be rewritten as
$$\min_{\mathcal{U}=f_{bc}(\mathcal{O},\mathcal{C},\tilde{\mathcal{U}})} \ \frac{\eta_1}{2}\left\|U_{(4)} - \left(T^{-1}(\mathcal{X}^s \times_3 Q^s)\right)_{(4)}\right\|_F^2 + \frac{\gamma}{2}\|DU_{(4)}\|_F^2 + \frac{\eta_3}{2}\left\|O_{(4)}-U_{(4)}-C_{(4)}^s\right\|_F^2 + \frac{\mu}{2}\left\|U_{(4)}-U_{(4)}^s\right\|_F^2.$$
Here, the inverse operator $T^{-1}(\cdot): \mathbb{R}^{a\times b\times dt} \to \mathbb{R}^{a\times b\times d\times t}$ reshapes the third-order tensor back to its original four-dimensional form. Clearly, $\mathcal{U}$ has the following closed-form solution:
$$U_{(4)}^{s+1} = \left[(\eta_1+\eta_3+\mu)I + \gamma D^T D\right]^{-1}\left[\eta_1 \left(T^{-1}(\mathcal{X}^s \times_3 Q^s)\right)_{(4)} + \eta_3\left(O_{(4)} - C_{(4)}^s\right) + \mu U_{(4)}^s\right]. \qquad (15)$$
In this formulation, $I \in \mathbb{R}^{t\times t}$ represents the identity matrix. Folding back to a tensor gives
$$\tilde{\mathcal{U}}^{s+1} = \mathrm{Fold}_4\left(U_{(4)}^{s+1}\right),$$
where $\mathrm{Fold}_4(\cdot)$ reshapes the matrix into a tensor along the fourth mode. The image component $\mathcal{U}$ is then computed by the box-constraint function $\mathcal{U} = f_{bc}(\mathcal{O}, \mathcal{C}, \tilde{\mathcal{U}})$.
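Since the system matrix in Equation (15) acts only on the temporal dimension, the update reduces to solving one small $t \times t$ linear system against the mode-4 unfolding. The sketch below assumes a first-order forward-difference matrix for $D$; the paper only states that $D$ is a difference matrix, so this exact form is an assumption.

```python
import numpy as np

def update_U4(X_term, O4, C4, U4_prev, eta1, eta3, gamma, mu, t):
    """Closed-form U-subproblem solve of Eq. (15) on the mode-4
    unfolding (rows indexed by time, shape (t, a*b*d)).

    X_term: mode-4 unfolding of T^{-1}(X x_3 Q), precomputed."""
    # Assumed first-order temporal difference matrix D of shape (t-1, t).
    D = np.eye(t - 1, t, k=1) - np.eye(t - 1, t)
    lhs = (eta1 + eta3 + mu) * np.eye(t) + gamma * D.T @ D
    rhs = eta1 * X_term + eta3 * (O4 - C4) + mu * U4_prev
    return np.linalg.solve(lhs, rhs)
```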
  • Updating the $\mathcal{X}$-subproblem
The $\mathcal{X}$-subproblem is written as
$$\min_{\mathcal{X}} \ \frac{\eta_1}{2}\|T(\mathcal{U}^{s+1}) - \mathcal{X}\times_3 Q^s\|_F^2 + \frac{\eta_2}{2}\|\mathcal{M}^s - \mathcal{X}\|_F^2 + \frac{\mu}{2}\|\mathcal{X}-\mathcal{X}^s\|_F^2. \qquad (16)$$
Equation (16) yields a closed-form solution, which is given by
$$\mathcal{X}^{s+1} = \frac{\eta_1\, T(\mathcal{U}^{s+1}) \times_3 (Q^s)^T + \eta_2 \mathcal{M}^s + \mu \mathcal{X}^s}{\eta_1 + \eta_2 + \mu}. \qquad (17)$$
  • Updating the $\mathcal{C}$-subproblem
The $\mathcal{C}$-subproblem is
$$\min_{\mathcal{C}} \ \lambda_3 \sum_i \|C_{(3)}^{i}\|_2 + \frac{\eta_3}{2}\|\mathcal{O}-\mathcal{U}-\mathcal{C}\|_F^2 + \frac{\eta_4}{2}\|\mathcal{N}-\mathcal{C}\|_F^2 + \frac{\eta_5}{2}\|\mathcal{S}-\mathcal{C}\|_F^2 + \frac{\mu}{2}\|\mathcal{C}-\mathcal{C}^s\|_F^2.$$
Combining the last four terms, the problem can be equivalently rewritten as
$$\min_{\mathcal{C}} \ \lambda_3 \sum_i \|C_{(3)}^{i}\|_2 + \frac{\delta+\mu}{2}\|\mathcal{C}-\mathcal{Q}\|_F^2,$$
with $\mathcal{Q} = \frac{\eta_3(\mathcal{O}-\mathcal{U}) + \eta_4\mathcal{N} + \eta_5\mathcal{S} + \mu\mathcal{C}^s}{\delta+\mu}$ and $\delta = \sum_{i=3}^{5}\eta_i$. Subsequently, the $\mathcal{C}$-subproblem can be dissected into the following fiber-wise subproblems:
$$\min_{C_{(3)}^{i}} \ \lambda_3 \left\|C_{(3)}^{i}\right\|_2 + \frac{\delta+\mu}{2}\left\|C_{(3)}^{i} - Q_{(3)}^{i}\right\|_F^2,$$
where $Q_{(3)}^{i}$ is the ith mode-3 fiber (tube) of $\mathcal{Q}$. Then, the solution for each tube $C_{(3)}^{i}$ is given by
$$(\mathcal{C}^{s+1})_{(3)}^{i} = \max\left(\left\|Q_{(3)}^{i}\right\|_2 - \frac{\lambda_3}{\delta+\mu},\ 0\right) \frac{Q_{(3)}^{i}}{\left\|Q_{(3)}^{i}\right\|_2}, \qquad (18)$$
where we define $\frac{0}{0} = 1$.
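Equation (18) is a group soft-thresholding of the mode-3 fibers; the same operator, applied along mode 1 or mode 2, also yields the $\mathcal{N}$ and $\mathcal{S}$ updates of Equations (21) and (22) below. A generic illustrative sketch, with the threshold $\tau = \lambda_3/(\delta+\mu)$ passed in (a small floor replaces the $0/0$ convention, so zero fibers simply stay zero):

```python
import numpy as np

def shrink_fibers(Q, tau, mode):
    """Group soft-thresholding as in Eq. (18): each mode-`mode` fiber
    of Q is scaled by max(1 - tau / ||fiber||_2, 0)."""
    Q_unf = np.moveaxis(Q, mode, 0).reshape(Q.shape[mode], -1)
    norms = np.linalg.norm(Q_unf, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    out = (Q_unf * scale).reshape(
        [Q.shape[mode]] + [s for i, s in enumerate(Q.shape) if i != mode])
    return np.moveaxis(out, 0, mode)
```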
  • Updating the $\mathcal{M}$-subproblem
The $\mathcal{M}$-subproblem is
$$\min_{\mathcal{M}} \ \sum_{r=1}^{dt}\|M_r\|_* + \frac{\eta_2}{2}\|\mathcal{M}-\mathcal{X}^{s+1}\|_F^2 + \frac{\mu}{2}\|\mathcal{M}-\mathcal{M}^s\|_F^2.$$
Combining the two final terms, the problem can be transformed into the following equivalent form:
$$\min_{\mathcal{M}} \ \sum_{r=1}^{dt}\left(\|M_r\|_* + \frac{\eta_2+\mu}{2}\|M_r - R_r\|_F^2\right),$$
where $R_r$ denotes the rth frontal slice of $\mathcal{R} = \frac{\eta_2\mathcal{X}^{s+1} + \mu\mathcal{M}^s}{\eta_2+\mu}$. As a result, the $\mathcal{M}$-subproblem can be effectively addressed through the following $dt$ subproblems:
$$\min_{M_r} \ \|M_r\|_* + \frac{\eta_2+\mu}{2}\|M_r - R_r\|_F^2, \quad r = 1, \ldots, dt. \qquad (19)$$
The solution of each subproblem (19) can be achieved through the singular value thresholding (SVT) operator, specifically
$$M_r^{s+1} = \mathrm{SVT}\left(R_r, \frac{1}{\eta_2+\mu}\right) = U\,\mathrm{diag}\left(\max\left(\sigma_i - \frac{1}{\eta_2+\mu},\ 0\right)\right)V^T, \qquad (20)$$
where $R_r = U\Sigma V^T$ is the singular value decomposition of $R_r$ and $\sigma_i$ is the ith singular value on the diagonal of $\Sigma$.
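The SVT operator of Equation (20) is a one-line soft-thresholding of the singular values. A minimal sketch:

```python
import numpy as np

def svt(R, tau):
    """Singular value thresholding of Eq. (20): soft-threshold the
    singular values of R by tau and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

For the $\mathcal{M}$-update, one would call `svt(R_r, 1.0 / (eta2 + mu))` on each of the $dt$ frontal slices.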
  • Updating the $\mathcal{N}$-subproblem
The $\mathcal{N}$-subproblem is
$$\min_{\mathcal{N}} \ \lambda_1 \sum_i \|N_{(1)}^{i}\|_2 + \frac{\eta_4}{2}\|\mathcal{N}-\mathcal{C}\|_F^2 + \frac{\mu}{2}\|\mathcal{N}-\mathcal{N}^s\|_F^2.$$
Combining the two final terms, the problem can be written as
$$\min_{\mathcal{N}} \ \lambda_1 \sum_i \|N_{(1)}^{i}\|_2 + \frac{\eta_4+\mu}{2}\|\mathcal{N}-\mathcal{F}\|_F^2,$$
where $\mathcal{F} = \frac{\eta_4\mathcal{C} + \mu\mathcal{N}^s}{\eta_4+\mu}$. This subproblem can be divided into the following fiber-wise subproblems:
$$\min_{N_{(1)}^{i}} \ \lambda_1 \left\|N_{(1)}^{i}\right\|_2 + \frac{\eta_4+\mu}{2}\left\|N_{(1)}^{i} - F_{(1)}^{i}\right\|_F^2.$$
Then, the solution for each fiber $N_{(1)}^{i}$ is obtained as
$$(\mathcal{N}^{s+1})_{(1)}^{i} = \max\left(\left\|F_{(1)}^{i}\right\|_2 - \frac{\lambda_1}{\eta_4+\mu},\ 0\right)\frac{F_{(1)}^{i}}{\left\|F_{(1)}^{i}\right\|_2}. \qquad (21)$$
  • Updating the $\mathcal{S}$-subproblem
The $\mathcal{S}$-subproblem is
$$\min_{\mathcal{S}} \ \lambda_2 \sum_i \|S_{(2)}^{i}\|_2 + \frac{\eta_5}{2}\|\mathcal{S}-\mathcal{C}\|_F^2 + \frac{\mu}{2}\|\mathcal{S}-\mathcal{S}^s\|_F^2.$$
Similarly to the $\mathcal{N}$-subproblem, we combine the last two terms:
$$\min_{\mathcal{S}} \ \lambda_2 \sum_i \|S_{(2)}^{i}\|_2 + \frac{\eta_5+\mu}{2}\|\mathcal{S}-\mathcal{B}\|_F^2,$$
where $\mathcal{B} = \frac{\eta_5\mathcal{C}^{s+1} + \mu\mathcal{S}^s}{\eta_5+\mu}$. This subproblem can be divided into the following fiber-wise subproblems:
$$\min_{S_{(2)}^{i}} \ \lambda_2 \left\|S_{(2)}^{i}\right\|_2 + \frac{\eta_5+\mu}{2}\left\|S_{(2)}^{i} - B_{(2)}^{i}\right\|_F^2.$$
Then, the solution for each fiber $S_{(2)}^{i}$ is obtained as
$$(\mathcal{S}^{s+1})_{(2)}^{i} = \max\left(\left\|B_{(2)}^{i}\right\|_2 - \frac{\lambda_2}{\eta_5+\mu},\ 0\right)\frac{B_{(2)}^{i}}{\left\|B_{(2)}^{i}\right\|_2}. \qquad (22)$$
  • Updating the $Q$-subproblem
The $Q$-subproblem is
$$\min_{Q^TQ=I} \ \frac{\eta_1}{2}\|T(\mathcal{U}^{s+1}) - \mathcal{X}^{s+1}\times_3 Q\|_F^2 + \frac{\mu}{2}\|Q-Q^s\|_F^2.$$
The problem can be addressed by solving the following formulation:
$$\begin{aligned} &\min_{Q^TQ=I} \ \frac{\eta_1}{2}\left\|T(\mathcal{U}^{s+1}) - \mathcal{X}^{s+1}\times_3 Q\right\|_F^2 + \frac{\mu}{2}\|Q-Q^s\|_F^2 \\ =\ &\min_{Q^TQ=I} \ \frac{\eta_1}{2}\left\|T(\mathcal{U}^{s+1})_{(3)} - Q X_{(3)}^{s+1}\right\|_F^2 + \frac{\mu}{2}\|Q-Q^s\|_F^2 \\ =\ &\min_{Q^TQ=I} \ \frac{\eta_1}{2}\mathrm{Tr}\left[\left(T(\mathcal{U}^{s+1})_{(3)} - QX_{(3)}^{s+1}\right)^T\left(T(\mathcal{U}^{s+1})_{(3)} - QX_{(3)}^{s+1}\right)\right] + \frac{\mu}{2}\mathrm{Tr}\left[(Q-Q^s)^T(Q-Q^s)\right] \\ =\ &\max_{Q^TQ=I} \ \mathrm{Tr}\left[\left(\eta_1\, T(\mathcal{U}^{s+1})_{(3)}\left(X_{(3)}^{s+1}\right)^T + \mu Q^s\right) Q^T\right]. \end{aligned}$$
Here, $\mathrm{Tr}(\cdot)$ signifies the trace of a matrix. This maximization yields a closed-form solution, which is given by
$$Q^{s+1} = \hat{U}\hat{V}^T, \qquad (23)$$
where $\hat{U}\hat{\Sigma}\hat{V}^T = \mathrm{SVD}\left(\eta_1\, T(\mathcal{U}^{s+1})_{(3)}\left(X_{(3)}^{s+1}\right)^T + \mu Q^s\right)$.
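Equation (23) is an orthogonal Procrustes problem: the trace is maximized by taking the SVD of the matrix inside the trace and dropping its singular values. A sketch, where `U3` stands for $T(\mathcal{U}^{s+1})_{(3)}$ and `X3` for $X_{(3)}^{s+1}$ (illustrative variable names, not from the paper):

```python
import numpy as np

def update_Q(U3, X3, Q_prev, eta1, mu):
    """Q-subproblem of Eq. (23). The maximizer of Tr(H Q^T) over
    Q^T Q = I is Q = U_hat @ V_hat^T, where H = U_hat S V_hat^T."""
    H = eta1 * U3 @ X3.T + mu * Q_prev
    U_hat, _, Vt_hat = np.linalg.svd(H, full_matrices=False)
    return U_hat @ Vt_hat
```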
We outline the algorithm for cloud detection and removal in Algorithm 1.
Algorithm 1 Tri-fiber-wise sparse collaborative low-rank prior knowledge algorithm.
Input: Multitemporal images contaminated by clouds $\mathcal{O}$ and the parameters $\lambda_1, \lambda_2, \lambda_3$, $\eta_i$ ($i = 1, \ldots, 5$), and $\gamma$.
1: for $s = 1$ to maxiter do
2:   Update $\mathcal{U}^{s+1}$ by (15);
3:   Update $\mathcal{X}^{s+1}$ via (17);
4:   Update $\mathcal{C}^{s+1}$ via (18);
5:   Update $\mathcal{N}^{s+1}$ via (21);
6:   Update $\mathcal{S}^{s+1}$ via (22);
7:   for $r = 1$ to $dt$ do
8:     Update $M_r^{s+1}$ via (20);
9:   end for
10:  Update $Q^{s+1}$ via (23);
11:  Check for convergence; if satisfied, stop.
12: end for
Output: Image component $\mathcal{U}$.

4. Experiments

In this section, we investigate the advantages of our method in cloud detection and removal. We evaluated the effectiveness of the proposed sparse function against the TNN [37], ALM-IPG [20], TVLRSDC [28], and BC-SLRpGS [32] methods. The algorithm was stopped when
$$\frac{\|\mathcal{U}^{s+1}-\mathcal{U}^{s}\|_F}{\|\mathcal{U}^{s}\|_F} < tol \quad \text{and} \quad \frac{\|\mathcal{C}^{s+1}-\mathcal{C}^{s}\|_F}{\|\mathcal{C}^{s}\|_F} < tol,$$
or when the number of iterations exceeded maxiter. The initial values were set to $\mathcal{U}^0 = \mathcal{C}^0 = \mathcal{M}^0 = \mathcal{S}^0 = \mathcal{X}^0 = \mathcal{N}^0 = Q^0 = 0$, with $\mu = 0.01$, $tol = 10^{-5}$, and maxiter $= 2000$. The experiments were run in MATLAB R2017b on a Windows 10 computer with an Intel Core i7-9700K CPU @ 3.60 GHz and 16 GB of RAM.
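The stopping rule above is a standard relative-change test; the sketch below mirrors it (the small floors that guard against division by zero in early iterations are an implementation detail not specified in the paper):

```python
import numpy as np

def converged(U_new, U_old, C_new, C_old, tol=1e-5):
    """Relative-change stopping rule for both the image and cloud
    components, as used to terminate Algorithm 1."""
    ru = np.linalg.norm(U_new - U_old) / max(np.linalg.norm(U_old), 1e-12)
    rc = np.linalg.norm(C_new - C_old) / max(np.linalg.norm(C_old), 1e-12)
    return ru < tol and rc < tol
```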

4.1. Synthetic Experiments

Initially, we employed multitemporal remote sensing images to demonstrate the efficiency of the proposed technique in restoring information hidden by cloud cover. We generated four simulated datasets (Munich, Picardie, France, and Beijing) by extracting data from the Sentinel-2 (https://earthexplorer.usgs.gov) and Landsat-8 (https://theia.cnes.fr/atdistrib/rocket/#/home) collections (accessed on 21 April 2024). The details of these datasets are comprehensively outlined in Table 2 and visually represented in Figure 2, Figure 3, Figure 4 and Figure 5. GSD denotes the ground sampling distance. Various types of cloud masks were applied to these multitemporal images to increase the complexity of the task.
To rigorously assess the effectiveness of the proposed method, we employ three quantitative metrics, mean PSNR, mean CC [39], and mean SSIM [40], as established benchmarks for this type of analysis. Superior performance is indicated by higher values across these metrics. In Table 3, we provide a comprehensive quantitative comparison of our method against established methods like TNN, ALM-IPG, TVLRSDC, and BC-SLRpGS. The highest values of PSNR, CC, and SSIM are indicated in bold for clear distinction. Our method consistently surpasses the comparative methods in PSNR across all datasets. Regarding SSIM, our method displays a slight disadvantage compared to the BC-SLRpGS method in the Munich dataset and to the ALM-IPG method in the Picardie dataset, highlighting areas for further improvement.
To provide a more intuitive comparison of the methods’ performance, we detail the cloud removal results in Figure 6, Figure 7, Figure 8 and Figure 9. These figures include zoomed-in patches and corresponding residual images for a more nuanced comparison. Our method and BC-SLRpGS successfully reconstructed most cloud information in the Munich dataset, whereas other methods introduced distortions in image details. In the Picardie dataset, the TVLRSDC method was unable to adequately reconstruct cloud regions, in contrast to TNN, ALM-IPG, BC-SLRpGS, and our proposed method, which yielded satisfactory results. Notably, the TNN and ALM-IPG methods necessitate a predefined cloud mask, potentially incorporating additional information into the reconstruction process. Our method produced darker residual images, as seen at the bottom of Figure 6 and Figure 7, signifying a closer similarity to cloud-free images and, thus, more effective cloud removal.
The exceptional performance of our method can be attributed to our effective utilization of the sparse structure of mode-1, -2, and -3 fibers. For the Beijing dataset, our method, along with ALM-IPG and BC-SLRpGS, achieved promising outcomes, outperforming TNN and TVLRSDC methods, which were visually inferior. The zoomed-in views in Figure 8 further substantiate our method’s clarity and precision in reconstructing more detailed and clearer images compared to other methods. For the France dataset, as illustrated in Figure 9, all tested methods failed to achieve promising results, primarily due to the dataset’s intricate details that defy reconstruction through global correlations. Nonetheless, our method yielded notably clearer outcomes compared to the others and demonstrated superior accuracy in cloud mask detection relative to BC-SLRpGS.

4.2. Real Experiments

Two real datasets, namely, the Eure and Morocco data, as outlined in Table 4 and visualized in Figure 10 and Figure 11, were used to assess the efficacy of our method. The reconstructed results for the Eure and Morocco datasets using various methods are depicted in Figure 12 and Figure 13, respectively. From Figure 12, it is evident that the TNN, ALM-IPG, and TVLRSDC methods are unsuccessful in effectively removing the clouds and reconstructing cloud information. Conversely, both BC-SLRpGS and the proposed method exhibit similar performance in this aspect. From the zoomed-in figures presented in Figure 12, it is evident that our method delivers smoother results and captures a greater amount of detail in comparison to the BC-SLRpGS method. As shown by the visual comparisons in Figure 13, our method demonstrates enhanced performance compared to the other methods. At temporal node 1 in Figure 13, the proposed method effectively eliminates the cloud, while the result of TVLRSDC still exhibits some discrete clouds. Furthermore, as displayed in the zoomed-in figure in Figure 13, our method produces a clearer reconstruction result than the other compared methods. At temporal node 2 in Figure 13, we find that the proposed method removes the cloud areas well and preserves the cloud-free areas, whereas the TNN and ALM-IPG methods change the cloud-free information during the reconstruction process. In conclusion, the proposed method outperforms the other methods in terms of both quantitative metrics and visual quality.

5. Conclusions

We have introduced a novel tri-fiber-wise sparse function to characterize the cloud component. By leveraging sparse prior information, our method excels at detecting clouds and effectively reconstructing contaminated values, providing robust results. Moreover, we described the global prior of the image component by tensor multi-rank. Utilizing the introduced novel sparse and low-rank functions, we have developed a cloud removal model that incorporates a box-constrained strategy. This model not only effectively detects clouds but also simultaneously estimates missing information. The experiments demonstrate the superior effectiveness of the proposed method to other cloud removal techniques.

Author Contributions

Conceptualization, D.-L.S. and T.-Y.J.; data curation, T.-Y.J.; formal analysis, T.-Y.J.; investigation, D.-L.S.; methodology, T.-Y.J.; resources, D.-L.S. and M.D.; software, T.-Y.J.; validation, D.-L.S. and T.-Y.J.; visualization, D.-L.S. and M.D.; writing—original draft, D.-L.S.; writing—review and editing, T.-Y.J. and M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grants 12001059, 12001432, 12071062, and 12201522, in part by the Natural Science Foundation of Shaanxi Province under Grant 2020JQ-342, and in part by the Fundamental Research Funds for the Central Universities under Grant 2682023CX069.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gao, L.; Wang, Z.; Zhuang, L.; Yu, H.; Zhang, B.; Chanussot, J. Using Low-Rank Representation of Abundance Maps and Nonnegative Tensor Factorization for Hyperspectral Nonlinear Unmixing. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5504017.
  2. Yao, J.; Meng, D.; Zhao, Q.; Cao, W.; Xu, Z. Nonconvex-Sparsity and Nonlocal-Smoothness-Based Blind Hyperspectral Unmixing. IEEE Trans. Image Process. 2019, 28, 2991–3006.
  3. Wang, K.; Wang, Y.; Zhao, X.L.; Chan, J.C.W.; Xu, Z.; Meng, D. Hyperspectral and Multispectral Image Fusion via Nonlocal Low-Rank Tensor Decomposition and Spectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7654–7671.
  4. He, W.; Yokoya, N.; Yuan, X. Fast Hyperspectral Image Recovery of Dual-Camera Compressive Hyperspectral Imaging via Non-Iterative Subspace-Based Fusion. IEEE Trans. Image Process. 2021, 30, 7170–7183.
  5. Dian, R.; Li, S.; Sun, B.; Guo, A. Recent Advances and New Guidelines on Hyperspectral and Multispectral Image Fusion. Inf. Fusion 2021, 69, 40–51.
  6. Li, J.; Zhang, Y.; Sheng, Q.; Wu, Z.; Wang, B.; Hu, Z.; Shen, G.; Schmitt, M.; Molinier, M. Thin Cloud Removal Fusing Full Spectral and Spatial Features for Sentinel-2 Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8759–8775.
  7. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978.
  8. Cao, R.; Chen, Y.; Chen, J.; Zhu, X.; Shen, M. Thick Cloud Removal in Landsat Images Based on Autoregression of Landsat Time-Series Data. Remote Sens. Environ. 2020, 249, 112001.
  9. Xu, S.; Cao, X.; Peng, J.; Ke, Q.; Ma, C.; Meng, D. Hyperspectral Image Denoising by Asymmetric Noise Modeling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5545214.
  10. Zheng, W.J.; Zhao, X.L.; Zheng, Y.B.; Lin, J.; Zhuang, L.; Huang, T.Z. Spatial-Spectral-Temporal Connective Tensor Network Decomposition for Thick Cloud Removal. ISPRS J. Photogramm. Remote Sens. 2023, 199, 182–194.
  11. Chen, Y.; Chen, M.; He, W.; Zeng, J.; Huang, M.; Zheng, Y.B. Thick Cloud Removal in Multitemporal Remote Sensing Images via Low-Rank Regularized Self-Supervised Network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5506613.
  12. Criminisi, A.; Perez, P.; Toyama, K. Region Filling and Object Removal by Exemplar-Based Image Inpainting. IEEE Trans. Image Process. 2004, 13, 1200–1212.
  13. Shen, H.; Zhang, L. A MAP-Based Algorithm for Destriping and Inpainting of Remotely Sensed Images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1492–1502.
  14. He, K.; Sun, J. Image Completion Approaches Using the Statistics of Similar Patches. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2423–2435.
  15. Mendez-Rial, R.; Calvino-Cancela, M.; Martin-Herrero, J. Anisotropic Inpainting of the Hypercube. IEEE Geosci. Remote Sens. Lett. 2012, 9, 214–218.
  16. Wang, L.; Qu, J.; Xiong, X.; Hao, X.; Xie, Y.; Che, N. A New Method for Retrieving Band 6 of Aqua MODIS. IEEE Geosci. Remote Sens. Lett. 2006, 3, 267–270.
  17. Li, X.; Shen, H.; Zhang, L.; Zhang, H.; Yuan, Q. Dead Pixel Completion of Aqua MODIS Band 6 Using a Robust M-Estimator Multiregression. IEEE Geosci. Remote Sens. Lett. 2014, 11, 768–772.
  18. Lin, C.H.; Tsai, P.H.; Lai, K.H.; Chen, J.Y. Cloud Removal from Multitemporal Satellite Images Using Information Cloning. IEEE Trans. Geosci. Remote Sens. 2012, 51, 232–241.
  19. Li, X.; Shen, H.; Li, H.; Zhang, L. Patch Matching-Based Multitemporal Group Sparse Representation for the Missing Information Reconstruction of Remote-Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3629–3641.
  20. Wang, J.; Olsen, P.A.; Conn, A.R.; Lozano, A.C. Removing Clouds and Recovering Ground Observations in Satellite Image Sequences via Temporally Contiguous Robust Matrix Completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2754–2763.
  21. Zhang, Q.; Yuan, Q.; Li, J.; Li, Z.; Shen, H.; Zhang, L. Thick Cloud and Cloud Shadow Removal in Multitemporal Imagery Using Progressively Spatio-Temporal Patch Group Deep Learning. ISPRS J. Photogramm. Remote Sens. 2020, 162, 148–160.
  22. Zhang, Q.; Yuan, Q.; Li, Z.; Sun, F.; Zhang, L. Combined Deep Prior with Low-Rank Tensor SVD for Thick Cloud Removal in Multitemporal Images. ISPRS J. Photogramm. Remote Sens. 2021, 177, 161–173.
  23. Li, W.; Li, Y.; Chan, J.C.W. Thick Cloud Removal with Optical and SAR Imagery via Convolutional-Mapping-Deconvolutional Network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2865–2879.
  24. Gao, J.; Yuan, Q.; Li, J.; Zhang, H.; Su, X. Cloud Removal with Fusion of High Resolution Optical and SAR Images Using Generative Adversarial Networks. Remote Sens. 2020, 12, 191.
  25. Chen, B.; Huang, B.; Chen, L.; Xu, B. Spatially and Temporally Weighted Regression: A Novel Method to Produce Continuous Cloud-Free Landsat Imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 27–37.
  26. Melgani, F. Contextual Reconstruction of Cloud-Contaminated Multitemporal Multispectral Images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 442–455.
  27. Wen, F.; Zhang, Y.; Gao, Z.; Ling, X. Two-Pass Robust Component Analysis for Cloud Removal in Satellite Image Sequence. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1090–1094.
  28. Chen, Y.; He, W.; Yokoya, N.; Huang, T.Z. Blind Cloud and Cloud Shadow Removal of Multitemporal Images Based on Total Variation Regularized Low-Rank Sparsity Decomposition. ISPRS J. Photogramm. Remote Sens. 2019, 157, 93–107.
  29. Meraner, A.; Ebel, P.; Zhu, X.X.; Schmitt, M. Cloud Removal in Sentinel-2 Imagery Using a Deep Residual Neural Network and SAR-Optical Data Fusion. ISPRS J. Photogramm. Remote Sens. 2020, 166, 333–346.
  30. Wang, J.L.; Zhao, X.L.; Li, H.C.; Cao, K.X.; Miao, J.; Huang, T.Z. Unsupervised Domain Factorization Network for Thick Cloud Removal of Multitemporal Remotely Sensed Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5405912.
  31. Duan, C.; Pan, J.; Li, R. Thick Cloud Removal of Remote Sensing Images Using Temporal Smoothness and Sparsity Regularized Tensor Optimization. Remote Sens. 2020, 12, 3446.
  32. Ji, T.Y.; Chu, D.; Zhao, X.L.; Hong, D. A Unified Framework of Cloud Detection and Removal Based on Low-Rank and Group Sparse Regularizations for Multitemporal Multispectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5303015.
  33. Attouch, H.; Bolte, J.; Svaiter, B.F. Convergence of Descent Methods for Semi-Algebraic and Tame Problems: Proximal Algorithms, Forward–Backward Splitting, and Regularized Gauss–Seidel Methods. Math. Program. 2013, 137, 91–129.
  34. Wang, M.; Hong, D.; Han, Z.; Li, J.; Yao, J.; Gao, L.; Zhang, B.; Chanussot, J. Tensor Decompositions for Hyperspectral Data Processing in Remote Sensing: A Comprehensive Review. IEEE Geosci. Remote Sens. Mag. 2023, 11, 26–72.
  35. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500.
  36. Zhang, Z.; Ely, G.; Aeron, S.; Hao, N.; Kilmer, M. Novel Methods for Multilinear Data Completion and De-noising Based on Tensor-SVD. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 3842–3849.
  37. Zhang, Z.; Aeron, S. Exact Tensor Completion Using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526.
  38. Yuan, L.; Li, C.; Mandic, D.; Cao, J.; Zhao, Q. Tensor Ring Decomposition with Rank Minimization on Latent Space: An Efficient Approach for Tensor Completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 9151–9158.
  39. Li, X.; Shen, H.; Zhang, L.; Zhang, H.; Yuan, Q.; Yang, G. Recovering Quantitative Remote Sensing Products Contaminated by Thick Clouds and Shadows Using Multitemporal Dictionary Learning. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7086–7098.
  40. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. (a) Observed cloud-contaminated image, (b) cloud component, (c–e) mode-1, -2, and -3 fiber-wise sparsity, i.e., the $l_2$-norms of the mode-1, -2, and -3 fibers from (b), respectively.
Figure 2. Munich dataset taken by Landsat-8.
Figure 3. Picardie dataset taken by Sentinel-2.
Figure 4. Beijing dataset taken by Sentinel-2.
Figure 5. France dataset taken by Landsat-8.
Figure 6. The outcomes of cloud removal using various methods on the Munich dataset. The corresponding zoomed-in patches accompanying each image are depicted at the bottom.
Figure 7. The outcomes of cloud removal using various methods on the Picardie dataset. The corresponding zoomed-in patches accompanying each image are depicted at the bottom.
Figure 8. The outcomes of cloud removal using various methods on the Beijing dataset. The corresponding zoomed-in patches accompanying each image are depicted at the bottom.
Figure 9. The outcomes of cloud removal using various methods on the France dataset. The corresponding zoomed-in patches accompanying each image are depicted at the bottom.
Figure 10. Eure dataset taken by Sentinel-2.
Figure 11. Morocco dataset taken by Sentinel-2.
Figure 12. The outcomes of cloud removal using various methods on the Eure dataset. The second row in each image represents the zoomed-in details associated with the images above.
Figure 13. The outcomes of cloud removal using various methods on the Morocco dataset. The second row in each image represents the zoomed-in details associated with the images above.
Table 1. Notation description.

$a$, $\mathbf{a}$, $A$, $\mathcal{A}$: scalar, vector, matrix, tensor
$\mathrm{Tr}(A)$: trace of $A \in \mathbb{R}^{n\times n}$, with $\mathrm{Tr}(A) = \sum_{i=1}^{n} a_{ii}$
$\|\mathcal{A}\|_F$: $\|\mathcal{A}\|_F := \sqrt{\sum_{h,w,d,t}|a_{h,w,d,t}|^2}$
$\|A\|_*$: $\|A\|_* := \mathrm{Tr}(\sqrt{A^T A})$
$A_{(k)}$: unfolding of $\mathcal{A}$ in its kth mode
$A_{(k)}^{i}$: ith mode-k fiber of $\mathcal{A}$
$A_r$: rth frontal slice of $\mathcal{A}$
$\mathcal{A} = \mathcal{X} \times_3 Q$: $A_{(3)}^{i} = Q X_{(3)}^{i}$
$T(\cdot)$: reshaping of a fourth-order tensor into a third-order tensor
$D(\cdot)$: difference operator in the fourth mode
Table 2. Multitemporal remote sensing images for synthetic experiments.

Data       Image Size   Spectral   Temporal   GSD    Source
Munich     512 × 512    3          4          30 m   Landsat-8
Picardie   500 × 500    6          3          20 m   Sentinel-2
Beijing    256 × 256    6          4          20 m   Sentinel-2
France     400 × 400    7          3          30 m   Landsat-8
Table 3. Quantitative metrics for simulated data. The highest value in each row is achieved by the proposed method, except for the SSIM on Munich (BC-SLRpGS) and Picardie (ALM-IPG).

Dataset    Index   Observed   TNN      ALM-IPG   TVLRSDC   BC-SLRpGS   Proposed
Munich     PSNR    4.29       26.66    23.26     26.23     27.81       29.6
           SSIM    0.4769     0.8344   0.8462    0.8385    0.8506      0.8496
           CC      0.154      0.8897   0.841     0.8545    0.9013      0.9184
Picardie   PSNR    4.56       42.89    43.49     37.33     43.35       44.06
           SSIM    0.543      0.987    0.9924    0.9493    0.9899      0.9918
           CC      0.0904     0.9397   0.9509    0.7558    0.9421      0.9514
Beijing    PSNR    5.84       36.71    38.34     36.14     38.88       39.58
           SSIM    0.6182     0.9379   0.9608    0.949     0.9622      0.9638
           CC      0.0196     0.9384   0.9646    0.9287    0.9689      0.9729
France     PSNR    6.28       28.31    28.32     27.09     29.56       30.14
           SSIM    0.6224     0.7947   0.8001    0.8049    0.8488      0.8536
           CC      0.2398     0.9603   0.9582    0.9194    0.9661      0.9697
Table 4. Multitemporal remote sensing images for real experiments.

Data      Image Size   Spectral   Temporal   GSD    Source
Eure      400 × 400    4          4          10 m   Sentinel-2
Morocco   600 × 600    4          4          10 m   Sentinel-2
