Article

Multi-Dimensional Low-Rank with Weighted Schatten p-Norm Minimization for Hyperspectral Anomaly Detection

1 State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2 Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China
3 Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi’an Jiaotong University, Xi’an 710049, China
4 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(1), 74; https://doi.org/10.3390/rs16010074
Submission received: 30 September 2023 / Revised: 18 December 2023 / Accepted: 21 December 2023 / Published: 24 December 2023
(This article belongs to the Special Issue Remote Sensing of Target Object Detection and Identification II)

Abstract: Hyperspectral anomaly detection is an important unsupervised binary classification problem that aims to effectively distinguish between background and anomalies in hyperspectral images (HSIs). In recent years, methods based on low-rank tensor representations have been proposed to decompose HSIs into low-rank background and sparse anomaly tensors. However, current methods neglect the low-rank information in the spatial dimensions and rely heavily on the background information contained in the dictionary. Furthermore, these algorithms show limited robustness when the dictionary information is missing or corrupted by high-level noise. To address these problems, we propose a novel method called multi-dimensional low-rank (MDLR) for HSI anomaly detection. It first reconstructs three background tensors separately from the three directional slices of the background tensor. Then, weighted Schatten p-norm minimization is employed to enforce the low-rank constraint on the background tensor, and $L_{F,1}$-norm regularization is used to describe the sparsity in the anomaly tensor. Finally, a well-designed alternating direction method of multipliers (ADMM) is employed to effectively solve the optimization problem. Extensive experiments on four real-world datasets show that our approach outperforms existing anomaly detection methods in terms of accuracy.

1. Introduction

Compared with conventional images such as RGB images, multispectral images, SAS images [1], and delay-Doppler images [2], hyperspectral images (HSIs) offer the advantage of capturing hundreds of contiguous spectral bands of the same scene. This unique characteristic of HSI proves to be beneficial for target detection and finds wide applications in various fields such as land cover classification [3,4,5], mineral survey [6,7,8], environmental protection [9,10,11], and other applications [12,13,14,15,16,17,18]. In hyperspectral target detection, when no prior information about the target is available, the task must be handled in an unsupervised manner and is called anomaly detection. Since prior target information is often difficult to obtain in practical applications, hyperspectral anomaly detection is frequently the more suitable choice. In essence, hyperspectral anomaly detection can be viewed as an unsupervised binary classification problem that separates an image into background and anomalies, where anomalies typically represent rare targets that occupy only a small number of pixels.
Over the past two decades, there has been a growing interest in hyperspectral anomaly detection, leading to the development of numerous detection algorithms. The Reed–Xiaoli (RX) algorithm [19] is a classical statistical modelling method for anomaly detection, assuming that the background follows a multivariate Gaussian distribution. The main objective of the RX algorithm is to compute the Mahalanobis distance between the measured pixel and the background [20], which involves estimating the mean vector and the covariance matrix of the background. Two commonly studied extended versions of the RX algorithm are the global RX (GRX) [21] and the local RX (LRX) [22], where the former calculates the distance between the measured pixel and all background pixels, and the latter calculates the distance between the measured pixel and the surrounding background pixels. However, in hyperspectral applications, it is crude to describe the background with a single Gaussian distribution, and the mean vector and covariance matrix of the background are susceptible to the noisy pixels and anomalies.
In general, the HSI can be represented as a third-order tensor with two spatial dimensions and one spectral dimension. Taking into account the similarity between spectral bands, the HSI can be transformed into a matrix along the spectral dimension, which inspires the matrix-based anomaly detection methods. Anomalies are assumed to be randomly distributed in the background and to have sparse properties. By formulating a constrained convex optimization problem that incorporates the characteristics of both the background and the anomalies, successful separation of the anomalies from the background can be achieved. Consequently, the low-rank and sparse matrix decomposition (LRaSMD) algorithms [23,24,25] have been used to separate the HSI data into low-rank background and sparse anomalies and have demonstrated their effectiveness in previous studies [26,27,28]. According to the LRaSMD approach, the spectral response of a pixel $y_i$ $(i \in \{1, \ldots, N\})$ in the $d$ bands of the HSI can be represented as a spectral vector $y_i \in \mathbb{R}^d$ with the decomposition:
$$y_i = x_i + s_i, \qquad \begin{cases} s_i = 0, & \text{if } y_i \text{ is part of the background}, \\ s_i \neq 0, & \text{if } y_i \text{ is part of the anomalies}, \end{cases}$$
which can further be written in matrix form as:
Y = X + S ,
where $X = [x_1, x_2, \ldots, x_N]^T$ and $S = [s_1, s_2, \ldots, s_N]^T \in \mathbb{R}^{N \times d}$ represent the background and anomaly components of the HSI matrix $Y = [y_1, y_2, \ldots, y_N]^T \in \mathbb{R}^{N \times d}$, where $N$ represents the number of pixels in the HSI, and $d$ represents the number of spectral bands. Furthermore, to address the attention imbalance between anomalies and the background observed in LRaSMD, Zhang et al. [29] proposed the LRaSMD-based Mahalanobis distance (LSMAD) method. Xu et al. [30] integrated cooperative representation and Euclidean distance into the LRaSMD framework. Li et al. [31] investigated LRaSMD under the assumption of a mixture-of-Gaussian (MoG) distribution and developed a global detector based on the Manhattan distance. To further exploit the intrinsic information of the background, low-rank representation (LRR) [32,33] was proposed, which maps the HSI to multiple linear subspaces using a dictionary. Xu et al. [34] proposed a new anomaly detection method called low-rank and sparse representation (LRASR), which employed a dictionary construction strategy and a sparsity-inducing regularization term to reconstruct the background matrix. To preserve the local geometric structure and spatial relationships of the background, the graph and total variation regularized low-rank representation (GTVLRR) [35] method was introduced for HSI anomaly detection. Fu et al. [36] used convolutional neural network (CNN) denoisers [37] as priors for the coefficients of the dictionary.
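As a minimal illustration of the matrix form above, an HSI cube can be flattened so that each pixel spectrum becomes one row of $Y$. The sketch below uses synthetic data and a crude rank-1 "background" purely for illustration; it is not the paper's algorithm:

```python
import numpy as np

# Illustrative sketch (synthetic data): flatten an HSI cube of size
# w x h x d into the N x d matrix Y used by LRaSMD, with N = w * h.
w, h, d = 4, 5, 6
rng = np.random.default_rng(0)
cube = rng.standard_normal((w, h, d))

Y = cube.reshape(w * h, d)        # each row y_i is one pixel spectrum

# A toy split Y = X + S: a rank-1 background built from the mean
# spectrum, with the residual standing in for the anomaly part.
X = np.tile(Y.mean(axis=0), (w * h, 1))
S = Y - X

assert Y.shape == (20, 6)
assert np.linalg.matrix_rank(X) == 1
assert np.allclose(X + S, Y)
```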
The aforementioned matrix-based anomaly detection methods tend to destroy the spatial structure of HSI and fail to effectively exploit the inherent spatial information [38,39]. In recent years, tensor-based methods have emerged as a promising approach to HSI anomaly detection, allowing the decomposition of HSI data into low-rank and sparse components. Sun et al. [23] used Tucker decomposition to obtain the low-rank background and an unmixing method to extract the spectral features of the anomaly. Li et al. [40] embedded priors into the dimensions of a tensor with different regularizations. Song et al. [41] proposed a dictionary construction strategy based on Tucker decomposition, which improved the inclusion of spectral segment information in the dictionary. Shang et al. [42] found a new prior that describes the sparsity of the core tensor of a gradient map (GCS) under Tucker decomposition. However, Tucker decomposition has inherent limitations in terms of rank. To address this issue, Wang et al. [43] extended the concept of LRR from matrix to tensor, taking into account the three-dimensional structure of HSI. Sun et al. [44] represented the background tensor as the product of a transformed tensor and a low-rank matrix. However, these methods pay primary attention to the low-rankness of the spectral dimension of the tensor, neglecting the low-rank information in the spatial dimensions. The dictionary, which maps the HSI into multiple linear subspaces, plays a crucial role in the reconstruction of the background component. To achieve an effective separation of background and anomalies, the dictionary should primarily contain background information. Although some methods choose the original data themselves as the dictionary, they may still contain anomalies that can adversely affect the background reconstruction process.
To address these issues, we propose a multi-dimensional low-rank (MDLR) strategy for HSI anomaly detection. Unlike the existing tensor-based methods that construct one background tensor, our approach constructs three background tensors, two capturing the spatial dimensions and one representing the spectral dimension. Using the tensor singular value decomposition (t-SVD) technique, we obtain the $f$-diagonal tensor $\mathcal{S}$, characterizing the background. To enforce low-rankness in the background tensor, we apply the weighted Schatten p-norm minimization (WSNM) to the slices of $\mathcal{S}$. Finally, the three background tensors are merged into a single background tensor. In addition, anomalies in the HSI tend to occur at consistent spatial locations across all spectral bands and exhibit a slight spectral density. To capture this property, we impose a joint spectral–spatial sparsity on the anomaly tensor using the $L_{F,1}$ norm. The main contributions of this work can be summarized as follows:
  • Low-rankness along three dimensions in the frequency domain is exploited. Through the low-rank property analysis of the tensor along different dimensions, we found that it is not sufficient to measure the low-rankness along only one dimension. Therefore, multi-dimensional low-rankness is embedded into different tensors with t-SVD along different slices. These tensors are then fused to form a background tensor that captures the low-rank characteristics across all three dimensions and enables the MDLR method to effectively explore more comprehensive background information.
  • To enforce low-rankness in the background tensor, WSNM is applied to the frontal slices of the $f$-diagonal tensor, which enhances the preservation of the low-rank structure in the background tensor.
The rest of this paper is organized as follows. In Section 2, notations and preliminaries are introduced. The proposed multidimensional low-rank model is presented in detail in Section 3. The experimental results are demonstrated in Section 4. The conclusion is given in Section 5.

2. Notations and Preliminaries

In this section, we introduce the notations and preliminaries used in this paper. Column vectors are represented by lowercase letters, e.g., $x$. Matrices are represented by bold capital letters, e.g., $\mathbf{X}$. An HSI with $w$ rows, $h$ columns, and $d$ spectral bands can be naturally represented as a third-order tensor $\mathcal{X} \in \mathbb{R}^{w \times h \times d}$. The discrete Fourier transform (DFT) of $\mathcal{X}$ along the spectral dimension can be written as $\hat{\mathcal{X}} = \operatorname{fft}(\mathcal{X}, [\,], 3)$. The inverse DFT of $\hat{\mathcal{X}}$ is written as $\mathcal{X} = \operatorname{ifft}(\hat{\mathcal{X}}, [\,], 3)$. $\mathcal{X}^*$ represents the conjugate transpose of $\mathcal{X}$. $\mathcal{X}^{(i)}$ is the $i$-th frontal slice of $\mathcal{X}$. The block circulant matrix $\operatorname{bcirc}(\mathcal{N})$ of $\mathcal{N} \in \mathbb{R}^{w \times h \times d}$ is defined as follows:
$$\operatorname{bcirc}(\mathcal{N}) = \begin{bmatrix} N^{(1)} & N^{(d)} & \cdots & N^{(2)} \\ N^{(2)} & N^{(1)} & \cdots & N^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ N^{(d)} & N^{(d-1)} & \cdots & N^{(1)} \end{bmatrix}.$$
The block vectorization operation bvec ( · ) of N and its inverse operation bvfold ( · ) are denoted as:
$$\operatorname{bvec}(\mathcal{N}) = \begin{bmatrix} N^{(1)} \\ N^{(2)} \\ \vdots \\ N^{(d)} \end{bmatrix}, \qquad \operatorname{bvfold}(\operatorname{bvec}(\mathcal{N})) = \mathcal{N}.$$
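The two operators can be sketched in NumPy as follows (a minimal illustration; slice indexing is zero-based here, unlike the 1-based notation above):

```python
import numpy as np

def bvec(T):
    """Stack the frontal slices T[:, :, k] vertically (block vectorization)."""
    return np.concatenate([T[:, :, k] for k in range(T.shape[2])], axis=0)

def bcirc(T):
    """Block circulant matrix whose j-th block column is bvec of T
    circularly shifted by j along the third dimension."""
    d = T.shape[2]
    return np.concatenate([bvec(np.roll(T, j, axis=2)) for j in range(d)], axis=1)

T = np.arange(12).reshape(2, 2, 3)
C = bcirc(T)
assert C.shape == (6, 6)
assert np.array_equal(C[:, :2], bvec(T))        # first block column
assert np.array_equal(C[2:4, 2:4], T[:, :, 0])  # diagonal blocks are N^(1)
```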
Definition 1
(Tensor product). The t-product of third-order tensors $\mathcal{N} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{M} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$ is the tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_4 \times n_3}$ defined as follows:
$$\mathcal{A} = \mathcal{N} * \mathcal{M} = \operatorname{bvfold}(\operatorname{bcirc}(\mathcal{N}) \operatorname{bvec}(\mathcal{M})).$$
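Because the DFT block-diagonalizes block circulant matrices, the t-product can equivalently be computed slice-wise in the Fourier domain. A minimal NumPy sketch, using the identity tensor of Definition 3 below as a sanity check:

```python
import numpy as np

def t_product(N, M):
    """t-product A = N * M, computed slice-wise in the Fourier domain.
    Equivalent to bvfold(bcirc(N) @ bvec(M)) from Definition 1."""
    n1, n2, n3 = N.shape
    assert M.shape[0] == n2 and M.shape[2] == n3
    Nf = np.fft.fft(N, axis=2)
    Mf = np.fft.fft(M, axis=2)
    Af = np.einsum('ijk,jlk->ilk', Nf, Mf)   # per-slice matrix product
    return np.real(np.fft.ifft(Af, axis=2))

# Multiplying by the identity tensor leaves the tensor unchanged.
rng = np.random.default_rng(1)
N = rng.standard_normal((3, 4, 5))
I = np.zeros((4, 4, 5))
I[:, :, 0] = np.eye(4)
assert np.allclose(t_product(N, I), N)
```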
Definition 2
(Slices of a Tensor). There are three types of slices in a tensor: horizontal slices $\mathcal{X}_{i::}$, lateral slices $\mathcal{X}_{:j:}$, and frontal slices $\mathcal{X}_{::k}$.
Definition 3
(Identity Tensor). The identity tensor $\mathcal{I} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is defined by $\mathcal{I}(:,:,1) = \operatorname{eye}(n_1, n_2)$ and $\mathcal{I}(:,:,2:n_3) = 0$, where $\operatorname{eye}(n_1, n_2)$ is an $n_1 \times n_2$ identity matrix.
Definition 4
(Conjugate Transpose). The conjugate transpose of a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is denoted as $\mathcal{X}^*$, with
$$\widehat{\mathcal{X}^*}^{(i)} = \big(\hat{\mathcal{X}}^{(i)}\big)^{H}, \quad i = 1, 2, \ldots, n_3.$$
Definition 5
(Orthogonal Tensor). An orthogonal tensor $\mathcal{D}$ satisfies $\mathcal{D}^* * \mathcal{D} = \mathcal{D} * \mathcal{D}^* = \mathcal{I}$.
Definition 6
(t-SVD). The singular value decomposition of a tensor $\mathcal{X} \in \mathbb{R}^{w \times h \times d}$ can be written as the t-product of three third-order tensors:
$$\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^*,$$
where $\mathcal{U} \in \mathbb{R}^{w \times w \times d}$ and $\mathcal{V} \in \mathbb{R}^{h \times h \times d}$ are orthogonal tensors and $\mathcal{S} \in \mathbb{R}^{w \times h \times d}$ is an $f$-diagonal tensor. The procedure of the t-SVD is described in Algorithm 1.
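A minimal NumPy sketch of the t-SVD along these lines, computing a full SVD of every Fourier slice (without the conjugate-symmetry shortcut used in Algorithm 1):

```python
import numpy as np

def t_svd(X):
    """t-SVD (Definition 6): SVD of each frontal slice of fft(X, [], 3),
    then inverse FFT, so that X = U * S * V' under the t-product."""
    w, h, d = X.shape
    Xf = np.fft.fft(X, axis=2)
    Uf = np.zeros((w, w, d), dtype=complex)
    Sf = np.zeros((w, h, d), dtype=complex)
    Vf = np.zeros((h, h, d), dtype=complex)
    for k in range(d):
        U, s, Vh = np.linalg.svd(Xf[:, :, k])
        r = len(s)
        Uf[:, :, k] = U
        Sf[:r, :r, k][np.diag_indices(r)] = s   # f-diagonal slice
        Vf[:, :, k] = Vh.conj().T
    ifft = lambda T: np.real(np.fft.ifft(T, axis=2))
    return ifft(Uf), ifft(Sf), ifft(Vf)

# Verify X = U * S * V' by multiplying slice-wise in the Fourier domain.
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4, 5))
U, S, V = t_svd(X)
Uf, Sf, Vf = (np.fft.fft(T, axis=2) for T in (U, S, V))
US = np.einsum('ijk,jlk->ilk', Uf, Sf)
rec = np.einsum('ijk,jlk->ilk', US, Vf.conj().transpose(1, 0, 2))
assert np.allclose(np.real(np.fft.ifft(rec, axis=2)), X)
```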
Definition 7
(Tensor Tubal Rank). For a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ with t-SVD $\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^*$, its tubal rank is the number of non-zero tubes of $\mathcal{S}$:
$$\operatorname{rank}_t(\mathcal{X}) = \#\{k : \mathcal{S}(k, k, :) \neq 0\}.$$
Definition 8
(Tensor Nuclear Norm (TNN)). The TNN of a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is the sum of the singular values of all frontal slices of $\hat{\mathcal{X}}$, that is,
$$\|\mathcal{X}\|_* := \sum_{k=1}^{n_3} \big\|\hat{\mathcal{X}}^{(k)}\big\|_*.$$
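A direct NumPy sketch of this definition (note that some references scale the sum by $1/n_3$; the form below follows the definition as stated):

```python
import numpy as np

def tnn(X):
    """TNN per Definition 8: sum of the singular values of every frontal
    slice of the DFT of X along the third dimension."""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(X.shape[2]))

assert tnn(np.zeros((3, 3, 4))) == 0.0
# For a tensor supported on the first frontal slice only, every Fourier
# slice equals that slice, so the TNN is n3 times its nuclear norm.
X = np.zeros((3, 3, 4))
X[:, :, 0] = np.eye(3)
assert np.isclose(tnn(X), 4 * 3)
```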

3. Proposed Method

An illustration of the proposed model is shown in Figure 1. Figure 1a illustrates the low-rank property of the HSI along different dimensions in the frequency domain. To exploit the low-rankness along different dimensions, we combine these three different dimensional tensors to form the background tensor and apply tensor low-rank and sparse decomposition to extract the sparse anomaly object from the low-rank background. The final detection map $M$ can be obtained from the sparse tensor $\mathcal{S}$ by computing $M(i,j) = \sqrt{\sum_{k=1}^{d} |\mathcal{S}(i,j,k)|^2}$. We will introduce each part in detail in the following subsections.

3.1. Tensor Low-Rank Linear Representation

LRR uses a dictionary to explore low-rank linear representations of HSI, but the matrix-based approach breaks the tensor structure inherent in HSI. To overcome this limitation, tensor LRR is proposed, which incorporates the t-product to preserve the spatial structure of the tensor. Given a tensor $\mathcal{Y} \in \mathbb{R}^{w \times h \times d}$, it can be decomposed using the tensor LRR formulation as follows:
$$\mathcal{Y} = \mathcal{A} * \mathcal{X} + \mathcal{S},$$
where $\mathcal{X}$ is the low-rank background tensor, $\mathcal{S}$ is the sparse anomaly tensor, and $\mathcal{A}$ is the dictionary. The decomposition in Equation (8) aims to construct the low-rank and sparse components exactly and efficiently from the HSI data through the dictionary $\mathcal{A}$, by solving:
$$\min_{\mathcal{X}, \mathcal{S}} \ \operatorname{rank}_t(\mathcal{X}) + \lambda\, \operatorname{sparse}(\mathcal{S}) \quad \text{s.t.} \quad \mathcal{Y} = \mathcal{A} * \mathcal{X} + \mathcal{S},$$
where $\operatorname{rank}_t(\mathcal{X})$ denotes the tensor tubal rank function [45], $\lambda$ is a regularization parameter on $\mathcal{S}$, and $\operatorname{sparse}(\mathcal{S})$ is a sparsity-inducing norm.

3.2. Weighted Schatten p-Norm Minimization

The problem of minimizing $\operatorname{rank}_t(\mathcal{X})$ is known to be NP-hard. To approximate the rank of a matrix, a commonly used method is nuclear norm minimization (NNM), which penalizes the sum of the singular values of the matrix $X$. NNM is typically solved using a singular value thresholding algorithm. However, to obtain a more accurate low-rank approximation, other methods [46,47,48] have been developed. These methods treat different singular values individually rather than uniformly as in NNM, resulting in improved performance. In WSNM, each singular value is assigned a specific weight, and the optimization problem aims to minimize the weighted Schatten p-norm of the matrix $X \in \mathbb{R}^{h \times w}$:
$$\|X\|_{w, S_p} = \Big( \sum_{i=1}^{\min\{h, w\}} w_i\, \sigma_i^p \Big)^{\frac{1}{p}},$$
where $\sigma_i$ is the $i$-th singular value of $X$, $w_i$ is the weight of $\sigma_i$, and $w = [w_1, \ldots, w_{\min\{h, w\}}]$ is a non-negative vector that constrains the singular values of $X$. The weighted Schatten p-norm minimization problem can be effectively solved by generalized soft-thresholding (GST). Given $p$ and $w_i$, the specific threshold can be obtained by:
$$\tau^{GST}(w_i, p) = \big(2 w_i (1-p)\big)^{\frac{1}{2-p}} + w_i\, p\, \big(2 w_i (1-p)\big)^{\frac{p-1}{2-p}}.$$
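A minimal sketch of GST for a single singular value, assuming $0 < p < 1$ and a fixed number of fixed-point iterations (the iteration count `n_iter` is a hypothetical choice):

```python
import numpy as np

def gst_threshold(w, p):
    """Cutoff tau of Eq. (11): singular values with magnitude <= tau vanish."""
    t = (2.0 * w * (1.0 - p)) ** (1.0 / (2.0 - p))
    return t + w * p * t ** (p - 1.0)

def gst(sigma, w, p, n_iter=10):
    """Generalized soft-thresholding for one singular value: minimizes
    0.5 * (sigma - x)**2 + w * |x|**p via the fixed-point iteration
    x <- |sigma| - w * p * x**(p - 1)."""
    if abs(sigma) <= gst_threshold(w, p):
        return 0.0
    x = abs(sigma)
    for _ in range(n_iter):
        x = abs(sigma) - w * p * x ** (p - 1.0)
    return float(np.sign(sigma)) * x

assert gst(1.0, w=1.0, p=0.5) == 0.0   # below tau = 1.5, shrunk to zero
s = gst(3.0, w=1.0, p=0.5)
assert 2.6 < s < 2.8                    # shrunk but nonzero
assert gst(-3.0, w=1.0, p=0.5) == -s    # odd in sigma
```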
The main procedures of this approach are shown in Algorithm 1. In this work, the low-rank problem of the HSI is solved in tensor form, and the nuclear norm of the matrix is replaced by the tensor nuclear norm (TNN). The WSNM is applied to the frontal slices of $\mathcal{S}$.

3.3. Multi-Dimensional Tensor Low-Rank Norm

According to the tensor LRR, the tensor $\mathcal{X}$ can be expressed as a linear combination over the tensor dictionary $\mathcal{A}$. The choice of the dictionary plays a crucial role in the background tensor reconstruction. Conventional dictionary construction methods are often sensitive to noise and require separate construction for different datasets, making the anomaly detection process complicated. When the dictionary is an identity tensor, the tensor LRR is converted to tensor robust principal component analysis (TRPCA) [49]. By combining WSNM and TRPCA, we have the following:
$$\min_{\mathcal{X}, \mathcal{S}} \ \|\mathcal{X}\|_{w, S_p} + \lambda \|\mathcal{S}\|_{F,1} \quad \text{s.t.} \quad \mathcal{Y} = \mathcal{X} + \mathcal{S}.$$
In the field of HSI unmixing, the latent low-rank representation theory (LatLRR) has been proposed [50]. LatLRR uses the data itself as the dictionary and learns its rows and columns separately to obtain two different background representations while incorporating low-rank constraints. Motivated by this concept, we aim to explore the background tensor and reorganize it from different directions of slices. To achieve this, we introduce three background tensors $\mathcal{X}_w, \mathcal{X}_h, \mathcal{X}_d \in \mathbb{R}^{w \times h \times d}$ and run WSNM separately on these three tensors along their respective frontal slices. The proposed tensor-based norm, used in our multi-dimensional low-rank (MDLR) method, can be expressed as follows:
$$\|\mathcal{X}\|_{msp} = \mu_w \|\mathcal{X}_w\|_{w, S_p} + \mu_h \|\mathcal{X}_h\|_{w, S_p} + \mu_d \|\mathcal{X}_d\|_{w, S_p},$$
where $0 \le \mu_w \le 1$, $0 \le \mu_h \le 1$, and $\mu_d = 1 - \mu_w - \mu_h$ balance the contributions of $\mathcal{X}_w$, $\mathcal{X}_h$, and $\mathcal{X}_d$. We refer to $\mathcal{X}_w$, $\mathcal{X}_h$, and $\mathcal{X}_d$ as the reconstructions of the background from the $w$, $h$, and $d$ dimensions, respectively. Finally, our model formulation can be written as:
$$\min_{\mathcal{X}, \mathcal{S}} \ \|\mathcal{X}\|_{msp} + \lambda \|\mathcal{S}\|_{F,1} \quad \text{s.t.} \quad \mathcal{Y} = \mathcal{X} + \mathcal{S}.$$

3.4. Optimization Procedure

By introducing the auxiliary variables $\mathcal{X}_w, \mathcal{X}_h, \mathcal{X}_d$, Equation (12) can be written as the following equivalent problem:
$$\min_{\mathcal{X}_w, \mathcal{X}_h, \mathcal{X}_d, \mathcal{S}} \ \mu_w \|\mathcal{X}_w\|_{w, S_p} + \mu_h \|\mathcal{X}_h\|_{w, S_p} + \mu_d \|\mathcal{X}_d\|_{w, S_p} + \lambda \|\mathcal{S}\|_{F,1}$$
$$\text{s.t.} \quad \mathcal{X} = \mathcal{X}_w, \ \mathcal{X} = \mathcal{X}_h, \ \mathcal{X} = \mathcal{X}_d, \ \mathcal{Y} = \mathcal{X} + \mathcal{S}.$$
The Lagrange multipliers $\mathcal{E}$ and $\mathcal{Q}_{1,2,3}$ are introduced, and we use the ADMM to solve the augmented Lagrangian function. The optimization problem above is written as follows:
$$\min_{\mathcal{X}_w, \mathcal{X}_h, \mathcal{X}_d, \mathcal{S}, \mathcal{E}} \ \mu_w \|\mathcal{X}_w\|_{w, S_p} + \mu_h \|\mathcal{X}_h\|_{w, S_p} + \mu_d \|\mathcal{X}_d\|_{w, S_p} + \lambda \|\mathcal{S}\|_{F,1} + \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_w + \frac{\mathcal{Q}_1}{\alpha}\Big\|_F^2 + \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_h + \frac{\mathcal{Q}_2}{\alpha}\Big\|_F^2 + \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_d + \frac{\mathcal{Q}_3}{\alpha}\Big\|_F^2 + \frac{\alpha}{2} \Big\|\mathcal{Y} - \mathcal{X} - \mathcal{S} + \frac{\mathcal{E}}{\alpha}\Big\|_F^2.$$
(1) Update $\mathcal{X}$:
$$\mathcal{X} = \arg\min_{\mathcal{X}} \ \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_w + \frac{\mathcal{Q}_1}{\alpha}\Big\|_F^2 + \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_h + \frac{\mathcal{Q}_2}{\alpha}\Big\|_F^2 + \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_d + \frac{\mathcal{Q}_3}{\alpha}\Big\|_F^2 + \frac{\alpha}{2} \Big\|\mathcal{Y} - \mathcal{X} - \mathcal{S} + \frac{\mathcal{E}}{\alpha}\Big\|_F^2.$$
The closed-form solution of $\mathcal{X}$ can be obtained by taking the derivative of the above objective function and setting it to zero, as follows:
$$\mathcal{X} = \Big( \mathcal{Y} - \mathcal{S} + \frac{\mathcal{E}}{\alpha} + \mathcal{X}_w + \mathcal{X}_h + \mathcal{X}_d - \sum_{i=1}^{3} \frac{\mathcal{Q}_i}{\alpha} \Big) \Big/ 4.$$
(2) Update $\mathcal{X}_w$:
$$\mathcal{X}_w = \arg\min_{\mathcal{X}_w} \ \mu_w \|\mathcal{X}_w\|_{w, S_p} + \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_w + \frac{\mathcal{Q}_1}{\alpha}\Big\|_F^2.$$
(3) Update $\mathcal{X}_h$:
$$\mathcal{X}_h = \arg\min_{\mathcal{X}_h} \ \mu_h \|\mathcal{X}_h\|_{w, S_p} + \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_h + \frac{\mathcal{Q}_2}{\alpha}\Big\|_F^2.$$
(4) Update $\mathcal{X}_d$:
$$\mathcal{X}_d = \arg\min_{\mathcal{X}_d} \ \mu_d \|\mathcal{X}_d\|_{w, S_p} + \frac{\alpha}{2} \Big\|\mathcal{X} - \mathcal{X}_d + \frac{\mathcal{Q}_3}{\alpha}\Big\|_F^2.$$
The subproblems for $\mathcal{X}_w$, $\mathcal{X}_h$, and $\mathcal{X}_d$ can be solved using generalized soft-thresholding as shown in Algorithm 1. Before applying Algorithm 1, $\mathcal{X}_w$ should be permuted to $\mathcal{X}_w \in \mathbb{R}^{d \times h \times w}$, and then reshaped back to $\mathcal{X}_w \in \mathbb{R}^{w \times h \times d}$ after Algorithm 1. Similarly, $\mathcal{X}_h$ should be permuted to $\mathcal{X}_h \in \mathbb{R}^{w \times d \times h}$ before Algorithm 1 and back to $\mathcal{X}_h \in \mathbb{R}^{w \times h \times d}$ after Algorithm 1.
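These permutations can be sketched with NumPy's transpose (zero-based axes; the axis orders below are one consistent choice, purely illustrative):

```python
import numpy as np

# Illustrative permutations: process X_w as a d x h x w tensor and
# X_h as a w x d x h tensor, so that WSNM (via t-SVD along the third
# mode) acts on a different dimension of the HSI cube in each case.
w, h, d = 3, 4, 5
Xw = np.arange(w * h * d, dtype=float).reshape(w, h, d)

Xw_perm = np.transpose(Xw, (2, 1, 0))        # (w, h, d) -> (d, h, w)
assert Xw_perm.shape == (d, h, w)
assert np.array_equal(np.transpose(Xw_perm, (2, 1, 0)), Xw)  # round trip

Xh_perm = np.transpose(Xw, (0, 2, 1))        # (w, h, d) -> (w, d, h)
assert Xh_perm.shape == (w, d, h)
assert np.array_equal(np.transpose(Xh_perm, (0, 2, 1)), Xw)
```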
(5) Update $\mathcal{S}$:
$$\mathcal{S} = \arg\min_{\mathcal{S}} \ \lambda \|\mathcal{S}\|_{F,1} + \frac{\alpha}{2} \Big\|\mathcal{Y} - \mathcal{X} - \mathcal{S} + \frac{\mathcal{E}}{\alpha}\Big\|_F^2.$$
Then, we have the following closed-form solution:
$$\mathcal{S}(:,:,k) = \begin{cases} \dfrac{\|\mathcal{M}(:,:,k)\|_F - \lambda}{\|\mathcal{M}(:,:,k)\|_F}\, \mathcal{M}(:,:,k), & \lambda < \|\mathcal{M}(:,:,k)\|_F, \\ 0, & \text{otherwise}, \end{cases}$$
where $\mathcal{M} = \mathcal{Y} - \mathcal{X} + \frac{\mathcal{E}}{\alpha}$.
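The closed form amounts to a group shrinkage over frontal slices. A minimal sketch with a generic threshold `t` (the precise threshold in Equation (23) depends on the scaling of $\lambda$ and $\alpha$):

```python
import numpy as np

def shrink_F1(M, t):
    """Group shrinkage over frontal slices: each slice whose Frobenius
    norm exceeds t is scaled down by (norm - t) / norm, otherwise zeroed,
    mirroring the closed form of Eq. (24)."""
    S = np.zeros_like(M)
    for k in range(M.shape[2]):
        nrm = np.linalg.norm(M[:, :, k], 'fro')
        if nrm > t:
            S[:, :, k] = (nrm - t) / nrm * M[:, :, k]
    return S

M = np.zeros((2, 2, 2))
M[:, :, 0] = [[3.0, 0.0], [0.0, 4.0]]            # Frobenius norm 5
S = shrink_F1(M, 1.0)
assert np.allclose(S[:, :, 0], 0.8 * M[:, :, 0])  # shrunk by (5 - 1) / 5
assert np.all(S[:, :, 1] == 0)                    # below threshold -> zero
```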
(6) Update the Lagrange multipliers $\mathcal{E}$ and $\mathcal{Q}_{1,2,3}$:
$$\mathcal{E} = \mathcal{E} + \alpha(\mathcal{Y} - \mathcal{X} - \mathcal{S}),$$
$$\mathcal{Q}_1 = \mathcal{Q}_1 + \alpha(\mathcal{X} - \mathcal{X}_w),$$
$$\mathcal{Q}_2 = \mathcal{Q}_2 + \alpha(\mathcal{X} - \mathcal{X}_h),$$
$$\mathcal{Q}_3 = \mathcal{Q}_3 + \alpha(\mathcal{X} - \mathcal{X}_d).$$
The overall process of the proposed method is summarized in Algorithm 2. When the optimization process is complete, the anomaly detection map $M$ of the HSI data can be obtained from the sparse anomaly tensor $\mathcal{S}$ as follows:
$$M(i,j) = \sqrt{\sum_{k=1}^{d} |\mathcal{S}(i,j,k)|^2}.$$
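Equation (26) amounts to taking the $\ell_2$ norm of each spectral tube of $\mathcal{S}$, e.g.:

```python
import numpy as np

def detection_map(S):
    """Per-pixel anomaly score of Eq. (26): the l2 norm of each
    spectral tube S(i, j, :)."""
    return np.sqrt(np.sum(np.abs(S) ** 2, axis=2))

S = np.zeros((2, 2, 3))
S[0, 0, :] = [3.0, 0.0, 4.0]
M = detection_map(S)
assert M[0, 0] == 5.0   # sqrt(9 + 16)
assert M[1, 1] == 0.0   # background pixel scores zero
```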
Due to the WSNM regularization, the problem in Equations (19)–(21) is not a convex optimization problem. Nevertheless, Xie et al. [51] proved that, although WSNM is non-convex, if the weights satisfy $0 \le w_1 \le w_2 \le \cdots$, at least one accumulation point exists and the iterates satisfy:
$$\lim_{k \to \infty} \ \|\mathcal{X}^{k+1} - \mathcal{X}^{k}\|_F^2 + \|\mathcal{S}^{k+1} - \mathcal{S}^{k}\|_F^2 = 0.$$
A convergence analysis can be found in Theorem 3 of the WSNM paper.
Algorithm 1 WSNM based on t-SVD.
Input: $\mathcal{X}$, $\mathcal{Q}$, $p$, $\alpha$, $\tau$
1: $\mathcal{P} = \mathcal{X} + \frac{\mathcal{Q}}{\alpha}$
2: $\hat{\mathcal{P}} = \operatorname{fft}(\mathcal{P}, [\,], 3)$
3: for $i = 1, \ldots, \lceil \frac{d+1}{2} \rceil$ do
4:    $[\hat{\mathcal{U}}(:,:,i), \hat{\mathcal{S}}(:,:,i), \hat{\mathcal{V}}(:,:,i)] = \operatorname{SVD}(\hat{\mathcal{P}}(:,:,i))$;
5:    $\hat{S} = \hat{\mathcal{S}}(:,:,i)$;
6:    for $j = 1 : \operatorname{size}(\operatorname{diag}(\hat{S}))$ do
7:       $w_j = \tau\, 2\sqrt{2}\, \sqrt{(1/\alpha^2)\, w h} \,\big/\, \big(\operatorname{diag}(\hat{S}(j))^{1/p} + 10^{-6}\big)$;
         get $t$ by calculating Equation (11);
8:       if $|\operatorname{diag}(\hat{S}(j))| \le t$ then
9:          $\operatorname{diag}(\hat{S}(j)) = 0$;
10:      else
11:         $k = 0$, $\mu_k = |\operatorname{diag}(\hat{S}(j))|$
12:         for $k = 0, 1, \ldots, J$ do
13:            $\mu_{k+1} = |\operatorname{diag}(\hat{S}(j))| - w_j\, p\, (\mu_k)^{p-1}$;
14:            $k = k + 1$;
15:         end
16:         $\operatorname{diag}(\hat{S}(j)) = \operatorname{sgn}(\operatorname{diag}(\hat{S}(j)))\, \mu_k$;
17:         $\hat{\mathcal{S}}_{new}(:,:,i) = \operatorname{diag}(\hat{S}(j))$;
18:      end
19:   end
20: end
21: for $i = \lceil \frac{d+1}{2} \rceil + 1, \ldots, d$ do
22:    $\hat{\mathcal{U}}(:,:,i) = \operatorname{conj}(\hat{\mathcal{U}}(:,:,d-i+2))$
23:    $\hat{\mathcal{S}}_{new}(:,:,i) = \operatorname{conj}(\hat{\mathcal{S}}_{new}(:,:,d-i+2))$
24:    $\hat{\mathcal{V}}(:,:,i) = \operatorname{conj}(\hat{\mathcal{V}}(:,:,d-i+2))$
25: end
26: $\mathcal{U} = \operatorname{ifft}(\hat{\mathcal{U}}, [\,], 3)$, $\mathcal{S}_{new} = \operatorname{ifft}(\hat{\mathcal{S}}_{new}, [\,], 3)$, $\mathcal{V} = \operatorname{ifft}(\hat{\mathcal{V}}, [\,], 3)$
Output: $\mathcal{Z} = \mathcal{U} * \mathcal{S}_{new} * \mathcal{V}^*$;
Algorithm 2 MDLR for HSI anomaly detection.
  • Input: HSI tensor $\mathcal{Y}$, $\mu_{w,h,d}$, $\lambda$, $\alpha$
  • Initialization: $\mathcal{X}, \mathcal{S}, \mathcal{X}_{w,h,d} = 0$, $\mathcal{Q}_{1,2,3}, \mathcal{E} = 0$, $i = 0$.
  • While $i <$ maxiter and the convergence condition is not satisfied:
  •    Update $\mathcal{X}$ by Equation (18);
  •    Update $\mathcal{X}_{w,h,d}$ by Algorithm 1;
  •    Update $\mathcal{S}$ by Equation (23);
  •    Update $\mathcal{E}$ and $\mathcal{Q}_{1,2,3}$ by Equation (25);
  • End
  • Compute the anomaly detection map $M$ by Equation (26);
  • Output: anomaly detection map $M$.

3.5. Computational Complexity

Given a tensor $\mathcal{X} \in \mathbb{R}^{w \times h \times d}$, the computational complexity of our model mainly consists of the following two parts: (1) solving the subproblems in Equations (19)–(21) depends on the t-SVD, with a complexity of approximately $O(h^3 d + d^3 h + h^3 w)$; (2) Equation (22) requires $O(w h^2 d + h(w+h) d \log(d))$. Therefore, the total main cost of the proposed model is $O(h^3 d + d^3 h + h^3 w + w h^2 d + h(w+h) d \log(d))$. The computational cost is further examined by comparing the running times of all methods on the San Diego dataset, as shown in Table 1.

4. Experimental Results

In this section, we verify the effectiveness of our method on extensive datasets in comparison with SOTA methods. Standard metrics, such as the 2D receiver operating characteristic (ROC) curve [52] and the area under the curve (AUC) metric [53], are used to quantitatively evaluate the results. The ROC curve plots the probability of detection (PD) against the false alarm rate (FAR) for all possible thresholds. The AUC is calculated by integrating the area under the ROC curve. To evaluate the detection performance more thoroughly, 3D ROC curves [54] generated from the 2D ROC curves and separability maps are also used for quantitative comparison. All the experiments are performed in MATLAB 2020a on a computer with a Core i9-11900KF 3.50 GHz CPU and 32 GB of RAM running Windows 11.
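The 2D ROC/AUC computation can be sketched as follows (a minimal NumPy version sweeping all thresholds; real evaluations typically rely on established toolboxes):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC of the 2D ROC (PD vs. FAR) obtained by sweeping all thresholds.
    scores: per-pixel anomaly scores; labels: 1 = anomaly, 0 = background."""
    order = np.argsort(-scores, kind='stable')   # descending score
    lab = labels[order]
    tp = np.cumsum(lab)
    fp = np.cumsum(1 - lab)
    pd = np.concatenate(([0.0], tp / tp[-1]))    # probability of detection
    far = np.concatenate(([0.0], fp / fp[-1]))   # false alarm rate
    # integrate PD over FAR with the trapezoid rule
    return float(np.sum(np.diff(far) * (pd[1:] + pd[:-1]) / 2.0))

scores = np.array([0.9, 0.8, 0.3, 0.1])
labels = np.array([1, 1, 0, 0])
assert abs(roc_auc(scores, labels) - 1.0) < 1e-12  # perfect separation
```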

4.1. HSI Datasets

San Diego: The dataset is part of a collection captured by the AVIRIS sensor [55]. It measures 100 × 100 × 189 and consists mainly of roofs, shadows, and grass, in which aircraft are considered the anomalies to be detected.
HYDICE-Urban: The dataset is collected by the hyperspectral digital imagery collection experiment (HYDICE) sensor [56] over an urban area, including a vegetation area, a built-up area, and several roads with some vehicles. Its spatial resolution is 1 m. After cropping and water-vapor removal, its size is 80 × 100 × 175. The 21 pixels occupied by vehicles and roofs of different sizes are used as anomalies.
Airport 1–4: The dataset consists of four images of 100 × 100 pixels in 205 bands taken by the airborne visible/infrared imaging spectrometer (AVIRIS) sensor [57]. As above, they include surface vegetation, roads, and buildings as background. Aircraft flying at different altitudes are treated as anomalies.
Urban 1–4: This dataset of four urban scenes is obtained from the same class of sensors as the Airport dataset, with 100 × 100 pixels and a band number between 190 and 210.

4.2. Compared Methods and Parameter Setting

In this section, we briefly introduce the compared anomaly detection methods and their parameter settings. The parameter values of the compared methods in our experiments are tuned according to the corresponding references.
  • RX [19]: The classical anomaly detection algorithm calculates the Mahalanobis distance between the pixel under test and the background pixels. The parameter λ of RX is set to 1/min(w,h).
  • LSMAD [29]: A method based on low-rank and sparse matrix decomposition (LRaSMD) with the Mahalanobis distance. We set r = 3, k = 0.8.
  • LRASR [34]: Learns a low-rank linear representation (LRR) of the background by constructing dictionaries. The parameters λ and β of LRASR are both set to 0.1.
  • GTVLRR [35]: Adding total variation (TV) and graph regularization to the restructuring of the background in the LRR-based method, we set λ = 0.5, β = 0.2, and ω = 0.05 according to the GTVLRR.
  • PTA [40]: According to the properties of the spatial and spectral dimensions of the HSI, PTA adds TV into spatial dimensions and low-rank into spectral dimensions. The parameters α , τ , β of PTA are set to 1, 1, and 0.01 separately.
  • DeCNN-AD [36]: Using convolutional neural network (CNN)-based denoisers as the prior for the dictionary representation coefficients, the cluster number of DeCNN-AD is set to 8 and λ , β are set to 0.01.
  • PCA-TLRSR [43]: The first method to extend LRR to tensor LRR for HSI anomaly detection. The reduced dimensions of PCA are tuned according to PCA-TLRSR, and the parameter λ is set to 0.4.

4.3. Detection Performance

4.3.1. San Diego

The false-color image, ground-truth map, and detection maps of all compared methods are shown in Figure 2. In the San Diego detection maps, methods such as RX, LSMAD, and LRASR fail to accurately detect the three aircraft in the upper right corner of the HSI data. DeCNN-AD and GTVLRR have difficulty clearly identifying the outline of the aircraft. PTA can observe the aircraft, but its map contains some background information, such as roads and buildings. PCA-TLRSR obtains a relatively good performance by recovering the outline of the aircraft with less background information. In addition, our method can also capture more detailed features of the aircraft. Figure 3 shows the anomaly detection evaluation metrics of different anomaly detectors for the San Diego dataset, including 3D ROC curves, 2D ROC curves, and separability maps. The proposed method has a slightly higher detection probability than PCA-TLRSR in the 3D and 2D ROC curves. The gap between the background box and the anomaly box shows the degree of separation between background and anomaly on the separability maps. The separability maps on the San Diego dataset are shown in Figure 3. The proposed MDLR obtains a larger gap between the background box and the anomaly box than all the other compared methods, which indicates that it has a better ability to separate the background and anomaly. The AUC values in the second row of Table 2 provide further evidence that our method achieves the highest performance on the San Diego dataset.

4.3.2. HYDICE-Urban

The false-color image, ground-truth map, and detection maps of the competitive methods are visually shown in Figure 4. In the detection maps of all compared methods on the HYDICE-Urban dataset, RX and LSMAD have difficulty in clearly recovering the anomaly information. PTA and PCA-TLRSR can observe the anomaly information, but they also contain a significant amount of background information, such as roads. DeCNN-AD, LRASR, GTVLRR, and our proposed method can clearly identify the anomaly information. However, both LRASR and GTVLRR struggle to detect the anomaly in the lower left corner of the image. Our proposed method shows an improvement in terms of visual quality and achieves a higher AUC value compared to other methods, as shown in Table 2.

4.3.3. Airport 1–4

The AUC values of the four Airport datasets are provided in Table 2. Our method has achieved the highest values. The false-color images, ground-truth maps, and detection maps of the four Airport datasets are demonstrated in Figure 5. In the Airport-1 detection maps of RX, LSMAD, LRASR, and DeCNN-AD, it is difficult to distinguish the aircraft. GTVLRR, PTA, PCA-TLRSR, and our method can distinguish the aircraft in the middle, but they contain a lot of roof information, and the aircraft in the upper left corner is not visible. In the detection maps of Airport-2, our method can clearly observe the aircraft in the middle of the figure with less background information compared to other methods, but it does not fully preserve the edge information due to the effect of some mixed pixels. Compared to the ground truth of Airport-3, the detection maps of all comparison methods can barely detect the outline of an aircraft. This indicates that the existing methods are not sensitive to dense small targets and are easily contaminated by background information. In the detection maps of the Airport-4 dataset, our detection result shows a clear outline of the aircraft compared to other methods. There is no interference from road information, in contrast to LSMAD, LRASR, DeCNN-AD, GTVLRR, PTA, and PCA-TLRSR. The first rows of Figure 6 and Figure 7 show the 2D and 3D ROC curves of different anomaly detectors for Airport 1–4. They demonstrate that our method produces detection maps with relatively little interference from background information. The separability maps on the Airport dataset are shown in the first row of Figure 8. The compared methods fail to effectively separate the background boxes and anomaly boxes, while the proposed MDLR achieves a larger gap.

4.3.4. Urban 1–4

The AUC values in Table 2 show that our method is optimal on all Urban datasets except Urban-1 and Urban-3. For the Urban-1 dataset, from the detection maps shown in Figure 9, we can observe clear lines running through the maps of LSMAD, LRASR, GTVLRR, DeCNN-AD, PTA, and our method. However, the detection map of PCA-TLRSR is difficult to interpret. PCA-TLRSR uses PCA for dimensionality reduction, which aims to reduce noise in the image; the presence of noise in this particular image may have hindered the achievement of optimal results. In the false-color image of Urban-3, there are many large and obvious targets in the background. In the third row of Figure 9, the detection maps of RX and LSMAD barely show the anomaly targets. The other detection algorithms can detect the anomaly targets but retain most of the background contour information. In the detection maps of Urban-2 and Urban-4, compared to the detection maps and the ROC curves in the second rows of Figure 6 and Figure 7 obtained by other methods, our method obtains a clearer and more accurate observation of the anomalies. In the separability maps of the Urban dataset in Figure 8, it can be seen that the background boxes and the anomaly boxes of the proposed MDLR are obviously separated, which also proves that our method can achieve an effective separation of background and anomaly.

4.4. Discussion of Multi-Dimensional Low-Rank

In this section, we analyze the necessity of reconstructing the background with multi-dimensional low-rank.
The discussion on single-dimensional and three-dimensional low-rank: The results in Table 3 show that reconstructing the background along multiple dimensions (the w, h, and d dimensions) gives significantly higher AUC values than reconstructing along a single dimension (the d dimension). This improvement is evident across all datasets, with notable increases for the HYDICE-Urban and Airport-1 HSI datasets: the AUC values increase by about 4 and 6 percentage points, respectively, compared with the one-dimensional reconstruction. These improvements demonstrate the benefit of using multi-dimensional information to separate the background from anomalies in HSI data. By considering the data from multiple dimensions simultaneously, the proposed method captures more comprehensive and discriminative information about the background, leading to improved detection performance. This highlights the advantage of multi-dimensional over single-dimensional reconstruction in separating background and anomaly.
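The intuition behind multi-dimensional low-rankness can be checked numerically: unfolding an HSI cube along each of the h, w, and d modes yields a matrix whose singular values decay quickly when the background is low-rank in that mode. The following sketch uses a synthetic cube and our own illustrative names (`mode_unfold` is not from the paper); it shows that a low-rank background plus a sparse anomaly remains near-low-rank in all three unfoldings, not only the spectral one.

```python
import numpy as np

def mode_unfold(tensor, mode):
    """Unfold a 3-way tensor along `mode` into an (n_mode x rest) matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Synthetic HSI-like cube: rank-3 background plus one sparse anomalous pixel.
rng = np.random.default_rng(1)
h, w, d, r = 30, 30, 50, 3
background = np.einsum('ir,jr,kr->ijk',
                       rng.normal(size=(h, r)),
                       rng.normal(size=(w, r)),
                       rng.normal(size=(d, r)))
anomaly = np.zeros((h, w, d))
anomaly[10, 10, :] = 5.0          # one anomalous spectrum
X = background + anomaly

for mode, name in enumerate('hwd'):
    s = np.linalg.svd(mode_unfold(X, mode), compute_uv=False)
    energy = s[:r + 1].sum() / s.sum()   # fraction of energy in leading values
    print(f'{name}-unfolding: top-{r + 1} singular values hold {energy:.2%}')
```

In each unfolding, nearly all of the energy sits in the leading singular values, which is the property the multi-dimensional low-rank constraint exploits.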
The discussion on two-dimensional and three-dimensional low-rank: The results in Figure 10 show that the AUC values obtained by reconstructing the background vary with the combination of two dimensions used. For Airport-1 in Figure 10a,b, reconstructing the background with X_d and X_w achieves higher AUC values than reconstructing with X_d and X_h. On the other hand, for Airport-3 in Figure 10c,d, reconstructing the background with X_h and X_w yields higher AUC values than reconstructing with X_d and X_h. These results indicate that no fixed combination of two dimensions consistently gives the best background tensor reconstruction; the optimal combination may vary with the specific dataset and the characteristics of the HSI data. Therefore, it is important to explore and analyze the relationship between the different dimensional background tensors to achieve the best reconstruction results.
In analyzing the reconstruction of the background along two different dimensions of the HSI, the focus is on the relationship between these dimensions. For this purpose, the Airport-1 and Airport-3 datasets are selected, and two dimensions of the data are chosen to reconstruct the background, resulting in four comparison experiments: (a) reconstruction using X_d and X_h from Airport-1; (b) reconstruction using X_d and X_w from Airport-1; (c) reconstruction using X_d and X_h from Airport-3; (d) reconstruction using X_h and X_w from Airport-3. These experiments investigate the performance and effectiveness of background reconstruction with different combinations of two dimensions.
Effects of coefficient μ in two reconstructed background tensors: This experiment investigates the relationship between the background tensors reconstructed from different dimensions, in order to better understand their impact on anomaly detection performance. The coefficients between the two dimensions are varied over the range [0:0.1:1], so that the results can be observed both when each dimension acts alone and when the two dimensions work together. The AUC values of the comparison experiments are visualized in Figure 10. In Figure 10a, the individual reconstruction of X_d on Airport-1 achieves an AUC of 0.8922, while the individual reconstruction of X_h achieves only 0.8602; when X_d and X_h are combined, the reconstruction reaches a maximum AUC of 0.8947. Similarly, in Figure 10b, the individual reconstruction of X_d on Airport-1 gives an AUC of 0.8922 and the individual reconstruction of X_w gives 0.9487, while their combination reaches a maximum AUC of 0.9518. The same trend can be seen in Figure 10c,d for Airport-3. Figure 10a,c show that both Airport-1 and Airport-3 benefit from background reconstruction using X_d and X_h. Interestingly, the best coefficients for the same pair of dimensions differ between the two datasets, indicating that the relationship between the reconstructed background tensors can vary across datasets. The final AUC values in Table 2 also support the effectiveness of reconstructing the background along multiple dimensions: although the AUC values of Airport-1 and Airport-3 in Figure 10 are slightly lower, they still demonstrate the validity of the multi-dimensional reconstruction approach in improving anomaly detection performance.
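The coefficient scan described above amounts to a grid search over μ in [0:0.1:1], combining two dimension-wise reconstructions as μ·X_a + (1−μ)·X_b and scoring each combination. The sketch below is a hypothetical illustration: the toy score function stands in for the AUC evaluation, and all names are our own rather than the paper's.

```python
import numpy as np

def combined_background(bg_a, bg_b, mu):
    """Weighted combination of two dimension-wise background reconstructions."""
    return mu * bg_a + (1.0 - mu) * bg_b

def scan_mu(bg_a, bg_b, score_fn):
    """Evaluate mu over [0, 1] in steps of 0.1 and return the best weight."""
    results = {mu: score_fn(combined_background(bg_a, bg_b, mu))
               for mu in np.round(np.arange(0.0, 1.01, 0.1), 1)}
    best_mu = max(results, key=results.get)
    return best_mu, results[best_mu]

# Toy stand-in for the AUC: closeness of the combination to a "true" background.
rng = np.random.default_rng(2)
bg_true = rng.normal(size=(10, 10, 5))
bg_a = bg_true + rng.normal(scale=0.3, size=bg_true.shape)  # noisy reconstruction 1
bg_b = bg_true + rng.normal(scale=0.3, size=bg_true.shape)  # noisy reconstruction 2
score = lambda bg: -np.linalg.norm(bg - bg_true)            # higher is better
best_mu, best_score = scan_mu(bg_a, bg_b, score)
print(best_mu)  # with independent noise, a mixed weight tends to win
```

As in Figure 10, the combined reconstruction scores at least as well as either dimension alone, and the winning weight depends on the data.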

4.5. Parameter Tuning

In this section, we focus on analyzing the effect of the values of λ and p on the AUC results.
(1) Effects of parameter λ: The influence of parameter λ on model performance is analyzed on four HSI datasets. The parameter λ is selected from the set [0.001, 0.005, 0.01, 0.05, 0.1, 1, 2] while keeping the other parameters fixed. The AUC value curves with respect to λ on the four datasets are shown in Figure 11a. The AUC values of the San Diego, HYDICE-Urban, and Airport-4 datasets reach their maximum when λ equals 1. The Airport-1 and Airport-2 datasets both reach their maximum when λ is 2, and Airport-3 shows a downward trend when λ is 1. For the experiments as a whole, the AUC values are relatively stable when λ is 0.1 or 1. Therefore, λ is set to 1 for the San Diego, HYDICE-Urban, and Airport-4 datasets, and to 0.1 for Airport 1–3.
(2) Effects of parameter p: The AUC value curves for the parameter p are shown in Figure 11b. The parameter p is chosen in the range [0.1, 1]. As the value of p increases, the AUC values on the different HSI datasets improve. The growth of the AUC values for the San Diego, HYDICE-Urban, Airport-2, Airport-3, and Airport-4 datasets levels off once p reaches 0.6, whereas the AUC value of Airport-1 continues to increase with p. Based on these curves, p is set to 1 for the San Diego, HYDICE-Urban, Airport-1, and Airport-4 datasets, 0.9 for the Airport-2 dataset, and 0.6 for the Airport-3 dataset.
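The tuning procedure for λ and p amounts to a grid search with all other parameters fixed. A minimal sketch of that loop follows; the detector and evaluation function here are toy stand-ins with our own names, not the MDLR pipeline.

```python
import itertools

def grid_search(build_detector, lambdas, ps, evaluate):
    """Evaluate a detector over the (lambda, p) grid; return the best setting."""
    best_params, best_auc = None, -1.0
    for lam, p in itertools.product(lambdas, ps):
        auc = evaluate(build_detector(lam, p))
        if auc > best_auc:
            best_params, best_auc = (lam, p), auc
    return best_params, best_auc

# Toy objective peaking at lambda = 1, p = 0.6, mimicking the shape of Figure 11.
toy_detector = lambda lam, p: (lam, p)
toy_eval = lambda cfg: 1.0 - 0.1 * abs(cfg[0] - 1.0) - 0.2 * abs(cfg[1] - 0.6)
best_params, best_auc = grid_search(toy_detector,
                                    [0.001, 0.005, 0.01, 0.05, 0.1, 1, 2],
                                    [0.1 * i for i in range(1, 11)],
                                    toy_eval)
print(best_params)
```

The candidate sets above mirror the values swept in the paper; in practice each grid point requires one full detection run per dataset.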

5. Conclusions

In this paper, a novel multi-dimensional low-rank (MDLR) method is proposed for HSI anomaly detection. MDLR exploits the low-rank properties of the HSI along three dimensions, namely the two spatial dimensions and the spectral dimension, and reconstructs multi-dimensional background tensors accordingly. Weighted Schatten p-norm minimization is used to enforce the low-rank constraints, and the L_{F,1} norm is used to penalize the anomaly tensor to promote joint spectral-spatial sparsity. The resulting optimization problem is solved with ADMM. Experimental results on real HSI datasets demonstrate its effectiveness in anomaly detection compared with state-of-the-art methods. However, a major limitation of MDLR is the computational complexity introduced by the t-SVD operation, especially when dealing with a large number of spectral bands. In future work, we plan to incorporate dimensionality reduction as a preprocessing step, which is a promising direction for addressing this computational challenge.
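For reference, the weighted Schatten p-norm that underlies the low-rank constraint is (Σ_i w_i σ_i^p)^(1/p) over the singular values σ_i, which interpolates between the nuclear norm (p = 1) and harder rank surrogates as p decreases. A minimal numerical sketch follows; the function name, default p, and weights are illustrative, not the paper's implementation.

```python
import numpy as np

def weighted_schatten_p_norm(mat, weights=None, p=0.66):
    """Weighted Schatten p-norm: (sum_i w_i * sigma_i**p) ** (1/p)."""
    sigma = np.linalg.svd(mat, compute_uv=False)
    w = np.ones_like(sigma) if weights is None else np.asarray(weights, float)
    return float((w * sigma ** p).sum() ** (1.0 / p))

# With p = 1 and unit weights this reduces to the nuclear norm.
M = np.diag([3.0, 2.0, 1.0])
print(weighted_schatten_p_norm(M, p=1.0))  # nuclear norm: 3 + 2 + 1 = 6.0
```

Smaller p values penalize large singular values less severely, which is why p is tuned per dataset in Section 4.5.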

Author Contributions

All authors made significant contributions to this work. Conceptualization, X.C.; methodology, Z.W., X.C. and K.W.; investigation, Z.W.; software, Z.W. and X.C.; data curation, X.C. and H.J.; validation, K.W.; writing—original draft preparation, Z.W. and X.C.; writing—review and editing, K.W., Z.H. and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Youth Innovation Promotion Association of the Chinese Academy of Sciences under Grants 2022196 and Y202051; the National Natural Science Foundation of China under Grants 61873259 and 61821005; the CAS Project for Young Scientists in Basic Research under Grant YSBR-041; and the State Key Laboratory of Robotics under Grant 2023-O28.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Figure 1. Illustration of the proposed model for HSI anomaly detection. (a) Multi-dimensional low-rank in frequency domain. (b) Tensor low-rank and sparse decomposition. (c) Detection map.
Figure 2. Detection maps obtained by all compared methods on San Diego dataset. (a) HSI. (b) RX. (c) LSMAD. (d) LRASR. (e) GTVLRR. (f) Ground-truth. (g) DeCNN-AD. (h) PTA. (i) PCA-TLRSR. (j) Ours.
Figure 3. Anomaly detection evaluation metrics obtained by different methods on the San Diego dataset. (a) Three-dimensional (3D) ROC curves, (b) 2D ROC curves, (c) separability map.
Figure 4. Detection maps on HYDICE-Urban dataset obtained by all compared methods. (a) HSI. (b) RX. (c) LSMAD. (d) LRASR. (e) GTVLRR. (f) Ground-truth. (g) DeCNN-AD. (h) PTA. (i) PCA-TLRSR. (j) Ours.
Figure 5. Detection maps obtained by all compared methods on Airport-1 (first line), Airport-2 (second line), Airport-3 (third line), and Airport-4 (fourth line) datasets.
Figure 6. Two-dimensional (2D) ROC curves obtained by all compared methods. (a) Airport-1, (b) Airport-2, (c) Airport-3, (d) Airport-4, (e) Urban-1, (f) Urban-2, (g) Urban-3, (h) Urban-4.
Figure 7. Three-dimensional (3D) ROC curves obtained by all compared methods. (a) Airport-1, (b) Airport-2, (c) Airport-3, (d) Airport-4, (e) Urban-1, (f) Urban-2, (g) Urban-3, (h) Urban-4.
Figure 8. Separability maps obtained by all compared methods. (a) Airport-1, (b) Airport-2, (c) Airport-3, (d) Airport-4, (e) Urban-1, (f) Urban-2, (g) Urban-3, (h) Urban-4.
Figure 9. Detection maps obtained by all compared methods on Urban-1 (first line), Urban-2 (second line), Urban-3 (third line), and Urban-4 (fourth line) datasets.
Figure 10. AUC value bars obtained from coefficients μ via the reconstruction of the background with two different dimensions. (a) Airport-1: μ_w and μ_h, (b) Airport-1: μ_w and μ_d, (c) Airport-3: μ_w and μ_h, (d) Airport-3: μ_h and μ_d.
Figure 11. The effect of parameter tuning on AUC values. (a) AUC value curves with respect to λ on four datasets; (b) AUC value curves with respect to p on three datasets.
Table 1. Running time of all compared algorithms on San Diego.
HSI Data | RX | LSMAD | LRASR | GTVLRR | DeCNN-AD | PTA | PCA-TLRSR | MDLR
San Diego | 2.054 | 38.465 | 6.394 | 214.343 | 256.589 | 34.34 | 48.312 | 132.46
Table 2. AUC values of all compared algorithms on different datasets.
HSI Datasets | RX | LSMAD | LRASR | GTVLRR | DeCNN-AD | PTA | PCA-TLRSR | MDLR
San Diego | 0.8885 | 0.9773 | 0.9853 | 0.9795 | 0.9901 | 0.9946 | 0.9957 | 0.9976
HYDICE-Urban | 0.9856 | 0.9901 | 0.9918 | 0.9856 | 0.9935 | 0.9953 | 0.9941 | 0.9975
Airport-1 | 0.8220 | 0.8334 | 0.7854 | 0.9013 | 0.8503 | 0.9207 | 0.9478 | 0.9538
Airport-2 | 0.8403 | 0.9189 | 0.8657 | 0.8695 | 0.9204 | 0.9428 | 0.9697 | 0.9738
Airport-3 | 0.9228 | 0.9401 | 0.9408 | 0.9295 | 0.9434 | 0.9355 | 0.9574 | 0.9590
Airport-4 | 0.9526 | 0.9862 | 0.9723 | 0.9875 | 0.9897 | 0.9875 | 0.9943 | 0.9953
Urban-1 | 0.9907 | 0.9829 | 0.9797 | 0.9605 | 0.9820 | 0.9826 | 0.9902 | 0.9835
Urban-2 | 0.9946 | 0.9836 | 0.9628 | 0.8539 | 0.9973 | 0.9970 | 0.9941 | 0.9980
Urban-3 | 0.9513 | 0.9636 | 0.9415 | 0.9385 | 0.9394 | 0.9578 | 0.9833 | 0.9812
Urban-4 | 0.9887 | 0.9809 | 0.9575 | 0.9205 | 0.9868 | 0.9907 | 0.9869 | 0.9966
Table 3. AUC values of single-dimensional and multi-dimensional low-rank.
HSI Dataset | San Diego | Airport-1 | Airport-2 | Airport-3 | Airport-4
S-dimensional | 0.9966 | 0.8957 | 0.9655 | 0.9345 | 0.9921
M-dimensional | 0.9976 | 0.9538 | 0.9738 | 0.9590 | 0.9953
HSI Dataset | HYDICE-Urban | Urban-1 | Urban-2 | Urban-3 | Urban-4
S-dimensional | 0.9546 | 0.9619 | 0.9928 | 0.9527 | 0.9966
M-dimensional | 0.9975 | 0.9835 | 0.9980 | 0.9812 | 0.9966

Share and Cite

MDPI and ACS Style

Chen, X.; Wang, Z.; Wang, K.; Jia, H.; Han, Z.; Tang, Y. Multi-Dimensional Low-Rank with Weighted Schatten p-Norm Minimization for Hyperspectral Anomaly Detection. Remote Sens. 2024, 16, 74. https://doi.org/10.3390/rs16010074

