Spectral-Smoothness and Non-Local Self-Similarity Regularized Subspace Low-Rank Learning Method for Hyperspectral Mixed Denoising
Abstract
1. Introduction
- The sampled HSI is first projected into a low-dimensional subspace, which greatly reduces the complexity of the denoising algorithm. Then, a spectral-smoothness regularization is imposed on the basis matrix of the subspace, which constrains the reconstructed HSI to maintain spectral smoothness and continuity.
- The plug-and-play non-local BM4D denoiser is applied to the coefficient matrix of the subspace to fully exploit the self-similarity of the spatial dimension of the HSI. Furthermore, the ${L}_{1}$ norm is used to separate the sparse noise. Through alternating optimization, the latent clean HSI is gradually learned from the degraded HSI.
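The subspace projection in the first bullet can be sketched as follows. This is a minimal illustration, assuming the HSI has been unfolded into a $B \times N$ band-by-pixel matrix; the shapes and random data are synthetic placeholders, not the paper's datasets.

```python
import numpy as np

# Synthetic stand-in for an unfolded HSI: B bands x N pixels.
B, N, k = 50, 4096, 4
rng = np.random.default_rng(0)
Y = rng.standard_normal((B, N))

# Basis E: the k leading left singular vectors of Y (the SVD
# initialization used in Algorithm 1); coefficients Z = E^T Y.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
E = U[:, :k]          # B x k orthonormal spectral basis
Z = E.T @ Y           # k x N coefficient matrix ("eigen-images")

# Reconstruction from the subspace: X = E Z is the best rank-k
# approximation of Y in the Frobenius norm.
X = E @ Z
```

Denoising then operates on the small $k \times N$ coefficient matrix $\mathbf{Z}$ rather than the full $B \times N$ data, which is where the complexity reduction comes from.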
2. Materials and Methods
2.1. Degradation Model
2.2. Subspace Low-Rank Regularization
- If heavy Gaussian noise exists in the sampled image, it is difficult for low-rank regularization alone to obtain a fine denoising result. Moreover, since some sparse noise also exhibits a certain degree of low-rankness, low-rank regularization is powerless in such cases.
- For non-local regularization, the denoising result depends heavily on the choice of the search window and neighborhood window, and fine denoising results often come at the cost of much longer execution times. In real scenarios, excessive computation time is often undesirable.
2.3. Proposed Model
2.4. Optimization
Algorithm 1 Optimization procedure for solving Model 4.
Require: The degraded HSI $\mathbf{Y}$, stop criterion $\epsilon$, regularization parameters $\gamma >0$ and $\lambda >0$, maximum iteration ${t}_{max}$, dimension of subspace $k$.
Ensure: The latent noise-free data $\mathbf{X}$.
1: Initialization: Estimate ${\mathbf{E}}^{\left(0\right)}$ via SVD, set ${\mathbf{Z}}^{\left(0\right)}={\mathbf{E}}^{\left(0\right)T}\mathbf{Y}$, set $\epsilon ={10}^{-4}$.
2: while not converged do
3:  Update ${\mathbf{Z}}^{(t+1)}$ by Equation (7).
4:  Update ${\mathbf{E}}^{(t+1)}$ by Equation (10).
5:  Update ${\mathbf{S}}^{(t+1)}$ by Equation (12).
6:  Update ${\Lambda}^{(t+1)}$ by Equation (13).
7:  Update iteration: $t=t+1$.
8: end while
9: return $\mathbf{X}={\mathbf{E}}^{\left(t\right)}{\mathbf{Z}}^{\left(t\right)}$.
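The alternating structure of Algorithm 1 can be sketched in Python. The function name and signature below are illustrative, not from the paper; the Z-step (Equation (7)) and E-step (Equation (10)) are passed in as callables since their closed forms depend on the chosen denoiser and difference operator, while the S- and multiplier updates are written out.

```python
import numpy as np

def subspace_admm(Y, k, lam, gamma, mu=1.0, eps=1e-4, t_max=50,
                  update_Z=None, update_E=None):
    """Skeleton of Algorithm 1 (hypothetical name and signature).

    Alternately updates Z, E, S and the multiplier Lam until the
    relative residual ||Y - EZ - S||_F / ||Y||_F drops below eps.
    If update_Z / update_E are omitted, E and Z simply stay at
    their SVD initialization.
    """
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    E = U[:, :k]                      # line 1: SVD initialization
    Z = E.T @ Y
    S = np.zeros_like(Y)
    Lam = np.zeros_like(Y)
    for t in range(t_max):
        R = Y - S + Lam / mu
        if update_Z is not None:      # line 3: plug-and-play denoising step
            Z = update_Z(E, R)
        if update_E is not None:      # line 4: spectral-smoothness step
            E = update_E(Z, R, gamma, mu)
        # line 5: soft-thresholding of the sparse component
        T = Y - E @ Z + Lam / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # line 6: dual ascent on the Lagrangian multiplier
        Lam = Lam + mu * (Y - E @ Z - S)
        if np.linalg.norm(Y - E @ Z - S) / np.linalg.norm(Y) < eps:
            break
    return E @ Z                      # line 9: latent clean estimate
```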
- Update $\mathbf{Z}$ (Line 3): The subproblem for updating $\mathbf{Z}$ is given by:$${\mathbf{Z}}^{(t+1)}=\underset{\mathbf{Z}}{\mathrm{argmin}}\;{\sigma}_{NL}\left(\mathbf{Z}\right)+\frac{\mu}{2}{\left\Vert \mathbf{Y}-{\mathbf{E}}^{\left(t\right)}\mathbf{Z}-{\mathbf{S}}^{\left(t\right)}+\frac{{\Lambda}^{\left(t\right)}}{\mu}\right\Vert}_{F}^{2}$$which is solved by the plug-and-play denoiser:$${\mathbf{Z}}^{(t+1)}=\mathrm{BM4D}\left({\mathbf{E}}^{\left(t\right)T}\left(\mathbf{Y}-{\mathbf{S}}^{\left(t\right)}+\frac{{\Lambda}^{\left(t\right)}}{\mu}\right)\right)$$
- Update $\mathbf{E}$ (Line 4): The subproblem for updating $\mathbf{E}$ is given by:$${\mathbf{E}}^{(t+1)}=\underset{\mathbf{E}}{\mathrm{argmin}}\;\gamma {\left\Vert \mathbf{DE}\right\Vert}_{F}^{2}+\frac{\mu}{2}{\left\Vert \mathbf{Y}-\mathbf{E}{\mathbf{Z}}^{(t+1)}-{\mathbf{S}}^{\left(t\right)}+\frac{{\Lambda}^{\left(t\right)}}{\mu}\right\Vert}_{F}^{2}$$Setting the gradient to zero yields the Sylvester equation$$\frac{2\gamma}{\mu}{\mathbf{D}}^{T}\mathbf{DE}+\mathbf{E}{\mathbf{Z}}^{(t+1)}{\mathbf{Z}}^{(t+1)T}=\left(\mathbf{Y}-{\mathbf{S}}^{\left(t\right)}+\frac{{\Lambda}^{\left(t\right)}}{\mu}\right){\mathbf{Z}}^{(t+1)T}$$whose closed-form solution, with ${\mathbf{D}}^{T}\mathbf{D}={\mathbf{F}}^{T}\mathbf{T}\mathbf{F}$ and ${\mathbf{Z}}^{(t+1)}{\mathbf{Z}}^{(t+1)T}=\mathbf{U}\Sigma {\mathbf{U}}^{T}$, is$$\mathbf{E}={\mathbf{F}}^{T}\left(\left(\mathbf{1}\oslash \left(\frac{2\gamma}{\mu}\mathbf{T}+\Sigma \right)\right)\odot \left(\mathbf{F}\left(\mathbf{Y}-{\mathbf{S}}^{(t)}+\frac{{\Lambda}^{(t)}}{\mu}\right){\mathbf{Z}}^{(t+1)T}\mathbf{U}\right)\right){\mathbf{U}}^{T}$$
- Update $\mathbf{S}$ (Line 5): The subproblem for updating $\mathbf{S}$ is given by:$${\mathbf{S}}^{(t+1)}=\underset{\mathbf{S}}{\mathrm{argmin}}\;\lambda {\left\Vert \mathbf{S}\right\Vert}_{1}+\frac{\mu}{2}{\left\Vert \mathbf{Y}-{\mathbf{E}}^{(t+1)}{\mathbf{Z}}^{(t+1)}-\mathbf{S}+\frac{{\Lambda}^{\left(t\right)}}{\mu}\right\Vert}_{F}^{2}$$Using the soft-thresholding function $\mathrm{softTH}\left(\mathbf{A},\tau \right)=\mathrm{sign}\left(\mathbf{A}\right)\mathrm{max}\left(0,\left|\mathbf{A}\right|-\tau \right)$, this subproblem is solved efficiently by:$${\mathbf{S}}^{(t+1)}=\mathrm{softTH}\left(\mathbf{Y}-{\mathbf{E}}^{(t+1)}{\mathbf{Z}}^{(t+1)}+\frac{{\Lambda}^{\left(t\right)}}{\mu},\frac{\lambda}{\mu}\right)$$
- Update $\Lambda$ (Line 6): The Lagrangian multiplier $\Lambda$ is updated by:$${\Lambda}^{(t+1)}={\Lambda}^{\left(t\right)}+\mu \left(\mathbf{Y}-{\mathbf{E}}^{(t+1)}{\mathbf{Z}}^{(t+1)}-{\mathbf{S}}^{(t+1)}\right)$$
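The two closed-form steps above can be made concrete in a short sketch. The Sylvester system for $\mathbf{E}$ is solved here with dense eigendecompositions of $\mathbf{D}^T\mathbf{D}$ and $\mathbf{Z}\mathbf{Z}^T$, as a stand-in for the FFT diagonalization available when $\mathbf{D}$ is circulant; the forward-difference choice of $\mathbf{D}$ and all function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(A, tau):
    """softTH(A, tau) = sign(A) * max(0, |A| - tau), applied elementwise."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def update_E(Z, R, gamma, mu):
    """Solve (2*gamma/mu) D^T D E + E Z Z^T = R Z^T for E.

    D is a first-order spectral difference operator (an illustrative
    choice). Diagonalizing D^T D = F diag(t) F^T and
    Z Z^T = U diag(sig) U^T turns the Sylvester equation into an
    elementwise division in the transformed domain.
    """
    B = R.shape[0]
    D = np.eye(B) - np.eye(B, k=1)         # forward differences along bands
    t_vals, F = np.linalg.eigh(D.T @ D)    # D^T D = F diag(t_vals) F^T
    sig, Uz = np.linalg.eigh(Z @ Z.T)      # Z Z^T = Uz diag(sig) Uz^T
    # Transform the right-hand side, divide elementwise, transform back.
    Rhs = F.T @ (R @ Z.T) @ Uz
    denom = (2.0 * gamma / mu) * t_vals[:, None] + sig[None, :]
    return F @ (Rhs / denom) @ Uz.T
```

Because both diagonalizing matrices are orthogonal, the transformed unknown decouples entrywise, so the solve costs one pair of eigendecompositions plus a few matrix products per iteration.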
3. Results
3.1. Simulation Configurations
- Case 1: Zero-mean Gaussian noise, with variance randomly selected in the range [0.2, 0.25], is first added to all bands. Meanwhile, in 20 consecutive bands, 10% of the pixels are contaminated by salt-and-pepper noise.
- Case 2: Zero-mean Gaussian noise is added under the same conditions as in Case 1, and deadlines, with their number randomly selected in [3, 10] and widths in [1, 3], are added to the same 20 consecutive bands.
- Case 3: Zero-mean Gaussian noise is added under the same conditions as in Case 1, and stripes, with their number randomly selected in [2, 8], are added to the same 20 consecutive bands.
- Case 4: All the noises in Cases 1–3 are added simultaneously to simulate mixed noise.
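The mixed-noise simulation above can be sketched as follows, assuming a rows × cols × bands cube in [0, 1]. The corrupted band range, stripe amplitudes, and salt-and-pepper values are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def add_mixed_noise(X, seed=0):
    """Simulate the Case 4 mixture: Gaussian noise on every band, plus
    salt & pepper, deadlines and stripes on 20 consecutive bands."""
    rng = np.random.default_rng(seed)
    rows, cols, bands = X.shape
    Y = X.copy()
    # Zero-mean Gaussian noise, variance drawn from [0.2, 0.25] per band.
    for b in range(bands):
        sigma = np.sqrt(rng.uniform(0.2, 0.25))
        Y[:, :, b] += rng.normal(0.0, sigma, size=(rows, cols))
    start = rng.integers(0, bands - 20 + 1)   # 20 consecutive bands
    for b in range(start, start + 20):
        # Salt & pepper on 10% of the pixels.
        mask = rng.random((rows, cols)) < 0.10
        Y[:, :, b][mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
        # Deadlines: 3-10 zeroed column groups of width 1-3.
        for _ in range(rng.integers(3, 11)):
            c = rng.integers(0, cols - 3)
            Y[:, c:c + rng.integers(1, 4), b] = 0.0
        # Stripes: 2-8 columns shifted by a constant offset.
        for _ in range(rng.integers(2, 9)):
            Y[:, rng.integers(0, cols), b] += rng.uniform(-0.5, 0.5)
    return Y
```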
3.2. Experimental Results on Simulated Datasets
3.2.1. Parameter Settings for Simulated Datasets
3.2.2. Visual Evaluation Results
3.2.3. Quantitative Evaluation Results
3.2.4. Visual Evaluation Results
3.3. Experimental Results on Real Datasets
Quantitative Evaluation Results
4. Discussion
4.1. The Regularization Parameters $\mu $, $\gamma $ and $\lambda $
4.2. The Dimension of Subspace $k$
4.3. Convergence Analysis
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
HSI | Hyperspectral image
ADMM | Alternating direction method of multipliers
BM4D | Block-matching and 4D filtering
PCA | Principal component analysis
LRMR | Low-rank matrix restoration
CP | CANDECOMP/PARAFAC
CNN | Convolutional neural network
TV | Total variation
WDC | Washington DC Mall
PU | Pavia University
PSNR | Peak signal-to-noise ratio
SSIM | Structural similarity index measure
FSIM | Feature similarity index measure
ERGAS | Erreur relative globale adimensionnelle de synthèse
MSA | Mean spectral angle
References
- Wu, Z.; Sun, J.; Zhang, Y.; Wei, Z.; Chanussot, J. Recent Developments in Parallel and Distributed Computing for Remotely Sensed Big Data Processing. Proc. IEEE 2021.
- He, C.; Sun, L.; Huang, W.; Zhang, J.; Zheng, Y.; Jeon, B. TSLRLN: Tensor subspace low-rank learning with non-local prior for hyperspectral image mixed denoising. Signal Process. 2021, 184, 108060.
- Sun, L.; Wu, F.; Zhan, T.; Liu, W.; Wang, J.; Jeon, B. Weighted nonlocal low-rank tensor decomposition method for sparse unmixing of hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1174–1188.
- Sun, L.; Wu, F.; He, C.; Zhan, T.; Liu, W.; Zhang, D. Weighted Collaborative Sparse and L1/2 Low-Rank Regularizations With Superpixel Segmentation for Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2020.
- Lu, Z.; Xu, B.; Sun, L.; Zhan, T.; Tang, S. 3-D Channel and spatial attention based multiscale spatial–spectral residual network for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4311–4324.
- Wang, J.; Song, X.; Sun, L.; Huang, W.; Wang, J. A novel cubic convolutional neural network for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4133–4148.
- Pan, L.; He, C.; Xiang, Y.; Sun, L. Multiscale Adjacent Superpixel-Based Extended Multi-Attribute Profiles Embedded Multiple Kernel Learning Method for Hyperspectral Classification. Remote Sens. 2021, 13, 50.
- Xu, Y.; Wu, Z.; Chanussot, J.; Wei, Z. Nonlocal patch tensor sparse representation for hyperspectral image super-resolution. IEEE Trans. Image Process. 2019, 28, 3034–3047.
- Xu, Y.; Wu, Z.; Xiao, F.; Zhan, T.; Wei, Z. A target detection method based on low-rank regularized least squares model for hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1129–1133.
- Hou, Y.; Zhu, W.; Wang, E. Hyperspectral mineral target detection based on density peak. Intell. Autom. Soft Comput. 2019, 25, 805–814.
- Sun, L.; Zhan, T.; Wu, Z.; Jeon, B. A novel 3d anisotropic total variation regularized low rank method for hyperspectral image mixed denoising. ISPRS Int. J. Geo-Inf. 2018, 7, 412.
- Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
- Buades, A.; Coll, B.; Morel, J. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65.
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
- Gu, S.; Lei, Z.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869.
- Starck, J.L.; Candès, E.; Donoho, D.L. The curvelet transform for image denoising. IEEE Trans. Image Process. 2002, 11, 670–684.
- Kopsinis, Y.; Mclaughlin, S. Development of EMD-based denoising methods inspired by wavelet thresholding. IEEE Trans. Signal Process. 2009, 57, 1351–1362.
- Chen, G.; Qian, S. Denoising of hyperspectral imagery using principal component analysis and wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2011, 49, 973–980.
- Sun, L.; Jeon, B. Hyperspectral Mixed Denoising Via Subspace Low Rank Learning and BM4D Filtering. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 8034–8037.
- Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process. 2013, 22, 119–133.
- Wen, Y.; Ng, M.K.; Huang, Y. Efficient total variation minimization methods for color image restoration. IEEE Trans. Image Process. 2008, 17, 2081–2088.
- Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral-spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677.
- Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising with a spatial–spectral view fusion strategy. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2314–2325.
- Qian, Y.; Ye, M. Hyperspectral imagery restoration using nonlocal spectral-spatial structured sparse representation with noise estimation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 499–515.
- Wu, H.; Liu, Q.; Liu, X. A review on deep learning approaches to image classification and object segmentation. Comput. Mater. Contin. 2019, 60, 575–597.
- Xue, Y.; Wang, Y.; Liang, J.; Slowik, A. A Self-Adaptive Mutation Neural Architecture Search Algorithm Based on Blocks. IEEE Comput. Intell. Mag. 2021, 16, 67–78.
- Liu, Q.; Wu, Z.; Du, Q.; Xu, Y.; Wei, Z. Multiscale Alternately Updated Clique Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021.
- Xue, Y.; Zhu, H.; Liang, J.; Słowik, A. Adaptive crossover operator based multi-objective binary genetic algorithm for feature selection in classification. Knowl. Based Syst. 2021, 227, 107218.
- Zheng, Y.; Liu, X.; Xiao, B.; Cheng, X.; Wu, Y.; Chen, S. Multi-Task Convolution Operators with Object Detection for Visual Tracking. IEEE Trans. Circuits Syst. Video Technol. 2021.
- Cheng, G.; Han, J.; Zhou, P.; Xu, D. Learning Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection. IEEE Trans. Image Process. 2019, 28, 265–278.
- Zheng, Y.; Liu, X.; Cheng, X.; Zhang, K.; Wu, Y.; Chen, S. Multi-task deep dual correlation filters for visual tracking. IEEE Trans. Image Process. 2020, 29, 9614–9626.
- Wang, J.; Wang, H.; Li, J.; Luo, X.; Shi, Y.Q.; Jha, S.K. Detecting double JPEG compressed color images with the same quantization matrix in spherical coordinates. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2736–2749.
- Zhou, Z.; Zhu, J.; Su, Y.; Wang, M.; Sun, X. Geometric correction code-based robust image watermarking. IET Image Process. 2021.
- Wang, H.; Wang, J.; Zhai, J.; Luo, X. Detection of triple JPEG compressed color images. IEEE Access 2019, 7, 113094–113102.
- Hung, C.W.; Mao, W.L.; Huang, H.Y. Modified PSO algorithm on recurrent fuzzy neural network for system identification. Intell. Autom. Soft Comput. 2019, 25, 329–341.
- Mohanapriya, N.; Kalaavathi, B. Adaptive image enhancement using hybrid particle swarm optimization and watershed segmentation. Intell. Autom. Soft Comput. 2019, 25, 663–672.
- Li, H.; Qiu, K.; Chen, L.; Mei, X.; Hong, L.; Tao, C. SCAttNet: Semantic Segmentation Network with Spatial and Channel Attention Mechanism for High-Resolution Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2021, 18, 905–909.
- Zhang, X.; Lu, W.; Li, F.; Peng, X.; Zhang, R. Deep feature fusion model for sentence semantic matching. Comput. Mater. Contin. 2019, 61, 601–616.
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
- Ahn, H.; Chung, B.; Yim, C. Super-Resolution Convolutional Neural Networks Using Modified and Bilateral ReLU. In Proceedings of the 2019 International Conference on Electronics, Information, and Communication (ICEIC), Auckland, New Zealand, 22–25 January 2019; pp. 1–4.
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
- Yuan, Q.; Zhang, Q.; Li, J.; Shen, H.; Zhang, L. Hyperspectral Image Denoising Employing a Spatial–Spectral Deep Residual Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1205–1218.
- Chen, Y.; Pock, T. Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1256–1272.
- Xie, W.; Li, Y. Hyperspectral Imagery Denoising by Deep Learning with Trainable Nonlinearity Function. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1963–1967.
- Xue, Y.; Jiang, P.; Neri, F.; Liang, J. A Multi-Objective Evolutionary Approach Based on Graph-in-Graph for Neural Architecture Search of Convolutional Neural Networks. Int. J. Neural Syst. 2021, 2150035.
- Wei, K.; Fu, Y.; Huang, H. 3-D Quasi-Recurrent Neural Network for Hyperspectral Image Denoising. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 363–375.
- Sun, L.; Ma, C.; Chen, Y.; Zheng, Y.; Shim, H.J.; Wu, Z.; Jeon, B. Low rank component induced spatial-spectral kernel method for hyperspectral image classification. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 3829–3842.
- Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral Image Restoration Using Low-Rank Matrix Recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743.
- Xu, F.; Chen, Y.; Peng, C.; Wang, Y.; Liu, X.; He, G. Denoising of hyperspectral image using low-rank matrix factorization. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1141–1145.
- Sun, L.; Zhan, T.; Wu, Z.; Xiao, L.; Jeon, B. Hyperspectral mixed denoising via spectral difference-induced total variation and low-rank approximation. Remote Sens. 2018, 10, 1956.
- Cao, X.; Zhao, Q.; Meng, D.; Chen, Y.; Xu, Z. Robust Low-Rank Matrix Factorization Under General Mixture Noise Distributions. IEEE Trans. Image Process. 2016, 25, 4677–4690.
- Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.W. Nonlocal Low-Rank Regularized Tensor Decomposition for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5174–5189.
- Bai, X.; Xu, F.; Zhou, L.; Xing, Y.; Bai, L.; Zhou, J. Nonlocal Similarity Based Nonnegative Tucker Decomposition for Hyperspectral Image Denoising. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 701–712.
- Fan, H.; Chen, Y.; Guo, Y.; Zhang, H.; Kuang, G. Hyperspectral Image Restoration Using Low-Rank Tensor Recovery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4589–4604.
- Wang, Y.; Peng, J.; Zhao, Q.; Leung, Y.; Zhao, X.L.; Meng, D. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1227–1243.
- Sun, L.; He, C. Hyperspectral Image Mixed Denoising Using Difference Continuity-Regularized Nonlocal Tensor Subspace Low-Rank Learning. IEEE Geosci. Remote Sens. Lett. 2021.
- Lin, B.; Tao, X.; Lu, J. Hyperspectral image denoising via matrix factorization and deep prior regularization. IEEE Trans. Image Process. 2019, 29, 565–578.
- Zheng, Y.B.; Huang, T.Z.; Zhao, X.L.; Jiang, T.X.; Ma, T.H.; Ji, T.Y. Mixed Noise Removal in Hyperspectral Image via Low-Fibered-Rank Regularization. IEEE Trans. Geosci. Remote Sens. 2020, 58, 734–749.
- Sun, L.; Jeon, B. A novel subspace spatial-spectral low rank learning method for hyperspectral denoising. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4.
- Sun, L.; Jeon, B.; Soomro, B.N.; Zheng, Y.; Wu, Z.; Xiao, L. Fast superpixel based subspace low rank learning method for hyperspectral denoising. IEEE Access 2018, 6, 12031–12043.
- Sun, L.; He, C.; Zheng, Y.; Tang, S. SLRL4D: Joint restoration of subspace low-rank learning and non-local 4-d transform filtering for hyperspectral image. Remote Sens. 2020, 12, 2979.
- Zdunek, R. Alternating direction method for approximating smooth feature vectors in nonnegative matrix factorization. In Proceedings of the 2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Reims, France, 21–24 September 2014; pp. 1–6.
- Chen, Y.; He, W.; Yokoya, N.; Huang, T.Z. Hyperspectral image restoration using weighted group sparsity-regularized low-rank tensor decomposition. IEEE Trans. Cybern. 2020, 50, 3556–3570.
- He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral image denoising using local low-rank matrix recovery and global spatial–spectral total variation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 713–729.
- Mohamed, M.A.; Xiao, W. Q-metrics: An efficient formulation of normalized distance functions. In Proceedings of the 2007 IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007; pp. 2108–2113.
Dataset | Case | SLRL4D | 3DLogTNN | LRTDGS | LRTDTV | LLRSSTV | FSSLRL | SNSSLrL
---|---|---|---|---|---|---|---|---
Washington DC | Case 1 | k_num = 4, $\lambda$ = 0.12, $\gamma$ = 200 | $\sigma$ = 0.20, $\theta$ = 0.02, $\varphi$ = 0.00005, $\varpi$ = 0.01 | ${\lambda}_{1}$ = 0.3, ${\lambda}_{2}=600/\sqrt{m\ast n}$, r = [120,120,4] | $\tau$ = 0.8, $\lambda$ = 30, r = [200,200,4] | $\lambda$ = 0.16, $\tau$ = 0.005, r = 4 | Num_ss = 20, k_num = 4, ${\lambda}_{s}$ = 0.25, $\sigma$ = 0.8 | k_num = 4, $\lambda$ = 0.15, $\gamma$ = 100
 | Case 2 | k_num = 4, $\lambda$ = 0.15, $\gamma$ = 200 | $\sigma$ = 0.21, $\theta$ = 0.02, $\varphi$ = 0.00005, $\varpi$ = 0.01 | ${\lambda}_{1}$ = 0.4, ${\lambda}_{2}=650/\sqrt{m\ast n}$, r = [120,120,4] | $\tau$ = 0.8, $\lambda$ = 30, r = [200,200,4] | $\lambda$ = 0.18, $\tau$ = 0.005, r = 4 | Num_ss = 20, k_num = 4, ${\lambda}_{s}$ = 0.25, $\sigma$ = 0.75 | k_num = 4, $\lambda$ = 0.2, $\gamma$ = 100
 | Case 3 | k_num = 4, $\lambda$ = 0.17, $\gamma$ = 200 | $\sigma$ = 0.22, $\theta$ = 0.03, $\varphi$ = 0.00005, $\varpi$ = 0.011 | ${\lambda}_{1}$ = 0.4, ${\lambda}_{2}=700/\sqrt{m\ast n}$, r = [120,120,5] | $\tau$ = 1, $\lambda$ = 35, r = [200,200,4] | $\lambda$ = 0.20, $\tau$ = 0.005, r = 4 | Num_ss = 10, k_num = 4, ${\lambda}_{s}$ = 0.25, $\sigma$ = 0.7 | k_num = 5, $\lambda$ = 0.2, $\gamma$ = 200
 | Case 4 | k_num = 4, $\lambda$ = 0.2, $\gamma$ = 200 | $\sigma$ = 0.22, $\theta$ = 0.03, $\varphi$ = 0.00005, $\varpi$ = 0.011 | ${\lambda}_{1}$ = 0.5, ${\lambda}_{2}=750/\sqrt{m\ast n}$, r = [120,120,5] | $\tau$ = 1, $\lambda$ = 35, r = [200,200,4] | $\lambda$ = 0.20, $\tau$ = 0.005, r = 4 | Num_ss = 10, k_num = 4, ${\lambda}_{s}$ = 0.25, $\sigma$ = 0.7 | k_num = 5, $\lambda$ = 0.2, $\gamma$ = 200
Pavia | Case 1 | k_num = 4, $\lambda$ = 0.12, $\gamma$ = 150 | $\sigma$ = 0.18, $\theta$ = 0.02, $\varphi$ = 0.00003, $\varpi$ = 0.01 | ${\lambda}_{1}$ = 0.3, ${\lambda}_{2}=500/\sqrt{m\ast n}$, r = [120,120,4] | $\tau$ = 0.7, $\lambda$ = 25, r = [200,200,5] | $\lambda$ = 0.16, $\tau$ = 0.003, r = 5 | Num_ss = 20, k_num = 5, ${\lambda}_{s}$ = 0.20, $\sigma$ = 0.8 | k_num = 5, $\lambda$ = 0.16, $\gamma$ = 100
 | Case 2 | k_num = 4, $\lambda$ = 0.14, $\gamma$ = 150 | $\sigma$ = 0.20, $\theta$ = 0.02, $\varphi$ = 0.00003, $\varpi$ = 0.01 | ${\lambda}_{1}$ = 0.4, ${\lambda}_{2}=550/\sqrt{m\ast n}$, r = [120,120,4] | $\tau$ = 0.72, $\lambda$ = 30, r = [200,200,5] | $\lambda$ = 0.18, $\tau$ = 0.003, r = 4 | Num_ss = 20, k_num = 4, ${\lambda}_{s}$ = 0.20, $\sigma$ = 0.75 | k_num = 4, $\lambda$ = 0.15, $\gamma$ = 100
 | Case 3 | k_num = 4, $\lambda$ = 0.18, $\gamma$ = 180 | $\sigma$ = 0.22, $\theta$ = 0.03, $\varphi$ = 0.00004, $\varpi$ = 0.011 | ${\lambda}_{1}$ = 0.4, ${\lambda}_{2}=600/\sqrt{m\ast n}$, r = [120,120,5] | $\tau$ = 0.85, $\lambda$ = 35, r = [200,200,4] | $\lambda$ = 0.20, $\tau$ = 0.004, r = 4 | Num_ss = 10, k_num = 4, ${\lambda}_{s}$ = 0.25, $\sigma$ = 0.7 | k_num = 5, $\lambda$ = 0.2, $\gamma$ = 180
 | Case 4 | k_num = 4, $\lambda$ = 0.2, $\gamma$ = 180 | $\sigma$ = 0.22, $\theta$ = 0.03, $\varphi$ = 0.00004, $\varpi$ = 0.012 | ${\lambda}_{1}$ = 0.5, ${\lambda}_{2}=650/\sqrt{m\ast n}$, r = [120,120,5] | $\tau$ = 1, $\lambda$ = 35, r = [200,200,4] | $\lambda$ = 0.20, $\tau$ = 0.004, r = 4 | Num_ss = 10, k_num = 4, ${\lambda}_{s}$ = 0.25, $\sigma$ = 0.7 | k_num = 5, $\lambda$ = 0.2, $\gamma$ = 200
Urban | | k_num = 4, $\lambda$ = 0.2, $\gamma$ = 200 | $\sigma$ = 0.2235, $\theta$ = 0.03, $\varphi$ = 0.00005, $\varpi$ = 0.011 | ${\lambda}_{1}$ = 0.5, ${\lambda}_{2}=700/\sqrt{m\ast n}$, r = [120,120,5] | $\tau$ = 1, $\lambda$ = 35, r = [200,200,4] | $\lambda$ = 0.22, $\tau$ = 0.006, r = 4 | Num_ss = 10, k_num = 4, ${\lambda}_{s}$ = 0.25, $\sigma$ = 0.7 | k_num = 5, $\lambda$ = 0.2, $\gamma$ = 200
Dataset | Case | Index | Noisy | SLRL4D [61] | 3DLogTNN [58] | LRTDGS [63] | LRTDTV [55] | LLRSSTV [64] | FSSLRL [60] | SNSSLrL
---|---|---|---|---|---|---|---|---|---|---
WDC | Case 1 | PSNR | 12.83 | 31.82 | 32.20 | 31.60 | 31.90 | 31.25 | 30.03 | 32.23
 | | SSIM | 0.1513 | 0.8986 | 0.9160 | 0.8937 | 0.8992 | 0.8816 | 0.8519 | 0.9181
 | | FSIM | 0.5169 | 0.9474 | 0.9505 | 0.9408 | 0.9444 | 0.9383 | 0.9303 | 0.9510
 | | ERGAS | 888.4 | 95.67 | 91.74 | 97.97 | 94.70 | 102.0 | 118.6 | 92.43
 | | MSA | 0.8079 | 0.1126 | 0.0946 | 0.0980 | 0.1010 | 0.1152 | 0.1443 | 0.0987
 | Case 2 | PSNR | 12.96 | 32.55 | 31.03 | 31.43 | 32.02 | 31.18 | 30.34 | 32.78
 | | SSIM | 0.1587 | 0.9114 | 0.9036 | 0.8932 | 0.9020 | 0.8834 | 0.8563 | 0.9260
 | | FSIM | 0.5188 | 0.9526 | 0.9403 | 0.9402 | 0.9462 | 0.9381 | 0.9314 | 0.9552
 | | ERGAS | 871.1 | 88.05 | 132.5 | 108.1 | 93.41 | 109.1 | 116.0 | 85.95
 | | MSA | 0.8085 | 0.0965 | 0.1433 | 0.1000 | 0.1000 | 0.1254 | 0.1358 | 0.0898
 | Case 3 | PSNR | 13.00 | 32.47 | 32.38 | 31.67 | 32.00 | 31.32 | 30.33 | 32.88
 | | SSIM | 0.1581 | 0.9067 | 0.9176 | 0.8951 | 0.9027 | 0.8823 | 0.8556 | 0.9264
 | | FSIM | 0.5201 | 0.9512 | 0.9510 | 0.9414 | 0.9461 | 0.9386 | 0.9317 | 0.9554
 | | ERGAS | 873.1 | 87.34 | 89.44 | 97.07 | 93.32 | 101.2 | 116.4 | 84.54
 | | MSA | 0.8089 | 0.0961 | 0.0898 | 0.0979 | 0.0991 | 0.1155 | 0.1351 | 0.0879
 | Case 4 | PSNR | 12.78 | 32.14 | 31.04 | 31.61 | 31.91 | 31.23 | 29.76 | 32.60
 | | SSIM | 0.1508 | 0.9080 | 0.9001 | 0.8937 | 0.9001 | 0.8820 | 0.8484 | 0.9263
 | | FSIM | 0.5155 | 0.9520 | 0.9411 | 0.9408 | 0.9446 | 0.9385 | 0.9291 | 0.9563
 | | ERGAS | 895.2 | 94.91 | 121.1 | 98.11 | 94.81 | 106.9 | 117.9 | 88.91
 | | MSA | 0.8088 | 0.1118 | 0.1382 | 0.0984 | 0.1005 | 0.1157 | 0.1427 | 0.0941
PU | Case 1 | PSNR | 12.76 | 30.51 | 30.78 | 30.68 | 31.17 | 29.36 | 28.18 | 31.87
 | | SSIM | 0.0951 | 0.7932 | 0.8492 | 0.8221 | 0.8203 | 0.7527 | 0.6890 | 0.8793
 | | FSIM | 0.4376 | 0.9152 | 0.9211 | 0.9067 | 0.9165 | 0.8998 | 0.8818 | 0.9295
 | | ERGAS | 910.0 | 123.3 | 116.4 | 118.0 | 113.7 | 134.7 | 165.8 | 107.6
 | | MSA | 0.8612 | 0.1502 | 0.1076 | 0.1273 | 0.1359 | 0.1495 | 0.2083 | 0.1136
 | Case 2 | PSNR | 12.85 | 31.47 | 29.53 | 30.77 | 31.14 | 29.25 | 28.31 | 31.95
 | | SSIM | 0.0994 | 0.8233 | 0.8323 | 0.8254 | 0.8183 | 0.7557 | 0.6921 | 0.8746
 | | FSIM | 0.4355 | 0.9220 | 0.9080 | 0.9069 | 0.9137 | 0.8957 | 0.8804 | 0.9260
 | | ERGAS | 897.9 | 122.4 | 146.6 | 126.5 | 115.3 | 139.7 | 173.8 | 101.7
 | | MSA | 0.8749 | 0.1455 | 0.1562 | 0.1434 | 0.1413 | 0.1662 | 0.2136 | 0.1057
 | Case 3 | PSNR | 13.03 | 31.56 | 31.10 | 30.65 | 31.25 | 29.48 | 28.37 | 32.37
 | | SSIM | 0.1016 | 0.8251 | 0.8558 | 0.8198 | 0.8214 | 0.7548 | 0.6957 | 0.8856
 | | FSIM | 0.4407 | 0.9421 | 0.9231 | 0.9058 | 0.9153 | 0.8981 | 0.8825 | 0.9330
 | | ERGAS | 880.0 | 120.1 | 111.8 | 116.2 | 117.9 | 133.1 | 173.8 | 97.42
 | | MSA | 0.8719 | 0.1442 | 0.1004 | 0.1289 | 0.1358 | 0.1535 | 0.2124 | 0.1073
 | Case 4 | PSNR | 12.51 | 30.38 | 29.37 | 30.53 | 30.99 | 29.04 | 27.94 | 31.77
 | | SSIM | 0.0919 | 0.7908 | 0.8230 | 0.8156 | 0.8149 | 0.7500 | 0.6779 | 0.8786
 | | FSIM | 0.4313 | 0.9138 | 0.9057 | 0.9039 | 0.9135 | 0.8955 | 0.8780 | 0.9277
 | | ERGAS | 928.3 | 125.1 | 144.1 | 119.4 | 116.9 | 142.4 | 168.0 | 104.3
 | | MSA | 0.8669 | 0.1518 | 0.1563 | 0.1291 | 0.1351 | 0.1626 | 0.2118 | 0.1163
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, W.; He, C.; Sun, L. Spectral-Smoothness and Non-Local Self-Similarity Regularized Subspace Low-Rank Learning Method for Hyperspectral Mixed Denoising. Remote Sens. 2021, 13, 3196. https://doi.org/10.3390/rs13163196