Abstract
Camera failure or the loss of storage components in imaging equipment may result in the loss of important image information or random impulse noise interference. The low-rank prior is one of the most important priors in image optimization processing. This paper reviews and compares several low-rank constraint models for image matrices. Firstly, an overview of image-inpainting models based on the nuclear norm, truncated nuclear norm, weighted nuclear norm, and matrix-factorization-based F norm is presented, and the corresponding iterative optimization algorithms are provided. Then, we use the different matrix low-rank constraint models to recover satellite images from three types of impulse interference and present our experimental visual and numerical results. Finally, it can be concluded that the method based on the weighted nuclear norm achieves the best image restoration effect. The F norm method based on matrix factorization has the shortest computation time and can be used for large-scale low-rank matrix calculations. Compared with nuclear-norm-based methods, weighted-nuclear-norm-based and truncated-nuclear-norm-based methods can significantly improve the inpainting performance.
1. Introduction
In machine vision applications, images often suffer from impulse interference due to various factors, such as impulse noise caused by detector pixel failure in the camera or the loss of storage elements in the imaging equipment [1]. Satellite images, unmanned aerial vehicle (UAV) images, etc., generally exhibit local smoothness, so their two-dimensional representation matrices usually have an obvious low-rank property. Low-rank prior information performs excellently in image denoising [1], inpainting [2,3], reconstruction [4], deblurring [5], and other signal optimization processing fields. In existing low-rank-matrix-based image-inpainting methods, the low-rank prior information mainly comes from the low rank of the signal itself, such as the inherent low rank of the matrix, the similarity of local image patches [6], the similarity between frames of a video [7], etc. A low-rank matrix with a Hankel-like structure can be constructed in the Fourier domain by using the annihilating filter relationship [2,4]. A high-order tensor rank can be obtained under various tensor decomposition frameworks, such as CANDECOMP/PARAFAC (CP), Tucker [8,9], tensor train (TT) [10], and tensor singular value decomposition (t-SVD) [11].
In addition to the low-rank prior, early image-denoising methods assumed that images had sparse representations in their transformation domains, such as the difference domain, wavelet domain, etc. [12,13]. Following this assumption, the sparse prior information was then constrained to recover the image from the noise. Due to the effectiveness of low rank and sparsity in constrained image optimization problems, image-processing schemes that combine constrained sparsity with low-rank prior information have been continuously proposed [14,15]. Some image-denoising methods use decomposition models of low rank and sparse components, such as the robust principal component analysis (RPCA) method [16], aimed at separating low-rank images from sparse interference images. With the progress in the research and development of tensor decomposition tools in the field of mathematics, such as t-SVD, TT decomposition, etc. [10,11,17], related image or video optimization applications based on a low-rank tensor are also being developed [18,19,20,21,22].
In addition to using low-rank prior information to construct the constraint model for impulse-interference image inpainting, many theories, methods, and technologies from the field of signal processing can be combined to solve image-inpainting problems, such as various matrix/tensor completion theories in mathematics [18,23,24], finite rate of innovation (FRI) theory [25], image and video enhancement techniques (such as Hankel-like-matrix-based technology [1,2,4]), and denoising schemes (such as the well-known BM3D image-denoising technology [26] and nonlocal TV denoising technology [12]). Various tensor-decomposition-based completion methods, convex optimization schemes, and fast optimization algorithms can also be used to improve image-inpainting methods.
The above methods are inseparable from the most basic problem of modeling and solving matrix low-rank constraints. The initial modeling approach is to establish a constraint that minimizes the number of nonzero singular values of the matrix, i.e., minimizes the $\ell_0$ norm of the singular values, but this is an NP-hard problem that cannot be solved directly. In order to approximate the optimal solution, various matrix low-rank constraint modeling schemes have been proposed to replace the $\ell_0$ norm, such as the nuclear norm, the weighted nuclear norm, the truncated nuclear norm, matrix factorization replacing the nuclear norm, and so on. However, to our knowledge, there has been no literature review or comparison of these methods in dealing with the same optimization problem.
This paper uses the low-rank property of the image matrix to optimize the image-inpainting model and algorithm under three kinds of impulse interference. Image-inpainting-modeling schemes based on the nuclear norm, truncated nuclear norm, weighted nuclear norm, and matrix-factorization-based F norm are reviewed, and their corresponding iterative optimization algorithms, such as the TSVT_ADMM algorithm, WSVT_ADMM algorithm, and UV_ADMM algorithm, are given. The experimental results of the various inpainting methods are displayed visually and numerically, and a comparative analysis is given.
The structure and content of this paper are arranged as follows: Section 1 presents an introduction; Section 2 establishes the basis of the matrix low-rank constraint inpainting model and solution algorithm; Section 3 presents the experimental comparison; and the last section presents the conclusions.
2. The Matrix Low-Rank Constrained Inpainting Model and Its Solution Algorithm
Image-inpainting models based on a low-rank matrix are generally expressed as follows:
where $X$ represents the image (a grayscale image, an RGB image, a grayscale video, etc.); $\mathcal{A}$ represents the interference operator, in which the set of interfered pixel positions is $\Omega$; $\hat{X}$ represents the optimal solution; $Y$ represents the interfered image; $\varepsilon$ represents the error tolerance, generally set to a small constant, such as $10^{-14}$; and $\mathcal{T}$ represents the low-rank transformation operation, which is used to transform $X$ into a matrix or tensor with low rank, such as a low-rank matrix formed from similar local image patches, a low-rank structured matrix [1,2,4] constructed through the annihilating filter relationship, or a low-rank matrix [7] formed from the similarity between frames. If videos or RGB images are treated as third-order or higher-order tensors, the low-rank property may come from the tensor Tucker rank [27], the TT rank [19], etc. Under the interference of impulse noise, the operator $\mathcal{A}$ generally has three representations, as detailed below.
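With this notation, a generic form of model (1) can be written as follows (a reconstruction from the definitions above; individual methods may state the data-fidelity constraint slightly differently, e.g., with an elementwise error bound):

$$
\hat{X} = \arg\min_{X}\ \operatorname{rank}\big(\mathcal{T}(X)\big) \quad \text{s.t.} \quad \big\|\mathcal{A}(X) - Y\big\|_{F}^{2} \le \varepsilon .
$$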
The first representation is random-valued impulse noise (RVIN) [1]:
The injected noise value is random, and it lies within the range of the pixel values of $X$, such as 0~255, or 0~1 after normalization. The interference rate is defined as the percentage of interfered pixels among the total number of pixels in the image.
The second representation is salt and pepper noise, a special kind of RVIN [1]:
where the "salt" pixels take the maximum value of the noise range and the "pepper" pixels take the minimum value.
In addition, random pixel loss, in which the interfered pixels are simply dropped, is also a typical problem in the research field of image inpainting [2,6,8].
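These three interference patterns are straightforward to simulate. The following numpy sketch (our illustration, not the authors' code) generates random-valued impulse noise, salt-and-pepper noise, and randomly missing pixels at the 30% interference rate used in the experiments of Section 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_valued_impulse(img, rate):
    """RVIN: a fraction `rate` of pixels is replaced by random values from the image range."""
    noisy, mask = img.copy(), rng.random(img.shape) < rate
    noisy[mask] = rng.uniform(img.min(), img.max(), size=int(mask.sum()))
    return noisy, mask

def salt_and_pepper(img, rate):
    """Salt and pepper: interfered pixels are set to the maximum (salt) or minimum (pepper) value."""
    noisy, mask = img.copy(), rng.random(img.shape) < rate
    salt = rng.random(img.shape) < 0.5
    noisy[mask & salt] = img.max()
    noisy[mask & ~salt] = img.min()
    return noisy, mask

def random_missing(img, rate):
    """Random pixel loss: interfered pixels are simply dropped (set to zero)."""
    noisy, mask = img.copy(), rng.random(img.shape) < rate
    noisy[mask] = 0.0
    return noisy, mask

img = np.random.rand(256, 256)            # stand-in for a normalized grayscale satellite image
rvin_img, _ = random_valued_impulse(img, 0.3)
sp_img, _ = salt_and_pepper(img, 0.3)
miss_img, omega = random_missing(img, 0.3)  # omega marks the interfered pixel positions
```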
Essentially, the low-rank property is another form of sparsity. A sparse constraint on a matrix minimizes the $\ell_0$ norm of the matrix elements, while a low-rank constraint minimizes the $\ell_0$ norm of the singular values of the constrained matrix. In other words, the low-rank constraint on a matrix $X$ is the $\ell_0$ norm constraint on its singular values, i.e., $\operatorname{rank}(X) = \|\sigma(X)\|_0$. Since the $\ell_0$ norm is nonconvex, the $\ell_p$ norm, taking the form $\|X\|_p^p = \sum_i \sigma_i^p$ with $0 < p \le 1$, is commonly used as a surrogate [28], where $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_{\min(m,n)} \ge 0$ are the singular values of the matrix $X$ of size $m \times n$. The special case of the $\ell_p$ norm at $p = 1$ is the nuclear norm $\|X\|_*$, which means $\|X\|_* = \sum_i \sigma_i$. Whether the chosen low-rank constraint accurately approximates the $\ell_0$ norm has a significant impact on the inpainting effect. Let $\|X\|_p^p = \sum_i f(\sigma_i)$, where $f$ is a function acting on each singular value. For the $\ell_0$ and $\ell_p$ norms, the function is $f(\sigma) = \sigma^p$ with $p = 0$ and $0 < p \le 1$, respectively.
Normalizing $\sigma$ within the range of 0–1 and plotting the curves of $f(\sigma) = \sigma^p$ at p = 0, 0.3, 0.5, 0.7, and 1 gives the visualization of the approximation shown in Figure 1. It can be seen that the smaller the value of p is, the closer the approximation curve is to the $\ell_0$ norm curve.
Figure 1.
The curves of $f(\sigma) = \sigma^p$, where p = 0, 0.3, 0.5, 0.7, and 1.
As the simplest convex substitution for the $\ell_0$ norm, the nuclear norm is the most common choice in low-rank constraint modeling. To further improve the accuracy of the low-rank approximation, we can use a weighted norm of the singular values of the matrix, that is, the weighted nuclear norm [29,30,31,32], or use the truncated nuclear norm [33,34,35,36] in place of the nuclear norm. Common regularization constraint schemes for low-rank matrices are summarized below.
2.1. Nuclear Norm
For example, we use the minimized nuclear norm as a low-rank constraint to establish an image-inpainting model, as follows:
where $Y$ is the impulse-interfered image of size $m \times n$. The regularization parameter $\lambda$ is introduced to convert model (2) into the following unconstrained form:
Three algorithms can be used to solve Equation (3). The most commonly used algorithm is the singular value shrinkage/threshold (SVT) algorithm [37].
The SVT algorithm is shown below.
First, perform the singular value decomposition $X = U \Sigma V^{T}$, where $\Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_{\min(m,n)})$ is the diagonal matrix of singular values. Then, a soft-threshold operation is performed on the singular values [38], i.e., each $\sigma_i$ is replaced by $\max(\sigma_i - \tau, 0)$ for a threshold $\tau$. Finally, the thresholded singular values are recombined with $U$ and $V$ to obtain the solution.
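A minimal numpy sketch of this operator is given below (an illustration only, not the authors' implementation; the threshold is supplied by the caller):

```python
import numpy as np

def svt(X, tau):
    """SVT operator: soft-threshold the singular values of X by tau and reassemble."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```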
Jain et al. proposed the singular value projection (SVP) algorithm in order to solve the problem in model (2) [39]. With the development of large-scale data processing and distributed computing, the alternating direction method of multipliers (ADMM) algorithm has become a mainstream optimization algorithm [40]. When using the ADMM algorithm to solve (3), auxiliary variables and residuals must first be introduced to transform model (3) into multiple sub-problems for an iterative solution:
where $\mu$ is the introduced penalty parameter, and the SVT method is used to solve the low-rank sub-problem.
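For illustration, a scaled-form ADMM loop of this kind can be sketched as follows (not the authors' implementation; the penalty, the regularization weight, the iteration count, and the exact observed-pixel fidelity term are assumed choices):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding (the SVT operator sketched above, repeated so this block runs standalone)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_admm_inpaint(Y, omega, lam=0.05, mu=1.0, n_iter=100):
    """Scaled ADMM sketch for a nuclear-norm inpainting problem with split variable Z = X.

    omega : boolean mask of interfered pixel positions (True = corrupted/missing).
    """
    observed = ~omega
    X, Z = Y.copy(), Y.copy()
    W = np.zeros_like(Y)                           # scaled dual variable
    for _ in range(n_iter):
        Z = svt(X + W, lam / mu)                   # low-rank sub-problem, solved by SVT
        X = Z - W                                  # closed-form X-update off the observed set
        X[observed] = (Y[observed] + mu * (Z - W)[observed]) / (1.0 + mu)
        W = W + X - Z                              # dual ascent step
    return X
```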
In this paper, we use the SVT, SVP, and ADMM algorithms to solve the nuclear-norm-based image-inpainting model and refer to the resulting methods as the SVT method, the SVP method, and the n_ADMM method, respectively. The details of the SVT, SVP, and n_ADMM algorithms are shown in Algorithms 1–3, respectively.
| Algorithm 1. The SVT algorithm for solving model (3) |
| Algorithm 2. The SVP algorithm for solving model (2) |
| Algorithm 3. The n_ADMM algorithm for solving model (3) |
2.2. Weighted Nuclear Norm
The weighted nuclear norm is a scheme that approximates the $\ell_0$ norm of the singular values by weighting the constraint on each singular value [29,30,31,32]. It is a balanced constraint scheme that assigns smaller weights to large singular values and larger weights to small singular values, so that the dominant singular values are penalized less. It can therefore be more accurate than the nuclear norm (i.e., the equally weighted constraint on the singular values). The weighted nuclear norm can be written as $\|X\|_{w,*} = \sum_i w_i \sigma_i$, where $w_i \ge 0$ is the weight assigned to the $i$-th singular value of matrix $X$. We use the weighted nuclear norm as a low-rank constraint to establish an image-inpainting model, as follows:
Then, we introduce the regularization parameter $\lambda$ and convert model (4) into an unconstrained form:
There are many kinds of weighting functions, and the $\ell_p$ norm ($0 < p < 1$) is the simplest weighting scheme. Reference [30] reviewed various weighting functions that approximate the $\ell_0$ norm of the singular values, such as SCAD [41], MCP [42], Logarithm [43], Geman [44], Laplace [45,46], etc., among which the Logarithm scheme is the most classic. In the experimental section of this paper, we choose the Logarithm scheme for our comparisons. The weighting function of the Logarithm scheme is shown below:
where the parameter of the Logarithm weighting function is determined based on experience.
The simplest and most direct solution to model (4) is the weighted SVT (WSVT) algorithm: each singular value is soft-thresholded with its own weighted threshold, i.e., $\sigma_i$ is replaced by $\max(\sigma_i - \tau w_i, 0)$, and the matrix is then reassembled as in the SVT algorithm.
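A minimal numpy sketch of weighted singular value thresholding is shown below (an illustration; the inverse-proportional weighting rule is an assumed Logarithm-style choice and not necessarily the exact weighting function used in the experiments):

```python
import numpy as np

def wsvt(X, tau, gamma=1.0, eps=1e-8):
    """Weighted SVT: shrink each singular value by its own weighted threshold.

    The weights are chosen inversely related to the singular values (assumed rule),
    so dominant singular values are penalized less than small ones."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = gamma / (s + gamma + eps)             # larger sigma -> smaller weight
    return U @ np.diag(np.maximum(s - tau * w, 0.0)) @ Vt
```

Replacing the svt step with wsvt in the ADMM loop sketched in Section 2.1 gives a WSVT_ADMM-style iteration.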
We use ADMM to solve the weighted nuclear norm image-inpainting problem (5). We introduce auxiliary variables and residuals to transform model (5) into multiple sub-problems for iterative solution:
where $\mu$ is the introduced penalty parameter, and the WSVT algorithm is used to solve the low-rank sub-problem. The combination of the weighted SVT algorithm and the ADMM algorithm yields more accurate iterative estimates. We use the ADMM algorithm to solve the weighted-nuclear-norm-based image-inpainting model (7) and name this the WSVT_ADMM method. The details of the WSVT_ADMM algorithm used to solve model (7) are shown in Algorithm 4.
| Algorithm 4. The WSVT_ADMM algorithm for solving model (7) |
2.3. Truncated Nuclear Norm
In general, the singular value curve of a low-rank matrix decays rapidly (roughly exponentially) from large to small, and the trailing singular values approach 0. Therefore, nuclear norm minimization mainly constrains the large singular values. To make fuller use of the small singular values, a truncated nuclear norm minimization scheme can be used; its purpose is to constrain only the minimization of the small singular values [33,34,35,36]. We use the truncated nuclear norm as a low-rank constraint to establish an image-inpainting model, as follows:
where the truncation operation extracts the first $r$ (largest) diagonal elements of the singular value matrix $\Sigma$, and the truncated nuclear norm $\|X\|_r = \sum_{i=r+1}^{\min(m,n)} \sigma_i$ means that the first $r$ largest diagonal elements of $\Sigma$ are zeroed and the remaining smaller diagonal elements are retained. We introduce a regularization parameter $\lambda$ and convert (8) into an unconstrained form:
where $U_r$ and $V_r$ are the matrices formed by the first $r$ left and right singular vectors of $X$, respectively. The essence of truncated nuclear norm minimization is to minimize the sum of the smaller singular values of the constrained low-rank matrix.
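A minimal numpy sketch of the corresponding truncated thresholding step is given below (one common form, shown for illustration; the truncated rank r is assumed to be chosen in advance):

```python
import numpy as np

def tsvt(X, tau, r):
    """Truncated SVT: keep the r largest singular values untouched and
    soft-threshold only the remaining (small) singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_new = s.copy()
    s_new[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only the tail of the spectrum
    return U @ np.diag(s_new) @ Vt
```

Using tsvt in place of svt in the ADMM loop sketched in Section 2.1 yields a TSVT-style iteration.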
The truncated-nuclear-norm-based model can be solved by the APGL or ADMM algorithms. This paper combines the ADMM algorithm with the SVT algorithm to solve the truncated-nuclear-norm-based image-inpainting model (9), and abbreviates it as the TSVT (truncated SVT) algorithm. The details of the TSVT algorithm used to solve model (9) are shown in Algorithm 5.
| Algorithm 5. The TSVT algorithm for solving model (9) |
2.4. The F Norm of UV Matrix Factorization
The process of solving the nuclear norm minimization problem involves time-consuming singular value decompositions. Earlier, Srebro [47] proposed and proved the property $\|X\|_* = \min_{U V^{T} = X} \frac{1}{2}\left(\|U\|_F^2 + \|V\|_F^2\right)$. Later, the F norm of UV matrix factorization was used instead of the nuclear norm in many applications to reduce the computation time [1,48]. We use the minimized F norm of UV matrix factorization as a low-rank constraint to establish an image-inpainting model, as follows.
Then, we introduce the regularization parameter $\lambda$ and penalty parameter $\mu$ to convert model (10) into an unconstrained form:
where the residual (dual) variable is introduced by the ADMM formulation. The initial values of $U$ and $V$ can be obtained using the LMaFit method [2,49]. Model (11) is commonly solved using the ADMM algorithm, and we name the resulting method the UV_ADMM method. The details of the UV_ADMM algorithm are shown in Algorithm 6.
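For intuition, a simplified alternating least-squares sketch of factorization-based inpainting is shown below (an illustration only, not the ADMM variant of Algorithm 6; the factorization rank k, the regularization weight, and the iteration count are assumed values, whereas LMaFit would normally estimate the rank and initial factors):

```python
import numpy as np

def uv_inpaint(Y, omega, k=20, lam=1e-2, n_iter=100, seed=0):
    """Low-rank inpainting via the factorization X = U @ V.T with Frobenius-norm
    regularization on U and V (simplified alternating least squares)."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, k)) * 0.01
    V = rng.standard_normal((n, k)) * 0.01
    X = Y.copy()
    I = np.eye(k)
    for _ in range(n_iter):
        # Alternate ridge-regression updates of U and V against the current estimate X
        U = X @ V @ np.linalg.inv(V.T @ V + lam * I)
        V = X.T @ U @ np.linalg.inv(U.T @ U + lam * I)
        X = U @ V.T
        X[~omega] = Y[~omega]      # keep the observed (uninterfered) pixels
    return X
```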
| Algorithm 6. The UV_ADMM algorithm for solving model (11) |
Compared with the n_ADMM method, the UV_ADMM method based on the F norm of UV matrix factorization avoids the time-consuming SVD in each iteration, making it more suitable for large-scale modeling with low-rank matrix constraints. This method and the weighted nuclear norm method are commonly used in low-rank-constrained matrix models. The above models and their solving algorithms are summarized in Table 1.
Table 1.
Modeling and solving algorithms based on the low-rank constraint in this paper.
In addition, other algorithms can solve the above models, for example, algorithms commonly used for sparsity-constrained models. A sparse constraint on a signal refers to minimization of the $\ell_0$ norm of the signal elements, while a low-rank constraint refers to minimization of the $\ell_0$ norm of the singular values of the signal matrix. Therefore, optimization based on a low-rank constraint model has a lot in common with optimization based on a sparse constraint model in terms of the solving algorithm. Iterative optimization algorithms developed for sparse constraint models can be applied to matrix low-rank constraint models, such as convex relaxation algorithms, which find a sparse or low-rank approximation of signals by transforming nonconvex problems into convex problems and solving them iteratively. Among them, the conjugate gradient (CG) algorithm, the iterative soft thresholding (IST) algorithm [50], the Split Bregman algorithm [51], and the majorize-minimize (MM) algorithm [52] can be flexibly adapted to different optimization models.
3. Comparative Experiments
In this section, we compare the above methods on satellite-image-inpainting problems. We simulated impulse interference on satellite images with an interference rate of 30% (the interference rate is the percentage of interfered pixels among the total number of image pixels). The three kinds of impulse interference were as follows: A. random impulse interference; B. salt and pepper impulse interference; C. random missing pixels. The satellite images in this paper are sourced from the public dataset DOTA v.2.0 (https://captain-whu.github.io/DOTA/dataset.html, accessed on 12 December 2020), with images provided by the China Resources Satellite Data and Application Center, satellite GF-1, satellite GF-2, etc. The methods employed in the comparison are SVT, SVP, n_ADMM, TSVT_ADMM, WSVT_ADMM, and UV_ADMM. For a fair comparison, each method is run with its optimal parameters to ensure that every method shows its most representative performance.
The relative least normalized error (RLNE) and the structural similarity (SSIM) [53] are used as image-inpainting quality indicators. The RLNE is an index based on the pixel-wise error, while the SSIM index is more consistent with human visual perception in image quality evaluation. Generally, the smaller the RLNE value and the larger the SSIM value, the better the image-inpainting quality. All simulations were carried out on a Windows 10 operating system and MATLAB R2019a running on a PC with an Intel Core i7 CPU at 2.8 GHz and 16 GB of memory.
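Assuming the RLNE is the standard relative error $\|\hat{X} - X\|_F / \|X\|_F$, the two indicators can be computed as in the following sketch (scikit-image's structural_similarity is used for SSIM, with image values normalized to [0, 1]):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def rlne(x_rec, x_ref):
    """Relative L2 (Frobenius) norm error between recovered and reference images."""
    return np.linalg.norm(x_rec - x_ref) / np.linalg.norm(x_ref)

# Hypothetical usage on a recovered image x_hat and its ground truth img:
# err = rlne(x_hat, img)                     # lower is better
# sim = ssim(x_hat, img, data_range=1.0)     # higher is better
```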
A gray satellite image and its singular value curve are shown in Figure 2a and Figure 2b, respectively. The singular values of the image descend rapidly from large to small, and most of them tend toward zero, which indicates that the image is approximately low rank. Three examples of impulse-interfered satellite images are shown in Figure 3, where the original image is the image in Figure 2a. It can be seen that the 30% interference rate causes an obvious loss of information about the building shapes, layout, gray-value shading, and other features of the original image.
Figure 2.
A gray satellite image and its singular value curve. (a) The gray satellite image; (b) the singular value curve of the image in (a).
Figure 3.
The interference satellite image (interference rate 30%): (a) random impulse interference pattern; (b) salt and pepper impulse interference pattern; (c) random missing pixel pattern.
The comparison of the average values (RLNE, SSIM, and running time) of six image-inpainting methods under the interference of random impulse, salt and pepper noise, and missing pixels is shown in Table 2. The visual comparison of the six image-inpainting methods under the interference of salt and pepper noise is shown in Figure 4.
Table 2.
Numerical comparison of six image-inpainting methods under three types of interference. (Bold values indicate the best result among the compared methods.)
Figure 4.
Visual comparison of six image-inpainting methods under the interference of salt and pepper noise: (a) original image; (b) interference image; (c) SVT; (d) SVP; (e) n_ADMM; (f) TSVT_ADMM; (g) WSVT_ADMM; (h) UV_ADMM.
Based on the above visual and numerical comparison, we analyze the experimental results below.
The matrix low-rank constraint method based on the F norm of UV matrix factorization (i.e., the UV_ADMM method) is roughly comparable in effectiveness to the method based on the nuclear norm constraint (i.e., the n_ADMM method). Overall, the n_ADMM method is slightly better, improving the RLNE index by about 0.3% and the SSIM index by 0.3~1.
Since the nuclear-norm-based SVT, SVP, and n_ADMM methods, the weighted-nuclear-norm-based WSVT_ADMM method, and the truncated-nuclear-norm-based TSVT method all involve a time-consuming SVD in each iteration, the UV_ADMM method based on the F norm of UV matrix factorization has a significant advantage in runtime. However, the UV_ADMM method did not achieve more accurate results than the other methods, because it requires an initial estimate of the rank (for example, from the LMaFit method), and this rank estimate is not highly accurate, which leads to an inaccurate low-rank constraint. Thus, the UV-matrix-factorization-based method is more commonly used for large-scale low-rank matrix calculations, where avoiding the SVD in each iteration greatly reduces the inpainting time.
Since the weighted and truncated nuclear norms better approximate the $\ell_0$ norm of the singular values, the WSVT_ADMM and TSVT methods are significantly more accurate than the nuclear-norm-based methods (SVT, SVP, and n_ADMM) in terms of inpainting quality.
In summary, we obtain a preliminary understanding that, when the same image restoration problem is solved with different algorithms, all of the matrix low-rank constraint schemes discussed above can solve the image optimization problem effectively, but the solution accuracy and computation time vary. Compared with the other schemes, the weighted scheme obtains more accurate repair results. In terms of computation time, because the time-consuming SVD is avoided, the matrix-factorization-based scheme is the fastest, but its repair accuracy is slightly lower than that of the weighted, truncated nuclear norm, and nuclear norm schemes. In addition, under the same matrix low-rank constraint scheme, the choice of solving algorithm also affects the accuracy of image restoration. For example, the nuclear norm scheme solved with the SVP algorithm has significantly better restoration accuracy than the nuclear norm scheme solved with SVT, while the nuclear norm scheme solved with the ADMM algorithm takes second place. It is also evident that the repair effects of the weighted nuclear norm and truncated nuclear norm schemes are significantly better than that of the nuclear norm scheme.
4. Conclusions
In machine vision applications, satellite images may suffer from three forms of impulse noise interference. In this paper, we use the low-rank characteristics of the image matrix to optimize and repair images under three kinds of impulse interference and provide the corresponding optimization algorithms. Firstly, image-inpainting-modeling schemes based on the nuclear norm, truncated nuclear norm, weighted nuclear norm, and matrix factorization F norm are reviewed. Then, the corresponding iterative optimization algorithms are provided, such as the TSVT_ADMM algorithm, WSVT_ADMM algorithm, UV_ADMM algorithm, etc. Finally, the experimental results of the various matrix-rank-constraint-based methods are presented visually and numerically, and a comparative analysis is provided. The experimental results show that all of the mentioned matrix-rank-constraint-based methods can repair images to a certain extent and suppress certain forms of interference noise. Among the methods studied, those based on the weighted nuclear norm and the truncated nuclear norm achieved better repair effects, while the method based on the matrix factorization F norm takes the shortest time and can be used for large-scale low-rank matrix computation.
Author Contributions
Conceptualization, S.M.; methodology, S.M.; investigation, W.Y. and Z.L.; resources, S.M. and Z.L.; writing—original draft preparation, S.M. and F.C.; writing—review and editing, S.M., W.Y. and S.F.; supervision, L.L. and S.F.; funding acquisition, S.M. and L.L. All authors have read and agreed to the published version of the manuscript.
Funding
This work was funded by the National Key Laboratory of Science and Technology on Space Microwave, No. HTKJ2021KL504012; supported by the Science and Technology Innovation Cultivation Fund of Space Engineering University, No. KJCX-2021-17; supported by the Information Security Laboratory of National Defense Research and Experiment, No.2020XXAQ02.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Jin, K.H.; Ye, J.C. Sparse and Low-Rank Decomposition of a Hankel Structured Matrix for Impulse Noise Removal. IEEE Trans. Image Process. 2018, 27, 1448–1461. [Google Scholar]
- Jin, K.H.; Ye, J.C. Annihilating filter-based low-rank Hankel matrix approach for image inpainting. IEEE Trans. Image Process. 2015, 24, 3498–3511. [Google Scholar] [CrossRef] [PubMed]
- Balachandrasekaran, A.; Magnotta, V.; Jacob, M. Recovery of damped exponentials using structured low rank matrix completion. IEEE Trans. Med. Imaging 2017, 36, 2087–2098. [Google Scholar] [CrossRef] [PubMed]
- Haldar, J. Low-rank modeling of local-space neighborhoods (LORAKS) for constrained MRI. IEEE Trans. Med. Imaging 2014, 33, 668–680. [Google Scholar] [CrossRef] [PubMed]
- Ren, W.; Cao, X.; Pan, J.; Guo, X.; Zuo, W.; Yang, M.-H. Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process. 2016, 25, 3426–3437. [Google Scholar] [CrossRef] [PubMed]
- Xu, Z.; Sun, J. Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Process. 2010, 9, 1153–1165. [Google Scholar]
- Zhao, B.; Haldar, J.; Christodoulou, A.; Liang, Z.-P. Image reconstruction from highly undersampled (k, t)-space data with joint partial separability and sparsity constraints. IEEE Trans. Med. Imaging 2012, 31, 1809–1820. [Google Scholar] [CrossRef] [PubMed]
- Long, Z.; Liu, Y.; Chen, L.; Zhu, C. Low rank tensor completion for multiway visual data. Signal Process. 2019, 155, 301–316. [Google Scholar] [CrossRef]
- Kolda, T.; Bader, B. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
- Oseledets, I. Tensor-train decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317. [Google Scholar] [CrossRef]
- Kilmer, M.; Braman, K.; Hao, N. Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 2013, 34, 148–172. [Google Scholar] [CrossRef]
- Zhang, X.; Chan, T. Wavelet inpainting by nonlocal total variation. Inverse Probl. Imaging 2010, 4, 191–210. [Google Scholar] [CrossRef]
- Ou, Y.; Li, B.; Swamy, M. Low-rank with sparsity constraints for image denoising. Inf. Sci. 2023, 637, 118931. [Google Scholar] [CrossRef]
- Lingala, S.; Hu, Y.; Dibella, E.; Jacob, M. Accelerated dynamic MRI exploiting sparsity and low-rank structure: K-t SLR. IEEE Trans. Med. Imaging 2011, 30, 1042–1054. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Huang, L.; Li, Y.; Zhang, K.; Yin, C. Low-Rank and Sparse Matrix Recovery for Hyperspectral Image Reconstruction Using Bayesian Learning. Sensors 2022, 22, 343. [Google Scholar] [CrossRef] [PubMed]
- Tremoulheac, B.; Dikaios, N.; Atkinson, D.; Arridge, S.R. Dynamic MR image reconstruction-separation from undersampled (k, t)-space via low-rank plus sparse prior. IEEE Trans. Med. Imaging 2014, 33, 1689–1701. [Google Scholar] [CrossRef] [PubMed]
- Zhao, Q.; Zhou, G.; Xie, S.; Zhang, L.; Cichocki, A. Tensor ring decomposition. arXiv 2016, arXiv:1606.05535. [Google Scholar]
- Zhang, Z.; Aeron, S. Exact tensor completion using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526. [Google Scholar] [CrossRef]
- Bengua, J. Efficient tensor completion for color image and video recovery: Low-rank tensor train. IEEE Trans. Image Process. 2017, 26, 1057–7149. [Google Scholar] [CrossRef]
- Ma, S.; Du, H.; Mei, W. Dynamic MR image reconstruction from highly undersampled (k, t)-space data exploiting low tensor train rank and sparse prior. IEEE Access 2020, 8, 28690–28703. [Google Scholar] [CrossRef]
- Ma, S.; Ai, J.; Du, H.; Fang, L.; Mei, W. Recovering low-rank tensor from limited coefficients in any ortho-normal basis using tensor-singular value decomposition. IET Signal Process. 2021, 19, 162–181. [Google Scholar] [CrossRef]
- Tang, T.; Kuang, G. SAR Image Reconstruction of Vehicle Targets Based on Tensor Decomposition. Electronics 2022, 11, 2859. [Google Scholar] [CrossRef]
- Gross, D. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Infor. Theory 2011, 57, 1548–1566. [Google Scholar] [CrossRef]
- Jain, P.; Oh, S. Provable tensor factorization with missing data. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 8–13 December 2014; Volume 1, pp. 1431–1439. [Google Scholar]
- Vetterli, M.; Marziliano, P.; Blu, T. Sampling signals with finite rate of innovation. IEEE Trans. Signal Process. 2002, 50, 1417–1428. [Google Scholar] [CrossRef]
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2209. [Google Scholar] [CrossRef] [PubMed]
- Filipović, M.; Jukić, A. Tucker factorization with missing data with application to low-n-rank tensor completion. Multidim. Syst. Sign. Process. 2015, 26, 677–692. [Google Scholar] [CrossRef]
- Wang, X.; Kong, L.; Wang, L.; Yang, Z. High-Dimensional Covariance Estimation via Constrained Lq-Type Regularization. Mathematics 2023, 11, 1022. [Google Scholar] [CrossRef]
- Candès, E.; Wakin, M.; Boyd, S. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
- Lu, C.; Tang, J.; Yan, S.; Lin, Z. Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm. IEEE Trans. Image Process. 2015, 25, 829–839. [Google Scholar] [CrossRef]
- Zhang, J.; Lu, J.; Wang, C.; Li, S. Hyperspectral and multispectral image fusion via superpixel-based weighted nuclear norm minimization. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12. [Google Scholar] [CrossRef]
- Li, Z.; Yan, M.; Zeng, T.; Zhang, G. Phase retrieval from incomplete data via weighted nuclear norm minimization. Pattern Recognit. 2022, 125, 108537. [Google Scholar] [CrossRef]
- Cao, F.; Chen, J.; Ye, H.; Zhao, J.; Zhou, Z. Recovering low-rank and sparse matrix based on the truncated nuclear norm. Neural Netw. 2017, 85, 10–20. [Google Scholar] [CrossRef] [PubMed]
- Hu, Y.; Zhang, D.; Ye, J.; Li, X.; He, X. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2117–2130. [Google Scholar] [CrossRef] [PubMed]
- Fan, Q.; Liu, Y.; Yang, T.; Peng, H. Fast and accurate spectrum estimation via virtual coarray interpolation based on truncated nuclear norm regularization. IEEE Signal Process. Lett. 2022, 29, 169–173. [Google Scholar] [CrossRef]
- Yadav, S.; George, N. Fast direction-of-arrival estimation via coarray interpolation based on truncated nuclear norm regularization. IEEE Trans. Circuits Syst. II Exp. Briefs. 2021, 68, 1522–1526. [Google Scholar] [CrossRef]
- Cai, J.; Candès, E.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optimiz. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
- Xu, J.; Fu, Y.; Xiang, Y. An edge map-guided acceleration strategy for multi-scale weighted nuclear norm minimization-based image denoising. Digit. Signal Process. 2023, 134, 103932. [Google Scholar] [CrossRef]
- Jain, P.; Meka, R. Guaranteed Rank Minimization via Singular Value Projection. Available online: http://arxiv.org/abs/0909.5457 (accessed on 19 October 2009).
- Boyd, S. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122. [Google Scholar]
- Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
- Friedman, J. Fast sparse regression and classification. Int. J. Forecast. 2012, 28, 722–738. [Google Scholar] [CrossRef]
- Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 2010, 38, 894–942. [Google Scholar] [CrossRef] [PubMed]
- Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef] [PubMed]
- Trzasko, J.; Manduca, A. Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization. IEEE Trans. Med. Imaging 2009, 28, 106–121. [Google Scholar] [CrossRef] [PubMed]
- Liu, Q. A truncated nuclear norm and graph-Laplacian regularized low-rank representation method for tumor clustering and gene selection. BMC Bioinform. 2021, 22, 436. [Google Scholar] [CrossRef] [PubMed]
- Signoretto, M.; Cevher, V.; Suykens, J. An SVD-free approach to a class of structured low rank matrix optimization problems with application to system identification. In Proceedings of the IEEE Conference on Decision and Control, EPFL-CONF-184990, Firenze, Italy, 10–13 December 2013. [Google Scholar]
- Srebro, N. Learning with Matrix Factorizations; Massachusetts Institute of Technology: Cambridge, MA, USA, 2004. [Google Scholar]
- Wen, Z.; Yin, W.; Zhang, Y. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 2012, 4, 333–361. [Google Scholar] [CrossRef]
- Daubechies, I.; Defrise, M.; Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
- Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
- Jacobson, M.; Fessler, J. An Expanded Theoretical Treatment of Iteration-Dependent Majorize-Minimize Algorithms. IEEE Trans. Image Process. 2007, 16, 2411–2422. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]