Article

A Fast Designed Thresholding Algorithm for Low-Rank Matrix Recovery with Application to Missing English Text Completion

1 School of Foreign Languages, Yulin University, Yulin 719000, China
2 School of Mathematics and Statistics, Yulin University, Yulin 719000, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(19), 3135; https://doi.org/10.3390/math13193135
Submission received: 29 August 2025 / Revised: 14 September 2025 / Accepted: 26 September 2025 / Published: 1 October 2025
(This article belongs to the Special Issue Numerical Optimization: Algorithms and Applications)

Abstract

This article proposes a fast version of the adaptive iterative matrix designed thresholding (AIMDT) algorithm studied in our previous work. In the AIMDT algorithm, a designed thresholding operator is applied to the problem of recovering low-rank matrices. By adjusting the size of its parameter, this designed operator applies less bias to the singular values of a matrix. Using this designed operator, the AIMDT algorithm was developed to solve the matrix rank minimization problem, and numerical experiments have shown its superiority. However, the AIMDT algorithm generally converges slowly. In order to recover low-rank matrices more quickly, we present a fast AIMDT algorithm in this paper. Numerical results on some random low-rank matrix completion problems and a missing English text completion problem show that our proposed fast algorithm converges much faster than the previous AIMDT algorithm.

1. Introduction

The low-rank matrix completion problem aims to recover an unknown low-rank matrix from a limited number of observed entries. It has been applied in many fields, such as the Netflix problem [1], system identification [2], image processing [3], signal processing [4], video denoising [5], subspace learning [6], and target location [7]. The low-rank matrix completion problem can be modeled as the following minimization problem:
$$\min_{X \in \mathbb{R}^{m \times n}} \operatorname{rank}(X) \quad \text{s.t.} \quad P_{\Omega}(X) = P_{\Omega}(M), \tag{1}$$
where $\Omega \subseteq \{1, 2, \ldots, m\} \times \{1, 2, \ldots, n\}$ and the projection $P_{\Omega}: \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$ is defined as
$$[P_{\Omega}(X)]_{i,j} = \begin{cases} X_{i,j}, & (i,j) \in \Omega, \\ 0, & \text{otherwise}. \end{cases}$$
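To make the projection concrete, here is a minimal NumPy sketch (the function name `P_Omega` and the boolean-mask representation of Ω are our own choices, not from the paper):

```python
import numpy as np

def P_Omega(X, mask):
    """Projection onto the observed entries: keeps X[i, j] for (i, j) in Omega
    (marked True in `mask`) and zeroes all other entries."""
    return np.where(mask, X, 0.0)
```

Since the operator only zeroes entries, it is idempotent: applying it twice gives the same result as applying it once.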
In this paper, without loss of generality, we assume $m \le n$. Although the low-rank matrix completion problem (1) has been applied in many fields, it is actually an NP-hard problem due to the non-convexity of the rank function [1]. To overcome this difficulty, one natural approach is to solve the following relaxed minimization problem [1]:
$$\min_{X \in \mathbb{R}^{m \times n}} \|X\|_{*} \quad \text{s.t.} \quad P_{\Omega}(X) = P_{\Omega}(M), \tag{2}$$
where $\|X\|_{*} = \sum_{i=1}^{m} \sigma_i(X)$ denotes the nuclear norm of $X \in \mathbb{R}^{m \times n}$, and $\sigma_i(X)$ denotes the $i$-th largest singular value of $X$. In theory, Candès et al. [1] proved that problems (1) and (2) have the same solutions when certain conditions are satisfied. Although problem (2) can be solved by convex optimization solvers such as SeDuMi [8] and SDPT3 [9], these solvers usually incur a high computational cost for large-scale problems. To overcome this shortcoming, the following regularized version of problem (2) has been well studied in the literature [10,11,12,13]:
$$\min_{X \in \mathbb{R}^{m \times n}} \frac{1}{2}\|P_{\Omega}(X) - P_{\Omega}(M)\|_F^2 + \lambda \|X\|_{*}, \tag{3}$$
where $\lambda > 0$. Many classical algorithms have been proposed to solve the convex optimization problem (3), including the accelerated proximal gradient (APG) algorithm [10], the singular value thresholding (SVT) algorithm [11], and linearized augmented Lagrangian and alternating direction methods [12]. However, these algorithms are all based on the soft thresholding operator [13], which introduces bias and sometimes results in over-penalization.
In order to weaken the influence of bias on the recovery of low-rank matrices, Cui et al. [14] designed an artificial operator to recover low-rank matrices. Compared with the classical soft thresholding operator, this artificial operator introduces less bias, and the amount of bias can be controlled by adjusting the size of its parameter. Using this operator, an adaptive iterative matrix designed thresholding (AIMDT) algorithm was proposed in [14] to recover low-rank matrices, and the numerical results demonstrated its effectiveness. However, the AIMDT algorithm generally converges quite slowly, so accelerating its convergence is a significant problem. To this end, in this paper an acceleration technique is applied to the AIMDT algorithm to generate a fast algorithm, namely the fast adaptive iterative matrix designed thresholding (FAIMDT) algorithm, for recovering low-rank matrices. Compared with the previously proposed AIMDT algorithm, the FAIMDT algorithm converges faster.
The paper is organized as follows. In Section 2, we review the designed thresholding operator and the AIMDT algorithm proposed in [14]. In Section 3, we present the FAIMDT algorithm for fast recovery of low-rank matrices. In Section 4, numerical results on some random low-rank matrix completion problems and a missing English text completion problem demonstrate the effectiveness of the FAIMDT algorithm in recovering low-rank matrices. Finally, we conclude the paper in Section 5.

2. Designed Thresholding Operator and AIMDT Algorithm

In this section, we mainly review some known results from [14] on the designed thresholding operator and the AIMDT algorithm for recovering low-rank matrices.

2.1. Designed Thresholding Operator

First, we introduce the designed thresholding operator [14] and its related properties.
Definition 1
(See [14]). For any $\lambda > 0$, $c \ge 0$ and $t \in \mathbb{R}$, the operator defined by
$$r_{c,\lambda}(t) = \max\left\{ |t| - \frac{\lambda(c+\lambda)}{c+|t|},\, 0 \right\} \operatorname{sign}(t) \tag{4}$$
is called the designed thresholding operator.
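As a quick sanity check of Definition 1, the operator can be implemented in a few lines of NumPy (a sketch; the function name and vectorized form are our own):

```python
import numpy as np

def r(t, lam, c):
    """Designed thresholding operator r_{c,lam}(t) of Definition 1:
    max(|t| - lam*(c + lam)/(c + |t|), 0) * sign(t)."""
    t = np.asarray(t, dtype=float)
    return np.maximum(np.abs(t) - lam * (c + lam) / (c + np.abs(t)), 0.0) * np.sign(t)

# For lam = 0.5, c = 1: a large input t = 3 is shrunk only to 2.8125,
# whereas the soft thresholding operator would shrink it all the way to 2.5.
```

Note how the shrinkage amount $\lambda(c+\lambda)/(c+|t|)$ decays as $|t|$ grows; this is exactly the reduced-bias behavior stated in Property 1 below.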
Property 1
(See [14]). Given $c \ge 0$, the shrinkage gap $|t| - r_{c,\lambda}(t)$ approaches zero as the magnitude of $t$ increases.
Figure 1 shows the plots of the designed thresholding operator for several values of the parameter c. By adjusting the size of the parameter, the designed thresholding operator applies less bias to the larger coefficients.
Lemma 1
(See [14]). The designed thresholding operator $r_{c,\lambda}(t)$ can be viewed as the proximal mapping of a function $g(t)$, i.e.,
$$r_{c,\lambda}(t) = \arg\min_{x \in \mathbb{R}} \left\{ \frac{1}{2}(x - t)^2 + \lambda g(x) \right\}, \tag{5}$$
where the function $g$ is even, nondecreasing and continuous on $[0, +\infty)$, differentiable on $(0, +\infty)$, and nondifferentiable at $0$ with $\partial g(0) = [-1, 1]$. Moreover, $g$ is concave on $[0, +\infty)$ and satisfies the triangle inequality.
Definition 2
(See [14]). Given a matrix $X \in \mathbb{R}^{m \times n}$, consider the singular value decomposition (SVD) of $X$: $X = U[\mathrm{Diag}(\{\sigma_i(X)\}_{1 \le i \le m}),\, 0_{m \times (n-m)}]V^{\top}$, where $U$ is an $m \times m$ unitary matrix, $V$ is an $n \times n$ unitary matrix, $\sigma_i(X)$ is the $i$-th largest singular value of $X$, $\mathrm{Diag}(\{\sigma_i(X)\}_{1 \le i \le m}) \in \mathbb{R}^{m \times m}$ is the diagonal matrix of the singular values of $X$ arranged in descending order, and $0_{m \times (n-m)}$ is an $m \times (n-m)$ zero matrix. For any $\lambda > 0$ and $c \ge 0$, the operator $R_{c,\lambda}(X)$ defined by
$$R_{c,\lambda}(X) = U\left[\mathrm{Diag}\big(\{r_{c,\lambda}(\sigma_i(X))\}_{1 \le i \le m}\big),\, 0_{m \times (n-m)}\right]V^{\top} \tag{6}$$
is called the matrix designed thresholding operator.
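In code, Definition 2 amounts to applying the scalar operator to the singular values of an SVD. A sketch (function name and the use of a thin SVD are our choices; the trailing zero block contributes nothing, so the thin factorization suffices):

```python
import numpy as np

def R(X, lam, c):
    """Matrix designed thresholding operator R_{c,lam}(X) of Definition 2:
    apply r_{c,lam} to each singular value of X and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # singular values are nonnegative, so |s| = s and sign(s) is not needed
    s_thr = np.maximum(s - lam * (c + lam) / (c + s), 0.0)
    return U @ np.diag(s_thr) @ Vt
```

Since $s - \lambda(c+\lambda)/(c+s) > 0$ exactly when $s > \lambda$ (for $s, c \ge 0$), singular values at or below $\lambda$ are set to zero, so $R_{c,\lambda}$ acts as a rank-reducing shrinkage.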
Given a matrix $X \in \mathbb{R}^{m \times n}$, the function $G: \mathbb{R}^{m \times n} \to \mathbb{R}_{+}$ is defined by
$$G(X) = \sum_{i=1}^{m} g(\sigma_i(X)), \tag{7}$$
where $\mathbb{R}_{+} := [0, +\infty)$. Then we have the following result.
Theorem 1
(See [14]). For any $\lambda > 0$ and $Y \in \mathbb{R}^{m \times n}$, assume $\hat{X} \in \mathbb{R}^{m \times n}$ is the optimal solution of the following problem:
$$\min_{X \in \mathbb{R}^{m \times n}} \frac{1}{2}\|X - Y\|_F^2 + \lambda G(X). \tag{8}$$
Then $\hat{X}$ can be expressed as
$$\hat{X} = R_{c,\lambda}(Y). \tag{9}$$

2.2. AIMDT Algorithm

From [14], the AIMDT algorithm is given by
$$X_{k+1} = R_{c,\lambda\mu}\big(B_{\mu}(X_k)\big), \tag{10}$$
where $B_{\mu}(X_k) = X_k + \mu\big(P_{\Omega}(M) - P_{\Omega}(X_k)\big)$ and $\mu \in (0,1)$. In general, the regularization parameter $\lambda$ has a significant impact on the output of the AIMDT algorithm. In [14], a parameter-setting rule is given for $\lambda$: in each iteration of the AIMDT algorithm, the regularization parameter is set as
$$\lambda_k = \frac{\sigma_{r+1}\big(B_{\mu}(X_k)\big)}{\mu}. \tag{11}$$
The AIMDT algorithm with the parameter-setting rule (11) is summarized in Algorithm 1.
Algorithm 1 AIMDT algorithm [14]
  • Input: $\mu \in (0,1)$;
  • Initialize: $X_0 \in \mathbb{R}^{m \times n}$, $k = 0$;
  • while not converged, do
  •   $B_{\mu}(X_k) = X_k + \mu(P_{\Omega}(M) - P_{\Omega}(X_k))$;
  •   $\lambda_k = \sigma_{r+1}(B_{\mu}(X_k))/\mu$;
  •   $\lambda = \lambda_k$;
  •   $X_{k+1} = R_{c,\lambda\mu}(B_{\mu}(X_k))$;
  •   $k \leftarrow k + 1$;
  • end while
  • Output: $X_{opt}$
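Putting the pieces together, one AIMDT iteration is a masked gradient step followed by designed thresholding of the singular values. A self-contained sketch (the boolean mask for Ω, parameter defaults, and stopping tolerance are our choices; `rank` is the r used in the rule (11)):

```python
import numpy as np

def aimdt(M, mask, rank, mu=0.99, c=1.0, tol=1e-8, max_iter=5000):
    """Sketch of Algorithm 1 (AIMDT). `mask` is a boolean array marking the
    observed set Omega, and `rank` is the r in the lambda_k rule."""
    X = np.zeros_like(M, dtype=float)
    for _ in range(max_iter):
        B = X + mu * np.where(mask, M - X, 0.0)      # B_mu(X_k)
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        lam = s[rank] / mu                           # lambda_k = sigma_{r+1}(B)/mu
        thr = lam * mu                               # the operator uses lambda*mu
        s_new = np.maximum(s - thr * (c + thr) / (c + s), 0.0)
        X_new = U @ np.diag(s_new) @ Vt              # X_{k+1} = R_{c,lambda*mu}(B)
        if np.linalg.norm(X_new - X, 'fro') <= tol * max(1.0, np.linalg.norm(X, 'fro')):
            return X_new
        X = X_new
    return X
```

Since `thr` equals $\sigma_{r+1}(B_{\mu}(X_k))$, every iterate has rank at most $r$ after thresholding.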

3. Fast Adaptive Iterative Matrix Designed Thresholding Algorithm

In [14], numerical results demonstrated the superiority of the AIMDT algorithm. However, the AIMDT algorithm converges very slowly in numerical experiments. To address this problem, in this section an acceleration technique is applied to the AIMDT algorithm to generate a fast algorithm, namely the fast adaptive iterative matrix designed thresholding (FAIMDT) algorithm, for recovering low-rank matrices. Compared with the previously proposed AIMDT algorithm, the FAIMDT algorithm converges faster. The idea behind the FAIMDT algorithm is inspired by the parameterized fast iterative shrinkage-thresholding algorithm [15]. We present the FAIMDT algorithm in Algorithm 2.
Algorithm 2 FAIMDT algorithm
  • Input: $p > 0$, $q > 0$, $\mu \in (0,1)$, $\varsigma \in (0,4]$, $t_1 = 1$;
  • Initialize: $X_0 \in \mathbb{R}^{m \times n}$, $Y_1 = X_0$, $k = 1$;
  • while not converged, do
  •   $B_{\mu}(Y_k) = Y_k + \mu(P_{\Omega}(M) - P_{\Omega}(Y_k))$;
  •   $\lambda_k = \sigma_{r+1}(B_{\mu}(Y_k))/\mu$;
  •   $\lambda = \lambda_k$;
  •   $X_k = R_{c,\lambda\mu}(B_{\mu}(Y_k))$;
  •   $t_{k+1} = \dfrac{p + \sqrt{q + \varsigma t_k^2}}{2}$;
  •   $Y_{k+1} = X_k + \dfrac{t_k - 1}{t_{k+1}}(X_k - X_{k-1})$;
  •   $k = k + 1$;
  • end while
  • Output: $X_{opt}$
The major difference between the FAIMDT and AIMDT algorithms is that the thresholding step in FAIMDT is applied to an extrapolated combination of the two previous iterates $X_k$ and $X_{k-1}$, rather than directly to the previous iterate $X_k$.
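A sketch of Algorithm 2 in the same style (parameter defaults follow the values reported in Section 4; the variable names, boolean mask, and stopping tolerance are our choices, and `vs` stands for the $\varsigma$ parameter in $(0, 4]$):

```python
import numpy as np

def faimdt(M, mask, rank, mu=0.99, c=1.0, p=0.05, q=0.5, vs=4.0,
           tol=1e-8, max_iter=5000):
    """Sketch of Algorithm 2 (FAIMDT): the AIMDT step applied at an
    extrapolated point Y_k, with a parameterized-FISTA momentum rule [15]."""
    X_prev = np.zeros_like(M, dtype=float)
    Y, t = X_prev.copy(), 1.0
    for _ in range(max_iter):
        B = Y + mu * np.where(mask, M - Y, 0.0)       # B_mu(Y_k)
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        thr = s[rank]                                 # lambda_k * mu = sigma_{r+1}(B)
        s_new = np.maximum(s - thr * (c + thr) / (c + s), 0.0)
        X = U @ np.diag(s_new) @ Vt                   # X_k = R_{c,lambda_k*mu}(B)
        t_next = (p + np.sqrt(q + vs * t * t)) / 2.0  # parameterized momentum
        Y = X + ((t - 1.0) / t_next) * (X - X_prev)   # extrapolation step
        if np.linalg.norm(X - X_prev, 'fro') <= tol * max(1.0, np.linalg.norm(X_prev, 'fro')):
            return X
        X_prev, t = X, t_next
    return X_prev
```

With $p = q = 1$ and $\varsigma = 4$, the momentum rule reduces to the classical FISTA update $t_{k+1} = (1 + \sqrt{1 + 4t_k^2})/2$.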

4. Numerical Experiments

To verify the convergence rate of the FAIMDT algorithm, in this section we conduct numerical experiments comparing the FAIMDT algorithm with the classical SVT algorithm [11] and the previously proposed AIMDT algorithm [14]. For all algorithms, the stopping criterion is uniformly defined as
$$\frac{\|X_{k+1} - X_k\|_F}{\max\{1, \|X_k\|_F\}} \le 10^{-8}. \tag{12}$$
In this section, these three algorithms are tested on some random low-rank matrix completion problems and a missing English text completion problem.

4.1. Random Low-Rank Matrix Completion

First, we test the three algorithms on random low-rank matrix completion problems. Here, we set $m = n$. Gaussian random matrices are considered: we generate a random low-rank matrix $M \in \mathbb{R}^{n \times n}$ of rank $r$ by multiplying two Gaussian random matrices $M_1 \in \mathbb{R}^{n \times r}$ and $M_2 \in \mathbb{R}^{r \times n}$. The set $\Omega$ is sampled uniformly at random among all sets of cardinality $s$. We define the sampling ratio (SR) as $\mathrm{SR} = s/(mn)$ and the freedom ratio (FR) as $\mathrm{FR} = s/(r(m+n-r))$. It is impossible to recover the original low-rank matrix if $\mathrm{FR} < 1$ (see [16]). The accuracy of the solution $X_{opt}$ generated by an algorithm is measured by the relative error
$$\mathrm{RE} = \frac{\|X_{opt} - M\|_F}{\|M\|_F}. \tag{13}$$
In the FAIMDT algorithm, the parameters $\mu$, $p$ and $q$ are set to 0.99, 0.05 and 0.5, respectively.
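The test-problem generation and the two ratios can be reproduced as follows (a sketch; we draw Ω entrywise with probability SR rather than at exactly fixed cardinality $s$, which matches SR only in expectation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 20

# Rank-r Gaussian test matrix M = M1 @ M2, as in Section 4.1
M1 = rng.standard_normal((n, r))
M2 = rng.standard_normal((r, n))
M = M1 @ M2

SR = 0.40
mask = rng.random((n, n)) < SR       # observed set Omega
s = int(mask.sum())

FR = s / (r * (n + n - r))           # freedom ratio; recovery needs FR >= 1

def RE(X):
    """Relative error of a candidate solution X against the true M."""
    return np.linalg.norm(X - M, 'fro') / np.linalg.norm(M, 'fro')
```

For $n = 200$, $r = 20$ and $\mathrm{SR} = 0.40$ this gives $\mathrm{FR} \approx 2.1$, matching the problem settings in Tables 1 and 2.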
Table 1 shows the numerical results of the algorithms on random low-rank matrix completion problems. For the FAIMDT and AIMDT algorithms, the parameters c = 1 and c = 5 are considered. We can see that both FAIMDT variants are superior to the AIMDT and SVT algorithms in RE and computation time. That is, the FAIMDT algorithm recovers the random low-rank matrices with higher precision, and converges faster, than the AIMDT and SVT algorithms.
To test the robustness of the proposed FAIMDT algorithm, we also consider the noisy case in the numerical experiments. Here, the Gaussian random matrix $M \in \mathbb{R}^{n \times n}$ of rank $r$ is perturbed by Gaussian noise, i.e., $M = M_1 M_2 + E$, where $E = 0.01V$ and $V \in \mathbb{R}^{m \times n}$ is a Gaussian matrix. Table 2 shows the numerical results of the algorithms for random low-rank matrix completion problems in the noisy case. We can see that both FAIMDT variants remain superior to the AIMDT and SVT algorithms in RE and computation time.

4.2. Missing English Text Completion

To further test the effectiveness of the FAIMDT algorithm, we demonstrate its performance on a missing English text completion problem. We test the algorithms on a grayscale English text image in which some English words are missing, and we want to recover these missing words. The original grayscale English text image and the grayscale missing English text image are displayed in Figure 2 and Figure 3, respectively. Figure 2 was obtained from a screenshot of article [14].
Table 3 presents the numerical results for the missing English text completion problem. From Table 3, the FAIMDT algorithm with c = 1 recovers the missing English text more accurately and faster than the AIMDT and SVT algorithms. The English text images recovered by the algorithms are displayed in Figures 4, 5, 6, 7 and 8, respectively.

5. Conclusions

In this paper, we have studied a fast algorithm for recovering low-rank matrices. Numerical experiments on some random low-rank matrix completion problems and a missing English text completion problem show that this fast algorithm converges faster than our previously proposed AIMDT algorithm. Although the FAIMDT algorithm converges faster than the AIMDT algorithm, we have yet to present a proof of its convergence or an estimate of its convergence rate. This is mainly because the matrix designed thresholding operator $R_{c,\lambda}$ is in fact the proximal mapping of the nonconvex function $G(X)$, so the FAIMDT algorithm is actually a nonconvex optimization algorithm, which poses difficulties in analyzing its convergence and convergence rate. In the future, we will focus on addressing these two questions.

Author Contributions

Conceptualization, methodology, resources, writing—original draft, H.H.; funding acquisition, software, writing—review and editing, A.C.; funding acquisition, methodology, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shaanxi Fundamental Science Research Project for Mathematics and Physics under Grant 23JSQ056, the Doctoral Research Project of Yulin University under Grant 21GK04, and the Natural Science Basic Research Program of Shaanxi Province under Grant 2024JC-YBMS-036 and Grant 2020JQ-900.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Candès, E.J.; Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009, 9, 717–772. [Google Scholar] [CrossRef]
  2. Liu, Z.; Vandenberghe, L. Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 2009, 31, 1235–1256. [Google Scholar] [CrossRef]
  3. Cao, F.; Cai, M.; Tan, Y. Image interpolation via low-rank matrix completion and recovery. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1261–1270. [Google Scholar]
  4. Xing, Z.; Zhou, J.; Ge, Z.; Huang, G.; Hu, M. Recovery of high order statistics of PSK signals based on low-rank matrix completion. IEEE Access 2023, 11, 12973–12986. [Google Scholar] [CrossRef]
  5. Chen, P.; Suter, D. Recovering the missing components in a large noisy low-rank matrix: Application to SFM. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1051–1063. [Google Scholar] [CrossRef] [PubMed]
  6. Tsakiris, M.C. Low-rank matrix completion theory via Plücker coordinates. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10084–10099. [Google Scholar] [CrossRef] [PubMed]
  7. Zhang, Z.; Pu, W.; Zhou, R.; You, M.; Wang, W.; Yan, J. Efficient Position Determination Using Low-Rank Matrix Completion. In Proceedings of the 2024 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Zhuhai, China, 22–24 November 2024. [Google Scholar]
  8. Sturm, J.F. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 1999, 11, 625–653. [Google Scholar] [CrossRef]
  9. Tütüncü, R.H.; Toh, K.C.; Todd, M.J. Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. 2003, 95, 189–217. [Google Scholar] [CrossRef]
  10. Toh, K.-C.; Yun, S. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 2010, 6, 615–640. [Google Scholar]
  11. Cai, J.-F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  12. Yang, J.; Yuan, X. Linearized augmented lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput. 2013, 82, 301–329. [Google Scholar] [CrossRef]
  13. Daubechies, I.; Defrise, M.; Mol, C.D. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
  14. Cui, A.; He, H.; Yang, H. A designed thresholding operator for low-rank matrix completion. Mathematics 2024, 12, 1065. [Google Scholar] [CrossRef]
  15. Liang, J.; Luo, T.; Schonlieb, C.B. Improving “fast iterative shrinkage-thresholding algorithm”: Faster, smarter, and greedier. SIAM J. Sci. Comput. 2022, 44, A1069–A1091. [Google Scholar] [CrossRef]
  16. Ma, S.; Goldfarb, D.; Chen, L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 2011, 128, 321–353. [Google Scholar] [CrossRef]
Figure 1. Plots of operator r c , λ ( t ) for some c with λ = 0.5 .
Figure 2. The original grayscale English text image.
Figure 3. The grayscale missing English text image.
Figure 4. The grayscale missing English text image recovered by FAIMDT algorithm with c = 1 .
Figure 5. The grayscale missing English text image recovered by FAIMDT algorithm with c = 5 .
Figure 6. The grayscale missing English text image recovered by AIMDT algorithm with c = 1 .
Figure 7. The grayscale missing English text image recovered by AIMDT algorithm with c = 5 .
Figure 8. The grayscale missing English text image recovered by SVT algorithm.
Table 1. Numerical results of algorithms for random low-rank matrix completion problems with different n and r, SR = 0.40 (noiseless case).

Problem           | FAIMDT, c=1      | FAIMDT, c=5      | AIMDT, c=1       | AIMDT, c=5       | SVT
(n, r, FR)        | RE       Time    | RE       Time    | RE       Time    | RE       Time    | RE       Time
(100, 10, 2.10)   | 2.69e-8  0.166   | 3.04e-8  0.162   | 3.31e-7  0.527   | 3.31e-7  0.540   | 2.72e-1  0.481
(200, 20, 2.10)   | 5.32e-8  0.558   | 5.46e-8  0.565   | 2.39e-7  1.612   | 2.40e-7  1.631   | 2.23e-1  2.203
(300, 30, 2.10)   | 4.01e-8  1.102   | 4.15e-8  1.097   | 1.98e-7  2.940   | 1.95e-7  2.964   | 2.02e-1  5.576
(400, 40, 2.10)   | 2.72e-7  2.299   | 2.77e-8  2.189   | 2.23e-7  5.989   | 2.27e-7  6.065   | 2.10e-1  10.594
(500, 50, 2.10)   | 3.29e-8  3.518   | 3.35e-8  3.509   | 1.88e-7  8.810   | 1.86e-7  8.854   | 2.12e-1  18.682
(600, 60, 2.10)   | 3.12e-8  5.577   | 3.17e-8  4.577   | 1.79e-7  11.733  | 1.85e-7  11.907  | 2.10e-1  24.072
(700, 70, 2.10)   | 3.13e-8  7.296   | 3.17e-8  7.201   | 1.88e-7  19.089  | 1.87e-7  20.040  | 1.92e-1  45.720
(800, 80, 2.10)   | 2.94e-8  9.945   | 2.97e-8  9.929   | 1.75e-7  26.476  | 1.80e-7  29.084  | 1.90e-1  65.804
(900, 90, 2.10)   | 3.00e-8  13.488  | 2.45e-8  13.267  | 1.73e-7  36.042  | 1.77e-7  36.812  | 1.86e-1  87.852
(1000, 100, 2.10) | 2.67e-8  17.578  | 2.69e-8  18.183  | 1.72e-7  49.834  | 1.75e-7  50.473  | 1.93e-1  110.415
(1100, 110, 2.10) | 2.52e-8  23.156  | 2.54e-8  28.470  | 1.71e-7  60.586  | 1.74e-7  66.294  | 1.93e-1  143.420
Table 2. Numerical results of algorithms for random low-rank matrix completion problems with different n and r, SR = 0.40 (noisy case).

Problem           | FAIMDT, c=1      | FAIMDT, c=5      | AIMDT, c=1       | AIMDT, c=5       | SVT
(n, r, FR)        | RE       Time    | RE       Time    | RE       Time    | RE       Time    | RE       Time
(100, 10, 2.10)   | 2.90e-3  0.166   | 2.90e-3  0.162   | 2.90e-3  0.523   | 2.90e-3  0.504   | 3.23e-1  0.444
(200, 20, 2.10)   | 2.00e-3  0.565   | 2.00e-3  0.565   | 2.00e-3  1.636   | 2.00e-3  1.642   | 2.65e-1  2.067
(300, 30, 2.10)   | 1.60e-3  1.205   | 1.60e-3  1.227   | 1.60e-3  3.344   | 1.60e-3  3.364   | 2.43e-1  4.889
(400, 40, 2.10)   | 1.40e-3  2.503   | 1.40e-3  2.664   | 1.40e-3  5.942   | 1.40e-3  6.066   | 2.01e-1  11.438
(500, 50, 2.10)   | 1.40e-3  3.338   | 1.40e-3  3.357   | 1.40e-3  8.654   | 1.40e-3  8.813   | 1.93e-1  19.293
(600, 60, 2.10)   | 1.10e-3  5.165   | 1.10e-3  5.178   | 1.10e-3  13.323  | 1.10e-3  13.903  | 1.93e-1  30.694
(700, 70, 2.10)   | 1.00e-3  7.567   | 1.00e-3  7.223   | 1.00e-3  18.888  | 1.00e-3  20.285  | 2.00e-1  41.304
(800, 80, 2.10)   | 9.61e-4  9.968   | 9.61e-3  10.121  | 9.61e-3  26.992  | 9.61e-3  27.568  | 2.00e-1  57.394
(900, 90, 2.10)   | 8.96e-4  13.835  | 8.96e-4  13.417  | 8.96e-4  36.408  | 8.96e-4  36.649  | 1.88e-1  84.413
(1000, 100, 2.10) | 8.61e-4  18.274  | 8.61e-4  18.627  | 8.61e-4  48.808  | 8.61e-4  49.776  | 1.78e-1  117.498
(1100, 110, 2.10) | 8.00e-4  21.487  | 8.00e-4  22.563  | 8.00e-4  57.949  | 8.00e-4  54.021  | 8.14e-2  336.561
Table 3. Numerical results of algorithms for the grayscale missing English text completion problem.

           | FAIMDT, c=1      | FAIMDT, c=5      | AIMDT, c=1        | AIMDT, c=5       | SVT
           | RE       Time    | RE       Time    | RE       Time     | RE       Time    | RE       Time
           | 8.60e-3  37.060  | 3.07e-2  11.926  | 8.70e-3  819.792  | 3.07e-2  127.622 | 3.96e-2  17.247

Share and Cite

He, H.; Cui, A.; Yang, H. A Fast Designed Thresholding Algorithm for Low-Rank Matrix Recovery with Application to Missing English Text Completion. Mathematics 2025, 13, 3135. https://doi.org/10.3390/math13193135

