Article

Rank-Restricted Hierarchical Alternating Least Squares Algorithm for Matrix Completion with Applications

Division of Global Business and Technology, Hankuk University of Foreign Studies, Yongin 17035, Republic of Korea
Appl. Sci. 2025, 15(16), 8876; https://doi.org/10.3390/app15168876
Submission received: 16 June 2025 / Revised: 1 August 2025 / Accepted: 8 August 2025 / Published: 12 August 2025

Abstract

The matrix completion problem aims to recover missing entries in a partially observed matrix by approximating it with a low-rank structure. Two common approaches, singular value thresholding and matrix factorization with alternating least squares, often become prohibitively expensive for large matrices or when rigorous accuracy is demanded. To address these issues, we propose a rank-restricted hierarchical alternating least squares algorithm with orthogonality and sparsity constraints, which includes a novel shrinkage function. Specifically, for faster execution, the truncated factor matrices are updated so that the costly shrinkage step is restricted to a small factor matrix, and boundary-condition heuristics are employed. Experiments on image completion and recommender systems show that the proposed method converges extremely quickly while achieving comparable or superior reconstruction accuracy relative to state-of-the-art matrix completion methods. For example, in the image completion problem, the proposed algorithm produced outputs approximately 15 times faster on average than the most accurate reference algorithm, while achieving 98% of its accuracy.

1. Introduction

The matrix completion problem, which aims to recover missing or corrupted entries in a partially observed matrix, arises in various fields, such as recommender systems [1,2], computer vision [3,4,5], machine learning [6,7,8], and signal processing [9,10,11]. However, filling in the missing entries of a matrix is challenging because it is inherently ill-posed, as infinitely many completion results may exist. To address this issue, constraints are typically imposed on the complete matrix to capture its inherent structure or regularity. A common constraint is that the underlying matrix is of low rank, meaning that only a few linearly independent latent factors can be used to approximate the missing entries. This assumption is natural in many real-world scenarios. For example, in movie recommender systems, users’ preferences are often explained by only a few underlying factors, such as genres, actors, or directors. This low-rank property makes the matrix completion problem tractable, as it significantly reduces the degrees of freedom. Another important challenge is scalability. In real-world applications, the data matrices to be completed are often extremely large. Even though the data matrix is sparse, matrix completion algorithms typically need to perform computations on the entire matrix to recover the missing entries, requiring prohibitively expensive computational resources. Given the importance of matrix completion in practical applications, numerous methods have been developed to solve the low-rank matrix completion problem, which include optimization-based approaches and machine learning algorithms [6,7,8]. Furthermore, computationally more efficient algorithms have been extensively studied to handle large-scale data matrices [12,13].
In this paper, we focus on enhancing the performance of matrix completion by proposing a novel optimization algorithm and providing proofs of its performance in popular applications, such as image completion and recommender systems. The remainder of this paper is organized as follows. In Section 2, we introduce popular low-rank matrix completion algorithms and highlight our contributions. In Section 3, we propose a new rank-restricted hierarchical alternating least squares algorithm for matrix completion. Section 4 presents an evaluation of the numerical performance of the proposed algorithm using image completion and recommender system data. We conclude with a discussion in Section 5.

2. Related Works and Our Contribution

The mathematical expression of the low-rank completion problem is as follows: Assume that an incomplete matrix $M \in \mathbb{R}^{m \times n}$ is given, whose observed entries are indexed by the set $\Omega = \{(i,j) \mid M_{ij} \text{ is observed}\}$. Matrix completion problems aim to obtain the low-rank matrix $X \in \mathbb{R}^{m \times n}$ that matches $M$ on the observed positions by solving the optimization problem [14]:

$$\arg\min_{X} \operatorname{rank}(X) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \; (i,j) \in \Omega. \tag{1}$$

Introducing the sampling operator $P_\Omega(G)$ for any matrix $G$, defined by

$$[P_\Omega(G)]_{ij} = \begin{cases} G_{ij} & \text{if } (i,j) \in \Omega, \\ 0 & \text{otherwise}, \end{cases} \tag{2}$$

the problem (1) can be rewritten as

$$\arg\min_{X} \operatorname{rank}(X) \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M). \tag{3}$$
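For concreteness, the sampling operator $P_\Omega$ can be realized as a simple masking operation. The following NumPy sketch (the function and variable names are illustrative, not from the paper) keeps the observed entries and zeroes out the rest:

```python
import numpy as np

def sample_operator(G, mask):
    """P_Omega(G): keep the entries where mask is True, set the rest to zero."""
    return np.where(mask, G, 0.0)

# Toy example: a 3 x 3 matrix observed at four positions.
G = np.arange(1.0, 10.0).reshape(3, 3)
mask = np.array([[True, False, True],
                 [False, True, False],
                 [True, False, False]])
print(sample_operator(G, mask))
```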
Unfortunately, direct rank minimization in (3) is NP-hard [15]. As a result, the rank function must be replaced by a close surrogate that relaxes the problem in (3); for example, $\ell_1$-norm and nuclear-norm relaxations have been studied [12,14,16,17]. Candès et al. [14] introduced a convex relaxation of the non-convex problem in (3) by employing nuclear norm regularization, such that
$$\arg\min_{X} \|X\|_* \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M), \tag{4}$$
and showed that the optimal solution of (4) can be obtained by iteratively solving the following problems:

$$\begin{cases} \bar{X}^k = \operatorname{shrink}(X^k, \tau), \\ X^{k+1} = X^k + \delta\, P_\Omega(M - \bar{X}^k), \end{cases} \tag{5}$$

where $\delta$ indicates the step length, $k$ represents the iteration step, and $\operatorname{shrink}(X^k, \tau)$ is the soft singular value thresholding (SVT) operator with the predefined threshold value $\tau$. Inspired by SVT, several methods based on nonlinear SVT algorithms have been proposed [12,18,19,20].
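As an illustration of the iteration in (5), the following NumPy sketch soft-thresholds the singular values of the running iterate and then feeds the data back on the observed entries. It is a minimal sketch rather than the reference SVT implementation of [14]; the step length, threshold, and iteration count are illustrative placeholders.

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, delta=1.2, n_iter=200):
    """Minimal sketch of the singular value thresholding iteration in Eq. (5)."""
    X = np.where(mask, M, 0.0)
    X_bar = X.copy()
    for _ in range(n_iter):
        # shrink(X, tau): soft-threshold the singular values.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_bar = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Feed the observed data back with step length delta.
        X = X + delta * np.where(mask, M - X_bar, 0.0)
    return X_bar
```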
The method proposed by Candès et al. provided rigorous mathematical derivations of the problems and yielded excellent low-rank approximation of M. However, it demands substantial computational resources because it requires a singular value decomposition (SVD) in every iteration. To alleviate this burden, alternating minimization-based matrix factorization techniques have been explored. Assuming that the rank $r \ll \min(m,n)$ of $X$ is known, one seeks two factor matrices $A \in \mathbb{R}^{m \times r}$ and $B \in \mathbb{R}^{n \times r}$ from $M$ that minimize

$$\arg\min_{A,B} \|P_\Omega(M) - P_\Omega(AB^T)\|_F^2. \tag{6}$$
A basic alternating minimization scheme updates one factor matrix while fixing the other, such that
$$A^{k+1} = \arg\min_{A} \|P_\Omega(M) - P_\Omega(A (B^k)^T)\|_F, \qquad B^{k+1} = \arg\min_{B} \|P_\Omega(M) - P_\Omega(A^k B^T)\|_F. \tag{7}$$
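The alternating scheme in (7) can be sketched in NumPy as follows, solving each least-squares subproblem row by row over the observed entries only. The small ridge term and the variable names are illustrative assumptions, not part of the formulation above.

```python
import numpy as np

def als_sweep(M, mask, A, B, ridge=1e-8):
    """One alternating sweep of Eq. (7): update A with B fixed, then B with A fixed."""
    m, n = M.shape
    r = A.shape[1]
    for u in range(m):                      # each row of A sees only observed columns
        idx = mask[u, :]
        Bu = B[idx, :]
        A[u, :] = np.linalg.solve(Bu.T @ Bu + ridge * np.eye(r), Bu.T @ M[u, idx])
    for i in range(n):                      # each row of B sees only observed rows
        idx = mask[:, i]
        Ai = A[idx, :]
        B[i, :] = np.linalg.solve(Ai.T @ Ai + ridge * np.eye(r), Ai.T @ M[idx, i])
    return A, B
```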
Tanner and Wei proposed an alternating steepest-descent method to solve (7), which iteratively updates A and B using the steepest descent [13]. Hastie et al. [21] considered a nuclear norm-based matrix completion algorithm combined with the matrix factorization method, based on the following equation:
$$\|X\|_* = \min_{A,B:\; X = AB^T} \frac{1}{2}\left(\|A\|_F^2 + \|B\|_F^2\right). \tag{8}$$
Equation (8) implies that the nuclear norm of a matrix can be expressed as the minimum, over all factorizations $X = AB^T$, of half the sum of the squared Frobenius norms of its factor matrices. By substituting (8) into their matrix completion formulation, Hastie et al. obtained
$$\arg\min_{A,B} \|A\|_F^2 + \|B\|_F^2 \quad \text{s.t.} \quad P_\Omega(AB^T) = P_\Omega(M). \tag{9}$$
Subsequently, they employed alternating minimization techniques to determine the optimal factor matrices A and B. Since solving (9) does not require an SVD, and the sizes of the factor matrices are smaller than that of X, the algorithm is expected to run faster than SVT in (5). However, the algorithm proposed by Hastie et al. is highly dependent on an accurate estimate of the rank; an incorrect value can cause the loss of crucial low-rank information or unnecessary increases in computational time. Later, Yang et al. and Li et al. proposed non-negative matrix-factorization algorithms with an adaptive graph to mitigate sensitivity to noise or outliers [22,23]. Xu et al. combined rank-1 matrix completion with an adaptive local filter to simultaneously incorporate local information and low-rank features [24]. Specifically, Xu et al. incrementally added dominant singular triplets to X and integrated it with locally filtered outputs to obtain the completed matrix. Xiao et al. proposed the tight-and-flexible rank (TFR) function, a tunable non-convex surrogate that hugs the true rank more closely than existing convex or non-convex penalties, and integrated it into a proximal alternating minimization algorithm [17]. Depending on the data characteristics, deep learning-based matrix completion approaches can also be considered [6,8,25]. Specifically, deep learning-based methods offer significant advantages for highly sparse, nonlinear, and complex data matrices compared with the matrix factorization approaches introduced in this section.
This study concentrates on computing an accurate low-rank approximation of the incomplete matrix while maintaining computational efficiency. To attain this goal, a rank-restricted hierarchical alternating least squares algorithm with orthogonality and sparsity constraints is proposed. Specifically, the main contributions of the proposed algorithm are as follows:
  • A novel optimal relaxation of (3) that enforces adjustable sparsity and incorporates an orthogonality constraint is proposed, yielding more accurate and computationally efficient results than the procedure in (9). Specifically, the orthogonality constraint is included to enhance computational efficiency, while a new shrinkage function is derived to enforce sparsity in the solution.
  • To improve the computational efficiency within a single iteration, we apply hierarchical alternating minimization to the proposed model, enabling faster computation through column-wise updates rather than full-matrix updates. This method remains effective even with a relatively rough estimate of the rank and is faster than the procedure in (7).

3. Rank-Restricted Hierarchical Alternating Least Squares Algorithm

In this section, an efficient matrix completion algorithm based on hierarchical alternating least squares is proposed. In addition, orthogonality and sparsity constraints are incorporated to improve the accuracy of the completion.

3.1. Rank-Restricted Hierarchical Alternating Least Squares with Orthogonality Constraint

Here, the hierarchical alternating least squares algorithm for the matrix completion problem is derived. Consider the matrix completion problem in (4). By substituting (8) into (4), the Lagrange multiplier equation of (4) is given as follows:
$$L(A, B, \lambda) = \|P_\Omega(M) - P_\Omega(AB^T)\|_F^2 + \lambda\left(\|A\|_F^2 + \|B\|_F^2\right), \tag{10}$$

where $\lambda$ represents the Lagrange multiplier. Because the factor matrices $A \in \mathbb{R}^{m \times r}$ and $B \in \mathbb{R}^{n \times r}$ can be interpreted as $U\Sigma$ and $V$, respectively, where $X = U\Sigma V^T$ denotes the SVD of $X$, it is reasonable to impose an orthogonality constraint on $B$ in (10), such that

$$L(A, B, \lambda, \gamma) = \|P_\Omega(M) - P_\Omega(AB^T)\|_F^2 + \lambda\left(\|A\|_F^2 + \|B\|_F^2\right) + \gamma\|I_r - B^T B\|_F^2, \tag{11}$$

where $I_r \in \mathbb{R}^{r \times r}$ denotes the identity matrix and $\gamma$ represents another Lagrange multiplier [26,27]. The orthogonality constraint encourages $B$ to be orthogonal, an assumption used in the derivation of the proposed low-rank optimal relaxation method; details are provided in Section 3.2.
Remark 1.
The orthogonality constraint in (11) does not strictly enforce orthogonality; rather, it helps to avoid the zero-lock problem, which can cause intermediate results to stall at zero values and lead to convergence to a suboptimal solution [28]. Therefore, the orthogonality term in (11) employs an adjustable degree of orthogonality, imposing a relaxed constraint on B. The Supplementary Materials present experimental results demonstrating the role of the constraint by comparing the number of iterations required for convergence with and without it.
Instead of directly computing the optimal solution of (11) using the classical alternating least squares method, we employ the hierarchical alternating least squares technique to avoid matrix inversions [29]. Let $A = [a_1, a_2, \ldots, a_r]$ and $B = [b_1, b_2, \ldots, b_r]$. The column-wise minimization of (11) can be written as follows:

$$L(A, B, \lambda, \gamma) = \left\|P_\Omega\!\left(M - \sum_{i=1}^{r} a_i b_i^T\right)\right\|_F^2 + \lambda \sum_{i=1}^{r}\left(\|a_i\|_2^2 + \|b_i\|_2^2\right) + \gamma\left(\sum_{j=1}^{r}\sum_{i=1}^{r}\|b_i^T b_j\|_2^2 - 2\sum_{i=1}^{r}\|b_i\|_2^2 + r\right). \tag{12}$$

The optimal solution of the column-wise minimization in (12) is obtained by taking derivatives with respect to each column $a_i$ and $b_i$ for $1 \le i \le r$ in an alternating manner. For instance, the optimal $i$-th columns $a_i$ and $b_i$ are obtained by solving the alternating least squares updates derived from (12), such that

$$a_i = \alpha M_i b_i, \qquad b_i = \beta M_i^T a_i, \tag{13}$$

where $M_i = M - AB^T + a_i b_i^T$, and the constants $\alpha$ and $\beta$ in (13) are defined as

$$\alpha = \frac{1}{b_i^T b_i + \lambda}, \qquad \beta = \frac{1}{a_i^T a_i + \lambda + \gamma\|I_r - B^T B\|_F}. \tag{14}$$

For large-scale matrix completion problems, a further reduction in computational cost is crucial. To this end, a factor-by-factor procedure is preferred to the column-by-column procedure in (13), as it is better suited to modern parallel computer architectures. Consequently, the updates for $a_i$ and $b_i$ can be obtained as

$$a_i \leftarrow \alpha M_i b_i = \alpha\left([MB]_i - A[B^T B]_i + a_i b_i^T b_i\right), \qquad b_i \leftarrow \beta M_i^T a_i = \beta\left([M^T A]_i - B[A^T A]_i + b_i a_i^T a_i\right), \tag{15}$$
where the matrix operations in (15) can be pre-computed. The detailed procedure of this algorithm is described in the following section.
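The factor-by-factor updates in (13)–(15) translate directly into a column-wise sweep. The sketch below is a rough NumPy illustration under the assumptions of this section (the matrix passed as `M` stands for the current estimate of the completed matrix, and the hyperparameter values are placeholders):

```python
import numpy as np

def hals_sweep(M, A, B, lam=1.0, gamma=0.1):
    """One hierarchical ALS sweep over the columns of A and B, following Eqs. (13)-(15)."""
    r = A.shape[1]
    MB, BtB = M @ B, B.T @ B                # pre-computable products of Eq. (15)
    for i in range(r):
        alpha = 1.0 / (B[:, i] @ B[:, i] + lam)                     # Eq. (14)
        A[:, i] = alpha * (MB[:, i] - A @ BtB[:, i] + A[:, i] * (B[:, i] @ B[:, i]))
    MtA, AtA = M.T @ A, A.T @ A
    orth = np.linalg.norm(np.eye(r) - B.T @ B)                      # ||I_r - B^T B||_F
    for i in range(r):
        beta = 1.0 / (A[:, i] @ A[:, i] + lam + gamma * orth)       # Eq. (14)
        B[:, i] = beta * (MtA[:, i] - B @ AtA[:, i] + B[:, i] * (A[:, i] @ A[:, i]))
    return A, B
```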

3.2. Sparsity Constraint and Imposing the Boundary Condition

In this section, we discuss some performance improvement techniques for the proposed algorithm. One technique is to incorporate a sparsity constraint. The sparsity constraint enhances the quality of image denoising or filtering [30,31], and it is an essential prerequisite for recommender system problems [1]. Specifically, rather than truncating the SVD of A with a predefined threshold, it is reasonable to enforce sparsity by applying a suitable nonlinear element-wise projection or filtering function, known as a shrinkage function, with the same threshold, because the missing entries of the incomplete matrix act like sparse, high-magnitude noise when constructing $AB^T$. In this regard, we propose a novel shrinkage function for the sparsity constraint in the matrix completion problem.
Consider the optimization problem of the surrogate function obtained from (10), defined as follows:
$$f_{\mu,p}(A, B) = \frac{1}{p}\|AB^T - M\|_F^p + \frac{\mu}{p}\left(\|A\|_F^p + \|B\|_F^p\right), \tag{16}$$
for some fixed $\mu > 0$. Then, the optimal solution to the matrix completion problem with a sparsity constraint can be obtained by solving

$$\arg\min_{A,B} f_{\mu,p}(A, B), \quad \text{subject to} \quad P_\Omega(M) = P_\Omega(AB^T). \tag{17}$$
Proposition 1.
Let $A = U\Sigma$ under the assumption in the previous section. Then, the solution of $f_{\mu,p}(A, B)$ is obtained by applying the shrinkage operator $D_\gamma(\Sigma, \tau, d)$ to $X$, which is defined as

$$D_\gamma(\Sigma, \tau, d) = \frac{\sigma_i^{d+1}}{\sigma_i^d + \tau}, \tag{18}$$

where $\sigma_i$, $1 \le i \le r$, is the $i$-th diagonal element of $\Sigma$, $\tau = \mu^{\frac{1}{p-1}}$, and $d = \frac{p}{p-1}$.
Proof. 
Taking the derivative of $f_{\mu,p}(A, B)$ with respect to $B$ while keeping $A$ fixed yields

$$\frac{\partial f_{\mu,p}(A, B)}{\partial B} = A^T(AB^T - M)^{p-1} + \mu (B^T)^{p-1} = 0. \tag{19}$$

Rearranging (19) and taking the $(p-1)$-th root of the equation gives

$$(A^T)^{\frac{1}{p-1}}(AB^T - M) + \mu^{\frac{1}{p-1}} B^T = \left[(A^T)^{\frac{1}{p-1}} A + \mu^{\frac{1}{p-1}} I\right] B^T - (A^T)^{\frac{1}{p-1}} M = 0. \tag{20}$$

Hence,

$$B^T = \left[\tau I + (A^T)^{\frac{1}{p-1}} A\right]^{-1} (A^T)^{\frac{1}{p-1}} M, \tag{21}$$

where $\tau = \mu^{\frac{1}{p-1}}$. Let $\Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_r)$ and $U = [u_1, \ldots, u_r]$. If we rewrite (21) as a sum of vectors such that

$$B^T = \sum_{i=1}^{r} \frac{\sigma_i^{\frac{1}{p-1}}}{\tau + \sigma_i^{\frac{1}{p-1}} \sigma_i}\, u_i^T M, \tag{22}$$

then $X$ is obtained by

$$X = AB^T = \sum_{i=1}^{r} u_i \frac{\sigma_i^{d}}{\tau + \sigma_i^{d}}\, u_i^T M. \tag{23}$$

Let the SVD of $M$ be $M = Z\Phi W^T$, where $Z = [z_1, \ldots, z_r]$, $\Phi = \operatorname{diag}(\phi_1, \ldots, \phi_r)$, and $W = [w_1, \ldots, w_r]$. Because we assumed earlier in this section that $X \approx M$, we can assume that $u_i \approx z_i$, $\sigma_i \approx \phi_i$, and $w_i \approx b_i$ for $1 \le i \le r$. Thus, $X$ is computed as

$$X \approx \sum_{i=1}^{r} u_i \frac{\sigma_i^{d+1}}{\sigma_i^d + \tau}\, b_i^T. \tag{24}$$

Therefore, based on (24), the solution of $f_{\mu,p}(A, B)$ is obtained by applying the shrinkage operator defined in (18), and that completes the proof.    □
Proposition 2.
The solution of (17) can be obtained by alternately solving the following two equations:

$$X^k = \bar{D}_\gamma(Y^{k-1}, \tau, d), \qquad Y^k = Y^{k-1} + \delta\, P_\Omega(M - X^k), \tag{25}$$

where $\delta$ is the step length, $\bar{D}_\gamma(Y^{k-1}, \tau, d)$ is defined as

$$\bar{D}_\gamma(Y^{k-1}, \tau, d) = U^{k-1} \bar{\Sigma} (B^{k-1})^T, \tag{26}$$

and $\bar{\Sigma}$ is the diagonal matrix obtained from (18) with predefined $\tau$ and $d$.
Proof. 
See [14].    □
Remark 2.
The shrinkage function (18) sharply filters out small singular values $\sigma_i$,

$$D_\gamma(\Sigma, \tau, d) = \frac{\sigma_i}{\tau \sigma_i^{-d} + 1} \approx 0,$$

when $d$ is sufficiently large. Conversely, for large singular values, since $\tau \sigma_i^{-d} \approx 0$, $D_\gamma(\Sigma, \tau, d) \approx \sigma_i$. Figure 1 shows this behavior for singular values uniformly distributed in $[0, 3]$; as $d$ increases, the curve approaches that of the Iterative Hard Thresholding (IHT) operator, which enforces sparsity in the results [32].
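A small numerical illustration of this behavior; the grid of singular values and the value τ = 0.5 below are arbitrary choices made for the sake of the example:

```python
import numpy as np

def shrink(sigma, tau, d):
    """Shrinkage operator of Eq. (18), applied element-wise to singular values."""
    sigma = np.asarray(sigma, dtype=float)
    return sigma ** (d + 1) / (sigma ** d + tau)

sigmas = np.linspace(0.0, 3.0, 7)
for d in (1, 5, 15):
    print(f"d = {d:2d}:", np.round(shrink(sigmas, tau=0.5, d=d), 4))
# Small singular values are pushed toward zero while large ones are nearly
# unchanged; as d grows, the curve approaches hard thresholding (IHT-like).
```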
The proposed hierarchical alternating least squares minimization algorithm for the matrix completion problem (HALMC) is summarized in Algorithm 1, and Figure 2 provides a high-level workflow for additional clarity.
Algorithm 1  $X^{k+1} = \mathrm{HALMC}(M, \Omega, \gamma, \lambda, \delta, \tau, d, r, \epsilon)$
  1:  $X^0 = P_\Omega(M)$
  2:  $A^0 = U^0(:, 1{:}r)\,\Sigma^0(1{:}r, 1{:}r)$ and $B^0 = V^0(:, 1{:}r)$, where $[U^0, \Sigma^0, V^0] = \operatorname{svd}(M)$
  3:  for k = 1, 2, … do
  4:      $W = (X^k)^T A^k$, $Y = (A^k)^T A^k$, and compute $\beta$ defined in (14)    ▷ $O(|\Omega|r + mr^2)$
  5:      for j = 1, 2, …, r do
  6:          $b_j = \beta\left(W_j - B Y_j + b_j a_j^T a_j\right)$    ▷ $O(nr + m^2)$
  7:      end for
  8:      $P = X^k B^k$, $Q = (B^k)^T B^k$, and compute $\alpha$ defined in (14)    ▷ $O(|\Omega|r + nr^2)$
  9:      for j = 1, 2, …, r do
10:          $a_j = \alpha\left(P_j - A Q_j + a_j b_j^T b_j\right)$    ▷ $O(mr + n^2)$
11:      end for
12:      $[U, \Sigma, V] = \operatorname{svd}(A^k)$    ▷ $O(m^2 r)$
13:      $\bar{\Sigma} = \operatorname{shrink}(\Sigma, \tau, d)$, where the shrinkage operator is defined in (18)
14:      Set $A^k = U \bar{\Sigma} V^T$    ▷ $O(mr^2)$
15:      $X^{k+1} = X^k + \delta\, P_\Omega(M - A^k (B^k)^T)$    ▷ $O(|\Omega|r)$
16:      if  $\|X^{k+1} - X^k\|_F^2 \,/\, \|X^k\|_F^2 \le \epsilon$  then
17:          break
18:      end if
19:  end for
20:  return  $X^{k+1}$
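A rough NumPy sketch of Algorithm 1 is given below. It follows the pseudocode step by step but is only an illustrative reimplementation, not the MATLAB code used in the experiments; the default parameter values are placeholders rather than the tuned settings reported in Section 4.

```python
import numpy as np

def shrink_singular_values(s, tau, d):
    """Shrinkage of Eq. (18) applied to a vector of singular values."""
    return s ** (d + 1) / (s ** d + tau)

def halmc(M, mask, r, lam=1.0, gamma=0.1, delta=1.0, tau=0.5, d=15,
          eps=1e-4, max_iter=500):
    """Illustrative sketch of Algorithm 1 (HALMC)."""
    X = np.where(mask, M, 0.0)                     # X_0 = P_Omega(M)
    U0, s0, Vt0 = np.linalg.svd(X, full_matrices=False)
    A = U0[:, :r] * s0[:r]                         # A_0 = U_0 Sigma_0 (rank-r truncation)
    B = Vt0[:r, :].T                               # B_0 = V_0
    for _ in range(max_iter):
        X_old = X.copy()
        # Column-wise updates of B (lines 4-7) and A (lines 8-11).
        W, Y = X.T @ A, A.T @ A
        orth = np.linalg.norm(np.eye(r) - B.T @ B)
        for j in range(r):
            beta = 1.0 / (A[:, j] @ A[:, j] + lam + gamma * orth)
            B[:, j] = beta * (W[:, j] - B @ Y[:, j] + B[:, j] * (A[:, j] @ A[:, j]))
        P, Q = X @ B, B.T @ B
        for j in range(r):
            alpha = 1.0 / (B[:, j] @ B[:, j] + lam)
            A[:, j] = alpha * (P[:, j] - A @ Q[:, j] + A[:, j] * (B[:, j] @ B[:, j]))
        # Shrink the singular values of the small factor A only (lines 12-14).
        Ua, sa, Vta = np.linalg.svd(A, full_matrices=False)
        A = (Ua * shrink_singular_values(sa, tau, d)) @ Vta
        # Feed the data back on the observed entries (line 15).
        X = X + delta * np.where(mask, M - A @ B.T, 0.0)
        # Relative-change stopping rule (lines 16-18).
        if np.linalg.norm(X - X_old) ** 2 <= eps * np.linalg.norm(X_old) ** 2:
            break
    return X                                        # X_{k+1}
```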
Another technique is to impose a boundary condition on M as a preprocessing step for HALMC. It is well known that images reconstructed from the truncation of singular values are affected by the boundary values [33]. Therefore, considering the boundary condition may improve the performance of image completion, for example, by reducing ringing artifacts, thus saving computational resources. For a detailed mathematical formulation imposing a boundary condition on low-rank matrix recovery, readers may refer to [34,35]. For the same reason, in the recommender system application, it is possible to improve the performance of HALMC by considering the boundary condition. Typical boundary conditions include zero (i.e., Dirichlet), periodic, and reflective (i.e., Neumann) boundary conditions. Among them, the reflective boundary condition, where the entries outside the matrix form a mirror image of the interior, is preferred in many image processing applications because it prevents discontinuities near the boundary. For example, a $3 \times 3$ matrix $M$ and its reflective extension $M_{\mathrm{ext}}$ of radius 3 are
$$M = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \qquad M_{\mathrm{ext}} = \begin{bmatrix} 9 & 8 & 7 & 7 & 8 & 9 & 9 & 8 & 7 \\ 6 & 5 & 4 & 4 & 5 & 6 & 6 & 5 & 4 \\ 3 & 2 & 1 & 1 & 2 & 3 & 3 & 2 & 1 \\ 3 & 2 & 1 & 1 & 2 & 3 & 3 & 2 & 1 \\ 6 & 5 & 4 & 4 & 5 & 6 & 6 & 5 & 4 \\ 9 & 8 & 7 & 7 & 8 & 9 & 9 & 8 & 7 \\ 9 & 8 & 7 & 7 & 8 & 9 & 9 & 8 & 7 \\ 6 & 5 & 4 & 4 & 5 & 6 & 6 & 5 & 4 \\ 3 & 2 & 1 & 1 & 2 & 3 & 3 & 2 & 1 \end{bmatrix}.$$
In this study, a reflective boundary condition is applied to each incomplete matrix before executing HALMC. Subsequently, the restored matrix is cropped back to its original size.
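The reflective extension above corresponds to symmetric padding of the matrix; a minimal NumPy sketch (the crop indices assume a padding radius of 3, as in the example):

```python
import numpy as np

M = np.arange(1.0, 10.0).reshape(3, 3)
M_ext = np.pad(M, pad_width=3, mode='symmetric')   # reflective extension of radius 3
print(M_ext.astype(int))                            # reproduces M_ext shown above

# After running the completion algorithm on the extended matrix, the result
# would be cropped back to the original size, e.g. X = X_ext[3:-3, 3:-3].
```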

3.3. Computational and Memory Complexity

HALMC performs an SVD at each iteration, but the decomposition is applied to the smaller factor matrix $A^k$ rather than to $X^k$, thereby reducing the computational cost. If HALMC converges at iteration $k$, the overall complexity is approximately $O(m^2 r k)$ flops. Detailed operation counts appear as comments in the pseudocode. For the memory cost analysis, we assume that only the entries of $X$ in $\Omega$ are required, resulting in a cost of $O(|\Omega|)$. The memory required for the intermediate factor matrices is $O((m+n)r)$. Therefore, the total memory cost is $O(|\Omega| + (m+n)r)$.

4. Numerical Experiments

This section presents numerical experiments designed to demonstrate the efficiency and accuracy of the proposed algorithm. For performance comparison with HALMC, we evaluated the following reference algorithms: Singular Value Thresholding (SVT) [14], Modified Schatten 2/3-Norm Minimization with Reweighting (TSNMR) [12], Rank-Restricted Efficient Minimum-Margin Matrix Factorization: SoftImpute-ALS (SoftImp) proposed by Hastie et al. [21] (p. 3375), Low-Rank Matrix Completion (LAMC) proposed by Xu et al. [24], and matrix completion with Tight and Flexible Rank (TFR) proposed by Xiao et al. [17] (https://jin-liangxiao.github.io/, accessed on 1 August 2025). Note that SVT, TSNMR, and TFR focus on developing novel shrinkage functions for matrix completion, while SoftImp and LAMC employ rank-restricted matrix factorization. All experiments were implemented in MATLAB version 9.10.0.1710957. The algorithms were tested on two common applications, image completion and recommender systems, and were executed on an Intel Core i9-11900K processor with 64 GB of memory.

4.1. Image Completion Problem

One of the most popular applications of matrix completion is image completion. The image completion problem, also known as image inpainting, involves repairing defects in individual pixel values or regions of digital images caused by dirt, scratches, or cracks on a charge-coupled device (CCD), or by impulse-type noise [3,36]. Given the locations of the contaminated pixels, these pixels are treated as missing. An intuitive approach to restoring the missing pixels is to replace their values with values interpolated from adjacent pixels. Bertalmio et al. enhanced this approach by iteratively diffusing the values from the surrounding information, which exploits the idea of the thermal diffusion equation in physics [37]. Later, the time-consuming nature of the algorithm proposed by Bertalmio et al. was further improved by considering a total variation model [38]. Another strategy for image completion involves exploiting the low-rank structure inherent in many images. Low-rank image completion algorithms predict missing pixels using a low-rank approximation of the partially observed pixels from the image matrix. In this work, we focus on reconstructing incomplete images based on low-rank image completion methods. We also demonstrate that the proposed algorithm is effective for image completion problems. For the experiments, we used popular natural images with randomly generated missing pixels to evaluate the performance of the algorithms.
Figure 3a, Figure 4a, Figure 5a and Figure 6a depict the natural images used in our experiments. The images have dimensions of $128 \times 128$, $256 \times 256$, $512 \times 512$, and $1024 \times 1024$, respectively. To compare the performance of the algorithms, we measured the execution time, the number of iterations to converge, the mean-squared error (MSE), the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the feature similarity index (FSIM) [39]. PSNR is defined as

$$\mathrm{PSNR} = 20 \log_{10} \frac{\max(\bar{X})}{\sqrt{\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(\bar{X}(i,j) - X(i,j)\right)^2}}, \tag{27}$$

and SSIM is defined as

$$\mathrm{SSIM} = \frac{\left(2\bar{\mu}(\bar{X})\bar{\mu}(X) + c_1\right)\left(2\bar{\sigma}(\bar{X}, X) + c_2\right)}{\left(\bar{\mu}(\bar{X})^2 + \bar{\mu}(X)^2 + c_1\right)\left(\bar{\sigma}(\bar{X})^2 + \bar{\sigma}(X)^2 + c_2\right)}. \tag{28}$$

In (27) and (28), $\bar{X}$ represents the original image without missing pixels, while $X$ denotes the image reconstructed by the algorithms. The term $\bar{\mu}(M)$ denotes the average of an arbitrary matrix $M$, $\bar{\sigma}(M)$ denotes its standard deviation, and $\bar{\sigma}(M, N)$ represents the covariance between two arbitrary matrices $M$ and $N$. The constants $c_1$ and $c_2$ are included to stabilize the division in cases where the denominator is small. While MSE and PSNR provide basic pixel-wise error measurements, SSIM evaluates structural fidelity, and FSIM captures perceptually important features, particularly in textured regions. Therefore, higher SSIM and FSIM values, along with lower MSE and higher PSNR, indicate better perceptual and numerical similarity to the ground truth.
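The following NumPy sketch computes MSE, PSNR as in (27), and the global-statistics form of SSIM in (28); the constants c1 and c2 are illustrative placeholders, and FSIM is omitted because it requires a more involved feature-based pipeline.

```python
import numpy as np

def mse(X_true, X_hat):
    return np.mean((X_true - X_hat) ** 2)

def psnr(X_true, X_hat):
    """PSNR of Eq. (27)."""
    return 20.0 * np.log10(X_true.max() / np.sqrt(mse(X_true, X_hat)))

def ssim_global(X_true, X_hat, c1=1e-4, c2=9e-4):
    """Global-statistics SSIM of Eq. (28) (no local windows)."""
    mu1, mu2 = X_true.mean(), X_hat.mean()
    cov = np.mean((X_true - mu1) * (X_hat - mu2))
    num = (2 * mu1 * mu2 + c1) * (2 * cov + c2)
    den = (mu1 ** 2 + mu2 ** 2 + c1) * (X_true.var() + X_hat.var() + c2)
    return num / den
```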
To simulate incomplete images, we randomly selected missing pixels with various missing rates. Specifically, we randomly removed 30%, 50%, and 70% of the pixels from the entire image area and treated them as missing. To determine the optimal hyperparameters, we conducted empirical experiments by varying one hyperparameter at a time while keeping the others fixed. Based on these experiments, we set γ = 0.1 as defined in (12), the truncation level to 0.25m for both HALMC and SoftImp (where m is the size of the square test image), d = 15 as defined in (18), and the boundary-extension radius to 3. For a detailed procedure for selecting the hyperparameters of HALMC, readers may refer to the Supplementary Materials.
To ensure a fair comparison among all algorithms, we define the stopping criterion as follows:
$$\frac{\|X^k - X^{k-1}\|_F^2}{\|X^k\|_F^2} \le \epsilon, \tag{29}$$

where $\epsilon = 1.0 \times 10^{-4}$ is the predefined threshold. All experimental results were averaged over 20 independent trials.
Table 1, Table 2, Table 3 and Table 4 present the experimental results of all compared algorithms. To highlight the best performance, the top two results in each table are boldfaced. Since TSNMR produces meaningless output due to its inaccuracy and failure to converge, its results were excluded from the analysis. According to the experimental results, HALMC, TFR, and SVT achieved the highest visual quality across all test images and incompletion ratios. In some cases—such as ‘indian’ and ‘barbara’ under specific incompletion ratios—SVT produced highly accurate image restorations. TFR, in particular, achieved the best accuracy for the ‘indian’ image with 30% missing pixels. However, overall, the most accurate results across the experiments were achieved by HALMC. Specifically, HALMC consistently outperformed the other methods in terms of overall pixel-level differences (measured by MSE and PSNR) and structural differences (measured by SSIM and FSIM). Moreover, HALMC produced stable outputs regardless of the incompletion ratios, unlike SVT and TFR, which tended to yield less accurate results as the missing pixel rate increased. LAMC showed relatively higher FSIM values compared to the other accuracy metrics, such as MSE and PSNR. It can be interpreted that the local-filtering module in LAMC improved the local feature similarities, particularly under severe image degradation.
In terms of execution time, HALMC demonstrated significant efficiency while maintaining high accuracy. Although SoftImp achieved the fastest execution time, its performance degraded more rapidly than that of HALMC as the incompletion ratio increased. Overall, considering execution time, robustness, and accuracy in image completion, HALMC emerges as the most effective algorithm among those evaluated. Example output images generated by the algorithms for the case of 50% missing pixels are shown in Figure 3c–g, Figure 4c–g, Figure 5c–g and Figure 6c–g.

4.2. Recommender System

Another application of matrix completion is the recommender system. The primary goal of recommender systems is to provide personalized item recommendations to users based on their explicit or implicit preference information. Specifically, user preferences are typically estimated using ratings of similar items, indicating the user’s affinity for related items. As the rating data of two entities (users and items) are often arranged in matrix form, and users tend to rate only a small subset of items, estimating the preferences can be considered as predicting missing entries of a large and sparse matrix. To address this challenge, matrix completion algorithms have been studied extensively for decades. One widely adopted approach involves a low-rank approximation of the rating matrix, providing an effective solution for predicting missing entries and enhancing the accuracy of recommender systems [1,2].
Low-rank matrix completion algorithms predict missing values by using a low-rank approximation of the partially observed rating values in the rating matrix. Given a set of users $u \in \{1, \ldots, m\}$ and items $i \in \{1, \ldots, n\}$, if we let $y_{ui}$ be the rating of user $u$ for item $i$, then the rating matrix $X \in \mathbb{R}^{m \times n}$ can be defined as

$$[P_\Omega(X)]_{ui} = \begin{cases} y_{ui}, & \text{if } (u,i) \in \Omega, \\ 0, & \text{otherwise}, \end{cases} \tag{30}$$

where $\Omega = \{(u,i) \mid y_{ui} \text{ is observed}\}$. Hence, the proposed low-rank matrix completion algorithm is applicable to recommender systems to fill in the missing rating values. To measure the performance of the proposed algorithm on recommender systems, we used the MovieLens 100k dataset, which has a rating matrix $X \in \mathbb{R}^{943 \times 1682}$, the MovieLens 1M dataset, which has a rating matrix $X \in \mathbb{R}^{6040 \times 3952}$, and the Bookcrossing dataset, which has a rating matrix of size $1295 \times 17{,}384$. The MovieLens 100k dataset has 97.48% unrated entries, the MovieLens 1M dataset has 98.32%, and the Bookcrossing dataset has 99.95%. The rating values $y_{ui}$ range from 1 to 5 for MovieLens 100k and MovieLens 1M, and from 1 to 10 for Bookcrossing. To evaluate performance, we randomly chose 20%, 40%, and 60% of the non-zero rating values in the datasets and treated them as missing values. After completing the rating matrices, we calculated the mean absolute error (MAE) and the root mean square error (RMSE) against the ground truth, defined as follows [2]:
$$\mathrm{MAE} = \frac{\sum_{u=1}^{m}\sum_{i=1}^{n}\left|P_{\bar{\Omega}}(X^*) - P_{\bar{\Omega}}(X)\right|}{N}, \qquad \mathrm{RMSE} = \sqrt{\frac{\sum_{u=1}^{m}\sum_{i=1}^{n}\left(P_{\bar{\Omega}}(X^*) - P_{\bar{\Omega}}(X)\right)^2}{N}}. \tag{31}$$

Here, $\bar{\Omega}$ represents the set of randomly chosen missing values to be predicted, $X^*$ indicates the ground-truth rating matrix, and $N$ denotes the number of missing values in $\bar{\Omega}$. The lower the MAE, the closer the predicted ratings are to the true user ratings. As in the image completion experiments, the hyperparameters of the algorithms, as well as common parameters such as the stopping criterion, were chosen empirically.
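A brief NumPy sketch of constructing the observed rating matrix in (30) and evaluating MAE and RMSE as in (31) over held-out entries; the function names and arguments are illustrative.

```python
import numpy as np

def build_rating_matrix(triples, m, n):
    """P_Omega(X) of Eq. (30): zero everywhere except the observed (user, item) ratings."""
    X = np.zeros((m, n))
    for u, i, y in triples:
        X[u, i] = y
    return X

def mae_rmse(X_true, X_pred, heldout_mask):
    """MAE and RMSE of Eq. (31), computed over the held-out ratings only."""
    diff = (X_true - X_pred)[heldout_mask]
    return np.abs(diff).mean(), np.sqrt(np.mean(diff ** 2))
```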
Table 5, Table 6 and Table 7 present the experimental results. As with the image completion experiments, SoftImp demonstrated relatively fast computation speed but yielded unsatisfactory accuracy. SVT produced highly accurate results; however, it suffered from excessive computation time, which makes it less viable for large recommender system datasets. Additionally, SVT showed limited robustness in terms of convergence. TSNMR achieved excellent accuracy across nearly all recommender system datasets, but it failed to converge within the predefined maximum number of iterations. The convergence history for TSNMR is shown in the Supplementary Materials. This resulted in prohibitively high computational costs, rendering TSNMR impractical for large-scale applications. In contrast, HALMC consistently demonstrated the best execution speed across all recommender system datasets and varying missing rates, making it well-suited for large datasets. Although HALMC did not achieve the highest accuracy in all cases, its significantly faster execution speed and reasonable accuracy make it a practical and scalable option for a recommender system.
Consequently, HALMC stands out as the most attractive algorithm for recommender system applications, offering a strong balance between accurate restoration of missing entries and significantly reduced computation time compared to other matrix completion algorithms.

5. Conclusions and Future Works

This study introduces a rank-restricted hierarchical alternating least squares algorithm for matrix completion applications. Typical algorithms for matrix completion often demand substantial computational resources while iteratively computing the extremely expensive SVD, particularly when using large-sized images. To overcome these difficulties, we employ a rank-restricted matrix factorization with an orthogonality constraint using hierarchical alternating least squares. Additionally, we propose a novel shrinkage function of singular values to enforce the sparsity constraint. The experimental results demonstrate that the proposed algorithm restores an incomplete matrix significantly faster than existing methods, while maintaining similar or even superior accuracy in image completion problems and recommender systems. In future work, we plan to explore two directions to enhance the applicability and scalability of the proposed algorithm. First, we aim to incorporate an adaptive selection of the hyperparameters to enable more robust optimization. Second, to further address the computational demands of large-scale applications, we intend to develop a scalable implementation of the proposed algorithm. This includes parallelization strategies using GPU computing and distributed frameworks.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15168876/s1, Figure S1: Empirical analysis for selecting the hyperparameters λ and τ used in HALMC with 30% randomly missing pixels. Figure S2: Empirical analysis for selecting the hyperparameters γ and boundary condition number used in HALMC with 30% randomly missing pixels. Figure S3: Empirical analysis for selecting the hyperparameters truncation level and d defined in (23) for HALMC with 30% randomly missing pixels. Truncation level is the ratio of the image size. Figure S4: Empirical analysis for selecting the hyperparameters λ and τ used in HALMC with 50% randomly missing pixels. Figure S5: Empirical analysis for selecting the hyperparameters γ and boundary condition number used in HALMC with 50% randomly missing pixels. Figure S6: Empirical analysis for selecting the hyperparameters truncation level and d defined in (23) for HALMC with 50% randomly missing pixels. Truncation level is the ratio of the image size. Figure S7: Empirical analysis for selecting the hyperparameters λ and τ used in HALMC with 70% randomly missing pixels. Figure S8: Empirical analysis for selecting the hyperparameters γ and boundary condition number used in HALMC with 70% randomly missing pixels. Figure S9: Empirical analysis for selecting the hyperparameters truncation level and d defined in (23) for HALMC with 70% randomly missing pixels. Truncation level is the ratio of the image size. Figure S10: Convergence history of TSNMR when MovieLens 100k dataset with 40% missing entries is used.

Funding

This work was supported by the Hankuk University of Foreign Studies Research Fund.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HALMC  Hierarchical Alternating Least Squares for Matrix Completion

References

  1. Ramlatchan, A.; Yang, M.; Liu, Q.; Li, M.; Wang, J.; Li, Y. A Survey of Matrix Completion Methods, for Recommendation Systems. Big Data Min. Anal. 2018, 1, 308–323. [Google Scholar] [CrossRef]
  2. Chen, Z.; Wang, S. A review on matrix completion for recommender systems. Knowl. Inf. Syst. 2022, 64, 1–34. [Google Scholar] [CrossRef]
  3. Jam, J.; Kendrick, C.; Walker, K.; Drouard, V.; Hsu, J.G.; Yap, M.H. A comprehensive review of past and present image inpainting methods. Comput. Vis. Image Underst. 2021, 203, 103147. [Google Scholar] [CrossRef]
  4. Li, J.; Li, M.; Fan, H. Image Inpainting Algorithm Based on Low-Rank Approximation and Texture Direction. Math. Prob. Eng. 2014. [Google Scholar] [CrossRef]
  5. Xu, J.; Chen, Y.; Zhang, X. Color image inpainting based on low-rank quaternion matrix factorization. J. Ind. Manag. Optim. 2024, 20, 825–837. [Google Scholar] [CrossRef]
  6. Fan, J.; Cheung, J. Matrix completion by deep matrix factorization. Neural Net. 2018, 98, 34–41. [Google Scholar] [CrossRef] [PubMed]
  7. Xu, M.; Jin, R.; Zhou, Z. Speedup matrix completion with side information: Application to multi-label learning. In Proceedings of the NIPS’13: 26th International Conference on Neural Information Processing Systems, Lake Tahoe, NV USA, 5–10 December 2013; Volume 2, pp. 2301–2309. [Google Scholar]
  8. Radhakrishnan, A.; Stefanakis, G.; Belkin, M.; Uhler, C. Simple, fast, and flexible framework for matrix completion with infinite width neural networks. Proc. Natl. Acad. Sci. USA 2022, 119, e2115064119. [Google Scholar] [CrossRef] [PubMed]
  9. Candès, E.; Eldar, Y.; Strohmer, T. Phase retrieval via matrix completion. SIAM Rev. 2015, 52, 225–251. [Google Scholar] [CrossRef]
  10. Kalogerias, D.S.; Petropulu, A.P. Matrix Completion in Colocated MIMO Radar: Recoverability, Bounds & Theoretical Guarantees. IEEE Trans. Signal Process. 2013, 62, 309–321. [Google Scholar] [CrossRef]
  11. Sun, S.; Zhang, Y.D. 4D Automotive Radar Sensing for Autonomous Vehicles: A Sparsity-Oriented Approach. IEEE J. Sel. Top. Signal Process. 2021, 15, 879–891. [Google Scholar] [CrossRef]
  12. Ha, J.; Li, C.; Luo, X.; Wang, Z. Matrix completion via modified Schatten 2/3-norm. EURASIP J. Adv. Signal Process. 2023, 2023, 62. [Google Scholar] [CrossRef]
  13. Tanner, J.; Wei, K. Low rank matrix completion by alternating steepest descent methods. Appl. Comput. Harmon. Anal. 2016, 40, 417–429. [Google Scholar] [CrossRef]
  14. Cai, J.F.; Candès, E.J.; Shen, Z. A Singular Value Thresholding Algorithm for Matrix Completion. Siam J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  15. Recht, B.; Fazel, M.; Parrilo, P.A. Guaranteed minimum-rank solution of linear matrix equations via nuclear norm minimization. SIAM Rev. 2010, 52, 471–501. [Google Scholar] [CrossRef]
  16. Shi, Q.; Lu, H.; Cheung, Y. Rank-One Matrix Completion With Automatic Rank Estimation via L1-Norm Regularization. IEEE Trans. Neural Net. Learn. Sys. 2017, 29, 4744–4757. [Google Scholar] [CrossRef] [PubMed]
  17. Xiao, J.; Huang, T.; Deng, L.; Dou, H. A Novel Nonconvex Rank Approximation with Application to the Matrix Completion. East Asian J. Appl. Math. 2025, 15, 741–769. [Google Scholar] [CrossRef]
  18. Cui, A.; Peng, J.; Li, H. Exact recovery low-rank matrix via transformed affine matrix rank minimization. Neurocomputing 2018, 319, 1–12. [Google Scholar] [CrossRef]
  19. Josse, J.; Sardy, S. Adaptive shrinkage of singular values. Stat. Comput. 2016, 26, 715–724. [Google Scholar] [CrossRef]
  20. Zhang, H.; Gong, C.; Qian, J.; Zhang, B.; Xu, C.; Yang, J. Efficient recovery of low-rank via double nonconvex nonsmooth rank minimization. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2916–2925. [Google Scholar] [CrossRef]
  21. Hastie, T.; Mazumder, R.; Lee, J.D.; Zadeh, R. Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares. J. Mach. Learn. Res. 2015, 16, 3367–3402. [Google Scholar]
  22. Li, C.; Che, H.; Leung, M.F.; Liu, C.; Yan, Z. Robust multi-view non-negative matrix factorization with adaptive graph and diversity constraints. Inf. Sci. 2023, 634, 587–607. [Google Scholar] [CrossRef]
  23. Yang, X.; Che, H.; Leung, M.F.; Liu, C. Adaptive graph nonnegative matrix factorization with the self-paced regularization. Appl. Intell. 2023, 53, 15818–15835. [Google Scholar] [CrossRef]
  24. Xu, K.; Zhang, Y.; Dong, Z.; Li, Z.; Fang, B. Hybrid Matrix Completion Model for Improved Images Recovery and Recommendation Systems. IEEE Access 2021, 9, 149349–149359. [Google Scholar] [CrossRef]
  25. Xu, D.; Ruan, C.; Korpeoglu, E.; Kumar, S.; Achan, K. Rethinking Neural vs. Matrix-Factorization Collaborative Filtering: The Theoretical Perspectives. In Proceedings of the 38 th International Conference on Machine Learning, Virtual, 18–24 July 2021; PMLR: Cambridge, UK, 2021; Volume 139. [Google Scholar]
  26. Ding, C.; Li, T.; Peng, W.; Park, H. Orthogonal nonnegative matrix t-factorizations for clustering. In Proceedings of the KDD ’06: 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA, 20–23 August 2006; pp. 126–135. [Google Scholar]
  27. Gan, J.; Liu, T.; Li, L.; Zhang, J. Non-negative Matrix Factorization: A Survey. Comput. J. 2021, 64, 1080–1092. [Google Scholar] [CrossRef]
  28. Kimura, K.; Tanaka, Y.; Kudo, M. A Fast Hierarchical Alternating Least Squares Algorithm for Orthogonal Nonnegative Matrix Factorization. In Proceedings of the Sixth Asian Conference on Machine Learning, Ho Chi Minh City, Vietnam, 20–22 November 2015; Volume 39, pp. 129–141. [Google Scholar]
  29. Chichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S.I. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2009. [Google Scholar]
  30. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar]
  31. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transformation-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  32. Blumensath, T.; Davies, M.E. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274. [Google Scholar] [CrossRef]
  33. Hansen, P.C.; Nagy, J.G.; O’Leary, D.P. Deblurring Images Matrices, Spectra, and Filtering; SIAM: Philadelphia, PA, USA, 2006. [Google Scholar]
  34. Fan, Y.W.; Nagy, J.G. Synthetic boundary conditions for image deblurring. Linear Algebra Appl. 2011, 434, 2244–2268. [Google Scholar] [CrossRef]
  35. Zhou, X.; Zhou, F.; Bai, X.; Xue, B. A boundary condition based deconvolution framework for image deblurring. J. Comput. Appl. Math. 2014, 261, 14–29. [Google Scholar] [CrossRef]
  36. Nguyen, L.T.; Kim, J.; Shim, B. Low-Rank Matrix Completion: A Contemporary Survey. IEEE Access 2019, 7, 94215–94237. [Google Scholar] [CrossRef]
  37. Bertalmio, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; ACM Press/Addison-Wesley Co.: New Orleans, LA, USA, 2000; pp. 417–424. [Google Scholar]
  38. Chan, T.F.; Shen, J. Nontexture inpainting by curvature-driven diffusions. J. Vis. Commun. Image Represent. 2001, 12, 436–449. [Google Scholar] [CrossRef]
  39. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef]
Figure 1. Shrinkage function with different d.
Figure 2. Diagram of proposed algorithm.
Figure 3. Experimental results of the algorithm on ‘indian’ image with 50% missing pixels.
Figure 4. Experimental results of the algorithm on ‘boat’ image with 50% missing pixels.
Figure 5. Experimental results of the algorithm on ‘barbara’ image with 50% missing pixels.
Figure 6. Experimental results of the algorithm on ‘airport’ image with 50% missing pixels.
Table 1. Performance comparisons of the algorithms on the ‘indian’ image. The hyperparameters are set to λ = 1 and τ = 0.5 for HALMC.

Missing Rate  Method    Iter.   Time      MSE     PSNR     SSIM    FSIM
30%           incomp.   -       -         0.2976  12.9949  0.9971  0.9468
              SVT       25.1    0.0405    0.0161  25.7045  0.9999  0.9962
              SoftImp   8.2     0.0111    0.0169  25.4353  0.9998  0.9961
              LAMC      102.6   4.7897    0.3074  13.0869  0.9953  0.9695
              HALMC     50.5    0.0245    0.0173  25.3861  0.9999  0.9954
              TFR       16.2    0.0259    0.0129  26.6635  0.9999  0.9968
50%           incomp.   -       -         0.5015  10.7283  0.9931  0.9209
              SVT       21.4    0.0364    0.0431  21.3984  0.9996  0.9888
              SoftImp   19.3    0.0184    0.0405  21.6562  0.9996  0.9894
              LAMC      99.2    4.8873    0.3030  13.1135  0.9955  0.9667
              HALMC     26.7    0.0141    0.0377  21.9917  0.9997  0.9893
              TFR       32.0    0.0504    0.0319  22.7804  0.9998  0.9911
70%           incomp.   -       -         0.6990  9.2864   0.9875  0.9016
              SVT       500.0   0.7911    0.1527  16.0030  0.9979  0.9616
              SoftImp   17.3    0.0147    0.0947  17.9690  0.9988  0.9724
              LAMC      93.0    4.7726    0.2925  13.2024  0.9957  0.9633
              HALMC     22.0    0.0112    0.0880  18.2990  0.9991  0.9729
              TFR       67.1    0.1028    0.0908  18.2937  0.9993  0.9738
Table 2. Performance comparisons of the algorithms on ‘boat’ image. The hyperparameters are set to λ = 1 and τ = 0.5 for HALMC.

Missing Rate  Method    Iter.   Time      MSE     PSNR     SSIM    FSIM
30%           incomp.   -       -         0.2998  10.5942  0.9950  0.8889
              SVT       178.3   1.1371    0.0082  26.2113  0.9998  0.9915
              SoftImp   8.8     0.0347    0.0073  26.7184  0.9998  0.9905
              LAMC      71.7    18.7550   0.2675  11.5962  0.9928  0.9379
              HALMC     86.9    0.1484    0.0038  29.5150  0.9999  0.9959
              TFR       35.0    0.2162    0.0039  29.3969  0.9999  0.9960
50%           incomp.   -       -         0.4998  8.3751   0.9881  0.8663
              SVT       161.3   1.0556    0.0398  19.3630  0.9992  0.9625
              SoftImp   13.3    0.0481    0.0142  23.8497  0.9997  0.9819
              LAMC      66.8    18.4806   0.2593  11.6512  0.9929  0.9367
              HALMC     60.1    0.1064    0.0097  25.4886  0.9999  0.9879
              TFR       59.0    0.3657    0.0127  24.3281  0.9998  0.9850
70%           incomp.   -       -         0.6999  6.9127   0.9783  0.8561
              SVT       500.0   3.2191    0.0352  20.0334  0.9994  0.9578
              SoftImp   21.7    0.0578    0.0284  20.8240  0.9993  0.9632
              LAMC      61.4    18.7958   0.2395  11.8678  0.9931  0.9363
              HALMC     55.3    0.0458    0.0219  21.9476  0.9996  0.9689
              TFR       101.3   0.6076    0.0650  17.2651  0.9987  0.9345
Table 3. Performance comparisons of the algorithms on ‘barbara’ image. The hyperparameters are set to λ = 0.5 and τ = 0.3 for HALMC.

Missing Rate  Method    Iter.   Time      MSE     PSNR     SSIM    FSIM
30%           incomp.   -       -         0.2998  11.1183  0.9956  0.9342
              SVT       266.5   7.8123    0.0040  29.8465  0.9999  0.9988
              SoftImp   10.4    0.1647    0.0104  25.7064  0.9998  0.9949
              LAMC      96.0    97.9771   0.2373  12.8977  0.9945  0.9772
              HALMC     22.5    0.2248    0.0072  27.3277  0.9999  0.9970
              TFR       53.5    1.4965    0.0107  25.5943  0.9999  0.9958
50%           incomp.   -       -         0.5002  8.8956   0.9895  0.9026
              SVT       248.5   7.5049    0.0114  25.3140  0.9998  0.9961
              SoftImp   17.1    0.2751    0.0189  23.1023  0.9997  0.9907
              LAMC      89.0    99.2489   0.2256  12.9940  0.9946  0.9766
              HALMC     53.0    0.5171    0.0125  24.9203  0.9998  0.9952
              TFR       189.3   5.3344    0.0154  24.1440  0.9998  0.9939
70%           incomp.   -       -         0.6998  7.4372   0.9809  0.8717
              SVT       500.0   14.0607   0.0376  20.1641  0.9994  0.9832
              SoftImp   29.0    0.3896    0.0374  20.1628  0.9992  0.9804
              LAMC      78.9    98.6612   0.2235  12.9410  0.9945  0.9745
              HALMC     18.7    0.1928    0.0368  20.2316  0.9994  0.9820
              TFR       220.1   5.9861    0.0640  17.8678  0.9991  0.9714
Table 4. Performance comparisons of the algorithms on ‘airport’ image. The hyperparameters are set to λ = 0.5 and τ = 0.1 for HALMC.

Missing Rate  Method    Iter.   Time      MSE     PSNR     SSIM    FSIM
30%           incomp.   -       -         0.3001  14.2484  0.9978  0.9849
              SVT       500.0   68.8015   0.0125  28.0383  0.9999  0.9995
              SoftImp   12.0    1.2559    0.0173  26.6350  0.9998  0.9978
              LAMC      76.0    369.7370  0.1622  16.9316  0.9978  0.9867
              HALMC     56.1    1.8325    0.0105  28.8003  0.9999  0.9995
              TFR       72.7    10.6755   0.0137  27.6484  0.9999  0.9992
50%           incomp.   -       -         0.4997  12.0339  0.9948  0.9687
              SVT       500.0   72.2569   0.0296  24.3104  0.9998  0.9980
              SoftImp   19.0    1.7221    0.0288  24.4284  0.9997  0.9952
              LAMC      73.0    370.1930  0.1447  17.4208  0.9982  0.9872
              HALMC     54.4    1.7604    0.0191  26.2006  0.9999  0.9984
              TFR       81.0    11.9213   0.2502  15.5245  0.9981  0.9822
70%           incomp.   -       -         0.7002  10.5691  0.9905  0.9454
              SVT       500.0   71.9124   0.0523  21.8857  0.9997  0.9940
              SoftImp   29.0    2.1861    0.0494  22.0774  0.9995  0.9918
              LAMC      74.0    652.0080  0.1294  17.9027  0.9984  0.9876
              HALMC     37.0    1.2327    0.0393  23.0746  0.9997  0.9936
              TFR       82.0    11.8185   0.4199  13.2978  0.9957  0.9633
Table 5. Performance comparisons of the algorithms on ‘Movielens 100K’ dataset. The hyperparameters are set to λ = 1 and τ = 0.5 for HALMC.

Missing Rate  Method    Iter.    Time        MAE     RMSE
20%           incomp.   -        -           3.5280  3.7034
              SVT       210.90   30.1586     1.0732  1.2894
              SoftImp   50.00    4.3338      1.0772  1.2988
              TSNMR     500.00   88.6608     0.9778  1.1729
              HALMC     22.00    0.8978      0.9848  1.1927
              TFR       169.67   24.7094     1.1993  1.4351
40%           incomp.   -        -           3.5274  3.7032
              SVT       185.00   24.9133     1.1314  1.3569
              SoftImp   52.00    4.1866      1.1479  1.3789
              TSNMR     500.00   83.1942     1.0178  1.2298
              HALMC     22.40    0.9102      1.0152  1.2281
              TFR       162.50   23.3946     1.2675  1.5077
60%           incomp.   -        -           3.5291  3.7044
              SVT       155.20   21.0388     1.2425  1.4824
              SoftImp   56.00    4.4691      1.2589  1.5029
              TSNMR     500.00   84.1506     1.2208  1.4323
              HALMC     23.00    0.9184      1.0789  1.3025
              TFR       174.00   24.1157     1.3800  1.6268
Table 6. Performance comparisons of the algorithms on ‘Movielens 1M’ dataset. The hyperparameters are set to λ = 1 and τ = 1 for HALMC.

Missing Rate  Method    Iter.    Time        MAE     RMSE
20%           incomp.   -        -           3.5812  3.7516
              SVT       500.00   6399.8700   1.0244  1.2265
              SoftImp   51.00    268.8490    1.2549  1.4883
              TSNMR     500.00   6860.5200   0.9359  1.1240
              HALMC     29.00    21.6903     1.0621  1.2591
              TFR       27.00    358.8730    2.6127  2.8424
40%           incomp.   -        -           3.5819  3.7519
              SVT       500.00   6401.6500   1.0884  1.2999
              SoftImp   55.00    285.1180    1.3028  1.5395
              TSNMR     500.00   6849.0600   0.9466  1.1321
              HALMC     28.50    21.6172     1.0820  1.2827
              TFR       35.00    463.2450    2.4723  2.7358
60%           incomp.   -        -           3.5813  3.7515
              SVT       434.00   5529.1200   1.1519  1.3707
              SoftImp   60.00    296.5940    1.4163  1.6599
              TSNMR     500.00   6821.0300   0.9495  1.1301
              HALMC     29.50    22.2253     1.1213  1.3269
              TFR       46.00    606.4590    2.4454  2.6799
Table 7. Performance comparisons of the algorithms on ‘Bookcrossing’ dataset. The hyperparameters are set to λ = 1 and τ = 1 for HALMC.

Missing Rate  Method    Iter.    Time        MAE     RMSE
20%           incomp.   -        -           7.9513  8.1353
              SVT       500.00   3866.0800   6.4336  6.6708
              SoftImp   52.20    261.1780    5.4328  5.7538
              TSNMR     500.00   4775.2700   2.9857  3.3236
              HALMC     91.80    72.7517     3.0800  3.4051
              TFR       200.00   1750.8500   5.1806  5.4939
40%           incomp.   -        -           7.9601  8.1434
              SVT       500.00   3863.8100   6.9228  7.1359
              SoftImp   50.00    255.3880    5.7713  6.0657
              TSNMR     500.00   4682.0200   2.9569  3.3006
              HALMC     85.50    68.5895     3.6792  4.0054
              TFR       211.25   1756.0200   5.5961  5.8833
60%           incomp.   -        -           7.9568  8.1399
              SVT       500.00   3805.2900   7.3241  7.5232
              SoftImp   47.00    242.5140    6.3374  6.5932
              TSNMR     500.00   4802.9000   2.9562  3.2997
              HALMC     80.00    64.4128     5.7102  5.9981
              TFR       204.00   1815.1400   5.7946  6.0672
