Article

An Enhanced Latent Factor Recommendation Approach for Sparse Datasets of E-Commerce Platforms

1 The Faculty of Education, Shaanxi Normal University, Xi’an 710063, China
2 Department of Computer Science and Engineering, Hanyang University, Ansan 15577, Republic of Korea
3 Admissions and Employment Office, Xi’an University, Xi’an 710065, China
4 Graduate School of Engineering, ESIGELEC, Av. Galilée, 76801 Saint-Étienne-du-Rouvray, France
5 School of Public Administration, University of Electronic Science and Technology of China, Chengdu 610054, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Systems 2025, 13(5), 372; https://doi.org/10.3390/systems13050372
Submission received: 25 March 2025 / Revised: 7 May 2025 / Accepted: 9 May 2025 / Published: 13 May 2025
(This article belongs to the Special Issue Data-Driven Methods in Business Process Management)

Abstract

In certain newly established or niche e-commerce platforms, user–item interactions are often exceedingly sparse due to limited user bases or specialized product lines, posing significant obstacles to accurate personalized recommendations. To address these challenges, this paper proposes an enhanced recommendation approach based on a latent factor model. By leveraging factorization to uncover the hidden features of users and items and incorporating both user behavioral data and item attribute information, a multi-dimensional latent semantic space is constructed to more effectively capture the underlying relationships between user preferences and item properties. The method involves data preprocessing, model construction, user and item vectorization, and semantic-similarity-based recommendation generation. For empirical validation, we employ a real-world dataset gathered from an e-commerce platform, comprising 4645 ratings from 3445 users across 277 items in nine distinct categories. Experimental results demonstrate that, compared with conventional collaborative filtering methods, this approach achieves superior precision and recall even in highly sparse settings, showing stronger resilience under low-density conditions. These findings offer objective and feasible insights for advancing personalized recommendation techniques in newly established or niche e-commerce platforms.

1. Introduction

For personalized recommendation systems on e-commerce platforms, the efficacy of the recommendation algorithm is crucial to the quality of content delivery, and successful implementation depends on selecting an appropriate algorithm. Contemporary recommendation methodologies encompass content-based approaches [1,2], association rule techniques [3], and collaborative filtering strategies [4]. Among these, collaborative filtering has emerged as the most widely adopted and thoroughly researched approach.
The collaborative filtering algorithm [5] operates on the principle of analyzing user-interest similarities. It assumes that individuals with similar backgrounds, values, and interests tend to share similar views on corresponding items. This approach offers a unique advantage over other recommendation methods by uncovering user preferences that might be challenging to discern through alternative means [6]. Moreover, it does not rely on explicit item text or classification data, nor does it require users to actively input personal information. Instead, the system autonomously acquires and analyzes user data, resulting in a more efficient and personalized recommendation process [7].
However, the exponential growth of internet usage and e-commerce has led to a dramatic expansion in both user bases and item catalogs across numerous platforms. This data explosion presents significant challenges to traditional collaborative filtering techniques. Issues such as high computational complexity and substantial memory requirements impede real-time processing and overall efficiency. Furthermore, problems like data sparsity [8], cold start scenarios [9,10], and catastrophic forgetting [11] continue to impact the performance of recommender systems, prompting ongoing research and development of novel algorithms.
To address the data sparsity issue, a common approach involves densifying the original dataset using fixed values. The average filling method [12] is frequently employed, utilizing either the mean of all user ratings or the average rating for a specific item. While this technique can enhance system accuracy to some extent, it fails to capture individual user preferences for specific items, thus not fundamentally resolving the sparsity problem [13].
Consequently, researchers have explored alternative solutions by leveraging user and item information. These approaches include item similarity-based collaborative algorithms [14], data matrix dimension reduction [15,16,17,18,19], and artificial intelligence techniques [20,21]. Sarwar et al. [22] conducted a comparative analysis of item-based and user-based algorithms, demonstrating the superiority of the former. Jiang et al. [23] introduced a Slope One algorithm that integrates trusted data and user similarity, with experiments on Amazon datasets validating its accuracy. To enhance both recommendation accuracy and prediction precision, Chen et al. [24] proposed a collaborative filtering recommendation algorithm based on user relevance and evolutionary clustering. At present, in settings with high sparsity, frequent cold starts, and rapidly changing scenarios, a recommendation solution that integrates multi-source data and dynamic bias is still needed.
In recent years, deep learning has achieved remarkable progress in recommendation systems, particularly in scenarios with large-scale, densely populated interaction data. However, in small-scale and extremely sparse recommendation tasks, the performance of deep learning models is often suboptimal. This is primarily due to the heavy parameterization of such models, which makes them highly sensitive to data volume and quality. When faced with data sparsity, these models are prone to overfitting and struggle to learn generalizable representations. For instance, Fu et al. [25] showed that in experiments on medium-scale datasets such as MovieLens-1M, deep collaborative filtering models demonstrated limited performance due to insufficient training signals and underperformed compared to lightweight alternatives. Similarly, Stergiopoulos et al. [26] showed that on highly sparse academic datasets such as CiteULike, deep models tend to learn noise rather than meaningful preference patterns, resulting in degraded recommendation quality. Furthermore, Xia et al. [27] emphasized that under conditions of extreme sparsity and small sample sizes, deep models suffer from instability and high computational overhead, offering little practical advantage over simpler approaches. Therefore, in such challenging data environments, shallow methods such as neighborhood-based models, matrix factorization, and probabilistic approaches remain more robust and effective, providing a more feasible modeling choice for recommender systems.
To address the data sparsity and cold-start problems commonly encountered, this paper extends the conventional Latent Factor Model framework by integrating baseline bias correction, dimensionality reduction, and semantic vectorization into a unified system tailored for extreme sparsity conditions. While each individual component is well-established, the novelty lies in the design of a joint optimization process and evaluation strategy under real-world constraints of 99.5% data sparsity. First, by integrating multi-dimensional user and item attributes into the factorization process, the proposed approach significantly improves robustness under sparse conditions. Second, the adoption of gradient descent-based iterative training, coupled with a proper regularization strategy, effectively curbs overfitting and boosts predictive accuracy. Third, the enhanced LFM representation incorporates user biases and multi-source feedback, mitigating the cold-start issue and improving recommendation diversity. Experimental results indicate that this improved LFM not only achieves higher accuracy and stability but also demonstrates excellent scalability, offering valuable insights and technical support for personalized recommendation in high-sparsity environments.

2. Enhanced Latent Factor Recommendation Algorithm

2.1. Establishment of Latent Factor Model

The enhanced latent-factor framework in this paper differs from standard latent factor models in three key ways. First, it integrates a baseline bias correction component to offset systematic skew in user or item ratings. Second, it incorporates a dimension-reduced semantic projection optimized jointly through singular value decomposition and gradient descent, enabling stable convergence even under extreme sparsity (99.5%). Third, the model adopts a whole-matrix evaluation scheme instead of a conventional Top-K truncation, aligning the training objective more closely with performance in low-density recommendation environments. The baseline method [28] is mainly used to provide a reference for the other models, so that they can better approach the benchmark level. One of its important advantages is that it can generate recommendations for newly registered users without any information. This paper uses the gradient descent method to solve the benchmark prediction model.
The prediction of rating r_{u,i} is \hat{r}_{u,i}, and the prediction error is e_{u,i} = r_{u,i} - \hat{r}_{u,i}. Equation (1) shows the first-order partial derivatives with respect to b_u and b_i:
\frac{\partial C}{\partial b_u} = -2 e_{u,i} + 2 \lambda b_u, \qquad \frac{\partial C}{\partial b_i} = -2 e_{u,i} + 2 \lambda b_i    (1)
where C represents the loss function, and the parameters b_u and b_i are the deviations of user u and item i from the average level, respectively.
Moving in the direction opposite to the gradient, we obtain the iterative Equation (2):
b_u \leftarrow b_u + \gamma \cdot ( e_{u,i} - \lambda \cdot b_u ), \qquad b_i \leftarrow b_i + \gamma \cdot ( e_{u,i} - \lambda \cdot b_i )    (2)
Here, γ represents the learning rate and λ denotes the regularization parameter; their optimal values are determined empirically through multiple experiments.
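For concreteness, a minimal Python sketch of this baseline predictor, trained with the gradient-descent updates of Equation (2), is given below. The function names, the (user, item, rating) triple layout, and the default values of γ and λ are illustrative assumptions rather than the exact implementation used in this paper.

```python
import numpy as np

def train_baseline(ratings, n_users, n_items, gamma=0.01, lam=0.006, n_epochs=30):
    """Illustrative sketch: baseline predictor r_hat = mu + b_u + b_i (Eqs. (1)-(2)).

    ratings: iterable of (user_index, item_index, rating) triples.
    """
    mu = np.mean([r for _, _, r in ratings])   # global mean rating
    b_u = np.zeros(n_users)                    # per-user deviations from the mean
    b_i = np.zeros(n_items)                    # per-item deviations from the mean

    for _ in range(n_epochs):
        for u, i, r in ratings:
            e = r - (mu + b_u[u] + b_i[i])         # prediction error e_{u,i}
            b_u[u] += gamma * (e - lam * b_u[u])   # move against the gradient (Eq. (2))
            b_i[i] += gamma * (e - lam * b_i[i])
    return mu, b_u, b_i

# Usage: mu, b_u, b_i = train_baseline(triples, n_users, n_items)
#        the prediction for pair (u, i) is mu + b_u[u] + b_i[i]
```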
The main goal of using factor models to generate predictive ratings is to reveal hidden features of items that can explain the observed ratings. Examples of such models include the PLSA model [29], neural network models [30], and the latent Dirichlet allocation model [31]. Recently, the matrix factorization model has gained increasing popularity because of its accuracy and stability.
The idea behind the factorization model comes from the singular value decomposition (SVD) principle [32]. SVD decomposes the original matrix, retains the first k singular values, and generates a new low-dimensional matrix. This matrix approximates the original matrix, and its values are used as the predicted ratings.
Given a user–item rating matrix R of size m × n (where m > n), SVD yields three matrices U, S, and V such that Equation (3) holds:
R = U \times S \times V^{T}    (3)
In the above equation, S is the singular value diagonal matrix of size m × n. The elements on the diagonal of S are the singular values of R and satisfy δ_1 ≥ δ_2 ≥ … ≥ δ_n > 0. The matrices U and V are orthogonal: U has size m × m and satisfies U U^{T} = I, and V has size n × n and satisfies V V^{T} = I. The schematic diagram of the singular value decomposition of the matrix R is shown in Figure 1.
In the process of dimension reduction using singular value decomposition, it is necessary to determine how many dimensions to retain; this parameter is adjusted according to the actual situation. The specific operation is to retain the first k singular values (i.e., δ_1, δ_2, …, δ_k) of the singular value matrix S to obtain a new matrix S_k, select the corresponding k singular vectors from the matrices U and V to form U_k and V_k, and finally synthesize a new matrix R_k, as shown in Equation (4).
\hat{R} \approx R_k = U_k \times S_k \times V_k^{T}    (4)
R_k(u, i) then represents the predicted rating of user u on item i. The SVD decomposition and dimension reduction process is shown in Figure 2.
The steps for using singular value decomposition to generate rating predictions are as follows (a minimal code sketch is given after the list):
  • Decompose the rating matrix R into U, S, and V using the singular value decomposition algorithm.
  • Take the first k singular values of S to form S_k.
  • Select the corresponding k singular vectors from U and V to form U_k and V_k.
  • Synthesize the matrices U_r = U_k \sqrt{S_k} and V_r = \sqrt{S_k} V_k^{T} from S_k, U_k, and V_k.
  • Obtain the predicted rating \hat{r}_{u,i} = U_r(u) \cdot V_r(i) of user u on item i, where U_r(u) is the u-th row of U_r and V_r(i) is the i-th column of V_r.
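A minimal NumPy sketch of these five steps follows. It assumes the sparse rating matrix has already been densified (e.g., mean-filled) so that a plain SVD can be applied; splitting the prediction matrix with the square root of S_k is the usual convention and is adopted here as an assumption.

```python
import numpy as np

def svd_predict(R, k):
    """Illustrative sketch of truncated-SVD rating prediction (steps 1-5 above).

    R: dense m x n rating matrix with missing entries pre-filled (e.g., with the mean).
    k: number of singular values to retain.
    """
    U, s, Vt = np.linalg.svd(R, full_matrices=False)   # step 1: R = U S V^T
    S_k = np.diag(s[:k])                               # step 2: first k singular values
    U_k, Vt_k = U[:, :k], Vt[:k, :]                    # step 3: matching singular vectors
    U_r = U_k @ np.sqrt(S_k)                           # step 4: U_r = U_k * sqrt(S_k)
    V_r = np.sqrt(S_k) @ Vt_k                          #         V_r = sqrt(S_k) * V_k^T
    return U_r @ V_r                                   # step 5: R_k, the prediction matrix

# Usage (mean-filling the zero entries before decomposition):
# R_filled = np.where(R_raw > 0, R_raw, R_raw[R_raw > 0].mean())
# r_hat = svd_predict(R_filled, k=10)   # r_hat[u, i] is the predicted rating
```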
The SVD algorithm is a common way to address sparse data in recommendation systems and achieves the goal of dimension reduction. However, this method has the following two problems:
  • It occupies too much storage space. A real system has many users and items, and after the prediction ratings are generated the matrix requires a large amount of storage.
  • Its computational efficiency is low. The algorithm needs to decompose the matrix; for the very high-dimensional matrices encountered in practice, this requires a large amount of computation and takes a long time.
These two problems limit the application of the SVD algorithm. After continuous research, Simon Funk improved the SVD algorithm based on the gradient descent method [33] and proposed Funk SVD [34], i.e., a factorization model.
The factor decomposition model utilizes scoring data from the e-commerce platform as its primary input. These data encompass information on all users, items, and the corresponding user–item ratings. In Figure 3, to illustrate the model’s functionality, we consider a simplified scenario with ratings from 3 users on 4 items, assuming 3 hidden features.
In this factor decomposition model, matrix R serves as the foundation, where R_{i,j} denotes user i's preference for item j. The model's main function is to find latent item features from the original data, which are then used for item classification and rating prediction. The matrix R is decomposed into a matrix P and a matrix Q.
Here, P is the user–feature matrix, with P_{i,j} indicating user i's preference level for hidden feature j. Conversely, Q denotes the feature–item matrix, where Q_{i,j} signifies the weight of item j on feature i.
The predicted preference of user u for item i is calculated using Equation (5):
\hat{r}_{u,i} = q_i^{T} p_u = \sum_{k=1}^{K} P_{u,k} Q_{k,i}    (5)
In Equation (5), p_u represents user u's preference vector (a row in matrix P), while q_i denotes item i's weight vector (a column in matrix Q).
This decomposition approach offers several advantages:
  • It autonomously extracts and utilizes latent item attributes for classification, eliminating the need for manual item categorization.
  • The model’s granularity is flexible and determined by the number of hidden features, allowing for adjustable levels of refinement.
  • Rather than explicit item categorization, the model assigns weights to each item across various classes.
To optimize the vectors p u and q i , we employ a loss function minimization approach. The input data are partitioned into training and test sets, and gradient descent is utilized to iteratively refine p u and q i , thereby reducing the loss function value and improving prediction accuracy.
The initial loss function is defined as Equation (6):
C = \sum_{(u,i) \in \kappa} ( r_{u,i} - \hat{r}_{u,i} )^2 = \sum_{(u,i) \in \kappa} ( r_{u,i} - q_i^{T} p_u )^2    (6)
where κ is the set of known user–item interactions.
To mitigate overfitting, we introduce a regularization term, modifying the loss function to Equation (7):
C = \sum_{(u,i) \in \kappa} ( r_{u,i} - q_i^{T} p_u )^2 + \lambda \lVert q_i \rVert^2 + \lambda \lVert p_u \rVert^2    (7)
Here, λ serves as the regularization parameter, fine-tuned through empirical testing. The optimization process employs gradient descent, involving the following steps:
  • Compute the partial derivatives with respect to p_u and q_i, as shown in Equation (8):
    \frac{\partial C}{\partial p_u} = -2 q_i e_{u,i} + 2 \lambda p_u, \qquad \frac{\partial C}{\partial q_i} = -2 p_u e_{u,i} + 2 \lambda q_i    (8)
  • Update p_u and q_i iteratively, as in Equation (9):
    p_u \leftarrow p_u + \gamma \cdot ( e_{u,i} \cdot q_i - \lambda \cdot p_u ), \qquad q_i \leftarrow q_i + \gamma \cdot ( e_{u,i} \cdot p_u - \lambda \cdot q_i )    (9)
Parameter γ denotes the learning rate, and e_{u,i} represents the prediction error. The model's implementation requires:
  • Initializing the vectors p_u and q_i based on the dataset.
  • Tuning the parameters: learning rate γ, regularization parameter λ, iteration count N, and hidden feature count F.
This factor decomposition approach allows for the incorporation of corrective measures to address practical issues, such as bias in user scoring patterns or consistent item overestimation.
Equation (10) depicts the ultimate predictive model, known as the implicit semantic model [35].
\hat{r}_{u,i} = \mu + b_i + b_u + q_i^{T} p_u    (10)
To optimize the model parameters (b_u, b_i, p_u, q_i), we minimize the following regularized squared loss function, as shown in Equation (11):
C = \sum_{(u,i) \in \kappa} ( r_{u,i} - \mu - b_i - b_u - q_i^{T} p_u )^2 + \lambda ( b_i^2 + b_u^2 + \lVert q_i \rVert^2 + \lVert p_u \rVert^2 )    (11)
The optimization process employs stochastic gradient descent; the partial derivatives of the loss function are shown in Equation (12).
\frac{\partial C}{\partial b_u} = -2 e_{u,i} + 2 \lambda b_u, \quad \frac{\partial C}{\partial b_i} = -2 e_{u,i} + 2 \lambda b_i, \quad \frac{\partial C}{\partial p_u} = -2 q_i e_{u,i} + 2 \lambda p_u, \quad \frac{\partial C}{\partial q_i} = -2 p_u e_{u,i} + 2 \lambda q_i    (12)
To adjust the parameters for a given training sample, we move in the direction opposite to the gradient, as illustrated in Equation (13).
b_u \leftarrow b_u + \gamma \cdot ( e_{u,i} - \lambda \cdot b_u ), \quad b_i \leftarrow b_i + \gamma \cdot ( e_{u,i} - \lambda \cdot b_i ), \quad p_u \leftarrow p_u + \gamma \cdot ( e_{u,i} \cdot q_i - \lambda \cdot p_u ), \quad q_i \leftarrow q_i + \gamma \cdot ( e_{u,i} \cdot p_u - \lambda \cdot q_i )    (13)
By fine-tuning the parameters γ and λ, the model achieves enhanced prediction accuracy.
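The following Python sketch implements the model of Equation (10) trained with the stochastic gradient descent updates of Equation (13). The initialization scale and data layout are illustrative assumptions; the default hyperparameters follow the values tuned in Section 4.2.

```python
import numpy as np

def train_lfm(ratings, n_users, n_items, F=10, gamma=0.0102, lam=0.0064, n_epochs=30, seed=0):
    """Illustrative sketch: LFM r_hat = mu + b_u + b_i + q_i^T p_u, trained by SGD (Eq. (13))."""
    rng = np.random.default_rng(seed)
    mu = np.mean([r for _, _, r in ratings])         # global mean rating
    b_u, b_i = np.zeros(n_users), np.zeros(n_items)  # bias terms
    P = rng.normal(scale=0.1, size=(n_users, F))     # user latent vectors p_u (rows)
    Q = rng.normal(scale=0.1, size=(n_items, F))     # item latent vectors q_i (rows)

    for _ in range(n_epochs):
        for u, i, r in ratings:
            e = r - (mu + b_u[u] + b_i[i] + Q[i] @ P[u])   # prediction error e_{u,i}
            b_u[u] += gamma * (e - lam * b_u[u])
            b_i[i] += gamma * (e - lam * b_i[i])
            P[u], Q[i] = (P[u] + gamma * (e * Q[i] - lam * P[u]),
                          Q[i] + gamma * (e * P[u] - lam * Q[i]))  # simultaneous update
    return mu, b_u, b_i, P, Q

def predict_lfm(params, u, i):
    """Predicted rating of user u for item i (Eq. (10))."""
    mu, b_u, b_i, P, Q = params
    return mu + b_u[u] + b_i[i] + Q[i] @ P[u]
```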

2.2. Algorithm Design

This study improves user-based collaborative filtering by integrating a dimension reduction method based on an implicit semantic model. The approach employs gradient descent for iterative training, enabling the prediction of unscored items and the completion of missing entries in the initial user scoring matrix. This methodology effectively addresses the data sparsity issue inherent in such matrices.
The algorithm proceeds as follows:
  • Acquisition and Representation of Data: The user–item scoring data from the e-commerce platform is collected and represented as a matrix, as shown in Equation (14).
    R = \begin{pmatrix} r_{1,1} & r_{1,2} & \cdots & r_{1,n} \\ r_{2,1} & r_{2,2} & \cdots & r_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m,1} & r_{m,2} & \cdots & r_{m,n} \end{pmatrix}    (14)
  • Initialization Phase: Compute the mean rating μ across all users based on matrix R. Initialize b_u, b_i, p_u, and q_i.
  • Rating Prediction and Error Calculation: Calculate the predicted rating \hat{r}_{u,i} = \mu + b_i + b_u + q_i^{T} p_u, then determine the prediction error e_{u,i} = r_{u,i} - \hat{r}_{u,i}.
  • Parameter Optimization: Update b u , b i , p u , and q i through training, as per the previously defined Equation (13). Fine-tune the learning rate γ , regularization parameter λ , and the number of hidden features F through experimental iterations to achieve optimal performance.
  • Iterative Refinement: Repeat steps 3 and 4 for N iterations to progressively improve prediction accuracy.
  • Matrix Completion: For each user–item pair (u, i), determine the final predicted rating as shown in Equation (15): the observed rating is kept where one exists; otherwise the model prediction is used.
    \hat{r}_{u,i} = \begin{cases} r_{u,i}, & r_{u,i} \neq 0 \\ \mu + b_i + b_u + q_i^{T} p_u, & r_{u,i} = 0 \end{cases}    (15)
The operation produces a revised scoring matrix \hat{R}, depicted in Equation (16):
\hat{R} = \begin{pmatrix} \hat{r}_{1,1} & \hat{r}_{1,2} & \cdots & \hat{r}_{1,n} \\ \hat{r}_{2,1} & \hat{r}_{2,2} & \cdots & \hat{r}_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{r}_{m,1} & \hat{r}_{m,2} & \cdots & \hat{r}_{m,n} \end{pmatrix}    (16)
  • Collaborative Filtering Application: Utilize the newly constructed dense scoring matrix \hat{R} as input for the collaborative filtering algorithm.
  • Final Prediction and Recommendation: Generate the ultimate rating predictions and item recommendations based on the results of the collaborative filtering process (a minimal code sketch follows).
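A compact sketch of the matrix completion and collaborative filtering steps is given below. It assumes cosine similarity and a similarity-weighted average for the user-based CF stage, and the `predict` callable is a placeholder standing in for the trained latent factor model of Section 2.1.

```python
import numpy as np

def complete_matrix(R, predict):
    """Step 6 (Eq. (15)): keep observed ratings, fill zero entries with model predictions."""
    R_hat = R.astype(float).copy()
    for u, i in zip(*np.where(R == 0)):
        R_hat[u, i] = predict(u, i)   # e.g. mu + b_u + b_i + q_i^T p_u from the trained LFM
    return R_hat

def user_based_cf_predict(R_hat, u, i, K=45):
    """Steps 7-8: user-based CF on the densified matrix with cosine similarity."""
    norms = np.linalg.norm(R_hat, axis=1) + 1e-12
    sims = (R_hat @ R_hat[u]) / (norms * norms[u])   # cosine similarity of every user to u
    sims[u] = -np.inf                                # exclude the target user
    neighbors = np.argsort(sims)[-K:]                # K most similar users
    w = sims[neighbors]
    return float(w @ R_hat[neighbors, i] / (np.abs(w).sum() + 1e-12))
```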

2.3. Computational Complexity

This section discusses the computational complexity of the model. First, define the following parameters:
Let m be the number of users, n the number of items, Ω the number of observed (user, item) ratings, F the dimensionality of the latent factor space, and N the number of training epochs.
LFM uses stochastic gradient descent (SGD). The time complexity of a complete training increases linearly with the data size, latent factor dimension, and number of iterations. The complexity is shown in Equation (17).
T_{\mathrm{train}} = O( \Omega \cdot F \cdot N )    (17)
The parameter memory footprint grows linearly with the number of users and items, as shown in Equation (18):
S_{\mathrm{param}} = O( ( m + n ) \cdot F )    (18)
The I/O cost of each epoch is simply O(Ω), as the algorithm scans the sparse rating list once. Because every cost term is linear, LFM remains efficient in both computation and storage.
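As a back-of-the-envelope check of Equations (17) and (18) for the dataset used in this paper (m = 3445, n = 277, Ω = 4645, F = 10, N = 30), assuming 8-byte floating-point parameters:

```python
m, n, omega, F, N = 3445, 277, 4645, 10, 30
factor_params = (m + n) * F                         # Eq. (18): latent entries (biases add m + n more)
print(factor_params, "latent parameters,", round(factor_params * 8 / 1024), "KiB at 8 bytes each")
print("SGD factor updates per training run (Eq. (17)):", omega * F * N)   # about 1.4 million
```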
Figure 4 presents a visual representation of this enhanced recommendation algorithm’s workflow, illustrating the integration of the implicit semantic model into the e-commerce platform’s recommendation system.

3. Evaluation Indicators

To assess our proposed recommendation system’s effectiveness, we used five common metrics:
Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Precision, Recall and F1 score. These evaluate the algorithm’s prediction accuracy and overall performance.
The MAE, a standard metric for assessing prediction accuracy [36], is computed as Equation (19):
\mathrm{MAE} = \frac{1}{|\tau|} \sum_{(u,i) \in \tau} \left| \hat{r}_{u,i} - r_{u,i} \right|    (19)
Here, τ is the test set, |τ| its cardinality, \hat{r}_{u,i} the predicted rating, and r_{u,i} the actual score for user u on item i.
The F-measure, which balances precision and recall, is calculated as Equation (20):
\text{F-measure} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}    (20)
This metric offers a more holistic view of the system’s performance. A higher F-measure indicates superior recommendation quality and better alignment with user preferences.
RMSE quantifies the square root of the average squared differences between the predicted and observed ratings, thus emphasizing larger errors. RMSE is calculated using Equation (21):
\mathrm{RMSE} = \sqrt{ \frac{1}{|\tau|} \sum_{(u,i) \in \tau} \left( \hat{r}_{u,i} - r_{u,i} \right)^2 }    (21)
Consequently, lower MAE and RMSE values, coupled with higher F-measure scores, signify more accurate and robust predictions.
The experimental dataset is extremely sparse: it contains 3445 users, 277 items, and 4645 observed ratings. The rating matrix density is 4.9 × 10^{-3} (0.49%), which corresponds to a sparsity of 99.5%.
Under such conditions, a conventional Top-K cut-off would hide most relevant interactions and amplify random variance [37]. Therefore, we evaluate Precision and Recall on the entire prediction list, using a fixed relevance threshold τ = 4, as in Equations (22) and (23):
\mathrm{Precision} = \frac{ \sum_{u \in U} | P_u \cap T_u | }{ \sum_{u \in U} | P_u | }    (22)
\mathrm{Recall} = \frac{ \sum_{u \in U} | P_u \cap T_u | }{ \sum_{u \in U} | T_u | }    (23)
where P_u = \{ i \mid \hat{r}_{u,i} \geq \tau \} is the set of items predicted as “relevant” for user u, and T_u is the ground-truth relevant set.
This whole-matrix evaluation better reflects performance in highly sparse, long-tail scenarios and is reported alongside MAE and RMSE in the results section.
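The sketch below computes the reported metrics under this protocol, assuming dense arrays of actual and predicted ratings and a Boolean mask marking the held-out entries; restricting the relevance sets P_u and T_u to the masked entries is an assumption of this sketch rather than a detail stated in the text.

```python
import numpy as np

def evaluate(R_true, R_pred, test_mask, threshold=4.0):
    """Illustrative sketch: MAE/RMSE (Eqs. (19), (21)) and whole-matrix Precision/Recall/F1 (Eqs. (20), (22), (23))."""
    err = R_pred[test_mask] - R_true[test_mask]
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())

    pred_rel = (R_pred >= threshold) & test_mask   # items predicted as relevant (P_u, pooled over users)
    true_rel = (R_true >= threshold) & test_mask   # ground-truth relevant items (T_u)
    hits = np.logical_and(pred_rel, true_rel).sum()
    precision = hits / max(pred_rel.sum(), 1)
    recall = hits / max(true_rel.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return mae, rmse, precision, recall, f1
```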

4. Experiment and Results

To fine-tune the LFM model, we systematically adjust the following parameters:
the learning rate (γ), the regularization parameter (λ), the number of latent features (F), and the number of iterations (N).
We employ a controlled variable approach, iteratively modifying each parameter while holding others constant. This process allows us to identify the optimal parameter combination for our e-commerce platform recommendation algorithm.
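A generic sketch of this one-factor-at-a-time procedure is shown below; `train_and_eval` is a hypothetical callable (not part of the paper) that trains the LFM under a given configuration and returns its validation MAE.

```python
def one_at_a_time_search(train_and_eval, base, grids):
    """Illustrative sketch: sweep each hyperparameter in turn while holding the others fixed.

    base:  starting configuration, e.g. {"gamma": 0.01, "lam": 0.006, "F": 10, "N": 30}
    grids: mapping from parameter name to the candidate values to try
    """
    best = dict(base)
    for name, values in grids.items():
        scores = {v: train_and_eval({**best, name: v}) for v in values}
        best[name] = min(scores, key=scores.get)   # keep the value with the lowest MAE
    return best
```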
The experiment is divided into the following parts:

4.1. Evaluation of Prediction Accuracy Under Different Iteration Settings

We set the learning rate γ = 0.01, the regularization parameter λ = 0.006, and the number of hidden features F = 10. As the number of iterations N varies from 0 to 100, the MAE and RMSE of the three prediction models are calculated, as shown in Table 1.
For N in the range 0–100, the rating predictions of the baseline, SVD, and LFM models are computed with the same parameters. The trends of their MAE and RMSE values are shown in Figure 5.
It can be seen intuitively from Figure 5 that the prediction errors (MAE and RMSE) of the baseline model and the LFM model are relatively close and are generally lower than those of the SVD model. The RMSE curves are broadly consistent with the trends of the corresponding MAE curves, but the RMSE fluctuates more, which further highlights its sensitivity to small changes in model parameters during iterative optimization. Before about 20 iterations, the error levels of the baseline model and the LFM model are similar. As the number of iterations increases, however, the LFM model gradually improves, showing stronger prediction accuracy under sparse conditions.
We evaluated precision and recall for three recommendation algorithms: one based on the benchmark prediction model (baseline), another on the factor decomposition model (SVD), and a third on the implicit semantic model (LFM). Combining these two metrics, we calculated the F1 score. Table 2 displays the experimental results for each algorithm.
The precision of all three models increases with the number of iterations N and stabilizes after N = 10. For iteration counts between 0 and 20, the precision and recall of the LFM model are essentially the same as those of the baseline model and higher than those of the SVD model. Beyond 20 iterations, the LFM model is more accurate. We compare the F-measure, recall, and precision curves of the three models, as shown in Figure 6.
Figure 6 compares the performance indicators (F-measure, precision, and recall) of the baseline, SVD, and LFM models at different iteration numbers (N). As can be seen from Figure 6a, the baseline model and the LFM model are significantly better than the SVD model in terms of F-measure. From Figure 6b (precision) and Figure 6c (recall), it can be found that the performance trends of these two indicators are roughly similar to the F-measure.
Overall, the baseline and LFM models achieve a good balance between accuracy and coverage. In particular, the Recall of the LFM model rebounds at 80 iterations, showing that the model can still maintain coverage of user interests while steadily improving recommendation accuracy. On the whole, the LFM model therefore shows good adaptability and balanced performance for recommendation tasks in sparse data environments. In practical applications, the predictions of the LFM model become more accurate after many iterations.

4.2. Impact of Hyperparameter Tuning on Model Performance

(1) Adjustment of the learning rate γ of the LFM model
We set the regularization parameter λ = 0.006, the number of implicit features F = 10, and the number of iterations N = 30, and vary the learning rate γ from 0 to 0.04 with a step size of 0.002. The MAE and RMSE values corresponding to each γ are the averages of 5 experiments, as shown in Table 3.
Taking the learning rate γ as the abscissa and the MAE and RMSE values as the ordinate, the change curve is shown in Figure 7.
Figure 7 shows that the minimum error is reached at γ = 0.0102; beyond this point the curve rises slowly as the learning rate increases. The RMSE curve follows the same trend as the MAE curve, and its limited fluctuation indicates that the model is not very sensitive to changes in the learning rate.
(2) Adjustment of the regularization parameter λ of the LFM model
We set the learning rate γ = 0.0102, the number of implicit features F = 10, and the number of iterations N = 30, and vary the regularization parameter λ from 0 to 0.04 with an interval of 0.002. The MAE and RMSE values corresponding to each λ are the averages of 5 experiments, as shown in Table 4.
Plotting the regularization parameter λ on the abscissa and the MAE and RMSE on the ordinate gives the change trend of the curves, as shown in Figure 8.
In terms of overall trend, as the regularization parameter increases, the MAE curve (blue) rises smoothly and gradually. This shows that moderate regularization (a smaller λ value) helps to avoid overfitting and improves the generalization ability of the model, whereas an excessively large regularization strength leads to underfitting and increases the prediction error. Unlike the smooth behaviour of the MAE, however, the RMSE curve (red) shows obvious high-frequency sawtooth fluctuations. This occurs because the model is inherently unstable under sparse data, so even a small adjustment of the regularization changes the error distribution of some user–item prediction pairs, and the squaring in RMSE amplifies these changes. In contrast, MAE accumulates prediction errors linearly and does not have this amplification effect, so it behaves more steadily. From a practical point of view, the frequent fluctuations of RMSE reflect the model's sensitivity to individual rating predictions under different regularization strengths.
(3) Adjustment of the number of iterations N of the LFM model
We set γ = 0.0102, λ = 0.0064, and F = 10, use a training-to-test-set ratio of 19:1, and vary the number of iterations N from 0 to 100. Five experiments were conducted for each N, the arithmetic mean of the MAE and RMSE values was taken, and representative data were selected. The results are shown in Table 5.
Taking N as the abscissa and the MAE and RMSE as the ordinate, the data generated for N from 0 to 100 are plotted as the change curves shown in Figure 9.
Figure 9 shows the changes in MAE and RMSE of the LFM model under different iteration counts N. Overall, the two indicators show similar trends: they drop significantly in the early iterations, indicating fast convergence and a rapid improvement in prediction accuracy, and then gradually stabilize as the model converges. Notably, beyond about N = 30, both indicators begin to rise slowly again, indicating a slight overfitting trend.
In addition, this gentle but persistent small fluctuation in RMSE also reflects the sensitivity of the model itself in the context of sparse data and small-scale data sets. Even when the model is generally stable, small changes in the prediction error will be amplified in the RMSE, showing more frequent small fluctuations.
After a comprehensive analysis, we recommend that the number of iterations be selected around 30 in practical applications. This number of iterations achieves the best convergence performance of RMSE and MAE overall.
(4) Adjustment of the number of hidden features F in the LFM model
We set γ = 0.0102, λ = 0.0064, and N = 30, use a training-to-test-set ratio of 19:1, and vary the number of latent features F from 0 to 100. The resulting MAE and RMSE values and the running time of the implicit semantic model (each value the average of 5 experiments) are shown in Table 6.
The MAE and RMSE values corresponding to the different F values (0–100) and the running time are plotted in the same coordinate system, as shown in Figure 10.
It can be seen from the figure that when F = 10, the MAE reaches its minimum while the running time remains relatively short, at 227.6 milliseconds. The RMSE curve likewise shows volatility. In a sparse, small-sample dataset, increasing F quickly causes the number of parameters to exceed the number of observations, and any slight weight change may significantly change individual residuals, thereby amplifying the RMSE. For F greater than 10, the additional dimensions learn random noise, causing the RMSE to increase instead. Therefore, with F = 10 hidden features, the recommendation system's predictions are both more accurate and more efficient.

4.3. Comparison of LFM with Traditional Collaborative Filtering Methods

To further clarify the contribution of each component in our proposed system, this section serves as an ablation study comparing the standalone collaborative filtering (CF) method, the latent factor model (LFM), and the integrated hybrid model (LFM+CF). Based on the parameter tuning in Section 4.2, this experiment uses the optimal parameters to evaluate each configuration. The number of neighbors K is used as a variable to observe the system's prediction accuracy before and after improvement, as illustrated in Table 7.
We vary K from 5 to 95 and record the corresponding MAE and RMSE values, as shown in Figure 11.
The numerical comparison in Table 7 shows that LFM+CF is superior to pure CF in both MAE and RMSE. However, in Figure 11b the two RMSE curves are intertwined due to high-frequency sawtooth fluctuations, making it difficult to tell at a glance which is superior. This phenomenon stems from the squared-error amplification property of RMSE: when the number of neighbors K changes, even slight fluctuations in a very small number of user–item residuals are amplified into sharp "sawtooth" spikes. In contrast, MAE uses a linear metric and is insensitive to extreme errors, so it presents a smoother and more intuitive decline in Figure 11a.
We measured the algorithms' precision, recall, and F1 scores before and after improvement. Table 8 displays these results.
The F-measure, precision, and recall are plotted in Figure 12.
Figure 12 intuitively shows the F1, Precision, and Recall curves of the three algorithms as the number of neighbors K changes. In the F-measure and precision tests, the LFM+CF model always leads the other models after K ≈ 25. Although the Recall of LFM+CF is lower than that of the pure CF model, it has a significant improvement in Precision and F1; at the same time, the curve has a small fluctuation range, indicating that the hybrid model has good robustness under different neighbor sizes.

4.4. Cross-Dataset Validation on ML-1M Subset

To enhance the external validity of our findings and further demonstrate the practical utility of the proposed model, we conducted an auxiliary experiment using a controlled sparse subset of the MovieLens 1M (ML-1M) [38] dataset.
The ML-1M dataset contains over one million ratings from 6000 users and 3900+ movies. To construct a testbed comparable in structure to our proprietary e-commerce dataset, we applied the following sampling strategy:
We removed users and items with fewer than five ratings to ensure minimal interaction stability. From the filtered pool, we randomly sampled 3500 users and 300 items. We retained only those user–item pairs that had existing interactions within the sampled subset, ensuring no empty rows or columns in the rating matrix.
The final sampled submatrix contains 5000 observed interactions across 2077 users and 265 items, resulting in a sparsity of 99.09%, closely aligned with our primary dataset. This subset was used for secondary validation.
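The sampling procedure can be sketched as follows, assuming the standard `ratings.dat` format of ML-1M and using pandas; the file path, column names, and random seed are illustrative.

```python
import pandas as pd

# Illustrative sketch of the ML-1M sampling strategy described above.
ratings = pd.read_csv("ratings.dat", sep="::", engine="python",
                      names=["user", "item", "rating", "timestamp"])

# Remove users and items with fewer than five ratings.
ratings = ratings[ratings.groupby("user")["user"].transform("size") >= 5]
ratings = ratings[ratings.groupby("item")["item"].transform("size") >= 5]

# Randomly sample 3500 users and 300 items, keeping only interactions inside the sample.
users = pd.Series(ratings["user"].unique()).sample(3500, random_state=42)
items = pd.Series(ratings["item"].unique()).sample(300, random_state=42)
subset = ratings[ratings["user"].isin(users) & ratings["item"].isin(items)]

# Users or items left without interactions simply do not appear in `subset`,
# so the resulting rating matrix has no empty rows or columns.
sparsity = 1 - len(subset) / (subset["user"].nunique() * subset["item"].nunique())
print(len(subset), "interactions, sparsity =", round(sparsity, 4))
```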
We then applied the proposed model to this sampled ML-1M subset using the same training configuration as in Section 4.3. The model was evaluated using the full-matrix metrics of MAE, RMSE, Precision, Recall, and F1-score; the results are shown in Table 9.
The results in Table 9 are consistent with those in Section 4.3, and the models perform at similar levels on the two datasets. This consistency in performance indicates that the model does not overfit the original data structure and has a certain degree of transferability for similar sparse data environments.

5. Discussion

Our research addresses the scarcity of sophisticated personalized recommendation services for newly established or niche e-commerce platforms. We developed an innovative collaborative filtering algorithm that incorporates an implicit semantic model, optimized through statistical learning techniques to enhance recommendation accuracy and personalization. The analysis of our experimental outcomes yields several significant insights:
The LFM+CF demonstrates a marked improvement in recommendation efficacy compared to traditional methods. At an optimal neighbor count (K = 45), our enhanced algorithm achieves an MAE of 0.88138, representing a substantial reduction of approximately 0.25 from the baseline. This improvement underscores the algorithm’s enhanced capability in deciphering user behavior patterns. Moreover, the results in Table 7 and Table 8 effectively serve as an ablation study, demonstrating that each component—baseline CF, matrix factorization, and hybrid integration—contributes incrementally to the overall performance. This validates the architectural choices in our enhanced latent factor model.
Our investigation into prediction accuracy and stability across varying neighbor counts reveals intriguing dynamics. The improved system exhibits a gradual decline in prediction accuracy as the number of neighbors increases but maintains stability within a defined range. This characteristic suggests enhanced predictive consistency compared to the original system, which displays erratic accuracy fluctuations across different K values.
Examination of precision and recall metrics further corroborates the superiority of our enhanced system. While recall rates remain comparable, the improved algorithm demonstrates significantly higher precision. This indicates a more nuanced understanding of user preferences, leading to more relevant recommendations and increased user satisfaction. The performance curve (Figure 12) illustrates that our algorithm’s accuracy improves with increasing neighbor counts, outperforming standard collaborative filtering approaches.
The latent-factor framework presented in this study is deliberately tailored for small, highly sparse data sets—situations in which the user–item matrix contains far more empty entries than ratings. Such conditions are typical of new or niche e-commerce platforms, where a limited catalogue and a fledgling user base make dense interaction logs unattainable. Under these constraints, classical top-K recommenders that thrive on abundant signals lose reliability, whereas the proposed low-rank model, trained with whole-matrix Precision and Recall, remains data-efficient and computationally light. The same sparsity challenge also appears in early streaming services, digital health applications, educational courseware, and local social networks. This case study provides a design solution for how to achieve feasible recommendations before reaching “big data” volumes. Therefore, an important research direction in the future is to quantify “sufficient” data, or to explore the contribution relationship between the ratio of users to items and the rating coverage.
While the proposed model demonstrates promising results in highly sparse and small-scale recommendation scenarios, it is not without limitations. First, the evaluation is primarily based on a proprietary dataset collected from a niche e-commerce platform, supplemented by a highly sparse submatrix of the public MovieLens 1M dataset. This subset was sampled to match the original task conditions (99.09% sparsity, ~5000 interactions), yet it remains a synthetic construct and may not fully reflect real-world consumer behavior. Second, the model does not incorporate content-based features (e.g., product descriptions, user profiles), which could further improve performance, especially in cold-start scenarios. Third, the method is optimized for static offline settings and has not been tested under dynamic or real-time recommendation environments. In future work, we plan to supplement the cold-start tests that have not yet been specifically performed by simulating cold-start conditions with controlled user/item masks. We will continue to explore the cross-domain generalization performance of this design, applying the model to other datasets from similar low-resource domains. We will also explore lightweight hybrid models that combine latent factor learning with semantic or content-based embeddings.
In summary, through multiple experiments, this study proposes and verifies an improved personalized recommendation algorithm for e-commerce platforms. The algorithm achieves significant improvements in prediction accuracy and recommendation effect, and it provides a valuable reference for the research and application of recommendation systems in the e-commerce field.

6. Conclusions

This study addresses the persistent challenge of data sparsity in recommendation systems, particularly within newly established or niche e-commerce platforms. We propose a lightweight yet effective recommendation algorithm that integrates dimensionality reduction with latent factor modeling, optimized through iterative gradient descent and regularization. Unlike deep learning models that require extensive data and computing resources, our approach is designed specifically for low-resource, high-sparsity scenarios, achieving accuracy and efficiency.
Through extensive empirical analysis on a real-world sparse dataset (99.5% sparsity), the model demonstrates robust prediction performance across MAE, RMSE, and F1 metrics. Comparative results with traditional collaborative filtering and matrix factorization approaches confirm the value of our hybrid design. The algorithm is also computationally efficient, making it well-suited for platforms with limited infrastructure.
Beyond e-commerce, the proposed framework holds promise for broader application domains characterized by small-scale and sparse interaction data. These include digital education platforms, early-stage health recommendation systems, personalized learning tools, and local community-based content apps. Such systems often lack sufficient historical interaction records to train deep models effectively, making our approach a practical and interpretable alternative.
In future work, we plan to explore hybrid extensions incorporating side information to further mitigate cold-start issues and to validate generalizability through cross-domain datasets.

Author Contributions

Conceptualization, X.L., W.W. and J.T.; methodology, W.W., Z.Q. and J.T.; software, J.T., B.W. and Z.Q.; validation, M.T. and X.L.; formal analysis, J.T., B.W. and Z.Q.; resources, J.T. and W.W.; data curation, J.T. and M.T.; writing—original draft preparation, W.W. and Z.Q.; writing—review and editing, J.T., W.W. and Z.Q.; visualization, B.W. and M.T.; supervision, X.L. and J.T.; project administration, X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets used in this study include a custom-built dataset and a sampled subset based on the publicly available MovieLens 1M (ML-1M) dataset. Due to the customized nature of the experimental setup, the specific data used for analysis are not publicly archived. However, they are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yera, R.; Alzahrani, A.A.; Martinez, L. A fuzzy content-based group recommender system with dynamic selection of the aggregation functions. Int. J. Approx. Reason. 2022, 150, 273–296. [Google Scholar] [CrossRef]
  2. de Campos, L.M.; Fernandez-Luna, J.M.; Huete, J.F. Use of topical and temporal profiles and their hybridisation for content-based recommendation. User Model. User-Adapt. Interact. 2023, 33, 911–937. [Google Scholar] [CrossRef]
  3. Jeonghoon, L.; Cho, C.; Kim, J. A Study on the Development of the School Library Book Recommendation System Using the Association Rule. J. Korean Soc. Inf. Manag. 2022, 39, 1–22. [Google Scholar] [CrossRef]
  4. Venkatesan, V.K.; Ramakrishna, M.T.; Batyuk, A.; Barna, A.; Havrysh, B. High-Performance Artificial Intelligence Recommendation of Quality Research Papers Using Effective Collaborative Approach. Systems 2023, 11, 81. [Google Scholar] [CrossRef]
  5. Chun, J.; Lin, T.; Hong, S. Research on cross-domain recommendation algorithm based on quadratic collaborative filtering. In Proceedings of the IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA), Changchun, China, 25–27 February 2022; pp. 591–594. [Google Scholar]
  6. Koren, Y.; Rendle, S.; Bell, R. Advances in Collaborative Filtering. In Recommender Systems Handbook; Ricci, F., Rokach, L., Shapira, B., Eds.; Springer: New York, NY, USA, 2022; pp. 91–142. [Google Scholar]
  7. Papadakis, H.; Papagrigoriou, A.; Panagiotakis, C.; Kosmas, E.; Fragopoulou, P. Collaborative filtering recommender systems taxonomy. Knowl. Inf. Syst. 2022, 64, 35–74. [Google Scholar] [CrossRef]
  8. Ramakrishna, M.T.; Venkatesan, V.K.; Bhardwaj, R.; Bhatia, S.; Rahmani, M.K.I.; Lashari, S.A.; Alabdali, A.M. HCoF: Hybrid Collaborative Filtering Using Social and Semantic Suggestions for Friend Recommendation. Electronics 2023, 12, 1365. [Google Scholar] [CrossRef]
  9. Liu, C.; Kong, X.; Li, X.; Zhang, T. Collaborative Filtering Recommendation Algorithm Based on User Attributes and Item Score. Sci. Program. 2022, 2022, 4544152. [Google Scholar] [CrossRef]
  10. Yin, P.; Ji, D.; Yan, H.; Gan, H.; Zhang, J. Multimodal deep collaborative filtering recommendation based on dual attention. Neural Comput. Appl. 2023, 35, 8693–8706. [Google Scholar] [CrossRef]
  11. Xia, J.; Li, D.; Gu, H.; Lu, T.; Zhang, P.; Gu, N. Incremental Graph Convolutional Network for Collaborative Filtering. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management (CIKM), Virtual Event, QLD, Australia, 1–5 November 2021; pp. 2170–2179. [Google Scholar]
  12. Morise, H.; Atarashi, K.; Oyama, S.; Kurihara, M. Neural collaborative filtering with multicriteria evaluation data. Appl. Soft Comput. 2022, 119, 108548. [Google Scholar] [CrossRef]
  13. Li, J.; Qi, S.; Chen, L.; Yan, H. Research on personalized recommendation based on big data technology. In The 10th International Conference on Computer Engineering and Networks; Springer: Singapore, 2021; pp. 240–247. [Google Scholar]
  14. Abdalla, H.I.; Amer, A.A.; Amer, Y.A.; Nguyen, L.; Al-Maqaleh, B. Boosting the Item-Based Collaborative Filtering Model with Novel Similarity Measures. Int. J. Comput. Intell. Syst. 2023, 16, 123. [Google Scholar] [CrossRef]
  15. Wang, D.; Zheng, Y.; Liu, Z.; Zheng, W.; Tian, J.; Fan, X. Personalized Recommendation System of Innovation and Entrepreneurship Course Based on Collaborative Filtering. In Proceedings of the 2021 International Conference on Networking Systems of AI (INSAI), Shanghai, China, 19–20 November 2021; pp. 21–25. [Google Scholar]
  16. Bandyopadhyay, S.; Thakur, S.S.; Mandal, J.K. Product recommendation for e-commerce business by applying principal component analysis (PCA) and K-means clustering: Benefit for the society. Innov. Syst. Softw. Eng. 2021, 17, 45–52. [Google Scholar] [CrossRef]
  17. Duan, R.; Jiang, C.; Jain, H.K. Combining review-based collaborative filtering and matrix factorization: A solution to rating’s sparsity problem. Decis. Support Syst. 2022, 156, 113748. [Google Scholar] [CrossRef]
  18. Ray, P.; Reddy, S.S.; Banerjee, T. Various dimension reduction techniques for high dimensional data analysis: A review. Artif. Intell. Rev. 2021, 54, 3473–3515. [Google Scholar] [CrossRef]
  19. Jia, W.; Sun, M.; Lian, J.; Hou, S. Feature dimensionality reduction: A review. Complex Intell. Syst. 2022, 8, 2663–2693. [Google Scholar] [CrossRef]
  20. Liang, W.; Xie, S.; Cai, J.; Xu, J.; Hu, Y.; Xu, Y.; Qiu, M. Deep Neural Network Security Collaborative Filtering Scheme for Service Recommendation in Intelligent Cyber-Physical Systems. IEEE Internet Things J. 2022, 9, 22123–22132. [Google Scholar] [CrossRef]
  21. Liu, X. Personalized Recommendation Algorithm of Tourist Attractions Based on Transfer Learning. Math. Probl. Eng. 2022, 2022, 2520140. [Google Scholar] [CrossRef]
  22. Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web, Hong Kong, 1–5 May 2001; pp. 285–295. [Google Scholar]
  23. Jiang, L.; Cheng, Y.; Yang, L.; Li, J.; Yan, H.; Wang, X. A trust-based collaborative filtering algorithm for E-commerce recommendation system. J. Ambient Intell. Humaniz. Comput. 2019, 10, 3023–3034. [Google Scholar] [CrossRef]
  24. Chen, J.; Zhao, C.; Uliji; Chen, L. Collaborative filtering recommendation algorithm based on user correlation and evolutionary clustering. Complex Intell. Syst. 2020, 6, 147–156. [Google Scholar] [CrossRef]
  25. Fu, M.; Qu, H.; Yi, Z.; Lu, L.; Liu, Y. A Novel Deep Learning-Based Collaborative Filtering Model for Recommendation System. IEEE Trans. Cybern. 2019, 49, 1084–1096. [Google Scholar] [CrossRef]
  26. Stergiopoulos, V.; Vassilakopoulos, M.; Tousidou, E.; Corral, A. An academic recommender system on large citation data based on clustering, graph modeling and deep learning. Knowl. Inf. Syst. 2024, 66, 4463–4496. [Google Scholar] [CrossRef]
  27. Xia, Z.; Sun, A.; Xu, J.; Peng, Y.; Ma, R.; Cheng, M. Contemporary Recommendation Systems on Big Data and Their Applications: A Survey. IEEE Access 2024, 12, 196914–196928. [Google Scholar] [CrossRef]
  28. Feng, C.; Liang, J.; Song, P.; Wang, Z. A fusion collaborative filtering method for sparse data in recommender systems. Inf. Sci. 2020, 521, 365–379. [Google Scholar] [CrossRef]
  29. Figuera, P.; García Bringas, P. Revisiting Probabilistic Latent Semantic Analysis: Extensions, Challenges and Insights. Technologies 2024, 12, 5. [Google Scholar] [CrossRef]
  30. Martins, G.B.; Papa, J.P.; Adeli, H. Deep learning techniques for recommender systems based on collaborative filtering. Expert Syst. 2020, 37, e12647. [Google Scholar] [CrossRef]
  31. Wilhelm, F.; Mohr, M.; Michiels, L. An Interpretable Model for Collaborative Filtering Using an Extended Latent Dirichlet Allocation Approach. Int. FLAIRS Conf. Proc. 2022, 35. [Google Scholar] [CrossRef]
  32. Liu, Y.; Ji, S.; Fu, Q.; Zhao, J.; Zhao, Z.; Gong, M. Latent semantic-enhanced discrete hashing for cross-modal retrieval. Appl. Intell. 2022, 52, 16004–16020. [Google Scholar] [CrossRef]
  33. Klymash, M.; Hordiichuk-Bublivska, O.; Pyrih, Y.; Urikova, O. A Hybrid Collaborative Filtering Based Recommender Model Using Modified Funk SVD Algorithm. In Digital Ecosystems: Interconnecting Advanced Networks with AI Applications; Springer: Cham, Switzerland, 2024; pp. 255–273. [Google Scholar]
  34. Xiaochen, Y.; Qicheng, L. Parallel Algorithm of Improved FunkSVD Based on GPU. IEEE Access 2022, 10, 26002–26010. [Google Scholar] [CrossRef]
  35. Wu, D.; Luo, X.; He, Y.; Zhou, M. A Prediction-Sampling-Based Multilayer-Structured Latent Factor Model for Accurate Representation to High-Dimensional and Sparse Data. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 3845–3858. [Google Scholar] [CrossRef] [PubMed]
  36. Karunasingha, D.S.K. Root mean square error or mean absolute error? Use Their Ratio Well. Inf. Sci. 2022, 585, 609–629. [Google Scholar] [CrossRef]
  37. Zhao, W.X.; Lin, Z.; Feng, Z.; Wang, P.; Wen, J.-R. A Revisiting Study of Appropriate Offline Evaluation for Top-N Recommendation Algorithms. ACM Trans. Inf. Syst. 2022, 41, 32. [Google Scholar] [CrossRef]
  38. Harper, F.M.; Konstan, J.A. The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst. 2015, 5, 19. [Google Scholar] [CrossRef]
Figure 1. SVD decomposition diagram of matrix R.
Figure 2. SVD decomposition diagram of matrix R_k after dimension reduction.
Figure 3. Factor decomposition model.
Figure 4. Visual representation of recommendation algorithm after improvement based on implicit semantic model.
Figure 5. Change curves of evaluation metrics (MAE and RMSE) for baseline, SVD, and LFM models with different iteration numbers N. (a) MAE curves; (b) RMSE curves.
Figure 6. Change curves of the evaluation indexes of the baseline, SVD, and LFM models with iteration times. (a) F-measure curve; (b) Precision curve; (c) Recall curve.
Figure 7. Change curve of MAE and RMSE of LFM model with learning rate γ.
Figure 8. Change curve of MAE and RMSE of LFM model with regularization parameter λ.
Figure 9. Change curve of MAE and RMSE value of LFM model with iteration number N.
Figure 10. Change curves of MAE, RMSE, and running time of LFM model with the number of hidden features F. (a) MAE curve; (b) RMSE curve.
Figure 11. MAE and RMSE score curves of the improved algorithm as a function of the number of neighbors K. (a) MAE curve; (b) RMSE curve.
Figure 12. Graph comparing F1 scores with neighbor K count before and after improvement. (a) F-measure; (b) Precision; (c) Recall.
Table 1. MAE and RMSE values of baseline, SVD, and LFM models with different iteration times N.

N | 0 | 10 | 20 | 30 | 40
MAE (Baseline) | 1.14595 | 0.91549 | 0.90586 | 0.90688 | 0.91174
MAE (SVD) | 1.26068 | 1.06987 | 1.08302 | 1.09405 | 1.10129
MAE (LFM) | 1.14595 | 0.91555 | 0.90571 | 0.90310 | 0.90380
RMSE (Baseline) | 1.33375 | 1.06475 | 1.05225 | 1.05350 | 1.05905
RMSE (SVD) | 1.40132 | 1.18967 | 1.20648 | 1.21745 | 1.22553
RMSE (LFM) | 1.30868 | 1.04581 | 1.03463 | 1.03237 | 1.03287

N | 50 | 60 | 70 | 80 | 90
MAE (Baseline) | 0.91765 | 0.92261 | 0.92681 | 0.93070 | 0.93495
MAE (SVD) | 1.10709 | 1.11126 | 1.11429 | 1.11666 | 1.11861
MAE (LFM) | 0.90760 | 0.90945 | 0.91183 | 0.91517 | 0.91901
RMSE (Baseline) | 1.06587 | 1.07202 | 1.07849 | 1.08204 | 1.08655
RMSE (SVD) | 1.23193 | 1.23699 | 1.24112 | 1.24403 | 1.24512
RMSE (LFM) | 1.03727 | 1.04123 | 1.04320 | 1.04711 | 1.04957
Table 2. Evaluation indices of baseline, SVD, and LFM model algorithms with different iteration times N.

N | 0 | 20 | 40 | 60 | 80 | 100
Precision (Baseline) | 0.60700 | 0.69697 | 0.70256 | 0.70051 | 0.70558 | 0.70707
Precision (SVD) | 0.61783 | 0.64758 | 0.64474 | 0.64758 | 0.64758 | 0.65044
Precision (LFM) | 0.60700 | 0.69697 | 0.70770 | 0.71875 | 0.72539 | 0.72396
Recall (Baseline) | 1.00000 | 0.88462 | 0.87821 | 0.88462 | 0.89103 | 0.89744
Recall (SVD) | 0.62179 | 0.94231 | 0.94231 | 0.94231 | 0.94231 | 0.94231
Recall (LFM) | 1.00000 | 0.88462 | 0.88462 | 0.88462 | 0.89744 | 0.89103
F-measure (Baseline) | 0.75545 | 0.77966 | 0.78063 | 0.78187 | 0.78754 | 0.79096
F-measure (SVD) | 0.61981 | 0.76762 | 0.76563 | 0.76762 | 0.76762 | 0.76963
F-measure (LFM) | 0.75545 | 0.77966 | 0.78632 | 0.79310 | 0.80229 | 0.79885
Table 3. MAE and RMSE values of LFM model with different learning rates γ.

γ | 0 | 0.004 | 0.008 | 0.012 | 0.016
MAE (LFM) | 1.07065 | 0.91960 | 0.90747 | 0.90679 | 0.91160
RMSE (LFM) | 1.22416 | 1.05011 | 1.03338 | 1.03298 | 1.04127

γ | 0.020 | 0.024 | 0.028 | 0.032 | 0.036
MAE (LFM) | 0.91915 | 0.92460 | 0.92759 | 0.93083 | 0.93435
RMSE (LFM) | 1.05099 | 1.05538 | 1.08818 | 1.05852 | 1.06831
Table 4. MAE and RMSE values of LFM model with different regularization parameters λ.

λ | 0 | 0.004 | 0.008 | 0.012 | 0.016
MAE (LFM) | 0.90684 | 0.90663 | 0.90662 | 0.90670 | 0.90686
RMSE (LFM) | 1.04410 | 1.04399 | 1.04283 | 1.04321 | 1.04450

λ | 0.020 | 0.024 | 0.028 | 0.032 | 0.036
MAE (LFM) | 0.90708 | 0.90736 | 0.90774 | 0.90816 | 0.90860
RMSE (LFM) | 1.04459 | 1.04418 | 1.04433 | 1.04604 | 1.04572
Table 5. MAE and RMSE values of LFM model with different iteration times N.

N | 0 | 10 | 20 | 30 | 40
MAE (LFM) | 1.09294 | 0.91281 | 0.90259 | 0.90047 | 0.90447
RMSE (LFM) | 1.29346 | 1.08495 | 1.07999 | 1.07474 | 1.07859

N | 50 | 60 | 70 | 80 | 90
MAE (LFM) | 0.90973 | 0.91316 | 0.91630 | 0.91907 | 0.92158
RMSE (LFM) | 1.07871 | 1.08923 | 1.12670 | 1.11136 | 1.09551
Table 6. MAE and RMSE values and running time of LFM model for different numbers of hidden features F.

F | 0 | 10 | 20 | 30 | 40
MAE (LFM) | 0.90137 | 0.90078 | 0.90084 | 0.90107 | 0.90141
RMSE (LFM) | 1.03661 | 1.03573 | 1.03636 | 1.03669 | 1.03703
Run time (ms) | 281.4 | 227.6 | 277.8 | 371.6 | 452.4

F | 50 | 60 | 70 | 80 | 90
MAE (LFM) | 0.90182 | 0.90228 | 0.90277 | 0.90328 | 0.90382
RMSE (LFM) | 1.03734 | 1.03772 | 1.03845 | 1.03897 | 1.03944
Run time (ms) | 556.2 | 682 | 722.4 | 855.2 | 980.8
Table 7. MAE and RMSE scores of the algorithm for different numbers of neighbors K.

K | 5 | 15 | 25 | 35 | 45
MAE (CF) | 1.15570 | 1.14755 | 1.14690 | 1.14867 | 1.14206
MAE (LFM) | 0.90034 | 0.90034 | 0.90034 | 0.90034 | 0.90034
MAE (LFM+CF) | 0.91515 | 0.90044 | 0.88876 | 0.88487 | 0.88138
RMSE (CF) | 1.34217 | 1.40331 | 1.40009 | 1.39512 | 1.39401
RMSE (LFM) | 1.03696 | 1.06103 | 1.10669 | 1.11324 | 1.08474
RMSE (LFM+CF) | 1.07996 | 1.11892 | 1.08279 | 1.06599 | 1.08143

K | 55 | 65 | 75 | 85 | 95
MAE (CF) | 1.14150 | 1.1386 | 1.13446 | 1.13938 | 1.14863
MAE (LFM) | 0.90034 | 0.90034 | 0.90034 | 0.90034 | 0.90034
MAE (LFM+CF) | 0.88243 | 0.88477 | 0.88545 | 0.88613 | 0.88725
RMSE (CF) | 1.34195 | 1.32067 | 1.37734 | 1.33142 | 1.35204
RMSE (LFM) | 1.04549 | 1.07545 | 1.07681 | 1.06570 | 1.05223
RMSE (LFM+CF) | 1.07847 | 1.05662 | 1.09308 | 1.10431 | 1.04605
Table 8. Evaluation metrics of the algorithms before and after improvement for different numbers of neighbors K.

K | 5 | 25 | 45 | 65
Precision (CF) | 0.64286 | 0.65236 | 0.64783 | 0.65351
Precision (LFM) | 0.74054 | 0.74054 | 0.74054 | 0.74054
Precision (LFM+CF) | 0.73913 | 0.74457 | 0.74457 | 0.74457
Recall (CF) | 1.00000 | 0.99346 | 0.97386 | 0.97386
Recall (LFM) | 0.90132 | 0.90132 | 0.90132 | 0.90132
Recall (LFM+CF) | 0.89474 | 0.90132 | 0.90132 | 0.90132
F-measure (CF) | 0.78261 | 0.78756 | 0.77807 | 0.78215
F-measure (LFM) | 0.81306 | 0.81306 | 0.81306 | 0.81306
F-measure (LFM+CF) | 0.80952 | 0.81548 | 0.81548 | 0.81548
Table 9. Performance of the proposed model for different numbers of neighbors K on the ML-1M sparse subset.

K | 5 | 25 | 45 | 65
MAE (LFM+CF) | 0.91292 | 0.91054 | 0.90782 | 0.90983
RMSE (LFM+CF) | 1.02433 | 1.02192 | 1.02051 | 1.02113
Precision (LFM+CF) | 0.74012 | 0.74582 | 0.74582 | 0.74582
Recall (LFM+CF) | 0.89682 | 0.90421 | 0.90421 | 0.90421
F-measure (LFM+CF) | 0.81206 | 0.81833 | 0.81833 | 0.81833


