Article

Dirichlet Matrix Factorization: A Reliable Classification-Based Recommender System

by Raúl Lara-Cabrera 1,2,*,†, Álvaro González 2,†, Fernando Ortega 1,2,† and Ángel González-Prieto 2,3,4,†
1 Departamento de Sistemas Informáticos, ETSI Sistemas Informáticos, Universidad Politécnica de Madrid, 28031 Madrid, Spain
2 KNODIS Research Group, Universidad Politécnica de Madrid, 28031 Madrid, Spain
3 Departamento de Álgebra, Geometría y Topología, Universidad Complutense de Madrid, 28040 Madrid, Spain
4 Instituto de Ciencias Matemáticas (CSIC-UAM-UCM-UC3M), 28049 Madrid, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2022, 12(3), 1223; https://doi.org/10.3390/app12031223
Submission received: 1 December 2021 / Revised: 15 January 2022 / Accepted: 21 January 2022 / Published: 24 January 2022

Abstract: Traditionally, recommender systems have been approached as regression models aiming to predict the score that a user would give to a particular item. In this work, we propose a recommender system that tackles the problem as a classification task instead of as a regression. The new model, Dirichlet Matrix Factorization (DirMF), provides not only a prediction but also its reliability, hence achieving a better balance between the quality and quantity of the predictions (i.e., reducing the prediction error by limiting the model's coverage). The conducted experiments show that the proposed model outperforms other models due to its ability to discard unreliable predictions. Compared to our previous model, which uses the same classification approach, DirMF shows a similar efficiency, outperforming it on some of the datasets included in the experimental setup.

1. Introduction

In recent years, society has become increasingly saturated with the overwhelming amount of information available to consumers. For this reason, the use of algorithms for information retrieval and sorting is becoming a cornerstone of the technological solutions on offer. An important instance of these methods is Recommender Systems (RSs), which help users match with items in the most efficient way possible, taking into account both the user's own preferences and those of others with similar tastes. Evidence of the deep penetration of these algorithms in industry is the large number of companies that use RSs as part of their business model, such as Amazon, eBay, Netflix, Twitter and many others. However, although improving recommendation accuracy has been the main target for academic researchers [1], objectives beyond accuracy have also attracted the interest of the community in recent years [2]. Metrics such as coverage, diversity and novelty are goals of cutting-edge research, since users typically prefer to be recommended items with a certain degree of novelty. Recent RS research also stresses the reliability of recommendations [3]: when an RS recommends a restaurant by awarding it five stars, a human is likely to be skeptical of the rating. Some RSs provide extra data that help us to gauge a rating's trustworthiness, such as the number of individuals who have voted for the restaurant. Almost everyone prefers a restaurant with an average rating of 4 stars and 5000 reviews to one with an average rating of 5 stars and just 5 reviews. In this framework, a recommendation does not only comprise the recommended item, but also an associated reliability value provided through RS techniques and models for trustworthiness.
The core of every RS is its filtering mechanism. Broadly speaking, the system makes recommendations based on context [4], content [5], social environment [6], demographic data [7] and collaborative information, the latter being the most widely used approach [8]. Hybrid methods [9], which combine Collaborative Filtering (CF) and other filtering sources, are commonly used in commercial RS designs.
Among CF methods, Probabilistic Matrix Factorization (PMF) [10] is arguably the most relevant contribution to the area in recent times. It models the process of assigning a rating as a regression that follows a normal distribution whose mean is given by the inner product of two 'hidden vectors' associated with each user and item. Since its inception, other CF models have been proposed to extend the ideas of PMF to more general frameworks. One of the most promising lines is to combine the hidden vectors in a non-linear way, in sharp contrast with the usual operation of PMF. For instance, in [11], two new models inspired by deep learning techniques are proposed, namely, Generalized Matrix Factorization (GMF) and Neural Collaborative Filtering (NCF). In the former, instead of the standard inner product in Euclidean space, a more general inner product is allowed, which is learned during the training process. To break definitively with linearity, this work also proposes NCF, in which the inner product is fully substituted by a neural network able to combine the hidden vectors in a highly non-linear way.
Despite these advances, recent works show that, in the RS field, providing precise reliability values attached to the forecasts is a major aim, as it brings many advantages: new ways of providing recommendations to the user that take into account the reliability of the prediction; better accuracy achieved by filtering out low-reliability recommendations; and the identification of a cold start problem when every recommendation for a user has low reliability. It is worth noticing that reliability and accuracy are different metrics of the performance of an RS: accuracy is the objective ability of the RS to issue correct predictions (in the sense that they agree with the known values of the test split of the dataset), while reliability is the model's subjective perception of the probability that its prediction is correct. In some sense, reliability measures the impression of the algorithm about the 'difficulty' of issuing the prediction. In this way, a good RS method labels controversial predictions with low reliability, while clear trends are associated with high reliability. This reliability measure can be used to filter out the controversial predictions, in such a way that only forecasts with high reliability are issued. If the RS models reliability correctly, this filtering will lead to an improvement in the accuracy of the predictions.
For this reason, several CF-based methods and models have been developed to provide reliability values based on both the Matrix Factorization (MF) and K Nearest Neighbors (KNN) approaches, in contrast with the aforementioned proposals, which do not support this metric. For instance, Margaris et al. [12] present the notion of reliability in the domain of review-to-rating conversion, that is, the reliability of converting textual reviews into implicit ratings. In addition, Mesas and Bellogín [13] look at a few different ways of embedding awareness in an RS to analyze whether each item should be offered. The authors claim that, similar to our suggested strategy, confidence values may be used to filter the most trustworthy recommendations, resulting in an increase in accuracy and a drop in coverage.
Regarding memory-based approaches, Fan et al. [14] improve the classic Pearson and cosine similarity metrics by applying the reliability concept, computing the credibility of a user's rating from the users' scores on common products. Ahmadian et al. [15] presented an aggregation of reliability measures, including user-based reliability, rating profile reliability, and item-based reliability; their model is able to make better suggestions as a result of this knowledge. Liang et al. [16] and Xu et al. [17] both used an upper confidence bound to improve recommendations, the former using event-based social networks and synthetic datasets, and the latter exploiting the temporal information of the user's behavior to improve the accuracy of the recommendations.
Despite their effectiveness, the above-mentioned studies have two major flaws in terms of reliability. First, the KNN technique suffers from a lack of precision and scalability. Second, because social information is available in only a tiny portion of current RSs, social-based trust-aware techniques aimed at determining reliability values cannot be considered universal solutions.
In this direction, our previous model, Bernoulli Matrix Factorization (BeMF) [18], addressed the problem of providing accurate recommendations, together with their associated reliability, as a classification problem. This classification approach uses a finite set of ratings that can be assigned to the items as the classes to predict. In this manner, the model returns the probability that a user would assign each of the D possible ratings to an item, which was modeled as D independent Bernoulli processes. These D probabilities can be aggregated to obtain the prediction and its expected reliability, in such a way that the model provides additional information alongside the recommendations.
Although our previous model achieves good results, we modeled the ratings as a D-dimensional vector of independent Bernoulli random variables, which is a rather strong assumption. To address this issue, in this work we propose a novel MF model that provides the reliability of the recommendations by using a classification-based approach. However, instead of assuming independence between the ratings, our novel proposal models them as a Dirichlet random variable to deal with the existing interdependence. Due to the key role that the Dirichlet distribution plays in the formulation of the model, as well as its crucial use of matrix factorization procedures, this new method is referred to as Dirichlet Matrix Factorization (DirMF).
The structure of this paper is as follows. Section 2 explains the inner workings of the proposed DirMF model, both regarding the mathematical formulation of the underlying classification problem (Section 2.1) and the way in which the prediction value and its reliability can be extracted from these data (Section 2.2). The presentation is complemented with a discussion of the time and space complexity of the proposed method in Section 2.3, as well as a detailed running example in Section 2.4. In Section 3, we present the experiments conducted to evaluate the performance of DirMF. In particular, Section 3.1 describes the experimental setting, whereas Section 3.2 analyzes the obtained results. The paper finishes with some conclusions and future work in Section 4.

Our Contribution

The main results and novel proposals developed in the present manuscript are the following:
  • A new model, named DirMF, is introduced, and a complete description of its mathematical formulation, training procedure and prediction protocol is provided;
  • The new DirMF model introduces a new probabilistic interpretation of the rating process. Instead of considering the rating as a continuous random variable or seeing each vote as an independent feature, as in the existing literature, our method is flexible enough to model the ratings as interrelated discrete random variables. For this purpose, the method proposes the novelty of relying on the Dirichlet probability distribution to model the user's rating behavior;
  • The performance of this new model is evaluated through an extensive collection of experiments. The results evidence that DirMF achieves a high performance both in terms of recommendation quality and management of reliability. The implementation of the reliability as an intrinsic part of the algorithm leads to a better performance of this method compared to the preexisting algorithms that do not treat reliability in this way;
  • Additionally, DirMF shows similar results to BeMF, the other method that embeds the reliability as part of its training process. Furthermore, thanks to its more flexible nature, DirMF presents a more conservative forecasting trend than BeMF, which leads to a better prediction in scenarios in which failure is highly penalized.

2. Proposed Model

Let us assume that there are N users that may rate M different items. Rating means that a user can assign a score to an item chosen from a discrete set of scores $S = \{s_1, \ldots, s_D\}$. For instance, a rating consisting of giving one to five stars means $S = \{1, 2, 3, 4, 5\}$. The known ratings are represented by the rating matrix $R = (R_{u,i})$, where $R_{u,i} = s$ if user $u \in [1, N]$ has assigned the score $s \in S$ to item $i \in [1, M]$, and $R_{u,i} = \bullet$ if user $u$ has not rated item $i$. The matrix $R$ is decomposed into $D$ binary matrices $R^{s_1}, \ldots, R^{s_D}$, such that $R^s_{u,i} = 1$ if user $u$ assigned the score $s$ to item $i$ (i.e., $R_{u,i} = s$), $R^s_{u,i} = 0$ if user $u$ assigned a score other than $s$ to item $i$ (i.e., $R_{u,i} \neq s$), and $R^s_{u,i} = \bullet$ if user $u$ did not rate item $i$ (i.e., $R_{u,i} = \bullet$).
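As a minimal illustration of this decomposition, the following NumPy sketch splits the rating matrix into one binary matrix per score; the function name is ours and np.nan stands in for the unknown-rating symbol $\bullet$:

```python
import numpy as np

def decompose_ratings(R, scores):
    """Split a rating matrix R (np.nan = unrated) into one binary matrix
    per score s: R_s[u, i] = 1 if R[u, i] == s, 0 if the user assigned a
    different score, and np.nan if the user did not rate the item."""
    binary = {}
    for s in scores:
        binary[s] = np.where(np.isnan(R), np.nan, (R == s).astype(float))
    return binary

# Toy example: 2 users, 3 items, scores {1, 2}
R = np.array([[1.0, np.nan, 2.0],
              [2.0, 1.0, np.nan]])
R_bin = decompose_ratings(R, scores=[1, 2])
print(R_bin[1])  # [[1, nan, 0], [0, 1, nan]]
print(R_bin[2])  # [[0, nan, 1], [1, 0, nan]]
```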
As stated above, predicting the missing values of $R^s$ is a classification problem, since $R^s_{u,i}$ indicates whether user $u$ thought that a rating of $s$ is the best fit for item $i$. The aim of our proposal is to model the random vector $(R^{s_1}_{u,i}, \ldots, R^{s_D}_{u,i})$ as a Dirichlet random variable with parameters $0 \leq \rho^{s_1}_{u,i}, \ldots, \rho^{s_D}_{u,i} \leq 1$. Therefore, $R^s_{u,i}$ quantifies both the affinity of user $u$ for item $i$ and the reliability of the prediction.
Regarding how our model estimates the parameters $\rho^s_{u,i}$, it supposes that there exist a $K$-dimensional hidden user factors vector $P^s_u = (P^s_{u,1}, \ldots, P^s_{u,K}) \in \mathbb{R}^K$ and a hidden item factors vector $Q^s_i = (Q^s_{i,1}, \ldots, Q^s_{i,K}) \in \mathbb{R}^K$ whose dot product yields $\rho^s_{u,i}$. This is consistent with other MF models in the literature. As $P^s_u \cdot Q^s_i$ may return any real number, the model uses a logistic-like smooth function $\omega: \mathbb{R} \to [0, 1]$ to normalize the result into $0 \leq \omega(P^s_u \cdot Q^s_i) \leq 1$. In this way, we set $\rho^s_{u,i} = \omega(P^s_u \cdot Q^s_i)$ as a probability.
Figure 1 exhibits the plate diagram of the model, which comprises $D$ different factorizations (one for each possible score), each composed of $N$ variables $P_u$ (latent factors of each user) and $M$ variables $Q_i$ (latent factors of each item). The hyperparameters of the model are two positive real values $\sigma_P, \sigma_Q > 0$ (which play the role of standard deviations of the Gaussian priors used for regularization), an integer $K > 0$ (the number of hidden factors) and a smooth logistic-like function $\omega$ used for normalization.

2.1. Dirichlet Factorization

In this section, we shall describe the mathematical formulation of the Dirichlet factorization model. Recall that, given parameters $\rho_1, \ldots, \rho_D > 0$, the associated Dirichlet distribution is the $D$-dimensional random vector with joint probability density function $f_{\rho_1, \ldots, \rho_D}: \mathbb{R}^D \to \mathbb{R}$ given by

$$f(x_1, \ldots, x_D \,|\, \rho_1, \ldots, \rho_D) = \frac{1}{B(\rho_1, \ldots, \rho_D)} \prod_{s=1}^{D} x_s^{\rho_s - 1}, \tag{1}$$

for variables $x_s$ in the standard simplex, i.e., $x_1, \ldots, x_D \geq 0$ and $\sum_{s=1}^{D} x_s = 1$. The density function is zero elsewhere. Here, the normalization factor is the Beta function

$$B(\rho_1, \ldots, \rho_D) = \frac{\prod_{s=1}^{D} \Gamma(\rho_s)}{\Gamma\left( \sum_{s=1}^{D} \rho_s \right)},$$

with $\Gamma$ the Gamma function (the analytic continuation of the factorial to a meromorphic function on the complex plane).
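For illustration, this density can be evaluated numerically in log-space using SciPy's log-Gamma function; this is a hedged sketch (the helper name is ours), checked against SciPy's reference implementation:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import dirichlet

def dirichlet_log_pdf(x, rho):
    """Log-density of Dirichlet(rho) at a point x of the standard simplex:
    log f = log Gamma(sum rho) - sum log Gamma(rho_s)
            + sum (rho_s - 1) * log x_s."""
    x, rho = np.asarray(x), np.asarray(rho)
    assert np.all(x >= 0) and np.isclose(x.sum(), 1.0)
    log_B = gammaln(rho).sum() - gammaln(rho.sum())
    return -log_B + np.sum((rho - 1.0) * np.log(x))

rho = np.array([0.6, 0.3, 0.8])
x = np.array([0.2, 0.5, 0.3])
# Both calls print the same value
print(dirichlet_log_pdf(x, rho), dirichlet.logpdf(x, rho))
```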
In this way, if we set as parameters $\rho^s_{u,i} = \omega(P^s_u \cdot Q^s_i)$ then, given the rating matrix $R = (R_{u,i})_{u,i}$ decomposed into binary matrices $R^s = (R^s_{u,i})_{u,i}$, we obtain that the associated likelihood $\mathcal{L}(R \,|\, P, Q)$ is

$$\mathcal{L}(R \,|\, P, Q) = \prod_{R_{u,i} \neq \bullet} \frac{1}{B(\omega(P_u \cdot Q_i))} \prod_{s \in S} \left( R^s_{u,i} \right)^{\omega(P^s_u \cdot Q^s_i) - 1}.$$

Here, we have shortened $B(\omega(P_u \cdot Q_i)) = B(\omega(P^1_u \cdot Q^1_i), \ldots, \omega(P^D_u \cdot Q^D_i))$.
However, observe that, since $R^s_{u,i}$ is either 0 or 1, the previous likelihood is zero except for a perfect prediction. To avoid this issue, we propose to use a 'soft' version of the binary labeling. To do so, we take the softmax of the hard labels:

$$\tilde{R}^s_{u,i} = \operatorname{softmax}^s_{\beta_{u,i}}(R^1_{u,i}, \ldots, R^D_{u,i}) = \frac{e^{\beta_{u,i} R^s_{u,i}}}{\sum_{r=1}^{D} e^{\beta_{u,i} R^r_{u,i}}} = \begin{cases} \dfrac{e^{\beta_{u,i}}}{(D-1) + e^{\beta_{u,i}}} & \text{if } R^s_{u,i} = 1, \\[2ex] \dfrac{1}{(D-1) + e^{\beta_{u,i}}} & \text{if } R^s_{u,i} = 0. \end{cases} \tag{2}$$

Here, $\beta_{u,i} > 0$ is a shape hyperparameter that controls the sharpness of the softmax function. It can be adjusted to increase the weight of some pairs $(u, i)$ in the likelihood function.
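A small sketch of this soft labeling (the helper name is ours; NumPy assumed) illustrates the effect of $\beta$ on the softened one-hot vector:

```python
import numpy as np

def soft_labels(R_onehot, beta):
    """Soften a one-hot label vector (R_{u,i}^{s_1}, ..., R_{u,i}^{s_D})
    via softmax(beta * R). Larger beta pushes the soft label closer
    to the original hard one-hot vector."""
    z = beta * np.asarray(R_onehot, dtype=float)
    e = np.exp(z - z.max())   # numerically stabilized softmax
    return e / e.sum()

# A 5-score example with the true score in the 4th position
print(soft_labels([0, 0, 0, 1, 0], beta=1.0))
# -> [0.149, 0.149, 0.149, 0.405, 0.149]; with beta=4 the 4th entry is ~0.93
```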
In this way, the modified likelihood function that we shall consider is:

$$\mathcal{L}(R \,|\, P, Q) = \prod_{R_{u,i} \neq \bullet} \frac{1}{B(\omega(P_u \cdot Q_i))} \prod_{s \in S} \left( \tilde{R}^s_{u,i} \right)^{\omega(P^s_u \cdot Q^s_i) - 1}. \tag{3}$$

Hence, the log-likelihood function, $\ell(R \,|\, P, Q) = \log \mathcal{L}(R \,|\, P, Q)$, is given by:

$$\ell(R \,|\, P, Q) = \sum_{R_{u,i} \neq \bullet} \left[ \log \Gamma\left( \sum_{s \in S} \omega(P^s_u \cdot Q^s_i) \right) - \sum_{s \in S} \log \Gamma\left( \omega(P^s_u \cdot Q^s_i) \right) + \sum_{s \in S} \left( \omega(P^s_u \cdot Q^s_i) - 1 \right) \log(\tilde{R}^s_{u,i}) \right].$$
To optimize this likelihood function, we will use a standard gradient ascent algorithm. The partial derivatives of the log-likelihood with respect to $P$ and $Q$ are given by:

$$\frac{\partial \ell}{\partial P^s_{u,k}} = \sum_{\{i \,|\, R_{u,i} \neq \bullet\}} Q^s_{i,k} \, \omega'(P^s_u \cdot Q^s_i) \left[ \psi\left( \sum_{r \in S} \omega(P^r_u \cdot Q^r_i) \right) - \psi\left( \omega(P^s_u \cdot Q^s_i) \right) + \log(\tilde{R}^s_{u,i}) \right],$$

$$\frac{\partial \ell}{\partial Q^s_{i,k}} = \sum_{\{u \,|\, R_{u,i} \neq \bullet\}} P^s_{u,k} \, \omega'(P^s_u \cdot Q^s_i) \left[ \psi\left( \sum_{r \in S} \omega(P^r_u \cdot Q^r_i) \right) - \psi\left( \omega(P^s_u \cdot Q^s_i) \right) + \log(\tilde{R}^s_{u,i}) \right],$$

where $\psi$ denotes the digamma function

$$\psi(x) = \frac{d}{dx} \log \Gamma(x) = \frac{\Gamma'(x)}{\Gamma(x)}.$$

Therefore, the update rules for each known rating $R_{u,i}$ are the following:

$$P^s_u \leftarrow P^s_u + \gamma \, Q^s_i \, \omega'(P^s_u \cdot Q^s_i) \left[ \psi\left( \sum_{r \in S} \omega(P^r_u \cdot Q^r_i) \right) - \psi\left( \omega(P^s_u \cdot Q^s_i) \right) + \log(\tilde{R}^s_{u,i}) \right], \tag{4}$$

$$Q^s_i \leftarrow Q^s_i + \gamma \, P^s_u \, \omega'(P^s_u \cdot Q^s_i) \left[ \psi\left( \sum_{r \in S} \omega(P^r_u \cdot Q^r_i) \right) - \psi\left( \omega(P^s_u \cdot Q^s_i) \right) + \log(\tilde{R}^s_{u,i}) \right]. \tag{5}$$

Notice that, for typical logistic-like functions, the derivative $\omega'$ can be easily computed. For instance, for $\omega(x) = \frac{1}{1 + e^{-x}}$, the genuine logistic function, we have that

$$\omega'(x) = \omega(x)(1 - \omega(x)).$$
We can also add Gaussian priors with mean zero and fixed standard deviation to the likelihood function (Equation (3)) for the parameters $P^s_u$ and $Q^s_i$. Analogous computations with these Gaussian priors lead to an extra regularization term $-\eta P^s_u$ in Equation (4) and $-\eta Q^s_i$ in Equation (5), for a hyperparameter $\eta > 0$, so that the factors decay toward the prior mean.
Algorithm 1 summarizes the fitting process of the DirMF model for $\omega(x) = \frac{1}{1+e^{-x}}$, the genuine logistic function, and $\beta_{u,i} = R_{u,i}$ to control the sharpness of the softmax function.
The algorithm requires as input the sparse rating matrix ($R$), the number of latent factors ($K$), the set of scores ($S$), and the hyperparameters required by the gradient ascent optimization: learning rate ($\gamma$), regularization ($\eta$) and number of iterations ($m$). Its outputs are the fitted latent factors: $P$ contains the latent factors for each score $s$, user $u$ and factor $k$, whereas $Q$ contains the latent factors for each score $s$, item $i$ and factor $k$. Note that both the user update loop (lines 3–20) and the item update loop (lines 21–38) may be executed in parallel over users and items, respectively.
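As a companion to Algorithm 1, the following is a hedged Python sketch of the fitting loop under these choices ($\omega$ logistic, $\beta_{u,i} = R_{u,i}$). All names are ours; for simplicity it updates both factor vectors per observed rating in turn, whereas Algorithm 1 alternates separate user and item loops:

```python
import numpy as np
from scipy.special import digamma

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_dirmf(R, scores, K=4, gamma=0.01, eta=0.01, m=50, seed=0):
    """Sketch of the DirMF fitting loop: gradient ascent on the
    log-likelihood with omega = logistic and beta_{u,i} = R[u, i].
    R is an (N, M) matrix with np.nan marking unknown ratings."""
    rng = np.random.default_rng(seed)
    D, (N, M) = len(scores), R.shape
    P = rng.random((D, N, K))   # user factors, one matrix per score
    Q = rng.random((D, M, K))   # item factors, one matrix per score
    known = np.argwhere(~np.isnan(R))
    for _ in range(m):
        for u, i in known:
            # Soft one-hot label (Equation (2)) with beta = R[u, i]
            onehot = np.array([float(R[u, i] == s) for s in scores])
            soft = np.exp(R[u, i] * onehot)
            soft /= soft.sum()
            rho = sigmoid(np.einsum('dk,dk->d', P[:, u], Q[:, i]))
            # Shared bracket of Equations (4)-(5), times omega'
            g = rho * (1 - rho) * (digamma(rho.sum()) - digamma(rho)
                                   + np.log(soft))
            P_u = P[:, u].copy()
            P[:, u] += gamma * (g[:, None] * Q[:, i] - eta * P[:, u])
            Q[:, i] += gamma * (g[:, None] * P_u - eta * Q[:, i])
    return P, Q
```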

2.2. Prediction

Once the model has been trained, we have obtained a collection of users' latent vectors $P^s_u$ as well as items' latent vectors $Q^s_i$ for $s \in S$. Given a new user–item pair $(u, i)$ to be predicted, the Dirichlet MF method assigns to it the $D$-dimensional continuous random vector $R_{u,i} = (R^1_{u,i}, \ldots, R^D_{u,i})$ with parameters $\rho^s = \omega(P^s_u \cdot Q^s_i)$. That is, it is the $D$-dimensional random vector supported on the standard simplex whose probability density function is given by Equation (1):

$$f\left( x_1, \ldots, x_D \,|\, \omega(P_u \cdot Q_i) \right) = \frac{1}{B(\omega(P_u \cdot Q_i))} \prod_{s=1}^{D} x_s^{\omega(P^s_u \cdot Q^s_i) - 1}.$$
From this distribution, discrete random variables supported on $S$ can be sampled. These discrete probability distributions represent the probability that user $u$ would score item $i$ with each of the possible ratings. However, observe that, in sharp contrast with other matrix factorization approaches, the output $R_{u,i}$ here is not a discrete distribution, but a random variable that takes values in the space of discrete probability distributions.
Several criteria can be used to obtain the predicted ratings. In this work, we shall follow an approach based on the law of large numbers. Indeed, if we sampled $R_{u,i}$ a large number of times, the mean probability of the score $s \in S$ would be the mean of the $s$-th component of the Dirichlet distribution with parameters $\rho_1 = \omega(P^1_u \cdot Q^1_i), \ldots, \rho_D = \omega(P^D_u \cdot Q^D_i)$, which is the value:

$$E(R^s_{u,i}) = \frac{\omega(P^s_u \cdot Q^s_i)}{\sum_{s' \in S} \omega(P^{s'}_u \cdot Q^{s'}_i)}.$$
These means are positive, $E(R^s_{u,i}) \geq 0$, and satisfy $\sum_s E(R^s_{u,i}) = 1$, so they give rise to a discrete probability distribution on $S$. From it, we obtain two relevant pieces of data:
  • The final prediction $\hat{R}_{u,i}$, which is the mode of the aforementioned distribution, that is,
    $$\hat{R}_{u,i} = \operatorname{argmax}_s \, E(R^s_{u,i}).$$
  • The reliability $\varrho_{u,i}$ of the prediction, which is the probability attained at the mode of the distribution, that is,
    $$\varrho_{u,i} = \max_s \, E(R^s_{u,i}).$$
In this manner, having fixed a threshold $0 \leq \vartheta \leq 1$, we can crop the predictions so that we artificially set $\hat{R}_{u,i} = \bullet$ (no prediction is issued) if the reliability does not reach the given threshold, that is, if $\varrho_{u,i} < \vartheta$. Otherwise, when $\varrho_{u,i} \geq \vartheta$, we keep, as the prediction $\hat{R}_{u,i}$, the mode of the mean probability distribution.
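This prediction and reliability extraction can be sketched as follows, reusing the factor tensors from the fitting sketch in Section 2.1 (the function name and threshold handling are our own illustrative choices):

```python
import numpy as np

def predict(P, Q, u, i, scores, threshold=0.0):
    """Prediction and reliability for the pair (u, i) from fitted factors
    of shapes P: (D, N, K) and Q: (D, M, K). Computes
    E(R^s) = omega(P^s.Q^s) / sum_r omega(P^r.Q^r); the prediction is the
    mode of this distribution and the reliability is its probability."""
    rho = 1.0 / (1.0 + np.exp(-np.einsum('dk,dk->d', P[:, u], Q[:, i])))
    probs = rho / rho.sum()
    s0 = int(np.argmax(probs))
    if probs[s0] < threshold:
        return None, float(probs[s0])   # no reliable prediction issued
    return scores[s0], float(probs[s0])
```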
Summarizing, the workflow associated to a cycle of training and exploitation phases of the DirMF model is as follows:
  • Collect the rating matrix $R$ of shape $N \times M$ with the known votes per user and item;
  • Choose hyperparameters $K$ (number of latent factors), $\gamma$ (learning rate), $\eta$ (regularization), $m$ (number of iterations) and $\vartheta$ (reliability threshold, for the exploitation phase);
  • Execute the training algorithm for DirMF (Algorithm 1) with the chosen hyperparameters. The output of the training is a collection of pairs of matrices $(P^s = (P^s_1, \ldots, P^s_N), \, Q^s = (Q^s_1, \ldots, Q^s_M))$, one for each possible vote $s$;
  • Given a new pair $(u, i)$ of a user $1 \leq u \leq N$ and an item $1 \leq i \leq M$ to be predicted, compute the quantities
    $$E(R^s_{u,i}) = \frac{\omega(P^s_u \cdot Q^s_i)}{\sum_{s' \in S} \omega(P^{s'}_u \cdot Q^{s'}_i)}$$
    for each possible score $s$;
  • The prediction $\hat{R}_{u,i}$ is the vote $s_0$ for which $E(R^{s_0}_{u,i})$ is maximum. The reliability is the value $\varrho_{u,i} = E(R^{s_0}_{u,i})$;
  • If $\varrho_{u,i} \geq \vartheta$, then return the prediction $\hat{R}_{u,i}$; otherwise, return that no reliable prediction can be issued.
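Putting the previous sketches together, a toy end-to-end cycle of this workflow (with entirely synthetic data) could look as follows:

```python
import numpy as np

# Entirely synthetic data: 20 users, 15 items, scores 1..5
scores = [1, 2, 3, 4, 5]
rng = np.random.default_rng(1)
R = np.full((20, 15), np.nan)
for _ in range(60):                      # ~20% of the entries are known
    R[rng.integers(20), rng.integers(15)] = float(rng.choice(scores))

P, Q = fit_dirmf(R, scores, K=4, gamma=0.01, eta=0.01, m=25)
pred, rel = predict(P, Q, u=0, i=3, scores=scores, threshold=0.3)
print(pred, rel)  # pred is None whenever the reliability falls below 0.3
```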

2.3. Computational Complexity

From the information of Algorithm 1, it is possible to analyze the complexity of the DirMF model in terms of time and space consumption.
Let us first focus on time consumption. From lines 3–4 and 21–22, we observe that, for each training epoch, the training algorithm must iterate over the set of known votes, which we denote by $\Delta$. Then, the entry corresponding to each of the matrix factorizations must be updated, and there are $|S|$ of them (lines 6, 9, 24 and 27). Finally, each update operation requires modifying each of the entries of the hidden vectors for users and items, and there are $K$ of them (lines 14 and 32). Notice that the updates only require standard arithmetic operations, which are performed in constant time. Hence, taking into account the total number of training steps, we obtain a time complexity of $O(|\Delta| \, |S| \, K \, m)$.
To analyze how this quantity scales with the size of the dataset, we can estimate it further. Typically, in real-world datasets, the number of known votes is a constant proportion (around 1–5%) of the total number of (user, item) pairs, of which there are $NM$. On the other hand, $S$ is usually a small set (of the order of 10 possible ratings) and $m$ is a hyperparameter that is fixed independently of the number of entries of the dataset. Hence, the time complexity can be estimated as $O(NMK)$; that is, the time complexity increases linearly with the number of users and items in the dataset, as well as with the number of hidden factors.
Algorithm 1: DirMF model fitting algorithm, fixing $\omega$ to the logistic function and shape parameters $\beta_{u,i} = R_{u,i}$.
With respect to space consumption, notice that all the operations can be conducted in-place, so no extra space is needed apart from the computed values. These values correspond to the $|S|$ pairs of matrices $(P^s, Q^s)$ of the corresponding matrix factorization for the binary matrix $R^s$. Since each of these pairs has shape $N \times K$ and $M \times K$, we get a space complexity of $O(|S|(N + M)K)$. Again, since $|S|$ is a fixed value independent of the size of the dataset, this complexity scales as $O((N + M)K)$; that is, linearly in the number of users and items and in the number of hidden factors.

2.4. Running Example

Here, we describe a running example of the DirMF model using a synthetic dataset with three users and five items. For the sake of simplicity, the set of possible ratings is $S = \{✔, ✖\}$, that is, like (✔) and dislike (✖). The original rating matrix used for this example is shown in Table 1.
Given the ratings, we can build the softmax of every known rating $R^s_{u,i}$, $s \in \{✔, ✖\}$, according to Equation (2), as shown in Table 2.
The parameters of the model are the latent factors of both users and items for every possible rating: $P^✔, P^✖, Q^✔, Q^✖$. For the sake of simplicity, we fixed the number of latent factors to $K = 3$. The random initialization of the parameters for this running example is shown in Table 3.
The optimization of the parameters is performed by a gradient method, applying the update rules defined by Equations (4) and (5) at each of the $m$ iteration steps. For instance, the update for the second latent factor $k_2$ of item $i_3$ and the dislike rating (✖) is driven by:

$$\Delta Q^✖_{i_3,k_2} = P^✖_{u_1,k_2} \, \omega'(P^✖_{u_1} \cdot Q^✖_{i_3}) \left[ \psi\left( \omega(P^✖_{u_1} \cdot Q^✖_{i_3}) \right) - \psi\left( \sum_{s \in S} \omega(P^s_{u_1} \cdot Q^s_{i_3}) \right) - \log(\tilde{R}^✖_{u_1,i_3}) \right],$$

hence we should update that hidden factor as follows:

$$Q^✖_{i_3,k_2} \leftarrow Q^✖_{i_3,k_2} - \gamma \left( \Delta Q^✖_{i_3,k_2} + \eta \, Q^✖_{i_3,k_2} \right),$$

with $\gamma$ and $\eta$ being the learning rate and the regularization, respectively. Note that $\Delta Q^✖_{i_3,k_2}$ is the negative of the gradient component of Equation (5), so this descent-style update is equivalent to the gradient ascent rule; the sum over users reduces to a single term because, in this example, $u_1$ is the only user who rated item $i_3$.
Table 4 contains the latent factors after one iteration, using regularization $\eta = 0.01$ and learning rate $\gamma = 0.1$.
Predictions can be computed after the model has been trained by finding the score that maximizes the probability in the classification task. For instance, to determine user $u_2$'s rating of item $i_2$, $\hat{R}_{u_2,i_2}$, we may compute the probability distribution of this rating as follows:
$$E(R^✖_{u_2,i_2}) = \frac{\omega(P^✖_{u_2} \cdot Q^✖_{i_2})}{\sum_{s \in S} \omega(P^s_{u_2} \cdot Q^s_{i_2})} = \frac{0.81}{0.81 + 0.66} = 0.55, \qquad E(R^✔_{u_2,i_2}) = \frac{\omega(P^✔_{u_2} \cdot Q^✔_{i_2})}{\sum_{s \in S} \omega(P^s_{u_2} \cdot Q^s_{i_2})} = \frac{0.66}{0.81 + 0.66} = 0.45;$$

then, the model selects the most likely outcome, $\hat{R}_{u_2,i_2} = ✖$, with a reliability of $\varrho_{u_2,i_2} = 0.55$.
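This computation can be checked numerically against the latent factors of Table 4 (assuming, as in the tables above, that the first sub-tables correspond to the ✔ score); a small NumPy verification:

```python
import numpy as np

def omega(x):
    return 1.0 / (1.0 + np.exp(-x))

# Latent vectors of user u2 and item i2, read off Table 4
P_like = np.array([0.79, 0.06, 0.67]);   Q_like = np.array([0.08, 0.42, 0.85])
P_dislike = np.array([0.94, 0.22, 0.52]); Q_dislike = np.array([0.95, 0.66, 0.77])

rho_like, rho_dislike = omega(P_like @ Q_like), omega(P_dislike @ Q_dislike)
print(round(rho_like, 2), round(rho_dislike, 2))                  # 0.66 0.81
total = rho_like + rho_dislike
print(round(rho_dislike / total, 2), round(rho_like / total, 2))  # 0.55 0.45
```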

3. Evaluation of Dirichlet Matrix Factorization

This section features a comprehensive description of the experiments conducted to assess the suggested model. The experimental setup, which defines the datasets, baselines and quality metrics used during the evaluation, is described in Section 3.1. Section 3.2 contains the experimental data as well as a comparison of the proposed method’s performance against the chosen baselines.
All experiments in this paper have been carried out using Collaborative Filtering for Java (CF4J) (https://github.com/ferortega/cf4j, accessed on 30 November 2021) [19]. The source code of all experiments analyzed in this work is publicly available on GitHub (https://github.com/KNODIS-Research-Group/DirMF, accessed on 30 November 2021).

3.1. Experimental Setup

The MovieLens [20], FilmTrust [21], MyAnimeList [22] and Netflix [23] datasets were used to conduct the experiments. These datasets were chosen to see how splitting the rating matrix into binary rating matrices with various discrete sets of possible scores affects the results. Both the MovieLens and Netflix datasets comprise ratings ranging from one to five stars, FilmTrust ratings vary from 0.5 to 4.0 in half-step increments, and the MyAnimeList dataset has a score range of 1 to 10. Furthermore, all of these experiments were conducted using the benchmark version of these datasets supplied with CF4J, to ensure reproducibility. Table 5 summarizes the basic features of these datasets.
As discussed in Section 1, all MF models in the literature can estimate a user's rating prediction for an item, but only a few are able to offer the reliability of both their predictions and recommendations. The baselines were chosen to provide a diverse representation of the known MF models. The selected baselines and their resulting outputs are:
  • Bernoulli Matrix Factorization (BeMF) [18], which provides both prediction and recommendation reliability;
  • PMF [10], which provides predictions but no reliability;
  • GMF [11], which provides predictions but no reliability;
  • NCF [11], which provides predictions but no reliability.
Several hyperparameters of both the selected baselines and our model must be tuned. We did so by executing a grid search optimization that minimizes the mean absolute prediction error, evaluating each model with varied values of these hyperparameters. The hyperparameters derived by this optimization method for each baseline and dataset are listed in Table 6.
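A hypothetical sketch of such a grid search, reusing the fit_dirmf and predict helpers sketched in Section 2 (all names are ours), could be:

```python
from itertools import product
import numpy as np

def grid_search(train, val, scores, factor_grid, gamma_grid, eta_grid, m=50):
    """Fit DirMF for every hyperparameter combination and keep the one
    that minimizes the mean absolute prediction error on a validation
    split (both matrices use np.nan for unknown ratings)."""
    best, best_mae = None, np.inf
    val_pairs = np.argwhere(~np.isnan(val))
    for K, gamma, eta in product(factor_grid, gamma_grid, eta_grid):
        P, Q = fit_dirmf(train, scores, K=K, gamma=gamma, eta=eta, m=m)
        errors = []
        for u, i in val_pairs:
            pred, _ = predict(P, Q, u, i, scores)  # no reliability filter here
            errors.append(abs(val[u, i] - pred))
        mae = float(np.mean(errors))
        if mae < best_mae:
            best, best_mae = {'factors': K, 'gamma': gamma, 'eta': eta}, mae
    return best, best_mae
```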
Additionally, we set the DirMF method to work with the logistic activation function $\omega(x) = (1 + e^{-x})^{-1}$. As shape parameters for the softmax function, we took $\beta_{u,i} = R_{u,i}$. Notice that this choice allows the system to sharpen the most relevant votes (those with high $R_{u,i}$) using a spikier softmax function, while for the less relevant votes (those with small $R_{u,i}$) the softmax function used is flatter.
To assess the quality of the predictions and recommendations provided by a CF-based RS, we define the Mean Absolute Error (MAE) (Equation (6)) as the mean absolute difference between the test ratings $R_{u,i}$ and their predictions $\hat{R}_{u,i}$, and the coverage (Equation (7)) as the proportion of test ratings that the CF model can predict ($\hat{R}_{u,i} \neq \bullet$) with respect to the total number of test ratings. Here, $R^{\text{test}}$ is the set of pairs $(u, i)$ of a user $u$ and an item $i$ in the test split of the dataset, and $\#R^{\text{test}}$ denotes its cardinality.

$$\text{MAE} = \frac{1}{\#R^{\text{test}}} \sum_{(u,i) \in R^{\text{test}}} |R_{u,i} - \hat{R}_{u,i}|. \tag{6}$$

$$\text{coverage} = \frac{\#\{(u,i) \in R^{\text{test}} \,|\, \hat{R}_{u,i} \neq \bullet\}}{\#R^{\text{test}}}. \tag{7}$$

Furthermore, two adapted quality measures emerge by fixing a number $n > 0$ of top recommendations to consider: precision is defined as the averaged ratio of successful recommendations included in the top $n$ recommendations for user $u$, $T^n_u$, with respect to the size of the recommendation list (Equation (8)); recall is the averaged proportion of successful recommendations included in the recommendation list of user $u$, $T^n_u$, with respect to the total number of test items that user $u$ likes (Equation (9)).

$$\text{precision} = \frac{1}{N} \sum_{u=1}^{N} \frac{\#\{i \in T^n_u \,|\, R_{u,i} \geq \theta\}}{\#T^n_u}. \tag{8}$$

$$\text{recall} = \frac{1}{N} \sum_{u=1}^{N} \frac{\#\{i \in T^n_u \,|\, R_{u,i} \geq \theta\}}{\#\{i \in R^{\text{test}}_u \,|\, R_{u,i} \geq \theta\}}. \tag{9}$$

Here, the variable $u$ runs over the users of the dataset, $N$ is the total number of users, $R^{\text{test}}_u$ is the collection of items rated by user $u$ in the test split, and $\theta$ is a threshold to discern whether a user likes an item ($R_{u,i} \geq \theta$) or not ($R_{u,i} < \theta$).
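These four measures translate directly into code; the following hedged sketch (reusing the predict helper from Section 2.2; function names are ours) mirrors Equations (6)–(9):

```python
import numpy as np

def mae_and_coverage(P, Q, R_test, scores, threshold=0.0):
    """MAE (Equation (6)) and coverage (Equation (7)) over the test
    pairs; predictions below the reliability threshold are not issued."""
    errors, issued, total = [], 0, 0
    for u, i in np.argwhere(~np.isnan(R_test)):
        total += 1
        pred, _ = predict(P, Q, u, i, scores, threshold)
        if pred is not None:
            issued += 1
            errors.append(abs(R_test[u, i] - pred))
    mae = float(np.mean(errors)) if errors else float('nan')
    return mae, issued / total

def precision_recall(T_n, liked):
    """Precision (Equation (8)) and recall (Equation (9)) for one user:
    T_n is the top-n recommendation list, liked the test items with
    R_{u,i} >= theta; averaging over users gives the reported values."""
    hits = len(set(T_n) & set(liked))
    precision = hits / len(T_n) if T_n else 0.0
    recall = hits / len(liked) if liked else 0.0
    return precision, recall
```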

3.2. Experimental Results

As mentioned above, we can tune the output of the DirMF model by filtering out the less reliable predictions: this decreases the coverage of the model, as some predictions are withheld because the model is not confident enough to issue them, but it increases the prediction accuracy at the same time. It is fair to expect that high-reliability predictions are more evident than low-reliability ones. For instance, if a user gave The Lord of the Rings: The Fellowship of the Ring and The Lord of the Rings: The Two Towers favorable ratings, the model will have high confidence in the user's positive interest in The Lord of the Rings: The Return of the King and will give this prediction a high reliability score. In contrast, the same algorithm will have less confidence in the user's interest in other fantasy films, such as The Princess Bride or Willow, and will give these predictions lower reliability scores.

3.2.1. MAE vs. Coverage

Figure 2 shows the effect of using a reliability threshold on the quality of the predictions. The plots were created by filtering out any prediction with reliability lower than the value on the x-axis. Notice that only those models that output prediction reliabilities can filter their forecasts; for this reason, PMF, GMF and NCF are represented in the figure as horizontal lines, since varying the threshold does not affect their predictions.
To be precise, the plots of Figure 2 represent the evolution of the MAE and coverage quality measures, as specified in Equations (6) and (7). In this way, we compared the prediction error to the model's predictive capacity. As can be seen, there is a consistent trend within the plots: as unreliable predictions are filtered out, both the coverage and the prediction error decrease. Furthermore, DirMF exhibits a more conservative behavior compared to BeMF, as shown by the sudden decrease in coverage that accompanies a similarly sudden decrease in MAE. This points out that the holistic interpretation of DirMF compared to BeMF allows it to extract subtle features of the data, which lowers the reliability it assigns to its predictions but also improves its ability to issue correct forecasts.

3.2.2. Recommendation Quality

To assess the recommendation quality, we calculated the precision (Equation (8)) and recall (Equation (9)) of the top ten recommendations ( n = 10 ) for each model. We set the threshold θ for determining which items attract a user based on his/her test ratings to θ = 4 for MovieLens and Netflix, θ = 3.5 for FilmTrust, and θ = 7 for MyAnimeList.
To create the recommendation lists predicted by each method, we have chosen the top 10 test items with the highest prediction $\hat{R}_{u,i}$, omitting those with a prediction lower than $\theta$. Moreover, for those methods that also provide a reliability metric, we only considered the predictions with a reliability greater than or equal to the set reliability threshold $\vartheta$ (i.e., $\varrho_{u,i} \geq \vartheta$). Notice that the roles of $\vartheta$ and $\theta$ are distinct: the former is a threshold on the desired reliability, whereas the latter is a threshold on the predicted score.
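A sketch of this list-construction procedure, with both thresholds made explicit (helper names are ours, reusing the predict sketch from Section 2.2), is:

```python
def top_n(P, Q, u, test_items, scores, theta, vartheta, n=10):
    """Build user u's recommendation list T_u^n as in the experiments:
    rank test items by predicted score, keeping only predictions with
    predicted score >= theta and reliability >= vartheta."""
    candidates = []
    for i in test_items:
        pred, rel = predict(P, Q, u, i, scores)
        if pred >= theta and rel >= vartheta:
            candidates.append((pred, rel, i))
    candidates.sort(reverse=True)        # best predictions first
    return [i for _, _, i in candidates[:n]]
```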
These findings are presented in Figure 3. Again, the x-axis represents the reliability threshold $\vartheta$ below which predictions are filtered out. As the threshold increases, for those methods that support reliability (DirMF and BeMF), the recall value falls while the precision value rises, as shown in that figure. Both our new DirMF approach and our previous BeMF model achieve better recall than the other baselines when the required prediction reliability is very low (between 0.4 and 0.6, depending on the dataset). Compared to BeMF, the recall of DirMF decreases earlier with respect to the reliability of the predictions, reinforcing the idea that the DirMF model is more conservative.
With respect to precision, it can be observed that DirMF and BeMF manage to improve the results obtained by the baselines, although each one achieves this at different reliability levels, indicating, once again, the more conservative approach of DirMF. In any case, DirMF manages to outperform not only BeMF, but also all the other baselines in absolute terms of precision.

3.2.3. Time and Space Complexity

We would like to finish this section with some words on the time and space complexity of the algorithms used. As computed in Section 2.3, DirMF has a time complexity of $O(NMK)$ and a space complexity of $O((N+M)K)$. These values are similar to those of PMF and BeMF, since both are also based on a matrix factorization. Additionally, these bounds are smaller than the time and space complexity of the neural network-based models GMF and NCF. Indeed, these methods, apart from computing a matrix factorization, also need to compute the weights of the inner product (GMF) or even those of a whole neural network (NCF) used to combine the hidden factors.
In this way, DirMF is as efficient in terms of time and space as the existing MF methods, and more efficient than those that apply deep learning techniques.

4. Conclusions

In this study, we presented the DirMF model, an MF-based CF method that yields not just predictions for items that have not been evaluated by users, but also the reliability of these predictions. This is accomplished by approaching the recommendation process as a classification problem rather than a regression one. The DirMF model's output, a pair comprising both the prediction and its reliability, allows us to calibrate the model to achieve the right balance between the quality and quantity of the predictions (i.e., to reduce the prediction error by limiting the model's coverage).
DirMF builds on the progress made with our previous model, BeMF [18], where the use of other probability distributions as underlying assumptions for the MF method was identified as future work worth pursuing. This is exactly what has been done, using a Dirichlet probability distribution as the underlying assumption instead of Bernoulli random variables, which assume independence between the ratings.
Experimental results show that DirMF and BeMF achieve better recall results than the baselines when the prediction reliability is very low (between 0.4 and 0.6, depending on the dataset), although the recall of the former decreases more quickly with respect to the reliability of the predictions. Regarding precision, again, our model DirMF achieves better values than the baselines, improving on BeMF's results on some datasets. According to these results, we can conclude that our model DirMF is more conservative with its predictions than BeMF. This means that DirMF tends to be more prudent when issuing predictions, but those it emits have a much larger chance of being accurate. In this manner, the quality of the DirMF predictions with large thresholds is better than for BeMF, at the cost of issuing fewer predictions. This behavior is desirable in scenarios in which failure is highly penalized, such as customers with a small chance of re-entering the system, medical predictions, or critical industrial forecasting.
It is worth mentioning that BeMF and DirMF are two methods that, by design, incorporate reliability at the core of the training process of the matrix factorization algorithm. For this reason, BeMF and DirMF are able to outperform other methods, which do not intrinsically treat reliability, in terms of prediction accuracy. The probabilistic assumption that rating is a process that must be modeled as several binary classification problems (as done by BeMF and DirMF), instead of a single regression problem (as done by PMF, GMF and NCF), seems to be much more accurate in most scenarios, leading to better predictions. This trend becomes even more prominent when we set a reliability threshold $\vartheta$: with this threshold, BeMF and DirMF are able to filter out non-reliable predictions, which are those that are more likely to be misclassified, whereas PMF, GMF and NCF cannot make use of this capability, since they do not implement a native management of reliability.
As a future work, we suggest examining the quality of our model in terms other than accuracy, that is, novelty, diversity and discovery. Similarly, we advise testing the model’s stability against shilling attacks, which are used to discredit or promote some items over the rest. Furthermore, our model might be expanded to include both social and content information to increase the quality of the predictions.

Author Contributions

Conceptualization, F.O.; methodology, F.O. and Á.G.-P.; software, Á.G.; validation, R.L.-C.; formal analysis, Á.G.-P.; investigation, Á.G.; writing—original draft preparation, R.L.-C.; writing—review and editing, F.O. and Á.G.-P.; visualization, R.L.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been co-funded by the Ministerio de Ciencia e Innovación of Spain and European Regional Development Fund (FEDER) under grants PID2019-106493RB-I00 (DL-CEMG) and the Comunidad de Madrid under Convenio Plurianual with the Universidad Politécnica de Madrid in the actuation line of Programa de Excelencia para el Profesorado Universitario.

Data Availability Statement

All experiments in this paper have been carried out using CF4J (https://github.com/ferortega/cf4j, accessed on 30 November 2021) [19]. The source code of all experiments analyzed in this work is publicly available on GitHub (https://github.com/KNODIS-Research-Group/DirMF, accessed on 30 November 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bobadilla, J.; Ortega, F.; Hernando, A.; Gutiérrez, A. Recommender systems survey. Knowl.-Based Syst. 2013, 46, 109–132.
  2. Kaminskas, M.; Bridge, D. Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems. ACM Trans. Interact. Intell. Syst. 2016, 7, 1–42.
  3. Bobadilla, J.; Gutiérrez, A.; Ortega, F.; Zhu, B. Reliability quality measures for recommender systems. Inf. Sci. 2018, 442, 145–157.
  4. Villegas, N.M.; Sánchez, C.; Díaz-Cely, J.; Tamura, G. Characterizing context-aware recommender systems: A systematic literature review. Knowl.-Based Syst. 2018, 140, 173–200.
  5. Zamani, H.; Shakery, A. A language model-based framework for multi-publisher content-based recommender systems. Inf. Retr. J. 2018, 21, 369–409.
  6. Rezvanian, A.; Moradabadi, B.; Ghavipour, M.; Daliri Khomami, M.M.; Meybodi, M.R. Social Recommender Systems. In Learning Automata Approach for Social Networks; Springer International Publishing: Cham, Switzerland, 2019; pp. 281–313.
  7. Al-Shamri, M.Y.H. User profiling approaches for demographic recommender systems. Knowl.-Based Syst. 2016, 100, 175–187.
  8. Sohail, S.S.; Siddiqui, J.; Ali, R. Classifications of Recommender Systems: A review. J. Eng. Sci. Technol. Rev. 2017, 10, 132–153.
  9. Ignat'ev, V.U.; Lemtyuzhnikova, D.V.; Rul', D.I.; Ryabov, I.L. Constructing a Hybrid Recommender System. J. Comput. Syst. Sci. Int. 2018, 57, 921–926.
  10. Mnih, A.; Salakhutdinov, R.R. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems 20 (NIPS 2007); 2008; pp. 1257–1264. Available online: https://papers.nips.cc/paper/2007/hash/d7322ed717dedf1eb4e6e52a37ea7bcd-Abstract.html (accessed on 30 November 2021).
  11. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.S. Neural Collaborative Filtering. In Proceedings of the 26th International Conference on World Wide Web (WWW '17), Perth, Australia, 3–7 April 2017; pp. 173–182.
  12. Margaris, D.; Vassilakis, C.; Spiliotopoulos, D. What makes a review a reliable rating in recommender systems? Inf. Process. Manag. 2020, 57, 102304.
  13. Mesas, R.M.; Bellogín, A. Exploiting recommendation confidence in decision-aware recommender systems. J. Intell. Inf. Syst. 2020, 54, 45–78.
  14. Fan, S.; Yu, H.; Huang, H. An improved collaborative filtering recommendation algorithm based on reliability. In Proceedings of the 2018 IEEE 3rd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), Chengdu, China, 20–22 April 2018; pp. 45–51.
  15. Ahmadian, S.; Afsharchi, M.; Meghdadi, M. A novel approach based on multi-view reliability measures to alleviate data sparsity in recommender systems. Multimed. Tools Appl. 2019, 78, 17763–17798.
  16. Liang, Y.; Huang, C.; Bao, X.; Xu, K. Sequential dynamic event recommendation in event-based social networks: An upper confidence bound approach. Inf. Sci. 2021, 542, 1–23.
  17. Xu, G.; Tang, Z.; Ma, C.; Liu, Y.; Daneshmand, M. A Collaborative Filtering Recommendation Algorithm Based on User Confidence and Time Context. J. Electr. Comput. Eng. 2019, 2019.
  18. Ortega, F.; Lara-Cabrera, R.; González-Prieto, A.; Bobadilla, J. Providing reliability in recommender systems through Bernoulli Matrix Factorization. Inf. Sci. 2021, 553, 110–128.
  19. Ortega, F.; Mayor, J.; López-Fernández, D.; Lara-Cabrera, R. CF4J 2.0: Adapting Collaborative Filtering for Java to new challenges of collaborative filtering based recommender systems. Knowl.-Based Syst. 2021, 215, 106629.
  20. Harper, F.M.; Konstan, J.A. The MovieLens datasets: History and context. ACM Trans. Interact. Intell. Syst. 2015, 5, 1–19.
  21. Guo, G.; Zhang, J.; Yorke-Smith, N. A Novel Bayesian Similarity Measure for Recommender Systems. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI), Beijing, China, 3–9 August 2013; pp. 2619–2625.
  22. MyAnimeList.net. MyAnimeList Dataset. 2020. Available online: https://www.kaggle.com/azathoth42/myanimelist (accessed on 14 October 2021).
  23. Netflix Inc. Netflix Prize Data. 2009. Available online: https://www.kaggle.com/netflix-inc/netflix-prize-data (accessed on 14 October 2021).
Figure 1. Plate diagram of the DirMF model.
Figure 2. Quality of the predictions measured by MAE and coverage. Predictions with lower reliability than indicated on the x-axis are filtered out.
Figure 3. Quality of the recommendations measured by precision and recall. Predictions with lower reliability than indicated on the x-axis are filtered out.
Table 1. Rating matrix used in the running example. Rating values are: like ✔ and dislike ✖. [The ✔/✖ entries of the matrix did not survive extraction; each user rated three of the five items, and the softmax-normalized values of these ratings appear in Table 2.]
Table 2. Softmax-normalized ratings. Note that only known ratings are normalized: by Equation (2) with D = 2, each known rating yields approximately 0.73 for the assigned label and 0.27 for the other one. The item-column alignment was lost in extraction, so each user's three known values are listed in item order.

R̃^✔_{u,i}:  u_1: 0.27, 0.73, 0.73 | u_2: 0.73, 0.73, 0.27 | u_3: 0.27, 0.73, 0.73
R̃^✖_{u,i}:  u_1: 0.73, 0.27, 0.27 | u_2: 0.27, 0.27, 0.73 | u_3: 0.73, 0.27, 0.27
Table 3. Random initialization of the K = 3 latent factors for both users (P) and items (Q).

P^✔_{u,k} | k_1 | k_2 | k_3
u_1 | 0.06 | 0.04 | 0.58
u_2 | 0.78 | 0.04 | 0.66
u_3 | 0.40 | 0.77 | 0.36

P^✖_{u,k} | k_1 | k_2 | k_3
u_1 | 0.73 | 0.85 | 0.88
u_2 | 0.95 | 0.23 | 0.53
u_3 | 0.90 | 0.87 | 0.75

Q^✔_{i,k} | k_1 | k_2 | k_3
i_1 | 0.89 | 0.84 | 0.54
i_2 | 0.06 | 0.42 | 0.84
i_3 | 0.84 | 0.35 | 0.91
i_4 | 0.83 | 0.93 | 0.19
i_5 | 0.08 | 0.31 | 0.14

Q^✖_{i,k} | k_1 | k_2 | k_3
i_1 | 0.14 | 0.95 | 0.46
i_2 | 0.96 | 0.66 | 0.77
i_3 | 0.13 | 0.75 | 0.25
i_4 | 0.26 | 0.33 | 0.51
i_5 | 0.43 | 0.69 | 0.84
Table 4. Latent factors for both users (P) and items (Q) after one iteration.

P^✔_{u,k} | k_1 | k_2 | k_3
u_1 | 0.10 | 0.08 | 0.59
u_2 | 0.79 | 0.06 | 0.67
u_3 | 0.41 | 0.78 | 0.37

P^✖_{u,k} | k_1 | k_2 | k_3
u_1 | 0.72 | 0.84 | 0.87
u_2 | 0.94 | 0.22 | 0.52
u_3 | 0.89 | 0.85 | 0.73

Q^✔_{i,k} | k_1 | k_2 | k_3
i_1 | 0.90 | 0.85 | 0.56
i_2 | 0.08 | 0.42 | 0.85
i_3 | 0.84 | 0.35 | 0.91
i_4 | 0.83 | 0.93 | 0.21
i_5 | 0.09 | 0.33 | 0.15

Q^✖_{i,k} | k_1 | k_2 | k_3
i_1 | 0.12 | 0.93 | 0.44
i_2 | 0.95 | 0.66 | 0.77
i_3 | 0.13 | 0.75 | 0.25
i_4 | 0.26 | 0.32 | 0.51
i_5 | 0.42 | 0.68 | 0.83
Table 5. Datasets used in the experiments.

Dataset | No. Users | No. Items | No. Ratings | No. Test Ratings | Rating Scale
MovieLens1M | 6040 | 3706 | 911,031 | 89,178 | 1 to 5
FilmTrust | 1508 | 2071 | 32,675 | 2819 | 0.5 to 4.0
MyAnimeList | 69,600 | 9927 | 5,788,207 | 549,027 | 1 to 10
Netflix Prize | 480,189 | 17,770 | 99,945,049 | 535,458 | 1 to 5
Table 6. Baseline hyperparameters obtained by grid search optimization.

Method | MovieLens | FilmTrust | MyAnimeList | Netflix
PMF | factors = 8, γ = 0.01, λ = 0.045 | factors = 4, γ = 0.015, λ = 0.1 | factors = 10, γ = 0.005, λ = 0.085 | factors = 8, γ = 0.01, λ = 0.06
NCF | factors = 5, epochs = 10 | factors = 5, epochs = 8 | factors = 7, epochs = 15 | factors = 6, epochs = 4
GMF | factors = 5, epochs = 10 | factors = 5, epochs = 15 | factors = 7, epochs = 20 | factors = 5, epochs = 4
BeMF | factors = 2, γ = 0.006, λ = 0.16, m = 100 | factors = 2, γ = 0.02, λ = 0.06, m = 75 | factors = 4, γ = 0.004, λ = 0.1, m = 100 | factors = 6, γ = 0.0006, λ = 0.02, m = 50
DirMF | factors = 6, γ = 0.01, η = 0.022, m = 50 | factors = 8, γ = 0.015, η = 0.09, m = 100 | factors = 10, γ = 0.02, η = 0.01, m = 100 | factors = 10, γ = 0.02, η = 0.02, m = 50
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
