1. Introduction
Matrix factorization is a fundamental technique in linear algebra and data science, widely used for dimensionality reduction, data compression, and feature extraction. Recent research has expanded its use to various fields, including recommendation systems (e.g., collaborative filtering), bioinformatics, and signal processing. Researchers are actively pursuing new types of factorizations. In an effort to discover new factorizations and provide a unifying structure, ref. [1] lists 53 systematically derived matrix factorizations arising from the generalized Cartan decomposition. These results apply to invertible matrices and generalizations of orthogonal matrices in classical Lie groups.
Nonnegative matrix factorization (NMF) is particularly useful when dealing with non-negative data, such as in image processing and text mining. Ref. [2] surveys existing NMF methods and their variants, analyzing their properties and applications. Ref. [3] presents a comprehensive survey of NMF, focusing on its applications in feature extraction and feature selection. Ref. [4] summarizes theoretical research on NMF from 2008 to 2013, categorizing it into four types and analyzing the principles, basic models, properties, algorithms, and their extensions and generalizations.
There are many advances aimed at developing more efficient algorithms tailored to specific applications. One prominent application of matrix factorization is in recommendation systems, particularly for addressing the cold-start problem. In recommender systems, matrix factorization models decompose user–item, user–user, or item–item interaction matrices into lower-dimensional latent spaces, which can then be used to generate recommendations.
Ref. [5] summarizes the literature on recommender systems and proposes a multifaceted collaborative filtering model that integrates both neighborhood and latent factor approaches. Ref. [6] develops a matrix factorization model to generate recommendations for users in a social network. Ref. [7] proposes a matrix factorization model for cross-domain recommender systems by extracting items from three different domains and finding item similarities between these domains to improve item ranking accuracy. Ref. [8] develops a matrix factorization model that combines user–item rating matrices and item side-information matrices to form soft clusters of items for generating recommendations. Ref. [9] proposes a matrix factorization that learns latent representations of both users and items using gradient-boosted trees. Ref. [10] provides a systematic literature review on approaches and algorithms to mitigate cold-start problems in recommender systems.
Matrix factorization has also been used in natural language processing (NLP) in recent years. Word2Vec by [11,12] marks a milestone in NLP history. Although no explicit matrices appear in that work, Word2Vec models the co-occurrence of words and phrases using latent vector representations learned by a shallow neural network. Another well-known example of matrix factorization in NLP is the global vectors for word representation (GloVe) model [13], which models the word co-occurrence count matrix using cosine similarity of latent vector representations fitted via alternating least squares. However, practical data may be heavily skewed, making the sum or mean of squared errors unsuitable as an objective function. To address this, ref. [13] uses weighted least squares to reduce the impact of skewness in the data. Manually designing weights that work well for real data is difficult. In GloVe training, the algorithm runs for a fixed number of iterations without an additional convergence check. Here, we instead use the likelihood principle to model the skewed data matrix.
In this paper, our goal is to model non-negative continuous sparse matrix data from skewed distributions with an abundance of zeros but lacking covariate information. We are particularly interested in zero-inflated Gamma observations, often referred to as semi-continuous data due to the presence of many zeros and the highly skewed distribution of the positive observations. Examples of such data include insurance claim data, household expenditure data, and precipitation data (see [14]). Ref. [15] studies and simulates data based on actual deconvolved calcium imaging data, employing a zero-inflated Gamma model to accommodate spikes of observed inactivity and traces of calcium signals in neural populations. Ref. [16] examines the amount of leisure time spent on physical activity together with explanatory variables such as gender, age, education level, and annual per capita family income, finding that the zero-inflated Gamma model is preferred over the multinomial model. Beyond the Gamma distribution, the Weibull distribution can also model skewed non-negative data. Unfortunately, when the shape parameter is unknown, the Weibull distribution is not a member of the exponential family. Furthermore, the alternating update is not suitable because its sufficient statistic is not linear in the observed data, so a different estimation procedure would need to be developed if the Weibull distribution were used. The Gamma distribution not only is a member of the exponential family, but its sufficient statistic is also linear in the observed data. This provides solid theoretical ground for the alternating update to find a solution. Section 5 explains this in more detail.
In all the aforementioned examples, there are explicitly observed covariates or factors. Furthermore, the two sets of model parameters are distinct from each other, allowing model estimation by fitting two separate regressions: a binomial regression and a Gamma regression. Unfortunately, such models cannot be applied to user–item or co-occurrence count matrix data arising from many practical applications. Examples include user–item or item–item co-occurrence data from online shopping platforms and co-occurring word–word pairs in sequences of text. One reason is the absence of observed covariates. Additionally, the mechanisms that lead to the observed entries in the data matrix for the binomial and Gamma parts may be of the same nature, making it more appropriate to use shared parameters in both parts of the model. Therefore, in this paper, we consider shared-parameter modeling of zero-inflated Gamma data using alternating regression. We consider two different link functions for the Gamma part: the canonical link and the log link.
We believe that our study is the first to use a shared-parameter likelihood for a zero-inflated skewed distribution to conduct matrix factorization. Alternating least squares (ALS) shares a similar spirit with our SA-ZIG in that parameters are updated alternately. Most matrix factorization methods in the literature that rely on ALS add an assumption without acknowledging it: for ALS to be valid, the data need to have constant variance. This is because the objective function behind ALS is the mean squared error, which gives equal weight to all observations regardless of how large or small their variances are. If the variances of different observations are drastically different, it is not appropriate to treat the residuals equally. Real data in high-dimensional sparse co-occurrence matrices are often very skewed with many zeros and do not have constant variance. The contribution of this paper is the SA-ZIG model, which models the positive co-occurrence data with a Gamma distribution and attributes the many zeros in the data to a Bernoulli distribution. Shared parameters are used in both the Bernoulli and Gamma parts of the model. The latent row and column vector representations in the matrix decomposition can be thought of as missing values that follow some distributions depending on a smaller set of parameters. Estimating the vector representations for the rows relies on the joint likelihood of the observed matrix data and the missing vector representations for the columns. Due to this missingness, the estimation of the row vector representations is carried out using the conditional likelihood of the observed data given the column vector representations, and vice versa. This alternating update ultimately gives the maximum likelihood estimate if the sufficient statistic for the column vector representations is linear in the observed data and the row vector representations. Both ALS and our SA-ZIG rely on this assumption to be valid.
The remainder of this paper is structured as follows. Section 2 outlines the fundamental framework of the ZIG model. Section 3 focuses on parameter estimation within the SA-ZIG model using the canonical link. Section 4 addresses parameter estimation for the scenario involving the log link in the Gamma regression component. Convergence analysis is presented in Section 5. Section 6 details the SA-ZIG algorithms incorporating learning rate adjustments. Section 7 presents the experimental studies. Finally, Section 8 concludes the paper by summarizing the research findings, contributions, and limitations.
2. Shared Parameter Alternating Zero-Inflated Gamma Regression
Suppose that the distribution of each observed data entry depends on an unknown Bernoulli random variable and a Gamma random variable. The mechanism behind the observed data may be the result of the combined contribution of some covariates or factors, but none of them is observed. We can write the probability mass function (pmf) of the Bernoulli random variable and the probability density function (pdf) of the Gamma random variable, and the product of the two functions gives the joint distribution of the Bernoulli and Gamma random variables. By summing over the two possible values of the Bernoulli random variable, we obtain the pdf of the zero-inflated Gamma (ZIG) distribution.
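To make the two-part structure concrete, the following minimal sketch evaluates the log-density of a single ZIG observation. It is an illustration rather than the paper's code, and the symbols (p for the probability of a positive observation, mu for the Gamma mean, nu for the Gamma shape) are our own notation.

```python
import math

def zig_logpdf(y, p, mu, nu):
    """Log-density of one zero-inflated Gamma observation.

    y  : observed value (zero or positive)
    p  : probability that the observation is positive (Bernoulli part)
    mu : mean of the Gamma part (mu > 0)
    nu : shape of the Gamma part (nu > 0), so the rate is nu / mu
    """
    if y == 0.0:
        # The point mass at zero comes from the Bernoulli part alone.
        return math.log(1.0 - p)
    # Positive observations: Bernoulli "success" times the Gamma density.
    log_gamma_pdf = (nu * math.log(nu / mu)
                     + (nu - 1.0) * math.log(y)
                     - nu * y / mu
                     - math.lgamma(nu))
    return math.log(p) + log_gamma_pdf

# A zero and a positive value evaluated under the same parameters.
print(zig_logpdf(0.0, p=0.3, mu=2.0, nu=1.5))
print(zig_logpdf(1.7, p=0.3, mu=2.0, nu=1.5))
```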
For the Bernoulli part, we use the logit link function to connect the probability of a positive observation to the effects of the unknown covariates or factors. The linear predictor is the dot product of two unknown d-dimensional vectors, one associated with row i and one with column j, plus two unknown scalar intercepts, one for the row and one for the column. For the Gamma observations, when the canonical link is used, the mean of the response variable is connected to the unknown parameters through the same dot product plus two scalar intercepts. With the canonical link, the likelihood function and the score equations are both functions of the sufficient statistic. As the sufficient statistic carries all the information about the unknown parameters, we can restrict our attention to the sufficient statistic without losing any information. This link function, however, can cause difficulties in the estimation process. The natural parameter space of the Gamma distribution under the canonical link is the negative half-line, and the right-hand side of Equation (3) may yield an estimate that falls outside of this parameter space.
Another link function that is popular for the Gamma distribution is the log link, which gives a log-linear model (Equation (4)). The log link eliminates the non-negativity problem, and the model parameters have a better interpretation than under the canonical link: for each unit increment in a component of the linear predictor, the mean increases multiplicatively by the exponential of the corresponding coefficient. Even though the parameters enjoy a better interpretation, estimation can still become a problem in the sense that the dot product on the right-hand side of (4) may become large, leading to an infinite fitted mean, and hence the estimation algorithm diverges.
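The sketch below contrasts the two Gamma links on a single dot product. The parameterization (shared vectors u_i and v_j with Gamma-part intercepts c_i and d_j) is our illustrative notation rather than the paper's exact symbols; the point is that the canonical link can land outside the natural parameter space, whereas the log link always yields a positive mean but may overflow when the linear predictor grows.

```python
import numpy as np

def gamma_mean_canonical(u_i, v_j, c_i, d_j):
    """Canonical (negative inverse) link: -1/mu equals the linear predictor.

    The predictor must be negative; otherwise the implied mean is invalid.
    """
    eta = float(np.dot(u_i, v_j) + c_i + d_j)
    if eta >= 0.0:
        return None  # outside the natural parameter space
    return -1.0 / eta

def gamma_mean_log(u_i, v_j, c_i, d_j):
    """Log link: log(mu) equals the linear predictor, so mu = exp(eta) > 0."""
    eta = float(np.dot(u_i, v_j) + c_i + d_j)
    return np.exp(eta)  # can overflow if eta becomes large during estimation

rng = np.random.default_rng(0)
u_i, v_j = rng.normal(size=5), rng.normal(size=5)
print(gamma_mean_canonical(u_i, v_j, c_i=-2.0, d_j=-1.0))
print(gamma_mean_log(u_i, v_j, c_i=0.1, d_j=-0.2))
```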
In both links, the common dot product reflects the fact that cosine similarity is the key driving force behind the observed data in the table. For example, in natural language processing (NLP), each latent vector represents a word in a dense form and contains both linguistic information and word usage information. The observations are distance-weighted co-occurrence counts that are linked to the relevance between words, and this relevance can be captured with cosine similarity. In the example of an item–item co-occurrence matrix, the latent vectors represent the hidden product information of the items, including characteristics, properties, functionality, popularity, users' ratings, and so on. The dot product again tells how relevant the two items are to each other. Sometimes, the co-occurrence matrix is derived from a time series, such as a customer's sequence of watched movies in time order. In this case, the co-occurrence count tells how often the two items were considered in a similar time frame, because the counts are weighted based on the positional separation of the two items in the sequence. The relevance of these items summarized by the cosine similarity can reflect how alike the two products are in their properties, functionality, and so on. Therefore, the dot product serves as a major contributor to the observed weighted count.
In both link functions, the intercepts in the Gamma part are allowed to differ from those in the logistic model for flexibility. In some classical statistical models, such as in [17], shared parameter modeling is carried out by assuming that one set of model parameters is proportional to the other. Such proportionality assumptions are meaningful when covariates are observed. In our case, both the row and column vector representations are unknown, and different pairs of vectors can give the same dot product. Adding an additional proportionality parameter only makes the model even more non-identifiable. This is why we believe it is better to put the flexibility in the intercepts instead of using proportionality parameters.
Because shared parameters are used, the parameters of the Gamma regression part and the logistic regression part should be estimated simultaneously. There are past studies that use separate sets of parameters. Ref. [14] conducted hypothesis testing to compare two groups via mixtures of zero-inflated Gamma and zero-inflated log-normal models; however, the parameters of the two model components were modeled separately. In a purely applied setting, ref. [17] considered a normal and log-Gamma mixture model for HIV RNA data using shared parameters, assuming that the two parts of the model are proportional to each other; they proposed such an application and simulation, but no inference was given. Shared parameter modeling has also been employed in the literature to achieve model parsimony. For example, ref. [18] used shared parameters for both a random-effects linear model and a probit-modeled censoring process. Ref. [19] used a similar approach for simultaneous modeling of a mean structure and informative censoring. Ref. [20] used shared parameters to model the intensity of a Poisson process and a binary measure of severity.
We collect the row-related parameters into one set and the column-related parameters into another, and we estimate the two sets alternately. In this estimation scheme, the likelihood function is treated as a function of one set at a time, never of both concurrently. When estimating the row-related parameters, the likelihood is treated as a function of those parameters while the column-related parameters stay fixed; conversely, when estimating the column-related parameters, the likelihood is treated as a function of those parameters while the row-related parameters are held fixed. This resembles block coordinate descent, in which one block of parameters is updated while the remaining block is held at fixed values. For clarity, we use separate notations for these two occasions: the two objective functions are both equal to the log likelihood, but one is treated as a function of the row-related parameters and the other as a function of the column-related parameters.
Note that these two objective functions simply represent the log likelihood of a row or of a column of the data matrix: one can be thought of as the log likelihood of the row-related parameters for the data in the ith row of the co-occurrence matrix, and the other as the log likelihood of the column-related parameters for the data in the jth column. Each log likelihood function can be split into two components, one corresponding to the Bernoulli part and the other to the Gamma part. Assuming that the observations are independent of each other conditional on the unobserved row and column vector representations, the overall log likelihood from all observations is the sum of these contributions. The alternating regression deals with the two log likelihood functions separately. The Bernoulli components form the traditional log likelihood of a binary logistic regression, while the Gamma components form the Gamma log likelihood restricted to the positive observations. If the two parts did not share common parameters, the estimation could be performed separately; however, they share the vector representations and must therefore be handled jointly. In the next two sections, we describe the parameter estimation.
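As a concrete illustration of this split, the sketch below computes the log likelihood contribution of one row given the column parameters, with the Bernoulli part summed over all entries and the Gamma part over the positive entries only. The shared-vector parameterization with separate intercepts, the log link, and a known shape parameter are illustrative assumptions for this sketch, not a verbatim transcription of the paper's formulas.

```python
import numpy as np
from math import lgamma

def row_loglik(y_row, u_i, a_i, c_i, V, b_col, c_col, nu):
    """Log likelihood of row i given the column parameters (held fixed).

    y_row        : observed entries y_ij for j = 1..m (zeros and positives)
    u_i          : d-dimensional row vector; a_i, c_i : row intercepts
    V            : (m, d) matrix of column vectors
    b_col, c_col : length-m column intercepts (logistic and Gamma parts)
    nu           : Gamma shape parameter, treated as known here
    """
    dots = V @ u_i                          # u_i . v_j for every column j
    eta_bern = dots + a_i + b_col           # logistic linear predictor
    eta_gam = dots + c_i + c_col            # Gamma linear predictor (log link)

    p = 1.0 / (1.0 + np.exp(-eta_bern))     # P(y_ij > 0)
    pos = y_row > 0

    # Bernoulli part: every entry of the row contributes.
    ll_bern = np.sum(np.where(pos, np.log(p), np.log1p(-p)))

    # Gamma part: only the positive entries contribute.
    mu = np.exp(eta_gam[pos])
    y = y_row[pos]
    ll_gam = np.sum(nu * np.log(nu / mu) + (nu - 1.0) * np.log(y)
                    - nu * y / mu) - lgamma(nu) * y.size
    return ll_bern + ll_gam
```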
3. ZIG Model with Canonical Links
In this section, we consider the case with canonical links, i.e., the Bernoulli part uses the logit function and the Gamma part uses the negative inverse link. Using canonical links with generalized linear models has the benefit that the score equations are functions of the sufficient statistics. Here, we consider parameter estimation under the canonical links. The negative inverse link makes the model parameters difficult to interpret and also imposes a restriction on the range of the linear predictor, which does not match the positive support of the Gamma distribution. More details can be seen as we introduce the model.
Recall that the log likelihood and the link function for the logistic part of the ith row of data involve the row-related parameters as unknowns while the column-related parameters are treated as fixed. The log likelihood and the negative inverse link function for the Gamma part of the ith row of data are defined analogously, and the log likelihood for the ith row of data is the sum of the two parts.
To obtain the partial derivatives, we write the Bernoulli log likelihood in terms of the inverse logit function and differentiate. The first-order partial derivatives, the negative second-order partial derivatives, and their expectations (which coincide for the canonical link) follow directly. We then consider the second component of the log likelihood, the Gamma part of the ith row of data, and derive its first-order partial derivatives, second-order partial derivatives, and their negative expectations in the same way.
All of the aforementioned formulae work with the ith row of data. When we combine the log likelihood from different rows of data, the other rows do not contribute to the partial derivatives with respect to the parameters of row i. As a result, the estimation of the row-related parameters does not need to be performed simultaneously across rows. Instead, we can cycle through the rows and estimate their parameters one by one iteratively. After one block of parameters is updated, its estimated value is used when updating the other block.
The first part of the alternating regression has an updating equation based on the Fisher scoring algorithm (Equation (19)). For each row i and outer iteration t, this update requires the current parameter values together with the score equations and information matrix at the tth iteration. A single update here does not take full advantage of the data, because retrieving a row of data takes a significant amount of time when the dimension of the data matrix is huge. Therefore, for each row of data retrieved, it is better to update the row parameters several times. Specifically, the row parameters are updated repeatedly in an inner loop of E epochs, and the updated values are used to recompute the score equations and information matrix, which are then used for the next epoch's update (Equation (20)). After all epochs are completed, the row parameters take the value obtained at the end of the E epochs. This inner loop of updates makes good use of the already loaded data to refine the estimate of the row parameters, so that the final estimate is closer to its MLE given the current values of the column parameters. Note that the parameters updated in each epoch from (20) change the score equations and Fisher information matrix, which in turn yield a better estimate. Such multiple rounds of updates in (20) reduce the variation of the update when we go through many outer iterations based on Equation (19), and they effectively reduce the number of times the data have to be retrieved. The formulae involved in the updating equations are given below.
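Generically, the inner loop can be sketched as follows: for one retrieved row, the Fisher scoring step (add the information-weighted score to the current estimate) is repeated for a small number of epochs, with the score and expected information recomputed from the refreshed estimate each time. The function names are placeholders for the formulae above, and the fixed column parameters are assumed to be baked into score_fn and info_fn.

```python
import numpy as np

def fisher_scoring_epochs(theta, score_fn, info_fn, n_epoch=20):
    """Refine the row parameters theta using one already-loaded row of data.

    score_fn(theta) -> gradient of the row log likelihood
    info_fn(theta)  -> expected (Fisher) information matrix
    Both are evaluated with the column parameters held fixed.
    """
    for _ in range(n_epoch):
        s = score_fn(theta)
        info = info_fn(theta)
        # Solve info * step = s instead of forming an explicit inverse.
        step = np.linalg.solve(info, s)
        theta = theta + step
    return theta
```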
Here, the partial derivatives are those given in Formulae (8) and (12), evaluated at the current parameter values, and the expectations are based on Formulae (9), (10), and (13)–(15), evaluated at the current outer-iteration values when the update is in the outer loop and at the most recently updated values when it is in the inner loop of updates.
This concludes the first part of the alternating ZIG regression: it gives an iterative update of the row-related parameters while the column-related parameters are fixed. After we finish updating all rows, we move on to treating the row-related parameters as fixed and estimating the column-related parameters.
We now consider the partial derivatives with respect to the column-related parameters while holding the row-related parameters fixed. The derivations are similar to those above, but we list them for clarity: the first-order partial derivatives, the negative second derivatives and their expectations for the Bernoulli component, and the corresponding quantities for the Gamma component follow the same pattern.
Therefore, the other side of the updating equation for the alternating ZIG regression, based on the Fisher scoring algorithm, updates the column-related parameters. Again, for each column j and iteration number t, this equation is iterated for a certain number of epochs to obtain a refined estimate based on the current values of the row-related parameters without having to reload the data. The quantities in the updating equation mirror those of the row update: the partial derivatives are evaluated at the current parameter values, and the expectations use the outer-iteration values when the update is in the outer loop and the most recently updated values when it is in the inner loop of updates.
All of the formulae above assume that the Gamma shape parameter is known. When it is not known, the parameter can be estimated with the MLE, the bias-corrected MLE, or a moment estimator. Simulation studies in [21] suggest that when the sample size is large, all the estimators perform similarly, but the moment estimator has the advantage of being easy to compute; if the sample size is moderate, the bias-corrected MLE is better.
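For reference, a minimal sketch of the moment estimator of the Gamma shape parameter for an i.i.d. sample of positive values is given below; it uses the fact that a Gamma distribution with mean mu and shape nu has variance mu^2/nu. Applying it within the regression model (e.g., to suitably standardized residuals) would require additional care; the implementation details here are ours.

```python
import numpy as np

def gamma_shape_moment(y_pos):
    """Moment estimator of the Gamma shape from an i.i.d. positive sample.

    For Gamma(shape=nu, mean=mu), Var = mu^2 / nu, so nu = mean^2 / variance.
    """
    y_pos = np.asarray(y_pos, dtype=float)
    m = y_pos.mean()
    v = y_pos.var(ddof=1)
    return m * m / v

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.5, scale=1.2, size=5000)
print(gamma_shape_moment(sample))   # should be close to 2.5
```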
The canonical link for the Gamma regression part can encounter problems. Recall that the canonical link for the Gamma regression is the negative inverse of the mean. The left-hand side of the link equation is required to be negative because the support of the Gamma distribution is (0, ∞), whereas the right-hand side can freely take any real value. Due to this conflict between the natural parameter space and the range of the estimated linear predictor, the likelihood function can become undefined, because the logarithm of the mean appears in it and cannot be evaluated for a negative mean (see Equation (11)).
4. Using Log Link for Gamma Regression
In this section, we consider using the log link for the Gamma regression part while keeping the logistic regression part unchanged. The settings of the two-part model are similar to those in Section 3, except that changing the link function from the canonical link to the log link changes the score equations and the Hessian matrix. For the updating formulae based on the Fisher scoring algorithm, the first- and second-order partial derivatives of the Bernoulli component remain the same as in (8)–(10) because the link function does not appear in the Bernoulli part. Now consider the second component of the log likelihood, which was given in (5). With the same notation as in the previous section, the log likelihood of the ith row of non-zero observations in the co-occurrence matrix, corresponding to the Gamma distribution, can be expressed in terms of the log-link mean.
The first-order partial derivatives of this log likelihood with respect to the row-related parameters follow directly. The second-order partial derivatives and their negative expectations, which are components of the Fisher information matrix, can then be derived.
The first part of the alternating regression has an updating equation based on the Fisher scoring algorithm (Equation (29)). The updating equation in (29) is iterated for E epochs, say E = 20, for the same row i and outer iteration t, where t is the iteration number. The multiple epochs allow the estimate of the row-related parameters to get closer to its MLE for the given current values of the column-related parameters, as the score equations and information matrix are also updated with the refreshed parameter values. There is no need to use too many epochs, because the fixed column parameters are not at their true values yet and still need to be estimated later. The formulae involved in the updating equations are given below.
Here, the partial derivatives are those of Equations (8) and (25), evaluated at the current parameter values, and the expectations are given in (9)–(10) and (26)–(28), except that the parameters take the current outer-iteration values when the update is in the outer loop and the most recently updated values when it is in the inner loop of updates.
The aforementioned equations are used to update the row-related part of the alternating ZIG regression. Now consider the other side of the alternating ZIG regression, in which updates are performed for the column-related parameters while the row-related parameters are held fixed. The first-order partial derivatives of the Bernoulli component follow directly, and its second-order partial derivatives and their expectations are the same as in the canonical link case, given in Equations (21)–(23). Next, consider the first-order partial derivatives of the second (Gamma) component. To derive the second-order partial derivatives, two frequently used terms are first noted, and the second derivatives are then written in terms of them.
Therefore, the updating equations for the column-related parameters based on the Fisher scoring algorithm are given in Equations (30) and (31). The updating Equation (30) belongs to the outer loop of iterations, and the iterations in (31) form the inner loop of epochs for the same column j and iteration t. The quantities involved are the partial derivatives listed above, evaluated at the current outer-iteration values during the outer loop and at the most recently updated values during the inner loop of iterations.
5. Convergence Analysis
In this section, we discuss the convergence behavior of the algorithm analytically. Our algorithm contains two components: a logistic regression part and a Gamma regression part. If the parameter estimation of these two components were independent of each other, each part would be a standard generalized linear model (GLM). In our model, the estimation of the two components cannot be separated, but the two components of the log likelihood are additive, so the convergence behavior of the single-component standard GLM case is still relevant. We therefore first discuss the convergence behavior in the standard case, as it applies to the situation where we hold one block of parameters fixed while estimating the other.
In the standard GLM setting, parameter estimation usually converges quickly. Abnormal behaviors can nevertheless occur, such as the estimated model component moving outside the valid range of the distribution. For example, in Gamma regression with the canonical (negative inverse) link, the estimated mean may occasionally become negative even though the distribution requires a positive mean. Refs. [22,23,24] presented conditions for the existence of the MLE in logistic regression models. They proved that, in the non-trivial case, if there is overlap between the convex cones generated by the covariate values of the different classes, the maximum likelihood estimates of the regression parameters exist and are unique. On the other hand, if the convex cones generated by the covariates of the different classes exhibit complete separation or quasi complete separation, the maximum likelihood estimates do not exist or are unbounded. They generally recommended inserting a stopping rule if complete separation is found, or restarting the iterative algorithm with standardized observations (mean 0 and variance 1) if quasi complete separation is found. Ref. [24] also recommended a procedure to check these conditions. Under quasi complete separation, the estimation process diverges at least at some points, which makes the estimated probability of belonging to the correct class grow toward one. Therefore, ref. [24] recommended checking the maximum predicted probability for each data point at each iteration. If the maximum probability is close to 1 and is larger than the previous iterations' maximum probability, there are two possible explanations. The first is that the data point is an outlier within its own class while the two convex cones overlap; in this case, the MLE exists and is unique, so one should print a warning but let the algorithm continue. The other possibility is that there is quasi complete separation in the data; in this case, the process should be stopped and rerun with the observation vectors standardized to zero mean and unit variance.
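The diagnostic suggested in [24] can be sketched as monitoring, across iterations, the largest fitted probability assigned to an observation's own class; a value that keeps growing toward one signals possible quasi complete separation. The thresholds below are illustrative choices rather than prescriptions from [24].

```python
import numpy as np

def max_correct_class_prob(p_hat, y_binary):
    """Largest fitted probability assigned to an observation's own class."""
    p_correct = np.where(y_binary == 1, p_hat, 1.0 - p_hat)
    return float(p_correct.max())

def separation_warning(history, tol=1e-4, near_one=0.999):
    """Flag possible (quasi) complete separation from the iteration history.

    history : list of max_correct_class_prob values, one per iteration.
    Returns True when that maximum keeps growing and is essentially 1.
    """
    if len(history) < 2:
        return False
    growing = history[-1] > history[-2] + tol
    return growing and history[-1] > near_one
```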
Ref. [24] also stated that the difficulties associated with complete and quasi complete separation are small-sample problems. With a large sample size, the probability of observing a set of separated data points is close to zero. Complete separation may occur with any type of data, but quasi complete separation is unlikely to occur with truly continuous data.
Ref. [25] illustrated the non-convergence problem with a Poisson regression example. The author proposed a simple solution and implemented it in an R package (version 1.2.1). Specifically, if the iteratively reweighted least squares (IRLS) procedure produces either an infinite deviance or predicted values that fall within an invalid range, the parameter update is repeatedly halved until the problematic behavior disappears. A further step-halving check verifies that the updated deviance is reduced compared with the previous iteration; if not, step-halving is triggered again so that the algorithm monotonically reduces the deviance.
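A step-halving safeguard of the kind described in [25] can be sketched as follows: if the proposed update yields an invalid fit or fails to reduce the deviance, the step is repeatedly halved before being accepted. The deviance and validity checks are passed in as functions, since their exact form depends on the model; this is a generic outline, not the package's actual code.

```python
def step_halving_update(theta, step, deviance_fn, valid_fn, max_halvings=20):
    """Apply one IRLS / Fisher scoring step with step-halving.

    deviance_fn(theta) -> model deviance (should decrease)
    valid_fn(theta)    -> True when the fitted values are in a valid range
    """
    base_dev = deviance_fn(theta)
    for _ in range(max_halvings):
        candidate = theta + step
        if valid_fn(candidate) and deviance_fn(candidate) < base_dev:
            return candidate
        step = step / 2.0          # halve the step and try again
    return theta                   # give up and keep the current estimate
```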
Based on the aforementioned studies of standard GLM convergence behavior, we now analytically examine the convergence behavior of our alternating ZIG regression. For this discussion, we first specify our convex cones. Recall that one block of vector representations serves as data when we estimate the other block. In our context, the convex cones are therefore defined through the column vector representations when we estimate the row-related parameters, and through the row vector representations when we estimate the column-related parameters; in each case, one cone is generated by the vectors corresponding to the zero entries and the other by those corresponding to the positive entries.
First, we consider the case in which either complete separation or quasi complete separation exists in the data. We discuss the estimation of the row-related parameters while holding the column-related parameters fixed.
Suppose that complete separation or quasi complete separation exists between the two convex cones. Then there exists a vector direction whose inner product with the data vectors is non-negative for the observations in one cone and non-positive for those in the other. (This direction can be taken to be perpendicular to a vector that lies between the two cones but belongs to neither of them.) Consider the log likelihood of the Bernoulli part when the model parameters are moved along this direction by k units, and compare the original log likelihood with the updated one.
For the observations with positive responses, the fitted probability increases toward 1 as k increases to ∞, so their log likelihood contribution increases toward 0. For the observations with zero responses, the fitted probability decreases toward 0 as k increases to ∞, so their log likelihood contribution also increases toward 0.
Putting the two pieces together, we see that the Bernoulli log likelihood increases with k for any given starting point. Therefore, the maximum cannot be reached until k goes to ∞, which means the MLE does not exist, or the solution set is unbounded, for the current value of the fixed parameters. The same conclusion can be reached by examining the partial derivative of the Bernoulli log likelihood with respect to k: under separation, both of its terms are non-negative, so the gradient is positive except in the degenerate case. Hence, there is no solution except the trivial one in the binomial regression alone. Of course, this is not exactly our case, because the Gamma regression component still has to be considered jointly.
Now consider the Gamma component and its first- and second-order derivatives with respect to k. The first derivative can be written in terms of a standardized Gamma random variable with mean 0 and variance 1, obtained by centering each observation at its mean and scaling using the shape parameter. The second-order derivative is negative, which implies that the Gamma component of the log likelihood is concave in k.
Gathering the partial derivatives from the Bernoulli and Gamma components, we obtain the partial derivative of the full log likelihood with respect to k. From the above discussion, the first two terms in (35), corresponding to the Bernoulli part, are greater than 0, while the standardized Gamma terms are centered at zero with variance 1. Therefore, for the MLE to exist, the sum of the standardized Gamma terms must cancel the Bernoulli contribution and the remaining positive term. If the shape parameter of the Gamma distribution is small, the distribution is highly skewed with a long right tail, so more observations fall below their means; however, a small shape parameter also shrinks the scale of the Gamma contribution, so the Bernoulli part may dominate and the entire partial derivative remains greater than zero. When the shape parameter is large, the Gamma random variables are approximately normally distributed, the observations are located roughly symmetrically on either side of their means, and the standardized Gamma terms nearly cancel; then there is nothing left to neutralize the positive Bernoulli contribution. As a result, whether the shape parameter is large or small, when complete separation or quasi complete separation holds, it is highly likely that the MLE does not exist. Only when the shape parameter is intermediate, so that the Gamma distribution is still skewed, might the excess of observations below their means allow the Gamma contribution to cancel all the other positive terms; in that case, a solution could exist.
Next, consider the case in which the two convex cones overlap (i.e., there is neither complete separation nor quasi complete separation in the data). Recall that the ZIG model is a two-part model with a Bernoulli part and a Gamma part. The log likelihood corresponding to the Bernoulli part can be shown to be strictly concave in the parameters being estimated. This is because its first component is an affine function of those parameters (see Equation (7)), which is both convex and concave when the other block of parameters is held fixed, and its second component is strictly concave, since its second derivative is less than 0. Thus, the log likelihood corresponding to the Bernoulli part is strictly concave.
Now consider the log likelihood corresponding to the Gamma part in (24). We only need to consider the summation over its last two terms, because the other terms do not involve the regression parameters. One of these terms is an affine function of the parameters, which is both convex and concave; the other involves the composition of a convex function with an affine map, which is convex (see ref. [26]), and it enters the log likelihood with a negative sign, making it strictly concave in the parameters being estimated. Hence, the Gamma log likelihood is strictly concave. Combining the two concave components, the entire log likelihood for any row i is a strictly concave function of the parameters being estimated. Additionally, because the two convex cones overlap, updating the parameters along any direction in the overlapping region makes the two components of the derivative in Formula (34) take opposite signs. In this case, the objective has a unique optimum when there is neither complete separation nor quasi complete separation in the data. Similar arguments apply when we estimate the column-related parameters while holding the row-related parameters fixed; that is, a unique MLE exists when there is neither complete separation nor quasi complete separation. This means that, in the non-separation scenario, the alternating procedure finds the MLE of the row-related parameters when the column-related parameters are fixed, and vice versa.
Next, we consider the convergence behavior for the estimation of one block of parameters without fixing the value of the other. For this purpose, consider the complete data consisting of the observed matrix together with the unobserved vector representations; the entire matrix of unobserved representations is missing. Because there are so many missing values, a logical approach is to regard them as randomly drawn from a distribution with relatively few parameters, and we assume this distribution is in the exponential family. Consider the unconditional density of the complete data, the conditional density of the complete data given the observed data, and the resulting marginal density of the observed data. In the next few paragraphs, we explain that the alternating updates in ZIG ultimately lead to a parameter value that maximizes this marginal (observed-data) likelihood.
For exponential families, the unconditional density of the complete data and the conditional density given the observed data share the same natural parameter and the same sufficient statistic; they differ only in that they are defined over different sample spaces. Writing both densities in the general exponential family format, the log of the observed-data (marginal) density is their log ratio. Its first-order derivative is the difference between the conditional expectation of the sufficient statistic given the observed data and its unconditional expectation under the complete-data likelihood, and its negative second-order derivative is the difference between the corresponding unconditional and conditional variances, where the conditional variance term is the expected conditional covariance matrix under the sampling density of the observed data. These equalities assume that the order of expectation and differentiation can be exchanged. In other words, the derivative of the observed-data log likelihood is the difference between the conditional and unconditional expectations of the sufficient statistic.
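Written out with a generic natural parameter and sufficient statistic T (our symbols, since the original notation is not reproduced here), these relations are
\[
\frac{\partial}{\partial \boldsymbol{\phi}} \log f_{\mathrm{obs}}(\mathbf{y}\mid\boldsymbol{\phi})
  = \mathbb{E}\left[T(\mathbf{X}) \mid \mathbf{y}, \boldsymbol{\phi}\right]
  - \mathbb{E}\left[T(\mathbf{X}) \mid \boldsymbol{\phi}\right],
\qquad
-\frac{\partial^{2}}{\partial \boldsymbol{\phi}\,\partial \boldsymbol{\phi}^{\top}} \log f_{\mathrm{obs}}(\mathbf{y}\mid\boldsymbol{\phi})
  = \operatorname{Var}\left[T(\mathbf{X}) \mid \boldsymbol{\phi}\right]
  - \operatorname{Var}\left[T(\mathbf{X}) \mid \mathbf{y}, \boldsymbol{\phi}\right].
\]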
Meanwhile, the updating equation based on the Fisher scoring algorithm can be written in terms of these derivatives. In the limit, the update converges to a fixed point at which the score is zero, i.e., the conditional and unconditional expectations of the sufficient statistic coincide.
The complete data log likelihood based on the joint distribution of the observed matrix and the unobserved vector representations can be written as the sum of the observed-data log likelihood defined at the end of Section 2 and the log density of the unobserved representations given the parameters. The latter is also a member of the exponential family, and we assume its sufficient statistics are linear in the unobserved representations. Given the unobserved representations, the observed-data likelihood contains the Bernoulli and Gamma parts, in which the sufficient statistic is linear in the data. Then the sufficient statistics for the complete data problem are linear in both the observed data and the unobserved representations. In this case, computing the conditional expectation of the sufficient statistics is equivalent to a procedure that first fills in the individual values of the unobserved representations and then computes the sufficient statistics using the filled-in values. With the filled-in values, the computation of the estimator of the remaining parameters follows the usual maximum likelihood principle. This results in iterative updates of the two blocks of parameters back and forth. Essentially, the procedure is a transformation from an assumed parameter vector to another parameter vector that maximizes the conditional expected likelihood.
One of the difficulties in the ZIG model is that the parameterization allows arbitrary orthogonal transformations of both the row and column vector representations without affecting the value of the likelihood. Even in cases where the likelihood of the complete data problem is concave, the likelihood of the ZIG model may not be concave. Consequently, multiple solutions of the likelihood equations can exist; an example is a ridge of solutions corresponding to orthogonal transformations of the parameters.
For complete data problems in exponential family models with canonical links, the Fisher scoring algorithm is equivalent to the Newton–Raphson algorithm, which has a quadratic rate of convergence. This advantage is due to the fact that the second derivative of the log likelihood does not depend on the data, and in these cases the Fisher scoring algorithm converges quadratically when the starting values are near a maximum. However, we do not have the complete data when estimating the parameters. Fisher scoring algorithms often fail to achieve quadratic convergence in incomplete data problems, since the second derivative then typically does depend on the data. Furthermore, the scoring algorithm does not always increase the likelihood and can, in some cases, move toward a local maximum if the starting values are poorly chosen.
In summary, we conclude that, on each side of the alternating regression, our alternating ZIG regression has a unique MLE when the other block of parameters stays fixed, and the algorithm converges when there is overlap in the data. Here, the data refer to the column vector representations while we estimate the row-related parameters and to the row vector representations while we estimate the column-related parameters. When there is overlap, both the row-wise and column-wise log likelihoods are concave functions with unique maxima. However, when there is complete separation or quasi complete separation, the alternating ZIG regression is very likely to fail to converge. This is because the first-order partial derivative of the Bernoulli part is non-negative and increases with k; even though the Gamma component is a well-behaved concave curve, the entire log likelihood, unfortunately, may have no MLE, or the solution set may be unbounded, especially when the Gamma observations have a very large or very small shape parameter. The overall convergence behavior for estimating the parameters without holding one block fixed can be analyzed by treating the unobserved vector representations as missing data. The alternating update ultimately finds the maximum likelihood estimate based on the sampling distribution of the observed matrix if the solution is in the interior of the parameter space. This requires the joint distribution of the observed data and the unobserved representations to be in the exponential family with a sufficient statistic that is linear in both.
6. Adjusting Parameter Update with Learning Rate
In the convergence analysis section, our discussion was based on holding one block of parameters fixed while estimating the other. The Fisher scoring algorithm is a modified version of Newton's method. In general, Newton's method solves a root-finding problem by numerical approximation: it starts from an initial value and computes a sequence of points by repeatedly applying the Newton update. Newton's method converges fast because the distance between the estimate and the true value shrinks quickly, such that the distance at the next iteration is asymptotically equivalent to the squared distance at the previous iteration; that is, Newton's method has quadratic convergence order (cf. [27], p. 29, and [28]). This convergence order holds when the initial value of the iteration is in a neighborhood of the true value, the third derivative of the objective is continuous, and the relevant second derivative at the solution is non-zero. The Fisher scoring algorithm differs slightly from Newton's method in that the Hessian is replaced by its expected value. This algorithm is asymptotically equivalent to Newton's method and therefore enjoys the same asymptotic properties, such as consistency of the parameter estimate; as the sample size becomes large, the convergence order increases [28]. It has some advantages over Newton's method in that the expected value of the negative Hessian matrix is positive definite, which guarantees that the update moves uphill toward maximizing the log likelihood function, assuming the model is correct and the covariates are the true explanatory variables.
Because the Fisher scoring algorithm assumes that the fixed block of parameters is at its true value when we estimate the other block, complications can arise when the fixed parameters are not equal to their true values. To see this, note that our updating equations are all written in the context of the Fisher scoring algorithm, which relies on the expectation of the Hessian matrix computed under the correct distribution at the true parameter value. In particular, the algorithm holds one block fixed while estimating the other, and the resulting estimates determine the distribution, because the distribution is a function of both blocks of parameters. When the fixed block is held at a value far from its true value during intermediate steps, the distribution is wrong even though it is in the right family. The consequence of using a wrong distribution to compute the expectation of the Hessian matrix can be a sequence of parameter updates that converges to a limiting value unequal to the true parameter [28]; in some cases, the algorithm may even diverge. Our simulation study in a later section confirms this point.
To avoid this divergence problem in the parameter updates, we introduce a learning rate adjustment so that the change in the parameter estimate is scaled by a factor that involves a small constant learning rate, such as 0.1 or 0.01, and decreases with the iteration number t. The adjustment is applied to both the inner-loop and outer-loop iterations. The algorithm follows the same workflow as listed in Algorithm 1, except that the epoch update in the inner loop (Algorithm 2) is replaced with Algorithm 3, which uses the learning rate adjustment. Our choice of this learning rate adjustment comes from modifying the popular adaptive moment estimation (Adam) method and stochastic gradient descent. In stochastic gradient descent, the parameter update is scaled by a decreasing learning rate to make small moves, compensating for the random nature of selecting only one observation to compute the gradient; the step sizes are required to be square-summable but not summable. The single randomly selected observation corresponds to one row or column of our data matrix. However, our parameter update does not use just one observation; instead, all observations in the retrieved row or column are used to compute the gradient and the information matrix. Given the parameters and a loss function at a given training iteration, the Adam update divides an exponential moving average of the past gradients by the square root of an exponential moving average of their second moments, with a small scalar added to prevent division by 0. Our use of the Fisher information should provide a better mechanism than Adam's exponential moving average of second moments for achieving the effect of increasing the learning rate for sparser parameters and decreasing it for less sparse ones, because Adam only uses the diagonal entries of the second-moment information and ignores the covariance between the estimated parameters contained in the off-diagonal entries. Refs. [29,30] both pointed out that Adam may not converge to optimal solutions even for some simple convex problems, although it is overwhelmingly popular in machine learning applications.
Using the learning rate adjustment makes each update take smaller steps before changing direction. This adjustment turns out to be crucial; the simulation study in the next section examines its effect.
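A minimal sketch of the adjusted step is given below, assuming for concreteness a decay of the form eta/sqrt(t); the constant eta and the exact decay schedule are illustrative choices, and score_fn and info_fn stand in for the score equations and Fisher information derived earlier.

```python
import numpy as np

def damped_fisher_step(theta, score_fn, info_fn, t, eta=0.1):
    """One Fisher scoring update scaled by the learning rate eta / sqrt(t)."""
    s = score_fn(theta)
    info = info_fn(theta)
    step = np.linalg.solve(info, s)
    return theta + (eta / np.sqrt(t)) * step
```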
Algorithm 1 (SA-ZIG regression). Initialize the parameters and the overall loss (total negative log likelihood). Repeat until convergence: for each row of data, perform n_epoch updates of the row-related parameters using that row (see Algorithm 2 or 3); for each column of data, perform n_epoch updates of the column-related parameters using that column; then recompute the row and column contributions, compute the overall negative log likelihood, and stop when its relative change is less than a predefined threshold.

Algorithm 2 (Epoch update in the inner loop of Algorithm 1, without learning rate adjustment). For epoch = 1, ..., n_epoch: compute the score and information for the row parameters and update them using the current column parameter values based on the formulae in (29). For epoch = 1, ..., n_epoch: compute the score and information for the column parameters and update them using the current row parameter values based on the formulae in (31).

Algorithm 3 (Updating equations in SA-ZIG regression with learning rate adjustment). Same as Algorithm 2, except that each update step is scaled by the learning rate adjustment described above.
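To tie the pieces together, the sketch below shows the overall alternating structure of Algorithms 1–3 in simplified form: sweep over rows updating the row parameters with the column parameters fixed, sweep over columns with the row parameters fixed, and stop when the relative change in the total negative log likelihood falls below a threshold. All function and variable names are our own, the eta/sqrt(t) decay is an illustrative choice, and the update_row/update_col callbacks stand in for the Fisher scoring epoch loops sketched earlier.

```python
import numpy as np

def fit_sa_zig(Y, update_row, update_col, neg_loglik,
               row_params, col_params, max_iter=100, tol=1e-5, eta=0.1):
    """Alternating SA-ZIG estimation (simplified outline).

    Y          : (n, m) non-negative data matrix with many zeros
    update_row : update_row(i, Y[i, :], row_params[i], col_params, lr)
    update_col : update_col(j, Y[:, j], col_params[j], row_params, lr)
    neg_loglik : neg_loglik(Y, row_params, col_params) -> scalar loss
    """
    n, m = Y.shape
    prev_loss = np.inf
    for t in range(1, max_iter + 1):
        lr = eta / np.sqrt(t)                 # learning rate adjustment
        for i in range(n):                    # rows: hold column parameters fixed
            row_params[i] = update_row(i, Y[i, :], row_params[i], col_params, lr)
        for j in range(m):                    # columns: hold row parameters fixed
            col_params[j] = update_col(j, Y[:, j], col_params[j], row_params, lr)
        loss = neg_loglik(Y, row_params, col_params)
        if np.isfinite(prev_loss) and abs(prev_loss - loss) <= tol * max(abs(loss), 1.0):
            break                             # relative change small enough
        prev_loss = loss
    return row_params, col_params
```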
8. Conclusions and Discussion
In summary, we presented the shared parameter alternating zero-inflated Gamma (SA-ZIG) regression model in this paper. The SA-ZIG model is designed for highly skewed non-negative matrix data. It uses a logit link to model the zero versus positive observations. For the Gamma part, we considered two link functions: the canonical link and the log link, and derived updating formulas for both.
We proposed an algorithm that alternately updates the row-related and column-related parameters while holding one of them fixed. The Fisher scoring algorithm, with or without learning rate adjustment, was employed in each step of the alternating update. Numerical studies indicate that learning rate adjustment is crucial in SA-ZIG regression; without it, the algorithm may fail to find the optimal direction.
After model estimation, the matrix is factorized into the product of a left matrix and a right matrix. The rows of the left matrix and the columns of the right matrix provide vector representations for the rows and columns, respectively. These estimated row and column vector representations can then be used to assess the relevance of items and make recommendations in downstream analysis.
The SA-ZIG model is inherently similar to factor analysis. In factor analysis, both the loading matrix and the coefficient vector are unknown. The key difference between SA-ZIG and factor analysis is that SA-ZIG uses a large coefficient matrix, whereas factor analysis uses a single vector. Additionally, SA-ZIG assumes a two-stage Bernoulli–Gamma model, while factor analysis assumes a normal distribution.
In both models, likelihood-based estimation can determine convergence behavior by linking the complete data likelihood with the conditional likelihood. For factor analysis, the normal distribution and the linearity of the sufficient statistic in the observed data allow the use of ALS to estimate both the loading matrix and the coefficient vector [31]. However, SA-ZIG cannot use ALS because the variance of the Gamma distribution is not constant.
In both SA-ZIG and factor analysis, the unobserved row (or column) vector representation and the factor loading matrix can be treated as missing data, assuming the data are missing completely at random. For missing data analysis, the well-known Expectation Maximization (EM) algorithm can be used to estimate parameters. The EM algorithm has the advantageous property that successive updates always move towards maximizing the log likelihood. It works well when the proportion of missing data is small, but it is notoriously slow when a large amount of data is missing.
For SA-ZIG, the alternating scheme with the Fisher scoring algorithm offers the benefit of a quadratic rate of convergence if the true parameters and their estimates lie within the interior of the parameter space. However, in real applications, the estimation process might diverge at either stage of the alternating scheme because the Fisher scoring update does not always guarantee an upward direction, especially in cases of complete or quasi complete separation. Additionally, the algorithm may struggle to find the optimal solution due to the non-identifiability of the row and column matrices under orthogonal transformations, which leads to a ridge of solutions. The learning rate adjustment in Algorithm 3 helps by making small moves during successive updates in the later stages of the algorithm and is thereby more likely to find a solution.
Future research on similar problems could explore alternative distributions beyond Gamma. Tweedie and Weibull distributions, for instance, are capable of modeling both symmetric and skewed data through varying parameters, each with its own associated link functions. However, new algorithms and convergence analyses would need to be developed specifically for these distributions. In practical applications, the most suitable distribution for the observed data is often uncertain, making diagnostic procedures an important area for further investigation.