Article

An Alternative Estimator for Poisson–Inverse-Gaussian Regression: The Modified Kibria–Lukman Estimator

Rasha A. Farghali, Adewale F. Lukman, Zakariya Algamal, Murat Genc and Hend Attia

1 Department of Mathematics, Insurance, and Applied Statistics, Helwan University, Cairo 11795, Egypt
2 Department of Mathematics and Statistics, University of North Dakota, Grand Forks, ND 58202, USA
3 Department of Statistics and Informatics, University of Mosul, Mosul 41002, Iraq
4 Department of Management Information Systems, Faculty of Economics and Administrative Sciences, Tarsus University, 33400 Mersin, Turkey
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(3), 169; https://doi.org/10.3390/a18030169
Submission received: 6 February 2025 / Revised: 9 March 2025 / Accepted: 13 March 2025 / Published: 14 March 2025
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Abstract

Poisson regression is used to model count response variables. The method relies on the strict assumption that the mean and variance of the response variable are equal, whereas in practice overdispersion is common. In addition, in the presence of multicollinearity, the parameter estimates obtained with the maximum likelihood estimator are adversely affected. This paper introduces a new biased estimator that extends the modified Kibria–Lukman estimator to the Poisson–Inverse-Gaussian regression model to deal with overdispersion and multicollinearity in the data. The superiority of the proposed estimator over existing biased estimators is established in terms of the matrix and scalar mean squared error. Moreover, the performance of the proposed estimator is examined through a simulation study. Finally, its superiority over the other estimators is demonstrated on a real dataset.

1. Introduction

Count models are considered for regression analysis when the response variable consists only of non-negative integers. These models are useful in identifying the factors that could have a major effect on the non-negative response variable [1]. The Poisson regression model is a popular count model that assumes the variance equals the mean (equi-dispersion). Count variables can, however, show over- or under-dispersion, meaning that the variance is higher or lower than the mean; in such cases, the Poisson model produces inaccurate results. Therefore, when count data are over- or under-dispersed, the Poisson model is substituted with the negative binomial, Conway–Maxwell–Poisson, or Poisson–Inverse-Gaussian (PIG) models, among others [2,3,4].
Heavy-tailed count data are prevalent in various disciplines, including biology, computer science, actuarial science, electronic engineering, and medical research [5,6]. These datasets are characterized by a predominance of values close to zero, accompanied by a small proportion of exceptionally large integer values. Effectively modeling such data requires a distribution that can accommodate both the heavy tail and the concentration of smaller values. One such model is the PIG distribution, a special case of the Sichel distribution with two parameters. Like the negative binomial (NB) distribution, the PIG model can be expressed as a mixture; however, while the NB distribution assumes a gamma mixing component, the PIG distribution is derived from a Poisson and inverse-Gaussian mixture [7,8]. This distinction allows the PIG model to capture heavy tails and high kurtosis, making it a more flexible alternative to the NB distribution, particularly for datasets with a high initial peak and right-skewed distributions [9,10].
The PIG maximum likelihood estimator (PIGMLE) is commonly used to estimate the parameters of the PIG regression model. One significant drawback of the PIGMLE is its susceptibility to unreliable inference and prediction in the presence of multicollinearity, as the high variance of the regression estimates can lead to instability and reduced accuracy [11].
To address the challenges posed by multicollinearity, researchers have developed alternative estimators that introduce a controlled bias to stabilize estimates, thereby mitigating its adverse effects [12]. Several biased estimators have been proposed for linear regression models, including the James–Stein estimator [13], the ridge regression estimator [14,15], the Liu and Liu-type estimators [16,17,18], and the Kibria–Lukman estimator [19]. These estimators have also been extended to nonlinear regression models, such as gamma regression, to improve estimation efficiency under multicollinearity [20,21,22,23].
Building on the work of Kibria and Lukman [19] and Aladeitan et al. [24], we develop a new estimator to address multicollinearity in the PIG regression model. The remainder of this article is organized as follows: Section 2 presents the fundamentals of the PIG regression model, reviews existing estimators, and introduces the PIG-MKL estimator. Section 3 provides a theoretical comparison of the estimators, while Section 4 evaluates their performance through a Monte Carlo simulation study. In Section 5, the proposed estimator is applied to real-world data. Finally, Section 6 summarizes the key findings and conclusions.

2. Methodology

2.1. Poisson–Inverse-Gaussian Regression Model (PIGM)

Holla [25] proposed the PIG distribution as a mixture of the Poisson and inverse-Gaussian distributions. The resulting probability mass function is defined as follows:
$$P(Y = y \mid \mu, \omega) = \sqrt{\frac{2\omega}{\pi}}\, e^{\omega/\mu}\, \frac{1}{y!} \left[2\omega\left(1 + \frac{1}{2\omega\mu^{2}}\right)\right]^{-\frac{1}{2}\left(y - \frac{1}{2}\right)} K_{y-\frac{1}{2}}\!\left(\sqrt{2\omega\left(1 + \frac{1}{2\omega\mu^{2}}\right)}\right), \qquad (1)$$
for $y = 0, 1, 2, \ldots$, where $\mu > 0$ is the mean parameter, $\omega > 0$ is the dispersion parameter, and $K_{y-1/2}(\cdot)$ represents the modified Bessel function of the second kind [26]. The mean and variance are $E(y) = \mu$ and $V(y) = \mu + \mu^{3}/\omega$, respectively. The linear predictor is defined as
$$\mu_i = \exp(x_i'\beta), \qquad i = 1, 2, \ldots, n, \qquad (2)$$
where $x_i$ is the $i$-th row of the matrix $X$ of explanatory variables and $\beta$ is the vector of regression parameters.
The maximum likelihood estimator (MLE) method is widely employed for estimating regression parameters due to its desirable statistical properties, such as consistency, efficiency, and asymptotic normality. This method determines the parameter values that maximize the log-likelihood function, identifying the values that best explain the observed data. For the regression model specified in Equation (1), the corresponding log-likelihood function is formulated as follows:
$$\ell(\beta, \omega \mid y) = \sum_{i=1}^{n} \left\{ \frac{1}{2}\ln\!\left(\frac{2\omega}{\pi}\right) + \frac{\omega}{\mu_i} - \ln(y_i!) - \frac{1}{2}\left(y_i - \frac{1}{2}\right)\ln\!\left[2\omega\left(1 + \frac{1}{2\omega\mu_i^{2}}\right)\right] + \ln K_{y_i-\frac{1}{2}}\!\left(\sqrt{2\omega\left(1 + \frac{1}{2\omega\mu_i^{2}}\right)}\right) \right\} \qquad (3)$$
Setting the partial derivatives of the log-likelihood function to zero yields the score equations:
$$S(\beta) = \frac{\partial \ell(\beta, \omega \mid y)}{\partial \beta_j} = 0, \qquad j = 1, 2, \ldots, q. \qquad (4)$$
The solution to Equation (4) requires numerical methods such as the Newton–Raphson or Fisher scoring method [26,27]. The Fisher scoring algorithm is commonly employed for estimating the parameters $\beta_j$, $j = 1, 2, \ldots, q$. Let $\hat\beta^{(l)}$ denote the maximum likelihood estimate after $l$ iterations; then $\hat\beta^{(l+1)}$ is given as follows:
$$\hat\beta^{(l+1)} = \hat\beta^{(l)} + I(\hat\beta^{(l)})^{-1} S(\hat\beta^{(l)}), \qquad (5)$$
where $l$ is the number of iterations until convergence is reached and $I(\beta) = -E\left[\partial^2 \ell(\beta)/\partial\beta_j \partial\beta_h\right]$ is the Fisher information matrix. The iterations terminate when $\lVert \hat\beta^{(l+1)} - \hat\beta^{(l)} \rVert < \epsilon$, where $\epsilon$ is a specified small value, usually $10^{-6}$. At convergence, the estimator can be written in iteratively reweighted least squares form as:
$$\hat\beta_{PIGMLE} = (X'\hat{W}X)^{-1} X'\hat{W}\phi, \qquad (6)$$
where $\hat{W}$ is an $n \times n$ diagonal matrix with diagonal elements $\hat{w}_i = \hat\mu_i + \hat\mu_i^{3}/\hat\omega$, $i = 1, 2, \ldots, n$, and $\phi$ is the adjusted response with elements $\phi_i = \ln\hat\mu_i + (y_i - \hat\mu_i)/(\hat\mu_i + \hat\mu_i^{3}/\hat\omega)$. The dispersion parameter $\omega$ is estimated as follows:
$$\hat\omega = \frac{1}{n-p} \sum_{i=1}^{n} \frac{(y_i - \hat\mu_i)^2}{\widehat{Var}(\hat\mu_i)}. \qquad (7)$$
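To make the estimation procedure concrete, the following minimal R sketch implements the Fisher-scoring/IRLS form of Equations (5)–(7) for a fixed dispersion value. The function name pig_mle_irls and the starting value are illustrative assumptions, not the authors' code; in a full implementation, $\hat\omega$ would also be updated from Equation (7) between passes.

```r
# Minimal sketch of Equation (6) via iteratively reweighted least squares.
# Assumptions: X is the n x q design matrix, y the count response, omega a
# fixed dispersion value; all names here are illustrative.
pig_mle_irls <- function(X, y, omega, tol = 1e-6, max_iter = 100) {
  beta <- solve(t(X) %*% X, t(X) %*% log(y + 0.5))         # crude starting value
  for (l in seq_len(max_iter)) {
    mu  <- as.vector(exp(X %*% beta))                      # Equation (2)
    w   <- mu + mu^3 / omega                               # diagonal of W-hat
    phi <- log(mu) + (y - mu) / w                          # adjusted response
    beta_new <- solve(t(X) %*% (w * X), t(X) %*% (w * phi))  # Equation (6)
    if (max(abs(beta_new - beta)) < tol) break             # convergence check
    beta <- beta_new
  }
  drop(beta_new)
}
```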
In this study, the matrix mean squared error (MMSE) and scalar mean squared error (SMSE) are primarily used to evaluate the performance of an estimator $\tilde\tau$. The MMSE and SMSE are defined as follows:
$$MMSE(\tilde\tau) = cov(\tilde\tau) + bias(\tilde\tau)\,bias(\tilde\tau)'$$
$$SMSE(\tilde\tau) = tr[cov(\tilde\tau)] + tr[bias(\tilde\tau)\,bias(\tilde\tau)'] = tr[cov(\tilde\tau)] + bias(\tilde\tau)'\,bias(\tilde\tau),$$
where $tr(\cdot)$ is the trace operator of a matrix.
The MMSE and the SMSE for the estimators are obtained using the spectral decomposition of the estimated weighted information matrix $X'\hat{W}X$. Assume there exists an orthogonal matrix $\theta$ such that $\theta'(X'\hat{W}X)\theta = \Lambda$, so that $X'\hat{W}X = \theta\Lambda\theta'$. Here, $\Lambda = \mathrm{diag}(\lambda_j)$ contains the ordered eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_q > 0$. We define $Z = X\theta$ and $\alpha = \theta'\beta$; then $\theta'X'\hat{W}X\theta = Z'\hat{W}Z = \Lambda$. The MMSE and the SMSE for $\hat\beta_{PIGMLE}$ are:
$$MMSE(\hat\alpha_{PIGMLE}) = \hat\omega\,\Lambda^{-1}$$
$$SMSE(\hat\alpha_{PIGMLE}) = tr\left[MMSE(\hat\alpha_{PIGMLE})\right] = \hat\omega \sum_{j=1}^{q} \frac{1}{\lambda_j}$$
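A minimal R sketch of these spectral quantities, under the same illustrative assumptions as the IRLS sketch above (F_hat is $X'\hat{W}X$ from the final pass; all names are ours, not the authors'):

```r
# Spectral quantities used in Equations (9)-(10) and throughout Section 3.
spectral_smse_mle <- function(F_hat, beta_hat, omega_hat) {
  eig    <- eigen(F_hat, symmetric = TRUE)
  theta  <- eig$vectors                   # orthogonal matrix of eigenvectors
  lambda <- eig$values                    # lambda_1 >= ... >= lambda_q
  alpha  <- drop(t(theta) %*% beta_hat)   # canonical coefficients alpha = theta' beta
  list(theta = theta, lambda = lambda, alpha = alpha,
       smse_mle = omega_hat * sum(1 / lambda))   # SMSE of the PIGMLE
}
```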
The PIGMLE exhibits high variance and instability under multicollinearity, highlighting the need for alternative estimators, which we discuss in the next section.

2.2. Biased Estimators

When multicollinearity is found in the data, corrective measures ought to be implemented to mitigate its effects. Multicollinearity poses a serious threat to the estimates of the regression coefficients [28,29], as it can lead to inflated variances of the estimated parameters. Consequently, the precision of these estimates is reduced. One approach to addressing multicollinearity is removing highly correlated regressors. However, an alternative is to retain all regressors and employ a biased estimator rather than the maximum likelihood estimator, which may offer advantages in certain contexts.
Biased estimators have been proposed to increase the accuracy of the parameter estimates when multicollinearity exists, especially when parameter estimation is the focus of the regression analysis. Following the work of James and Stein [13], Ashraf et al. [4] extended the James–Stein estimator to the Poisson–Inverse-Gaussian (PIG) regression model as:
$$\hat\beta_{PIGJSE} = c\,\hat\beta_{PIGMLE}, \qquad 0 < c < 1,$$
where
$$c = \frac{\hat\beta_{PIGMLE}'\,\hat\beta_{PIGMLE}}{\hat\beta_{PIGMLE}'\,\hat\beta_{PIGMLE} + \hat\psi\, tr\left[cov(\hat\beta_{PIGMLE})\right]}.$$
The MMSE and SMSE for β ^ P I G J S E are defined as follows:
$$MMSE(\hat\alpha_{PIGJSE}) = c^2\hat\omega\,\Lambda^{-1} + (c-1)^2\,\alpha\alpha'$$
$$SMSE(\hat\alpha_{PIGJSE}) = c^2\hat\omega \sum_{j=1}^{q} \frac{1}{\lambda_j} + (c-1)^2 \sum_{j=1}^{q} \alpha_j^2$$
Hoerl and Kennard [14] developed the ridge estimator to address multicollinearity in the linear regression model. Batool et al. [11] extended the ridge regression estimator to the Poisson–Inverse-Gaussian (PIG) regression model as:
$$\hat\beta_{PIGRRE} = \Lambda_{k+}^{-1}\,(X'\hat{W}X)\,\hat\beta_{PIGMLE}, \qquad k > 0,$$
where $\Lambda_{k+}^{-1} = (X'\hat{W}X + kI)^{-1}$.
The MMSE and SMSE for β ^ P I G R R E are defined as follows:
$$MMSE(\hat\alpha_{PIGRRE}) = \hat\omega\,\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1} + k^2\,\Lambda_{k+}^{-1}\alpha\alpha'\Lambda_{k+}^{-1}$$
$$SMSE(\hat\alpha_{PIGRRE}) = \hat\omega \sum_{j=1}^{q} \frac{\lambda_j}{(\lambda_j+k)^2} + k^2 \sum_{j=1}^{q} \frac{\alpha_j^2}{(\lambda_j+k)^2}$$
Following Hoerl and Kennard [14], Liu [16] introduced a biased estimator that competes favorably with the ridge estimator in reducing the impact of collinearity. Ashraf et al. [4] proposed the adjusted Liu estimator, which is defined as follows:
$$\hat\beta_{PIGLE} = \Lambda_{I}^{-1}\Lambda_{d+}\,\hat\beta_{PIGMLE}, \qquad 0 \le d < 1,$$
where $\Lambda_{I}^{-1} = (X'\hat{W}X + I)^{-1}$ and $\Lambda_{d+} = X'\hat{W}X + dI$. Hence, the MMSE and SMSE for $\hat\beta_{PIGLE}$ are defined as follows:
$$MMSE(\hat\alpha_{PIGLE}) = \hat\omega\,\Lambda_{I}^{-1}\Lambda_{d+}\Lambda^{-1}\Lambda_{d+}\Lambda_{I}^{-1} + (d-1)^2\,\Lambda_{I}^{-1}\alpha\alpha'\Lambda_{I}^{-1}$$
$$SMSE(\hat\alpha_{PIGLE}) = \hat\omega \sum_{j=1}^{q} \frac{(\lambda_j+d)^2}{(\lambda_j+1)^2 \lambda_j} + (d-1)^2 \sum_{j=1}^{q} \frac{\alpha_j^2}{(\lambda_j+1)^2}$$
Based on the research of Özkale and Kaçıranlar [30] and Akram et al. [31], Alrweili [32] introduced a new two-parameter estimator for the PIG regression model as follows:
$$\hat\beta_{PIGTPE} = \Lambda_{k+}^{-1}\Lambda_{kd}\,\hat\beta_{PIGMLE}, \qquad 0 \le d < 1, \; k > 0,$$
where $\Lambda_{kd} = X'\hat{W}X + kdI$. Hence, the MMSE and SMSE for $\hat\beta_{PIGTPE}$ are defined as follows:
$$MMSE(\hat\alpha_{PIGTPE}) = \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{kd}\Lambda^{-1}\Lambda_{kd}\Lambda_{k+}^{-1} + (d-1)^2 k^2\,\Lambda_{k+}^{-1}\alpha\alpha'\Lambda_{k+}^{-1}$$
$$SMSE(\hat\alpha_{PIGTPE}) = \hat\omega \sum_{j=1}^{q} \frac{(\lambda_j+kd)^2}{(\lambda_j+k)^2 \lambda_j} + (d-1)^2 k^2 \sum_{j=1}^{q} \frac{\alpha_j^2}{(\lambda_j+k)^2}.$$

2.3. Suggested Biased Estimators

In this paper, we present a novel biased estimator that builds on the Kibria–Lukman estimator. The Kibria–Lukman estimator was originally developed for the linear regression model by Kibria and Lukman [19]; its extension to the PIG regression model, $\hat\beta_{PIGKLE}$, is obtained as
$$\hat\beta_{PIGKLE} = \Lambda_{k+}^{-1}\Lambda_{k-}\,\hat\beta_{PIGMLE}, \qquad k > 0,$$
where $\Lambda_{k-} = X'\hat{W}X - kI$. Thus, the MMSE and SMSE of $\hat\beta_{PIGKLE}$ are given by
$$MMSE(\hat\alpha_{PIGKLE}) = \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda^{-1}\Lambda_{k-}\Lambda_{k+}^{-1} + 4k^2\,\Lambda_{k+}^{-1}\alpha\alpha'\Lambda_{k+}^{-1}$$
$$SMSE(\hat\alpha_{PIGKLE}) = \hat\omega \sum_{j=1}^{q} \frac{(\lambda_j-k)^2}{(\lambda_j+k)^2 \lambda_j} + 4k^2 \sum_{j=1}^{q} \frac{\alpha_j^2}{(\lambda_j+k)^2}$$
Aladeitan et al. [24] introduced the modified Kibria–Lukman estimator for the Poisson regression model. We extend this method to the PIG regression model because it dominates the estimators reviewed above. Thus, $\hat\beta_{PIGMKLE}$ is given by:
$$\hat\beta_{PIGMKLE} = \Lambda_{k+}^{-1}\Lambda_{k-}\,\hat\beta_{PIGRRE}, \qquad k > 0,$$
where $\Lambda_{k-} = X'\hat{W}X - kI$ and $\Lambda_{k+}^{-1}$ are as defined before. Then:
$$MMSE(\hat\alpha_{PIGMKLE}) = \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1} + k^2\,\Lambda_{k+}^{-2}(3\Lambda + kI)\alpha\alpha'(3\Lambda + kI)\Lambda_{k+}^{-2}$$
$$SMSE(\hat\alpha_{PIGMKLE}) = \hat\omega \sum_{j=1}^{q} \frac{\lambda_j(\lambda_j-k)^2}{(\lambda_j+k)^4} + k^2 \sum_{j=1}^{q} \frac{(3\lambda_j+k)^2\,\alpha_j^2}{(\lambda_j+k)^4}$$
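In the canonical coordinates introduced in Section 2.1, every estimator in this section shrinks $\hat\alpha_{PIGMLE}$ eigenvalue-wise, which the following R sketch makes explicit (lambda and alpha_mle as returned by the spectral sketch above; k, d, and the James–Stein constant c0 chosen by the analyst; all names are illustrative, not the authors' code):

```r
# Eigenvalue-wise (canonical) form of the reviewed and proposed estimators.
shrink_alpha <- function(lambda, alpha_mle, k, d, c0) {
  list(
    jse  = c0 * alpha_mle,                                      # James-Stein
    rre  = lambda / (lambda + k) * alpha_mle,                   # ridge
    le   = (lambda + d) / (lambda + 1) * alpha_mle,             # Liu
    tpe  = (lambda + k * d) / (lambda + k) * alpha_mle,         # two-parameter
    kle  = (lambda - k) / (lambda + k) * alpha_mle,             # Kibria-Lukman
    mkle = lambda * (lambda - k) / (lambda + k)^2 * alpha_mle   # proposed MKL
  )
}
# SMSE of the proposed estimator (true alpha replaced by an estimate in practice):
smse_mkle <- function(lambda, alpha, omega_hat, k)
  omega_hat * sum(lambda * (lambda - k)^2 / (lambda + k)^4) +
  k^2 * sum((3 * lambda + k)^2 * alpha^2 / (lambda + k)^4)
```

Estimates on the original scale are recovered as $\hat\beta = \theta\hat\alpha$.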

3. Theoretical Comparisons Between Estimators

For a novel estimator to be considered a significant contribution, it must demonstrate clear advantages over existing methodologies. A rigorous theoretical framework is essential for establishing its superiority, providing a foundation for comparative analysis. Theoretical investigations enable a precise understanding of the conditions under which the new estimator outperforms or underperforms relative to conventional approaches. This, in turn, aids practitioners in determining when to adopt the new estimator and when traditional methods remain preferable.
Lemma 1. 
Let $B$ be a positive definite matrix, $\zeta$ a vector of nonzero constants, and $\nu$ a positive constant. Then $\nu B - \zeta\zeta' > 0$ if and only if $\zeta' B^{-1} \zeta < \nu$ [33].
Definition 1. 
Let $\hat\theta_1, \hat\theta_2$ be two estimators of $\theta$. The estimator $\hat\theta_2$ is superior to the estimator $\hat\theta_1$ in the sense of the MMSE criterion if and only if
$$\Delta(\hat\theta_1, \hat\theta_2) = MMSE(\hat\theta_1) - MMSE(\hat\theta_2) > 0.$$
Lemma 2. 
Let $\hat\theta_1 = A_1 y$ and $\hat\theta_2 = A_2 y$ be two estimators of $\theta$. Assume that $\psi = cov(\hat\theta_1) - cov(\hat\theta_2) > 0$, where $cov(\hat\theta_1)$ and $cov(\hat\theta_2)$ represent the covariance matrices of $\hat\theta_1$ and $\hat\theta_2$, respectively. Then,
$$\Delta(\hat\theta_1, \hat\theta_2) = MMSE(\hat\theta_1) - MMSE(\hat\theta_2) = \psi + a_1 a_1' - a_2 a_2' > 0$$
if and only if
$$a_2'\,(\psi + a_1 a_1')^{-1}\,a_2 < 1,$$
where $a_1, a_2$ denote the bias vectors of the estimators $\hat\theta_1, \hat\theta_2$, respectively [33,34].
Theorem 1.
The estimator $\hat\alpha_{PIGMKLE}$ is superior to the estimator $\hat\alpha_{PIGMLE}$ in the sense of the MMSE criterion, i.e., $MMSE(\hat\alpha_{PIGMLE}) - MMSE(\hat\alpha_{PIGMKLE}) > 0$, if and only if
$$bias(\hat\alpha_{PIGMKLE})'\left[\hat\omega\Lambda^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\right]^{-1} bias(\hat\alpha_{PIGMKLE}) < 1$$
Proof. 
The difference between $cov(\hat\alpha_{PIGMLE})$ and $cov(\hat\alpha_{PIGMKLE})$ is as follows:
$$cov(\hat\alpha_{PIGMLE}) - cov(\hat\alpha_{PIGMKLE}) = \hat\omega\Lambda^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1} = \hat\omega\,\mathrm{diag}\!\left\{\frac{1}{\lambda_j} - \frac{\lambda_j(\lambda_j-k)^2}{(\lambda_j+k)^4}\right\}_{j=1}^{q}$$
Since $bias(\hat\alpha_{PIGMKLE})\,bias(\hat\alpha_{PIGMKLE})'$ is non-negative definite, the difference matrix $cov(\hat\alpha_{PIGMLE}) - cov(\hat\alpha_{PIGMKLE})$ is positive definite if and only if $(\lambda_j+k)^4 > \lambda_j^2(\lambda_j-k)^2$, which holds for every $k > 0$ because $(\lambda_j+k)^2 > \lambda_j\lvert\lambda_j-k\rvert$. Thus, $\hat\omega\Lambda^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}$ is positive definite, which means that $\hat\alpha_{PIGMKLE}$ is better than $\hat\alpha_{PIGMLE}$ since it has a smaller covariance matrix. □
Theorem 2.
The estimator $\hat\alpha_{PIGMKLE}$ is superior to the estimator $\hat\alpha_{PIGJSE}$ in the sense of the MMSE criterion, i.e., $MMSE(\hat\alpha_{PIGJSE}) - MMSE(\hat\alpha_{PIGMKLE}) > 0$, if and only if
$$bias(\hat\alpha_{PIGMKLE})'\left[c^2\hat\omega\Lambda^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\right]^{-1} bias(\hat\alpha_{PIGMKLE}) < 1$$
Proof. 
The difference between $cov(\hat\alpha_{PIGJSE})$ and $cov(\hat\alpha_{PIGMKLE})$ is as follows:
$$cov(\hat\alpha_{PIGJSE}) - cov(\hat\alpha_{PIGMKLE}) = c^2\hat\omega\Lambda^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1} = \hat\omega\,\mathrm{diag}\!\left\{\frac{c^2}{\lambda_j} - \frac{\lambda_j(\lambda_j-k)^2}{(\lambda_j+k)^4}\right\}_{j=1}^{q}$$
Since $bias(\hat\alpha_{PIGMKLE})\,bias(\hat\alpha_{PIGMKLE})'$ is non-negative definite, the difference matrix $cov(\hat\alpha_{PIGJSE}) - cov(\hat\alpha_{PIGMKLE})$ is positive definite if and only if $c^2(\lambda_j+k)^4 > \lambda_j^2(\lambda_j-k)^2$, for $k > 0$ and $0 < c < 1$. When this inequality holds, $c^2\hat\omega\Lambda^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}$ is positive definite, which means that $\hat\alpha_{PIGMKLE}$ is better than $\hat\alpha_{PIGJSE}$ since it has a smaller covariance matrix. □
Theorem 3.
The estimator $\hat\alpha_{PIGMKLE}$ is superior to the estimator $\hat\alpha_{PIGRRE}$ in the sense of the MMSE criterion, i.e., $MMSE(\hat\alpha_{PIGRRE}) - MMSE(\hat\alpha_{PIGMKLE}) > 0$, if and only if
$$bias(\hat\alpha_{PIGMKLE})'\left[\hat\omega\,\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\right]^{-1} bias(\hat\alpha_{PIGMKLE}) < 1$$
Proof. 
The difference between $cov(\hat\alpha_{PIGRRE})$ and $cov(\hat\alpha_{PIGMKLE})$ is as follows:
$$cov(\hat\alpha_{PIGRRE}) - cov(\hat\alpha_{PIGMKLE}) = \hat\omega\,\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1} = \hat\omega\,\mathrm{diag}\!\left\{\frac{\lambda_j}{(\lambda_j+k)^2} - \frac{\lambda_j(\lambda_j-k)^2}{(\lambda_j+k)^4}\right\}_{j=1}^{q}$$
Since $bias(\hat\alpha_{PIGMKLE})\,bias(\hat\alpha_{PIGMKLE})'$ is non-negative definite, the difference matrix $cov(\hat\alpha_{PIGRRE}) - cov(\hat\alpha_{PIGMKLE})$ is positive definite if and only if $(\lambda_j+k)^2 > (\lambda_j-k)^2$, which holds for $k > 0$. Thus, $\hat\omega\,\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}$ is positive definite, which means that $\hat\alpha_{PIGMKLE}$ is better than $\hat\alpha_{PIGRRE}$ since it has a smaller covariance matrix. □
Theorem 4.
The estimator $\hat\alpha_{PIGMKLE}$ is superior to the estimator $\hat\alpha_{PIGLE}$ in the sense of the MMSE criterion, i.e., $MMSE(\hat\alpha_{PIGLE}) - MMSE(\hat\alpha_{PIGMKLE}) > 0$, if and only if
$$bias(\hat\alpha_{PIGMKLE})'\left[\hat\omega\,\Lambda_{I}^{-1}\Lambda_{d+}\Lambda^{-1}\Lambda_{d+}\Lambda_{I}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\right]^{-1} bias(\hat\alpha_{PIGMKLE}) < 1$$
Proof. 
The difference between $cov(\hat\alpha_{PIGLE})$ and $cov(\hat\alpha_{PIGMKLE})$ is as follows:
$$cov(\hat\alpha_{PIGLE}) - cov(\hat\alpha_{PIGMKLE}) = \hat\omega\,\Lambda_{I}^{-1}\Lambda_{d+}\Lambda^{-1}\Lambda_{d+}\Lambda_{I}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1} = \hat\omega\,\mathrm{diag}\!\left\{\frac{(\lambda_j+d)^2}{(\lambda_j+1)^2\lambda_j} - \frac{\lambda_j(\lambda_j-k)^2}{(\lambda_j+k)^4}\right\}_{j=1}^{q}$$
Since $bias(\hat\alpha_{PIGMKLE})\,bias(\hat\alpha_{PIGMKLE})'$ is non-negative definite, the difference matrix $cov(\hat\alpha_{PIGLE}) - cov(\hat\alpha_{PIGMKLE})$ is positive definite if and only if
$$(\lambda_j+k)^4(\lambda_j+d)^2 > \lambda_j^2(\lambda_j-k)^2(\lambda_j+1)^2, \qquad \text{for } k > 0 \text{ and } 0 < d < 1.$$
When this condition holds, $\hat\omega\,\Lambda_{I}^{-1}\Lambda_{d+}\Lambda^{-1}\Lambda_{d+}\Lambda_{I}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}$ is positive definite, which means that $\hat\alpha_{PIGMKLE}$ is better than $\hat\alpha_{PIGLE}$ since it has a smaller covariance matrix. □
Theorem 5.
The estimator $\hat\alpha_{PIGMKLE}$ is superior to the estimator $\hat\alpha_{PIGILE}$ in the sense of the MMSE criterion, i.e., $MMSE(\hat\alpha_{PIGILE}) - MMSE(\hat\alpha_{PIGMKLE}) > 0$, if and only if
$$bias(\hat\alpha_{PIGMKLE})'\left[\hat\omega\,\Lambda_{I}^{-1}\Lambda_{d-}\Lambda^{-1}\Lambda_{d-}\Lambda_{I}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\right]^{-1} bias(\hat\alpha_{PIGMKLE}) < 1,$$
where $\Lambda_{d-} = X'\hat{W}X - dI$.
Proof. 
The difference between $cov(\hat\alpha_{PIGILE})$ and $cov(\hat\alpha_{PIGMKLE})$ is as follows:
$$cov(\hat\alpha_{PIGILE}) - cov(\hat\alpha_{PIGMKLE}) = \hat\omega\,\Lambda_{I}^{-1}\Lambda_{d-}\Lambda^{-1}\Lambda_{d-}\Lambda_{I}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1} = \hat\omega\,\mathrm{diag}\!\left\{\frac{(\lambda_j-d)^2}{(\lambda_j+1)^2\lambda_j} - \frac{\lambda_j(\lambda_j-k)^2}{(\lambda_j+k)^4}\right\}_{j=1}^{q}$$
Since $bias(\hat\alpha_{PIGMKLE})\,bias(\hat\alpha_{PIGMKLE})'$ is non-negative definite, the difference matrix $cov(\hat\alpha_{PIGILE}) - cov(\hat\alpha_{PIGMKLE})$ is positive definite if and only if
$$(\lambda_j+k)^4(\lambda_j-d)^2 > \lambda_j^2(\lambda_j-k)^2(\lambda_j+1)^2, \qquad \text{for } k > 0 \text{ and } 0 < d < 1.$$
When this condition holds, $\hat\omega\,\Lambda_{I}^{-1}\Lambda_{d-}\Lambda^{-1}\Lambda_{d-}\Lambda_{I}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}$ is positive definite, which means that $\hat\alpha_{PIGMKLE}$ is better than $\hat\alpha_{PIGILE}$ since it has a smaller covariance matrix. □
Theorem 6. 
The estimator $\hat\alpha_{PIGMKLE}$ is superior to the estimator $\hat\alpha_{PIGTPE}$ in the sense of the MMSE criterion, i.e., $MMSE(\hat\alpha_{PIGTPE}) - MMSE(\hat\alpha_{PIGMKLE}) > 0$, if and only if
$$bias(\hat\alpha_{PIGMKLE})'\left[\hat\omega\,\Lambda_{k+}^{-1}\Lambda_{kd}\Lambda^{-1}\Lambda_{kd}\Lambda_{k+}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\right]^{-1} bias(\hat\alpha_{PIGMKLE}) < 1$$
Proof. 
The difference between $cov(\hat\alpha_{PIGTPE})$ and $cov(\hat\alpha_{PIGMKLE})$ is as follows:
$$cov(\hat\alpha_{PIGTPE}) - cov(\hat\alpha_{PIGMKLE}) = \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{kd}\Lambda^{-1}\Lambda_{kd}\Lambda_{k+}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1} = \hat\omega\,\mathrm{diag}\!\left\{\frac{(\lambda_j+kd)^2}{(\lambda_j+k)^2\lambda_j} - \frac{\lambda_j(\lambda_j-k)^2}{(\lambda_j+k)^4}\right\}_{j=1}^{q}$$
Since $bias(\hat\alpha_{PIGMKLE})\,bias(\hat\alpha_{PIGMKLE})'$ is non-negative definite, the difference matrix $cov(\hat\alpha_{PIGTPE}) - cov(\hat\alpha_{PIGMKLE})$ is positive definite if and only if
$$(\lambda_j+k)^2(\lambda_j+kd)^2 > \lambda_j^2(\lambda_j-k)^2, \qquad \text{for } k > 0 \text{ and } 0 < d < 1.$$
When this condition holds, $\hat\omega\,\Lambda_{k+}^{-1}\Lambda_{kd}\Lambda^{-1}\Lambda_{kd}\Lambda_{k+}^{-1} - \hat\omega\,\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}\Lambda\Lambda_{k+}^{-1}\Lambda_{k-}\Lambda_{k+}^{-1}$ is positive definite, which means that $\hat\alpha_{PIGMKLE}$ is better than $\hat\alpha_{PIGTPE}$ since it has a smaller covariance matrix. □
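As a quick numerical illustration of the covariance-difference conditions used in the proofs above, the following R sketch checks them eigenvalue-wise for one arbitrary choice of $\lambda_j$, $k$, $d$, and $c$ (all values are illustrative, not from the paper; every comparison returns TRUE for this choice):

```r
# Eigenvalue-wise covariance comparisons from Theorems 1-6 (illustrative).
lambda <- c(50, 10, 2, 0.1); k <- 2; d <- 0.3; c0 <- 0.9
mkl <- lambda * (lambda - k)^2 / (lambda + k)^4            # MKL variance term
all(1 / lambda > mkl)                                      # vs PIGMLE (Thm 1)
all(c0^2 / lambda > mkl)                                   # vs PIGJSE (Thm 2)
all(lambda / (lambda + k)^2 > mkl)                         # vs PIGRRE (Thm 3)
all((lambda + d)^2 / ((lambda + 1)^2 * lambda) > mkl)      # vs PIGLE  (Thm 4)
all((lambda - d)^2 / ((lambda + 1)^2 * lambda) > mkl)      # vs PIGILE (Thm 5)
all((lambda + k * d)^2 / ((lambda + k)^2 * lambda) > mkl)  # vs PIGTPE (Thm 6)
```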

4. Simulation Study

This Section outlines a Monte Carlo simulation designed to evaluate the performance of the estimators under different levels of multicollinearity and dispersion. The predictors are generated using the following equation:
$$x_{ij} = (1-\gamma^2)^{1/2} z_{ij} + \gamma z_{i,q+1}, \qquad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, q, \qquad (27)$$
where the $z_{ij}$ are independent standard normal random numbers and $\gamma$ controls the degree of correlation among the predictors. We examined the following levels of multicollinearity: $\gamma = 0.80, 0.90, 0.95$, and $0.99$. The response variable $y_i$ is drawn from the Poisson–Inverse-Gaussian distribution, where $\mu_i = \exp(x_i'\beta)$ and $\omega$ represents the dispersion parameter, with $\omega = 2, 4, 6$. The sample sizes are set to $n = 30, 50, 100$, and $200$, and the numbers of predictors to $q = 4, 8$, and $12$. The regression coefficients were chosen such that $\beta'\beta = 1$ [35]. The performance of the estimators was compared using the estimated mean squared error (MSE) in Equation (28):
$$MSE = \frac{1}{1000} \sum_{r=1}^{1000} \sum_{j=1}^{q} (\hat\beta_{rj} - \beta_j)^2, \qquad (28)$$
where $\hat\beta_{rj}$ denotes the $j$-th coefficient estimate obtained in the $r$-th replication of the simulation and $\beta_j$ is the corresponding true regression parameter. Each simulation is replicated 1000 times. The mixpoissonreg package in R provides tools for fitting and analyzing mixed Poisson regression models, specifically those involving over-dispersed count data [36]. It handles models where the response variable follows a Poisson distribution mixed with another distribution to account for over-dispersion, and it supports the two common mixed Poisson models used here: the Poisson–Inverse-Gaussian regression and negative binomial (NB) regression models.
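For reference, a single PIG regression fit with this package might look as follows (a sketch assuming the CRAN function mixpoissonreg() and its model argument; sim_data is a hypothetical data frame):

```r
# Illustrative PIG regression fit with the mixpoissonreg package.
# install.packages("mixpoissonreg")
library(mixpoissonreg)
fit <- mixpoissonreg(y ~ x1 + x2 + x3 + x4, model = "PIG", data = sim_data)
summary(fit)    # estimated coefficients play the role of beta-hat_PIGMLE
```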
We summarize the simulation algorithm as follows (an R sketch of this loop appears after the list):
Step (1): Initialize parameters:
- Set the sample size $n$.
- Define the total number of replications of the simulation, $r = 1, 2, \ldots, 1000$.
- Specify the number of predictors $q$.
- Define the true regression coefficients $\beta$.
Step (2): Set the replication counter: let $r = 1$.
Step (3): Generate data with the desired degree of correlation using Equation (27).
Step (4): Estimate the parameters using the estimators under study.
Step (5): Evaluate the estimators' performance using the MSE in Equation (28).
Step (6): Iterate until completion:
- Increment the replication counter: $r = r + 1$.
- If $r$ exceeds 1000, stop the simulation; otherwise, return to Step (3).
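A minimal R sketch of this loop is given below. The fitting wrapper pig_fit() is a hypothetical stand-in (for example, around mixpoissonreg() or the IRLS sketch of Section 2.1), and the PIG counts are drawn through the Poisson–inverse-Gaussian mixture with the mixing shape chosen as $\omega/\mu_i$, which is our assumption about the intended parameterization so that $V(y_i) = \mu_i + \mu_i^3/\omega$ as stated in Section 2.1:

```r
# Sketch of Steps (1)-(6) for one simulation setting (names illustrative).
library(statmod)                       # for rinvgauss()
set.seed(1)
n <- 50; q <- 4; gamma <- 0.99; omega <- 2; R <- 1000
beta <- rep(1 / sqrt(q), q)            # chosen so that beta' beta = 1
sq_err <- numeric(R)
for (r in seq_len(R)) {
  Z  <- matrix(rnorm(n * (q + 1)), n, q + 1)
  X  <- sqrt(1 - gamma^2) * Z[, 1:q] + gamma * Z[, q + 1]   # Equation (27)
  mu <- drop(exp(X %*% beta))
  nu <- rinvgauss(n, mean = 1, shape = omega / mu)  # inverse-Gaussian mixing
  y  <- rpois(n, lambda = mu * nu)                  # PIG counts via mixture
  beta_hat  <- pig_fit(X, y)                        # hypothetical wrapper
  sq_err[r] <- sum((beta_hat - beta)^2)
}
mse <- mean(sq_err)                                 # Equation (28)
```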
The simulation results are summarized in Table 1, Table 2 and Table 3.
The performance of the estimators is assessed based on their MSE values across different sample sizes, multicollinearity levels, dispersion parameters, and numbers of predictors. The results presented in Table 1, Table 2 and Table 3 reveal a consistent trend, where the estimator with the lowest MSE is preferred due to its higher estimation accuracy and efficiency. This is supported by Figure 1 and Figure 2. As the dispersion parameter increases from ω = 2 to ω = 4   and   6 , the MSE also rises. A notable trend across all tables is the decline in MSE as the sample size increases, emphasizing the benefits of larger datasets. This pattern remains consistent across varying levels of multicollinearity and dispersion, indicating that a larger sample size effectively reduces estimator variance and enhances overall performance. Conversely, smaller sample sizes (e.g., n = 30 ) result in higher MSEs, particularly in cases of severe multicollinearity (e.g., γ = 0.99 ) and overdispersion. Under these conditions, the performance differences between estimators become more pronounced, with the proposed estimator demonstrating greater adaptability than its counterparts.
Increasing the number of predictors from q = 4 (Table 1) to q = 12 (Table 3) leads to higher MSE values across all conditions. The superiority of the proposed estimator is particularly evident in smaller sample sizes, where it demonstrates strong resistance to both multicollinearity and dispersion effects. Although the performance gap diminishes as the sample size grows, the proposed estimator dominates the alternatives, underscoring its robustness and efficiency across varying model complexities and data conditions.

5. Real Data Application

We performed a thorough examination of the nuts dataset in this application, following Hardin and Hilbe's [7] documentation and Lukman et al.'s [35] recent adoption. The dataset is accessible through the COUNT library in the R program and contains fifty-two observations on four predictors. The response variable is the number of cones that squirrels remove. The predictors are the DBH (diameter at breast height) per plot ($x_1$), mean tree height per plot ($x_2$), canopy closure, as a percentage ($x_3$), and standardized number of trees per plot ($x_4$). To determine which model was best suited for the dataset, we fitted the Poisson regression model (PRM), the negative binomial regression model (NBRM), and the Poisson–Inverse-Gaussian regression model (PIGRM). The results are shown in Table 4. The PIGRM provides the best fit based on residual variance, as it has significantly smaller errors. In terms of AIC, the NBRM is marginally better than the PIGRM, but the substantial reduction in residual variance for the PIGRM suggests that it may offer a better model despite the slightly higher AIC. The PRM performs worse on both metrics, making it the least desirable model. Furthermore, over-dispersion was detected in the model using the dispersion test, which made the Poisson regression model inappropriate for this dataset [35].
Furthermore, the estimated variance inflation factors are 40.89, 39.61, 16.332, and 4.474, which clearly show that the model contains correlated regressors. Together with the dispersion test, this confirms that the model suffers from both multicollinearity and over-dispersion.
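The data access and multicollinearity check can be reproduced along the following lines (a sketch: the variable names in the COUNT package's nuts data frame are assumptions based on Hardin and Hilbe [7] and should be verified against the package documentation):

```r
# Loading the squirrel cone-stripping data and checking for collinearity.
library(COUNT)
data(nuts)
str(nuts)   # 52 plots; response: number of cones stripped
# Variance inflation factors from an auxiliary linear fit (car package);
# the column names below are assumptions:
# car::vif(lm(cones ~ dbh + height + cover + ntrees, data = nuts))
```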
Table 5 displays the estimated regression coefficients derived from the various estimators examined in this study. We assessed the estimators' performance using the SMSE, the prediction mean squared error (PMSE), and the prediction mean absolute error (PMAE). The dataset was randomly split into 80% training and 20% test sets. We estimated the model parameters on the training set and calculated the performance criteria on the test set. The results revealed that $\hat\alpha_{PIGMLE}$ exhibits poor performance in handling multicollinearity within the PIG regression model, as evidenced by its high SMSE, PMSE, and PMAE. The proposed estimator is the most preferred among the shrinkage estimators, achieving the lowest PMSE and PMAE.
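The prediction criteria can be computed along these lines (a sketch under the stated 80/20 split; fit_fun is a hypothetical function returning the coefficients of a chosen estimator):

```r
# Illustrative 80/20 train-test evaluation of prediction error.
eval_prediction <- function(X, y, fit_fun, train_frac = 0.8) {
  idx      <- sample(length(y), size = floor(train_frac * length(y)))
  beta_hat <- fit_fun(X[idx, ], y[idx])          # estimate on training set
  pred     <- drop(exp(X[-idx, ] %*% beta_hat))  # predict on test set
  c(PMSE = mean((y[-idx] - pred)^2),             # prediction mean squared error
    PMAE = mean(abs(y[-idx] - pred)))            # prediction mean absolute error
}
```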

6. Conclusions

The PIG regression model is widely used for modeling count data with overdispersion. Its parameters are typically estimated with the PIGMLE. However, multicollinearity is a threat to the performance of the PIGMLE. To address this issue, we proposed a novel biased estimator: the modified Kibria–Lukman estimator ($\hat\alpha_{PIGMKLE}$). Through theoretical analysis and extensive simulations, we evaluated its performance. The results demonstrate that the proposed estimator exhibits superior efficiency, as evidenced by a reduced MSE compared with traditional approaches. These findings suggest that the proposed estimator provides a more robust alternative for PIG regression in the presence of multicollinearity.
An example study was conducted, examining squirrel behavior and forest characteristics across different plots in Abernethy Forest, Scotland. The proposed estimator has the lowest PMSE and PMAE among the shrinkage estimators. The findings further support the superiority of the proposed method in handling multicollinearity while maintaining predictive accuracy. Our findings enhance the existing research on count regression models and provide valuable analytical tools for researchers across various disciplines seeking robust estimation methods. In future studies, we aim to enhance the effectiveness of these estimators by incorporating techniques such as partial least squares and principal component regression.

Author Contributions

Conceptualization, all authors; methodology, all authors; formal analysis, all authors; writing—original draft preparation, all authors; writing—review and editing, all authors; visualization, A.F.L.; supervision, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data will be made available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zha, L.; Lord, D.; Zou, Y. The Poisson inverse Gaussian (PIG) generalized linear regression model for analyzing motor vehicle crash data. J. Transp. Saf. Secur. 2015, 8, 18–35. [Google Scholar] [CrossRef]
  2. Månsson, K. On ridge estimators for the negative binomial regression model. Econ. Model. 2012, 29, 178–184. [Google Scholar] [CrossRef]
  3. Abonazel, M.R.; Saber, A.A.; Awwad, F.A. Kibria–Lukman estimator for the Conway–Maxwell Poisson regression model: Simulation and applications. Sci. Afr. 2023, 19, e01553. [Google Scholar] [CrossRef]
  4. Ashraf, B.; Amin, M.; Mahmood, T.; Faisa, M. Performance of Alternative Estimators in the Poisson-Inverse Gaussian Regression Model: Simulation and Application. Appl. Math. Nonlinear Sci. 2024, 9, 1–18. [Google Scholar] [CrossRef]
  5. Zhu, A.; Ibrahim, J.G.; Love, M.I. Heavy-tailed prior distributions for sequence count data: Removing the noise and preserving large differences. Bioinformatics 2019, 35, 2084–2092. [Google Scholar] [CrossRef] [PubMed]
  6. Gagnon, P.; Wang, Y. Robust heavy-tailed versions of generalized linear models with applications in actuarial science. Comput. Stat. Data Anal. 2024, 194, 107920. [Google Scholar] [CrossRef]
  7. Hardin, J.; Hilbe, J. Generalized Linear Models and Extensions, 2nd ed.; Stata Press: College Station, TX, USA, 2002; pp. 13–26. [Google Scholar]
  8. Dean, C.; Lawless, J.; Willmot, G. A mixed Poisson Inverse-Gaussian regression model. Can. J. Stat. 1989, 17, 171–181. [Google Scholar] [CrossRef]
  9. Putri, G.; Nurrohmah, S.; Fithriani, I. Comparing Poisson-Inverse Gaussian Model and Negative Binomial Model on case study: Horseshoe crabs data. J. Phys. Conf. Ser. 2017, 1442, 012028. [Google Scholar] [CrossRef]
  10. Rintara, P.; Ahmed, S.; Lisawadi, S. Post Improved Estimation and Prediction in the Gamma Regression Model. Thail. Stat. 2023, 21, 580–606. [Google Scholar]
  11. Batool, A.; Amin, M.; Elhassanein, A. On the performance of some new ridge parameter estimators in the Poisson-inverse Gaussian ridge regression. Alex. Eng. J. 2023, 70, 231–245. [Google Scholar] [CrossRef]
  12. Hocking, R.R.; Speed, F.M.; Lynn, M.J. A class of biased estimators in linear regression. Technometrics 1976, 18, 55–67. [Google Scholar] [CrossRef]
  13. James, W.; Stein, C. Estimation with quadratic loss. Proc. Fourth Berkeley Symp. Math. Stat. Probab. 1961, 1, 361–379. [Google Scholar]
  14. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
  15. Hoerl, A.E.; Kennard, R.W. Ridge regression: Applications to nonorthogonal problems. Technometrics 1970, 12, 69–82. [Google Scholar] [CrossRef]
  16. Liu, K. A new class of biased estimate in linear regression. Commun. Stat.-Theory Methods 1993, 22, 393–402. [Google Scholar]
  17. Liu, K. Using Liu-type estimator to combat collinearity. Commun. Stat.-Theory Methods 2003, 32, 1009–1020. [Google Scholar] [CrossRef]
  18. Liu, K. More on Liu-type estimator in linear regression. Commun. Stat.-Theory Methods 2004, 33, 2723–2733. [Google Scholar] [CrossRef]
  19. Kibria, B.G.; Lukman, A.F. A new ridge-type estimator for the linear regression model: Simulations and applications. Scientifica 2020, 2020, 9758378. [Google Scholar] [CrossRef]
  20. Mandal, S.; Belaghi, R.; Mahmoudi, A.; Aminnejad, M. Stein-type shrinkage estimators in gamma regression model with application to prostate cancer data. Stat. Med. 2019, 38, 4310–4322. [Google Scholar] [CrossRef]
  21. Akram, M.; Amin, M.; Faisal, M. On the generalized biased estimators for the gamma regression model: Methods and applications. Commun. Stat.-Simul. Comput. 2023, 52, 4087–4100. [Google Scholar] [CrossRef]
  22. Akram, M.; Amin, M.; Qasim, M. A new biased estimator for the gamma regression model: Some applications in medical sciences. Commun. Stat.-Theory Methods 2023, 52, 3612–3632. [Google Scholar] [CrossRef]
  23. Lukman, A.; Ayinde, K.; Kibria, G.; Adewuyi, E. Modified ridge-type estimator for the gamma regression model. Commun. Stat.-Simul. Comput. 2022, 51, 5009–5023. [Google Scholar] [CrossRef]
  24. Aladeitan, B.; Adebimpe, O.; Lukman, A.; Oludoun, O.; Abiodun, O. Modified Kibria-Lukman (MKL) estimator for the Poisson Regression Model: Application and simulation. F1000Research 2021, 10, 548. [Google Scholar] [CrossRef] [PubMed]
  25. Holla, M. On a Poisson-inverse Gaussian distribution. Metr. Int. J. Theor. Appl. Stat. 1967, 11, 115–121. [Google Scholar] [CrossRef]
  26. Abramowitz, M.; Stegun, I.A. (Eds.) Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; US Government Printing Office: Washington, DC, USA, 1968; Volume 55.
  27. Nelder, J.; Wedderburn, R. Generalized Linear Models. J. R. Stat. Soc. Ser. A 1972, 135, 370–384. [Google Scholar] [CrossRef]
  28. Lukman, A.F.; Ayinde, K. Review and classifications of the ridge parameter estimation techniques. Hacet. J. Math. Stat. 2017, 46, 953–967. [Google Scholar] [CrossRef]
  29. Lukman, A.; Kibria, G.; Ayinde, K.; Jegede, S. Modified One-Parameter Liu Estimator for the Linear Regression Model. In Modelling and Simulation in Engineering; Wiley Online Library: New York, NY, USA, 2020; p. 9574304. [Google Scholar]
  30. Özkale, M.R.; Kaçıranlar, S. The restricted and unrestricted two-parameter estimators. Commun. Stat.-Theory Methods 2007, 36, 2707–2725. [Google Scholar] [CrossRef]
  31. Akram, N.; Amin, M.; Qasim, M. A new Liu-type estimator for the inverse Gaussian regression model. J. Stat. Comput. Simul. 2020, 90, 1153–1172. [Google Scholar] [CrossRef]
  32. Alrweili, H. Liu-Type estimator for the Poisson-Inverse Gaussian regression model: Simulation and practical applications. Stat. Optim. Inf. Comput. 2024, 12, 982–1003. [Google Scholar] [CrossRef]
  33. Farebrother, R. Further Results on the Mean Square Error of Ridge Regression. J. R. Stat. Soc. 1976, 38, 248–250. [Google Scholar] [CrossRef]
  34. Trenkler, G.; Toutenburg, H. Mean squared error matrix comparisons between biased estimators—An overview of recent results. Stat. Pap. 1990, 31, 165–179. [Google Scholar] [CrossRef]
  35. Lukman, A.F.; Albalawi, O.; Arashi, M.; Allohibi, J.; Alharbi, A.A.; Farghali, R.A. Robust Negative Binomial Regression via the Kibria–Lukman Strategy: Methodology and Application. Mathematics 2024, 12, 2929. [Google Scholar] [CrossRef]
  36. Barreto-Souza, W.; Simas, A.B. General mixed Poisson regression models with varying dispersion. Stat. Comput. 2016, 26, 1263–1280. [Google Scholar] [CrossRef]
Figure 1. Graphical illustration of MSE against sample size for γ = 0.99, ω = 2, and q = 8.
Figure 2. Graphical illustration of MSE against sample size for γ = 0.99, ω = 6, and q = 4.
Table 1. Estimated MSE values for q = 4.

                ω = 2                          ω = 4                          ω = 6
n               30      50      100     200    30      50      100     200    30      50      100     200

γ = 0.80
α̂_PIGMLE       2.563   1.956   1.892   1.814  2.656   1.984   1.960   1.906  2.704   2.007   1.993   1.982
α̂_PIGRRE       1.347   0.965   0.925   0.919  1.531   0.981   0.957   0.939  1.680   1.000   0.981   0.972
α̂_PIGLE        1.328   0.975   0.935   0.904  1.524   0.988   0.957   0.925  1.899   1.000   0.990   0.964
α̂_PIGJSE       1.281   0.958   0.957   0.946  1.327   0.962   0.963   0.960  1.451   1.004   0.986   0.976
α̂_PIGTPE       1.267   0.982   0.914   0.902  1.294   0.995   0.986   0.929  1.309   1.006   1.000   0.974
α̂_PIGKLE       1.126   0.952   0.919   0.906  1.251   0.976   0.947   0.914  1.008   1.000   0.963   0.955
α̂_PIGMKLE      1.096   0.931   0.908   0.901  1.103   0.948   0.947   0.918  1.201   0.965   0.952   0.949

γ = 0.90
α̂_PIGMLE       3.933   2.229   1.952   1.895  4.830   2.253   1.993   1.937  4.993   2.714   2.022   2.019
α̂_PIGRRE       2.268   1.140   0.952   0.921  2.927   1.197   0.977   0.940  3.061   1.289   0.999   0.986
α̂_PIGLE        2.209   1.150   0.958   0.920  2.900   1.212   0.986   0.938  3.036   1.305   1.003   0.988
α̂_PIGJSE       1.964   1.134   0.976   0.948  2.411   1.156   0.996   0.968  2.492   1.171   1.011   1.009
α̂_PIGTPE       1.914   1.136   0.956   0.953  2.298   1.169   1.002   0.976  2.369   1.179   1.015   1.015
α̂_PIGKLE       1.628   1.024   0.956   0.906  1.913   1.105   0.972   0.940  1.992   1.120   1.003   0.990
α̂_PIGMKLE      1.549   0.952   0.935   0.902  1.784   1.008   0.945   0.928  1.897   1.699   0.971   0.950

γ = 0.95
α̂_PIGMLE       7.203   3.286   2.193   2.130  9.763   4.081   2.380   2.172  10.407  4.932   2.645   2.220
α̂_PIGRRE       4.251   1.819   1.072   1.038  5.757   2.454   1.104   1.041  6.187   3.045   1.184   1.140
α̂_PIGLE        4.133   1.805   1.072   1.032  5.603   2.381   1.127   1.041  6.006   2.935   1.205   1.162
α̂_PIGJSE       3.596   1.642   1.096   1.065  4.873   2.038   1.110   1.061  5.195   2.462   1.210   1.122
α̂_PIGTPE       3.436   1.614   1.103   1.071  4.650   1.967   1.115   1.098  5.960   2.354   1.125   1.117
α̂_PIGKLE       2.911   1.367   1.042   1.036  4.118   1.622   1.016   1.075  5.330   1.919   1.113   1.100
α̂_PIGMKLE      2.732   1.305   1.034   1.032  3.876   1.519   1.100   1.047  4.071   1.779   1.105   1.076

γ = 0.99
α̂_PIGMLE       45.884  20.772  7.035   5.806  53.706  25.398  9.405   7.780  56.620  33.407  11.446  9.476
α̂_PIGRRE       25.701  12.607  4.332   3.484  29.055  15.021  6.001   4.843  30.797  19.288  7.225   5.906
α̂_PIGLE        24.798  11.718  3.866   3.093  28.652  14.309  5.382   4.212  30.233  18.652  6.495   5.215
α̂_PIGTPE       22.897  10.367  3.512   2.899  26.800  12.675  4.695   3.884  28.254  16.671  5.713   4.730
α̂_PIGJSE       22.063  9.816   3.396   2.834  25.978  11.995  4.461   3.766  27.361  15.848  5.433   4.560
α̂_PIGKLE       20.575  8.621   2.925   2.509  24.672  10.689  3.771   3.310  25.896  14.385  4.662   3.987
α̂_PIGMKLE      19.771  8.045   2.738   2.381  23.869  10.005  3.485   3.113  25.036  13.575  4.317   3.740
Table 2. Estimated MSE values for q = 8.

                ω = 2                          ω = 4                          ω = 6
n               30      50      100     200    30      50      100     200    30      50      100     200

γ = 0.80
α̂_PIGMLE       6.478   2.036   1.964   1.866  8.074   2.092   1.982   1.918  9.105   2.277   2.010   1.995
α̂_PIGRRE       3.460   1.012   0.914   0.899  4.491   1.105   0.970   0.910  5.004   1.185   1.000   0.972
α̂_PIGLE        3.473   1.054   0.947   0.927  4.392   1.081   0.966   0.983  4.908   1.249   0.989   0.985
α̂_PIGJSE       3.234   1.018   0.943   0.931  4.030   1.046   0.982   0.957  4.545   1.137   1.005   0.999
α̂_PIGTPE       3.206   1.024   0.953   0.936  3.966   1.055   0.986   0.969  4.492   1.141   1.013   1.002
α̂_PIGKLE       2.984   0.977   0.924   0.903  3.708   1.002   0.940   0.897  4.218   1.030   0.996   0.983
α̂_PIGMKLE      2.889   0.956   0.918   0.902  3.587   0.980   0.964   0.959  4.092   0.996   0.985   0.982

γ = 0.90
α̂_PIGMLE       16.613  2.844   2.419   2.050  16.705  3.951   2.891   2.086  20.355  4.371   3.504   2.093
α̂_PIGRRE       9.046   1.404   1.144   0.996  9.316   2.132   1.436   0.997  11.408  2.515   1.818   0.998
α̂_PIGLE        8.804   1.501   1.238   1.013  8.978   2.168   1.531   1.039  11.002  2.473   1.907   1.050
α̂_PIGJSE       8.291   1.421   1.209   1.025  8.337   1.973   1.444   1.043  10.158  2.182   1.750   1.046
α̂_PIGTPE       8.191   1.430   1.221   1.031  8.191   1.963   1.452   1.052  9.961   2.159   1.749   1.058
α̂_PIGKLE       7.732   1.304   1.128   1.011  7.788   1.761   1.307   1.017  9.444   1.920   1.544   1.110
α̂_PIGMKLE      6.495   1.266   1.104   0.990  7.590   1.687   1.264   1.007  9.165   1.832   1.478   1.011

γ = 0.95
α̂_PIGMLE       28.105  5.684   4.395   2.131  42.217  9.106   6.266   2.429  47.142  10.507  8.554   2.752
α̂_PIGRRE       15.747  3.182   2.341   1.036  23.076  5.348   3.593   1.154  25.580  6.252   5.118   1.307
α̂_PIGLE        14.869  3.158   2.349   1.049  22.415  5.082   3.438   1.226  24.810  5.924   4.741   1.423
α̂_PIGJSE       14.026  2.838   2.195   1.065  21.067  4.545   3.128   1.214  23.524  5.244   4.270   1.375
α̂_PIGTPE       13.800  2.803   2.186   1.073  20.672  4.458   3.088   1.224  23.169  5.111   4.190   1.386
α̂_PIGKLE       13.139  2.473   1.951   1.015  19.837  3.996   2.699   1.160  22.349  4.593   3.678   1.274
α̂_PIGMKLE      12.786  2.353   1.867   1.006  19.334  3.799   2.550   1.143  21.874  4.361   3.468   1.242

γ = 0.99
α̂_PIGMLE       171.420 40.414  26.096  15.786 225.455 58.279  39.836  20.008 247.230 71.848  56.252  21.255
α̂_PIGRRE       91.340  24.024  16.571  10.114 117.311 33.219  24.659  11.340 127.790 42.226  33.631  13.572
α̂_PIGLE        88.722  21.862  14.071  8.431  115.666 31.413  21.324  10.849 126.440 38.562  30.151  11.425
α̂_PIGTPE       85.541  20.168  13.024  7.879  112.503 29.082  19.880  9.985  123.360 35.854  28.071  10.608
α̂_PIGJSE       84.340  19.572  12.662  7.710  111.383 27.246  19.310  9.671  122.330 34.748  28.246  10.338
α̂_PIGKLE       82.218  18.115  11.545  7.010  109.448 25.429  17.864  9.072  120.550 32.492  26.535  9.401
α̂_PIGMKLE      80.789  17.299  10.949  6.661  108.081 24.335  17.036  8.704  119.240 31.090  25.499  8.912
Table 3. Estimated MSE values for q = 12.

                ω = 2                          ω = 4                          ω = 6
n               30      50      100     200    30      50      100     200    30      50      100     200

γ = 0.80
α̂_PIGMLE       18.197  4.901   2.011   1.871  22.662  10.091  2.027   1.904  29.077  11.399  2.300   1.985
α̂_PIGRRE       9.630   2.537   0.956   0.915  12.012  5.592   0.977   0.937  15.433  6.476   1.057   0.995
α̂_PIGLE        9.433   2.626   1.008   0.927  11.796  5.456   1.037   0.941  15.057  6.189   1.201   0.981
α̂_PIGJSE       9.080   2.447   1.005   0.935  11.309  5.037   1.013   0.952  14.510  5.689   1.149   0.992
α̂_PIGTPE       9.008   2.446   1.014   0.940  11.212  4.989   1.025   0.955  14.402  5.627   1.162   0.995
α̂_PIGKLE       8.819   2.276   0.972   0.929  10.910  4.684   0.993   0.947  14.108  5.287   1.080   0.980
α̂_PIGMKLE      8.713   2.212   0.937   0.925  10.755  4.545   0.986   0.936  13.942  5.123   1.055   0.950

γ = 0.90
α̂_PIGMLE       36.880  10.744  2.778   1.998  38.850  22.156  3.691   2.015  39.860  27.797  5.009   2.025
α̂_PIGRRE       19.263  6.032   1.285   0.971  20.521  12.565  1.809   0.977  21.356  15.929  2.601   0.996
α̂_PIGLE        19.008  5.809   1.437   0.998  20.117  11.935  1.960   1.000  20.788  15.050  2.696   1.018
α̂_PIGJSE       18.403  5.362   1.388   0.999  19.386  11.057  1.843   1.007  19.890  13.871  2.501   1.012
α̂_PIGTPE       18.265  5.304   1.401   1.008  19.206  10.882  1.853   1.013  19.680  13.643  2.504   1.023
α̂_PIGKLE       17.975  4.907   1.307   0.981  18.810  10.248  1.689   0.986  19.163  12.921  2.263   0.997
α̂_PIGMKLE      17.574  4.729   1.277   0.945  17.799  9.924   1.631   0.948  18.879  12.544  2.173   0.975

γ = 0.95
α̂_PIGMLE       66.641  30.079  6.278   2.287  85.710  46.629  9.247   2.635  89.977  57.280  13.434  3.068
α̂_PIGRRE       34.588  17.375  3.445   1.089  45.226  26.555  5.347   1.292  46.842  32.207  7.943   1.396
α̂_PIGLE        34.213  16.021  3.363   1.155  44.240  24.953  5.020   1.357  46.258  30.496  7.269   1.599
α̂_PIGJSE       33.254  15.010  3.134   1.143  42.769  23.269  4.616   1.317  44.898  28.583  6.705   1.533
α̂_PIGTPE       32.980  14.776  3.118   1.155  42.363  22.838  4.559   1.333  44.534  28.072  6.601   1.550
α̂_PIGKLE       32.492  13.972  2.822   1.101  41.555  21.743  4.105   1.238  43.850  26.860  6.023   1.423
α̂_PIGMKLE      32.165  13.546  2.703   1.046  41.053  21.117  3.906   1.111  43.409  26.149  5.752   1.384

γ = 0.99
α̂_PIGMLE       306.507 183.213 56.727  13.293 375.705 254.194 57.365  19.084 383.607 323.779 111.401 24.977
α̂_PIGRRE       154.971 100.452 34.761  8.242  190.652 135.672 35.063  12.210 197.565 170.258 64.316  15.799
α̂_PIGLE        154.611 95.371  30.083  7.086  189.952 131.931 30.675  10.177 194.846 167.306 58.826  13.420
α̂_PIGTPE       152.947 91.424  28.308  6.635  187.477 126.844 28.627  9.525  191.421 161.567 55.590  12.465
α̂_PIGJSE       152.599 89.952  27.650  6.534  186.913 125.012 27.897  9.361  190.396 159.480 54.318  12.212
α̂_PIGKLE       151.808 87.232  25.980  5.933  185.773 121.917 26.073  8.570  188.554 156.134 51.715  11.151
α̂_PIGMKLE      151.189 85.386  24.991  5.648  184.901 119.712 25.000  8.179  187.200 153.668 50.016  10.617
Table 4. Model adequacy check.

Metric              PRM     NBRM    PIGRM
Residual variance   661.2   661.2   0.5498
AIC                 873.1   397.9   406.0
Table 5. Squirrel dataset: estimated regression coefficients and performance criteria.

Coef.   α̂_PIGMLE   α̂_PIGRRE   α̂_PIGLE    α̂_PIGJSE   α̂_PIGTPE   α̂_PIGKLE   α̂_PIGMKLE
x1      0.5436     0.0001     0.1434     0.0010     −0.0296    0.0430     −0.0001
x2      0.4398     0.0002     0.1440     0.0008     0.2138     0.0348     −0.0002
x3      −0.3814    −0.0002    −0.1318    −0.0007    −0.2544    −0.0302    0.0002
x4      0.8503     0.0001     0.2116     0.0016     −0.1302    0.0672     −0.0001
SMSE    987.4608   18.0435    17.5653    1.3558     171.9145   7.3136     1.3578
PMSE    453.3143   446.1027   448.5152   446.1133   450.5399   446.6592   446.0973
PMAE    16.1498    15.9001    15.9847    15.9005    16.0511    15.9198    15.8999
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
