Article

Inferences About Two-Parameter Multicollinear Gaussian Linear Regression Models: An Empirical Type I Error and Power Comparison

1 Department of Biostatistics, Florida International University, Miami, FL 33199, USA
2 Department of Mathematics and Statistics, Florida International University, Miami, FL 33199, USA
* Author to whom correspondence should be addressed.
Stats 2025, 8(2), 28; https://doi.org/10.3390/stats8020028
Submission received: 26 February 2025 / Revised: 19 April 2025 / Accepted: 19 April 2025 / Published: 23 April 2025
(This article belongs to the Section Statistical Methods)

Abstract
In linear regression analysis, the independence assumption for the explanatory variables is crucial, and the ordinary least squares (OLS) estimator, generally regarded as the Best Linear Unbiased Estimator (BLUE), is applied. However, multicollinearity can complicate the estimation of the effect of individual variables, leading to potentially inaccurate statistical inferences. Because of this issue, different types of two-parameter estimators have been explored. This paper compares t-tests for assessing the significance of regression coefficients under several two-parameter estimators. We conduct a Monte Carlo study to evaluate these methods by examining their empirical type I error and power characteristics, based on established protocols. The simulation results indicate that some two-parameter estimators achieve appreciable power gains while preserving the nominal size at 5%. Real-life data are analyzed to illustrate the findings of this paper.

1. Introduction

In linear regression analysis, the independence assumption of the explanatory variables is crucial, as the ordinary least squares (OLS) estimator is commonly used as the Best Linear Unbiased Estimator (BLUE). However, multicollinearity presents a significant obstacle, complicating the estimation of unique effects for individual variables. This often leads to OLS producing inefficient and unreliable estimates, marked by high standard errors and inaccurate confidence intervals (Kibria, 2003) [1]. To overcome these challenges, researchers have developed various alternative biasing estimators to replace OLS, each with a distinct approach to improving estimation accuracy. Pioneering efforts in this area include contributions from Hoerl and Kennard (1970) [2] and subsequent enhancements by Ehsanes Saleh and Kibria (1993) [3], Kibria (2003) [1], Alheety et al. (2025) [4], Dawoud and Kibria (2020) [5], Hoque and Kibria (2023) [6], Hoque and Kibria (2024) [7], Nayem et al. (2024) [8], and Yasmin and Kibria (2025) [9], among others.
Hypothesis testing is another aspect of statistical inference, especially in regression models, where it is necessary to test the significance of the coefficients. This procedure helps identify which variables are significant predictors of the outcome.
Our main objective is to assess the statistical significance of the regression coefficients within our model using hypothesis testing. However, the current body of research on this topic is somewhat limited. Halawa and Bassiouni (2000) [10] were instrumental in presenting approximate t-tests for regression coefficients within the framework of ridge regression, with a focus on empirical sizes and powers. Building on this, Cule et al. (2011) [11] assessed these tests for linear and logistic ridge regression frameworks, further advancing our understanding of their effectiveness across different types of regression models. Gokpinar and Ebegil (2016) [12] also contributed by evaluating the effectiveness of t-tests across various estimators of the ridge parameter $k$, based on insights from the existing literature. Additionally, Kibria and Banik (2019) [13] as well as Perez-Melo and Kibria (2020) [14] investigated the robustness of t-tests across different ridge parameters. Despite these advancements, continued research is essential to deepen our understanding and enhance methodologies for testing the significance of regression coefficients.
Additionally, within the context of the Liu estimator, Ullah et al. (2017) [15] focused on testing coefficients specific to this regression framework. Expanding on this work, Perez-Melo et al. (2022) [16] conducted a comparative analysis of Ridge, Liu, and Kibria Lukman estimators, enhancing our understanding of regression coefficient testing across different estimation methods. These studies underscore the significance of exploring diverse regression techniques and their implications for hypothesis testing in regression analysis.
This study aims to thoroughly contrast various t-test statistics for testing regression coefficients across multiple two-parameter estimation methods. We evaluate the performance of these tests within several frameworks, for example, the Yang and Chang (2010) [17] estimator, the Modified Ridge Type (MRT) estimator, and other two-parameter estimators, and compare them to ordinary least squares (OLS). Employing Monte Carlo simulation, we examine the empirical type I error of each method and then estimate its power properties, guided by the procedures established by Halawa and Bassiouni (2000) [10] and Gokpinar and Ebegil (2016) [12]. We consider the different two-parameter estimators side by side to determine which performs best. Previous work has not comprehensively compared all two-parameter methods against OLS, so our objective is to examine most of them and recommend those that hold the type I error rate while demonstrating gains in power. This research also seeks to enhance the understanding of regression coefficient testing methods and to offer practitioners useful guidance for model selection and interpretation in regression analysis.
This paper is structured as follows: Section 2 outlines the statistical methodology, detailing various estimators of the parameters $k$ and $d$. Section 3 presents the simulation methods and explains the results of the simulation study. Section 4 provides an applied example to demonstrate the performance of the best-performing selected methods compared to OLS. Finally, Section 5 offers a summary and concluding remarks.

2. Statistical Framework

In this section, we will explain the structure of the linear regression model and explore the estimators used in this context.

2.1. Framework of Model and Several Established Estimators

We consider the following model:
$Y = X\beta + \epsilon, \quad \epsilon \sim N(0, \sigma^2 I_n),$
where $Y$ is an $n \times 1$ vector representing the response variable, $X$ is an $n \times p$ regressor matrix, assumed to have full rank, $\beta$ is a $p \times 1$ vector of regression coefficients, and $\epsilon$ is an $n \times 1$ vector of normally distributed residuals satisfying $E(\epsilon) = 0$ and $Var(\epsilon) = \sigma^2 I_n$, with $I_n$ being the $n \times n$ identity matrix.
The ordinary least squares (OLS) estimator of $\beta$ in the linear regression model is given by
$\hat{\beta}_{OLS} = (X^T X)^{-1} X^T Y.$
To test whether the $i$th component of $\beta$ equals zero, i.e., $H_0: \beta_i = 0$ vs. $H_1: \beta_i \neq 0$, the test statistic is defined based on the OLS estimator:
$t = \frac{\hat{\beta}_i}{SE(\hat{\beta}_i)},$
where $\hat{\beta}_i$ is the $i$th component of $\hat{\beta}$, and $SE(\hat{\beta}_i)$ is the standard error of $\hat{\beta}_i$, i.e., the square root of the $i$th diagonal entry of the covariance matrix $Var(\hat{\beta}_{OLS}) = \sigma^2 (X^T X)^{-1}$, where
$\hat{\sigma}^2 = \frac{(Y - X\hat{\beta}_{OLS})^T (Y - X\hat{\beta}_{OLS})}{n - p - 1}.$
The test statistic in Equation (3) follows Student's t-distribution with $n - p - 1$ degrees of freedom under the null hypothesis. However, if $X^T X$ becomes ill conditioned when multicollinearity arises, the OLS estimator may yield unstable estimates with excessively high variances. To mitigate this issue, Hoerl and Kennard (1970) [2] established alternative shrinkage regression estimators.
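For concreteness, the OLS fit and the t-test in Equations (2)-(4) can be sketched in a few lines of code; the function name and the simulated data below are illustrative, not part of the paper:

```python
import numpy as np

def ols_t_tests(X, Y):
    """OLS coefficient t-tests as in Equations (2)-(4).

    Degrees of freedom follow the paper's n - p - 1 convention
    (any intercept is assumed to be handled outside X).
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ Y                 # Equation (2)
    resid = Y - X @ beta
    sigma2 = (resid @ resid) / (n - p - 1)   # Equation (4)
    se = np.sqrt(sigma2 * np.diag(XtX_inv))  # SE from diag of sigma2 (X'X)^-1
    return beta, beta / se                   # t_i = beta_i / SE(beta_i)

# Example with a small simulated design
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = X @ np.array([1.0, 0.0, 0.5]) + rng.normal(size=50)
beta_hat, t_stats = ols_t_tests(X, Y)
```

Each $t_i$ is then compared against the Student's t critical value with $n - p - 1$ degrees of freedom.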

2.2. Two-Parameter Estimators

In this section, we will consider various two-parameter estimators that are available in the literature.

2.2.1. Liu Type of Two-Parameter Estimator

To overcome the multicollinearity problem, Liu (2003) [18] proposed a two-parameter estimator,
$\hat{\beta}_{LTE} = (X^T X + kI)^{-1}(X^T Y - d\beta^*),$
where $\beta^*$ is any estimator of $\beta$; if we choose $\beta^* = \hat{\beta}_{OLS}$, then we obtain
$\hat{\beta}_{LTE} = (X^T X + kI)^{-1}(X^T X - dI)\hat{\beta}_{OLS}.$
The expected value and covariance matrix are given, respectively, as follows:
$E(\hat{\beta}_{LTE}) = A_{LTE}\beta$ and
$Var(\hat{\beta}_{LTE}) = \sigma^2_{LTE} A_{LTE}(X^T X)^{-1} A_{LTE}^T,$
where $A_{LTE} = (X^T X + kI)^{-1}(X^T X - dI)$, and $\sigma^2_{LTE}$ is estimated as follows:
$\hat{\sigma}^2_{LTE} = \frac{(Y - X\hat{\beta}_{LTE})^T (Y - X\hat{\beta}_{LTE})}{n - p - 1}.$
From Liu (2003) [18], we will consider the following values of $k$ and $d$:
$\hat{k} = \frac{\lambda_1 - 100\lambda_p}{99} \quad \text{and} \quad \hat{d} = \frac{\sum_{j=1}^{p} (\hat{\sigma}^2 - k\hat{\alpha}_j^2)/(\lambda_j + \hat{k})^2}{\sum_{j=1}^{p} (\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)/(\lambda_j(\lambda_j + k)^2)},$
where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p$ are the eigenvalues of $X^T X$ and $\hat{\alpha}_j$ is the $j$th element of $\hat{\alpha} = Q^T\hat{\beta}_{OLS}$, with $Q$ the matrix of the corresponding normalized eigenvectors.

2.2.2. Ozkale and Kaciranlar Two-Parameter Estimator

Ozkale and Kaciranlar (2007) [19] consider the following estimator,
$\hat{\beta}_{TP} = A_{TP}\hat{\beta}_{OLS},$
where $A_{TP} = (X^T X + kI)^{-1}(X^T X + kdI)$.
We have the expected value and covariance matrix for $\hat{\beta}_{TP}$ as follows:
$E(\hat{\beta}_{TP}) = A_{TP}\beta \quad \text{and} \quad Var(\hat{\beta}_{TP}) = \sigma^2_{TP} A_{TP}(X^T X)^{-1} A_{TP}^T,$
where $\sigma^2_{TP}$ is estimated as follows:
$\hat{\sigma}^2_{TP} = \frac{(Y - X\hat{\beta}_{TP})^T (Y - X\hat{\beta}_{TP})}{n - p - 1}.$
Following Ozkale and Kaciranlar (2007) [19], we have the optimal $d$ and $k$ as follows:
$\hat{d}_{opt} = \frac{\sum_{j=1}^{p} (k\hat{\alpha}_j^2 - \hat{\sigma}^2)/(\lambda_j + k)^2}{\sum_{j=1}^{p} k(\hat{\sigma}^2 + \hat{\alpha}_j^2\lambda_j)/(\lambda_j(\lambda_j + k)^2)},$
and
$\hat{k}_j = \frac{\hat{\sigma}^2}{\hat{\alpha}_j^2 - d(\hat{\sigma}^2/\lambda_j + \hat{\alpha}_j^2)}.$
Hoerl et al. (1975) [20] used the harmonic mean of the $k$ values identified by Hoerl and Kennard (1970) [2], and Kibria (2003) [1] proposed the arithmetic mean of the same $k$ values. Therefore, both the arithmetic and harmonic means of the $k_j$ values can be used for estimating the shrinkage parameter $k$:
$\hat{k}_{HM} = \frac{p\hat{\sigma}^2}{\sum_{j=1}^{p} [\hat{\alpha}_j^2 - d(\hat{\sigma}^2/\lambda_j + \hat{\alpha}_j^2)]} \quad \text{and} \quad \hat{k}_{AM} = \frac{1}{p}\sum_{j=1}^{p} \frac{\hat{\sigma}^2}{\hat{\alpha}_j^2 - d(\hat{\sigma}^2/\lambda_j + \hat{\alpha}_j^2)}.$
We can see that $\hat{d}_{opt}$ depends on $k$ and $\hat{k}$ depends on $d$, so we select the parameters $d$ and $k$ by applying the following iterative method.
Step 1. First, we calculate $\hat{d} = \min_j\left[\hat{\alpha}_j^2/(\hat{\sigma}^2/\lambda_j + \hat{\alpha}_j^2)\right]$.
Step 2. Then, we obtain $\hat{k}_{AM}$ or $\hat{k}_{HM}$ by using $\hat{d}$ from Step 1.
Step 3. We estimate $\hat{d}_{opt}$ from the estimator $\hat{k}_{AM}$ or $\hat{k}_{HM}$ from Step 2.
Step 4. If $\hat{d}_{opt}$ is negative, we use $\hat{d}_{opt} = \hat{d}$, since $\hat{d}$ always lies between zero and one, i.e., $0 < \hat{d} < 1$.
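The iterative procedure above can be sketched as follows, assuming the canonical-form quantities $\lambda_j$, $\hat{\alpha}_j$, and $\hat{\sigma}^2$ and using the harmonic-mean $\hat{k}_{HM}$; the function name and toy data are illustrative only:

```python
import numpy as np

def tp_select_k_d(X, Y):
    """Iterative (k, d) selection for the Ozkale-Kaciranlar estimator
    (Steps 1-4 above), using the harmonic-mean k. lambda_j are the
    eigenvalues of X'X and alpha_j the OLS coefficients rotated into
    the eigenvector (canonical) coordinates."""
    n, p = X.shape
    lam, Q = np.linalg.eigh(X.T @ X)
    beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
    alpha = Q.T @ beta_ols
    resid = Y - X @ beta_ols
    s2 = (resid @ resid) / (n - p - 1)

    d = np.min(alpha**2 / (s2 / lam + alpha**2))               # Step 1
    k = p * s2 / np.sum(alpha**2 - d * (s2 / lam + alpha**2))  # Step 2: k_HM
    num = np.sum((k * alpha**2 - s2) / (lam + k)**2)           # Step 3: d_opt
    den = np.sum(k * (s2 + lam * alpha**2) / (lam * (lam + k)**2))
    d_opt = num / den
    if d_opt <= 0:          # Step 4: fall back to the initial d
        d_opt = d
    return k, d_opt

# Collinear toy data: four regressors sharing a common factor
rng = np.random.default_rng(1)
f = rng.normal(size=(100, 1))
X = f + 0.1 * rng.normal(size=(100, 4))
Y = X @ np.array([1.0, 1.0, 0.0, 0.0]) + rng.normal(size=100)
k_hat, d_hat = tp_select_k_d(X, Y)
```

The harmonic mean is convenient here because the index attaining the minimum in Step 1 makes the denominator of its own $\hat{k}_j$ vanish, which the summed form handles gracefully.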

2.2.3. New Biased Estimator Based on Ridge

Sakallıoğlu and Kaçıranlar (2008) [21] proposed the following two-parameter estimator:
$\hat{\beta}_{NBE} = (X^T X + I)^{-1}(X^T y + d\hat{\beta}_{Ridge}) = (X^T X + I)^{-1}(X^T X + (d + k)I)(X^T X + kI)^{-1}(X^T X)\hat{\beta}_{OLS}.$
The expected value and covariance matrix are given, respectively, as
$E(\hat{\beta}_{NBE}) = A_{NBE}\beta$ and
$Var(\hat{\beta}_{NBE}) = \sigma^2_{NBE} A_{NBE}(X^T X)^{-1} A_{NBE}^T,$
where $A_{NBE} = (X^T X + I)^{-1}(X^T X + (d + k)I)(X^T X + kI)^{-1}(X^T X)$, and $\sigma^2_{NBE}$ is estimated as
$\hat{\sigma}^2_{NBE} = \frac{(Y - X\hat{\beta}_{NBE})^T (Y - X\hat{\beta}_{NBE})}{n - p - 1}.$
For estimating the unknown parameters $k$ and $d$, we can choose
$\hat{d}_{opt} = \frac{\sum_{j=1}^{p} \lambda_j(\hat{\alpha}_j^2 - \hat{\sigma}^2)/[(\lambda_j + 1)^2(\lambda_j + k)]}{\sum_{j=1}^{p} \lambda_j(\lambda_j\hat{\alpha}_j^2 + \hat{\sigma}^2)/[(\lambda_j + 1)^2(\lambda_j + k)^2]},$
where for fixed $k$ we use $\hat{k}_{HK} = \hat{\sigma}^2/\hat{\alpha}_{\max}^2$ and $\hat{k}_{HKB} = p\hat{\sigma}^2/\sum_{j=1}^{p}\hat{\alpha}_j^2$.

2.2.4. Yang and Chang Two-Parameter Estimator

Yang and Chang (2010) [17] consider the following estimator:
$\hat{\beta}_{YC} = Y_{k,d}\hat{\beta}_{OLS},$
where $Y_{k,d} = (X^T X + I)^{-1}(X^T X + dI)(X^T X + kI)^{-1}(X^T X)$, and $k$ and $d$ are biasing parameters.
The expected value and covariance matrix of $\hat{\beta}_{YC}$ are as follows:
$E(\hat{\beta}_{YC}) = Y_{k,d}\beta, \quad Var(\hat{\beta}_{YC}) = \sigma^2_{YC} Y_{k,d}(X^T X)^{-1} Y_{k,d}^T,$
and $\sigma^2_{YC}$ is estimated as
$\hat{\sigma}^2_{YC} = \frac{(Y - X\hat{\beta}_{YC})^T (Y - X\hat{\beta}_{YC})}{n - p - 1}.$
For $k$, we fix $d$, and based on Yang and Chang (2010) [17], we obtain the optimal $k$ as
$\hat{k}_j = \frac{\hat{\sigma}^2(\lambda_j + d) - (1 - d)\lambda_j\hat{\alpha}_j^2}{(\lambda_j + 1)\hat{\alpha}_j^2}.$
We apply different summary formulas for this parameter, such as the arithmetic mean, harmonic mean, and median.
Then, we obtain
$\hat{d}_{opt} = \frac{\sum_{j=1}^{p} \{[(k + 1)\lambda_j + k]\lambda_j\hat{\alpha}_j^2 - \lambda_j^2\hat{\sigma}^2\}/[(\lambda_j + 1)^2(\lambda_j + k)^2]}{\sum_{j=1}^{p} (\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)\lambda_j/[(\lambda_j + 1)^2(\lambda_j + k)^2]}.$
The estimators of the parameters $k$ and $d$ in $\hat{\beta}_{YC}$ are acquired by using the following iterative method:
Step 1: Obtain an initial estimate using $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2)$.
Step 2: Estimate $\hat{k}$ using $\hat{d}$ from Step 1.
Step 3: Find $\hat{d}_{opt}$ using $\hat{k}$ from Step 2.
Step 4: If $0 < \hat{d}_{opt} < 1$ does not hold, use $\hat{d}_{opt} = \hat{d}$.

2.2.5. Almost Unbiased Two-Parameter Estimator

Wu and Yang (2011) [22] consider the following estimator:
$\hat{\beta}_{AUTP} = [I - k^2(1 - d)^2(X^T X + kI)^{-2}]\hat{\beta}_{OLS}.$
The expected value and covariance matrix of $\hat{\beta}_{AUTP}$ are as follows:
$E(\hat{\beta}_{AUTP}) = A_{AUTP}\beta, \quad Var(\hat{\beta}_{AUTP}) = \sigma^2_{AUTP} A_{AUTP}(X^T X)^{-1} A_{AUTP}^T,$
where $A_{AUTP} = I - k^2(1 - d)^2(X^T X + kI)^{-2}$ and $\sigma^2_{AUTP}$ is estimated as
$\hat{\sigma}^2_{AUTP} = \frac{(Y - X\hat{\beta}_{AUTP})^T (Y - X\hat{\beta}_{AUTP})}{n - p - 1}.$
Now, $d_j = 1 - \frac{(\lambda_j + k)\hat{\sigma}}{k}(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)^{-1/2}$ and $k_j = \frac{\hat{\sigma}\lambda_j}{(1 - d)\sqrt{\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2} - \hat{\sigma}}$.
We apply the arithmetic mean and the harmonic mean for this parameter.
We can select the parameters $k$ and $d$ by using the following approach.
Step 1: We compute $\hat{d}$ using $\hat{d} = 1 - \min_j\left[\hat{\sigma}/\sqrt{\lambda_j\hat{\alpha}_j^2 + \hat{\sigma}^2}\right]$.
Step 2: We obtain $\hat{k}_j$ using $\hat{d}$.
Step 3: We find $\hat{d}_j$ using $\hat{k}_j$ from Step 2.
Step 4: If $\hat{d}_j$ is negative, we use $\hat{d}_j = \hat{d}$, because $\hat{d}_j$ is always less than one but may fall below zero.

2.2.6. Unbiased Two-Parameter Estimator

Wu (2014) [23] considers the following two-parameter estimator:
$\hat{\beta}_{UTP} = A_{UTP}\hat{\beta}_{OLS} + (I - A_{UTP})J,$
where $A_{UTP} = (X^T X + kI)^{-1}(X^T X + kdI)$ and $J \sim N\left(\beta, \frac{\sigma^2}{k(1 - d)}(X^T X + kdI)(X^T X)^{-1}\right)$ for $k > 0$, $0 < d < 1$; $J$ is called the prior information and is a random vector with the specified mean and covariance.
The expected value and covariance matrix of $\hat{\beta}_{UTP}$ are as follows:
$E(\hat{\beta}_{UTP}) = \beta, \quad Var(\hat{\beta}_{UTP}) = \sigma^2_{UTP} A_{UTP}(X^T X)^{-1},$
and $\sigma^2_{UTP}$ is estimated as
$\hat{\sigma}^2_{UTP} = \frac{(Y - X\hat{\beta}_{UTP})^T (Y - X\hat{\beta}_{UTP})}{n - p - 1}.$
Now, for fixed $k$, we need to obtain an estimator of $d$ as follows:
$\hat{d} = 1 - \frac{\hat{\sigma}^2[p + k\,tr((X^T X)^{-1})]}{k(\hat{\beta}_{OLS} - J)^T(\hat{\beta}_{OLS} - J)},$
where $tr((X^T X)^{-1}) = \sum_{j=1}^{p} 1/\lambda_j$.
If $k(\hat{\beta}_{OLS} - J)^T(\hat{\beta}_{OLS} - J) - \hat{\sigma}^2[p + k\,tr((X^T X)^{-1})] > 0$, then
$\hat{d}^* = 1 - \frac{\hat{\sigma}^2[p + k\,tr((X^T X)^{-1})]}{k(\hat{\beta}_{OLS} - J)^T(\hat{\beta}_{OLS} - J)};$
otherwise, $\hat{d}^* = 1$.
The parameter $k$ is defined as
$\hat{k} = \frac{p\hat{\sigma}^2}{(1 - d)(\hat{\beta}_{OLS} - J)^T(\hat{\beta}_{OLS} - J) - \hat{\sigma}^2\,tr((X^T X)^{-1})}.$
If $(1 - d)(\hat{\beta}_{OLS} - J)^T(\hat{\beta}_{OLS} - J) - \hat{\sigma}^2\,tr((X^T X)^{-1}) > 0$, then
$\hat{k}^* = \frac{p\hat{\sigma}^2}{(1 - d)(\hat{\beta}_{OLS} - J)^T(\hat{\beta}_{OLS} - J) - \hat{\sigma}^2\,tr((X^T X)^{-1})};$
otherwise,
$\hat{k}^* = \frac{p\hat{\sigma}^2}{(1 - d)(\hat{\beta}_{OLS} - J)^T(\hat{\beta}_{OLS} - J)}.$
The selection of the parameters $k$ and $d$ can be obtained by using the following approach.
Step 1: Estimate $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2)$.
Step 2: Generate the random vector $J$.
Step 3: Compute $\hat{k}$ using $J$ and $\hat{d}$.
Step 4: Compute $\hat{d}^*$ using $\hat{k}$ from Step 3.

2.2.7. Dorugade Modified Two-Parameter Estimator

Dorugade (2014) [24] considers the following modified estimator:
$\hat{\beta}_{MTP} = A_{MTP}\hat{\beta}_{OLS},$
where $A_{MTP} = [I + k(1 - d)(X^T X + kdI)^{-1}][I - kd(X^T X + kdI)^{-1}]$.
The expected value and covariance matrix of $\hat{\beta}_{MTP}$ are as follows:
$E(\hat{\beta}_{MTP}) = A_{MTP}\beta, \quad Var(\hat{\beta}_{MTP}) = \sigma^2_{MTP} A_{MTP}(X^T X)^{-1} A_{MTP}^T,$
and $\sigma^2_{MTP}$ is estimated as
$\hat{\sigma}^2_{MTP} = \frac{(Y - X\hat{\beta}_{MTP})^T (Y - X\hat{\beta}_{MTP})}{n - p - 1}.$
For unknown values of $k$ and $d$, some well-known choices are $\hat{k}_1 = p\hat{\sigma}^2/\sum_{j=1}^{p}\hat{\alpha}_j^2$ and $\hat{k}_2 = \mathrm{median}_j(\hat{\sigma}^2/\hat{\alpha}_j^2)$.
The optimal $d$ can be considered as follows:
$\hat{d}_{opt} = \sum_{j=1}^{p} \frac{(\lambda_j + k)(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)}{\lambda_j k\hat{\alpha}_j^2}.$

2.2.8. Modified Almost Unbiased Liu Estimator

Arumairajan and Wijekoon (2017) [25] consider the following two-parameter estimator:
$\hat{\beta}_{MAULE} = A_{MAULE}\hat{\beta}_{OLS},$
where $A_{MAULE} = [I - (1 - d)^2(X^T X + I)^{-2}](X^T X + kI)^{-1}(X^T X)$.
The expected value and covariance matrix of $\hat{\beta}_{MAULE}$ are as follows:
$E(\hat{\beta}_{MAULE}) = A_{MAULE}\beta, \quad Var(\hat{\beta}_{MAULE}) = \sigma^2_{MAULE} A_{MAULE}(X^T X)^{-1} A_{MAULE}^T,$
and $\sigma^2_{MAULE}$ is estimated as
$\hat{\sigma}^2_{MAULE} = \frac{(Y - X\hat{\beta}_{MAULE})^T (Y - X\hat{\beta}_{MAULE})}{n - p - 1}.$
Now, $\hat{d}_{opt} = 1 - \frac{\sum_{j=1}^{p} \lambda_j(\hat{\sigma}^2 - k\hat{\alpha}_j^2)/[(\lambda_j + k)^2(\lambda_j + 1)^2]}{\sum_{j=1}^{p} \lambda_j(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)/[(\lambda_j + 1)^4(\lambda_j + k)^2]}$, and $\hat{k}_j = \frac{\hat{\sigma}^2(\lambda_j + 1)^2\lambda_j}{(1 - d)^2\lambda_j(\hat{\sigma}^2 + \hat{\alpha}_j^2) - \hat{\alpha}_j^2(\lambda_j + 1)^2}$.
For the optimal value, we use the arithmetic mean of the $\hat{k}_j$.

2.2.9. Modified Almost Unbiased Two-Parameter Estimator

Lukman et al. (2019) [26] consider the following estimator:
$\hat{\beta}_{MAUTP} = A_{MAUTP}\hat{\beta}_{OLS},$
where $A_{MAUTP} = [I - k^2(1 - d)^2(X^T X + kI)^{-2}](X^T X + kI)^{-1}(X^T X)$.
The expected value and covariance matrix of $\hat{\beta}_{MAUTP}$ are as follows:
$E(\hat{\beta}_{MAUTP}) = A_{MAUTP}\beta, \quad Var(\hat{\beta}_{MAUTP}) = \sigma^2_{MAUTP} A_{MAUTP}(X^T X)^{-1} A_{MAUTP}^T,$
and $\sigma^2_{MAUTP}$ is estimated as
$\hat{\sigma}^2_{MAUTP} = \frac{(Y - X\hat{\beta}_{MAUTP})^T (Y - X\hat{\beta}_{MAUTP})}{n - p - 1}.$
For the estimation of $k$ and $d$, following Hoerl and Kennard (1970) [2], $\hat{k}_j = \hat{\sigma}^2/\hat{\alpha}_j^2$, the harmonic version of the proposed $\hat{k}$ is $\hat{k}_{HMP} = p\hat{\sigma}^2/\sum_{j=1}^{p}\hat{\alpha}_j^2$, and $\hat{d}_{opt} = \min_j\left[\frac{\hat{\alpha}_j^2}{\hat{\sigma}^2/\hat{\alpha}_j^2 + \hat{\alpha}_j^2}\right]$.

2.2.10. Modified New Two-Parameter Estimator

Lukman et al. (2019) [27] consider the following estimator:
$\hat{\beta}_{MNTP} = (X^T X + kI)^{-1}[(X^T X + kdI)\hat{\beta}_{OLS} + k(1 - d)b],$
where $b$ is the prior information on $\beta$; the estimator tends to $b$ as $k$ approaches infinity.
Note that $k(1 - d)I = (X^T X + kI) - (X^T X + kdI)$, and let $A_{MNTP} = (X^T X + kI)^{-1}(X^T X + kdI)$.
The expected value and covariance matrix of $\hat{\beta}_{MNTP}$ are as follows:
$E(\hat{\beta}_{MNTP}) = A_{MNTP}\beta + (I - A_{MNTP})b, \quad Var(\hat{\beta}_{MNTP}) = \sigma^2_{MNTP} A_{MNTP}(X^T X)^{-1} A_{MNTP}^T,$
and $\sigma^2_{MNTP}$ is estimated as
$\hat{\sigma}^2_{MNTP} = \frac{(Y - X\hat{\beta}_{MNTP})^T (Y - X\hat{\beta}_{MNTP})}{n - p - 1}.$
For $k$, we fix $d$, based on Lukman et al. (2019) [27], and we obtain the optimal $k$ as
$\hat{k}_j = \frac{\hat{\sigma}^2\lambda_j}{\lambda_j(\hat{\alpha}_j - b_j)^2 - \hat{d}[\lambda_j(\hat{\alpha}_j - b_j)^2 + \hat{\sigma}^2]},$
and the harmonic mean is
$\hat{k}_{HMP} = \frac{p}{\sum_{j=1}^{p} 1/\hat{k}_j}.$
Then, we obtain
$\hat{d}_{opt} = \frac{\sum_{j=1}^{p} [k\lambda_j(\hat{\alpha}_j - b_j)^2 - \hat{\sigma}^2\lambda_j]}{\sum_{j=1}^{p} [\hat{\sigma}^2\hat{k} + k\lambda_j(\hat{\alpha}_j - b_j)^2]}.$
The selection of the parameters $k$ and $d$ in $\hat{\beta}_{MNTP}$ is obtained using the following method:
Step 1: First, we obtain an initial estimate of $d$ using $\hat{d} = \min_j\left[\frac{\lambda_j(\hat{\alpha}_j - b_j)^2}{\lambda_j(\hat{\alpha}_j - b_j)^2 + \hat{\sigma}^2}\right]$.
Step 2: We obtain $\hat{k}$ using $\hat{d}$.
Step 3: We estimate $\hat{d}_{opt}$ using $\hat{k}$.
Step 4: If $\hat{d}_{opt}$ is negative, we use $\hat{d}_{opt} = \hat{d}$; note that $\hat{d}$ always takes a value between 0 and 1.

2.2.11. Modified Ridge Type

Lukman et al. (2019) [28] consider the following two-parameter estimator:
$\hat{\beta}_{MRT} = M_{k,d}\hat{\beta}_{OLS},$
where $M_{k,d} = (X^T X + k(1 + d)I)^{-1}(X^T X)$, with $k$ and $d$ as biasing parameters.
The expected value and covariance matrix of $\hat{\beta}_{MRT}$ are as follows:
$E(\hat{\beta}_{MRT}) = M_{k,d}\beta, \quad Var(\hat{\beta}_{MRT}) = \sigma^2_{MRT} M_{k,d}(X^T X)^{-1} M_{k,d}^T,$
and $\sigma^2_{MRT}$ is estimated as
$\hat{\sigma}^2_{MRT} = \frac{(Y - X\hat{\beta}_{MRT})^T (Y - X\hat{\beta}_{MRT})}{n - p - 1}.$
As the shrinkage parameters $k$ and $d$ are both unknown, it is necessary to estimate them from the observed data. This section provides the formulas for the various shrinkage parameter estimators.
Lukman et al. (2019) [28] proposed an estimator as follows:
$\hat{k}_j = \frac{\hat{\sigma}^2}{(1 + d)\hat{\alpha}_j^2}.$
We can obtain the harmonic mean of the $\hat{k}_j$ as
$\hat{k}_{HMP} = \frac{p\hat{\sigma}^2}{\sum_{j=1}^{p} (1 + d)\hat{\alpha}_j^2},$
and
$\hat{d}_j = \frac{\hat{\sigma}^2}{k\hat{\alpha}_j^2} - 1.$
Also, the harmonic mean of the $\hat{d}_j$ is
$\hat{d}_{MRT} = \frac{p}{\sum_{j=1}^{p} 1/\hat{d}_j}.$
The selection of the parameters $k$ and $d$ in $\hat{\beta}_{MRT}$ is obtained by using the following method:
Step 1: We obtain an initial estimate of $d$ using $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2)$.
Step 2: We obtain $\hat{k}_{HMP}$ using $\hat{d}$ from Step 1.
Step 3: We estimate $\hat{d}_{MRT}$ using $\hat{k}_{HMP}$ from Step 2.
Step 4: If $\hat{d}_{MRT}$ is negative, we use $\hat{d}_{MRT} = \hat{d}$.
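A minimal sketch of the MRT Steps 1-4, assuming the canonical quantities (eigenvalues of $X^TX$, rotated OLS coefficients, and $\hat{\sigma}^2$) have already been computed; the numerical inputs below are hypothetical:

```python
import numpy as np

def mrt_select_k_d(lam, alpha, s2, p):
    """Steps 1-4 for the Modified Ridge Type (MRT) biasing parameters.
    lam: eigenvalues of X'X; alpha: canonical OLS coefficients;
    s2: residual variance estimate; p: number of regressors."""
    d = np.min(s2 / alpha**2)                  # Step 1: initial d
    k = p * s2 / np.sum((1 + d) * alpha**2)    # Step 2: harmonic-mean k
    d_j = s2 / (k * alpha**2) - 1              # Step 3: d_j given k
    d_mrt = p / np.sum(1 / d_j)                # harmonic mean of the d_j
    if d_mrt < 0:                              # Step 4: fall back to initial d
        d_mrt = d
    return k, d_mrt

# Illustrative canonical quantities (hypothetical values)
lam = np.array([5.0, 1.0, 0.2])
alpha = np.array([1.0, 0.5, 0.3])
k_hat, d_hat = mrt_select_k_d(lam, alpha, s2=0.5, p=3)
```

With these inputs the harmonic mean of the $\hat{d}_j$ is negative, so Step 4 returns the initial $\hat{d}$.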

2.2.12. A New Biased Estimator by Dawoud and Kibria

Dawoud and Kibria (2020) [5] consider the following two-parameter estimator:
$\hat{\beta}_{DK} = A_{DK}\hat{\beta}_{OLS},$
where $A_{DK} = (X^T X + k(1 + d)I)^{-1}(X^T X - k(1 + d)I)$.
The expected value and covariance matrix of $\hat{\beta}_{DK}$ are as follows:
$E(\hat{\beta}_{DK}) = A_{DK}\beta, \quad Var(\hat{\beta}_{DK}) = \sigma^2_{DK} A_{DK}(X^T X)^{-1} A_{DK}^T,$
and $\sigma^2_{DK}$ is estimated as
$\hat{\sigma}^2_{DK} = \frac{(Y - X\hat{\beta}_{DK})^T (Y - X\hat{\beta}_{DK})}{n - p - 1}.$
For $k$, we fix $d$, and we obtain the optimal $k$ as
$\hat{k}_j = \frac{\hat{\sigma}^2}{(1 + d)(\hat{\sigma}^2/\lambda_j + 2\hat{\alpha}_j^2)},$
where $k_{min} = \min_j(\hat{k}_j)$.
Then, for the optimal $d$, we obtain
$\hat{d}_j = \frac{\hat{\sigma}^2\lambda_j}{m} - 1,$
where $m = k(\hat{\sigma}^2 + 2\lambda_j\hat{\alpha}_j^2)$.
In addition, $d_{min} = \min_j(\hat{d}_j)$.
The selection of the parameters $k$ and $d$ in $\hat{\beta}_{DK}$ is obtained by using the following method:
Step 1: Obtain an initial estimate of $d$ using $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2)$.
Step 2: Obtain $\hat{k}$ using $\hat{d}$.
Step 3: Estimate $\hat{d}_{min}$ using $\hat{k}$.
Step 4: If $\hat{d}_{min}$ is not between 0 and 1, use $\hat{d}_{min} = \hat{d}$.

2.2.13. Generalized Two-Parameter Estimator

Zeinal (2020) [29] proposed the following two-parameter estimator:
$\hat{\beta}_{GTP} = (X^T X + kI)^{-1}(X^T X + kD)\hat{\beta}_{OLS},$
where $D = \mathrm{diag}(d_1, d_2, \ldots, d_p)$.
The expected value and covariance matrix of $\hat{\beta}_{GTP}$ are as follows:
$E(\hat{\beta}_{GTP}) = A_{GTP}\beta, \quad Var(\hat{\beta}_{GTP}) = \sigma^2_{GTP} A_{GTP}(X^T X)^{-1} A_{GTP}^T,$
where $A_{GTP} = (X^T X + kI)^{-1}(X^T X + kD)$, and $\sigma^2_{GTP}$ is estimated as
$\hat{\sigma}^2_{GTP} = \frac{(Y - X\hat{\beta}_{GTP})^T (Y - X\hat{\beta}_{GTP})}{n - p - 1}.$
For the unknown parameters, we obtain the optimal $d_j$ for fixed $k$ as
$\hat{d}_j = \frac{(k\hat{\alpha}_j^2 - \hat{\sigma}^2)\lambda_j}{k(\hat{\sigma}^2 + \hat{\alpha}_j^2\lambda_j)}.$
Then, we obtain
$\hat{k}_j = \frac{\hat{\sigma}^2}{\hat{\alpha}_j^2 - d_j(\hat{\sigma}^2/\lambda_j + \hat{\alpha}_j^2)}$
and take the arithmetic mean of the above $k_j$.
The selection of the parameters $k$ and $d$ in $\hat{\beta}_{GTP}$ is obtained by using the following method:
Step 1: We take initial estimates of the $d_j$ using $\hat{d}_j = \hat{\sigma}^2/\hat{\alpha}_j^2$.
Step 2: We obtain $\hat{k}$ using the $\hat{d}_j$.
Step 3: We estimate the $\hat{d}_{j,opt}$ using $\hat{k}$.
Step 4: If $\hat{d}_{j,opt}$ is negative, we use $\hat{d}_{j,opt} = \hat{d}_j$.

2.2.14. Siray Two-Parameter Estimator

Şiray et al. (2021) [30] consider the following two-parameter estimator:
$\hat{\beta}_{DTP} = A_{DTP}\hat{\beta}_{OLS},$
where $A_{DTP} = (X^T X + kI)^{-1}[X^T X + kd(X^T X + I)^{-1}(X^T X + dI)]$.
The expected value and covariance matrix of $\hat{\beta}_{DTP}$ are as follows:
$E(\hat{\beta}_{DTP}) = A_{DTP}\beta, \quad Var(\hat{\beta}_{DTP}) = \sigma^2_{DTP} A_{DTP}(X^T X)^{-1} A_{DTP}^T,$
and $\sigma^2_{DTP}$ is estimated as
$\hat{\sigma}^2_{DTP} = \frac{(Y - X\hat{\beta}_{DTP})^T (Y - X\hat{\beta}_{DTP})}{n - p - 1}.$
Now, $\hat{k}_j = \frac{\hat{\sigma}^2 d\lambda_j(\lambda_j + 1)}{(d - 1)\lambda_j^2\hat{\alpha}_j^2 - \hat{\sigma}^2(\lambda_j + d)}$, so we can use the harmonic mean and median of the $\hat{k}_j$,
and $\hat{d}_j = \frac{\hat{\sigma}^2 k\lambda_j + k\lambda_j^2\hat{\alpha}_j^2}{k\lambda_j^2\hat{\alpha}_j^2 - \hat{\sigma}^2\lambda_j^2 - \hat{\sigma}^2\lambda_j - \hat{\sigma}^2 k}$, so we can also use the harmonic mean and median of the $\hat{d}_j$.
The estimation procedure for the biasing parameters is obtained by using the following method.
Step 1: Take an initial estimate of $d$ from $\hat{d} = \max_j\left[\frac{\lambda_j(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)}{\lambda_j^2\hat{\alpha}_j^2 - \hat{\sigma}^2}\right]$.
Step 2: Obtain $\hat{k}_j$ using $\hat{d}$.
Step 3: Estimate $\hat{d}_j$ using $\hat{k}_j$.
Step 4: If $\hat{d}_j$ is negative, use $\hat{d}_j = \hat{d}$.

2.2.15. Unbiased Modified Two-Parameter Estimator

Proposed by Abidoye, Ajayi, Adewale, and Ogunjobi (2022) [31], this estimator is defined as
$\hat{\beta}_{UMTP} = R_{k,d}\hat{\beta}_{OLS} + (I - R_{k,d})J,$
where $R_{k,d} = (X^T X + kdI)^{-1}(X^T X)$ and $J \sim N(\beta, \sigma^2(kdI)^{-1})$ for $k > 0$, $0 < d < 1$, with $J$ being uncorrelated with $\hat{\beta}_{OLS}$.
The expected value and covariance matrix of $\hat{\beta}_{UMTP}$ are as follows:
$E(\hat{\beta}_{UMTP}) = \beta, \quad Var(\hat{\beta}_{UMTP}) = \sigma^2_{UMTP}(X^T X + kdI)^{-1},$
and $\sigma^2_{UMTP}$ is estimated as
$\hat{\sigma}^2_{UMTP} = \frac{(Y - X\hat{\beta}_{UMTP})^T (Y - X\hat{\beta}_{UMTP})}{n - p - 1}.$
In this study, we choose $\hat{k} = p\hat{\sigma}^2/\sum_{j=1}^{p}\hat{\alpha}_j^2$ and $\hat{d} = \sum_{j=1}^{p} \frac{(\lambda_j + k)(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)}{\lambda_j k\hat{\alpha}_j^2}$.

2.2.16. Ahmad and Aslam's Modified New Two-Parameter Estimator

Ahmad and Aslam (2022) [32] consider the following two-parameter estimator:
$\hat{\beta}_{MNTPE} = C_0\hat{\beta}_{OLS},$
where $C_0 = (X^T X + I)^{-1}(X^T X + dI)(X^T X + kdI)^{-1}(X^T X)$.
The expected value and covariance matrix of $\hat{\beta}_{MNTPE}$ are as follows:
$E(\hat{\beta}_{MNTPE}) = C_0\beta, \quad Var(\hat{\beta}_{MNTPE}) = \sigma^2_{MNTPE} C_0(X^T X)^{-1} C_0^T,$
and $\sigma^2_{MNTPE}$ is estimated as
$\hat{\sigma}^2_{MNTPE} = \frac{(Y - X\hat{\beta}_{MNTPE})^T (Y - X\hat{\beta}_{MNTPE})}{n - p - 1}.$
Now, $\hat{k}_j = \frac{\hat{\sigma}^2(\lambda_j + d) - (1 - d)\lambda_j\hat{\alpha}_j^2}{d(\lambda_j + 1)\hat{\alpha}_j^2}.$
We use the harmonic mean of the $\hat{k}_j$ values, and
$\hat{d}_{opt} = \frac{\sum_{j=1}^{p} \lambda_j(\hat{\alpha}_j^2 - \hat{\sigma}^2)}{\sum_{j=1}^{p} (\lambda_j\hat{\alpha}_j^2 + \hat{\sigma}^2 - k\lambda_j\hat{\alpha}_j^2)}.$
The selection of the parameters $k$ and $d$ in $\hat{\beta}_{MNTPE}$ is obtained by using the following method:
Step 1: We obtain an initial estimate of $d$ using $\hat{d} = \max_j\left[\frac{\hat{\alpha}_j^2 - \hat{\sigma}^2/\lambda_j}{\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2}\right]$.
Step 2: We obtain $\hat{k}_{opt}$ using $\hat{d}$.
Step 3: We estimate $\hat{d}_{opt}$ using $\hat{k}_{opt}$.
Step 4: If $\hat{d}_{opt}$ is negative, we use $\hat{d}_{opt} = \hat{d}$.

2.2.17. Modified Liu Ridge Type

Aslam and Ahmad (2022) [33] consider the following two-parameter estimator:
$\hat{\beta}_{MLRT} = A_{MLRT}\hat{\beta}_{OLS},$
where $A_{MLRT} = (X^T X + I)^{-1}(X^T X + dI)(X^T X + k(1 + d)I)^{-1}(X^T X)$.
The expected value and covariance matrix of $\hat{\beta}_{MLRT}$ are as follows:
$E(\hat{\beta}_{MLRT}) = A_{MLRT}\beta, \quad Var(\hat{\beta}_{MLRT}) = \sigma^2_{MLRT} A_{MLRT}(X^T X)^{-1} A_{MLRT}^T,$
and $\sigma^2_{MLRT}$ is estimated as
$\hat{\sigma}^2_{MLRT} = \frac{(Y - X\hat{\beta}_{MLRT})^T (Y - X\hat{\beta}_{MLRT})}{n - p - 1}.$
Now, $\hat{k}_j = \frac{\hat{\sigma}^2(\lambda_j + d) - \lambda_j(1 - d)\hat{\alpha}_j^2}{(1 + d)(\lambda_j + 1)\hat{\alpha}_j^2}$, and we can use the maximum of the $\hat{k}_j$ values, with
$\hat{d}_{opt} = \frac{\sum_{j=1}^{p} [(\lambda_j + k)(\lambda_j + 1)(\lambda_j^2 - k\lambda_j^2 - k\lambda_j)\hat{\alpha}_j^2 - \hat{\sigma}^2\lambda_j(\lambda_j^2 - k\lambda_j^2 - k\lambda_j)]/(\lambda_j + 1)^2}{\sum_{j=1}^{p} [\hat{\sigma}^2(\lambda_j^2 - k\lambda_j^2 - k\lambda_j) + (\lambda_j - k)(\lambda_j + 1)(\lambda_j^2 - k\lambda_j^2 - k\lambda_j)\hat{\alpha}_j^2]/(\lambda_j + 1)^2}.$
The selection of the parameters $k$ and $d$ is obtained by using the following method:
Step 1: We obtain an initial estimate of $d$ using $\hat{d} = \max_j\left[\frac{\hat{\alpha}_j^2 - \hat{\sigma}^2/\lambda_j}{\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2}\right]$.
Step 2: We obtain $\hat{k}$ using $\hat{d}$.
Step 3: We estimate $\hat{d}_{opt}$ using $\hat{k}$.
Step 4: If $\hat{d}_{opt}$ is negative, we use $\hat{d}_{opt} = \hat{d}$.

2.2.18. New Biased Regression Two-Parameter Estimator

Proposed by Dawoud, Lukman, and Haadi (2022) [34], the estimator is defined as
$\hat{\beta}_{NBR} = RWM\hat{\beta}_{OLS},$
where $R = (X^T X + kI)^{-1}(X^T X + kdI)$ and $WM$ is the Kibria–Lukman (KL) component, with $W = (X^T X + kI)^{-1}$ and $M = (X^T X - kI)$.
The expected value and covariance matrix of $\hat{\beta}_{NBR}$ are as follows:
$E(\hat{\beta}_{NBR}) = RWM\beta, \quad Var(\hat{\beta}_{NBR}) = \sigma^2_{NBR} RWM(X^T X)^{-1} M^T W^T R^T,$
and $\sigma^2_{NBR}$ is estimated as
$\hat{\sigma}^2_{NBR} = \frac{(Y - X\hat{\beta}_{NBR})^T (Y - X\hat{\beta}_{NBR})}{n - p - 1}.$
Here, $\hat{k}_j = \frac{-[\lambda_j^2\hat{\alpha}_j^2(3 - d) + \hat{\sigma}^2\lambda_j(1 - d)]}{2[\hat{\sigma}^2 d + \lambda_j\hat{\alpha}_j^2(1 + d)]} + \frac{\lambda_j\sqrt{\lambda_j^2\hat{\alpha}_j^4(d - 3)^2 + 2\lambda_j\hat{\sigma}^2\hat{\alpha}_j^2(5 - 2d + d^2) + \hat{\sigma}^4(1 + d)^2}}{2[\hat{\sigma}^2 d + \lambda_j\hat{\alpha}_j^2(1 + d)]}$, and we take the minimum of the $\hat{k}_j$ values.
With $\hat{d}_j = \frac{\lambda_j^2\hat{\sigma}^2 - 3\hat{\alpha}_j^2 k\lambda_j - k(\hat{\sigma}^2 + \hat{\alpha}_j^2 k)}{k(k - \lambda_j)(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)}$, we take the minimum of the $\hat{d}_j$ values.
The selection of the parameters $k$ and $d$ is carried out as follows:
Step 1: Obtain an initial estimate of $d$ using $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2)$.
Step 2: Obtain $\hat{k}_{opt}$ using $\hat{d}$.
Step 3: Estimate $\hat{d}_{opt}$ using $\hat{k}_{opt}$.
Step 4: If $\hat{d}_{opt}$ is negative, use $\hat{d}_{opt} = \hat{d}$.

2.2.19. Biased Two-Parameter Estimator

Proposed by Idowu, Oladapo, Owolabi, and Ayinde (2022) [35], this estimator is defined as follows:
$\hat{\beta}_{BTP} = EH\hat{\beta}_{OLS},$
where $E = (X^T X + I)^{-1}(X^T X - dI)$ and $H = (X^T X + k(1 + d)I)^{-1}(X^T X - k(1 + d)I)$.
The expected value and covariance matrix of $\hat{\beta}_{BTP}$ are as follows:
$E(\hat{\beta}_{BTP}) = EH\beta, \quad Var(\hat{\beta}_{BTP}) = \sigma^2_{BTP} EH(X^T X)^{-1} H^T E^T,$
and $\sigma^2_{BTP}$ is estimated as
$\hat{\sigma}^2_{BTP} = \frac{(Y - X\hat{\beta}_{BTP})^T (Y - X\hat{\beta}_{BTP})}{n - p - 1}.$
Now, $\hat{k}_j = \frac{\hat{\sigma}^2\lambda_j[d\lambda_j + \hat{\alpha}_j^2\lambda_j^2(d + 1)] - \hat{\sigma}^2(1 + d)d\lambda_j}{\hat{\alpha}_j^2\lambda_j(1 + d)(2\lambda_j d + 1)}$,
and the corresponding optimal $\hat{d}$ is obtained by solving the MSE-minimization condition; its closed-form expression, which is lengthy, is given in Idowu et al. (2022) [35].

2.2.20. New Two-Parameter Estimator

Owolabi, Ayinde, Idowu, Oladapo, and Lukman (2022) [36] consider the following new two-parameter estimator:
$\hat{\beta}_{NTP} = A_{NTP}\hat{\beta}_{OLS},$
where $A_{NTP} = (X^T X + kdI)^{-1}(X^T X - kdI)$.
The expected value and covariance matrix of $\hat{\beta}_{NTP}$ are as follows:
$E(\hat{\beta}_{NTP}) = A_{NTP}\beta, \quad Var(\hat{\beta}_{NTP}) = \sigma^2_{NTP} A_{NTP}(X^T X)^{-1} A_{NTP}^T,$
and $\sigma^2_{NTP}$ is estimated as
$\hat{\sigma}^2_{NTP} = \frac{(Y - X\hat{\beta}_{NTP})^T (Y - X\hat{\beta}_{NTP})}{n - p - 1}.$
Now, $\hat{k}_j = \frac{\hat{\sigma}^2}{d(2\hat{\alpha}_j^2 + \hat{\sigma}^2/\lambda_j)}$ with harmonic mean $\hat{k}_{HM} = \frac{p\hat{\sigma}^2}{d\sum_{j=1}^{p}(2\hat{\alpha}_j^2 + \hat{\sigma}^2/\lambda_j)}$, while $\hat{d}_j = \frac{\hat{\sigma}^2}{k(2\hat{\alpha}_j^2 + \hat{\sigma}^2/\lambda_j)}$.
The selection of the parameters $k$ and $d$ is obtained by using the following method:
Step 1: We obtain an initial estimate of $d$ using $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2)$.
Step 2: We obtain $\hat{k}_{opt}$ using $\hat{d}$ from Step 1.
Step 3: We estimate $\hat{d}_{opt}$ using $\hat{k}_{opt}$ from Step 2.
Step 4: If $\hat{d}_{opt}$ is negative, we use $\hat{d}_{opt} = \hat{d}$.

2.2.21. New Ridge-Type Estimator

The two-parameter estimator proposed by Owolabi, Ayinde, and Alabi (2022) [37] is defined as
$\hat{\beta}_{NRT} = A_{NRT}\hat{\beta}_{OLS},$
where $A_{NRT} = (X^T X + (k + d)I)^{-1}(X^T X)$.
The expected value and covariance matrix of $\hat{\beta}_{NRT}$ are as follows:
$E(\hat{\beta}_{NRT}) = A_{NRT}\beta, \quad Var(\hat{\beta}_{NRT}) = \sigma^2_{NRT} A_{NRT}(X^T X)^{-1} A_{NRT}^T,$
and $\sigma^2_{NRT}$ is estimated as
$\hat{\sigma}^2_{NRT} = \frac{(Y - X\hat{\beta}_{NRT})^T (Y - X\hat{\beta}_{NRT})}{n - p - 1}.$
Now, $\hat{k} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2) - d$ and $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2) - k$.
The selection of the parameters $k$ and $d$ is carried out as follows:
Step 1: We obtain an initial estimate of $d$ using $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2)$.
Step 2: We obtain $\hat{k}_{opt}$ using $\hat{d}$ from Step 1.
Step 3: We estimate $\hat{d}_{opt}$ using $\hat{k}_{opt}$ from Step 2.
Step 4: If $\hat{d}_{opt}$ is negative, we use $\hat{d}_{opt} = \hat{d}$.

2.2.22. Modified Two-Parameter Estimator

Proposed by Owolabi, Ayinde, and Alabi (2022) [38], this estimator is defined as
$\hat{\beta}_{MTPE} = (X^T X + (k + d)I)^{-1}(X^T X\hat{\beta}_{OLS} + (k + d)b).$
The expected value and covariance matrix of $\hat{\beta}_{MTPE}$ are as follows:
$E(\hat{\beta}_{MTPE}) = (X^T X + (k + d)I)^{-1}(X^T X\beta + (k + d)b), \quad Var(\hat{\beta}_{MTPE}) = \sigma^2_{MTPE} R_k(X^T X)^{-1} R_k^T,$
where $R_k = (X^T X + (k + d)I)^{-1}(X^T X)$ and $\sigma^2_{MTPE}$ is estimated as
$\hat{\sigma}^2_{MTPE} = \frac{(Y - X\hat{\beta}_{MTPE})^T (Y - X\hat{\beta}_{MTPE})}{n - p - 1}.$
Here, $\hat{k}_j = \hat{\sigma}^2/\hat{\alpha}_j^2 - d$, and we use the arithmetic mean of the $k_j$, while $\hat{d} = \min_j(\hat{\sigma}^2/\hat{\alpha}_j^2) - k$.
In cases where $\hat{d}$ is not between 0 and 1, we must use $\hat{d} = 0$.

2.2.23. Modified Two-Parameter Liu Estimator by Abonazel

Abonazel (2023) [39] considers the following estimator, originally used for the Conway–Maxwell Poisson regression model and later applied by Abdelwahab et al. (2024) [40] to the Poisson regression model; we extend it here to the Gaussian linear regression model:
$\hat{\beta}_{MTPL} = A_{MTPL}\hat{\beta}_{OLS},$
where $A_{MTPL} = (X^T X + I)^{-1}[X^T X - (k + d)I]$.
The expected value and covariance matrix of $\hat{\beta}_{MTPL}$ are as follows:
$E(\hat{\beta}_{MTPL}) = A_{MTPL}\beta, \quad Var(\hat{\beta}_{MTPL}) = \sigma^2_{MTPL} A_{MTPL}(X^T X)^{-1} A_{MTPL}^T,$
and $\sigma^2_{MTPL}$ is estimated as
$\hat{\sigma}^2_{MTPL} = \frac{(Y - X\hat{\beta}_{MTPL})^T (Y - X\hat{\beta}_{MTPL})}{n - p - 1}.$
Now, $\hat{k}_j = \frac{\lambda_j(\hat{\sigma}^2 - \hat{\alpha}_j^2)}{\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2} - d$ and $\hat{d}_j = \frac{\lambda_j(\hat{\sigma}^2 - \hat{\alpha}_j^2) - k(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)}{\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2}$.

2.2.24. Liu–Kibria–Lukman Two-Parameter Estimator

Idowu et al. (2023) [41] proposed the following estimator:
$\hat{\beta}_{LKL} = CA\hat{\beta}_{OLS},$
where $C = (X^T X + I)^{-1}(X^T X + dI)$ and $A = (X^T X + kI)^{-1}(X^T X - kI)$.
The expected value and covariance matrix of $\hat{\beta}_{LKL}$ are as follows:
$E(\hat{\beta}_{LKL}) = CA\beta, \quad Var(\hat{\beta}_{LKL}) = \sigma^2_{LKL} CA(X^T X)^{-1} A^T C^T,$
and $\sigma^2_{LKL}$ is estimated as
$\hat{\sigma}^2_{LKL} = \frac{(Y - X\hat{\beta}_{LKL})^T (Y - X\hat{\beta}_{LKL})}{n - p - 1}.$
Now, $\hat{d} = \frac{\lambda_j\hat{\alpha}_j^2\hat{\sigma}^2 + \lambda_j k(2\hat{\alpha}_j^2\lambda_j + \hat{\alpha}_j^2\hat{\sigma}^2)}{\hat{\sigma}^2\lambda_j k + \hat{\alpha}_j^2\lambda_j(\lambda_j - k)}.$
The parameter $k$, as proposed by Kibria and Lukman (2020) [42], is given as follows:
$\hat{k} = \min_j\left[\frac{\hat{\sigma}^2}{2\hat{\alpha}_j^2 + \hat{\sigma}^2/\lambda_j}\right].$

2.2.25. Two-Parameter Ridge Estimator

Proposed by Shakir Khan, Ali, Suhail et al. (2024) [43], this estimator is defined as
β ^ T P R = q X T X + k I 1 X T y ,
where q = X T y T ( X T X + k I ) 1 X T y X T y T ( X T X + k I ) 1 X T X ( X T X + k I ) 1 X T y .
The expected value and covariance matrix of $\hat{\beta}_{TPR}$ are as follows:
$$E(\hat{\beta}_{TPR}) = A_{TPR}\beta, \qquad Var(\hat{\beta}_{TPR}) = \sigma^2_{TPR}\,A_{TPR}(X^TX)^{-1}A_{TPR}^T,$$
where $A_{TPR} = q(X^TX + kI)^{-1}X^TX$, and $\sigma^2_{TPR}$ is estimated as
$$\hat{\sigma}^2_{TPR} = \frac{(Y - X\hat{\beta}_{TPR})^T(Y - X\hat{\beta}_{TPR})}{n - p - 1}.$$
Now, $\hat{k}_1 = \frac{\hat{\sigma}^2}{\hat{\alpha}_{max}^2}\sum_{j=1}^{p}\lambda_j\hat{\alpha}_j^2$ and $\hat{k}_2 = 1/\hat{k}_1$.
These values of $\hat{k}$ are then used to compute
$$\hat{q} = \frac{\sum_{j=1}^{p}\hat{\alpha}_j^2\lambda_j/(\lambda_j + k)}{\sum_{j=1}^{p}(\hat{\sigma}^2\lambda_j + \hat{\alpha}_j^2\lambda_j^2)/(\lambda_j + k)^2},$$
and $\hat{q}$ in turn is used to compute the optimum value
$$\hat{k}_{opt} = \frac{q\sum_{j=1}^{p}\hat{\sigma}^2\lambda_j + (q - 1)\sum_{j=1}^{p}\hat{\alpha}_j^2\lambda_j^2}{\sum_{j=1}^{p}\hat{\alpha}_j^2\lambda_j^2}.$$
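Given $k$, the TPR estimator is a direct transcription of the formulas above (a sketch: the data-driven selection through $\hat{k}_1$, $\hat{k}_2$, and $\hat{k}_{opt}$ is omitted here, and $k$ is passed in):

```python
import numpy as np

def tpr_estimator(X, y, k):
    """Two-parameter ridge estimator beta_TPR = q (X'X + kI)^{-1} X'y,
    with q the ratio of quadratic forms given in the text."""
    p = X.shape[1]
    XtX = X.T @ X
    Xty = X.T @ y
    M = np.linalg.inv(XtX + k * np.eye(p))      # (X'X + kI)^{-1}
    q = (Xty @ M @ Xty) / (Xty @ M @ XtX @ M @ Xty)
    return q * (M @ Xty)
```

Note that for $k = 0$ the denominator of $q$ collapses to its numerator, giving $q = 1$ and the OLS solution.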
To facilitate understanding, Table 1 summarizes the name, author, and biasing parameters of every two-parameter estimation method.
To test whether the $i$th component of $\beta$ is equal to zero, we use the approach of Halawa and Bassiouni (2000) [10]. The t-test statistic for this test is defined as
$$t^* = \frac{\hat{\beta}_i^*}{SE(\hat{\beta}_i^*)},$$
where $\hat{\beta}_i^*$ represents the $i$th component of any of the estimators above, such as $\hat{\beta}_{LTE}, \hat{\beta}_{TP}, \hat{\beta}_{NBR}, \hat{\beta}_{MRT}, \ldots$. The term $SE(\hat{\beta}_i^*)$ is the standard error of $\hat{\beta}_i^*$, computed as the square root of the $i$th diagonal element of the covariance matrix $Var(\hat{\beta}^*)$. Under the null hypothesis, the test statistic in Equation (55) follows an approximate Student's t-distribution with $n - p - 1$ degrees of freedom.
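A sketch of the resulting test for the ordinary ridge estimator, used here only as a stand-in: each two-parameter estimator substitutes its own $\hat{\beta}^*$ and covariance matrix, and the $n - p - 1$ degrees of freedom follow the text.

```python
import numpy as np
from scipy import stats

def ridge_t_test(X, y, k):
    """Halawa-Bassiouni style t-tests for the ridge estimator
    beta_k = (X'X + kI)^{-1} X'y; returns t statistics and p-values."""
    n, p = X.shape
    XtX = X.T @ X
    W = np.linalg.inv(XtX + k * np.eye(p))
    beta_k = W @ X.T @ y
    resid = y - X @ beta_k
    sigma2 = resid @ resid / (n - p - 1)
    cov = sigma2 * W @ XtX @ W            # Var(beta_k) = sigma^2 W X'X W
    tstat = beta_k / np.sqrt(np.diag(cov))
    pvals = 2 * stats.t.sf(np.abs(tstat), df=n - p - 1)
    return tstat, pvals
```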

3. A Monte Carlo Simulation Study

We conduct a Monte Carlo study to compare the performance of the test statistics in this section. First, we present the empirical type I error rates of the tests in Section 3.1. Then, we discuss the empirical powers of the tests in Section 3.2.

3.1. Type I Error-Rated Simulation Procedure

3.1.1. Simulation Methodology

In the simulation, the matrix of explanatory variables is generated as $X = H\Lambda^{0.5}G^T$, where $H$ is an $n \times p$ matrix with orthonormal columns, $\Lambda$ is the diagonal matrix containing the eigenvalues of the correlation matrix, and $G$ is the matrix of normalized eigenvectors of the correlation matrix. This systematic generation of explanatory variables allows for a comprehensive evaluation of type I errors in regression analysis.
In this study, we generate $n$ observations of the response variable according to
$$Y_i = \beta_1X_{i1} + \beta_2X_{i2} + \cdots + \beta_pX_{ip} + \epsilon_i,$$
where the $\epsilon_i$ are independent normal$(0, \sigma^2)$ errors. Performance is assessed through the type I error rates across the various biasing parameters. The comparison considers sample sizes of $n = 30, 50$, and 100 and numbers of explanatory variables $p = 3, 5$, and 10. Several correlation levels, $\rho = 0.80, 0.90$, and 0.99, are chosen, along with an assumed error standard deviation of $\sigma = 1$. The experiment is replicated 5000 times using R software [44].
Following Halawa and Bassiouni (2000) [10], the most and least favorable orientations of $\beta$ are determined from the normalized eigenvectors corresponding to the largest and smallest eigenvalues of $X^TX$ in correlation form. The most favorable (MF) orientation is $\beta = \frac{1}{\sqrt{p}}1_p$, where $1_p$ represents a vector of ones, and the least favorable (LF) orientation is any normalized vector orthogonal to $1_p$. In the MF orientation, all components of $\beta$ are equal, whereas in the LF orientation, the components sum to zero.
Studies have shown that the LF orientation yields tests that maintain the nominal type I error level regardless of the estimator used. Conversely, under the MF orientation, some tests exhibit higher type I error rates than expected. Thus, for practical purposes, the MF orientation helps to identify and discard tests that are too liberal in rejecting the null hypothesis. Consequently, our simulations are conducted based on the MF orientation of β .
To begin, we compare the type I error rates for the individual components of the orientation vector. For this purpose, the $j$th component of $\beta = \frac{1}{\sqrt{p}}1_p$ is set to zero ($\beta_j = 0$, $j = 1, 2, \ldots, p$), so that the corresponding null hypothesis is true. The test statistics are then computed from the fitted models, and the type I error rates are estimated as the proportion of replications in which the absolute value of the test statistic exceeds the critical value of the t-distribution with $n - p - 1$ d.f. This procedure evaluates how often each test incorrectly rejects a true null hypothesis, thereby providing the empirical size of the tests. The simulated type I error rates for different sample sizes and regressors are presented in Table 2, Table 3 and Table 4 for $\rho = 0.80, 0.90$, and 0.99, respectively.
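The generation scheme and size computation can be sketched as follows for the OLS-based test. This is illustrative only: the paper's study was run in R with 5000 replications and covers all of the estimators, and the compound-symmetric correlation matrix below is our own assumption about the correlation structure.

```python
import numpy as np
from scipy import stats

def simulate_X(n, p, rho, rng):
    """X = H Lambda^{0.5} G' for a compound-symmetric correlation matrix,
    so that X'X equals the target correlation matrix exactly."""
    R = np.full((p, p), rho) + (1 - rho) * np.eye(p)
    lam, G = np.linalg.eigh(R)
    H, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal columns
    return H @ np.diag(np.sqrt(lam)) @ G.T

def type1_error_ols(n=30, p=3, rho=0.9, reps=1000, rng=None):
    """Empirical size of the OLS t-test for H0: beta_1 = 0 under the MF
    orientation beta = p^{-1/2} 1_p with its first component set to zero."""
    if rng is None:
        rng = np.random.default_rng(0)
    beta = np.full(p, 1 / np.sqrt(p))
    beta[0] = 0.0                                      # H0 is true for beta_1
    crit = stats.t.ppf(0.975, df=n - p - 1)
    rejections = 0
    for _ in range(reps):
        X = simulate_X(n, p, rho, rng)
        y = X @ beta + rng.standard_normal(n)
        b = np.linalg.solve(X.T @ X, X.T @ y)
        resid = y - X @ b
        s2 = resid @ resid / (n - p - 1)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
        rejections += abs(b[0]) / se > crit
    return rejections / reps
```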

3.1.2. Interpretation of Simulation Results for Type I Error

Assuming a nominal type I error rate of 5% and using a simulation of 5000 iterations, we anticipate that the observed type I error will typically lie within the interval $0.05 \pm 2\sqrt{\frac{0.05 \times 0.95}{5000}}$, approximately (4.4%, 5.6%). To maintain consistency, tests with an average observed type I error exceeding 0.06 were excluded from the comparison.
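The acceptance band quoted above comes directly from the binomial standard error of a 5% proportion estimated over 5000 replications:

```python
import math

# 0.05 +/- 2 * sqrt(0.05 * 0.95 / 5000): a two-standard-error band for an
# empirical rejection rate estimated from 5000 Bernoulli(0.05) replications.
margin = 2 * math.sqrt(0.05 * 0.95 / 5000)
lo, hi = 0.05 - margin, 0.05 + margin
print(round(lo, 4), round(hi, 4))   # 0.0438 0.0562, i.e. roughly (4.4%, 5.6%)
```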
From Table 2, Table 3 and Table 4, we find that some estimators produce type I error rates above the 6% cutoff in the MF orientation, rendering them unsuitable for recommendation. We therefore discard those methods from further consideration in the power simulation.

3.2. Statistical Power Simulation Procedure

3.2.1. Monte Carlo Approach for Statistical Power

In this section, we conduct a comparative analysis of various test statistics with a focus on their power. Building upon the methodology outlined by Gokpinar and Ebegil (2016) [12], our objective is to evaluate the efficacy of these tests by computing their empirical power. By assessing each test’s ability to correctly reject false null hypotheses, we gain insights into their relative performance and robustness in detecting true effects. This analysis provides valuable information for researchers and practitioners in choosing the most suitable test statistic for their regression models based on power considerations.
Based on the previous analysis of type I error, we have discarded the tests that significantly exceeded the nominal size of 5%. The remaining test statistics are now compared in terms of power. To calculate power, we modify the $j$th component of the $\beta$ vector by replacing it with $\beta_j = Lw_0\sigma$, where $L$ is a positive integer and $w_0^2 = \frac{1 + (p - 2)\rho}{(1 - \rho)\left[1 + (p - 1)\rho\right]}$. We choose $L$ such that, for each combination of correlation level and number of predictors, the maximum power achieved by the most powerful test is 100%. For this comparison, we select $L = 4$. This procedure allows us to evaluate the relative power of the remaining tests under various conditions, providing insights into their ability to detect true effects in the data.
Using 5000 simulation iterations, we estimate the power of these tests by calculating the proportion of times that the absolute value of the test statistic is more than the critical value t 0.025 , ( n p 1 ) . We explore different combinations of sample sizes, with n = 30 ,   50 , and 100, and different numbers of regressors, with p = 3 ,   5 , and 10. These computations are conducted across correlation levels of 0.80, 0.90, and 0.99. By systematically varying these parameters, we obtain a comprehensive understanding of the power of each test under different conditions, allowing us to assess their relative effectiveness in detecting true effects in the data. The simulated power of the tests for different sample sizes and regressors is presented in Table 5, Table 6 and Table 7 for ρ = 0.80 ,   0.90 , and 0.99, respectively.
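For intuition about the magnitude of the alternative being tested, $w_0$ and the resulting shifted coefficient can be computed directly. The formula follows the text; the specific $p$ and $\rho$ values below are only examples.

```python
import math

def w0(p, rho):
    """w_0 with w_0^2 = [1 + (p - 2) rho] / {(1 - rho)[1 + (p - 1) rho]},
    as used when replacing beta_j by L * w0 * sigma in the power study."""
    return math.sqrt((1 + (p - 2) * rho) / ((1 - rho) * (1 + (p - 1) * rho)))

# Example: the alternative for p = 3, rho = 0.9, sigma = 1 and L = 4.
shift = 4 * w0(3, 0.9) * 1.0
```

Since $w_0 \to \infty$ as $\rho \to 1$, the tested alternative automatically grows with the correlation level, which is what allows a common $L$ to drive the most powerful test to 100% power at every setting.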

3.2.2. Interpretation of Simulation Results for Power

Based on the results in Table 5, Table 6 and Table 7, we observe that, as the sample size increases with the other conditions held constant, the power of the tests generally increases, as expected. Additionally, most of the tests exhibit greater power than the t-test across correlation levels of 0.80, 0.90, and 0.99. For a given sample size, a smaller number of regressors yields higher power than a larger number of regressors. Among the methods, some two-parameter estimators such as LTE, NBE2, MRT, and LKL show larger power gains over OLS than the others; YC3 and DK also show better power. These findings underscore the effectiveness of the related estimators in enhancing the power of hypothesis tests in regression analysis, particularly in the presence of multicollinearity.
Figure 1, Figure 2 and Figure 3 show the average gain in power for two-parameter estimators over the OLS test for α = 0.05 with different correlation levels, sample sizes, and numbers of regressors.
As we have only considered σ 2 = 1 , we examine two additional values, σ 2 = 5 and 10, which are given in Table 8, Table 9 and Table 10.
For these additional levels of variance, the results show that power decreases as the variance increases, while the two-parameter tests still achieve higher power than the OLS estimator.
We also consider sample sizes of 200 and 300 to compare the power of the tests with that obtained at the smaller sample sizes; the results are given in Table 11, Table 12 and Table 13. For power, we follow the same procedure as Gokpinar and Ebegil (2016) [12] and choose $L = 5$.
As the sample size increases to 200 and 300, the power increases accordingly; for a fixed sample size, the power again decreases as the number of regressors increases.
We also introduce additional simulation scenarios in which the errors are drawn from alternative distributions: the t-distribution with low degrees of freedom in Table 14 and the exponential distribution with rate 1 in Table 15.

4. Application to Real-Life Data

To illustrate the simulation results and findings of this paper, we analyze in this section a pollution dataset originally published by McDonald and Schwing (1973) [45]. These data model the total age-adjusted mortality rate for the years 1959–1961 for Standard Metropolitan Statistical Areas (SMSAs), with the underlying measurements sourced from Duffy and Carroll (1967) [46], which covers 201 SMSAs. The dataset used here has 60 observations and 15 independent variables measuring demographic, socioeconomic, and environmental factors.
We consider the following model:
Y = β 0 + β 1 X 1 + β 2 X 2 + + β 15 X 15 + ϵ
where $Y$ = total age-adjusted mortality rate, $X_1$ = PREC (mean annual precipitation), $X_2$ = JANT (mean January temperature), $X_3$ = JULT (mean July temperature), $X_4$ = OVR65 (percent of population which is 65 years of age or over), $X_5$ = POPN (population per household), $X_6$ = EDUC (median school years), $X_7$ = HOUS (percent of housing units), $X_8$ = DENS (population per square mile), $X_9$ = NONW (percent of population which is non-white), $X_{10}$ = WWDRK (percent employment in white-collar occupations), $X_{11}$ = POOR (percent of families with low income), $X_{12}$ = HC (relative pollution potential of hydrocarbons), $X_{13}$ = NOX (relative pollution potential of oxides of nitrogen), $X_{14}$ = SOx (relative pollution potential of sulfur dioxide), and $X_{15}$ = HUMID (percent relative humidity).
The independent variables in this dataset exhibit high levels of correlation, i.e., multicollinearity. To quantify multicollinearity, we can use the variance inflation factor (VIF), calculated as
$$VIF_i = \frac{1}{1 - R_i^2}, \quad i = 1, \ldots, p,$$
where $R_i^2$ is the multiple correlation coefficient obtained from regressing the $i$th explanatory variable on the remaining explanatory variables. The VIF values are given in Table 16.
Based on the VIF values, it can be seen that variables such as HC (98.64) and NOX (104.98) exhibit a high level of multicollinearity; therefore, we can state that there is a dependency among explanatory variables.
To further detect multicollinearity, we obtain the condition number, calculated as $CN = \left(\frac{\text{largest eigenvalue}}{\text{smallest eigenvalue}}\right)^{1/2} = 35{,}406.49$. This value far exceeds the commonly used cutoffs, confirming that severe multicollinearity exists among the variables.
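Both diagnostics are easy to reproduce for any design matrix. A sketch: the VIFs use the standard identity that they equal the diagonal of the inverse correlation matrix; note that the paper's CN of 35,406.49 is computed from the eigenvalues of the raw $X^TX$, whereas this sketch uses the correlation form.

```python
import numpy as np

def vif_and_condition_number(X):
    """VIF_i = 1/(1 - R_i^2) via the inverse-correlation-matrix identity,
    and CN = sqrt(lam_max / lam_min) of the correlation matrix."""
    R = np.corrcoef(X, rowvar=False)       # correlation matrix of the columns
    vifs = np.diag(np.linalg.inv(R))
    lam = np.linalg.eigvalsh(R)
    return vifs, np.sqrt(lam.max() / lam.min())
```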
To evaluate the significance of the regression coefficients, we test the null hypothesis $H_0: \beta_i = 0$ for each coefficient. We calculate the p-values as $p = 2 \times P(t_{n-k-1} > |t^*|)$. If $p < \alpha$ (typically $\alpha = 0.05$), we reject the null hypothesis, indicating that the regression coefficient is statistically significant. In Table 17, we present the parameter estimates, standard errors, and p-values for each regression coefficient.
Based on the previous simulation results for type I errors and test power, certain two-parameter estimators, such as LTE, NBE2, MRT, and LKL, demonstrate higher power than others compared to OLS. Additionally, YC3 and DK also show improved power. Therefore, we want to evaluate the performance of these estimators with the real-life data affected by multicollinearity.
The corresponding parameter estimates, standard errors, and p-values are presented in Table 17. From the table, we can see that variables PREC, JANT, JULT, HOUS, POOR, and HUMID are not significant at the alpha level of 0.05 for the OLS estimator. However, under the YC3 estimator, all of these variables are statistically significant. Also, for the MRT estimator, the variables JANT, JULT, and HUMID are significant, and for the LKL estimator, the variables JANT, JULT, HOUS, POOR, and HUMID are significant.
Furthermore, the highest VIF variables, HC and NOX, are not significant under any estimation method. But some other high VIF variables, such as JANT and POOR, are significant under specific estimators. Also, the NONW variable is statistically significant across all of them, indicating its consistent impact. Therefore, we observe that all of the estimators perform better than the OLS estimator, but among them YC3 (Yang and Chang estimator) performs better than the other estimators on these data.
Our second example uses the "body data" from the textbook Biostatistics for the Biological and Health Sciences, 2nd edition, by Triola et al. (2006) [47]. These data are also available at "www.triolastats.com (accessed on 24 February 2015)". The dataset consists of body and exam measurements for 300 subjects. The outcome variable for our model is HDL cholesterol (mg/dL). The explanatory variables are $x_1$: weight (kg), $x_2$: height (cm), $x_3$: waist circumference (cm), $x_4$: arm circumference (cm), and $x_5$: BMI (body mass index). The VIF values for the body dataset are given in Table 18.
There is evidence of multicollinearity in the data, as several of the variance inflation factors (VIFs) are greater than 10 (see Ozkale and Kacıranlar (2007) [19]), which is in practice considered the threshold for multicollinearity. The condition number of 143.6735 also indicates severe multicollinearity.
From Table 19, we can see that variable x3 is significant and variable x4 has a p-value of around 0.06 for most models, including the OLS estimator. Using the YC3 estimator, both x3 and x4 variables are highly non-significant; the MRT estimator shows non-significant results for the x4 variable. So, YC3 and MRT estimators provide better results for multicollinear independent variables.

5. Concluding Remarks

In this paper, we examined various test statistics derived from two-parameter estimators to address multicollinearity issues when testing regression coefficients within a linear regression model. We conducted a simulation study under several conditions and compared these test statistics empirically, evaluating their performance in terms of empirical size and power. Our findings show that several two-parameter estimators consistently outperformed other tests, demonstrating their effectiveness in managing multicollinearity and producing more reliable results. Specifically, the LTE, NBE2, YC3, MRT, and LKL estimators exhibited higher power than the others. These results provide valuable insights for selecting appropriate test statistics for regression coefficient testing in multicollinearity settings, contributing to the refinement of regression analysis methodology and more accurate inference for practitioners. Finally, we analyzed a pollution dataset and body data to illustrate the findings of this paper, which supported the simulation results to some extent. We can suggest that some estimators like YC3 and MRT work better than other estimators and OLS when there is multicollinearity. They can help to determine which variables are significant and perform better than OLS in the presence of multicollinearity.
In the future, we can use these estimators to test parameters for other types of regression models, like logistic and Poisson regression, and in more complex settings like mixed models. Another area of research is to investigate hypothesis testing for survival regression models like Weibull, exponential, and Cox Proportional Hazards models in the presence of multicollinearity.

Author Contributions

Conceptualization, M.A.H. and B.M.G.K.; methodology, M.A.H. and B.M.G.K.; software, M.A.H.; validation, M.A.H., Z.B. and B.M.G.K.; formal analysis, M.A.H. and B.M.G.K.; investigation, M.A.H., Z.B. and B.M.G.K.; resources, M.A.H., Z.B. and B.M.G.K.; data curation, M.A.H.; writing—original draft preparation, M.A.H.; writing—review and editing, M.A.H., Z.B. and B.M.G.K.; visualization, B.M.G.K. and Z.B.; project administration, Z.B. and B.M.G.K.; funding acquisition, N/A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kibria, B.M.G. Performance of some new ridge regression estimators. Commun. Stat. Simul. Comput. 2003, 32, 419–435. [Google Scholar] [CrossRef]
  2. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
  3. Ehsanes Saleh, A.M.; Kibria, B.M.G. Performance of some new preliminary test ridge regression estimators and their properties. Commun. Stat. Theory Methods 1993, 22, 2747–2764. [Google Scholar] [CrossRef]
  4. Alheety, M.I.; Nayem, H.M.; Kibria, B.M.G. An Unbiased Convex Estimator Depending on Prior Information for the Classical Linear Regression Model. Stats 2025, 8, 16. [Google Scholar] [CrossRef]
  5. Dawoud, I.; Kibria, B.M.G. A new biased estimator to combat the multicollinearity of the Gaussian linear regression model. Stats 2020, 3, 526–541. [Google Scholar] [CrossRef]
  6. Hoque, M.A.; Kibria, B.M.G. Some one and two parameter estimators for the multicollinear gaussian linear regression model: Simulations and applications. Surv. Math. Its Appl. 2023, 18, 183–221. [Google Scholar]
  7. Hoque, M.A.; Kibria, B.M.G. Performance of some estimators for the multicollinear logistic regression model: Theory, simulation, and applications. Res. Stat. 2024, 2, 2364747. [Google Scholar] [CrossRef]
  8. Nayem, H.M.; Aziz, S.; Kibria, B.M.G. Comparison among Ordinary Least Squares, Ridge, Lasso, and Elastic Net Estimators in the Presence of Outliers: Simulation and Application. Int. J. Stat. Sci. 2024, 24, 25–48. [Google Scholar] [CrossRef]
  9. Yasmin, N.; Kibria, B.M.G. Performance of Some Improved Estimators and their Robust Versions in Presence of Multicollinearity and Outliers. Sankhya B 2025, 2025, 1–47. [Google Scholar] [CrossRef]
  10. Halawa, A.M.; El Bassiouni, M.Y. Tests of regression coefficients under ridge regression models. J. Stat. Comput. Simul. 2000, 65, 341–356. [Google Scholar] [CrossRef]
  11. Cule, E.; Vineis, P.; De Iorio, M. Significance testing in ridge regression for genetic data. BMC Bioinform. 2011, 12, 372. [Google Scholar] [CrossRef]
  12. Gökpınar, E.; Ebegil, M. A study on tests of hypothesis based on ridge estimator. Gazi Univ. J. Sci. 2016, 29, 769–781. [Google Scholar]
  13. Kibria, B.M.G.; Banik, S. A simulation study on the size and power Properties of some ridge regression Tests. Appl. Appl. Math. Int. J. (AAM) 2019, 14, 7. [Google Scholar]
  14. Perez-Melo, S.; Kibria, B.M.G. On some test statistics for testing the regression coefficients in presence of multicollinearity: A simulation study. Stats 2020, 3, 40–55. [Google Scholar] [CrossRef]
  15. Ullah, M.I.; Aslam, M.; Altaf, S. lmridge: A Comprehensive R Package for Ridge Regression. R J. 2018, 10, 326. [Google Scholar] [CrossRef]
  16. Perez-Melo, S.; Bursac, Z.; Kibria, B.M.G. Comparison of Test Statistics for Testing the Regression Coefficients in the OLS, Ridge, Liu and Kibria-Lukman Linear Regression Model: A Simulation Study. In JSM Proceedings, Biometrics Section; American Statistical Association: Alexandria, VA, USA, 2022; pp. 59–80. [Google Scholar]
  17. Yang, H.; Chang, X. A new two-parameter estimator in linear regression. Commun. Stat. Theory Methods 2010, 39, 923–934. [Google Scholar] [CrossRef]
  18. Liu, K. Using Liu-type estimator to combat collinearity. Commun. Stat. Theory Methods 2003, 32, 1009–1020. [Google Scholar] [CrossRef]
  19. Özkale, M.R.; Kaciranlar, S. The restricted and unrestricted two-parameter estimators. Commun. Stat. Theory Methods 2007, 36, 2707–2725. [Google Scholar] [CrossRef]
  20. Hoerl, A.E.; Kennard, R.W.; Baldwin, K.F. Ridge regression: Some simulations. Commun. Stat. Theory Methods 1975, 4, 105–123. [Google Scholar] [CrossRef]
  21. Sakallıoğlu, S.; Kaçıranlar, S. A new biased estimator based on ridge estimation. Stat. Pap. 2008, 49, 669–689. [Google Scholar] [CrossRef]
  22. Wu, J.; Yang, H. Efficiency of an almost unbiased two-parameter estimator in linear regression model. Statistics 2013, 47, 535–545. [Google Scholar] [CrossRef]
  23. Wu, J. An Unbiased Two-Parameter Estimation with Prior Information in Linear Regression Model. Sci. World J. 2014, 1, 206943. [Google Scholar] [CrossRef] [PubMed]
  24. Dorugade, A.V. A modified two-parameter estimator in linear regression. Stat. Transit. New Ser. 2014, 15, 23–36. [Google Scholar] [CrossRef]
  25. Arumairajan, S.; Wijekoon, P. Modified almost unbiased Liu estimator in linear regression model. Commun. Math. Stat. 2017, 5, 261–276. [Google Scholar] [CrossRef]
  26. Lukman, A.F.; Adewuyi, E.; Oladejo, N.; Olukayode, A. Modified almost unbiased two-parameter estimator in linear regression model. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2019; Volume 640, p. 012119. [Google Scholar]
  27. Lukman, A.F.; Ayinde, K.; Siok Kun, S.; Adewuyi, E.T. A modified new two-parameter estimator in a linear regression model. Model. Simul. Eng. 2019, 2019, 6342702. [Google Scholar] [CrossRef]
  28. Lukman, A.F.; Ayinde, K.; Binuomote, S.; Clement, O.A. Modified ridge-type estimator to combat multicollinearity: Application to chemical data. J. Chemom. 2019, 33, e3125. [Google Scholar] [CrossRef]
  29. Zeinal, A. Generalized two-parameter estimator in linear regression model. J. Math. Model. 2020, 8, 157–176. [Google Scholar] [CrossRef]
  30. Üstündağ Şiray, G.; Toker, S.; Özbay, N. Defining a two-parameter estimator: A mathematical programming evidence. J. Stat. Comput. Simul. 2021, 91, 2133–2152. [Google Scholar] [CrossRef]
  31. Abidoye, A.O.; Ajayi, I.M.; Adewale, F.L.; Ogunjobi, J.O. Unbiased Modified Two-Parameter Estimator for the Linear Regression Model. J. Sci. Res. 2022, 14, 785–795. [Google Scholar] [CrossRef]
  32. Ahmad, S.; Aslam, M. Another proposal about the new two-parameter estimator for linear regression model with correlated regressors. Commun. Stat. Simul. Comput. 2022, 51, 3054–3072. [Google Scholar] [CrossRef]
  33. Aslam, M.; Ahmad, S. The modified Liu-ridge-type estimator: A new class of biased estimators to address multicollinearity. Commun. Stat. Simul. Comput. 2022, 51, 6591–6609. [Google Scholar] [CrossRef]
  34. Dawoud, I.; Lukman, A.F.; Haadi, A.R. A new biased regression estimator: Theory, simulation and application. Sci. Afr. 2022, 15, e01100. [Google Scholar] [CrossRef]
  35. Idowu, J.I.; Oladapo, O.J.; Owolabi, A.T.; Ayinde, K. On the biased Two-Parameter Estimator to Combat Multicollinearity in Linear Regression Model. Afr. Sci. Rep. 2022, 1, 188–204. [Google Scholar] [CrossRef]
  36. Owolabi, A.T.; Ayinde, K.; Idowu, J.I.; Oladapo, O.J.; Lukman, A.F. A new two-parameter estimator in the linear regression model with correlated regressors. J. Stat. Appl. Probab. 2022, 11, 185–201. [Google Scholar]
  37. Owolabi, A.T.; Ayinde, K.; Alabi, O.O. A new ridge-type estimator for the linear regression model with correlated regressors. Concurr. Comput. Pract. Exp. 2022, 34, e6933. [Google Scholar] [CrossRef]
  38. Owolabi, A.T.; Ayinde, K.; Alabi, O.O. A Modified Two Parameter Estimator with Different Forms of Biasing Parameters in the Linear Regression Model. Afr. Sci. Rep. 2022, 1, 212–228. [Google Scholar] [CrossRef]
  39. Abonazel, M.R. New modified two-parameter Liu estimator for the Conway–Maxwell Poisson regression model. J. Stat. Comput. Simul. 2023, 93, 1976–1996. [Google Scholar] [CrossRef]
  40. Abdelwahab, M.M.; Abonazel, M.R.; Hammad, A.T.; El-Masry, A.M. Modified Two-Parameter Liu Estimator for Addressing Multicollinearity in the Poisson Regression Model. Axioms 2024, 13, 46. [Google Scholar] [CrossRef]
  41. Idowu, J.I.; Oladapo, O.J.; Owolabi, A.T.; Ayinde, K.; Akinmoju, O. Combating multicollinearity: A new two-parameter approach. Nicel Bilim. Derg. 2023, 5, 90–116. [Google Scholar] [CrossRef]
  42. Kibria, B.M.G.; Lukman, A.F. A new ridge-type estimator for the linear regression model: Simulations and applications. Scientifica 2020, 2020, 9758378. [Google Scholar] [CrossRef]
  43. Khan, M.S.; Ali, A.; Suhail, M.; Kibria, B.M.G. On some two parameter estimators for the linear regression models with correlated predictors: Simulation and application. Commun. Stat. Simul. Comput. 2024, 2024, 1–15. [Google Scholar] [CrossRef]
  44. R Core Team. _R: A Language and Environment for Statistical Computing_; R Foundation for Statistical Computing: Vienna, Austria, 2024; Available online: https://www.R-project.org/ (accessed on 11 August 2024).
  45. McDonald, G.C.; Schwing, R.C. Instabilities of regression estimates relating air pollution to mortality. Technometrics 1973, 15, 463–481. [Google Scholar] [CrossRef]
  46. Duffy, E.A.; Carroll, R.E. United States Metropolitan Mortality, 1959–1961; PHS Publication No. 1967, 999-AP-39; U.S. Public Health Service, National Center for Air Pollution Control: Philadelphia, PA, USA, 1967.
  47. Triola, M.M.; Triola, M.F.; Roy, J.A. Biostatistics for the Biological and Health Sciences; Pearson Addison-Wesley: Boston, MA, USA, 2006. [Google Scholar]
Figure 1. Average gain in power over the OLS test for α = 0.05 at correlation levels of 0.80, 0.90, and 0.99.
Figure 2. Average gain in power over the OLS test for α = 0.05 with sample sizes of 30, 50, and 100.
Figure 3. Average gain in power over the OLS test for α = 0.05 with 3, 5, and 10 parameters.
Table 1. Summary table for each estimator.
Name | Author | Parameters
1. Liu Type of Two-Parameter Estimator (LTE) | Liu (2003) [18] | $\hat{k}_{opt} = \frac{\lambda_1 - 100\lambda_p}{99}$, $\hat{d}_{opt} = \frac{\sum_{j=1}^{p}(\hat{\sigma}^2 - \hat{k}\hat{\alpha}_j^2)/(\lambda_j + \hat{k})^2}{\sum_{j=1}^{p}(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)/\left[\lambda_j(\lambda_j + \hat{k})^2\right]}$.
2. Ozkale and Kaciranlar Two-Parameter Estimator (TP) | Ozkale and Kaciranlar (2007) [19] | $\hat{d}_{opt} = \frac{\sum_{j=1}^{p}(k\hat{\alpha}_j^2 - \hat{\sigma}^2)/(\lambda_j + k)^2}{\sum_{j=1}^{p}k(\hat{\sigma}^2 + \hat{\alpha}_j^2\lambda_j)/\left[\lambda_j(\lambda_j + k)^2\right]}$, $\hat{k}_j = \frac{\hat{\sigma}^2}{\hat{\alpha}_j^2 - d(\hat{\sigma}^2/\lambda_j + \hat{\alpha}_j^2)}$. Both arithmetic and harmonic means are used.
3. New Biased Estimator Based on Ridge (NBE) | Sakallıoğlu and Kaçıranlar (2008) [21] | $\hat{d}_{opt} = \frac{\sum_{j=1}^{p}\lambda_j(\hat{\alpha}_j^2 - \hat{\sigma}^2)/\left[(\lambda_j + 1)^2(\lambda_j + k)\right]}{\sum_{j=1}^{p}\lambda_j(\lambda_j\hat{\alpha}_j^2 + \hat{\sigma}^2)/\left[(\lambda_j + 1)^2(\lambda_j + k)^2\right]}$, $\hat{k}_{HK} = \frac{\hat{\sigma}^2}{\sum_{j=1}^{p}\hat{\alpha}_j^2}$ and $\hat{k}_{HKB} = \frac{p\hat{\sigma}^2}{\sum_{j=1}^{p}\hat{\alpha}_j^2}$.
4. Yang and Chang Two-Parameter Estimator (YC) | Yang and Chang (2010) [17] | $\hat{k}_j = \frac{\hat{\sigma}^2(\lambda_j + d) - (1 - d)\lambda_j\hat{\alpha}_j^2}{(\lambda_j + 1)\hat{\alpha}_j^2}$, $\hat{d}_{opt} = \frac{\sum_{j=1}^{p}\left\{\left[(k + 1)\lambda_j + k\right]\lambda_j\hat{\alpha}_j^2 - \lambda_j^2\hat{\sigma}^2\right\}/\left[(\lambda_j + 1)^2(\lambda_j + k)^2\right]}{\sum_{j=1}^{p}\lambda_j(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2)/\left[(\lambda_j + 1)^2(\lambda_j + k)^2\right]}$.
5. Almost Unbiased Two-Parameter Estimator (AUTP)Wu and Yang (2011) [22] d ^ j = 1 λ j + k σ ^ k 1 σ 2 + λ j α ^ j 2 1 2 ,
k ^ j = σ ^ λ j 1 d σ ^ 2 + λ j α ^ j 2 σ ^ .
6. Unbiased Two-Parameter Estimator (UTP)Wu (2014) [23] d ^ = 1 σ ^ 2 [ p + k t r X T X 1 ] k β ^ O L S J β ^ O L S J T ,
where t r X T X 1 = j = 1 p 1 / λ j .
k ^ = p σ ^ 2 1 d β ^ O L S J β ^ O L S J T σ 2 t r X T X 1 .
7. Dorugade Modified Two-Parameter Estimator (MTP)Dorugade (2014) [24] k ^ 1 = p σ ^ 2 j = 1 p α ^ j 2 and k ^ 2 = m e d i a n σ ^ 2 α ^ j 2 .
d ^ o p t = j = 1 p λ j + k σ ^ 2 + λ j α ^ j 2 λ j k α ^ j 2 .
8. Modified Almost Unbiased Liu Estimator (MAULE)Arumairajan and Wijekoon (2017) [25] d ^ o p t = 1 j = 1 p λ j σ ^ 2 k α ^ j 2 λ j + k 2 λ j + 1 2 j = 1 p λ j σ ^ 2 + λ j α ^ j 2 λ j + 1 4 λ j + k 2 ,
k ^ j = σ ^ 2 λ j + 1 2 λ j 1 d 2 λ j ( σ ^ 2 + α ^ j 2 ) α ^ j 2 λ j + 1 2 .
9. Modified Almost Unbiased Two-Parameter Estimator (MAUTP)Lukman et al. (2019) [26] k ^ = σ ^ 2 α ^ j 2 , and k ^ H M P = p σ ^ 2 j = 1 p α ^ j 2 ,
d ^ = min   α ^ j 2 σ ^ 2 α ^ j 2 + α ^ j 2 .
10. Modified New Two-Parameter Estimator (MNTP)Lukman et al. (2019) [27] k ^ j = σ ^ 2 λ j λ j α ^ j b 2 d ^ ( λ j α ^ j b 2 + σ ^ 2 ) ,
Use the harmonic mean of k values.
d ^ o p t = j = 1 p k λ j α ^ j b 2 σ ^ 2 λ j j = 1 p σ ^ 2 k ^ + k λ j α ^ j b 2 .
11. Modified Ridge Type (MRT) | Lukman et al. (2019) [28] | $\hat{k}_j = \frac{\hat{\sigma}^2}{(1 + d)\hat{\alpha}_j^2}$; use the harmonic mean of the $\hat{k}_j$. $\hat{d}_j = \frac{\hat{\sigma}^2}{k\hat{\alpha}_j^2} - 1$; use the harmonic mean of the $\hat{d}_j$.
12. A New Biased Estimator by Dawoud and Kibria (DK) | Dawoud and Kibria (2020) [5] | $\hat{k}_j = \frac{\hat{\sigma}^2}{(1 + d)(\hat{\sigma}^2/\lambda_j + 2\hat{\alpha}_j^2)}$; use $k_{min} = \min_j(\hat{k}_j)$. $\hat{d}_j = \frac{\hat{\sigma}^2\lambda_j}{m} - 1$, where $m = k(\hat{\sigma}^2 + 2\lambda_j\hat{\alpha}_j^2)$; use $d_{min} = \min_j(\hat{d}_j)$.
13. Generalized Two-Parameter Estimator (GTP) | Zeinal (2020) [29] | $\hat{d}_j = \frac{(k\hat{\alpha}_j^2 - \hat{\sigma}^2)\lambda_j}{k(\hat{\sigma}^2 + \hat{\alpha}_j^2\lambda_j)}$, $\hat{k}_j = \frac{\hat{\sigma}^2}{\hat{\alpha}_j^2 - d_j(\hat{\sigma}^2/\lambda_j + \hat{\alpha}_j^2)}$; take the arithmetic mean.
14. Siray Two-Parameter Estimator (DTP) Siray et al. (2021) [30] k ^ j = σ ^ 2 d λ j ( λ j + 1 ) d 1 λ j 2 α ^ j 2 σ ^ 2 ( λ j + d ) ,
d ^ j = σ ^ 2 k λ j + k λ j 2 α ^ j 2 k λ j 2 α ^ j 2 σ ^ 2 λ j 2 σ ^ 2 λ j σ ^ 2 k .
Take the harmonic mean and median.
15. Unbiased Modified Two-Parameter Estimator (UMTP)Abidoye et al. (2022) [31] k ^ = p σ ^ 2 j = 1 p α ^ 2 j  
d ^ = j = 1 p λ j + k σ ^ 2 + λ j α ^ j 2 λ j k α ^ j 2 .
16. Ahmad and Aslam’s Modified New Two-Parameter Estimator (MNTPE)Ahmad and Aslam (2022) [32] k ^ o p t = σ ^ 2 λ j + d 1 d λ j α ^ j 2 d λ j + 1 α ^ j 2 ,
Take the harmonic mean of k ^ values.
d ^ o p t = j = 1 p λ j ( α ^ j 2 σ ^ 2 ) j = 1 p ( λ j α ^ j 2 + σ ^ 2 k λ j α ^ j 2 ) .
17. Modified Liu Ridge Type (MLRT)Aslam and Ahmad (2022) [33] k ^ j = σ   ^ 2 λ j + d λ j 1 d α ^ j 2 1 + d λ j + 1 α ^ j 2 ,
Take the max values of k ^ .
d ^ o p t = j = 1 p λ j + k λ j + 1 λ j 2 k λ j 2 k λ j α ^ j 2 σ ^ 2 λ j λ j 2 k λ j 2 k λ j λ j + 1 2 j = 1 p σ 2 λ j 2 k λ j 2 k λ j + λ j k λ j + 1 λ j 2 k λ j 2 k λ j α ^ j 2 λ j + 1 2 .
18. New Biased Regression Two-Parameter Estimator (NBR), Dawoud et al. (2022) [34]:
$\hat{k} = \frac{\lambda_j^2\hat{\alpha}_j^2(3-d) + \hat{\sigma}^2\lambda_j(1-d) + \lambda_j\sqrt{\lambda_j\hat{\alpha}_j^4(d-3)^2 + 2\lambda_j\hat{\sigma}^2\hat{\alpha}_j^2(5-2d+d^2) + \hat{\sigma}^4(1+d)^2}}{2\left(\hat{\sigma}^2 d + \lambda_j\hat{\alpha}_j^2(1+d)\right)}$; take the minimum of the $\hat{k}$ values.
$\hat{d}_j = \frac{\lambda_j^2\hat{\sigma}^2 - 3\hat{\alpha}_j^2 k\lambda_j - k\left(\hat{\sigma}^2 + \hat{\alpha}_j^2 k\right)}{k(k-\lambda_j)\left(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2\right)}$; take the minimum of the $\hat{d}$ values.
19. Biased Two-Parameter Estimator (BTP), Idowu et al. (2022) [35]:
$\hat{k}_j = \frac{\hat{\sigma}^2\lambda_j\left(d\lambda_j + \hat{\alpha}_j^2\lambda_j^2(d+1) - \hat{\sigma}^2(1+d)\right)}{d\lambda_j\hat{\alpha}_j^2\,\lambda_j(1+d)(2\lambda_j d + 1)}$,
$\hat{d}_j = \left[\hat{\sigma}^2\lambda_j\left(\hat{\sigma}^2 k + \hat{\alpha}_j^2\lambda_j^2\right) + \hat{\sigma}^2\lambda_j k + 2\hat{\alpha}_j^2\lambda_j^2 k^2 - \hat{\sigma}^2 k + \hat{\alpha}_j^2\lambda_j k + \hat{\alpha}_j^4\lambda_j^2 k^2(\lambda_j k + k + \lambda_j) + \hat{\sigma}^2\hat{\alpha}_j^2\lambda_j^2 k(k-\lambda_j) + \hat{\sigma}^2\hat{\alpha}_j^2\left(2\hat{\sigma}^2\lambda_j k + k + \lambda_j\right) + \hat{\sigma}^4\lambda_j k(k-\lambda_j)\right] \big/ \left[\hat{\sigma}^2 k + \hat{\alpha}_j^2\lambda_j k\right]$.
20. New Two-Parameter Estimator (NTP), Owolabi et al. (2022) [36]:
$\hat{k}_j = \frac{\hat{\sigma}^2 - d}{2\hat{\alpha}_j^2 + \hat{\sigma}^2/\lambda_j}$; take the harmonic mean.
$\hat{d}_j = \frac{\hat{\sigma}^2 - k}{2\hat{\alpha}_j^2 + \hat{\sigma}^2/\lambda_j}$.
21. New Ridge-Type Estimator (NRT), Owolabi et al. (2022) [37]:
$\hat{k} = \min_j\left(\frac{\hat{\sigma}^2}{d\,\hat{\alpha}_j^2}\right)$,
$\hat{d} = \min_j\left(\frac{\hat{\sigma}^2}{k\,\hat{\alpha}_j^2}\right)$.
22. Modified Two-Parameter Estimator (MTPE), Owolabi et al. (2022) [38]:
$\hat{k}_j = \frac{\hat{\sigma}^2}{d\,\hat{\alpha}_j^2}$; take the arithmetic mean of the $\hat{k}_j$.
$\hat{d} = \min_j\left(\frac{\hat{\sigma}^2}{k\,\hat{\alpha}_j^2}\right)$.
23. Modified Two-Parameter Liu Estimator by Abonazel (MTPL), Abonazel (2023) [39]:
$\hat{k}_{\mathrm{opt}} = \frac{\lambda_j\left(\hat{\sigma}^2 - \hat{\alpha}_j^2 d\right)}{\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2}$,
$\hat{d}_{\mathrm{opt}} = \frac{\lambda_j\hat{\sigma}^2 - \hat{\alpha}_j^2 k\left(\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2\right)}{\hat{\sigma}^2 + \lambda_j\hat{\alpha}_j^2}$.
24. Liu–Kibria–Lukman Two-Parameter Estimator (LKL), Idowu et al. (2023) [41]:
$\hat{d}_j = \frac{\lambda_j\hat{\alpha}_j^2\hat{\sigma}^2 + \lambda_j k\left(2\hat{\alpha}_j^2\lambda_j + \hat{\alpha}_j^2 - \hat{\sigma}^2\right)}{\hat{\sigma}^2\lambda_j k + \hat{\alpha}_j^2\lambda_j(\lambda_j - k)}$,
$\hat{k} = \min_j\left(\frac{\hat{\sigma}^2}{2\hat{\alpha}_j^2 + \hat{\sigma}^2/\lambda_j}\right)$.
25. Two-Parameter Ridge-Type Estimator (TPR), Shakir Khan et al. (2024) [43]:
$\hat{q} = \frac{\sum_{j=1}^p \frac{\hat{\alpha}_j^2\lambda_j}{\lambda_j + k}}{\sum_{j=1}^p \frac{\hat{\sigma}^2\lambda_j + \hat{\alpha}_j^2\lambda_j^2}{(\lambda_j + k)^2}}$,
$\hat{k}_{\mathrm{opt}} = \frac{q\sum_{j=1}^p \hat{\sigma}^2\lambda_j + (q-1)\sum_{j=1}^p \hat{\alpha}_j^2\lambda_j^2}{\sum_{j=1}^p \hat{\alpha}_j^2\lambda_j^2}$.
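Most of the selection rules above are plug-in formulas built from the same three ingredients: the eigenvalues $\lambda_j$ of $X'X$, the canonical-form OLS coefficients $\hat{\alpha}_j$, and the residual variance estimate $\hat{\sigma}^2$. As a concrete illustration, the following minimal sketch (not the authors' code; the function name and the default $d$ are ours) computes the harmonic-mean MRT shrinkage parameter of entry 11:

```python
import numpy as np

def mrt_parameter(X, y, d=0.5):
    """Plug-in k for the MRT rule: k_j = sigma^2 / ((1+d) alpha_j^2),
    aggregated by the harmonic mean of the componentwise values."""
    n, p = X.shape
    # Spectral quantities of X'X: eigenvalues lambda_j, eigenvectors T
    lam, T = np.linalg.eigh(X.T @ X)
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    alpha = T.T @ beta_ols                        # canonical-form coefficients
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)              # OLS error-variance estimate
    k_j = sigma2 / ((1.0 + d) * alpha**2)         # componentwise k_j
    return p / np.sum(1.0 / k_j)                  # harmonic mean of the k_j
```

The other rules differ only in the componentwise formula and in the aggregation step (harmonic mean, arithmetic mean, median, minimum, or maximum).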
Table 2. Type I error rate for ρ = 0.80 and α = 0.05 under MF orientation.

Estimator  p=3,n=30  p=3,n=50  p=3,n=100  p=5,n=30  p=5,n=50  p=5,n=100  p=10,n=30  p=10,n=50  p=10,n=100  Avg
OLS  0.0457  0.0491  0.0497  0.0461  0.0498  0.0466  0.0441  0.0469  0.0488  0.0474
LTE  0.0779  0.0859  0.0905  0.0558  0.0631  0.0634  0.0369  0.0444  0.0488  0.063
TP1  0.0466  0.0501  0.0509  0.0462  0.0493  0.047  0.0442  0.047  0.0489  0.0478
TP2  0.0471  0.0509  0.0523  0.0462  0.0493  0.0478  0.0428  0.0463  0.0484  0.0479
NBE1  0.0603  0.0666  0.0699  0.0499  0.0559  0.0554  0.0364  0.0435  0.0478  0.054
NBE2  0.0564  0.0633  0.0676  0.049  0.0551  0.0542  0.0368  0.0441  0.0485  0.0528
YC1  0.0481  0.0587  0.0655  0.0999  0.1243  0.1405  0.2071  0.2871  0.3328  0.1516
YC2  0.0447  0.0521  0.0647  0.0575  0.0704  0.0816  0.0561  0.0818  0.1005  0.0677
YC3  0.0528  0.0594  0.061  0.0521  0.0579  0.0597  0.0393  0.0487  0.0547  0.054
AUTP1  0.0391  0.0411  0.0457  0.0372  0.0412  0.0413  0.027  0.0329  0.039  0.0383
AUTP2  0.0409  0.0447  0.0484  0.0479  0.0527  0.0485  0.0503  0.0593  0.0575  0.05
UTP  0.1178  0.2792  0.4844  0.0167  0.0477  0.1739  0.002  0.004  0.0134  0.1266
MTP1  0.1064  0.1375  0.1575  0.2042  0.2811  0.3254  0.3739  0.5744  0.6658  0.314
MTP2  0.1053  0.1359  0.1569  0.2011  0.2779  0.3217  0.3719  0.572  0.6638  0.3118
MAULE  0.0015  0.0057  0.0105  0.0011  0.0015  0.0034  0.0001  0.0012  0.0031  0.0023
MAUTP  0.078  0.0876  0.0957  0.0734  0.0869  0.0928  0.0412  0.0535  0.063  0.0747
MNTP  0.191  0.202  0.2061  0.1016  0.1134  0.1105  0.0653  0.0725  0.076  0.1265
DK  0.0604  0.0667  0.0704  0.047  0.0506  0.0491  0.0426  0.0462  0.0483  0.0535
MRT  0.0509  0.0591  0.0629  0.0465  0.0537  0.0528  0.0355  0.0437  0.0484  0.0504
GTP  0.0257  0.0325  0.0331  0.0244  0.0276  0.0306  0.0158  0.0203  0.0248  0.0261
DTP1  0.1431  0.1611  0.1815  0.2469  0.299  0.3289  0.3932  0.5141  0.5798  0.3164
DTP2  0.1683  0.1818  0.2066  0.2467  0.2916  0.3153  0.4927  0.6189  0.6817  0.356
UMTP  0.992  0.9959  0.9983  0.9956  0.9983  0.9993  0.9978  0.9995  0.9999  0.9974
MNTPE  0.1333  0.1451  0.1621  0.1559  0.185  0.1976  0.1043  0.1453  0.1779  0.1563
MLRT  0.1175  0.1353  0.1445  0.2178  0.2583  0.2795  0.4121  0.5391  0.605  0.301
NBR  0.0261  0.0535  0.0891  0.0104  0.0233  0.0638  0.0026  0.0046  0.0066  0.0311
BTP  0.0712  0.0836  0.0931  0.1571  0.1959  0.2182  0.3919  0.5261  0.5829  0.2578
LKL  0.0494  0.0651  0.0774  0.0375  0.0612  0.0728  0.0115  0.0242  0.0438  0.0492
NTP  0.0466  0.0502  0.0513  0.0458  0.0492  0.047  0.0443  0.047  0.0486  0.0478
NRT  0.048  0.0528  0.0537  0.0456  0.0491  0.0478  0.0426  0.0462  0.0483  0.0482
MTPE  0.0394  0.0407  0.0427  0.0372  0.0408  0.0386  0.0358  0.0381  0.0395  0.0392
MTPL  0.0305  0.0369  0.0417  0.0349  0.0423  0.0462  0.0272  0.0399  0.0481  0.0386
TPR1  0.1169  0.1271  0.1515  0.0826  0.1164  0.1368  0.0174  0.0499  0.0807  0.0977
TPR2  0.0016  0.0021  0.003  0.0001  0.0001  0.0007  0.0001  0.0002  0.0003  0.0008
Notes: LTE: Liu Type of Two-Parameter Estimator; TP: Ozkale and Kaciranlar Two-Parameter Estimator; NBE: New Biased Estimator Based on Ridge; YC: Yang and Chang Two-Parameter Estimator; AUTP: Almost Unbiased Two-Parameter Estimator; UTP: Unbiased Two-Parameter Estimator; MTP: Dorugade Modified Two-Parameter Estimator; MAULE: Modified Almost Unbiased Liu Estimator; MAUTP: Modified Almost Unbiased Two-Parameter Estimator; MNTP: Modified New Two-Parameter Estimator; MRT: Modified Ridge Type; DK: A New Biased Estimator by Dawoud and Kibria; GTP: Generalized Two-Parameter Estimator; DTP: Siray Two-Parameter Estimator; UMTP: Unbiased Modified Two-Parameter Estimator; MNTPE: Ahmad and Aslam’s Modified New Two-Parameter Estimator; MLRT: Modified Liu Ridge Type; NBR: New Biased Regression Two-Parameter Estimator; BTP: Biased Two-Parameter Estimator; NTP: New Two-Parameter Estimator; NRT: New Ridge-Type Estimator; MTPE: Modified Two-Parameter Estimator; MTPL: Modified Two-Parameter Liu Estimator by Abonazel; LKL: Liu–Kibria–Lukman Two-Parameter Estimator; TPR: Two-Parameter Ridge Estimator.
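Each cell in Tables 2–4 is an empirical rejection rate under a true null hypothesis. The sketch below illustrates the general protocol (predictors with pairwise correlation ρ, a t-test on a coefficient whose true value is zero, rejection counting); it is an illustrative simplification under our own assumptions, not the authors' simulation code:

```python
import numpy as np

def empirical_type1_error(n=30, p=3, rho=0.80, reps=2000, tcrit=2.052, seed=1):
    """Empirical size of the OLS t-test for one coefficient whose true value
    is zero. tcrit = 2.052 is the two-sided 5% t critical value for
    df = n - p = 27 (the default n = 30, p = 3)."""
    rng = np.random.default_rng(seed)
    cov = np.full((p, p), rho)
    np.fill_diagonal(cov, 1.0)
    L = np.linalg.cholesky(cov)          # induces correlation rho among predictors
    rejections = 0
    for _ in range(reps):
        X = rng.standard_normal((n, p)) @ L.T
        y = rng.standard_normal(n)       # all true coefficients are zero (H0 holds)
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ X.T @ y            # OLS estimate
        s2 = np.sum((y - X @ b) ** 2) / (n - p)
        se = np.sqrt(s2 * XtX_inv[0, 0])
        rejections += abs(b[0] / se) > tcrit
    return rejections / reps
```

With a well-calibrated test the returned rate should hover near the nominal 0.05, which is exactly the property the tables examine for the biased estimators.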
Table 3. Type I error rate for ρ = 0.90 and α = 0.05 under MF orientation.

Estimator  p=3,n=30  p=3,n=50  p=3,n=100  p=5,n=30  p=5,n=50  p=5,n=100  p=10,n=30  p=10,n=50  p=10,n=100  Avg
OLS  0.0485  0.0455  0.0454  0.0466  0.0471  0.0506  0.0449  0.047  0.0484  0.0471
LTE  0.0679  0.0691  0.0701  0.049  0.0519  0.0574  0.0364  0.0425  0.0474  0.0546
TP1  0.0487  0.0468  0.0457  0.0465  0.047  0.0506  0.0449  0.047  0.0484  0.0473
TP2  0.0493  0.0475  0.0463  0.0457  0.0472  0.0507  0.0427  0.0458  0.0481  0.047
NBE1  0.0586  0.0571  0.0579  0.0464  0.0495  0.0546  0.0363  0.0425  0.0469  0.05
NBE2  0.0575  0.0561  0.0559  0.0468  0.0496  0.0542  0.0364  0.0425  0.0473  0.0496
YC1  0.0618  0.0674  0.0732  0.0955  0.1181  0.1303  0.211  0.2789  0.3133  0.1499
YC2  0.0468  0.0586  0.0676  0.0594  0.0721  0.0893  0.0633  0.0883  0.1059  0.0724
YC3  0.0523  0.0519  0.0538  0.0481  0.0522  0.0578  0.0374  0.0444  0.0497  0.0497
AUTP1  0.0394  0.0385  0.0399  0.0342  0.0373  0.0428  0.0269  0.032  0.0372  0.0365
AUTP2  0.0451  0.0443  0.0449  0.0469  0.0493  0.0506  0.0468  0.0541  0.0538  0.0484
UTP  0.0475  0.1139  0.336  0.0092  0.0212  0.0624  0.0015  0.0033  0.0075  0.0669
MTP1  0.1167  0.1506  0.1772  0.2387  0.315  0.3619  0.4244  0.6272  0.7175  0.3477
MTP2  0.1157  0.1495  0.177  0.2371  0.3136  0.3606  0.4229  0.6263  0.7167  0.3466
MAULE  0.0002  0.0009  0.0011  0.0001  0.0001  0.0002  0.0001  0.0001  0.0002  0.0003
MAUTP  0.0777  0.0815  0.0853  0.0676  0.0776  0.086  0.0366  0.0469  0.0535  0.0681
MNTP  0.1403  0.1476  0.1435  0.0717  0.0753  0.0785  0.0531  0.0566  0.0596  0.0918
DK  0.0589  0.0592  0.0587  0.0463  0.048  0.052  0.043  0.0457  0.0482  0.0511
MRT  0.0545  0.0547  0.0561  0.0459  0.0497  0.0542  0.0353  0.042  0.0474  0.0489
GTP  0.0281  0.0327  0.0351  0.0171  0.0192  0.0252  0.0126  0.0161  0.0193  0.0228
DTP1  0.165  0.1809  0.2054  0.2961  0.3404  0.37  0.4924  0.6027  0.6595  0.368
DTP2  0.1739  0.1941  0.2113  0.299  0.3398  0.3669  0.5564  0.6721  0.7318  0.3939
UMTP  0.9901  0.9935  0.9968  0.9964  0.9982  0.9993  0.9986  0.9996  0.9998  0.9969
MNTPE  0.1568  0.1731  0.1917  0.2694  0.3031  0.3247  0.3376  0.4188  0.469  0.2938
MLRT  0.1401  0.1629  0.1761  0.2668  0.3071  0.3322  0.4937  0.6215  0.6904  0.3545
NBR  0.0199  0.0439  0.0898  0.0057  0.0111  0.0188  0.0017  0.0019  0.0013  0.0216
BTP  0.0703  0.0781  0.0865  0.1655  0.207  0.2324  0.4234  0.5115  0.5522  0.2585
LKL  0.0522  0.0699  0.0775  0.0418  0.0621  0.076  0.0141  0.0262  0.0455  0.0517
NTP  0.0485  0.0464  0.0451  0.0461  0.0472  0.0508  0.0447  0.0469  0.0484  0.0471
NRT  0.049  0.0469  0.0465  0.0455  0.0469  0.0506  0.043  0.0457  0.0482  0.0469
MTPE  0.0406  0.0367  0.0381  0.0382  0.0383  0.0402  0.0366  0.038  0.039  0.0384
MTPL  0.033  0.0354  0.0403  0.0334  0.0409  0.0479  0.0258  0.0374  0.0458  0.0378
TPR1  0.1365  0.1516  0.1721  0.1387  0.1758  0.1992  0.0426  0.0918  0.1299  0.1376
TPR2  0.0003  0.0009  0.0013  0.0001  0.0001  0.0002  0.0001  0.0001  0.0002  0.0003
Table 4. Type I error rate for ρ = 0.99 and α = 0.05 under MF orientation.

Estimator  p=3,n=30  p=3,n=50  p=3,n=100  p=5,n=30  p=5,n=50  p=5,n=100  p=10,n=30  p=10,n=50  p=10,n=100  Avg
OLS  0.0443  0.0485  0.0467  0.0455  0.045  0.0474  0.0451  0.047  0.0482  0.0464
LTE  0.0482  0.0544  0.0533  0.0416  0.0439  0.0471  0.0352  0.0417  0.0456  0.0457
TP1  0.0442  0.0483  0.0469  0.0454  0.045  0.0473  0.0451  0.047  0.0482  0.0464
TP2  0.0457  0.0507  0.0495  0.0445  0.0448  0.0472  0.0426  0.0456  0.0476  0.0465
NBE1  0.0462  0.0521  0.0512  0.0416  0.0436  0.0469  0.0352  0.0417  0.0455  0.0449
NBE2  0.0477  0.0537  0.0533  0.0416  0.0438  0.047  0.0352  0.0416  0.0456  0.0455
YC1  0.1587  0.1844  0.2015  0.2666  0.2887  0.3094  0.3701  0.4869  0.5574  0.3137
YC2  0.075  0.092  0.1002  0.075  0.0937  0.108  0.0596  0.0853  0.1035  0.088
YC3  0.0444  0.0504  0.0483  0.0415  0.0436  0.0478  0.035  0.0415  0.0456  0.0442
AUTP1  0.0339  0.0377  0.0383  0.0323  0.0355  0.0399  0.0254  0.0317  0.0361  0.0345
AUTP2  0.0429  0.0476  0.0467  0.0435  0.0445  0.0468  0.0406  0.0459  0.049  0.0453
UTP  0.0104  0.015  0.0315  0.0039  0.0062  0.0137  0.0009  0.0018  0.003  0.0096
MTP1  0.131  0.1655  0.1894  0.2591  0.3366  0.3912  0.4684  0.6709  0.755  0.3741
MTP2  0.1307  0.1651  0.1891  0.2588  0.3365  0.391  0.4683  0.6709  0.755  0.3739
MAULE  0.0001  0.0001  0.0002  0.0001  0.0001  0.0002  0.0001  0.0001  0.0002  0.0001
MAUTP  0.0524  0.0596  0.0612  0.0439  0.048  0.0523  0.0321  0.0395  0.0452  0.0482
MNTP  0.0843  0.0914  0.0902  0.0493  0.049  0.0508  0.0459  0.0476  0.0491  0.062
DK  0.0466  0.0522  0.051  0.0444  0.0446  0.047  0.0429  0.0458  0.0477  0.0469
MRT  0.0473  0.0526  0.0538  0.0407  0.0434  0.0467  0.0336  0.0407  0.0454  0.0449
GTP  0.0102  0.0108  0.0113  0.0073  0.0097  0.0104  0.0046  0.0087  0.0124  0.0095
DTP1  0.1673  0.1903  0.2055  0.3222  0.3716  0.4059  0.5874  0.7021  0.7554  0.412
DTP2  0.1661  0.1889  0.2045  0.3278  0.3714  0.4074  0.5982  0.7142  0.7701  0.4165
UMTP  0.9853  0.9893  0.9941  0.9957  0.9975  0.9987  0.9992  0.9997  1  0.9955
MNTPE  0.1659  0.1873  0.2009  0.3278  0.371  0.4054  0.5914  0.707  0.7626  0.4133
MLRT  0.1672  0.1869  0.2005  0.3252  0.3653  0.4028  0.5719  0.7044  0.766  0.41
NBR  0.0059  0.008  0.015  0.0026  0.0028  0.0021  0.001  0.0005  0.0003  0.0042
BTP  0.0459  0.0516  0.0567  0.1339  0.1839  0.2149  0.3678  0.4688  0.5179  0.2268
LKL  0.0455  0.0645  0.0743  0.0612  0.0947  0.1174  0.0654  0.1004  0.1277  0.0835
NTP  0.0441  0.0481  0.047  0.0451  0.045  0.0474  0.0448  0.0468  0.0482  0.0463
NRT  0.0457  0.0515  0.0505  0.0443  0.0446  0.047  0.0429  0.0458  0.0477  0.0467
MTPE  0.0359  0.0381  0.0373  0.0355  0.0353  0.0382  0.0355  0.0369  0.0376  0.0367
MTPL  0.0288  0.0376  0.0421  0.0279  0.0354  0.0429  0.0238  0.0338  0.0428  0.035
TPR1  0.1536  0.1785  0.1948  0.2782  0.3332  0.3674  0.3122  0.4509  0.5216  0.31
TPR2  0.0001  0.0001  0.0002  0.0001  0.0001  0.0002  0.0001  0.0001  0.0002  0.0002
Table 5. Statistical power of test for ρ = 0.80 and α = 0.05.

Estimator  p=3,n=30  p=3,n=50  p=3,n=100  p=5,n=30  p=5,n=50  p=5,n=100  p=10,n=30  p=10,n=50  p=10,n=100  Ave
OLS  0.5927  0.6121  0.6145  0.3902  0.4089  0.4188  0.208  0.2298  0.2344  0.4122
LTE  0.9237  0.9313  0.9386  0.6641  0.6872  0.7063  0.2469  0.2995  0.3197  0.6353
TP1  0.6249  0.6456  0.646  0.396  0.4151  0.4261  0.2088  0.2305  0.235  0.4253
TP2  0.7383  0.7661  0.7753  0.4529  0.4816  0.4956  0.2204  0.2506  0.2588  0.4933
NBE1  0.879  0.8925  0.9011  0.5974  0.6287  0.6535  0.2343  0.2853  0.3059  0.5975
NBE2  0.8914  0.9003  0.9097  0.6183  0.6484  0.6703  0.2418  0.2934  0.313  0.6096
YC3  0.8467  0.8679  0.8816  0.5838  0.6176  0.6433  0.2418  0.2941  0.3152  0.588
AUTP1  0.5033  0.5507  0.5845  0.301  0.3292  0.3664  0.1487  0.1721  0.1817  0.3486
AUTP2  0.6726  0.6781  0.6753  0.4872  0.4962  0.4842  0.2482  0.2924  0.3014  0.4817
DK  0.7616  0.7771  0.7872  0.4706  0.4971  0.5133  0.2195  0.2481  0.2556  0.5033
MRT  0.8359  0.8521  0.8626  0.583  0.6185  0.6376  0.243  0.2975  0.318  0.5831
GTP  0.6085  0.6247  0.6361  0.3833  0.4066  0.4181  0.1583  0.196  0.2141  0.4051
NBR  0.1022  0.1953  0.6455  0.018  0.0214  0.047  0.0032  0.0007  0.0003  0.1148
LKL  0.6405  0.6506  0.653  0.2987  0.312  0.329  0.0392  0.0536  0.0793  0.3395
NTP  0.6618  0.6823  0.6881  0.4135  0.4344  0.4427  0.212  0.2346  0.2391  0.4454
NRT  0.7196  0.7398  0.7471  0.4596  0.4852  0.4989  0.2193  0.248  0.2555  0.4859
MTPE  0.5781  0.5971  0.5973  0.3652  0.3847  0.3941  0.1842  0.2061  0.2092  0.3907
MTPL  0.7908  0.8143  0.8313  0.5596  0.5976  0.6198  0.246  0.3128  0.3425  0.5683
Table 6. Statistical power of test for ρ = 0.90 and α = 0.05.

Estimator  p=3,n=30  p=3,n=50  p=3,n=100  p=5,n=30  p=5,n=50  p=5,n=100  p=10,n=30  p=10,n=50  p=10,n=100  Ave
OLS  0.5977  0.608  0.6285  0.3939  0.4035  0.422  0.2116  0.2265  0.2368  0.4143
LTE  0.9319  0.9399  0.9423  0.648  0.6741  0.6973  0.2375  0.2786  0.3062  0.6284
TP1  0.6325  0.6444  0.6635  0.4003  0.4108  0.4299  0.2121  0.2272  0.2375  0.4287
TP2  0.7933  0.8143  0.8266  0.4568  0.4763  0.503  0.2211  0.2436  0.2578  0.5103
NBE1  0.8999  0.9145  0.9174  0.5932  0.6258  0.6549  0.2289  0.2694  0.2968  0.6001
NBE2  0.9159  0.9272  0.931  0.6266  0.6544  0.6806  0.2361  0.2771  0.3047  0.6171
YC3  0.8529  0.8793  0.8891  0.5565  0.5972  0.6298  0.2234  0.2655  0.2932  0.5763
AUTP1  0.49  0.5329  0.5869  0.2837  0.3072  0.3576  0.1411  0.1591  0.1728  0.3368
AUTP2  0.6475  0.6504  0.6649  0.4646  0.4664  0.4664  0.2358  0.2715  0.2858  0.4615
DK  0.8008  0.8161  0.8251  0.4733  0.4922  0.5176  0.2202  0.2414  0.2545  0.5157
MRT  0.8847  0.895  0.9011  0.6063  0.6367  0.6645  0.2389  0.2831  0.3124  0.6025
GTP  0.6527  0.6585  0.6731  0.4102  0.423  0.4485  0.1741  0.2091  0.2326  0.4313
NBR  0.0098  0.0107  0.0471  0.0016  0.0003  0.0006  0.0003  0  0  0.0078
LKL  0.7251  0.7349  0.7429  0.4056  0.4204  0.4437  0.0788  0.0959  0.1295  0.4196
NTP  0.6728  0.6901  0.7075  0.4149  0.4268  0.4481  0.2146  0.2307  0.241  0.4496
NRT  0.7486  0.7666  0.7783  0.4608  0.477  0.5008  0.2202  0.2413  0.2543  0.4942
MTPE  0.5813  0.5945  0.6147  0.3697  0.3789  0.3967  0.1878  0.2024  0.2123  0.3931
MTPL  0.7929  0.8197  0.8387  0.5393  0.5706  0.605  0.2346  0.2899  0.3245  0.5572
Table 7. Statistical power of test for ρ = 0.99 and α = 0.05.

Estimator  p=3,n=30  p=3,n=50  p=3,n=100  p=5,n=30  p=5,n=50  p=5,n=100  p=10,n=30  p=10,n=50  p=10,n=100  Ave
OLS  0.5896  0.6055  0.6279  0.3901  0.405  0.4206  0.2117  0.2275  0.2345  0.4125
LTE  0.9333  0.9365  0.9438  0.6022  0.6354  0.6603  0.2167  0.2529  0.275  0.6062
TP1  0.6289  0.6468  0.6685  0.3965  0.412  0.4272  0.2122  0.2279  0.2351  0.4283
TP2  0.8171  0.8359  0.8498  0.4463  0.4696  0.4936  0.2162  0.2371  0.2478  0.5126
NBE1  0.8913  0.9076  0.9225  0.5268  0.5819  0.6178  0.2094  0.2471  0.2696  0.5749
NBE2  0.9268  0.9309  0.9393  0.5997  0.6345  0.6586  0.2172  0.2537  0.2758  0.6041
YC3  0.4129  0.7207  0.8393  0.343  0.4591  0.5424  0.1831  0.2307  0.2584  0.4433
AUTP1  0.3562  0.4221  0.5112  0.1898  0.217  0.2706  0.1073  0.1201  0.1307  0.2583
AUTP2  0.6009  0.6118  0.6335  0.4063  0.4208  0.4307  0.2068  0.2331  0.2497  0.4215
DK  0.8409  0.85  0.8627  0.459  0.4825  0.5077  0.216  0.2353  0.2452  0.5221
MRT  0.9113  0.9166  0.9273  0.6084  0.6423  0.6642  0.2185  0.2584  0.2834  0.6034
GTP  0.7617  0.7689  0.7788  0.5124  0.5363  0.5613  0.1932  0.2419  0.2727  0.5141
NBR  0.0004  0.0001  0.0001  0.0002  0.0001  0.0001  0.0001  0.0001  0.0002  0.0001
LKL  0.9003  0.9063  0.908  0.7239  0.7483  0.7596  0.3416  0.3958  0.4354  0.6799
NTP  0.6689  0.6902  0.7115  0.4097  0.4263  0.4432  0.214  0.2301  0.2375  0.4479
NRT  0.7687  0.7865  0.8005  0.4471  0.4687  0.493  0.216  0.2353  0.2452  0.4957
MTPE  0.5737  0.5912  0.6128  0.3666  0.3812  0.3961  0.1881  0.2036  0.2085  0.3913
MTPL  0.6837  0.7466  0.8001  0.4593  0.508  0.5464  0.2041  0.2529  0.2825  0.4982
Table 8. Statistical power for σ² = 5 and 10 for ρ = 0.80.

Estimator  p=3,n=30,σ²=5  p=3,n=30,σ²=10  p=3,n=50,σ²=5  p=3,n=50,σ²=10  p=3,n=100,σ²=5  p=3,n=100,σ²=10  p=10,n=30,σ²=5  p=10,n=30,σ²=10  p=10,n=50,σ²=5  p=10,n=50,σ²=10  p=10,n=100,σ²=5  p=10,n=100,σ²=10
OLS  0.1615  0.0997  0.1692  0.1067  0.1693  0.1132  0.0758  0.0604  0.0813  0.0616  0.086  0.0659
LTE  0.3626  0.1973  0.3907  0.2195  0.4015  0.2309  0.0716  0.054  0.0859  0.0613  0.0967  0.0701
NBE2  0.287  0.1553  0.3157  0.1725  0.3297  0.1839  0.0714  0.0538  0.0853  0.0611  0.0959  0.0697
YC3  0.2547  0.1343  0.2861  0.151  0.2993  0.1653  0.0732  0.0544  0.0902  0.0633  0.1031  0.0738
DK  0.257  0.1457  0.2791  0.1607  0.29  0.1724  0.0758  0.0594  0.083  0.0622  0.0891  0.0674
MRT  0.2636  0.1442  0.2906  0.1641  0.3039  0.1753  0.0701  0.0525  0.0859  0.0609  0.0979  0.0704
LKL  0.2539  0.1379  0.2738  0.1577  0.286  0.1735  0.0095  0.0051  0.0198  0.0129  0.0396  0.0309
Table 9. Statistical power for σ² = 5 and 10 for ρ = 0.90.

Estimator  p=3,n=30,σ²=5  p=3,n=30,σ²=10  p=3,n=50,σ²=5  p=3,n=50,σ²=10  p=3,n=100,σ²=5  p=3,n=100,σ²=10  p=10,n=30,σ²=5  p=10,n=30,σ²=10  p=10,n=50,σ²=5  p=10,n=50,σ²=10  p=10,n=100,σ²=5  p=10,n=100,σ²=10
OLS  0.1624  0.1031  0.1653  0.1027  0.1679  0.1085  0.0757  0.0582  0.0801  0.0651  0.0857  0.0667
LTE  0.4057  0.2237  0.4322  0.2307  0.4441  0.2467  0.0697  0.0513  0.0826  0.0636  0.0942  0.0696
NBE2  0.3556  0.1856  0.3813  0.1934  0.3979  0.2078  0.0697  0.0513  0.0825  0.0635  0.0941  0.0696
YC3  0.2851  0.1521  0.3111  0.1617  0.3276  0.1765  0.0713  0.0515  0.0845  0.0649  0.0964  0.0715
DK  0.3059  0.1717  0.3239  0.177  0.3346  0.1853  0.0753  0.0573  0.0813  0.0652  0.0884  0.0675
MRT  0.3401  0.18  0.3728  0.1923  0.3912  0.207  0.0685  0.0502  0.0828  0.0631  0.0954  0.0701
LKL  0.3881  0.2199  0.4074  0.2403  0.4179  0.2623  0.0239  0.0123  0.0359  0.0221  0.0593  0.0423
Table 10. Statistical power for σ² = 5 and 10 for ρ = 0.99.

Estimator  p=3,n=30,σ²=5  p=3,n=30,σ²=10  p=3,n=50,σ²=5  p=3,n=50,σ²=10  p=3,n=100,σ²=5  p=3,n=100,σ²=10  p=10,n=30,σ²=5  p=10,n=30,σ²=10  p=10,n=50,σ²=5  p=10,n=50,σ²=10  p=10,n=100,σ²=5  p=10,n=100,σ²=10
OLS  0.1641  0.1033  0.1641  0.1073  0.1694  0.111  0.0768  0.0591  0.0786  0.0648  0.0874  0.0678
LTE  0.4299  0.26  0.4615  0.2711  0.4855  0.2815  0.0677  0.0508  0.078  0.062  0.0914  0.0687
NBE2  0.4401  0.2632  0.4727  0.2777  0.4977  0.2908  0.0679  0.0509  0.0781  0.062  0.0915  0.0688
YC3  0.2395  0.1759  0.2747  0.1895  0.3101  0.2069  0.0638  0.0492  0.0748  0.0605  0.0893  0.0675
DK  0.3227  0.1993  0.3306  0.2066  0.3448  0.2119  0.0758  0.0574  0.0793  0.0645  0.089  0.0685
MRT  0.4959  0.2993  0.5279  0.3241  0.5508  0.3473  0.0663  0.0492  0.078  0.0614  0.0923  0.0689
LKL  0.6239  0.4448  0.6508  0.4957  0.6647  0.5095  0.0881  0.0786  0.1304  0.1218  0.1666  0.1476
Table 11. Statistical power for n = 200 and 300 and ρ = 0.80.

Estimator  p=3,n=200  p=3,n=300  p=5,n=200  p=5,n=300  p=10,n=200  p=10,n=300
OLS  0.8159  0.8216  0.6038  0.6017  0.3497  0.3483
LTE  0.9834  0.9858  0.8561  0.8598  0.4714  0.4721
NBE2  0.9747  0.9777  0.8342  0.8381  0.4634  0.4635
YC3  0.9613  0.9639  0.8112  0.815  0.4593  0.4593
DK  0.9098  0.9128  0.7014  0.7033  0.3804  0.3785
MRT  0.953  0.9571  0.8074  0.8088  0.4677  0.4672
LKL  0.7362  0.7398  0.4124  0.4176  0.1365  0.1474
Table 12. Statistical power for n = 200 and 300 and ρ = 0.90.

Estimator  p=3,n=200  p=3,n=300  p=5,n=200  p=5,n=300  p=10,n=200  p=10,n=300
OLS  0.8149  0.8198  0.6  0.6047  0.3473  0.3536
LTE  0.9846  0.9849  0.8512  0.853  0.4487  0.4555
NBE2  0.9793  0.9811  0.841  0.8418  0.4464  0.4534
YC3  0.9595  0.9643  0.8024  0.8056  0.4314  0.4394
DK  0.9187  0.9211  0.7067  0.7104  0.3735  0.3804
MRT  0.9637  0.9665  0.823  0.8224  0.4553  0.4632
LKL  0.8003  0.8035  0.5242  0.5306  0.2021  0.217
Table 13. Statistical power for n = 200 and 300 and ρ = 0.99.

Estimator  p=3,n=200  p=3,n=300  p=5,n=200  p=5,n=300  p=10,n=200  p=10,n=300
OLS  0.8171  0.8206  0.5988  0.6115  0.3492  0.3481
LTE  0.985  0.9841  0.8301  0.8389  0.4155  0.4171
NBE2  0.9831  0.9826  0.8292  0.8374  0.4165  0.4184
YC3  0.9459  0.9523  0.7561  0.7717  0.3968  0.399
DK  0.9303  0.9342  0.7003  0.7109  0.3671  0.3658
MRT  0.9737  0.9732  0.8259  0.8311  0.4272  0.4296
LKL  0.9309  0.9335  0.8142  0.8225  0.543  0.5516
Table 14. Statistical power for ρ = 0.80 using errors from t-distribution.

Estimator  p=3,n=10  p=3,n=20  p=3,n=30  p=5,n=10  p=5,n=20  p=5,n=30  p=10,n=20  p=10,n=30
OLS  0.3622  0.5199  0.5609  0.1728  0.3304  0.3604  0.162  0.1996
LTE  0.6885  0.8826  0.9102  0.2004  0.5611  0.6191  0.1533  0.234
NBE2  0.5577  0.8272  0.8675  0.1701  0.51  0.5755  0.1505  0.2294
YC3  0.4016  0.7678  0.8267  0.1243  0.474  0.5439  0.1512  0.23
DK  0.4393  0.6917  0.7372  0.1765  0.3878  0.4366  0.163  0.2099
MRT  0.4929  0.7641  0.8121  0.1643  0.4761  0.5446  0.1481  0.231
LKL  0.4569  0.5987  0.6219  0.0684  0.2597  0.2816  0.0264  0.0381
Table 15. Statistical power for ρ = 0.90 using errors from exponential distribution.

Estimator  p=3,n=10  p=3,n=20  p=3,n=30  p=5,n=10  p=5,n=20  p=5,n=30  p=10,n=20  p=10,n=30
OLS  0.2728  0.3514  0.3606  0.1376  0.2226  0.2334  0.1162  0.1313
LTE  0.5647  0.7293  0.7594  0.1525  0.3513  0.3904  0.0972  0.1363
NBE2  0.4835  0.6897  0.7243  0.1362  0.3328  0.3706  0.0969  0.1358
YC3  0.2512  0.5703  0.6239  0.0836  0.2834  0.3272  0.0928  0.1322
DK  0.3893  0.5654  0.6026  0.1408  0.2581  0.2771  0.1144  0.1345
MRT  0.4157  0.6503  0.6871  0.1273  0.322  0.3659  0.0938  0.1363
LKL  0.4692  0.5931  0.6047  0.0615  0.2816  0.3039  0.0293  0.0504
Table 16. Variance inflation factor.

Variable  VIF
PREC  4.113888
JANT  6.143551
JULT  3.967774
OVR65  7.470045
POPN  4.307618
EDUC  4.860538
HOUS  3.994781
DENS  1.658281
NONW  6.779599
WWDRK  2.841582
POOR  8.717068
HC  98.639935
NOX  104.982405
SOx  4.228929
HUMID  1.907092
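The VIFs in Table 16 (and later in Table 18) follow the standard auxiliary-regression definition $\mathrm{VIF}_j = 1/(1 - R_j^2)$, where $R_j^2$ is the coefficient of determination from regressing the $j$-th predictor on the remaining ones. A minimal sketch of that computation (illustrative, not the code used for the paper):

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), with R_j^2 from
    regressing column j of X on the remaining columns (plus an intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        xj = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z1 = np.column_stack([np.ones(n), Z])         # intercept + other columns
        coef, *_ = np.linalg.lstsq(Z1, xj, rcond=None)
        resid = xj - Z1 @ coef
        r2 = 1.0 - resid @ resid / np.sum((xj - xj.mean()) ** 2)
        out[j] = 1.0 / (1.0 - r2)
    return out
```

A common rule of thumb treats VIF values above 10 (e.g., HC and NOX in Table 16) as indicating serious multicollinearity.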
Table 17. Results of pollution data analysis comparing two-parameter models with OLS.

Variable  OLS: Coef  SE  p-value  LTE: Coef  SE  p-value  NBE2: Coef  SE  p-value
PREC  1.175  1.06  0.274  1.036  1.373  0.455  1.209  1.078  0.268
JANT  −1.516  1.291  0.247  −1.328  1.672  0.431  −1.789  1.251  0.16
JULT  1.319  1.819  0.472  1.164  2.356  0.624  1.6  1.79  0.376
OVR65  11.184  8.008  0.17  9.822  10.369  0.349  10.381  8.225  0.214
POPN  128.036  45.007  0.007  112.444  58.28  0.06  113.282  39.854  0.007
EDUC  −1.463  13.112  0.912  −1.284  16.979  0.94  −2.157  14.285  0.881
HOUS  1.221  1.996  0.544  1.078  2.585  0.679  1.591  1.985  0.427
DENS  0.007  0.005  0.108  0.032  0.006  0.001  0.007  0.005  0.122
NONW  4.13  1.55  0.011  3.628  2.008  0.078  4.089  1.593  0.014
WWDRK  0.447  1.936  0.818  0.397  2.507  0.875  0.463  2.03  0.821
POOR  1.886  3.73  0.616  1.658  4.83  0.733  2.445  3.711  0.513
HC  −0.373  0.568  0.514  −0.328  0.736  0.658  −0.382  0.573  0.509
NOX  0.874  1.169  0.458  0.768  1.514  0.614  0.907  1.182  0.447
SOx  0.16  0.171  0.357  0.14  0.222  0.532  0.154  0.174  0.381
HUMID  1.915  1.264  0.137  1.687  1.637  0.308  2.127  1.242  0.094

Variable  YC3: Coef  SE  p-value  DK: Coef  SE  p-value  MRT: Coef  SE  p-value
PREC  2.05  0.799  0.014  1.222  1.058  0.254  1.461  1.065  0.177
JANT  −3.199  0.788  0.001  −1.769  1.237  0.16  −2.996  1.06  0.007
JULT  4.357  1.098  0.001  1.589  1.777  0.376  2.908  1.671  0.089
OVR65  1.449  1.191  0.23  10.353  7.898  0.197  6.262  7.474  0.407
POPN  0.65  0.151  0.001  113.923  40.077  0.007  45.208  16.413  0.009
EDUC  0.155  0.727  0.832  −1.714  12.99  0.896  −2.595  11.723  0.826
HOUS  3.165  1.082  0.005  1.538  1.938  0.432  3.054  1.747  0.087
DENS  0.007  0.005  0.128  0.007  0.005  0.114  0.007  0.005  0.153
NONW  3.193  0.883  0.001  4.076  1.546  0.012  3.807  1.543  0.018
WWDRK  0.166  1.195  0.89  0.427  1.928  0.826  0.298  1.871  0.874
POOR  4.294  1.563  0.009  2.401  3.652  0.514  4.899  3.443  0.162
HC  −0.452  0.517  0.386  −0.379  0.568  0.509  −0.401  0.583  0.495
NOX  1.173  1.036  0.264  0.899  1.169  0.446  1.016  1.194  0.399
SOx  0.161  0.157  0.311  0.157  0.171  0.365  0.145  0.173  0.409
HUMID  3.835  0.928  0.001  2.115  1.23  0.093  3.084  1.138  0.01

Variable  LKL: Coef  SE  p-value
PREC  1.639  1.117  0.149
JANT  −3.947  1.07  0.001
JULT  3.924  1.731  0.028
OVR65  3.134  7.676  0.685
POPN  −7.781  3.22  0.02
EDUC  −3.455  11.6  0.767
HOUS  4.24  1.779  0.022
DENS  0.006  0.005  0.206
NONW  3.606  1.609  0.03
WWDRK  0.215  1.924  0.912
POOR  6.833  3.559  0.061
HC  −0.42  0.615  0.499
NOX  1.108  1.257  0.383
SOx  0.134  0.182  0.464
HUMID  3.832  1.173  0.002
Notes: Coef: regression coefficient; SE: standard error.
Table 18. VIF for body data.

Variable  VIF
x1—weight (kg)  89.942300
x2—height (cm)  19.724450
x3—waist circumference (cm)  8.249625
x4—arm circumference (cm)  5.793885
x5—BMI  77.377840
Table 19. Results of body data analysis.

Variable  OLS: Coef  SE  p-value  LTE: Coef  SE  p-value  NBE2: Coef  SE  p-value
x1  −0.6869  0.1024  0.001  −0.6595  0.0984  0.001  −0.6839  0.1018  0.001
x2  0.5866  0.0552  0.001  0.5741  0.0531  0.001  0.5771  0.0535  0.001
x3  −0.4446  0.1443  0.0023  −0.4231  0.1384  0.0024  −0.4233  0.1416  0.003
x4  −0.7695  0.4189  0.0672  −0.7353  0.4012  0.0678  −0.7069  0.4041  0.0813
x5  2.791  0.4491  0.001  2.6719  0.4301  0.001  2.693  0.4317  0.001

Variable  YC3: Coef  SE  p-value  DK: Coef  SE  p-value  MRT: Coef  SE  p-value
x1  −0.6218  0.0937  0.001  −0.6844  0.1019  0.001  −0.6773  0.1008  0.001
x2  0.4717  0.0366  0.001  0.5777  0.0536  0.001  0.556  0.0498  0.001
x3  −0.1627  0.1109  0.1436  −0.4248  0.1418  0.003  −0.3758  0.1357  0.006
x4  −0.0778  0.2349  0.7408  −0.7096  0.4051  0.0809  −0.5674  0.3716  0.1278
x5  1.5139  0.2348  0.001  2.6998  0.4328  0.001  2.4746  0.3931  0.001

Variable  LKL: Coef  SE  p-value
x1  −0.6866  0.1023  0.001
x2  0.5854  0.055  0.001
x3  −0.4418  0.1439  0.0023
x4  −0.7608  0.417  0.0691
x5  2.778  0.4468  0.001
Share and Cite

Hoque, M.A.; Bursac, Z.; Kibria, B.M.G. Inferences About Two-Parameter Multicollinear Gaussian Linear Regression Models: An Empirical Type I Error and Power Comparison. Stats 2025, 8, 28. https://doi.org/10.3390/stats8020028
