Article

New Two-Parameter Ridge Estimators for Addressing Multicollinearity in Linear Regression: Theory, Simulation, and Applications

1
Department of Biostatistics, Florida International University, Miami, FL 33199, USA
2
Department of Mathematics and Statistics, Florida International University, Miami, FL 33199, USA
*
Author to whom correspondence should be addressed.
Stats 2026, 9(1), 15; https://doi.org/10.3390/stats9010015
Submission received: 25 December 2025 / Revised: 30 January 2026 / Accepted: 7 February 2026 / Published: 10 February 2026

Abstract

Multicollinearity among explanatory variables often undermines the reliability of the ordinary least squares (OLS) estimator in linear regression modeling. To overcome this limitation, a variety of two-parameter estimation strategies have been developed in prior research. We revisit these existing methods and present a new two-parameter ridge estimator that improves the accuracy of the regression coefficients in multicollinearity settings. A theoretical comparison of the estimators' efficiency is carried out under the mean squared error (MSE) framework. Furthermore, a comprehensive simulation study examines the empirical properties of all these estimators under different configurations, followed by a real-life dataset used to assess their practical performance.

1. Introduction

Traditional linear regression analysis commonly relies on the assumption that the predictor variables are independent of one another. When this assumption is fulfilled, the ordinary least squares (OLS) estimator achieves optimal statistical performance and is the best linear unbiased estimator (BLUE). In practical data analysis, however, predictor variables frequently exhibit moderate to strong correlations, violating the independence assumption and producing multicollinearity. Under multicollinearity, the explanatory (predictor) variables share redundant information and the OLS estimator becomes inefficient; its estimated coefficients may have inflated standard errors, incorrect signs, and wide confidence intervals (Nayem et al., 2024) [1].
Multicollinearity is therefore regarded as a key challenge in regression modeling. Various approaches have been proposed to mitigate it, including: (i) expanding the sample size, (ii) eliminating highly correlated predictors, (iii) combining variables through linear transformations, (iv) reducing dimension via principal component analysis (PCA), (v) using partial least squares regression (PLSR), and (vi) applying biased estimation techniques such as ridge regression and its extensions. Among these, ridge regression, originally proposed by Hoerl and Kennard (1970) [2], remains the most influential approach.
Since its inception, many alternative biased estimators have been developed to improve on the OLS estimator in the presence of multicollinearity (Hoerl and Kennard, 1970 [2]). The classical ridge regression estimator introduces a positive scalar k, known as the ridge or biasing parameter, that stabilizes the estimates but whose performance depends critically on its proper selection. Later, Liu (1993) [3] proposed an alternative biased estimator with a different biasing parameter d, which enters linearly and offers computational simplicity and interpretational advantages.
To further enhance estimation, several two-parameter estimators have been proposed that combine both k and d, providing greater flexibility and potential efficiency gains over single-parameter estimators such as ridge or Liu regression. Influential studies in this area include the works of Lukman et al. (2019 [4]), Yang and Chang (2010 [5]), and Owolabi et al. (2022 [6]), among others. Although different two-parameter estimators have been proposed to address multicollinearity, several limitations remain. First, the existing two-parameter estimators rely on chosen shrinkage parameters, which may not adequately adapt to the eigenstructure of the design matrix. Second, theoretical comparison results are often derived only under restrictive conditions, with limited empirical validation across a broad range of correlation levels and error variances. Finally, practical guidance on when newly proposed estimators outperform classical ridge or Liu-type estimators is often lacking in the literature.
This study undertakes a comparative evaluation of numerous established two-parameter estimators and presents a newly developed two-parameter ridge estimator that extends the classical ridge framework through an eigenvalue-adaptive biasing structure. We present theoretical comparisons using the MSE criterion and conduct an extensive Monte Carlo simulation across multiple scenarios. Additionally, we demonstrate practical applicability using a real-life dataset.
The proposed estimator involves two tuning parameters that play distinct but complementary statistical roles. The first parameter primarily controls the overall degree of shrinkage applied to the regression coefficients, thereby regulating the bias–variance trade-off in the presence of multicollinearity. Larger values of this parameter induce stronger shrinkage, which reduces the variance inflation caused by highly correlated predictors at the expense of a controlled increase in bias.
The second parameter governs the direction and structure of the shrinkage by incorporating information from the eigen structure of the design matrix. In particular, this parameter allows for differential shrinkage along principal component directions associated with large eigenvalues of X T X , enabling the estimator to adapt to the underlying correlation pattern among regressors. As a result, the estimator stabilizes coefficient estimates in ill-conditioned regression settings more effectively than one-parameter ridge-type estimators.
The rest of this manuscript is organized as follows. Section 2 introduces the materials and methods and the full set of estimators; in Section 3, their theoretical properties are compared under the MSE criterion. In Section 4, we describe the Monte Carlo simulation design, present the numerical findings, and provide a practical application using a real-world dataset. Finally, Section 5 concludes the paper with key findings and possible future extensions.

2. Materials and Methods

This section presents the linear regression framework and the various estimators that have been developed for it.

2.1. Regression and Estimator Construction

Consider the linear regression model of the following form:
Y = Xβ + ϵ,   ϵ ~ N(0, σ²I_n),
where Y denotes an n × 1 response vector, X represents an n × p design matrix of regressors with full column rank, β is a p × 1 vector of unknown regression coefficients, and ϵ is an n × 1 vector of random errors, assumed normally distributed with E(ϵ) = 0 and Var(ϵ) = σ²I_n, where I_n is the n × n identity matrix.
The ordinary least squares estimator of β in Equation (1) is defined as follows:
β̂_OLS = (X^T X)^(−1) X^T Y.
Let R be an orthogonal matrix such that R^T X^T X R = Λ = diag(λ_1, λ_2, …, λ_p), where λ_j is the j-th eigenvalue of X^T X; Λ and R denote the diagonal matrix of eigenvalues and the corresponding matrix of eigenvectors of X^T X, respectively. Using this transformation, Equation (1) can be equivalently written as follows:
Y = Z α + ϵ ,
where Z = XR, α = R^T β, and Z^T Z = Λ.
Now, the OLS estimator of α under the transformed model is therefore
α̂_OLS = Λ^(−1) Z^T Y.
The statistical properties of the OLS estimator are
Bias(α̂_OLS) = 0,
Cov(α̂_OLS) = σ²Λ^(−1),
where σ² is estimated by σ̂² = (Y − Zα̂_OLS)^T(Y − Zα̂_OLS)/(n − p − 1).
MMSE(α̂_OLS) = Cov(α̂_OLS) + Bias(α̂_OLS)Bias(α̂_OLS)^T = σ²Λ^(−1),
MSE(α̂_OLS) = E[(α̂_OLS − α)^T(α̂_OLS − α)] = tr(MMSE(α̂_OLS)) = σ² Σ_{j=1}^p 1/λ_j.
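The canonical-form quantities above are straightforward to compute. The following sketch (Python with NumPy; our own illustration, not the authors' code) forms Z = XR from the spectral decomposition of X^T X and evaluates α̂_OLS, σ̂², and the scalar MSE:

```python
import numpy as np

def canonical_ols(X, y):
    """Spectral decomposition X'X = R Lambda R', canonical regressors Z = XR,
    OLS estimate alpha_OLS = Lambda^{-1} Z'y, and the scalar MSE estimate
    sigma^2 * sum_j 1/lambda_j."""
    n, p = X.shape
    lam, R = np.linalg.eigh(X.T @ X)      # eigenvalues and eigenvectors of X'X
    Z = X @ R                             # Z'Z = Lambda (diagonal)
    alpha = (Z.T @ y) / lam               # Lambda^{-1} Z'y
    resid = y - Z @ alpha
    sigma2 = resid @ resid / (n - p - 1)  # error-variance estimate from the text
    mse = sigma2 * np.sum(1.0 / lam)      # scalar MSE of alpha_OLS
    return alpha, lam, sigma2, mse
```

Since β = Rα, the estimate in the original coordinates is recovered as R @ alpha.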

2.2. Two-Parameter Estimators

This section examines the different types of two-parameter estimators that have previously been introduced in the literature.

2.2.1. Liu Type Two-Parameter Estimator

Liu (2003) [7] proposed a two-parameter estimator designed to mitigate multicollinearity, defined as
α̂_LTE = (Λ + kI)^(−1)(Z^T Y − dα*),
where α* is any estimator of α; choosing α* = α̂_OLS gives
α̂_LTE = (Λ + kI)^(−1)(Λ − dI)α̂_OLS.
The bias, the covariance matrix, the MMSE, and the scalar MSE are given, respectively, by
Bias(α̂_LTE) = −(d + k)(Λ + kI)^(−1)α,
Cov(α̂_LTE) = σ²A_LTE Λ^(−1) A_LTE^T,
where A_LTE = (Λ + kI)^(−1)(Λ − dI).
MMSE(α̂_LTE) = σ²A_LTE Λ^(−1) A_LTE^T + (d + k)²(Λ + kI)^(−1)αα^T(Λ + kI)^(−1),
MSE(α̂_LTE) = σ² Σ_{j=1}^p (λ_j − d)²/[λ_j(λ_j + k)²] + Σ_{j=1}^p (d + k)²α_j²/(λ_j + k)².
For choosing k and d, Liu (2003) [7] suggested
k̂_LTE = (λ_1 − 100λ_p)/99,   d̂_LTE = [Σ_{j=1}^p (σ̂² − k̂α_j²)/(λ_j + k̂)²] / [Σ_{j=1}^p (σ̂² + λ_jα_j²)/(λ_j(λ_j + k̂)²)].
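In canonical form the Liu-type estimator is a componentwise rescaling of α̂_OLS, so it can be sketched in a few lines (Python/NumPy; an illustration with k and d supplied by the user, not the authors' code):

```python
import numpy as np

def liu_type(alpha_ols, lam, k, d):
    """Liu-type two-parameter estimator in canonical form:
    alpha_LTE = (Lambda + k I)^{-1} (Lambda - d I) alpha_OLS."""
    return (lam - d) / (lam + k) * alpha_ols

def mse_liu_type(alpha, lam, sigma2, k, d):
    """Scalar MSE of the Liu-type estimator: variance part plus squared bias."""
    var = sigma2 * np.sum((lam - d) ** 2 / (lam * (lam + k) ** 2))
    bias2 = np.sum((d + k) ** 2 * alpha ** 2 / (lam + k) ** 2)
    return var + bias2
```

With k = d = 0 the estimator and its MSE reduce to the OLS quantities, which gives a quick correctness check.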

2.2.2. Ozkale and Kaciranlar Two-Parameter Estimator

An additional two-parameter estimator, developed by Ozkale and Kaciranlar (2007) [8], is expressed as follows:
α̂_TP = A_TP α̂_OLS,
where A_TP = (Λ + kI)^(−1)(Λ + kdI).
Here, the bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_TP are as follows:
Bias(α̂_TP) = k(d − 1)(Λ + kI)^(−1)α,   Cov(α̂_TP) = σ²A_TP Λ^(−1) A_TP^T,
MMSE(α̂_TP) = σ²A_TP Λ^(−1) A_TP^T + k²(d − 1)²(Λ + kI)^(−1)αα^T(Λ + kI)^(−1),
MSE(α̂_TP) = σ² Σ_{j=1}^p (λ_j + kd)²/[λ_j(λ_j + k)²] + Σ_{j=1}^p k²(d − 1)²α_j²/(λ_j + k)².
Now, using the approach of Ozkale and Kaciranlar (2007) [8], the optimal d for a fixed value of k can be derived as
d̂_opt = [Σ_{j=1}^p (k̂α̂_j² − σ̂²)/(λ_j + k̂)²] / [Σ_{j=1}^p k̂(σ̂² + λ_jα̂_j²)/(λ_j(λ_j + k̂)²)],
and, for fixed d,
k̂_j = σ̂²/[α̂_j² − d(σ̂²/λ_j + α̂_j²)].
Hoerl et al. (1975) [9] suggested estimating k by the harmonic mean of the individual values k̂_j, following the ridge parameter originally proposed by Hoerl and Kennard (1970) [2]; both harmonic and arithmetic averaging can be used:
k̂_AM = (1/p) Σ_{j=1}^p σ̂²/[α̂_j² − d(σ̂²/λ_j + α̂_j²)]   and   k̂_HM = pσ̂² / Σ_{j=1}^p [α̂_j² − d(σ̂²/λ_j + α̂_j²)].
Since d̂_opt depends on k and the estimator of k depends on d, an iterative strategy is required for these parameters.
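The alternating scheme can be sketched as follows (Python/NumPy; our illustration of the optimal-d and averaged-k updates described above — the starting value and iteration count are arbitrary choices, and convergence is not guaranteed in general):

```python
import numpy as np

def iterate_k_d(alpha, lam, sigma2, n_iter=25):
    """Alternate between the optimal d for fixed k and the arithmetic-mean
    estimate of k for fixed d."""
    k = sigma2 / np.mean(alpha ** 2)   # ridge-type starting value for k
    d = 0.0
    for _ in range(n_iter):
        # optimal d given the current k
        num = np.sum((k * alpha ** 2 - sigma2) / (lam + k) ** 2)
        den = np.sum(k * (sigma2 + lam * alpha ** 2) / (lam * (lam + k) ** 2))
        d = num / den
        # arithmetic-mean k given the current d
        k = np.mean(sigma2 / (alpha ** 2 - d * (sigma2 / lam + alpha ** 2)))
    return k, d
```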

2.2.3. New Biased Estimator Based on Ridge

Proposed by Sakallıoglu and Kaciranlar (2008) [10], this estimator is defined as
α̂_NBE = (Λ + I)^(−1)(Z^T Y + dα̂_Ridge) = (Λ + I)^(−1)(Λ + (d + k)I)(Λ + kI)^(−1)Λ α̂_OLS.
The bias, the covariance matrix, the MMSE, and the scalar MSE are given, respectively, by
Bias(α̂_NBE) = (A_NBE − I)α,
Cov(α̂_NBE) = σ²A_NBE Λ^(−1) A_NBE^T,
where A_NBE = (Λ + I)^(−1)(Λ + (d + k)I)(Λ + kI)^(−1)Λ.
MMSE(α̂_NBE) = σ²A_NBE Λ^(−1) A_NBE^T + (A_NBE − I)αα^T(A_NBE − I)^T,
MSE(α̂_NBE) = σ² Σ_{j=1}^p λ_j(λ_j + d + k)²/[(λ_j + 1)²(λ_j + k)²] + Σ_{j=1}^p ((1 − d)λ_j + k)²α_j²/[(λ_j + k)²(λ_j + 1)²].
To estimate the unknown parameters k and d, one may select
d̂_opt = [Σ_{j=1}^p λ_j(α̂_j² − σ̂²)/((λ_j + 1)²(λ_j + k))] / [Σ_{j=1}^p λ_j(λ_jα̂_j² + σ̂²)/((λ_j + 1)²(λ_j + k)²)],
with k fixed at, for example, k̂_HK = σ̂²/α̂_max² or k̂_HKB = pσ̂² / Σ_{j=1}^p α̂_j².

2.2.4. Yang and Chang Two-Parameter Estimator

A two-parameter estimator proposed by Yang and Chang (2010) [5] is defined as follows:
α̂_YC = Y_{k,d} α̂_OLS,
where Y_{k,d} = (Λ + I)^(−1)(Λ + dI)(Λ + kI)^(−1)Λ, and k and d are biasing parameters.
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_YC are as follows:
Bias(α̂_YC) = (Y_{k,d} − I)α,   Cov(α̂_YC) = σ²Y_{k,d} Λ^(−1) Y_{k,d}^T,
MMSE(α̂_YC) = σ²Y_{k,d} Λ^(−1) Y_{k,d}^T + (Y_{k,d} − I)αα^T(Y_{k,d} − I)^T,
MSE(α̂_YC) = σ² Σ_{j=1}^p λ_j(λ_j + d)²/[(λ_j + 1)²(λ_j + k)²] + Σ_{j=1}^p ((k + 1 − d)λ_j + k)²α_j²/[(λ_j + k)²(λ_j + 1)²].
Fixing d, the optimal k is
k̂_j = [σ̂²(λ_j + d) − (1 − d)λ_jα̂_j²] / [(λ_j + 1)α̂_j²],
and different averaging approaches, such as the arithmetic mean and the median, can be applied to the k̂_j. For fixed k, we get
d̂_opt = [Σ_{j=1}^p (((k + 1)λ_j + k)λ_jα̂_j² − λ_j²σ̂²)/((λ_j + 1)²(λ_j + k)²)] / [Σ_{j=1}^p (σ̂² + λ_jα̂_j²)λ_j/((λ_j + 1)²(λ_j + k)²)].

2.2.5. Modified Almost Unbiased Liu Estimator

Proposed by Arumairajan and Wijekoon (2017) [11], this estimator is defined as follows:
α̂_MAULE = A_MAULE α̂_OLS,
where A_MAULE = [I − (1 − d)²(Λ + I)^(−2)](Λ + kI)^(−1)Λ.
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_MAULE are as follows:
Bias(α̂_MAULE) = (A_MAULE − I)α,
Cov(α̂_MAULE) = σ²A_MAULE Λ^(−1) A_MAULE^T,
MMSE(α̂_MAULE) = σ²A_MAULE Λ^(−1) A_MAULE^T + (A_MAULE − I)αα^T(A_MAULE − I)^T,
MSE(α̂_MAULE) = σ² Σ_{j=1}^p λ_j(λ_j + 2 − d)²(λ_j + d)²/[(λ_j + k)²(λ_j + 1)^4] + Σ_{j=1}^p [λ_j(λ_j + 2 − d)(λ_j + d) − (λ_j + k)(λ_j + 1)²]²α_j²/[(λ_j + k)²(λ_j + 1)^4].
Now, d̂_opt = 1 − [Σ_{j=1}^p λ_j(σ̂² − kα̂_j²)/((λ_j + k)²(λ_j + 1)²)] / [Σ_{j=1}^p λ_j(σ̂² + λ_jα̂_j²)/((λ_j + 1)^4(λ_j + k)²)], and k̂ = σ̂²(λ_j + 1)²λ_j/[(1 − d)²λ_j(σ̂² + α̂_j²) − α̂_j²(λ_j + 1)²], with the arithmetic mean of the k̂ values used as the final estimate.

2.2.6. Modified Almost Unbiased Two-Parameter Estimator

Proposed by Lukman, Adewuyi, Oladejo, and Olukayode (2019) [4], this estimator is defined as follows:
α̂_MAUTP = A_MAUTP α̂_OLS,
where A_MAUTP = [I − k²(1 − d)²(Λ + kI)^(−2)](Λ + kI)^(−1)Λ.
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_MAUTP are as follows:
Bias(α̂_MAUTP) = (A_MAUTP − I)α,
Cov(α̂_MAUTP) = σ²A_MAUTP Λ^(−1) A_MAUTP^T,
MMSE(α̂_MAUTP) = σ²A_MAUTP Λ^(−1) A_MAUTP^T + (A_MAUTP − I)αα^T(A_MAUTP − I)^T,
MSE(α̂_MAUTP) = σ² Σ_{j=1}^p λ_j(λ_j + 2k − kd)²(λ_j + kd)²/(λ_j + k)^6 + Σ_{j=1}^p [λ_j(λ_j + 2k − kd)(λ_j + kd) − (λ_j + k)³]²α_j²/(λ_j + k)^6.
To estimate k and d, we use k̂_j = σ̂²/α̂_j², with harmonic-mean version k̂_HMP = pσ̂² / Σ_{j=1}^p α̂_j², and d̂ = min_j[α̂_j²/(σ̂²/α̂_j² + α̂_j²)].

2.2.7. Modified Ridge Type

A modified ridge-type (MRT) estimator is proposed by Lukman et al. (2019) [12] and is defined as follows:
α̂_MRT = M_{k,d} α̂_OLS,
where M_{k,d} = (Λ + k(1 + d)I)^(−1)Λ, and k and d are biasing parameters.
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_MRT are as follows:
Bias(α̂_MRT) = −k(1 + d)(Λ + k(1 + d)I)^(−1)α,
Cov(α̂_MRT) = σ²M_{k,d} Λ^(−1) M_{k,d}^T,
MMSE(α̂_MRT) = σ²M_{k,d} Λ^(−1) M_{k,d}^T + k²(1 + d)²(Λ + k(1 + d)I)^(−1)αα^T(Λ + k(1 + d)I)^(−1),
MSE(α̂_MRT) = σ² Σ_{j=1}^p λ_j/(λ_j + k(1 + d))² + Σ_{j=1}^p k²(1 + d)²α_j²/(λ_j + k(1 + d))².
Following Lukman et al. (2019) [12],
k̂_j = σ̂²/[(1 + d)α̂_j²].
The harmonic mean of the k̂_j is
k̂_HMP = pσ̂² / [(1 + d) Σ_{j=1}^p α̂_j²],
and
d̂_j = σ̂²/(kα̂_j²) − 1.
The harmonic mean of the d̂_j is
d̂_MRT = p / Σ_{j=1}^p (1/d̂_j).
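A compact sketch of the MRT estimator with the harmonic-mean choice of k (Python/NumPy; the fixed value d = 0.5 below is an illustrative choice of ours, not a recommendation from the text):

```python
import numpy as np

def mrt(alpha_ols, lam, sigma2, d=0.5):
    """Modified ridge-type estimator
    alpha_MRT = (Lambda + k(1+d)I)^{-1} Lambda alpha_OLS,
    with the harmonic-mean parameter k = p*sigma^2 / ((1+d)*sum(alpha_j^2))."""
    p = lam.size
    k = p * sigma2 / ((1 + d) * np.sum(alpha_ols ** 2))
    return lam / (lam + k * (1 + d)) * alpha_ols
```

Every component is multiplied by a factor in (0, 1), so the MRT estimate always shrinks α̂_OLS toward zero without changing signs.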

2.2.8. A New Biased Estimator

A new biased two-parameter estimator was developed by Dawoud and Kibria (2020) [13] and is defined as follows:
α̂_DK = A_DK α̂_OLS,
where A_DK = (Λ + k(1 + d)I)^(−1)(Λ − k(1 + d)I).
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_DK are as follows:
Bias(α̂_DK) = −2k(1 + d)(Λ + k(1 + d)I)^(−1)α,
Cov(α̂_DK) = σ²A_DK Λ^(−1) A_DK^T,
MMSE(α̂_DK) = σ²A_DK Λ^(−1) A_DK^T + 4k²(1 + d)²(Λ + k(1 + d)I)^(−1)αα^T(Λ + k(1 + d)I)^(−1),
MSE(α̂_DK) = σ² Σ_{j=1}^p (λ_j − k(1 + d))²/[λ_j(λ_j + k(1 + d))²] + Σ_{j=1}^p 4k²(1 + d)²α_j²/(λ_j + k(1 + d))².
Fixing d, the optimal k is
k̂_j = σ̂²/[(1 + d̂)(σ̂²/λ_j + 2α̂_j²)],
and k̂_min = min_j(k̂_j). Then, we get
d̂_opt = σ̂²λ_j/m − 1,
where m = k̂(σ̂² + 2λ_jα̂_j²).

2.2.9. Generalized Two-Parameter Estimator

Proposed by Zeinal (2020) [14], this estimator is defined as follows:
α̂_GTP = (Λ + kI)^(−1)(Λ + kD)α̂_OLS,
where D = diag(d_1, d_2, …, d_p).
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_GTP are as follows:
Bias(α̂_GTP) = (A_GTP − I)α,
Cov(α̂_GTP) = σ²A_GTP Λ^(−1) A_GTP^T,
MMSE(α̂_GTP) = σ²A_GTP Λ^(−1) A_GTP^T + (A_GTP − I)αα^T(A_GTP − I)^T,
MSE(α̂_GTP) = σ² Σ_{j=1}^p (λ_j + kd_j)²/[λ_j(λ_j + k)²] + Σ_{j=1}^p k²(d_j − 1)²α_j²/(λ_j + k)²,
where A_GTP = (Λ + kI)^(−1)(Λ + kD).
For the unknown parameters, the optimal d_j, j = 1, 2, …, p, for fixed k is
d̂_j,opt = (k̂α̂_j² − σ̂²)λ_j / [k̂(σ̂² + α̂_j²λ_j)],   j = 1, 2, …, p,
and then
k̂_j = σ̂²/[α̂_j² − d_j(σ̂²/λ_j + α̂_j²)],
with the arithmetic mean of the k̂_j used as the final estimate.

2.2.10. Modified Liu Ridge Type

Proposed by Aslam and Ahmad (2022) [15], this estimator is defined as
α̂_MLRT = A_MLRT α̂_OLS,
where A_MLRT = (Λ + I)^(−1)(Λ + dI)(Λ + k(1 + d)I)^(−1)Λ.
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_MLRT are as follows:
Bias(α̂_MLRT) = (A_MLRT − I)α,
Cov(α̂_MLRT) = σ²A_MLRT Λ^(−1) A_MLRT^T,
MMSE(α̂_MLRT) = σ²A_MLRT Λ^(−1) A_MLRT^T + (A_MLRT − I)αα^T(A_MLRT − I)^T,
MSE(α̂_MLRT) = σ² Σ_{j=1}^p λ_j(λ_j + d)²/[(λ_j + k(1 + d))²(λ_j + 1)²] + Σ_{j=1}^p [(1 − d)λ_j + k(1 + d)(λ_j + 1)]²α_j²/[(λ_j + k(1 + d))²(λ_j + 1)²].
Now, k̂_opt = [σ̂²(λ_j + d) − (1 − d)λ_jα̂_j²]/[(1 + d)(λ_j + 1)α̂_j²] and, writing G_j = λ_j² − kλ_j² − kλ_j,
d̂_opt = [Σ_{j=1}^p ((λ_j + k)(λ_j + 1)G_jα_j² − σ²λ_jG_j)/(λ_j + 1)²] / [Σ_{j=1}^p (σ²G_j + (λ_j − k(λ_j + 1))G_jα_j²)/(λ_j + 1)²].

2.2.11. New Biased Regression Two-Parameter Estimator

Dawoud, Lukman, and Haadi (2022) [16] proposed the following estimator:
α̂_NBR = RWM α̂_OLS,
where R = (Λ + kI)^(−1)(Λ + kdI), and W and M correspond to the Kibria–Lukman (KL) estimator, with W = (Λ + kI)^(−1) and M = (Λ − kI).
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_NBR are as follows:
Bias(α̂_NBR) = (RWM − I)α,
Cov(α̂_NBR) = σ²RWM Λ^(−1) M^T W^T R^T,
MMSE(α̂_NBR) = σ²RWM Λ^(−1) M^T W^T R^T + (RWM − I)αα^T(RWM − I)^T,
MSE(α̂_NBR) = σ² Σ_{j=1}^p (λ_j − k)²(λ_j + kd)²/[λ_j(λ_j + k)^4] + Σ_{j=1}^p [(λ_j + kd)(λ_j − k) − (λ_j + k)²]²α_j²/(λ_j + k)^4.
The estimators of k and d are obtained by minimizing the scalar MSE in turn; the resulting closed-form expressions are lengthy (see Dawoud, Lukman, and Haadi (2022) [16]), and the minimum of the resulting values over j is taken for each parameter.

2.2.12. Biased Two-Parameter Estimator

Idowu, Oladapo, Owolabi, and Ayinde (2022) [17] proposed this biased two-parameter estimator, defined as follows:
α̂_BTP = EH α̂_OLS,
where E = (Λ + I)^(−1)(Λ − dI) and H = (Λ + k(1 + d)I)^(−1)(Λ − k(1 + d)I).
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_BTP are as follows:
Bias(α̂_BTP) = (EH − I)α,
Cov(α̂_BTP) = σ²EH Λ^(−1) H^T E^T,
MMSE(α̂_BTP) = σ²EH Λ^(−1) H^T E^T + (EH − I)αα^T(EH − I)^T,
MSE(α̂_BTP) = σ² Σ_{j=1}^p (λ_j − d)²(λ_j − k(1 + d))²/[λ_j(λ_j + 1)²(λ_j + k(1 + d))²] + Σ_{j=1}^p [(λ_j − d)(λ_j − k(1 + d)) − (λ_j + 1)(λ_j + k(1 + d))]²α_j²/[(λ_j + 1)²(λ_j + k(1 + d))²].
The selection formulas for k̂ and d̂ follow from minimizing the scalar MSE with the other parameter held fixed; the closed-form expressions are lengthy and can be found in Idowu et al. (2022) [17].

2.2.13. New Two-Parameter Estimator

A new two-parameter estimator has been developed by Owolabi, Ayinde, Idowu, Oladapo, and Lukman (2022) [18]:
α̂_NTP = A_NTP α̂_OLS,
where A_NTP = (Λ + kdI)^(−1)(Λ − kdI).
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_NTP are as follows:
Bias(α̂_NTP) = (A_NTP − I)α,
Cov(α̂_NTP) = σ²A_NTP Λ^(−1) A_NTP^T,
MMSE(α̂_NTP) = σ²A_NTP Λ^(−1) A_NTP^T + (A_NTP − I)αα^T(A_NTP − I)^T,
MSE(α̂_NTP) = σ² Σ_{j=1}^p (λ_j − kd)²/[λ_j(λ_j + kd)²] + Σ_{j=1}^p 4k²d²α_j²/(λ_j + kd)².
Now, k̂_j = σ̂²/[d(2α̂_j² + σ̂²/λ_j)], with harmonic mean k̂_HM = pσ̂² / Σ_{j=1}^p d(2α̂_j² + σ̂²/λ_j), and, for fixed k, d̂_j = σ̂²/[k(2α̂_j² + σ̂²/λ_j)].

2.2.14. New Ridge-Type Estimator

The new ridge-type two-parameter estimator proposed by Owolabi, Ayinde, and Alabi (2022) [6] is defined as follows:
α̂_NRT = A_NRT α̂_OLS,
where A_NRT = (Λ + (k + d)I)^(−1)Λ.
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_NRT are as follows:
Bias(α̂_NRT) = (A_NRT − I)α,
Cov(α̂_NRT) = σ²A_NRT Λ^(−1) A_NRT^T,
MMSE(α̂_NRT) = σ²A_NRT Λ^(−1) A_NRT^T + (A_NRT − I)αα^T(A_NRT − I)^T,
MSE(α̂_NRT) = σ² Σ_{j=1}^p λ_j/(λ_j + k + d)² + Σ_{j=1}^p (k + d)²α_j²/(λ_j + k + d)².
Now, k̂ = min_j(σ̂²/α̂_j²) − d and d̂ = min_j(σ̂²/α̂_j²) − k.
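Because the NRT estimator depends on k and d only through their sum, it can be sketched in a few lines once k + d is chosen (Python/NumPy; our illustration using the min rule above, with an arbitrary value of d):

```python
import numpy as np

def nrt(alpha_ols, lam, sigma2, d=0.1):
    """New ridge-type estimator
    alpha_NRT = (Lambda + (k+d)I)^{-1} Lambda alpha_OLS,
    with k = min_j(sigma^2/alpha_j^2) - d, so k + d = min_j(sigma^2/alpha_j^2)."""
    k = np.min(sigma2 / alpha_ols ** 2) - d
    return lam / (lam + k + d) * alpha_ols
```

With this rule, k + d equals the Hoerl–Kennard-type quantity min_j(σ̂²/α̂_j²) regardless of d, so the choice of d only splits the total shrinkage between the two parameters without changing the estimate.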

2.2.15. Liu–Kibria–Lukman Two-Parameter Estimator

Proposed by Idowu et al. (2023) [19], this two-parameter estimator is defined as follows:
α̂_LKL = CA α̂_OLS,
where C = (Λ + I)^(−1)(Λ + dI) and A = (Λ + kI)^(−1)(Λ − kI).
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_LKL are as follows:
Bias(α̂_LKL) = (CA − I)α,
Cov(α̂_LKL) = σ²CA Λ^(−1) A^T C^T,
MMSE(α̂_LKL) = σ²CA Λ^(−1) A^T C^T + (CA − I)αα^T(CA − I)^T,
MSE(α̂_LKL) = σ² Σ_{j=1}^p (λ_j − k)²(λ_j + d)²/[λ_j(λ_j + k)²(λ_j + 1)²] + Σ_{j=1}^p [(λ_j + d)(λ_j − k) − (λ_j + k)(λ_j + 1)]²α_j²/[(λ_j + k)²(λ_j + 1)²].
The biasing parameter d̂ is obtained by minimizing the scalar MSE for fixed k (see Idowu et al. (2023) [19] for the closed form), and for the biasing parameter k the proposed LKL choice is
k̂ = min_j[σ̂²/(2α̂_j² + σ̂²/λ_j)].

2.2.16. Two-Parameter Ridge Estimator

Proposed by Lipovetsky and Conklin (2005) [20], a two-parameter ridge estimator is defined as
α̂_TPR = q(Λ + kI)^(−1)Λ α̂_OLS,
where q = [(Z^T Y)^T(Λ + kI)^(−1)(Z^T Y)] / [(Z^T Y)^T(Λ + kI)^(−1)Λ(Λ + kI)^(−1)(Z^T Y)].
The bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_TPR are as follows:
Bias(α̂_TPR) = (A_TPR − I)α,
Cov(α̂_TPR) = σ²A_TPR Λ^(−1) A_TPR^T,
where A_TPR = q(Λ + kI)^(−1)Λ.
MMSE(α̂_TPR) = σ²A_TPR Λ^(−1) A_TPR^T + (A_TPR − I)αα^T(A_TPR − I)^T,
MSE(α̂_TPR) = σ² Σ_{j=1}^p q²λ_j/(λ_j + k)² + Σ_{j=1}^p (qλ_j − λ_j − k)²α_j²/(λ_j + k)².
Following Khan et al. (2024) [21], we use the initial value k̂ = Σ_{j=1}^p λ_jα̂_j²σ̂²/α̂_max², and, as in Toker and Kaçiranlar (2013) [22], the corresponding initial value of q is computed as
q̂ = [Σ_{j=1}^p α̂_j²λ_j/(λ_j + k̂)] / [Σ_{j=1}^p (σ̂²λ_j + α̂_j²λ_j²)/(λ_j + k̂)²].
The value of q̂ is then used to compute the corresponding optimum
k̂_opt = [q̂ Σ_{j=1}^p σ̂²λ_j + (q̂ − 1) Σ_{j=1}^p α̂_j²λ_j²] / Σ_{j=1}^p α̂_j²λ_j².

2.2.17. New Proposed Minimum Eigenvalue Two-Parameter Ridge Estimator

The two-parameter ridge estimator of Lipovetsky and Conklin (2005) [20] is defined as
α̂_TPR = q(Λ + kI)^(−1)Λ α̂_OLS,
where
q = [(Z^T Y)^T(Λ + kI)^(−1)(Z^T Y)] / [(Z^T Y)^T(Λ + kI)^(−1)Λ(Λ + kI)^(−1)(Z^T Y)],
which can be written as
q = [(Z^T Y)^T(Λ + kI)^(−1)(Λ + kI)(Λ + kI)^(−1)(Z^T Y)] / [(Z^T Y)^T(Λ + kI)^(−1)Λ(Λ + kI)^(−1)(Z^T Y)].
Since the ridge estimator of Hoerl and Kennard (1970) [2] is α̂_k = (Λ + kI)^(−1)Z^T Y, q can be expressed as
q = [α̂_k^T(Λ + kI)α̂_k] / [α̂_k^T Λ α̂_k].
Then, using the min–max principle, it can be shown that the value of q lies between
l_1 ≤ q ≤ l_p,
where l_1 and l_p are the minimum and maximum eigenvalues of (Λ + kI)Λ^(−1). From a preliminary simulation study, we found that the two-parameter estimator performed better when q = l_1. Therefore, we propose the following new ridge regression estimator:
α̂_NTPR = l_1(Λ + kI)^(−1)Λ α̂_OLS,
where l_1 is the minimum eigenvalue of the matrix (Λ + kI)Λ^(−1).
Now, the bias, the covariance matrix, the MMSE, and the scalar MSE of α̂_NTPR are as follows:
Bias(α̂_NTPR) = (A_NTPR − I)α,
Cov(α̂_NTPR) = σ²A_NTPR Λ^(−1) A_NTPR^T,
where A_NTPR = l_1(Λ + kI)^(−1)Λ.
MMSE(α̂_NTPR) = σ²A_NTPR Λ^(−1) A_NTPR^T + (A_NTPR − I)αα^T(A_NTPR − I)^T,
MSE(α̂_NTPR) = σ² Σ_{j=1}^p l_1²λ_j/(λ_j + k)² + Σ_{j=1}^p (l_1λ_j − λ_j − k)²α_j²/(λ_j + k)².
As before, we use the initial value k̂ = Σ_{j=1}^p λ_jα̂_j²σ̂²/α̂_max² of Khan et al. (2024) [21] and, following Toker and Kaçiranlar (2013) [22], compute
l̂_1 = [Σ_{j=1}^p α̂_j²λ_j/(λ_j + k̂)] / [Σ_{j=1}^p (σ̂²λ_j + α̂_j²λ_j²)/(λ_j + k̂)²].
The value of l̂_1 is then used to compute the corresponding optimum
k̂_opt = [l̂_1 Σ_{j=1}^p σ̂²λ_j + (l̂_1 − 1) Σ_{j=1}^p α̂_j²λ_j²] / Σ_{j=1}^p α̂_j²λ_j².
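Since (Λ + kI)Λ^(−1) is diagonal with entries (λ_j + k)/λ_j, its minimum eigenvalue is simply l_1 = 1 + k/λ_max, which makes the proposed estimator easy to compute. A sketch in Python/NumPy (our illustration; k is taken as given):

```python
import numpy as np

def ntpr(alpha_ols, lam, k):
    """Proposed minimum-eigenvalue two-parameter ridge estimator:
    alpha_NTPR = l1 (Lambda + kI)^{-1} Lambda alpha_OLS, where
    l1 = min_j (lambda_j + k)/lambda_j = 1 + k/lambda_max."""
    l1 = 1.0 + k / np.max(lam)
    return l1 * lam / (lam + k) * alpha_ols
```

The shrinkage factor l_1λ_j/(λ_j + k) equals 1 at λ_max and decreases for smaller eigenvalues, so the estimator leaves the best-conditioned direction untouched and shrinks only the directions responsible for variance inflation, matching the differential-shrinkage motivation given in the Introduction.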

3. Results

In this section, we discuss the theoretical comparison results for the newly proposed estimator. The following notation and lemmas are useful for the comparative analysis of these estimators.
Lemma 1.
Let M and N be n × n matrices with M > 0 and N ≥ 0 (N ≠ 0). Then M > N if and only if λ_1(NM^(−1)) < 1, where λ_1(NM^(−1)) is the largest eigenvalue of the matrix NM^(−1).
Lemma 2.
(Farebrother, 1976 [23]): Let M be an n × n positive definite matrix, let α be a non-zero n × 1 column vector, and let c be a positive scalar. Then cM − αα^T > 0 if and only if α^T M^(−1) α < c, and cM − αα^T ≥ 0 if and only if α^T M^(−1) α ≤ c.
Lemma 3.
(Trenkler and Toutenburg, 1990 [24]): Let α̂_j = A_j y, j = 1, 2, be two linear estimators of α. Suppose that D = Cov(α̂_1) − Cov(α̂_2) > 0, where Cov(α̂_j), j = 1, 2, denotes the covariance matrix of α̂_j, and let b_j = Bias(α̂_j) = (A_j X − I)α, j = 1, 2. Consequently,
Δ(α̂_1, α̂_2) = MMSE(α̂_1) − MMSE(α̂_2) = σ²D + b_1b_1^T − b_2b_2^T > 0
if and only if b_2^T(σ²D + b_1b_1^T)^(−1)b_2 < 1, where MMSE(α̂_j) = Cov(α̂_j) + b_jb_j^T.

3.1. Theoretical Comparison of Some Estimators

This section presents a theoretical comparison of the estimators based on the methodologies proposed by Hoque and Kibria (2023) [25]. While many pairwise comparisons are possible, we present a few of them in this section; the rest can be carried out in a similar way.

3.1.1. Comparison Between New Proposed Two-Parameter Ridge Estimator and OLS Estimator

The difference between MMSE(α̂_NTPR) and MMSE(α̂_OLS) can be written as
MMSE(α̂_OLS) − MMSE(α̂_NTPR) = σ²Λ^(−1) − σ²A_NTPR Λ^(−1) A_NTPR^T − b_NTPR b_NTPR^T.
For k > 0, we have the following theorem.
Theorem 1.
If k > 0 and b_NTPR = Bias(α̂_NTPR) is the bias of NTPR, then α̂_NTPR is better than the estimator α̂_OLS in the MMSE sense, that is, MMSE(α̂_OLS) − MMSE(α̂_NTPR) > 0, if and only if
b_NTPR^T [σ²(Λ^(−1) − A_NTPR Λ^(−1) A_NTPR^T)]^(−1) b_NTPR < 1.
Proof. 
The difference between the MMSE functions is
MMSE(α̂_OLS) − MMSE(α̂_NTPR) = σ²(Λ^(−1) − A_NTPR Λ^(−1) A_NTPR^T) − b_NTPR b_NTPR^T = σ² diag{1/λ_j − l_1²λ_j/(λ_j + k)²}_{j=1}^p − b_NTPR b_NTPR^T,
where Λ^(−1) − A_NTPR Λ^(−1) A_NTPR^T is positive definite if and only if (λ_j + k)² − l_1²λ_j² > 0. For k > 0, we get k² + 2λ_jk + λ_j²(1 − l_1²) > 0, and we conclude that Λ^(−1) − A_NTPR Λ^(−1) A_NTPR^T is positive definite. Using Lemma 2, the proof is completed. □
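As a numerical illustration of Theorem 1, the scalar MSE formulas can be evaluated for a hypothetical ill-conditioned spectrum (an example configuration of ours, not taken from the text):

```python
import numpy as np

# Hypothetical configuration: ill-conditioned eigenvalue spectrum.
lam = np.array([100.0, 10.0, 1.0, 0.01])   # eigenvalues of X'X
alpha = np.ones_like(lam)                   # canonical coefficients
sigma2, k = 1.0, 0.5
l1 = 1.0 + k / lam.max()                    # min eigenvalue of (Lambda+kI)Lambda^{-1}

mse_ols = sigma2 * np.sum(1.0 / lam)
mse_ntpr = (sigma2 * np.sum(l1 ** 2 * lam / (lam + k) ** 2)
            + np.sum((l1 * lam - lam - k) ** 2 * alpha ** 2 / (lam + k) ** 2))
# The OLS MSE is dominated by the 1/0.01 term; the NTPR MSE is far smaller here.
```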

3.1.2. Comparison Between New Proposed Two-Parameter Ridge Estimator and Liu Type Two-Parameter Estimator

The difference between MMSE(α̂_NTPR) and MMSE(α̂_LTE) is obtained as
MMSE(α̂_LTE) − MMSE(α̂_NTPR) = σ²A_LTE Λ^(−1) A_LTE^T − σ²A_NTPR Λ^(−1) A_NTPR^T + b_LTE b_LTE^T − b_NTPR b_NTPR^T.
For k > 0, we have the following theorem.
Theorem 2.
If k > 0, 0 < d < 1, and b_LTE = Bias(α̂_LTE) is the bias of LTE, then α̂_NTPR is better than the estimator α̂_LTE in the MMSE sense, that is, MMSE(α̂_LTE) − MMSE(α̂_NTPR) > 0, if and only if
b_NTPR^T [σ²(A_LTE Λ^(−1) A_LTE^T − A_NTPR Λ^(−1) A_NTPR^T) + b_LTE b_LTE^T]^(−1) b_NTPR < 1.
Proof. 
The difference between the MMSE functions is
MMSE(α̂_LTE) − MMSE(α̂_NTPR) = σ²(A_LTE Λ^(−1) A_LTE^T − A_NTPR Λ^(−1) A_NTPR^T) + b_LTE b_LTE^T − b_NTPR b_NTPR^T = σ² diag{(λ_j − d)²/[λ_j(λ_j + k)²] − l_1²λ_j/(λ_j + k)²}_{j=1}^p + b_LTE b_LTE^T − b_NTPR b_NTPR^T,
where A_LTE Λ^(−1) A_LTE^T − A_NTPR Λ^(−1) A_NTPR^T is positive definite if and only if (λ_j − d)² − l_1²λ_j² > 0. For k > 0 and 0 < d < 1, we get d² − 2dλ_j + λ_j²(1 − l_1²) > 0, and we conclude that A_LTE Λ^(−1) A_LTE^T − A_NTPR Λ^(−1) A_NTPR^T is positive definite. By Lemma 3, the proof is completed. □

3.1.3. Comparison Between New Proposed Two-Parameter Ridge Estimator and Ozkale and Kaciranlar Two-Parameter Estimator

The difference between MMSE(α̂_NTPR) and MMSE(α̂_TP) is obtained as
MMSE(α̂_TP) − MMSE(α̂_NTPR) = σ²A_TP Λ^(−1) A_TP^T − σ²A_NTPR Λ^(−1) A_NTPR^T + b_TP b_TP^T − b_NTPR b_NTPR^T.
For k > 0, we have the following theorem.
Theorem 3.
If k > 0, 0 < d < 1, and b_TP = Bias(α̂_TP) is the bias of TP, then α̂_NTPR is better than the estimator α̂_TP in the MMSE sense, that is, MMSE(α̂_TP) − MMSE(α̂_NTPR) > 0, if and only if
b_NTPR^T [σ²(A_TP Λ^(−1) A_TP^T − A_NTPR Λ^(−1) A_NTPR^T) + b_TP b_TP^T]^(−1) b_NTPR < 1.
Proof. 
The difference between the MMSE functions is
MMSE(α̂_TP) − MMSE(α̂_NTPR) = σ²(A_TP Λ^(−1) A_TP^T − A_NTPR Λ^(−1) A_NTPR^T) + b_TP b_TP^T − b_NTPR b_NTPR^T = σ² diag{(λ_j + kd)²/[λ_j(λ_j + k)²] − l_1²λ_j/(λ_j + k)²}_{j=1}^p + b_TP b_TP^T − b_NTPR b_NTPR^T,
where A_TP Λ^(−1) A_TP^T − A_NTPR Λ^(−1) A_NTPR^T is positive definite if and only if (λ_j + kd)² − l_1²λ_j² > 0. For k > 0 and 0 < d < 1, we get k²d² + 2kdλ_j + λ_j²(1 − l_1²) > 0, and we conclude that A_TP Λ^(−1) A_TP^T − A_NTPR Λ^(−1) A_NTPR^T is positive definite. By Lemma 3, the proof is completed. □

3.1.4. Comparison Between New Proposed Two-Parameter Ridge Estimator and New Biased Estimator Based on Ridge

The difference between MMSE(α̂_NTPR) and MMSE(α̂_NBE) is obtained as
MMSE(α̂_NBE) − MMSE(α̂_NTPR) = σ²A_NBE Λ^(−1) A_NBE^T − σ²A_NTPR Λ^(−1) A_NTPR^T + b_NBE b_NBE^T − b_NTPR b_NTPR^T.
For k > 0, we have the following theorem.
Theorem 4.
If k > 0, 0 < d < 1, and b_NBE = Bias(α̂_NBE) is the bias of NBE, then α̂_NTPR is better than the estimator α̂_NBE in the MMSE sense, that is, MMSE(α̂_NBE) − MMSE(α̂_NTPR) > 0, if and only if
b_NTPR^T [σ²(A_NBE Λ^(−1) A_NBE^T − A_NTPR Λ^(−1) A_NTPR^T) + b_NBE b_NBE^T]^(−1) b_NTPR < 1.
Proof. 
Using the difference between the MMSE functions,
MMSE(α̂_NBE) − MMSE(α̂_NTPR) = σ²(A_NBE Λ^(−1) A_NBE^T − A_NTPR Λ^(−1) A_NTPR^T) + b_NBE b_NBE^T − b_NTPR b_NTPR^T = σ² diag{λ_j(λ_j + d + k)²/[(λ_j + 1)²(λ_j + k)²] − l_1²λ_j/(λ_j + k)²}_{j=1}^p + b_NBE b_NBE^T − b_NTPR b_NTPR^T,
where A_NBE Λ^(−1) A_NBE^T − A_NTPR Λ^(−1) A_NTPR^T is positive definite if and only if (λ_j + d + k)² − l_1²(λ_j + 1)² > 0. For k > 0 and 0 < d < 1, we get (1 − l_1)λ_j + (k + d − l_1) > 0, and we conclude that A_NBE Λ^(−1) A_NBE^T − A_NTPR Λ^(−1) A_NTPR^T is positive definite. By Lemma 3, the proof is completed. □

3.1.5. Comparison Between Proposed Two-Parameter Ridge Estimator and Yang and Chang Two-Parameter Estimator

The difference between $\mathrm{MMSE}(\hat{\alpha}_{NTPR})$ and $\mathrm{MMSE}(\hat{\alpha}_{YC})$ is obtained by
$$\mathrm{MMSE}(\hat{\alpha}_{YC}) - \mathrm{MMSE}(\hat{\alpha}_{NTPR}) = \sigma^2 Y_{k,d}\Lambda^{-1}Y_{k,d}^{T} - \sigma^2 A_{NTPR}\Lambda^{-1}A_{NTPR}^{T} + b_{YC}b_{YC}^{T} - b_{NTPR}b_{NTPR}^{T}.$$
If we let k > 0 , we have the following theorem.
Theorem 5.
If $k > 0$ and $0 < d < 1$, and $b_{YC} = \mathrm{Bias}(\hat{\alpha}_{YC})$ is the bias of YC, then $\hat{\alpha}_{NTPR}$ is superior to $\hat{\alpha}_{YC}$ in the MMSE sense, that is, $\mathrm{MMSE}(\hat{\alpha}_{YC}) - \mathrm{MMSE}(\hat{\alpha}_{NTPR}) > 0$, if and only if
$$b_{NTPR}^{T}\left[\sigma^2\left(Y_{k,d}\Lambda^{-1}Y_{k,d}^{T} - A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}\right) + b_{YC}b_{YC}^{T}\right]^{-1} b_{NTPR} < 1.$$
Proof. 
Using the difference between scalar MSE functions,
$$\mathrm{MMSE}(\hat{\alpha}_{YC}) - \mathrm{MMSE}(\hat{\alpha}_{NTPR}) = \sigma^2\left(Y_{k,d}\Lambda^{-1}Y_{k,d}^{T} - A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}\right) + b_{YC}b_{YC}^{T} - b_{NTPR}b_{NTPR}^{T} = \sigma^2\,\mathrm{diag}\left\{\frac{\lambda_j(\lambda_j + d)^2}{(\lambda_j + 1)^2(\lambda_j + k)^2} - \frac{l_1^2\lambda_j}{(\lambda_j + k)^2}\right\}_{j=1}^{p} + b_{YC}b_{YC}^{T} - b_{NTPR}b_{NTPR}^{T},$$
where $Y_{k,d}\Lambda^{-1}Y_{k,d}^{T} - A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}$ is positive definite, which holds if and only if $\lambda_j(\lambda_j + d)^2 - l_1^2\lambda_j(\lambda_j + 1)^2 > 0$. For $k > 0$ and $0 < d < 1$, we get $(1 - l_1)\lambda_j + (d - l_1) > 0$, and we can conclude that $Y_{k,d}\Lambda^{-1}Y_{k,d}^{T} - A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}$ is positive definite. By Lemma 2, the proof is completed. □
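The factorization implicit in this step can be checked numerically. The short Python sketch below (ours, for illustration) confirms that $\lambda(\lambda + d)^2 - l_1^2\lambda(\lambda + 1)^2 = \lambda\left[(1 - l_1)\lambda + (d - l_1)\right]\left[(1 + l_1)\lambda + (d + l_1)\right]$; since $\lambda > 0$ and the second factor is always positive for $0 < l_1 < 1$, the sign of the difference is governed by the factor appearing in the proof.

```python
import numpy as np

# Check (illustrative): lam*(lam+d)^2 - l1^2*lam*(lam+1)^2 factors as
# lam * [(1-l1)*lam + (d-l1)] * [(1+l1)*lam + (d+l1)], so its sign matches
# the sign of the first bracket.
rng = np.random.default_rng(1)
for _ in range(10_000):
    lam = rng.uniform(0.01, 100.0)
    d = rng.uniform(0.01, 0.99)
    l1 = rng.uniform(0.01, 0.99)

    diff = lam * (lam + d) ** 2 - l1**2 * lam * (lam + 1) ** 2
    fact = lam * ((1 - l1) * lam + (d - l1)) * ((1 + l1) * lam + (d + l1))
    assert np.isclose(diff, fact)  # the factorization is exact

    first = (1 - l1) * lam + (d - l1)
    if abs(first) > 1e-9:          # skip near-zero boundary cases
        assert (diff > 0) == (first > 0)
```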

4. Discussion

A Monte Carlo simulation is conducted in this section to evaluate the performance of these estimators, following the framework of Hoque and Kibria (2024) [26]. Section 4.1 provides an overview of the simulation technique, Section 4.2 discusses the simulation results and analyzes the estimators' performance across multiple scenarios, and Section 4.3 applies the estimators to a real-life dataset.

4.1. Simulation Technique

In this section, we conduct a Monte Carlo simulation experiment following the frameworks of McDonald and Galarneau (1975) [27] and Gibbons (1981) [28]. The explanatory variables are generated according to the following model:
$$X_{ij} = (1 - \rho^2)^{1/2} z_{ij} + \rho z_{i,p+1}, \quad i = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, p,$$
where the $z_{ij}$ are independent standard normal pseudo-random variables with mean zero and unit variance. The parameter $\rho$ is the correlation among the explanatory variables, while $p$ denotes the total number of predictors. All variables have been standardized so that $X^{T}X$ and $X^{T}y$ are expressed in correlation form.
The response variable is generated according to the following:
$$Y_i = \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \epsilon_i,$$
where the errors $\epsilon_i$ are independently normally distributed as $N(0, \sigma^2)$.
The simulation considers sample sizes $n = 30, 50, 100$, and $200$; numbers of explanatory variables $p = 3, 5$, and $10$; correlation levels $\rho = 0.80, 0.90, 0.95$, and $0.99$; and three error standard deviations, $\sigma = 1, 5$, and $10$. Each configuration is replicated 5000 times, using R software version 4.4.0.
The parameter values $\beta_1, \beta_2, \ldots, \beta_p$ are chosen as the normalized eigenvector corresponding to the largest eigenvalue of $X^{T}X$, ensuring that $\beta^{T}\beta = 1$. The performance of the estimators is evaluated using the estimated MSE, defined as
$$\mathrm{MSE}(\hat{\beta}) = \frac{1}{5000}\sum_{j=1}^{5000} (\hat{\beta}_{ij} - \beta_i)^{T}(\hat{\beta}_{ij} - \beta_i),$$
where $\hat{\beta}_{ij}$ denotes the estimate of the $i$th parameter in the $j$th replication and $\beta_i$ is the true parameter value.
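To make the design concrete, the following sketch generates one configuration of the McDonald–Galarneau design and estimates MSEs by Monte Carlo. It is written in Python for illustration (the paper's simulations were run in R 4.4.0), the function name `simulate_mse` is ours, and plain ridge regression with parameter $k$ stands in for the paper's two-parameter estimators.

```python
import numpy as np

def simulate_mse(n=30, p=3, rho=0.9, sigma=1.0, k=0.5, reps=1000, seed=42):
    """Monte Carlo MSE of OLS and a ridge(k) estimator under the
    McDonald-Galarneau design described above (illustrative sketch)."""
    rng = np.random.default_rng(seed)

    # Correlated predictors: x_ij = sqrt(1 - rho^2) z_ij + rho z_{i,p+1}.
    Z = rng.standard_normal((n, p + 1))
    X = np.sqrt(1.0 - rho**2) * Z[:, :p] + rho * Z[:, [p]]
    X = (X - X.mean(axis=0)) / (X.std(axis=0) * np.sqrt(n))  # correlation form

    # True coefficients: normalized eigenvector for the largest eigenvalue of
    # X'X, so beta' beta = 1; the design is held fixed across replications.
    eigval, eigvec = np.linalg.eigh(X.T @ X)
    beta = eigvec[:, np.argmax(eigval)]

    mse_ols = mse_ridge = 0.0
    for _ in range(reps):
        y = X @ beta + rng.normal(0.0, sigma, size=n)
        XtX, Xty = X.T @ X, X.T @ y
        b_ols = np.linalg.solve(XtX, Xty)
        b_ridge = np.linalg.solve(XtX + k * np.eye(p), Xty)
        mse_ols += (b_ols - beta) @ (b_ols - beta)
        mse_ridge += (b_ridge - beta) @ (b_ridge - beta)
    return mse_ols / reps, mse_ridge / reps
```

Running, say, `simulate_mse(n=50, p=3, rho=0.99, sigma=5.0)` typically shows the shrinkage estimator's MSE far below that of OLS, mirroring the qualitative pattern reported in Table 1.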
Because of the different values of $n$, $p$, $\sigma$, and $\rho$, the estimated MSEs of the estimators are summarized in Table 1(1)–(12), with one sub-table per $(n, p)$ pair: Table 1(1)–(3) for $n = 30$ with $p = 3, 5, 10$; Table 1(4)–(6) for $n = 50$ with $p = 3, 5, 10$; Table 1(7)–(9) for $n = 100$ with $p = 3, 5, 10$; and Table 1(10)–(12) for $n = 200$ with $p = 3, 5, 10$. Figure 1a,b depict average MSE trends across $\rho$ and $\sigma$.

4.2. Discussion of Simulation Results

Table 1 summarizes the results of the simulated mean squared error (MSE) outcomes of these estimators under different controlled settings, considering various sample sizes n = 30 ,   50 ,   100 ,   200 and predictors p = 3 ,   5 ,   10 . These tables report MSE values across different correlation levels ( ρ = 0.80 ,   0.90 ,   0.95 ,   0.99 ) and varying error standard deviations ( σ = 1 ,   5 ,   10 ). The OLS estimator consistently exhibits the largest MSE, confirming its vulnerability under multicollinearity. Across all scenarios, a general trend is observed where increasing the sample size results in lower MSE values. In contrast, higher correlation levels and higher error standard deviations tend to produce larger MSE values.
An examination of Table 1 also reveals that increasing the number of explanatory variables generally leads to higher MSE values. For instance, Table 1(2) presents the MSE results for n = 30 and p = 5, where an increase in the correlation level corresponds to a rise in MSE. Among the compared estimators, YC1 and TPR consistently yield lower MSE values throughout the simulation, and the newly proposed NTPR estimator yields the lowest or near-lowest MSE values across almost all conditions, particularly when ρ ≥ 0.95 and σ ≥ 5.
For better visualization, Figure 1a plots the MSE values across σ for ρ = 0.80, n = 200, and p = 3, and Figure 1b plots the average trends of MSE against the correlation ρ and error standard deviation σ for fixed n = 30 and p = 3. Both figures are consistent with the analyses presented in Table 1(1)–(12).
Figure 1 visually summarizes the simulation results reported in Table 1. Figure 1a displays a bar plot of MSE values across the error standard deviations σ = 1, 5, 10 for fixed ρ = 0.80, n = 200, and p = 3. As σ increases, the MSE of OLS grows rapidly, confirming its instability, whereas the NTPR estimator shows only a mild, nearly flat increase in MSE, remaining the lowest across all noise levels. This pattern underscores the estimator's ability to balance bias and variance effectively, even when the explanatory variables are highly correlated.
Figure 1b illustrates the trend of average MSE as a function of ρ and σ for fixed n = 30 and p = 3. Here, each line represents one estimator's mean MSE trajectory. The slope of the NTPR curve is markedly smaller than that of competing estimators, demonstrating its strong resistance to multicollinearity and noise amplification. The consistency of this behavior across different parameter configurations provides empirical validation of the theoretical dominance conditions established in Section 3.1.

4.2.1. Estimator Performance as a Function of n

An analysis of Table 1 compares the estimated MSE values of the estimators across the sample sizes (n = 30, 50, 100, and 200), while accounting for the number of regressors ( p ), correlation level ( ρ ), and error standard deviation ( σ ). In general, as the sample size ( n ) increases, the estimated MSE decreases across the different combinations of p , ρ , and σ . Among the estimators, NTPR yields the lowest MSE values, followed closely by YC1 and TPR, which also demonstrate strong performance, while the OLS estimator shows the highest MSE values.

4.2.2. Estimator Performance as a Function of p

Table 1 presents a comparative analysis of the estimated MSE values for various estimators across the values of p (3, 5, and 10), considering multiple sample sizes ( n ), correlation levels ( ρ ) and error standard deviation σ . Generally, an increase in the number of predictors ( p ) is associated with an increase in the estimated MSE across most settings of n , ρ and σ . Among the estimators evaluated, NTPR achieves the lowest MSE values, with TPR and YC1 also performing well, and the OLS estimator records the highest MSE values.

4.2.3. Estimator Performance as a Function of ρ

Based on the results presented in Table 1, we assess the estimated MSE values of the estimators across the correlation levels ( ρ = 0.80, 0.90, 0.95, and 0.99), sample sizes ( n ), numbers of regressors ( p ), and error standard deviations ( σ ). Generally, an increase in the correlation level ( ρ ) is associated with an increase in the estimated MSE across the different sample sizes and error standard deviations. Among the evaluated estimators, NTPR produces the lowest MSE values, with TPR also demonstrating favorable performance, and OLS yields the highest MSE values overall.

4.2.4. Estimator Performance as a Function of σ

Table 1 provides a comparative assessment of the estimated MSE values for the estimators across different settings of the error standard deviation ( σ = 1, 5, and 10), sample sizes (n), numbers of regressors ( p ), and correlation levels ( ρ ). In most cases, increasing the error standard deviation ( σ ) leads to an increase in MSE across the examined scenarios. Among the considered estimators, NTPR achieves the lowest MSE values, while TPR also performs well, and OLS shows the highest MSE values.
Evaluating the performance of the estimators with respect to sample size ( n ), number of predictors ( p ), correlation coefficient ( ρ ), and error standard deviation ( σ ) provides valuable insight into their behavior and efficiency. Among the examined methods, NTPR and TPR demonstrate strong performance across varying conditions, whereas the OLS often exhibits relatively higher MSE values. Such comparative analysis is crucial for guiding the selection of suitable estimation techniques in regression modeling.

4.2.5. Estimator Performance Under Outliers

Following Yasmin and Kibria (2025) [29], Table 2(1)–(3) provides a complementary comparison for fixed n = 100; explanatory variables p = 3, 5, and 10; correlations ρ = 0.8, 0.9, 0.95, and 0.99; and error standard deviations of 1, 5, and 10. In addition, we generated 25% outliers in this model by randomly replacing some values of the error vector with the third quartile of the errors plus five times the interquartile range (IQR) of the errors.
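Our reading of this contamination scheme can be sketched as follows; the 25% fraction and the replacement value Q3 + 5·IQR follow the description above, while the function name `contaminate` and the Python implementation are ours.

```python
import numpy as np

def contaminate(errors, frac=0.25, rng=None):
    """Replace a random fraction of the error vector with Q3 + 5*IQR of the
    errors, mimicking the outlier scheme described above (illustrative)."""
    rng = np.random.default_rng(0) if rng is None else rng
    e = errors.copy()
    q1, q3 = np.percentile(errors, [25, 75])     # quartiles of the clean errors
    outlier_value = q3 + 5.0 * (q3 - q1)         # third quartile + 5 * IQR
    idx = rng.choice(e.size, size=int(frac * e.size), replace=False)
    e[idx] = outlier_value
    return e
```

Applied to a simulated error vector before forming the response, this injects a fixed proportion of identical large positive outliers, which is one simple way to realize the design above.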
The results in Table 2(1)–(3) reveal substantial deterioration of OLS and classical estimators, whose MSEs rose by factors of 20–50. However, NTPR remained remarkably stable. Its average MSE increased only marginally compared to OLS in Table 2(1). This confirms that the adaptive eigenvalue-based shrinkage in NTPR provides implicit robustness, even without explicit outlier weighting. From the practical standpoint, the simulation results suggest that the proposed NTPR estimator is particularly effective when the correlation among predictors exceeds 0.90 and the sample size is small to moderate. In such scenarios, classical ridge and Liu-type estimators may still suffer from variance inflation, whereas NTPR provides more stable estimation with substantially reduced MSE.

4.3. Real-Life Application

To demonstrate the practical usefulness of the proposed estimator, an empirical analysis is conducted using the "Body Fat" dataset, originally published in the textbook Biostatistics for the Biological and Health Sciences, second edition, by Triola et al. [30], and publicly available at www.triolastats.com/biostats3e (accessed on 21 June 2025). The dataset consists of measurements collected from 300 adult subjects and has been widely used in statistical modeling and regression analysis due to the presence of strong correlations among its variables.
For the analysis, the following model is considered:
$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \epsilon,$$
where the response variable Y denotes a high-density lipoprotein (HDL) cholesterol level (mg/dL), an important biomarker associated with cardiovascular health. The explanatory variables include body weight ( X 1 , in kg), height ( X 2 , in cm), waist circumference ( X 3 , in cm), and arm circumference ( X 4 , in cm). These anthropometric measures are biologically related and therefore expected to exhibit substantial multicollinearity.
Preliminary diagnostic measures, including pairwise correlations, the variance inflation factor (VIF) and the condition number of the design matrix, confirmed the presence of moderate to severe multicollinearity, making this dataset well suited for evaluating the performance of biased regression estimators that are designed to address multicollinearity.
The evidence of strong multicollinearity among the predictors is apparent from the correlation matrix in Table 3, from the fact that one of the variance inflation factors (VIFs) exceeds 10 (see Özkale and Kacıranlar, 2007 [8]), a commonly used threshold for multicollinearity, and from the block-like structure of the correlation pattern in Table 3.
The independent variables in this dataset thus exhibit high levels of correlation. Multicollinearity can also be checked using the variance inflation factor (VIF), calculated as
$$VIF_i = \frac{1}{1 - R_i^2}, \quad i = 1, \ldots, p,$$
where $R_i^2$ denotes the multiple correlation coefficient obtained from regressing the $i$th explanatory variable on the remaining predictors. The resulting VIF values are reported in Table 4.
Based on the VIF values in Table 4, variables such as x1 (17.04) and x3 (8.11) exhibit high multicollinearity, so we can conclude that there is dependency among these explanatory variables.
To further assess the severity of the multicollinearity, we computed the condition number, $CN = \left(\lambda_{\max}/\lambda_{\min}\right)^{1/2} = 88.4146$, where $\lambda_{\max}$ and $\lambda_{\min}$ are the largest and smallest eigenvalues of the design matrix. This value exceeds the commonly used threshold of 10, confirming the presence of severe multicollinearity (Liu, 2003 [7]) among the variables.
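The two diagnostics above can be reproduced with a short sketch; the function name `vif_and_condition_number` is ours, each $VIF_i$ is computed by regressing the $i$th standardized predictor on the others, and the condition number follows the eigenvalue-ratio definition just given.

```python
import numpy as np

def vif_and_condition_number(X):
    """VIF_i = 1/(1 - R_i^2), with R_i^2 from regressing the i-th standardized
    predictor on the remaining ones, and CN = sqrt(max/min eigenvalue of X'X)
    for the standardized design (illustrative implementation)."""
    n, p = X.shape
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each column
    vifs = []
    for i in range(p):
        xi = Xs[:, i]
        A = np.column_stack([np.ones(n), np.delete(Xs, i, axis=1)])
        coef, *_ = np.linalg.lstsq(A, xi, rcond=None)
        resid = xi - A @ coef
        r2 = 1.0 - (resid @ resid) / (xi @ xi)  # xi is centered, so TSS = xi'xi
        vifs.append(1.0 / (1.0 - r2))
    eig = np.linalg.eigvalsh(Xs.T @ Xs)
    cn = float(np.sqrt(eig.max() / eig.min()))
    return np.array(vifs), cn
```

On a design with two nearly collinear columns, both the corresponding VIFs and the condition number blow up, which is exactly the behavior the diagnostics are meant to flag.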
We checked the multicollinearity assumption for OLS and found severe multicollinearity among the independent variables. We also checked the assumptions of homoscedasticity and independence of the errors: the residuals-versus-fitted-values plot in Figure 2 shows a random scatter of residuals around zero with no systematic pattern or change in spread, indicating that homoscedasticity is satisfied, and the residuals display no discernible trend or clustering, suggesting that the errors are independent.
We applied all estimators from Section 2 to this dataset and evaluated the theoretical conditions established in Theorems 1–5 on it. The outcomes are reported in Table 5: all condition values are less than one, confirming that the requirements for MSE dominance of the NTPR estimator over the competing estimators are satisfied.
The estimated regression coefficients and corresponding MSE values are provided in Table 6. All biased estimators produce smaller MSE values than the ordinary least squares (OLS) estimator, although the magnitude of improvement varies across methods. While OLS produced MSE = 0.1545, most biased estimators achieved smaller errors, for example, LTE (0.1314), TP2 (0.1146), and MRT (0.0771). Notably, the proposed NTPR estimator attained the lowest MSE (≈1.7 × 10−5), a reduction of nearly 99.99% relative to OLS. This dramatic improvement indicates that the eigenvalue-adaptive shrinkage mechanism of NTPR effectively stabilizes the estimates without introducing excessive bias.
Coefficient patterns further reinforce this conclusion. While OLS yielded unstable magnitudes, NTPR produced smoother, moderate coefficients with consistent signs and reduced variability. These results align with the theoretical expectations from Section 3.1, showing that NTPR simultaneously controls variance inflation and numerical instability.
For practitioners, these findings indicate that NTPR can serve as a reliable alternative to traditional biased estimators when multicollinearity is severe and coefficient instability is a concern. The dramatic reduction in MSE demonstrates that the estimator is not only theoretically superior but also practically useful in real-world regression analysis.

5. Conclusions and Directions for Future Research

This study proposed a new minimum eigenvalue two-parameter ridge (NTPR) estimator for the Gaussian linear regression model to mitigate the challenges of multicollinearity. The proposed estimator was examined alongside a range of two-parameter estimators, including the ordinary least squares (OLS) estimator, the Liu-type estimator, the modified ridge-type estimator, the unbiased two-parameter estimator, and the Yang and Chang estimator, among others. For each estimator, we derived expressions for the bias, covariance, and mean squared error (MSE), all of which depend on the biasing parameters k and d. Theoretical comparisons were conducted using MSE as the primary criterion.
A unified theoretical framework was established to compare the performance of the proposed NTPR estimator and 16 existing two-parameter estimators under the MSE criterion. A comprehensive Monte Carlo simulation study was also conducted to evaluate estimator behavior under various scenarios. The simulation design incorporated different sample sizes, numbers of predictors, error standard deviations, and degrees of multicollinearity among the regressors. Each scenario involved 5000 replications, and the average MSE values were calculated to assess estimator performance.
The simulation results indicated that higher multicollinearity among predictors generally resulted in increased MSE values. Among all estimators, NTPR and TPR outperformed the others. Estimator performance improved with increasing sample size, as reflected by declining MSE values, whereas an increasing number of predictors led to higher MSE. NTPR achieved the smallest MSE and produced stable, interpretable coefficients, even under 25% outlier contamination. These findings suggest that the NTPR estimator performs favorably, especially when compared to the OLS estimator, which consistently yielded higher MSE values. The practical applicability of the proposed methods is illustrated using the body fat data.
Although the proposed estimator demonstrates strong theoretical properties and superior empirical performance, several limitations of the present study should be acknowledged. First, the theoretical dominance results are derived under the classical linear regression framework with normally distributed errors and a fixed design matrix. While this assumption is standard in the literature, deviations from normality may affect the estimator’s performance in practice.
Second, the simulation study focuses on low to moderate dimensional settings with a limited number of predictors. Although this setting is relevant for many applied problems, the behavior of the proposed estimator in high-dimensional scenarios, where the number of predictors may exceed the sample size, remains an open question.
Third, the tuning parameters of the proposed estimator are selected based on analytical considerations and empirical performance. Alternative data-driven selection procedures, such as cross-validation or information-criterion-based approaches, may further enhance its practical implementation.
These limitations naturally motivate several directions for future research. Extending the proposed estimator to generalized linear models and high-dimensional regression frameworks is a promising avenue for further study. In addition, developing robust versions of the estimator to accommodate heavy-tailed error distributions and outlier contamination would improve its applicability in real-world data analysis. Finally, investigating adaptive and fully data-driven tuning parameter selection methods represents an important topic for future methodological development.
The conclusions drawn from this study are specific to the simulation settings, particularly the selected combinations of sample size n, number of predictors p, error standard deviation σ, and correlation coefficient ρ. Although several methods exist in the literature for estimating the biasing parameters k and d, only a subset was considered in this work. Future work may extend this framework to other generalized linear models, such as the logistic, Poisson, beta, or negative binomial models; examine type I error rates and power properties; and explore robust weighted variants of NTPR, thereby broadening its applicability in high-dimensional and contaminated data environments.

Author Contributions

Conceptualization, M.A.H. and B.M.G.K.; methodology, M.A.H. and B.M.G.K.; software, M.A.H.; validation, M.A.H., Z.B. and B.M.G.K.; formal analysis, M.A.H. and B.M.G.K.; investigation, M.A.H., Z.B. and B.M.G.K.; resources, M.A.H., Z.B. and B.M.G.K.; data curation, M.A.H.; writing—original draft preparation, M.A.H.; writing—review and editing, M.A.H., Z.B. and B.M.G.K.; visualization, B.M.G.K. and Z.B.; project administration, Z.B. and B.M.G.K.; funding acquisition, N/A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nayem, H.M.; Aziz, S.; Kibria, B.M.G. Comparison among Ordinary Least Squares, Ridge, Lasso, and Elastic Net Estimators in the Presence of Outliers: Simulation and Application. Int. J. Stat. Sci. 2024, 24, 25–48. [Google Scholar] [CrossRef]
  2. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
  3. Kejian, L. A new class of biased estimate in linear regression. Commun. Stat.-Theory Methods 1993, 22, 393–402. [Google Scholar] [CrossRef]
  4. Lukman, A.F.; Adewuyi, E.; Oladejo, N.; Olukayode, A. Modified almost unbiased two-parameter estimator in linear regression model. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2019; Volume 640, p. 012119. [Google Scholar]
  5. Yang, H.; Chang, X. A new two-parameter estimator in linear regression. Commun. Stat.-Theory Methods 2010, 39, 923–934. [Google Scholar] [CrossRef]
  6. Owolabi, A.T.; Ayinde, K.; Alabi, O.O. A new ridge-type estimator for the linear regression model with correlated regressors. Concurr. Comput. Pract. Exp. 2022, 34, e6933. [Google Scholar] [CrossRef]
  7. Liu, K. Using Liu-type estimator to combat collinearity. Commun. Stat.-Theory Methods 2003, 32, 1009–1020. [Google Scholar] [CrossRef]
  8. Özkale, M.R.; Kaciranlar, S. The restricted and unrestricted two-parameter estimators. Commun. Stat.-Theory Methods 2007, 36, 2707–2725. [Google Scholar] [CrossRef]
  9. Hoerl, A.E.; Kennard, R.W.; Baldwin, K.F. Ridge regression: Some simulations. Commun. Stat.-Theory Methods 1975, 4, 105–123. [Google Scholar] [CrossRef]
  10. Sakallıoğlu, S.; Kaçıranlar, S. A new biased estimator based on ridge estimation. Stat. Pap. 2008, 49, 669–689. [Google Scholar] [CrossRef]
  11. Arumairajan, S.; Wijekoon, P. Modified almost unbiased Liu estimator in linear regression model. Commun. Math. Stat. 2017, 5, 261–276. [Google Scholar] [CrossRef]
  12. Lukman, A.F.; Ayinde, K.; Binuomote, S.; Clement, O.A. Modified ridge-type estimator to combat multicollinearity: Application to chemical data. J. Chemom. 2019, 33, e3125. [Google Scholar] [CrossRef]
  13. Dawoud, I.; Kibria, B.M.G. A new biased estimator to combat the multicollinearity of the Gaussian linear regression model. Stats 2020, 3, 526–541. [Google Scholar] [CrossRef]
  14. Zeinal, A. Generalized two-parameter estimator in linear regression model. J. Math. Model. 2020, 8, 157–176. [Google Scholar] [CrossRef]
  15. Aslam, M.; Ahmad, S. The modified Liu-ridge-type estimator: A new class of biased estimators to address multicollinearity. Commun. Stat.-Simul. Comput. 2022, 51, 6591–6609. [Google Scholar] [CrossRef]
  16. Dawoud, I.; Lukman, A.F.; Haadi, A.R. A new biased regression estimator: Theory, simulation and application. Sci. Afr. 2022, 15, e01100. [Google Scholar] [CrossRef]
  17. Idowu, J.I.; Oladapo, O.J.; Owolabi, A.T.; Ayinde, K. On the biased Two-Parameter Estimator to Combat Multicollinearity in Linear Regression Model. Afr. Sci. Rep. 2022, 1, 188–204. [Google Scholar] [CrossRef]
  18. Owolabi, A.T.; Ayinde, K.; Idowu, J.I.; Oladapo, O.J.; Lukman, A.F. A new two-parameter estimator in the linear regression model with correlated regressors. J. Stat. Appl. Probab. 2022, 11, 185–201. [Google Scholar] [CrossRef]
  19. Idowu, J.I.; Oladapo, O.J.; Owolabi, A.T.; Ayinde, K.; Akinmoju, O. Combating multicollinearity: A new two-parameter approach. Nicel Bilim. Derg. 2023, 5, 90–116. [Google Scholar] [CrossRef]
  20. Lipovetsky, S.; Conklin, W.M. Ridge regression in two-parameter solution. Appl. Stoch. Models Bus. Ind. 2005, 21, 525–540. [Google Scholar] [CrossRef]
  21. Khan, M.S.; Ali, A.; Suhail, M.; Kibria, B.G. On some two parameter estimators for the linear regression models with correlated predictors: Simulation and application. Commun. Stat.-Simul. Comput. 2025, 54, 3933–3947. [Google Scholar] [CrossRef]
  22. Toker, S.; Kaçıranlar, S. On the performance of two parameter ridge estimator under the mean square error criterion. Appl. Math. Comput. 2013, 219, 4718–4728. [Google Scholar] [CrossRef]
  23. Farebrother, R.W. Further results on the mean square error of ridge regression. J. R. Stat. Soc. Ser. B (Methodol.) 1976, 38, 248–250. [Google Scholar] [CrossRef]
  24. Trenkler, G.; Toutenburg, H. Mean squared error matrix comparisons between biased estimators—An overview of recent results. Stat. Pap. 1990, 31, 165–179. [Google Scholar] [CrossRef]
  25. Hoque, M.A.; Kibria, B.M.G. Some one and two parameter estimators for the multicollinear gaussian linear regression model: Simulations and applications. Surv. Math. Its Appl. 2023, 18, 183–221. [Google Scholar]
  26. Hoque, M.A.; Kibria, B.M.G. Performance of some estimators for the multicollinear logistic regression model: Theory, simulation, and applications. Res. Stat. 2024, 2, 2364747. [Google Scholar] [CrossRef]
  27. McDonald, G.C.; Galarneau, D.I. A Monte Carlo evaluation of some ridge-type estimators. J. Am. Stat. Assoc. 1975, 70, 407–416. [Google Scholar] [CrossRef]
  28. Gibbons, D.G. A simulation study of some ridge estimators. J. Am. Stat. Assoc. 1981, 76, 131–139. [Google Scholar] [CrossRef]
  29. Yasmin, N.; Kibria, B.M.G. Performance of Some Improved Estimators and their Robust Versions in Presence of Multicollinearity and Outliers. Sankhya B 2025, 87, 173–219. [Google Scholar] [CrossRef]
  30. Triola, M.M.; Triola, M.F.; Roy, J.A. Biostatistics for the Biological and Health Sciences; Pearson Addison-Wesley: Boston, MA, USA, 2006. [Google Scholar]
Figure 1. (a) Bar plot for MSE value comparison by σ for fixed n = 200, p = 3, and ρ = 0.80. (b) Trend comparison for average MSE by ρ and σ for fixed n = 30 and p = 3.
Figure 2. Residual-based diagnostic plots for evaluating regression assumptions.
Table 1. Estimated mean squared error (MSE) values: (1) n = 30, p = 3; (2) n = 30, p = 5; (3) n = 30, p = 10; (4) n = 50, p = 3; (5) n = 50, p = 5; (6) n = 50, p = 10; (7) n = 100, p = 3; (8) n = 100, p = 5; (9) n = 100, p = 10; (10) n = 200, p = 3; (11) n = 200, p = 5; (12) n = 200, p = 10.
(1)
| ρ | 0.80 | 0.80 | 0.80 | 0.90 | 0.90 | 0.90 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 | |
| σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average |
| OLS | 0.3063 | 1.5081 | 3.0332 | 0.6291 | 3.1221 | 6.0201 | 1.2716 | 6.2526 | 12.5995 | 6.7776 | 34.0651 | 67.4631 | 11.9207 |
| LTE | 0.0609 | 0.2799 | 0.546 | 0.1046 | 0.5258 | 0.982 | 0.2041 | 0.9967 | 1.9156 | 0.2307 | 1.0653 | 2.0367 | 0.7457 |
| TP1 | 0.1762 | 0.8981 | 1.8142 | 0.2927 | 1.5876 | 3.0293 | 0.4939 | 2.5836 | 5.0977 | 1.4405 | 6.3091 | 11.4264 | 2.9291 |
| TP2 | 0.1355 | 0.5142 | 0.9762 | 0.1839 | 0.7841 | 1.4025 | 0.2573 | 1.1328 | 2.0941 | 0.5677 | 2.2215 | 3.7186 | 1.1657 |
| NBE1 | 0.0687 | 0.3147 | 0.6057 | 0.0999 | 0.4821 | 0.8717 | 0.1497 | 0.6991 | 1.2929 | 0.3294 | 1.3859 | 2.3767 | 0.723 |
| NBE2 | 0.0877 | 0.3228 | 0.595 | 0.1155 | 0.4695 | 0.8238 | 0.1595 | 0.663 | 1.2039 | 0.3273 | 1.3629 | 2.3288 | 0.705 |
| YC1 | 0.3776 | 0.5355 | 0.6348 | 0.425 | 0.5487 | 0.6475 | 0.451 | 0.5362 | 0.6403 | 0.4252 | 0.4089 | 0.4929 | 0.5103 |
| YC2 | 0.1738 | 0.3848 | 0.6023 | 0.2154 | 0.4703 | 0.7205 | 0.2699 | 0.5351 | 0.8679 | 0.3585 | 0.9055 | 1.4513 | 0.5796 |
| MAULE | 0.8139 | 0.8668 | 0.8859 | 0.8372 | 1.0519 | 0.87 | 1.1925 | 2.1992 | 1.5718 | 0.6509 | 0.618 | 3.7531 | 1.2759 |
| MAUTP | 0.0783 | 0.2106 | 0.3665 | 0.0843 | 0.2565 | 0.4258 | 0.1009 | 0.3108 | 0.5477 | 0.1495 | 0.592 | 1.0589 | 0.3485 |
| MRT | 0.0882 | 0.3132 | 0.5727 | 0.1156 | 0.4561 | 0.7901 | 0.1558 | 0.6116 | 1.0965 | 0.261 | 1.0514 | 1.7923 | 0.6087 |
| DK | 0.0935 | 0.4444 | 0.8519 | 0.1506 | 0.7099 | 1.2824 | 0.2427 | 1.1238 | 2.1277 | 0.7931 | 3.6511 | 6.874 | 1.5288 |
| GTP | 0.2488 | 0.4905 | 0.7461 | 0.2738 | 0.6389 | 1.0014 | 0.3055 | 0.8394 | 1.4199 | 0.4832 | 1.5694 | 2.516 | 0.8777 |
| MLRT | 0.0854 | 0.3766 | 0.7072 | 0.1195 | 0.3972 | 0.7068 | 0.215 | 0.3311 | 0.4876 | 0.2228 | 0.2134 | 0.2468 | 0.3425 |
| NBR | 1.0006 | 1.1145 | 1.114 | 0.9497 | 1.1163 | 1.1511 | 0.9028 | 1.1342 | 1.2153 | 1.0252 | 1.5997 | 1.9209 | 1.187 |
| BTP | 0.0976 | 0.222 | 0.3515 | 0.1275 | 0.316 | 0.5083 | 0.1524 | 0.4368 | 0.7265 | 0.1965 | 0.4314 | 0.7799 | 0.3622 |
| NTP | 0.1397 | 0.7011 | 1.3975 | 0.2388 | 1.2232 | 2.3004 | 0.4121 | 2.0213 | 3.9029 | 1.4059 | 6.508 | 12.3467 | 2.7165 |
| NRT | 0.1049 | 0.4812 | 0.9279 | 0.1585 | 0.7393 | 1.3373 | 0.24 | 1.0574 | 1.9587 | 0.5282 | 2.2273 | 3.8573 | 1.1348 |
| LKL | 0.0767 | 0.3588 | 0.6825 | 0.0981 | 0.4637 | 0.8424 | 0.1198 | 0.5475 | 1.0331 | 0.1542 | 0.6038 | 1.181 | 0.5135 |
| TPR | 0.0129 | 0.0638 | 0.1485 | 0.0118 | 0.0642 | 0.158 | 0.0118 | 0.0719 | 0.2092 | 0.014 | 0.1898 | 0.9486 | 0.1587 |
| NTPR | 0.0637 | 0.1172 | 0.0698 | 0.0092 | 0.0569 | 0.0621 | 0.0684 | 0.1158 | 0.0725 | 0.0073 | 0.0289 | 0.0152 | 0.0573 |
(2)
| ρ | 0.80 | 0.80 | 0.80 | 0.90 | 0.90 | 0.90 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 | |
| σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average |
| OLS | 0.6257 | 3.165 | 6.4301 | 1.2983 | 6.5816 | 13.0217 | 2.7155 | 13.305 | 26.6884 | 14.517 | 74.0654 | 148.66 | 25.9228 |
| LTE | 0.1114 | 0.6455 | 1.3165 | 0.2615 | 1.4484 | 2.8016 | 0.3661 | 1.8769 | 3.8933 | 0.3219 | 1.5936 | 2.9872 | 1.4687 |
| TP1 | 0.3999 | 2.2622 | 4.7112 | 0.7076 | 4.1948 | 8.43 | 1.3281 | 7.1789 | 14.8461 | 4.5932 | 23.9541 | 46.6776 | 9.9403 |
| TP2 | 0.1491 | 0.7454 | 1.5345 | 0.2208 | 1.2479 | 2.3199 | 0.3832 | 1.7264 | 3.5056 | 0.8562 | 4.3362 | 7.909 | 2.0779 |
| NBE1 | 0.0764 | 0.4907 | 1.0172 | 0.1273 | 0.8306 | 1.566 | 0.2263 | 1.0977 | 2.3037 | 0.4299 | 2.2942 | 4.1453 | 1.2171 |
| NBE2 | 0.1084 | 0.4117 | 0.8083 | 0.1401 | 0.6645 | 1.1944 | 0.2113 | 0.8578 | 1.7712 | 0.3867 | 2.0735 | 3.7264 | 1.0295 |
| YC1 | 0.6501 | 0.6165 | 0.6525 | 0.6301 | 0.5867 | 0.6332 | 0.5881 | 0.5451 | 0.6147 | 0.4884 | 0.4617 | 0.5119 | 0.5816 |
| YC2 | 0.3353 | 0.4411 | 0.6236 | 0.3655 | 0.4936 | 0.7267 | 0.3728 | 0.5106 | 0.8456 | 0.4163 | 1.2997 | 2.0011 | 0.7027 |
| MAULE | 0.9323 | 0.9185 | 0.9111 | 0.9844 | 0.8505 | 0.9703 | 1.5102 | 1.0701 | 2.2772 | 1.055 | 8.2229 | 1.8522 | 1.7962 |
| MAUTP | 0.1043 | 0.2141 | 0.3791 | 0.1005 | 0.2779 | 0.4768 | 0.108 | 0.3276 | 0.6755 | 0.1626 | 0.8752 | 1.5233 | 0.4354 |
| MRT | 0.1046 | 0.3842 | 0.7704 | 0.1283 | 0.5807 | 1.1011 | 0.1813 | 0.762 | 1.6016 | 0.3392 | 1.7684 | 3.2296 | 0.9126 |
| DK | 0.1426 | 0.8636 | 1.7951 | 0.2874 | 1.6649 | 3.214 | 0.6 | 2.9342 | 5.9789 | 2.725 | 13.6397 | 26.7576 | 5.0503 |
| GTP | 0.2993 | 0.5851 | 0.9806 | 0.3099 | 0.8656 | 1.4301 | 0.3907 | 1.1045 | 2.1339 | 0.6059 | 2.6009 | 4.6468 | 1.3294 |
| MLRT | 0.1476 | 0.7423 | 1.4912 | 0.2568 | 0.7834 | 1.4211 | 0.4666 | 0.4624 | 0.5155 | 0.33 | 0.3022 | 0.3207 | 0.6033 |
| NBR | 1.0691 | 1.0941 | 1.0829 | 1.0091 | 1.0776 | 1.1032 | 0.9308 | 1.0712 | 1.1275 | 0.9236 | 1.2614 | 1.3514 | 1.0918 |
| BTP | 0.113 | 0.3364 | 0.6101 | 0.1755 | 0.6162 | 1.0922 | 0.2506 | 0.8459 | 1.6076 | 0.2858 | 0.9651 | 1.7349 | 0.7194 |
| NTP | 0.3045 | 1.6822 | 3.4585 | 0.5745 | 3.1333 | 6.183 | 1.1214 | 5.516 | 11.1722 | 4.7542 | 24.0739 | 47.0179 | 9.0826 |
| NRT | 0.1289 | 0.7887 | 1.6682 | 0.2074 | 1.3305 | 2.5901 | 0.3797 | 1.9399 | 4.0686 | 1.0175 | 5.4604 | 10.1333 | 2.4761 |
| LKL | 0.1119 | 0.6618 | 1.376 | 0.1787 | 1.025 | 2.0257 | 0.268 | 1.3701 | 2.8298 | 0.3876 | 1.8503 | 3.6236 | 1.309 |
| TPR | 0.0097 | 0.0507 | 0.1097 | 0.0082 | 0.0442 | 0.105 | 0.0079 | 0.047 | 0.142 | 0.0095 | 0.1211 | 0.6818 | 0.1114 |
| NTPR | 0.0409 | 0.0125 | 0.1066 | 0.0036 | 0.0377 | 0.2387 | 0.0014 | 0.0431 | 0.0073 | 0.0001 | 0.0103 | 0.008 | 0.0425 |
(3)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS1.78868.838317.25063.627118.276636.50757.66338.86176.866442.4446210.7642421.004773.6577
LTE0.40642.10514.1910.46482.45694.8980.47442.46654.92970.40261.81263.37762.3321
TP11.2356.881613.7942.25212.915726.26124.3924.11447.65817.358787.9132174.261534.9196
TP20.17340.95221.87940.24781.48142.96160.4432.32334.3161.11625.277810.04842.6017
NBE10.13021.00822.12110.20631.52053.14320.32362.01393.90010.52762.58235.04761.8771
NBE20.1380.45820.8790.14630.64781.29850.18920.93281.77720.3711.84213.69931.0316
YC10.82310.71390.70040.76910.65160.64690.69160.60230.60540.56710.54980.55040.656
YC20.52180.43940.51870.48110.43240.58030.42920.47120.67910.4741.28282.27970.7158
MAULE0.95570.96241.09060.90940.86770.87080.83840.94130.84810.80480.87542.05451.0016
MAUTP0.15360.19910.32650.11770.23970.44160.1110.33950.62280.17340.78551.48680.4164
MRT0.13370.42450.84470.14020.64331.29210.19270.98371.87170.41852.10144.0911.0948
DK0.46712.54675.07031.01765.283210.54332.172810.971421.573111.30156.6025113.165520.0595
GTP0.34730.68141.12590.34930.94111.72670.41631.37932.42970.67182.79615.3491.5178
MLRT0.49671.83263.43320.77460.80290.87150.70660.64310.64530.51220.490.480.9741
NBR1.04731.06891.04830.98361.07421.05840.91641.0681.06150.90951.08491.091.0343
BTP0.28761.11572.08370.43181.67483.15570.55362.11193.8510.60512.29574.27871.8704
NTP0.99385.167910.2231.89479.78519.52613.754219.037737.378917.593888.3292175.457232.4285
NRT0.1841.4423.03640.33362.45865.06560.66164.1188.02752.098910.65320.93374.9177
LKL0.35011.87353.76550.60993.15846.31510.92464.73499.38061.20016.019411.97974.1927
TPR0.00630.03270.06860.00470.02640.06240.00430.02830.08360.00550.07070.37780.0643
NTPR0.00270.01230.04560.00020.08290.01530.00080.05830.02750.00570.01310.08110.0288
(4)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.17190.85971.7020.34811.71793.43350.72593.47246.95513.860519.145638.94156.7778
LTE0.0410.17550.33280.0660.34880.66840.13060.68261.31680.1440.70521.34970.4968
TP10.11630.58741.15580.19361.03312.05860.33171.72473.53181.15735.20859.91772.2514
TP20.09090.36740.68080.12340.58371.07740.18210.84091.63780.52512.02433.46190.9663
NBE10.050.22330.41540.06630.37230.67610.10570.54051.0240.31811.26792.19180.6043
NBE20.0580.2280.41320.07290.36330.64460.10920.51270.95420.31081.2242.11290.5837
YC10.22710.41730.5210.29380.45660.57680.35710.47680.57980.3890.40280.49760.433
YC20.08650.24960.41640.11720.32540.56780.16370.41520.72720.32480.74111.35110.4572
MAULE0.79620.84520.86590.76020.81920.81530.82350.79320.77871.68780.69090.84770.877
MAUTP0.04630.15780.27460.0530.21060.3650.06930.25950.44970.15050.52860.93870.292
MRT0.05670.22780.410.07580.35520.63080.11230.4910.88570.25630.98291.71520.5166
DK0.06520.32340.60370.1010.52690.96390.17160.79411.51430.59162.62374.86231.0951
GTP0.16110.38680.56740.18840.50360.81170.23560.65751.15770.44921.45532.44740.7518
MLRT0.06120.29560.55450.08310.35480.72030.11790.35310.6490.20940.2150.24990.322
NBR1.01671.11951.11840.97381.09211.14680.92981.06541.18140.89831.2991.49871.1117
BTP0.06020.16230.25120.08780.23380.38310.12910.34350.55750.1580.47220.77950.3015
NTP0.09390.46630.90360.15490.80651.58120.27171.33852.65971.01354.59268.80891.8909
NRT0.07210.35170.66270.1080.56271.04420.17460.81391.53490.48561.94463.41730.931
LKL0.0580.28320.5270.07840.40290.73950.10440.48540.92020.15630.64691.20130.467
TPR0.00750.03720.07450.00670.0330.07350.00680.03520.07950.00730.05820.22380.0536
NTPR0.00470.03140.0290.00360.00430.02040.00010.01630.02850.00470.02810.16270.0278
(5)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.34921.75743.49370.69393.5467.22271.50547.311914.8088.111639.678680.953914.1194
LTE0.06330.39510.77610.14610.89971.87450.22911.25092.56090.2350.98451.93790.9461
TP10.25661.39452.79220.43062.53135.33210.83914.70789.75543.277517.068734.37716.8969
TP20.09710.56381.09460.12980.87881.90080.25341.45242.93970.80483.70357.0451.7386
NBE10.05320.36940.71730.07760.59941.30310.1640.97892.03270.43712.09224.00961.0695
NBE20.06460.32060.59550.0780.48131.02720.14650.77061.56380.37551.81493.48230.8934
YC10.53340.53130.57540.54330.52570.5590.5290.4870.54890.45530.43490.48270.5172
YC20.19220.32140.47510.23090.37960.60760.2650.43180.73620.34710.90561.78160.5562
MAULE0.90.91270.88980.89040.89361.1420.84880.82541.04990.69771.42530.78940.9388
MAUTP0.05170.17230.30060.05280.21550.43420.07260.29420.59840.15430.73151.41920.3748
MRT0.06150.29370.55120.07540.43120.89750.12640.65621.36310.31721.5422.9550.7725
DK0.08920.58981.15160.15961.04082.1980.36341.97813.99061.75558.498517.07933.2412
GTP0.21550.48220.76320.22760.63591.21190.29370.96891.82310.58422.24894.19961.1379
MLRT0.09560.55981.11020.15320.75441.53110.27680.70941.15050.3180.29570.29720.6043
NBR1.09861.10541.09181.04671.07981.11040.99181.03441.12510.88541.10121.23811.0757
BTP0.06890.20350.35430.11720.3940.72780.19760.65011.23090.3020.98631.7380.5809
NTP0.18821.05422.09030.32911.8883.91010.67953.51667.09632.987614.491629.35375.6321
NRT0.08990.5881.15610.12910.92331.99080.25661.55073.19140.8684.39528.53681.973
LKL0.07750.50220.98150.12060.76661.60970.21161.14512.33830.39571.93283.83741.1599
TPR0.00560.02890.05850.00460.02440.05080.00430.02370.05680.00490.03840.1470.0373
NTPR0.00180.00670.05740.00040.00340.00260.00610.00720.0130.01940.00150.00940.0107
(6)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.86034.32658.65771.80118.965217.8243.75918.876436.833920.4784103.2537205.022535.8882
LTE0.22961.29312.68890.27941.52593.14520.29071.54343.11260.35981.39292.61091.5394
TP10.67533.72997.65831.28047.240914.77372.471413.979927.837810.770155.8242112.546721.5657
TP20.10370.78171.79440.15731.36272.83790.34982.16734.24391.1665.415510.62492.5838
NBE10.0880.77721.75420.15171.36962.9470.29492.03044.14970.63623.09426.05681.9458
NBE20.0790.39770.89140.09210.6691.39350.16870.9721.93490.40922.04413.98961.0868
YC10.78460.65520.62780.74130.590.57470.67780.53690.54090.51790.48550.51460.6039
YC20.39040.33380.43640.37290.35320.52330.3530.41880.66240.36370.87991.70870.5664
MAULE0.96860.9420.94280.93621.33081.53620.87330.81571.0820.7430.781910.371.7769
MAUTP0.08060.16350.33420.06270.23620.49170.07490.35690.69390.18650.86451.73480.44
MRT0.0760.33520.76230.08150.55721.21910.1370.90981.82860.41072.10534.23381.0547
DK0.23671.48093.15660.5493.1196.34261.23256.32812.53496.311931.378162.975211.3038
GTP0.26730.56791.08260.27460.87511.63020.35941.29462.39580.74873.03555.76821.525
MLRT0.28041.46152.97260.53671.82773.38650.69420.6110.60090.49740.45980.44381.1477
NBR1.06911.08461.07211.0271.06911.08630.94481.02841.08750.85211.00741.09861.0356
BTP0.15040.56021.05280.28451.07672.01070.42771.60592.88090.64912.18943.74941.3865
NTP0.522.85095.84941.02515.457211.03722.04110.489220.73839.580247.926595.820817.778
NRT0.11661.06672.47560.20891.90624.17560.46073.30336.74561.82469.568919.25284.2588
LKL0.2031.23162.61220.40642.234.56680.70823.67397.35031.26926.38912.85643.6248
TPR0.00360.01850.03640.00260.01380.03050.00240.0140.03270.00280.02230.09690.023
NTPR0.00440.00720.00750.00090.00210.00320.00220.0120.00240.00020.00230.02880.0061
(7)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.08130.40640.81680.16330.81881.62890.33931.76543.46021.87169.183818.16133.2248
LTE0.02320.0920.17870.040.18020.35960.07340.430.80060.07840.39650.75380.2839
TP10.06220.31370.62080.10890.55611.11640.18941.07392.09390.72443.6197.00171.4567
TP20.05250.22310.42450.07530.34020.65480.11090.5971.10260.35861.59842.89420.7027
NBE10.03350.13660.26290.04420.21340.4140.06450.38750.7040.22680.99771.80870.4412
NBE20.03550.13890.26430.04590.20940.40040.06470.37070.66330.21980.94071.68870.4202
YC10.06980.24920.36340.12280.3120.42440.1930.37440.48730.310.37430.4910.3143
YC20.03860.13170.23980.05050.18010.32920.07020.2720.49440.19920.50760.99780.2926
MAULE0.82890.89270.90490.76560.8320.84080.72790.76810.78940.66570.61310.69920.7774
MAUTP0.02820.10420.19140.0330.1350.25030.04310.21210.35740.11910.39480.71430.2152
MRT0.03480.14020.26340.04610.20930.3910.06840.36780.63790.19930.79781.44440.3834
DK0.04190.19930.38740.06180.3090.59440.10150.54950.98770.36621.66783.11370.6984
GTP0.06630.24950.39870.09620.32240.52640.13920.48470.8180.31721.13952.02060.5482
MLRT0.03960.18930.36590.05610.27650.53370.07190.36740.69570.1730.19780.24820.2679
NBR1.03021.13441.13320.97611.09311.14240.94891.02741.14730.87241.00931.26441.0649
BTP0.0270.09970.16570.04270.15060.24510.07620.25150.39130.1460.45780.76690.235
NTP0.05520.26060.51010.09020.44550.87520.15350.83641.59560.60712.8485.42981.1423
NRT0.04420.21160.4120.06650.33660.64970.10620.58911.06870.34861.47172.69870.667
LKL0.03940.18580.36120.0540.270.51720.07780.41840.75290.14350.62331.17160.3846
TPR0.00350.01780.03570.00330.01610.03240.00320.01590.03350.00330.0190.04970.0195
NTPR0.00020.00940.00880.00150.00940.02810.00010.0060.0040.00060.05160.05930.0149
(8)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.16260.8021.66380.33111.64143.3550.69473.44276.96833.80618.556637.15726.5485
LTE0.03670.19640.41650.07510.46311.00920.11860.70661.45890.17180.59871.13020.5318
TP10.13740.69021.4360.24261.29432.71570.4462.5155.25681.945310.308620.88163.9891
TP20.06370.33020.69860.0780.51151.15550.12920.94842.01490.63172.79515.4861.2369
NBE10.0360.20810.44810.04950.35510.80740.08850.68091.44320.39171.76033.49040.8133
NBE20.0380.18650.39250.04330.28840.65350.07110.54081.12930.3331.43022.82730.6612
YC10.35480.41950.46270.39540.41320.4560.41960.41070.4540.37310.38440.47560.4183
YC20.07850.180.31440.09850.22680.41070.130.31130.56390.25130.56541.07950.3509
MAULE0.89780.90540.90970.86290.85990.86770.95390.93020.78970.71990.70081.22430.8852
MAUTP0.02640.1090.22260.02480.14030.3080.0350.23090.46530.13630.55431.12070.2811
MRT0.03720.17520.36050.04090.2580.56780.06670.45860.94970.26311.20152.35680.5613
DK0.05790.3240.68420.08510.55321.21380.17681.11372.31191.00624.78599.52081.8195
GTP0.11680.32120.5250.14520.41130.77180.18290.65491.27040.47111.73613.33630.8286
MLRT0.05850.32680.68340.0930.52881.11230.15950.73361.50170.27330.27520.30680.5044
NBR1.14191.12571.10541.10741.09961.11621.05151.03631.11290.90050.94371.1261.0723
BTP0.03350.10970.19390.06090.21190.3910.12130.4060.75670.27670.98211.68770.436
NTP0.10420.53461.11490.17680.96832.04330.33791.87253.86771.63987.905715.72733.0244
NRT0.06230.34050.71390.08130.53921.1930.13940.98062.08170.63623.11086.20681.3405
LKL0.05390.29960.63150.07380.4721.03020.13310.81411.68830.35951.80053.60460.9134
TPR0.00280.01340.02710.00230.01120.0230.00220.01080.02220.00220.01220.03210.0135
NTPR0.00090.0110.0070.00140.00050.00080.00040.00580.04450.00020.00180.00630.0067
(9)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.37551.91443.78120.77943.89297.7881.64248.098416.21228.867344.202288.714115.5223
LTE0.11340.66361.3770.1390.80371.67620.13870.80121.65270.28650.96991.91610.8782
TP10.32951.7483.50170.61623.3916.96031.18966.707913.67235.594729.676160.33511.1435
TP20.06240.47681.10250.07610.84561.98180.15331.59953.23390.91434.33059.09251.9891
NBE10.06040.46031.02920.09260.89522.0380.16761.62333.38020.61093.13236.65941.6791
NBE20.03910.24760.57540.0420.45371.07070.07830.85161.68670.36891.81163.94610.931
YC10.65910.58690.55660.66850.51960.49480.62510.47910.45660.47760.40360.42690.5295
YC20.18250.20360.29340.2150.2310.3710.22470.29810.5040.25740.56791.070.3682
MAULE0.95180.94060.93370.95210.911.15610.91640.82720.80150.74360.66320.86960.8888
MAUTP0.02980.10380.2380.0250.16870.39220.03120.29370.60210.15260.74561.65290.3696
MRT0.03730.20540.47670.03780.35330.83380.06010.63451.35050.31641.73683.71610.8132
DK0.11150.73511.57540.2431.55093.27410.5543.23466.44533.167515.346330.78365.5851
GTP0.18380.38720.69230.18870.5721.16510.21510.98821.84540.61562.42614.96781.1873
MLRT0.14190.82011.67570.28261.43292.84460.50921.82443.29020.47630.41750.411.1771
NBR1.10451.09261.08521.08321.06561.10441.02280.99421.0820.84560.91571.04341.0366
BTP0.06450.24670.46520.14640.57811.08150.26771.03131.92620.571.91053.25620.962
NTP0.24981.36242.75860.46532.58975.31780.92895.059310.11944.638722.734445.65828.4902
NRT0.08110.62121.39260.10951.1122.55860.21362.09814.4221.21696.824114.42332.9228
LKL0.10390.67281.43620.21141.30252.72930.4182.33234.69981.16725.861611.7492.7237
TPR0.00170.00870.01760.00130.00660.01360.00120.00590.01250.00110.0070.01970.0081
NTPR0.00050.01260.03250.00340.00130.03060.00030.00040.01170.00050.00060.00560.0083
(10)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.04040.20530.40650.07890.39850.82190.16680.82661.66060.91114.58399.17651.6064
LTE0.01410.05020.0940.0220.10160.19890.04640.22660.44890.04270.2120.42080.1565
TP10.03310.17330.33750.05870.30360.61850.10970.56121.13820.41062.29484.60830.8873
TP20.03020.1420.2610.04480.210.40570.07170.340.6640.21741.13842.21170.4781
NBE10.02220.09280.16190.02990.12960.2520.04420.21620.42630.13940.73621.41830.3058
NBE20.02280.09420.16370.03050.12880.24710.0440.20850.40730.13380.69011.31520.2905
YC10.02310.1270.22110.03660.17910.28460.08360.23610.35360.20450.34630.46960.2138
YC20.02340.08980.14470.03160.1090.18730.04390.14330.27980.10060.37750.80790.1949
MAULE0.82350.90590.9350.78190.87090.89280.74170.81660.83090.6780.66320.65480.7996
MAUTP0.020.08130.1370.02430.09650.17430.03240.13230.25260.0750.32610.59530.1623
MRT0.02280.09810.17290.03090.13310.250.04650.20990.40170.13180.63021.17480.2752
DK0.02630.1280.24070.03870.19280.37840.06340.31510.60990.22221.07092.06480.4459
GTP0.0260.15530.26090.0390.20030.34080.07020.29160.50630.1990.82541.55460.3725
MLRT0.02570.12460.23280.0360.18330.3560.05490.28060.52980.10840.29080.50420.2273
NBR0.97791.14121.15670.91031.08861.15060.89361.03221.13510.84940.93111.081.0289
BTP0.01670.06560.10940.02010.09220.16450.03840.15530.25480.12350.3950.67670.176
NTP0.0320.15420.29120.05290.2540.50490.09270.44930.8930.34131.74963.46560.6901
NRT0.02690.13170.24910.04090.20560.40570.06790.34340.66930.21941.06812.04250.4559
LKL0.02550.12350.23220.03610.17920.35180.05510.2730.52770.120.56811.10270.2996
TPR0.00180.00850.01770.00150.00790.01590.00160.00780.01530.00160.00840.0180.0088
NTPR0.00130.02810.00450.00310.00030.00240.00480.00210.00370.00090.01070.00420.0055
(11)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.07850.39630.80370.15690.8181.64570.32971.67563.3731.84439.215918.7873.2604
LTE0.0240.11260.22930.03970.25240.53460.06140.36070.78040.08230.38270.73660.2997
TP10.07170.36030.72830.13030.69511.40990.24051.31552.71981.01585.94612.52962.2636
TP20.04290.20890.41630.0530.32220.68340.07430.5321.18250.30971.97754.16690.8308
NBE10.02490.12620.26120.03250.21910.47230.05180.39070.87280.21371.38642.88840.5783
NBE20.02530.11880.23880.0280.18340.39570.03890.30820.69390.17321.08982.24350.4615
YC10.19620.29810.35820.24810.31560.34890.2980.31090.35670.30540.32320.37530.3112
YC20.03350.10740.18970.04060.13620.25950.05570.19090.36160.13120.42520.79670.2274
MAULE0.92270.94160.94560.87650.88790.88880.83440.82750.83020.87250.79450.64220.8554
MAUTP0.0190.08140.15720.01660.10030.21520.01920.14270.31610.07060.42310.84290.2004
MRT0.02610.11740.23080.02790.16640.35530.03690.26470.57410.14180.87691.7960.3845
DK0.04030.19850.40520.05190.32020.67720.0880.57961.26090.48812.72145.56951.0334
GTP0.04290.21010.33820.06580.27420.48610.1040.4010.77660.26341.25032.56170.5645
MLRT0.03890.19160.39520.05370.32790.6820.09640.53171.11690.23920.31160.38560.3642
NBR1.16491.15141.12271.14691.13031.13451.11951.07471.11490.99140.89930.9991.0875
BTP0.01620.06670.11760.02670.12180.21970.06260.24220.44090.22180.80711.48590.3191
NTP0.05870.29220.59020.09710.53151.08580.17410.97532.02520.81824.36758.91041.6605
NRT0.04220.20880.42410.05540.33660.70260.08140.55751.21830.32092.08214.37420.867
LKL0.03880.19080.38970.04820.29540.62350.07620.49081.06330.26131.43962.92030.6532
TPR0.00130.00680.01360.00110.00560.01130.0010.00510.01050.0010.00520.01160.0062
NTPR0.00290.01220.00480.00120.00070.00190.00030.00040.04010.00050.01490.00030.0067
(12)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS0.17730.89331.77820.37071.84113.66620.78453.8767.66474.285521.291342.71577.4454
LTE0.05780.33850.7120.07150.41180.87270.06940.43090.87990.17190.80841.41280.5198
TP10.16480.8431.68820.31821.6713.37870.61033.38016.82042.841216.12132.97725.9012
TP20.04540.29440.65240.04880.47941.11070.06810.92612.05880.49023.40876.68541.3557
NBE10.04130.26710.58360.06540.51761.16270.10091.02562.2440.41733.05336.131.3007
NBE20.02430.15280.3490.02430.26090.61970.03220.53191.1830.24491.6863.25090.6967
YC10.51380.48260.48760.53240.43860.43070.53920.40990.38810.44410.33850.35010.4463
YC20.06720.11350.18070.08490.12920.22540.10410.17660.34080.14680.45170.83380.2379
MAULE0.95250.95820.9620.95830.9111.040.93040.87250.86411.14090.66780.64780.9088
MAUTP0.01430.07130.15850.0110.09940.24240.01190.18560.42960.08660.65041.26970.2692
MRT0.02370.13350.29220.02210.20290.47090.02780.37210.8560.17191.35552.74120.5558
DK0.06050.38150.81920.11810.76621.64180.25751.65673.40561.56968.254416.32682.9382
GTP0.09770.25910.44770.12140.35880.68010.13170.60651.2170.36991.96793.77670.8362
MLRT0.07180.42650.89740.14640.82611.70080.291.43422.81490.46420.37480.36430.8176
NBR1.1411.11141.09221.13451.0971.10751.10481.02481.08460.93910.85430.9561.0539
BTP0.02940.11540.2220.07340.29970.57910.15890.63891.22340.44241.55032.79940.6777
NTP0.12820.67721.36940.23411.28212.62530.44962.55465.14832.288311.913823.74284.3678
NRT0.05970.36750.7910.07360.61661.39330.10741.1842.61840.62884.6939.61951.8461
LKL0.05840.3660.78380.11050.70011.49590.2261.38112.83710.86414.53299.05251.8674
TPR0.00090.00430.00860.00060.00310.00640.00060.00290.00580.00050.00290.00660.0036
NTPR0.00090.00580.01150.00010.00080.00340.00350.00380.00860.00040.01150.05430.0087
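The simulation layouts summarized in Table 1 (all combinations of n, p, ρ, and σ) can be reproduced in outline. The sketch below is a hedged illustration only: it assumes the widely used McDonald–Galarneau scheme for inducing pairwise predictor correlation ρ² and a unit-norm coefficient vector; the authors' exact generating design, coefficient choice, and replication count are not shown in this excerpt.

```python
import numpy as np

def make_predictors(n, p, rho, rng):
    """Generate n x p predictors with pairwise correlation rho^2 via the
    McDonald-Galarneau construction (an assumption; the paper's exact
    design is not reproduced in this excerpt)."""
    z = rng.standard_normal((n, p + 1))
    return np.sqrt(1.0 - rho**2) * z[:, :p] + rho * z[:, [p]]

def mse_ols(n=30, p=3, rho=0.9, sigma=5.0, reps=2000, seed=0):
    """Monte Carlo estimate of MSE(beta_hat) = E||beta_hat - beta||^2 for OLS,
    the benchmark column of Table 1."""
    rng = np.random.default_rng(seed)
    beta = np.ones(p) / np.sqrt(p)  # unit-norm coefficients (assumed)
    total = 0.0
    for _ in range(reps):
        X = make_predictors(n, p, rho, rng)
        y = X @ beta + sigma * rng.standard_normal(n)
        b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        total += float(np.sum((b_hat - beta) ** 2))
    return total / reps
```

The same loop, with each competing estimator's closed form substituted for the `lstsq` call, produces the remaining rows of each sub-table.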
Table 2. Estimated MSE values with 25% outliers: (1) n = 100, p = 3; (2) n = 100, p = 5; (3) n = 100, p = 10.
(1)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS1.13375.880611.72772.331911.998623.91264.96425.018247.890525.958131.1298271.445346.9492
LTE0.24431.20412.3570.50182.5324.93961.19975.403210.38411.07175.17310.43033.7867
TP10.84954.20598.32161.57398.012415.78813.051215.026628.5849.77849.0534101.102220.4456
TP20.55742.52944.93510.89264.44648.62421.6317.484114.25353.77618.317738.1448.7993
NBE10.34661.54072.95780.55562.71225.11241.05574.59448.79642.36511.484524.39795.4933
NBE20.34531.48482.8140.53262.53764.75560.99124.20878.0392.198510.626322.68385.1015
YC10.42410.96491.44510.48751.22211.8510.5921.52.49040.60181.88843.61821.4238
YC20.31591.34812.59590.43822.22564.38150.74723.52146.65641.36316.870214.97913.7869
MAULE0.9060.92310.93230.85160.86630.9870.79940.82280.9730.71420.66220.76660.8504
MAUTP0.2441.03291.97930.31681.55982.91220.53632.25044.38530.99095.405812.48542.8416
MRT0.34331.51592.93080.51512.49984.7380.9364.00317.56781.92719.133419.34984.6217
DK0.51192.29524.44380.80023.90447.40451.456.394912.23454.196220.617743.1918.9537
GTP0.47471.67013.13580.6732.89775.41951.16884.8899.16932.614212.399725.88785.8666
MLRT0.48522.20864.29470.73343.72317.06520.99225.00769.50330.30590.94451.78453.0874
NBR1.11441.03791.02471.13481.05661.04061.16351.08941.08391.33561.60822.08371.2311
BTP0.21050.79021.46010.31381.3452.39530.54752.14164.11811.044.47948.90072.3119
NTP0.69063.36836.65981.21766.217612.13442.337611.262521.49247.528937.305377.483215.6415
NRT0.54672.47024.79020.87944.2598.10991.55916.96113.2683.566317.269136.26058.3283
LKL0.47732.13844.14190.69443.38916.39621.10334.85769.30831.62047.808816.36914.8587
TPR0.04980.36221.43450.04530.49672.32170.04990.92484.83690.09636.657743.63135.0756
NTPR0.03820.2980.36050.15590.02450.09710.01010.39760.19050.03530.10341.12960.2367
(2)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS2.30511.742323.22884.785723.82146.85599.807750.166797.74553.3967267.2609540.505694.3018
LTE0.58653.04065.85031.47137.081713.92432.078710.783321.48431.59277.329614.48317.4755
TP11.989510.158320.05273.912219.665638.6797.419238.810976.000230.0546149.8613302.282158.2405
TP20.96284.93719.46781.69548.313216.20822.763214.525827.87337.733836.29473.743917.0432
NBE10.62623.18836.06061.18915.706911.19861.981810.368520.12154.927923.19647.068811.3029
NBE20.542.6815.01860.96024.47088.72041.52037.873815.0583.988818.580937.96388.9481
YC10.49530.83741.21310.48350.93511.4850.47781.07511.93510.53661.51342.61611.1336
YC20.40291.57082.86550.54772.27824.34290.70513.20176.58761.47176.680513.57523.6858
MAULE0.91480.93080.95920.8510.86070.90910.87850.81170.94040.68072.1631.31221.0177
MAUTP0.29931.40272.64060.44892.01473.95680.61523.19676.37221.58418.140817.42974.0085
MRT0.49632.41884.63590.83283.97347.68991.28936.748813.02093.346415.967532.34977.7308
DK0.9614.92759.55921.77838.66917.01443.217116.589132.516513.510866.1047132.827725.6396
GTP0.66392.85295.30361.09264.75919.26551.70918.48415.96234.631521.410242.92879.922
MLRT0.96444.97269.68481.60888.359316.49642.122510.540221.43470.31110.56340.87166.4942
NBR1.09191.02781.01621.11271.03951.02521.12521.0541.04491.18211.25721.38181.1132
BTP0.25061.15882.2620.54512.48265.04421.05465.02969.92732.336310.126820.55485.0644
NTP1.54717.854915.48482.940414.578928.66035.413627.741954.601522.4348110.9206222.235942.8679
NRT0.99895.05379.82771.74558.619216.82082.902815.206129.71618.839442.506886.690819.0773
LKL0.88694.5388.80911.50847.36814.48052.363212.153824.05095.114825.33350.827413.1195
TPR0.03780.29461.16890.03330.32051.56990.03220.58644.06270.06024.45330.90833.6273
NTPR0.01140.08710.260.00860.18560.01650.01090.17450.34180.05120.08040.09610.1103
(3)
ρ | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.95 | 0.95 | 0.95 | 0.99 | 0.99 | 0.99 |
σ | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | 1 | 5 | 10 | Average
OLS5.414826.50753.724411.054355.6874112.044523.0374116.724230.1086130.3802635.32721278.3868223.1997
LTE2.031510.25120.80532.443812.717425.91112.412212.262724.39032.661112.171724.556512.7179
TP15.04924.989450.75819.932850.8289102.74119.7745101.3039199.707288.9972435.58876.7175163.865
TP21.67528.982818.42362.820514.978931.22564.7924.614847.70113.152462.4941127.698529.8798
NBE11.55768.241816.80232.939715.530632.16045.026126.185451.48939.622346.442894.999125.9165
NBE20.88054.67359.56121.50397.795316.37482.456212.587924.40665.63126.941555.749614.0468
YC10.55960.68610.90160.49320.67870.98010.46160.75751.14790.44820.90981.54790.7977
YC20.37811.62723.27510.50362.36524.73220.70673.39296.54621.64497.627815.42084.0184
MAULE0.93660.93660.95520.88051.12610.87880.80440.89821.09850.83920.79373.23071.1149
MAUTP0.35861.92673.94580.5592.9636.09180.87534.55958.98042.360811.488524.63185.7284
MRT0.7183.90497.98271.19756.489713.30972.024610.528220.86565.381226.204653.911812.7099
DK2.352712.083124.69364.670423.835648.79799.345147.29392.700145.1835220.0866441.424881.0389
GTP0.99094.61049.39061.5817.951116.57692.719413.310825.50137.17533.514667.935715.9381
MLRT2.480712.734926.15414.175521.329942.06054.38820.744639.93940.40210.45340.53814.6168
NBR1.07241.01981.01071.0951.02751.01521.10141.03381.01931.08721.06361.08711.0528
BTP0.65553.1226.24251.50837.263114.73412.721812.603924.96294.499918.412535.405311.011
NTP4.003819.915440.54597.573538.46977.917314.578273.8364145.11767.0577327.0009656.2634122.6899
NRT2.121711.266223.16033.717720.00241.52986.627934.675368.419821.0663103.2596209.469545.443
LKL2.142611.003222.50113.911319.975440.77366.819334.691468.411517.272684.7935170.102840.1999
TPR0.02570.14870.7230.01910.17090.96090.0190.27261.99850.042.251420.98582.3013
NTPR0.00710.02580.26590.03670.04850.15310.02640.10790.04290.00090.50790.00270.1022
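Table 2 repeats the study with 25% of the observations contaminated by outliers. One common way to implement such contamination is to shift a random fraction of the responses; the helper below sketches this, with the shift magnitude and mechanism as assumptions (the paper's exact contamination scheme is not shown in this excerpt).

```python
import numpy as np

def contaminate(y, frac=0.25, shift=10.0, rng=None):
    """Return a copy of y with a fraction `frac` of entries shifted by
    `shift`, mimicking the 25% outlier setting of Table 2 (the shift
    magnitude here is illustrative, not the paper's)."""
    rng = rng if rng is not None else np.random.default_rng()
    y_out = np.asarray(y, dtype=float).copy()
    idx = rng.choice(y_out.size, size=int(frac * y_out.size), replace=False)
    y_out[idx] += shift
    return y_out
```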
Table 3. Correlation matrix.
Variable | y | x1 | x2 | x3 | x4
y | 1 | | | |
x1 | −0.3168 | 1 | | |
x2 | −0.2035 | 0.3637 | 1 | |
x3 | −0.3197 | 0.8945 | 0.07781 | 1 |
x4 | −0.2971 | 0.8880 | 0.1768 | 0.8024 | 1
Table 4. Variance inflation factor.
Variable | VIF
x1 | 17.04
x2 | 2.11
x3 | 8.11
x4 | 5.61
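The variance inflation factors in Table 4 follow the standard definition VIF_j = 1/(1 − R_j²), where R_j² comes from regressing predictor j on the remaining predictors. A minimal sketch (not the authors' code):

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 is
    the R-squared from regressing column j on the other columns (with an
    intercept)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(Z)), Z])  # add intercept
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        tss = (y - y.mean()) @ (y - y.mean())
        r2 = 1.0 - (resid @ resid) / tss
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

Values above roughly 10, such as x1 (17.04) in Table 4, are conventionally read as signalling serious multicollinearity.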
Table 5. Checking the validation of the theoretical conditions for the real-life data.
Theorem | Condition | Value
2.1 | $b_{NTPR}^{T}\left[\sigma^{2}\left(\Lambda^{-1}-A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}\right)\right]^{-1}b_{NTPR}<1$ | 0.01142875
2.2 | $b_{NTPR}^{T}\left[\sigma^{2}\left(A_{LTE}\Lambda^{-1}A_{LTE}^{T}-A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}\right)+b_{LTE}b_{LTE}^{T}\right]^{-1}b_{NTPR}<1$ | 0.0179153
2.3 | $b_{NTPR}^{T}\left[\sigma^{2}\left(A_{TP}\Lambda^{-1}A_{TP}^{T}-A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}\right)+b_{TP}b_{TP}^{T}\right]^{-1}b_{NTPR}<1$ | 0.0103903
2.4 | $b_{NTPR}^{T}\left[\sigma^{2}\left(A_{NBE}\Lambda^{-1}A_{NBE}^{T}-A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}\right)+b_{NBE}b_{NBE}^{T}\right]^{-1}b_{NTPR}<1$ | 0.01037587
2.5 | $b_{NTPR}^{T}\left[\sigma^{2}\left(Y_{k,d}\Lambda^{-1}Y_{k,d}^{T}-A_{NTPR}\Lambda^{-1}A_{NTPR}^{T}\right)+b_{YC}b_{YC}^{T}\right]^{-1}b_{NTPR}<1$ | 0.01197182
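Each condition in Table 5 has the familiar Farebrother/Trenkler quadratic-form structure: the proposed estimator dominates a competitor in the MSE sense when b'[σ²(A₁Λ⁻¹A₁' − A₂Λ⁻¹A₂') + b₁b₁']⁻¹b < 1 and the bracketed matrix is positive definite. The numerical check can be sketched as below; the symbol names mirror the table, but the shapes and inputs are illustrative assumptions, not the paper's code.

```python
import numpy as np

def condition_value(b2, sigma2, A1, A2, Lam_inv, b1=None):
    """Value of b2' [sigma^2 (A1 Lam^{-1} A1' - A2 Lam^{-1} A2') + b1 b1']^{-1} b2.
    The competing estimator (with transformation A2, bias b2) has smaller MSE
    when this value is < 1, provided the bracketed matrix is positive definite."""
    D = sigma2 * (A1 @ Lam_inv @ A1.T - A2 @ Lam_inv @ A2.T)
    if b1 is not None:
        D = D + np.outer(b1, b1)
    return float(b2 @ np.linalg.solve(D, b2))
```

All five entries in Table 5 are far below 1, which is what certifies the theoretical dominance of the NTPR estimator for this dataset.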
Table 6. Regression analysis and MSE of the data.
Estimator | OLS | LTE | TP1 | TP2 | NBE1 | NBE2 | YC1 | YC2 | MAUTP | MRT
x1 | −0.5773 | −0.4961 | −0.5457 | −0.5307 | −0.5479 | −0.5454 | −0.5414 | −0.5427 | −0.5081 | −0.5178
x2 | 0.3378 | 0.3317 | 0.3481 | 0.3456 | 0.3480 | 0.3481 | 0.3482 | 0.3482 | 0.3611 | 0.3570
x3 | 0.1891 | 0.1687 | 0.1920 | 0.1749 | 0.1940 | 0.1917 | 0.1881 | 0.1893 | 0.1983 | 0.1939
x4 | 0.7588 | 0.6466 | 0.6197 | 0.6458 | 0.6198 | 0.6197 | 0.6200 | 0.6199 | 0.4414 | 0.4995
MSE | 0.1545 | 0.1314 | 0.1092 | 0.1146 | 0.1298 | 0.1297 | 0.1088 | 0.1089 | 0.0643 | 0.0771

Estimator | DK | GTP | MLRT | NRT | LKL | TPR | NTPR
x1 | −0.5485 | −0.4484 | −0.5492 | −0.5477 | −0.5701 | 0.0928318 | 0.0928147
x2 | 0.3478 | 0.3619 | 0.3478 | 0.3479 | 0.3405 | 0.1946531 | 0.1946173
x3 | 0.1941 | 0.1338 | 0.1948 | 0.1934 | 0.1910 | 0.1135197 | 0.1134988
x4 | 0.6216 | 0.4824 | 0.6216 | 0.6217 | 0.7217 | 0.0379703 | 0.0379633
MSE | 0.1101 | 0.1082 | 0.1102 | 0.1100 | 0.1417 | 0.0000174 | 0.0000173
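All estimators compared in Table 6 descend from the ordinary ridge family. As a baseline sketch, the classical one-parameter ridge estimator β̂(k) = (X′X + kI)⁻¹X′y is shown below; the closed forms of the two-parameter proposals (TPR, NTPR, and the others) are defined in the full paper and are not reproduced here.

```python
import numpy as np

def ridge_coef(X, y, k):
    """Ordinary ridge estimator beta(k) = (X'X + k I)^{-1} X'y, the common
    ancestor of the two-parameter estimators compared in Table 6."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```

Setting k = 0 recovers OLS, and increasing k shrinks the coefficient vector toward zero, which is the mechanism by which the ridge-type columns of Table 6 trade a little bias for the large variance reductions visible in the MSE row.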

Share and Cite

MDPI and ACS Style

Hoque, M.A.; Kibria, B.M.G.; Bursac, Z. New Two-Parameter Ridge Estimators for Addressing Multicollinearity in Linear Regression: Theory, Simulation, and Applications. Stats 2026, 9, 15. https://doi.org/10.3390/stats9010015

