Article

New Stochastic Restricted Biased Regression Estimators

1
Department of Mathematics, Al-Aqsa University, Gaza 4051, Palestine
2
Department of Statistics, Faculty of Science, University of Tabuk, Tabuk 47512, Saudi Arabia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(1), 15; https://doi.org/10.3390/math13010015
Submission received: 10 November 2024 / Revised: 16 December 2024 / Accepted: 21 December 2024 / Published: 24 December 2024
(This article belongs to the Section D1: Probability and Statistics)

Abstract

In this paper, we propose three stochastic restricted biased estimators for the linear regression model. These new estimators generalize the least squares estimator, mixed estimator, and biased estimator. We derive the necessary and sufficient conditions for the superiority of the proposed estimators over existing ones, as well as their relative superiority among each other, using the mean squared error matrix as a criterion. A simulation study is conducted to validate the theoretical findings, and two real-world examples are provided to demonstrate the practical advantages of the proposed estimators.

1. Introduction

The statistical consequences of multicollinearity in a linear model are well known in the field of statistics. These consequences have encouraged researchers to seek remedies by developing improved estimation approaches. One common remedial technique is the use of biased estimators such as the ridge estimator proposed by Hoerl and Kennard [1], the Liu estimator proposed by Liu [2], the Kibria–Lukman estimator proposed by Kibria and Lukman [3], the Dawoud–Kibria estimator (DKE) proposed by Dawoud and Kibria [4], and others. In addition, unrestricted and restricted estimators have been proposed in the literature, as seen in the studies of [5,6,7,8,9,10,11,12,13,14]. Moreover, Theil [15] suggested the mixed estimator (ME) by adding stochastic linear restrictions. Özkale [16] then proposed the ridge estimator under stochastic restrictions, Hubert and Wijekoon [17] gave a version of the Liu estimator under stochastic restrictions, and Yang and Xu [18] later introduced another version of the Liu estimator under stochastic restrictions. In addition, Yang and Cui [19] gave a two-parameter estimator under stochastic restrictions, and Li and Yang [20] studied the efficiency of a two-parameter estimator under stochastic restrictions. Özbay and Kaçıranlar [21] then studied estimation with stochastic linear restrictions using a two-parameter weighted mixed regression estimator, and Tyagi and Chandra [22] obtained the two-parameter principal component estimator under stochastic restrictions. Recently, Chen and Wu [23] presented and studied the mixed Kibria–Lukman regression estimator.
The least squares estimator (LSE) is known to suffer from inflated variance in the presence of multicollinearity, while the ME incorporates prior information but does not directly address multicollinearity. The DKE effectively mitigates multicollinearity but does not account for stochastic prior information. This gap highlights the need for a unified framework that simultaneously addresses multicollinearity and incorporates stochastic restrictions. To fill this gap, we propose three stochastic restricted DKEs. These estimators generalize existing methods by integrating the principles of the DKE with stochastic restrictions. The key innovation lies in the dual ability of the proposed estimators to manage multicollinearity while incorporating prior stochastic information. We derive the necessary and sufficient conditions under which each proposed estimator outperforms the LSE, ME, and DKE, using the mean squared error matrix (MSEM) criterion. This theoretical contribution establishes a formal basis for the superiority of the proposed methods. Finally, extensive simulations and two real-life datasets are employed to demonstrate the enhanced performance of the proposed estimators compared to the LSE, ME, and DKE. The remainder of this paper is organized as follows: Section 2 introduces the statistical model and the three proposed stochastic restricted DKEs. Section 3 presents a comparative analysis of the new estimators against existing methods. Section 4 details the results of a comprehensive simulation study. In Section 5, two real-world applications are discussed to illustrate the practical relevance of the proposed estimators. Finally, Section 6 provides concluding remarks and highlights key findings.

2. Model Specification and the New Estimators

The linear regression model is as follows:
$$\tilde{y} = \tilde{X}\beta + \varepsilon, \tag{1}$$
where $\tilde{y}$ is an $n \times 1$ vector of the dependent variable, $\tilde{X}$ is an $n \times p$ full-rank matrix of explanatory variables, $\beta$ is a $p \times 1$ vector of regression parameters, and $\varepsilon$ is an $n \times 1$ disturbance vector with $E(\varepsilon) = 0$ and $Var(\varepsilon) = \sigma^2 I_n$, where $I_n$ is the identity matrix. The LSE of $\beta$ in (1) is given as follows:
$$\hat{\beta} = \tilde{S}^{-1}\tilde{X}'\tilde{y}, \tag{2}$$
where $\tilde{S} = \tilde{X}'\tilde{X}$.
Prior information about $\beta$ in model (1) is provided through independent stochastic linear restrictions as follows:
$$\tilde{r} = \tilde{R}\beta + e, \tag{3}$$
where $\tilde{R}$ is a $j \times p$ matrix of rank $j$, $e$ is a $j \times 1$ disturbance vector with $E(e) = 0$ and $Var(e) = \sigma^2\tilde{W}$, $\tilde{W}$ is a known positive definite matrix, and $\tilde{r}$ is a $j \times 1$ vector with $E(\tilde{r}) = \tilde{R}\beta$ (Chen and Wu [23]). The vector $\varepsilon$ is stochastically independent of the vector $e$.
Combining (1) and (3), the model becomes as follows:
$$\tilde{y}_m = \tilde{X}_m\beta + \tilde{\mu}_m, \tag{4}$$
where $\tilde{y}_m = \begin{pmatrix} \tilde{y} \\ \tilde{r} \end{pmatrix}$, $\tilde{X}_m = \begin{pmatrix} \tilde{X} \\ \tilde{R} \end{pmatrix}$, $\tilde{\mu}_m = \begin{pmatrix} \varepsilon \\ e \end{pmatrix}$, $E(\tilde{\mu}_m) = 0$, and $Var(\tilde{\mu}_m) = \sigma^2\tilde{N} = \sigma^2\begin{pmatrix} I & 0 \\ 0 & \tilde{W} \end{pmatrix}$.
The ME is obtained by minimizing $\tilde{\Psi}_1 = (\tilde{y}_m - \tilde{X}_m\beta)'(\sigma^2\tilde{N})^{-1}(\tilde{y}_m - \tilde{X}_m\beta)$ with respect to $\beta$ and is given as follows:
$$\hat{\beta}_{ME} = (\tilde{S} + \tilde{R}'\tilde{W}^{-1}\tilde{R})^{-1}(\tilde{X}'\tilde{y} + \tilde{R}'\tilde{W}^{-1}\tilde{r}); \tag{5}$$
$\hat{\beta}_{ME}$ can also be written as follows:
$$\hat{\beta}_{ME} = \hat{\beta} + \tilde{S}^{-1}\tilde{R}'(\tilde{W} + \tilde{R}\tilde{S}^{-1}\tilde{R}')^{-1}(\tilde{r} - \tilde{R}\hat{\beta}); \tag{6}$$
see Durbin [24], Theil and Goldberger [25], and Theil [15].
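As a quick numerical check, the equivalence of the two forms of the ME can be verified on simulated data. The following Python/NumPy sketch (arbitrary simulated inputs; the paper's own computations use MATLAB) computes both expressions and confirms that they agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, j = 30, 4, 2
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)
R = rng.normal(size=(j, p))
W = np.eye(j)                        # W = I chosen for simplicity (an assumption)
r = R @ beta + rng.normal(size=j)

S = X.T @ X
beta_lse = np.linalg.solve(S, X.T @ y)

# First form: joint generalized least squares solution of the augmented model
A = np.linalg.inv(S + R.T @ np.linalg.inv(W) @ R)
me1 = A @ (X.T @ y + R.T @ np.linalg.inv(W) @ r)

# Second form: LSE plus an update toward the stochastic restriction
me2 = beta_lse + np.linalg.inv(S) @ R.T @ np.linalg.solve(
    W + R @ np.linalg.inv(S) @ R.T, r - R @ beta_lse)

assert np.allclose(me1, me2)
```

The equality of the two forms follows from the Sherman–Morrison–Woodbury identity, which is exactly what the numerical check exercises.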
To treat multicollinearity, the DKE was recently proposed by Dawoud and Kibria [4] and is defined as follows:
$$\hat{\beta}_{DKE} = \tilde{F}\hat{\beta} = \tilde{F}\tilde{S}^{-1}\tilde{X}'\tilde{y} = \tilde{S}^{-1}\tilde{F}\tilde{X}'\tilde{y}, \tag{7}$$
where $\tilde{F} = (\tilde{S} + k(1+d)I_p)^{-1}(\tilde{S} - k(1+d)I_p)$.
  • The Three Proposed Stochastic Restricted DKEs
1.
We obtain the first stochastic restricted DKE (SRDKE1) by grafting the DKE into the method of mixed estimation, following the procedure of Hubert and Wijekoon [17]:
$$\hat{\beta}_1 = \tilde{F}\hat{\beta}_{ME} = \tilde{F}(\tilde{S} + \tilde{R}'\tilde{W}^{-1}\tilde{R})^{-1}(\tilde{X}'\tilde{y} + \tilde{R}'\tilde{W}^{-1}\tilde{r}). \tag{8}$$
2.
We obtain the second stochastic restricted DKE (SRDKE2) by replacing the LSE in the ME with the DKE, following the procedure of Yang and Xu [18]:
$$\begin{aligned} \hat{\beta}_2 &= \hat{\beta}_{DKE} + \tilde{S}^{-1}\tilde{R}'(\tilde{W} + \tilde{R}\tilde{S}^{-1}\tilde{R}')^{-1}(\tilde{r} - \tilde{R}\hat{\beta}_{DKE}) \\ &= \tilde{S}^{-1}\tilde{F}\tilde{X}'\tilde{y} + \tilde{S}^{-1}\tilde{R}'(\tilde{W} + \tilde{R}\tilde{S}^{-1}\tilde{R}')^{-1}(\tilde{r} - \tilde{R}\tilde{S}^{-1}\tilde{F}\tilde{X}'\tilde{y}) \\ &= [\tilde{S}^{-1} - \tilde{S}^{-1}\tilde{R}'(\tilde{W} + \tilde{R}\tilde{S}^{-1}\tilde{R}')^{-1}\tilde{R}\tilde{S}^{-1}](\tilde{F}\tilde{X}'\tilde{y} + \tilde{R}'\tilde{W}^{-1}\tilde{r}) \\ &= (\tilde{S} + \tilde{R}'\tilde{W}^{-1}\tilde{R})^{-1}(\tilde{F}\tilde{X}'\tilde{y} + \tilde{R}'\tilde{W}^{-1}\tilde{r}). \end{aligned} \tag{9}$$
3.
We obtain the third stochastic restricted DKE (SRDKE3) by minimizing $(\tilde{y}_m - \tilde{X}_m\beta)'(\sigma^2\tilde{N})^{-1}(\tilde{y}_m - \tilde{X}_m\beta)$ subject to $(\beta + \hat{\beta})'(\beta + \hat{\beta}) = \tilde{c}$; that is, by minimizing
$$(\tilde{y}_m - \tilde{X}_m\beta)'(\sigma^2\tilde{N})^{-1}(\tilde{y}_m - \tilde{X}_m\beta) + k(1+d)[(\beta + \hat{\beta})'(\beta + \hat{\beta}) - \tilde{c}], \tag{10}$$
where $\tilde{c}$ is a constant and $k(1+d)$ is the Lagrangian multiplier.
Now, the solution to (10) gives the SRDKE3 as follows:
$$\hat{\beta}_3 = (\tilde{X}_m'\tilde{N}^{-1}\tilde{X}_m + k(1+d)I_p)^{-1}(\tilde{X}_m'\tilde{N}^{-1}\tilde{y}_m - k(1+d)\hat{\beta}), \quad 0 < d < 1, \; k > 0. \tag{11}$$
Equation (11) can be rewritten as follows:
$$\hat{\beta}_3 = (\tilde{S} + k(1+d)I_p + \tilde{R}'\tilde{W}^{-1}\tilde{R})^{-1}((I_p - k(1+d)\tilde{S}^{-1})\tilde{X}'\tilde{y} + \tilde{R}'\tilde{W}^{-1}\tilde{r}). \tag{12}$$
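The three proposed estimators are straightforward to compute once $\tilde{F}$ is formed. The following Python/NumPy sketch (an illustration with arbitrary simulated data, fixed $k$ and $d$, and $\tilde{W} = I$ for simplicity; the paper itself uses MATLAB) evaluates SRDKE1, SRDKE2, and SRDKE3, and numerically confirms that the update form of SRDKE2 matches its closed form and that the two forms of SRDKE3 coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, j = 40, 4, 2
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + 0.5 * rng.normal(size=n)
R = rng.normal(size=(j, p))
W = np.eye(j)                        # W = I for simplicity (an assumption)
r = R @ beta + rng.normal(size=j)
k, d = 0.5, 0.3
c = k * (1 + d)

S = X.T @ X
Winv = np.linalg.inv(W)
A = np.linalg.inv(S + R.T @ Winv @ R)
beta_lse = np.linalg.solve(S, X.T @ y)
beta_me = A @ (X.T @ y + R.T @ Winv @ r)
F = np.linalg.inv(S + c * np.eye(p)) @ (S - c * np.eye(p))

srdke1 = F @ beta_me                                    # SRDKE1
srdke2 = A @ (F @ X.T @ y + R.T @ Winv @ r)             # SRDKE2, closed form
Q = np.linalg.inv(S + c * np.eye(p) + R.T @ Winv @ R)
srdke3 = Q @ ((np.eye(p) - c * np.linalg.inv(S)) @ X.T @ y
              + R.T @ Winv @ r)                         # SRDKE3, rewritten form

# SRDKE2 as an update of the DKE gives the same vector as the closed form
dke = F @ beta_lse
upd = dke + np.linalg.inv(S) @ R.T @ np.linalg.solve(
    W + R @ np.linalg.inv(S) @ R.T, r - R @ dke)
assert np.allclose(srdke2, upd)

# SRDKE3 from the augmented-model form (with W = I, so N = I) agrees too
Xm, ym = np.vstack([X, R]), np.concatenate([y, r])
beta3 = np.linalg.solve(Xm.T @ Xm + c * np.eye(p), Xm.T @ ym - c * beta_lse)
assert np.allclose(srdke3, beta3)
```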
Now, we determine the expectation and dispersion matrix of the proposed SRDKE1 as follows:
$$E(\hat{\beta}_1) = \tilde{F}\beta, \tag{13}$$
$$Var(\hat{\beta}_1) = \sigma^2\tilde{F}\tilde{A}\tilde{F}', \tag{14}$$
where $\tilde{A} = (\tilde{S} + \tilde{R}'\tilde{W}^{-1}\tilde{R})^{-1}$.
Now, we determine the expectation and dispersion matrix of the proposed SRDKE2 as follows:
$$E(\hat{\beta}_2) = \tilde{A}(\tilde{F}\tilde{S} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\beta, \tag{15}$$
$$Var(\hat{\beta}_2) = \sigma^2\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}. \tag{16}$$
Now, we find the expectation and dispersion matrix of the new SRDKE3 as follows:
$$E(\hat{\beta}_3) = \tilde{Q}(\tilde{S} - k(1+d)I_p + \tilde{R}'\tilde{W}^{-1}\tilde{R})\beta, \tag{17}$$
$$Var(\hat{\beta}_3) = \sigma^2\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}, \tag{18}$$
where $\tilde{Q} = (\tilde{S} + k(1+d)I_p + \tilde{R}'\tilde{W}^{-1}\tilde{R})^{-1}$.

3. The Comparisons of the Estimators

  • The MSEM of the estimators
The MSEMs of the existing estimators are as follows:
$$MSEM(\hat{\beta}) = \sigma^2\tilde{S}^{-1}, \tag{19}$$
$$MSEM(\hat{\beta}_{ME}) = \sigma^2\tilde{A}, \tag{20}$$
$$MSEM(\hat{\beta}_{DKE}) = \sigma^2\tilde{F}\tilde{S}^{-1}\tilde{F}' + \tilde{b}_1\tilde{b}_1'. \tag{21}$$
Also, we give the MSEMs of the proposed estimators (SRDKE1, SRDKE2, and SRDKE3) using (13)–(18) as follows:
$$MSEM(\hat{\beta}_1) = \sigma^2\tilde{F}\tilde{A}\tilde{F}' + \tilde{b}_1\tilde{b}_1', \tag{22}$$
$$MSEM(\hat{\beta}_2) = \sigma^2\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A} + \tilde{b}_2\tilde{b}_2', \tag{23}$$
$$MSEM(\hat{\beta}_3) = \sigma^2\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q} + \tilde{b}_3\tilde{b}_3', \tag{24}$$
where $\tilde{b}_1 = Bias(\hat{\beta}_{DKE}) = Bias(\hat{\beta}_1) = (\tilde{F} - I)\beta$, $\tilde{b}_2 = Bias(\hat{\beta}_2) = \tilde{A}(\tilde{F} - I)\tilde{S}\beta$, and $\tilde{b}_3 = Bias(\hat{\beta}_3) = (\tilde{Q}(\tilde{S} - k(1+d)I_p + \tilde{R}'\tilde{W}^{-1}\tilde{R}) - I_p)\beta$.
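The bias vectors above can be cross-checked numerically against the expectation formulas (13), (15), and (17). The following Python/NumPy sketch (random positive definite inputs and $\tilde{W} = I$ for simplicity, an assumption) confirms the algebra, including a simplification of $\tilde{b}_3$ one can derive by hand:

```python
import numpy as np

rng = np.random.default_rng(2)
p, j = 4, 2
M = rng.normal(size=(p + 1, p))
S = M.T @ M                      # positive definite surrogate for X'X
R = rng.normal(size=(j, p))
beta = rng.normal(size=p)
k, d = 0.4, 0.6
c = k * (1 + d)

RW = R.T @ R                     # R' W^{-1} R with W = I
A = np.linalg.inv(S + RW)
F = np.linalg.inv(S + c * np.eye(p)) @ (S - c * np.eye(p))
Q = np.linalg.inv(S + c * np.eye(p) + RW)

b1 = (F - np.eye(p)) @ beta                              # bias of DKE and SRDKE1
b2 = A @ (F - np.eye(p)) @ S @ beta                      # bias of SRDKE2
b3 = (Q @ (S - c * np.eye(p) + RW) - np.eye(p)) @ beta   # bias of SRDKE3

# Cross-check against E(estimator) - beta from the expectation formulas
assert np.allclose(b1, F @ beta - beta)
assert np.allclose(b2, A @ (F @ S + RW) @ beta - beta)
# Since Q^{-1} = S + k(1+d)I + R'W^{-1}R, b3 simplifies to -2k(1+d) Q beta
assert np.allclose(b3, -2 * c * Q @ beta)
```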
To compare the estimators via the MSEM, we obtain the following differences:
$$\Delta_1 = MSEM(\hat{\beta}) - MSEM(\hat{\beta}_1) = \sigma^2\tilde{D}_1 - \tilde{b}_1\tilde{b}_1', \tag{25}$$
where $\tilde{D}_1 = \tilde{S}^{-1} - \tilde{F}\tilde{A}\tilde{F}'$;
$$\Delta_2 = MSEM(\hat{\beta}_{ME}) - MSEM(\hat{\beta}_1) = \sigma^2\tilde{D}_2 - \tilde{b}_1\tilde{b}_1', \tag{26}$$
where $\tilde{D}_2 = \tilde{A} - \tilde{F}\tilde{A}\tilde{F}'$;
$$\Delta_3 = MSEM(\hat{\beta}_{DKE}) - MSEM(\hat{\beta}_1) = \sigma^2\tilde{D}_3, \tag{27}$$
where $\tilde{D}_3 = \tilde{F}(\tilde{S}^{-1} - \tilde{A})\tilde{F}'$;
$$\Delta_4 = MSEM(\hat{\beta}) - MSEM(\hat{\beta}_2) = \sigma^2\tilde{D}_4 - \tilde{b}_2\tilde{b}_2', \tag{28}$$
where $\tilde{D}_4 = \tilde{S}^{-1} - \tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}$;
$$\Delta_5 = MSEM(\hat{\beta}_{ME}) - MSEM(\hat{\beta}_2) = \sigma^2\tilde{D}_5 - \tilde{b}_2\tilde{b}_2', \tag{29}$$
where $\tilde{D}_5 = \tilde{A} - \tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}$;
$$\Delta_6 = MSEM(\hat{\beta}_{DKE}) - MSEM(\hat{\beta}_2) = \sigma^2\tilde{D}_6 + \tilde{b}_1\tilde{b}_1' - \tilde{b}_2\tilde{b}_2', \tag{30}$$
where $\tilde{D}_6 = \tilde{F}\tilde{S}^{-1}\tilde{F}' - \tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}$;
$$\Delta_7 = MSEM(\hat{\beta}) - MSEM(\hat{\beta}_3) = \sigma^2\tilde{D}_7 - \tilde{b}_3\tilde{b}_3', \tag{31}$$
where $\tilde{D}_7 = \tilde{S}^{-1} - \tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}$;
$$\Delta_8 = MSEM(\hat{\beta}_{ME}) - MSEM(\hat{\beta}_3) = \sigma^2\tilde{D}_8 - \tilde{b}_3\tilde{b}_3', \tag{32}$$
where $\tilde{D}_8 = \tilde{A} - \tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}$;
$$\Delta_9 = MSEM(\hat{\beta}_{DKE}) - MSEM(\hat{\beta}_3) = \sigma^2\tilde{D}_9 + \tilde{b}_1\tilde{b}_1' - \tilde{b}_3\tilde{b}_3', \tag{33}$$
where $\tilde{D}_9 = \tilde{F}\tilde{S}^{-1}\tilde{F}' - \tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}$;
$$\Delta_{10} = MSEM(\hat{\beta}_1) - MSEM(\hat{\beta}_2) = \sigma^2\tilde{D}_{10} + \tilde{b}_1\tilde{b}_1' - \tilde{b}_2\tilde{b}_2', \tag{34}$$
where $\tilde{D}_{10} = \tilde{F}\tilde{A}\tilde{F}' - \tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}$;
$$\Delta_{11} = MSEM(\hat{\beta}_1) - MSEM(\hat{\beta}_3) = \sigma^2\tilde{D}_{11} + \tilde{b}_1\tilde{b}_1' - \tilde{b}_3\tilde{b}_3', \tag{35}$$
where $\tilde{D}_{11} = \tilde{F}\tilde{A}\tilde{F}' - \tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}$;
$$\Delta_{12} = MSEM(\hat{\beta}_2) - MSEM(\hat{\beta}_3) = \sigma^2\tilde{D}_{12} + \tilde{b}_2\tilde{b}_2' - \tilde{b}_3\tilde{b}_3', \tag{36}$$
where $\tilde{D}_{12} = \tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A} - \tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}$.

3.1. Comparison Between the Proposed SRDKE1 and LSE

The following theorem represents the comparison between the proposed SRDKE1 and LSE.
Theorem 1.
Under model (4), when $\lambda_{max}(\tilde{F}\tilde{A}\tilde{F}'\tilde{S}) < 1$, $\hat{\beta}_1$ is superior to $\hat{\beta}$ via the MSEM; namely, $\Delta_1 \geq 0$ iff $\tilde{b}_1'(\sigma^2\tilde{D}_1)^{-1}\tilde{b}_1 \leq 1$.
Proof. 
It is clear that $\tilde{S}^{-1} > 0$ and $\tilde{F}\tilde{A}\tilde{F}' > 0$. Therefore, when $\lambda_{max}(\tilde{F}\tilde{A}\tilde{F}'\tilde{S}) < 1$, we get $\tilde{D}_1 > 0$ by applying Lemma A1 in Appendix A. So, from (25) and applying Lemma A3 in Appendix A, we have $\Delta_1 \geq 0$ iff $\tilde{b}_1'(\sigma^2\tilde{D}_1)^{-1}\tilde{b}_1 \leq 1$. □
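To illustrate Theorem 1, the eigenvalue condition and the resulting equivalence can be checked numerically. The sketch below (Python/NumPy with random inputs and $\tilde{W} = I$, an assumption made for simplicity) builds $\tilde{D}_1$, checks $\lambda_{max}(\tilde{F}\tilde{A}\tilde{F}'\tilde{S}) < 1$, and tests that positive semidefiniteness of $\Delta_1$ agrees with $\tilde{b}_1'(\sigma^2\tilde{D}_1)^{-1}\tilde{b}_1 \leq 1$:

```python
import numpy as np

rng = np.random.default_rng(6)
p, j = 4, 2
M = rng.normal(size=(2 * p, p))
S = M.T @ M                          # positive definite surrogate for X'X
R = rng.normal(size=(j, p))
beta = rng.normal(size=p)
sigma2, k, d = 1.0, 0.3, 0.5
c = k * (1 + d)

A = np.linalg.inv(S + R.T @ R)       # A with W = I assumed
F = np.linalg.inv(S + c * np.eye(p)) @ (S - c * np.eye(p))
FAF = F @ A @ F.T

# Eigenvalue condition of Theorem 1 (eigenvalues are real and positive here)
assert np.max(np.abs(np.linalg.eigvals(FAF @ S))) < 1

D1 = np.linalg.inv(S) - FAF          # D1 from Equation (25)
b1 = (F - np.eye(p)) @ beta
Delta1 = sigma2 * D1 - np.outer(b1, b1)
quad = b1 @ np.linalg.solve(sigma2 * D1, b1)
psd = np.linalg.eigvalsh(Delta1).min() >= -1e-10
assert psd == (quad <= 1)            # the equivalence stated in Theorem 1
```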

3.2. Comparison Between the Proposed SRDKE1 and ME

The following theorem represents the comparison between the proposed SRDKE1 and ME.
Theorem 2.
Under model (4), when $\lambda_{max}(\tilde{F}\tilde{A}\tilde{F}'\tilde{A}^{-1}) < 1$, $\hat{\beta}_1$ is superior to $\hat{\beta}_{ME}$ via the MSEM; namely, $\Delta_2 \geq 0$ iff $\tilde{b}_1'(\sigma^2\tilde{D}_2)^{-1}\tilde{b}_1 \leq 1$.
Proof. 
It is clear that $\tilde{A} > 0$ and $\tilde{F}\tilde{A}\tilde{F}' > 0$. Therefore, when $\lambda_{max}(\tilde{F}\tilde{A}\tilde{F}'\tilde{A}^{-1}) < 1$, we get $\tilde{D}_2 > 0$ by applying Lemma A1. So, from (26) and applying Lemma A3, we have $\Delta_2 \geq 0$ iff $\tilde{b}_1'(\sigma^2\tilde{D}_2)^{-1}\tilde{b}_1 \leq 1$. □

3.3. Comparison Between the Proposed SRDKE1 and DKE

The following theorem represents the comparison between the proposed SRDKE1 and DKE.
Theorem 3.
Under model (4), when $\lambda_{max}(\tilde{F}\tilde{A}\tilde{S}\tilde{F}^{-1}) < 1$, $\hat{\beta}_1$ is superior to $\hat{\beta}_{DKE}$ via the MSEM; namely, $\Delta_3 \geq 0$ iff $\tilde{b}_1'[\sigma^2\tilde{D}_3 + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_1 \leq 1$.
Proof. 
It is clear that $\tilde{F}\tilde{S}^{-1}\tilde{F}' > 0$ and $\tilde{F}\tilde{A}\tilde{F}' > 0$. Therefore, when $\lambda_{max}(\tilde{F}\tilde{A}\tilde{S}\tilde{F}^{-1}) < 1$, we get $\tilde{D}_3 > 0$ by applying Lemma A1. So, from (27) and applying Lemma A3, we have $\Delta_3 \geq 0$ iff $\tilde{b}_1'[\sigma^2\tilde{D}_3 + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_1 \leq 1$. □

3.4. Comparison Between the Proposed SRDKE2 and LSE

The following theorem represents the comparison between the proposed SRDKE2 and LSE.
Theorem 4.
Under model (4), when $\lambda_{max}(\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}\tilde{S}) < 1$, $\hat{\beta}_2$ is superior to $\hat{\beta}$ via the MSEM; namely, $\Delta_4 \geq 0$ iff $\tilde{b}_2'(\sigma^2\tilde{D}_4)^{-1}\tilde{b}_2 \leq 1$.
Proof. 
It is clear that $\tilde{S}^{-1} > 0$ and $\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A} > 0$. Therefore, when $\lambda_{max}(\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}\tilde{S}) < 1$, we get $\tilde{D}_4 > 0$ by applying Lemma A1. So, from (28) and applying Lemma A3, we have $\Delta_4 \geq 0$ iff $\tilde{b}_2'(\sigma^2\tilde{D}_4)^{-1}\tilde{b}_2 \leq 1$. □

3.5. Comparison Between the Proposed SRDKE2 and ME

The following theorem represents the comparison between the proposed SRDKE2 and ME.
Theorem 5.
Under model (4), when $\lambda_{max}(\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})) < 1$, $\hat{\beta}_2$ is superior to $\hat{\beta}_{ME}$ via the MSEM; namely, $\Delta_5 \geq 0$ iff $\tilde{b}_2'(\sigma^2\tilde{D}_5)^{-1}\tilde{b}_2 \leq 1$.
Proof. 
It is clear that $\tilde{A} > 0$ and $\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A} > 0$. Therefore, when $\lambda_{max}(\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})) < 1$, we get $\tilde{D}_5 > 0$ by applying Lemma A1. So, from (29) and applying Lemma A3, we have $\Delta_5 \geq 0$ iff $\tilde{b}_2'(\sigma^2\tilde{D}_5)^{-1}\tilde{b}_2 \leq 1$. □

3.6. Comparison Between the Proposed SRDKE2 and DKE

The following theorem represents the comparison between the proposed SRDKE2 and DKE.
Theorem 6.
Under model (4), when $\lambda_{max}(\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}(\tilde{F}\tilde{S}^{-1}\tilde{F}')^{-1}) < 1$, $\hat{\beta}_2$ is superior to $\hat{\beta}_{DKE}$ via the MSEM; namely, $\Delta_6 \geq 0$ iff $\tilde{b}_2'[\sigma^2\tilde{D}_6 + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_2 \leq 1$.
Proof. 
It is clear that $\tilde{F}\tilde{S}^{-1}\tilde{F}' > 0$ and $\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A} > 0$. Therefore, when $\lambda_{max}(\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}(\tilde{F}\tilde{S}^{-1}\tilde{F}')^{-1}) < 1$, we get $\tilde{D}_6 > 0$ by applying Lemma A1. So, from (30) and applying Lemma A3, we have $\Delta_6 \geq 0$ iff $\tilde{b}_2'[\sigma^2\tilde{D}_6 + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_2 \leq 1$. □

3.7. Comparison Between the Proposed SRDKE3 and LSE

The following theorem represents the comparison between the proposed SRDKE3 and LSE.
Theorem 7.
Under model (4), when $\lambda_{max}(\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}\tilde{S}) < 1$, $\hat{\beta}_3$ is superior to $\hat{\beta}$ via the MSEM; namely, $\Delta_7 \geq 0$ iff $\tilde{b}_3'(\sigma^2\tilde{D}_7)^{-1}\tilde{b}_3 \leq 1$.
Proof. 
It is clear that $\tilde{S}^{-1} > 0$ and $\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q} > 0$. Therefore, under the stated eigenvalue condition, we get $\tilde{D}_7 > 0$ by applying Lemma A1. So, from (31) and applying Lemma A3, we have $\Delta_7 \geq 0$ iff $\tilde{b}_3'(\sigma^2\tilde{D}_7)^{-1}\tilde{b}_3 \leq 1$. □

3.8. Comparison Between the Proposed SRDKE3 and ME

The following theorem represents the comparison between the proposed SRDKE3 and ME.
Theorem 8.
Under model (4), when $\lambda_{max}(\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}\tilde{A}^{-1}) < 1$, $\hat{\beta}_3$ is superior to $\hat{\beta}_{ME}$ via the MSEM; namely, $\Delta_8 \geq 0$ iff $\tilde{b}_3'(\sigma^2\tilde{D}_8)^{-1}\tilde{b}_3 \leq 1$.
Proof. 
It is clear that $\tilde{A} > 0$ and $\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q} > 0$. Therefore, under the stated eigenvalue condition, we get $\tilde{D}_8 > 0$ by applying Lemma A1. So, from (32) and applying Lemma A3, we have $\Delta_8 \geq 0$ iff $\tilde{b}_3'(\sigma^2\tilde{D}_8)^{-1}\tilde{b}_3 \leq 1$. □

3.9. Comparison Between the Proposed SRDKE3 and DKE

The following theorem represents the comparison between the proposed SRDKE3 and DKE.
Theorem 9.
Under model (4), when $\lambda_{max}(\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}(\tilde{F}\tilde{S}^{-1}\tilde{F}')^{-1}) < 1$, $\hat{\beta}_3$ is superior to $\hat{\beta}_{DKE}$ via the MSEM; namely, $\Delta_9 \geq 0$ iff $\tilde{b}_3'[\sigma^2\tilde{D}_9 + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_3 \leq 1$.
Proof. 
It is clear that $\tilde{F}\tilde{S}^{-1}\tilde{F}' > 0$ and $\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q} > 0$. Therefore, under the stated eigenvalue condition, we get $\tilde{D}_9 > 0$ by applying Lemma A1. So, from (33) and applying Lemma A3, we have $\Delta_9 \geq 0$ iff $\tilde{b}_3'[\sigma^2\tilde{D}_9 + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_3 \leq 1$. □

3.10. Comparison Between the Proposed SRDKE2 and the Proposed SRDKE1

The following theorem represents the comparison between the proposed SRDKE2 and the proposed SRDKE1.
Theorem 10.
Under model (4), when $\lambda_{max}(\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}(\tilde{F}\tilde{A}\tilde{F}')^{-1}) < 1$, $\hat{\beta}_2$ is superior to $\hat{\beta}_1$ via the MSEM; namely, $\Delta_{10} \geq 0$ iff $\tilde{b}_2'[\sigma^2\tilde{D}_{10} + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_2 \leq 1$.
Proof. 
It is clear that $\tilde{F}\tilde{A}\tilde{F}' > 0$ and $\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A} > 0$. Therefore, when $\lambda_{max}(\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}(\tilde{F}\tilde{A}\tilde{F}')^{-1}) < 1$, we get $\tilde{D}_{10} > 0$ by applying Lemma A1. So, from (34) and applying Lemma A3, we have $\Delta_{10} \geq 0$ iff $\tilde{b}_2'[\sigma^2\tilde{D}_{10} + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_2 \leq 1$. □

3.11. Comparison Between the Proposed SRDKE3 and the Proposed SRDKE1

The following theorem represents the comparison between the proposed SRDKE3 and the proposed SRDKE1.
Theorem 11.
Under model (4), when $\lambda_{max}(\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}(\tilde{F}\tilde{A}\tilde{F}')^{-1}) < 1$, $\hat{\beta}_3$ is superior to $\hat{\beta}_1$ via the MSEM; namely, $\Delta_{11} \geq 0$ iff $\tilde{b}_3'[\sigma^2\tilde{D}_{11} + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_3 \leq 1$.
Proof. 
It is clear that $\tilde{F}\tilde{A}\tilde{F}' > 0$ and $\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q} > 0$. Therefore, under the stated eigenvalue condition, we get $\tilde{D}_{11} > 0$ by applying Lemma A1. So, from (35) and applying Lemma A3, we have $\Delta_{11} \geq 0$ iff $\tilde{b}_3'[\sigma^2\tilde{D}_{11} + \tilde{b}_1\tilde{b}_1']^{-1}\tilde{b}_3 \leq 1$. □

3.12. Comparison Between the Proposed SRDKE3 and the Proposed SRDKE2

The following theorem represents the comparison between the proposed SRDKE3 and the proposed SRDKE2.
Theorem 12.
Under model (4), when $\lambda_{max}(\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q}[\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A}]^{-1}) < 1$, $\hat{\beta}_3$ is superior to $\hat{\beta}_2$ via the MSEM; namely, $\Delta_{12} \geq 0$ iff $\tilde{b}_3'[\sigma^2\tilde{D}_{12} + \tilde{b}_2\tilde{b}_2']^{-1}\tilde{b}_3 \leq 1$.
Proof. 
It is clear that $\tilde{A}(\tilde{F}\tilde{S}\tilde{F}' + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{A} > 0$ and $\tilde{Q}(\tilde{S} - 2k(1+d)I_p + k^2(1+d)^2\tilde{S}^{-1} + \tilde{R}'\tilde{W}^{-1}\tilde{R})\tilde{Q} > 0$. Therefore, under the stated eigenvalue condition, we get $\tilde{D}_{12} > 0$ by applying Lemma A1. So, from (36) and applying Lemma A3, we have $\Delta_{12} \geq 0$ iff $\tilde{b}_3'[\sigma^2\tilde{D}_{12} + \tilde{b}_2\tilde{b}_2']^{-1}\tilde{b}_3 \leq 1$. □

4. A Simulation Study

A simulation is performed to show the performance of the proposed estimators (SRDKE1, SRDKE2, and SRDKE3) in addition to the existing ones. MATLAB software is used for the computations. We use the equation below to generate the matrix of explanatory variables:
$$\tilde{x}_{ij} = (1 - \tilde{\gamma}^2)^{1/2}\tilde{z}_{ij} + \tilde{\gamma}\tilde{z}_{i,p+1}, \quad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, p, \tag{37}$$
where the $\tilde{z}_{ij}$'s are independent pseudo-random values following a standard normal distribution, and $\tilde{\gamma}$ is chosen so that $\tilde{\gamma}^2$ is the correlation between any two columns of $\tilde{X}$ (Kibria [26]; Dawoud and Kibria [4]). Also, following Kibria [26], we take the normalized eigenvector corresponding to the largest eigenvalue of $\tilde{S}$ as the parameter vector.
The dependent variable $\tilde{y}$ is generated as follows:
$$\tilde{y}_i = \beta_1\tilde{x}_{i1} + \beta_2\tilde{x}_{i2} + \cdots + \beta_p\tilde{x}_{ip} + e_i, \quad i = 1, 2, \ldots, n, \tag{38}$$
where the $e_i$'s are i.i.d. $N(0, \sigma^2)$ and the $\beta$ values are selected such that $\beta'\beta = 1$ (Dawoud and Kibria [4]).
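The generation scheme above can be sketched in a few lines. The following Python/NumPy illustration (the paper uses MATLAB) takes $\beta$ as a fixed unit-norm vector for simplicity rather than the eigenvector choice, and checks that the sample correlation between columns is near $\tilde{\gamma}^2$:

```python
import numpy as np

def make_data(n, p, gamma, sigma, beta, rng):
    """Generate X with pairwise column correlation gamma^2 and a normal response."""
    Z = rng.standard_normal((n, p + 1))
    X = np.sqrt(1 - gamma**2) * Z[:, :p] + gamma * Z[:, [p]]
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y

rng = np.random.default_rng(3)
p, gamma = 3, 0.99
beta = np.ones(p) / np.sqrt(p)        # beta'beta = 1, as required in the paper
X, y = make_data(5000, p, gamma, 1.0, beta, rng)

corr = np.corrcoef(X, rowvar=False)
# Off-diagonal correlations should be near gamma^2 = 0.9801
assert abs(corr[0, 1] - gamma**2) < 0.02
```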
We state the factors’ values used in the simulation in Table 1.
Here, we consider the following stochastic restrictions for each $p$, following the style provided by Roozbeh et al. [27] and Roozbeh and Hamzah [28]:
-
For $p = 3$, $\tilde{R} = (1 \; 2 \; 2)$.
-
For $p = 7$, $\tilde{R} = (1 \; 2 \; 2 \; 2 \; 2 \; 2 \; 2)$.
-
For $p = 10$, $\tilde{R} = (1 \; 2 \; 2 \; 2 \; 2 \; 2 \; 2 \; 2 \; 2 \; 2)$.
In each case, $\tilde{r} = \tilde{R}\beta + e$ with $e \sim N(0, \sigma^2_{LSE})$.
Also, we use the following estimators of $(k, d)$ for the DKE, SRDKE1, SRDKE2, and SRDKE3:
$$\hat{d} = \min_{1 \le j \le p}\left(\frac{\hat{\alpha}_j^2}{\hat{\sigma}^2/\lambda_j + \hat{\alpha}_j^2}\right), \tag{39}$$
$$\hat{k} = \min_{1 \le j \le p}\left(\frac{\hat{\sigma}^2/(1 + \hat{d})}{\hat{\sigma}^2/\lambda_j + 2\hat{\alpha}_j^2}\right), \tag{40}$$
where $\lambda_j$ is the $j$-th eigenvalue of $\tilde{S}$, $\alpha = P'\beta$ with $P$ the orthogonal matrix of eigenvectors of $\tilde{S}$, $\hat{\sigma}^2$ is the estimated value of $\sigma^2$, and $\hat{\alpha}$ is the LSE of $\alpha$; see [4,29].
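A plug-in computation of $\hat{k}$ and $\hat{d}$ along these lines can be sketched as follows (Python/NumPy; the spectral decomposition of $\tilde{S}$ supplies $\lambda_j$ and $P$, and the exact form of the two minima is our reading of the formulas above, so treat this as an assumption rather than the authors' code):

```python
import numpy as np

def dk_parameters(X, y):
    """Plug-in estimates of (k, d) in the spirit of the formulas above."""
    n, p = X.shape
    S = X.T @ X
    lam, P = np.linalg.eigh(S)               # eigenvalues/eigenvectors of S
    beta_lse = np.linalg.solve(S, X.T @ y)
    resid = y - X @ beta_lse
    sigma2 = resid @ resid / (n - p)         # unbiased estimate of sigma^2
    alpha = P.T @ beta_lse                   # canonical-form coefficients
    d = np.min(alpha**2 / (sigma2 / lam + alpha**2))
    k = np.min((sigma2 / (1 + d)) / (sigma2 / lam + 2 * alpha**2))
    return k, d

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 4))
beta = np.full(4, 0.5)
y = X @ beta + rng.normal(size=50)
k, d = dk_parameters(X, y)
assert k > 0 and 0 < d < 1
```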
To verify the performances of the LSE, ME, DKE, and the proposed estimators (SRDKE1, SRDKE2, and SRDKE3), we use the following estimated MSE:
$$MSE(\tilde{\delta}) = \frac{1}{MCN}\sum_{j=1}^{MCN}(\tilde{\delta}_{ij} - \tilde{\delta}_i)'(\tilde{\delta}_{ij} - \tilde{\delta}_i), \tag{41}$$
where $\tilde{\delta}_{ij}$ is the estimator at the $j$-th replication, $\tilde{\delta}_i$ is the true parameter, and $MCN$ is the number of Monte Carlo replications.
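A minimal Monte Carlo loop implementing this estimated MSE might look as follows (a Python/NumPy sketch with $W = I$, a small replication count, and only the LSE and ME compared for brevity; these choices are simplifications, not the paper's MATLAB setup):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, reps = 30, 3, 2000
beta = np.ones(p) / np.sqrt(p)          # beta'beta = 1
R = np.array([[1.0, 2.0, 2.0]])         # the restriction used for p = 3
X = rng.normal(size=(n, p))
S = X.T @ X
A = np.linalg.inv(S + R.T @ R)          # ME matrix with W = I (an assumption)

sse_lse = sse_me = 0.0
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    r = R @ beta + rng.normal(size=1)   # valid stochastic restriction
    b_lse = np.linalg.solve(S, X.T @ y)
    b_me = A @ (X.T @ y + R.T @ r)
    sse_lse += (b_lse - beta) @ (b_lse - beta)
    sse_me += (b_me - beta) @ (b_me - beta)

mse_lse, mse_me = sse_lse / reps, sse_me / reps
assert mse_me < mse_lse                 # ME exploits the valid prior information
```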
The results of the simulation are given in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10.
The performance of the estimators is influenced by several key factors:
  • Sample Size ( n ): As the sample size increases, the MSE for all estimators decreases, reflecting the typical behavior where larger samples yield more stable and reliable parameter estimates.
  • Degree of Correlation ( γ ~ ): As the degree of correlation between explanatory variables increases, the MSE for all estimators also increases. This is expected since higher multicollinearity leads to less stable and less reliable estimates.
  • Number of Explanatory Variables ( p ): An increase in the number of explanatory variables results in a higher MSE for all estimators. The increase in multicollinearity and model complexity associated with larger p explains this behavior.
  • Standard Deviation of the Error Term ( σ ): As the standard deviation of the error term increases, the MSE for all estimators rises accordingly. A larger σ reflects higher variability in the error term, making parameter estimation more challenging and less precise.
  • LSE Performance: The LSE consistently exhibits the highest MSE across all sample sizes. While its MSE decreases as the sample size increases, it remains the least efficient estimator in the presence of multicollinearity.
  • Effect of High Correlation ( γ ~ ≈ 0.99): The LSE shows a sharp increase in MSE as γ ~ approaches 0.99, highlighting its inefficiency in high multicollinearity situations, where it amplifies the variance of parameter estimates.
  • Effect of Model Complexity ( p = 3 to p = 10): The LSE demonstrates a significant increase in MSE as p increases from 3 to 10, illustrating its sensitivity to model complexity and the accompanying multicollinearity.
  • Sensitivity to Noise ( σ ): Among the estimators, the LSE exhibits the largest increase in MSE as σ increases, indicating its sensitivity to noise in the data.
  • Comparative Performance of ME, DKE, and SRDKEs: While the ME and the DKE generally outperform the LSE, they still underperform compared to the stochastic restricted DKE estimators (SRDKE1, SRDKE2, and SRDKE3) in most cases. The latter estimators demonstrate superior robustness under varying conditions of multicollinearity, model complexity, and noise.
  • Superior Performance of SRDKE1: The SRDKE1 consistently outperforms all other estimators across a wide range of scenarios, including varying sample sizes, degrees of correlation, numbers of explanatory variables, and error variances.
  • Robustness and Efficiency: Among the proposed estimators, SRDKE1 emerges as the most robust and efficient, particularly in settings characterized by high multicollinearity, large error variability, and complex models with many explanatory variables. While the SRDKE2 and SRDKE3 estimators also demonstrate strong performance, they are slightly less effective than SRDKE1. In contrast, the LSE remains the least robust estimator, especially in the presence of multicollinearity under stochastic restrictions.
Figure 1 illustrates the comparison of the MSE for six estimators: LSE, ME, DKE, SRDKE1, SRDKE2, and SRDKE3. As observed, the MSE for all estimators increases as γ ~ , σ, and p increase. Conversely, the MSE decreases as the sample size ( n ) increases. Among the estimators, the LSE demonstrates the worst performance, exhibiting the highest MSE under all conditions. The ME and DKE perform better than the LSE but are still outperformed by the proposed estimators (SRDKE1, SRDKE2, and SRDKE3). Notably, SRDKE1 achieves the lowest MSE, highlighting its robustness against multicollinearity and its superior efficiency. While SRDKE2 and SRDKE3 also outperform the LSE, ME, and DKE, they are slightly less effective than SRDKE1.

5. Real-Life Applications

The performances of the LSE, ME, DKE, and the proposed SRDKE1, SRDKE2, and SRDKE3 estimators are investigated via the MSE using two different real-life datasets.

5.1. Application 1

To confirm the superiority of the proposed estimators (SRDKE1, SRDKE2, and SRDKE3) over the others, we use the real-life dataset on the percentage of gross national product (GNP) devoted to research and development expenditures in different countries (1972–1986); these data are explained by Gruber [30] and Akdeniz and Erol [31]. The variables are as follows: $y$ (US), $\tilde{x}_1$ (France), $\tilde{x}_2$ (West Germany), $\tilde{x}_3$ (Japan), and $\tilde{x}_4$ (Soviet Union). The eigenvalues of $\tilde{S}$ are calculated as 302.9626, 0.7283, 0.0446, and 0.0345, and the condition number is 8781.5, indicating the occurrence of multicollinearity. The estimate $\hat{\sigma}^2$ equals 0.0015. Following Roozbeh et al. [27] and Roozbeh and Hamzah [28], the stochastic restriction is taken as $\tilde{R} = (1 \; 2 \; 2 \; 2)$, where $\tilde{r} = \tilde{R}\beta + e$ and $e \sim N(0, \sigma^2_{LSE})$.
Table 11 reports the coefficients and MSE values of the estimators.
Table 11 leads to the following comments:
  • The LSE exhibits the worst performance as it has the highest MSE value.
  • Both the ME and DKE demonstrate better performance than the LSE, with the ME outperforming the DKE.
  • All the proposed estimators (SRDKE1, SRDKE2, and SRDKE3) have lower MSE values compared to the existing estimators, indicating that the proposed estimators outperform the existing ones.
  • Among all the estimators, SRDKE2 and SRDKE3 achieve the best performance, followed by SRDKE1.

5.2. Application 2

Here, we use the popular real-life Hald data from Woods et al. [32]. These data have been used and explained by many authors, including Kaçıranlar et al. [6], Lukman et al. [33], Kibria and Lukman [3], and Dawoud and Kibria [4], among others. The eigenvalues of $\tilde{S}$ are 44,676.206, 5965.422, 809.952, and 105.419, and the condition number is 425.3619, indicating that severe multicollinearity exists.
We consider R ~ = ( 1   1   1   0 ) following Roozbeh et al. [27] and Roozbeh and Hamzah [28].
Table 12 reports the coefficients and MSE values of the estimators.
Table 12 leads to the following comments:
  • The LSE exhibits the worst performance as it has the highest MSE value.
  • Both the ME and DKE demonstrate better performance than the LSE, with the ME outperforming the DKE.
  • All the proposed estimators (SRDKE1, SRDKE2, and SRDKE3) have lower MSE values compared to the existing estimators, indicating that the proposed estimators outperform the existing ones.
  • Among all estimators, the best performance is achieved by the three proposed estimators (SRDKE1, SRDKE2, and SRDKE3).

6. Conclusions

In this article, we propose three stochastic restricted biased estimators for the linear regression model. The properties of these proposed estimators are thoroughly discussed. Additionally, we derive the necessary and sufficient conditions under which the proposed estimators outperform the LSE, the ME, and the DKE, introduced by Dawoud and Kibria [4] to address multicollinearity. These conditions are established using the MSEM criterion, allowing for a comprehensive comparison of the estimators’ performance. To validate the theoretical findings, we conduct a simulation study, which demonstrates the superiority of the proposed estimators over the existing estimators. Furthermore, two real-life datasets are analyzed to illustrate the practical effectiveness of the proposed estimators. These new estimators can be extended to other models in future research. We recommend using the proposed stochastic restricted estimators in the presence of multicollinearity, with a careful selection of the tuning parameters k and d to achieve optimal performance.

Author Contributions

Conceptualization, I.D. and H.E.; Methodology, I.D.; Software, I.D.; Validation, I.D. and H.E.; Formal analysis, I.D.; Investigation, I.D.; Resources, I.D.; Data curation, I.D.; Writing—original draft, I.D.; Writing—review & editing, I.D.; Visualization, I.D.; Supervision, I.D.; Project administration, I.D.; Funding acquisition, H.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Some Lemmas

Lemma A1
([34]). Let $N$ and $B$ be $n \times n$ matrices with $N > 0$ and $B > 0$ (or $B \geq 0$); then $N > B$ iff $\lambda_{max}(BN^{-1}) < 1$, where $\lambda_{max}(BN^{-1})$ is the maximum eigenvalue of the matrix $BN^{-1}$.
Lemma A2
([35]). Let $B$ be an $n \times n$ positive definite matrix, that is, $B > 0$, and let $\alpha$ be a vector; then $B - \alpha\alpha' > 0$ if and only if $\alpha'B^{-1}\alpha < 1$.
Lemma A3
([36]). Let $\hat{\alpha}_i = B_i y$, $i = 1, 2$, be two linear estimators of $\alpha$. Suppose that $D = Var(\hat{\alpha}_1) - Var(\hat{\alpha}_2) > 0$, where $Var(\hat{\alpha}_i)$, $i = 1, 2$, is the dispersion matrix of $\hat{\alpha}_i$, and let $b_i = Bias(\hat{\alpha}_i) = (B_i X - I)\alpha$, $i = 1, 2$. Consequently,
$$\Delta(\hat{\alpha}_1 - \hat{\alpha}_2) = MSEM(\hat{\alpha}_1) - MSEM(\hat{\alpha}_2) = D + b_1b_1' - b_2b_2' > 0$$
if and only if $b_2'[D + b_1b_1']^{-1}b_2 < 1$, where $MSEM(\hat{\alpha}_i) = Var(\hat{\alpha}_i) + b_ib_i'$.

References

1. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for non-orthogonal problems. Technometrics 1970, 12, 55–67.
2. Liu, K. A new class of biased estimate in linear regression. Commun. Stat. Theory Methods 1993, 22, 393–402.
3. Kibria, B.M.G.; Lukman, A.F. A new ridge-type estimator for the linear regression model: Simulations and applications. Scientifica 2020, 2020, 9758378.
4. Dawoud, I.; Kibria, B.M.G. A new biased estimator to combat the multicollinearity of the Gaussian linear regression model. Stats 2020, 3, 526–541.
5. Sarkar, N. A new estimator combining the ridge regression and the restricted least squares methods of estimation. Commun. Stat. Theory Methods 1992, 21, 1987–2000.
6. Kaçıranlar, S.; Sakallıoğlu, S.; Akdeniz, F.; Styan, G.P.H.; Werner, H.J. A new biased estimator in linear regression and a detailed analysis of the widely analysed dataset on Portland cement. Sankhyā Indian J. Stat. Ser. B 1999, 61, 443–459.
7. Groß, J. Restricted ridge estimation. Stat. Probab. Lett. 2003, 65, 57–64.
8. Zhong, Z.; Yang, H. Ridge estimation to the restricted linear model. Commun. Stat. Theory Methods 2007, 36, 2099–2115.
9. Xu, J.; Yang, H. On the restricted almost unbiased estimators in linear regression. J. Appl. Stat. 2011, 38, 605–617.
10. Li, Y.; Yang, H. Two kinds of restricted modified estimators in linear regression model. J. Appl. Stat. 2011, 38, 1447–1454.
11. Wu, J.; Liu, C. Performance of some stochastic restricted ridge estimator in linear regression model. J. Appl. Math. 2014, 2014, 508793.
12. Wu, J. Efficiency of the restricted r-d class estimator in linear regression. Appl. Math. Comput. 2014, 236, 572–579.
13. Wu, J. Modified restricted almost unbiased Liu estimator in linear regression model. Commun. Stat. Simul. Comput. 2016, 45, 689–700.
14. Li, Y.; Yang, H. More on the two-parameter estimation in the restricted regression. Commun. Stat. Theory Methods 2016, 45, 7184–7196.
15. Theil, H. On the use of incomplete prior information in regression analysis. J. Am. Stat. Assoc. 1963, 58, 401–414.
16. Özkale, M.R. A stochastic restricted ridge regression estimator. J. Multivar. Anal. 2009, 100, 1706–1716.
17. Hubert, M.H.; Wijekoon, P. Improvement of the Liu estimator in linear regression coefficient. Stat. Pap. 2006, 47, 471–479.
18. Yang, H.; Xu, J.W. An alternative stochastic restricted Liu estimator in linear regression model. Stat. Pap. 2009, 50, 639–647.
19. Yang, H.; Cui, J. A stochastic restricted two-parameter estimator in linear regression model. Commun. Stat. Theory Methods 2011, 40, 2318–2325.
20. Li, Y.; Yang, H. Efficiency of a stochastic restricted two-parameter estimator in linear regression. Appl. Math. Comput. 2014, 249, 371–381.
21. Özbay, N.; Kaçıranlar, S. Estimation in a linear regression model with stochastic linear restrictions: A new two-parameter-weighted mixed estimator. J. Stat. Comput. Simul. 2018, 88, 1669–1683.
22. Tyagi, G.; Chandra, S. Two-parameter stochastic restricted principal component estimator in linear regression model. Pak. J. Stat. 2019, 35, 127–154.
23. Chen, H.; Wu, J. On the mixed Kibria–Lukman estimator for the linear regression model. Sci. Rep. 2022, 12, 12430.
24. Durbin, J. A note on regression when there is extraneous information about one of the coefficients. J. Am. Stat. Assoc. 1953, 48, 799–808.
25. Theil, H.; Goldberger, A.S. On pure and mixed estimation in econometrics. Int. Econ. Rev. 1961, 2, 65–78.
26. Kibria, B.M.G. Performance of some new ridge regression estimators. Commun. Stat. Simul. Comput. 2003, 32, 419–435.
27. Roozbeh, M.; Hesamian, G.; Akbari, M.G. Ridge estimation in semi-parametric regression models under the stochastic restriction and correlated elliptically contoured errors. J. Comput. Appl. Math. 2020, 378, 112940.
28. Roozbeh, M.; Hamzah, N.A. Uncertain stochastic ridge estimation in partially linear regression models with elliptically distributed errors. Statistics 2020, 54, 494–523.
29. Özkale, M.R.; Kaçıranlar, S. The restricted and unrestricted two-parameter estimators. Commun. Stat. Theory Methods 2007, 36, 2707–2725.
30. Gruber, M.H.J. Improving Efficiency by Shrinkage: The James–Stein and Ridge Regression Estimators; Marcel Dekker: New York, NY, USA, 1998.
31. Akdeniz, F.; Erol, H. Mean squared error matrix comparisons of some biased estimators in linear regression. Commun. Stat. Theory Methods 2003, 32, 2389–2413.
32. Woods, H.; Steinour, H.H.; Starke, H.R. Effect of composition of Portland cement on heat evolved during hardening. J. Ind. Eng. Chem. 1932, 24, 1207–1214.
33. Lukman, A.F.; Ayinde, K.; Binuomote, S.; Clement, O.A. Modified ridge-type estimator to combat multicollinearity: Application to chemical data. J. Chemom. 2019, 33, e3125.
34. Wang, S.G.; Wu, M.X.; Jia, Z.Z. Matrix Inequalities, 2nd ed.; Chinese Science Press: Beijing, China, 2006.
35. Farebrother, R.W. Further results on the mean square error of ridge regression. J. R. Stat. Soc. Ser. B 1976, 38, 248–250.
36. Trenkler, G.; Toutenburg, H. Mean squared error matrix comparisons between biased estimators: An overview of recent results. Stat. Pap. 1990, 31, 165–179.
Figure 1. MSE comparison for different estimation methods.
Table 1. The factors' values.

Factor                            Symbol   Values
Sample size                       n        30, 50, 70, 100, 200, 500
Standard deviation                σ        1, 5, 9
Degree of correlation             γ        0.7, 0.8, 0.9, 0.95, 0.99
Number of explanatory variables   p        3, 7, 10
Number of replicates              MCN      1000
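Simulation designs with the factors of Table 1 are commonly implemented with the regressor-generation scheme of Kibria [26], in which all columns share a common standard-normal component. The sketch below assumes that scheme, a fixed unit-norm coefficient vector, and a design held fixed across replicates; these are illustrative choices, not necessarily the paper's exact protocol:

```python
import numpy as np

def make_design(n, p, gamma, rng):
    # Kibria-style scheme [26]: x_ij = sqrt(1 - gamma^2) z_ij + gamma z_{i,p+1},
    # giving any two columns a correlation close to gamma^2.
    z = rng.standard_normal((n, p + 1))
    return np.sqrt(1.0 - gamma**2) * z[:, :p] + gamma * z[:, [p]]

def estimated_mse_lse(n, p, gamma, sigma, mcn, rng):
    """Monte Carlo estimate of MSE(LSE) = E||alpha_hat - alpha||^2 over
    mcn replicates, for a fixed unit-norm coefficient vector alpha."""
    alpha = np.ones(p) / np.sqrt(p)       # common normalization ||alpha|| = 1
    X = make_design(n, p, gamma, rng)     # design held fixed across replicates
    total = 0.0
    for _ in range(mcn):
        y = X @ alpha + sigma * rng.standard_normal(n)
        a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        total += float((a_hat - alpha) @ (a_hat - alpha))
    return total / mcn

rng = np.random.default_rng(1)
mse = estimated_mse_lse(n=100, p=3, gamma=0.9, sigma=1.0, mcn=1000, rng=rng)
print(round(mse, 4))  # small positive value; grows with gamma and sigma
```

Replacing the least-squares step with each competing estimator, over the grid of factors in Table 1, yields estimated-MSE tables of the kind reported below.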
Table 2. Estimated MSE of the estimators (σ = 1, p = 3). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE      DKE      ME       SRDKE1   SRDKE2   SRDKE3
30    0.70   0.1314   0.1159   0.1126   0.0990   0.1012   0.1009
      0.80   0.1762   0.1486   0.1456   0.1218   0.1263   0.1256
      0.90   0.3158   0.2422   0.2391   0.1799   0.1934   0.1912
      0.95   0.5976   0.4088   0.4072   0.2696   0.3021   0.2973
      0.99   2.8540   1.4625   1.5891   0.7876   0.8875   0.8776
50    0.70   0.0972   0.0884   0.0881   0.0800   0.0809   0.0808
      0.80   0.1321   0.1160   0.1161   0.1016   0.1035   0.1033
      0.90   0.2402   0.1960   0.1955   0.1579   0.1646   0.1636
      0.95   0.4582   0.3408   0.3380   0.2457   0.2646   0.2621
      0.99   2.2053   1.2726   1.3116   0.7301   0.8062   0.8001
70    0.70   0.0626   0.0579   0.0588   0.0545   0.0547   0.0547
      0.80   0.0842   0.0754   0.0778   0.0696   0.0702   0.0701
      0.90   0.1523   0.1275   0.1347   0.1122   0.1143   0.1140
      0.95   0.2903   0.2220   0.2413   0.1818   0.1891   0.1882
      0.99   1.3981   0.8094   0.9794   0.5421   0.5901   0.5853
100   0.70   0.0428   0.0402   0.0405   0.0381   0.0384   0.0383
      0.80   0.0580   0.0532   0.0535   0.0491   0.0497   0.0496
      0.90   0.1059   0.0920   0.0909   0.0787   0.0810   0.0808
      0.95   0.2033   0.1633   0.1553   0.1234   0.1318   0.1308
      0.99   0.9869   0.6092   0.5225   0.3162   0.3718   0.3669
200   0.70   0.0225   0.0218   0.0219   0.0213   0.0213   0.0213
      0.80   0.0302   0.0289   0.0291   0.0279   0.0279   0.0279
      0.90   0.0545   0.0504   0.0509   0.0471   0.0474   0.0473
      0.95   0.1040   0.0913   0.0919   0.0805   0.0819   0.0818
      0.99   0.5012   0.3587   0.3486   0.2447   0.2686   0.2656
500   0.70   0.0080   0.0079   0.0080   0.0079   0.0079   0.0079
      0.80   0.0108   0.0106   0.0107   0.0105   0.0105   0.0105
      0.90   0.0195   0.0188   0.0191   0.0184   0.0185   0.0185
      0.95   0.0373   0.0350   0.0357   0.0334   0.0336   0.0336
      0.99   0.1798   0.1474   0.1490   0.1209   0.1262   0.1255
Table 3. Estimated MSE of the estimators (σ = 1, p = 7). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE       DKE      ME        SRDKE1   SRDKE2   SRDKE3
30    0.70   0.5311    0.4041   0.5114    0.3878   0.3932   0.3923
      0.80   0.7417    0.5395   0.7076    0.5125   0.5216   0.5200
      0.90   1.3864    0.9311   1.2890    0.8614   0.8842   0.8803
      0.95   2.6825    1.6714   2.4209    1.5024   1.5523   1.5435
      0.99   13.0461   7.2661   11.2220   6.2623   6.4589   6.4177
50    0.70   0.3102    0.2466   0.3005    0.2381   0.2411   0.2403
      0.80   0.4363    0.3320   0.4171    0.3159   0.3228   0.3211
      0.90   0.8217    0.5753   0.7609    0.5289   0.5524   0.5461
      0.95   1.5966    1.0247   1.4232    0.9063   0.9671   0.9505
      0.99   7.8019    4.3092   6.4864    3.5837   3.8728   3.7879
70    0.70   0.1796    0.1550   0.1755    0.1514   0.1519   0.1518
      0.80   0.2526    0.2101   0.2449    0.2035   0.2047   0.2046
      0.90   0.4762    0.3681   0.4528    0.3493   0.3536   0.3530
      0.95   0.9267    0.6598   0.8584    0.6088   0.6213   0.6195
      0.99   4.5411    2.7198   3.9735    2.3611   2.4206   2.4130
100   0.70   0.1263    0.1125   0.1230    0.1096   0.1099   0.1098
      0.80   0.1771    0.1532   0.1712    0.1480   0.1486   0.1486
      0.90   0.3330    0.2706   0.3155    0.2560   0.2585   0.2582
      0.95   0.6469    0.4876   0.5960    0.4478   0.4562   0.4550
      0.99   3.1615    1.9927   2.7228    1.7075   1.7610   1.7525
200   0.70   0.0566    0.0529   0.0561    0.0524   0.0525   0.0525
      0.80   0.0795    0.0728   0.0784    0.0719   0.0720   0.0720
      0.90   0.1495    0.1309   0.1459    0.1276   0.1283   0.1282
      0.95   0.2905    0.2391   0.2778    0.2285   0.2310   0.2307
      0.99   1.4200    0.9727   1.2618    0.8625   0.8872   0.8845
500   0.70   0.0241    0.0233   0.0240    0.0232   0.0232   0.0232
      0.80   0.0338    0.0324   0.0336    0.0321   0.0321   0.0321
      0.90   0.0637    0.0592   0.0629    0.0585   0.0585   0.0585
      0.95   0.1238    0.1104   0.1210    0.1079   0.1081   0.1081
      0.99   0.6054    0.4631   0.5617    0.4287   0.4345   0.4340
Table 4. Estimated MSE of the estimators (σ = 1, p = 10). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE       DKE       ME        SRDKE1    SRDKE2    SRDKE3
30    0.70   0.9340    0.6448    0.9201    0.6328    0.6374    0.6364
      0.80   1.3165    0.8721    1.2915    0.8511    0.8607    0.8584
      0.90   2.4838    1.5345    2.4071    1.4749    1.5043    1.4972
      0.95   4.8302    2.8163    4.6139    2.6628    2.7325    2.7158
      0.99   23.6169   12.8952   22.0274   11.9006   12.2251   12.1443
50    0.70   0.4215    0.3266    0.4133    0.3200    0.3239    0.3229
      0.80   0.5924    0.4404    0.5754    0.4277    0.4362    0.4338
      0.90   1.1156    0.7700    1.0595    0.7313    0.7576    0.7502
      0.95   2.1682    1.3941    2.0052    1.2910    1.3545    1.3370
      0.99   10.6006   6.0970    9.3621    5.4186    5.7010    5.6200
70    0.70   0.2988    0.2447    0.2953    0.2416    0.2424    0.2422
      0.80   0.4213    0.3328    0.4151    0.3274    0.3289    0.3286
      0.90   0.7957    0.5865    0.7769    0.5711    0.5756    0.5748
      0.95   1.5488    1.0618    1.4939    1.0194    1.0315    1.0295
      0.99   7.5830    4.5758    7.1131    4.2543    4.3107    4.3021
100   0.70   0.1824    0.1588    0.1792    0.1561    0.1565    0.1565
      0.80   0.2575    0.2178    0.2517    0.2129    0.2138    0.2137
      0.90   0.4868    0.3876    0.4695    0.3737    0.3769    0.3764
      0.95   0.9477    0.7049    0.8975    0.6672    0.6766    0.6752
      0.99   4.6403    2.9926    4.2121    2.7172    2.7716    2.7628
200   0.70   0.0902    0.0829    0.0893    0.0821    0.0821    0.0821
      0.80   0.1271    0.1143    0.1255    0.1128    0.1129    0.1129
      0.90   0.2401    0.2056    0.2348    0.2010    0.2017    0.2016
      0.95   0.4674    0.3765    0.4507    0.3630    0.3656    0.3653
      0.99   2.2886    1.5718    2.1106    1.4487    1.4727    1.4697
500   0.70   0.0351    0.0336    0.0351    0.0335    0.0335    0.0335
      0.80   0.0496    0.0468    0.0494    0.0466    0.0466    0.0466
      0.90   0.0936    0.0855    0.0929    0.0849    0.0850    0.0850
      0.95   0.1821    0.1590    0.1797    0.1569    0.1572    0.1572
      0.99   0.8911    0.6709    0.8514    0.6402    0.6463    0.6457
Table 5. Estimated MSE of the estimators (σ = 5, p = 3). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE       DKE       ME        SRDKE1    SRDKE2    SRDKE3
30    0.70   3.2844    1.8598    2.8146    1.5517    1.7454    1.6787
      0.80   4.4059    2.3443    3.6393    1.8605    2.1835    2.0735
      0.90   7.8944    3.8175    5.9767    2.7144    3.4341    3.2112
      0.95   14.9410   6.7379    10.1812   4.2506    5.5873    5.2480
      0.99   71.3502   29.9278   39.7278   15.7586   18.6232   18.2843
50    0.70   2.4306    1.4904    2.2022    1.3314    1.4307    1.3984
      0.80   3.3031    1.9075    2.9016    1.6353    1.8110    1.7538
      0.90   6.0038    3.1630    4.8887    2.4591    2.9021    2.7649
      0.95   11.4556   5.6279    8.4489    3.8869    4.8268    4.5736
      0.99   55.1320   25.0866   32.7908   14.1142   16.5322   16.2410
70    0.70   1.5647    0.9724    1.4709    0.9089    0.9433    0.9331
      0.80   2.1044    1.2128    1.9454    1.1077    1.1705    1.1512
      0.90   3.8063    1.9505    3.3682    1.6796    1.8590    1.8034
      0.95   7.2587    3.3903    6.0321    2.6890    3.1406    3.0104
      0.99   34.9523   14.6646   24.4848   9.5281    11.2005   10.9409
100   0.70   1.0695    0.7064    1.0131    0.6617    0.6981    0.6864
      0.80   1.4491    0.8833    1.3382    0.8005    0.8697    0.8470
      0.90   2.6477    1.4214    2.2732    1.1718    1.3778    1.3099
      0.95   5.0833    2.4597    3.8813    1.7499    2.2794    2.1152
      0.99   24.6721   10.4944   13.0634   5.1245    7.0884    6.7536
200   0.70   0.5616    0.4235    0.5481    0.4130    0.4176    0.4165
      0.80   0.7546    0.5290    0.7281    0.5089    0.5194    0.5166
      0.90   1.3632    0.8436    1.2734    0.7801    0.8208    0.8086
      0.95   2.5996    1.4296    2.2964    1.2317    1.3737    1.3285
      0.99   12.5310   5.7688    8.7142    3.7397    4.8598    4.5713
500   0.70   0.2005    0.1712    0.1992    0.1701    0.1706    0.1704
      0.80   0.2698    0.2176    0.2672    0.2152    0.2163    0.2161
      0.90   0.4883    0.3539    0.4780    0.3449    0.3502    0.3488
      0.95   0.9318    0.5995    0.8919    0.5666    0.5896    0.5827
      0.99   4.4945    2.2239    3.7245    1.7331    2.1118    1.9923
Table 6. Estimated MSE of the estimators (σ = 5, p = 7). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE        DKE        ME         SRDKE1     SRDKE2     SRDKE3
30    0.70   13.2784    7.4696     12.7838    7.1332     7.3680     7.3049
      0.80   18.5427    10.2637    17.6899    9.7128     10.0710    9.9776
      0.90   34.6605    18.8340    32.2249    17.3772    18.1449    17.9591
      0.95   67.0617    36.0993    60.5237    32.3962    33.8721    33.5446
      0.99   326.1516   174.1541   280.5489   150.0596   155.1024   153.9788
50    0.70   7.7551     4.3207     7.5134     4.1423     4.2913     4.2356
      0.80   10.9074    5.9449     10.4268    5.6100     5.9111     5.7974
      0.90   20.5423    10.9246    19.0224    9.9532     10.7780    10.4792
      0.95   39.9158    20.9457    35.5797    18.3852    20.2122    19.5993
      0.99   195.0467   101.3026   162.1610   84.1030    91.4428    89.1713
70    0.70   4.4909     2.7389     4.3882     2.6684     2.7082     2.6972
      0.80   6.3154     3.7356     6.1234     3.6082     3.6835     3.6630
      0.90   11.9058    6.7635     11.3200    6.3905     6.6000     6.5467
      0.95   23.1677    12.8368    21.4601    11.7872    12.2780    12.1692
      0.99   113.5283   61.3395    99.3379    53.1668    54.8679    54.6164
100   0.70   3.1574     2.0180     3.0756     1.9587     1.9842     1.9777
      0.80   4.4280     2.7366     4.2804     2.6332     2.6813     2.6687
      0.90   8.3255     4.9128     7.8882     4.6219     4.7622     4.7255
      0.95   16.1718    9.2518     14.9003    8.4461     8.8051     8.7174
      0.99   79.0385    43.7732    68.0706    37.4133    38.9856    38.6913
200   0.70   1.4151     0.9792     1.4017     0.9696     0.9764     0.9746
      0.80   1.9870     1.3174     1.9607     1.2989     1.3129     1.3091
      0.90   3.7383     2.3184     3.6466     2.2566     2.3050     2.2916
      0.95   7.2628     4.2724     6.9459     4.0684     4.2170     4.1776
      0.99   35.5011    19.5927    31.5458    17.2975    18.1896    18.0420
500   0.70   0.6026     0.4653     0.5995     0.4627     0.4633     0.4632
      0.80   0.8461     0.6258     0.8402     0.6211     0.6225     0.6223
      0.90   1.5925     1.0900     1.5725     1.0752     1.0808     1.0798
      0.95   3.0950     1.9709     3.0247     1.9219     1.9436     1.9392
      0.99   15.1350    8.6904     14.0420    8.0097     8.2815     8.2323
Table 7. Estimated MSE of the estimators (σ = 5, p = 10). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE        DKE        ME         SRDKE1     SRDKE2     SRDKE3
30    0.70   23.3500    12.7717    23.0016    12.5057    12.6771    12.6305
      0.80   32.9133    17.8836    32.2863    17.4167    17.7431    17.6513
      0.90   62.0960    33.4739    60.1765    32.1398    33.0125    32.7735
      0.95   120.7539   64.8056    115.3477   61.2615    63.1611    62.6716
      0.99   590.4232   315.8951   550.6839   291.5832   299.7943   297.7157
50    0.70   10.5370    6.0964     10.3324    5.9559     6.1304     6.0615
      0.80   14.8094    8.4512     14.3851    8.1755     8.5139     8.3834
      0.90   27.8893    15.6537    26.4885    14.8047    15.6878    15.3721
      0.95   54.2060    30.1277    50.1294    27.7955    29.6871    29.0738
      0.99   265.0157   146.1229   234.0516   129.7329   137.0945   134.8982
70    0.70   7.4691     4.5248     7.3830     4.4614     4.5032     4.4917
      0.80   10.5329    6.2698     10.3765    6.1578     6.2300     6.2107
      0.90   19.8931    11.5773    19.4235    11.2511    11.4371    11.3913
      0.95   38.7200    22.2169    37.3473    21.2867    21.7105    21.6190
      0.99   189.5758   107.4084   177.8281   99.7729    101.3285   101.0731
100   0.70   4.5595     2.9542     4.4803     2.8994     2.9302     2.9222
      0.80   6.4382     4.0782     6.2930     3.9807     4.0352     4.0212
      0.90   12.1695    7.4777     11.7368    7.1983     7.3458     7.3094
      0.95   23.6925    14.2600    22.4370    13.4777    13.8373    13.7561
      0.99   116.0072   68.4443    105.3017   62.1188    63.6564    63.3757
200   0.70   2.2541     1.5591     2.2321     1.5435     1.5497     1.5484
      0.80   3.1787     2.1302     3.1369     2.1014     2.1141     2.1113
      0.90   6.0033     3.8429     5.8708     3.7545     3.7973     3.7873
      0.95   11.6850    7.2452     11.2679    6.9724     7.1018     7.0721
      0.99   57.2148    34.4361    52.7639    31.6673    32.4377    32.3162
500   0.70   0.8786     0.6654     0.8764     0.6638     0.6644     0.6643
      0.80   1.2388     0.9042     1.2344     0.9010     0.9025     0.9022
      0.90   2.3391     1.6025     2.3231     1.5913     1.5970     1.5959
      0.95   4.5516     2.9513     4.4927     2.9114     2.9327     2.9284
      0.99   22.2774    13.4680    21.2860    12.8298    13.0842    13.0424
Table 8. Estimated MSE of the estimators (σ = 9, p = 3). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE        DKE       ME         SRDKE1    SRDKE2    SRDKE3
30    0.70   10.6414    5.5011    9.1194     4.5639    5.2484    4.9930
      0.80   14.2752    7.0485    11.7913    5.5652    6.6830    6.2751
      0.90   25.5777    11.7622   19.3647    8.3162    10.7433   9.9516
      0.95   48.4088    21.1865   32.9872    13.3064   17.7357   16.5731
      0.99   231.1746   96.3864   128.7182   50.7303   60.0441   58.9371
50    0.70   7.8751     4.3120    7.1351     3.8288    4.1990    4.0636
      0.80   10.7021    5.6021    9.4013     4.7713    5.4070    5.1766
      0.90   19.4522    9.5448    15.8395    7.3673    8.9069    8.3865
      0.95   37.1162    17.4196   27.3743    11.9531   15.1275   14.2193
      0.99   178.6278   80.5225   106.2423   45.2426   53.1521   52.1812
70    0.70   5.0697     2.7454    4.7658     2.5586    2.6946    2.6467
      0.80   6.8183     3.4706    6.3030     3.1606    3.3983    3.3142
      0.90   12.3325    5.7442    10.9130    4.9281    5.5690    5.3496
      0.95   23.5181    10.3147   19.5439    8.1392    9.6848    9.2098
      0.99   113.2455   46.7908   79.3309    30.3429   35.8063   34.9508
100   0.70   3.4653     1.9368    3.2824     1.8019    1.9451    1.8899
      0.80   4.6950     2.4488    4.3357     2.1984    2.4602    2.3591
      0.90   8.5786     4.0611    7.3652     3.2979    4.0348    3.7582
      0.95   16.4698    7.2979    12.5753    5.1021    6.9203    6.3020
      0.99   79.9378    33.3089   42.3253    16.1586   22.5936   21.4678
200   0.70   1.8195     1.1503    1.7759     1.1190    1.1411    1.1336
      0.80   2.4450     1.4363    2.3590     1.3771    1.4234    1.4071
      0.90   4.4168     2.3225    4.1258     2.1348    2.2978    2.2377
      0.95   8.4227     4.0777    7.4403     3.4834    4.0066    3.8134
      0.99   40.6006    17.9766   28.2339    11.5751   15.3014   14.3015
500   0.70   0.6496     0.4784    0.6455     0.4748    0.4776    0.4767
      0.80   0.8742     0.5952    0.8656     0.5875    0.5939    0.5918
      0.90   1.5820     0.9473    1.5488     0.9193    0.9445    0.9356
      0.95   3.0191     1.6070    2.8896     1.5070    1.6030    1.5671
      0.99   14.5620    6.5245    12.0675    5.0191    6.3464    5.8776
Table 9. Estimated MSE of the estimators (σ = 9, p = 7). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE         DKE        ME         SRDKE1     SRDKE2     SRDKE3
30    0.70   43.0219     23.5983    41.4197    22.5239    23.3066    23.0888
      0.80   60.0783     32.6095    57.3152    30.8460    32.0331    31.7143
      0.90   112.2999    60.3731    104.4088   55.6912    58.2083    57.5885
      0.95   217.2799    116.2659   196.0969   104.3295   109.1388   108.0623
      0.99   1056.7200   563.6500   909.0100   485.7100   502.0400   498.4200
50    0.70   25.1264     13.3936    24.3434    12.8194    13.3204    13.1264
      0.80   35.3401     18.6495    33.7828    17.5665    18.5666    18.1788
      0.90   66.5569     34.7779    61.6326    31.6444    34.3410    33.3509
      0.95   129.3271    67.2589    115.2781   59.0015    64.9367    62.9344
      0.99   631.9514    327.5880   525.4015   271.9808   295.7584   288.3920
70    0.70   14.5505     8.3487     14.2176    8.1287     8.2700     8.2281
      0.80   20.4620     11.5215    19.8398    11.1209    11.3819    11.3066
      0.90   38.5749     21.2670    36.6767    20.0815    20.7871    20.6007
      0.95   75.0634     40.8703    69.5307    37.5133    39.1406    38.7703
      0.99   367.8316    197.8239   321.8548   171.4551   177.0017   176.1757
100   0.70   10.2299     6.0551     9.9649     5.8706     5.9644     5.9372
      0.80   14.3467     8.3199     13.8686    7.9965     8.1676     8.1182
      0.90   26.9745     15.2608    25.5579    14.3444    14.8235    14.6908
      0.95   52.3965     29.2195    48.2769    26.6614    27.8583    27.5561
      0.99   256.0847    140.9702   220.5487   120.4924   125.6165   124.6521
200   0.70   4.5848      2.8228     4.5416     2.7942     2.8218     2.8133
      0.80   6.4379      3.8444     6.3527     3.7887     3.8439     3.8266
      0.90   12.1122     6.9488     11.8149    6.7577     6.9353     6.8809
      0.95   23.5314     13.1417    22.5046    12.4980    13.0161    12.8692
      0.99   115.0237    62.6848    102.2085   55.3150    58.2539    57.7568
500   0.70   1.9525      1.3184     1.9423     1.3106     1.3136     1.3130
      0.80   2.7414      1.7777     2.7222     1.7637     1.7700     1.7686
      0.90   5.1597      3.1508     5.0948     3.1067     3.1295     3.1242
      0.95   10.0277     5.8703     9.8000     5.7217     5.8026     5.7833
      0.99   49.0373     27.4528    45.4962    25.3021    26.2130    26.0401
Table 10. Estimated MSE of the estimators (σ = 9, p = 10). Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

n     γ      LSE         DKE         ME          SRDKE1     SRDKE2     SRDKE3
30    0.70   75.6541     40.8268     74.5251     39.9746    40.5385    40.3827
      0.80   106.6391    57.3585     104.6077    55.8624    56.9320    56.6282
      0.90   201.1909    107.8572    194.9719    103.5650   106.4074   105.6255
      0.95   391.2426    209.4222    373.7267    197.9733   204.1437   202.5501
      0.99   1913.0100   1023.0200   1784.2500   944.3400   970.9700   964.2400
50    0.70   34.1398     19.2547     33.4769     18.8010    19.3862    19.1499
      0.80   47.9824     26.8403     46.6078     25.9484    27.0735    26.6305
      0.90   90.3614     50.1312     85.8227     47.3854    50.2895    49.2382
      0.95   175.6273    97.0345     162.4191    89.4907    95.6703    93.6534
      0.99   858.6510    473.0814    758.3271    420.0134   443.9173   436.7817
70    0.70   24.1998     14.0969     23.9211     13.8971    14.0405    13.9991
      0.80   34.1265     19.6897     33.6200     19.3343    19.5797    19.5117
      0.90   64.4535     36.7918     62.9321     35.7484    36.3697    36.2132
      0.95   125.4528    71.2306     121.0051    68.2371    69.6323    69.3275
      0.99   614.2256    347.1575    576.1630    322.4593   327.5137   326.6826
100   0.70   14.7729     9.0572      14.5163     8.8877     8.9965     8.9656
      0.80   20.8597     12.6361     20.3893     12.3311    12.5205    12.4682
      0.90   39.4292     23.5407     38.0272     22.6570    23.1550    23.0266
      0.95   76.7638     45.4563     72.6960     42.9535    44.1453    43.8687
      0.99   375.8633    220.8507    341.1775    200.4031   205.4147   204.4948
200   0.70   7.3034      4.6541      7.2321      4.6060     4.6298     4.6239
      0.80   10.2990     6.4430      10.1637     6.3533     6.4006     6.3885
      0.90   19.4507     11.9093     19.0214     11.6281    11.7793    11.7407
      0.95   37.8593     22.8995     36.5080     22.0230    22.4622    22.3571
      0.99   185.3759    110.9176    170.9550    101.9760   104.4965   104.0963
500   0.70   2.8466      1.9260      2.8394      1.9211     1.9240     1.9234
      0.80   4.0139      2.6361      3.9996      2.6266     2.6327     2.6314
      0.90   7.5786      4.7715      7.5270      4.7376     4.7598     4.7547
      0.95   14.7473     9.0224      14.5562     8.8968     8.9748     8.9571
      0.99   72.1787     42.9337     68.9666     40.8899    41.7395    41.5959
Table 11. The coefficients and MSEs. Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

Coef.   LSE      DKE      ME       SRDKE1   SRDKE2   SRDKE3
α₁      0.6455   0.6095   0.6731   0.6349   0.6725   0.6725
α₂      0.0896   0.0990   0.0821   0.0919   0.0818   0.0816
α₃      0.1436   0.1579   0.1320   0.1475   0.1316   0.1318
α₄      0.1526   0.1567   0.1500   0.1543   0.1507   0.1508
MSE     0.0808   0.0686   0.0454   0.0389   0.0374   0.0374
k       -----    0.0014   -----    0.0014   0.0014   0.0014
d       -----    0.3444   -----    0.3444   0.3444   0.3444
Table 12. The coefficients and MSEs. Existing estimators: LSE, DKE, ME; proposed estimators: SRDKE1–SRDKE3.

Coef.   LSE      DKE      ME       SRDKE1   SRDKE2   SRDKE3
α₁      2.1930   2.1763   2.1818   2.1652   2.1654   2.1654
α₂      1.1533   1.1572   1.1562   1.1601   1.1600   1.1600
α₃      0.7585   0.7465   0.7490   0.7370   0.7372   0.7373
α₄      0.4863   0.4888   0.4882   0.4907   0.4906   0.4906
MSE     0.0638   0.0629   0.0625   0.0617   0.0617   0.0617
k       -----    0.3357   -----    0.3357   0.3357   0.3357
d       -----    0.8101   -----    0.8101   0.8101   0.8101
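The LSE column of Table 12 appears to correspond to a no-intercept least-squares fit of the Portland cement data of Woods et al. [32]. As a sketch, assuming the data values as they are commonly transcribed in the multicollinearity literature (the transcription is an assumption, not taken from this paper):

```python
import numpy as np

# Portland cement data (Woods et al. [32]) as commonly reported:
# four ingredient percentages and the heat evolved during hardening.
X = np.array([
    [7, 26, 6, 60], [1, 29, 15, 52], [11, 56, 8, 20], [11, 31, 8, 47],
    [7, 52, 6, 33], [11, 55, 9, 22], [3, 71, 17, 6], [1, 31, 22, 44],
    [2, 54, 18, 22], [21, 47, 4, 26], [1, 40, 23, 34], [11, 66, 9, 12],
    [10, 68, 8, 12],
], dtype=float)
y = np.array([78.5, 74.3, 104.3, 87.6, 95.9, 109.2, 102.7, 72.5,
              93.1, 115.9, 83.8, 113.3, 109.4])

# No-intercept least-squares fit, matching the LSE column of Table 12.
alpha_lse, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(alpha_lse, 4))
```

If the transcription is correct, the printed coefficients agree with the LSE column (2.1930, 1.1533, 0.7585, 0.4863) to the tabulated precision.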
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Dawoud, I.; Eledum, H. New Stochastic Restricted Biased Regression Estimators. Mathematics 2025, 13, 15. https://doi.org/10.3390/math13010015
