An Unbiased Convex Estimator Depending on Prior Information for the Classical Linear Regression Model

by Mustafa I. Alheety 1, HM Nayem 2 and B. M. Golam Kibria 2,*

1 Department of Mathematics, College of Education for Pure Sciences, University of Anbar, Anbar 31001, Iraq
2 Department of Mathematics and Statistics, Florida International University, Miami, FL 33199, USA
* Author to whom correspondence should be addressed.
Stats 2025, 8(1), 16; https://doi.org/10.3390/stats8010016
Submission received: 30 December 2024 / Revised: 29 January 2025 / Accepted: 5 February 2025 / Published: 9 February 2025

Abstract: We propose an unbiased restricted estimator that leverages prior information to enhance estimation efficiency for the linear regression model. The statistical properties of the proposed estimator are rigorously examined, highlighting its superiority over several existing methods. A simulation study is conducted to evaluate the performance of the estimators, and real-world data on total national research and development expenditures by country are analyzed to illustrate the findings. Both the simulation results and real-data analysis demonstrate that the proposed estimator consistently outperforms the alternatives considered in this study.

1. Introduction

Consider the following linear regression model:
$$ Y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n) \tag{1} $$
where Y is an n × 1 vector of observations on the dependent variable, X is an n × p matrix of explanatory variables of rank p, β is a p × 1 vector of unknown parameters, and ε is an n × 1 error vector that follows a multivariate normal distribution with E(ε) = 0 and covariance matrix σ²Iₙ. Here, σ² is the error variance and Iₙ is the identity matrix of order n.
The ordinary least-squares (OLS) estimator for model (1) can be written as follows:
$$ \hat{\beta} = Z^{-1}X'Y \tag{2} $$
where Z = X′X. In the linear regression model, the unknown regression coefficients are estimated using the OLS estimator.
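As a minimal illustration (a sketch, not the authors' code), the OLS estimator in (2) can be computed directly from Z = X′X in R; the data below are simulated purely for demonstration, with assumed values n = 50, p = 3, and a true coefficient vector of ones:

```r
# Minimal OLS sketch with simulated data (illustrative assumptions only)
set.seed(1)
n <- 50; p <- 3
X <- matrix(rnorm(n * p), n, p)     # explanatory variables
beta <- rep(1, p)                   # true coefficients
Y <- X %*% beta + rnorm(n)          # model (1) with sigma^2 = 1
Z <- t(X) %*% X                     # Z = X'X
beta_ols <- solve(Z, t(X) %*% Y)    # OLS estimator (2)
```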
The linear regression model in (1) rests on a set of assumptions that govern its use in estimating the relationship between the dependent variable and the explanatory variables. One of these assumptions is that no strong or harmful linear relationship, known as multicollinearity, exists among the explanatory variables. A strong relationship between the explanatory variables distorts the estimates of the model parameters, which in turn affects decisions to accept or reject the null hypothesis. The reason is that such a relationship makes the correlation matrix nearly singular, which is reflected in its inverse. Thus, the variances of the least-squares estimates become inflated, and the least-squares method loses the property of being the best unbiased estimator.
Regularization techniques such as ridge regression or Lasso regression, which add a penalty on large coefficients, help mitigate the effects of multicollinearity. The ordinary ridge regression (ORR) estimator, suggested by Hoerl and Kennard (1970), is one of the most famous biased estimators used to reduce the effects of multicollinearity on estimation [1]. Although the estimator is always biased, it outperforms the OLS estimator by the criterion of sampling variance. However, the sampling variance is not an appropriate criterion for evaluating the performance of a biased estimator. For this reason, Crouse et al. (1995) introduced the unbiased ridge regression (URR) estimator, one of the most important and popular estimators due to its statistical properties [2]. It aims to reduce the variance, so the resulting estimator has a lower mean-squared error (MSE) than both the OLS and the ORR estimators. Özkale and Kaçiranlar (2007) derived the URR estimator in two different ways and compared its performance with that of other estimators [3].
Moreover, if there is prior information about the unknown parameters, or any additional information that helps in estimating them, the accuracy and performance of the estimator increase. For example, the restricted least-squares (RLS) estimator has a smaller variance than the OLS estimator when the restrictions hold. Combining two estimators creates a new estimator that inherits beneficial statistical properties from both. A convex combination of two estimators can therefore be advantageous when both seem suitable for a particular scenario, since the MSE of the combination is bounded by the larger of the two individual MSEs. This motivated us to propose a new unbiased convex estimator for the regression parameters of the linear regression model.
The aim of this article is to obtain an unbiased estimator that depends on prior information and on the restricted ridge regression estimator proposed by Sarkar (1992), using the convex combination technique [4]. In applications, this estimator can be used to avoid the multicollinearity problem and to obtain estimates that are more accurate than those of existing estimators.
The rest of this paper is organized as follows: Section 2 presents the model and the proposed estimator, together with its statistical properties. Section 3 compares the performance of the new estimator with that of other estimators using the mean-squared-error criterion. Section 4 provides the results of a Monte Carlo simulation study that supports the theoretical results. Finally, real datasets are studied in Section 5, with some discussion justifying the superiority of the new convex estimator. The paper ends with some concluding remarks.

2. The Proposed Convex Estimator

When there is multicollinearity, the OLS estimator loses the property of being the best estimator, because the correlation between the independent variables inflates the variance, and this inflation increases with the strength of the correlation.
For this reason, the ORR estimator is obtained by minimizing the objective function (Y − Xβ)′(Y − Xβ) + kβ′β with respect to β, which yields an estimator that reduces the inflation of the variance:
$$ \hat{\beta}_{ORR}(k) = (Z + kI_p)^{-1}X'Y, \qquad k \geq 0 \tag{3} $$
For more on ridge regression methods for different models, we refer to Hoerl and Kennard (1970), Månsson et al. (2018), Kibria and Banik (2016), Alheety and Kibria (2011), Alheety (2020), and, recently, Kibria (2022), among others [1,5,6,7,8,9]. The URR estimator is a linear combination of prior information and the OLS estimator:
$$ \hat{\beta}_{URR}(k) = (Z + kI_p)^{-1}(X'Y + kJ), \qquad k \geq 0 \tag{4} $$
where J is a random vector with J ~ N(β, (σ²/k)Iₚ) that is independent of β̂.
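A hedged sketch of how (3) and (4) could be computed in R, reusing X, Y, Z, and beta from the OLS sketch above; the values k = 0.5 and sigma2 = 1 are illustrative assumptions:

```r
# ORR (3) and URR (4) sketches; k and sigma2 are assumed for illustration
k <- 0.5; sigma2 <- 1
beta_orr <- solve(Z + k * diag(p), t(X) %*% Y)          # ridge estimator (3)
J <- beta + rnorm(p, sd = sqrt(sigma2 / k))             # J ~ N(beta, (sigma^2/k) I_p)
beta_urr <- solve(Z + k * diag(p), t(X) %*% Y + k * J)  # unbiased ridge estimator (4)
```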
The RLS method enables the use of sample information and prior information simultaneously. It is not affected by the multicollinearity problem and reduces the variance by exploiting prior information about the parameters together with the sample. The RLS estimator is obtained by minimizing (Y − Xβ)′(Y − Xβ) subject to the linear restrictions Rβ = r, where r is a q × 1 vector of known elements and R is a q × p (q < p) full-row-rank matrix with known elements. The RLS estimator can be written as follows [10]:
$$ \hat{\beta}_{RLS} = \hat{\beta} + Z^{-1}R'(RZ^{-1}R')^{-1}(r - R\hat{\beta}) \tag{5} $$
This estimator is unbiased and has a smaller variance than the OLS when the assumed restrictions are true. If the restrictions are incorrect, the reduction in sampling variance still exists, but the estimator becomes a biased one.
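For concreteness, a sketch of (5) in R under a single hypothetical restriction; the matrix R below is an assumption chosen for illustration, not one of the paper's examples:

```r
# RLS sketch (5): restrict beta_1 + beta_2 = 0 (hypothetical restriction)
R <- matrix(c(1, 1, 0), nrow = 1)   # q x p restriction matrix, q = 1
r <- 0
Zinv <- solve(Z)
beta_rls <- beta_ols +
  Zinv %*% t(R) %*% solve(R %*% Zinv %*% t(R), r - R %*% beta_ols)
```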
The convex estimator can be provided as follows:
$$ \hat{\beta}_{con} = A\tilde{\beta}_1 + (I_p - A)\tilde{\beta}_2 \tag{6} $$
where β̃₁ and β̃₂ can be any two estimators and A is a square matrix. Convex estimators can take advantage of the properties of the estimators that compose them and have the potential to deal with multicollinearity effectively.
Wu (2014) studied the following way of incorporating prior information on β [11]:
$$ \hat{\beta}(C, J) = C\hat{\beta} + (I_p - C)J \tag{7} $$
where C is a p × p matrix and J is a random vector that is normally distributed with mean β and variance–covariance matrix V = σ²(Iₚ − C)⁻¹CZ⁻¹. He obtained an unbiased estimator by considering the two-parameter estimator proposed by Ozkale and Kaciranlar (2007) [12]. The goal of this paper is therefore to present a generalization of Equation (7) by replacing β̂ with β̂_RLS, which yields a new family of unbiased restricted convex estimators.
In fact, prior information about the true parameter β may be available in the form of a vector J. If we also assume that the linear restrictions Rβ = r hold for the model, we can suggest the following general form of the convex restricted (CR) estimator:
$$ \hat{\beta}_{CR} = C\hat{\beta}_{RLS} + (I_p - C)J \tag{8} $$
To study the optimality of β̂_CR, we find the form of the matrix C that minimizes the MSE matrix (MSEM) of β̂_CR. For that, the MSEM of β̂_CR is derived as follows:
$$ L = \mathrm{MSEM}(\hat{\beta}_{CR}) = C\,\mathrm{Var}(\hat{\beta}_{RLS})\,C' + (I_p - C)\,V\,(I_p - C)' \tag{9} $$
A minimum of L exists because L is a real-valued, convex, and differentiable function of C. Therefore, differentiating (9) with respect to C and equating the derivative to zero, we obtain:
$$ \frac{\partial L}{\partial C} = 2C\,\mathrm{Var}(\hat{\beta}_{RLS}) - 2(I_p - C)V = 0 \tag{10} $$
And then,
$$ C = V\left[\mathrm{Var}(\hat{\beta}_{RLS}) + V\right]^{-1} \tag{11} $$
Therefore,
$$ V = (I_p - C)^{-1}C\,\mathrm{Var}(\hat{\beta}_{RLS}) = \sigma^2(I_p - C)^{-1}C\,M\,Z^{-1} \tag{12} $$
where Var(β̂_RLS) = σ²MZ⁻¹ and M = Iₚ − Z⁻¹R′(RZ⁻¹R′)⁻¹R.
So, the proposed new estimator will be:
$$ \hat{\beta}_{CR} = W\hat{\beta}_{RLS} + (I_p - W)J $$
where W = (Iₚ + kZ⁻¹)⁻¹ with k ≥ 0, and J ~ N(β, σ²(Iₚ − W)⁻¹WMZ⁻¹) is a random vector. We call this the unbiased restricted ridge regression (URRR) estimator. The CR estimator is unbiased since:
$$ E(\hat{\beta}_{CR}) = W\beta + (I_p - W)\beta = \beta $$
Also, the variance–covariance matrix of the URRR estimator is provided as follows:
$$ \mathrm{Var}(\hat{\beta}_{CR}) = \sigma^2\,W\,M\,Z^{-1} \tag{13} $$
Since the CR estimator is unbiased, its MSEM coincides with the variance–covariance matrix in (13).
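A minimal sketch of the proposed CR estimator in R, continuing the objects from the previous sketches; because the displayed covariance of J can be singular (and is symmetrized here for the draw), the simulation of J uses an eigendecomposition rather than a Cholesky factor:

```r
# CR estimator sketch: beta_CR = W beta_RLS + (I_p - W) J
W <- solve(diag(p) + k * Zinv)                       # W = (I_p + k Z^{-1})^{-1}
M <- diag(p) - Zinv %*% t(R) %*% solve(R %*% Zinv %*% t(R), R)
VJ <- sigma2 * solve(diag(p) - W) %*% W %*% M %*% Zinv
VJ <- (VJ + t(VJ)) / 2                               # symmetrized for the draw
eg <- eigen(VJ, symmetric = TRUE)
Lhalf <- eg$vectors %*% diag(sqrt(pmax(eg$values, 0)))
J_cr <- as.vector(beta + Lhalf %*% rnorm(p))         # J ~ N(beta, VJ); VJ may be singular
beta_cr <- W %*% beta_rls + (diag(p) - W) %*% J_cr
```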
In general, we can write the MSEM of an estimator β* of β as follows:
$$ \mathrm{MM} = \mathrm{MSEM}(\beta^*, \beta) = \mathrm{Var}(\beta^*) + \left(\mathrm{Bias}(\beta^*, \beta)\right)\left(\mathrm{Bias}(\beta^*, \beta)\right)' $$
where MM indicates the MSE matrix.
For any two matrices H and F, we write H ≤ F when F − H is nonnegative definite (n.n.d.), while if H < F, then F − H is positive definite (p.d.).

3. Comparison of the Proposed Estimator

To compare any two estimators, we will need the lemma below:
Lemma 1.
Let β̃₁ and β̃₂ be two estimators of β. The estimator β̃₂ is said to be superior to β̃₁ in the sense of the MSEM criterion if MM(β̃₁) − MM(β̃₂) ≥ 0.
A scalar mean-squared-error version of the MSEM criterion is provided by
$$ \mathrm{MSE}(\beta^*, \beta) = E\left[(\beta^* - \beta)'(\beta^* - \beta)\right] = \mathrm{tr}\left(\mathrm{MSEM}(\beta^*, \beta)\right) $$
Now, we will compare the CR (URRR) estimator with the RLS estimator by providing the following theorem.

3.1. Comparison Between CR and RLS Estimator

Theorem 1.
If 0 ≤ k ≤ 1, the CR estimator is superior to the RLS estimator under the MSEM criterion; that is, MM(β̂_RLS) − MM(β̂_CR) > 0.
Proof. 
The MSEM of the RLS estimator is provided as:
$$ \mathrm{MM}(\hat{\beta}_{RLS}) = \sigma^2\,M\,Z^{-1} $$
So, the difference between MM(β̂_RLS) and MM(β̂_CR) is provided as:
$$ \mathrm{MM}(\hat{\beta}_{RLS}) - \mathrm{MM}(\hat{\beta}_{CR}) = \sigma^2\,M\,Z^{-1} - \sigma^2\,W\,M\,Z^{-1} = \sigma^2(I_p - W)\,M\,Z^{-1} $$
Since M and Z⁻¹ are p.d., we can focus on the matrix Iₚ − W. Since Z is p.d., there exists an orthogonal matrix P such that P′ZP = Λ, where Λ = diag(λ₁, …, λₚ) is the diagonal matrix of the eigenvalues of Z (a diagonal matrix is a square matrix in which all the elements outside the main diagonal are zero).
Therefore,
$$ \mathrm{diag}\left(I_p - P'WP\right) = \left\{1 - \frac{\lambda_i}{\lambda_i + k}\right\}_{i=1}^{p} $$
We observe that 0 < 1 − λᵢ/(λᵢ + k) = k/(λᵢ + k) < 1 for all i = 1, 2, …, p, and the proof is completed. □ From the above results, we notice that the new estimator is better than the RLS estimator for all values of k.
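This step can be checked numerically; a quick sketch using the simulated Z and k from the earlier sketches verifies that the diagonal factors all lie strictly between 0 and 1:

```r
# Check the shrinkage factors 1 - lambda_i/(lambda_i + k) = k/(lambda_i + k)
lam <- eigen(Z, symmetric = TRUE)$values
factors <- k / (lam + k)
all(factors > 0 & factors < 1)   # TRUE for any k > 0
```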
Now, we will compare the URR estimator with the CR estimator by providing the following theorem.

3.2. Comparison Between URR and CR Estimator

Theorem 2.
If 0 ≤ k ≤ 1, the CR estimator is superior to the URR estimator under the MSEM criterion; that is, MM(β̂_URR) − MM(β̂_CR) > 0.
Proof. 
The MSEM of the URR estimator is provided as:
$$ \mathrm{MM}(\hat{\beta}_{URR}) = \sigma^2\,Z_k^{-1} $$
where Z_k = Z + kIₚ.
Therefore,
$$ \mathrm{MM}(\hat{\beta}_{URR}) - \mathrm{MM}(\hat{\beta}_{CR}) = \sigma^2\,Z_k^{-1} - \sigma^2\,W\,M\,Z^{-1} $$
The matrix W can be written as follows:
$$ W = (I_p + kZ^{-1})^{-1} = (Z + kI_p)^{-1}Z = Z_k^{-1}Z $$
So,
$$ \begin{aligned} \mathrm{MM}(\hat{\beta}_{URR}) - \mathrm{MM}(\hat{\beta}_{CR}) &= \sigma^2 Z_k^{-1} - \sigma^2 Z_k^{-1}Z\,M\,Z^{-1} \\ &= \sigma^2 Z_k^{-1}\left[I_p - Z\,M\,Z^{-1}\right] \\ &= \sigma^2 Z_k^{-1}\left[I_p - Z\left(I_p - Z^{-1}R'(RZ^{-1}R')^{-1}R\right)Z^{-1}\right] \\ &= \sigma^2 Z_k^{-1}\left[I_p - \left(I_p - R'(RZ^{-1}R')^{-1}RZ^{-1}\right)\right] \\ &= \sigma^2 Z_k^{-1}R'(RZ^{-1}R')^{-1}RZ^{-1} \end{aligned} $$
This is a positive definite matrix. Therefore, the proof is completed. □
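The chain of equalities can also be verified numerically; the sketch below, reusing the simulated objects from the earlier sketches, checks that the difference of the two MSEMs equals the final expression:

```r
# Verify MM(URR) - MM(CR) = sigma^2 Zk^{-1} R'(R Z^{-1} R')^{-1} R Z^{-1}
Zk  <- Z + k * diag(p)
lhs <- sigma2 * solve(Zk) - sigma2 * W %*% M %*% Zinv
rhs <- sigma2 * solve(Zk) %*% t(R) %*% solve(R %*% Zinv %*% t(R), R %*% Zinv)
all.equal(lhs, rhs)              # TRUE up to numerical tolerance
```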

4. Simulation Study

For a numerical comparison among the estimators, a simulation study was conducted. This section consists of two parts: the simulation technique and a discussion of the simulation results.

4.1. Simulation Technique

The data generation process for the models was conducted according to a widely used method as described below [13,14,15].
$$ x_{ij} = (1 - \rho^2)^{1/2} z_{ij} + \rho z_{i,p+1}, \qquad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, p $$
where ρ denotes the correlation coefficient between any two explanatory variables, z_{ij} denotes independent pseudo-random numbers drawn from the standard normal distribution, and p denotes the number of explanatory variables. The dependent variable Y was then obtained through the following equation [16]:
$$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \varepsilon_i, \qquad i = 1, 2, \ldots, n $$
where εᵢ is the error term, i.e., εᵢ ~ N(0, σ²).
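A sketch of this data-generating process in R; β₀ = 0 and unit slopes are assumed here in line with the simulation design, while the specific values of n, p, ρ, and σ are illustrative:

```r
# Data-generating process sketch for one replication
set.seed(123)
n <- 30; p <- 3; rho <- 0.9; sigma <- 1
z <- matrix(rnorm(n * (p + 1)), n, p + 1)
X <- sqrt(1 - rho^2) * z[, 1:p] + rho * z[, p + 1]  # x_ij per the formula above
beta <- rep(1, p)                                   # beta_0 = 0 assumed
y <- X %*% beta + rnorm(n, sd = sigma)
```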
In the simulation, three different numbers of explanatory variables, p = 3, 5, and 10; six different values of the correlation coefficient between the explanatory variables, ρ = 0.50, 0.70, 0.80, 0.90, 0.95, and 0.99; four different sample sizes, n = 20, 30, 50, and 100; and two different values of the error variance, σ² = 1 and 5, were considered. The explanatory variables were generated using the values determined for p, ρ, n, and σ. The ridge parameter k was varied from 0.1 to 0.9 in increments of 0.1, allowing a comprehensive examination of how the level of k influences the outcomes of the analysis. The experiment was repeated N = 10,000 times. The analysis was performed using the R programming language, version 4.4.2.
The following restrictions were set for different predictors.
When p = 3,
$$ R = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix} \quad \text{and} \quad r = [0] $$
When p = 5,
$$ R = \begin{bmatrix} 1 & 1 & 2 & 0 & 0 \end{bmatrix} \quad \text{and} \quad r = [0] $$
When p = 10,
$$ R = \begin{bmatrix} 1 & 1 & 0 & 0 & 1 & 0 & 2 & 1 & 1 & 0 \end{bmatrix} \quad \text{and} \quad r = [0] $$
The average MSE over all simulation runs was calculated based on the following formula [17]:
$$ \mathrm{MSE}(\hat{\beta}^*) = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{\beta}_i^* - \beta\right)'\left(\hat{\beta}_i^* - \beta\right) $$
where β̂ᵢ* denotes any of the URR, RLS, CR, ORR, or OLS estimates in the i-th replication, β is the true parameter vector, and p is the number of predictors. We assumed that β is a p × 1 vector with all elements equal to 1, which can be expressed as:
$$ \beta = (1, 1, \ldots, 1)' $$
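A compact sketch of the Monte Carlo loop in R; only OLS and ORR are included for brevity, and N is reduced for speed, whereas the paper's full study covers all five estimators with N = 10,000:

```r
# Monte Carlo sketch: average MSE of OLS and ORR over N replications
set.seed(42)
n <- 30; p <- 3; rho <- 0.9; sigma <- 1; k <- 0.5
beta <- rep(1, p)
N <- 1000                                    # reduced from 10,000 for speed
mse <- c(OLS = 0, ORR = 0)
for (s in 1:N) {
  z <- matrix(rnorm(n * (p + 1)), n, p + 1)
  X <- sqrt(1 - rho^2) * z[, 1:p] + rho * z[, p + 1]
  y <- X %*% beta + rnorm(n, sd = sigma)
  Z <- t(X) %*% X
  b_ols <- solve(Z, t(X) %*% y)
  b_orr <- solve(Z + k * diag(p), t(X) %*% y)
  mse["OLS"] <- mse["OLS"] + sum((b_ols - beta)^2)
  mse["ORR"] <- mse["ORR"] + sum((b_orr - beta)^2)
}
mse / N                                      # averaged scalar MSE
```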

4.2. Simulation Results Discussion

4.2.1. Performance as a Function of n

Regarding the sample size, we examined samples of 20, 30, 50, and 100 observations. As the sample size increases, the MSE should decrease. In Table 1, the proposed CR estimator performs relatively well even with a small sample size when the number of predictors is three. However, a high correlation among the predictors (0.90) adds multicollinearity, which affects its performance slightly. As the sample size increases, the CR estimator stabilizes, with a reduced MSE compared to the sample size of 20; the extra data help mitigate some of the issues caused by correlation. The MSE continues to decrease, reflecting the benefit of more data, and is markedly lower at the largest sample size, demonstrating the robustness of the CR estimator even in the presence of high correlations. With the same number of predictors, a higher correlation (0.95) adds more multicollinearity. With only 20 samples, the CR estimator tends to exhibit a higher MSE, especially when the predictors are highly correlated. While CR still performs relatively well with limited samples compared to other estimators, significant correlations among predictors, such as 0.90, contribute to increased multicollinearity and subsequently raise the MSE. This indicates that with smaller sample sizes, the negative effects of multicollinearity on estimator performance are more pronounced.
As the sample size increases from 20 to 30 or 50, there is a marked reduction in MSE for the CR estimator. The added data provide a stabilizing effect, enhancing the estimator’s resilience to the multicollinearity caused by high correlations. This improvement in MSE at moderate sample sizes underscores that although high predictor correlation remains a challenge, a larger number of samples helps mitigate its impact on the CR estimator’s accuracy.
At a sample size of 100, the CR estimator consistently achieves its lowest MSE values, even under extreme predictor correlations (e.g., 0.99). With larger samples, the CR estimator displays robustness, as the effect of high correlation among predictors on the MSE diminishes. This trend illustrates the advantages of having more data, as the CR estimator effectively reduces the influence of multicollinearity and maintains a low MSE under challenging high-correlation conditions and multiple predictors.
In summary, as the sample size increases, the MSE of the CR estimator decreases significantly, demonstrating its enhanced ability to manage multicollinearity with more data. The findings emphasize the importance of an adequate sample size in minimizing error, particularly in situations with high predictor correlations. This pattern reinforces the reliability and effectiveness of the CR estimator in maintaining low MSE, even in the face of complex multicollinearity challenges.

4.2.2. Performance as a Function of the Number of Predictors, p

The performance of various estimators is significantly influenced by the size of the predictor set. In this context, we present our proposed estimator, denoted as CR, which demonstrates remarkable adaptability and robustness across a diverse array of scenarios.
When the predictor set comprises a small number of variables (p = 3), the CR estimator consistently outperforms its counterparts, particularly in terms of MSE. This advantage can be attributed to its efficacy in addressing multicollinearity, an effect that becomes increasingly pronounced as the sample size increases. In scenarios characterized by larger sample sizes, CR’s capacity to mitigate the adverse impacts of correlated predictors is particularly noteworthy, resulting in lower MSE compared to alternative estimators.
As we extend the number of predictors to five (p = 5), the model’s complexity escalates, typically leading to an increase in MSE, particularly in the context of smaller sample sizes and elevated correlations among predictors. However, despite these complexities, the CR estimator retains a competitive advantage, effectively reducing MSE more efficiently than its peers as the sample size expands, demonstrating its robust performance in managing more intricate models.
The challenges intensify with 10 predictors (p = 10), where the detrimental effects of multicollinearity and an expanded parameter space are exacerbated. In situations marked by high correlation (e.g., 0.95 and 0.99), we observe a notable increase in MSE due to the inherent difficulties in accurate parameter estimation. Nonetheless, the CR estimator exhibits resilience in these challenging conditions. As sample sizes reach 50 or 100, the performance of CR shows marked improvement, underscoring its adaptability in navigating complex interrelationships among multiple predictors.
In summary, the CR estimator distinguishes itself by consistently reducing MSE across varying sizes of predictor sets. Its strengths are particularly evident in scenarios with larger sample sizes and when confronted with the intricacies of multicollinearity, positioning it as a valuable instrument in the realm of statistical modeling.

4.2.3. Performance as a Function of Correlation Coefficients, ρ

The correlation levels among the predictors, specifically 0.50, 0.70, 0.80, 0.90, 0.95, and 0.99, have a significant impact on the performance of the various estimators. This is clearly demonstrated in the analyses presented in Tables 1–18. Generally, higher correlation values are associated with an increase in MSE, indicating a decline in estimation accuracy (Tables 1–18).
At a correlation level of 0.90, multicollinearity is evident but remains manageable. Notably, the CR estimator consistently demonstrates superior performance, as indicated by its relatively low MSE values, particularly when examining larger sample sizes. In instances where sample sizes are increased, such as 50 or 100, the CR estimator continues to yield reliable estimates despite the prevailing multicollinearity. Conversely, smaller sample sizes (20 and 30) reveal a significant uptick in variability, resulting in less stable estimates.
As correlation levels rise to 0.95, the adverse effects of multicollinearity become increasingly pronounced, leading to a marked escalation in MSE across all estimators. Nevertheless, the CR estimator maintains a competitive advantage; it experiences a comparatively smaller increase in MSE relative to traditional estimators. The detrimental impacts of heightened correlation are especially observable in smaller samples, where the amplified collinearity exacerbates challenges for the estimators. Notably, as sample sizes expand, the CR estimator reveals its robustness by effectively minimizing MSE in comparison with alternative estimators.
At the maximum correlation level of 0.99, the severity of multicollinearity poses significant obstacles to estimation accuracy. All estimators are subject to a pronounced increase in MSE, with the effects being most severe in smaller sample sizes. In these circumstances, the CR estimator displays remarkable resilience, consistently yielding lower MSE values compared to its counterparts. This advantage becomes particularly salient as the sample size approaches 50 or 100, wherein the CR estimator continues to furnish more stable and accurate estimates.
In summary, across all evaluated correlation levels, the CR estimator consistently demonstrates its superiority in navigating scenarios characterized by high correlation. Its capacity to deliver stable and accurate estimates is enhanced with increases in both correlation and sample size, thereby reinforcing its efficacy in environments susceptible to multicollinearity.

4.2.4. Performance as a Function of Error Variance, σ2

When examining the impact of error variance on estimator performance, it is evident that lower error variance (σ² = 1) correlates with improved overall performance of all estimators, aligning with theoretical expectations. In this context, the CR estimator consistently exhibits a reduced MSE relative to traditional estimators across diverse correlations and sample sizes (Tables 1–18). Notably, its performance is particularly pronounced in small sample sizes, where CR significantly outperforms conventional estimators such as OLS, especially in high-correlation environments (0.95 and 0.99). This observation underscores the efficacy of the CR estimator in addressing datasets characterized by strong intercorrelations.
As sample sizes increase to 50 or 100, the CR estimator continues to showcase superior performance, effectively maintaining a low MSE despite an increasing number of predictors. This resilience indicates that CR is adept at managing multicollinearity, a prevalent issue when predictors are highly correlated while also exhibiting robustness in the presence of noise, particularly when the error variance remains comparatively low.
Conversely, as the error variance escalates to σ² = 5, the challenges associated with noise become more pronounced, resulting in a general increase in MSE across all estimators (Tables 1–18). Nevertheless, the CR estimator retains a significant advantage in minimizing MSE, particularly under conditions of high correlation. The augmentation of noise exacerbates the complications associated with multicollinearity. However, CR remains resilient in such challenging scenarios.
In stark contrast, traditional estimators like OLS experience a pronounced increase in MSE as error variance rises. In larger sample sizes (n = 50 or 100), the CR estimator adeptly controls the MSE escalation, thereby demonstrating its robustness. This capacity to sustain performance amid more challenging error conditions highlights the potential of the CR estimator as a reliable tool in diverse statistical contexts, especially those marked by significant noise and correlation challenges.
In summary, the findings substantiate the effectiveness of the CR estimator in minimizing error and enhancing accuracy, particularly in environments characterized by multicollinearity and varying levels of noise.

5. Application

5.1. Example 1: Gross National Product Data

To substantiate the simulation results of this study, we analyzed data on total national research and development expenditures as a percentage of gross national product by country from 1972 to 1986. These data were used by Akdeniz and Erol (2003) and Gruber (1998) to compare several biased estimators (Table 19) [18,19]. The analysis highlights the relationship between the dependent variable Y, representing the percentage spent by the USA, and four independent variables that indicate the percentages spent by France (X₁), West Germany (X₂), Japan (X₃), and the former Soviet Union (X₄).
We assessed the multicollinearity within the predictors and found that X₁, X₂, and X₃ are highly correlated with one another (Figure 1) [20]. Furthermore, the condition number of approximately 74.79 points to the presence of moderate-to-high multicollinearity among the predictor variables. A condition number exceeding 30 indicates significant multicollinearity, which has the potential to distort regression estimates and render the model unstable.
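The diagnostics reported here can be reproduced along the following lines; this is a sketch in which the placeholder matrix X stands in for the Table 19 predictors, and the exact scaling convention behind the reported 74.79 is an assumption:

```r
# Multicollinearity diagnostics sketch; X is a placeholder for the real data
set.seed(7)
X <- matrix(rnorm(15 * 4), 15, 4)   # stand-in for (X1, X2, X3, X4) from Table 19
round(cor(X), 2)                    # pairwise correlations, as in Figure 1
d <- svd(scale(X))$d                # singular values of standardized predictors
max(d) / min(d)                     # condition number; > 30 flags multicollinearity
```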
The following restrictions R and r were used, respectively:
$$ R = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 1 & 0 & 1 & 2 \end{bmatrix}, \qquad r = \begin{bmatrix} 0 \\ 0 \end{bmatrix} $$
The MSE was computed by evaluating the estimated coefficients of each model against the true coefficients, which were assumed to be β = (1, 1, 1, 1)′. For any estimator β̂ of β, the MSE matrix can be measured by the following:
$$ \mathrm{MSEM}(\hat{\beta}, \beta) = E\left[(\hat{\beta} - \beta)(\hat{\beta} - \beta)'\right] $$
The performance of the proposed CR estimator was evaluated and compared to that of URR, RLS, ORR, and OLS estimators across a range of k values from 0.1 to 0.9, using the aforementioned dataset (Table 20).
The results indicate that CR consistently performs better, achieving lower MSE values than the other estimators. Specifically, CR’s MSE ranges from 0.5637 to 0.6143, with the lowest values recorded at k = 0.5 and k = 0.7 (0.5637 and 0.5639, respectively).
In contrast, URR shows considerable variability, with MSE peaking at 0.8031 for k = 0.4 and 0.8736 for k = 0.9, indicating instability and sensitivity to fluctuations in k. RLS and OLS maintain constant MSE values of 0.6030 and 0.6015, respectively, as they are unrelated to k. However, they fall short in competitiveness compared to CR. While ORR demonstrates moderate performance, it does not achieve lower MSE values than CR in the critical ranges.
These findings highlight CR’s robustness, particularly in the mid-range to close to 1 values of k, emphasizing its potential as a reliable and effective estimator (Figure 2).

5.2. Example 2: Acetylene Data

Table 21 provides a dataset detailing the percentage conversion of n-heptane to acetylene, alongside three explanatory variables [21]. The condition number of the dataset is calculated to be 93.68234, indicating potential multicollinearity issues. Furthermore, the variance inflation factors (VIFs) for the predictors are VIF₁ = 6.896552, VIF₂ = 21.73913, VIF₃ = 29.41176, and VIF₄ = 1.795332. These values confirm the presence of a significant multicollinearity problem within the dataset. The performance of the estimators, under these conditions, is presented in Table 22.
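The VIFs quoted above follow the standard definition VIF_j = 1/(1 − R_j²); a minimal sketch computes them from the inverse correlation matrix, assuming X holds the explanatory variables without a constant column:

```r
# VIF sketch: diagonal of the inverse correlation matrix gives VIF_j = 1/(1 - R_j^2)
vif <- function(X) diag(solve(cor(X)))
# e.g., vif(X) on the acetylene predictors would give the VIF values quoted above
```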
The performance of the proposed CR estimator was evaluated and compared to that of URR, RLS, ORR, and OLS estimators across a range of k values from 0.1 to 0.9 for this dataset. The results indicate that CR consistently outperforms the other estimators, achieving the lowest MSE values across all the k values. Specifically, CR’s MSE ranges from 0.2782 to 0.9954, with the lowest value observed at k = 0.6 (MSE = 0.2782), demonstrating its superior efficiency in this range. In contrast, URR exhibits extreme variability, with MSE values ranging from 409.9669 at k = 0.1 to 3.0775 at k = 0.9, indicating instability and sensitivity to fluctuations in k. ORR also shows considerable variation, with MSE peaking at 466.8437 at k = 0.1 and gradually decreasing as k increases.
RLS and OLS do not achieve the same level of performance as CR. Overall, these findings reinforce CR’s robustness, highlighting its potential as the most effective and reliable estimator among the ones evaluated.

6. Some Concluding Remarks

This paper introduces an unbiased restricted estimator for linear regression models, leveraging prior information to enhance estimation accuracy. The statistical properties of the proposed estimator are thoroughly examined, highlighting its superiority over several existing alternatives. A comprehensive simulation study evaluates the performance of the estimators under various parametric scenarios. Furthermore, the methodology is applied to a real-world dataset to demonstrate its practical utility. The results from both simulations and the real-data analysis consistently show that the proposed estimator outperforms its counterparts. It is anticipated that the insights presented in this study will be valuable to researchers in the field.

Author Contributions

M.I.A., H.N. and B.M.G.K. have contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article.

Acknowledgments

The authors are grateful to the reviewers for their constructive comments and suggestions, which helped to greatly improve the quality and presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for non-orthogonal problems. Technometrics 1970, 12, 69–82. [Google Scholar] [CrossRef]
  2. Crouse, R.H.; Jin, C.; Hanumara, R.C. Unbiased ridge estimation with prior information and ridge trace. Commun. Stat.—Theory Methods 1995, 24, 2341–2354. [Google Scholar] [CrossRef]
  3. Ozkale, M.R.; Kaciranlar, S. Comparisons of the unbiased ridge estimation to the other estimations. Commun. Stat.-Theory Methods 2007, 36, 707. [Google Scholar] [CrossRef]
  4. Sarkar, N. A new estimator combining the ridge regression and the restricted least squares method of estimation. Commun. Stat.—Theory Methods 1992, 21, 1987–2000. [Google Scholar] [CrossRef]
  5. Månsson, K.; Shukur, G.; Kibria, B.M.G. Performance of some ridge regression estimators for the multinomial logit model. Commun. Stat.-Theory Methods 2018, 47, 2795–2804. [Google Scholar] [CrossRef]
  6. Kibria, B.M.G.; Banik, S. Some ridge regression estimators and their performance. J. Mod. Appl. Stat. Methods 2016, 15, 206–238. [Google Scholar] [CrossRef]
  7. Alheety, M.I.; Kibria, B.M.G. Choosing ridge Parameters in the Linear regression Model with AR(1) Error: A Comparative Simulation Study. Int. J. Stat. Econ. (IJSE) 2011, 7, 10–26. [Google Scholar]
  8. Alheety, M.I. New versions of Liu-type estimator in weighted and non-weighted mixed regression model. Baghdad Sci. J. 2020, 17, 361–370. [Google Scholar] [CrossRef]
  9. Kibria, B.M.G. More than hundred (100) estimators for estimating the shrinkage parameter in a linear and generalized linear ridge regression models. J. Econ. Stat. 2022, 2, 233–252. [Google Scholar]
  10. Özkale, M.R. The relative efficiency of the restricted estimators in linear regression models. J. Appl. Stat. 2014, 41, 998–1027. [Google Scholar] [CrossRef]
  11. Wu, J. An unbiased two-parameter estimator with prior information in linear regression model. Sci. World J. 2014, 2014, 206943. [Google Scholar]
  12. Özkale, M.R.; Kaciranlar, S. The restricted and unrestricted two-parameter estimators. Commun. Stat.—Theory Methods 2007, 36, 2707–2725. [Google Scholar] [CrossRef]
  13. Kibria, B.M.G. Performance of Some New Ridge Regression Estimators. Commun. Stat.-Simul. Comput. 2003, 32, 419–435. [Google Scholar] [CrossRef]
  14. Alheety, M.I.; Gore, S.D. A new estimator in multiple linear regression model. Model Assist. Stat. Appl. 2008, 3, 187–200. [Google Scholar] [CrossRef]
  15. Nayem, H.M.; Kibria, B.M.G. A simulation study on some confidence Intervals for Estimating the Population Mean Under Asymmetric and Symmetric Distribution Conditions. J. Stat. Appl. Probab. Lett. 2024, 11, 123–144. [Google Scholar]
  16. Nayem, H.M.; Aziz, S.; Kibria, B.M.G. Comparison among Ordinary Least Squares, Ridge, Lasso, and Elastic Net Estimators in the Presence of Outliers: Simulation and Application. Int. J. Stat. Sci. 2024, 24, 25–48. [Google Scholar] [CrossRef]
  17. Najarian, S.; Arashi, M.; Kibria, B.M.G. A simulation study on some restricted ridge regression estimators. Commun. Stat.-Simul. Comput. 2012, 42, 871–890. [Google Scholar] [CrossRef]
  18. Akdeniz, F.; Erol, H. Mean squared error matrix comparisons of some biased estimators in linear regression. Commun. Stat.—Theory Methods 2003, 32, 2389–2413. [Google Scholar] [CrossRef]
  19. Gruber, M.H.J. Improving Efficiency by Shrinkage; Marcel Dekker: New York, NY, USA, 1998. [Google Scholar]
  20. Kasubi, F.; Mdimi, O.; Nayem, H.M.; Mwambeleko, E.; Shita, H.; Venkata, S.S.G. Deciphering Seasonal Weather Impacts on Crash Severity: A Machine Learning Approach. Inst. Transp. Eng. ITE J. 2024, 94, 39–46. [Google Scholar]
  21. Marquardt, D.W.; Snee, R.D. Ridge Regression in Practice. Am. Stat. 1975, 29, 3–20. [Google Scholar] [CrossRef]
Figure 1. Correlation matrix.
Figure 2. MSE of the estimators across different k values.
Table 1. Average MSE values of the estimators when p = 3 and ρ = 0.50.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 1.92 1.58 1.56 1.92 1.96 | 1.59 1.38 1.37 1.60 1.61 | 1.35 1.23 1.23 1.36 1.36 | 1.18 1.11 1.11 1.18 1.18
1 | 0.2 | 1.88 1.59 1.54 1.89 1.97 | 1.58 1.39 1.36 1.58 1.61 | 1.35 1.23 1.22 1.35 1.36 | 1.18 1.12 1.11 1.18 1.18
1 | 0.3 | 1.82 1.57 1.50 1.83 1.94 | 1.58 1.39 1.36 1.58 1.63 | 1.34 1.23 1.21 1.35 1.36 | 1.17 1.11 1.11 1.18 1.18
1 | 0.4 | 1.79 1.57 1.48 1.81 1.95 | 1.55 1.38 1.34 1.56 1.61 | 1.34 1.23 1.21 1.35 1.37 | 1.17 1.11 1.11 1.18 1.18
1 | 0.5 | 1.76 1.57 1.45 1.78 1.95 | 1.52 1.37 1.32 1.54 1.60 | 1.34 1.23 1.21 1.35 1.37 | 1.17 1.12 1.11 1.18 1.18
1 | 0.6 | 1.74 1.58 1.44 1.76 1.95 | 1.52 1.38 1.32 1.53 1.61 | 1.33 1.23 1.20 1.34 1.36 | 1.17 1.11 1.10 1.17 1.18
1 | 0.7 | 1.71 1.58 1.42 1.73 1.95 | 1.51 1.39 1.32 1.53 1.62 | 1.32 1.23 1.20 1.34 1.37 | 1.16 1.11 1.10 1.17 1.18
1 | 0.8 | 1.70 1.59 1.41 1.72 1.97 | 1.49 1.38 1.29 1.51 1.61 | 1.32 1.23 1.19 1.33 1.37 | 1.17 1.12 1.10 1.17 1.18
1 | 0.9 | 1.67 1.59 1.40 1.70 1.97 | 1.48 1.39 1.30 1.50 1.62 | 1.31 1.23 1.19 1.32 1.36 | 1.16 1.11 1.10 1.17 1.18
5 | 0.1 | 2.42 1.89 1.85 2.43 2.49 | 1.93 1.58 1.57 1.93 1.95 | 1.54 1.35 1.34 1.55 1.55 | 1.26 1.17 1.17 1.26 1.26
5 | 0.2 | 2.37 1.89 1.82 2.37 2.49 | 1.91 1.59 1.55 1.92 1.96 | 1.53 1.34 1.33 1.54 1.55 | 1.27 1.17 1.17 1.27 1.27
5 | 0.3 | 2.32 1.89 1.78 2.33 2.50 | 1.87 1.58 1.53 1.88 1.94 | 1.52 1.34 1.33 1.53 1.55 | 1.26 1.17 1.17 1.26 1.27
5 | 0.4 | 2.27 1.89 1.75 2.28 2.50 | 1.86 1.57 1.51 1.87 1.95 | 1.53 1.35 1.33 1.53 1.56 | 1.27 1.17 1.17 1.27 1.27
5 | 0.5 | 2.21 1.88 1.71 2.22 2.49 | 1.85 1.59 1.52 1.86 1.96 | 1.51 1.35 1.32 1.52 1.55 | 1.25 1.16 1.15 1.25 1.26
5 | 0.6 | 2.16 1.88 1.69 2.17 2.47 | 1.81 1.58 1.49 1.82 1.94 | 1.51 1.35 1.31 1.51 1.55 | 1.26 1.17 1.16 1.26 1.27
5 | 0.7 | 2.13 1.89 1.67 2.13 2.48 | 1.81 1.60 1.49 1.82 1.97 | 1.49 1.34 1.30 1.50 1.54 | 1.25 1.17 1.16 1.26 1.27
5 | 0.8 | 2.11 1.89 1.64 2.11 2.51 | 1.79 1.59 1.47 1.80 1.96 | 1.48 1.34 1.29 1.49 1.55 | 1.26 1.17 1.16 1.26 1.28
5 | 0.9 | 2.08 1.90 1.63 2.09 2.52 | 1.75 1.58 1.46 1.77 1.94 | 1.48 1.35 1.30 1.49 1.55 | 1.25 1.17 1.15 1.25 1.27
Table 2. Average MSE values of the estimators when p = 3 and ρ = 0.70.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 2.31 1.85 1.81 2.32 2.39 | 1.87 1.57 1.55 1.87 1.90 | 1.53 1.34 1.34 1.54 1.54 | 1.26 1.17 1.17 1.26 1.26
1 | 0.2 | 2.26 1.86 1.77 2.27 2.40 | 1.85 1.56 1.52 1.86 1.91 | 1.51 1.34 1.32 1.51 1.53 | 1.26 1.17 1.17 1.26 1.26
1 | 0.3 | 2.20 1.86 1.74 2.21 2.40 | 1.82 1.57 1.52 1.83 1.90 | 1.49 1.33 1.31 1.50 1.52 | 1.26 1.17 1.16 1.26 1.27
1 | 0.4 | 2.14 1.85 1.70 2.15 2.38 | 1.80 1.57 1.50 1.81 1.90 | 1.50 1.34 1.31 1.51 1.54 | 1.25 1.16 1.16 1.25 1.26
1 | 0.5 | 2.10 1.84 1.66 2.12 2.41 | 1.78 1.56 1.48 1.79 1.91 | 1.49 1.34 1.31 1.49 1.53 | 1.25 1.17 1.16 1.25 1.26
1 | 0.6 | 2.03 1.84 1.63 2.05 2.37 | 1.76 1.56 1.47 1.77 1.91 | 1.47 1.33 1.30 1.48 1.53 | 1.25 1.17 1.16 1.26 1.27
1 | 0.7 | 2.00 1.84 1.60 2.02 2.39 | 1.73 1.55 1.44 1.74 1.90 | 1.47 1.34 1.29 1.48 1.53 | 1.25 1.17 1.16 1.25 1.27
1 | 0.8 | 1.95 1.85 1.58 1.98 2.37 | 1.70 1.55 1.43 1.72 1.89 | 1.45 1.33 1.28 1.46 1.52 | 1.24 1.16 1.15 1.25 1.26
1 | 0.9 | 1.92 1.83 1.55 1.94 2.38 | 1.68 1.56 1.42 1.70 1.89 | 1.45 1.33 1.28 1.46 1.53 | 1.24 1.17 1.15 1.25 1.27
5 | 0.1 | 2.93 2.20 2.14 2.93 3.03 | 2.25 1.79 1.77 2.25 2.29 | 1.73 1.46 1.45 1.73 1.74 | 1.36 1.22 1.22 1.36 1.36
5 | 0.2 | 2.85 2.19 2.08 2.85 3.05 | 2.24 1.79 1.74 2.24 2.31 | 1.72 1.46 1.44 1.72 1.75 | 1.36 1.23 1.23 1.36 1.37
5 | 0.3 | 2.74 2.20 2.04 2.74 3.02 | 2.20 1.80 1.73 2.20 2.30 | 1.71 1.46 1.43 1.72 1.75 | 1.37 1.23 1.22 1.37 1.37
5 | 0.4 | 2.70 2.24 2.02 2.70 3.06 | 2.16 1.80 1.71 2.16 2.30 | 1.69 1.47 1.44 1.70 1.74 | 1.35 1.23 1.22 1.35 1.36
5 | 0.5 | 2.62 2.22 1.97 2.62 3.04 | 2.12 1.78 1.67 2.12 2.29 | 1.70 1.48 1.44 1.71 1.76 | 1.35 1.23 1.22 1.35 1.36
5 | 0.6 | 2.57 2.21 1.92 2.57 3.07 | 2.08 1.77 1.65 2.08 2.28 | 1.70 1.48 1.43 1.70 1.77 | 1.35 1.23 1.22 1.35 1.37
5 | 0.7 | 2.50 2.20 1.88 2.50 3.06 | 2.05 1.77 1.63 2.05 2.28 | 1.66 1.46 1.40 1.67 1.74 | 1.34 1.23 1.21 1.34 1.36
5 | 0.8 | 2.44 2.20 1.85 2.43 3.04 | 2.03 1.78 1.62 2.03 2.28 | 1.66 1.46 1.40 1.66 1.75 | 1.35 1.23 1.21 1.35 1.37
5 | 0.9 | 2.38 2.19 1.81 2.37 3.03 | 2.04 1.80 1.62 2.04 2.32 | 1.65 1.46 1.39 1.66 1.75 | 1.34 1.23 1.21 1.35 1.37
Table 3. Average MSE values of the estimators when p = 3 and ρ = 0.80.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 2.75 2.12 2.05 2.75 2.87 | 2.16 1.74 1.71 2.16 2.20 | 1.69 1.44 1.43 1.69 1.70 | 1.35 1.22 1.22 1.35 1.35
1 | 0.2 | 2.69 2.14 2.01 2.69 2.91 | 2.10 1.72 1.67 2.11 2.19 | 1.67 1.45 1.43 1.68 1.70 | 1.34 1.22 1.22 1.34 1.35
1 | 0.3 | 2.53 2.11 1.93 2.54 2.83 | 2.10 1.74 1.67 2.11 2.23 | 1.66 1.44 1.41 1.67 1.71 | 1.34 1.23 1.22 1.35 1.36
1 | 0.4 | 2.49 2.13 1.90 2.49 2.88 | 2.05 1.75 1.65 2.06 2.21 | 1.64 1.44 1.40 1.65 1.70 | 1.34 1.23 1.22 1.34 1.36
1 | 0.5 | 2.40 2.12 1.85 2.41 2.86 | 2.03 1.76 1.63 2.03 2.22 | 1.63 1.44 1.40 1.64 1.70 | 1.33 1.22 1.21 1.33 1.35
1 | 0.6 | 2.36 2.13 1.81 2.36 2.88 | 1.99 1.75 1.61 2.00 2.22 | 1.64 1.45 1.39 1.64 1.72 | 1.32 1.22 1.20 1.33 1.34
1 | 0.7 | 2.27 2.13 1.78 2.28 2.85 | 1.95 1.74 1.58 1.95 2.20 | 1.61 1.44 1.38 1.62 1.70 | 1.32 1.22 1.20 1.32 1.35
1 | 0.8 | 2.25 2.15 1.76 2.25 2.90 | 1.94 1.75 1.57 1.95 2.23 | 1.60 1.45 1.38 1.61 1.71 | 1.32 1.22 1.20 1.33 1.35
1 | 0.9 | 2.17 2.13 1.71 2.19 2.87 | 1.89 1.74 1.54 1.91 2.21 | 1.58 1.44 1.36 1.59 1.70 | 1.32 1.22 1.20 1.32 1.35
5 | 0.1 | 3.49 2.54 2.45 3.49 3.65 | 2.64 2.02 1.98 2.65 2.71 | 1.96 1.60 1.59 1.96 1.98 | 1.47 1.29 1.29 1.47 1.48
5 | 0.2 | 3.36 2.57 2.40 3.36 3.67 | 2.57 2.00 1.93 2.57 2.68 | 1.93 1.59 1.56 1.94 1.97 | 1.48 1.30 1.29 1.48 1.49
5 | 0.3 | 3.25 2.54 2.30 3.24 3.68 | 2.50 2.01 1.91 2.51 2.67 | 1.93 1.60 1.57 1.93 1.98 | 1.47 1.30 1.29 1.47 1.48
5 | 0.4 | 3.11 2.56 2.25 3.10 3.66 | 2.47 2.02 1.89 2.46 2.68 | 1.90 1.59 1.54 1.91 1.98 | 1.46 1.29 1.28 1.46 1.47
5 | 0.5 | 3.00 2.54 2.17 2.98 3.64 | 2.42 2.00 1.84 2.42 2.69 | 1.90 1.60 1.55 1.90 1.99 | 1.47 1.30 1.29 1.47 1.49
5 | 0.6 | 2.88 2.53 2.10 2.86 3.61 | 2.37 2.01 1.82 2.37 2.68 | 1.88 1.60 1.53 1.88 1.98 | 1.45 1.29 1.27 1.45 1.48
5 | 0.7 | 2.85 2.55 2.07 2.82 3.68 | 2.35 2.01 1.80 2.35 2.71 | 1.85 1.60 1.52 1.85 1.97 | 1.45 1.30 1.27 1.45 1.48
5 | 0.8 | 2.75 2.52 2.01 2.73 3.66 | 2.31 2.00 1.77 2.31 2.71 | 1.82 1.59 1.50 1.83 1.96 | 1.45 1.30 1.27 1.45 1.49
5 | 0.9 | 2.68 2.55 1.99 2.66 3.66 | 2.26 2.02 1.76 2.26 2.70 | 1.81 1.59 1.49 1.82 1.96 | 1.43 1.29 1.27 1.44 1.48
Table 4. Average MSE values of the estimators when p = 5 and ρ = 0.50.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 3.13 2.63 2.54 3.13 3.26 | 2.35 2.04 2.01 2.36 2.39 | 1.80 1.61 1.60 1.80 1.81 | 1.39 1.30 1.30 1.39 1.39
1 | 0.2 | 3.01 2.63 2.47 3.01 3.24 | 2.31 2.03 1.97 2.31 2.39 | 1.78 1.60 1.59 1.78 1.81 | 1.39 1.30 1.29 1.39 1.39
1 | 0.3 | 2.90 2.62 2.39 2.90 3.23 | 2.28 2.04 1.95 2.28 2.39 | 1.77 1.61 1.58 1.78 1.81 | 1.38 1.30 1.29 1.38 1.39
1 | 0.4 | 2.84 2.65 2.35 2.85 3.27 | 2.25 2.03 1.92 2.25 2.40 | 1.75 1.60 1.56 1.75 1.79 | 1.38 1.29 1.28 1.38 1.39
1 | 0.5 | 2.74 2.62 2.28 2.74 3.24 | 2.20 2.03 1.90 2.21 2.39 | 1.73 1.59 1.55 1.74 1.79 | 1.37 1.29 1.28 1.38 1.39
1 | 0.6 | 2.68 2.63 2.22 2.68 3.26 | 2.17 2.03 1.87 2.18 2.39 | 1.74 1.61 1.55 1.74 1.81 | 1.38 1.29 1.28 1.38 1.39
1 | 0.7 | 2.62 2.65 2.19 2.62 3.27 | 2.17 2.05 1.87 2.18 2.42 | 1.73 1.61 1.54 1.73 1.81 | 1.37 1.30 1.28 1.38 1.39
1 | 0.8 | 2.52 2.61 2.12 2.53 3.23 | 2.12 2.04 1.83 2.13 2.39 | 1.72 1.61 1.54 1.73 1.81 | 1.38 1.30 1.28 1.38 1.40
1 | 0.9 | 2.48 2.63 2.09 2.49 3.25 | 2.09 2.03 1.81 2.10 2.39 | 1.70 1.60 1.52 1.71 1.80 | 1.37 1.30 1.28 1.38 1.40
5 | 0.1 | 3.80 3.13 3.02 3.80 3.97 | 2.75 2.31 2.27 2.75 2.80 | 2.01 1.76 1.75 2.01 2.03 | 1.49 1.37 1.37 1.49 1.50
5 | 0.2 | 3.72 3.19 2.97 3.72 4.04 | 2.69 2.32 2.25 2.69 2.79 | 2.00 1.77 1.75 2.00 2.03 | 1.50 1.38 1.37 1.50 1.50
5 | 0.3 | 3.53 3.14 2.84 3.52 3.97 | 2.66 2.33 2.22 2.66 2.81 | 1.98 1.76 1.73 1.99 2.03 | 1.49 1.38 1.37 1.49 1.50
5 | 0.4 | 3.38 3.12 2.74 3.37 3.92 | 2.61 2.32 2.18 2.61 2.81 | 1.98 1.78 1.74 1.98 2.04 | 1.48 1.37 1.36 1.48 1.49
5 | 0.5 | 3.36 3.17 2.71 3.34 4.03 | 2.57 2.33 2.16 2.57 2.81 | 1.95 1.77 1.71 1.96 2.03 | 1.48 1.37 1.36 1.48 1.49
5 | 0.6 | 3.22 3.15 2.62 3.20 3.98 | 2.55 2.35 2.14 2.55 2.83 | 1.94 1.76 1.70 1.94 2.02 | 1.48 1.38 1.36 1.48 1.50
5 | 0.7 | 3.14 3.17 2.56 3.13 4.01 | 2.49 2.33 2.10 2.49 2.80 | 1.91 1.75 1.68 1.92 2.01 | 1.47 1.37 1.35 1.48 1.50
5 | 0.8 | 3.05 3.14 2.48 3.03 3.99 | 2.46 2.33 2.08 2.46 2.81 | 1.91 1.77 1.69 1.92 2.03 | 1.47 1.38 1.35 1.47 1.50
5 | 0.9 | 2.96 3.12 2.43 2.93 3.96 | 2.43 2.35 2.06 2.43 2.82 | 1.90 1.77 1.67 1.91 2.03 | 1.47 1.37 1.35 1.47 1.50
Table 5. Average MSE values of the estimators when p = 5 and ρ = 0.70.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 4.54 3.75 3.58 4.54 4.78 | 3.30 2.77 2.71 3.30 3.38 | 2.35 2.03 2.01 2.36 2.38 | 1.67 1.51 1.50 1.67 1.68
1 | 0.2 | 4.37 3.78 3.47 4.38 4.84 | 3.25 2.78 2.67 3.26 3.42 | 2.34 2.04 2.00 2.35 2.39 | 1.67 1.51 1.50 1.67 1.68
1 | 0.3 | 4.17 3.78 3.32 4.16 4.82 | 3.13 2.75 2.59 3.13 3.35 | 2.30 2.02 1.97 2.31 2.38 | 1.67 1.51 1.50 1.67 1.68
1 | 0.4 | 4.02 3.80 3.23 4.02 4.85 | 3.10 2.77 2.55 3.10 3.40 | 2.28 2.03 1.96 2.28 2.37 | 1.65 1.51 1.49 1.65 1.67
1 | 0.5 | 3.82 3.75 3.08 3.83 4.80 | 3.03 2.77 2.51 3.03 3.39 | 2.27 2.04 1.95 2.28 2.39 | 1.65 1.51 1.48 1.65 1.67
1 | 0.6 | 3.71 3.76 2.99 3.70 4.81 | 2.97 2.77 2.46 2.97 3.39 | 2.23 2.03 1.93 2.24 2.37 | 1.65 1.52 1.49 1.65 1.68
1 | 0.7 | 3.60 3.77 2.92 3.59 4.84 | 2.92 2.77 2.42 2.92 3.40 | 2.23 2.05 1.93 2.24 2.39 | 1.64 1.51 1.48 1.64 1.68
1 | 0.8 | 3.47 3.76 2.82 3.47 4.83 | 2.87 2.78 2.39 2.87 3.41 | 2.21 2.03 1.90 2.21 2.38 | 1.64 1.52 1.48 1.64 1.68
1 | 0.9 | 3.39 3.82 2.77 3.38 4.87 | 2.80 2.75 2.33 2.80 3.38 | 2.16 2.01 1.87 2.17 2.35 | 1.63 1.51 1.47 1.63 1.68
5 | 0.1 | 5.32 4.34 4.13 5.32 5.62 | 3.79 3.11 3.04 3.79 3.89 | 2.63 2.24 2.22 2.63 2.66 | 1.80 1.61 1.60 1.80 1.81
5 | 0.2 | 5.06 4.34 3.96 5.06 5.62 | 3.68 3.11 2.97 3.68 3.87 | 2.58 2.22 2.17 2.58 2.64 | 1.79 1.60 1.60 1.79 1.80
5 | 0.3 | 4.90 4.42 3.87 4.89 5.71 | 3.58 3.10 2.90 3.58 3.85 | 2.57 2.24 2.17 2.57 2.66 | 1.77 1.60 1.58 1.77 1.79
5 | 0.4 | 4.60 4.30 3.63 4.58 5.58 | 3.49 3.09 2.83 3.49 3.84 | 2.53 2.22 2.14 2.53 2.64 | 1.77 1.60 1.58 1.77 1.80
5 | 0.5 | 4.47 4.34 3.53 4.44 5.64 | 3.48 3.16 2.85 3.47 3.92 | 2.51 2.24 2.14 2.51 2.65 | 1.77 1.60 1.58 1.77 1.80
5 | 0.6 | 4.31 4.38 3.44 4.28 5.65 | 3.36 3.10 2.74 3.36 3.87 | 2.47 2.22 2.10 2.47 2.63 | 1.75 1.59 1.56 1.75 1.79
5 | 0.7 | 4.16 4.37 3.33 4.13 5.66 | 3.30 3.11 2.70 3.29 3.87 | 2.48 2.23 2.10 2.48 2.66 | 1.77 1.61 1.58 1.78 1.82
5 | 0.8 | 4.07 4.40 3.26 4.03 5.73 | 3.22 3.09 2.64 3.21 3.85 | 2.43 2.23 2.07 2.43 2.64 | 1.75 1.60 1.56 1.75 1.80
5 | 0.9 | 3.93 4.43 3.16 3.89 5.74 | 3.21 3.15 2.63 3.20 3.92 | 2.42 2.23 2.06 2.42 2.65 | 1.73 1.59 1.55 1.73 1.79
Table 6. Average MSE values of the estimators when p = 5 and ρ = 0.80.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 6.11 5.01 4.71 6.11 6.55 | 4.29 3.55 3.44 4.29 4.44 | 2.91 2.47 2.44 2.92 2.96 | 1.97 1.73 1.72 1.97 1.98
1 | 0.2 | 5.76 5.04 4.48 5.76 6.59 | 4.20 3.56 3.36 4.20 4.49 | 2.90 2.48 2.42 2.91 2.99 | 1.95 1.72 1.71 1.95 1.97
1 | 0.3 | 5.41 4.99 4.21 5.41 6.55 | 4.04 3.53 3.24 4.04 4.44 | 2.87 2.49 2.40 2.87 3.00 | 1.93 1.71 1.69 1.93 1.96
1 | 0.4 | 5.08 4.96 4.00 5.08 6.49 | 3.92 3.55 3.18 3.92 4.44 | 2.84 2.50 2.38 2.85 3.01 | 1.94 1.73 1.70 1.94 1.98
1 | 0.5 | 4.84 4.97 3.84 4.83 6.47 | 3.85 3.57 3.10 3.85 4.48 | 2.79 2.50 2.35 2.79 2.99 | 1.93 1.73 1.69 1.93 1.97
1 | 0.6 | 4.68 5.02 3.71 4.67 6.58 | 3.72 3.56 3.03 3.72 4.46 | 2.76 2.49 2.31 2.76 3.00 | 1.90 1.72 1.68 1.91 1.96
1 | 0.7 | 4.49 5.04 3.59 4.48 6.58 | 3.65 3.55 2.95 3.65 4.48 | 2.71 2.48 2.28 2.71 2.98 | 1.89 1.72 1.67 1.90 1.96
1 | 0.8 | 4.27 4.97 3.43 4.26 6.48 | 3.53 3.55 2.88 3.53 4.45 | 2.69 2.48 2.25 2.69 2.99 | 1.90 1.73 1.67 1.90 1.97
1 | 0.9 | 4.09 5.00 3.31 4.08 6.50 | 3.44 3.52 2.81 3.44 4.43 | 2.63 2.46 2.21 2.63 2.96 | 1.88 1.72 1.66 1.88 1.96
5 | 0.1 | 7.09 5.78 5.42 7.09 7.62 | 4.89 4.01 3.89 4.89 5.07 | 3.28 2.74 2.70 3.28 3.33 | 2.12 1.85 1.85 2.12 2.13
5 | 0.2 | 6.66 5.78 5.11 6.65 7.65 | 4.74 4.02 3.78 4.74 5.08 | 3.21 2.72 2.65 3.21 3.31 | 2.11 1.85 1.84 2.11 2.13
5 | 0.3 | 6.30 5.81 4.88 6.28 7.67 | 4.62 4.00 3.66 4.61 5.10 | 3.17 2.72 2.62 3.17 3.32 | 2.11 1.85 1.83 2.11 2.14
5 | 0.4 | 5.93 5.76 4.60 5.90 7.60 | 4.45 3.97 3.53 4.44 5.06 | 3.13 2.73 2.59 3.13 3.32 | 2.08 1.85 1.82 2.09 2.13
5 | 0.5 | 5.64 5.79 4.42 5.59 7.62 | 4.40 4.07 3.53 4.39 5.16 | 3.09 2.73 2.56 3.09 3.32 | 2.07 1.84 1.80 2.07 2.12
5 | 0.6 | 5.34 5.70 4.18 5.29 7.57 | 4.20 3.99 3.37 4.18 5.05 | 3.06 2.74 2.53 3.06 3.34 | 2.07 1.84 1.79 2.07 2.13
5 | 0.7 | 5.19 5.83 4.10 5.12 7.68 | 4.11 4.00 3.31 4.09 5.07 | 3.00 2.72 2.49 2.99 3.31 | 2.06 1.86 1.80 2.06 2.13
5 | 0.8 | 4.91 5.67 3.86 4.84 7.54 | 3.99 3.99 3.23 3.97 5.05 | 2.97 2.73 2.47 2.97 3.33 | 2.05 1.85 1.79 2.05 2.13
5 | 0.9 | 4.74 5.72 3.75 4.66 7.59 | 3.87 3.96 3.13 3.85 5.02 | 2.98 2.77 2.47 2.97 3.37 | 2.03 1.84 1.77 2.03 2.12
Table 7. Average MSE values of the estimators when p = 10 and ρ = 0.50.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 9.62 9.13 8.21 9.61 10.83 | 5.91 5.43 5.24 5.91 6.14 | 3.71 3.42 3.38 3.71 3.76 | 2.28 2.14 2.14 2.28 2.29
1 | 0.2 | 8.71 9.15 7.54 8.71 10.83 | 5.73 5.45 5.08 5.73 6.17 | 3.65 3.41 3.32 3.65 3.75 | 2.27 2.14 2.13 2.27 2.29
1 | 0.3 | 8.09 9.15 7.02 8.08 10.91 | 5.52 5.45 4.93 5.52 6.15 | 3.60 3.41 3.28 3.60 3.75 | 2.26 2.14 2.12 2.26 2.29
1 | 0.4 | 7.53 9.08 6.55 7.52 10.82 | 5.36 5.45 4.79 5.36 6.17 | 3.57 3.41 3.25 3.57 3.76 | 2.27 2.15 2.12 2.27 2.30
1 | 0.5 | 7.09 9.15 6.21 7.07 10.95 | 5.17 5.41 4.61 5.17 6.13 | 3.54 3.45 3.24 3.54 3.78 | 2.25 2.15 2.11 2.25 2.30
1 | 0.6 | 6.67 9.07 5.85 6.64 10.96 | 5.09 5.50 4.55 5.09 6.23 | 3.47 3.41 3.17 3.47 3.74 | 2.24 2.14 2.09 2.24 2.29
1 | 0.7 | 6.32 9.17 5.61 6.29 10.90 | 4.96 5.50 4.44 4.95 6.24 | 3.43 3.42 3.14 3.43 3.75 | 2.23 2.15 2.09 2.23 2.30
1 | 0.8 | 6.04 9.09 5.36 6.01 10.86 | 4.78 5.45 4.31 4.78 6.16 | 3.38 3.39 3.08 3.38 3.74 | 2.24 2.16 2.10 2.24 2.31
1 | 0.9 | 5.72 9.04 5.13 5.69 10.72 | 4.68 5.44 4.20 4.67 6.18 | 3.38 3.44 3.10 3.38 3.78 | 2.22 2.15 2.08 2.22 2.30
5 | 0.1 | 10.76 10.11 9.08 10.75 12.16 | 6.44 5.87 5.67 6.44 6.69 | 4.00 3.67 3.63 4.00 4.06 | 2.41 2.25 2.25 2.41 2.42
5 | 0.2 | 9.82 10.12 8.34 9.79 12.25 | 6.29 5.99 5.58 6.29 6.79 | 3.92 3.66 3.56 3.92 4.03 | 2.41 2.26 2.25 2.41 2.43
5 | 0.3 | 8.97 10.06 7.70 8.92 12.11 | 6.06 5.94 5.35 6.05 6.77 | 3.87 3.66 3.53 3.87 4.03 | 2.39 2.25 2.22 2.39 2.42
5 | 0.4 | 8.35 10.14 7.25 8.30 12.12 | 5.82 5.92 5.19 5.81 6.70 | 3.82 3.66 3.48 3.82 4.03 | 2.38 2.26 2.22 2.38 2.42
5 | 0.5 | 7.80 10.09 6.84 7.75 12.05 | 5.68 5.96 5.07 5.67 6.75 | 3.74 3.63 3.41 3.74 4.00 | 2.36 2.25 2.20 2.36 2.41
5 | 0.6 | 7.31 9.91 6.42 7.23 11.91 | 5.54 5.97 4.93 5.52 6.79 | 3.74 3.66 3.40 3.74 4.04 | 2.36 2.25 2.20 2.36 2.42
5 | 0.7 | 6.98 9.99 6.14 6.89 12.03 | 5.37 5.95 4.79 5.35 6.77 | 3.67 3.65 3.35 3.67 4.02 | 2.35 2.25 2.19 2.35 2.41
5 | 0.8 | 6.63 9.94 5.85 6.52 11.91 | 5.23 5.96 4.68 5.20 6.77 | 3.64 3.66 3.32 3.63 4.03 | 2.33 2.25 2.18 2.33 2.41
5 | 0.9 | 6.40 10.10 5.67 6.29 12.12 | 5.07 5.93 4.55 5.04 6.73 | 3.62 3.68 3.31 3.61 4.06 | 2.34 2.26 2.18 2.34 2.43
Table 8. Average MSE values of the estimators when p = 10 and ρ = 0.70.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 17.34 16.68 14.66 17.33 20.03 | 10.76 9.87 9.42 10.76 11.31 | 6.41 5.87 5.76 6.41 6.53 | 3.62 3.34 3.32 3.62 3.64
1 | 0.2 | 15.59 16.90 13.34 15.57 20.31 | 10.26 9.88 9.02 10.25 11.29 | 6.31 5.86 5.66 6.31 6.54 | 3.59 3.33 3.29 3.59 3.63
1 | 0.3 | 14.14 17.04 12.23 14.11 20.29 | 9.86 9.95 8.71 9.86 11.36 | 6.19 5.85 5.55 6.19 6.54 | 3.56 3.33 3.27 3.56 3.63
1 | 0.4 | 13.08 17.01 11.40 13.05 20.29 | 9.44 9.94 8.36 9.44 11.33 | 6.10 5.88 5.48 6.10 6.56 | 3.53 3.31 3.24 3.53 3.62
1 | 0.5 | 12.14 17.12 10.61 12.10 20.51 | 9.00 9.79 7.97 9.00 11.20 | 5.98 5.86 5.37 5.98 6.54 | 3.50 3.31 3.22 3.50 3.61
1 | 0.6 | 11.33 17.06 9.94 11.29 20.44 | 8.78 9.91 7.79 8.77 11.33 | 5.91 5.90 5.33 5.92 6.58 | 3.49 3.31 3.20 3.49 3.62
1 | 0.7 | 10.59 16.92 9.34 10.54 20.29 | 8.46 9.95 7.55 8.45 11.34 | 5.79 5.88 5.22 5.79 6.54 | 3.47 3.32 3.19 3.47 3.62
1 | 0.8 | 10.11 17.16 8.99 10.05 20.41 | 8.18 9.87 7.25 8.16 11.34 | 5.67 5.85 5.13 5.68 6.51 | 3.44 3.31 3.16 3.44 3.61
1 | 0.9 | 9.47 16.92 8.43 9.42 20.26 | 7.90 9.91 7.04 7.88 11.32 | 5.61 5.87 5.06 5.61 6.55 | 3.44 3.34 3.17 3.44 3.63
5 | 0.1 | 18.87 18.08 15.87 18.85 21.91 | 11.45 10.55 10.06 11.45 12.05 | 6.77 6.19 6.07 6.77 6.90 | 3.74 3.44 3.42 3.74 3.76
5 | 0.2 | 16.57 18.00 14.18 16.54 21.57 | 10.98 10.55 9.61 10.98 12.13 | 6.59 6.13 5.92 6.59 6.84 | 3.73 3.46 3.41 3.73 3.78
5 | 0.3 | 15.11 18.12 13.06 15.05 21.71 | 10.44 10.47 9.17 10.42 12.03 | 6.53 6.17 5.84 6.52 6.90 | 3.68 3.42 3.36 3.68 3.75
5 | 0.4 | 13.97 18.14 12.11 13.90 21.79 | 10.01 10.48 8.82 9.99 12.01 | 6.36 6.12 5.70 6.36 6.84 | 3.70 3.47 3.39 3.70 3.79
5 | 0.5 | 12.94 18.03 11.26 12.85 21.83 | 9.61 10.46 8.49 9.59 11.99 | 6.32 6.20 5.68 6.31 6.91 | 3.65 3.45 3.35 3.65 3.77
5 | 0.6 | 12.05 18.03 10.55 11.95 21.61 | 9.33 10.58 8.27 9.31 12.11 | 6.15 6.11 5.52 6.15 6.85 | 3.66 3.48 3.35 3.66 3.80
5 | 0.7 | 11.32 18.14 9.99 11.20 21.78 | 8.97 10.53 7.95 8.93 12.07 | 6.03 6.10 5.42 6.02 6.81 | 3.64 3.48 3.34 3.64 3.80
5 | 0.8 | 10.76 18.22 9.52 10.62 21.80 | 8.67 10.50 7.69 8.63 12.04 | 5.96 6.14 5.37 5.95 6.84 | 3.59 3.46 3.30 3.59 3.78
5 | 0.9 | 10.09 17.96 8.96 9.97 21.58 | 8.41 10.52 7.49 8.37 12.05 | 5.87 6.12 5.27 5.86 6.86 | 3.57 3.45 3.27 3.57 3.77
Table 9. Average MSE values of the estimators when p = 10 and ρ = 0.80.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 25.89 25.91 21.92 25.88 31.16 | 15.76 14.59 13.74 15.76 16.81 | 9.31 8.49 8.29 9.31 9.54 | 4.94 4.52 4.48 4.94 4.98
1 | 0.2 | 22.40 25.71 19.13 22.37 31.10 | 15.00 14.75 13.11 14.99 16.99 | 9.09 8.52 8.13 9.09 9.55 | 4.89 4.51 4.43 4.89 4.97
1 | 0.3 | 20.06 25.87 17.32 20.02 31.20 | 14.21 14.71 12.42 14.20 17.01 | 8.91 8.48 7.91 8.92 9.59 | 4.89 4.55 4.44 4.89 5.02
1 | 0.4 | 17.91 25.57 15.61 17.87 30.74 | 13.66 14.88 11.94 13.65 17.27 | 8.62 8.43 7.69 8.62 9.49 | 4.88 4.57 4.42 4.88 5.05
1 | 0.5 | 16.58 25.66 14.51 16.52 30.90 | 12.77 14.67 11.25 12.76 16.90 | 8.43 8.40 7.49 8.43 9.49 | 4.82 4.56 4.37 4.82 5.03
1 | 0.6 | 15.26 25.69 13.45 15.20 30.90 | 12.29 14.75 10.85 12.27 16.99 | 8.33 8.52 7.44 8.32 9.58 | 4.74 4.52 4.30 4.74 4.99
1 | 0.7 | 14.27 25.62 12.60 14.19 30.79 | 11.65 14.61 10.29 11.63 16.84 | 8.04 8.41 7.20 8.04 9.45 | 4.72 4.54 4.29 4.72 5.01
1 | 0.8 | 13.39 25.63 11.87 13.32 31.03 | 11.26 14.69 9.95 11.25 17.02 | 7.94 8.43 7.08 7.94 9.52 | 4.66 4.53 4.24 4.67 4.99
1 | 0.9 | 12.63 25.79 11.23 12.57 31.35 | 10.85 14.75 9.63 10.83 17.01 | 7.76 8.44 6.94 7.75 9.49 | 4.63 4.52 4.20 4.63 4.99
5 | 0.1 | 27.14 27.15 22.97 27.11 32.69 | 16.84 15.57 14.64 16.83 17.97 | 9.74 8.90 8.68 9.74 9.99 | 5.17 4.73 4.69 5.17 5.21
5 | 0.2 | 23.82 27.27 20.27 23.77 33.15 | 15.66 15.36 13.67 15.65 17.71 | 9.43 8.80 8.39 9.42 9.90 | 5.10 4.71 4.63 5.10 5.19
5 | 0.3 | 21.09 27.08 18.14 20.99 32.94 | 14.81 15.36 12.95 14.79 17.74 | 9.19 8.79 8.20 9.19 9.89 | 5.07 4.71 4.59 5.07 5.21
5 | 0.4 | 19.24 27.45 16.78 19.13 32.75 | 14.20 15.53 12.43 14.18 17.99 | 9.04 8.82 8.04 9.03 9.94 | 5.00 4.70 4.54 5.00 5.17
5 | 0.5 | 17.55 27.27 15.33 17.41 32.80 | 13.58 15.64 11.98 13.54 17.99 | 8.84 8.83 7.88 8.83 9.95 | 5.01 4.72 4.53 5.01 5.23
5 | 0.6 | 16.31 27.23 14.29 16.17 33.00 | 12.96 15.55 11.41 12.93 17.98 | 8.63 8.83 7.71 8.62 9.93 | 4.90 4.68 4.45 4.90 5.16
5 | 0.7 | 15.13 27.03 13.34 14.97 32.86 | 12.33 15.43 10.88 12.28 17.79 | 8.43 8.78 7.52 8.42 9.91 | 4.90 4.71 4.44 4.90 5.21
5 | 0.8 | 14.23 27.42 12.55 14.02 33.10 | 11.84 15.51 10.52 11.78 17.80 | 8.29 8.81 7.40 8.27 9.94 | 4.86 4.71 4.41 4.86 5.20
5 | 0.9 | 13.41 27.37 11.90 13.22 33.00 | 11.37 15.48 10.08 11.30 17.84 | 8.12 8.84 7.27 8.10 9.94 | 4.85 4.75 4.41 4.85 5.23
Table 10. Average MSE values of the estimators when p = 3 and ρ = 0.90.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 3.78 2.75 2.59 3.79 4.08 | 2.89 2.17 2.10 2.89 3.00 | 2.12 1.68 1.66 2.12 2.15 | 1.57 1.35 1.34 1.57 1.58
1 | 0.2 | 3.61 2.78 2.49 3.60 4.16 | 2.76 2.16 2.04 2.76 2.98 | 2.10 1.70 1.66 2.10 2.17 | 1.57 1.35 1.34 1.57 1.59
1 | 0.3 | 3.41 2.80 2.39 3.40 4.18 | 2.73 2.18 2.00 2.73 3.04 | 2.07 1.70 1.63 2.07 2.18 | 1.55 1.35 1.33 1.56 1.58
1 | 0.4 | 3.21 2.77 2.26 3.20 4.15 | 2.61 2.16 1.94 2.61 3.01 | 2.05 1.70 1.62 2.05 2.19 | 1.55 1.36 1.33 1.55 1.58
1 | 0.5 | 3.03 2.76 2.17 3.02 4.14 | 2.55 2.18 1.91 2.55 3.03 | 2.00 1.67 1.58 2.00 2.16 | 1.54 1.35 1.32 1.54 1.58
1 | 0.6 | 2.90 2.79 2.11 2.88 4.13 | 2.49 2.18 1.87 2.49 3.04 | 1.99 1.70 1.58 1.99 2.19 | 1.53 1.35 1.32 1.54 1.58
1 | 0.7 | 2.77 2.78 2.04 2.75 4.10 | 2.40 2.18 1.83 2.40 3.01 | 1.94 1.68 1.55 1.95 2.16 | 1.52 1.34 1.31 1.52 1.58
1 | 0.8 | 2.70 2.79 1.98 2.68 4.19 | 2.35 2.16 1.78 2.34 3.03 | 1.92 1.69 1.55 1.93 2.17 | 1.52 1.35 1.31 1.52 1.58
1 | 0.9 | 2.62 2.81 1.94 2.60 4.21 | 2.29 2.17 1.75 2.28 3.01 | 1.91 1.71 1.54 1.91 2.19 | 1.50 1.35 1.30 1.51 1.58
5 | 0.1 | 4.99 3.47 3.24 4.98 5.41 | 3.58 2.57 2.49 3.58 3.74 | 2.56 1.94 1.91 2.56 2.61 | 1.76 1.45 1.45 1.77 1.78
5 | 0.2 | 4.63 3.42 3.02 4.61 5.40 | 3.50 2.59 2.43 3.49 3.80 | 2.51 1.93 1.88 2.51 2.61 | 1.76 1.46 1.44 1.76 1.78
5 | 0.3 | 4.43 3.48 2.91 4.39 5.51 | 3.32 2.58 2.35 3.32 3.75 | 2.46 1.92 1.84 2.46 2.60 | 1.74 1.46 1.44 1.74 1.78
5 | 0.4 | 4.11 3.45 2.75 4.06 5.41 | 3.27 2.58 2.29 3.26 3.82 | 2.41 1.93 1.82 2.41 2.59 | 1.73 1.46 1.43 1.73 1.78
5 | 0.5 | 3.90 3.44 2.64 3.85 5.44 | 3.16 2.56 2.21 3.14 3.81 | 2.37 1.90 1.78 2.37 2.59 | 1.73 1.46 1.42 1.73 1.79
5 | 0.6 | 3.69 3.46 2.55 3.63 5.40 | 3.03 2.57 2.16 3.01 3.78 | 2.36 1.94 1.78 2.35 2.62 | 1.73 1.46 1.42 1.73 1.79
5 | 0.7 | 3.59 3.48 2.46 3.51 5.49 | 2.98 2.57 2.12 2.95 3.83 | 2.32 1.94 1.77 2.31 2.62 | 1.72 1.47 1.43 1.72 1.80
5 | 0.8 | 3.41 3.46 2.38 3.32 5.42 | 2.88 2.63 2.11 2.84 3.78 | 2.27 1.93 1.74 2.27 2.61 | 1.70 1.47 1.42 1.70 1.78
5 | 0.9 | 3.26 3.48 2.31 3.17 5.39 | 2.83 2.58 2.04 2.79 3.83 | 2.22 1.91 1.70 2.21 2.58 | 1.70 1.47 1.41 1.70 1.80
Table 11. Average MSE values of the estimators when p = 3 and ρ = 0.95.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 5.58 3.90 3.48 5.58 6.46 | 4.14 2.90 2.72 4.14 4.48 | 2.91 2.12 2.06 2.91 3.02 | 1.97 1.56 1.55 1.97 1.99
1 | 0.2 | 4.94 3.87 3.12 4.92 6.44 | 3.91 2.90 2.57 3.90 4.55 | 2.86 2.17 2.05 2.86 3.08 | 1.94 1.56 1.53 1.94 1.99
1 | 0.3 | 4.47 3.89 2.89 4.44 6.47 | 3.65 2.96 2.50 3.64 4.53 | 2.71 2.11 1.95 2.70 3.01 | 1.93 1.56 1.52 1.93 2.00
1 | 0.4 | 4.05 3.90 2.68 4.00 6.43 | 3.43 2.92 2.35 3.41 4.50 | 2.64 2.13 1.92 2.64 3.03 | 1.90 1.56 1.51 1.90 1.99
1 | 0.5 | 3.78 3.88 2.52 3.72 6.49 | 3.26 2.91 2.25 3.23 4.52 | 2.57 2.14 1.88 2.56 3.03 | 1.87 1.56 1.49 1.87 1.99
1 | 0.6 | 3.50 3.97 2.41 3.44 6.44 | 3.11 2.93 2.18 3.08 4.53 | 2.51 2.13 1.84 2.50 3.04 | 1.85 1.56 1.48 1.85 1.99
1 | 0.7 | 3.32 3.90 2.27 3.24 6.53 | 2.93 2.91 2.09 2.89 4.46 | 2.44 2.16 1.82 2.43 3.04 | 1.83 1.55 1.46 1.83 1.98
1 | 0.8 | 3.09 3.94 2.17 3.02 6.47 | 2.84 2.94 2.03 2.80 4.54 | 2.40 2.15 1.78 2.39 3.07 | 1.83 1.56 1.46 1.83 2.01
1 | 0.9 | 2.95 3.92 2.09 2.85 6.49 | 2.72 2.93 1.97 2.68 4.52 | 2.30 2.13 1.74 2.28 3.00 | 1.81 1.56 1.45 1.80 2.00
5 | 0.1 | 7.36 5.03 4.43 7.32 8.56 | 5.32 3.58 3.35 5.31 5.78 | 3.62 2.50 2.42 3.62 3.77 | 2.31 1.75 1.73 2.31 2.35
5 | 0.2 | 6.56 5.00 3.96 6.48 8.66 | 4.95 3.58 3.15 4.92 5.79 | 3.46 2.50 2.35 3.45 3.74 | 2.29 1.76 1.72 2.29 2.36
5 | 0.3 | 5.89 5.02 3.63 5.80 8.68 | 4.63 3.60 2.98 4.59 5.80 | 3.38 2.52 2.30 3.37 3.80 | 2.26 1.75 1.69 2.26 2.36
5 | 0.4 | 5.33 4.98 3.33 5.19 8.62 | 4.30 3.54 2.80 4.25 5.75 | 3.25 2.52 2.24 3.23 3.77 | 2.24 1.77 1.70 2.24 2.37
5 | 0.5 | 4.90 5.01 3.14 4.73 8.59 | 4.12 3.60 2.71 4.03 5.79 | 3.15 2.51 2.18 3.13 3.78 | 2.19 1.74 1.66 2.19 2.35
5 | 0.6 | 4.61 5.01 2.93 4.41 8.72 | 3.86 3.52 2.55 3.77 5.73 | 3.08 2.55 2.15 3.04 3.80 | 2.18 1.76 1.66 2.17 2.36
5 | 0.7 | 4.21 5.02 2.78 4.03 8.62 | 3.68 3.55 2.47 3.58 5.73 | 2.99 2.54 2.11 2.94 3.79 | 2.14 1.75 1.64 2.13 2.35
5 | 0.8 | 3.99 5.00 2.64 3.79 8.64 | 3.55 3.59 2.41 3.44 5.79 | 2.93 2.56 2.07 2.89 3.83 | 2.15 1.76 1.62 2.13 2.39
5 | 0.9 | 3.81 5.00 2.54 3.58 8.70 | 3.45 3.62 2.34 3.33 5.91 | 2.79 2.47 1.97 2.75 3.75 | 2.10 1.75 1.60 2.09 2.36
Table 12. Average MSE values of the estimators when p = 3 and ρ = 0.99.

σ² | k | n = 20: URR RLS CR ORR OLS | n = 30: URR RLS CR ORR OLS | n = 50: URR RLS CR ORR OLS | n = 100: URR RLS CR ORR OLS
1 | 0.1 | 12.95 11.80 6.98 12.83 23.54 | 10.52 8.01 5.76 10.46 15.28 | 7.47 5.13 4.29 7.46 9.19 | 4.62 3.05 2.84 4.62 5.05
1 | 0.2 | 8.93 12.19 5.12 8.67 23.71 | 7.94 8.19 4.64 7.81 15.18 | 6.39 5.29 3.81 6.34 9.31 | 4.26 3.13 2.72 4.24 5.05
1 | 0.3 | 6.60 11.83 3.91 6.30 23.57 | 6.40 8.09 3.78 6.20 15.21 | 5.52 5.12 3.28 5.43 9.30 | 3.93 3.09 2.53 3.91 5.01
1 | 0.4 | 5.29 12.10 3.32 4.97 23.20 | 5.41 8.28 3.33 5.18 15.44 | 4.81 5.15 2.96 4.71 9.22 | 3.68 3.04 2.37 3.64 5.01
1 | 0.5 | 4.45 11.85 2.84 4.09 23.52 | 4.62 8.25 2.93 4.39 15.34 | 4.35 5.27 2.74 4.23 9.32 | 3.49 3.12 2.31 3.45 5.08
1 | 0.6 | 3.93 12.17 2.57 3.56 23.66 | 4.05 8.39 2.67 3.82 15.36 | 3.93 5.21 2.52 3.80 9.32 | 3.31 3.10 2.20 3.25 5.08
1 | 0.7 | 3.46 12.08 2.34 3.08 23.33 | 3.64 8.06 2.41 3.37 15.28 | 3.57 5.17 2.35 3.41 9.18 | 3.12 3.07 2.09 3.06 5.05
1 | 0.8 | 3.15 12.01 2.16 2.76 23.50 | 3.27 8.14 2.23 3.02 15.11 | 3.35 5.21 2.23 3.19 9.37 | 2.98 3.09 2.03 2.91 5.07
1 | 0.9 | 2.86 11.84 2.04 2.51 23.17 | 3.00 7.82 2.07 2.74 14.92 | 3.11 5.14 2.11 2.94 9.22 | 2.85 3.06 1.95 2.76 5.04
5 | 0.1 | 18.04 16.48 9.55 17.71 33.03 | 14.36 10.83 7.70 14.20 20.90 | 10.24 6.76 5.60 10.19 12.64 | 5.90 3.81 3.52 5.90 6.48
5 | 0.2 | 12.28 16.33 6.76 11.72 33.07 | 10.94 11.05 6.09 10.64 21.31 | 8.53 6.90 4.89 8.43 12.61 | 5.47 3.82 3.29 5.43 6.53
5 | 0.3 | 9.09 16.18 5.21 8.41 32.43 | 8.69 10.85 4.93 8.29 21.01 | 7.23 6.77 4.23 7.03 12.33 | 5.07 3.81 3.07 5.01 6.53
5 | 0.4 | 7.25 16.19 4.29 6.50 31.81 | 7.17 10.74 4.16 6.66 20.69 | 6.32 6.70 3.75 6.11 12.38 | 4.70 3.82 2.92 4.62 6.50
5 | 0.5 | 6.09 16.49 3.79 5.33 32.40 | 6.28 11.04 3.74 5.71 21.13 | 5.68 6.62 3.39 5.38 12.31 | 4.43 3.85 2.77 4.34 6.55
5 | 0.6 | 5.34 16.18 3.35 4.52 32.82 | 5.44 10.90 3.36 4.89 20.94 | 5.16 6.76 3.18 4.83 12.30 | 4.21 3.83 2.62 4.07 6.58
5 | 0.7 | 4.72 16.68 3.07 3.91 32.59 | 4.88 10.91 3.07 4.28 20.83 | 4.76 6.88 2.96 4.42 12.66 | 4.02 3.92 2.57 3.89 6.69
5 | 0.8 | 4.25 16.09 2.74 3.40 32.01 | 4.42 10.97 2.84 3.80 20.76 | 4.35 6.77 2.75 3.97 12.32 | 3.78 3.89 2.44 3.62 6.60
5 | 0.9 | 4.13 6.89 2.63 3.72 12.63 | 4.10 11.20 2.64 3.47 21.21 | 4.13 6.89 2.63 3.72 12.63 | 3.59 3.82 2.34 3.41 6.55
Table 13. Average MSE values of the estimators when p = 5 and ρ = 0.90.
σ²  k | n = 20 | n = 30 | n = 50 | n = 100
(columns within each n-block: URR, RLS, CR, ORR, OLS)
1  0.1 | 9.91 8.28 7.42 9.90 11.18 | 6.78 5.51 5.22 6.78 7.20 | 4.43 3.64 3.55 4.43 4.55 | 2.72 2.31 2.29 2.72 2.74
1  0.2 | 8.89 8.18 6.69 8.87 11.09 | 6.51 5.62 5.05 6.50 7.31 | 4.33 3.65 3.48 4.33 4.57 | 2.66 2.30 2.26 2.66 2.71
1  0.3 | 8.12 8.23 6.20 8.10 11.12 | 6.12 5.55 4.76 6.11 7.23 | 4.18 3.62 3.37 4.18 4.53 | 2.65 2.30 2.24 2.65 2.73
1  0.4 | 7.44 8.22 5.74 7.41 11.05 | 5.78 5.49 4.51 5.77 7.17 | 4.07 3.61 3.28 4.06 4.51 | 2.63 2.30 2.22 2.64 2.74
1  0.5 | 6.86 8.17 5.32 6.84 11.05 | 5.56 5.58 4.38 5.55 7.24 | 4.00 3.65 3.25 4.00 4.56 | 2.58 2.29 2.19 2.58 2.71
1  0.6 | 6.47 8.20 5.02 6.43 11.14 | 5.33 5.57 4.19 5.31 7.27 | 3.92 3.66 3.18 3.92 4.57 | 2.57 2.30 2.19 2.57 2.72
1  0.7 | 6.10 8.37 4.81 6.05 11.19 | 5.10 5.56 4.04 5.08 7.24 | 3.82 3.65 3.11 3.82 4.55 | 2.55 2.29 2.16 2.55 2.73
1  0.8 | 5.69 8.23 4.50 5.65 11.07 | 4.92 5.57 3.90 4.90 7.27 | 3.73 3.62 3.03 3.72 4.54 | 2.51 2.28 2.14 2.51 2.71
1  0.9 | 5.40 8.24 4.30 5.35 11.12 | 4.73 5.59 3.79 4.70 7.23 | 3.65 3.63 2.98 3.64 4.53 | 2.51 2.29 2.12 2.51 2.73
5  0.1 | 11.34 9.47 8.48 11.32 12.80 | 7.66 6.20 5.86 7.66 8.15 | 4.97 4.05 3.94 4.97 5.12 | 2.92 2.46 2.44 2.92 2.96
5  0.2 | 10.33 9.54 7.73 10.29 12.99 | 7.27 6.23 5.60 7.26 8.16 | 4.82 4.02 3.82 4.82 5.09 | 2.91 2.48 2.44 2.91 2.97
5  0.3 | 9.31 9.47 7.06 9.24 12.81 | 6.95 6.24 5.33 6.93 8.26 | 4.70 4.02 3.73 4.69 5.10 | 2.88 2.49 2.42 2.88 2.97
5  0.4 | 8.60 9.51 6.57 8.50 12.85 | 6.53 6.23 5.08 6.51 8.15 | 4.62 4.07 3.68 4.61 5.14 | 2.83 2.45 2.36 2.83 2.95
5  0.5 | 7.96 9.48 6.11 7.84 12.84 | 6.32 6.35 4.95 6.28 8.26 | 4.48 4.04 3.58 4.48 5.12 | 2.82 2.49 2.38 2.82 2.97
5  0.6 | 7.43 9.36 5.69 7.30 12.89 | 6.04 6.30 4.72 5.98 8.24 | 4.37 4.06 3.51 4.36 5.11 | 2.79 2.47 2.34 2.79 2.96
5  0.7 | 7.02 9.46 5.42 6.87 12.90 | 5.73 6.25 4.49 5.68 8.17 | 4.24 4.04 3.42 4.23 5.08 | 2.76 2.47 2.32 2.76 2.96
5  0.8 | 6.57 9.49 5.14 6.42 12.81 | 5.57 6.27 4.36 5.50 8.26 | 4.21 4.08 3.39 4.19 5.15 | 2.75 2.48 2.31 2.75 2.97
5  0.9 | 6.25 9.43 4.88 6.08 12.82 | 5.33 6.30 4.23 5.26 8.20 | 4.07 4.05 3.29 4.05 5.09 | 2.71 2.48 2.29 2.71 2.96
Table 14. Average MSE values of the estimators when p = 5 and ρ = 0.95.
σ²  k | n = 20 | n = 30 | n = 50 | n = 100
(columns within each n-block: URR, RLS, CR, ORR, OLS)
1  0.1 | 15.67 14.03 11.55 15.65 19.37 | 10.98 9.14 8.25 10.97 12.27 | 6.93 5.65 5.38 6.93 7.31 | 3.97 3.28 3.22 3.97 4.05
1  0.2 | 13.10 13.93 9.82 13.07 19.16 | 9.87 9.23 7.57 9.85 12.18 | 6.65 5.71 5.18 6.64 7.38 | 3.86 3.25 3.13 3.86 4.02
1  0.3 | 11.35 13.96 8.61 11.28 19.14 | 9.03 9.12 6.89 8.99 12.18 | 6.35 5.72 4.96 6.34 7.39 | 3.82 3.28 3.10 3.81 4.06
1  0.4 | 10.09 14.01 7.68 9.99 19.26 | 8.29 9.19 6.43 8.25 12.14 | 6.00 5.69 4.73 5.99 7.32 | 3.75 3.28 3.04 3.74 4.06
1  0.5 | 9.08 14.26 7.00 8.98 19.56 | 7.77 9.21 6.00 7.72 12.27 | 5.78 5.69 4.54 5.76 7.37 | 3.66 3.27 2.98 3.66 4.05
1  0.6 | 8.24 14.17 6.36 8.12 19.65 | 7.25 9.24 5.60 7.20 12.40 | 5.56 5.72 4.39 5.54 7.38 | 3.60 3.30 2.96 3.60 4.05
1  0.7 | 7.48 14.25 5.86 7.35 19.44 | 6.74 9.20 5.27 6.68 12.27 | 5.31 5.71 4.22 5.29 7.35 | 3.57 3.31 2.92 3.56 4.09
1  0.8 | 6.81 13.87 5.33 6.67 19.23 | 6.27 9.10 4.94 6.20 12.02 | 5.07 5.63 4.02 5.05 7.29 | 3.48 3.30 2.87 3.48 4.06
1  0.9 | 6.34 13.89 4.99 6.20 19.15 | 6.00 9.19 4.71 5.93 12.32 | 4.94 5.72 3.95 4.91 7.35 | 3.43 3.29 2.82 3.42 4.07
5  0.1 | 18.03 16.21 13.29 17.97 22.32 | 12.40 10.30 9.27 12.38 13.86 | 7.88 6.46 6.13 7.88 8.33 | 4.41 3.61 3.54 4.41 4.50
5  0.2 | 15.03 16.02 11.24 14.93 22.03 | 11.33 10.43 8.56 11.29 14.01 | 7.40 6.36 5.76 7.39 8.22 | 4.28 3.58 3.44 4.28 4.46
5  0.3 | 13.21 16.17 9.92 13.03 22.24 | 10.32 10.46 7.87 10.27 14.01 | 7.09 6.39 5.53 7.08 8.28 | 4.23 3.62 3.41 4.22 4.50
5  0.4 | 11.70 16.35 8.91 11.52 22.54 | 9.48 10.36 7.22 9.38 13.94 | 6.77 6.37 5.28 6.76 8.30 | 4.11 3.60 3.33 4.11 4.46
5  0.5 | 10.35 16.11 7.89 10.11 22.24 | 8.73 10.40 6.73 8.64 13.89 | 6.47 6.36 5.05 6.43 8.26 | 4.05 3.60 3.28 4.04 4.49
5  0.6 | 9.47 16.41 7.30 9.19 22.45 | 8.28 10.46 6.34 8.14 14.09 | 6.26 6.44 4.92 6.22 8.36 | 3.99 3.61 3.23 3.97 4.49
5  0.7 | 8.60 16.08 6.62 8.29 22.28 | 7.82 10.63 6.04 7.66 14.21 | 5.97 6.36 4.68 5.91 8.26 | 3.88 3.57 3.14 3.86 4.45
5  0.8 | 7.96 16.40 6.20 7.62 22.49 | 7.18 10.33 5.54 7.00 13.90 | 5.77 6.40 4.54 5.71 8.33 | 3.87 3.64 3.15 3.85 4.53
5  0.9 | 7.42 16.33 5.77 7.06 22.37 | 6.84 10.44 5.31 6.65 13.97 | 5.54 6.37 4.36 5.46 8.28 | 3.81 3.64 3.09 3.79 4.53
Table 15. Average MSE values of the estimators when p = 5 and ρ = 0.99.
σ²  k | n = 20 | n = 30 | n = 50 | n = 100
(columns within each n-block: URR, RLS, CR, ORR, OLS)
1  0.1 | 37.71 56.31 28.02 37.47 79.22 | 30.76 35.12 22.80 30.64 48.26 | 21.76 20.72 16.35 21.73 27.80 | 12.22 10.32 9.29 12.21 13.63
1  0.2 | 23.76 56.06 17.95 23.39 79.34 | 22.11 35.37 16.65 21.95 48.55 | 17.75 20.76 13.45 17.68 27.81 | 11.06 10.41 8.50 11.04 13.63
1  0.3 | 16.81 56.07 12.96 16.46 78.65 | 16.89 35.19 12.92 16.64 47.79 | 14.73 20.68 11.26 14.62 27.62 | 10.12 10.45 7.79 10.08 13.71
1  0.4 | 13.09 56.49 10.07 12.67 80.06 | 13.62 35.22 10.38 13.37 48.78 | 12.71 20.76 9.71 12.56 27.81 | 9.25 10.34 7.11 9.20 13.64
1  0.5 | 10.57 56.84 8.27 10.12 78.87 | 11.22 35.66 8.67 10.92 48.66 | 10.97 20.74 8.45 10.80 27.63 | 8.60 10.51 6.67 8.54 13.80
1  0.6 | 8.63 55.85 6.76 8.22 79.37 | 9.67 35.57 7.48 9.34 49.14 | 9.64 20.55 7.44 9.46 27.60 | 7.99 10.53 6.20 7.91 13.81
1  0.7 | 7.37 56.05 5.84 6.94 79.65 | 8.40 35.48 6.56 8.09 48.95 | 8.63 20.70 6.66 8.41 27.76 | 7.43 10.49 5.78 7.36 13.82
1  0.8 | 6.43 55.60 5.08 5.97 79.68 | 7.29 34.94 5.70 6.97 48.05 | 7.72 20.50 6.01 7.53 27.60 | 6.92 10.47 5.42 6.82 13.73
1  0.9 | 5.66 56.29 4.51 5.23 79.31 | 6.58 35.54 5.17 6.23 48.76 | 7.03 20.65 5.49 6.81 27.75 | 6.51 10.54 5.11 6.42 13.81
5  0.1 | 43.44 65.10 32.35 43.04 91.31 | 35.82 41.00 26.59 35.63 56.23 | 24.74 23.52 18.54 24.67 31.61 | 14.04 11.95 10.75 14.02 15.66
5  0.2 | 28.27 65.79 21.18 27.51 93.13 | 25.49 40.37 19.09 25.14 55.59 | 19.99 23.26 15.04 19.81 31.30 | 12.74 11.92 9.72 12.70 15.72
5  0.3 | 20.05 66.08 15.24 19.19 92.56 | 19.78 40.59 14.85 19.22 55.97 | 16.88 23.52 12.80 16.60 31.43 | 11.50 11.82 8.81 11.42 15.58
5  0.4 | 15.32 64.55 11.66 14.37 91.18 | 15.93 41.13 12.11 15.27 56.44 | 14.34 23.21 10.82 14.03 31.34 | 10.67 11.85 8.11 10.55 15.73
5  0.5 | 12.37 64.23 9.46 11.40 92.23 | 13.16 40.38 9.93 12.48 55.94 | 12.71 23.57 9.64 12.31 31.83 | 9.74 11.90 7.52 9.62 15.61
5  0.6 | 10.50 66.12 8.07 9.46 93.49 | 11.20 40.62 8.59 10.52 56.16 | 11.07 23.46 8.47 10.66 31.47 | 9.05 11.86 6.97 8.89 15.64
5  0.7 | 8.86 65.54 6.84 7.85 92.57 | 9.80 40.47 7.47 9.04 55.82 | 9.99 23.77 7.68 9.52 31.64 | 8.47 11.88 6.52 8.28 15.67
5  0.8 | 7.83 64.42 6.02 6.80 90.94 | 8.60 40.51 6.59 7.85 55.81 | 9.02 23.47 6.90 8.53 31.57 | 7.89 11.91 6.12 7.72 15.72
5  0.9 | 6.95 65.41 5.37 5.90 92.46 | 7.81 40.89 5.99 7.04 56.36 | 8.16 23.50 6.33 7.69 31.64 | 7.38 11.76 5.69 7.16 15.56
Table 16. Average MSE values of the estimators when p = 10 and ρ = 0.90.
σ²  k | n = 20 | n = 30 | n = 50 | n = 100
(columns within each n-block: URR, RLS, CR, ORR, OLS)
1  0.1 | 45.16 49.69 38.39 45.14 60.04 | 28.66 27.38 24.74 28.65 31.89 | 16.49 15.17 14.56 16.49 17.22 | 8.37 7.65 7.53 8.37 8.50
1  0.2 | 37.14 49.50 31.88 37.08 59.78 | 26.18 27.59 22.75 26.16 32.02 | 15.90 15.26 14.07 15.90 17.29 | 8.17 7.54 7.31 8.17 8.42
1  0.3 | 31.87 49.80 27.70 31.81 60.39 | 23.75 27.31 20.76 23.73 31.66 | 15.18 15.14 13.43 15.18 17.17 | 8.12 7.63 7.29 8.12 8.50
1  0.4 | 27.91 49.59 24.41 27.81 60.10 | 22.12 27.44 19.39 22.07 31.79 | 14.73 15.22 13.02 14.73 17.31 | 8.02 7.66 7.22 8.02 8.52
1  0.5 | 24.79 49.53 21.82 24.70 60.39 | 20.70 27.43 18.12 20.67 32.05 | 14.13 15.19 12.54 14.12 17.21 | 7.94 7.68 7.13 7.94 8.57
1  0.6 | 22.41 49.53 19.89 22.31 60.14 | 19.42 27.62 17.09 19.39 32.11 | 13.64 15.19 12.13 13.63 17.20 | 7.76 7.60 6.96 7.76 8.50
1  0.7 | 20.62 49.95 18.30 20.51 60.72 | 18.23 27.63 16.10 18.18 32.15 | 13.24 15.26 11.79 13.22 17.26 | 7.63 7.61 6.87 7.63 8.47
1  0.8 | 19.00 49.65 16.95 18.86 60.16 | 17.15 27.60 15.17 17.08 32.12 | 12.80 15.22 11.37 12.78 17.28 | 7.54 7.61 6.78 7.54 8.49
1  0.9 | 17.42 49.43 15.60 17.28 60.12 | 16.21 27.62 14.41 16.15 31.96 | 12.33 15.12 10.96 12.31 17.19 | 7.43 7.60 6.68 7.43 8.49
5  0.1 | 46.99 51.47 39.79 46.94 62.42 | 30.00 28.63 25.89 29.99 33.39 | 17.25 15.91 15.26 17.25 18.00 | 8.66 7.90 7.78 8.66 8.80
5  0.2 | 38.86 51.37 33.21 38.73 62.91 | 27.38 28.81 23.74 27.34 33.57 | 16.50 15.80 14.57 16.49 17.95 | 8.50 7.85 7.61 8.50 8.77
5  0.3 | 33.09 51.71 28.75 32.93 62.55 | 25.10 28.63 21.83 25.05 33.43 | 15.95 15.88 14.10 15.95 18.06 | 8.43 7.92 7.57 8.43 8.83
5  0.4 | 29.28 52.17 25.63 29.07 63.34 | 23.24 28.67 20.27 23.16 33.45 | 15.24 15.75 13.47 15.22 17.92 | 8.26 7.87 7.41 8.26 8.79
5  0.5 | 26.15 51.64 22.91 25.88 63.05 | 21.60 28.72 18.94 21.50 33.35 | 14.70 15.76 13.02 14.67 17.89 | 8.14 7.86 7.29 8.13 8.78
5  0.6 | 23.76 52.38 20.97 23.49 63.77 | 20.25 28.81 17.83 20.12 33.43 | 14.17 15.67 12.50 14.14 17.89 | 8.06 7.90 7.23 8.06 8.83
5  0.7 | 21.56 51.40 19.09 21.28 62.89 | 18.91 28.71 16.74 18.80 33.23 | 13.67 15.73 12.15 13.64 17.83 | 7.91 7.87 7.10 7.90 8.78
5  0.8 | 20.04 52.51 17.84 19.73 63.96 | 17.99 28.86 15.95 17.89 33.51 | 13.23 15.74 11.78 13.18 17.82 | 7.79 7.86 7.00 7.78 8.77
5  0.9 | 18.33 51.50 16.31 18.01 62.82 | 16.95 28.72 15.03 16.82 33.27 | 12.86 15.77 11.44 12.81 17.89 | 7.72 7.92 6.95 7.71 8.82
Table 17. Average MSE values of the estimators when p = 10 and ρ = 0.95.
σ²  k | n = 20 | n = 30 | n = 50 | n = 100
(columns within each n-block: URR, RLS, CR, ORR, OLS)
1  0.1 | 72.66 93.48 61.97 72.60 113.87 | 49.77 50.98 42.79 49.76 59.86 | 28.69 27.29 25.32 28.69 30.97 | 14.31 13.13 12.76 14.31 14.73
1  0.2 | 55.13 93.42 47.63 55.05 114.63 | 42.44 50.79 36.83 42.42 59.50 | 26.64 26.99 23.34 26.63 30.93 | 13.98 13.19 12.46 13.98 14.81
1  0.3 | 45.21 94.24 39.67 45.04 114.83 | 37.19 50.82 32.48 37.13 59.44 | 25.10 27.23 22.08 25.08 31.15 | 13.55 13.16 12.10 13.55 14.76
1  0.4 | 37.53 92.29 33.16 37.38 112.89 | 33.01 50.45 28.89 32.95 59.27 | 23.46 27.19 20.72 23.44 31.02 | 13.23 13.16 11.79 13.22 14.81
1  0.5 | 32.88 94.28 29.33 32.71 115.22 | 29.67 50.50 26.14 29.58 59.10 | 22.17 27.26 19.60 22.15 31.12 | 12.84 13.10 11.43 12.83 14.75
1  0.6 | 28.52 93.18 25.48 28.35 114.01 | 26.76 50.08 23.59 26.64 58.73 | 20.97 27.29 18.53 20.95 31.23 | 12.53 13.18 11.20 12.52 14.78
1  0.7 | 25.53 93.57 22.89 25.30 114.33 | 24.78 50.91 21.98 24.66 59.36 | 19.91 27.49 17.68 19.87 31.31 | 12.13 13.08 10.85 12.13 14.69
1  0.8 | 22.92 93.23 20.65 22.71 114.28 | 22.84 50.74 20.29 22.71 59.24 | 18.82 27.38 16.73 18.78 31.20 | 11.90 13.12 10.61 11.89 14.78
1  0.9 | 20.74 92.92 18.76 20.50 113.72 | 21.11 50.73 18.83 21.00 59.29 | 18.01 27.39 15.98 17.97 31.34 | 11.63 13.13 10.38 11.62 14.78
5  0.1 | 76.94 98.62 65.33 76.80 120.37 | 51.52 52.91 44.43 51.48 61.91 | 29.76 28.07 26.05 29.75 32.13 | 14.80 13.53 13.14 14.80 15.24
5  0.2 | 57.36 95.99 49.31 57.00 118.61 | 44.10 52.60 38.13 44.01 61.88 | 27.73 28.25 24.47 27.72 32.15 | 14.45 13.58 12.83 14.45 15.31
5  0.3 | 46.90 97.62 41.28 46.59 118.68 | 38.95 53.32 34.03 38.84 62.33 | 26.15 28.49 23.11 26.11 32.39 | 14.04 13.63 12.53 14.03 15.29
5  0.4 | 39.58 96.85 34.79 39.12 119.50 | 34.32 52.74 30.05 34.15 61.81 | 24.36 28.10 21.40 24.31 32.22 | 13.68 13.66 12.23 13.67 15.31
5  0.5 | 34.35 98.16 30.45 33.91 120.80 | 31.11 52.83 27.34 30.92 61.92 | 22.79 28.00 20.15 22.72 31.93 | 13.29 13.58 11.85 13.27 15.27
5  0.6 | 29.87 97.39 26.70 29.39 119.09 | 28.02 52.59 24.83 27.79 61.23 | 21.77 28.30 19.25 21.69 32.36 | 12.96 13.57 11.54 12.94 15.28
5  0.7 | 26.84 96.83 24.01 26.35 119.18 | 25.73 52.43 22.77 25.48 61.52 | 20.56 28.24 18.20 20.46 32.26 | 12.64 13.56 11.25 12.62 15.29
5  0.8 | 24.23 97.37 21.72 23.70 119.35 | 23.94 52.78 21.24 23.67 61.71 | 19.61 28.35 17.34 19.52 32.46 | 12.48 13.74 11.11 12.45 15.48
5  0.9 | 22.10 98.88 19.90 21.58 120.99 | 22.00 52.58 19.53 21.72 61.61 | 18.71 28.50 16.60 18.60 32.52 | 12.07 13.65 10.78 12.05 15.36
Table 18. Average MSE values of the estimators when p = 10 and ρ = 0.99.
σ²  k | n = 20 | n = 30 | n = 50 | n = 100
(columns within each n-block: URR, RLS, CR, ORR, OLS)
1  0.1 | 154.64 411.91 135.42 154.22 509.28 | 135.19 216.24 117.57 135.05 255.52 | 95.73 113.37 83.70 95.65 130.69 | 51.82 51.80 45.59 51.81 58.98
1  0.2 | 90.09 408.52 80.64 89.56 510.27 | 90.33 218.33 79.75 90.09 257.38 | 74.54 113.33 65.50 74.42 130.68 | 46.51 52.38 41.00 46.48 59.63
1  0.3 | 62.19 413.49 56.19 61.64 516.98 | 66.20 216.03 58.75 65.89 255.45 | 60.80 113.92 53.44 60.64 131.92 | 41.69 52.52 36.87 41.63 59.61
1  0.4 | 46.27 411.57 42.12 45.74 514.11 | 51.86 218.87 46.30 51.52 258.77 | 50.52 114.09 44.85 50.36 131.27 | 37.45 52.49 33.30 37.42 59.39
1  0.5 | 36.45 413.55 33.23 35.89 512.91 | 41.89 218.91 37.62 41.53 258.17 | 43.05 114.72 38.36 42.85 131.61 | 33.71 51.84 29.93 33.63 58.72
1  0.6 | 29.30 410.94 26.83 28.74 509.52 | 34.84 219.53 31.38 34.43 258.04 | 37.04 113.75 32.96 36.82 131.22 | 31.16 52.31 27.63 31.07 59.46
1  0.7 | 24.53 414.69 22.47 23.95 517.42 | 29.37 218.13 26.52 28.96 257.10 | 32.16 113.15 28.70 31.91 130.24 | 28.41 52.16 25.26 28.32 59.11
1  0.8 | 20.74 420.51 19.07 20.18 523.58 | 25.37 218.49 22.92 24.94 257.46 | 28.39 113.29 25.38 28.17 130.45 | 26.59 52.77 23.62 26.48 59.93
1  0.9 | 17.95 414.00 16.45 17.39 511.75 | 22.14 218.77 20.10 21.75 257.77 | 25.51 113.29 22.82 25.22 130.27 | 24.55 52.46 21.81 24.43 59.68
5  0.1 | 163.66 435.76 143.85 162.79 542.52 | 140.67 225.46 121.96 140.37 267.56 | 99.51 118.38 87.34 99.37 135.82 | 54.21 54.45 47.94 54.19 61.69
5  0.2 | 95.23 435.57 85.19 94.06 540.05 | 94.33 226.30 83.09 93.77 267.45 | 77.79 117.76 68.17 77.56 136.00 | 48.37 54.71 42.85 48.30 61.91
5  0.3 | 65.36 431.19 58.94 64.19 537.49 | 69.43 227.47 61.73 68.79 268.05 | 63.31 118.64 56.03 63.04 136.53 | 43.38 54.70 38.46 43.32 61.97
5  0.4 | 48.96 436.83 44.38 47.71 540.27 | 54.27 227.42 48.42 53.50 267.50 | 53.08 119.55 46.97 52.71 137.74 | 39.09 54.56 34.70 38.97 61.80
5  0.5 | 38.72 434.25 35.29 37.55 541.54 | 44.37 229.49 39.63 43.46 270.56 | 44.73 117.91 39.68 44.26 135.63 | 35.42 54.38 31.41 35.25 61.73
5  0.6 | 31.46 430.99 28.73 30.19 533.64 | 36.37 225.54 32.65 35.47 266.01 | 38.95 118.90 34.61 38.43 136.58 | 32.36 54.40 28.76 32.16 61.61
5  0.7 | 26.23 431.52 23.83 24.91 535.41 | 31.17 227.29 28.03 30.23 267.62 | 34.07 119.42 30.36 33.54 137.08 | 29.79 54.37 26.45 29.58 61.74
5  0.8 | 22.26 426.29 20.35 21.05 529.59 | 26.94 228.09 24.27 26.01 268.79 | 30.20 119.13 26.95 29.60 136.69 | 27.55 54.60 24.54 27.31 61.89
5  0.9 | 19.29 433.28 17.68 18.07 540.39 | 23.50 227.72 21.17 22.54 267.84 | 26.95 118.35 23.97 26.33 136.62 | 25.58 54.63 22.76 25.29 61.93
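To indicate how average MSE values of this kind can be generated, the following minimal Python sketch simulates one cell of the tables above. It is an illustration under stated assumptions rather than the authors' code: the correlated-regressor device (a common construction in ridge simulation studies), the choice of β as the unit-norm eigenvector of X′X belonging to its largest eigenvalue, and the prior guess J ~ N(β, (σ²/k)I) for the unbiased ridge estimator of Crouse et al. (1995) are all assumptions here, and the RLS and CR estimators are omitted because their restrictions and convex form are defined earlier in the paper.

```python
import numpy as np

rng = np.random.default_rng(2025)

def simulate_avg_mse(n=50, p=3, rho=0.95, sigma2=1.0, k=0.5, reps=2000):
    """Average squared estimation error over Monte Carlo replications
    for OLS, ordinary ridge (ORR), and unbiased ridge (URR)."""
    # Correlated regressors built from a shared latent component, so that
    # the pairwise correlation is governed by rho (an assumed design).
    z = rng.standard_normal((n, p + 1))
    X = np.sqrt(1.0 - rho**2) * z[:, :p] + rho * z[:, [p]]
    Z = X.T @ X
    # Assumed choice of the true coefficient vector: the unit-norm
    # eigenvector of X'X corresponding to its largest eigenvalue.
    _, eigvec = np.linalg.eigh(Z)
    beta = eigvec[:, -1]
    A = Z + k * np.eye(p)          # ridge-augmented cross-product matrix

    se_ols = se_orr = se_urr = 0.0
    for _ in range(reps):          # X held fixed; errors redrawn each time
        y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
        b_ols = np.linalg.solve(Z, X.T @ y)
        b_orr = np.linalg.solve(A, X.T @ y)
        # URR of Crouse et al. (1995): prior guess J with E(J) = beta,
        # drawn here as J ~ N(beta, (sigma2/k) I) -- one standard choice.
        J = beta + rng.normal(scale=np.sqrt(sigma2 / k), size=p)
        b_urr = np.linalg.solve(A, X.T @ y + k * J)
        se_ols += np.sum((b_ols - beta) ** 2)
        se_orr += np.sum((b_orr - beta) ** 2)
        se_urr += np.sum((b_urr - beta) ** 2)
    return {name: s / reps for name, s in
            [("OLS", se_ols), ("ORR", se_orr), ("URR", se_urr)]}

print(simulate_avg_mse())
```

Under this design the qualitative pattern of the tables reappears: the ridge-type estimators dominate OLS, and the gap widens as ρ grows or n shrinks.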
Table 19. Total national research and development expenditures by country.
Year | Y | X1 | X2 | X3 | X4
1972 | 2.3 | 1.9 | 2.2 | 1.9 | 3.7
1975 | 2.2 | 1.8 | 2.2 | 2.0 | 3.8
1979 | 2.2 | 1.8 | 2.4 | 2.1 | 3.6
1980 | 2.3 | 1.8 | 2.4 | 2.2 | 3.8
1981 | 2.4 | 2.0 | 2.5 | 2.3 | 3.8
1982 | 2.5 | 2.1 | 2.6 | 2.4 | 3.7
1983 | 2.6 | 2.1 | 2.6 | 2.6 | 3.8
1984 | 2.6 | 2.2 | 2.6 | 2.6 | 4.0
1985 | 2.7 | 2.3 | 2.8 | 2.8 | 3.7
1986 | 2.7 | 2.3 | 2.7 | 2.8 | 3.8
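For reference, the data of Table 19 can be assembled directly into arrays. The short sketch below also reports the condition number of X′X, which documents the multicollinearity motivating the comparison in Table 20, together with the OLS and ordinary ridge fits at an illustrative k = 0.1. Any centring or standardisation the authors may have applied before computing Table 20, and the URR/RLS/CR fits themselves, are not reproduced here.

```python
import numpy as np

# Total national R&D expenditure data (Table 19); rows are the years
# 1972, 1975, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 1986.
Y = np.array([2.3, 2.2, 2.2, 2.3, 2.4, 2.5, 2.6, 2.6, 2.7, 2.7])
X = np.column_stack([
    [1.9, 1.8, 1.8, 1.8, 2.0, 2.1, 2.1, 2.2, 2.3, 2.3],  # X1
    [2.2, 2.2, 2.4, 2.4, 2.5, 2.6, 2.6, 2.6, 2.8, 2.7],  # X2
    [1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.6, 2.6, 2.8, 2.8],  # X3
    [3.7, 3.8, 3.6, 3.8, 3.8, 3.7, 3.8, 4.0, 3.7, 3.8],  # X4
])

# A large condition number of X'X flags the multicollinearity that
# motivates the ridge-type estimators compared in Table 20.
XtX = X.T @ X
print("condition number of X'X:", np.linalg.cond(XtX))

# OLS and ordinary ridge fits; k = 0.1 is an illustrative value only.
b_ols = np.linalg.solve(XtX, X.T @ Y)
k = 0.1
b_orr = np.linalg.solve(XtX + k * np.eye(4), X.T @ Y)
print("OLS:", b_ols.round(4))
print("ORR:", b_orr.round(4))
```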
Table 20. MSE values of the estimators for the total national research and development dataset.
k | URR | RLS | CR | ORR | OLS
0.1 | 0.6041 | 0.6030 | 0.6028 | 0.5820 | 0.6015
0.2 | 0.5763 | – | 0.5715 | 0.5863 | –
0.3 | 0.6575 | – | 0.5858 | 0.5897 | –
0.4 | 0.8031 | – | 0.6143 | 0.5924 | –
0.5 | 0.5975 | – | 0.5637 | 0.5945 | –
0.6 | 0.6088 | – | 0.5675 | 0.5963 | –
0.7 | 0.6129 | – | 0.5639 | 0.5979 | –
0.8 | 0.5974 | – | 0.5821 | 0.5992 | –
0.9 | 0.8736 | – | 0.5725 | 0.6004 | –
Note: RLS and OLS do not involve the biasing parameter k, so their values are reported once, in the k = 0.1 row.
Table 21. Acetylene data.
Conversion of n-Heptane to Acetylene | Reactor Temperature (°C) | Ratio of H2 to n-Heptane (Mole Ratio) | Contact Time (s)
49.0 | 1300 | 7.5 | 0.0120
50.2 | 1300 | 9.0 | 0.0120
50.5 | 1300 | 11.0 | 0.0115
48.5 | 1300 | 13.5 | 0.0130
47.5 | 1300 | 17.0 | 0.0135
44.5 | 1300 | 23.0 | 0.0120
28.0 | 1200 | 5.3 | 0.0400
31.5 | 1200 | 7.5 | 0.0380
34.5 | 1200 | 11.0 | 0.0320
35.0 | 1200 | 13.5 | 0.0260
38.0 | 1200 | 17.0 | 0.0340
38.5 | 1200 | 23.0 | 0.0410
15.0 | 1100 | 5.3 | 0.0840
17.0 | 1100 | 7.5 | 0.0980
20.5 | 1100 | 11.0 | 0.0920
29.5 | 1100 | 17.0 | 0.0860
Table 22. MSE of the estimators for acetylene data.
k | URR | RLS | CR | ORR | OLS
0.1 | 409.9669 | 1.3241 | 0.5430 | 466.8437 | 134.0185
0.2 | 162.0619 | – | 0.8144 | 142.6091 | –
0.3 | 95.9013 | – | 0.4493 | 70.1113 | –
0.4 | 1.4846 | – | 0.9954 | 42.4851 | –
0.5 | 28.5389 | – | 0.6572 | 28.9713 | –
0.6 | 37.4327 | – | 0.2782 | 21.3140 | –
0.7 | 20.5880 | – | 0.6457 | 16.5318 | –
0.8 | 5.2249 | – | 0.6160 | 13.3309 | –
0.9 | 3.0775 | – | 0.5124 | 11.0745 | –
Note: RLS and OLS do not involve the biasing parameter k, so their values are reported once, in the k = 0.1 row.
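As with Table 20, the entries of Table 22 are estimated MSE values evaluated on real data. The sketch below illustrates the kind of computation involved for the ordinary ridge column, using the classical estimated scalar MSE decomposition with OLS plug-ins. The design matrix is an assumption (standardised linear terms only, although analyses of the acetylene data often add quadratic and interaction terms), so the printed values are illustrative rather than a reproduction of Table 22.

```python
import numpy as np

# Acetylene data (Table 21).
conv  = np.array([49, 50.2, 50.5, 48.5, 47.5, 44.5, 28, 31.5,
                  34.5, 35, 38, 38.5, 15, 17, 20.5, 29.5])
temp  = np.array([1300]*6 + [1200]*6 + [1100]*4, dtype=float)
ratio = np.array([7.5, 9, 11, 13.5, 17, 23, 5.3, 7.5,
                  11, 13.5, 17, 23, 5.3, 7.5, 11, 17])
ctime = np.array([0.012, 0.012, 0.0115, 0.013, 0.0135, 0.012, 0.04, 0.038,
                  0.032, 0.026, 0.034, 0.041, 0.084, 0.098, 0.092, 0.086])

# Assumed design: standardised linear terms with a centred response.
X = np.column_stack([temp, ratio, ctime])
X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
y = conv - conv.mean()

def ridge_mse_estimate(X, y, k):
    """Estimated scalar MSE of the ordinary ridge estimator at k, via
    sigma^2 * sum_j lam_j/(lam_j + k)^2  +  k^2 * b'(Z + kI)^{-2} b,
    with sigma^2 and beta replaced by their OLS estimates. A sketch of
    the kind of computation behind Table 22, not the authors' exact code."""
    n, p = X.shape
    Z = X.T @ X
    b = np.linalg.solve(Z, X.T @ y)               # OLS estimate
    s2 = np.sum((y - X @ b) ** 2) / (n - p)       # residual variance
    lam = np.linalg.eigvalsh(Z)
    u = np.linalg.solve(Z + k * np.eye(p), b)     # (Z + kI)^{-1} b
    return s2 * np.sum(lam / (lam + k) ** 2) + k**2 * (u @ u)

for k in np.arange(0.1, 1.0, 0.1):
    print(f"k = {k:.1f}  estimated ridge MSE = {ridge_mse_estimate(X, y, k):.4f}")
```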
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
