Article

Comparison of Risk Ratios of Shrinkage Estimators in High Dimensions

by Abdenour Hamdaoui 1,2,*, Waleed Almutiry 3, Mekki Terbeche 1,4 and Abdelkader Benkhaled 5,6

1 Department of Mathematics, University of Science and Technology of Oran-Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria
2 Laboratory of Statistics and Random Modelisations (LSMA), University Abou Bekr Belkaid, Tlemcen 13000, Algeria
3 Department of Mathematics, College of Science and Arts in Ar Rass, Qassim University, Buraydah 52571, Saudi Arabia
4 Laboratory of Analysis and Application of Radiation (LAAR), University of Science and Technology of Oran-Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria
5 Department of Biology, University of Mascara, Mascara 29000, Algeria
6 Laboratory of Stochastic Models, Statistics and Applications, University Tahar Moulay, Saida 20000, Algeria
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(1), 52; https://doi.org/10.3390/math10010052
Submission received: 16 September 2021 / Revised: 15 December 2021 / Accepted: 22 December 2021 / Published: 24 December 2021
(This article belongs to the Section Probability and Statistics)

Abstract:
In this paper, we analyze the risk ratios of several shrinkage estimators using a balanced loss function. The James–Stein estimator is one of a group of shrinkage estimators that has been proposed in the existing literature. For these estimators, sufficient criteria for minimaxity are established, and the minimaxity of the James–Stein estimator is deduced. We demonstrate that the minimaxity of the James–Stein estimator remains valid even when the dimension of the parameter space tends to infinity. It is shown that the positive-part version of the James–Stein estimator substantially outperforms the James–Stein estimator, and we address the asymptotic behavior of their risk ratios to the maximum likelihood estimator (MLE) when the dimension of the parameter space is infinite. Finally, a simulation study is carried out to evaluate the performance of the considered estimators.

1. Introduction

When it comes to estimating the mean parameter of a multivariate normal distribution, the minimax technique has attracted the greatest attention and development in research thus far. Following Stein [1], it is well known that the maximum likelihood estimator (MLE) is minimax and admissible when the dimension of the parameter space is less than or equal to two. On the other hand, when the dimension is three or greater, the MLE retains the minimax property but is no longer admissible. Improved estimators have therefore been obtained through the development of shrinkage estimators that reduce the risk associated with the quadratic loss function. The efficient outperformance of these shrinkage estimators, compared to the MLE, has been demonstrated in various studies; for example, see Baranchik [2], Efron and Morris [3,4], Stein [5], Casella and Hwang [6], Berger [7], Arnold [8], and Gruber [9]. Stein [1] and James and Stein [10] have also provided specific suggestions for improvement. In this paper, we discuss adaptive shrinkage estimation strategies and show how they may be generated by shrinking a raw estimate. In addition, we report our investigation of the characteristics of several shrinkage estimators in the context of risk.
There have been various recent studies focused on shrinkage estimation, including those of Norouzirad and Arashi [11], Özbay and Kaçıranlar [12], Kashani et al. [13], and Benkhaled and Hamdaoui [14]. Shrinkage estimators for multivariate normal means in the Bayesian framework have been examined by Hamdaoui et al. [15], in order to determine their minimaxity and the limits of their risk ratios. For the model $X \sim N_d(\theta, \sigma^2 I_d)$, the authors used the prior law $\theta \sim N_d(\upsilon, \tau^2 I_d)$, in which the parameters $\upsilon$ and $\tau^2$ are known but the parameter $\sigma^2$ is unknown. They developed two modified estimators: a Bayes estimator $\delta_B$ and an empirical Bayes estimator $\delta_{EB}$. When the sample size $n$ and the dimension of the parameter space $d$ are finite, they found that the estimators $\delta_B$ and $\delta_{EB}$ are minimax under the quadratic loss. When $n$ and $d$ approach infinity, the risk ratios of these estimators to the MLE $X$ were examined.
Improvement of estimators can also be achieved by incorporating a balanced loss function. Zellner [16] presented a balanced loss function intended to reflect two criteria; namely, goodness of fit and precision of estimation. We refer to Farsipour and Asgharzadeh [17], Karamikabir et al. [18], and Selahattin and Issam [19] for further information on the use of this loss function. Using generalized Bayes shrinkage estimators of the location parameter of a spherical distribution subject to a balance-type loss, Karamikabir and Afshari [20] determined minimax and admissible estimators of the location parameter.
In this paper, we use the model $Y \sim N_d(\mu, \tau^2 I_d)$, in which the parameter $\tau^2$ is known. Our main purpose is to estimate the unknown parameter $\mu$ using shrinkage estimators derived from the MLE. We use the risk associated with the balanced loss function to compare any two estimators. Under this loss, we compute the risk function of estimators of the form $T_\alpha(\|Y\|^2) = \left(1 - \frac{\alpha}{\|Y\|^2}\right) Y$, where the real constant $\alpha$ may depend on $d$ and $\|\cdot\|$ denotes the usual norm in $\mathbb{R}^d$. In addition, we investigate the minimaxity of these estimators and conclude that the James–Stein estimator shares this feature. We also extend this work to study the limit of the risk ratio of the James–Stein estimator to the MLE when $d$ tends to infinity. We discuss the positive-part version of the James–Stein estimator and the asymptotic behavior of its risk ratio to the MLE in scenarios where the dimension of the parameter space $d$ is either finite or goes to infinity. We demonstrate that, when $d$ is finite, the positive-part version outperforms the James–Stein estimator.
The remainder of this paper is structured as follows: In Section 2, we present our model and recall some published findings that are useful in proving the main results. In Section 3, we show the minimaxity property and the limit of the risk ratios of the James–Stein estimator and its positive-part version, regarding the dimension of the parameter space. We end this paper with the results of a simulation study, which illustrate the performance of the considered estimators.

2. Model Presentations

In this section, we recall that, if $U$ is a multivariate Gaussian random vector $N_d(\mu, \tau^2 I_d)$ in $\mathbb{R}^d$, then $\frac{\|U\|^2}{\tau^2} \sim \chi_d^2\left(\frac{\|\mu\|^2}{\tau^2}\right)$, where $\chi_d^2\left(\frac{\|\mu\|^2}{\tau^2}\right)$ denotes the non-central chi-square distribution with $d$ degrees of freedom and non-centrality parameter $\frac{\|\mu\|^2}{\tau^2}$.
Suppose that $Z$ is a random vector which follows a multivariate normal distribution $N_d(\mu, \tau^2 I_d)$, where the parameter $\mu$ is unknown. For any estimator $T$ of the parameter $\mu$, the balanced squared error loss function of $T$ can then be defined as

$$L_\omega(T, \mu) = \omega \|T - T_0\|^2 + (1 - \omega) \|T - \mu\|^2, \tag{1}$$

where $T$ is the given estimator being compared to the target estimator $T_0$ of $\mu$, $\omega$ is the weight given to the closeness of $T$ to $T_0$, and $1 - \omega$ is the relative weight given to the precision of $T$ as an estimator of $\mu$. The risk function associated with $L_\omega(T, \mu)$ is then defined as

$$R_\omega(T, \mu) = \omega E\left(\|T - T_0\|^2\right) + (1 - \omega) E\left(\|T - \mu\|^2\right). \tag{2}$$
Now, considering the model $Z \sim N_d(\mu, \tau^2 I_d)$, in which $\tau^2$ is known, we focus on estimating the unknown mean parameter $\mu$ using shrinkage estimators under the balanced loss function defined in Equation (1). For simplicity, we only consider the scenario $\tau^2 = 1$, as any model of the type $Z \sim N_d(\mu, \tau^2 I_d)$ may be converted to a model $Y \sim N_d(\mu, I_d)$ by a change of variables. Specifically, we investigate the estimation of the unknown parameter $\mu$ when $Y \sim N_d(\mu, I_d)$. In this case, following Benkhaled et al. [21], it is clear that the MLE is $T_0 := Y$, and its risk function is $R_\omega(T_0, \mu) = (1 - \omega) d$. Therefore, any estimator that dominates $T_0$ is likewise minimax for $d \ge 3$.
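These definitions are easy to exercise numerically. The sketch below is our own illustration, not part of the paper; the choices of $d$, $\omega$, and $\mu$ are arbitrary. It estimates the balanced risk of an arbitrary estimator by Monte Carlo with NumPy and confirms that the risk of the MLE $T_0 = Y$ is $(1 - \omega) d$:

```python
import numpy as np

def balanced_risk(estimator, mu, omega, n_rep=200_000, seed=0):
    """Monte Carlo estimate of R(T) = omega E||T - T0||^2 + (1-omega) E||T - mu||^2,
    with target estimator T0 = Y (the MLE)."""
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n_rep, mu.size)) + mu    # Y ~ N_d(mu, I_d)
    T = estimator(Y)
    fit = np.sum((T - Y) ** 2, axis=1)                # closeness to T0 = Y
    prec = np.sum((T - mu) ** 2, axis=1)              # closeness to mu
    return omega * fit.mean() + (1 - omega) * prec.mean()

d, omega = 10, 0.3
mu = np.full(d, 1.5)
mle_risk = balanced_risk(lambda Y: Y, mu=mu, omega=omega)
print(mle_risk)   # close to (1 - omega) d = 7
```

For the MLE the fit term vanishes, so only the precision term $E\|Y - \mu\|^2 = d$ contributes.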
For the proofs given in the next section, we need the result of Lemma 1 given in Stein [5], which states that

$$E\left((Z - \upsilon) f(Z)\right) = E\left(f'(Z)\right), \tag{3}$$

where $Z$ is a random variable that follows $N(\upsilon, 1)$, $f'$ is the derivative of $f$, and $E|f'(Z)| < +\infty$.
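As a quick numerical check of Stein's identity (our own illustration; the test function $f(z) = z^3$ and the value of $\upsilon$ are arbitrary choices):

```python
import numpy as np

# Check E[(Z - v) f(Z)] = E[f'(Z)] for Z ~ N(v, 1) with f(z) = z^3, f'(z) = 3 z^2.
rng = np.random.default_rng(1)
v = 0.7
Z = rng.normal(v, 1.0, size=1_000_000)
lhs = np.mean((Z - v) * Z**3)    # E[(Z - v) f(Z)]
rhs = np.mean(3 * Z**2)          # E[f'(Z)] = 3 (1 + v^2) in closed form
print(lhs, rhs)
```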

3. Main Results

3.1. General Class of James–Stein Estimator

3.1.1. Risk Function and Minimaxity

Here, we study the minimaxity of the estimators defined by

$$T_\alpha(\|Y\|^2) = \left(1 - \frac{\alpha}{\|Y\|^2}\right) Y, \tag{4}$$

where $\alpha$ is a real parameter.
Proposition 1.
Under the balanced loss function $L_\omega$ given in Equation (1), the risk function of the estimator $T_\alpha(\|Y\|^2)$ is

$$R_\omega(T_\alpha(\|Y\|^2), \mu) = d(1 - \omega) + \alpha\left(\alpha - 2(1 - \omega)(d - 2)\right) E\left(\frac{1}{\|Y\|^2}\right). \tag{5}$$
Proof of Proposition 1.
From Equations (2) and (4), we have

$$\begin{aligned} R_\omega(T_\alpha(\|Y\|^2), \mu) &= \omega E\left(\left\|T_\alpha(\|Y\|^2) - T_0\right\|^2\right) + (1 - \omega) E\left(\left\|T_\alpha(\|Y\|^2) - \mu\right\|^2\right) \\ &= \omega E\left(\left\|\frac{\alpha}{\|Y\|^2} Y\right\|^2\right) + (1 - \omega) E\left(\left\|Y - \mu - \frac{\alpha}{\|Y\|^2} Y\right\|^2\right) \\ &= \alpha^2 E\left(\frac{1}{\|Y\|^2}\right) + (1 - \omega) E\left(\|Y - \mu\|^2\right) - 2(1 - \omega)\alpha\, E\left(\left\langle Y - \mu,\ \frac{Y}{\|Y\|^2}\right\rangle\right). \end{aligned} \tag{6}$$
Using Equation (3), we obtain

$$E\left(\left\langle Y - \mu,\ \frac{Y}{\|Y\|^2}\right\rangle\right) = \sum_{j=1}^{d} E\left[(Y_j - \mu_j)\frac{Y_j}{\|Y\|^2}\right] = \sum_{j=1}^{d} E\left[\frac{\partial}{\partial Y_j}\left(\frac{Y_j}{\sum_{i=1}^{d} Y_i^2}\right)\right] = (d - 2)\, E\left(\frac{1}{\|Y\|^2}\right). \tag{7}$$
According to Equations (6) and (7), and since $E(\|Y - \mu\|^2) = d$, we obtain the desired result. □
Subsequently, from Equation (5), we can immediately deduce that a sufficient condition for the estimator $T_\alpha(\|Y\|^2)$ to dominate the MLE $Y$ is

$$0 \le \alpha \le 2(1 - \omega)(d - 2).$$
Due to the convexity of the risk function $R_\omega(T_\alpha(\|Y\|^2), \mu)$ in $\alpha$, the optimal value of $\alpha$ that minimizes this risk function is

$$\alpha = \hat{\alpha} = (1 - \omega)(d - 2).$$
By replacing $\alpha$ with $\hat{\alpha}$ in Equation (4), we then obtain the James–Stein estimator, defined as

$$T_{JS}(\|Y\|^2) = \left(1 - \frac{(1 - \omega)(d - 2)}{\|Y\|^2}\right) Y. \tag{8}$$

Additionally, its risk function related to the balanced loss function $L_\omega$ given in Equation (1) is given by

$$R_\omega(T_{JS}(\|Y\|^2), \mu) = d(1 - \omega) - (d - 2)^2 (1 - \omega)^2\, E\left(\frac{1}{\|Y\|^2}\right) \le R_\omega(T_0, \mu). \tag{9}$$
We can then deduce that the James–Stein estimator $T_{JS}(\|Y\|^2)$ dominates the MLE; thus, $T_{JS}(\|Y\|^2)$ is minimax.
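The closed-form risk of the James–Stein estimator can be checked by simulation. The following sketch is our own; the values of $d$, $\omega$, and $\mu$ are arbitrary. It compares a Monte Carlo estimate of the balanced risk of $T_{JS}$ with the formula of Equation (9), using the same samples for both sides:

```python
import numpy as np

def t_js(Y, omega):
    """James-Stein estimator of Equation (8) under the balanced loss."""
    d = Y.shape[-1]
    shrink = 1 - (1 - omega) * (d - 2) / np.sum(Y**2, axis=-1, keepdims=True)
    return shrink * Y

rng = np.random.default_rng(2)
d, omega = 10, 0.2
mu = np.full(d, 1.0)                         # arbitrary mean vector for the check
Y = rng.standard_normal((400_000, d)) + mu
T = t_js(Y, omega)
risk_mc = (omega * np.sum((T - Y)**2, axis=1)
           + (1 - omega) * np.sum((T - mu)**2, axis=1)).mean()
e_inv = np.mean(1 / np.sum(Y**2, axis=1))    # Monte Carlo E(1/||Y||^2)
risk_formula = d * (1 - omega) - (d - 2)**2 * (1 - omega)**2 * e_inv
print(risk_mc, risk_formula)                 # both below the MLE risk (1-omega)d = 8
```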

3.1.2. Asymptotic Behavior of Risk Ratios of James–Stein Estimator

This section discusses the effectiveness of the James–Stein estimator, in terms of dominating the MLE under the balanced loss function when the dimension of the parameter space d goes to infinity.
Casella and Hwang [6] showed that the James–Stein estimator dominates the MLE under the quadratic loss function; that is, in the specific case $\omega = 0$ of the balanced loss defined by Equation (1).
Theorem 1.
Under the balanced loss function $L_\omega$ defined in Equation (1), if $\lim_{d \to +\infty} \frac{\|\mu\|^2}{d} = Q$ ($Q > 0$), we get

$$\lim_{d \to +\infty} \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} = \frac{\omega + Q}{1 + Q}.$$
Proof of Theorem 1.
From Lemma 1 of Casella and Hwang [6], for $d \ge 3$, we have

$$\frac{1}{d - 2 + \|\mu\|^2} \le E\left(\frac{1}{\|Y\|^2}\right) \le \frac{1}{d - 2}\left(\frac{d}{d + \|\mu\|^2}\right). \tag{10}$$
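As an aside (not part of the proof), these bounds are easy to verify by simulation; in the sketch below, $d = 10$ and $\|\mu\|^2 = 4$ are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of the bounds 1/(d-2+||mu||^2) <= E(1/||Y||^2) <= d/((d-2)(d+||mu||^2))
# for Y ~ N_d(mu, I_d).
rng = np.random.default_rng(6)
d, lam = 10, 4.0                       # lam = ||mu||^2
mu = np.zeros(d)
mu[0] = np.sqrt(lam)
Y = rng.standard_normal((500_000, d)) + mu
e_inv = np.mean(1.0 / np.sum(Y**2, axis=1))
lower = 1.0 / (d - 2 + lam)            # lower bound
upper = d / ((d - 2) * (d + lam))      # upper bound
print(lower, e_inv, upper)
```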
Using Equations (9) and (10), we obtain

$$1 - \frac{(1 - \omega)(d - 2)}{d + \|\mu\|^2} \le \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} \le 1 - \frac{(1 - \omega)(d - 2)^2}{d(d - 2 + \|\mu\|^2)}.$$
By passing to the limit, namely as $d$ tends to infinity and under the condition $\lim_{d \to +\infty} \frac{\|\mu\|^2}{d} = Q$ ($Q > 0$), we get

$$1 - \lim_{d \to +\infty} \frac{(1 - \omega)(d - 2)}{d + \|\mu\|^2} \le \lim_{d \to +\infty} \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} \le 1 - \lim_{d \to +\infty} \frac{(1 - \omega)(d - 2)^2}{d(d - 2 + \|\mu\|^2)},$$

and then, dividing each numerator and denominator by $d$,

$$1 - \lim_{d \to +\infty} \frac{(1 - \omega)\frac{d - 2}{d}}{1 + \frac{\|\mu\|^2}{d}} \le \lim_{d \to +\infty} \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} \le 1 - \lim_{d \to +\infty} \frac{(1 - \omega)\left(\frac{d - 2}{d}\right)^2}{\frac{d - 2}{d} + \frac{\|\mu\|^2}{d}}.$$
Thus,

$$\frac{\omega + Q}{1 + Q} = 1 - \frac{1 - \omega}{1 + Q} \le \lim_{d \to +\infty} \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} \le 1 - \frac{1 - \omega}{1 + Q} = \frac{\omega + Q}{1 + Q}.$$
Therefore,

$$\lim_{d \to +\infty} \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} = \frac{\omega + Q}{1 + Q} < 1,$$

as $0 \le \omega < 1$. This means that, even as $d$ tends to infinity, the James–Stein estimator $T_{JS}(\|Y\|^2)$ remains superior to the MLE $T_0$. As a result, the minimaxity of the James–Stein estimator $T_{JS}(\|Y\|^2)$ is preserved. □
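Theorem 1 can be illustrated numerically. In the sketch below (our own; the grid of dimensions, $\omega = 0.3$, and $Q = 2$ are arbitrary choices), the Monte Carlo risk ratio approaches $(\omega + Q)/(1 + Q)$ as $d$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)
omega, Q = 0.3, 2.0
limit = (omega + Q) / (1 + Q)
ratios = {}
for d in (10, 100, 1000):
    n = 2_000_000 // d                      # keep the sample matrix a constant size
    mu = np.full(d, np.sqrt(Q))             # so that ||mu||^2 = Q d exactly
    Y = rng.standard_normal((n, d)) + mu
    shrink = 1 - (1 - omega) * (d - 2) / np.sum(Y**2, axis=1, keepdims=True)
    T = shrink * Y
    risk = (omega * np.sum((T - Y)**2, axis=1)
            + (1 - omega) * np.sum((T - mu)**2, axis=1)).mean()
    ratios[d] = risk / ((1 - omega) * d)    # divide by the MLE risk (1-omega)d
    print(d, round(ratios[d], 4), "limit:", round(limit, 4))
```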

3.2. The Positive-Part Version of the James–Stein Estimator

In this section, we study the superiority of the positive-part version of the James–Stein estimator over the James–Stein estimator, and the limit of the risk ratio of the positive-part version to the MLE when the dimension of the parameter space $d$ tends to infinity. The positive-part version of the James–Stein estimator is given by

$$T_{JS}^{+}(\|Y\|^2) = \left(1 - \frac{(1 - \omega)(d - 2)}{\|Y\|^2}\right)^{+} Y, \tag{11}$$

where

$$\left(1 - \frac{(1 - \omega)(d - 2)}{\|Y\|^2}\right)^{+} = \max\left(0,\ 1 - \frac{(1 - \omega)(d - 2)}{\|Y\|^2}\right) = \left(1 - \frac{(1 - \omega)(d - 2)}{\|Y\|^2}\right) I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \le 1},$$

with $I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \le 1}$ denoting the indicator function of the set $\left\{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \le 1\right\}$.
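In code, the positive part amounts to truncating the shrinkage factor at zero. The sketch below (our own illustration; the sample vector is an arbitrary choice) also shows the pathology it fixes: when $\|Y\|^2 < (1 - \omega)(d - 2)$, plain James–Stein reverses the sign of every component of $Y$, while the positive-part version returns the zero vector:

```python
import numpy as np

def t_js_plus(Y, omega):
    """Positive-part James-Stein estimator of Equation (11): the shrinkage
    factor (1 - (1-omega)(d-2)/||Y||^2) is truncated at zero."""
    d = Y.shape[-1]
    shrink = 1 - (1 - omega) * (d - 2) / np.sum(Y**2, axis=-1, keepdims=True)
    return np.maximum(shrink, 0.0) * Y

Y = np.full((1, 6), 0.1)        # ||Y||^2 = 0.06, far below (d - 2) = 4
d, omega = 6, 0.0
js = (1 - (1 - omega) * (d - 2) / np.sum(Y**2)) * Y
print(js[0, 0])                 # negative: plain James-Stein reversed the sign
print(t_js_plus(Y, omega)[0])   # all zeros: the positive part truncates instead
```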

3.2.1. Comparison of Risk Functions of the Positive-Part Version of the James–Stein Estimator and the James–Stein Estimator

Proposition 2.
Under the balanced loss function $L_\omega$ defined in Equation (1), the positive-part version of the James–Stein estimator $T_{JS}^{+}$ defined in Equation (11) dominates the James–Stein estimator $T_{JS}$ given in Equation (8).
Proof of Proposition 2.
We have
$$R_\omega(T_{JS}^{+}(\|Y\|^2), \mu) = \omega E\left(\left\|T_{JS}^{+}(\|Y\|^2) - T_0\right\|^2\right) + (1 - \omega) E\left(\left\|T_{JS}^{+}(\|Y\|^2) - \mu\right\|^2\right). \tag{12}$$
Baranchik [2] has shown that, under the quadratic loss function (i.e., in the case where $\omega = 0$),

$$E\left(\left\|T_{JS}^{+}(\|Y\|^2) - \mu\right\|^2\right) \le E\left(\left\|T_{JS}(\|Y\|^2) - \mu\right\|^2\right) \quad \text{for any } \mu \in \mathbb{R}^d. \tag{13}$$
If $\omega = 0$, the positive-part version $T_{JS}^{+}(\|Y\|^2)$ thus dominates the James–Stein estimator $T_{JS}(\|Y\|^2)$. Hence, using Equations (12) and (13), a sufficient condition for $T_{JS}^{+}(\|Y\|^2)$ to dominate $T_{JS}(\|Y\|^2)$ under the balanced loss function (i.e., for $0 \le \omega < 1$) is

$$E\left(\left\|T_{JS}^{+}(\|Y\|^2) - T_0\right\|^2\right) \le E\left(\left\|T_{JS}(\|Y\|^2) - T_0\right\|^2\right).$$
We have
$$\begin{aligned} E\left(\left\|T_{JS}^{+}(\|Y\|^2) - T_0\right\|^2\right) &= E\left(\left\|T_{JS}^{+}(\|Y\|^2) - T_{JS}(\|Y\|^2) + T_{JS}(\|Y\|^2) - Y\right\|^2\right) \\ &= E\left(\left\|T_{JS}^{+}(\|Y\|^2) - T_{JS}(\|Y\|^2)\right\|^2\right) + E\left(\left\|T_{JS}(\|Y\|^2) - Y\right\|^2\right) \\ &\quad + 2\, E\left(\left\langle T_{JS}^{+}(\|Y\|^2) - T_{JS}(\|Y\|^2),\ T_{JS}(\|Y\|^2) - Y \right\rangle\right) \\ &= E\left[\left(\frac{(1 - \omega)(d - 2)}{\|Y\|^2} - 1\right)^2 \|Y\|^2\, I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right] + E\left(\left\|T_{JS}(\|Y\|^2) - Y\right\|^2\right) \\ &\quad - 2(1 - \omega)(d - 2)\, E\left[\left(\frac{(1 - \omega)(d - 2)}{\|Y\|^2} - 1\right) I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right]. \end{aligned}$$
Subsequently,
$$\begin{aligned} E\left(\left\|T_{JS}^{+}(\|Y\|^2) - T_0\right\|^2\right) - E\left(\left\|T_{JS}(\|Y\|^2) - T_0\right\|^2\right) &= E\left[\left(\frac{(1 - \omega)(d - 2)}{\|Y\|^2} - 1\right)^2 \|Y\|^2\, I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right] \\ &\quad - 2(1 - \omega)(d - 2)\, E\left[\left(\frac{(1 - \omega)(d - 2)}{\|Y\|^2} - 1\right) I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right] \\ &= E\left[\frac{1}{\|Y\|^2}\left(\|Y\|^2 - (1 - \omega)(d - 2)\right)\left(\|Y\|^2 + (1 - \omega)(d - 2)\right) I_{\|Y\|^2 - (1 - \omega)(d - 2) \le 0}\right] \le 0. \end{aligned}$$
Thus, $T_{JS}^{+}(\|Y\|^2)$ dominates $T_{JS}(\|Y\|^2)$ for any $0 \le \omega < 1$. □
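Proposition 2 can be checked by Monte Carlo. In the sketch below (our own; $d = 8$, $\omega = 0.3$, and the values of $\|\mu\|^2$ are arbitrary), the estimated balanced risk of $T_{JS}^{+}$ never exceeds that of $T_{JS}$:

```python
import numpy as np

rng = np.random.default_rng(4)
d, omega, n = 8, 0.3, 300_000

def both_risks(mu_norm_sq):
    """Balanced-loss risks of (T_JS, T_JS+) at a mean with ||mu||^2 = mu_norm_sq,
    estimated on the same Monte Carlo sample."""
    mu = np.zeros(d)
    mu[0] = np.sqrt(mu_norm_sq)
    Y = rng.standard_normal((n, d)) + mu
    shrink = 1 - (1 - omega) * (d - 2) / np.sum(Y**2, axis=1, keepdims=True)
    risks = []
    for T in (shrink * Y, np.maximum(shrink, 0.0) * Y):   # T_JS, then T_JS+
        risks.append((omega * np.sum((T - Y)**2, axis=1)
                      + (1 - omega) * np.sum((T - mu)**2, axis=1)).mean())
    return risks

pairs = {lam: both_risks(lam) for lam in (0.0, 1.0, 9.0)}
for lam, (r_js, r_plus) in pairs.items():
    print(f"lambda={lam}: JS {r_js:.4f}  JS+ {r_plus:.4f}")
```

The improvement is largest near $\mu = 0$ and fades as $\|\mu\|^2$ grows, in line with the simulation results of Section 4.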

3.2.2. Limit of Risk Ratio of the Positive-Part Version of the James–Stein Estimator to the MLE

Theorem 2.
Under the balanced loss function $L_\omega$ defined in Equation (1), if $\lim_{d \to +\infty} \frac{\|\mu\|^2}{d} = Q$ ($Q > 0$), we get

$$\lim_{d \to +\infty} \frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} = \frac{\omega + Q}{1 + Q}.$$
Proof of Theorem 2.
As $T_{JS}^{+}(\|Y\|^2)$ dominates $T_{JS}(\|Y\|^2)$ for any $0 \le \omega < 1$, we have $R_\omega(T_{JS}^{+}(\|Y\|^2), \mu) \le R_\omega(T_{JS}(\|Y\|^2), \mu)$ for any $0 \le \omega < 1$, all $d \ge 3$, and all $\mu \in \mathbb{R}^d$. Hence,

$$\lim_{d \to +\infty} \frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} \le \lim_{d \to +\infty} \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} = \frac{\omega + Q}{1 + Q}. \tag{14}$$
To establish the stated limit, it therefore suffices to show that

$$\lim_{d \to +\infty} \frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} \ge \frac{\omega + Q}{1 + Q}.$$
Using the same techniques as used in the proof of Lemma 5 in Benmansour and Hamdaoui [22], based on Lemma 2.1 of Shao and Strawderman [23], we obtain

$$R_\omega(T_{JS}^{+}(\|Y\|^2), \mu) = R_\omega(T_{JS}(\|Y\|^2), \mu) + E\left[\left(\|Y\|^2 + \frac{(1 - \omega)^2 (d - 2)^2}{\|Y\|^2} - d\right) I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right]. \tag{15}$$
As

$$E\left(\|Y\|^2\, I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right) = \int_0^{+\infty} t\, I_{t \le (1 - \omega)(d - 2)}\ \chi_d^2(\|\mu\|^2)(t)\, dt,$$

where $\chi_d^2(\|\mu\|^2)(t)$ denotes the density of the non-central chi-square distribution with $d$ degrees of freedom and non-centrality parameter $\|\mu\|^2$, applying Equation (1.3) in Casella and Hwang [6] gives

$$\begin{aligned} E\left(\|Y\|^2\, I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right) &= d \int_0^{+\infty} I_{t \le (1 - \omega)(d - 2)}\ \chi_{d+2}^2(\|\mu\|^2)(t)\, dt + 2\|\mu\|^2 \int_0^{+\infty} I_{t \le (1 - \omega)(d - 2)}\ \chi_{d+4}^2(\|\mu\|^2)(t)\, dt \\ &= d\, P\!\left(\chi_{d+2}^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) + 2\|\mu\|^2\, P\!\left(\chi_{d+4}^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right). \end{aligned} \tag{16}$$
Moreover, since $\frac{1}{\|Y\|^2} \ge \frac{1}{(1 - \omega)(d - 2)}$ on the set $\left\{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1\right\}$,

$$E\left[\frac{(1 - \omega)^2 (d - 2)^2}{\|Y\|^2}\, I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right] \ge \frac{(1 - \omega)^2 (d - 2)^2}{(1 - \omega)(d - 2)}\, E\left(I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right) = (1 - \omega)(d - 2)\, P\!\left(\|Y\|^2 \le (1 - \omega)(d - 2)\right) = (1 - \omega)(d - 2)\, P\!\left(\chi_d^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right), \tag{17}$$
and

$$d\, E\left[I_{\frac{(1 - \omega)(d - 2)}{\|Y\|^2} \ge 1}\right] = d\, P\!\left(\chi_d^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right). \tag{18}$$
From Equations (15)–(18), we obtain

$$R_\omega(T_{JS}^{+}(\|Y\|^2), \mu) \ge R_\omega(T_{JS}(\|Y\|^2), \mu) + d\, P\!\left(\chi_{d+2}^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) + 2\|\mu\|^2\, P\!\left(\chi_{d+4}^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) + (1 - \omega)(d - 2)\, P\!\left(\chi_d^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) - d\, P\!\left(\chi_d^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right).$$
Then, dividing by $R_\omega(T_0, \mu) = (1 - \omega)d$,

$$\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} \ge \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} + \frac{1}{1 - \omega}\, P\!\left(\chi_{d+2}^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) + \frac{2\|\mu\|^2}{d(1 - \omega)}\, P\!\left(\chi_{d+4}^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) + \frac{d - 2}{d}\, P\!\left(\chi_d^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) - \frac{1}{1 - \omega}\, P\!\left(\chi_d^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right).$$
Using Equation (3.4) from Casella and Hwang [6], we have

$$\lim_{d \to +\infty} P\!\left(\chi_{d+2}^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) = \lim_{d \to +\infty} P\!\left(\chi_{d+4}^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) = \lim_{d \to +\infty} P\!\left(\chi_d^2(\|\mu\|^2) \le (1 - \omega)(d - 2)\right) = 0.$$
Subsequently, under the condition $\lim_{d \to +\infty} \frac{\|\mu\|^2}{d} = Q$, we obtain

$$\lim_{d \to +\infty} \frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} \ge \lim_{d \to +\infty} \frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} = \frac{\omega + Q}{1 + Q}. \tag{19}$$
According to Equations (14) and (19), we can deduce that

$$\lim_{d \to +\infty} \frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)} = \frac{\omega + Q}{1 + Q} < 1;$$

namely, the positive-part version of the James–Stein estimator $T_{JS}^{+}(\|Y\|^2)$ dominates the MLE, even as $d$ tends to infinity. Thus, the minimaxity property of the positive-part version of the James–Stein estimator $T_{JS}^{+}(\|Y\|^2)$ is stable when the dimension of the parameter space $d$ is in the neighborhood of infinity. □

4. Simulation Results

In this section, we discuss the values of the risk ratios of the James–Stein estimator $T_{JS}(\|Y\|^2)$ defined in Equation (8), whose risk function under the balanced loss function is given by Equation (9), and of the positive-part version of the James–Stein estimator $T_{JS}^{+}(\|Y\|^2)$ defined in Equation (11), whose risk function related to the balanced loss function is given by Equation (15), with respect to the MLE. We denote these risk ratios by $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$, respectively. First, we discuss the performance of both estimators as functions of $\lambda = \|\mu\|^2$, comparing their performance to the MLE for selected values of the parameters $d$ and $\omega$. We then examine their performance for various values of $d$, $\omega$, and $\lambda = \|\mu\|^2$.
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 show the curves of $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ as functions of $\lambda = \|\mu\|^2$, for selected values of the parameters $d$ and $\omega$. These curves are compared to the benchmark curve of the risk ratio of the MLE (the constant function equal to 1). We note that the values of both risk ratios were less than 1 for all selected values of $d$ and $\omega$. This indicates that the James–Stein estimator $T_{JS}(\|Y\|^2)$ and the positive-part version $T_{JS}^{+}(\|Y\|^2)$ are minimax. Furthermore, the estimators $T_{JS}(\|Y\|^2)$ and $T_{JS}^{+}(\|Y\|^2)$ represent a significant improvement over the MLE, especially when $\omega$ is close to zero and the dimension of the parameter space $d$ is high. Moreover, $T_{JS}^{+}(\|Y\|^2)$ performs better than $T_{JS}(\|Y\|^2)$ for the same values of $d$ and $\omega$: the risk ratio of $T_{JS}^{+}(\|Y\|^2)$ is visibly lower than that of $T_{JS}(\|Y\|^2)$ for most values of $\lambda$. The difference between these curves is significant for small values of $\lambda$ and negligible for larger values; the improvement of $T_{JS}^{+}(\|Y\|^2)$ over $T_{JS}(\|Y\|^2)$ is slight for large $\lambda$, and the curves of their risk ratios become almost identical once $\lambda$ exceeds a certain value. All results discussed through these figures are confirmed by the values of the risk ratios provided in Table 1, Table 2 and Table 3 for different values of $\lambda = \|\mu\|^2$, $d$, and $\omega$. The first entry of each cell in these tables is the ratio $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$, while the second entry is the ratio $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$.
The superiority of the James–Stein estimator $T_{JS}(\|Y\|^2)$ and the positive-part version $T_{JS}^{+}(\|Y\|^2)$ over the MLE is most pronounced for small values of both $\omega$ and $\lambda$. This improvement tends to decrease and approaches zero as $\omega$ and $\lambda$ increase. We also observe that, for fixed $\omega$, the improvement of both estimators grows with the dimension of the parameter space $d$. Finally, for each value of $\lambda = \|\mu\|^2$, the values of the two risk ratios tend to coincide for large values of $\omega$.
Hence, these results confirm the minimaxity of the James–Stein estimator and of its positive-part version, as well as the superiority of the positive-part version over the James–Stein estimator, for different values of $d$ and $\omega$.
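A minimal version of the simulation behind the figures and tables can be sketched as follows (our own reimplementation; the grid of $\lambda$ values and the Monte Carlo design are assumptions, so the numbers will only approximately match the tables):

```python
import numpy as np

rng = np.random.default_rng(5)

def risk_ratios(d, omega, lam, n=200_000):
    """Monte Carlo risk ratios of (T_JS, T_JS+) to the MLE at lambda = ||mu||^2."""
    mu = np.zeros(d)
    mu[0] = np.sqrt(lam)
    Y = rng.standard_normal((n, d)) + mu
    shrink = 1 - (1 - omega) * (d - 2) / np.sum(Y**2, axis=1, keepdims=True)
    out = []
    for T in (shrink * Y, np.maximum(shrink, 0.0) * Y):   # T_JS, then T_JS+
        r = (omega * np.sum((T - Y)**2, axis=1)
             + (1 - omega) * np.sum((T - mu)**2, axis=1)).mean()
        out.append(r / ((1 - omega) * d))                  # divide by the MLE risk
    return out

d, omega = 10, 0.1
table = {lam: risk_ratios(d, omega, lam) for lam in (1.0, 5.0, 20.0)}
for lam, (js, js_plus) in table.items():
    print(f"lambda={lam:5.1f}:  JS {js:.4f}   JS+ {js_plus:.4f}")
```

For $d = 10$ and $\omega = 0.1$, the output can be compared qualitatively with Table 2: both ratios stay below 1, the positive-part ratio is the smaller of the two, and both increase toward 1 as $\lambda$ grows.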

5. Conclusions

In this paper, we considered the estimation of the mean $\mu$ of a multivariate normal distribution $Y \sim N_d(\mu, I_d)$. We assessed the risk associated with the balanced loss function for comparing any two estimators. First, we established the minimaxity of the estimators defined by $T_\alpha(\|Y\|^2) = \left(1 - \frac{\alpha}{\|Y\|^2}\right) Y$, where $\alpha$ is a real parameter that may depend on the dimension of the parameter space $d$, and deduced the minimaxity of the James–Stein estimator $T_{JS}(\|Y\|^2)$. When the value of $d$ is in the neighborhood of infinity, we studied the asymptotic behavior of the risk ratio of the James–Stein estimator to the MLE. We showed that, under the condition $\lim_{d \to +\infty} \frac{\|\mu\|^2}{d} = Q > 0$, the risk ratio $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ tends to the value $\frac{\omega + Q}{1 + Q}$ ($< 1$); in other words, the James–Stein estimator $T_{JS}(\|Y\|^2)$ dominates the MLE, even when $d$ tends to infinity. Thus, the minimaxity property of the James–Stein estimator $T_{JS}(\|Y\|^2)$ remains stable, even if $d$ is in the neighborhood of infinity. Second, following the same steps as in the first part, we examined the minimaxity of the positive-part version of the James–Stein estimator $T_{JS}^{+}(\|Y\|^2)$ in the case where $d$ is finite. When $d$ tends to infinity, we obtained the same results as reported previously; namely, we showed that, under the condition $\lim_{d \to +\infty} \frac{\|\mu\|^2}{d} = Q > 0$, the risk ratio $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ also tends to $\frac{\omega + Q}{1 + Q}$ ($< 1$). Thus, we observed the stability of the minimaxity property of the positive-part version of the James–Stein estimator $T_{JS}^{+}(\|Y\|^2)$ when the dimension of the parameter space $d$ is in the neighborhood of infinity.
For further work, we plan to examine the general multivariate normal distribution $Y \sim N_d(\mu, \Sigma)$, where $\Sigma$ is an arbitrary unknown positive definite matrix. This work can also be extended to the Bayesian framework, as well as to the general case where the model has a spherically symmetric distribution.

Author Contributions

Conceptualization, A.H., W.A., M.T. and A.B.; methodology, A.H., W.A., M.T. and A.B.; formal analysis, A.H., W.A., M.T. and A.B.; writing—review and editing, A.H., W.A., M.T. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The corresponding author can provide the data sets utilized in this work upon reasonable request.

Acknowledgments

The authors are very grateful to the editor and the anonymous referees for their valuable suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Stein, C. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1956; pp. 197–206.
  2. Baranchik, A.J. Multiple Regression and Estimation of the Mean of a Multivariate Normal Distribution; Technical Report No. 51; Stanford University: Stanford, CA, USA, 1964.
  3. Efron, B.; Morris, C.N. Stein's estimation rule and its competitors: An empirical Bayes approach. J. Am. Stat. Assoc. 1973, 68, 117–130.
  4. Efron, B.; Morris, C.N. Data analysis using Stein's estimator and its generalizations. J. Am. Stat. Assoc. 1975, 70, 311–319.
  5. Stein, C. Estimation of the mean of a multivariate normal distribution. Ann. Stat. 1981, 9, 1135–1151.
  6. Casella, G.; Hwang, J.T. Limit expressions for the risk of James-Stein estimators. Can. J. Stat. 1982, 10, 305–309.
  7. Berger, J.O.; Strawderman, W.E. Choice of hierarchical priors: Admissibility in estimation of normal means. Ann. Stat. 1996, 24, 931–951.
  8. Arnold, F.S. The Theory of Linear Models and Multivariate Analysis; John Wiley and Sons: New York, NY, USA, 1981; pp. 159–179.
  9. Gruber, H.G.M. Improving Efficiency by Shrinkage: The James-Stein and Ridge Regression Estimators. In Statistics, Textbooks and Monographs, 1st ed.; Rochester Institute of Technology: Rochester, NY, USA, 1998; pp. 71–370.
  10. James, W.; Stein, C. Estimation with quadratic loss. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Los Angeles, CA, USA, 20 June–30 July 1960; pp. 361–379.
  11. Norouzirad, M.; Arashi, M. Preliminary test and Stein-type shrinkage ridge estimators in robust regression. Stat. Pap. 2019, 60, 1849–1882.
  12. Özbay, N.; Kaçıranlar, S. Risk performance of some shrinkage estimators. Commun. Stat. Simul. Comput. 2019, 50, 323–342.
  13. Kashani, M.; Rabiei, M.R.; Arashi, M. An integrated shrinkage strategy for improving efficiency in fuzzy regression modeling. Soft Comput. 2021, 25, 8095–8107.
  14. Benkhaled, A.; Hamdaoui, A. General classes of shrinkage estimators for the multivariate normal mean with unknown variance: Minimaxity and limit of risks ratios. Kragujev. J. Math. 2019, 46, 193–213.
  15. Hamdaoui, A.; Benkhaled, A.; Mezouar, N. Minimaxity and limits of risks ratios of shrinkage estimators of a multivariate normal mean in the Bayesian case. Stat. Optim. Inf. Comput. 2020, 8, 507–520.
  16. Zellner, A. Bayesian and non-Bayesian estimation using balanced loss functions. In Statistical Decision Theory and Methods; Berger, J.O., Gupta, S.S., Eds.; Springer: New York, NY, USA, 1994; pp. 337–390.
  17. Farsipour, N.S.; Asgharzadeh, A. Estimation of a normal mean relative to balanced loss functions. Stat. Pap. 2004, 45, 279–286.
  18. Karamikabir, H.; Afshari, M.; Arashi, M. Shrinkage estimation of non-negative mean vector with unknown covariance under balance loss. J. Inequal. Appl. 2018, 1, 331.
  19. Selahattin, K.; Issam, D. The optimal extended balanced loss function estimators. J. Comput. Appl. Math. 2019, 345, 86–98.
  20. Karamikabir, H.; Afshari, M. Generalized Bayesian shrinkage and wavelet estimation of location parameter for spherical distribution under balanced-type loss: Minimaxity and admissibility. J. Multivar. Anal. 2020, 177, 110–120.
  21. Benkhaled, A.; Terbeche, M.; Hamdaoui, A. Polynomials shrinkage estimators of a multivariate normal mean. Stat. Optim. Inf. Comput. 2021.
  22. Benmansour, D.; Hamdaoui, A. Limit of the ratio of risks of James-Stein estimators with unknown variance. Far East J. Stat. 2011, 36, 31–53.
  23. Shao, P.; Strawderman, W.E. Improving on the James-Stein positive-part estimator. Ann. Stat. 1994, 22, 1517–1539.
Figure 1. Graph of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ as functions of $\lambda = \|\mu\|^2$ for $d = 6$ and $\omega = 0.1$.
Figure 2. Graph of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ as functions of $\lambda = \|\mu\|^2$ for $d = 6$ and $\omega = 0.4$.
Figure 3. Graph of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ as functions of $\lambda = \|\mu\|^2$ for $d = 10$ and $\omega = 0.1$.
Figure 4. Graph of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ as functions of $\lambda = \|\mu\|^2$ for $d = 10$ and $\omega = 0.4$.
Figure 5. Graph of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ as functions of $\lambda = \|\mu\|^2$ for $d = 16$ and $\omega = 0.1$.
Figure 6. Graph of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ as functions of $\lambda = \|\mu\|^2$ for $d = 16$ and $\omega = 0.4$.
Table 1. Values of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ (first entry of each cell) and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ (second entry) for $d = 4$ and $\omega = 0.1, 0.2, 0.5, 0.7, 0.9$ at different values of $\lambda = \|\mu\|^2$.

| $\lambda$ | $\omega = 0.1$ | $\omega = 0.2$ | $\omega = 0.5$ | $\omega = 0.7$ | $\omega = 0.9$ |
|---|---|---|---|---|---|
| 1.2418 | 0.6648 / 0.5826 | 0.7020 / 0.6326 | 0.8138 / 0.7809 | 0.8882 / 0.8748 | 0.9627 / 0.9611 |
| 1.6712 | 0.6950 / 0.6229 | 0.7289 / 0.6686 | 0.8305 / 0.8028 | 0.8983 / 0.8872 | 0.9661 / 0.9647 |
| 3.7523 | 0.7969 / 0.7601 | 0.8194 / 0.7898 | 0.8871 / 0.8750 | 0.9323 / 0.9278 | 0.9774 / 0.9769 |
| 5.0019 | 0.8348 / 0.8108 | 0.8532 / 0.8342 | 0.9082 / 0.9009 | 0.9449 / 0.9423 | 0.9816 / 0.9814 |
| 10.4310 | 0.9142 / 0.9108 | 0.9237 / 0.9212 | 0.9523 / 0.9515 | 0.9714 / 0.9712 | 0.9905 / 0.9904 |
| 20.0000 | 0.9550 / 0.9549 | 0.9237 / 0.9212 | 0.9523 / 0.9515 | 0.9714 / 0.9712 | 0.9905 / 0.9904 |
Table 2. Values of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ (first entry of each cell) and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ (second entry) for $d = 10$ and $\omega = 0.1, 0.2, 0.5, 0.7, 0.9$ at different values of $\lambda = \|\mu\|^2$.

| $\lambda$ | $\omega = 0.1$ | $\omega = 0.2$ | $\omega = 0.5$ | $\omega = 0.7$ | $\omega = 0.9$ |
|---|---|---|---|---|---|
| 1.2418 | 0.3609 / 0.3083 | 0.4319 / 0.3914 | 0.6449 / 0.6349 | 0.7870 / 0.7855 | 0.9290 / 0.9290 |
| 1.6712 | 0.3854 / 0.3368 | 0.4567 / 0.4169 | 0.6585 / 0.6498 | 0.7951 / 0.7939 | 0.9317 / 0.9317 |
| 3.7523 | 0.4839 / 0.4525 | 0.5413 / 0.5190 | 0.7133 / 0.7089 | 0.8280 / 0.8274 | 0.9427 / 0.9426 |
| 5.0019 | 0.5306 / 0.5070 | 0.5827 / 0.5666 | 0.7392 / 0.7364 | 0.8435 / 0.8432 | 0.9478 / 0.9478 |
| 10.4310 | 0.6668 / 0.6610 | 0.7039 / 0.7003 | 0.8149 / 0.8145 | 0.8889 / 0.8889 | 0.9630 / 0.9630 |
| 20.0000 | 0.7828 / 0.7825 | 0.8070 / 0.8068 | 0.8793 / 0.8793 | 0.9276 / 0.9276 | 0.9759 / 0.9759 |
Table 3. Values of the risk ratios $\frac{R_\omega(T_{JS}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ (first entry of each cell) and $\frac{R_\omega(T_{JS}^{+}(\|Y\|^2), \mu)}{R_\omega(T_0, \mu)}$ (second entry) for $d = 20$ and $\omega = 0.1, 0.2, 0.5, 0.7, 0.9$ at different values of $\lambda = \|\mu\|^2$.

| $\lambda$ | $\omega = 0.1$ | $\omega = 0.2$ | $\omega = 0.5$ | $\omega = 0.7$ | $\omega = 0.9$ |
|---|---|---|---|---|---|
| 1.6712 | 0.2529 / 0.2245 | 0.3359 / 0.3169 | 0.5849 / 0.5831 | 0.7509 / 0.7508 | 0.9170 / 0.9169 |
| 2.4948 | 0.2807 / 0.2558 | 0.3606 / 0.3445 | 0.6004 / 0.5991 | 0.7602 / 0.7602 | 0.9201 / 0.9201 |
| 3.7523 | 0.3196 / 0.2991 | 0.3952 / 0.3826 | 0.6220 / 0.6211 | 0.7732 / 0.7732 | 0.9244 / 0.9244 |
| 5.0019 | 0.3545 / 0.3380 | 0.4263 / 0.4165 | 0.6414 / 0.6408 | 0.7848 / 0.7848 | 0.9283 / 0.9283 |
| 10.4310 | 0.4739 / 0.4681 | 0.5323 / 0.5295 | 0.7077 / 0.7076 | 0.8246 / 0.8246 | 0.9415 / 0.9415 |
| 20.0000 | 0.6054 / 0.6047 | 0.6492 / 0.6490 | 0.7808 / 0.7807 | 0.8684 / 0.8684 | 0.9561 / 0.9561 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
