Article

Use of Nonconventional Dispersion Measures to Improve the Efficiency of Ratio-Type Estimators of Variance in the Presence of Outliers

1 School of Mathematical Sciences, Institute of Statistics, Zhejiang University, Hangzhou 310058, China
2 School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
3 Department of Statistics, Government College University, Faisalabad 38000, Pakistan
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(1), 16; https://doi.org/10.3390/sym12010016
Submission received: 16 November 2019 / Revised: 14 December 2019 / Accepted: 16 December 2019 / Published: 19 December 2019

Abstract: The use of auxiliary information in survey sampling to enhance the efficiency of the estimators of population parameters is a common phenomenon. Generally, the ratio and regression estimators are developed by using known information on conventional parameters of the auxiliary variable, such as the variance, coefficient of variation, coefficient of skewness, coefficient of kurtosis, or the correlation between the study and auxiliary variables. The efficiency of these estimators is dubious in the presence of outliers in the data and for nonsymmetrical populations. This study presents improved variance estimators under simple random sampling without replacement, with the assumption that information on some nonconventional dispersion measures of the auxiliary variable is readily available. These measures include the inter-decile range, the sample inter-quartile range, the probability-weighted moment estimator, the Gini mean difference estimator, Downton's estimator, the median absolute deviation from the median, and so forth. The algebraic expressions for the bias and mean square error of the proposed estimators are obtained, and efficiency conditions are derived for comparison with the existing estimators. Percentage relative efficiencies are used to numerically compare the proposed and existing estimators on real datasets, indicating the supremacy of the suggested estimators.

1. Introduction

Suppose a finite population $W = \{W_1, W_2, \ldots, W_N\}$ consists of $N$ different and identifiable units. Let $Y$ be a measurable variable of interest with values $Y_i$ being ascertained on $W_i$, $i = 1, 2, \ldots, N$, resulting in a set of observations $Y = \{Y_1, Y_2, \ldots, Y_N\}$. The purpose of the measurement process is to estimate the population variance $S_Y^2 = \frac{1}{N-1}\sum_{i=1}^{N}(Y_i - \bar{Y})^2$ by drawing a random sample from the population. Suppose that information on an auxiliary variable $X$, which is correlated with the study variable $Y$, is also available for every unit of the population. The use of auxiliary information to improve the efficiency of the estimators of population parameters is very popular, especially when information on the auxiliary variable is readily available and is highly correlated with the variable of interest. For instance, see [1,2,3] and the references cited therein. The ratio, product, and regression methods of estimation are frequently used to enhance the efficiency of the estimators, depending upon the nature of the relationship between the study and the auxiliary variables. Along with the estimation of the population mean, the estimation of the population variance is of importance in many real-life situations. Generally, estimation of the population variance is dealt with by augmenting the conventional parameters of the auxiliary variable through a ratio or regression method of estimation to achieve greater efficiency. Mostly, the coefficient of skewness, coefficient of kurtosis, coefficient of variation, and coefficient of correlation are used in linear combination with some other conventional parameters of the auxiliary variable to estimate the variance. The readers can refer to [4,5,6,7,8,9,10,11,12,13,14,15] and the references therein. The auxiliary measures used in most of the existing ratio-type estimators of variance are not resistant to the presence of outliers or nonsymmetrical populations. Therefore, there is a need to develop ratio- or regression-type estimators which are somewhat outlier resistant and more stable in the case of asymmetrical populations.
The present study focuses on the estimation of the population variance by incorporating information on nonconventional dispersion parameters (detailed in Section 3) of the auxiliary variable. These measures are resistant to outliers and are used in linear combination with other conventional measures to improve the efficiency of the variance estimator under simple random sampling without replacement (SRSWOR) in the presence of outliers in the target population. The rest of the manuscript is structured as follows. The nomenclature used in the manuscript and the background of the existing estimators of population variance are described in Section 2, whereas Section 3 presents the proposed improved families of estimators of population variance. In Section 4, the performance of the proposed families of estimators is evaluated and compared with the existing estimators of population variance. Finally, some concluding remarks are given in Section 5.

2. Background of the Ratio-Type Estimators of Variance

This section deals with some of the existing estimators of population variance under simple random sampling (SRS) which utilize the known information on conventional parameters of the auxiliary variable to enhance the efficiency of the variance estimators. Before going into the details of the existing estimators of population variance, the notation used in this manuscript is as follows.
$N$  Population size; $n$  Sample size
$f = n/N$  Sampling fraction; $\eta = \frac{1-f}{n}$
$Y$  Study variable; $X$  Auxiliary variable
$\bar{Y}, \bar{X}$  Population means; $\bar{y}, \bar{x}$  Sample means
$S_Y, S_X$  Population standard deviations; $s_y, s_x$  Sample standard deviations
$C_Y, C_X$  Population coefficients of variation; $\rho$  Population correlation coefficient
$\beta_2(Y) = \mu_{40}/\mu_{20}^2$  Population coefficient of kurtosis of the study variable
$\beta_2(X) = \mu_{04}/\mu_{02}^2$  Population coefficient of kurtosis of the auxiliary variable
$\beta_1(X) = \mu_{03}^2/\mu_{02}^3$  Population coefficient of skewness of the auxiliary variable
$M_d(X)$  Population median of the auxiliary variable
$Q_1(X)$  Population lower quartile of the auxiliary variable
$Q_3(X)$  Population upper quartile of the auxiliary variable
$Q_r(X)$  Population inter-quartile range of the auxiliary variable
$Q_a(X)$  Population inter-quartile average of the auxiliary variable
$Q_d(X)$  Population semi inter-quartile range of the auxiliary variable
$D_i(X)$  $i$th ($i = 1, 2, \ldots, 10$) population decile of the auxiliary variable
$B(\cdot)$  Bias of the estimator; $MSE(\cdot)$  Mean square error of the estimator
$\hat{S}_R^2$  Traditional ratio estimator; $\hat{S}_i^2$  Existing ratio estimator
$\hat{S}_{SK(i)}^2$  The class of estimators introduced by [16]
$\hat{S}_{p1j}^2$  Proposed class-I estimators; $\hat{S}_{p2l}^2$  Proposed class-II estimators
$\mu_{rs} = \frac{1}{N}\sum_{i=1}^{N}(Y_i - \bar{Y})^r (X_i - \bar{X})^s$; $\lambda_{rs} = \frac{\mu_{rs}}{\mu_{20}^{r/2}\mu_{02}^{s/2}}$; $\lambda_{22} = \frac{\mu_{22}}{\mu_{20}\mu_{02}}$; $\lambda_{21} = \frac{\mu_{21}}{\mu_{20}\mu_{02}^{1/2}}$
The traditional ratio-type estimator of Isaki [4] utilizes information on the known variance $S_X^2$ of the auxiliary variable to estimate the population variance $S_Y^2$. The estimator, along with its approximate bias and MSE, is given as
$\hat{S}_R^2 = \frac{s_y^2}{s_x^2} S_X^2$, $B(\hat{S}_R^2) = \eta S_Y^2 \left\{(\beta_2(X) - 1) - (\lambda_{22} - 1)\right\}$, $MSE(\hat{S}_R^2) = \eta S_Y^4 \left\{(\beta_2(Y) - 1) + (\beta_2(X) - 1) - 2(\lambda_{22} - 1)\right\}$.
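As a numerical illustration, the following minimal Python sketch (the function and variable names are ours, not from the paper) computes the traditional Isaki ratio estimator from a drawn sample and evaluates its first-order MSE approximation from known population quantities.

```python
import numpy as np

def isaki_ratio_variance(y_sample, x_sample, SX2):
    """Traditional ratio-type estimator of Isaki: (s_y^2 / s_x^2) * S_X^2."""
    s_y2 = np.var(y_sample, ddof=1)  # sample variance of the study variable
    s_x2 = np.var(x_sample, ddof=1)  # sample variance of the auxiliary variable
    return s_y2 / s_x2 * SX2

def isaki_mse_approx(n, N, SY2, beta2_Y, beta2_X, lambda22):
    """First-order MSE: eta * S_Y^4 * [(beta2(Y)-1) + (beta2(X)-1) - 2*(lambda22-1)]."""
    eta = (1 - n / N) / n
    return eta * SY2**2 * ((beta2_Y - 1) + (beta2_X - 1) - 2 * (lambda22 - 1))
```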
Several modifications and improved estimators of variance which have been proposed in the literature make use of different conventional characteristics of the auxiliary variable. All these estimators exhibit superior efficiency as compared to the traditional ratio estimator under certain theoretical conditions. Some of the existing estimators, which utilize information on the variance of the auxiliary variable linearly integrated with other conventional parameters of the auxiliary variable, are summarized in Table 1 with their respective bias, MSEs, and constants.
Recently Subramani and Kumarapandiyan [16] proposed a new class of modified ratio-type estimators of population variance by modifying the estimator proposed by Upadhyaya and Singh [17]. They showed that their new class of estimators outperforms the estimators suggested in [5,11,12,13,14,17,18] under certain conditions.
The general structure of the Subramani and Kumarapandiyan [16] class of estimators is given as
$\hat{S}_{SK(i)}^2 = s_y^2 \left(\frac{\bar{X} + \omega_i}{\bar{x} + \omega_i}\right), \quad i = 1, 2, \ldots, 51$
$B(\hat{S}_{SK(i)}^2) = \eta S_Y^2 \left\{\gamma_{SK(i)}^2 C_X^2 - \gamma_{SK(i)} \lambda_{21} C_X\right\}$, where $\gamma_{SK(i)} = \frac{\bar{X}}{\bar{X} + \omega_i}$
$MSE(\hat{S}_{SK(i)}^2) = \eta S_Y^4 \left\{(\beta_2(Y) - 1) + \gamma_{SK(i)}^2 C_X^2 - 2\gamma_{SK(i)} \lambda_{21} C_X\right\}$
where $\omega_i$, $i = 1, 2, \ldots, 51$, are different choices of parameters of the auxiliary variable $X$ which can be found in Appendix A.
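A corresponding sketch for the Subramani and Kumarapandiyan class is given below. It is only an illustration under the notation above, with $\gamma_{SK(i)}$ computed for any chosen value of $\omega_i$; the helper names are hypothetical.

```python
import numpy as np

def sk_estimate(y_sample, x_sample, X_bar, omega_i):
    """Ratio-type estimate s_y^2 * (X_bar + omega_i) / (x_bar + omega_i)."""
    s_y2 = np.var(y_sample, ddof=1)
    x_bar = np.mean(x_sample)
    return s_y2 * (X_bar + omega_i) / (x_bar + omega_i)

def sk_mse_approx(n, N, SY2, beta2_Y, CX, lambda21, X_bar, omega_i):
    """MSE approximation with gamma_SK(i) = X_bar / (X_bar + omega_i)."""
    eta = (1 - n / N) / n
    gamma = X_bar / (X_bar + omega_i)
    return eta * SY2**2 * ((beta2_Y - 1) + gamma**2 * CX**2 - 2 * gamma * lambda21 * CX)
```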
All these estimators exhibit superior efficiency as compared to the traditional ratio-type estimators suggested by Isaki [4] under certain conditions, but most of these estimators are based on conventional parameters of the auxiliary variable. Although many other estimators of variance are available in the existing literature, they are more complex and involve laborious computational details. Therefore, the above detailed estimators were chosen for comparison purposes due to the simplicity of their structure and relatively lower computational complexities for practitioners.

3. Proposed Estimators of Variance

This section presents two different families of ratio estimators of population variance for the case where information on some nonconventional measures of dispersion of the auxiliary variable is readily available. The nonconventional measures used to develop the new ratio estimators of variance include the following.
  • Inter-decile Range: The inter-decile range is the difference between the largest decile $D_9(X)$ and the smallest decile $D_1(X)$. Symbolically, it is given as
    $IDR_X = D_9(X) - D_1(X)$.
  • Sample Inter-quartile Range: The sample inter-quartile range is based on the difference between the upper $Q_3(X)$ and lower $Q_1(X)$ quartiles, as discussed by Riaz [19] and Nazir et al. [20]. It is computed as
    $SIQR_X = \frac{Q_3(X) - Q_1(X)}{1.34898}$.
  • Probability Weighted Moment Estimator: The probability weighted moment estimator of dispersion suggested by Downton [21] is based on the ordered sample statistics and is defined as
    $S_{PW_X} = \frac{\sqrt{\pi}}{N^2} \sum_{i=1}^{N} (2i - N - 1) X_{(i)}$,
    where $X_{(i)}$ denotes the $i$th order statistic.
  • Downton's Estimator: Another estimator of dispersion, similar to $S_{PW_X}$, was proposed by Downton [21] and is defined as
    $DOW_X = \frac{2\sqrt{\pi}}{N(N-1)} \sum_{i=1}^{N} \left(i - \frac{N+1}{2}\right) X_{(i)}$.
  • Gini Mean Difference Estimator: Gini [22] introduced a dispersion estimator which is also based on the sample order statistics. It is given as
    $G_X = \frac{4}{N-1} \sum_{i=1}^{N} \left(\frac{2i - N - 1}{2N}\right) X_{(i)}$.
  • Median Absolute Deviation from Median: Hampel [23] suggested an estimator of dispersion based on the absolute deviations from the median. It is defined as
    $MADM_X = 1.4826\left[\mathrm{median}\,|X_i - \tilde{X}|\right]$,
    where $\tilde{X}$ is the median.
  • The Median of Pairwise Distances: Shamos [24] (p. 260) and Bickel and Lehmann [25] (p. 38) suggested an estimator of dispersion based on the median of the pairwise distances, $\mathrm{median}\,\{|X_i - X_l|;\, i < l\}$. Rousseeuw and Croux [26] suggested pre-multiplying it by 1.0483 to achieve consistency under the Gaussian distribution, and the resultant estimator can be defined as
    $B_{n_X} = 1.0483\left[\mathrm{median}\,\{|X_i - X_l|;\, i < l\}\right]$.
  • Median Absolute Deviation from Mean: Wu et al. [27] defined another estimator which is based on the absolute deviations from the mean. It is given as
    $MAD_X = \mathrm{median}\,|X_i - \bar{X}|$.
  • Mean Absolute Deviation from Mean: Wu et al. [27] suggested an estimator of dispersion based on the absolute deviations from the mean. It is given as
    $MD_X = \frac{\sum_{i=1}^{N} |X_i - \bar{X}|}{1.2533}$.
  • Average Absolute Deviation from Median: Wu et al. [27] suggested an estimator of dispersion based on the average of the absolute deviations from the median. It is given as
    $AADM_X = \frac{\sum_{i=1}^{N} |X_i - \tilde{X}|}{N}$.
  • The Ordered Statistic of Subranges: A robust estimator of dispersion based on the ordered subranges was introduced by Croux and Rousseeuw [28], defined as
    $S_{r_X} = 1.4826\left\{\left|X_{(i+[0.25N]+1)} - X_{(i)}\right|\right\}_{\left(\left[\frac{N}{2}\right] - 0.25N\right)}$,
    where the symbol $[\cdot]$ represents the integer part of a fraction.
  • Trimmed Mean of Median of Pairwise Distances: Croux and Rousseeuw [28] defined another robust estimator of dispersion based on the trimmed mean of the medians of pairwise distances. It is given as
    $T_{n_X} = \frac{1.38}{h} \sum_{k=1}^{h} \left\{\mathrm{median}\,|X_i - X_l|;\, i \neq l\right\}_{(k)}$,
    where for each $i$ we compute the median of $|X_i - X_l|$, $l = 1, 2, 3, \ldots, n$, which yields $n$ values; the average of the first $h$ order statistics gives the final estimate $T_{n_X}$, where $h = \left[\frac{n}{2}\right] + 1$, which is roughly half of the number of observations.
  • The 0.25-quantile of Pairwise Distances: Another nonconventional dispersion measure incorporated in this study is due to Rousseeuw and Croux [26] and is defined as
    $Q_{n_X} = d\left\{|X_i - X_l|;\, i < l\right\}_{(p)}$,
    where $d$ is a constant factor whose default value is 2.2219, which makes it a consistent estimator under normality, while $p = \binom{h}{2} \approx \binom{N}{2}/4$ and $h = \left[\frac{n}{2}\right] + 1$. Thus, the $p$th order statistic of the $\binom{N}{2}$ interpoint distances yields the desired estimator.
  • The Median of the Median of Distances: This study also includes a robust estimator of dispersion defined in Rousseeuw and Croux [26]. It is given as
    $S_{n_X} = C\left[\mathrm{median}_i\left\{\mathrm{median}_j\,|X_i - X_j|;\, i \neq j\right\}\right]$,
    where $C$ is a constant used for consistency; under a normal population its value is usually set to 1.1926.
The abovementioned nonconventional measures are used in conjunction with other conventional measures such as the coefficient of skewness, the coefficient of variation, and the coefficient of correlation in the context of ratio and regression methods of estimation under SRSWOR to propose new estimators of population variance. The detailed properties of the above nonconventional measures can be found in the relevant cited references.
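For reference, the sketch below shows how several of the listed measures could be computed in Python for a given data vector. It is only a sketch under our own conventions: the deciles and quartiles use numpy's default percentile interpolation, which the paper does not specify, and only a subset of the measures is included.

```python
import numpy as np

def nonconventional_dispersion(x):
    """Compute a subset of the nonconventional dispersion measures of Section 3."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    med = np.median(x)
    out = {
        "IDR": np.percentile(x, 90) - np.percentile(x, 10),                 # inter-decile range
        "SIQR": (np.percentile(x, 75) - np.percentile(x, 25)) / 1.34898,    # sample inter-quartile range
        "SPW": np.sqrt(np.pi) / n**2 * np.sum((2 * i - n - 1) * x),         # probability weighted moments
        "DOW": 2 * np.sqrt(np.pi) / (n * (n - 1)) * np.sum((i - (n + 1) / 2) * x),  # Downton's estimator
        "G": 4 / (n - 1) * np.sum((2 * i - n - 1) / (2 * n) * x),           # Gini mean difference
        "MADM": 1.4826 * np.median(np.abs(x - med)),                        # median abs. deviation from median
        "MAD": np.median(np.abs(x - np.mean(x))),                           # median abs. deviation from mean
        "AADM": np.mean(np.abs(x - med)),                                   # average abs. deviation from median
    }
    # median of the pairwise distances (Shamos-type) with the 1.0483 consistency factor
    pair = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    out["Bn"] = 1.0483 * np.median(pair)
    return out
```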

3.1. The Suggested Estimators of Class-I

Motivated by Abid et al. [29], we propose a new class of ratio estimators of variance under SRS by using the power transformation and the Searls [30] technique as follows:
$\hat{S}_{p1j}^2 = L\, s_y^2 \left(\frac{\varphi \bar{X} + \psi}{\varphi \bar{x} + \psi}\right)^{\frac{\delta \bar{X}}{\delta \bar{X} + \nu}}$ (4)
where $L$ is the Searls [30] constant, $s_y^2$ is the sample variance of the study variable, and $\bar{X}$ and $\bar{x}$ are the population and sample mean of the auxiliary variable, respectively. It is worth mentioning that $(\varphi \bar{X} + \psi) > 0$, $(\varphi \bar{x} + \psi) > 0$, and $(\varphi, \delta)$ can either be known real numbers or known conventional parameters of the auxiliary variable $X$, whereas $(\psi, \nu)$ are the known nonconventional dispersion parameters of the auxiliary variable $X$.
To obtain the bias and MSE of $\hat{S}_{p1j}^2$, in terms of relative errors we can write $e_0 = \frac{s_y^2 - S_Y^2}{S_Y^2}$ and $e_1 = \frac{\bar{x} - \bar{X}}{\bar{X}}$, so that $s_y^2 = S_Y^2(1 + e_0)$, $\bar{x} = \bar{X}(1 + e_1)$, and $E(e_0) = E(e_1) = 0$, $E(e_0^2) = \eta(\beta_2(Y) - 1)$, $E(e_1^2) = \eta C_X^2$, $E(e_0 e_1) = \eta \lambda_{21} C_X$.
After putting the values of $e_0$ and $e_1$ into Equation (4), we get
$\hat{S}_{p1j}^2 = L S_Y^2 (1 + e_0)(1 + \xi_1 e_1)^{-\xi_2}$ (5)
where $\xi_1 = \frac{\varphi \bar{X}}{\varphi \bar{X} + \psi}$ and $\xi_2 = \frac{\delta \bar{X}}{\delta \bar{X} + \nu}$.
Assuming $|\xi_1 e_1| < 1$ so that $(1 + \xi_1 e_1)^{-\xi_2}$ is expandable, expanding the right-hand side of Equation (5) and neglecting terms in the $e$'s of power greater than two, we have
$\hat{S}_{p1j}^2 \approx L\left(S_Y^2 + S_Y^2 e_0 - S_Y^2 \xi_1 \xi_2 e_1 - S_Y^2 \xi_1 \xi_2 e_0 e_1 + S_Y^2 \frac{(\xi_2^2 + \xi_2)\xi_1^2}{2} e_1^2\right)$. Subtracting $S_Y^2$ from both sides and simplifying, we get
$\hat{S}_{p1j}^2 - S_Y^2 \approx S_Y^2\left\{(L - 1) + L\left(e_0 - \xi_1 \xi_2 e_1 - \xi_1 \xi_2 e_0 e_1 + \frac{(\xi_2^2 + \xi_2)\xi_1^2}{2} e_1^2\right)\right\}$. (6)
By taking the expectation on both sides of Equation (6), we get the bias of $\hat{S}_{p1j}^2$ up to the first degree of approximation as
$B(\hat{S}_{p1j}^2) \approx S_Y^2\left\{(L - 1) + \eta L \xi_1\left(\frac{(\xi_2^2 + \xi_2)\xi_1}{2} C_X^2 - \xi_2 \lambda_{21} C_X\right)\right\}$.
The mean square error of $\hat{S}_{p1j}^2$ is defined as
$MSE(\hat{S}_{p1j}^2) = E\left(\hat{S}_{p1j}^2 - S_Y^2\right)^2$.
So, squaring both sides of Equation (6), keeping terms in the $e$'s only up to the second order, and applying the expectation, the MSE of $\hat{S}_{p1j}^2$ up to the first degree of approximation is
$MSE(\hat{S}_{p1j}^2) \approx S_Y^4\left[(L - 1)^2 + L^2\left\{E(e_0^2) + (2\xi_1^2 \xi_2^2 + \xi_1^2 \xi_2) E(e_1^2) - 4\xi_1 \xi_2 E(e_0 e_1)\right\} - L\left\{(\xi_2^2 + \xi_2)\xi_1^2 E(e_1^2) - 2\xi_1 \xi_2 E(e_0 e_1)\right\}\right]$. (8)
Differentiating Equation (8) with respect to $L$, equating it to zero, and simplifying, we get the optimum value of $L$ as
$L = L_{(opt)} = \frac{L_1}{L_2}$
where $L_1 = 1 + \frac{(\xi_2^2 + \xi_2)\xi_1^2}{2} E(e_1^2) - \xi_1 \xi_2 E(e_0 e_1)$ and $L_2 = 1 + E(e_0^2) + (2\xi_1^2 \xi_2^2 + \xi_1^2 \xi_2) E(e_1^2) - 4\xi_1 \xi_2 E(e_0 e_1)$.
Substituting the above result into Equation (8) and simplifying, the minimum MSE of $\hat{S}_{p1j}^2$ is
$MSE(\hat{S}_{p1j}^2)_{min} \approx S_Y^4\left(1 - \frac{L_1^2}{L_2}\right)$.
The exact values of $L_1$ and $L_2$ can easily be obtained by substituting the known results for $E(e_0^2)$, $E(e_1^2)$, and $E(e_0 e_1)$ into their respective expressions, which gives
$L_1 = 1 + \frac{\eta}{2}\,\xi_1 C_X\left\{(\xi_2^2 + \xi_2)\xi_1 C_X - 2\xi_2 \lambda_{21}\right\}$,
$L_2 = 1 + \eta\left\{(\beta_2(Y) - 1) + (2\xi_1^2 \xi_2^2 + \xi_1^2 \xi_2) C_X^2 - 4\xi_1 \xi_2 \lambda_{21} C_X\right\}$.
The proposed class-I encompasses different kinds of existing estimators obtained by specifying the values of the constants. For example, if we set $L = \varphi = \delta = 1$ and $\psi = \nu = 0$, then the estimator suggested by Upadhyaya and Singh [17] is a member of the proposed $\hat{S}_{p1j}^2$ class of estimators. Similarly, if we set $L = \varphi = \delta = 1$, $\psi = \omega_i$, and $\nu = 0$, then the Subramani and Kumarapandiyan [16] class of estimators becomes a member of the proposed $\hat{S}_{p1j}^2$ class. Some new members of the proposed class-I estimators, which are based on the integration of conventional parameters and nonconventional dispersion parameters of the auxiliary variable, are given in Table 2. It is worth mentioning that many other estimators can be generated from the proposed $\hat{S}_{p1j}^2$ class of estimators, but to conserve space only a few are given.
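The optimum Searls constant and the minimum MSE of any class-I member follow directly from the expressions above; a minimal sketch (hypothetical function name, parameters as defined in this section, with SY2 denoting $S_Y^2$) is shown below.

```python
def class1_min_mse(n, N, SY2, beta2_Y, CX, lambda21, X_bar, phi, psi, delta, nu):
    """Return (L_opt, MSE_min) for S^2_p1j with the given choices of (phi, psi, delta, nu)."""
    eta = (1 - n / N) / n
    xi1 = phi * X_bar / (phi * X_bar + psi)
    xi2 = delta * X_bar / (delta * X_bar + nu)
    L1 = 1 + eta / 2 * xi1 * CX * ((xi2**2 + xi2) * xi1 * CX - 2 * xi2 * lambda21)
    L2 = 1 + eta * ((beta2_Y - 1)
                    + (2 * xi1**2 * xi2**2 + xi1**2 * xi2) * CX**2
                    - 4 * xi1 * xi2 * lambda21 * CX)
    L_opt = L1 / L2
    mse_min = SY2**2 * (1 - L1**2 / L2)   # S_Y^4 * (1 - L1^2 / L2)
    return L_opt, mse_min
```

For instance, reading Table 2, the member $\hat{S}_{p1\,1}^2$ corresponds to $\varphi = \delta = 1$ and $\psi = \nu = S_{PW_X}$.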

Efficiency Conditions for Class-I Estimators

The estimators of class-I perform better than the traditional estimator of Isaki [4] for estimating the population variance if
$MSE(\hat{S}_{p1j}^2)_{min} \le MSE(\hat{S}_R^2)$
$S_Y^4\left(1 - \frac{L_1^2}{L_2}\right) \le \eta S_Y^4\left\{(\beta_2(Y) - 1) + (\beta_2(X) - 1) - 2(\lambda_{22} - 1)\right\}$
or
$\frac{L_1^2}{L_2} \ge 1 - \eta\left(\beta_2(Y) + \beta_2(X) - 2\lambda_{22}\right)$.
The estimators defined in class-I will achieve greater efficiency as compared to the estimators defined in Section 2, i.e., $\hat{S}_i^2$, $i = 1, \ldots, 22$, if
$S_Y^4\left(1 - \frac{L_1^2}{L_2}\right) \le \eta S_Y^4\left\{(\beta_2(Y) - 1) + \gamma_i^2(\beta_2(X) - 1) - 2\gamma_i(\lambda_{22} - 1)\right\}$
or
$\frac{L_1^2}{L_2} \ge 1 - \eta\left\{(\beta_2(Y) - 1) + \gamma_i^2(\beta_2(X) - 1) - 2\gamma_i(\lambda_{22} - 1)\right\}$.
The suggested class-I estimators will outperform the Upadhyaya and Singh [17] modified ratio-type estimator of population variance in terms of efficiency if
$S_Y^4\left(1 - \frac{L_1^2}{L_2}\right) \le \eta S_Y^4\left\{(\beta_2(Y) - 1) + C_X^2 - 2\lambda_{21} C_X\right\}$
or
$\frac{L_1^2}{L_2} \ge 1 - \eta\left\{(\beta_2(Y) - 1) + C_X^2 - 2\lambda_{21} C_X\right\}$.
The estimators envisaged in the proposed class $\hat{S}_{p1j}^2$ will exhibit superior performance as compared to the Subramani and Kumarapandiyan [16] modified class of estimators if
$S_Y^4\left(1 - \frac{L_1^2}{L_2}\right) \le \eta S_Y^4\left\{(\beta_2(Y) - 1) + \gamma_{SK(i)}^2 C_X^2 - 2\gamma_{SK(i)} \lambda_{21} C_X\right\}$
or
$\frac{L_1^2}{L_2} \ge 1 - \eta\left\{(\beta_2(Y) - 1) + \gamma_{SK(i)}^2 C_X^2 - 2\gamma_{SK(i)} \lambda_{21} C_X\right\}$.
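These conditions can be checked numerically once $L_1$, $L_2$, and the population quantities are available; a small sketch of the first condition (against the Isaki estimator) is given below, with the other conditions following the same pattern. The helper name is ours.

```python
def class1_beats_isaki(L1, L2, eta, beta2_Y, beta2_X, lambda22):
    """True when L1^2 / L2 >= 1 - eta * (beta2(Y) + beta2(X) - 2*lambda22)."""
    return L1**2 / L2 >= 1 - eta * (beta2_Y + beta2_X - 2 * lambda22)
```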

3.2. The Suggested Estimators of Class-II

In this section, we present a new class of regression-type estimators of population variance. The proposed class of estimators $\hat{S}_{p2m}^2$ is defined as
$\hat{S}_{p2m}^2 = M\left\{s_y^2\left(\frac{\alpha \bar{X} + \zeta}{\alpha \bar{x} + \zeta}\right)^{\frac{\tau \bar{X}}{\tau \bar{X} + \upsilon}} + b(\bar{X} - \bar{x})\right\}$
where $M$ is the Searls [30] constant, $\alpha$ and $\tau$ can be real numbers or functions of known conventional parameters of the auxiliary variable $X$, $\zeta$ and $\upsilon$ are known functions of the nonconventional dispersion parameters of the auxiliary variable $X$, and $b$ is the regression coefficient between the study and auxiliary variables.
The minimized bias and minimized MSE of the class-II estimators are obtained by adapting the procedure given in Section 3.1:
$B(\hat{S}_{p2m}^2)_{min} \approx S_Y^2\left[(M_{(opt)} - 1) + \eta M_{(opt)} \theta_1 C_X\left\{\frac{(\theta_2^2 + \theta_2)}{2}\theta_1 C_X - \theta_2 \lambda_{21}\right\}\right]$
$MSE(\hat{S}_{p2m}^2)_{min} = S_Y^4 - \frac{M_1^2}{M_2}$
where $\theta_1 = \frac{\alpha \bar{X}}{\alpha \bar{X} + \zeta}$, $\theta_2 = \frac{\tau \bar{X}}{\tau \bar{X} + \upsilon}$, $M_{(opt)} = \frac{M_1}{M_2}$, $M_1 = S_Y^4\left[1 + \frac{\eta}{2}\theta_1 C_X\left\{(\theta_2^2 + \theta_2)\theta_1 C_X - 2\theta_2 \lambda_{21}\right\}\right]$, and $M_2 = S_Y^4 + \eta S_Y^4\left\{(\beta_2(Y) - 1) + (2\theta_2^2 + \theta_2)\theta_1^2 C_X^2 - 4\theta_1 \theta_2 \lambda_{21} C_X\right\} - 2\eta b S_Y^2 \bar{X}\left\{\lambda_{21} C_X - \theta_1 \theta_2 C_X^2\right\} + \eta b^2 \bar{X}^2 C_X^2$.
The estimators envisaged in class-II incorporate many existing estimators of population variance. For instance, if we set $M = \alpha = \tau = 1$ and $\zeta = \upsilon = b = 0$, then the estimator suggested by Upadhyaya and Singh [17] is a member of the $\hat{S}_{p2m}^2$ class of estimators. Similarly, if we set $M = \alpha = \tau = 1$, $\zeta = \omega_i$, and $\upsilon = b = 0$, then the Subramani and Kumarapandiyan [16] class of estimators becomes a member of the proposed $\hat{S}_{p2m}^2$ class of estimators. Moreover, if the regression coefficient $b = 0$, the class of estimators defined in Section 3.1 is also a member of the proposed class $\hat{S}_{p2m}^2$. Table 3 contains some new members of the proposed class-II to estimate the population variance based on auxiliary information.
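Analogously to class-I, the quantities $M_1$, $M_2$, $M_{(opt)}$, and the minimum MSE of a class-II member can be evaluated as in the sketch below. This is a hypothetical helper built from the expressions above; the last term of $M_2$ is read here as $\eta b^2 \bar{X}^2 C_X^2$, with $b$ the regression coefficient, and SY2 denotes $S_Y^2$.

```python
def class2_min_mse(n, N, SY2, beta2_Y, CX, lambda21, X_bar, alpha, zeta, tau, upsilon, b):
    """Return (M_opt, MSE_min) for S^2_p2m with the given choices of (alpha, zeta, tau, upsilon, b)."""
    eta = (1 - n / N) / n
    th1 = alpha * X_bar / (alpha * X_bar + zeta)
    th2 = tau * X_bar / (tau * X_bar + upsilon)
    M1 = SY2**2 * (1 + eta / 2 * th1 * CX * ((th2**2 + th2) * th1 * CX - 2 * th2 * lambda21))
    M2 = (SY2**2
          + eta * SY2**2 * ((beta2_Y - 1)
                            + (2 * th2**2 + th2) * th1**2 * CX**2
                            - 4 * th1 * th2 * lambda21 * CX)
          - 2 * eta * b * SY2 * X_bar * (lambda21 * CX - th1 * th2 * CX**2)
          + eta * b**2 * X_bar**2 * CX**2)
    M_opt = M1 / M2
    mse_min = SY2**2 - M1**2 / M2   # S_Y^4 - M1^2 / M2
    return M_opt, mse_min
```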

Efficiency Conditions for Class-II Estimators

The estimators in class-II will perform better than the Isaki [4] traditional ratio estimator of population variance if
$MSE(\hat{S}_{p2m}^2)_{min} \le MSE(\hat{S}_R^2)$
$\left(S_Y^4 - \frac{M_1^2}{M_2}\right) \le \eta S_Y^4\left\{(\beta_2(Y) - 1) + (\beta_2(X) - 1) - 2(\lambda_{22} - 1)\right\}$
or
$\frac{M_1^2}{M_2} \ge S_Y^4\left[1 - \eta\left(\beta_2(Y) + \beta_2(X) - 2\lambda_{22}\right)\right]$.
The estimators defined in class-II will be superior in terms of efficiency as compared to the estimators defined in Section 2, i.e., $\hat{S}_i^2$, $i = 1, \ldots, 22$, if
$\left(S_Y^4 - \frac{M_1^2}{M_2}\right) \le \eta S_Y^4\left\{(\beta_2(Y) - 1) + \gamma_i^2(\beta_2(X) - 1) - 2\gamma_i(\lambda_{22} - 1)\right\}$
or
$\frac{M_1^2}{M_2} \ge S_Y^4\left[1 - \eta\left\{(\beta_2(Y) - 1) + \gamma_i^2(\beta_2(X) - 1) - 2\gamma_i(\lambda_{22} - 1)\right\}\right]$.
The suggested class-II estimators will outperform the Upadhyaya and Singh [17] modified ratio-type estimator of population variance in terms of efficiency if
$\left(S_Y^4 - \frac{M_1^2}{M_2}\right) \le \eta S_Y^4\left\{(\beta_2(Y) - 1) + C_X^2 - 2\lambda_{21} C_X\right\}$
or
$\frac{M_1^2}{M_2} \ge S_Y^4\left[1 - \eta\left\{(\beta_2(Y) - 1) + C_X^2 - 2\lambda_{21} C_X\right\}\right]$.
The estimators envisaged in the proposed class-II will exhibit superior performance as compared to the Subramani and Kumarapandiyan [16] modified class of estimators if
$\left(S_Y^4 - \frac{M_1^2}{M_2}\right) \le \eta S_Y^4\left\{(\beta_2(Y) - 1) + \gamma_{SK(i)}^2 C_X^2 - 2\gamma_{SK(i)} \lambda_{21} C_X\right\}$
or
$\frac{M_1^2}{M_2} \ge S_Y^4\left[1 - \eta\left\{(\beta_2(Y) - 1) + \gamma_{SK(i)}^2 C_X^2 - 2\gamma_{SK(i)} \lambda_{21} C_X\right\}\right]$.

4. Empirical Study

To assess the performance of the proposed classes of estimators in comparison to their competing estimators of variance, two real populations were taken from Singh and Chaudhary [31] (p. 177). These are the same datasets that were considered by Subramani and Kumarapandiyan [16]. In population-I, Y denotes the area under the wheat crop (in acres) during 1974 in 34 villages and X denotes the area under the wheat crop (in acres) during 1971 in the same villages; in population-II, Y is the same as in population-I and X is the area under the wheat crop (in acres) during 1973. As mentioned earlier, the proposed estimators in this study are based on nonconventional and somewhat robust dispersion measures, which perform more efficiently in the presence of outliers in the data than the conventional measures. So, it is expected that the proposed classes of estimators will exhibit superior efficiency as compared to the existing and traditional ratio estimators. The data of both populations contain outliers, which is observable from the boxplots shown in Figure 1 and Figure 2.
The comparison between the proposed classes of estimators and the existing estimators was made based on their percentage relative efficiencies (PREs) as compared to the traditional ratio estimator of variance suggested by Isaki [4]. The PRE of the proposed estimators relative to the traditional estimator is defined as
$PRE(p) = \frac{MSE(Trd)}{MSE(p)} \times 100$
where $PRE(p)$ denotes the percentage relative efficiency of the proposed estimator in comparison with the traditional estimator, $MSE(Trd)$ is the mean square error of the traditional estimator, and $MSE(p)$ is the mean square error of the proposed estimator. It is worth mentioning that, due to the length of the study, from the class of estimators proposed by Subramani and Kumarapandiyan [16] we took only its most efficient estimator, which is based on $D_{10}(X)$, for comparison purposes. The population characteristics are summarized in Table 4.
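The computation behind Tables 5-8 reduces to the ratio below; the helper name is ours.

```python
def pre(mse_traditional, mse_proposed):
    """Percentage relative efficiency: values above 100 favour the proposed estimator."""
    return mse_traditional / mse_proposed * 100.0
```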
The PREs of the existing estimators as compared to the traditional ratio estimator of Isaki [4] are shown in Table 5 and Table 6 for population-I and population-II, respectively, while the PREs of the proposed estimators as compared to the traditional ratio estimator of Isaki [4] are given in Table 7 and Table 8, respectively. For better understanding, Figure 3 and Figure 4 display the comparative PREs of the existing and proposed estimators for population-I and population-II, respectively, where the best estimator from each of the proposed classes was chosen for a better visual display.
From the results reported in Table 5, Table 6, Table 7 and Table 8, the findings are summarized as follows:
  • The estimators proposed in class-I and class-II have higher PREs as compared to the existing estimators for both the populations considered in this study, which reveals the supremacy of the proposed classes of estimators in the presence of outliers in the data (cf. Table 5, Table 6, Table 7 and Table 8 and Figure 3 and Figure 4). For instance, the suggested estimators of class-I and class-II are at least 38% more efficient as compared to the traditional ratio estimator for population-I. For population-II, the efficiency gain of the suggested estimators exceeds 44%. All existing estimators are at most 11% and 17% more efficient as compared to the traditional ratio estimator for population-I and population-II, respectively.
  • The class-I estimators have higher PREs as compared to class-II proposed in this study (cf. Table 7 and Table 8).
  • The estimators which integrate information on the nonconventional dispersion parameter of the auxiliary variable and correlation coefficient between the study and auxiliary variables were found to be superior in terms of efficiency as compared to other estimators (cf. Table 7 and Table 8).
  • The estimator which is based on inter-decile range and the correlation coefficient between the study and auxiliary variables turned out to be the most efficient estimator.
  • It was also observed that the performance of existing estimators in comparison with the traditional ratio estimators is not much superior in the presence of outliers in the data (cf. Table 5 and Table 6), whereas the suggested estimators perform quite well as compared to the existing and the traditional ratio estimators (cf. Table 7 and Table 8). These findings highlight the significance of using nonconventional measures in estimating the population variance in the presence of outliers.

5. Conclusions

This study introduced two new classes of ratio- and regression-type estimators of population variance under simple random sampling without replacement by integrating information on nonconventional and somewhat robust dispersion measures of an auxiliary variable. The expressions for bias and mean square error were obtained, and the efficiency conditions under which the proposed estimators perform better than the existing estimators were also derived. In support of the theoretical findings, an empirical study was conducted based on two real populations which revealed that the suggested classes of estimators outperform the existing estimators considered in this study in terms of PREs in the presence of outliers. Based on the findings, it is strongly recommended that the proposed classes of estimators be used instead of the existing estimators to estimate the population variance in the case of outliers in the dataset. The present study can be further extended by estimating the population variance in the case of two auxiliary variables; moreover, under different sampling schemes, nonconventional measures can be employed to enhance the efficiency of the variance estimators.

Author Contributions

F.N. Conceptualization, Data curation, Investigation, Methodology, Writing—Original draft. T.N. Formal analysis, Software, Validation, Writing—Original draft. T.P. Project administration, Supervision, Visualization, Writing—Review and editing. M.A. Data curation, Methodology, Resources, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

There was no funding for this paper.

Acknowledgments

F.N. and T.N. express their gratitude to the Chinese Scholarship Council (CSC), China, for providing financial support and excellent research facilities to study in China.

Conflicts of Interest

There is no potential conflict of interest to be reported by the authors.

Appendix A

The following are the various choices of $\omega_i$, $i = 1, 2, \ldots, 51$, parameters of the auxiliary variable $X$, used by Subramani and Kumarapandiyan [16] to propose a generalized modified class of estimators of population variance.
$\omega_1 = C_X$, $\omega_2 = \beta_2(X)$, $\omega_3 = \beta_1(X)$, $\omega_4 = \rho$, $\omega_5 = S_X$, $\omega_6 = M_d(X)$, $\omega_7 = Q_1(X)$, $\omega_8 = Q_3(X)$, $\omega_9 = Q_r(X)$, $\omega_{10} = Q_d(X)$, $\omega_{11} = Q_a(X)$, $\omega_{12} = D_1(X)$, $\omega_{13} = D_2(X)$, $\omega_{14} = D_3(X)$, $\omega_{15} = D_4(X)$, $\omega_{16} = D_5(X)$, $\omega_{17} = D_6(X)$, $\omega_{18} = D_7(X)$, $\omega_{19} = D_8(X)$, $\omega_{20} = D_9(X)$, $\omega_{21} = D_{10}(X)$, $\omega_{22} = C_X/\beta_2(X)$, $\omega_{23} = \beta_2(X)/C_X$, $\omega_{24} = C_X/\beta_1(X)$, $\omega_{25} = \beta_1(X)/C_X$, $\omega_{26} = C_X/\rho$, $\omega_{27} = \rho/C_X$, $\omega_{28} = C_X/S_X$, $\omega_{29} = S_X/C_X$, $\omega_{30} = C_X/M_d(X)$, $\omega_{31} = M_d(X)/C_X$, $\omega_{32} = \beta_2(X)/\beta_1(X)$, $\omega_{33} = \beta_1(X)/\beta_2(X)$, $\omega_{34} = \beta_2(X)/\rho$, $\omega_{35} = \rho/\beta_2(X)$, $\omega_{36} = \beta_2(X)/S_X$, $\omega_{37} = S_X/\beta_2(X)$, $\omega_{38} = \beta_2(X)/M_d(X)$, $\omega_{39} = M_d(X)/\beta_2(X)$, $\omega_{40} = \beta_1(X)/\rho$, $\omega_{41} = \rho/\beta_1(X)$, $\omega_{42} = \beta_1(X)/S_X$, $\omega_{43} = S_X/\beta_1(X)$, $\omega_{44} = \beta_1(X)/M_d(X)$, $\omega_{45} = M_d(X)/\beta_1(X)$, $\omega_{46} = \rho/S_X$, $\omega_{47} = S_X/\rho$, $\omega_{48} = \rho/M_d(X)$, $\omega_{49} = M_d(X)/\rho$, $\omega_{50} = S_X/M_d(X)$, and $\omega_{51} = M_d(X)/S_X$.

References

  1. Cochran, W.G. Sampling Techniques, 3rd ed.; John Wiley and Sons: New York, NY, USA, 1977. [Google Scholar]
  2. Singh, H.P. Estimation of normal parent parameters with known coefficient of variation. Gujarat Stat. Rev. 1986, 13, 57–62. [Google Scholar]
  3. Wolter, K.M. Introduction to Variance Estimation, 2nd ed.; Springer: New York, NY, USA, 1985. [Google Scholar]
  4. Isaki, C.T. Variance estimation using auxiliary information. J. Am. Stat. Assoc. 1983, 78, 117–123. [Google Scholar] [CrossRef]
  5. Kadilar, C.; Cingi, H. Ratio estimators for population variance in simple and stratified sampling. Appl. Math. Comput. 2006, 173, 1047–1058. [Google Scholar] [CrossRef]
  6. Khan, M.; Shabbir, J. A ratio type estimator for the estimation of population variance using quartiles of an auxiliary variable. J. Stat. Appl. Probab. 2013, 2, 319–325. [Google Scholar] [CrossRef]
  7. Maqbool, S.; Javaid, S. Variance estimation using linear combination of tri-mean and quartile average. Am. J. Biol. Environ. Stat. 2017, 3, 5–9. [Google Scholar] [CrossRef] [Green Version]
  8. Muneer, S.; Khalil, A.; Shabbir, J.; Narjis, G. A new improved ratio-product type exponential estimator of finite population variance using auxiliary information. J. Stat. Comput. Simul. 2018, 88, 3179–3192. [Google Scholar] [CrossRef]
  9. Naz, F.; Abid, M.; Nawaz, T.; Pang, T. Enhancing the efficiency of the ratio-type estimators of population variance with a blend of information on robust location measures. Sci. Iran. 2019. [Google Scholar] [CrossRef] [Green Version]
  10. Solanki, R.S.; Singh, H.P.; Pal, S.K. Improved ratio-type estimators of finite population variance using quartiles. Hacet. J. Math. Stat. 2015, 44, 747–754. [Google Scholar] [CrossRef]
  11. Subramani, J.; Kumarapandiyan, G. Variance estimation using median of the auxiliary variable. Int. J. Probab. Stat. 2012, 1, 36–40. [Google Scholar] [CrossRef] [Green Version]
  12. Estimation of Variance Using Deciles of an Auxiliary Variable. Available online: https://www.researchgate.net/publication/274008488_7_Subramani_J_and_Kumarapandiyan_G_2012c_Estimation_of_variance_using_deciles_of_an_auxiliary_variable_Proceedings_of_International_Conference_on_Frontiers_of_Statistics_and_Its_Applications_Bonfring_ (accessed on 10 November 2019).
  13. Subramani, J.; Kumarapandiyan, G. Variance estimation using quartiles and their functions of an auxiliary variable. Int. J. Stat. Appl. 2012, 2, 67–72. [Google Scholar] [CrossRef]
  14. Upadhyaya, L.N.; Singh, H.P. An estimator for population variance that utilizes the kurtosis of an auxiliary variable in sample surveys. Vikram Math. J. 1999, 19, 14–17. [Google Scholar]
  15. Yaqub, M.; Shabbir, J. An improved class of estimators for finite population variance. Hacet. J. Math. Stat. 2016, 45, 1641–1660. [Google Scholar] [CrossRef]
  16. Subramani, J.; Kumarapandiyan, G. A class of modified ratio estimators for estimation of population variance. J. Appl. Math. Stat. Inform. 2015, 11, 91–114. [Google Scholar] [CrossRef] [Green Version]
  17. Subramani, J.; Kumarapandiyan, G. Estimation of variance using known coefficient of variation and median of an auxiliary variable. J. Mod. Appl. Stat. Methods 2013, 12, 58–64. [Google Scholar] [CrossRef]
  18. Upadhyaya, L.N.; Singh, H.P. Estimation of population standard deviation using auxiliary information. Am. J. Math. Manag. Sci. 2001, 21, 345–358. [Google Scholar] [CrossRef]
  19. Riaz, M. A dispersion control chart. Commun. Stat.—Simul. Comput. 2008, 37, 1239–1261. [Google Scholar] [CrossRef]
  20. Nazir, H.Z.; Riaz, M.; Does, R.J.M.M. Robust CUSUM control charting for process dispersion. Qual. Reliab. Eng. Int. 2015, 31, 369–379. [Google Scholar] [CrossRef]
  21. Downton, F. Linear estimates with polynomial coefficients. Biometrika 1966, 53, 129–141. [Google Scholar] [CrossRef]
  22. Gini, C. Variabilità e mutabilità. In Memorie di Metodologica Statistica; Pizetti, E., Salvemini, T., Eds.; Libreria Eredi Virgilio Veschi: Rome, Italy, 1912. [Google Scholar]
  23. Hampel, F.R. The influence curve and its role in robust estimation. J. Am. Stat. Assoc. 1974, 69, 383–393. [Google Scholar] [CrossRef]
  24. Shamos, M.I. Geometry and statistics: problems at the interface. In New Directions and Recent Results in Algorithms and Complexity; Traub, J.F., Ed.; Academic Press: New York, NY, USA, 1976; pp. 251–280. [Google Scholar]
  25. Bickel, P.J.; Lehmann, E.L. Descriptive statistics for nonparametric models IV. spread. In Contributions to Statistics, Hajek Memorial Volume; Jureckova, J., Ed.; Academia: Prague, Czechia, 1979; pp. 33–40. [Google Scholar] [CrossRef] [Green Version]
  26. Rousseeuw, P.J.; Croux, C. Alternatives to the median absolute deviation. J. Am. Stat. Assoc. 1993, 88, 1273–1283. [Google Scholar] [CrossRef]
  27. Wu, C.; Zhao, Y.; Wang, Z. The median absolute deviations and their applications to Shewhart control charts. Commun. Stat.—Simul. Comput. 2002, 31, 425–442. [Google Scholar] [CrossRef]
  28. Croux, C.; Rousseeuw, P.J. A class of high-breakdown scale estimators based on subranges. Commun. Stat.—Theory Methods 1992, 21, 1935–1951. [Google Scholar] [CrossRef]
  29. Abid, M.; Abbas, N.; Nazir, H.Z.; Lin, Z. Enhancing the mean ratio estimators for estimating population mean using non-conventional location parameters. Rev. Colomb. Estadística 2016, 39, 63–79. [Google Scholar] [CrossRef]
  30. Searls, D.T. Utilization of known coefficient of kurtosis in the estimation procedure of variance. J. Am. Stat. Assoc. 1964, 59, 1225–1226. [Google Scholar] [CrossRef]
  31. Singh, D.; Chaudhary, F.S. Theory and Analysis of Sample Survey Designs; New Age International Publisher: New Dehli, India, 1986. [Google Scholar]
Figure 1. Boxplot of population-I.
Figure 2. Boxplot of population-II.
Figure 3. Comparison of PREs of existing and proposed estimators of variance for Population-I.
Figure 4. Comparison of PREs of existing and proposed estimators of variance for Population-II.
Table 1. Some existing estimators of variance with their bias and mean squared error.
EstimatorProposed ByB(.)MSE(.)
S ^ 1 2 = s y 2 ( S X 2 + β 2 ( X ) s x 2 + β 2 ( X ) ) Upadhyaya and Singh [14] η S Y 2 γ 1 { γ 1 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 1 = S X 2 S X 2 + β 2 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 1 2 ( β 2 ( X ) 1 ) 2 γ 1 ( λ 22 1 ) }
S ^ 2 2 = s y 2 ( S X 2 + C X s x 2 + C X ) Kadilar and Cingi [5] η S Y 2 γ 2 { γ 2 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 2 = S X 2 S X 2 + C X η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 2 2 ( β 2 ( X ) 1 ) 2 γ 2 ( λ 22 1 ) }
S ^ 3 2 = s y 2 ( S X 2 + M d ( X ) s x 2 + M d ( X ) ) Subramani and Kumarapandiyan [11] η S Y 2 γ 3 { γ 3 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 3 = S X 2 S X 2 + M d ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 3 2 ( β 2 ( X ) 1 ) 2 γ 3 ( λ 22 1 ) }
S ^ 4 2 = s y 2 ( S X 2 + Q 1 ( X ) s x 2 + Q 1 ( X ) ) Subramani and Kumarapandiyan [13] η S Y 2 γ 4 { γ 4 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 4 = S X 2 S X 2 + Q 1 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 4 2 ( β 2 ( X ) 1 ) 2 γ 4 ( λ 22 1 ) }
S ^ 5 2 = s y 2 ( S X 2 + Q 3 ( X ) s x 2 + Q 3 ( X ) ) Subramani and Kumarapandiyan [13] η S Y 2 γ 5 { γ 5 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 5 = S X 2 S X 2 + Q 3 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 5 2 ( β 2 ( X ) 1 ) 2 γ 5 ( λ 22 1 ) }
S ^ 6 2 = s y 2 ( S X 2 + Q r ( X ) s x 2 + Q r ( X ) ) Subramani and Kumarapandiyan [13] η S Y 2 γ 6 { γ 6 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 6 = S X 2 S X 2 + Q r ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 6 2 ( β 2 ( X ) 1 ) 2 γ 6 ( λ 22 1 ) }
S ^ 7 2 = s y 2 ( S X 2 + Q d ( X ) s x 2 + Q d ( X ) ) Subramani and Kumarapandiyan [13] η S Y 2 γ 7 { γ 7 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 7 = S X 2 S X 2 + Q d ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 7 2 ( β 2 ( X ) 1 ) 2 γ 7 ( λ 22 1 ) }
S ^ 8 2 = s y 2 ( S X 2 + Q a ( X ) s x 2 + Q a ( X ) ) Subramani and Kumarapandiyan [13] η S Y 2 γ 8 { γ 8 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 8 = S X 2 S X 2 + Q a ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 8 2 ( β 2 ( X ) 1 ) 2 γ 8 ( λ 22 1 ) }
S ^ 9 2 = s y 2 ( S X 2 + D 1 ( X ) s x 2 + D 1 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 9 { γ 9 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 9 = S X 2 S X 2 + D 1 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 9 2 ( β 2 ( X ) 1 ) 2 γ 9 ( λ 22 1 ) }
S ^ 10 2 = s y 2 ( S X 2 + D 2 ( X ) s x 2 + D 2 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 10 { γ 10 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 10 = S X 2 S X 2 + D 2 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 10 2 ( β 2 ( X ) 1 ) 2 γ 10 ( λ 22 1 ) }
S ^ 11 2 = s y 2 ( S X 2 + D 3 ( X ) s x 2 + D 3 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 11 { γ 11 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 11 = S X 2 S X 2 + D 3 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 11 2 ( β 2 ( X ) 1 ) 2 γ 11 ( λ 22 1 ) }
S ^ 12 2 = s y 2 ( S X 2 + D 4 ( X ) s x 2 + D 4 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 12 { γ 12 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 12 = S X 2 S X 2 + D 4 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 12 2 ( β 2 ( X ) 1 ) 2 γ 12 ( λ 22 1 ) }
S ^ 13 2 = s y 2 ( S X 2 + D 5 ( X ) s x 2 + D 5 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 13 { γ 13 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 13 = S X 2 S X 2 + D 5 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 13 2 ( β 2 ( X ) 1 ) 2 γ 13 ( λ 22 1 ) }
S ^ 14 2 = s y 2 ( S X 2 + D 6 ( X ) s x 2 + D 6 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 14 { γ 14 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 14 = S X 2 S X 2 + D 6 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 14 2 ( β 2 ( X ) 1 ) 2 γ 14 ( λ 22 1 ) }
S ^ 15 2 = s y 2 ( S X 2 + D 7 ( X ) s x 2 + D 7 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 15 { γ 15 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 15 = S X 2 S X 2 + D 7 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 15 2 ( β 2 ( X ) 1 ) 2 γ 15 ( λ 22 1 ) }
S ^ 16 2 = s y 2 ( S X 2 + D 8 ( X ) s x 2 + D 8 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 16 { γ 16 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 16 = S X 2 S X 2 + D 8 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 16 2 ( β 2 ( X ) 1 ) 2 γ 16 ( λ 22 1 ) }
S ^ 17 2 = s y 2 ( S X 2 + D 9 ( X ) s x 2 + D 9 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 17 { γ 17 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 17 = S X 2 S X 2 + D 9 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 17 2 ( β 2 ( X ) 1 ) 2 γ 17 ( λ 22 1 ) }
S ^ 18 2 = s y 2 ( S X 2 + D 10 ( X ) s x 2 + D 10 ( X ) ) Subramani and Kumarapandiyan [12] η S Y 2 γ 18 { γ 18 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 18 = S X 2 S X 2 + D 10 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 18 2 ( β 2 ( X ) 1 ) 2 γ 18 ( λ 22 1 ) }
S ^ 19 2 = s y 2 ( C X S X 2 + β 2 ( X ) C X s x 2 + β 2 ( X ) ) Kadilar and Cingi [5] η S Y 2 γ 19 { γ 19 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 19 = C X S X 2 C X S X 2 + β 2 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 19 2 ( β 2 ( X ) 1 ) 2 γ 19 ( λ 22 1 ) }
S ^ 20 2 = s y 2 ( β 2 ( X ) S X 2 + C X β 2 ( X ) s x 2 + C X ) Kadilar and Cingi [5] η S Y 2 γ 20 { γ 20 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 20 = β 2 ( X ) S X 2 β 2 ( X ) S X 2 + C X η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 20 2 ( β 2 ( X ) 1 ) 2 γ 20 ( λ 22 1 ) }
S ^ 21 2   = s y 2 ( C X S X 2 + M d ( X ) C X s x 2 + M d ( X ) ) Subramani and Kumarapandiyan [17] η S Y 2 γ 21 { γ 21 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 21 = C X S X 2 C X S X 2 + M d ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 21 2 ( β 2 ( X ) 1 ) 2 γ 21 ( λ 22 1 ) }
S ^ 22 2 = s y 2 ( ρ S X 2 + Q 3 ( X ) ρ s x 2 + Q 3 ( X ) ) Khan and Shabbir [6] η S Y 2 γ 22 { γ 22 ( β 2 ( X ) 1 ) ( λ 22 1 ) } ; where γ 22 = ρ S X 2 ρ S X 2 + Q 3 ( X ) η S Y 4   { ( β 2 ( Y ) 1 ) +   γ 22 2 ( β 2 ( X ) 1 ) 2 γ 22 ( λ 22 1 ) }
S ^ 23 2 = s y 2 X ¯ x ¯ Upadhyaya and Singh [18] B ( S ^ 23 2 ) = η S Y 2 ( C X 2 λ 21 C X ) M S E ( S ^ 23 2 ) = η S Y 4   { ( β 2 ( Y ) 1 ) +   C X 2 2   λ 21 C X }
Table 2. Some new members of proposed class-I estimators.
Estimator | Value of Constant
φ ψ δ ν
S ^ p 1 1 2 = L s y 2 [ X ¯ + S P W X x ¯ + S P W X ] X ¯ X ¯ + S P W X 1 S P W X 1 S P W X
S ^ p 1 2 2 = L s y 2 [ X ¯ + G I N X x ¯ + G I N X ] X ¯ X ¯ + G I N X 1 G I N X 1 G I N X
S ^ p 1 3 2 = L s y 2 [ X ¯ + D O W X x ¯ + D O W X ] X ¯ X ¯ + D O W X 1 D O W X 1 D O W X
S ^ p 1 4 2 = L s y 2 [ X ¯ + S I Q R X x ¯ + S I Q R X ] X ¯ X ¯ + S I Q R X 1 S I Q R X 1 S I Q R X
S ^ p 1 5 2 = L s y 2 [ X ¯ + M D X x ¯ + M D X ] X ¯ X ¯ + M D X 1 M D X 1 M D X
S ^ p 1 6 2 = L s y 2 [ X ¯ + A A D M X x ¯ + A A D M X ] X ¯ X ¯ + A A D M X 1 A A D M X 1 A A D M X
S ^ p 1 7 2 = L s y 2 [ X ¯ + M A D X x ¯ + M A D X ] X ¯ X ¯ + M A D X 1 M A D X 1 M A D X
S ^ p 1 8 2 = L s y 2 [ X ¯ + M A D M X x ¯ + M A D M X ] X ¯ X ¯ + M A D M X 1 M A D M X 1 M A D M X
S ^ p 1 9 2 = L s y 2 [ X ¯ + Q n X x ¯ + Q n X ] X ¯ X ¯ + Q n X 1 Q n X 1 Q n X
S ^ p 1 10 2 = L s y 2 [ X ¯ + S n X x ¯ + S n X ] X ¯ X ¯ + S n X 1 S n X 1 S n X
S ^ p 1 11 2 = L s y 2 [ X ¯ + B n X x ¯ + B n X ] X ¯ X ¯ + B n X 1 B n X 1 B n X
S ^ p 1 12 2 = L s y 2 [ X ¯ + T n X x ¯ + T n X ] X ¯ X ¯ + T n X . 1 T n X 1 T n X
S ^ p 1 13 2 = L s y 2 [ X ¯ + S r X x ¯ + S r X ] X ¯ X ¯ + S r X 1 S r X 1 S r X
S ^ p 1 14 2 = L s y 2 [ X ¯ + I D R X x ¯ + I D R X ] X ¯ X ¯ + I D R X 1 I D R X 1 I D R X
S ^ p 1 15 2 = L s y 2 [ C X X ¯ + S P W X C X x ¯ + S P W X ] C X X ¯ C X X ¯ + S P W X C X S P W X C X S P W X
S ^ p 1 16 2 = L s y 2 [ C X X ¯ + G I N X C X x ¯ + G I N X ] C X X ¯ C X X ¯ + G I N X C X G I N X C X G I N X
S ^ p 1 17 2 = L s y 2 [ C X X ¯ + D O W X C X x ¯ + D O W X ] C X X ¯ C X X ¯ + D O W X C X D O W X C X D O W X
S ^ p 1 18 2 = L s y 2 [ C X X ¯ + S I Q R X C X x ¯ + S I Q R X ] C X X ¯ C X X ¯ + S I Q R X C X S I Q R X C X S I Q R X
S ^ p 1 19 2 = L s y 2 [ C X X ¯ + M D X C X x ¯ + M D X ] C X X ¯ C X X ¯ + M D X C X M D X C X M D X
S ^ p 1 20 2 = L s y 2 [ C X X ¯ + A A D M X C X x ¯ + A A D M X ] C X X ¯ C X X ¯ + A A D M X C X A A D M X C X A A D M X
S ^ p 1 21 2 = L s y 2 [ C X X ¯ + M A D X C X x ¯ + M A D X ] C X X ¯ C X X ¯ + M A D X C X M A D X C X M A D X
S ^ p 1 22 2 = L s y 2 [ C X X ¯ + M A D M X C X x ¯ + M A D M X ] C X X ¯ C X X ¯ + M A D M X C X M A D M X C X M A D M X
S ^ p 1 23 2 = L s y 2 [ C X X ¯ + Q n X C X x ¯ + Q n X ] C X X ¯ C X X ¯ + Q n X C X Q n X C X Q n X
S ^ p 1 24 2 = L s y 2 [ C X X ¯ + S n X C X x ¯ + S n X ] C X X ¯ C X X ¯ + S n X C X S n X C X S n X
S ^ p 1 25 2 = L s y 2 [ C X X ¯ + B n X C X x ¯ + B n X ] C X X ¯ C X X ¯ + B n X C X B n X C X B n X
S ^ p 1 26 2 = L s y 2 [ C X X ¯ + T n X C X x ¯ + T n X ] C X X ¯ C X X ¯ + T n X C X T n X C X T n X
S ^ p 1 27 2 = L s y 2 [ C X X ¯ + S r X C X x ¯ + S r X ] C X X ¯ C X X ¯ + S r X C X S r X C X S r X
S ^ p 1 28 2 = L s y 2 [ C X X ¯ + I D R X C X x ¯ + I D R X ] C X X ¯ C X X ¯ + I D R X C X I D R X C X I D R X
S ^ p 1 29 2 = L s y 2 [ ρ X ¯ + S P W X ρ x ¯ + S P W X ] ρ X ¯ ρ X ¯ + S P W X ρ S P W X ρ S P W X
S ^ p 1 30 2 = L s y 2 [ ρ X ¯ + G I N X ρ x ¯ + G I N X ] ρ X ¯ ρ X ¯ + G I N X ρ G I N X ρ G I N X
S ^ p 1 31 2 = L s y 2 [ ρ X ¯ + D O W X ρ x ¯ + D O W X ] ρ X ¯ ρ X ¯ + D O W X ρ D O W X ρ D O W X
S ^ p 1 32 2 = L s y 2 [ ρ X ¯ + S I Q R X ρ x ¯ + S I Q R X ] ρ X ¯ ρ X ¯ + S I Q R X ρ S I Q R X ρ S I Q R X
S ^ p 1 33 2 = L s y 2 [ ρ X ¯ + M D X ρ x ¯ + M D X ] ρ X ¯ ρ X ¯ + M D X ρ M D X ρ M D X
S ^ p 1 34 2 = L s y 2 [ ρ X ¯ + A A D M X ρ x ¯ + A A D M X ] ρ X ¯ ρ X ¯ + A A D M X ρ A A D M X ρ A A D M X
S ^ p 1 35 2 = L s y 2 [ ρ X ¯ + M A D X ρ x ¯ + M A D X ] ρ X ¯ ρ X ¯ + M A D X ρ M A D X ρ M A D X
S ^ p 1 36 2 = L s y 2 [ ρ X ¯ + M A D M X ρ x ¯ + M A D M X ] ρ X ¯ ρ X ¯ + M A D M X ρ M A D M X ρ M A D M X
S ^ p 1 37 2 = L s y 2 [ ρ X ¯ + Q n X ρ x ¯ + Q n X ] ρ X ¯ ρ X ¯ + Q n X ρ Q n X ρ Q n X
S ^ p 1 38 2 = L s y 2 [ ρ X ¯ + S n X ρ x ¯ + S n X ] ρ X ¯ ρ X ¯ + S n X ρ S n X ρ S n X
S ^ p 1 39 2 = L s y 2 [ ρ X ¯ + B n X ρ x ¯ + B n X ] ρ X ¯ ρ X ¯ + B n X ρ B n X ρ B n X
S ^ p 1 40 2 = L s y 2 [ ρ X ¯ + T n X ρ x ¯ + T n X ] ρ X ¯ ρ X ¯ + T n X ρ T n X ρ T n X
S ^ p 1 41 2 = L s y 2 [ ρ X ¯ + S r X ρ x ¯ + S r X ] ρ X ¯ ρ X ¯ + S r X ρ S r X ρ S r X
S ^ p 1 42 2 = L s y 2 [ ρ X ¯ + I D R X ρ x ¯ + I D R X ] ρ X ¯ ρ X ¯ + I D R X ρ I D R X ρ I D R X
S ^ p 1 43 2 = L s y 2 [ β 2 ( X ) X ¯ + S P W X β 2 ( X ) x ¯ + S P W X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + S P W X β 2 ( X ) S P W X β 2 ( X ) S P W X
S ^ p 1 44 2 = L s y 2 [ β 2 ( X ) X ¯ + G I N X β 2 ( X ) x ¯ + G I N X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + G I N X β 2 ( X ) G I N X β 2 ( X ) G I N X
S ^ p 1 45 2 = L s y 2 [ β 2 ( X ) X ¯ + D O W X β 2 ( X ) x ¯ + D O W X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + D O W X β 2 ( X ) D O W X β 2 ( X ) D O W X
S ^ p 1 46 2 = L s y 2 [ β 2 ( X ) X ¯ + S I Q R X β 2 ( X ) x ¯ + S I Q R X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + S I Q R X β 2 ( X ) S I Q R X β 2 ( X ) S I Q R X
S ^ p 1 47 2 = L s y 2 [ β 2 ( X ) X ¯ + M D X β 2 ( X ) x ¯ + M D X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + M . D X β 2 ( X ) M D X β 2 ( X ) M D X
S ^ p 1 48 2 = L s y 2 [ β 2 ( X ) X ¯ + A A D M X β 2 ( X ) x ¯ + A A D M X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + A A D M X β 2 ( X ) A A D M X β 2 ( X ) A A D M X
S ^ p 1 49 2 = L s y 2 [ β 2 ( X ) X ¯ + M A D X β 2 ( X ) x ¯ + M A D X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + M A D X β 2 ( X ) M A D X β 2 ( X ) M A D X
S ^ p 1 50 2 = L s y 2 [ β 2 ( X ) X ¯ + M A D M X β 2 ( X ) x ¯ + M A D M X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + M A D M X β 2 ( X ) M A D M X β 2 ( X ) M A D M X
S ^ p 1 51 2 = L s y 2 [ β 2 ( X ) X ¯ + Q n X β 2 ( X ) x ¯ + Q n X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + Q n X β 2 ( X ) Q n X β 2 ( X ) Q n X
S ^ p 1 52 2 = L s y 2 [ β 2 ( X ) X ¯ + S n X β 2 ( X ) x ¯ + S n X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + S n X β 2 ( X ) S n X β 2 ( X ) S n X
S ^ p 1 53 2 = L s y 2 [ β 2 ( X ) X ¯ + B n X β 2 ( X ) x ¯ + B n X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + B n X β 2 ( X ) B n X β 2 ( X ) B n X
S ^ p 1 54 2 = L s y 2 [ β 2 ( X ) X ¯ + T n X β 2 ( X ) x ¯ + T n X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + T n X β 2 ( X ) T n X β 2 ( X ) T n X
S ^ p 1 55 2 = L s y 2 [ β 2 ( X ) X ¯ + S r X β 2 ( X ) x ¯ + S r X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + S r X β 2 ( X ) S r X β 2 ( X ) S r X
S ^ p 1 56 2 = L s y 2 [ β 2 ( X ) X ¯ + I D R X β 2 ( X ) x ¯ + I D R X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + I D R X β 2 ( X ) I D R X β 2 ( X ) I D R X
Table 3. Some new members of proposed class-II estimators.
Estimator | Value of Constant
α ζ τ υ
S ^ p 2 1 2 = M { s y 2 [ X ¯ + S P W X x ¯ + S P W X ] X ¯ X ¯ + S P W X + b ( X ¯ x ¯ ) } 1 S P W X 1 S P W X
S ^ p 2 2 2 = M { s y 2 [ X ¯ + G I N X x ¯ + G I N X ] X ¯ X ¯ + G I N X + b ( X ¯ x ¯ ) } 1 G I N X 1 G I N X
S ^ p 2 3 2 = M { s y 2 [ X ¯ + D O W X x ¯ + D O W X ] X ¯ X ¯ + D O W X + b ( X ¯ x ¯ ) } 1 D O W X 1 D O W X
S ^ p 2 4 2 = M { s y 2 [ X ¯ + S I Q R X x ¯ + S I Q R X ] X ¯ X ¯ + S I Q R X + b ( X ¯ x ¯ ) } 1 S I Q R X 1 S I Q R X
S ^ p 2 5 2 = M { s y 2 [ X ¯ + M D X x ¯ + M D X ] X ¯ X ¯ + M . D X + b ( X ¯ x ¯ ) } 1 M D X 1 M D X
S ^ p 2 6 2 = M { s y 2 [ X ¯ + A A D M X x ¯ + A A D M X ] X ¯ X ¯ + A A D M X + b ( X ¯ x ¯ ) } 1 A A D M X 1 A A D M X
S ^ p 2 7 2 = M { s y 2 [ X ¯ + M A D X x ¯ + M A D X ] X ¯ X ¯ + M A D X + b ( X ¯ x ¯ ) } 1 M A D X 1 M A D X
S ^ p 2 8 2 = M { s y 2 [ X ¯ + M A D M X x ¯ + M A D M X ] X ¯ X ¯ + M A D M X + b ( X ¯ x ¯ ) } 1 M A D M X 1 M A D M X
S ^ p 2 9 2 = M { s y 2 [ X ¯ + Q n X x ¯ + Q n X ] X ¯ X ¯ + Q n X + b ( X ¯ x ¯ ) } 1 Q n X 1 Q n X
S ^ p 2 10 2 = M { s y 2 [ X ¯ + S n X x ¯ + S n X ] X ¯ X ¯ + S n X + b ( X ¯ x ¯ ) } 1 S n X 1 S n X
S ^ p 2 11 2 = M { s y 2 [ X ¯ + B n X x ¯ + B n X ] X ¯ X ¯ + B n X + b ( X ¯ x ¯ ) } 1 B n X 1 B n X
S ^ p 2 12 2 = M { s y 2 [ X ¯ + T n X x ¯ + T n X ] X ¯ X ¯ + T n X + b ( X ¯ x ¯ ) } 1 T n X 1 T n X
S ^ p 2 13 2 = M { s y 2 [ X ¯ + S r X x ¯ + S r X ] X ¯ X ¯ + S r X + b ( X ¯ x ¯ ) } 1 S r X 1 S r X
S ^ p 2 14 2 = M { s y 2 [ X ¯ + I D R X x ¯ + I D R X ] X ¯ X ¯ + I D R X + b ( X ¯ x ¯ ) } 1 I D R X 1 I D R X
S ^ p 2 15 2 = M { s y 2 [ C X X ¯ + S P W X C X x ¯ + S P W X ] C X X ¯ C X X ¯ + S P W X + b ( X ¯ x ¯ ) } C X S P W X C X S P W X
S ^ p 2 16 2 = M { s y 2 [ C X X ¯ + G I N X C X x ¯ + G I N X ] C X X ¯ C X X ¯ + G I N X + b ( X ¯ x ¯ ) } C X G I N X C X G I N X
S ^ p 2 17 2 = M { s y 2 [ C X X ¯ + D O W X C X x ¯ + D O W X ] C X X ¯ C X X ¯ + D O W X + b ( X ¯ x ¯ ) } C X D O W X C X D O W X
S ^ p 2 18 2 = M { s y 2 [ C X X ¯ + S I Q R X C X x ¯ + S I Q R X ] C X X ¯ C X X ¯ + S I Q R X + b ( X ¯ x ¯ ) } C X S I Q R X C X S I Q R X
S ^ p 2 19 2 = M { s y 2 [ C X X ¯ + M D X C X x ¯ + M D X ] C X X ¯ C X X ¯ + M D X + b ( X ¯ x ¯ ) } C X M D X C X M D X
S ^ p 2 20 2 = M { s y 2 [ C X X ¯ + A A D M X C X x ¯ + A A D M X ] C X X ¯ C X X ¯ + A A D M X + b ( X ¯ x ¯ ) } C X A A D M X C X A A D M X
S ^ p 2 21 2 = M { s y 2 [ C X X ¯ + M A D X C X x ¯ + M A D X ] C X X ¯ C X X ¯ + M A D X + b ( X ¯ x ¯ ) } C X M A D X C X M A D X
S ^ p 2 22 2 = M { s y 2 [ C X X ¯ + M A D M X C X x ¯ + M A D M X ] C X X ¯ C X X ¯ + M A D M X + b ( X ¯ x ¯ ) } C X M A D M X C X M A D M X
S ^ p 2 23 2 = M { s y 2 [ C X X ¯ + Q n X C X x ¯ + Q n X ] C X X ¯ C X X ¯ + Q n X + b ( X ¯ x ¯ ) } C X Q n X C X Q n X
S ^ p 2 24 2 = M { s y 2 [ C X X ¯ + S n X C X x ¯ + S n X ] C X X ¯ C X X ¯ + S n X + b ( X ¯ x ¯ ) } C X S n X C X S n X
S ^ p 2 25 2 = M { s y 2 [ C X X ¯ + B n X C X x ¯ + B n X ] C X X ¯ C X X ¯ + B n X + b ( X ¯ x ¯ ) } C X B n X C X B n X
S ^ p 2 26 2 = M { s y 2 [ C X X ¯ + T n X C X x ¯ + T n X ] C X X ¯ C X X ¯ + T n X + b ( X ¯ x ¯ ) } C X T n X C X T n X
S ^ p 2 27 2 = M { s y 2 [ C X X ¯ + S r X C X x ¯ + S r X ] C X X ¯ C X X ¯ + S r X + b ( X ¯ x ¯ ) } C X S r X C X S r X
S ^ p 2 28 2 = M { s y 2 [ C X X ¯ + I D R X C X x ¯ + I D R X ] C X X ¯ C X X ¯ + I D R X + b ( X ¯ x ¯ ) } C X I D R X C X I D R X
S ^ p 2 29 2 = M { s y 2 [ ρ X ¯ + S P W X ρ x ¯ + S P W X ] ρ X ¯ ρ X ¯ + S P W X + b ( X ¯ x ¯ ) } ρ S P W X ρ S P W X
S ^ p 2 30 2 = M { s y 2 [ ρ X ¯ + G I N X ρ x ¯ + G I N X ] ρ X ¯ ρ X ¯ + G I N X + b ( X ¯ x ¯ ) } ρ G I N X ρ G I N X
S ^ p 2 31 2 = M { s y 2 [ ρ X ¯ + D O W X ρ x ¯ + D O W X ] ρ X ¯ ρ X ¯ + D O W X + b ( X ¯ x ¯ ) } ρ D O W X ρ D O W X
S ^ p 2 32 2 = M { s y 2 [ ρ X ¯ + S I Q R X ρ x ¯ + S I Q R X ] ρ X ¯ ρ X ¯ + S I Q R X + b ( X ¯ x ¯ ) } ρ S I Q R X ρ S I Q R X
S ^ p 2 33 2 = M { s y 2 [ ρ X ¯ + M D X ρ x ¯ + M D X ] ρ X ¯ ρ X ¯ + M . D X + b ( X ¯ x ¯ ) } ρ M D X ρ M D X
S ^ p 2 34 2 = M { s y 2 [ ρ X ¯ + A A D M X ρ x ¯ + A A D M X ] ρ X ¯ ρ X ¯ + A A D M X + b ( X ¯ x ¯ ) } ρ A A D M X ρ A A D M X
S ^ p 2 35 2 = M { s y 2 [ ρ X ¯ + M A D X ρ x ¯ + M A D X ] ρ X ¯ ρ X ¯ + M A D X + b ( X ¯ x ¯ ) } ρ M A D X ρ M A D X
S ^ p 2 36 2 = M { s y 2 [ ρ X ¯ + M A D M X ρ x ¯ + M A D M X ] ρ X ¯ ρ X ¯ + M A D M X + b ( X ¯ x ¯ ) } ρ M A D M X ρ M A D M X
S ^ p 2 37 2 = M { s y 2 [ ρ X ¯ + Q n X ρ x ¯ + Q n X ] ρ X ¯ ρ X ¯ + Q n X + b ( X ¯ x ¯ ) } ρ Q n X ρ Q n X
S ^ p 2 38 2 = M { s y 2 [ ρ X ¯ + S n X ρ x ¯ + S n X ] ρ X ¯ ρ X ¯ + S n X + b ( X ¯ x ¯ ) } ρ S n X ρ S n X
S ^ p 2 39 2 = M { s y 2 [ ρ X ¯ + B n X ρ x ¯ + B n X ] ρ X ¯ ρ X ¯ + B n X + b ( X ¯ x ¯ ) } ρ B n X ρ B n X
S ^ p 2 40 2 = M { s y 2 [ ρ X ¯ + T n X ρ x ¯ + T n X ] ρ X ¯ ρ X ¯ + T n X + b ( X ¯ x ¯ ) } ρ T n X ρ T n X
S ^ p 2 41 2 = M { s y 2 [ ρ X ¯ + S r X ρ x ¯ + S r X ] ρ X ¯ ρ X ¯ + S r X + b ( X ¯ x ¯ ) } ρ S r X ρ S r X
S ^ p 2 42 2 = M { s y 2 [ ρ X ¯ + I D R X ρ x ¯ + I D R X ] ρ X ¯ ρ X ¯ + I D R X + b ( X ¯ x ¯ ) } ρ I D R X ρ I D R X
S ^ p 2 43 2 = M { s y 2 [ β 2 ( X ) X ¯ + S P W X β 2 ( X ) x ¯ + S P W X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + S P W X + b ( X ¯ x ¯ ) } β 2 ( X ) S P W X β 2 ( X ) S P W X
S ^ p 2 44 2 = M { s y 2 [ β 2 ( X ) X ¯ + G I N X β 2 ( X ) x ¯ + G I N X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + G I N X + b ( X ¯ x ¯ ) } β 2 ( X ) G I N X β 2 ( X ) G I N X
S ^ p 2 45 2 = M { s y 2 [ β 2 ( X ) X ¯ + D O W X β 2 ( X ) x ¯ + D O W X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + D O W X + b ( X ¯ x ¯ ) } β 2 ( X ) D O W X β 2 ( X ) D O W X
S ^ p 2 46 2 = M { s y 2 [ β 2 ( X ) X ¯ + S I Q R X β 2 ( X ) x ¯ + S I Q R X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + S I Q R X + b ( X ¯ x ¯ ) } β 2 ( X ) S I Q R X β 2 ( X ) S I Q R X
S ^ p 2 47 2 = M { s y 2 [ β 2 ( X ) X ¯ + M D X β 2 ( X ) x ¯ + M D X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + M D X + b ( X ¯ x ¯ ) } β 2 ( X ) M D X β 2 ( X ) M D X
S ^ p 2 48 2 = M { s y 2 [ β 2 ( X ) X ¯ + A A D M X β 2 ( X ) x ¯ + A A D M X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + A A D M X + b ( X ¯ x ¯ ) } β 2 ( X ) A A D M X β 2 ( X ) A A D M X
S ^ p 2 49 2 = M { s y 2 [ β 2 ( X ) X ¯ + M A D X β 2 ( X ) x ¯ + M A D X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + M A D X + b ( X ¯ x ¯ ) } β 2 ( X ) M A D X β 2 ( X ) M A D X
S ^ p 2 50 2 = M { s y 2 [ β 2 ( X ) X ¯ + M A D M X β 2 ( X ) x ¯ + M A D M X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + M A D M X + b ( X ¯ x ¯ ) } β 2 ( X ) M A D M X β 2 ( X ) M A D M X
S ^ p 2 51 2 = M { s y 2 [ β 2 ( X ) X ¯ + Q n X β 2 ( X ) x ¯ + Q n X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + Q n X + b ( X ¯ x ¯ ) } β 2 ( X ) Q n X β 2 ( X ) Q n X
S ^ p 2 52 2 = M { s y 2 [ β 2 ( X ) X ¯ + S n X β 2 ( X ) x ¯ + S n X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + S n X + b ( X ¯ x ¯ ) } β 2 ( X ) S n X β 2 ( X ) S n X
S ^ p 2 53 2 = M { s y 2 [ β 2 ( X ) X ¯ + B n X β 2 ( X ) x ¯ + B n X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + B n X + b ( X ¯ x ¯ ) } β 2 ( X ) B n X β 2 ( X ) B n X
S ^ p 2 54 2 = M { s y 2 [ β 2 ( X ) X ¯ + T n X β 2 ( X ) x ¯ + T n X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + T n X + b ( X ¯ x ¯ ) } β 2 ( X ) T n X β 2 ( X ) T n X
S ^ p 2 55 2 = M { s y 2 [ β 2 ( X ) X ¯ + S r X β 2 ( X ) x ¯ + S r X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + S r X + b ( X ¯ x ¯ ) } β 2 ( X ) S r X β 2 ( X ) S r X
S ^ p 2 56 2 = M { s y 2 [ β 2 ( X ) X ¯ + I D R X β 2 ( X ) x ¯ + I D R X ] β 2 ( X ) X ¯ β 2 ( X ) X ¯ + I D R X + b ( X ¯ x ¯ ) } β 2 ( X ) I D R X β 2 ( X ) I D R X
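To make the construction of the class-II members concrete, the following minimal Python sketch evaluates one such estimator from a simple random sample. Because the excerpt does not fix them, it assumes that b is the sample least-squares regression slope of Y on X and that the constant M equals 1; the arguments lam and delta stand for the known auxiliary constants (for example C_X together with the population inter-quartile range of X), and the synthetic data in the usage lines are purely illustrative.

```python
import numpy as np

def class2_estimate(y, x, X_bar, lam, delta, b=None, M=1.0):
    """Evaluate M * { s_y^2 * [(lam*Xbar + delta)/(lam*xbar + delta)]**g + b*(Xbar - xbar) }
    with adaptive exponent g = lam*Xbar / (lam*Xbar + delta).
    Assumptions: b defaults to the sample regression slope of y on x; M defaults to 1."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    s_y2 = y.var(ddof=1)                         # sample variance of the study variable
    x_bar = x.mean()
    if b is None:
        b = np.cov(y, x)[0, 1] / x.var(ddof=1)   # assumed least-squares slope of y on x
    g = lam * X_bar / (lam * X_bar + delta)      # exponent used by the proposed class
    ratio = (lam * X_bar + delta) / (lam * x_bar + delta)
    return M * (s_y2 * ratio ** g + b * (X_bar - x_bar))


# Illustrative usage with synthetic data: lam = C_X, delta = population IQR of X.
rng = np.random.default_rng(2020)
X = rng.lognormal(mean=2.5, sigma=0.6, size=200)   # skewed auxiliary variable
Y = 3.0 * X + rng.normal(scale=5.0, size=200)      # correlated study variable
idx = rng.choice(200, size=40, replace=False)      # SRSWOR sample of n = 40
C_X = X.std(ddof=1) / X.mean()
IQR_X = np.subtract(*np.percentile(X, [75, 25]))
print(class2_estimate(Y[idx], X[idx], X.mean(), lam=C_X, delta=IQR_X))
```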
Table 4. Population characteristics.
Characteristic | Population-I | Population-II | Characteristic | Population-I | Population-II
N | 34 | 34 | D_4(X) | 12.94 | 11.12
n | 20 | 20 | D_5(X) | 15.00 | 14.25
Ȳ | 85.64 | 85.64 | D_6(X) | 22.72 | 21.02
X̄ | 20.89 | 19.94 | D_7(X) | 25.04 | 26.45
S_Y | 73.31 | 73.31 | D_8(X) | 33.56 | 30.44
C_Y | 0.86 | 0.86 | D_9(X) | 43.61 | 37.32
S_X | 15.05 | 15.02 | D_10(X) | 56.40 | 63.40
C_X | 0.72 | 0.75 | SPW_X | 14.28 | 14.02
β_1(X) | 0.87 | 1.28 | G_X | 16.60 | 16.30
β_2(X) | 2.91 | 3.73 | DOW_X | 14.72 | 14.45
β_2(Y) | 13.37 | 13.37 | IQR_X | 18.28 | 18.40
λ_22 | 1.15 | 1.22 | MD_X | 15.24 | 15.16
λ_21 | -0.31 | -0.29 | AADM_X | 11.70 | 11.33
M_d(X) | 15.00 | 14.25 | MAD_X | 10.95 | 9.79
Q_1(X) | 9.43 | 9.93 | MADM_X | 12.23 | 11.79
Q_3(X) | 25.48 | 27.80 | Q_n(X) | 13.78 | 12.66
Q_r(X) | 16.05 | 17.88 | S_n(X) | 12.05 | 11.21
Q_d(X) | 8.03 | 8.94 | B_n(X) | 13.21 | 14.05
Q_a(X) | 17.45 | 18.86 | T_n(X) | 11.66 | 10.49
D_1(X) | 7.03 | 6.06 | Sr_X | 12.90 | 10.67
D_2(X) | 7.68 | 8.30 | IDR_X | 36.58 | 31.26
D_3(X) | 10.82 | 10.27 | QD_X | 16.05 | 8.94
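The nonconventional dispersion measures summarised in Table 4 can be reproduced from the raw auxiliary data with standard order-statistic formulas. The sketch below uses the forms of Gini's mean difference, Downton's estimator, and the probability-weighted-moment estimator that are common in this literature; the exact scaling and quantile-interpolation conventions behind Table 4 may differ slightly, so the functions should be read as illustrative rather than definitive.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's mean difference via the ordered-data form: 2/(n(n-1)) * sum (2i - n - 1) x_(i)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2.0 / (n * (n - 1)) * np.sum((2 * i - n - 1) * x)

def downton(x):
    """Downton's estimator: 2*sqrt(pi)/(n(n-1)) * sum (i - (n+1)/2) x_(i)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sqrt(np.pi) / (n * (n - 1)) * np.sum((i - (n + 1) / 2) * x)

def prob_weighted_moment(x):
    """Probability-weighted-moment scale estimator: sqrt(pi)/n**2 * sum (2i - n - 1) x_(i)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sqrt(np.pi) / n ** 2 * np.sum((2 * i - n - 1) * x)

def median_abs_dev(x):
    """Median absolute deviation from the median (no consistency scaling)."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x)))

def avg_abs_dev_from_median(x):
    """Average absolute deviation taken about the median."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - np.median(x)))

def inter_quartile_range(x):
    return np.subtract(*np.percentile(x, [75, 25]))

def inter_decile_range(x):
    return np.subtract(*np.percentile(x, [90, 10]))
```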
Table 5. PREs of the existing estimators as compared to the traditional ratio estimator for population-I.
Estimator | PRE | Estimator | PRE | Estimator | PRE | Estimator | PRE
Ŝ²_1 | 100.32 | Ŝ²_2 | 100.08 | Ŝ²_3 | 101.53 | Ŝ²_4 | 100.99
Ŝ²_5 | 102.47 | Ŝ²_6 | 101.63 | Ŝ²_7 | 100.85 | Ŝ²_8 | 101.76
Ŝ²_9 | 100.75 | Ŝ²_10 | 100.82 | Ŝ²_11 | 101.13 | Ŝ²_12 | 101.34
Ŝ²_13 | 101.53 | Ŝ²_14 | 102.23 | Ŝ²_15 | 102.43 | Ŝ²_16 | 103.12
Ŝ²_17 | 103.85 | Ŝ²_18 | 104.69 | Ŝ²_19 | 100.44 | Ŝ²_20 | 100.03
Ŝ²_21 | 102.06 | Ŝ²_22 | 104.71 | Ŝ²_23 | 104.81 | Ŝ²_SK | 111.56
Table 6. PREs of the existing estimators as compared to the traditional ratio estimator for population-II.
Estimator | PRE | Estimator | PRE | Estimator | PRE | Estimator | PRE
Ŝ²_1 | 100.55 | Ŝ²_2 | 100.11 | Ŝ²_3 | 102.00 | Ŝ²_4 | 101.43
Ŝ²_5 | 103.65 | Ŝ²_6 | 102.47 | Ŝ²_7 | 101.29 | Ŝ²_8 | 102.59
Ŝ²_9 | 100.89 | Ŝ²_10 | 101.20 | Ŝ²_11 | 101.47 | Ŝ²_12 | 101.59
Ŝ²_13 | 102.00 | Ŝ²_14 | 102.86 | Ŝ²_15 | 103.50 | Ŝ²_16 | 103.95
Ŝ²_17 | 104.68 | Ŝ²_18 | 107.07 | Ŝ²_19 | 100.73 | Ŝ²_20 | 100.03
Ŝ²_21 | 102.60 | Ŝ²_22 | 106.99 | Ŝ²_23 | 109.46 | Ŝ²_SK | 117.10
Table 7. PREs of the proposed classes of estimators as compared to the traditional ratio estimator for population-I.
Proposed Class-I (first four columns) | Proposed Class-II (last four columns)
Estimator | PRE | Estimator | PRE | Estimator | PRE | Estimator | PRE
Ŝ²_p1(1) | 140.53 | Ŝ²_p1(29) | 141.28 | Ŝ²_p2(1) | 140.47 | Ŝ²_p2(29) | 141.23
Ŝ²_p1(2) | 140.71 | Ŝ²_p1(30) | 141.37 | Ŝ²_p2(2) | 140.65 | Ŝ²_p2(30) | 141.32
Ŝ²_p1(3) | 140.57 | Ŝ²_p1(31) | 141.30 | Ŝ²_p2(3) | 140.51 | Ŝ²_p2(31) | 141.25
Ŝ²_p1(4) | 140.81 | Ŝ²_p1(32) | 141.42 | Ŝ²_p2(4) | 140.76 | Ŝ²_p2(32) | 141.37
Ŝ²_p1(5) | 140.61 | Ŝ²_p1(33) | 141.32 | Ŝ²_p2(5) | 140.55 | Ŝ²_p2(33) | 141.27
Ŝ²_p1(6) | 140.28 | Ŝ²_p1(34) | 141.14 | Ŝ²_p2(6) | 140.21 | Ŝ²_p2(34) | 141.09
Ŝ²_p1(7) | 140.19 | Ŝ²_p1(35) | 141.08 | Ŝ²_p2(7) | 140.12 | Ŝ²_p2(35) | 141.03
Ŝ²_p1(8) | 140.34 | Ŝ²_p1(36) | 141.17 | Ŝ²_p2(8) | 140.27 | Ŝ²_p2(36) | 141.12
Ŝ²_p1(9) | 140.49 | Ŝ²_p1(37) | 141.26 | Ŝ²_p2(9) | 140.43 | Ŝ²_p2(37) | 141.21
Ŝ²_p1(10) | 140.32 | Ŝ²_p1(38) | 141.16 | Ŝ²_p2(10) | 140.25 | Ŝ²_p2(38) | 141.11
Ŝ²_p1(11) | 140.44 | Ŝ²_p1(39) | 141.23 | Ŝ²_p2(11) | 140.37 | Ŝ²_p2(39) | 141.18
Ŝ²_p1(12) | 140.27 | Ŝ²_p1(40) | 141.14 | Ŝ²_p2(12) | 140.21 | Ŝ²_p2(40) | 141.09
Ŝ²_p1(13) | 140.41 | Ŝ²_p1(41) | 141.21 | Ŝ²_p2(13) | 140.34 | Ŝ²_p2(41) | 141.16
Ŝ²_p1(14) | 141.36 | Ŝ²_p1(42) | 141.64 | Ŝ²_p2(14) | 141.32 | Ŝ²_p2(42) | 141.61
Ŝ²_p1(15) | 140.90 | Ŝ²_p1(43) | 139.04 | Ŝ²_p2(15) | 140.84 | Ŝ²_p2(43) | 138.96
Ŝ²_p1(16) | 141.04 | Ŝ²_p1(44) | 139.25 | Ŝ²_p2(16) | 140.98 | Ŝ²_p2(44) | 139.17
Ŝ²_p1(17) | 140.93 | Ŝ²_p1(45) | 139.08 | Ŝ²_p2(17) | 140.87 | Ŝ²_p2(45) | 139.00
Ŝ²_p1(18) | 141.12 | Ŝ²_p1(46) | 139.39 | Ŝ²_p2(18) | 141.07 | Ŝ²_p2(46) | 139.31
Ŝ²_p1(19) | 140.96 | Ŝ²_p1(47) | 139.13 | Ŝ²_p2(19) | 140.91 | Ŝ²_p2(47) | 139.05
Ŝ²_p1(20) | 140.69 | Ŝ²_p1(48) | 138.77 | Ŝ²_p2(20) | 140.63 | Ŝ²_p2(48) | 138.68
Ŝ²_p1(21) | 140.61 | Ŝ²_p1(49) | 138.68 | Ŝ²_p2(21) | 140.55 | Ŝ²_p2(49) | 138.60
Ŝ²_p1(22) | 140.73 | Ŝ²_p1(50) | 138.83 | Ŝ²_p2(22) | 140.68 | Ŝ²_p2(50) | 138.74
Ŝ²_p1(23) | 140.86 | Ŝ²_p1(51) | 138.99 | Ŝ²_p2(23) | 140.80 | Ŝ²_p2(51) | 138.91
Ŝ²_p1(24) | 140.72 | Ŝ²_p1(52) | 138.81 | Ŝ²_p2(24) | 140.66 | Ŝ²_p2(52) | 138.72
Ŝ²_p1(25) | 140.82 | Ŝ²_p1(53) | 138.93 | Ŝ²_p2(25) | 140.76 | Ŝ²_p2(53) | 138.85
Ŝ²_p1(26) | 140.68 | Ŝ²_p1(54) | 138.77 | Ŝ²_p2(26) | 140.62 | Ŝ²_p2(54) | 138.68
Ŝ²_p1(27) | 140.79 | Ŝ²_p1(55) | 138.90 | Ŝ²_p2(27) | 140.74 | Ŝ²_p2(55) | 138.82
Ŝ²_p1(28) | 141.51 | Ŝ²_p1(56) | 140.37 | Ŝ²_p2(28) | 141.47 | Ŝ²_p2(56) | 140.31
Table 8. PREs of the proposed classes of estimators as compared to the traditional ratio estimator for population-II.
Proposed Class-I (first four columns) | Proposed Class-II (last four columns)
Estimator | PRE | Estimator | PRE | Estimator | PRE | Estimator | PRE
Ŝ²_p1(1) | 147.30 | Ŝ²_p1(29) | 148.08 | Ŝ²_p2(1) | 147.23 | Ŝ²_p2(29) | 148.03
Ŝ²_p1(2) | 147.48 | Ŝ²_p1(30) | 148.17 | Ŝ²_p2(2) | 147.42 | Ŝ²_p2(30) | 148.12
Ŝ²_p1(3) | 147.33 | Ŝ²_p1(31) | 148.10 | Ŝ²_p2(3) | 147.27 | Ŝ²_p2(31) | 148.05
Ŝ²_p1(4) | 147.62 | Ŝ²_p1(32) | 148.23 | Ŝ²_p2(4) | 147.56 | Ŝ²_p2(32) | 148.18
Ŝ²_p1(5) | 147.39 | Ŝ²_p1(33) | 148.13 | Ŝ²_p2(5) | 147.33 | Ŝ²_p2(33) | 148.08
Ŝ²_p1(6) | 147.01 | Ŝ²_p1(34) | 147.92 | Ŝ²_p2(6) | 146.94 | Ŝ²_p2(34) | 147.87
Ŝ²_p1(7) | 146.79 | Ŝ²_p1(35) | 147.80 | Ŝ²_p2(7) | 146.72 | Ŝ²_p2(35) | 147.74
Ŝ²_p1(8) | 147.06 | Ŝ²_p1(36) | 147.95 | Ŝ²_p2(8) | 147.00 | Ŝ²_p2(36) | 147.90
Ŝ²_p1(9) | 147.16 | Ŝ²_p1(37) | 148.01 | Ŝ²_p2(9) | 147.10 | Ŝ²_p2(37) | 147.96
Ŝ²_p1(10) | 146.99 | Ŝ²_p1(38) | 147.91 | Ŝ²_p2(10) | 146.92 | Ŝ²_p2(38) | 147.86
Ŝ²_p1(11) | 147.30 | Ŝ²_p1(39) | 148.08 | Ŝ²_p2(11) | 147.24 | Ŝ²_p2(39) | 148.03
Ŝ²_p1(12) | 146.89 | Ŝ²_p1(40) | 147.86 | Ŝ²_p2(12) | 146.82 | Ŝ²_p2(40) | 147.80
Ŝ²_p1(13) | 146.92 | Ŝ²_p1(41) | 147.87 | Ŝ²_p2(13) | 146.85 | Ŝ²_p2(41) | 147.82
Ŝ²_p1(14) | 148.07 | Ŝ²_p1(42) | 148.41 | Ŝ²_p2(14) | 148.02 | Ŝ²_p2(42) |
Ŝ²_p1(15) | 147.63 | Ŝ²_p1(43) | 145.32 | Ŝ²_p2(15) | 147.57 | Ŝ²_p2(43) | 145.23
Ŝ²_p1(16) | 147.78 | Ŝ²_p1(44) | 145.54 | Ŝ²_p2(16) | 147.73 | Ŝ²_p2(44) | 145.45
Ŝ²_p1(17) | 147.66 | Ŝ²_p1(45) | 145.37 | Ŝ²_p2(17) | 147.61 | Ŝ²_p2(45) | 145.28
Ŝ²_p1(18) | 147.89 | Ŝ²_p1(46) | 145.73 | Ŝ²_p2(18) | 147.84 | Ŝ²_p2(46) | 145.64
Ŝ²_p1(19) | 147.71 | Ŝ²_p1(47) | 145.44 | Ŝ²_p2(19) | 147.66 | Ŝ²_p2(47) | 145.35
Ŝ²_p1(20) | 147.38 | Ŝ²_p1(48) | 145.03 | Ŝ²_p2(20) | 147.32 | Ŝ²_p2(48) | 144.94
Ŝ²_p1(21) | 147.20 | Ŝ²_p1(49) | 144.85 | Ŝ²_p2(21) | 147.13 | Ŝ²_p2(49) | 144.76
Ŝ²_p1(22) | 147.43 | Ŝ²_p1(50) | 145.09 | Ŝ²_p2(22) | 147.37 | Ŝ²_p2(50) | 144.99
Ŝ²_p1(23) | 147.52 | Ŝ²_p1(51) | 145.18 | Ŝ²_p2(23) | 147.46 | Ŝ²_p2(51) | 145.09
Ŝ²_p1(24) | 147.37 | Ŝ²_p1(52) | 145.02 | Ŝ²_p2(24) | 147.31 | Ŝ²_p2(52) | 144.93
Ŝ²_p1(25) | 147.63 | Ŝ²_p1(53) | 145.33 | Ŝ²_p2(25) | 147.57 | Ŝ²_p2(53) | 145.24
Ŝ²_p1(26) | 147.29 | Ŝ²_p1(54) | 144.94 | Ŝ²_p2(26) | 147.22 | Ŝ²_p2(54) | 144.84
Ŝ²_p1(27) | 147.31 | Ŝ²_p1(55) | 144.96 | Ŝ²_p2(27) | 147.25 | Ŝ²_p2(55) | 144.86
Ŝ²_p1(28) | 148.23 | Ŝ²_p1(56) | 146.56 | Ŝ²_p2(28) | 148.19 | Ŝ²_p2(56) | 146.48
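The PREs reported in Tables 5-8 are of the form PRE = 100 × MSE(traditional ratio estimator)/MSE(competing estimator). A rough empirical counterpart of such a comparison can be obtained by repeated simple random sampling without replacement from a fixed finite population, as in the sketch below. It assumes the Isaki-type ratio estimator s_y² S_X²/s_x² as the benchmark and a user-supplied callable for the competing estimator; because the tabulated PREs are based on the analytical mean square error expressions, simulated values will only approximate them.

```python
import numpy as np

def pre_by_simulation(Y, X, n, proposed, n_reps=10_000, seed=0):
    """Approximate PRE = 100 * MSE(traditional ratio estimator) / MSE(proposed),
    with both estimators recomputed over repeated SRSWOR samples of size n drawn
    from the fixed finite population (Y, X)."""
    rng = np.random.default_rng(seed)
    Y = np.asarray(Y, dtype=float)
    X = np.asarray(X, dtype=float)
    N = Y.size
    S_y2 = Y.var()                  # population variance with divisor N, as defined in the text
    S_x2 = X.var()
    sq_err_ratio, sq_err_prop = 0.0, 0.0
    for _ in range(n_reps):
        idx = rng.choice(N, size=n, replace=False)
        y, x = Y[idx], X[idx]
        ratio_est = y.var(ddof=1) * S_x2 / x.var(ddof=1)   # Isaki-type benchmark (assumed)
        sq_err_ratio += (ratio_est - S_y2) ** 2
        sq_err_prop += (proposed(y, x) - S_y2) ** 2
    return 100.0 * sq_err_ratio / sq_err_prop
```

Wrapping any member of the proposed classes as a callable proposed(y, x), with the population-level constants captured in a closure, then yields a simulated PRE comparable in spirit to the tabulated values.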
