Article

Variance Estimation under Some Transformation for Both Symmetric and Asymmetric Data

1 School of Mathematics and Statistics, Central South University, Changsha 410017, China
2 Department of Quantitative Methods, School of Business, King Faisal University, Al-Ahsa 31982, Saudi Arabia
3 Department of Statistics, Faculty of Science, University of Tabuk, Tabuk 71491, Saudi Arabia
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(8), 957; https://doi.org/10.3390/sym16080957
Submission received: 30 May 2024 / Revised: 15 July 2024 / Accepted: 19 July 2024 / Published: 26 July 2024
(This article belongs to the Section Mathematics)

Abstract

This article suggests an improved class of efficient estimators that use various transformations to estimate the finite population variance of the study variable. These estimators are particularly helpful in situations where the minimum and maximum values of the auxiliary variable are known and the ranks of the auxiliary variable are associated with the study variable; these ranks can then serve as an effective tool for improving the accuracy of the estimator. A first-order approximation is used to investigate the properties of the proposed class of estimators, such as the bias and mean squared error (MSE), under simple random sampling. A simulation study was carried out in order to measure the performance and verify the theoretical results. According to the results, the suggested class of estimators has a greater percent relative efficiency (PRE) than the other existing estimators in all of the simulated situations. Three symmetric and asymmetric datasets are examined in the application section in order to show the superior performance of the proposed class of estimators over the existing estimators.

1. Introduction

Survey sampling aims to collect accurate data on population characteristics while reducing cost, time, and human resources. Many populations contain a few extreme values, and estimating unknown population characteristics without accounting for such information can be quite sensitive; outcomes may be overstated or understated in certain cases. The accuracy of classical estimators therefore usually decreases, in terms of mean squared error (MSE), when extreme values are present in the dataset. One might be tempted to eliminate such values from the sample; in order to address this problem adequately, however, it is important to include this information in the process of estimating population characteristics. Given the known smallest and largest observations of the auxiliary variable, ref. [1] offered two estimators by transforming them linearly. This line of work was not studied further until ref. [2], who applied the concept of using extreme values to a variety of finite population mean estimators. Using stratified random sampling, ref. [3] improved the estimation of the finite population mean under extreme values. For more details, see refs. [4,5,6,7] and the references therein.
The estimation of the finite population variance is an important problem, and controlling variability in applications is challenging. This problem arises in biological and agricultural research, where high variability signals that the intended results are unexpected. By carefully using supplementary information, the accuracy of the estimators can be increased. Ref. [8] was the first to discuss the utilization of auxiliary information in the estimation of the population variance. Ref. [9] proposed some ratio- and product-type exponential estimators to estimate the population variance. Ref. [10] suggested several efficient classes of estimators of the population variance based on extreme-value transformations. Recently, ref. [11] used the concept of extreme values to introduce new classes of estimators for estimating the population variance with minimum mean squared errors. Ref. [12] provided some new classes of difference-cum-ratio-type exponential estimators for the finite population variance in stratified random sampling by utilizing the known information about extreme values. A variety of researchers have suggested many different kinds of estimators of the population variance, including refs. [13,14,15,16,17,18,19,20,21,22,23].
The rankings of the auxiliary variable are associated with the study variable when there is a relationship between the two variables. As a result, these rankings can be utilized as a valuable tool to enhance the accuracy of the estimator. This article retains the extreme values of the auxiliary variable in the data and utilizes them as auxiliary information. As discussed by refs. [10,11], this article aims to suggest an effective class of estimators for estimating the variance of a finite population. These estimators utilize the available information on the extreme values of an auxiliary variable, as well as the ranks of the auxiliary variable under simple random sampling, in order to enhance the accuracy.
This article is divided into the following sections. Section 2 presents the concepts and notations. This section also includes information on certain existing estimators. In Section 3, we explain our proposed class of estimators. Section 4 provides the mathematical comparison. In Section 5, we simulate six different artificial populations using various probability distributions to assess the theoretical findings described in Section 4. This section also includes numerical examples to support our theoretical results. Finally, Section 6 discusses the results, as well as suggestions for future studies.

2. Concepts and Notations

Consider a finite population of size N, denoted by $U = \{U_1, U_2, \ldots, U_N\}$. Let $y_i$, $x_i$, and $r_i$ represent the ith unit values of the study variable Y, the auxiliary variable X, and the ranks of the auxiliary variable R, respectively. For these variables, we define the population variances
$$S_y^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^2, \quad S_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(X_i - \bar{X}\right)^2, \quad S_r^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(R_i - \bar{R}\right)^2,$$
where
$$\bar{Y} = \frac{1}{N}\sum_{i=1}^{N}Y_i, \quad \bar{X} = \frac{1}{N}\sum_{i=1}^{N}X_i, \quad \bar{R} = \frac{1}{N}\sum_{i=1}^{N}R_i$$
are the population means of Y, X, and R, respectively.
The population coefficients of variation for Y, X, and R are defined as
$$C_y = \frac{S_y}{\bar{Y}}, \quad C_x = \frac{S_x}{\bar{X}}, \quad C_r = \frac{S_r}{\bar{R}},$$
respectively. Furthermore, the population correlation coefficients between Y and X, Y and R, and X and R are
$$\rho_{yx} = \frac{S_{yx}}{S_y S_x}, \quad \rho_{yr} = \frac{S_{yr}}{S_y S_r}, \quad \rho_{xr} = \frac{S_{xr}}{S_x S_r},$$
where
$$S_{yx} = \frac{1}{N-1}\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)\left(X_i - \bar{X}\right), \quad S_{yr} = \frac{1}{N-1}\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)\left(R_i - \bar{R}\right), \quad S_{xr} = \frac{1}{N-1}\sum_{i=1}^{N}\left(X_i - \bar{X}\right)\left(R_i - \bar{R}\right)$$
are the population covariances, respectively.
In order to estimate the unknown population variance $S_y^2$, we adopt simple random sampling without replacement to select a random sample of n units from the population. Let us define the sample variances
$$s_y^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2, \quad s_x^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2, \quad s_r^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(r_i - \bar{r}\right)^2,$$
where
$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n}y_i, \quad \bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_i, \quad \bar{r} = \frac{1}{n}\sum_{i=1}^{n}r_i$$
are the sample means of Y, X, and R, respectively. Additionally, the sample coefficients of variation are defined as
$$c_y = \frac{s_y}{\bar{y}}, \quad c_x = \frac{s_x}{\bar{x}}, \quad c_r = \frac{s_r}{\bar{r}},$$
where $s_y$, $s_x$, and $s_r$ denote the sample standard deviations, respectively.
For each estimator, we define the following error terms in order to obtain the biases and mean squared errors:
$$e_0 = \frac{s_y^2 - S_y^2}{S_y^2}, \quad e_1 = \frac{s_x^2 - S_x^2}{S_x^2}, \quad e_2 = \frac{s_r^2 - S_r^2}{S_r^2},$$
such that $E(e_i) = 0$ for $i = 0, 1, 2$, with
$$E(e_0^2) = \phi\,\delta^{*}_{400}, \quad E(e_1^2) = \phi\,\delta^{*}_{040}, \quad E(e_2^2) = \phi\,\delta^{*}_{004},$$
$$E(e_0 e_1) = \phi\,\delta^{*}_{220}, \quad E(e_0 e_2) = \phi\,\delta^{*}_{202}, \quad E(e_1 e_2) = \phi\,\delta^{*}_{022},$$
where $\delta^{*}_{400} = \delta_{400} - 1$, $\delta^{*}_{040} = \delta_{040} - 1$, $\delta^{*}_{004} = \delta_{004} - 1$, $\delta^{*}_{220} = \delta_{220} - 1$, $\delta^{*}_{202} = \delta_{202} - 1$, $\delta^{*}_{022} = \delta_{022} - 1$, and $\phi = \frac{1}{n} - \frac{1}{N}$.
Also,
$$\delta_{lqs} = \frac{\varphi_{lqs}}{\varphi_{200}^{l/2}\,\varphi_{020}^{q/2}\,\varphi_{002}^{s/2}}, \quad \varphi_{lqs} = \frac{1}{N-1}\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^{l}\left(X_i - \bar{X}\right)^{q}\left(R_i - \bar{R}\right)^{s},$$
where $\varphi_{lqs}$ represents the population central moment of orders $(l, q, s)$, and $\varphi_{200}$, $\varphi_{020}$, and $\varphi_{002}$ are the population variances of Y, X, and R, respectively.
The population coefficients of kurtosis are defined as
$$\delta_{400} = \frac{\varphi_{400}}{\varphi_{200}^2}, \quad \delta_{040} = \frac{\varphi_{040}}{\varphi_{020}^2}, \quad \delta_{004} = \frac{\varphi_{004}}{\varphi_{002}^2},$$
where
$$\varphi_{400} = \frac{1}{N-1}\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^4, \quad \varphi_{040} = \frac{1}{N-1}\sum_{i=1}^{N}\left(X_i - \bar{X}\right)^4, \quad \varphi_{004} = \frac{1}{N-1}\sum_{i=1}^{N}\left(R_i - \bar{R}\right)^4,$$
$$\varphi_{200} = \frac{1}{N-1}\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^2, \quad \varphi_{020} = \frac{1}{N-1}\sum_{i=1}^{N}\left(X_i - \bar{X}\right)^2, \quad \varphi_{002} = \frac{1}{N-1}\sum_{i=1}^{N}\left(R_i - \bar{R}\right)^2,$$
respectively. Here, $\delta_{400} = \beta_2(y)$, $\delta_{040} = \beta_2(x)$, and $\delta_{004} = \beta_2(r)$.
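The moment ratios above can be computed directly from data. A minimal sketch in Python (the authors' own code, in R, appears in Appendix A; the function names here are illustrative):

```python
import numpy as np

def phi_moment(y, x, r, l, q, s):
    # Population central moment of orders (l, q, s), using the article's N-1 divisor.
    y, x, r = np.asarray(y, float), np.asarray(x, float), np.asarray(r, float)
    N = len(y)
    return np.sum((y - y.mean())**l * (x - x.mean())**q * (r - r.mean())**s) / (N - 1)

def delta_moment(y, x, r, l, q, s):
    # Standardized ratio delta_{lqs} = phi_{lqs} / (phi_200^{l/2} phi_020^{q/2} phi_002^{s/2}).
    return phi_moment(y, x, r, l, q, s) / (
        phi_moment(y, x, r, 2, 0, 0) ** (l / 2)
        * phi_moment(y, x, r, 0, 2, 0) ** (q / 2)
        * phi_moment(y, x, r, 0, 0, 2) ** (s / 2))
```

For example, `delta_moment(y, x, r, 4, 0, 0)` gives the kurtosis coefficient $\beta_2(y)$, and subtracting 1 yields the starred quantity $\delta^{*}_{400}$ used throughout the MSE expressions.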
Next, we go over different existing estimators of finite population variances and compare them with the proposed class of estimators.
For the population variance, the usual estimator is $\hat{S}_y^2 = s_y^2$, with variance
$$Var(\hat{S}_y^2) = \phi S_y^4\,\delta^{*}_{400}.$$
Ref. [8] suggested a ratio estimator for the population variance, $\hat{S}_t^2$, which is given by
$$\hat{S}_t^2 = s_y^2\,\frac{S_x^2}{s_x^2}.$$
The bias and MSE of $\hat{S}_t^2$, which can be found in ref. [8], are
$$Bias(\hat{S}_t^2) \approx \phi S_y^2\left(\delta^{*}_{040} - \delta^{*}_{220}\right)$$
and
$$MSE(\hat{S}_t^2) \approx \phi S_y^4\left(\delta^{*}_{400} + \delta^{*}_{040} - 2\delta^{*}_{220}\right).$$
The linear regression estimator $\hat{S}_{lr}^2$, proposed by ref. [24], is defined as
$$\hat{S}_{lr}^2 = s_y^2 + b(s_y^2, s_x^2)\left(S_x^2 - s_x^2\right),$$
where $b(s_y^2, s_x^2) = \frac{s_y^2\,\hat{\delta}^{*}_{220}}{s_x^2\,\hat{\delta}^{*}_{040}}$ is the sample regression coefficient. The MSE of $\hat{S}_{lr}^2$, which can be found in ref. [24], is
$$MSE(\hat{S}_{lr}^2) \approx \phi S_y^4\,\delta^{*}_{400}\left(1 - \rho^2\right),$$
where $\rho = \frac{\delta^{*}_{220}}{\sqrt{\delta^{*}_{400}\,\delta^{*}_{040}}}$.
Ref. [9] suggested an exponential ratio-type estimator $\hat{S}_{bt}^2$, which is expressed as
$$\hat{S}_{bt}^2 = s_y^2\exp\left(\frac{S_x^2 - s_x^2}{S_x^2 + s_x^2}\right).$$
The bias and MSE of $\hat{S}_{bt}^2$, which can be found in ref. [9], are
$$Bias(\hat{S}_{bt}^2) \approx \frac{1}{2}\phi S_y^2\left(\frac{3\delta^{*}_{040}}{4} - \delta^{*}_{220}\right)$$
and
$$MSE(\hat{S}_{bt}^2) \approx \phi S_y^4\left(\delta^{*}_{400} + \frac{\delta^{*}_{040}}{4} - \delta^{*}_{220}\right).$$
In simple random sampling, ref. [20] proposed a ratio-type estimator $\hat{S}_{us}^2$ that utilizes the kurtosis of the auxiliary variable:
$$\hat{S}_{us}^2 = s_y^2\,\frac{S_x^2 + \delta_{040}}{s_x^2 + \delta_{040}}.$$
The bias and MSE of $\hat{S}_{us}^2$, which can be found in ref. [20], are
$$Bias(\hat{S}_{us}^2) \approx \phi S_y^2\,v_1\left(v_1\delta^{*}_{040} - \delta^{*}_{220}\right)$$
and
$$MSE(\hat{S}_{us}^2) \approx \phi S_y^4\left(\delta^{*}_{400} + v_1^2\delta^{*}_{040} - 2v_1\delta^{*}_{220}\right),$$
where $v_1 = \frac{S_x^2}{S_x^2 + \delta_{040}}$.
Ref. [15] proposed the following ratio estimators:
$$\hat{S}_a^2 = s_y^2\,\frac{S_x^2 + C_x}{s_x^2 + C_x}, \quad \hat{S}_b^2 = s_y^2\,\frac{\delta_{040}S_x^2 + C_x}{\delta_{040}s_x^2 + C_x}, \quad \hat{S}_c^2 = s_y^2\,\frac{C_x S_x^2 + \delta_{040}}{C_x s_x^2 + \delta_{040}}.$$
The biases and MSEs of $\hat{S}_j^2$ $(j = a, b, c)$, which can be found in ref. [15], are
$$Bias(\hat{S}_j^2) \approx \phi S_y^2\,v_i\left(v_i\delta^{*}_{040} - \delta^{*}_{220}\right), \quad i = 2, 3, 4,$$
and
$$MSE(\hat{S}_j^2) \approx \phi S_y^4\left(\delta^{*}_{400} + v_i^2\delta^{*}_{040} - 2v_i\delta^{*}_{220}\right),$$
where $v_2 = \frac{S_x^2}{S_x^2 + C_x}$, $v_3 = \frac{\delta_{040}S_x^2}{\delta_{040}S_x^2 + C_x}$, and $v_4 = \frac{C_x S_x^2}{C_x S_x^2 + \delta_{040}}$.
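The MSE expressions of the ratio-type estimators above all share one form, $\phi S_y^4(\delta^{*}_{400} + v^2\delta^{*}_{040} - 2v\delta^{*}_{220})$, differing only in the constant $v$. A small Python helper (the function name is illustrative, not from the paper) makes this explicit:

```python
def mse_ratio_class(phi, Sy2, d400, d040, d220, v):
    # Generic first-order MSE of the v-weighted ratio class:
    #   phi * Sy^4 * (delta*_400 + v^2 * delta*_040 - 2 * v * delta*_220).
    # v = 1 recovers the classical ratio estimator of ref. [8];
    # v = v_1, ..., v_4 recover the estimators of refs. [20] and [15].
    return phi * Sy2**2 * (d400 + v**2 * d040 - 2.0 * v * d220)
```

Setting `v = 0` reduces the expression to $\phi S_y^4\delta^{*}_{400}$, the variance of the usual estimator $s_y^2$.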

3. Proposed Estimator

This section, which is inspired by refs. [10,11], presents a new class of efficient estimators of the finite population variance that use the largest and smallest values and the ranks of the auxiliary variable under simple random sampling:
$$\hat{S}_A^2 = s_y^2\exp\left(\delta_1\,\frac{\xi_1\left(S_x^2 - s_x^2\right)}{\xi_1\left(S_x^2 + s_x^2\right) + 2\xi_2}\right)\exp\left(\delta_2\,\frac{\xi_3\left(S_r^2 - s_r^2\right)}{\xi_3\left(S_r^2 + s_r^2\right) + 2\xi_4}\right),$$
where $\delta_i\,(i = 1, 2)$ are known constants and $\xi_i\,(i = 1, 2, 3, 4)$ are parameters of the auxiliary variable. The largest and smallest values of the auxiliary variable are denoted by $(x_M, x_m)$, while the largest and smallest ranks of the auxiliary variable are denoted by $(R_M, R_m)$. Table 1 shows the known values of $\xi_1$ and $\xi_2$, while $\xi_3 = 1$ and $\xi_4 = R_M - R_m$. Table 1 lists the various classes of the proposed estimator derived from (53).

Properties of the Proposed Estimator

The bias and MSE of the proposed estimator $\hat{S}_A^2$ are now obtained by rewriting (53) in terms of the error terms, i.e.,
$$\hat{S}_A^2 = S_y^2\left(1 + e_0\right)\exp\left(-\frac{\delta_1 k_4 e_1}{2}\left(1 + \frac{k_4 e_1}{2}\right)^{-1}\right)\exp\left(-\frac{\delta_2 k_5 e_2}{2}\left(1 + \frac{k_5 e_2}{2}\right)^{-1}\right),$$
where $k_4 = \frac{\xi_1 S_x^2}{\xi_1 S_x^2 + \xi_2}$ and $k_5 = \frac{\xi_3 S_r^2}{\xi_3 S_r^2 + \xi_4}$.
Expanding by a Taylor series to the first order of approximation, we obtain
$$\hat{S}_A^2 - S_y^2 \approx S_y^2\left[e_0 - \frac{\delta_1 k_4}{2}e_1 - \frac{\delta_2 k_5}{2}e_2 + \left(\frac{\delta_1 k_4^2}{4} + \frac{\delta_1^2 k_4^2}{8}\right)e_1^2 + \left(\frac{\delta_2 k_5^2}{4} + \frac{\delta_2^2 k_5^2}{8}\right)e_2^2 - \frac{\delta_1 k_4}{2}e_0 e_1 - \frac{\delta_2 k_5}{2}e_0 e_2 + \frac{\delta_1\delta_2 k_4 k_5}{2}e_1 e_2\right].$$
Using (55), the bias of $\hat{S}_A^2$ is given by
$$Bias(\hat{S}_A^2) \approx \phi S_y^2\left[\left(\frac{\delta_1 k_4^2}{4} + \frac{\delta_1^2 k_4^2}{8}\right)\delta^{*}_{040} + \left(\frac{\delta_2 k_5^2}{4} + \frac{\delta_2^2 k_5^2}{8}\right)\delta^{*}_{004} - \frac{\delta_1 k_4}{2}\delta^{*}_{220} - \frac{\delta_2 k_5}{2}\delta^{*}_{202} + \frac{\delta_1\delta_2 k_4 k_5}{2}\delta^{*}_{022}\right].$$
The MSE is derived by squaring both sides of (55) and taking the expectation:
$$MSE(\hat{S}_A^2) \approx \phi S_y^4\left(\delta^{*}_{400} + \frac{\delta_1^2 k_4^2}{4}\delta^{*}_{040} + \frac{\delta_2^2 k_5^2}{4}\delta^{*}_{004} - \delta_1 k_4\delta^{*}_{220} - \delta_2 k_5\delta^{*}_{202} + \frac{\delta_1\delta_2 k_4 k_5}{2}\delta^{*}_{022}\right).$$
Substituting the constant values $\delta_1 = \delta_2 = 1$ into (56) and (57), the bias and MSE of $\hat{S}_A^2$ can be rewritten. After some simplifications, we obtain
$$Bias(\hat{S}_A^2) \approx \phi S_y^2\left[\frac{3}{8}\left(k_4^2\delta^{*}_{040} + k_5^2\delta^{*}_{004}\right) - \frac{1}{2}\left(k_4\delta^{*}_{220} + k_5\delta^{*}_{202} - k_4 k_5\delta^{*}_{022}\right)\right]$$
and
$$MSE(\hat{S}_A^2) \approx \phi S_y^4\left[\delta^{*}_{400} + \frac{1}{4}\left(k_4^2\delta^{*}_{040} + k_5^2\delta^{*}_{004}\right) - \frac{1}{2}\left(2k_4\delta^{*}_{220} + 2k_5\delta^{*}_{202} - k_4 k_5\delta^{*}_{022}\right)\right].$$
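The final MSE expression is straightforward to evaluate numerically; a minimal Python sketch (the function name is illustrative, and the R implementation used by the authors appears in the appendices) with $\delta_1 = \delta_2 = 1$:

```python
def mse_proposed(phi, Sy2, d400, d040, d004, d220, d202, d022, k4, k5):
    # First-order MSE of the proposed estimator S_A^2 with delta1 = delta2 = 1;
    # the d*** arguments are the starred moment ratios delta*_{lqs}.
    return phi * Sy2**2 * (
        d400
        + 0.25 * (k4**2 * d040 + k5**2 * d004)
        - 0.5 * (2 * k4 * d220 + 2 * k5 * d202 - k4 * k5 * d022))
```

Setting $k_4 = k_5 = 0$ (no auxiliary information used) collapses the expression to the variance of the usual estimator, $\phi S_y^4\delta^{*}_{400}$.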

4. Mathematical Comparison

This section compares the suggested class of estimators $\hat{S}_A^2$ with the existing estimators $\hat{S}_y^2$, $\hat{S}_t^2$, $\hat{S}_{lr}^2$, $\hat{S}_{bt}^2$, $\hat{S}_{us}^2$, and $\hat{S}_j^2\,(j = a, b, c)$.
Condition (i): By (36) and (59), $Var(\hat{S}_y^2) > MSE(\hat{S}_A^2)$ if
$$2k_4\delta^{*}_{220} + 2k_5\delta^{*}_{202} - k_4 k_5\delta^{*}_{022} > \frac{1}{2}\left(k_4^2\delta^{*}_{040} + k_5^2\delta^{*}_{004}\right).$$
Condition (ii): By (39) and (59), $MSE(\hat{S}_t^2) > MSE(\hat{S}_A^2)$ if
$$2\delta^{*}_{220}\left(k_4 - 2\right) + 2k_5\delta^{*}_{202} - k_4 k_5\delta^{*}_{022} > \frac{1}{2}\left[\delta^{*}_{040}\left(k_4^2 - 4\right) + k_5^2\delta^{*}_{004}\right].$$
Condition (iii): By (41) and (59), $MSE(\hat{S}_{lr}^2) > MSE(\hat{S}_A^2)$ if
$$2k_4\delta^{*}_{220} + 2k_5\delta^{*}_{202} - k_4 k_5\delta^{*}_{022} > \frac{1}{2}\left(4\rho^2\delta^{*}_{400} + k_4^2\delta^{*}_{040} + k_5^2\delta^{*}_{004}\right).$$
Condition (iv): By (44) and (59), $MSE(\hat{S}_{bt}^2) > MSE(\hat{S}_A^2)$ if
$$2\delta^{*}_{220}\left(k_4 - 1\right) + 2k_5\delta^{*}_{202} - k_4 k_5\delta^{*}_{022} > \frac{1}{2}\left[\delta^{*}_{040}\left(k_4^2 - 1\right) + k_5^2\delta^{*}_{004}\right].$$
Condition (v): By (47) and (59), $MSE(\hat{S}_{us}^2) > MSE(\hat{S}_A^2)$ if
$$2\delta^{*}_{220}\left(k_4 - 2v_1\right) + 2k_5\delta^{*}_{202} - k_4 k_5\delta^{*}_{022} > \frac{1}{2}\left[\delta^{*}_{040}\left(k_4^2 - 4v_1^2\right) + k_5^2\delta^{*}_{004}\right].$$
Condition (vi): By (52) and (59), $MSE(\hat{S}_j^2) > MSE(\hat{S}_A^2)\,(j = a, b, c)$ if
$$2\delta^{*}_{220}\left(k_4 - 2v_i\right) + 2k_5\delta^{*}_{202} - k_4 k_5\delta^{*}_{022} > \frac{1}{2}\left[\delta^{*}_{040}\left(k_4^2 - 4v_i^2\right) + k_5^2\delta^{*}_{004}\right], \quad i = 2, 3, 4.$$
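These conditions are simple inequalities in the moment ratios, so they can be checked numerically for a given dataset. A sketch of Condition (i) in Python (the function name is hypothetical; the arguments are the starred moment ratios and the constants $k_4$, $k_5$):

```python
def condition_i_holds(k4, k5, d040, d004, d220, d202, d022):
    # Condition (i): the proposed class beats the usual estimator s_y^2
    # when 2*k4*d220 + 2*k5*d202 - k4*k5*d022 > (1/2)*(k4^2*d040 + k5^2*d004).
    lhs = 2 * k4 * d220 + 2 * k5 * d202 - k4 * k5 * d022
    rhs = 0.5 * (k4**2 * d040 + k5**2 * d004)
    return lhs > rhs
```

The remaining conditions differ only in the extra terms on each side, so analogous one-line checks cover them.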

5. Numerical Comparison

This section compares the mean squared errors (MSEs) of several estimators, including the proposed class of estimators, using both simulated and real datasets in order to evaluate their performance. In addition, we compute the percent relative efficiencies (PREs) of the proposed class of estimators and the other existing estimators. For more details, see Appendix A and Appendix B.

5.1. Simulation Study

We use the approach outlined in refs. [10,11] to perform a simulation study in order to validate the theoretical results reported in Section 4. Using the probability distributions listed below, the auxiliary variable X is artificially generated for six different populations:
  • Population 1: $X \sim Gamma(\gamma_1 = 2, \gamma_2 = 4)$,
  • Population 2: $X \sim Gamma(\gamma_1 = 3, \gamma_2 = 7)$,
  • Population 3: $X \sim Exponential(\mu = 5)$,
  • Population 4: $X \sim Exponential(\mu = 10)$,
  • Population 5: $X \sim Uniform(\alpha_1 = 5, \alpha_2 = 8)$,
  • Population 6: $X \sim Uniform(\alpha_1 = 6, \alpha_2 = 10)$.
The variable of interest, Y, is generated using the following formula:
$$Y = r_{yx} \times X + e,$$
where the error term is $e \sim N(0, 1)$ and $r_{yx} = 0.77$ is the correlation coefficient between the study and auxiliary variables.
In order to calculate the mean squared errors ( M S E s ) and percent relative efficiencies ( P R E s ) of the proposed class of estimators and other existing estimators, we performed the following procedures in R software:
  • Step 1: A population of 1000 observations is initially generated by employing the above probability distributions.
  • Step 2: We obtain the population total from Step 1 along with the smallest and largest values of the supplementary variable.
  • Step 3: We use SRSWOR to obtain different sizes of samples for each population.
  • Step 4: For each sample size, calculate the M S E values of all the estimators discussed in this article.
  • Step 5: After 80,000 repetitions of Steps 3 and 4, Table 2 and Table 3 present the outcomes for the artificial populations, while Table 4 and Table 5 present a summary of the real datasets.
Finally, to obtain MSE and PRE for each estimator over all of the replications, we apply the following formulas:
$$MSE(\hat{S}_T^2)_{\min} = \frac{1}{80000}\sum_{h=1}^{80000}\left(\hat{S}_{T(h)}^2 - S_y^2\right)^2$$
and
$$PRE = \frac{Var(\hat{S}_y^2)}{MSE(\hat{S}_T^2)_{\min}} \times 100,$$
where $T = t, lr, bt, us, a, b, c, A_i\,(i = 1, 2, \ldots, 8)$.
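Steps 1 through 5 can be sketched compactly. The following Python illustration (the authors' own code is in R, in Appendix B) runs the loop for Population 1 with only the classical ratio estimator; the shape/scale reading of $Gamma(2, 4)$ and the reduced replication count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, reps = 1000, 50, 2000          # reps reduced from 80,000 for illustration

X = rng.gamma(2.0, 4.0, size=N)      # Step 1: auxiliary variable (assumed shape, scale)
Y = 0.77 * X + rng.normal(size=N)    # study variable with r_yx = 0.77
Sy2, Sx2 = Y.var(ddof=1), X.var(ddof=1)   # Step 2: population variances

sy2_draws, ratio_draws = [], []
for _ in range(reps):                # Steps 3-5: repeated SRSWOR draws
    idx = rng.choice(N, size=n, replace=False)
    y, x = Y[idx], X[idx]
    sy2_draws.append(y.var(ddof=1))                         # usual estimator
    ratio_draws.append(y.var(ddof=1) * Sx2 / x.var(ddof=1)) # ratio estimator

mse_usual = np.mean((np.array(sy2_draws) - Sy2) ** 2)
mse_ratio = np.mean((np.array(ratio_draws) - Sy2) ** 2)
pre_ratio = 100.0 * mse_usual / mse_ratio   # PRE of the ratio estimator
```

The same loop, extended with the remaining estimators of Sections 2 and 3, produces the entries of Table 2 and Table 3.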

5.2. Numerical Examples

We evaluated the performance of the suggested estimators by comparing the mean squared errors (MSEs) and percent relative efficiencies (PREs) of the various estimators on three real-life datasets. The datasets, together with their summary statistics, are listed below:
Data 1. [Source: Ref. [25], p. 135]
Y: Enrollment of students in 2012,
X: The total number of schools in 2012,
R: Ranks of the total number of schools in 2012.
The summary statistics are as follows:
$N = 36$, $n = 15$, $\bar{X} = 1054.39$, $\bar{Y} = 148718.70$, $\bar{R} = 18.50$, $X_M = 2370$, $X_m = 388$, $R_M = 36$, $R_m = 1$, $S_x = 402.61$, $S_y = 182315.10$, $S_r = 10.54$, $C_x = 0.38$, $C_y = 1.23$, $C_r = 0.56$, $\rho_{yx} = 0.29$, $\rho_{xr} = 0.94$, $\rho_{yr} = 0.19$, $\delta_{400} = 3365$, $\delta_{040} = 4698$, $\delta_{004} = 4698$, $\delta_{220} = 2976$, $\delta_{202} = 3298$, $\delta_{022} = 3297$.
Data 2. [Source: Ref. [25], p. 226]
Y: Total number of workers in 2012,
X: Total number of registered factories in 2012,
R: Ranks of the total number of registered factories in 2012.
The summary statistics are as follows:
$N = 36$, $n = 15$, $\bar{X} = 335.78$, $\bar{Y} = 52432.86$, $\bar{R} = 18.5$, $X_M = 2055$, $X_m = 24$, $R_M = 36$, $R_m = 1$, $S_x = 451.14$, $S_y = 178201.10$, $S_r = 10.54$, $C_x = 1.34$, $C_y = 3.40$, $C_r = 0.57$, $\rho_{yx} = 0.69$, $\rho_{yr} = 0.39$, $\rho_{xr} = 0.84$, $\delta_{400} = 2366$, $\delta_{040} = 4398$, $\delta_{004} = 4068$, $\delta_{220} = 2276$, $\delta_{202} = 2099$, $\delta_{022} = 2098$.
Data 3. [Source: Ref. [26], p. 24]
Y: Food costs associated with the family's job,
X: The weekly earnings of families,
R: Ranks of the weekly earnings of families.
The summary statistics are as follows:
$N = 33$, $n = 5$, $\bar{X} = 72.55$, $\bar{Y} = 27.49$, $\bar{R} = 17$, $X_M = 95$, $X_m = 58$, $R_M = 33$, $R_m = 1$, $S_x = 10.58$, $S_y = 10.13$, $S_r = 9.64$, $C_x = 0.15$, $C_y = 0.37$, $C_r = 0.57$, $\rho_{yx} = 0.25$, $\rho_{yr} = 0.20$, $\rho_{xr} = 0.98$, $\delta_{400} = 5.55$, $\delta_{040} = 3.08$, $\delta_{004} = 1.10$, $\delta_{220} = 2.22$, $\delta_{202} = 1.94$, $\delta_{022} = 2.24$.
To assess how well the suggested class of estimators performs, we employed three real datasets and simulation tests. The PRE criterion has been adopted for comparing the various estimators. For the simulation study, the MSE and PRE values of the suggested and existing estimators can be found in Table 2 and Table 3, respectively. Table 4 and Table 5 demonstrate the results obtained for the real datasets. Our general findings are as follows:
  • Table 2 and Table 4 show that the MSE values of each proposed estimator are less than those of the existing estimators described in the literature, for all of the simulated scenarios and real datasets. This validates the better performance of the suggested estimators over the existing estimators.
  • In addition, the P R E values of each proposed estimator are greater than those of the existing estimators, which are given in Table 3 and Table 5. The suggested class of estimators performs better than the existing estimators.

6. Conclusions

In this article, we introduced a class of efficient estimators of the finite population variance. These estimators use the known minimum and maximum values of the auxiliary variable, as well as its ranks. In Section 4, we discussed theoretical conditions that illustrate the greater efficiency of the suggested estimators relative to the existing estimators. We performed a simulation study and examined several empirical datasets in order to validate these conditions. According to Table 3, the suggested estimators consistently perform better, in terms of PREs, than the existing estimators. The theoretical conclusions of Section 4 are further confirmed by the empirical results shown in Table 5. The simulation and empirical results lead us to conclude that the suggested estimators $\hat{S}_{A_i}^2\,(i = 1, 2, 3, \ldots, 8)$ are more efficient than the other estimators under consideration. As $\hat{S}_{A_8}^2$ has the lowest MSE among the suggested estimators, it is particularly preferred.
We investigated the characteristics of the suggested efficient class of estimators using a simple random sampling technique. Our findings are useful for identifying more efficient estimators with low M S E s for stratified random sampling. This topic is useful for future research.

Author Contributions

Methodology, U.D.; Software, U.D.; Validation, U.D. and O.A.; Formal analysis, U.D., M.A.A. and O.A.; Investigation, U.D., M.A.A. and O.A.; Resources, U.D. and O.A.; Data curation, U.D., M.A.A. and O.A.; Writing—original draft, U.D.; Writing—review and editing, U.D.; Visualization, U.D.; Supervision, M.A.A.; Project administration, U.D., M.A.A. and O.A.; Funding acquisition, M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. KFU241416].

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to express our sincere gratitude to the editor and the anonymous reviewers for their valuable feedback and insightful suggestions, which greatly improved the quality of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Numerical Examples

mydata = read.csv(file.choose())
attach(mydata)
N  = length(X)
N1 = N - 1
n  = 15
Xr = rank(X)
phi  = round((1/n) - (1/N), digits = 4)
Xbar = round(mean(X),  digits = 4)
Ybar = round(mean(Y),  digits = 4)
Rbar = round(mean(Xr), digits = 4)
XM = max(X);  Xm = min(X)
RM = max(Xr); Rm = min(Xr)
SX2 = round(sum((X  - Xbar)^2)/N1, digits = 4)
SY2 = round(sum((Y  - Ybar)^2)/N1, digits = 4)
Sr2 = round(sum((Xr - Rbar)^2)/N1, digits = 4)
SX = round(sqrt(SX2), digits = 4)
SY = round(sqrt(SY2), digits = 4)
Sr = round(sqrt(Sr2), digits = 4)
CX = round(SX/Xbar, digits = 4)
CY = round(SY/Ybar, digits = 4)
Cr = round(Sr/Rbar, digits = 4)
rhoYX = round(cor(Y, X),  digits = 4)
rhoXR = round(cor(X, Xr), digits = 4)
rhoYR = round(cor(Y, Xr), digits = 4)
# central moments phi_lqs
phi400 = round(sum((Y  - Ybar)^4)/N1, digits = 4)
phi040 = round(sum((X  - Xbar)^4)/N1, digits = 4)
phi004 = round(sum((Xr - Rbar)^4)/N1, digits = 4)
phi200 = round(sum((Y  - Ybar)^2)/N1, digits = 4)
phi020 = round(sum((X  - Xbar)^2)/N1, digits = 4)
phi002 = round(sum((Xr - Rbar)^2)/N1, digits = 4)
phi220 = round(sum((Y - Ybar)^2*(X  - Xbar)^2)/N1, digits = 4)
phi022 = round(sum((X - Xbar)^2*(Xr - Rbar)^2)/N1, digits = 4)
phi202 = round(sum((Y - Ybar)^2*(Xr - Rbar)^2)/N1, digits = 4)
# moment ratios delta_lqs and their starred versions (delta - 1)
delta400 = round(phi400/phi200^2, digits = 4)        # beta2(Y)
delta040 = round(phi040/phi020^2, digits = 4)        # beta2(X)
delta004 = round(phi004/phi002^2, digits = 4)        # beta2(r)
delta220 = round(phi220/(phi200*phi020), digits = 4)
delta022 = round(phi022/(phi020*phi002), digits = 4)
delta202 = round(phi202/(phi200*phi002), digits = 4)
d400 = delta400 - 1; d040 = delta040 - 1; d004 = delta004 - 1
d220 = delta220 - 1; d022 = delta022 - 1; d202 = delta202 - 1
rho = round(d220/sqrt(d400*d040), digits = 4)
# constants of the existing estimators
v1 = round(SX2/(SX2 + delta040), digits = 4)
v2 = round(SX2/(SX2 + CX), digits = 4)
v3 = round(delta040*SX2/(delta040*SX2 + CX), digits = 3)
v4 = round(CX*SX2/(CX*SX2 + delta040), digits = 4)
# (xi1, xi2) pairs of the eight proposed estimators (Table 1); xi3 = 1, xi4 = RM - Rm
xi1 = c(1, XM - Xm, XM - Xm, XM - Xm, delta040, XM - Xm, CX, rhoYX)
xi2 = c(XM - Xm, CX, 1, delta040, XM - Xm, rhoYX, XM - Xm, XM - Xm)
k4  = xi1*SX2/(xi1*SX2 + xi2)   # vector k_41, ..., k_48
k5  = Sr2/(Sr2 + (RM - Rm))
Mean squared errors:
VarSY2 = phi*SY2^2*d400
MSEt   = phi*SY2^2*(d400 + d040 - 2*d220)
MSElr  = phi*SY2^2*d400*(1 - rho^2)
MSEbt  = phi*SY2^2*(d400 + d040/4 - d220)
MSEus  = phi*SY2^2*(d400 + v1^2*d040 - 2*v1*d220)
MSEa   = phi*SY2^2*(d400 + v2^2*d040 - 2*v2*d220)
MSEb   = phi*SY2^2*(d400 + v3^2*d040 - 2*v3*d220)
MSEc   = phi*SY2^2*(d400 + v4^2*d040 - 2*v4*d220)
# one vectorized line for all eight proposed estimators (k4 has length 8)
MSEA   = phi*SY2^2*(d400 + (1/4)*(k4^2*d040 + k5^2*d004)
                    - (1/2)*(2*k4*d220 + 2*k5*d202 - k4*k5*d022))
Percent relative efficiency:
PRE = round(100*VarSY2/c(VarSY2, MSEt, MSElr, MSEbt, MSEus, MSEa, MSEb, MSEc, MSEA),
            digits = 4)

Appendix B. Simulation Study

library(sampling)
set.seed(0)
N = 1000
correlatedValue = function(X, r) {
  e = rnorm(length(X), mean = 0, sd = sqrt(1 - r^2))
  r*X + e
}
X = rgamma(N, 2, 4)                    # Population 1
Y = correlatedValue(X = X, r = 0.77)
mydata = data.frame(Y, X)
# The population constants (phi, the moments phi_lqs, the starred ratios
# d400, ..., d022, v1-v4, the (xi1, xi2) pairs, k4, and k5) are then computed
# from X, Y, and rank(X) exactly as in Appendix A.
Now, we perform simulations
Simulation code (R):

N2 = 8000                                   # number of simulation replications
SY2 = c(); SX2 = c(); SR2 = c(); sy2 = c(); sx2 = c(); sr2 = c()
Ybar = c(); Xbar = c(); Rbar = c(); ybar = c(); xbar = c(); rbar = c()
XM = c(); Xm = c(); xM = c(); xm = c(); RM = c(); Rm = c()

for (i in 1:N2) {
  N = 1000; n = 150
  d1 = mydata[sample(1:nrow(mydata), N), ]  # finite population of size N
  d2 = d1[sample(1:nrow(d1), n), ]          # simple random sample of size n
  XR = rank(d1[, 2])                        # population ranks of the auxiliary variable
  xr = rank(d2[, 2])                        # sample ranks of the auxiliary variable
  XM = c(XM, max(d1[, 2])); Xm = c(Xm, min(d1[, 2]))
  RM = c(RM, max(XR));      Rm = c(Rm, min(XR))
  xM = c(xM, max(d2[, 2])); xm = c(xm, min(d2[, 2]))
  Xbar = c(Xbar, mean(d1[, 2])); Ybar = c(Ybar, mean(d1[, 1])); Rbar = c(Rbar, mean(XR))
  xbar = c(xbar, mean(d2[, 2])); ybar = c(ybar, mean(d2[, 1])); rbar = c(rbar, mean(xr))
  SX2 = c(SX2, var(d1[, 2])); SY2 = c(SY2, var(d1[, 1])); SR2 = c(SR2, var(XR))
  sx2 = c(sx2, var(d2[, 2])); sy2 = c(sy2, var(d2[, 1])); sr2 = c(sr2, var(xr))
}

# Population constants computed from the data: d220 and d040 denote the moment
# ratios δ220 and δ040, CX the coefficient of variation of x, beta2X the kurtosis
# β2(x), and rhoYX the correlation ρ_yx. R arithmetic is vectorized, so each
# estimator below is evaluated for all N2 replications at once.
b    = sy2 * d220 / (sx2 * d040)            # regression coefficient b(s_y², s_x²)
St2  = sy2 * SX2 / sx2                      # ratio estimator Ŝ_t²
Slr2 = sy2 + b * (SX2 - sx2)                # regression estimator Ŝ_lr²
Sbt2 = sy2 * exp((SX2 - sx2) / (SX2 + sx2)) # exponential ratio estimator Ŝ_bt²
Sus2 = sy2 * (SX2 + d040) / (sx2 + d040)    # Ŝ_us²
Sa2  = sy2 * (SX2 + CX) / (sx2 + CX)        # Ŝ_a²
Sb2  = sy2 * (d040 * SX2 + CX) / (d040 * sx2 + CX)  # Ŝ_b²
Sc2  = sy2 * (CX * SX2 + d040) / (CX * sx2 + d040)  # Ŝ_c²
eR   = exp((SR2 - sr2) / (SR2 + sr2 + 2 * (RM - Rm)))  # common rank-based factor
SA1  = sy2 * exp((SX2 - sx2) / (SX2 + sx2 + 2 * (xM - xm))) * eR
SA2  = sy2 * exp((xM - xm) * (SX2 - sx2) / ((xM - xm) * (SX2 + sx2) + 2 * CX)) * eR
SA3  = sy2 * exp((xM - xm) * (SX2 - sx2) / ((xM - xm) * (SX2 + sx2) + 2)) * eR
SA4  = sy2 * exp((xM - xm) * (SX2 - sx2) / ((xM - xm) * (SX2 + sx2) + 2 * beta2X)) * eR
SA5  = sy2 * exp(beta2X * (SX2 - sx2) / (beta2X * (SX2 + sx2) + 2 * (xM - xm))) * eR
SA6  = sy2 * exp((xM - xm) * (SX2 - sx2) / ((xM - xm) * (SX2 + sx2) + 2 * rhoYX)) * eR
SA7  = sy2 * exp(CX * (SX2 - sx2) / (CX * (SX2 + sx2) + 2 * (xM - xm))) * eR
SA8  = sy2 * exp(rhoYX * (SX2 - sx2) / (rhoYX * (SX2 + sx2) + 2 * (xM - xm))) * eR
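For readers who do not use R, the Monte Carlo design above — draw a finite population, repeatedly draw simple random samples without replacement, accumulate an estimator, and average its squared errors — can be sketched in Python. The normal data-generating model and all names below are illustrative assumptions, not the article's `mydata`, and only the basic ratio-type estimator is shown:

```python
import random


def svar(v):
    """Sample variance with denominator n - 1, matching R's var()."""
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / (len(v) - 1)


random.seed(7)
N, n, reps = 1000, 150, 300

# Illustrative finite population: auxiliary variable x, study variable y
# correlated with x (assumption for demonstration only).
x_pop = [random.gauss(10.0, 2.0) for _ in range(N)]
y_pop = [2.0 * x + random.gauss(0.0, 1.0) for x in x_pop]

SY2 = svar(y_pop)  # population variance of y (the target parameter)
SX2 = svar(x_pop)  # population variance of x (known auxiliary information)

ratio_est = []
for _ in range(reps):
    idx = random.sample(range(N), n)   # simple random sample without replacement
    sy2 = svar([y_pop[i] for i in idx])
    sx2 = svar([x_pop[i] for i in idx])
    ratio_est.append(sy2 * SX2 / sx2)  # ratio-type estimator of SY2

# Empirical MSE of the ratio-type estimator over all replications.
mse_t = sum((e - SY2) ** 2 for e in ratio_est) / reps
```

The same loop extends to the exponential and rank-based classes by adding the corresponding per-sample quantities (ranks, extreme values) inside the loop.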
Mean squared errors:
For each estimator, the empirical mean squared error is the Monte Carlo average of the squared estimation errors over the N2 replications:

MSE(Ŝ_•²) = mean[(Ŝ_•² − S_Y²)²],  for Ŝ_•² ∈ {Ŝ_t², Ŝ_lr², Ŝ_bt², Ŝ_us², Ŝ_a², Ŝ_b², Ŝ_c², Ŝ_A1², Ŝ_A2², Ŝ_A3², Ŝ_A4², Ŝ_A5², Ŝ_A6², Ŝ_A7², Ŝ_A8²}.
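In any language the empirical MSE is simply the mean of squared deviations from the true parameter; a minimal helper (the function name is hypothetical) makes the definition concrete:

```python
def empirical_mse(estimates, true_value):
    """Monte Carlo MSE: average squared deviation from the true parameter."""
    return sum((e - true_value) ** 2 for e in estimates) / len(estimates)


# Tiny worked example: three replicates of an estimator of S_Y^2 = 4.0
# give squared errors of about 0.04, 0.01 and 0.09, so the MSE is about 0.14/3.
mse = empirical_mse([3.8, 4.1, 4.3], 4.0)
```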
Relative efficiency:
PRE(Ŝ_•²) = round(Var(Ŝ_y²)/MSE(Ŝ_•²) × 100, digits = 4), computed for each of the estimators listed above, where Var(Ŝ_y²) is the simulated mean squared error of the usual unbiased estimator s_y²; a PRE above 100 therefore indicates a gain in efficiency over Ŝ_y².
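The PRE computation can be checked directly against the reported tables; the helper name below is illustrative:

```python
def pre(var_usual, mse_candidate):
    """Percent relative efficiency of a candidate estimator vs. the usual s_y^2."""
    return round(var_usual / mse_candidate * 100, 4)


# Pop-I values from Table 2: Var(S_hat_y^2) = 7.34e-5, MSE(S_hat_t^2) = 5.98e-5.
# The result agrees with the 122.74 reported for S_hat_t^2 in Table 3.
pre_t = pre(7.34e-5, 5.98e-5)
```

A PRE of exactly 100 means no gain; the proposed estimators' PREs well above 100 in Tables 3 and 5 quantify their efficiency advantage.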

References

  1. Mohanty, S.; Sahoo, J. A note on improving the ratio method of estimation through linear transformation using certain known population parameters. Sankhyā Indian J. Stat. Ser. B 1995, 57, 93–102. [Google Scholar]
  2. Khan, M.; Shabbir, J. Some improved ratio, product, and regression estimators of finite population mean when using minimum and maximum values. Sci. World J. 2013, 2013, 431868. [Google Scholar] [CrossRef] [PubMed]
  3. Daraz, U.; Shabbir, J.; Khan, H. Estimation of finite population mean by using minimum and maximum values in stratified random sampling. J. Mod. Appl. Stat. Methods 2018, 17, 20. [Google Scholar] [CrossRef]
  4. Cekim, H.O.; Cingi, H. Some estimator types for population mean using linear transformation with the help of the minimum and maximum values of the auxiliary variable. Hacet. J. Math. Stat. 2017, 46, 685–694. [Google Scholar]
  5. Chatterjee, S.; Hadi, A.S. Regression Analysis by Example; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  6. Khan, M. Improvement in estimating the finite population mean under maximum and minimum values in double sampling scheme. J. Stat. Appl. Probab. Lett. 2015, 2, 115–121. [Google Scholar]
  7. Walia, G.S.; Kaur, H.; Sharma, M. Ratio type estimator of population mean through efficient linear transformation. Am. J. Math. Stat. 2015, 5, 144–149. [Google Scholar]
  8. Isaki, C.T. Variance estimation using auxiliary information. J. Am. Stat. Assoc. 1983, 78, 117–123. [Google Scholar] [CrossRef]
  9. Bahl, S.; Tuteja, R. Ratio and product type exponential estimators. J. Inf. Optim. Sci. 1991, 12, 159–164. [Google Scholar] [CrossRef]
  10. Daraz, U.; Khan, M. Estimation of variance of the difference-cum-ratio-type exponential estimator in simple random sampling. Res. Math. Stat. 2021, 8, 1899402. [Google Scholar] [CrossRef]
  11. Daraz, U.; Wu, J.; Albalawi, O. Double exponential ratio estimator of a finite population variance under extreme values in simple random sampling. Mathematics 2024, 12, 1737. [Google Scholar] [CrossRef]
  12. Daraz, U.; Wu, J.; Alomair, M.A.; Aldoghan, L.A. New classes of difference cum-ratio-type exponential estimators for a finite population variance in stratified random sampling. Heliyon 2024, 10, e33402. [Google Scholar] [CrossRef]
  13. Ahmad, S.; Al Mutairi, A.; Nassr, S.G.; Alsuhabi, H.; Kamal, M.; Rehman, M.U. A new approach for estimating variance of a population employing information obtained from a stratified random sampling. Heliyon 2023, 9, 1–13. [Google Scholar] [CrossRef] [PubMed]
  14. Dubey, V.; Sharma, H. On estimating population variance using auxiliary information. Stat. Transit. New Ser. 2008, 9, 7–18. [Google Scholar]
  15. Kadilar, C.; Cingi, H. Ratio estimators for the population variance in simple and stratified random sampling. Appl. Math. Comput. 2006, 173, 1047–1059. [Google Scholar] [CrossRef]
  16. Shabbir, J.; Gupta, S. Some estimators of finite population variance of stratified sample mean. Commun. Stat. Theory Methods 2010, 39, 3001–3008. [Google Scholar] [CrossRef]
  17. Shabbir, J.; Gupta, S. Using rank of the auxiliary variable in estimating variance of the stratified sample mean. Int. J. Comput. Theor. Stat. 2019, 6, 207. [Google Scholar] [CrossRef]
  18. Singh, H.; Chandra, P. An alternative to ratio estimator of the population variance in sample surveys. J. Transp. Stat. 2008, 9, 89–103. [Google Scholar]
  19. Singh, H.P.; Solanki, R.S. A new procedure for variance estimation in simple random sampling using auxiliary information. Stat. Pap. 2013, 54, 479–497. [Google Scholar] [CrossRef]
  20. Upadhyaya, L.; Singh, H. An estimator for population variance that utilizes the kurtosis of an auxiliary variable in sample surveys. Vikram Math. J. 1999, 19, 14–17. [Google Scholar]
  21. Yadav, S.K.; Kadilar, C.; Shabbir, J.; Gupta, S. Improved family of estimators of population variance in simple random sampling. J. Stat. Theory Pract. 2015, 9, 219–226. [Google Scholar] [CrossRef]
  22. Yasmeen, U.; Noor-ul-Amin, M. Estimation of Finite Population Variance Under Stratified Sampling Technique. J. Reliab. Stat. Stud. 2021, 14, 565–584. [Google Scholar] [CrossRef]
  23. Zaman, T.; Bulut, H. An efficient family of robust-type estimators for the population variance in simple and stratified random sampling. Commun. Stat. Theory Methods 2023, 52, 2610–2624. [Google Scholar] [CrossRef]
  24. Watson, D.J. The estimation of leaf area in field crops. J. Agric. Sci. 1937, 27, 474–483. [Google Scholar] [CrossRef]
  25. Bureau of Statistics. Punjab Development Statistics Government of the Punjab, Lahore, Pakistan; Bureau of Statistics: Lahore, Pakistan, 2013.
  26. Cochran, W.G. Sampling Techniques; John Wiley and Sons: Hoboken, NJ, USA, 1963. [Google Scholar]
Table 1. Some classes of the proposed estimator.
Subsets of the proposed estimator Ŝ_A², with the corresponding choices of ξ₁ and ξ₂:
Ŝ_A1² = s_y² exp[δ₁(S_x² − s_x²)/(S_x² + s_x² + 2(x_M − x_m))] exp[δ₂(ξ₃S_r² − s_r²)/(ξ₃S_r² + s_r² + 2ξ₄)]   (ξ₁ = 1, ξ₂ = x_M − x_m)
Ŝ_A2² = s_y² exp[δ₁(x_M − x_m)(S_x² − s_x²)/{(x_M − x_m)(S_x² + s_x²) + 2c_x}] exp[δ₂(ξ₃S_r² − s_r²)/(ξ₃S_r² + s_r² + 2ξ₄)]   (ξ₁ = x_M − x_m, ξ₂ = c_x)
Ŝ_A3² = s_y² exp[δ₁(x_M − x_m)(S_x² − s_x²)/{(x_M − x_m)(S_x² + s_x²) + 2}] exp[δ₂(ξ₃S_r² − s_r²)/(ξ₃S_r² + s_r² + 2ξ₄)]   (ξ₁ = x_M − x_m, ξ₂ = 1)
Ŝ_A4² = s_y² exp[δ₁(x_M − x_m)(S_x² − s_x²)/{(x_M − x_m)(S_x² + s_x²) + 2β₂(x)}] exp[δ₂(ξ₃S_r² − s_r²)/(ξ₃S_r² + s_r² + 2ξ₄)]   (ξ₁ = x_M − x_m, ξ₂ = β₂(x))
Ŝ_A5² = s_y² exp[δ₁β₂(x)(S_x² − s_x²)/{β₂(x)(S_x² + s_x²) + 2(x_M − x_m)}] exp[δ₂(ξ₃S_r² − s_r²)/(ξ₃S_r² + s_r² + 2ξ₄)]   (ξ₁ = β₂(x), ξ₂ = x_M − x_m)
Ŝ_A6² = s_y² exp[δ₁(x_M − x_m)(S_x² − s_x²)/{(x_M − x_m)(S_x² + s_x²) + 2ρ_yx}] exp[δ₂(ξ₃S_r² − s_r²)/(ξ₃S_r² + s_r² + 2ξ₄)]   (ξ₁ = x_M − x_m, ξ₂ = ρ_yx)
Ŝ_A7² = s_y² exp[δ₁c_x(S_x² − s_x²)/{c_x(S_x² + s_x²) + 2(x_M − x_m)}] exp[δ₂(ξ₃S_r² − s_r²)/(ξ₃S_r² + s_r² + 2ξ₄)]   (ξ₁ = c_x, ξ₂ = x_M − x_m)
Ŝ_A8² = s_y² exp[δ₁ρ_yx(S_x² − s_x²)/{ρ_yx(S_x² + s_x²) + 2(x_M − x_m)}] exp[δ₂(ξ₃S_r² − s_r²)/(ξ₃S_r² + s_r² + 2ξ₄)]   (ξ₁ = ρ_yx, ξ₂ = x_M − x_m)
Table 2. MSEs of all the estimators using simulated data.
Estimator   Pop-I   Pop-II   Pop-III   Pop-IV   Pop-V   Pop-VI
(1) Ŝ_y²    7.34e−5   9.62e−5   8.82e−4   6.43e−4   7.47e−3   6.00e−3
(2) Ŝ_t²    5.98e−5   7.90e−5   5.90e−4   4.99e−4   6.00e−3   5.02e−3
(3) Ŝ_lr²   5.90e−5   7.88e−5   5.89e−4   4.80e−4   5.60e−3   4.80e−3
(4) Ŝ_bt²   5.31e−5   7.60e−5   5.80e−4   4.65e−4   5.40e−3   4.70e−3
(5) Ŝ_us²   5.32e−5   7.58e−5   5.78e−4   4.50e−4   5.20e−3   4.50e−3
(6) Ŝ_a²    5.30e−5   7.40e−5   5.76e−4   4.20e−4   5.00e−3   4.30e−3
(7) Ŝ_b²    5.30e−5   7.40e−5   5.76e−4   4.20e−4   5.00e−3   4.30e−3
(8) Ŝ_c²    5.20e−5   7.35e−5   5.60e−4   4.00e−4   4.90e−3   4.10e−3
(9) Ŝ_A1²   2.69e−5   5.78e−5   3.80e−4   2.80e−4   2.77e−3   2.00e−3
(10) Ŝ_A2²  2.98e−5   5.92e−5   3.98e−4   3.00e−4   2.96e−3   2.20e−3
(11) Ŝ_A3²  2.39e−5   5.39e−5   3.50e−4   2.50e−4   2.60e−3   2.10e−3
(12) Ŝ_A4²  2.38e−5   5.35e−5   3.35e−4   2.20e−4   2.40e−3   1.90e−3
(13) Ŝ_A5²  2.50e−5   5.60e−5   3.60e−4   2.70e−4   2.80e−3   1.70e−3
(14) Ŝ_A6²  2.50e−5   5.61e−5   3.66e−4   2.77e−4   3.00e−3   2.15e−3
(15) Ŝ_A7²  2.40e−5   5.25e−5   3.20e−4   2.10e−4   2.35e−3   2.40e−3
(16) Ŝ_A8²  2.30e−5   5.22e−5   3.05e−4   1.90e−4   1.99e−3   1.40e−3
Table 3. PREs of all the estimators using simulated data.
Estimator   Pop-I   Pop-II   Pop-III   Pop-IV   Pop-V   Pop-VI
(1) Ŝ_y²    100      100      100      100      100      100
(2) Ŝ_t²    122.74   125.57   149.49   128.86   124.50   119.52
(3) Ŝ_lr²   124.41   125.88   149.74   133.95   133.39   125.00
(4) Ŝ_bt²   138.23   130.52   152.07   138.28   138.33   127.66
(5) Ŝ_us²   137.97   130.87   152.50   142.89   143.65   133.33
(6) Ŝ_a²    138.49   134.05   153.13   153.09   149.00   139.53
(7) Ŝ_b²    138.49   134.05   153.13   153.00   149.00   139.53
(8) Ŝ_c²    141.15   134.97   157.50   160.75   152.45   146.34
(9) Ŝ_A1²   272.05   171.63   232.12   229.64   269.68   300.00
(10) Ŝ_A2²  246.31   167.57   221.61   214.33   252.36   272.73
(11) Ŝ_A3²  307.11   184.04   252.00   257.20   287.11   285.71
(12) Ŝ_A4²  308.40   185.42   263.28   292.28   311.25   315.79
(13) Ŝ_A5²  293.60   177.14   245.00   238.15   266.79   352.94
(14) Ŝ_A6²  293.60   176.82   240.98   232.13   249.00   279.07
(15) Ŝ_A7²  305.83   188.85   275.63   306.19   317.87   250.00
(16) Ŝ_A8²  319.13   189.74   289.18   337.53   375.38   428.57
Table 4. MSEs using empirical datasets.
Estimator   Data 1   Data 2   Data 3
(1) Ŝ_y²    1.45e+23   9.27e+22   8130.61
(2) Ŝ_t²    9.07e+22   8.67e+22   7487.31
(3) Ŝ_lr²   6.36e+22   4.65e+22   6851.91
(4) Ŝ_bt²   6.72e+22   4.66e+22   6879.75
(5) Ŝ_us²   8.67e+22   8.33e+22   7407.67
(6) Ŝ_a²    9.07e+22   8.67e+22   7483.48
(7) Ŝ_b²    9.07e+22   8.67e+22   7486.06
(8) Ŝ_c²    8.13e+22   8.41e+22   7080.74
(9) Ŝ_A1²   4.22e+22   3.81e+22   6658.82
(10) Ŝ_A2²  4.25e+22   3.84e+22   6726.29
(11) Ŝ_A3²  4.25e+22   3.84e+22   6726.18
(12) Ŝ_A4²  4.25e+22   3.84e+22   6725.93
(13) Ŝ_A5²  4.25e+22   3.84e+22   6686.33
(14) Ŝ_A6²  4.25e+22   3.84e+22   6726.27
(15) Ŝ_A7²  4.17e+22   3.82e+22   6831.87
(16) Ŝ_A8²  4.15e+22   3.80e+22   6631.44
Table 5. PREs using empirical datasets.
Estimator   Data 1   Data 2   Data 3
(1) Ŝ_y²    100      100      100
(2) Ŝ_t²    159.32   106.92   108.59
(3) Ŝ_lr²   227.25   199.09   118.66
(4) Ŝ_bt²   215.92   198.86   118.18
(5) Ŝ_us²   166.89   111.34   109.75
(6) Ŝ_a²    159.32   106.92   108.65
(7) Ŝ_b²    159.32   106.92   108.61
(8) Ŝ_c²    177.80   110.22   114.79
(9) Ŝ_A1²   342.83   243.29   122.10
(10) Ŝ_A2²  340.27   241.54   120.87
(11) Ŝ_A3²  340.27   241.55   120.88
(12) Ŝ_A4²  340.28   241.55   120.88
(13) Ŝ_A5²  340.27   241.55   121.60
(14) Ŝ_A6²  340.27   241.54   120.88
(15) Ŝ_A7²  346.72   242.85   119.01
(16) Ŝ_A8²  348.52   244.05   122.60
Share and Cite

Daraz, U.; Alomair, M.A.; Albalawi, O. Variance Estimation under Some Transformation for Both Symmetric and Asymmetric Data. Symmetry 2024, 16, 957. https://doi.org/10.3390/sym16080957
