Article

A Comparison of Existing Bootstrap Algorithms for Multi-Stage Sampling Designs

1 Department of Biostatistics and Epidemiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
2 Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON K1N 6N5, Canada
3 Department of Mathematics and Statistics, University of Winnipeg, Winnipeg, MB R3B 2E9, Canada
* Author to whom correspondence should be addressed.
Stats 2022, 5(2), 521-537; https://doi.org/10.3390/stats5020031
Submission received: 20 April 2022 / Revised: 23 May 2022 / Accepted: 29 May 2022 / Published: 6 June 2022
(This article belongs to the Special Issue Re-sampling Methods for Statistical Inference of the 2020s)

Abstract

Multi-stage sampling designs are often used in household surveys because a sampling frame of elements may not be available or for cost considerations when data collection involves face-to-face interviews. In this context, variance estimation is a complex task as it relies on the availability of second-order inclusion probabilities at each stage. To cope with this issue, several bootstrap algorithms have been proposed in the literature in the context of a two-stage sampling design. In this paper, we describe some of these algorithms and compare them empirically in terms of bias, stability, and coverage probability.

1. Introduction

Many surveys conducted by national statistical offices use stratified multi-stage sampling designs for selecting a sample. Reasons for using multi-stage sampling rather than direct element sampling include the lack of element-level sampling frames and cost considerations when data collection involves face-to-face interviews. Stratified multi-stage sampling designs include some form of stratification, selection of primary sampling units (psus), and subsampling within selected psus. This is especially common in social and health surveys. For general multi-stage sampling designs, unbiased variance estimation is a complex task as it relies on the availability of the second-order inclusion probabilities at each stage. If the first-stage sampling fraction is small, a common variance estimation strategy is to pretend that the psus were selected with replacement and use the customary with-replacement variance estimator. The resulting estimator is generally conservative in the sense that it may suffer from a small positive bias.
Another approach to variance estimation for survey data is the bootstrap, originally proposed by Efron [1] in the context of independent and identically distributed observations. In finite population sampling, bootstrap procedures can be classified into two broad groups. In the first, bootstrap samples are selected directly from the original sample; see, e.g., [2,3], among others. Rao and Wu [2] applied a scale adjustment directly to the survey data values so as to recover the usual variance formulae. Rao et al. [4] presented a modification of the method of Rao and Wu [2], where the scale adjustment is applied to the survey weights rather than to the data values. The second group of procedures consists of first creating a pseudo-population from the original sample. Bootstrap samples are then selected from the pseudo-population using the same sampling design utilized to select the original sample; see [5,6,7,8], among others. Many of the aforementioned bootstrap procedures may be implemented by randomly generating bootstrap weights so that the first two (or more) design moments of the sampling error are tracked by the corresponding bootstrap moments; see [9,10]. These procedures are often referred to as bootstrap weight procedures. For a comprehensive review of bootstrap procedures for survey data, the reader is referred to Mashreghi et al. [11].
The goal of this paper is to empirically compare several existing bootstrap algorithms that have been proposed in the literature for two-stage sampling designs. The bootstrap procedures are compared with respect to bias, stability, and coverage probability of confidence intervals. In Section 2, we present the basic setup and discuss some classical variance estimation procedures for two-stage sampling designs. In Section 3, we present some bootstrap algorithms proposed in the case of simple random sampling without replacement at both stages. Bootstrap algorithms for unequal probability sampling designs are described in Section 4. In Section 5, we present the results from a simulation study. We make some final remarks in Section 6.

2. The Setup

Consider a finite population U consisting of N primary sampling units (psus), $U_1, \ldots, U_N$, of sizes $M_1, \ldots, M_N$, such that $U = \bigcup_{i \in U} U_i$ and $U_i \cap U_j = \emptyset$ if $i \neq j$. Let $M_0 = \sum_{i \in U} M_i$ be the total number of elements in the population. We are interested in estimating a population total $t_y$ of a survey variable y:
$$t_y = \sum_{i \in U} t_i,$$
where $t_i = \sum_{k \in U_i} y_{ik}$ denotes the ith psu total, $i = 1, \ldots, N$, and $y_{ik}$ denotes the y-value for the kth element in the ith psu. To that end, we select a sample according to a two-stage sampling design:
(i)
A sample $S_{1st}$ of psus, of size n, is selected according to a given sampling design $P(S_{1st})$ with first-order inclusion probabilities $\pi_i = P(i \in S_{1st})$ and second-order inclusion probabilities $\pi_{ij} = P(i \in S_{1st} \ \& \ j \in S_{1st})$. Finally, let $\Delta_{ij} = \pi_{ij} - \pi_i \pi_j$.
(ii)
In the ith psu sampled at the first stage, $i \in S_{1st}$, a subsample of elements, $S_i$, of size $m_i$, is selected according to a given sampling design $P(S_i \mid S_{1st})$ with first-order inclusion probabilities $\pi_{k|i} = P(k \in S_i \mid i \in S_{1st})$ and second-order inclusion probabilities $\pi_{k\ell|i} = P(k \in S_i \ \& \ \ell \in S_i \mid i \in S_{1st})$. Subsampling in a given psu is carried out independently of subsampling in any other psu.
A design-unbiased estimator of t y is the Horvitz–Thompson estimator given by
$$\hat{t}_y = \sum_{i \in S_{1st}} \pi_i^{-1} \hat{t}_i, \qquad (1)$$
where $\hat{t}_i = \sum_{k \in S_i} y_{ik}/\pi_{k|i}$. The estimator (1) can be written as $\hat{t}_y = \sum_{k \in \tilde{S}} w_k y_k$, where $w_k = \pi_k^{-1}$ with $\pi_k = \pi_i \times \pi_{k|i}$, and $\tilde{S}$ denotes the sample of elements, of size $\sum_{i \in S_{1st}} m_i$.
The design variance of $\hat{t}_y$, denoted by $V_p(\hat{t}_y)$, can be unbiasedly estimated by
$$\hat{V}(\hat{t}_y) = \sum_{i \in S_{1st}} \sum_{j \in S_{1st}} \frac{\Delta_{ij}}{\pi_{ij}}\, \frac{\hat{t}_i}{\pi_i}\, \frac{\hat{t}_j}{\pi_j} + \sum_{i \in S_{1st}} \pi_i^{-1} \hat{V}_i, \qquad (2)$$
where
$$\hat{V}_i = \sum_{k \in S_i} \sum_{\ell \in S_i} \frac{\Delta_{k\ell|i}}{\pi_{k\ell|i}}\, \frac{y_{ik}}{\pi_{k|i}}\, \frac{y_{i\ell}}{\pi_{\ell|i}}$$
and $\Delta_{k\ell|i} = \pi_{k\ell|i} - \pi_{k|i}\pi_{\ell|i}$. That is, $E_1\left[E_2\{\hat{V}(\hat{t}_y) \mid S_{1st}\}\right] = V_p(\hat{t}_y)$, where $E_1(\cdot)$ denotes the expectation with respect to the first-stage sampling design, and $E_2(\cdot \mid S_{1st})$ denotes the expectation with respect to the second-stage sampling design conditionally on $S_{1st}$. In the case of simple random sampling without replacement at both stages, the estimator (2) reduces to
$$\hat{V}(\hat{t}_{y,\pi}) = N^2\left(1 - \frac{n}{N}\right)\frac{s_t^2}{n} + \frac{N}{n}\sum_{i \in S_{1st}} M_i^2\left(1 - \frac{m_i}{M_i}\right)\frac{s_{yi}^2}{m_i}, \qquad (3)$$
where
$$s_t^2 = \frac{1}{n-1}\sum_{i \in S_{1st}}\left(\hat{t}_i - \sum_{j \in S_{1st}}\frac{\hat{t}_j}{n}\right)^2$$
and
$$s_{yi}^2 = \frac{1}{m_i-1}\sum_{k \in S_i}\left(y_{ik} - \bar{y}_i\right)^2$$
with $\bar{y}_i = m_i^{-1}\sum_{k \in S_i} y_{ik}$.
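To make estimator (3) concrete, here is a minimal sketch in Python (the paper's simulations were run in R); the function and argument names are ours:

```python
import numpy as np

def two_stage_var(t_hat, N, M, m, s2_y):
    """Textbook variance estimator (3) under SRSWOR at both stages.

    t_hat : estimated psu totals t-hat_i (length n)
    N     : number of psus in the population
    M     : sizes M_i of the sampled psus
    m     : second-stage sample sizes m_i
    s2_y  : within-psu sample variances s^2_{yi}
    """
    n = len(t_hat)
    s2_t = np.var(t_hat, ddof=1)             # s_t^2, between-psu variability
    first = N**2 * (1 - n / N) * s2_t / n    # first-stage component
    second = (N / n) * np.sum(M**2 * (1 - m / M) * s2_y / m)  # second-stage component
    return first + second
```

Both components are non-negative, and the second-stage term vanishes when all psus are censused ($m_i = M_i$), as expected.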
For general two-stage sampling designs, the computation of (2) is cumbersome as it requires the availability of the second-order inclusion probabilities at each stage. A simplified variance estimator is given by
$$\hat{V}_{\mathrm{sim}}(\hat{t}_y) = \sum_{i \in S_{1st}} \sum_{j \in S_{1st}} \frac{\Delta_{ij}}{\pi_{ij}}\, \frac{\hat{t}_i}{\pi_i}\, \frac{\hat{t}_j}{\pi_j}. \qquad (4)$$
That is, only the first term of (2) is kept. The bias of V ^ sim ( t ^ y ) , which is always negative, is expected to be small provided that the first-stage sampling fraction, f 1 = n / N , is small; see [12,13].
An alternative simplified variance estimator can be obtained by pretending that the psus are selected with replacement. It is given by
$$\hat{V}(\hat{t}_{y,wr}) = \frac{1}{n(n-1)}\sum_{i=1}^{n}\left(\frac{\hat{t}_i}{p_i} - \hat{t}_{y,wr}\right)^2, \qquad (5)$$
where $\hat{t}_{y,wr} = \sum_{i \in S_{1st}} \hat{t}_i/(n p_i)$, with $p_i$ denoting the probability of selection of the ith psu at any given draw. If the first-stage sampling fraction, $f_1$, is small, we expect (5) to suffer from a small positive bias. Unlike (4), this estimator does not require the availability of the second-order inclusion probabilities $\pi_{ij}$.
So far, we have considered the case of a population total t y . In practice, it may be of interest to estimate more complex parameters such as distribution functions and quantiles. Let θ N be defined as the solution of the following census estimating equation:
$$U_N(\theta) = \frac{1}{M_0}\sum_{i \in U}\sum_{k \in U_i} u(y_{ik}; \theta) = 0, \qquad (6)$$
where $u(y_{ik}; \theta)$ can be either a smooth (i.e., a function that is differentiable and whose derivatives are continuous) or a non-smooth function of $\theta$. When $u(\cdot)$ is smooth, the solution of (6) is called a smooth parameter; otherwise, it is called a non-smooth parameter. Common parameters include: (i) the population mean, obtained with $u(y_{ik}; \theta) = y_{ik} - \theta$; (ii) the finite population distribution function, obtained with $u(y_{ik}; \theta) = I(y_{ik} \le t) - \theta$, $t \in \mathbb{R}$; (iii) the $\tau$-th population percentile, obtained with $u(y_{ik}; \theta) = I(y_{ik} \le \theta) - \tau$. The population mean is an example of a smooth parameter, whereas distribution functions and quantiles are examples of non-smooth parameters.
An estimator θ ^ of θ N can be obtained by solving the following sample estimating equation:
$$\hat{U}_S(\theta) = \frac{1}{M_0}\sum_{k \in \tilde{S}} w_k u(y_k; \theta) = 0. \qquad (7)$$
The variance of $\hat{\theta}$ may be obtained using a first-order Taylor expansion or by using a resampling method such as balanced repeated replication, the jackknife, or the bootstrap; see [14] for a discussion of resampling methods. In the remainder of this paper, we restrict attention to the bootstrap.

3. Bootstrap Procedures for Simple Random Sampling without Replacement at Both Stages

In this section, we describe several bootstrap algorithms for a two-stage sampling design with simple random sampling without replacement at both stages.

3.1. The Rescaling Bootstrap Algorithm

Rao and Wu [2] proposed a rescaled bootstrap algorithm for both uni-stage and two-stage sampling designs. Because the rescaling factor is applied to the y-values, this method is applicable to smooth statistics but not to the case of non-smooth statistics such as quantiles. The algorithm can be described as follows:
Step 1.
Draw a sample of n psus from $S_{1st}$ according to simple random sampling with replacement.
Step 2.
From each psu selected in Step 1, select a sample of elements, of size $m_i$, according to simple random sampling with replacement. For a psu selected more than once in Step 1, perform independent subsampling.
Step 3.
Let $y_{ik}^*$ be the y-value of the kth bootstrap element in the ith bootstrap psu, let $m_i^*$ be the $m_i$-value of the ith bootstrap psu, and define $M_i^*$ similarly. Let
$$\tilde{y}_{ik} = \hat{\bar{Y}} + \left\{\frac{n(1-f_1)}{n-1}\right\}^{1/2}\left(\frac{\hat{t}_i^*}{\bar{M}_0} - \hat{\bar{Y}}\right) + \left\{\frac{m_i^* f_1 (1-f_{2i}^*)}{m_i^* - 1}\right\}^{1/2}\left(\frac{M_i^* y_{ik}^*}{\bar{M}_0} - \frac{\hat{t}_i^*}{\bar{M}_0}\right),$$
where $f_{2i}^* = m_i^*/M_i^*$, $\hat{t}_i^* = \frac{M_i^*}{m_i^*}\sum_{k=1}^{m_i^*} y_{ik}^*$, $\bar{M}_0 = M_0/N$ and $\hat{\bar{Y}} = \frac{1}{M_0}\frac{N}{n}\sum_{i \in S_{1st}}\frac{M_i}{m_i}\sum_{k \in S_i} y_{ik}$.
Step 4.
Compute $\hat{\theta}^*$ using the same formulae that were used to obtain the original point estimator.
Step 5.
Repeat Steps 1–4 a large number of times, B, to obtain $\hat{\theta}_1^*, \ldots, \hat{\theta}_B^*$.
Step 6.
The bootstrap variance estimator is $var_*(\hat{\theta}^*)$. In practice, the Monte Carlo approximation of $var_*(\hat{\theta}^*)$ is used:
$$\widehat{var}_* = \frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{\theta}_b^* - \bar{\hat{\theta}}^*\right)^2,$$
where $\bar{\hat{\theta}}^* = B^{-1}\sum_{b=1}^{B}\hat{\theta}_b^*$.
Rao and Wu [2] showed that in the case of a population total, the above algorithm matches the standard variance estimator (3). Rao et al. [4] proposed a weighted version of the Rao–Wu method, whereby the rescaling is applied to the sampling weights rather than the y-values; see also [10]. The method of Rao et al. [4] is described in Section 4.
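Steps 4–6 above (and their analogues in the algorithms that follow) share the same Monte Carlo structure, sketched below in Python (the paper's simulations were run in R); `resample_once` and `estimator` are hypothetical callbacks standing in for Steps 1–3 and the point estimator:

```python
import numpy as np

rng = np.random.default_rng(2022)

def bootstrap_variance(resample_once, estimator, B=500):
    """Monte Carlo bootstrap variance: repeat the resampling B times,
    recompute the statistic, and take the variance over replicates."""
    thetas = np.array([estimator(resample_once()) for _ in range(B)])
    return thetas.var(ddof=1)  # (1/(B-1)) * sum_b (theta_b* - mean)^2

# Illustration only: iid with-replacement resampling of a sample mean
y = np.arange(100.0)
v = bootstrap_variance(lambda: rng.choice(y, size=y.size, replace=True),
                       np.mean, B=2000)
```

In this iid illustration, `v` is close to the usual estimate of the variance of the sample mean; the algorithms in this paper differ only in how `resample_once` mimics the two-stage design.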

3.2. The Mirror-Match Bootstrap Algorithm

Sitter [3] proposed an extension of his mirror-match bootstrap to the case of a two-stage sampling design. In [3], the algorithm assumed that the numbers of repetitions $k_1$ and $k_{2i}$ (see below) are integers. It can be described as follows:
Step 1.
Choose $1 \le n' < n$ and draw a sample of $n'$ psus from $S_{1st}$ according to simple random sampling without replacement.
Step 2.
Repeat Step 1 $k_1 = n(1-f_1^*)/\{n'(1-f_1)\}$ times independently to obtain a bootstrap sample of psus of size $n^* = k_1 n'$, where $f_1^* = n'/n$.
Step 3.
Choose $1 \le m_i' < m_i$ and draw, according to simple random sampling without replacement, $m_i'$ units within the ith psu obtained in Steps 1 and 2.
Step 4.
Repeat Step 3 $k_{2i} = N m_i (1-f_{2i}^*)/\{n^* m_i' (1-f_{2i})\}$ times independently to obtain a bootstrap sample of size $m_i^* = k_{2i} m_i'$ from the ith psu drawn in Step 3, where $f_{2i}^* = m_i'/m_i$.
Step 5.
Compute $\hat{\theta}^*$ using the same formulae that were used to obtain the original point estimator.
Step 6.
Repeat Steps 1–5 a large number of times, B, to obtain $\hat{\theta}_1^*, \ldots, \hat{\theta}_B^*$.
Step 7.
The bootstrap variance estimator is $var_*(\hat{\theta}^*)$. In practice, the Monte Carlo approximation of $var_*(\hat{\theta}^*)$ is used:
$$\widehat{var}_* = \frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{\theta}_b^* - \bar{\hat{\theta}}^*\right)^2,$$
where $\bar{\hat{\theta}}^* = B^{-1}\sum_{b=1}^{B}\hat{\theta}_b^*$.
Sitter [3] showed that in the case of a population total, the above algorithm matches the standard variance estimator (3). If k 1 and k 2 i are not integers, a randomization between bracketing integer values is proposed in [15].
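The replication counts used in Steps 2 and 4 can be computed directly from the sample sizes; the small Python helper below (names ours) illustrates the formulas:

```python
def mirror_match_factors(n, N, n_prime, m_i, M_i, m_prime_i):
    """Replication counts for the mirror-match bootstrap.

    k1  = n(1 - f1*)/{n'(1 - f1)}             (Step 2), f1*  = n'/n
    k2i = N m_i(1 - f2i*)/{n* m_i'(1 - f2i)}  (Step 4), f2i* = m_i'/m_i
    Sitter's algorithm assumes both are integers; otherwise a
    randomization between bracketing integers is used [15].
    """
    f1, f1_star = n / N, n_prime / n
    k1 = n * (1 - f1_star) / (n_prime * (1 - f1))
    n_star = k1 * n_prime                      # bootstrap number of psus
    f2i, f2i_star = m_i / M_i, m_prime_i / m_i
    k2i = N * m_i * (1 - f2i_star) / (n_star * m_prime_i * (1 - f2i))
    return k1, k2i
```

For instance, with $n = 4$, $N = 8$, $n' = 2$ and $m_i = 4$, $M_i = 8$, $m_i' = 2$, both factors happen to be integers ($k_1 = 2$, $k_{2i} = 4$), so no randomization is needed.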

3.3. The Without-Replacement Bootstrap Algorithm

Sitter [16] proposed a pseudo-population bootstrap algorithm, referred to as the without-replacement bootstrap (BWO) method, in the case of uni-stage and two-stage sampling designs. We focus on the latter. In [16], the algorithm assumed that the quantities $k_1$, $n'$, $k_{2i}$, and $m_i'$ (see below) are integers. It can be described as follows:
Step 1:
Create a pseudo-population by replicating each psu in $S_{1st}$ $k_1$ times and each unit within the ith psu $k_{2i}$ times. Let $U'$ be the resulting pseudo-population consisting of $N' = n k_1$ psus, $U_1', \ldots, U_{N'}'$, of sizes $M_1', \ldots, M_{N'}'$, where for each psu of $U'$ there exists $j \in S_{1st}$ such that $M_i' = m_j k_{2j}$. Let $M_0' = \sum_{i \in U'} M_i'$ be the total number of elements in the pseudo-population.
Step 2:
From the pseudo-population $U'$, select a sample of psus, $S_{1st}^*$, of size $n'$, according to simple random sampling without replacement. In each selected psu, select a sample, $S_i^*$, of size $m_i'$, according to simple random sampling without replacement.
Step 3:
Compute $\hat{\theta}^*$ using the formulae that were used to obtain the original point estimator.
Step 4:
Repeat Steps 2 and 3 a large number of times, B, to obtain $\hat{\theta}_1^*, \ldots, \hat{\theta}_B^*$.
Step 5:
The bootstrap variance estimator is $var_*(\hat{\theta}^*)$. In practice, the Monte Carlo approximation of $var_*(\hat{\theta}^*)$ is used:
$$\widehat{var}_* = \frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{\theta}_b^* - \bar{\hat{\theta}}^*\right)^2,$$
where $\bar{\hat{\theta}}^* = B^{-1}\sum_{b=1}^{B}\hat{\theta}_b^*$.
In the case of the population total (or the population mean), Sitter [16] showed that the bootstrap variance estimator reduces to the standard variance estimator provided that $n'$ and $k_1$ satisfy
$$f_1' = f_1 \quad \text{and} \quad \frac{k_1(n-1)}{n(k_1 n - 1)} = \frac{1}{n'}, \qquad (8)$$
and $m_i'$ and $k_{2i}$ satisfy
$$f_{2i}' = f_{2i} \quad \text{and} \quad \frac{k_{2i}(m_i-1)}{n'(k_{2i} m_i - 1)} = \frac{f_1}{n m_i'}, \qquad (9)$$
where $f_1 = n/N$ and $f_{2i} = m_i/M_i$, for each i, and $f_1'$ and $f_{2i}'$ denote the corresponding sampling fractions in the pseudo-population. If $k_1$, $n'$, $k_{2i}$, and $m_i'$ are not integers, a randomization between bracketing integer values was proposed in [15]. In Appendix A, we show that, if we define $k_1$, $n'$, $k_{2i}$, and $m_i'$ as in (8) and (9), the bootstrap variance estimator does not reduce to the standard variance estimator in (3). We suggest a modification to (8) and (9) so that the bootstrap variance estimator reduces to the standard variance estimator (3). In the simulation study (see Section 5), we show that the modified version of Sitter [16] works well in terms of bias and coverage probability of confidence intervals.
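For integer replication factors, Step 1 of the BWO method amounts to simple replication; a Python sketch (the helper name is ours, and the paper's own simulations were run in R):

```python
import numpy as np

def bwo_pseudo_population(sample, k1, k2):
    """Step 1 of the BWO method: replicate the whole set of sampled psus
    k1 times, with each element of psu i repeated k2[i] times (integer
    factors assumed; otherwise randomize between bracketing integers)."""
    return [np.repeat(psu, k2[i]).tolist()
            for _ in range(k1)
            for i, psu in enumerate(sample)]

# Two sampled psus, replicated k1 = 2 times, elements repeated 2 and 3 times
pseudo = bwo_pseudo_population([[1, 2], [3]], k1=2, k2=[2, 3])
# pseudo contains N' = n * k1 = 4 pseudo-psus
```

Bootstrap samples are then drawn from `pseudo` by SRSWOR at both stages, as in Step 2.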

3.4. The Bernoulli Bootstrap Algorithm

Funaoka et al. [17] proposed two bootstrap procedures, referred to as the Bernoulli bootstrap, for stratified three-stage sampling. Here, we consider the special case of two-stage sampling. Funaoka et al. [17] proposed a short-cut algorithm and a general algorithm; the latter can handle any combination of sample sizes but is more computationally intensive. The general algorithm can be described as follows:
Step 1.
Draw a sample, $\tilde{S}_1$, of size $n' = n - 1$, from the original sample of clusters, $S_{1st}$, according to simple random sampling with replacement. Generate n Bernoulli random variables, $I_{1i}$, with probability
$$p_1 = 1 - \frac{1-f_1}{2(1-n^{-1})}.$$
For each $i \in S_{1st}$, keep the ith cluster in the bootstrap sample $S^*$ and go to Step 2 if $I_{1i} = 1$, and replace the ith cluster with one cluster selected at random from $\tilde{S}_1$ if $I_{1i} = 0$.
Step 2.
For each cluster i kept in Step 1, draw a sample, $\tilde{S}_{2i}$, of size $m_i' = m_i - 1$, from the original sample $S_{2i}$ according to simple random sampling with replacement. Generate $m_i$ Bernoulli random variables, $I_{2ij}$, with probability
$$p_{2i} = 1 - \frac{f_1(1-f_{2i})}{2 p_1 (1-m_i^{-1})}.$$
For each $(ij) \in S_{2i}$, keep the $(ij)$th element in the bootstrap sample, $S^*$, if $I_{2ij} = 1$, and replace it with one element selected at random from $\tilde{S}_{2i}$ if $I_{2ij} = 0$.
Step 3.
Compute $\hat{\theta}^*$ using the formulae that were used to obtain the original point estimator.
Step 4.
Repeat Steps 1–3 a large number of times, B, to obtain $\hat{\theta}_1^*, \ldots, \hat{\theta}_B^*$.
Step 5.
The bootstrap variance estimator is $var_*(\hat{\theta}^*)$. In practice, the Monte Carlo approximation of $var_*(\hat{\theta}^*)$ is used:
$$\widehat{var}_* = \frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{\theta}_b^* - \bar{\hat{\theta}}^*\right)^2,$$
where $\bar{\hat{\theta}}^* = B^{-1}\sum_{b=1}^{B}\hat{\theta}_b^*$.
Funaoka et al. [17] argued that the resulting bootstrap variance estimator is consistent for both smooth and non-smooth parameters.
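The keep-or-replace mechanics of Steps 1 and 2 can be sketched in Python for a generic retention probability p; the helper name and the uniform replacement pool are our simplifications of the procedure above:

```python
import numpy as np

rng = np.random.default_rng(17)

def bernoulli_keep_or_replace(sample, p):
    """Keep each unit of the original sample with probability p; otherwise
    replace it by a unit drawn with replacement from the sample."""
    sample = np.asarray(sample)
    pool = rng.choice(sample, size=sample.size, replace=True)  # replacement pool
    keep = rng.random(sample.size) < p                         # Bernoulli I's
    return np.where(keep, sample, pool)
```

With p = 1, the original sample is returned unchanged; smaller p injects more with-replacement variability, which is how the retention probabilities $p_1$ and $p_{2i}$ tune the bootstrap variance to the design variance.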

3.5. The Preston Bootstrap Weights Algorithm

Preston [18] proposed a bootstrap weights approach for stratified three-stage sampling. Here, we focus on the special case of two-stage sampling. The algorithm can be described as follows:
Step 1.
Draw a sample of $n'$ psus from $S_{1st}$ according to simple random sampling without replacement. Let $\delta_i = 1$ if the ith psu is selected and $\delta_i = 0$, otherwise.
Step 2.
Define the psu bootstrap weights:
$$w_i^* = \left\{1 + \lambda_1\left(\frac{n}{n'}\,\delta_i - 1\right)\right\}\pi_i^{-1},$$
where $\lambda_1 = \{n'(1-f_1)/(n-n')\}^{1/2}$.
Step 3.
Within each psu selected in Step 1, draw a simple random sample without replacement of size $m_i'$. Let $\delta_{ik} = 1$ if the kth element in the ith psu is selected and $\delta_{ik} = 0$, otherwise. We define the conditional element bootstrap weights:
$$w_{ik}^* = \left\{1 + \lambda_1\left(\frac{n}{n'}\,\delta_i - 1\right) + \lambda_{2i}\left(\frac{n}{n'}\right)^{1/2}\delta_i\left(\frac{m_i}{m_i'}\,\delta_{ik} - 1\right)\right\}\pi_i^{-1}\pi_{k|i}^{-1},$$
where $\lambda_{2i} = \{m_i' f_1(1-f_{2i})/(m_i - m_i')\}^{1/2}$.
Step 4.
Compute $\hat{\theta}^*$ using the formulae that were used to obtain the original point estimator with the original weights replaced by the bootstrap weights $w_{ik}^*$.
Step 5.
Repeat Steps 1–4 a large number of times, B, to obtain $\hat{\theta}_1^*, \ldots, \hat{\theta}_B^*$.
Step 6.
The bootstrap variance estimator is $var_*(\hat{\theta}^*)$. In practice, the Monte Carlo approximation of $var_*(\hat{\theta}^*)$ is used:
$$\widehat{var}_* = \frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{\theta}_b^* - \bar{\hat{\theta}}^*\right)^2,$$
where $\bar{\hat{\theta}}^* = B^{-1}\sum_{b=1}^{B}\hat{\theta}_b^*$.
In the case of the population total, Preston [18] showed that the bootstrap variance estimator reduces to the textbook variance estimator (3). He suggested that the choice of $n' = \lfloor n/2 \rfloor$ and $m_i' = \lfloor m_i/2 \rfloor$ is optimal and leads to non-negative bootstrap weights, where $\lfloor \cdot \rfloor$ denotes the integer part.
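The first-stage weights of Step 2, with the suggested $n' = \lfloor n/2 \rfloor$, can be sketched in Python (the function name is ours; the paper's simulations were run in R). A useful check on the construction is that $\sum_i w_i^* \pi_i = n$ holds exactly in every bootstrap replicate:

```python
import numpy as np

rng = np.random.default_rng(18)

def preston_psu_weights(n, N, pi):
    """First-stage bootstrap weights w_i* = {1 + lam1((n/n')delta_i - 1)}/pi_i,
    with n' = floor(n/2) and lam1 = sqrt{n'(1 - f1)/(n - n')}."""
    n_prime = n // 2
    lam1 = np.sqrt(n_prime * (1 - n / N) / (n - n_prime))
    delta = np.zeros(n)
    delta[rng.choice(n, size=n_prime, replace=False)] = 1  # SRSWOR of n' psus
    return (1 + lam1 * ((n / n_prime) * delta - 1)) / pi
```

With $n' = \lfloor n/2 \rfloor$ the adjustment factor $1 - \lambda_1$ stays non-negative, so no negative weights can occur.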

4. Bootstrap Procedures for Unequal Probability Sampling Designs

4.1. The Rao-Wu-Yue Bootstrap Weights Algorithm

Rao et al. [4] proposed a bootstrap weights approach for stratified multi-stage sampling designs. Unlike the method of Rao and Wu [2], it can be applied to estimate the variance of both smooth and non-smooth parameters (e.g., quantiles). The algorithm can be described as follows:
Step 1.
Select $n'$ psus according to simple random sampling with replacement from $S_{1st}$.
Step 2.
Define the bootstrap weights as
$$w_{ik}^* = \left\{1 + \left(\frac{n'}{n-1}\right)^{1/2}\left(\frac{n}{n'}\,n_i^* - 1\right)\right\}\pi_i^{-1}\pi_{k|i}^{-1},$$
where $n_i^*$ denotes the number of times the ith psu is selected in the bootstrap sample.
Step 3.
Compute $\hat{\theta}^*$ using the formulae that were used to obtain the original point estimator with the original weights replaced by the bootstrap weights $w_{ik}^*$.
Step 4.
Repeat Steps 1–3 B times to obtain $\hat{\theta}_1^*, \ldots, \hat{\theta}_B^*$.
Step 5.
The bootstrap variance estimator is $var_*(\hat{\theta}^*)$. In practice, the Monte Carlo approximation of $var_*(\hat{\theta}^*)$ is used:
$$\widehat{var}_* = \frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{\theta}_b^* - \bar{\hat{\theta}}^*\right)^2,$$
where $\bar{\hat{\theta}}^* = B^{-1}\sum_{b=1}^{B}\hat{\theta}_b^*$.
The algorithm of Rao et al. [4] leads to consistent variance estimators provided that the first-stage sampling fraction is negligible. The choice $0 < n' \le n - 1$ leads to non-negative bootstrap weights.
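Steps 1 and 2 can be sketched in Python (the function name is ours): draw $n'$ psus with replacement and form the weight adjustment factors; multiplying the original weights $\pi_i^{-1}\pi_{k|i}^{-1}$ by the factor of the corresponding psu gives the bootstrap weights. With $n' = n - 1$, the factors are non-negative and average exactly to one:

```python
import numpy as np

rng = np.random.default_rng(4)

def rwy_adjustment_factors(n, n_prime=None):
    """Rao-Wu-Yue factors a_i = 1 + {n'/(n-1)}^{1/2} ((n/n') n_i* - 1),
    where n_i* counts how often psu i is drawn in n' SRSWR draws."""
    if n_prime is None:
        n_prime = n - 1                       # common choice, keeps a_i >= 0
    draws = rng.integers(0, n, size=n_prime)  # SRSWR of psu indices
    counts = np.bincount(draws, minlength=n)  # n_i*
    lam = np.sqrt(n_prime / (n - 1))
    return 1 + lam * ((n / n_prime) * counts - 1)
```

Because the factors average to one in every replicate, the bootstrap weights reproduce the original estimator of the sample size, which is one reason this scheme is convenient to release as a set of replicate weights.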

4.2. The Pseudo-Population Bootstrap Algorithm

Chauvet [8] proposed a pseudo-population bootstrap approach (PPB) in the case of unequal two-stage sampling designs. It can be described as follows:
Step 1.
Each unit $k \in S_i$ is duplicated $[\pi_{k|i}^{-1}]$ times to create a second-stage pseudo-population denoted by $U_i^*$, $i \in S_{1st}$, where $[\cdot]$ denotes the closest integer.
Step 2.
Each pair $(S_i, U_i^*)$ is duplicated $\lfloor \pi_i^{-1} \rfloor$ times. The population of pairs is completed by selecting a sample in the set $\{(S_i, U_i^*);\ i \in S_{1st}\}$ by means of a sampling design with first-order inclusion probabilities $\pi_i^{-1} - \lfloor \pi_i^{-1} \rfloor$. This leads to the pseudo-population $U^*$.
Step 3.
Select a first-stage bootstrap sample $S_{1st}^*$ from $U^*$ using the original first-stage sampling design with first-order inclusion probabilities $\pi_i$.
Step 4.
Select a second-stage bootstrap sample $S_i^{**}$ from $U_i^*$ using the original second-stage sampling design. We set $S_i^* = S_i^{**}$ with probability $\pi_i$ and $S_i^* = S_i$ with probability $1 - \pi_i$. This procedure is applied to each pair $(S_i, U_i^*) \in S_{1st}^*$. The union of the $S_i^*$'s leads to the bootstrap sample $S^*$.
Step 5.
Compute $\hat{\theta}^*$ using the formulae that were used to obtain the original point estimator.
Step 6.
Steps 3–5 are repeated $B_{PPB}$ times to obtain the bootstrap statistics $\hat{\theta}_1^*, \ldots, \hat{\theta}_{B_{PPB}}^*$. Let
$$\hat{v}^* = \frac{1}{B_{PPB}-1}\sum_{b=1}^{B_{PPB}}\left(\hat{\theta}_b^* - \bar{\hat{\theta}}^*\right)^2,$$
where $\bar{\hat{\theta}}^* = B_{PPB}^{-1}\sum_{b=1}^{B_{PPB}}\hat{\theta}_b^*$.
Step 7.
Steps 2–6 are repeated $A_{PPB}$ times to obtain $\hat{v}_1^*, \ldots, \hat{v}_{A_{PPB}}^*$. The variance of $\hat{\theta}$ is estimated by
$$\widehat{var}^* = \frac{1}{A_{PPB}}\sum_{a=1}^{A_{PPB}} \hat{v}_a^*.$$
Chauvet [8] showed that, in the case of high-entropy sampling designs (e.g., [19,20,21,22]) at both stages, the above algorithm leads to a consistent estimator in the context of a population total. In the case of a fixed-size sampling design, Chauvet [8] suggested completing the pseudo-population in Step 2 by applying a Poisson sampling design with inclusion probabilities $\pi_i^{-1} - \lfloor \pi_i^{-1} \rfloor$.

5. Simulation Study

We conducted a simulation study to assess the performance of the bootstrap procedures described in Section 3 and Section 4 in terms of bias, stability, and coverage rate of confidence intervals based on the t-distribution with $n - 1$ degrees of freedom. The simulation study was carried out using the R software. We created two finite populations consisting of N = 200 primary sampling units. The cluster sizes $M_i$ were generated according to a Poisson distribution with a mean equal to 50. In each population, we generated a survey variable y according to
$$y_{ij} = 10 + x_i + \varepsilon_{ij}, \qquad (10)$$
where
$$x_i \sim N\left(0, \frac{\rho}{1-\rho}\right) \quad \text{and} \quad \varepsilon_{ij} \sim N(M_i, 1).$$
The parameter ρ in (10) was set to 0.1 for Population 1 and 0.3 for Population 2. We were interested in estimating the population total of the y-variable, t y , as well as the finite population median.
From each population, we selected K = 3000 samples according to a two-stage sampling design:
(i)
At the first stage, we selected n psus according to two sampling designs: simple random sampling without replacement and inclusion probability-proportional-to-size randomized systematic sampling. The value of n was set to n = 10, which corresponds to a first-stage sampling fraction of $f_1 = 5\%$, and to n = 40, which corresponds to $f_1 = 20\%$.
(ii)
At the second stage, m i = 5 elements within each psu selected at the first stage were selected according to simple random sampling without replacement.
In each sample, we computed the estimator $\hat{t}_y$ given by (1). Its variance was estimated using the variance estimation procedures listed in Table 1. Except for the procedure of Chauvet [8], we used B = 500 bootstrap samples for all bootstrap procedures. For the procedure of Chauvet [8], we used $A_{PPB} = 10$ and $B_{PPB} = 50$ ($B = A_{PPB} \times B_{PPB}$).
As a measure of the bias of a variance estimator $\widehat{var}$, we computed its Monte Carlo percent relative bias
$$RB_{MC}(\widehat{var}) = 100 \times \frac{E_{MC}(\widehat{var}) - V_{MC}(\hat{t}_y)}{V_{MC}(\hat{t}_y)},$$
where $E_{MC}(\widehat{var})$ denotes the Monte Carlo expectation of $\widehat{var}$ and $V_{MC}(\hat{t}_y)$ denotes the Monte Carlo variance estimator of $\hat{t}_y$. As a measure of the stability of a variance estimator $\widehat{var}$, we computed its Monte Carlo percent coefficient of variation given by
$$CV_{MC} = 100 \times \frac{\sqrt{V_{MC}(\widehat{var})}}{E_{MC}(\widehat{var})},$$
where $V_{MC}(\widehat{var})$ denotes the Monte Carlo variance estimator of $\widehat{var}$. Finally, we computed the Monte Carlo coverage rate of t-based confidence intervals and their corresponding Monte Carlo average length.
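The two Monte Carlo measures can be sketched in Python (names ours; the study itself was run in R), with `var_hats` holding the K variance estimates and `theta_hats` the K point estimates across simulated samples:

```python
import numpy as np

def mc_relative_bias(var_hats, theta_hats):
    """Monte Carlo percent relative bias RB_MC of a variance estimator."""
    # Monte Carlo variance of the point estimator across the K samples
    # (ddof=1 is our choice; the paper does not state the divisor)
    v_mc = np.var(theta_hats, ddof=1)
    return 100 * (np.mean(var_hats) - v_mc) / v_mc

def mc_cv(var_hats):
    """Monte Carlo percent coefficient of variation CV_MC."""
    return 100 * np.sqrt(np.var(var_hats, ddof=1)) / np.mean(var_hats)
```

A positive relative bias indicates a conservative variance estimator; a smaller CV indicates a more stable one.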
Table 2, Table 3, Table 4 and Table 5 show the results for the bootstrap methods in the case of SRSWOR/SRSWOR. Table 2 shows the Monte Carlo percent relative bias of the bootstrap variance estimators. For the population total, all the procedures led to small biases for $f_1 = 5\%$, with the absolute relative bias ranging from 1.08% to 8.66%. For the population median, except for the method of Rao and Wu [2], the procedures led to good results, with an absolute relative bias ranging from 0.02% to 16.58%. As expected, the method of Rao and Wu [2] led to substantial bias because it cannot be applied to non-smooth statistics. For $f_1 = 20\%$, the absolute relative bias for the population total varied between 2.49% and 10.00% for all bootstrap methods except the method of Rao et al. [4], which suffered from a significant positive bias. This can be explained by the fact that the sampling fraction was not small. For the population median with $f_1 = 20\%$, the absolute relative bias varied between 2.60% and 13.20% for all bootstrap methods except the methods of Rao and Wu [2] and Rao et al. [4].
Table 3 shows the percent CV. All the bootstrap methods led to similar Monte Carlo coefficients of variation (CV). For f 1 = 5 % , the CV varied between 41.2% and 46.0% for the population total, and between 59.0% and 64.3% (except for the method of Rao and Wu [2] that led to a CV of 69.1% for ρ = 0.3 ) for the population median. For f 1 = 20 % , the CV varied between 18.9% and 21.0% for the population total, and between 36.4% and 42.3% for the population median.
Table 4 and Table 5 show the coverage probability and the average length of 95% confidence intervals based on the t-distribution, respectively. All the bootstrap methods led to good coverage and similar average length except the method of Rao and Wu [2] for the population median. The coverage rate in all cases, except the method of Rao and Wu [2] for the population median, varied between 93.17% and 96.73%.
Table 6, Table 7, Table 8 and Table 9 show the results for the bootstrap methods in the case of randomized IPPS systematic/SRSWOR. Table 6 shows the percent relative bias of the bootstrap variance estimators. The method of Chauvet [8] worked generally better than the method of Rao et al. [4] in terms of relative bias, especially in the case of the population median. The percent CVs presented in Table 7 were very similar for both methods.
Table 8 and Table 9 respectively show the coverage probability and the average length of the 95 percent confidence intervals based on the t-distribution for both methods. Both bootstrap methods led to good coverage and similar average length. The coverage rate in all cases varied from 93.23% to 96.43%.

6. Final Remarks

The results of the simulation studies suggest that most bootstrap procedures work well in terms of bias and coverage rate of confidence intervals for estimating smooth or non-smooth parameters. An exception is the method of Rao and Wu [2] for quantiles and the method of Rao et al. [4] for appreciable first-stage sampling fractions. In terms of stability of the variance estimators, there is little difference between the bootstrap procedures. Our results are aligned with those of Saigo [23] who empirically compared four bootstrap procedures in the context of stratified three-stage sampling with simple random sampling without replacement at each stage: the procedure of Funaoka et al. [17], the mirror match bootstrap of Sitter [3], the method of Rao et al. [4], and the BWO method of Sitter [16].
In this paper, we have compared several bootstrap algorithms in the context of a two-stage sampling design. The algorithms were described in an ideal setup. In practice, the design weights $\pi_k^{-1}$ undergo a weighting process that involves a nonresponse adjustment followed by some form of calibration, whose goal is to ensure consistency between survey estimates and known population quantities. When the first-stage sampling fraction is small, the method of Rao et al. [4] is typically used in surveys conducted by national statistical offices. To account for unit nonresponse and calibration, the bootstrap weights need to undergo the same weighting process (nonresponse adjustment and calibration) that was applied to the original sample.
The bootstrap may be used to estimate the variance of imputed estimators in the context of imputation for item nonresponse. If the first-stage sampling fraction is small, one can re-impute the missing values in each bootstrap sample using the same imputation procedure that was utilized in the original sample; see [24]. The case of non-negligible first-stage sampling fractions requires further research.
We end this paper by pointing out a very recent paper by Beaumont and Émond [25], who proposed a bootstrap weight approach for multi-stage sampling designs. It would be interesting to compare this method to the procedures considered in this paper.

Author Contributions

Conceptualization, S.C., D.H. and Z.M.; methodology, S.C., D.H. and Z.M.; software, S.C. and Z.M.; validation, S.C., D.H. and Z.M.; formal analysis, NA; investigation, S.C., D.H. and Z.M.; resources, S.C., D.H. and Z.M.; data curation, NA; writing—original draft preparation, S.C., D.H. and Z.M.; writing—review and editing, S.C., D.H. and Z.M.; visualization, NA; supervision, NA; project administration, S.C., D.H. and Z.M.; funding acquisition, NA. All authors have read and agreed to the published version of the manuscript.

Funding

Sixia Chen is partly supported by the National Institute on Minority Health and Health Disparities at the National Institutes of Health (1R21MD014658-01A1) and the Oklahoma Shared Clinical and Translational Resources (U54GM104938) with an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The work of David Haziza and Z.M. is funded by grants of the Natural Sciences and Engineering Research Council of Canada.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the Associate Editor and three reviewers for their comments that helped improve the overall quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this appendix, we show that, when $k_1$, $n'$, $k_{2i}$, and $m_i'$ are obtained from Equations (8) and (9) in the algorithm of Sitter [16] described in Section 3.3, the resulting bootstrap variance estimator does not reduce to the standard variance estimator in the case of the population total $\theta = t_y$.
In the case of $\theta = t_y$, the bootstrap estimator is
$$
\hat{\theta}^{*} = \frac{N'}{n'} \sum_{i \in S^{*}_{1st}} \frac{M'_i}{m'_i} \sum_{k \in S^{*}_i} y^{*}_{ik},
$$
where $y^{*}_{ik}$ is the $y$-value for the $k$th selected element in $S^{*}_i$ in Step 2. The bootstrap variance of $\hat{\theta}^{*}$ is
$$
\mathrm{var}_{*}\left(\hat{\theta}^{*}\right) = V_{1*} E_{2*}\left(\hat{\theta}^{*} \mid S^{*}_{1st}\right) + E_{1*} V_{2*}\left(\hat{\theta}^{*} \mid S^{*}_{1st}\right), \tag{A1}
$$
where the subscripts $1*$ and $2*$ respectively denote the expectation and the variance with respect to the first-stage and the second-stage resampling design in Step 2. Let $t_i = \sum_{k \in U'_i} y_{ik}$ be the total of the $y$-values in the pseudo-cluster $U'_i$ created in Step 1. The first and the second components of the bootstrap variance estimator $\mathrm{var}_{*}(\hat{\theta}^{*})$ in (A1) are, respectively,
$$
V_{1*} E_{2*}\left(\hat{\theta}^{*} \mid S^{*}_{1st}\right)
= V_{1*}\left( \frac{N'}{n'} \sum_{i \in S^{*}_{1st}} t_i \right)
= N'^2\, \frac{1-f'_1}{n'}\, \frac{1}{N'-1} \sum_{i \in U'} \left( t_i - \frac{1}{N'} \sum_{j \in U'} t_j \right)^{2}
= N'^2\, \frac{1-f'_1}{n'}\, \frac{k_1}{k_1 n - 1} \sum_{i \in S_{1st}} \left( k_{2i} m_i \bar{y}_i - \frac{1}{n} \sum_{j \in S_{1st}} k_{2j} m_j \bar{y}_j \right)^{2}
$$
and
$$
\begin{aligned}
E_{1*} V_{2*}\left(\hat{\theta}^{*} \mid S^{*}_{1st}\right)
&= E_{1*}\left[ \frac{N'^2}{n'^2} \sum_{i \in S^{*}_{1st}} M_i'^2\, \frac{1-f'_{2i}}{m'_i}\, \frac{1}{M'_i-1} \sum_{k \in U'_i} \left( y^{*}_{ik} - \frac{1}{M'_i} \sum_{l \in U'_i} y^{*}_{il} \right)^{2} \right] \\
&= \frac{N'}{n'} \sum_{i \in U'} M_i'^2\, \frac{1-f'_{2i}}{m'_i}\, \frac{1}{M'_i-1} \sum_{k \in U'_i} \left( y^{*}_{ik} - \frac{1}{M'_i} \sum_{l \in U'_i} y^{*}_{il} \right)^{2} \\
&= \frac{N'^2}{n'\, n} \sum_{i \in S_{1st}} \left( k_{2i} m_i \right)^{2}\, \frac{1-f'_{2i}}{m'_i}\, \frac{k_{2i}}{k_{2i} m_i - 1} \sum_{k \in S_i} \left( y_{ik} - \bar{y}_i \right)^{2} \\
&= \sum_{i \in S_{1st}} \frac{\left( N' k_{2i} m_i \right)^{2} \left( 1-f'_{2i} \right)}{n'\, n\, m'_i}\, \frac{k_{2i}(m_i-1)}{k_{2i} m_i - 1}\, s^{2}_{yi},
\end{aligned}
$$
where $\bar{y}_i = \sum_{k \in S_i} y_{ik} / m_i$. In [16], it is claimed that the first and the second components of the bootstrap variance estimator are, respectively,
$$
N^2\, \frac{k_1(n-1)}{k_1 n - 1}\, \frac{1-f'_1}{n'}\, s^{2}_{t}
$$
and
$$
N^2 \sum_{i \in S_{1st}} M_i^2\, \frac{1-f'_{2i}}{n'\, n\, m'_i}\, \frac{k_{2i}(m_i-1)}{k_{2i} m_i - 1}\, s^{2}_{yi};
$$
see Equation (3.5) in Section 3.2 of [16]. This is true only when
$$
N' = k_1 n = N \quad \text{and} \quad k_{2i} m_i = M_i \ \text{for all}\ i \in S_{1st}.
$$
In other words, we would have to define $k_1 = N/n$ and $k_{2i} = M_i/m_i$, which contradicts the way Sitter [16] defined $k_1$, $n'$, $k_{2i}$, and $m'_i$ through Equations (8) and (9).
In the following, we suggest how the method of Sitter [16] can be modified. We first define
$$
k_1 = \frac{N}{n} \quad \text{and} \quad k_{2i} = \frac{M_i}{m_i}. \tag{A3}
$$
To have equality between the first and the second components of the bootstrap variance estimator and the corresponding components of the standard variance estimator in (3), we need
$$
\frac{k_1(n-1)}{k_1 n - 1}\, \frac{1-f'_1}{n'} = \frac{1-f_1}{n}
\;\Longleftrightarrow\;
\frac{k_1(n-1)}{k_1 n - 1} \left( \frac{1}{n'} - \frac{1}{k_1 n} \right) = \frac{1-f_1}{n}
\;\Longleftrightarrow\;
n' = \frac{N}{1 + \dfrac{1-f_1}{f_1}\, \dfrac{k_1 n - 1}{k_1(n-1)}} \tag{A4}
$$
and
$$
\frac{k_{2i}(m_i-1)}{k_{2i} m_i - 1} \left( \frac{1}{m'_i} - \frac{1}{M_i} \right) = \frac{f_1\, n'}{n\, m_i} \left( 1 - f_{2i} \right)
\;\Longleftrightarrow\;
\frac{1}{m'_i} - \frac{1}{M_i} = \frac{f_1 (1-f_{2i})\, n'}{n\, m_i}\, \frac{k_{2i} m_i - 1}{k_{2i}(m_i-1)}
\;\Longleftrightarrow\;
m'_i = \frac{M_i}{1 + \dfrac{f_1 (1-f_{2i})\, n'}{f_{2i}\, n}\, \dfrac{k_{2i} m_i - 1}{k_{2i}(m_i-1)}}. \tag{A5}
$$
Defining $k_1$, $n'$, $k_{2i}$, and $m'_i$ as in Equations (A3)–(A5), the bootstrap variance estimator reduces to the usual variance estimator in (3). In the simulation study, the method "Modified Sitter" refers to the method of Sitter [16] applied with the modified $k_1$, $n'$, $k_{2i}$, and $m'_i$ defined in Equations (A3)–(A5). When $k_1$, $n'$, $k_{2i}$, or $m'_i$ is not an integer, we simply rounded it to the nearest integer.
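The modified parameters in (A3)–(A5) are straightforward to compute. The following sketch is ours, not part of the paper: function and variable names are our own, and we assume the unrounded $n'$ is plugged into (A5) before rounding, since the paper does not specify the order of the rounding steps.

```python
def modified_sitter_params(N, n, M, m):
    """Modified Sitter bootstrap parameters from (A3)-(A5).

    N, n : number of psus in the population and in the sample
    M, m : lists of psu sizes M_i and within-psu sample sizes m_i
    Returns (k1, n', [k_2i], [m'_i]), each rounded to the nearest
    integer as in the simulation study.
    """
    f1 = n / N
    k1 = N / n                                   # (A3)
    # (A4): n' = N / (1 + (1 - f1)/f1 * (k1*n - 1)/(k1*(n - 1)))
    n_prime = N / (1 + (1 - f1) / f1 * (k1 * n - 1) / (k1 * (n - 1)))
    k2, m_prime = [], []
    for M_i, m_i in zip(M, m):
        f2i = m_i / M_i
        k2i = M_i / m_i                          # (A3)
        # (A5): m'_i = M_i / (1 + f1*(1 - f2i)*n'/(f2i*n)
        #                        * (k2i*m_i - 1)/(k2i*(m_i - 1)))
        m_pi = M_i / (1 + f1 * (1 - f2i) * n_prime / (f2i * n)
                      * (k2i * m_i - 1) / (k2i * (m_i - 1)))
        k2.append(round(k2i))
        m_prime.append(round(m_pi))
    return round(k1), round(n_prime), k2, m_prime
```

For instance, with $N = 100$, $n = 10$ and $M_i = 50$, $m_i = 5$ for every sampled psu, this gives $k_1 = 10$, $n' = 9$, $k_{2i} = 10$, and $m'_i = 25$.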

References

  1. Efron, B. Bootstrap methods: Another look at the jackknife. Ann. Stat. 1979, 7, 1–26. [Google Scholar] [CrossRef]
  2. Rao, J.N.K.; Wu, C.F.J. Resampling inference with complex survey data. J. Am. Stat. Assoc. 1988, 83, 231–241. [Google Scholar] [CrossRef]
  3. Sitter, R.R. A resampling procedure for complex survey data. J. Am. Stat. Assoc. 1992, 87, 755–765. [Google Scholar] [CrossRef]
  4. Rao, J.N.K.; Wu, C.F.J.; Yue, K. Some recent work on resampling methods for complex surveys. Surv. Methodol. 1992, 18, 209–217. [Google Scholar]
  5. Gross, S. Median estimation in sample surveys. Proc. Sect. Surv. Res. Methods 1980, 181–184. [Google Scholar]
  6. Bickel, P.J.; Freedman, D.A. Asymptotic normality and the bootstrap in stratified sampling. Ann. Stat. 1984, 12, 470–482. [Google Scholar] [CrossRef]
  7. Booth, J.G.; Butler, R.W.; Hall, P. Bootstrap methods for finite populations. J. Am. Stat. Assoc. 1994, 89, 1282–1289. [Google Scholar] [CrossRef]
  8. Chauvet, G. Méthodes de Bootstrap en Population Finie. Ph.D. Thesis, École Nationale de Statistique et Analyse de l’Information, Bruz, France, 2007. [Google Scholar]
  9. Antal, E.; Tillé, Y. A direct bootstrap method for complex sampling designs from a finite population. J. Am. Stat. Assoc. 2011, 106, 534–543. [Google Scholar] [CrossRef] [Green Version]
  10. Beaumont, J.F.; Patak, Z. On the generalized bootstrap for sample surveys with special attention to Poisson sampling. Int. Stat. Rev. 2012, 80, 127–148. [Google Scholar] [CrossRef]
  11. Mashreghi, Z.; Haziza, D.; Léger, C. A survey of bootstrap methods in finite population sampling. Stat. Surv. 2016, 10, 1–52. [Google Scholar] [CrossRef]
  12. Särndal, C.E.; Swensson, B.; Wretman, J. Model-Assisted Survey Sampling; Springer: New York, NY, USA, 1992. [Google Scholar]
  13. Beaumont, J.F.; Béliveau, A.; Haziza, D. Clarifying some aspects of variance estimation in two-phase sampling. J. Surv. Stat. Methodol. 2015, 3, 524–542. [Google Scholar] [CrossRef]
  14. Wolter, K.M. Introduction to Variance Estimation; Springer Series in Statistics: New York, NY, USA, 2007. [Google Scholar]
  15. Sitter, R.R. Resampling Procedures for Complex Survey Data. Ph.D. Thesis, University of Waterloo, Waterloo, ON, Canada, 1989. [Google Scholar]
  16. Sitter, R.R. Comparing three bootstrap methods for survey data. Can. J. Stat. 1992, 20, 135–154. [Google Scholar] [CrossRef]
  17. Funaoka, F.; Saigo, H.; Sitter, R.R.; Toida, T. Bernoulli bootstrap for stratified multistage sampling. Surv. Methodol. 2006, 32, 151–156. [Google Scholar]
  18. Preston, J. Rescaled bootstrap for stratified multistage sampling. Surv. Methodol. 2009, 35, 227–234. [Google Scholar]
  19. Hájek, J. Asymptotic theory of rejective sampling with varying probabilities from a finite population. Ann. Math. Stat. 1964, 35, 1491–1523. [Google Scholar] [CrossRef]
  20. Berger, Y.G. Rate of convergence for asymptotic variance of the Horvitz–Thompson estimator. J. Stat. Plan. Inference 1998, 74, 149–168. [Google Scholar] [CrossRef]
  21. Matei, A.; Tillé, Y. Evaluation of variance approximations and estimators in maximum entropy sampling with unequal probability and fixed sample size. J. Off. Stat. 2005, 21, 543–570. [Google Scholar]
  22. Haziza, D.; Mecatti, F.; Rao, J.N.K. Evaluation of some approximate variance estimators under the Rao–Sampford unequal probability sampling design. Metron 2008, 66, 91–108. [Google Scholar]
  23. Saigo, H. Comparing four bootstrap methods for stratified three-stage sampling. J. Off. Stat. 2010, 26, 193–207. [Google Scholar]
  24. Shao, J.; Sitter, R.R. Bootstrap for imputed survey data. J. Am. Stat. Assoc. 1996, 91, 1278–1288. [Google Scholar] [CrossRef]
  25. Beaumont, J.F.; Émond, N. A bootstrap variance estimation method for multistage sampling and two-phase sampling when Poisson sampling is used at the second phase. Stats 2022, 5, 339–357. [Google Scholar] [CrossRef]
Table 1. Variance estimation procedures for each population parameter/sampling design.
| Sampling Design | Population Total | Population Median |
|---|---|---|
| SRSWOR/SRSWOR | Textbook variance estimator; Rao and Wu [2]; Rao et al. [4]; Modified Sitter; Funaoka et al. [17]; Chauvet [8]; Preston [18] | Linearization variance estimator; Rao and Wu [2]; Rao et al. [4]; Modified Sitter; Funaoka et al. [17]; Chauvet [8]; Preston [18] |
| IPPSWOR/SRSWOR | Textbook variance estimator; Rao et al. [4]; Chauvet [8] | Linearization variance estimator; Rao et al. [4]; Chauvet [8] |
Table 2. Monte Carlo percent RB of the bootstrap variance estimators to estimate the variance of the point estimator based on 3000 samples selected according to SRSWOR/SRSWOR.
| $\hat{\theta}$ | Bootstrap Method | $\rho = 0.1$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.1$, $f_1 = 20\%$ ($n = 40$) | $\rho = 0.3$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.3$, $f_1 = 20\%$ ($n = 40$) |
|---|---|---|---|---|---|
| $\hat{t}_y$ | Textbook | 1.99 | 5.10 | 1.28 | 5.05 |
| | Chauvet [8] | −7.79 | 2.49 | −8.39 | 2.54 |
| | Rao et al. [4] | 6.99 | 29.07 | 6.27 | 29.38 |
| | Rao and Wu [2] | 8.66 | 10.00 | 6.96 | 9.20 |
| | Preston [18] | 1.88 | 5.20 | 1.13 | 5.11 |
| | Modified Sitter | 6.97 | 9.93 | 5.84 | 9.49 |
| | Funaoka et al. [17] | 1.80 | 4.00 | 1.08 | 4.15 |
| $\hat{\theta}_{1/2}$ | Textbook | 11.71 | 7.42 | 19.31 | 10.96 |
| | Chauvet [8] | −0.02 | 7.56 | 0.03 | 8.67 |
| | Rao et al. [4] | 11.19 | 20.39 | 13.05 | 27.92 |
| | Rao and Wu [2] | 930.28 | 729.81 | 502.50 | 367.14 |
| | Preston [18] | 10.97 | 7.58 | 16.58 | 11.00 |
| | Modified Sitter | 11.81 | 12.98 | 10.74 | 13.20 |
| | Funaoka et al. [17] | 7.38 | 2.60 | 9.21 | 6.76 |
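For reference, the Monte Carlo percent RB reported in Table 2 and the percent CV reported in Table 3 can be computed from simulation output along the following lines. This is our own sketch, not code from the paper; in particular, we assume the CV is taken relative to the mean of the variance estimates, which may differ from the exact definition used.

```python
from statistics import fmean, stdev

def monte_carlo_rb_cv(v_boot, theta_hat, theta):
    """Percent relative bias (RB) of a variance estimator and its
    percent coefficient of variation (CV) over Monte Carlo samples.

    v_boot    : one bootstrap variance estimate per simulated sample
    theta_hat : one point estimate per simulated sample
    theta     : true value of the parameter
    """
    # Monte Carlo (true) variance of the point estimator
    v_mc = fmean((t - theta) ** 2 for t in theta_hat)
    rb = 100 * (fmean(v_boot) - v_mc) / v_mc
    cv = 100 * stdev(v_boot) / fmean(v_boot)
    return rb, cv
```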
Table 3. Monte Carlo percent CV based on 3000 samples selected according to SRSWOR/SRSWOR.
| $\hat{\theta}$ | Bootstrap Method | $\rho = 0.1$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.1$, $f_1 = 20\%$ ($n = 40$) | $\rho = 0.3$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.3$, $f_1 = 20\%$ ($n = 40$) |
|---|---|---|---|---|---|
| $\hat{t}_y$ | Textbook | 43.4 | 18.9 | 45.3 | 19.8 |
| | Chauvet [8] | 43.9 | 19.9 | 45.9 | 20.9 |
| | Rao et al. [4] | 44.0 | 20.1 | 45.8 | 21.0 |
| | Rao and Wu [2] | 41.2 | 19.3 | 43.3 | 20.2 |
| | Preston [18] | 43.8 | 20.2 | 45.6 | 21.0 |
| | Modified Sitter | 43.6 | 19.9 | 45.0 | 20.6 |
| | Funaoka et al. [17] | 44.1 | 20.2 | 46.0 | 21.0 |
| $\hat{\theta}_{1/2}$ | Textbook | 62.0 | 37.2 | 62.4 | 36.4 |
| | Chauvet [8] | 61.0 | 41.1 | 60.5 | 38.9 |
| | Rao et al. [4] | 59.9 | 41.1 | 59.3 | 38.0 |
| | Rao and Wu [2] | 64.3 | 37.4 | 69.1 | 39.7 |
| | Preston [18] | 63.8 | 38.0 | 64.0 | 37.3 |
| | Modified Sitter | 59.4 | 40.4 | 59.0 | 38.3 |
| | Funaoka et al. [17] | 59.8 | 42.3 | 59.3 | 39.5 |
Table 4. Coverage rate (CR) of the 95% t-distribution based confidence intervals constructed using bootstrap standard error estimators based on 3000 samples selected according to SRSWOR/SRSWOR.
| $\hat{\theta}$ | Bootstrap Method | $\rho = 0.1$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.1$, $f_1 = 20\%$ ($n = 40$) | $\rho = 0.3$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.3$, $f_1 = 20\%$ ($n = 40$) |
|---|---|---|---|---|---|
| $\hat{t}_y$ | Textbook | 95.33 | 95.07 | 95.40 | 94.73 |
| | Chauvet [8] | 94.50 | 94.87 | 94.70 | 94.67 |
| | Rao et al. [4] | 95.50 | 96.63 | 95.77 | 96.73 |
| | Rao and Wu [2] | 96.10 | 95.33 | 96.00 | 95.47 |
| | Preston [18] | 95.37 | 95.07 | 95.30 | 94.77 |
| | Modified Sitter | 95.80 | 95.47 | 95.73 | 95.30 |
| | Funaoka et al. [17] | 95.27 | 94.97 | 95.33 | 94.73 |
| $\hat{\theta}_{1/2}$ | Textbook | 94.63 | 95.23 | 94.27 | 95.20 |
| | Chauvet [8] | 93.90 | 94.67 | 93.17 | 94.57 |
| | Rao et al. [4] | 95.10 | 95.97 | 94.70 | 96.03 |
| | Rao and Wu [2] | 100.00 | 99.97 | 99.97 | 99.93 |
| | Preston [18] | 94.50 | 95.23 | 93.70 | 94.80 |
| | Modified Sitter | 95.07 | 95.13 | 94.67 | 94.93 |
| | Funaoka et al. [17] | 94.73 | 94.27 | 94.20 | 94.13 |
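The coverage rates in Table 4 and the average lengths in Table 5 summarize $t$-based intervals of the form $\hat{\theta} \pm t \cdot \widehat{se}$. A minimal sketch of how such summaries are obtained follows; this is our own illustration, and the critical value is supplied by the caller rather than computed from the $t$-distribution.

```python
def t_ci_coverage_length(theta, theta_hat, se_boot, t_crit):
    """Empirical coverage rate (CR, in percent) and average length (AL)
    of intervals of the form  theta_hat +/- t_crit * se_boot.

    theta     : true value of the parameter
    theta_hat : one point estimate per simulated sample
    se_boot   : the matching bootstrap standard error estimates
    t_crit    : t-distribution critical value (supplied by the caller)
    """
    covered, total_length = 0, 0.0
    for est, se in zip(theta_hat, se_boot):
        lo, hi = est - t_crit * se, est + t_crit * se
        covered += (lo <= theta <= hi)
        total_length += hi - lo
    return 100 * covered / len(theta_hat), total_length / len(theta_hat)
```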
Table 5. Average length (AL) of the 95% t-distribution based confidence intervals constructed using bootstrap standard error estimators based on 3000 samples selected according to SRSWOR/SRSWOR.
| $\hat{\theta}$ | Bootstrap Method | $\rho = 0.1$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.1$, $f_1 = 20\%$ ($n = 40$) | $\rho = 0.3$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.3$, $f_1 = 20\%$ ($n = 40$) |
|---|---|---|---|---|---|
| $\hat{t}_y$ | Textbook | 21,470.7 | 8725.5 | 23,157.5 | 9721.9 |
| | Chauvet [8] | 20,404.4 | 8612.6 | 22,012.1 | 9599.7 |
| | Rao et al. [4] | 21,978.5 | 9663.6 | 23,707.2 | 10,782.5 |
| | Rao and Wu [2] | 22,218.5 | 8925.4 | 23,857.5 | 9910.3 |
| | Preston [18] | 21,450.9 | 8724.4 | 23,134.4 | 9718.9 |
| | Modified Sitter | 21,988.4 | 8919.4 | 23,684.7 | 9921.4 |
| | Funaoka et al. [17] | 21,435.4 | 8674.5 | 23,119.5 | 9674.2 |
| $\hat{\theta}_{1/2}$ | Textbook | 0.9796 | 0.4081 | 1.3847 | 0.5705 |
| | Chauvet [8] | 0.9284 | 0.4071 | 1.2717 | 0.5634 |
| | Rao et al. [4] | 0.9798 | 0.4306 | 1.3537 | 0.6117 |
| | Rao and Wu [2] | 2.9770 | 1.1343 | 3.0965 | 1.1681 |
| | Preston [18] | 0.9736 | 0.4081 | 1.3654 | 0.5701 |
| | Modified Sitter | 0.9795 | 0.4175 | 1.3407 | 0.5753 |
| | Funaoka et al. [17] | 0.9634 | 0.3972 | 1.3306 | 0.5582 |
Table 6. Monte Carlo percent RB of the bootstrap variance estimators to estimate the variance of the point estimator based on 3000 samples selected according to IPPS/SRSWOR.
| $\hat{\theta}$ | Bootstrap Method | $\rho = 0.1$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.1$, $f_1 = 20\%$ ($n = 40$) | $\rho = 0.3$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.3$, $f_1 = 20\%$ ($n = 40$) |
|---|---|---|---|---|---|
| $\hat{t}_y$ | Textbook | −1.26 | −0.36 | −1.98 | −0.05 |
| | Chauvet [8] | −13.65 | −5.74 | −11.86 | −3.83 |
| | Rao et al. [4] | 1.24 | 9.93 | 2.06 | 18.03 |
| $\hat{\theta}_{1/2}$ | Textbook | 18.37 | 3.92 | 22.58 | 5.14 |
| | Chauvet [8] | 0.16 | −3.24 | 2.65 | 2.09 |
| | Rao et al. [4] | 16.45 | 15.71 | 17.72 | 21.56 |
Table 7. Monte Carlo percent CV based on 3000 samples selected according to IPPS/SRSWOR.
| $\hat{\theta}$ | Bootstrap Method | $\rho = 0.1$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.1$, $f_1 = 20\%$ ($n = 40$) | $\rho = 0.3$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.3$, $f_1 = 20\%$ ($n = 40$) |
|---|---|---|---|---|---|
| $\hat{t}_y$ | Textbook | 44.9 | 18.9 | 44.6 | 19.0 |
| | Chauvet [8] | 46.2 | 21.0 | 46.7 | 21.3 |
| | Rao et al. [4] | 46.9 | 22.4 | 46.1 | 21.3 |
| $\hat{\theta}_{1/2}$ | Textbook | 62.0 | 37.7 | 59.3 | 36.1 |
| | Chauvet [8] | 61.3 | 40.5 | 61.0 | 40.1 |
| | Rao et al. [4] | 60.3 | 41.2 | 58.4 | 37.5 |
Table 8. Coverage rate (CR) of the 95% t-distribution based confidence intervals constructed using bootstrap standard error estimators based on 3000 samples selected according to IPPS/SRSWOR.
| $\hat{\theta}$ | Bootstrap Method | $\rho = 0.1$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.1$, $f_1 = 20\%$ ($n = 40$) | $\rho = 0.3$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.3$, $f_1 = 20\%$ ($n = 40$) |
|---|---|---|---|---|---|
| $\hat{t}_y$ | Textbook | 95.27 | 95.50 | 94.73 | 94.83 |
| | Chauvet [8] | 93.23 | 94.83 | 93.63 | 94.70 |
| | Rao et al. [4] | 95.40 | 96.07 | 95.03 | 96.43 |
| $\hat{\theta}_{1/2}$ | Textbook | 94.53 | 94.63 | 95.70 | 93.63 |
| | Chauvet [8] | 93.77 | 93.80 | 93.27 | 93.93 |
| | Rao et al. [4] | 95.17 | 94.93 | 95.27 | 95.43 |
Table 9. Average length (AL) of the 95% t-distribution based confidence intervals constructed using bootstrap standard error estimators based on 3000 samples selected according to IPPS/SRSWOR.
| $\hat{\theta}$ | Bootstrap Method | $\rho = 0.1$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.1$, $f_1 = 20\%$ ($n = 40$) | $\rho = 0.3$, $f_1 = 5\%$ ($n = 10$) | $\rho = 0.3$, $f_1 = 20\%$ ($n = 40$) |
|---|---|---|---|---|---|
| $\hat{t}_y$ | Textbook | 7694.4 | 3401.9 | 11,021.3 | 4746.8 |
| | Chauvet [8] | 7317.2 | 3314.3 | 10,508.3 | 4650.7 |
| | Rao et al. [4] | 7772.1 | 3566.9 | 11,228.1 | 5152.5 |
| $\hat{\theta}_{1/2}$ | Textbook | 0.9797 | 0.4184 | 1.3709 | 0.5597 |
| | Chauvet [8] | 0.9247 | 0.4110 | 1.2627 | 0.5472 |
| | Rao et al. [4] | 0.9736 | 0.4403 | 1.3461 | 0.6011 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
