Article

Class of Calibrated Estimators of Population Proportion Under Diagonal Systematic Sampling Scheme

1 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Pretoria 0204, South Africa
2 Department of Statistics, Usmanu Danfodiyo University, Sokoto 840101, Nigeria
3 Department of Mathematics, Kebbi State University of Science and Technology, Aliero 840101, Nigeria
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(24), 3997; https://doi.org/10.3390/math12243997
Submission received: 18 November 2024 / Revised: 13 December 2024 / Accepted: 16 December 2024 / Published: 19 December 2024
(This article belongs to the Section E: Applied Mathematics)

Abstract

Estimators of population characteristics that exploit only information on the study character tend to be sensitive to outliers or extreme values, which may arise in sample data through the randomness of selection, making such estimators less efficient and less robust. One approach often adopted in survey sampling to address this issue is to incorporate information on a supplementary character into the estimators through calibration. This study therefore introduces two novel methods for estimating the population proportion under diagonal systematic sampling with the help of an auxiliary attribute. We developed two new calibration schemes and analyzed the theoretical properties (biases and mean squared errors) of the resulting estimators up to the second degree of approximation. The theoretical findings were supported by simulation studies on populations generated from the binomial distribution with various success probabilities. Biases, mean squared errors (MSEs) and percentage relative efficiencies (PREs) were computed, and the results revealed that the proposed estimators have the smallest biases, the smallest MSEs and the highest PREs, indicating their superiority over the existing conventional estimator. The simulation results showed that the proposed estimators under the proposed calibration schemes performed more efficiently, on average, than the traditional unbiased estimator of the population proportion under diagonal systematic sampling. The superiority of the proposed method over the conventional method in terms of bias, efficiency, efficiency gain, robustness and stability implies that the calibration approach developed in this study is effective.

1. Introduction

In sampling survey literature, it is well established that the efficiency of estimators for population parameters of the characters of interest can be improved by incorporating auxiliary information related to a correlated auxiliary attribute x [1,2,3]. This auxiliary information can be leveraged at the planning or design stage to secure better estimators compared to those that do not use auxiliary data. A common technique for utilizing known population parameters of the auxiliary attribute is through ratio, product and regression estimation methods, applied across diverse probability sampling designs like simple random sampling, cluster sampling, systematic sampling, stratified sampling and two-phase sampling. In the current research, the focus will be on using the knowledge of auxiliary attributes within the framework of systematic sampling.
Systematic sampling is a sampling design in which the first unit is randomly selected and the remaining units are automatically chosen according to a predetermined pattern. It is one of the most widely used sampling techniques because of its practical implementation. Compared with simple random sampling, systematic sampling is easier to execute, especially in the field [4]. Additionally, systematic sampling can give estimators with higher precision than simple random sampling when the sampling frame exhibits explicit or implicit stratification [4]. This is because systematic sampling effectively partitions the population units into n strata of equal size k and selects one unit from each stratum, which resembles stratified random sampling with one unit per stratum. Systematic sampling is also efficient for sampling natural populations, such as estimating timber volume in forest areas [5]. Many research organizations, including the Food and Agriculture Organization (FAO) of the United Nations, utilize systematic sampling in their surveys, such as the Survey of Global Forest Resources Assessment in 2010 [1].
Systematic sampling is a sampling procedure that provides an equal chance of selection for each unit in the population. In the systematic sampling approach, which has its origins in the work of [6], a sample of size n is selected from a finite population of size N. The process first involves randomly selecting a unit from the first k units, where k = N/n. After this initial random selection, every k-th unit of the population is included in the sample. However, if the population size cannot be expressed as the product of n and k, then the sample size cannot be fixed, and the sample mean, as an estimate of the population mean, becomes biased [7]; this led to the development of modified systematic sampling procedures such as diagonal systematic sampling (DSS). Additionally, the properties of estimators (such as the sample mean) derived from systematic samples depend on the order of the units in the sampling frame. Under certain arrangements, such as the presence of linear, parabolic or periodic trends in the population, the systematic sample may be less efficient than other sampling methods. If the sampling interval k is equal to the period of a periodic trend, the efficiency of the systematic sample is approximately that of a single random observation from the population. However, if the sampling interval k is an odd multiple of the half-period, the systematic sample mean will equal the true population mean [4].
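As a concrete illustration of the selection rule just described, the following Python sketch draws a linear systematic sample of size n with interval k = N/n (the function name and interface are illustrative, not from the paper):

```python
import random

def linear_systematic_sample(N, n, seed=None):
    """Select a linear systematic sample of size n from units 1..N.

    Assumes N = n * k for an integer sampling interval k: the first
    unit is drawn at random from the first k units, and every k-th
    unit thereafter is included.
    """
    k = N // n
    if N != n * k:
        raise ValueError("N must be a multiple of n for a fixed sample size")
    rng = random.Random(seed)
    r = rng.randint(1, k)                 # random start in the first k units
    return [r + i * k for i in range(n)]  # r, r+k, r+2k, ...
```

For example, `linear_systematic_sample(20, 5)` returns five unit labels spaced four apart, starting from a random unit among the first four.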
Diagonal systematic sampling was first unveiled by [8] and generalized by [9], followed by [10], who proposed a modified version of systematic sampling called diagonal circular systematic sampling. Ref. [11] examined the conditions under which the [10] sampling scheme is applicable. Additionally, [11] presented a generalized type of systematic sampling and demonstrated that DSS is a special case of the generalized scheme. Beyond these specific developments, systematic sampling has been studied extensively over the years, with various aspects of the technique being investigated. Notable works include those by [4,12,13,14,15,16,17,18,19,20,21,22,23].
The literature on survey sampling has explored various techniques to improve the efficiency of estimators for population proportions and other parameters. These include using unknown weights, power/exponential/logarithmic transformations, linear combinations of estimators and robust measures. One particularly promising technique is calibration estimation. Calibration aims to enhance the accuracy of estimators by modifying the original design weights using auxiliary information. This is done by minimizing a distance measure subject to a set of calibration constraints. Significant research on estimation using calibration has been conducted by several researchers, including [24,25,26,27,28,29,30] and others.
Recently, ref. [31] presented a new conventional estimator based on DSS, and its efficiency was compared with that of conventional estimators based on simple random sampling without replacement (SRSWOR) and linear systematic sampling. However, the proposed estimators do not account for the presence of outliers. Ref. [4] defines outliers as observations that are substantially different from the rest of the data. Such observations can have a disproportionate influence on sample statistics, such as the mean, variance and proportion, and can lead to biased or inefficient estimates. Ref. [4] provides strategies for dealing with outliers in sampling, including trimming and robust estimation (median-based estimators, M-estimators and the calibration approach). Therefore, the current study focuses on modifying the [31] estimators of the population proportion in the presence of outliers using the calibration approach.
This study applies the calibration approach to incorporate auxiliary attributes into estimators of the population proportion under diagonal systematic sampling, yielding new estimators that are robust against outliers or extreme values, stable and efficient, and capable of producing estimates of the population proportion with higher precision. The calibration approach is not limited to sampling theory; it also has applications in other fields, such as system reliability. For example, ref. [32] discussed the concepts of calibration and validation in the context of reliability engineering, providing insights into how these processes can enhance system reliability through proper statistical methods. Ref. [33] discussed methodologies for assessing the reliability of measurement systems, including calibration techniques that ensure measurement accuracy over time. Ref. [34] outlined best practices for developing calibration procedures that enhance the reliability of measurement systems, emphasizing the role of regular calibration in maintaining system performance.
The paper is organized as follows: Section 1, the Introduction, discusses the background, the problem to be solved, the significance of the study and the novelty of the proposed methods. Section 2 discusses the procedure for diagonal systematic sampling (DSS) and the approach for defining the conventional estimator of the population proportion. Section 3 presents the proposed calibrated estimators of the population proportion under DSS, along with members of the estimators in different situations. Section 4 compares the performance of the proposed estimators against the conventional estimator numerically through simulation studies. Finally, Section 5 provides the conclusion and offers some recommendations based on the findings.

2. Diagonal Systematic Sampling Procedure and Estimator of Population Proportion

Consider a finite population Ω with N elements U_1, U_2, U_3, …, U_N, from which a sample of size n is drawn without replacement by selecting one unit at random from the first k units and every k-th subsequent unit thereafter.
Let (ϕ_{yi}, ϕ_{xi}), i = 1, 2, …, N, be the pairs of study and auxiliary characters of the population units, each unit belonging to one of two disjoint classes H and H^c, where H is the class of units possessing the attribute of interest. That is, let

\[
\phi_{yi} =
\begin{cases}
1, & \text{if } U_i \in H,\\
0, & \text{if } U_i \in H^c,
\end{cases}
\qquad
\phi_{xi} =
\begin{cases}
1, & \text{if } U_i \in H_x,\\
0, & \text{if } U_i \in H_x^c,
\end{cases}
\]

where ϕ_y and ϕ_x are the study and auxiliary attributes, respectively, and H_x denotes the class of units possessing the auxiliary attribute.
Then the systematic sample proportions p_{a(sys)} = n^{-1} Σ_{i=1}^{n} ϕ_{yi} = a_{(sys)}/n and p_{b(sys)} = n^{-1} Σ_{i=1}^{n} ϕ_{xi} = b_{(sys)}/n are unbiased estimators of the population proportions P_A = N^{-1} Σ_{i=1}^{N} ϕ_{yi} = A/N and P_B = N^{-1} Σ_{i=1}^{N} ϕ_{xi} = B/N for Y and X, respectively.
The usual sample proportion estimator p_{a(sys)} of the population proportion P_A is given in (3) as

\[
p_{a(sys)} = \frac{a_{(sys)}}{n} \tag{3}
\]

The variance of p_{a(sys)}, denoted by Var(p_{a(sys)}), is given in (4) as

\[
Var\left(p_{a(sys)}\right) = \frac{1}{k}\sum_{i=1}^{k}\left(p_{a(sys)i} - P_A\right)^2 \tag{4}
\]

where p_{a(sys)i} is the sample proportion for the i-th of the k possible systematic samples.
Ref. [8] presented the procedure for DSS as shown below.
  • Arrange the N = nk units into n rows and k columns (with n ≤ k).
  • Draw a random number r, 1 ≤ r ≤ k.
  • Then, the sample units are selected in the pattern defined in (5):

\[
\Theta_{yr} =
\begin{cases}
\left\{\phi_{y,r},\, \phi_{y,(k+1)+r},\, \phi_{y,2(k+1)+r},\, \ldots,\, \phi_{y,(n-1)(k+1)+r}\right\}, & \text{if } r \le k-n+1,\\[1ex]
\left\{\phi_{y,r},\, \phi_{y,(k+1)+r},\, \ldots,\, \phi_{y,t(k+1)+r},\, \phi_{y,(t+1)k+1},\, \phi_{y,(t+2)k+2},\, \ldots,\, \phi_{y,(n-1)k+(n-t-1)}\right\}, & \text{if } r > k-n+1,
\end{cases} \tag{5}
\]

where 0 ≤ t ≤ n − 1.
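The two-case pattern in (5) can be sketched in Python. Arranging the N = nk units row by row, unit i(k+1)+r lies in row i+1 and column r+i, so when r > k − n + 1 the diagonal breaks after t = k − r further steps; that value of t is read off the pattern here and is an assumption of this sketch, not a formula stated in the text:

```python
def dss_sample_indices(N, n, r):
    """Sample unit labels (1-based) under diagonal systematic sampling,
    following the two-case selection pattern in (5).
    """
    k = N // n
    if N != n * k or not (1 <= r <= k):
        raise ValueError("need N = n*k and 1 <= r <= k")
    if r <= k - n + 1:
        # main diagonal: r, (k+1)+r, 2(k+1)+r, ...
        return [i * (k + 1) + r for i in range(n)]
    t = k - r                                  # steps before the diagonal breaks
    head = [i * (k + 1) + r for i in range(t + 1)]
    # broken diagonal: (t+i)k + i for i = 1, ..., n-t-1
    tail = [(t + i) * k + i for i in range(1, n - t)]
    return head + tail
```

For N = 20, n = 4 (so k = 5), `dss_sample_indices(20, 4, 2)` follows the main diagonal, while r = 3 > k − n + 1 = 2 triggers the broken-diagonal case; either way the sample takes exactly one unit from each row.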
Under DSS, the first-order inclusion probability, denoted by π_i^*, and the second-order inclusion probability, denoted by π_{ij}^*, are given by Equations (6) and (7), respectively:

\[
\pi_i^* = \frac{1}{k} \tag{6}
\]

\[
\pi_{ij}^* =
\begin{cases}
\dfrac{1}{k}, & \text{if the } i\text{th and } j\text{th units are from the same diagonal or broken diagonal},\\[1ex]
0, & \text{otherwise}.
\end{cases} \tag{7}
\]
Ref. [31] suggested a sample proportion based on DSS, denoted by p_{ad(sys)}, as given in (8):

\[
p_{ad(sys)} = \frac{a_{d(sys)}}{n} \tag{8}
\]

where

\[
a_{d(sys)} =
\begin{cases}
\sum_{i=0}^{n-1} \phi_{y,i(k+1)+r}, & \text{if } r \le k-n+1,\\[1ex]
\sum_{i=0}^{t} \phi_{y,i(k+1)+r} + \sum_{i=1}^{n-t-1} \phi_{y,(t+i)k+i}, & \text{if } r > k-n+1.
\end{cases}
\]
The variance of p_{ad(sys)}, denoted by Var(p_{ad(sys)}), is given by (9):

\[
Var\left(p_{ad(sys)}\right) = \frac{1}{k}\sum_{i=1}^{k}\left(p_{ad(sys)i} - P_A\right)^2 \tag{9}
\]

Following the Sen–Yates–Grundy approach suggested by Sen (1953) and Yates and Grundy (1953) [23], Var(p_{ad(sys)}) and its estimate, denoted by \widehat{Var}(p_{ad(sys)}), are given in (10) and (11), respectively:

\[
Var\left(p_{ad(sys)}\right) = \frac{1}{2N^2}\sum_{i=1}^{N}\sum_{\substack{j=1\\ j\ne i}}^{N}\left(\pi_i^*\pi_j^* - \pi_{ij}^*\right)\left(\frac{\phi_{yi}}{\pi_i^*} - \frac{\phi_{yj}}{\pi_j^*}\right)^2 \tag{10}
\]

\[
\widehat{Var}\left(p_{ad(sys)}\right) = \frac{1}{2N^2}\sum_{i=1}^{n}\sum_{\substack{j=1\\ j\ne i}}^{n}\frac{\pi_i^*\pi_j^* - \pi_{ij}^*}{\pi_{ij}^*}\left(\frac{\phi_{yi}}{\pi_i^*} - \frac{\phi_{yj}}{\pi_j^*}\right)^2 \tag{11}
\]
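A minimal sketch of the Sen–Yates–Grundy variance estimate in (11), assuming the first- and second-order inclusion probabilities of the sampled units are supplied by the caller (the function name and interface are illustrative):

```python
import numpy as np

def syg_variance_estimate(phi_y, pi, pi_joint, N):
    """Sen-Yates-Grundy variance estimate, form (11), for a proportion.

    phi_y    : 0/1 study indicators of the n sampled units
    pi       : first-order inclusion probabilities pi_i* (length n)
    pi_joint : n x n matrix of second-order inclusion probabilities pi_ij*
    N        : population size
    """
    phi_y = np.asarray(phi_y, dtype=float)
    pi = np.asarray(pi, dtype=float)
    pij = np.asarray(pi_joint, dtype=float)
    n = len(phi_y)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:  # sum over all ordered pairs with j != i
                total += ((pi[i] * pi[j] - pij[i, j]) / pij[i, j]) * \
                         (phi_y[i] / pi[i] - phi_y[j] / pi[j]) ** 2
    return total / (2.0 * N ** 2)
```

The estimator requires π_{ij}^* > 0 for every sampled pair, which holds under DSS since all sampled units lie on the same (possibly broken) diagonal.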

3. Proposed Calibration Estimators of Population Proportion Based on DSS

The estimator in (8) proposed by [31] can be written in the form given in (12):

\[
p_{ad(sys)} =
\begin{cases}
\sum_{i=0}^{n-1} \varpi_i \phi_{y,i(k+1)+r}, & \text{if } r \le k-n+1,\\[1ex]
\sum_{i=0}^{t} \varpi_i \phi_{y,i(k+1)+r} + \sum_{i=1}^{n-t-1} \varpi_i \phi_{y,(t+i)k+i}, & \text{if } r > k-n+1,
\end{cases} \tag{12}
\]

where ϖ_i = n^{-1}.
Having studied the estimator proposed by Azeem (2021) [31], the following two calibrated estimators of the population proportion under diagonal systematic sampling are proposed.

3.1. First Proposed Calibration Estimator t_{1d(sys)}

Motivated by [31] and the calibration approach in Audu et al. (2024) [24], the calibration estimator in (13) is proposed:

\[
t_{1d(sys)} =
\begin{cases}
\sum_{i=0}^{n-1} \varpi_{i(d)}^{*}\, \phi_{y,i(k+1)+r}, & \text{if } r \le k-n+1,\\[1ex]
\sum_{i=0}^{t} \varpi_{i(d)}^{*}\, \phi_{y,i(k+1)+r} + \sum_{i=1}^{n-t-1} \varpi_{i(d)}^{*}\, \phi_{y,(t+i)k+i}, & \text{if } r > k-n+1,
\end{cases} \tag{13}
\]

where the new calibrated weights, denoted by ϖ_{i(d)}^*, are chosen such that the chi-square distance measure given in Equation (14) is minimized subject to the calibration constraint:

\[
\min Z_1 = \sum_{i=0}^{n-1}\frac{\left(\varpi_{i(d)}^{*} - \varpi_i\right)^2}{\varpi_i \vartheta_i}
\quad \text{s.t.} \quad
\sum_{i=0}^{n-1} \varpi_{i(d)}^{*}\, \phi_{x,i(k+1)+r} = P_B, \quad \text{if } r \le k-n+1, \tag{14}
\]

or

\[
\text{s.t.} \quad \sum_{i=0}^{t} \varpi_{i(d)}^{*}\, \phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1} \varpi_{i(d)}^{*}\, \phi_{x,(t+i)k+i} = P_B, \quad \text{if } r > k-n+1.
\]
The first proposed calibration estimator can be obtained using the Lagrange function in (15):

\[
L_1 =
\begin{cases}
\displaystyle\sum_{i=0}^{n-1}\frac{\left(\varpi_{i(d)}^{*} - \varpi_i\right)^2}{\varpi_i \vartheta_i} - 2\lambda_1\left(\sum_{i=0}^{n-1}\varpi_{i(d)}^{*}\,\phi_{x,i(k+1)+r} - P_B\right), & r \le k-n+1,\\[2ex]
\displaystyle\sum_{i=0}^{n-1}\frac{\left(\varpi_{i(d)}^{*} - \varpi_i\right)^2}{\varpi_i \vartheta_i} - 2\lambda_1\left(\sum_{i=0}^{t}\varpi_{i(d)}^{*}\,\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_{i(d)}^{*}\,\phi_{x,(t+i)k+i} - P_B\right), & r > k-n+1.
\end{cases} \tag{15}
\]
By differentiating (15) with respect to ϖ_{i(d)}^*, equating the result to zero and solving for ϖ_{i(d)}^*, (16) is obtained:

\[
\varpi_{1i(d)}^{*} =
\begin{cases}
\varpi_i + \lambda_1 \varpi_i \vartheta_i \phi_{x,i(k+1)+r}, & r \le k-n+1,\\[1ex]
\varpi_i + \lambda_1 \varpi_i \vartheta_i \left(\phi_{x,i(k+1)+r} + \phi_{x,(t+i)k+i}\right), & r > k-n+1.
\end{cases} \tag{16}
\]
Similarly, for λ_1, (17) is obtained:

\[
\begin{cases}
\displaystyle\sum_{i=0}^{n-1}\varpi_{1i(d)}^{*}\,\phi_{x,i(k+1)+r} = P_B, & r \le k-n+1,\\[2ex]
\displaystyle\sum_{i=0}^{t}\varpi_{1i(d)}^{*}\,\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_{1i(d)}^{*}\,\phi_{x,(t+i)k+i} = P_B, & r > k-n+1.
\end{cases} \tag{17}
\]
By substituting the values obtained for ϖ_{1i(d)}^* in (17), (18) is obtained:

\[
\lambda_1 =
\begin{cases}
\dfrac{P_B - \sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}}{\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2}, & r \le k-n+1,\\[2.5ex]
\dfrac{P_B - \sum_{i=0}^{t}\varpi_i\phi_{x,i(k+1)+r} - \sum_{i=0}^{n-t-1}\varpi_i\phi_{x,(t+i)k+i}}{\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2 + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}^2}, & r > k-n+1.
\end{cases} \tag{18}
\]
By substituting the expression for λ_1 in (16), (19) is obtained:

\[
\varpi_{1i(d)}^{*} =
\begin{cases}
\varpi_i + \dfrac{P_B - \sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}}{\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2}\,\varpi_i\vartheta_i\phi_{x,i(k+1)+r}, & r \le k-n+1,\\[2.5ex]
\varpi_i + \dfrac{P_B - \sum_{i=0}^{t}\varpi_i\phi_{x,i(k+1)+r} - \sum_{i=0}^{n-t-1}\varpi_i\phi_{x,(t+i)k+i}}{\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2 + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}^2}\,\varpi_i\vartheta_i\left(\phi_{x,i(k+1)+r} + \phi_{x,(t+i)k+i}\right), & r > k-n+1.
\end{cases} \tag{19}
\]
By substituting (19) in (13), the first proposed calibrated estimator, denoted by t_{1d(sys)}, is obtained, as in (20):

\[
t_{1d(sys)} =
\begin{cases}
p_{ad(sys)} + \beta\left(P_B - p_{bd(sys)}\right), & \text{if } r \le k-n+1,\\[1ex]
p_{ad(sys)} + \beta^{*}\left(P_B - p_{bd(sys)}\right), & \text{if } r > k-n+1,
\end{cases} \tag{20}
\]

where p_{bd(sys)} is the DSS sample proportion of the auxiliary attribute and

\[
\beta = \frac{\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{y,i(k+1)+r}\phi_{x,i(k+1)+r}}{\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2},
\qquad
\beta^{*} = \frac{\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{y,i(k+1)+r}\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{y,(t+i)k+i}\phi_{x,(t+i)k+i}}{\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2 + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}^2}.
\]
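A minimal sketch of the first calibrated estimator in its regression form (20), assuming design weights ϖ_i = 1/n, ϑ_i supplied by the caller (all ones by default), and at least one sampled unit possessing the auxiliary attribute so the denominator of β is nonzero (function name and interface are illustrative):

```python
import numpy as np

def t1_calibrated(phi_y, phi_x, P_B, theta=None):
    """First calibrated estimator t_{1d(sys)} in the form (20):
    p_a + beta * (P_B - p_b), with beta computed from the sampled
    0/1 indicators of the study (phi_y) and auxiliary (phi_x) attributes.
    """
    phi_y = np.asarray(phi_y, dtype=float)
    phi_x = np.asarray(phi_x, dtype=float)
    n = len(phi_y)
    theta = np.ones(n) if theta is None else np.asarray(theta, dtype=float)
    w = np.full(n, 1.0 / n)      # design weights varpi_i = 1/n
    p_a = phi_y.mean()           # DSS sample proportion of y
    p_b = phi_x.mean()           # DSS sample proportion of x
    beta = np.sum(w * theta * phi_y * phi_x) / np.sum(w * theta * phi_x**2)
    return p_a + beta * (P_B - p_b)
```

Since the indicators are 0/1, φ_x² = φ_x, so with ϑ_i = 1 the slope β reduces to the share of sampled units with x = 1 that also have y = 1.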
The members of the proposed calibrated estimator t_{1d(sys)} are obtained as follows.

i. Setting ϑ_i = 1 in β and β^*, the first member, denoted by t_{1d(sys)}^{I}, is obtained as in (21):

\[
t_{1d(sys)}^{I} =
\begin{cases}
p_{ad(sys)} + \beta_1\left(P_B - p_{bd(sys)}\right), & \text{if } r \le k-n+1,\\[1ex]
p_{ad(sys)} + \beta_1^{*}\left(P_B - p_{bd(sys)}\right), & \text{if } r > k-n+1,
\end{cases} \tag{21}
\]

where

\[
\beta_1 = \frac{\sum_{i=0}^{n-1}\varpi_i\phi_{y,i(k+1)+r}\phi_{x,i(k+1)+r}}{\sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}^2},
\qquad
\beta_1^{*} = \frac{\sum_{i=0}^{t}\varpi_i\phi_{y,i(k+1)+r}\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\phi_{y,(t+i)k+i}\phi_{x,(t+i)k+i}}{\sum_{i=0}^{t}\varpi_i\phi_{x,i(k+1)+r}^2 + \sum_{i=0}^{n-t-1}\varpi_i\phi_{x,(t+i)k+i}^2}.
\]

ii. Setting ϑ_i = 1/ϕ_{x,i(k+1)+r} or ϑ_i = 1/ϕ_{x,(t+i)k+i} in β and β^*, the second member, denoted by t_{1d(sys)}^{II}, is obtained, as in (22):

\[
t_{1d(sys)}^{II} =
\begin{cases}
p_{ad(sys)} + \beta_2\left(P_B - p_{bd(sys)}\right), & \text{if } r \le k-n+1,\\[1ex]
p_{ad(sys)} + \beta_2^{*}\left(P_B - p_{bd(sys)}\right), & \text{if } r > k-n+1,
\end{cases} \tag{22}
\]

where

\[
\beta_2 = \frac{\sum_{i=0}^{n-1}\varpi_i\phi_{y,i(k+1)+r}}{\sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}},
\qquad
\beta_2^{*} = \frac{\sum_{i=0}^{t}\varpi_i\phi_{y,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\phi_{y,(t+i)k+i}}{\sum_{i=0}^{t}\varpi_i\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\phi_{x,(t+i)k+i}}.
\]

3.2. Second Proposed Calibration Estimator t_{2d(sys)}

Motivated by [31] and the calibration approach in Audu et al. (2024) [24], the calibration estimator in (23) is proposed:

\[
t_{2d(sys)} =
\begin{cases}
\sum_{i=0}^{n-1} \varpi_{i(d)}^{**}\, \phi_{y,i(k+1)+r}, & \text{if } r \le k-n+1,\\[1ex]
\sum_{i=0}^{t} \varpi_{i(d)}^{**}\, \phi_{y,i(k+1)+r} + \sum_{i=1}^{n-t-1} \varpi_{i(d)}^{**}\, \phi_{y,(t+i)k+i}, & \text{if } r > k-n+1,
\end{cases} \tag{23}
\]

where the new calibrated weights, denoted by ϖ_{i(d)}^{**}, are chosen such that the chi-square distance measure given in Equation (24) is minimized subject to the calibration constraints:

\[
\min Z_2 = \sum_{i=0}^{n-1}\frac{\left(\varpi_{i(d)}^{**} - \varpi_i\right)^2}{\varpi_i \vartheta_i}
\quad \text{s.t.} \quad
\sum_{i=0}^{n-1}\varpi_{i(d)}^{**}\,\phi_{x,i(k+1)+r} = P_B, \quad \sum_{i=0}^{n-1}\varpi_{i(d)}^{**} = \sum_{i=0}^{n-1}\varpi_i, \quad \text{if } r \le k-n+1, \tag{24}
\]

or

\[
\text{s.t.} \quad \sum_{i=0}^{t}\varpi_{i(d)}^{**}\,\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_{i(d)}^{**}\,\phi_{x,(t+i)k+i} = P_B, \quad \sum_{i=0}^{t}\varpi_{i(d)}^{**} + \sum_{i=0}^{n-t-1}\varpi_{i(d)}^{**} = \sum_{i=0}^{t}\varpi_i + \sum_{i=0}^{n-t-1}\varpi_i, \quad \text{if } r > k-n+1.
\]
The second proposed calibration estimator can be obtained using the Lagrange function L_2 in (25):

\[
L_2 =
\begin{cases}
\displaystyle\sum_{i=0}^{n-1}\frac{\left(\varpi_{i(d)}^{**} - \varpi_i\right)^2}{\varpi_i \vartheta_i} - 2\lambda_{11}\left(\sum_{i=0}^{n-1}\varpi_{i(d)}^{**}\,\phi_{x,i(k+1)+r} - P_B\right) - 2\lambda_{12}\left(\sum_{i=0}^{n-1}\varpi_{i(d)}^{**} - \sum_{i=0}^{n-1}\varpi_i\right), & r \le k-n+1,\\[2.5ex]
\displaystyle\sum_{i=0}^{n-1}\frac{\left(\varpi_{i(d)}^{**} - \varpi_i\right)^2}{\varpi_i \vartheta_i} - 2\lambda_{11}\left(\sum_{i=0}^{t}\varpi_{i(d)}^{**}\,\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_{i(d)}^{**}\,\phi_{x,(t+i)k+i} - P_B\right) - 2\lambda_{12}\left(\sum_{i=0}^{t}\varpi_{i(d)}^{**} + \sum_{i=0}^{n-t-1}\varpi_{i(d)}^{**} - \sum_{i=0}^{t}\varpi_i - \sum_{i=0}^{n-t-1}\varpi_i\right), & r > k-n+1.
\end{cases} \tag{25}
\]
By differentiating (25) partially with respect to ϖ_{i(d)}^{**}, equating to zero and solving for ϖ_{i(d)}^{**}, (26) is obtained. Similarly, for λ_{11} and λ_{12}, (27) and (28) are obtained, respectively:

\[
\varpi_{i(d)}^{**} =
\begin{cases}
\varpi_i + \lambda_{11}\varpi_i\vartheta_i\phi_{x,i(k+1)+r} + \lambda_{12}\varpi_i\vartheta_i, & r \le k-n+1,\\[1ex]
\varpi_i + \lambda_{11}\varpi_i\vartheta_i\left(\phi_{x,i(k+1)+r} + \phi_{x,(t+i)k+i}\right) + \lambda_{12}\varpi_i\vartheta_i, & r > k-n+1,
\end{cases} \tag{26}
\]

\[
\begin{cases}
\displaystyle\sum_{i=0}^{n-1}\varpi_{i(d)}^{**}\,\phi_{x,i(k+1)+r} = P_B, & r \le k-n+1,\\[2ex]
\displaystyle\sum_{i=0}^{t}\varpi_{i(d)}^{**}\,\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_{i(d)}^{**}\,\phi_{x,(t+i)k+i} = P_B, & r > k-n+1,
\end{cases} \tag{27}
\]

\[
\begin{cases}
\displaystyle\sum_{i=0}^{n-1}\varpi_{i(d)}^{**} = \sum_{i=0}^{n-1}\varpi_i, & r \le k-n+1,\\[2ex]
\displaystyle\sum_{i=0}^{t}\varpi_{i(d)}^{**} + \sum_{i=0}^{n-t-1}\varpi_{i(d)}^{**} = \sum_{i=0}^{t}\varpi_i + \sum_{i=0}^{n-t-1}\varpi_i, & r > k-n+1.
\end{cases} \tag{28}
\]
By substituting the values obtained for ϖ_{i(d)}^{**} in (27) and (28), the systems of equations in (29) and (30) are obtained:

\[
\begin{aligned}
\lambda_{11}\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2 + \lambda_{12}\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r} &= P_B - \sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r},\\
\lambda_{11}\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r} + \lambda_{12}\sum_{i=0}^{n-1}\varpi_i\vartheta_i &= 0.
\end{aligned} \tag{29}
\]

\[
\begin{aligned}
\lambda_{11}\left(\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2 + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}^2\right) + \lambda_{12}\left(\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}\right) &= P_B - \sum_{i=0}^{t}\varpi_i\phi_{x,i(k+1)+r} - \sum_{i=0}^{n-t-1}\varpi_i\phi_{x,(t+i)k+i},\\
\lambda_{11}\left(\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}\right) + \lambda_{12}\left(\sum_{i=0}^{t}\varpi_i\vartheta_i + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\right) &= 0.
\end{aligned} \tag{30}
\]
By solving the systems of equations in (29) and (30), (31) and (32) are obtained.

For r ≤ k − n + 1:

\[
\lambda_{11} = \frac{\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\right)\left(P_B - \sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}\right)}{\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\right)\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2\right) - \left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}\right)^2},
\qquad
\lambda_{12} = \frac{-\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}\right)\left(P_B - \sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}\right)}{\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\right)\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2\right) - \left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}\right)^2} \tag{31}
\]

For r > k − n + 1:

\[
\lambda_{11} = \frac{\left(\sum_{i=0}^{t}\varpi_i\vartheta_i + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\right)\left(P_B - \sum_{i=0}^{t}\varpi_i\phi_{x,i(k+1)+r} - \sum_{i=0}^{n-t-1}\varpi_i\phi_{x,(t+i)k+i}\right)}{\left(\sum_{i=0}^{t}\varpi_i\vartheta_i + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\right)\left(\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2 + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}^2\right) - \left(\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}\right)^2},
\]
\[
\lambda_{12} = \frac{-\left(\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}\right)\left(P_B - \sum_{i=0}^{t}\varpi_i\phi_{x,i(k+1)+r} - \sum_{i=0}^{n-t-1}\varpi_i\phi_{x,(t+i)k+i}\right)}{\left(\sum_{i=0}^{t}\varpi_i\vartheta_i + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\right)\left(\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2 + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}^2\right) - \left(\sum_{i=0}^{t}\varpi_i\vartheta_i\phi_{x,i(k+1)+r} + \sum_{i=0}^{n-t-1}\varpi_i\vartheta_i\phi_{x,(t+i)k+i}\right)^2} \tag{32}
\]
Also, by substituting (31) and (32) in (26), (33) and (34) are obtained:

\[
\varpi_{i(d)}^{**} = \varpi_i + \frac{\left(P_B - \sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}\right)\left[\varpi_i\vartheta_i\phi_{x,i(k+1)+r}\sum_{i=0}^{n-1}\varpi_i\vartheta_i - \varpi_i\vartheta_i\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}\right]}{\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\right)\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2\right) - \left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}\right)^2}, \quad r \le k-n+1, \tag{33}
\]

\[
\varpi_{i(d)}^{**} = \varpi_i + \frac{A\left[\varpi_i\vartheta_i\left(\phi_{x,i(k+1)+r} + \phi_{x,(t+i)k+i}\right)B - \varpi_i\vartheta_i C\right]}{BD - C^2}, \quad r > k-n+1, \tag{34}
\]

where A = P_B − Σ_{i=0}^{t} ϖ_i ϕ_{x,i(k+1)+r} − Σ_{i=0}^{n−t−1} ϖ_i ϕ_{x,(t+i)k+i}, B = Σ_{i=0}^{t} ϖ_i ϑ_i + Σ_{i=0}^{n−t−1} ϖ_i ϑ_i, C = Σ_{i=0}^{t} ϖ_i ϑ_i ϕ_{x,i(k+1)+r} + Σ_{i=0}^{n−t−1} ϖ_i ϑ_i ϕ_{x,(t+i)k+i}, and D = Σ_{i=0}^{t} ϖ_i ϑ_i ϕ_{x,i(k+1)+r}² + Σ_{i=0}^{n−t−1} ϖ_i ϑ_i ϕ_{x,(t+i)k+i}².
By substituting (33) and (34) in (23), the second proposed calibrated estimator, denoted by t_{2d(sys)}, is obtained, as in (35):

\[
t_{2d(sys)} =
\begin{cases}
p_{ad(sys)} + \beta^{**}\left(P_B - p_{bd(sys)}\right), & \text{if } r \le k-n+1,\\[1ex]
p_{ad(sys)} + \beta^{***}\left(P_B - p_{bd(sys)}\right), & \text{if } r > k-n+1,
\end{cases} \tag{35}
\]

where

\[
\beta^{**} = \frac{\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\right)\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{y,i(k+1)+r}\phi_{x,i(k+1)+r}\right) - \left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{y,i(k+1)+r}\right)\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}\right)}{\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\right)\left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}^2\right) - \left(\sum_{i=0}^{n-1}\varpi_i\vartheta_i\phi_{x,i(k+1)+r}\right)^2},
\qquad
\beta^{***} = \frac{\Delta_1\Delta_2 - \Delta_3\Delta_4}{\Delta_1\Delta_5 - \Delta_4^2},
\]

with Δ_1 = Σ_{i=0}^{t} ϖ_i ϑ_i + Σ_{i=0}^{n−t−1} ϖ_i ϑ_i, Δ_2 = Σ_{i=0}^{t} ϖ_i ϑ_i ϕ_{y,i(k+1)+r} ϕ_{x,i(k+1)+r} + Σ_{i=0}^{n−t−1} ϖ_i ϑ_i ϕ_{y,(t+i)k+i} ϕ_{x,(t+i)k+i}, Δ_3 = Σ_{i=0}^{t} ϖ_i ϑ_i ϕ_{y,i(k+1)+r} + Σ_{i=0}^{n−t−1} ϖ_i ϑ_i ϕ_{y,(t+i)k+i}, Δ_4 = Σ_{i=0}^{t} ϖ_i ϑ_i ϕ_{x,i(k+1)+r} + Σ_{i=0}^{n−t−1} ϖ_i ϑ_i ϕ_{x,(t+i)k+i}, and Δ_5 = Σ_{i=0}^{t} ϖ_i ϑ_i ϕ_{x,i(k+1)+r}² + Σ_{i=0}^{n−t−1} ϖ_i ϑ_i ϕ_{x,(t+i)k+i}².
The members of the proposed calibrated estimator t_{2d(sys)} are obtained as follows.

Setting ϑ_i = 1 in β^{**} and β^{***}, the first member, denoted by t_{2d(sys)}^{I}, is obtained as in (36):

\[
t_{2d(sys)}^{I} =
\begin{cases}
p_{ad(sys)} + \beta_1^{**}\left(P_B - p_{bd(sys)}\right), & \text{if } r \le k-n+1,\\[1ex]
p_{ad(sys)} + \beta_1^{***}\left(P_B - p_{bd(sys)}\right), & \text{if } r > k-n+1,
\end{cases} \tag{36}
\]

where

\[
\beta_1^{**} = \frac{\left(\sum_{i=0}^{n-1}\varpi_i\right)\left(\sum_{i=0}^{n-1}\varpi_i\phi_{y,i(k+1)+r}\phi_{x,i(k+1)+r}\right) - \left(\sum_{i=0}^{n-1}\varpi_i\phi_{y,i(k+1)+r}\right)\left(\sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}\right)}{\left(\sum_{i=0}^{n-1}\varpi_i\right)\left(\sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}^2\right) - \left(\sum_{i=0}^{n-1}\varpi_i\phi_{x,i(k+1)+r}\right)^2},
\qquad
\beta_1^{***} = \frac{\Delta_{11}\Delta_{12} - \Delta_{13}\Delta_{14}}{\Delta_{11}\Delta_{15} - \Delta_{14}^2},
\]

with Δ_{11} = Σ_{i=0}^{t} ϖ_i + Σ_{i=0}^{n−t−1} ϖ_i, Δ_{12} = Σ_{i=0}^{t} ϖ_i ϕ_{y,i(k+1)+r} ϕ_{x,i(k+1)+r} + Σ_{i=0}^{n−t−1} ϖ_i ϕ_{y,(t+i)k+i} ϕ_{x,(t+i)k+i}, Δ_{13} = Σ_{i=0}^{t} ϖ_i ϕ_{y,i(k+1)+r} + Σ_{i=0}^{n−t−1} ϖ_i ϕ_{y,(t+i)k+i}, Δ_{14} = Σ_{i=0}^{t} ϖ_i ϕ_{x,i(k+1)+r} + Σ_{i=0}^{n−t−1} ϖ_i ϕ_{x,(t+i)k+i}, and Δ_{15} = Σ_{i=0}^{t} ϖ_i ϕ_{x,i(k+1)+r}² + Σ_{i=0}^{n−t−1} ϖ_i ϕ_{x,(t+i)k+i}².
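A minimal sketch of the first member of the second calibrated estimator in its form (36), assuming design weights ϖ_i = 1/n and auxiliary indicators that are not all equal (so the regression-type slope has a nonzero denominator); the function name and interface are illustrative:

```python
import numpy as np

def t2_calibrated_member1(phi_y, phi_x, P_B):
    """First member t_{2d(sys)}^I in the form (36):
    p_a + beta * (P_B - p_b), where beta is the regression-type slope
    obtained under theta_i = 1 and varpi_i = 1/n.
    """
    phi_y = np.asarray(phi_y, dtype=float)
    phi_x = np.asarray(phi_x, dtype=float)
    n = len(phi_y)
    w = np.full(n, 1.0 / n)
    S_w = w.sum()                                  # equals 1 for w = 1/n
    num = S_w * np.sum(w * phi_y * phi_x) - np.sum(w * phi_y) * np.sum(w * phi_x)
    den = S_w * np.sum(w * phi_x**2) - np.sum(w * phi_x) ** 2
    beta = num / den
    return phi_y.mean() + beta * (P_B - phi_x.mean())
```

With 0/1 indicators and equal weights, this slope is the sample covariance of the two indicators divided by the sample variance of the auxiliary indicator, so the second scheme behaves like a regression-type adjustment while the first member of the first scheme behaves like a ratio-type adjustment.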

4. Empirical Study

In this section, simulation studies were conducted to evaluate the performance of the proposed estimators t_{1d(sys)}^{I}, t_{1d(sys)}^{II} and t_{2d(sys)}^{I} in comparison with the [31] estimator. Each simulated population consisted of 500 units generated from the binomial distribution, with success probabilities of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9. The distributions of the simulated data are presented in Figure 1. The data in populations I–IV are skewed to the left, those of populations VI–IX are skewed to the right, and population V is uniformly distributed.
A sample of size 100 was selected using the diagonal systematic sampling method, and this process was repeated 500 times. The biases, mean squared errors (MSEs) and percentage relative efficiencies (PREs) of the considered estimators were computed using the formulas provided in Equations (37)–(39):

\[
Bias(T) = \frac{1}{500}\sum_{j=1}^{500}\left(T_j - P_A\right) \tag{37}
\]

\[
MSE(T) = \frac{1}{500}\sum_{j=1}^{500}\left(T_j - P_A\right)^2 \tag{38}
\]

\[
PRE(T) = \frac{MSE(t_0)}{MSE(T)} \times 100 \tag{39}
\]

where T_j is the value of any estimator T in the j-th replicate and t_0 is the conventional sample proportion.
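The Monte Carlo measures (37)–(39) can be sketched as a small helper that takes the replicate estimates of an estimator and of the baseline sample proportion (function name and interface are illustrative):

```python
import numpy as np

def bias_mse_pre(estimates, P_A, baseline_estimates):
    """Empirical bias and MSE of an estimator over simulation replicates,
    and its PRE relative to a baseline estimator t_0, per (37)-(39).
    """
    est = np.asarray(estimates, dtype=float)
    base = np.asarray(baseline_estimates, dtype=float)
    bias = np.mean(est - P_A)             # (37)
    mse = np.mean((est - P_A) ** 2)       # (38)
    mse0 = np.mean((base - P_A) ** 2)
    pre = 100.0 * mse0 / mse              # (39)
    return bias, mse, pre
```

A PRE above 100 indicates that the estimator is more efficient than the baseline sample proportion over the replicates.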
The tables (Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9) present numerical results comparing the biases, mean squared errors (MSEs) and percentage relative efficiencies (PREs) of the existing estimator and the new estimators proposed in the study. The findings indicate that, when the probability of success is set to 0.5, all the proposed estimators exhibit lower MSEs and higher PREs than the existing estimator considered in the investigation. Furthermore, this pattern holds not only for p = 0.5 but also for the other success probabilities examined (0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8 and 0.9), which include extreme probabilities. In all the cases considered, the proposed estimators t_{1d(sys)}^{I}, t_{1d(sys)}^{II} and t_{2d(sys)}^{I} exhibited lower MSEs, with a significant percentage gain in efficiency over the estimator of [31]. The main findings are summarized below.

i. Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 compare the biases, MSEs and PREs of the existing estimator and the proposed estimators for the success probabilities 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9, respectively.
ii. The biases of the proposed estimators are lower than those of the estimator proposed by [31], with the exception of a few cases, indicating the robustness of the proposed estimators over the conventional one.
iii. The MSEs of the proposed estimators are lower than those of the estimator proposed by [31] in all cases, indicating the higher efficiency and precision of the proposed estimators.
iv. The PREs of the proposed estimators are higher than those of the estimator proposed by [31] in all cases, indicating efficiency gains by the proposed estimators over the conventional one.

5. Conclusions

This study introduced calibrated estimators of the population proportion of a characteristic of interest under the diagonal systematic sampling scheme. Two novel calibration schemes were proposed, and the corresponding estimators were derived. Empirical studies were conducted through simulation to evaluate the biases, mean squared errors (MSEs) and percentage relative efficiencies (PREs) of the existing and suggested estimators. The simulations considered success probabilities (p) ranging from 0.1 to 0.9 in increments of 0.1. The findings indicate that all the proposed estimators, t_{1d(sys)}^{I}, t_{1d(sys)}^{II} and t_{2d(sys)}^{I}, exhibit lower MSEs and higher PREs than the Azeem (2021) [31] estimator considered in this investigation. In conclusion, the proposed estimators under the calibration technique prove to be more efficient and precise than the existing estimator.
The proposed calibrated estimators, which utilize information on an auxiliary character, dominated the existing conventional estimator in terms of bias, efficiency, efficiency gain, robustness and stability.
The proposed calibrated estimators of the population proportion based on DSS using an auxiliary attribute can be applied in many areas. For example, in market research, they can be applied to determining consumer preferences using demographic data (e.g., age, income) as auxiliary attributes. In public health, they can be used to assess vaccination rates or health behaviors by incorporating socioeconomic factors or geographic data. In the social sciences, they can be utilized to analyze public opinion on policies while using demographic characteristics to refine estimates. In environmental studies, they can be used to estimate the proportion of polluted sites using historical industrial activity data. In census data collection, they can be used to evaluate household access to services such as the internet using urban–rural classifications. In education, they can be used to estimate student performance or enrollment rates based on socioeconomic status and prior educational outcomes. In transportation, they can be used to assess traffic patterns or public transport usage using data on population density or urban planning.

Author Contributions

Conceptualization, A.A. and J.A.; Methodology, A.A., R.V.K.S., M.A. and J.A.; Software, J.A. and A.A.; Validation, R.V.K.S., M.A. and A.A.; Formal analysis, A.A., M.A., R.V.K.S. and J.A.; Resources, A.A., M.A., R.V.K.S. and J.A.; Writing—Original Draft, A.A. and J.A.; Writing—Review and Editing, R.V.K.S. and M.A.; Supervision, M.A.; Funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by Sefako Makgatho Health Sciences University, Pretoria, South Africa.

Data Availability Statement

No new data were created in this study; all results were obtained from simulated data.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Global Forest Resources Assessment 2010: Main Report; Food and Agriculture Organization of the United Nations: Rome, Italy, 2010; p. 163.
2. Rao, J.N.K. Small Area Estimation; Wiley: Hoboken, NJ, USA, 2003.
3. Särndal, C.-E.; Swensson, B.; Wretman, J. Model Assisted Survey Sampling; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003.
4. Cochran, W.G. Sampling Techniques; John Wiley & Sons: New York, NY, USA, 1977.
5. Zinger, A. Systematic sampling in forestry. Biometrics 1964, 20, 553–565.
6. Madow, W.G.; Madow, L.H. On the theory of systematic sampling, I. Ann. Math. Stat. 1944, 25, 1–24.
7. Kish, L. Survey Sampling; John Wiley & Sons: New York, NY, USA, 1965.
8. Subramani, J. Diagonal systematic sampling scheme for finite populations. J. Indian Soc. Agric. Stat. 2000, 53, 187–195.
9. Subramani, J. Further results on diagonal systematic sampling for finite populations. J. Indian Soc. Agric. Stat. 2009, 63, 277–282.
10. Sampath, S.; Varalakshmi, V. Diagonal circular systematic sampling. Model Assist. Stat. Appl. 2008, 3, 345–352.
11. Khan, Z.; Shabbir, J.; Gupta, S. Generalized systematic sampling. Commun. Stat.—Simul. Comput. 2015, 44, 2240–2250.
12. Bellhouse, D.R.; Rao, J.N.K. Systematic sampling in the presence of linear trends. Biometrika 1975, 62, 694–697.
13. Fountain, R.L.; Pathak, P.L. Systematic and non-random sampling in the presence of linear trends. Commun. Stat.—Theory Methods 1989, 18, 2511–2526.
14. Khan, Z.; Shabbir, J.; Gupta, S. A new sampling design for systematic sampling. Commun. Stat.—Theory Methods 2013, 42, 3359–3370.
15. Madow, W.G. On the theory of systematic sampling, III: Comparison of centered and random start systematic sampling. Ann. Math. Stat. 1953, 24, 101–106.
16. Naidoo, L.R.; North, D.; Zewotir, T.; Arnab, R. Balanced modified systematic sampling in the presence of linear trend. S. Afr. Stat. J. 2015, 49, 187–203.
17. Sen, A.R. On the estimate of variance in sampling with varying probabilities. J. Indian Soc. Agric. Stat. 1953, 5, 119–127.
18. Subramani, J. A modification on linear systematic sampling for odd sample size. Bonfring Int. J. Data Min. 2012, 2, 32–36.
19. Subramani, J. A modification on linear systematic sampling. Model Assist. Stat. Appl. 2013, 8, 215–227.
20. Subramani, J.; Gupta, S.N. Generalized modified linear systematic sampling scheme for finite populations. Hacet. J. Math. Stat. 2014, 43, 529–542.
21. Subramani, J. On circular systematic sampling in the presence of linear trend. Biom. Biostat. Int. J. 2018, 7, 286–292.
22. Yates, F. Systematic sampling. Philos. Trans. R. Soc. 1948, 241, 345–377.
23. Yates, F.; Grundy, P.M. Selection without replacement from within strata with probability proportional to size. J. R. Stat. Soc. Ser. B 1953, 15, 153–161.
24. Audu, A.; Singh, R.V.K.; Ishaq, O.O.; Khare, S.; Singh, R.; Adewara, A.A. On the estimation of finite population variance for a mail survey design in the presence of non-response using new conventional and calibrated estimators. Commun. Stat.—Theory Methods 2024, 53, 848–864.
25. Audu, A.; Singh, R.; Khare, S. Developing calibration estimators for population mean using robust measures of dispersion under stratified random sampling. Stat. Transit. New Ser. 2021, 22, 125–142.
26. Audu, A.; Singh, R.; Khare, S.; Dauran, N.S. Class of estimators under new calibration schemes using non-conventional measures of dispersion. Philipp. Stat. 2021, 70, 23–42.
27. Koyuncu, N. New calibration estimators for the population variance under stratified random sampling. Commun. Stat.—Simul. Comput. 2020, 49, 320–336.
28. Shahzad, M.F.; Javed, I.; Hanif, M. Calibration estimation for the population mean under ranked set sampling. Ann. Data Sci. 2021, 8, 279–295.
29. Singh, H.P.; Mehta, N.; Connolly, C. Calibration estimators for estimating the finite population mean in the presence of non-response. J. Indian Soc. Agric. Stat. 2018, 72, 63–72.
30. Tracy, D.S.; Singh, S.; Arnab, R. Note on calibration in stratified and double sampling. Surv. Methodol. 2003, 29, 99–104.
31. Azeem, M. On estimation of population proportion in diagonal systematic sampling. Life Cycle Reliab. Saf. Eng. 2021, 10, 249–254.
32. Helton, J.C.; Johnson, J.D. Calibration, validation, and sensitivity analysis: What's what. Reliab. Eng. Syst. Saf. 2005, 91, 1345–1355.
33. Smith, J.; Doe, A. Reliability and calibration of measurement systems. J. Meas. Sci. 2023, 45, 123–135.
34. Johnson, R.; Lee, T. Calibration procedures for measurement systems. Int. J. Qual. Eng. Technol. 2023, 12, 45–60.
Figure 1. Distributions of the frequency of attributes in the simulated data.
Table 1. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.1.

Estimators            BIAS         MSE         PRE
p_ad(sys)            −0.001740    0.001440    100
Proposed estimators
t_1d1(sys)           −0.006332    0.000991    145.3643
t_1d2(sys)           −0.003593    0.001337    110.7789
t_2d1(sys)           −0.006001    0.001353    106.4489
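As a quick check of the tabulated values, each PRE entry is 100 times the MSE of the conventional estimator divided by the MSE of the proposed estimator. A sketch of this computation using the displayed Table 1 MSEs for t_1d1(sys) follows; the small discrepancy with the tabulated 145.3643 arises because the tables were computed from unrounded MSEs.

```python
# PRE = 100 * MSE(conventional) / MSE(proposed)
mse_conventional = 0.001440  # MSE of p_ad(sys) at p = 0.1 (Table 1)
mse_t1d1 = 0.000991          # MSE of t_1d1(sys) at p = 0.1 (Table 1)

pre_t1d1 = 100 * mse_conventional / mse_t1d1
print(round(pre_t1d1, 2))  # ≈ 145.31, agreeing with the tabulated 145.3643 up to rounding
```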
Table 2. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.2.

Estimators            BIAS         MSE         PRE
p_ad(sys)             0.004920    0.005470    100
Proposed estimators
t_1d1(sys)            0.001868    0.004257    128.4968
t_1d2(sys)            0.008337    0.004115    132.9421
t_2d1(sys)            0.002734    0.005129    106.6414
Table 3. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.3.

Estimators            BIAS         MSE         PRE
p_ad(sys)             0.001450    0.001595    100
Proposed estimators
t_1d1(sys)            0.000911    0.001273    125.255
t_1d2(sys)            0.002493    0.001282    124.3626
t_2d1(sys)            0.001191    0.001592    100.1595
Table 4. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.4.

Estimators            BIAS         MSE         PRE
p_ad(sys)            −0.001710    0.002247    100
Proposed estimators
t_1d1(sys)           −0.002604    0.001575    142.7085
t_1d2(sys)           −0.000663    0.001932    116.2662
t_2d1(sys)           −0.002239    0.002115    106.2450
Table 5. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.5.

Estimators            BIAS         MSE         PRE
p_ad(sys)            −0.000610    0.002123    100
Proposed estimators
t_1d1(sys)            0.000255    0.001573    134.9455
t_1d2(sys)            0.002309    0.001870    113.5177
t_2d1(sys)            0.000717    0.002026    104.7672
Table 6. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.6.

Estimators            BIAS         MSE         PRE
p_ad(sys)            −0.000160    0.001431    100
Proposed estimators
t_1d1(sys)           −0.000146    0.000725    197.4806
t_1d2(sys)            0.001261    0.001070    133.6865
t_2d1(sys)            0.001235    0.001333    107.3797
Table 7. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.7.

Estimators            BIAS         MSE         PRE
p_ad(sys)             0.001290    0.001718    100
Proposed estimators
t_1d1(sys)            0.000719    0.001149    149.4954
t_1d2(sys)            0.001718    0.001134    151.4669
t_2d1(sys)            0.001279    0.001679    102.2571
Table 8. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.8.

Estimators            BIAS         MSE         PRE
p_ad(sys)             0.000650    0.0028117   100
Proposed estimators
t_1d1(sys)            0.0004507   0.002376    118.3419
t_1d2(sys)            0.000401    0.002288    122.8801
t_2d1(sys)            0.000934    0.002758    101.9441
Table 9. Biases, MSEs and PREs of estimators p_ad(sys), t_1d1(sys), t_1d2(sys) and t_2d1(sys) when p = 0.9.

Estimators            BIAS         MSE         PRE
p_ad(sys)             0.001120    0.001606    100
Proposed estimators
t_1d1(sys)            0.000936    0.001373    116.9893
t_1d2(sys)            0.000914    0.001355    118.5831
t_2d1(sys)            0.001367    0.001565    102.6482
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
