Article

Robust Location Retrieval Strategy Under Systematic Sampling

by Huda M. Alshanbari 1 and Malik Muhammad Anas 2,*
1 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Economics and Statistics, University of Salerno, 84084 Fisciano, Salerno, Italy
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(3), 441; https://doi.org/10.3390/math14030441
Submission received: 18 December 2025 / Revised: 22 January 2026 / Accepted: 24 January 2026 / Published: 27 January 2026

Abstract

Systematic sampling is an ordered observation scheme that is popular in data collection because it provides uniform coverage and is operationally simple. This scheme is, however, susceptible to extreme observations, which may severely compromise the accuracy of traditional location estimators. To overcome this weakness, this research proposes a robust location retrieval (estimation) method that regulates the impact of unusual observations in the ordered selection framework. In the suggested strategy, a set of twenty influence-adjusted estimators is built with a variety of re-descending weighting functions, which is then extended with a family of five generalized ones. Derivations of the mean squared error (MSE) and percentage relative efficiency (PRE) are used to explore the theoretical properties of the proposed estimators. Extensive empirical analyses, both with real data (e.g., mtcars and Trees) and on a wide variety of synthetic problems containing embedded outliers, demonstrate significant improvements in stability and efficiency over existing methods. The findings suggest that location retrieval based on influence-controlled processes is much more robust under an ordered observation scheme and can serve as an efficient and scalable tool in modern data-intensive and computationally demanding environments.

1. Introduction

To analyze a subset of a given population, a sampling method is used, which can be classified as probabilistic or non-probabilistic. In this context, we are concerned with systematic sampling, which selects elements from the population at regular intervals. For a positive integer $m$, the process starts with an initial element $g$ chosen at random from the first $m$ units and then selects the sequence $g,\ g+m,\ g+2m,\ \ldots,\ g+(t-1)m$, where $g \le m$.
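For concreteness, the selection scheme just described can be sketched in a few lines of Python (an illustrative sketch; the function name and the 0-based indexing are ours, not the paper's):

```python
import numpy as np

def systematic_sample(population, m, rng=None):
    """Linear systematic sampling: choose a random start g among the
    first m units, then take every m-th unit after it (g, g+m, g+2m, ...)."""
    rng = np.random.default_rng(rng)
    population = np.asarray(population)
    g = rng.integers(0, m)                  # random start (0-based)
    idx = np.arange(g, len(population), m)  # g, g+m, g+2m, ...
    return idx, population[idx]

# Example: N = 20 units with interval m = 5 yields a sample of t = 4 units.
idx, sample = systematic_sample(np.arange(1, 21), m=5, rng=0)
```

Whatever the random start, the selected indices are always exactly one interval apart, which is what gives the design its even coverage of the frame.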
Systematic sampling is especially helpful when the population is evenly spread or logically ordered. For example, in agricultural studies, a researcher can systematically sample the plants in a field to assess crop health. In industrial quality control, every k-th product from an assembly line is inspected to monitor defects. This method is favored because it is simple, easy to implement, and offers evenly distributed samples across the entire population. Cost-effectiveness is one of its main advantages, as it takes less time and requires less effort than simple random sampling. Systematic sampling is an appropriate method for studying natural populations, such as estimating timber volume in forests (Zinger [1]). This technique is widely used by research institutions, such as the United Nations Food and Agriculture Organization (FAO) in its 2010 Global Forest Resources Assessment survey.
The uniform systematic sampling approach has received considerable attention, and numerous modifications and improvements have been made to it. Further details on the theoretical foundations and practical application of systematic sampling are provided in existing literature on sampling theory. The primary aim of this approach is to estimate various population characteristics or trends. This study focuses on the estimation of the mean for a particular variable. Moreover, the accuracy of these estimates is improved by incorporating an additional auxiliary variable, particularly when the sample size for the principal variable is limited.
Many researchers have estimated the mean using data from one or more auxiliary variables under systematic sampling. Initially, Swain [2] provided a ratio-type mean estimator in this context. Kushwaha and Singh [3] proposed nearly unbiased product and ratio estimators aimed at estimating the true mean of a research variable. Their methods involve the jackknife technique first proposed by Ref. [4] or a refined form of the jackknife proposed by Singh and Singh [5]. Kushwaha and Kushwaha [6] introduced a group of estimators for the population average in systematic sampling design by utilizing supporting information. The study extended a previously established estimator, proposed by Swain [2], by examining conditions that enhance its efficiency. Additionally, it considered difference estimators as particular cases and identified scenarios where the proposed approach outperforms traditional estimation methods. Singh et al. [7,8] developed a generalized group of estimators for the true population mean in systematic sampling with a linear transformation of a supplementary variable. They derived bias and MSE expressions and an asymptotically optimal estimator. Shahzad [9] extended this concept by including auxiliary attributes. Ali et al. [10] introduced two classes of estimators for cases where the data in the systematic random sampling setting contain outliers in one or both of the key variables; the first class consists of ratio-type robust regression-based estimators, and the second of regression-based estimators. Ref. [11] presented a regression-ratio technique for mean estimation that uses high-breakdown regression to mitigate outlier-induced effects. These estimators were further extended using quantile regression coefficients, which minimize absolute rather than squared deviations. Their efficiency was then evaluated in a forestry case study on timber volume, where they exceeded the standard procedure by more than 30 percent. In Audu et al.'s [12] recent work, calibrated estimators were developed to estimate population proportions from diagonal systematic samples.
Outliers have serious effects on the accuracy of population parameter estimates, and efforts have frequently been made to minimize their impact with weighted methods. Such extreme values may sometimes be ignored, but their presence tends to result in significant overestimation or underestimation, impeding the reliability of traditional estimators and, in particular, inflating the MSE. This has been investigated previously in [13], where an array of estimation techniques was employed, including ratio, product, and regression-based methods, accompanied by both regular and robust estimators. On the other hand, systematic sampling methods using re-descending M-estimators to deal with outlying observations have not yet been explored. This paper addresses this gap by proposing M-type mean estimators with re-descending coefficients that improve estimation accuracy in the presence of anomalies under systematic sampling.
The remainder of the article explores the development and application of re-descending M-estimators for robust mean estimation in systematic sampling design. Section 2 provides an overview of re-descending M-estimators, one of the potential solutions under outlier-contaminated conditions; the adapted estimators are also provided in the same section. In Section 3, the theoretical framework behind the proposed estimators is explained under a systematic sampling design, along with their statistical properties. Section 4 is divided into real-world applications and simulation studies. In the real-world application, the study evaluates the effectiveness of the developed estimators on well-known datasets, including the mtcars and Trees datasets, where comparisons are made against adapted estimators based on MSE and PRE. The results are also compared using a simulation study over varying statistical scenarios, and Section 4.3 discusses the findings. Section 5 outlines the limitations of the study. Finally, Section 6 summarizes the important findings, emphasizes the advantage of re-descending M-estimators in systematic sampling, and points out future research directions.

2. M-Estimators Based Average Estimation

OLS is a fundamental tool in regression analysis across many applications. However, outliers are common in real data, and the method's assumption of a normal error distribution is often unrealistic. Therefore, OLS is not recommended when anomalies are present in the data (Dunder et al. [14]), even for large datasets. To mitigate this drawback, re-descending regression methods have been developed as an extension of OLS to strengthen its robustness. Even in large samples with a skewed error distribution, OLS remains vulnerable to distortion by outliers; robust M-estimators are more effective.
Huber built on research studying the effect of outliers on linear regression models with the M-estimator, which assigns weights close to one to central observations and weights near zero to extreme ones, thereby reducing their influence. Although not always stated explicitly, the M-estimator provides a reasonable method of inference within the likelihood-maximization framework when the normality assumption on the model errors may not hold. Instead of the squared error terms used in OLS, M-estimators minimize the following symmetric loss function:
$$\min_{\hat{\Theta}} \sum_{i=1}^{N} \psi(\epsilon_i).$$
These estimators are constructed by adjusting the loss function $\psi(\cdot)$ to ensure a re-descending property (see Refs. [15,16,17,18,19,20,21]). They demonstrate a strong ability to mitigate the influence of outliers, thus improving the reliability of statistical inference.
The arithmetic average is a key measure employed in statistical analysis. When a strong positive linear association exists between the auxiliary and primary study variables in survey sampling, ratio estimation provides an effective approach for estimating the population mean. Ratio estimation was introduced by Cochran and has since been a major methodological breakthrough, particularly in agricultural studies. For a comprehensive discussion of ratio-type estimators, readers may refer to Refs. [22,23,24].
Kadilar and Cingi [25] were the first to use ratio-type estimation incorporating OLS regression coefficients. However, traditional OLS-based estimators are highly sensitive to outliers and prone to producing poor estimates in their presence. To address this issue, Kadilar et al. [26], Zaman et al. [27], and Koc and Koc [28] introduced robust regression techniques that enhance the stability and accuracy of ratio estimators in simple random sampling. Building on this work, we propose an adapted class of mean estimators based on re-descending coefficients under systematic sampling.

2.1. $T_{tt\Delta_1}$–$T_{tt\Delta_4}$ Estimators with [19] Coefficients

To handle datasets containing outliers, Ref. [19] developed an estimator that penalizes large residuals through a weighting function. The estimator is tuned by the parameters $c$ and $a$, which determine its robustness, efficiency, and flexibility in robust regression modeling. The corresponding objective function (OF), influence function (IF), and weight function (WF) are expressed as follows:
OF:
$$\theta(\lambda_l) = \frac{k^2}{2}\left[1 - \left(1 + \left(\frac{\lambda_l}{c}\right)^2\right)^{-a}\right], \quad |\lambda_l| \ge 0$$
IF:
$$\psi(\lambda_l) = \begin{cases} \lambda_l\left(1 + \left(\dfrac{\lambda_l}{c}\right)^2\right)^{-(a+1)}, & |\lambda_l| < c, \\ 0, & |\lambda_l| \ge c \end{cases}$$
WF:
$$v(\lambda_l) = \begin{cases} \left(1 + \left(\dfrac{\lambda_l}{c}\right)^2\right)^{-(a+1)}, & |\lambda_l| < c, \\ 0, & |\lambda_l| \ge c \end{cases}$$
The adapted family of mean estimators under systematic sampling utilizing the re-descending coefficient $\hat{\Gamma}_{(t_1)}$ proposed by Ref. [19] is defined as follows:
$$T_{tt\Delta_j} = \left[\bar{y}_s + \hat{\Gamma}_{(t_1)}\left(\bar{X}_{d(s)}\right)\right]\frac{D_j \bar{X} + H_j}{D_j \bar{x}_s + H_j}, \quad j = 1, \ldots, 4.$$
Note that $\bar{X}_{d(s)} = \bar{X} - \bar{x}_s$. The MSE of $T_{tt\Delta_j}$ is derived using a first-order Taylor series expansion:
$$\mathrm{MSE}(T_{tt\Delta_j}) = \frac{1-f}{n}\left[\vartheta_y S_y^2 + \left(K_{tt\Delta_j} + \Gamma_{(t_1)}\right)^2 \vartheta_x S_x^2 - 2\left(K_{tt\Delta_j} + \Gamma_{(t_1)}\right)\vartheta_x t^{*} S_{xy}\right],$$
where:
$$\vartheta_x = 1 + (n-1)r_x, \qquad \vartheta_y = 1 + (n-1)r_y, \qquad t^{*} = \sqrt{\vartheta_y / \vartheta_x},$$
with $f$ the sampling fraction and $r_x$, $r_y$ the intra-class correlations of $x$ and $y$.

2.2. $T_{tt\Delta_5}$–$T_{tt\Delta_8}$ Estimators with [29] Coefficients

In the work of Khan et al. [29], a novel estimator was introduced using a hyperbolic tangent function. The estimator is particularly effective in handling asymmetric heavy-tailed distributions. Khan et al. [29] used the reciprocal of the hyperbolic cosine as a weight function in order to minimize sensitivity to large residuals. The OF, IF, and WF are formulated as follows:
OF:
$$\rho(e_l) = 2c \tan^{-1}\left(\tanh\left(\frac{e_l^2}{4c}\right)\right), \quad |e_l| \ge 0.$$
IF:
$$\psi(e_l) = \frac{e_l}{\cosh\left(e_l^2/(2c)\right)}, \quad |e_l| \ge 0.$$
WF:
$$w(e_l) = \frac{1}{\cosh\left(e_l^2/(2c)\right)}, \quad |e_l| \ge 0.$$
The modified class of mean estimators under systematic sampling utilizing the re-descending M-estimator coefficient $\hat{\Gamma}_{(t_2)}$ from Khan et al. [29] is given by
$$T_{tt\Delta_j} = \left[\bar{y}_s + \hat{\Gamma}_{(t_2)}\left(\bar{X}_{d(s)}\right)\right]\frac{D_j \bar{X} + H_j}{D_j \bar{x}_s + H_j}, \quad j = 5, \ldots, 8.$$
The MSE of the estimators is given by
$$\mathrm{MSE}(T_{tt\Delta_j}) = \frac{1-f}{n}\left[\vartheta_y S_y^2 + \left(K_{tt\Delta_j} + \Gamma_{(t_2)}\right)^2 \vartheta_x S_x^2 - 2\left(K_{tt\Delta_j} + \Gamma_{(t_2)}\right)\vartheta_x t^{*} S_{xy}\right].$$

2.3. $T_{tt\Delta_9}$–$T_{tt\Delta_{12}}$ Estimators with [20] Coefficients

Ref. [20] introduced a re-descending-type estimator that employs polynomial-driven weight functions. When residuals exceed a defined threshold h, the weight function smoothly reduces to zero. This method ensures a high breakdown point, making it resistant to leverage points. OF, IF, and WF are defined as follows:
OF:
$$\rho(e_l) = \begin{cases} \dfrac{e_l^2}{2} - \dfrac{e_l^6}{3h^4} + \dfrac{e_l^{10}}{10h^8}, & |e_l| \le h, \\[4pt] \dfrac{4h^2}{15}, & |e_l| > h. \end{cases}$$
IF:
$$\psi(e_l) = \begin{cases} e_l\left[1 - \left(\dfrac{e_l}{h}\right)^2\right]^2\left[1 + \left(\dfrac{e_l}{h}\right)^2\right]^2, & |e_l| < h, \\ 0, & |e_l| \ge h. \end{cases}$$
WF:
$$w(e_l) = \begin{cases} \left[1 - \left(\dfrac{e_l}{h}\right)^2\right]^2\left[1 + \left(\dfrac{e_l}{h}\right)^2\right]^2, & |e_l| < h, \\ 0, & |e_l| \ge h. \end{cases}$$
The modified class of mean estimators using the re-descending M-estimator coefficient $\hat{\Gamma}_{(t_3)}$ proposed by Anekwe and Onyeagu [20] is given by
$$T_{tt\Delta_j} = \left[\bar{y}_s + \hat{\Gamma}_{(t_3)}\left(\bar{X}_{d(s)}\right)\right]\frac{D_j \bar{X} + H_j}{D_j \bar{x}_s + H_j}, \quad j = 9, \ldots, 12.$$
The MSE of these estimators is given by
$$\mathrm{MSE}(T_{tt\Delta_j}) = \frac{1-f}{n}\left[\vartheta_y S_y^2 + \left(K_{tt\Delta_j} + \Gamma_{(t_3)}\right)^2 \vartheta_x S_x^2 - 2\left(K_{tt\Delta_j} + \Gamma_{(t_3)}\right)\vartheta_x t^{*} S_{xy}\right].$$
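As a concrete illustration of how such a re-descending weight behaves, the polynomial-type weight of this subsection can be coded as follows (a sketch based on our reconstruction $w(e) = [1-(e/h)^2]^2[1+(e/h)^2]^2$ inside the threshold $h$; the function name is ours):

```python
import numpy as np

def w_polynomial(e, h):
    """Polynomial-driven re-descending weight (reconstructed form):
    w(e) = [1 - (e/h)^2]^2 [1 + (e/h)^2]^2 for |e| < h, and 0 otherwise,
    so residuals at or beyond the threshold h are rejected outright."""
    e = np.asarray(e, dtype=float)
    u = (e / h) ** 2
    return np.where(np.abs(e) < h, (1 - u) ** 2 * (1 + u) ** 2, 0.0)

# A residual of zero keeps full weight; one beyond h gets zero weight.
w = w_polynomial(np.array([0.0, 0.5, 2.0]), h=1.5)
```

Central residuals keep weight close to one, which is the behavior the text describes: ordinary observations are essentially untouched while leverage-type outliers are removed from the fit.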

2.4. $T_{tt\Delta_{13}}$–$T_{tt\Delta_{16}}$ Estimators with [30] Coefficients

Ref. [30] defined a parameterized robust re-descending estimator. Its weight function is deliberately designed so that the robustness and efficiency parameters, $m$ and $b$, can be varied to achieve a desired balance. The OF, IF, and WF are defined as follows:
OF:
$$\rho(e_l) = \frac{m^2}{2b}\left[1 - \left(1 + \left(\frac{e_l}{m}\right)^2\right)^{-b}\right], \quad |e_l| \ge 0$$
IF:
$$\psi(e_l) = e_l\left(1 + \left(\frac{e_l}{m}\right)^2\right)^{-(b+1)}, \quad |e_l| \ge 0$$
WF:
$$w(e_l) = \left(1 + \left(\frac{e_l}{m}\right)^2\right)^{-(b+1)}, \quad |e_l| \ge 0$$
The modified class of mean estimators utilizing the re-descending M-estimator coefficient $\hat{\Gamma}_{(t_4)}$ proposed by Raza et al. [30] is expressed as
$$T_{tt\Delta_j} = \left[\bar{y}_s + \hat{\Gamma}_{(t_4)}\left(\bar{X}_{d(s)}\right)\right]\frac{D_j \bar{X} + H_j}{D_j \bar{x}_s + H_j}, \quad j = 13, \ldots, 16.$$
The MSE of these estimators is formulated as
$$\mathrm{MSE}(T_{tt\Delta_j}) = \frac{1-f}{n}\left[\vartheta_y S_y^2 + \left(K_{tt\Delta_j} + \Gamma_{(t_4)}\right)^2 \vartheta_x S_x^2 - 2\left(K_{tt\Delta_j} + \Gamma_{(t_4)}\right)\vartheta_x t^{*} S_{xy}\right].$$

2.5. $T_{tt\Delta_{17}}$–$T_{tt\Delta_{20}}$ Estimators with [31] Re-Descending Coefficients

Raza et al. [31] suggested a higher-order polynomial re-descending estimator, which minimizes the effect of anomalies while maintaining accuracy in data regions away from the tails. The estimator's OF, IF, and WF are defined as follows:
OF:
$$\rho(e_l) = \begin{cases} \dfrac{e_l^2\left(e_l^8 - 10 b^4 e_l^4 + 45 b^8\right)}{810\, b^8}, & |e_l| \le b, \\[4pt] \dfrac{18 b^2}{405}, & |e_l| > b. \end{cases}$$
IF:
$$\psi(e_l) = \begin{cases} e_l\left(1 - \dfrac{e_l^4}{3b^4}\right)^2, & |e_l| \le b, \\ 0, & |e_l| > b. \end{cases}$$
WF:
$$w(e_l) = \begin{cases} \left(1 - \dfrac{e_l^4}{3b^4}\right)^2, & |e_l| \le b, \\ 0, & |e_l| > b. \end{cases}$$
The modified class of mean estimators using the re-descending M-estimator coefficient $\hat{\Gamma}_{(t_5)}$ proposed by Raza et al. [31] is expressed as
$$T_{tt\Delta_j} = \left[\bar{y}_s + \hat{\Gamma}_{(t_5)}\left(\bar{X}_{d(s)}\right)\right]\frac{D_j \bar{X} + H_j}{D_j \bar{x}_s + H_j}, \quad j = 17, \ldots, 20.$$
The MSE of these estimators is given by
$$\mathrm{MSE}(T_{tt\Delta_j}) = \frac{1-f}{n}\left[\vartheta_y S_y^2 + \left(K_{tt\Delta_j} + \Gamma_{(t_5)}\right)^2 \vartheta_x S_x^2 - 2\left(K_{tt\Delta_j} + \Gamma_{(t_5)}\right)\vartheta_x t^{*} S_{xy}\right].$$
In the estimators $T_{tt\Delta_1}$–$T_{tt\Delta_{20}}$,
$$K_{tt\Delta_j} = \frac{D_j \bar{Y}}{D_j \bar{X} + H_j}, \quad j = 1, \ldots, 20,$$
where $D_j$ and $H_j$ take the values
$$(D_j, H_j) = \begin{cases} (1, 0), & j = 1, 5, 9, 13, 17, \\ (1, H_c), & j = 2, 6, 10, 14, 18, \\ (1, H_b), & j = 3, 7, 11, 15, 19, \\ (H_c, H_b), & j = 4, 8, 12, 16, 20. \end{cases}$$
In Equation (13), H c represents the coefficient of variation, and H b denotes the kurtosis coefficient of X.
The tuning constants $a$, $c$, $h$, $b$, and $m$ control the aggressiveness of the respective re-descending influence functions and thus directly govern the robustness-efficiency trade-off. These constants are not treated as free design parameters in this research; instead, they are chosen following the original formulations and recommendations in the relevant robust regression references [19,20,29,30,31]. Smaller values of $a$, $c$, or $h$ cause more residuals to be down-weighted or rejected (greater robustness at some cost in efficiency), whereas larger values reject fewer observations (higher efficiency on clean data, but weaker protection against contamination). Likewise, $m$ and $b$ control the smoothness and tail decay of the weight function, determining whether extreme observations are gradually attenuated or discarded outright. In practice, these parameters are chosen from robust scale estimates of the residuals so that central values receive nearly equal weights while extreme values are heavily suppressed.
The importance of robust estimation methods increases in real-life situations where the data are noisy and contaminated [32,33]. As highlighted in the Introduction, a number of robust estimation methods have been used to approximate the mean in the presence and absence of anomalies, including Huber M-estimators [34], quantile regression [28], and jackknife-based methods [4]. While these methods have been applied in simple or stratified random sampling, to the best of our knowledge, no existing study has applied re-descending M-estimators to mean estimation under a systematic sampling design in outlier-contaminated environments. This is the novelty of our work: the creation of 20 adapted estimators specifically designed to improve robustness in the systematic sampling case.

3. Proposed Family: $T_{Q_1}$–$T_{Q_5}$

The data points that deviate significantly from the rest of the dataset are called outliers. Their presence can strongly affect the estimation of the mean as a measure of the center of the distribution. In most cases, traditional estimators of the population mean ($\bar{Y}$) are built on OLS regression coefficients. However, these regression coefficients are greatly influenced by anomalies, making the resulting estimates unreliable. To counteract this problem, M-estimators provide a robust alternative for data that are non-normal and contaminated by extreme values. Moreover, re-descending coefficients are less sensitive to outliers (Raza et al. [30,31]). OLS and re-descending regression, as different techniques for modeling the link between a response variable and a predictor, differ in their optimization criterion. We extend the adapted class $T_{tt}$ to a novel class of robust re-descending-coefficient-driven estimators. The developed family of estimators under the systematic random sampling design is
$$T_{Q_i} = \bar{y}_s + \hat{\Gamma}_{(t_i)}\left(\bar{X}_{d(s)}\right), \quad i = 1, 2, \ldots, 5,$$
where $\hat{\Gamma}_{(t_i)}$ represents the five re-descending coefficients ($\hat{\Gamma}_{(t_1)}, \hat{\Gamma}_{(t_2)}, \hat{\Gamma}_{(t_3)}, \hat{\Gamma}_{(t_4)}, \hat{\Gamma}_{(t_5)}$) used in this context. The notations used in $T_{Q_i}$ retain their standard interpretation as discussed in previous sections. The MSE corresponding to the developed estimator family is given by
$$\mathrm{MSE}(T_{Q_i}) = V(\bar{y}_s) - 2\Gamma_{(t_i)}\,\mathrm{cov}(\bar{x}_s, \bar{y}_s) + \Gamma_{(t_i)}^2 V(\bar{x}_s).$$
Inserting the values of V ( y ¯ s ) , V ( x ¯ s ) , and cov ( x ¯ s , y ¯ s ) from Ref. [11] into Equation (15) results in:
$$\mathrm{MSE}(T_{Q_i}) = \frac{1-f}{n}\left[\vartheta_y S_y^2 - 2\Gamma_{(t_i)}\vartheta_x t^{*} S_{xy} + \Gamma_{(t_i)}^2 \vartheta_x S_x^2\right].$$
It is essential to highlight that our estimation process incorporates five distinct re-descending coefficients. To enhance robustness, these coefficients are determined iteratively in order to dampen extreme values. The weight functions $w(e_l)$ applied by each estimator are specifically structured to reduce the effect of large residuals. Initially, the OLS technique provides the starting estimate of $\hat{\Gamma}_{(t_i)}$. Subsequently, an iteratively reweighted least squares algorithm is applied, where residual weights are updated in each iteration $t$ according to
$$\hat{\Gamma}^{(t+1)} = \frac{\sum_{i} w\left(e_i^{(t)}\right) X_i Y_i}{\sum_{i} w\left(e_i^{(t)}\right) X_i^2}.$$
The iteration is repeated until the convergence criterion $|\hat{\Gamma}^{(t+1)} - \hat{\Gamma}^{(t)}| < \epsilon$ is satisfied, where $\epsilon$ is a predefined tolerance threshold.
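The iterative scheme just described can be sketched in Python as follows (an illustrative implementation; the Tukey-biweight-style weight and the MAD-based scale are stand-in choices for demonstration, not the paper's specific tuning):

```python
import numpy as np

def irls_slope(X, Y, weight_fn, tol=1e-6, max_iter=100):
    """Iteratively reweighted least squares for the slope in Y ~ Gamma*X:
    start from OLS, then repeat Gamma <- sum(w*X*Y)/sum(w*X^2), where
    w = weight_fn(residuals), until the change falls below tol."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    gamma = X @ Y / (X @ X)                 # OLS starting value
    for _ in range(max_iter):
        w = weight_fn(Y - gamma * X)        # re-descending weights
        new = np.sum(w * X * Y) / np.sum(w * X ** 2)
        if abs(new - gamma) < tol:          # convergence criterion
            return new
        gamma = new
    return gamma

def tukey_w(e, c=4.685):
    """Stand-in re-descending weight with a MAD-based robust scale."""
    s = np.median(np.abs(e)) / 0.6745
    u = (e / (c * s)) ** 2
    return np.where(u < 1, (1 - u) ** 2, 0.0)

rng = np.random.default_rng(1)
X = rng.uniform(1, 100, 100)
Y = 2 * X + rng.normal(0, 10, 100)
Y[:5] += 300                                # five gross outliers
gamma_hat = irls_slope(X, Y, tukey_w)       # stays close to the true slope 2
```

Because the weights of the contaminated points re-descend to zero after the first few iterations, the recovered slope is driven by the uncontaminated bulk of the data, which is exactly the mechanism the proposed $\hat{\Gamma}_{(t_i)}$ coefficients exploit.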
For clarity, the five key members of the developed family and their MSE are presented in a compact form as follows:
$$T_{Q_i} = \begin{cases} \bar{y}_s + \hat{\Gamma}_{(t_1)}\left(\bar{X}_{d(s)}\right), & i = 1, \\ \bar{y}_s + \hat{\Gamma}_{(t_2)}\left(\bar{X}_{d(s)}\right), & i = 2, \\ \bar{y}_s + \hat{\Gamma}_{(t_3)}\left(\bar{X}_{d(s)}\right), & i = 3, \\ \bar{y}_s + \hat{\Gamma}_{(t_4)}\left(\bar{X}_{d(s)}\right), & i = 4, \\ \bar{y}_s + \hat{\Gamma}_{(t_5)}\left(\bar{X}_{d(s)}\right), & i = 5. \end{cases}$$
$$\mathrm{MSE}(T_{Q_i}) = \frac{1-f}{n}\left[\vartheta_y S_y^2 - 2\Gamma_{(t_i)}\vartheta_x t^{*} S_{xy} + \Gamma_{(t_i)}^2 \vartheta_x S_x^2\right], \quad i = 1, \ldots, 5.$$
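These closed forms are straightforward to evaluate numerically. The sketch below (our plug-in helper, using $t^{*} = \sqrt{\vartheta_y/\vartheta_x}$ as reconstructed in this text, with purely illustrative input values) makes visible that the MSE is a quadratic in $\Gamma_{(t_i)}$, so a coefficient far from its optimum inflates the MSE:

```python
import math

def mse_TQ(gamma, n, f, Sy2, Sx2, Sxy, rx, ry):
    """First-order MSE of T_Q: (1-f)/n * [ty*Sy2 - 2*gamma*tx*tstar*Sxy
    + gamma^2*tx*Sx2], with tx = 1+(n-1)*rx, ty = 1+(n-1)*ry and
    tstar = sqrt(ty/tx)."""
    tx, ty = 1 + (n - 1) * rx, 1 + (n - 1) * ry
    tstar = math.sqrt(ty / tx)
    return ((1 - f) / n) * (ty * Sy2 - 2 * gamma * tx * tstar * Sxy
                            + gamma ** 2 * tx * Sx2)

# Illustrative values only: a near-optimal gamma versus a badly off one.
m_near = mse_TQ(gamma=0.4, n=20, f=0.1, Sy2=4, Sx2=9, Sxy=3, rx=0.2, ry=0.2)
m_off  = mse_TQ(gamma=2.0, n=20, f=0.1, Sy2=4, Sx2=9, Sxy=3, rx=0.2, ry=0.2)
```

With these inputs the minimizing value is $\Gamma = t^{*} S_{xy}/S_x^2 = 1/3$, so the second call, far from the optimum, returns a much larger MSE than the first.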
We should also note that the first-order Taylor series approximation was applied when deriving the MSE expressions. This approach is standard in the analysis of robust estimation schemes, as it is simple and gives a sensible analytical description of estimator behavior under mild regularity conditions. The first-order expansion captures the principal linear behavior of the estimator at the true parameter, which is usually sufficient for large samples or moderate deviations. Although higher-order terms may provide more accurate approximations, especially in the presence of extreme outliers or heavy tails, they have little additional effect on bias and variance in practical systematic sampling problems. Moreover, higher-order terms add considerable algebraic complexity without a guaranteed corresponding improvement in accuracy. Therefore, the first-order approximation is a reasonable trade-off between analytical tractability and empirical validity, which is borne out by the fact that the theoretical MSEs match the behavior observed in our simulation studies in Section 4.
The efficiency conditions of the developed family can be determined by comparing the MSEs of $T_{tt\Delta_1}$–$T_{tt\Delta_{20}}$ with those of $T_{Q_1}$–$T_{Q_5}$:
$$\Gamma_{(t_k)} > \frac{S_{XY}}{S_X^2} - \frac{1}{2}K_{tt\Delta_j}, \quad \text{for } k = 1, \ldots, 5 \text{ and } j = 4(k-1)+1, \ldots, 4k.$$

4. Numerical Performance Evaluation

In this section, we evaluate the proposed and adapted estimators using real and synthetic data.

4.1. Applications to Real-World Data

4.1.1. Cars Dataset

The mtcars dataset available in R is a widely recognized real-world dataset originally published in the 1974 Motor Trend US magazine. It was derived from the 1973–74 Road Test dataset by Henderson and Velleman [35]. In this study, we considered miles per gallon (mpg) as the research variable and horsepower (hp) as the supplementary variable, with an intra-class correlation of r X r Y r W = 0.3192 . This dataset is a good candidate for systematic sampling, as it is structured and can be ordered systematically.

4.1.2. Trees Dataset

We also examined the estimators using the trees dataset, which originates from Murthy [36]. In this case, the supplementary variable is strip length, while the research variable is the timber volume for 176 forest strips. The intra-class correlation was calculated as r X r Y r W = 0.1510 .
Table 1 presents the key characteristics that are observed in the datasets. The visualizations in Figure 1 and Figure 2 show the data distributions, highlighting asymmetry along with outliers emerging from these datasets. Also, it is worth noting that the Trees data in Figure 2 shows leverage-type outliers. The computed MSE and PRE outcomes for these real datasets are provided in Table 2, Table 3, Table 4 and Table 5.

4.2. Simulation Study

Within the framework of systematic sampling, a comprehensive simulation experiment was conducted to evaluate the behavior of the developed re-descending estimators. This study consisted of an analytical investigation of a variety of artificially constructed populations ( 1 , 2 , 3 ) , some of which specifically included outliers. In these populations, X was generated from Uniform ( 1 , 100 ) , Gumbel ( 50 , 10 ) , and Weibull ( 2 , 50 ) , respectively. Y was defined using a linear model:
Y = 2 X + ϵ ,
where ϵ N ( 0 , 10 ) . To introduce anomalies into the dataset, random deviations were added to selected observations of Y using N ( 100 , 20 ) , as anomalies significantly deviate from the majority of the data. The datasets generated by this method contained standard data points and extreme outliers that reflect the case in the real world, where outliers are caused by measurement error or some rare event.
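The data-generating process described above can be sketched as follows (a hedged sketch: the contamination count `n_out` is our assumption, since the paper does not fix it, and the second argument of each normal distribution is read as the standard deviation):

```python
import numpy as np

def make_population(dist, n=100, n_out=10, seed=0):
    """One synthetic population as described in the text: X from the named
    distribution, Y = 2X + eps with eps ~ N(0, 10), and N(100, 20) shifts
    added to a few randomly chosen Y values to act as embedded outliers."""
    rng = np.random.default_rng(seed)
    if dist == "uniform":
        X = rng.uniform(1, 100, n)
    elif dist == "gumbel":
        X = rng.gumbel(50, 10, n)
    else:                                   # Weibull(2, 50): shape 2, scale 50
        X = 50 * rng.weibull(2, n)
    Y = 2 * X + rng.normal(0, 10, n)
    out = rng.choice(n, n_out, replace=False)
    Y[out] += rng.normal(100, 20, n_out)    # embedded anomalies
    return X, Y, out

X, Y, out = make_population("uniform")
```

Returning the outlier indices makes it easy to color the contaminated points separately, as in the violin plots discussed next.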
The simulated dataset consisted of 100 observations, including both regular and outlier points. Figure 3, Figure 4 and Figure 5 display the distribution of the response variable Y using violin plots generated in R. The violin densities are shown in gray and red, representing regular data and data with outliers, respectively. The width of each violin at a given value of Y represents the data density: a wider section indicates a higher concentration, and a narrower section a lower one. The black dots represent individual observations, with the outliers clearly distinguishable in the red violin. The legend on the right differentiates between regular observations and outliers.
It is important to note that we used a convergence tolerance of $\epsilon = 10^{-6}$ across all re-descending coefficient calculations to achieve consistency and fairness among all estimators. The simulation-based MSE and PRE outcomes for these experimental populations are reported in Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11. For more details about the calculation of the table results, refer to [37].

4.3. Results Discussion

4.3.1. Cars Data

The results in Table 2 highlight the superior effectiveness of the ($T_{Q_1}$–$T_{Q_5}$) estimators relative to the adapted ones. The MSE values reveal that the developed estimators exhibit significantly reduced error margins relative to the best-performing adapted one, $T_{tt\Delta_8}$, with an MSE of 3.950901. The MSE of the best-performing proposed estimator, $T_{Q_2}$, is 1.31742, about three times smaller. This reduction in error indicates that the developed estimators deal better with the variability in the Cars dataset, especially in the presence of outliers. The PRE values in Table 3 further support this conclusion: $T_{Q_2}$ yields a PRE of 333.2642, greatly outperforming the traditional adapted estimators. The consistency of the PRE values across all proposed estimators demonstrates their robustness and adaptability to systematic sampling scenarios.

4.3.2. Trees Data

The results in Table 4 demonstrate the superior efficiency of the ($T_{Q_1}$–$T_{Q_5}$) estimators in comparison with the adapted estimators for the Trees dataset. The MSE values show a significant reduction in estimation error compared to the best-performing adapted estimator, $T_{tt\Delta_3}$, which has an MSE of 6252.913; $T_{Q_1}$ performs best, with an MSE of 4750.646. This reduction indicates the robustness of the developed estimators for systematic sampling under extreme data scenarios. The PRE values in Table 5 also support this finding, as the proposed estimators remain markedly more efficient even under severe outliers, with $T_{Q_1}$ attaining the highest PRE of 1809.2878.

4.3.3. First Simulated Data

Table 6 shows a considerable gain in estimation accuracy and efficiency for the proposed re-descending M-estimators over the adapted estimators. The MSEs of the developed estimators are substantially reduced: the best-performing proposed estimator, $T_{Q_2}$, has an MSE of 14.7064, well below that of the best adapted estimator, $T_{tt\Delta_8}$, with an MSE of 84.18337. This shows that the developed estimators are more robust and accurate for simulated data contaminated with outliers. The PRE values in Table 7 further confirm this, with $T_{Q_2}$ obtaining the highest PRE of 699.1072, exceeding the adapted estimators by a significant margin.

4.3.4. Second Simulated Data

Table 8 shows that the developed re-descending M-estimators perform more accurately than the adapted ones in the second simulated dataset. The MSE values drop appreciably: the best adapted estimator, $T_{tt\Delta_8}$, has an MSE of 24.06637, whereas $T_{Q_2}$ performs best with an MSE of 14.71397. This substantial decrease in error for a systematic sample from a skewed, heavy-tailed distribution confirms that the proposed estimators are robust for systematic sampling. Additionally, as shown in Table 9, $T_{Q_2}$ has the highest PRE of 245.6101, significantly higher than the corresponding adapted estimators.

4.3.5. Third Simulated Data

Table 10 shows the strong performance of the ($T_{Q_1}$–$T_{Q_5}$) estimators compared to the adapted estimators in the third simulated dataset. The estimation error is significantly decreased: the best-performing proposed estimator, $T_{Q_4}$, has an MSE of only 14.6955, lower than that of the best adapted estimator, $T_{tt\Delta_8}$, with an MSE of 49.63902. This substantial reduction in error shows the robustness of the proposed estimators, particularly for systematic sampling where the distributions are skewed or heavy-tailed. The PRE values in Table 11 agree with these findings, as the proposed estimators remain consistently more efficient, with $T_{Q_4}$ showing the highest PRE of 444.6700.
Based on the correlations observed across all real and simulated populations, the proposed estimators are recommended when the correlation between the study and auxiliary variables is moderate to high and outliers are present.
Even though some of the suggested estimators may appear algebraically similar, they differ in the choice of tuning parameters, which dictate the shape and severity of the re-descending weight functions. These parameters govern the trade-off between robustness and efficiency: a large tuning constant downweights outliers only mildly, preserving efficiency on clean data, whereas a small constant attenuates extreme observations aggressively. To assess sensitivity to these choices, we monitored performance measures such as MSE and PRE, which respond to the tuning constants; the relative performance of the estimators, however, changed little over a wide range of values. These results indicate that the estimators are relatively insensitive to moderate variations in the tuning constants.
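To make the role of the tuning constant concrete, consider Tukey's biweight, a standard re-descending weight function (the paper's twenty weighting functions are not reproduced here; c = 4.685 is the conventional constant giving roughly 95% efficiency at the normal model).

```python
import numpy as np

def biweight_weight(u, c=4.685):
    """Tukey biweight weight w(u) = (1 - (u/c)^2)^2 for |u| <= c, else 0.
    Residuals beyond the tuning constant c receive zero weight, so extreme
    observations are fully rejected (the re-descending property)."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= c, (1.0 - (u / c) ** 2) ** 2, 0.0)

u = 6.0  # a large standardized residual (a potential outlier)
print(biweight_weight(u, c=4.685))  # 0.0 -> rejected under the small constant
print(biweight_weight(u, c=8.0))    # ~0.191 -> retained with reduced weight
```

The same residual is discarded under the tighter constant but kept, with reduced influence, under the looser one, which is exactly the robustness–efficiency trade-off described above.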
This substantial efficiency gain confirms that the developed estimators outperform the traditional adapted ones in both accuracy and robustness, making them well suited to real-world data that contain outliers or exhibit asymmetric distributions.
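The mechanism behind these gains can be illustrated with a generic influence-controlled location estimate: iteratively reweighted means with biweight weights and a MAD-based scale. This is a simplified sketch, not the authors' T_Q estimators (which also exploit auxiliary information); the function name, seed, and contamination level are ours.

```python
import numpy as np

def m_location(y, c=4.685, tol=1e-8, max_iter=100):
    """Redescending-M location via iteratively reweighted means:
    Tukey biweight weights on residuals standardized by a MAD scale."""
    y = np.asarray(y, dtype=float)
    mu = np.median(y)                       # robust starting value
    s = np.median(np.abs(y - mu)) / 0.6745  # MAD-based scale estimate
    if s == 0:
        return mu
    for _ in range(max_iter):
        u = (y - mu) / s
        w = np.where(np.abs(u) <= c, (1.0 - (u / c) ** 2) ** 2, 0.0)
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(50, 5, 95), rng.normal(200, 5, 5)])  # 5% outliers
print(np.mean(y))     # the ordinary mean is pulled toward the outliers
print(m_location(y))  # the M-estimate stays near the bulk of the data
```

With 5% contamination at 200, the ordinary mean drifts toward 57–58 while the reweighted estimate remains near the bulk at 50, mirroring the MSE reductions reported in the tables.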

5. Limitations of the Study

A significant limitation of the proposed estimators is that they are designed specifically for datasets containing outliers: they are highly effective in such settings, but their advantage diminishes on clean, normally distributed data. They are therefore recommended primarily for situations where outliers are likely or known to be present.

6. Conclusions

Overall, this study evaluated comprehensive robust mean estimators, namely re-descending M-estimators, in systematic sampling. Traditional mean estimators remain very susceptible to outliers, which introduces large bias and low efficiency. To overcome this problem, we developed estimators (T_Q1–T_Q5) that limit the influence of extreme values on the estimated parameters while maintaining statistical efficiency under systematic sampling. The analysis used real datasets (mtcars and Trees) as well as simulated datasets, and the proposed estimators consistently achieved lower MSE than the conventional adapted estimators. PRE values were also calculated, confirming the accuracy of the developed (T_Q1–T_Q5) estimators across different asymmetric datasets. These estimators offer increased accuracy and are recommended as a reliable alternative to standard robust estimation techniques. Future research, including the study of high-dimensional data, could extend these estimators to rare-event and cluster sampling [38].

Author Contributions

Conceptualization, H.M.A. and M.M.A.; Methodology, H.M.A. and M.M.A.; Software, H.M.A. and M.M.A.; Validation, H.M.A. and M.M.A.; Formal analysis, H.M.A. and M.M.A.; Investigation, H.M.A. and M.M.A.; Resources, H.M.A. and M.M.A.; Data curation, H.M.A. and M.M.A.; Writing—original draft, H.M.A. and M.M.A.; Writing—review & editing, H.M.A. and M.M.A.; Visualization, H.M.A.; Supervision, H.M.A.; Project administration, H.M.A.; Funding acquisition, H.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research and Libraries in Princess Nourah bint Abdulrahman University for funding this research work through the Supporting Publication in Top-Impact Journals Initiative (SPTIF-2026).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zinger, A. Systematic sampling in forestry. Biometrics 1964, 20, 553–565. [Google Scholar] [CrossRef]
  2. Swain, A.K.P.C. The use of systematic sampling ratio estimate. J. Indian Stat. Assoc. 1964, 2, 160–164. [Google Scholar]
  3. Kushwaha, K.S.; Singh, H.P. Class of almost unbiased ratio and product estimators in systematic sampling. J. Ind. Soc. Agric. Stat. 1989, 41, 193–205. [Google Scholar]
  4. Quenouille, M.H. Notes on bias in estimation. Biometrika 1956, 43, 353–360. [Google Scholar] [CrossRef]
  5. Singh, S.; Singh, R. Almost filtration of bias precipitates: A new approach. J. Ind. Soc. Agric. Stat. 1993, 45, 214–218. [Google Scholar]
  6. Kushwaha, S.N.S.; Kushwaha, K.S. A class of ratio, product and difference (RPD) estimators in systematic sampling. Microelectron. Reliab. 1993, 33, 455–457. [Google Scholar]
  7. Singh, R.; Malik, S.; Chaudhary, M.K.; Verma, H.K.; Adewara, A.A. A general family of ratio-type estimators in systematic sampling. J. Reliab. Stat. Stud. 2012, 5, 73–82. [Google Scholar]
  8. Singh, H.P.; Solanki, R.S. An efficient class of estimators for the population mean using auxiliary information in systematic sampling. J. Stat. Theory Pract. 2012, 6, 274–285. [Google Scholar] [CrossRef]
  9. Shahzad, U. On the estimation of population mean under systematic sampling using auxiliary attributes. Orient. J. Phys. Sci. 2016, 1, 21–25. [Google Scholar] [CrossRef]
  10. Ali, N.; Ahmad, I.; Hanif, M.; Shahzad, U.; Masood, M. Robust estimators for mean estimation in systematic sampling with the numerical application in forestry. Fresenius Environ. Bull. 2021, 30, 5635–5644. [Google Scholar]
  11. Shahzad, U.; Ahmad, I.; Al-Noor, N.H.; Hanif, M.; Almanjahie, I.M. Robust estimation of the population mean using quantile regression under systematic sampling. Math. Popul. Stud. 2023, 30, 195–207. [Google Scholar] [CrossRef]
  12. Audu, A.; Aphane, M.; Ahmad, J.; Singh, R.V.K. Class of Calibrated Estimators of Population Proportion Under Diagonal Systematic Sampling Scheme. Mathematics 2024, 12, 3997. [Google Scholar] [CrossRef]
  13. Triveni, G.R.V.; Danish, F.; Albalawi, O. Optimizing Variance Estimation in Stratified Random Sampling through a Log-Type Estimator for Finite Populations. Symmetry 2024, 16, 540. [Google Scholar] [CrossRef]
  14. Dunder, E.; Zaman, T.; Cengiz, M.; Alakus, K. Implementation of adaptive lasso regression based on multiple Theil-Sen Estimators using differential evolution algorithm with heavy tailed errors. J. Natl. Sci. Found. Sri Lanka 2022, 50, 395–404. [Google Scholar] [CrossRef]
  15. Beaton, A.E.; Tukey, J.W. The fitting of power series, meaning polynomials, illustrated on band-spectroscopic data. Technometrics 1974, 16, 147–185. [Google Scholar] [CrossRef]
  16. Qadir, M.F. Robust method for detection of single and multiple outliers. Sci. Khyber 1996, 9, 135–144. [Google Scholar]
  17. Alamgir, A.A.; Khan, S.A.; Khan, D.M.; Khalil, U. A new efficient redescending M-estimator: Alamgir redescending M-estimator. Res. J. Recent Sci. 2013, 2277, 2502. [Google Scholar]
  18. Khalil, U.; Ali, A.; Khan, D.M.; Khan, S.A.; Qadir, F. Efficient UK’s re-descending m-estimator for robust regression. Pak. J. Stat. 2016, 32, 125–138. [Google Scholar]
  19. Noor-Ul-Amin, M.; Asghar, S.U.D.; Sanaullah, A.; Shehzad, M.A. Redescending M-estimator for robust regression. J. Reliab. Stat. Stud. 2018, 11, 69–80. [Google Scholar]
  20. Anekwe, S.; Onyeagu, S. The Redescending M estimator for detection and deletion of outliers in regression analysis. Pak. J. Stat. Oper. Res. 2021, 17, 997–1014. [Google Scholar] [CrossRef]
  21. Luo, R.; Chen, Y.; Song, S. On the M-estimator under third moment condition. Mathematics 2022, 10, 1713. [Google Scholar] [CrossRef]
  22. Cochran, W.G. Sampling Techniques; John Wiley and Sons: New York, NY, USA, 1977. [Google Scholar]
  23. Cetin, A.E.; Koyuncu, N. Calibration estimator of population mean in stratified extreme ranked set sampling with simulation study. Filomat 2024, 38, 599–608. [Google Scholar] [CrossRef]
  24. Daraz, U.; Agustiana, D.; Wu, J.; Emam, W. Twofold Auxiliary Information Under Two-Phase Sampling: An Improved Family of Double-Transformed Variance Estimators. Axioms 2025, 14, 64. [Google Scholar] [CrossRef]
  25. Kadilar, C.; Cingi, H. Ratio estimators in simple random sampling. Appl. Math. Comput. 2004, 151, 893–902. [Google Scholar] [CrossRef]
  26. Kadılar, C.; Cingi, H. Ratio estimators using robust regression. Hacet. J. Math. Stat. 2007, 36, 181–188. [Google Scholar]
  27. Zaman, T.; Iftikhar, S.; Sozen, C.; Sharma, P. A new logarithmic type estimators for analysis of number of aftershocks using poisson distribution. J. Sci. Arts 2024, 24, 833–842. [Google Scholar] [CrossRef]
  28. Koc, T.; Koc, H. A new class of quantile regression ratio-type estimators for finite population mean in stratified random sampling. Axioms 2023, 12, 713. [Google Scholar] [CrossRef]
  29. Khan, D.M.; Ali, M.; Ahmad, Z.; Manzoor, S.; Hussain, S. A New Efficient Redescending M-Estimator for Robust Fitting of Linear Regression Models in the Presence of Outliers. Math. Probl. Eng. 2021, 2021, 3090537. [Google Scholar] [CrossRef]
  30. Raza, A.; Noor-ul-Amin, M.; Ayari-Akkari, A.; Nabi, M.; Usman Aslam, M. A redescending M-estimator approach for outlier-resilient modeling. Sci. Rep. 2024, 14, 7131. [Google Scholar] [CrossRef]
  31. Raza, A.; Talib, M.; Noor-ul-Amin, M.; Gunaime, N.; Boukhris, I.; Nabi, M. Enhancing performance in the presence of outliers with redescending M-estimators. Sci. Rep. 2024, 14, 13529. [Google Scholar] [CrossRef]
  32. Zhang, H.; Xu, Y.; Luo, R.; Mao, Y. Fast GNSS acquisition algorithm based on SFFT with high noise immunity. China Commun. 2023, 20, 70–83. [Google Scholar] [CrossRef]
  33. Li, J.; Sun, R.; Wang, Y.; Ochieng, W.Y. A robust time synchronization algorithm for GNSS/IMU integrated navigation in urban environments. Meas. Sci. Technol. 2025, 36, 036302. [Google Scholar] [CrossRef]
  34. Huber, P.J. Robust estimation of a location parameter. In Breakthroughs in Statistics: Methodology and Distribution; Springer: New York, NY, USA, 1992; pp. 492–518. [Google Scholar]
  35. Henderson, H.V.; Velleman, P.F. Building multiple regression models interactively. Biometrics 1981, 37, 391–411. [Google Scholar] [CrossRef]
  36. Murthy, M.N. Sampling Theory and Methods; Statistical Publishing Society: New Delhi, India, 1967. [Google Scholar]
  37. Rashedi, K.; Abdulrahman, A.; Alshammari, T.; Alshammari, K.; Shahzad, U.; Shabbir, J.; Mehmood, T.; Ahmad, I. Robust Särndal-Type Mean Estimators with Re-Descending Coefficients. Axioms 2025, 14, 261. [Google Scholar] [CrossRef]
  38. Chutiman, N.; Nathomthong, A.; Wichitchan, S.; Guayjarernpanishk, P. Improved Estimator Using Auxiliary Information in Adaptive Cluster Sampling with Networks Selected Without Replacement. Symmetry 2025, 17, 375. [Google Scholar] [CrossRef]
Figure 1. Cars data scatter plot.
Figure 2. Trees data scatter plot.
Figure 3. Presentation of outliers in the first simulated data.
Figure 4. Presentation of outliers in second simulated data.
Figure 5. Presentation of outliers in third simulated data.
Table 1. Cars and Trees data.

              Cars Data     Trees Data
N             32            176
n             4             4
ρ             −0.7761       0.9520
S_y           6.026948      606.6869
S_x           68.56287      19.58067
S_xy          −320.7321     11309.95
Γ(t_1)        0.1059073     31.19974
Γ(t_2)        0.0965251     31.98072
Γ(t_3)        0.1072613     31.19974
Γ(t_4)        0.1052937     31.56664
Γ(t_5)        0.09973342    31.19974
Table 2. MSEs for Cars data.

Adapted
T_ttΔ1      T_ttΔ2      T_ttΔ3      T_ttΔ4
4.353679    4.341886    4.278283    4.196810
T_ttΔ5      T_ttΔ6      T_ttΔ7      T_ttΔ8
4.102995    4.091558    4.029883    3.950901
T_ttΔ9      T_ttΔ10     T_ttΔ11     T_ttΔ12
4.390490    4.378645    4.314765    4.232931
T_ttΔ13     T_ttΔ14     T_ttΔ15     T_ttΔ16
4.337048    4.325278    4.261802    4.180491
T_ttΔ17     T_ttΔ18     T_ttΔ19     T_ttΔ20
4.187855    4.176295    4.113962    4.034128
Proposed
T_Q1        T_Q2        T_Q3        T_Q4        T_Q5
1.45605     1.31742     1.47669     1.446748    1.363962
Table 3. PREs for Cars data.

Estimators   T_Q1       T_Q2       T_Q3       T_Q4       T_Q5
T_ttΔ1       299.0062   330.4700   294.8269   300.9287   319.1936
T_ttΔ2       298.1962   329.5749   294.0283   300.1135   318.3290
T_ttΔ3       293.8281   324.7471   289.7212   295.7173   313.6659
T_ttΔ4       288.2326   318.5627   284.2039   290.0858   307.6926
T_ttΔ5       281.7895   311.4416   277.8509   283.6012   300.8145
T_ttΔ6       281.0040   310.5735   277.0763   282.8107   299.9760
T_ttΔ7       276.7683   305.8920   272.8998   278.5477   295.4543
T_ttΔ8       271.3438   299.8968   267.5512   273.0884   289.6636
T_ttΔ9       301.5343   333.2642   297.3197   303.4730   321.8924
T_ttΔ10      300.7209   332.3651   296.5176   302.6543   321.0240
T_ttΔ11      296.3336   327.5162   292.1917   298.2389   316.3406
T_ttΔ12      290.7134   321.3046   286.6500   292.5825   310.3409
T_ttΔ13      297.8640   329.2077   293.7007   299.7791   317.9743
T_ttΔ14      297.0557   328.3143   292.9036   298.9656   317.1114
T_ttΔ15      292.6962   323.4960   288.6051   294.5781   312.4576
T_ttΔ16      287.1118   317.3241   283.0988   288.9578   306.4962
T_ttΔ17      287.6175   317.8830   283.5975   289.4668   307.0360
T_ttΔ18      286.8237   317.0056   282.8147   288.6678   306.1886
T_ttΔ19      282.5427   312.2741   278.5935   284.3593   301.6185
T_ttΔ20      277.0597   306.2142   273.1872   278.8411   295.7654
Table 4. MSEs for Trees data.

Adapted
T_ttΔ1      T_ttΔ2      T_ttΔ3      T_ttΔ4
85,952.86   60,073.03   6252.913    8895.110
T_ttΔ5      T_ttΔ6      T_ttΔ7      T_ttΔ8
89,173.06   62,737.57   6738.394    9658.905
T_ttΔ9      T_ttΔ10     T_ttΔ11     T_ttΔ12
85,952.86   60,073.02   6252.912    8895.109
T_ttΔ13     T_ttΔ14     T_ttΔ15     T_ttΔ16
87,457.90   61,317.03   6473.204    9246.151
T_ttΔ17     T_ttΔ18     T_ttΔ19     T_ttΔ20
85,952.86   60,073.03   6252.913    8895.110
Proposed
T_Q1        T_Q2        T_Q3        T_Q4        T_Q5
4750.646    4918.023    4750.646    4821.494    4750.646
Table 5. PREs for Trees data.

Estimators   T_Q1        T_Q2        T_Q3        T_Q4        T_Q5
T_ttΔ1       1809.2878   1747.7115   1809.2878   1782.7017   1809.2878
T_ttΔ2       1264.5233   1221.4872   1264.5234   1245.9421   1264.5233
T_ttΔ3       131.6224    127.1428    131.6224    129.6883    131.6224
T_ttΔ4       187.2400    180.8676    187.2400    184.4887    187.2400
T_ttΔ5       1877.0723   1813.1891   1877.0724   1849.4902   1877.0723
T_ttΔ6       1320.6114   1275.6664   1320.6114   1301.2060   1320.6114
T_ttΔ7       141.8416    137.0143    141.8416    139.7574    141.8416
T_ttΔ8       203.3177    196.3981    203.3177    200.3301    203.3177
T_ttΔ9       1809.2877   1747.7114   1809.2878   1782.7016   1809.2877
T_ttΔ10      1264.5233   1221.4872   1264.5233   1245.9421   1264.5233
T_ttΔ11      131.6224    127.1428    131.6224    129.6883    131.6224
T_ttΔ12      187.2400    180.8676    187.2400    184.4886    187.2400
T_ttΔ13      1840.9686   1778.3141   1840.9686   1813.9170   1840.9686
T_ttΔ14      1290.7092   1246.7819   1290.7093   1271.7432   1290.7092
T_ttΔ15      136.2594    131.6221    136.2595    134.2572    136.2594
T_ttΔ16      194.6293    188.0054    194.6293    191.7694    194.6293
T_ttΔ17      1809.2878   1747.7115   1809.2878   1782.7017   1809.2878
T_ttΔ18      1264.5233   1221.4872   1264.5233   1245.9421   1264.5233
T_ttΔ19      131.6224    127.1428    131.6224    129.6883    131.6224
T_ttΔ20      187.2400    180.8676    187.2400    184.4887    187.2400
Table 6. MSEs from the first simulated data.

Adapted
T_ttΔ1      T_ttΔ2      T_ttΔ3      T_ttΔ4
102.76081   101.38606   96.89423    91.13955
T_ttΔ5      T_ttΔ6      T_ttΔ7      T_ttΔ8
95.28268    93.96780    89.67497    84.18337
T_ttΔ9      T_ttΔ10     T_ttΔ11     T_ttΔ12
102.81353   101.43836   96.94516    91.18866
T_ttΔ13     T_ttΔ14     T_ttΔ15     T_ttΔ16
102.72157   101.34712   96.85631    91.10298
T_ttΔ17     T_ttΔ18     T_ttΔ19     T_ttΔ20
100.91878   99.55853    95.11491    89.42396
Proposed
T_Q1        T_Q2        T_Q3        T_Q4        T_Q5
15.0517     14.7064     15.05524    15.04908    14.93762
Table 7. PREs for the first simulated data.

Estimators   T_Q1       T_Q2       T_Q3       T_Q4       T_Q5
T_ttΔ1       682.7188   698.7487   682.5587   682.8377   687.9330
T_ttΔ2       673.5853   689.4008   673.4273   673.7026   678.7298
T_ttΔ3       643.7426   658.8574   643.5916   643.8547   648.6592
T_ttΔ4       605.5098   619.7269   605.3678   605.6153   610.1344
T_ttΔ5       633.0358   647.8992   632.8873   633.1460   637.8706
T_ttΔ6       624.3001   638.9583   624.1536   624.4088   629.0681
T_ttΔ7       595.7796   609.7682   595.6398   595.8833   600.3298
T_ttΔ8       559.2946   572.4266   559.1634   559.3920   563.5662
T_ttΔ9       683.0690   699.1072   682.9088   683.1880   688.2859
T_ttΔ10      673.9328   689.7564   673.7747   674.0501   679.0799
T_ttΔ11      644.0810   659.2037   643.9299   644.1931   649.0001
T_ttΔ12      605.8361   620.0609   605.6940   605.9416   610.4632
T_ttΔ13      682.4581   698.4819   682.2980   682.5769   687.6703
T_ttΔ14      673.3266   689.1360   673.1686   673.4438   678.4691
T_ttΔ15      643.4907   658.5996   643.3397   643.6027   648.4053
T_ttΔ16      605.2669   619.4783   605.1249   605.3723   609.8896
T_ttΔ17      670.4808   686.2234   670.3235   670.5975   675.6015
T_ttΔ18      661.4436   676.9740   661.2885   661.5588   666.4953
T_ttΔ19      631.9212   646.7585   631.7730   632.0313   636.7475
T_ttΔ20      594.1119   608.0614   593.9725   594.2153   598.6494
Table 8. MSEs from second simulated data.

Adapted
T_ttΔ1      T_ttΔ2      T_ttΔ3      T_ttΔ4
36.13900    35.99061    32.45623    25.19268
T_ttΔ5      T_ttΔ6      T_ttΔ7      T_ttΔ8
34.51767    34.37499    30.98315    24.06637
T_ttΔ9      T_ttΔ10     T_ttΔ11     T_ttΔ12
36.12775    35.97940    32.44598    25.18480
T_ttΔ13     T_ttΔ14     T_ttΔ15     T_ttΔ16
36.02500    35.87701    32.35247    25.11286
T_ttΔ17     T_ttΔ18     T_ttΔ19     T_ttΔ20
35.99728    35.84938    32.32724    25.09347
Proposed
T_Q1        T_Q2        T_Q3        T_Q4        T_Q5
14.84067    14.71397    14.83959    14.82988    14.82729
Table 9. PREs for the second simulated data.

Estimators   T_Q1       T_Q2       T_Q3       T_Q4       T_Q5
T_ttΔ1       243.5133   245.6101   243.5310   243.6905   243.7329
T_ttΔ2       242.5134   244.6016   242.5310   242.6899   242.7322
T_ttΔ3       218.6979   220.5810   218.7138   218.8570   218.8951
T_ttΔ4       169.7543   171.2160   169.7667   169.8779   169.9075
T_ttΔ5       232.5883   234.5910   232.6052   232.7576   232.7981
T_ttΔ6       231.6270   233.6214   231.6438   231.7955   231.8359
T_ttΔ7       208.7719   210.5696   208.7871   208.9239   208.9602
T_ttΔ8       162.1650   163.5613   162.1768   162.2830   162.3113
T_ttΔ9       243.4374   245.5336   243.4551   243.6146   243.6570
T_ttΔ10      242.4378   244.5254   242.4554   242.6143   242.6565
T_ttΔ11      218.6288   220.5114   218.6447   218.7879   218.8261
T_ttΔ12      169.7012   171.1624   169.7135   169.8247   169.8543
T_ttΔ13      242.7451   244.8353   242.7628   242.9218   242.9641
T_ttΔ14      241.7479   243.8295   241.7655   241.9238   241.9660
T_ttΔ15      217.9987   219.8758   218.0145   218.1573   218.1953
T_ttΔ16      169.2165   170.6736   169.2288   169.3397   169.3692
T_ttΔ17      242.5583   244.6469   242.5759   242.7348   242.7771
T_ttΔ18      241.5618   243.6417   241.5793   241.7375   241.7796
T_ttΔ19      217.8287   219.7043   217.8445   217.9872   218.0252
T_ttΔ20      169.0858   170.5417   169.0981   169.2089   169.2383
Table 10. MSEs from third simulated data.

Adapted
T_ttΔ1      T_ttΔ2      T_ttΔ3      T_ttΔ4
65.00710    63.70964    57.46972    52.12881
T_ttΔ5      T_ttΔ6      T_ttΔ7      T_ttΔ8
62.11380    60.85445    54.80528    49.63902
T_ttΔ9      T_ttΔ10     T_ttΔ11     T_ttΔ12
65.33604    64.03432    57.77307    52.41262
T_ttΔ13     T_ttΔ14     T_ttΔ15     T_ttΔ16
63.46263    62.18537    56.04658    50.79826
T_ttΔ17     T_ttΔ18     T_ttΔ19     T_ttΔ20
65.34649    64.04464    57.78271    52.42165
Proposed
T_Q1        T_Q2        T_Q3        T_Q4        T_Q5
14.70678    14.70564    14.71223    14.6955     14.71242
Table 11. PREs for the third simulated data.

Estimators   T_Q1       T_Q2       T_Q3       T_Q4       T_Q5
T_ttΔ1       442.0214   442.0557   441.8575   442.3605   441.8517
T_ttΔ2       433.1992   433.2328   433.0386   433.5316   433.0330
T_ttΔ3       390.7703   390.8007   390.6255   391.0702   390.6204
T_ttΔ4       354.4544   354.4819   354.3229   354.7263   354.3184
T_ttΔ5       422.3481   422.3809   422.1915   422.6722   422.1861
T_ttΔ6       413.7851   413.8172   413.6317   414.1026   413.6263
T_ttΔ7       372.6532   372.6822   372.5151   372.9392   372.5102
T_ttΔ8       337.5248   337.5510   337.3997   337.7838   337.3953
T_ttΔ9       444.2580   444.2925   444.0933   444.5989   444.0875
T_ttΔ10      435.4069   435.4407   435.2454   435.7409   435.2398
T_ttΔ11      392.8330   392.8634   392.6873   393.1344   392.6822
T_ttΔ12      356.3841   356.4118   356.2520   356.6576   356.2474
T_ttΔ13      431.5196   431.5531   431.3596   431.8507   431.3541
T_ttΔ14      422.8348   422.8676   422.6780   423.1593   422.6726
T_ttΔ15      381.0936   381.1231   380.9523   381.3860   380.9473
T_ttΔ16      345.4071   345.4339   345.2791   345.6721   345.2746
T_ttΔ17      444.3291   444.3636   444.1644   444.6700   444.1586
T_ttΔ18      435.4770   435.5108   435.3156   435.8112   435.3099
T_ttΔ19      392.8985   392.9290   392.7529   393.2000   392.7478
T_ttΔ20      356.4455   356.4732   356.3133   356.7190   356.3087
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
