Article

Robust and Non-Parametric Regression Estimators for Predictive Mean Estimation in Stratified Sampling

1 Department of Statistics, PMAS-Arid Agriculture University, Rawalpindi 46300, Pakistan
2 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Axioms 2026, 15(2), 134; https://doi.org/10.3390/axioms15020134
Submission received: 13 December 2025 / Revised: 3 February 2026 / Accepted: 10 February 2026 / Published: 12 February 2026
(This article belongs to the Special Issue Probability, Statistics and Estimations, 2nd Edition)

Abstract

In modern survey sampling, particularly under stratified random sampling (StRS), outliers and model mis-specification pose a serious challenge to conventional parametric and nonparametric estimation methods. This research presents a new class of predictive estimators that combines robust regression with nonparametric local polynomial kernel regression. The aim is to provide more resistant and efficient estimators of the population mean in settings where auxiliary information is available but data irregularities are common. The proposed estimators employ dual calibration based on both the auxiliary variable means and coefficients of variation, which improves efficiency. The framework enhances predictive performance by integrating the adaptability of kernel-based smoothing with the outlier resistance of robust regression. The accuracy of the suggested estimators is assessed through large-scale simulation experiments on artificial populations exhibiting structural heterogeneity and outlier contamination. An empirical comparison based on percentage relative efficiency (PRE) indicates that the new estimators outperform classical kernel-regression-based methods under most bandwidth selection strategies. Beyond its methodological contribution connecting distribution theory, regression modelling, and robust estimation strategies, this work is also useful to survey practitioners who work with complicated and imperfect real-life data, such as fisheries and radiation measurements.

1. Introduction

In multifaceted survey environments, supplementary information gathered through a national census, remote sensing networks, or environmental inventories can be of great value at both the design and estimation stages of a survey. These external data sources are commonly used to construct efficient estimators of major population parameters, such as the total or the mean. Conventional estimation methods rest on the assumption that the study variable has a functional, usually linear, relationship with the auxiliary variables. Such model-based estimators require the model structure to be specified upfront, which is a difficult task when several variables are involved or when the population has a complex structure (Opsomer et al. [1] and Wu and Sitter [2]). This has led to a shift toward nonparametric methods, which are more flexible because they do not presume strongly defined functional forms. The initial work of Dorfman [3], and subsequently of Dorfman and Hall [4], established the principles of incorporating nonparametric modelling into survey estimation, enabling more flexible approaches that can capture complex relations in diverse populations.
Compared with parametric methods, nonparametric inference methods are less sensitive to sampling designs and model assumptions (Nadaraya [5]). The theoretical literature distinguishes two major frameworks for the development of efficient estimators: the design-driven framework, built only on the randomisation of the sampling design, and the model-driven framework, which regards the finite population as a realisation of a superpopulation model. The latter framework allows prediction of non-sampled units when a correlation is assumed between the survey and auxiliary variables (Dorfman and Hall [4]). One early step in this direction was the work of Nadaraya, which introduced Local Polynomial Regression (LPR) as a versatile nonparametric alternative to classical parametric regression estimators [5]. Numerical experiments demonstrate that LPR-based estimators attain the lowest MSE (the highest PRE) compared to parametric estimators, even in skewed and outlier-contaminated conditions. On this basis, Rueda and Sanchez-Borrego (RSB) [6] applied LPR techniques in probability sampling situations, further verifying their usefulness in model-based predictive contexts. Recent developments in robust regression and nonparametric learning, with a focus on distribution-resistant modelling such as functional coefficient and quantile-based regression [7], neural-network-driven robustness [8], and adaptive kernel smoothing methods [9,10], highlight the increasing importance of flexible, outlier-resistant estimation strategies for complex data environments.
Local polynomial kernel regression is a versatile tool that can handle both continuous and discrete datasets, although its usefulness depends heavily on the characteristics of the response variable. In continuous cases, the technique excels: it fits localised polynomials at the target point, with nearby observations given a heavier influence on the output through kernel weighting. This localised construction provides adaptive smoothing that makes few assumptions about the global shape of the underlying function. For discrete or categorical data, the method must be modified. For example, binary or multiclass outcomes are modelled with local logistic regression, and count data are generally handled by local versions of generalised linear models (GLMs) with an appropriate link function, such as the Poisson or negative binomial. In all such situations, choosing a suitable kernel bandwidth is particularly important, since sparse or unevenly distributed data can considerably affect model stability and predictive performance.
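As a minimal illustration of this bandwidth sensitivity (our own sketch, with an illustrative grid of auxiliary values, not the paper's data), the share of normalised kernel weight held by the target point shrinks as the bandwidth grows:

```python
import numpy as np

def gaussian_kernel_weights(w_sample, w0, h):
    """Gaussian kernel weights K_h(u) = (1/h) K(u/h) around a target point w0."""
    u = (w_sample - w0) / h
    return np.exp(-0.5 * u ** 2) / (h * np.sqrt(2.0 * np.pi))

# Illustrative grid of auxiliary values.
w = np.linspace(0.0, 10.0, 11)

# A small bandwidth concentrates the normalised weight on neighbours of w0;
# a large bandwidth spreads it over the whole grid, i.e. heavier smoothing.
wt_narrow = gaussian_kernel_weights(w, w0=5.0, h=0.5)
wt_wide = gaussian_kernel_weights(w, w0=5.0, h=3.0)
share_narrow = wt_narrow[5] / wt_narrow.sum()  # share held by the point at w0
share_wide = wt_wide[5] / wt_wide.sum()
print(round(share_narrow, 2), round(share_wide, 2))  # 0.79 0.14
```

With h = 0.5, the target point carries the bulk of the weight; with h = 3.0, the weight is spread across the whole grid.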
The arithmetic mean remains one of the most prevalent statistical measures, serving as a fundamental data summary across fields of study including the sciences, social sciences, and arts (Zaman [11]). Because it is interpretable and broadly applicable, accurate estimation of the population mean is of critical importance in survey sampling as well as in a variety of applications (Subzar et al. [12]; Kumar and Siddiqui [13]). An in-depth account of mean-estimation methods is available in Shahzad et al. [14] and Koc and Koc [15]. Given this critical importance, there is an ongoing need for more efficient and reliable ways of estimating the mean. This has spurred the increased use of model-based methods, which incorporate auxiliary information and flexible modelling structures to enhance the accuracy of mean estimates.
This paper discusses the literature on model-based nonparametric mean estimation methods, which are broadly applied to estimate population parameters under complex sampling designs. In model-based estimation, the relationship between the dependent and independent variables is modelled explicitly, and non-sampled units can be predicted; a related class of techniques posits the structure of the underlying model that forms the basis of parameter determination (Srivastava [16]). Subject to specific assumptions, such models enable the imputation of unobserved values at both micro and macro levels. When a sampling design has been used to gather the data, this design can be incorporated into the estimation process, just as in design-based methods. RSB [6] formulated an LPR-assisted estimator under simple random sampling and exhibited a number of desirable properties within a model-based paradigm. Such kernel-based and nonparametric predictive estimators, however, are susceptible to outliers, a major drawback, since outliers are common in real-world applications such as environmental or meteorological measurements because of sensor errors, data anomalies, or extreme events. As a solution, we suggest a new predictive mean estimator that combines the robustness of resistant regression methods, which prevent outliers from dominating the fit, with the flexibility of kernel regression, which offers local smoothing without assuming a global parametric form. This hybrid methodology improves the accuracy and consistency of central-tendency estimation in the presence of contaminated data and is thus especially appropriate for real-life situations where data quality problems are widespread.
With many predictor variables, LPR can be generalised to a multivariate framework, often known as multiple local polynomial regression (MLPR), in which a local polynomial surface, rather than a curve, is fitted. Although this generalisation allows complex multidimensional relationships to be modelled, it suffers from the curse of dimensionality: data become sparse in higher-dimensional space, which may cause instability and reduce the reliability of the estimates. In addition, an appropriate bandwidth must be chosen for each covariate; poor choices may over-smooth important signals or under-smooth noisy predictors. These problems are largely alleviated when the model contains a single predictor variable. Reduced sparsity in one-dimensional space permits more effective and consistent smoothing and makes bandwidth selection more convenient, decreasing the likelihood of variance inflation and making the regression more robust overall.
Although there is a rich body of literature on both model-based and calibration-type mean estimation techniques, to our knowledge no prior research has constructed calibrated predictive mean estimators under stratified random sampling that apply (i) robust regression to the sampled units and (ii) local polynomial regression to the non-sampled units, while incorporating dual calibration constraints on the auxiliary means and coefficients of variation. Historically, calibration estimators enhance accuracy by using constraints that match sample estimates of auxiliary information to known population characteristics. This paper builds on that research by proposing a new hybrid estimator in which the sampled part of the population mean is estimated with outlier-resistant regression, which remains stable under outlier-induced distortions, and the non-sampled part is estimated with kernel regression, which provides flexible, data-driven smoothing. Robust regression contributes to the accuracy and stability of the overall predictive mean estimator, especially when the data are contaminated. To operationalise this hybrid process, we adopt a model-based methodology in which an LPR estimator is applied to the non-sampled units. Kernel bandwidths must also be selected carefully, since they have a pronounced effect on the quality of the kernel-based estimator. The resulting procedure not only bridges two effective nonparametric tools but also offers a practical and strong alternative to more traditional estimators in stratified sampling designs. The use of the Gaussian kernel function further guarantees stable and smooth estimation behaviour under a variety of data conditions.
The accurate measurement and assessment of natural resources, in particular aquaculture and fisheries, have received considerable attention in recent years owing to the need for sustainable management and information-based decision-making. Estimates of the average values of biological parameters such as fish length, weight, and body shape are significant for stock assessment and for economic and operational planning in the fishery industry. In this paper, the performance of predictive mean estimators is compared on a real-life fish-market dataset with diverse morphological characteristics. A simulated dataset concerning solar ultraviolet (UV) radiation, covering significant environmental variables and UV risk categories, is considered as well. Estimating average environmental UV exposure is necessary for environmental risk assessment and for developing adaptive strategies in aquatic ecosystems. Collectively, the datasets allow us to investigate powerful, model-oriented mean estimation methods that combine biological and environmental data. The study also highlights the value of mean estimation through advanced survey sampling and predictive models in intricate natural environments, which is useful for fishery management, environmental surveillance, and natural resource planning.
The theoretical and methodological foundation for the new class of predictive estimators in StRS is laid out in Section 2, Section 3, Section 4 and Section 5 of this article. Section 2 reviews the existing kernel regression model in the stratified sampling case and briefly describes the LPR estimator as a nonparametric version of the classical linear regression estimator; it also explains how the estimator depends on the smoothing parameters and auxiliary variables, and how sensitive it is to bandwidth selection and contaminated data. Section 3 addresses the incorporation of robust regression techniques into the nonparametric estimation framework, yielding a more resistant kernel regression estimator that suppresses the impact of outliers and heteroscedastic noise, particularly in real-world stratified data; it further introduces two calibrated forms of the adaptive predictive estimators, which make more efficient use of auxiliary information. Section 4 presents a detailed numerical analysis of artificial and natural populations intended to simulate realistic sampling situations with outliers and stratified designs. The results are evaluated in terms of PRE, using three bandwidth selection techniques: fixed, data-driven plug-in (dpik), and biased cross-validation (bcv). Section 5 delivers the final conclusions.

2. Fundamental Estimators

Alshanbari and Anas [17], Alomair et al. [18], and RSB [6] propose a model-driven technique in which the finite population is assumed to be satisfactorily characterised by a predictive model, denoted $\xi$, such that
$$ y_i = m(w_i) + \varpi_i $$
In stratified random sampling (StRS), the predictive model is generalised to each stratum $H_\vartheta$, giving
$$ y_{iH_\vartheta} = m(w_{iH_\vartheta}) + \varpi_{iH_\vartheta} $$
where $\varpi_{iH_\vartheta}$ represents independent, identically distributed random errors with zero mean, $E_{\xi_{H_\vartheta}}(\varpi_{iH_\vartheta}) = 0$, and constant variance $\sigma_{H_\vartheta}^2 = 1$. Further, $m(\cdot)$ is a smooth, unknown function of the auxiliary variable $w$, and $E_{\xi_{H_\vartheta}}$ denotes expectation under the model $\xi_{H_\vartheta}$.
After a sample has been selected, the population mean in stratum $H_\vartheta$, denoted $\bar{Y}_{H_\vartheta}$, can be written as
$$ \bar{Y}_{H_\vartheta} = f_{H_\vartheta}\,\bar{y}_{sH_\vartheta} + (1 - f_{H_\vartheta})\,\bar{y}_{\bar{s}H_\vartheta} \qquad (1) $$
Here, in Equation (1), $\bar{y}_{sH_\vartheta} = n_{H_\vartheta}^{-1} \sum_{i \in s_{H_\vartheta}} y_{iH_\vartheta}$ is the mean of the sampled units $s_{H_\vartheta}$, and $\bar{y}_{\bar{s}H_\vartheta} = (N_{H_\vartheta} - n_{H_\vartheta})^{-1} \sum_{j \in \bar{s}_{H_\vartheta}} y_{jH_\vartheta}$ is the mean of the non-sampled units $\bar{s}_{H_\vartheta}$. The population and sample counts in the stratum are $N_{H_\vartheta}$ and $n_{H_\vartheta}$ respectively, the sampling fraction is $f_{H_\vartheta} = n_{H_\vartheta}/N_{H_\vartheta}$, and $N$ is the total number of elements across the strata.
It should be noted that the first term of Equation (1) can be computed directly from the sample. The task is therefore to estimate the unknown component $\bar{y}_{\bar{s}H_\vartheta}$, which refers to the non-sampled units. If the auxiliary variable $w$ were observed on all units, prediction would be straightforward from the regression model: $m(w_{jH_\vartheta})$ serves as a proxy for the unobservable $y_{jH_\vartheta}$, $j \in \bar{s}_{H_\vartheta}$. In practice, however, the true $m(\cdot)$ is unknown. Nonparametric kernel regression methods are therefore used to obtain predictions $\hat{m}_{jH_\vartheta}$ at each $j \in \bar{s}_{H_\vartheta}$, as illustrated by Chambers et al. [19]. This method has since been adapted and generalised by a number of researchers, including RSB [6], to enhance predictive estimation under more complicated sampling designs.
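The decomposition in Equation (1) is an exact accounting identity; the short sketch below (our own toy data, with illustrative stratum sizes N = 200 and n = 40) confirms it numerically:

```python
import numpy as np

# The stratum mean equals the sampling-fraction-weighted combination
# of the sampled and non-sampled means, exactly.
rng = np.random.default_rng(1)
y = rng.normal(50.0, 10.0, size=200)                       # stratum population, N = 200
sampled = np.zeros(200, dtype=bool)
sampled[rng.choice(200, size=40, replace=False)] = True    # sampled units, n = 40
f = 40 / 200                                               # sampling fraction f = n/N
Ybar = f * y[sampled].mean() + (1 - f) * y[~sampled].mean()
print(np.isclose(Ybar, y.mean()))                          # True
```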

2.1. Rueda and Sanchez-Borrego Estimator

Based on the fundamental contributions made by RSB [6], the traditional model-driven estimator under stratified random sampling (StRS) for the $H_\vartheta$th stratum takes the form
$$ \bar{y}_{BRH_\vartheta} = f_{H_\vartheta}\,\bar{y}_{sH_\vartheta} + (1 - f_{H_\vartheta})\,\frac{1}{N_{H_\vartheta} - n_{H_\vartheta}} \sum_{j \in \bar{s}_{H_\vartheta}} \hat{m}_{jH_\vartheta} \qquad (2) $$
The aggregated estimator $\bar{y}_{BR}$ of the entire population, combining all strata, is expressed as
$$ \bar{y}_{BR} = \sum_{H_\vartheta=1}^{C} P_{H_\vartheta}\, \bar{y}_{BRH_\vartheta}, $$
where $P_{H_\vartheta} = N_{H_\vartheta}/N$ is the weight of an individual stratum and $C$ is the number of strata under consideration.
It is worth noting that $\hat{m}_{jH_\vartheta}$, obtained through LPR, generalises the classical linear regression fit and may be employed across diverse forms of modelling. Following the approach of Ref. [20] and its developed methodology, RSB [6] utilised a kernel-based $p$th-order LPR estimator to compute the research variable. The kernel function takes the form $K_h(u) = h^{-1} K(u/h)$, where $K$ is generally adopted as a Gaussian-shaped kernel and $h$ is the window-width (bandwidth) parameter. For a broader picture of recent developments in kernel-based approaches, readers may turn to Refs. [18,21,22].
Accordingly, the predicted value $\hat{m}_{jH_\vartheta}$ for a non-sampled unit $j \in \bar{s}_{H_\vartheta}$ is calculated using
$$ \hat{m}_{jH_\vartheta} = e_1^{\top} \left( W_{sjH_\vartheta}^{\top} G_{sjH_\vartheta} W_{sjH_\vartheta} \right)^{-1} W_{sjH_\vartheta}^{\top} G_{sjH_\vartheta} Y_{sH_\vartheta} = g_{sjH_\vartheta}^{\top} Y_{sH_\vartheta}, $$
where $e_1$ is the unit vector of length $p+1$ with 1 in the first position, $Y_{sH_\vartheta} = [y_{iH_\vartheta}]_{i \in s_{H_\vartheta}}$ is the vector of observed responses, $G_{sjH_\vartheta} = \mathrm{diag}\{K_h(w_{iH_\vartheta} - w_{jH_\vartheta})\}_{i \in s_{H_\vartheta}}$ is the diagonal weight matrix formed from the kernel function, and $W_{sjH_\vartheta} = [1, (w_{iH_\vartheta} - w_{jH_\vartheta}), \ldots, (w_{iH_\vartheta} - w_{jH_\vartheta})^p]_{i \in s_{H_\vartheta}}$ is the design matrix constructed from local polynomial terms.
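The predictor above reduces to a small weighted least-squares problem at each non-sampled point. The following sketch (our own illustration, assuming a Gaussian kernel and local-linear order p = 1) implements it with NumPy:

```python
import numpy as np

def lpr_predict(w_s, y_s, w_j, h, p=1):
    """Local polynomial prediction at a non-sampled point w_j:
    m_hat(w_j) = e1' (W'GW)^{-1} W'G y, with Gaussian kernel weights."""
    d = w_s - w_j                               # auxiliary values centred at w_j
    W = np.vander(d, N=p + 1, increasing=True)  # design matrix [1, d, ..., d^p]
    g = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    WtG = W.T * g                               # W'G without forming diag(g)
    beta = np.linalg.solve(WtG @ W, WtG @ y_s)
    return beta[0]                              # e1' beta: the local intercept

# Sanity check: a local-linear fit (p = 1) reproduces an exactly linear
# relationship regardless of the bandwidth.
w_s = np.linspace(0.0, 1.0, 50)
y_s = 3.0 + 2.0 * w_s
m_hat = lpr_predict(w_s, y_s, w_j=0.5, h=0.1)
print(round(m_hat, 6))  # 4.0
```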
In the findings of Alshanbari and Anas [17], it is observed that under stratified sampling, the base estimator y ¯ B R H ϑ can be improved upon using calibration techniques.

2.2. Alshanbari and Anas Estimator

The inclusion of auxiliary data in estimation processes is widely accepted as a way to improve the accuracy of mean estimators. The common assumption, as in Shahzad et al. [23] and Zaman [24], is that a meaningful association exists between the principal study variable, Y, and a corresponding auxiliary variable, W. A solid example is the positive correlation between education and income, where education is generally considered a causal variable affecting income; several socioeconomic studies have confirmed this association (Leesch and Skopek [25]). Likewise, in the health sciences, considerable empirical evidence shows the positive impact of physical activity on cardiovascular health: according to Kaiser and Oswald [26], more active people tend, on average, to have healthier hearts. These examples demonstrate that, when properly used, auxiliary variables help refine mean estimation and increase the trustworthiness of survey outcomes.
Calibration estimation is generally recognised as an effective method of adjusting survey weights by minimising an appropriate distance function while incorporating auxiliary information. Many scholars have noted the importance of carrying out calibration within strata in order to maximise the efficiency of population parameter estimates. Constructing calibration weights involves two basic choices: selecting a suitable distance measure and specifying the calibration constraints. When well matched to the auxiliary variables, these constraints can greatly enhance the accuracy of estimates of the main study variable. This method was expanded by Refs. [27,28], which included several calibration conditions in the general survey sampling paradigm, a concept also examined by Refs. [29,30,31,32]. Despite these developments, few efforts have focused on calibrated mean estimators in the context of stratified random sampling (StRS) within the model-based framework. The Alshanbari and Anas [17] study is one of them; it presents a new, calibration-supported, model-driven mean estimator for StRS that exploits the flexibility of calibrated kernel-oriented nonparametric regression techniques.
In the StRS scheme, let $(N, n)$ denote the population size and total sample size, $(\bar{w}_{H_\vartheta}, \bar{W}_{H_\vartheta})$ the sample and population means, and $(\hat{C}_{wH_\vartheta}, C_{wH_\vartheta})$ the sample and population coefficients of variation (CVs) of the auxiliary variable $W$ in the $H_\vartheta$th stratum. Likewise, $(P_{H_\vartheta}, \Pi_{H_\vartheta})$ denote the usual stratified weights and their calibrated versions. A random sample of $n_{H_\vartheta}$ units is selected from a stratum containing $N_{H_\vartheta}$ population units, $H_\vartheta = 1, 2, \ldots, C$. Under these conditions, the calibrated estimator introduced by Alshanbari and Anas [17] is given by
$$ \bar{y}_{PM} = \sum_{H_\vartheta=1}^{C} \Pi_{H_\vartheta}\, \bar{y}_{BRH_\vartheta}, \qquad (3) $$
subject to the constraints
$$ \sum_{H_\vartheta=1}^{C} \Pi_{H_\vartheta} \bar{w}_{H_\vartheta} = \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} \bar{W}_{H_\vartheta} \qquad (4) $$
$$ \sum_{H_\vartheta=1}^{C} \Pi_{H_\vartheta} \hat{C}_{wH_\vartheta} = \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} C_{wH_\vartheta} \qquad (5) $$
$$ \sum_{H_\vartheta=1}^{C} \Pi_{H_\vartheta} = \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} \qquad (6) $$
The motivation for incorporating a loss function within the calibration framework, as discussed in Ref. [27], is to improve the accuracy of parameter estimation by adjusting the weights of the sampled units. The optimisation minimises a chosen distance measure, usually between the original design weights and the calibrated weights, subject to the calibration constraints. To operationalise this, we build a Lagrange-type function (LF) by attaching the constraint multipliers $(\eta_{1(m)}, \eta_{2(m)}, \eta_{3(m)})$ to the chi-square-based loss function $\sum_{H_\vartheta=1}^{C} (\Pi_{H_\vartheta} - P_{H_\vartheta})^2 / (\hat{Q}_{H_\vartheta} P_{H_\vartheta})$, yielding
$$ A_{(m2)} = \sum_{H_\vartheta=1}^{C} \frac{(\Pi_{H_\vartheta} - P_{H_\vartheta})^2}{\hat{Q}_{H_\vartheta} P_{H_\vartheta}} - 2\eta_{1(m)} \left( \sum_{H_\vartheta=1}^{C} \Pi_{H_\vartheta} \bar{w}_{H_\vartheta} - \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} \bar{W}_{H_\vartheta} \right) - 2\eta_{2(m)} \left( \sum_{H_\vartheta=1}^{C} \Pi_{H_\vartheta} \hat{C}_{wH_\vartheta} - \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} C_{wH_\vartheta} \right) - 2\eta_{3(m)} \left( \sum_{H_\vartheta=1}^{C} \Pi_{H_\vartheta} - \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} \right). \qquad (7) $$
Computing $\partial A_{(m2)} / \partial \Pi_{H_\vartheta}$ and imposing the zero-gradient condition gives
$$ \Pi_{H_\vartheta} = P_{H_\vartheta} + \hat{Q}_{H_\vartheta} P_{H_\vartheta} \left( \eta_{1(m)} \bar{w}_{H_\vartheta} + \eta_{2(m)} \hat{C}_{wH_\vartheta} + \eta_{3(m)} \right). \qquad (8) $$
Calibrated weights $\Pi_{H_\vartheta}$ have a number of desirable characteristics: they can reduce bias, minimise variance, and remain coherent with known auxiliary information. The main goal in constructing such weights is to align suitably weighted averages of the auxiliary information in the sample with their established population totals, thereby increasing the quality of survey estimates. It should be noted, however, that calibrated weights are not guaranteed to be strictly positive. Negative weights may occur, especially when large differences arise between the sample and population distributions, or when certain types of distance functions are used in the calibration. The chi-square distance is quite effective in alleviating negative weights, because it penalises large deviations relative to the starting weights, encouraging adjustments that do not move values too far. Consequently, the chi-square distance yields more consistent calibration, providing better balance and minimising the likelihood of extreme or negative weights.
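Substituting the weight form into the three constraints gives a small linear system for the multipliers, which can be solved directly. The sketch below (with hypothetical stratum summaries, not the paper's data) computes the calibrated weights:

```python
import numpy as np

def calibrated_weights(P, wbar, Wbar, Chat, Cw, Q=None):
    """Chi-square-distance calibration weights under the mean and CV
    constraints: Pi = P + Q*P*(eta1*wbar + eta2*Chat + eta3)."""
    Q = np.ones_like(P) if Q is None else Q
    q = Q * P
    # 3x3 system G1 * eta = F1 obtained by substituting the weight form
    # into the three calibration constraints.
    G1 = np.array([
        [np.sum(q * wbar ** 2), np.sum(q * wbar * Chat), np.sum(q * wbar)],
        [np.sum(q * Chat * wbar), np.sum(q * Chat ** 2), np.sum(q * Chat)],
        [np.sum(q * wbar), np.sum(q * Chat), np.sum(q)],
    ])
    F1 = np.array([np.sum(P * (Wbar - wbar)), np.sum(P * (Cw - Chat)), 0.0])
    eta = np.linalg.solve(G1, F1)
    return P + q * (eta[0] * wbar + eta[1] * Chat + eta[2])

# Hypothetical stratum summaries, for illustration only.
P = np.array([0.3, 0.3, 0.4])          # design weights
wbar = np.array([1.0, 2.0, 3.0])       # stratum sample means of W
Wbar = np.array([1.1, 1.9, 3.2])       # known population means of W
Chat = np.array([0.20, 0.35, 0.15])    # stratum sample CVs of W
Cw = np.array([0.22, 0.33, 0.16])      # known population CVs of W
Pi = calibrated_weights(P, wbar, Wbar, Chat, Cw)
```

By construction, the resulting weights reproduce all three constraints: the calibrated totals of the auxiliary means and CVs equal their population counterparts, and the weights sum to the original total.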
By substituting (8) in (4), (5), and (6), respectively, we get
$$ G_{1(3\times 3)}\, \eta_{1(3\times 1)} = F_{1(3\times 1)}, \qquad (9) $$
where
$$ \eta_{1(3\times 1)} = \begin{pmatrix} \eta_{1(m)} \\ \eta_{2(m)} \\ \eta_{3(m)} \end{pmatrix}, \qquad F_{1(3\times 1)} = \begin{pmatrix} \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} (\bar{W}_{H_\vartheta} - \bar{w}_{H_\vartheta}) \\ \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} (C_{wH_\vartheta} - \hat{C}_{wH_\vartheta}) \\ 0 \end{pmatrix}, $$
$$ G_{1(3\times 3)} = \begin{pmatrix} \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta}^2 & \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \hat{C}_{wH_\vartheta} & \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \\ \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{w}_{H_\vartheta} & \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta}^2 & \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \\ \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} & \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} & \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \end{pmatrix}. $$
By solving Equation (9), we get
$$ \eta_{1(m)} = \frac{D_{71(m)}}{H_1}, \qquad \eta_{2(m)} = \frac{D_{72(m)}}{H_1}, \qquad \eta_{3(m)} = \frac{D_{73(m)}}{H_1}, $$
where $D_{71(m)}$, $D_{72(m)}$, $D_{73(m)}$, and $H_1$ are provided in Appendix A.
Substituting these values in (8) and (3), we get
$$ \bar{y}_{PM} = \bar{y}_{st(m)} + \eta_{1(m)} \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \bar{y}_{BRH_\vartheta} + \eta_{2(m)} \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{y}_{BRH_\vartheta} + \eta_{3(m)} \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{y}_{BRH_\vartheta} $$
$$ = \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} \bar{y}_{BRH_\vartheta} + R_a \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} (\bar{W}_{H_\vartheta} - \bar{w}_{H_\vartheta}) + R_b \sum_{H_\vartheta=1}^{C} P_{H_\vartheta} (C_{wH_\vartheta} - \hat{C}_{wH_\vartheta}), $$
where
$$ R_a = \frac{D_{74(m)}}{H_1}, \qquad R_b = \frac{D_{75(m)}}{H_1}, $$
$$ D_{74(m)} = \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta}^2 \right) - \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \right)^2 - \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{w}_{H_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \right) + \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \right) + \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{w}_{H_\vartheta} \right) - \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta}^2 \right), $$
$$ D_{75(m)} = \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \right) - \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{w}_{H_\vartheta} \right) + \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \hat{C}_{wH_\vartheta} \right) - \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta}^2 \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \right) + \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta}^2 \right) - \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \hat{C}_{wH_\vartheta} \bar{y}_{BRH_\vartheta} \right) \left( \sum_{H_\vartheta=1}^{C} \hat{Q}_{H_\vartheta} P_{H_\vartheta} \bar{w}_{H_\vartheta} \right)^2. $$
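Because closed forms of this length are easy to mistype, a numerical cross-check is useful. The sketch below (toy numbers of our own, not the paper's data) verifies that $D_{74(m)}/H_1$ matches the coefficient obtained by solving $G_1 \eta_1 = F_1$ directly:

```python
import numpy as np

# Toy stratum summaries (illustrative only).
q = np.array([0.3, 0.3, 0.4])        # Q_hat * P per stratum
wb = np.array([1.0, 2.0, 3.0])       # stratum sample means of W
ch = np.array([0.20, 0.35, 0.15])    # stratum sample CVs of W
yb = np.array([10.0, 12.0, 15.0])    # ybar_BR per stratum

A, B, Cp = q.sum(), (q * wb).sum(), (q * ch).sum()
D, E, F = (q * wb ** 2).sum(), (q * wb * ch).sum(), (q * ch ** 2).sum()
By, Cy, Ay = (q * wb * yb).sum(), (q * ch * yb).sum(), (q * yb).sum()

G1 = np.array([[D, E, B], [E, F, Cp], [B, Cp, A]])
H1 = np.linalg.det(G1)
D74 = By * A * F - By * Cp ** 2 - Cy * E * A + Cy * B * Cp + Ay * E * Cp - Ay * B * F

# R_a is the sensitivity of ybar_PM to the first constraint gap,
# i.e. [By, Cy, Ay] . G1^{-1} e1:
eta = np.linalg.solve(G1, np.array([1.0, 0.0, 0.0]))
Ra_direct = By * eta[0] + Cy * eta[1] + Ay * eta[2]
print(np.isclose(D74 / H1, Ra_direct))  # True
```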
It is worth noticing that the adapted estimator $\bar{y}_{PM}$ may be generalised further by choosing different values of $\hat{Q}_{H_\vartheta}$. For simplicity, and to keep the exposition focused, we assume henceforth that $\hat{Q}_{H_\vartheta} = 1$. The estimator formulation, however, can be adjusted by including various known values of the population characteristic $\hat{Q}_{H_\vartheta}$, which permits a range of functional forms. For more on such generalisations, readers may consult Refs. [33,34,35].

3. Adapted and Proposed Families of Estimators

3.1. Adapted Family

In this section, we develop a novel class of predictive robust estimators based on the predictive-robust-regression approach. Building on previous work by [36,37] and extending concepts from [6,17] discussed in Section 2, our method estimates the population mean of a target variable Y with the aid of auxiliary information W under the StRS design. Although the least squares (LS) method is generally considered the standard for parameter estimation, a number of robust alternatives have been created to accommodate outliers and non-normality. Least absolute deviations (LAD), first suggested by Roger Joseph Boscovich in 1757, minimises the sum of absolute residuals rather than squared residuals. Continuing the development of resistant methods, Huber [38] proposed the Huber-M estimator, replacing the squared-error loss of LS with a symmetric function $\rho$. Huber [39] extended this concept to regression modelling, which formed the basis of later robust estimators. Several researchers extended Huber's methodology: Hampel [40] introduced the Hampel-M estimator, Tukey [41] the Tukey-M estimator, and Yohai [42] the Huber-MM estimator, each improving the robustness properties for different data structures. Another robust estimator, least trimmed squares (LTS), proposed by Rousseeuw and Yohai [43], gains robustness by trimming out extreme residuals that could reflect outliers. Rousseeuw and Leroy [44] described the least median of squares (LMS) estimator, which minimises the median (rather than the mean) of the squared residuals and is therefore highly resistant to the influence of outliers.
This article constructs a unified family of predictive mean estimators that maintains accurate and robust estimation for survey data with irregularities or outliers by combining effective robust regression frameworks: Hampel-M, LAD, LMS, Tukey-M, LTS, Huber-M, and Huber-MM. Based on the structure outlined in Equation (2), we enhance the classical model-based mean estimator by replacing its sampled part with the robust regression-based mean estimators defined by Zaman and Bulut [36,37]. Hence, the adapted outlier-resistant family of predictive estimators for one stratum is given by
$$ \bar{y}_{RPH_\vartheta(i)} = f_{H_\vartheta}\, \bar{y}_{rH_\vartheta(i)} + (1 - f_{H_\vartheta})\, \frac{1}{N_{H_\vartheta} - n_{H_\vartheta}} \sum_{j \in \bar{s}_{H_\vartheta}} \hat{m}_{jH_\vartheta}; \quad i = 1, 2, \ldots, 7, $$
where
$$ \bar{y}_{rH_\vartheta(i)} = \begin{cases} \bar{y}_{sH_\vartheta} + \hat{\Xi}_{sH_\vartheta(lad)} \left( \bar{W}_{sH_\vartheta} - \bar{w}_{sH_\vartheta} \right) & \text{for } i = 1 \\ \bar{y}_{sH_\vartheta} + \hat{\Xi}_{sH_\vartheta(lms)} \left( \bar{W}_{sH_\vartheta} - \bar{w}_{sH_\vartheta} \right) & \text{for } i = 2 \\ \bar{y}_{sH_\vartheta} + \hat{\Xi}_{sH_\vartheta(lts)} \left( \bar{W}_{sH_\vartheta} - \bar{w}_{sH_\vartheta} \right) & \text{for } i = 3 \\ \bar{y}_{sH_\vartheta} + \hat{\Xi}_{sH_\vartheta(huber)} \left( \bar{W}_{sH_\vartheta} - \bar{w}_{sH_\vartheta} \right) & \text{for } i = 4 \\ \bar{y}_{sH_\vartheta} + \hat{\Xi}_{sH_\vartheta(hampel)} \left( \bar{W}_{sH_\vartheta} - \bar{w}_{sH_\vartheta} \right) & \text{for } i = 5 \\ \bar{y}_{sH_\vartheta} + \hat{\Xi}_{sH_\vartheta(tukey)} \left( \bar{W}_{sH_\vartheta} - \bar{w}_{sH_\vartheta} \right) & \text{for } i = 6 \\ \bar{y}_{sH_\vartheta} + \hat{\Xi}_{sH_\vartheta(mm)} \left( \bar{W}_{sH_\vartheta} - \bar{w}_{sH_\vartheta} \right) & \text{for } i = 7 \end{cases} $$
For all the strata, y ¯ R P H ϑ ( i ) can be written as
$$ \bar{y}_{RP(i)} = \sum_{H_\vartheta=1}^{C} P_{H_\vartheta}\, \bar{y}_{RPH_\vartheta(i)}; \quad i = 1, 2, \ldots, 7. $$
In $\bar{y}_{RP(i)}$, robust regression is applied to the sampled units to obtain the robust stratum means, thereby mitigating the effect of outliers, while the non-sampled units are predicted by kernel regression with the aid of the auxiliary variable W. The estimator is well suited to heterogeneous or contaminated populations because it combines robustness, through resistant estimation of the sampled mean, with flexibility, through kernel-based nonparametric prediction.
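For intuition, the Huber member of this family (i = 4) can be sketched end to end: a robust slope $\hat{\Xi}$ is fitted on the sampled units and plugged into the regression-type component. The code below is our own minimal iteratively-reweighted-least-squares (IRLS) sketch with an assumed population mean $\bar{W} = 5.0$, not the authors' implementation:

```python
import numpy as np

def huber_slope(w, y, k=1.345, n_iter=100):
    """Huber-M simple-regression slope via iteratively reweighted least
    squares with a MAD scale estimate (a sketch, not a reference version)."""
    X = np.column_stack([np.ones_like(w), w])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # LS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
        if s == 0.0:
            break
        u = np.abs(r) / s
        wts = np.where(u <= k, 1.0, k / u)                # Huber weights
        XtW = X.T * wts
        beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[1]

# Toy contaminated stratum: linear trend plus a few gross outliers.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 10.0, 80)
y = 3.0 + 2.0 * w + rng.normal(0.0, 0.5, 80)
y[:4] += 60.0                                             # 5% outlier contamination
Xi_hat = huber_slope(w, y)                                # robust slope, close to 2

# Stratum-level robust component (Huber case, i = 4), with an assumed
# population mean Wbar = 5.0 of the auxiliary variable:
ybar_r = y.mean() + Xi_hat * (5.0 - w.mean())
```

Despite the contamination, the Huber weights bound the influence of the four gross outliers, so the fitted slope stays close to the true value of 2 where ordinary least squares would be pulled away.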

3.2. Proposed Family

The model-based framework builds upon superpopulation models, represented by the symbol $\xi$, under the assumption that the finite population of interest is a realisation of random variables generated according to $\xi$. This modelling approach enables informed prediction of the unobserved part of the population in order to estimate finite population parameters such as the mean of the study variable, Y. The main benefits of model-based inference are as follows:
  • The model-driven paradigm, also known as the prediction paradigm, provides a logical and theoretically grounded basis for statistical inference in finite population models, with classical estimators emerging as natural optimal predictors under appropriate model conditions.
  • This approach is consistent with the current inferential paradigms of applied disciplines such as econometrics and biostatistics.
  • Under regularity conditions and for sufficiently large samples, model-based estimators can give results similar to those obtained using design-based methods, making them attractive to practitioners.
  • Model-based estimators tend to show lower variance than their design-based counterparts, particularly when the model accurately captures the underlying data-generating process.
The two-fold calibration scheme is constructed to reflect both the location and the proportional variation of the supplementary information W. The use of the coefficient of variation (CV) as a constraint is especially useful when the supplementary variable exhibits variance instability or high inter-stratum variability. This allows the calibrated weights to adjust for both the scale of variability and the central tendency (mean), thereby enhancing estimator accuracy. Calibration based on the mean and CV may considerably decrease the mean squared error (MSE), particularly when the relationship between the auxiliary and study variables departs from linearity, as indicated by Refs. [31,33]. In this regard, CV-based calibration can be viewed as a scale-adjustment mechanism that normalises the variability across strata, effectively stabilising the estimation process and enhancing its overall efficiency.
In this research, we construct a calibrated model-based predictive family of mean estimators under the StRS framework [28,30,31,33], referred to as $\bar{y}_{PP}^{(i)}$, that combines robust regression for the sampled units with kernel regression for the non-sampled units. The family assumes the following general form:
$$\bar{y}_{PP}^{(i)} = \sum_{H_{\vartheta}=1}^{C} \Pi_{H_{\vartheta}}\,\bar{y}_{RPH_{\vartheta}}^{(i)}; \quad \text{for } i = 1, 2, \ldots, 7,$$
where $\Pi_{H_{\vartheta}}$ is the calibration weight for stratum $H_{\vartheta}$ and $\bar{y}_{RPH_{\vartheta}}^{(i)}$ is the predictive mean estimator family specified in Equation (12). $\Pi_{H_{\vartheta}}$ is derived by minimising a chi-square distance from the actual design weights $P_{H_{\vartheta}}$ subject to the constraints given in Equations (4)–(6). These constraints enter the optimisation through the Lagrange multipliers $(\eta_{1(m)}, \eta_{2(m)}, \eta_{3(m)})$. The final form of $\Pi_{H_{\vartheta}}$ is obtained as a function of the Lagrange multipliers and the auxiliary information, as in Equation (8), which provides calibrated weights that satisfy the constraints and yield more efficient estimation. Note that the full derivation of the Lagrange multipliers $(\eta_{1(m)}, \eta_{2(m)}, \eta_{3(m)})$ and the calibrated weights is deliberately omitted to avoid redundancy; interested readers may follow the corresponding steps in the previous subsection. The final form of $\bar{y}_{PP}^{(i)}$ is obtained by substituting the derived Lagrange multipliers and the calibrated weight $\Pi_{H_{\vartheta}}$:
$$\bar{y}_{PP}^{(i)} = \bar{y}_{RP}^{(i)} + \eta_{1(m)}\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\,\bar{y}_{RPH_{\vartheta}}^{(i)} + \eta_{2(m)}\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\,\bar{y}_{RPH_{\vartheta}}^{(i)} + \eta_{3(m)}\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\,\bar{y}_{RPH_{\vartheta}}^{(i)},$$
$$= \sum_{H_{\vartheta}=1}^{C} P_{H_{\vartheta}}\,\bar{y}_{RPH_{\vartheta}}^{(i)} + R_{c}^{(i)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right) + R_{d}^{(i)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right),$$
where
$$R_{c}^{(i)} = \frac{D_{76}\left(m^{(i)}\right)}{H_{1}}, \qquad R_{d}^{(i)} = \frac{D_{77}\left(m^{(i)}\right)}{H_{1}},$$
$$\begin{aligned} D_{76}\left(m^{(i)}\right) ={}& \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}^{2}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big)^{2} \\ &- \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big) + \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big) \\ &+ \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}^{2}\Big), \end{aligned}$$
$$\begin{aligned} D_{77}\left(m^{(i)}\right) ={}& \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big) \\ &+ \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}^{2}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big) \\ &+ \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}^{2}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{y}_{RPH_{\vartheta}}^{(i)}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)^{2}. \end{aligned}$$
All the family members of y ¯ P P ( i ) based on the generalised final form of Equation (17) are
$$\bar{y}_{PP}^{(i)} = \begin{cases} \sum_{H_{\vartheta}=1}^{C} P_{H_{\vartheta}}\left\{\bar{y}_{sH_{\vartheta}} + \hat{\Xi}_{sH_{\vartheta}}^{(lad)}\left(\bar{W}_{sH_{\vartheta}} - \bar{w}_{sH_{\vartheta}}\right)\right\} + R_{c}^{(1)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right) + R_{d}^{(1)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right); & \text{for } i = 1 \\ \sum_{H_{\vartheta}=1}^{C} P_{H_{\vartheta}}\left\{\bar{y}_{sH_{\vartheta}} + \hat{\Xi}_{sH_{\vartheta}}^{(lms)}\left(\bar{W}_{sH_{\vartheta}} - \bar{w}_{sH_{\vartheta}}\right)\right\} + R_{c}^{(2)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right) + R_{d}^{(2)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right); & \text{for } i = 2 \\ \sum_{H_{\vartheta}=1}^{C} P_{H_{\vartheta}}\left\{\bar{y}_{sH_{\vartheta}} + \hat{\Xi}_{sH_{\vartheta}}^{(lts)}\left(\bar{W}_{sH_{\vartheta}} - \bar{w}_{sH_{\vartheta}}\right)\right\} + R_{c}^{(3)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right) + R_{d}^{(3)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right); & \text{for } i = 3 \\ \sum_{H_{\vartheta}=1}^{C} P_{H_{\vartheta}}\left\{\bar{y}_{sH_{\vartheta}} + \hat{\Xi}_{sH_{\vartheta}}^{(huber)}\left(\bar{W}_{sH_{\vartheta}} - \bar{w}_{sH_{\vartheta}}\right)\right\} + R_{c}^{(4)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right) + R_{d}^{(4)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right); & \text{for } i = 4 \\ \sum_{H_{\vartheta}=1}^{C} P_{H_{\vartheta}}\left\{\bar{y}_{sH_{\vartheta}} + \hat{\Xi}_{sH_{\vartheta}}^{(hampel)}\left(\bar{W}_{sH_{\vartheta}} - \bar{w}_{sH_{\vartheta}}\right)\right\} + R_{c}^{(5)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right) + R_{d}^{(5)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right); & \text{for } i = 5 \\ \sum_{H_{\vartheta}=1}^{C} P_{H_{\vartheta}}\left\{\bar{y}_{sH_{\vartheta}} + \hat{\Xi}_{sH_{\vartheta}}^{(tukey)}\left(\bar{W}_{sH_{\vartheta}} - \bar{w}_{sH_{\vartheta}}\right)\right\} + R_{c}^{(6)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right) + R_{d}^{(6)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right); & \text{for } i = 6 \\ \sum_{H_{\vartheta}=1}^{C} P_{H_{\vartheta}}\left\{\bar{y}_{sH_{\vartheta}} + \hat{\Xi}_{sH_{\vartheta}}^{(mm)}\left(\bar{W}_{sH_{\vartheta}} - \bar{w}_{sH_{\vartheta}}\right)\right\} + R_{c}^{(7)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right) + R_{d}^{(7)}\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right); & \text{for } i = 7 \end{cases}$$
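Numerically, the calibrated weights need not be assembled from closed-form determinant ratios: a chi-square distance minimisation under linear constraints reduces to a small linear system for the Lagrange multipliers. The sketch below takes $\hat{Q}_{H_\vartheta} = 1$ and assumes the three constraints fix the totals of the design weights, the auxiliary means, and the auxiliary CVs (our reading of the constraints behind Equations (4)–(6)); the inputs are hypothetical.

```python
import numpy as np

def calibrated_weights(P, wbar, Cw_hat, Wbar, Cw, Q=None):
    """Chi-square calibration of stratum weights P subject to three constraints:
    sum(Pi) = sum(P), sum(Pi * wbar) = sum(P * Wbar), sum(Pi * Cw_hat) = sum(P * Cw).

    Minimises sum((Pi - P)^2 / (Q * P)); the stationarity condition gives
    Pi = P * (1 + Q * Z @ lam), with lam solving a 3x3 linear system.
    """
    P, wbar, Cw_hat = map(np.asarray, (P, wbar, Cw_hat))
    Q = np.ones_like(P) if Q is None else np.asarray(Q)
    Z = np.column_stack([np.ones_like(P), wbar, Cw_hat])      # constraint variables
    targets = np.array([P.sum(), (P * Wbar).sum(), (P * Cw).sum()])
    A = Z.T @ (Q[:, None] * P[:, None] * Z)                   # 3x3 system matrix
    lam = np.linalg.solve(A, targets - Z.T @ P)               # Lagrange multipliers
    return P * (1.0 + Q * (Z @ lam))
```

Solving the linear system directly is algebraically equivalent to evaluating the $D_{7k}/H_1$ ratios, and avoids transcribing the long determinant expansions.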

3.3. Theoretical Framework and Practical Characteristics of y ¯ P P ( i )

The proposed calibrated predictive family $\bar{y}_{PP}^{(i)}$ in Equation (18) combines robust regression for the sampled units and nonparametric kernel prediction for the non-sampled units with calibration constraints based on the mean and coefficient of variation (CV) of the auxiliary variable. To make the soundness of this construction rigorous, we explicitly state the regularity conditions that ensure (i) consistency under stratified sampling, (ii) convergence of the kernel predictions $\hat{m}(w)$ to $m(w)$, (iii) existence of a minimiser of the calibration objective, and (iv) control of bias in the hybrid robust-kernel structure.

3.3.1. Regularity Conditions

  • (A1) Stratified sampling regularity: Under StRSWOR, for each stratum $H_{\vartheta}$, $N_{H_{\vartheta}} \to \infty$ and $n_{H_{\vartheta}} \to \infty$, and the sampling fractions satisfy $0 < \underline{f} \le f_{H_{\vartheta}} = n_{H_{\vartheta}}/N_{H_{\vartheta}} \le \bar{f} < 1$.
  • (A2) Superpopulation model and moments: Within each stratum, the predictive model satisfies $E_{\xi H_{\vartheta}}\left(\varpi_{iH_{\vartheta}}\right) = 0$ and $E_{\xi H_{\vartheta}}\left(\varpi_{iH_{\vartheta}}^{2}\right) \le \sigma^{2} < \infty$.
  • (A3) Bounded auxiliary variable and smooth mean function: The auxiliary variable w is bounded (i.e., $a \le w \le b$) and the regression function $m(w)$ is $(p+1)$-times continuously differentiable on $[a, b]$ with bounded derivatives.
  • (A4) Kernel and bandwidth: The kernel $K(\cdot)$ is bounded, symmetric, integrates to one, and has a finite second moment. The bandwidth satisfies $h \to 0$ and $n_{H_{\vartheta}} h \to \infty$ for all strata.
  • (A5) Robust regression stability: For each robust method $i = 1, \ldots, 7$, the slope estimator $\hat{\Xi}_{sH_{\vartheta}}^{(i)}$ used in (13) is Fisher-consistent under (A2) and satisfies $\hat{\Xi}_{sH_{\vartheta}}^{(i)} - \Xi_{H_{\vartheta}}^{(i)} = O_{p}\left(n_{H_{\vartheta}}^{-1/2}\right)$.
  • (A6) Calibration feasibility: The chi-square distance calibration objective is strictly convex, and the constraint system (4)–(6) has full rank with $H_{1} \ne 0$ in (17), ensuring existence and uniqueness of the minimiser and the calibrated weights $\Pi_{H_{\vartheta}}$.

3.3.2. Asymptotic Implications

Under (A2)–(A4), standard LPR theory implies that the kernel predictor satisfies
$$\sup_{w \in \mathcal{W}}\left|\hat{m}(w) - m(w)\right| \xrightarrow{\;p\;} 0,$$
thus guaranteeing uniform convergence of the kernel predictions. Combining this result with (A1) and (A5) yields consistency of the adapted predictive family $\bar{y}_{RP}^{(i)}$ under stratified sampling, i.e., $\bar{y}_{RP}^{(i)} - \bar{Y} \xrightarrow{\;p\;} 0$. Moreover, under (A6), $\bar{y}_{PP}^{(i)}$ is well-defined because the calibration objective admits a unique minimiser. Finally, the hybrid bias of the proposed estimator is controlled: the robust regression component is asymptotically unbiased under Fisher consistency, while the smoothing bias follows the standard order $O\left(h^{p+1}\right)$, which vanishes as $h \to 0$.
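The uniform convergence above can be illustrated numerically: for a smooth mean function on [0, 1] and a bandwidth proportional to $n^{-1/5}$ (so that $h \to 0$ and $nh \to \infty$, as in (A4)), the sup-norm error of a Nadaraya-Watson predictor shrinks as n grows. The mean function, noise level, constant 0.25, and grid below are illustrative choices, not quantities from the paper.

```python
import numpy as np

def nw(w_eval, w, y, h):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    K = np.exp(-0.5 * ((w_eval[:, None] - w[None, :]) / h) ** 2)
    return (K @ y) / K.sum(axis=1)

def sup_error(n, rng):
    """Sup-norm error of the kernel predictor over an interior grid."""
    w = rng.uniform(0, 1, n)
    m = lambda t: np.sin(2 * np.pi * t)        # smooth mean function on [0, 1]
    y = m(w) + rng.normal(0, 0.2, n)
    h = 0.25 * n ** (-1 / 5)                   # h -> 0 while n*h -> infinity
    grid = np.linspace(0.05, 0.95, 50)         # interior, away from boundary bias
    return np.max(np.abs(nw(grid, w, y, h) - m(grid)))
```

Running `sup_error` for increasing n shows the error decreasing, consistent with the stated asymptotics.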

3.3.3. Practical Characteristics

The proposed estimator possesses several noteworthy practical attributes:
  • Linearity: The estimator has a linear form, combining observed and predicted components of the study variable into a composite mean. The sampling fraction $f_{H_{\vartheta}}$ acts as the weighting parameter, varying with the contribution of the non-sampled units so that the sampled and predicted parts are represented in balance.
  • Data-intensiveness: The estimator is highly data-driven, and its use assumes that the supplementary variable w is observed for the entire population. Its computation requires extensive smoothing and prediction algorithms, especially when nonparametric models such as kernel regression, which demand substantial numerical computing, are employed.
  • Model-oriented inference: In contrast to traditional design-based estimators, which make explicit use of inclusion probabilities, this estimator is model-based. It replaces the design weights with calibrated alternatives $\Pi_{H_{\vartheta}}$ derived from the distance measure, the robust regression coefficients, and the kernel predictions. This approach increases estimation accuracy by prioritising the realised sample structure and is consistent with the conditionality principle: it draws inferences conditional on the observed sample rather than on an ensemble of possible samples.

4. Numerical Study

4.1. Bandwidth Selection

It should be emphasised that the estimators $(\bar{y}_{PM}, \bar{y}_{PP}^{(i)})$, as well as the other estimators considered here, depend essentially on the bandwidth parameter h, which controls the trade-off between bias and variance in local polynomial regression. Since the choice of h also influences estimator performance, h was selected using several strategies to guarantee consistency and robustness of the results. Specifically, bandwidths were computed using (a) a fixed bandwidth rule, (b) the direct plug-in method (dpik) proposed by Ref. [45], and (c) the biased cross-validation technique (bcv) suggested by Ref. [46]. These selection techniques are known to yield asymptotically optimal bandwidths for large samples. Evaluating the estimators $(\bar{y}_{BR}, \bar{y}_{PM}, \bar{y}_{RP}^{(i)}, \bar{y}_{PP}^{(i)})$ under each of these bandwidth choices allowed us to test simultaneously their sensitivity and their general behaviour across smoothing regimes.
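The dpik and bcv selectors are implemented in standard statistical software (e.g., the routines of Refs. [45,46]); as a rough, self-contained stand-in of the same $n^{-1/5}$ order, the classical Silverman rule-of-thumb can be sketched as follows. This is not one of the selectors used in the paper, only an illustration of a data-driven bandwidth.

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb plug-in bandwidth: 0.9 * min(sd, IQR/1.349) * n^(-1/5).
    A simple stand-in for automatic selectors such as dpik or bcv."""
    x = np.asarray(x)
    n = x.size
    q75, q25 = np.percentile(x, [75, 25])
    sigma = min(x.std(ddof=1), (q75 - q25) / 1.349)   # robust spread estimate
    return 0.9 * sigma * n ** (-1 / 5)
```

Like dpik and bcv, this rule shrinks the bandwidth at the rate $n^{-1/5}$, the optimal order for second-order kernels.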

4.2. Generated Populations

The simulation framework in this section focuses on determining the efficiency and effectiveness of the estimators y ¯ R P ( i ) and y ¯ P P ( i ) relative to y ¯ B R and y ¯ P M . For this purpose, two simulated datasets were generated.

4.2.1. Population-1

To test the behaviour of nonparametric calibration estimators within an StRS framework, a fully synthetic population of two strata with different distributional properties was created. In the first stratum (Stratum 1), 133 values of Y were generated from a positively skewed Gamma distribution with shape 2 and scale 300. The second stratum (Stratum 2) comprised 133 observations of Y drawn from a normal distribution with mean 800 and standard deviation 100, providing a more regular structure. To assess the strength of the suggested estimators in the presence of data anomalies, a small fraction of the values in each stratum was deliberately inflated to generate artificial outliers. These outliers mimic unexpected irregularities that can arise in data-generating processes. To assess calibrated estimator performance, the auxiliary variable W was randomly drawn from a uniform distribution on (0,1) in both strata.

4.2.2. Population-2

The second artificial population was created to better understand estimator behaviour under different distributional shapes and contamination patterns in the StRS framework. For Stratum 1, the values of Y were sampled from a log-normal distribution with meanlog = 5.5 and sdlog = 0.6, which yields a naturally skewed data structure. To model the existence of extreme values, a subset of observations was selected and adjusted by adding large constants, thereby introducing a substantial number of positive outliers into the dataset. For Stratum 2, Y was shaped to be bimodal by mixing two normal distributions centred around 700 and 900, respectively, reflecting heterogeneous underlying distributions. The auxiliary variable W in both strata followed the uniform distribution on (0,1). Outliers in both populations were identified through the Interquartile Range (IQR) methodology (see Figure 1 and Figure 2), which allowed a comparative analysis of clean versus contaminated data. This controlled simulation environment enabled a thorough analysis of estimator performance in diverse and adverse situations.

4.2.3. Outlier Generation Mechanism (Simulated Populations)

To evaluate robustness under controlled contamination, outliers were generated by injecting a small number of abnormal response values within each stratum (Zaman and Bulut [36,37]). In both populations and strata, outliers were introduced by randomly selecting k = 5 units per stratum (out of 133) and injecting large positive shocks into the response variable: Uniform(1000, 1500) and Normal(1000, 300²) perturbations for the first population, and Uniform(1500, 2000) and Normal(1200, 300²) perturbations for the second population. This resulted in an approximate contamination proportion of $k/N_{H_{\vartheta}} \approx 3.76\%$ per stratum, representing vertical outliers in Y. For diagnostic and graphical purposes only, extreme observations were flagged using Tukey's IQR rule, i.e., values outside $[Q_1 - 1.5\,\mathrm{IQR},\; Q_3 + 1.5\,\mathrm{IQR}]$; see Ref. [41].
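The injection mechanism and the diagnostic IQR rule can be sketched as follows; the function names are our own, and the uniform shocks shown correspond to the first population.

```python
import numpy as np

def inject_outliers(y, k, low, high, rng):
    """Add k large positive Uniform(low, high) shocks at random positions
    (vertical outliers in the response)."""
    y = y.copy()
    idx = rng.choice(y.size, size=k, replace=False)
    y[idx] += rng.uniform(low, high, size=k)
    return y, idx

def iqr_flags(y):
    """Tukey's rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, q3 = np.percentile(y, [25, 75])
    iqr = q3 - q1
    return (y < q1 - 1.5 * iqr) | (y > q3 + 1.5 * iqr)

rng = np.random.default_rng(7)
y_clean = rng.normal(800.0, 100.0, 133)                 # a clean stratum
y_cont, idx = inject_outliers(y_clean, k=5, low=1000.0, high=1500.0, rng=rng)
```

As in the text, the flags are diagnostic only; the estimators themselves handle the contamination through the robust regression component.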
Adapting the methodological approach of Koyuncu [28,30], we conducted a simulation experiment with $R_b = 5000$ replications. MSE-based PRE results for the StRS framework are summarised in Table 1, Table 2, Table 3 and Table 4. For every generated population, $\bar{y}_{BR}$, $\bar{y}_{PM}$, $\bar{y}_{RP}^{(i)}$, and $\bar{y}_{PP}^{(i)}$ were computed. The comparative metrics (MSE, PRE) were defined as
$$\mathrm{MSE}\left(\hat{c}^{(g_1)}\right) = \frac{1}{R_b}\sum_{k_b=1}^{R_b}\left(\hat{c}^{(g_1)}_{k_b} - \mu_b\right)^2,$$
$$\mathrm{PRE}\left(\hat{c}^{(g_1)}, \bar{y}_{BR}\right) = \frac{\mathrm{MSE}\left(\bar{y}_{BR}\right)}{\mathrm{MSE}\left(\hat{c}^{(g_1)}\right)} \times 100,$$
where $\hat{c}^{(g_1)} = \bar{y}_{PM}, \bar{y}_{RP}^{(i)}, \bar{y}_{PP}^{(i)}$, $\hat{c}^{(g_1)}_{k_b}$ denotes the estimate in replication $k_b$, and $\mu_b$ is the true population mean, so that PRE values above 100 indicate greater efficiency than $\bar{y}_{BR}$.
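Given the replicated estimates, these metrics are a few lines of code; the helper names below are our own.

```python
import numpy as np

def mse(estimates, true_mean):
    """Monte Carlo MSE: average squared deviation from the true mean."""
    estimates = np.asarray(estimates)
    return np.mean((estimates - true_mean) ** 2)

def pre(estimates, baseline_estimates, true_mean):
    """Percentage relative efficiency versus the baseline estimator.
    PRE > 100 means the candidate has a smaller MSE than the baseline."""
    return 100.0 * mse(baseline_estimates, true_mean) / mse(estimates, true_mean)
```

For example, halving the estimator's standard error roughly quadruples its PRE relative to the baseline.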

4.3. Real Life Applications

4.3.1. Fisheries

Kernel-based LPR offers valuable benefits for modelling nonlinear relationships and is therefore especially applicable to fisheries science. Here, nonparametric predictive mean estimation accounts for the inherent biological variation observed among fish species, such as differences in size, weight, and other morphological characteristics. In this research, we used the well-known Fish Market dataset, which contains records of several fish species and their physical features such as length, height, and width. Each record corresponds to an individual fish, which makes the dataset suitable for predictive modelling. We use the dataset as discussed by [17] to implement model-based predictive average estimation.
To stratify the fish dataset, Stratum-I contained the recorded heights of Bream fish (as the study variable Y), whereas Stratum-II contained the recorded width observations of the same species as Y. This stratification was biologically meaningful, allowing within-group heterogeneity to be explored more accurately. We integrated these variables into a nonparametric calibration framework to investigate how predictive estimators can enhance the accuracy of estimation in fisheries assessment, supporting sustainable decision-making in aquaculture and resource management.

4.3.2. Radiations

Nonparametric regression techniques are also useful for modelling environmental phenomena, especially the heterogeneous and nonlinear patterns typical of atmospheric data. This study applied kernel regression to a dataset representing ultraviolet (UV) radiation levels under a variety of meteorological conditions. The dataset comprises environmental variables such as temperature, humidity, ozone concentration, and solar position, all of which drive changes in UV intensity.
The UV Radiation dataset has also been described in [17] and is publicly available; therefore, no additional permissions were required for its use. The dataset was originally gathered to enable predictive modelling of the UV risk levels, and this assisted our purpose of model-based predictive mean estimation in environmental monitoring. These data were stratified into two categories of UV risks, where Stratum-I contained the conditions of low-risk and Stratum-II contained moderate-risk cases. In this context, the solar radiation intensity was the key study variable, denoted by Y.
It is important to note that in both simulated populations, the auxiliary variable W was generated from a uniform distribution on $[0, 1]$, i.e., $W \sim U(0, 1)$, following the methodologies outlined by Rueda and Sanchez-Borrego [6], Qureshi et al. [47], and Subzar et al. [48].
Taken together, the two case studies illustrate how versatile and effective nonparametric model-based estimation can be in practice. Both illustrations are based on the pseudo-population concept that is widely utilised in simulation-driven survey designs. Table 5 and Table 6, and Table 7 and Table 8, give the PRE results obtained with the fisheries and radiation data, respectively.
In our simulation and empirical implementation, the suggested families $\bar{y}_{RP}^{(i)}$ and $\bar{y}_{PP}^{(i)}$ were computed independently in each stratum $H_{\vartheta}$ in three steps: (i) fit the robust regression model to the sampled units of size $n_{H_{\vartheta}}$ to obtain $\hat{\Xi}_{sH_{\vartheta}}^{(i)}$ (repeated for all robust methods $i = 1, \ldots, 7$), (ii) compute the LPR/kernel predictor $\hat{m}(w)$ at every non-sampled unit in $\bar{s}_{H_{\vartheta}}$, and (iii) obtain the calibrated weights $\Pi_{H_{\vartheta}}$ by solving the mean- and CV-based calibration constraints. The dominant computational cost was the kernel prediction step, since a full evaluation computes the kernel weights of all non-sampled units against the $n_{H_{\vartheta}}$ sampled units; this resulted in
$$O\left(\sum_{H_{\vartheta}=1}^{C}\left(N_{H_{\vartheta}} - n_{H_{\vartheta}}\right) n_{H_{\vartheta}}\right)$$
operations per replication and robust method. Robust regression fitting added a cost of $O\left(\sum_{H_{\vartheta}=1}^{C} T_i\, n_{H_{\vartheta}}\right)$, with $T_i$ denoting the number of iterations of method i, while calibration cost only $O\left(\sum_{H_{\vartheta}=1}^{C} n_{H_{\vartheta}}\right)$ since it solves a fixed low-dimensional system of constraints. The choice of bandwidth influenced runtime: a fixed bandwidth yields the dominant cost above, whereas the data-driven rules (dpik and bcv) introduce additional pilot-estimation/optimisation overhead that does not alter the leading-order term. Because the entire process was repeated over $R_b$ Monte Carlo replications and $i = 1, 2, \ldots, 7$ robust methods, the total runtime was $O\left(R_b\, I \sum_{H_{\vartheta}=1}^{C}\left(N_{H_{\vartheta}} - n_{H_{\vartheta}}\right) n_{H_{\vartheta}}\right)$ in the naive implementation, with $I = 7$. The method nevertheless remains scalable to large stratified surveys, since computations parallelise easily across strata and replications, and the kernel evaluation can be restricted to truncated/local neighbourhoods with fast nearest-neighbour search (e.g., KD-trees), which is considerably faster for large populations.
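For a scalar auxiliary variable, the truncated-neighbourhood idea needs no KD-tree: sorting the sampled values once and using binary search restricts each prediction to points within a few bandwidths of the query. The sketch below is our own illustration of this trick, not the paper's implementation; with a Gaussian kernel, truncating at several bandwidths changes the predictions only negligibly.

```python
import numpy as np

def nw_truncated(w_new, w_s, y_s, h, radius=5.0):
    """Nadaraya-Watson prediction using only sample points within radius*h of
    each query point (sorted array + binary search), approximating the full
    O(m*n) scan at a fraction of the cost when radius*h is small."""
    order = np.argsort(w_s)
    ws, ys = w_s[order], y_s[order]
    out = np.empty(w_new.size)
    for i, t in enumerate(w_new):
        lo = np.searchsorted(ws, t - radius * h)   # local window via binary search
        hi = np.searchsorted(ws, t + radius * h)
        k = np.exp(-0.5 * ((t - ws[lo:hi]) / h) ** 2)
        out[i] = (k @ ys[lo:hi]) / k.sum()
    return out
```

Each query now touches roughly $2\,\text{radius}\,h\,n$ sample points instead of all n, and queries parallelise trivially across strata and replications.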

4.4. Interpretation

Table 1, Table 2, Table 3 and Table 4 summarise the PRE values of the competing estimators relative to the baseline estimator $\bar{y}_{BR}$ (fixed at PRE = 100). Across all settings, the predictive estimator $\bar{y}_{PM}$ provided only a slight improvement over $\bar{y}_{BR}$, indicating that the use of auxiliary information alone yielded a modest efficiency gain. In contrast, the adapted families $\bar{y}_{RP}^{(i)}$ and the proposed calibrated families $\bar{y}_{PP}^{(i)}$ consistently outperformed the baseline for all indices $i = 1, \ldots, 7$, confirming the advantage of combining predictive modelling with robust/adjusted structures. In the first generated population (Table 1 and Table 2), the highest gains were generally achieved by the moderate-complexity indices, particularly $i = 3$ and $i = 4$, with $\bar{y}_{RP}^{(3)}$ and $\bar{y}_{PP}^{(3)}$ yielding the strongest PRE values and $\bar{y}_{PP}^{(4)}$ attaining the maximum PRE under 20% sampling. For the second generated population (Table 3 and Table 4), PRE values were slightly lower in the 10% case, but the ranking remained stable; again, indices $i = 3$ or 4 dominated. Notably, the 20% sampling scenario for the second population (Table 4) produced the largest efficiency improvements, where $\bar{y}_{RP}^{(4)}$ and $\bar{y}_{PP}^{(4)}$ exceeded a PRE of 101.22, suggesting that stronger correlation structures and smoother functional relationships favoured the proposed estimators. Overall, these results demonstrate that the proposed predictive families provide reliable efficiency gains over existing benchmarks, particularly when auxiliary information is informative and the underlying relationship between y and w is adequately captured by kernel-based prediction coupled with robust regression.
Table 5 presents the PRE values of all the estimators for the fisheries data. The results show that $\bar{y}_{RP}^{(3)}$, $\bar{y}_{RP}^{(4)}$, and their proposed counterparts $\bar{y}_{PP}^{(3)}$ and $\bar{y}_{PP}^{(4)}$ achieved the highest PREs. These estimators were therefore both more efficient and more accurate in estimating the population mean. The PRE gains reflect the corresponding reduction in MSE and show that the methods delivered steady improvements. Similarly, in Table 6, the PRE values remained favourable across all proposed estimators. The adapted and suggested estimators remained predominant, with a maximum PRE value of over 101.03; $\bar{y}_{PP}^{(3)}$ and $\bar{y}_{PP}^{(4)}$ again showed strong performance.
The final PRE tables, for the radiation data, showed a plateau in performance. The efficiency improvements tapered off slightly, although the proposed estimators still outperformed the baseline. Top PRE scores were approximately 100.45, implying only slight improvement, possibly as a result of greater variability or noise in these data. Nevertheless, the predictive estimators retained the best performance, particularly at mid-level model complexity, exhibiting flexibility across varied sampling designs.
Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 further highlight the sensitivity of PRE to the bandwidth selection strategy used in LPR-based prediction. Overall, the fixed bandwidth choices ($h = 0.2$ and $h = 0.5$) yielded only marginal gains over $\bar{y}_{BR}$, typically producing PRE values very close to 100 across both simulated and real populations. In contrast, the data-driven bandwidths (dpik and bcv) consistently produced higher PRE values for all predictive families, indicating that automatic bandwidth tuning better balances the bias–variance trade-off in kernel smoothing. In particular, bcv generally delivered the maximum PRE (or remained competitive with dpik), reflecting its stronger adaptation to heterogeneous and contaminated structures. This pattern is evident in the generated populations (Table 1, Table 2, Table 3 and Table 4), where PRE increased notably under dpik/bcv compared with fixed h, and it became even more pronounced in the fisheries data (Table 5 and Table 6), where the proposed estimators reached PRE values exceeding 102 under dpik/bcv. In the radiation data (Table 7 and Table 8), the improvement trend remained positive but comparatively smaller (PRE near 101), suggesting a flatter regression structure and a higher noise level. Collectively, these results confirm that the superiority of the proposed families was not restricted to a single smoothing choice; rather, the proposed estimators maintained stable efficiency gains across bandwidth regimes, with the strongest improvements arising under data-driven bandwidths, particularly bcv.
As indicated, the PRE results of all the $\bar{y}_{PP}^{(i)}$ estimators exceeded 100, indicating improved performance relative to the competing estimators. This conclusion is based on our simulation study, but we expect that similar gains would likely hold in other settings.

5. Conclusions

This paper proposes a unified approach that integrates robust regression and nonparametric kernel-based methods to improve predictive estimation of population means under StRS. The suggested methodology overcomes major shortcomings of existing parametric and nonparametric estimators, specifically their weakness in handling model mis-specification and outliers, which are common in real-world contexts such as environmental monitoring and fisheries management. The proposed calibrated predictive estimators are based on constraints involving the mean and CV of W, and these constraints substantially enhance the efficiency and stability of the predictive estimators. The superiority of the proposed estimators is demonstrated by numerical simulations on two artificial populations featuring deliberate outlier contamination and structural heterogeneity. Specifically, the calibrated proposed estimators (labelled $\bar{y}_{PP}^{(i)}$) consistently achieved higher PRE than the classical and adapted kernel-based estimators across the bandwidth selection strategies considered (fixed, dpik, or bcv). This stability across smoothing parameters underlines the suitability and flexibility of the proposed family in a wide range of empirical settings. Moreover, the formulation can be flexibly generalised to accommodate different stratification and auxiliary information structures. This flexibility is vital for applications in areas such as survey statistics, environmental science, and socio-economic data analysis, in which irregular and complex data distributions are common.

Author Contributions

Conceptualization, R.M., H.M.A., N.A. and M.H.; Methodology, R.M., H.M.A., N.A. and M.H.; Software, R.M.; Formal analysis, M.H.; Resources, N.A.; Data curation, N.A.; Writing—original draft, R.M., H.M.A., N.A. and M.H.; Writing—review & editing, R.M., H.M.A. and M.H.; Visualization, R.M.; Supervision, N.A. and M.H.; Project administration, H.M.A.; Funding acquisition, H.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2026R299), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The constraint multipliers ( η 1 ( m ) , η 2 ( m ) , η 3 ( m ) ) used in y ¯ P M and y ¯ P P ( i ) estimators are
$$\eta_{1(m)} = \frac{D_{71(m)}}{H_{1}}, \qquad \eta_{2(m)} = \frac{D_{72(m)}}{H_{1}}, \qquad \eta_{3(m)} = \frac{D_{73(m)}}{H_{1}},$$
where
$$\begin{aligned} D_{71(m)} ={}& \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}^{2}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big)^{2} \\ &+ \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big), \end{aligned}$$
$$\begin{aligned} D_{72(m)} ={}& \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}^{2}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)^{2} \\ &- \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big) + \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big), \end{aligned}$$
$$\begin{aligned} D_{73(m)} ={}& \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(\bar{W}_{H_{\vartheta}} - \bar{w}_{H_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}^{2}\Big) \\ &+ \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}P_{H_{\vartheta}}\left(C_{wH_{\vartheta}} - \hat{C}_{wH_{\vartheta}}\right)\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}^{2}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big). \end{aligned}$$
$$\begin{aligned} H_{1} ={}& \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}^{2}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}^{2}\Big) - \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)^{2}\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}^{2}\Big) \\ &- \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big)^{2} - \Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big)^{2}\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}^{2}\Big) \\ &+ 2\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big)\Big(\sum_{H_{\vartheta}=1}^{C}\hat{Q}_{H_{\vartheta}}P_{H_{\vartheta}}\bar{w}_{H_{\vartheta}}\hat{C}_{wH_{\vartheta}}\Big). \end{aligned}$$

References

  1. Opsomer, J.D.; Francisco-Fernandez, M.; Li, X. Model-based nonparametric variance estimation for systematic sampling. Scand. J. Stat. 2012, 39, 528–542. [Google Scholar] [CrossRef]
  2. Wu, C.; Sitter, R.R. A model-calibration approach to using complete auxiliary information from survey data. J. Am. Stat. Assoc. 2001, 96, 185–193. [Google Scholar] [CrossRef]
  3. Dorfman, A.H. Nonparametric regression for estimating totals in finite populations. In Section on Survey Research Methods; American Statistical Association: Alexandria, VA, USA, 1992; pp. 622–625. [Google Scholar]
  4. Dorfman, A.H.; Hall, P. Estimators of the finite population distribution function using nonparametric regression. Ann. Stat. 1993, 21, 1452–1475. [Google Scholar] [CrossRef]
  5. Nadaraya, E.A. On estimating regression. Theory Probab. Its Appl. 1964, 9, 141–142. [Google Scholar] [CrossRef]
  6. Rueda, M.; Sanchez-Borrego, I.R. A predictive estimator of finite population mean using nonparametric regression. Comput. Stat. 2009, 24, 1–14. [Google Scholar] [CrossRef]
  7. Yang, X.; Chen, J.; Li, D.; And Li, R. Functional-Coefficient Quantile Regression for Panel Data with Latent Group Structure. J. Bus. Econ. Stat. 2024, 42, 1026–1040. [Google Scholar] [CrossRef]
  8. Hao, R.; Yang, X. Multiple-output quantile regression neural network. Stat. Comput. 2024, 34, 89. [Google Scholar] [CrossRef]
  9. Tian, Z.; Lee, A.; Zhou, S. Adaptive tempered reversible jump algorithm for Bayesian curve fitting. Inverse Probl. 2024, 40, 045024. [Google Scholar] [CrossRef]
  10. Ren, Y.; Zhang, J.; Xia, Y.; Wang, R.; Xie, F.; Guan, J.; Zhang, H.; Zhou, S. Regression-based Conditional Independence Test with Adaptive Kernels. Artif. Intell. 2025, 347, 104391. [Google Scholar] [CrossRef]
  11. Zaman, T. Efficient estimators of population mean using auxiliary attribute in stratified random sampling. Adv. Appl. Stat. 2019, 56, 153–171. [Google Scholar] [CrossRef]
  12. Subzar, M.; Lone, S.A.; Aslam, M.; AL-Marshadi, A.H.; Maqbool, S. Exponential ratio estimator of the median: An alternative to the regression estimator of the median under stratified sampling. J. King Saud-Univ.-Sci. 2023, 35, 102536. [Google Scholar] [CrossRef]
  13. Kumar, A.; Siddiqui, A.S. Enhanced estimation of population mean using simple random sampling. Res. Stat. 2024, 2, 2335949. [Google Scholar] [CrossRef]
  14. Shahzad, U.; Zhu, H.; Al-Noor, N.H.; Albalawi, O. Ridge regression-based mean estimators using bivariate auxiliary information. Math. Popul. Stud. 2025, 32, 83–103. [Google Scholar] [CrossRef]
  15. Koc, T.; Koc, H. A new class of quantile regression ratio-type estimators for finite population mean in stratified random sampling. Axioms 2023, 12, 713. [Google Scholar] [CrossRef]
  16. Srivastava, S.K. Predictive estimation of finite population mean using product estimator. Metrika 1983, 30, 93–99. [Google Scholar] [CrossRef]
  17. Alshanbari, H.M.; Anas, M.M. Prospective Inference of Central Tendency Through Data-Adaptive Mechanisms. Mathematics 2025, 13, 3622. [Google Scholar] [CrossRef]
  18. Alomair, A.M.; Shahzad, U.; Al-Noor, N.H.; Zhu, H. Probability weighted moments and family of nonparametric regression estimators. Maejo Int. J. Sci. Technol. 2025, 19, 160–170. [Google Scholar]
  19. Chambers, R.L.; Dorfman, A.H.; Wehrly, T.E. Bias robust estimation in finite populations using nonparametric calibration. J. Am. Stat. Assoc. 1993, 88, 268–277. [Google Scholar] [CrossRef]
  20. Breidt, F.J.; Opsomer, J.D. Local polynomial regression estimators in survey sampling. Ann. Stat. 2000, 28, 1026–1053. [Google Scholar] [CrossRef]
  21. Ali, T.H. Modification of the adaptive Nadaraya-Watson kernel method for nonparametric regression (simulation study). Commun. Stat. Simul. Comput. 2022, 51, 391–403. [Google Scholar] [CrossRef]
  22. Ali, T.H.; Hayawi, H.A.A.M.; Botani, D.S.I. Estimation of the bandwidth parameter in Nadaraya-Watson kernel nonparametric regression based on universal threshold level. Commun. Stat. Simul. Comput. 2023, 52, 1476–1489. [Google Scholar] [CrossRef]
  23. Shahzad, U.; Ahmad, I.; Almanjahie, I.M.; Koyuncu, N.; Hanif, M. Variance estimation based on L-moments and auxiliary information. Math. Popul. Stud. 2022, 29, 31–46. [Google Scholar] [CrossRef]
  24. Zaman, T. Generalized exponential estimators for the finite population mean. Stat. Transition. New Ser. 2020, 21, 159–168. [Google Scholar] [CrossRef]
  25. Leesch, J.; Skopek, J. Five decades of marital sorting in France and the United States—The role of educational expansion and the changing gender imbalance in education. Res. Soc. Stratif. Mobil. 2025, 97, 101044. [Google Scholar] [CrossRef]
  26. Kaiser, C.; Oswald, A.J. The scientific value of numerical measures of human feelings. Proc. Natl. Acad. Sci. USA 2022, 119, e2210412119. [Google Scholar] [CrossRef] [PubMed]
  27. Deville, J.C.; Sarndal, C.E. Calibration estimators in survey sampling. J. Am. Stat. Assoc. 1992, 87, 376–382. [Google Scholar] [CrossRef]
  28. Koyuncu, N. New difference-cum-ratio and exponential type estimators in median ranked set sampling. Hacet. J. Math. Stat. 2016, 45, 207–225. [Google Scholar] [CrossRef]
  29. Singh, S.; Horn, S.; Yu, F. Estimation of variance of the general regression estimator: Higher level calibration approach. Surv. Methodol. 1998, 24, 41–50. [Google Scholar]
  30. Koyuncu, N. Calibration estimator of population mean under stratified ranked set sampling design. Commun. Stat. Theory Methods 2018, 47, 5845–5853. [Google Scholar] [CrossRef]
  31. Sinha, N.; Sisodia, B.V.S.; Singh, S.; Singh, S.K. Calibration approach estimation of the mean in stratified sampling and stratified double sampling. Commun. Stat. Theory Methods 2017, 46, 4932–4942. [Google Scholar]
  32. Barranco-Chamorro, I.; Jiménez-Gamero, M.D.; Mayor-Gallego, J.A.; Moreno-Rebollo, J.L. A case-deletion diagnostic for penalized calibration estimators and BLUP under linear mixed models in survey sampling. Comput. Stat. Data Anal. 2015, 87, 18–33. [Google Scholar] [CrossRef]
  33. Garg, N.; Pachori, M. Use of coefficient of variation in calibration estimation of population mean in stratified sampling. Commun. Stat. Theory Methods 2019, 49, 5842–5852. [Google Scholar] [CrossRef]
  34. Pal, A.; Varshney, R.; Yadav, S.K.; Zaman, T. Improved memory-type ratio estimator for population mean in stratified random sampling under linear and non-linear cost functions. Soft Comput. 2024, 28, 7739–7754. [Google Scholar] [CrossRef]
  35. Pandey, M.K.; Singh, G.N.; Zaman, T.; Al Mutairi, A.; Mustafa, M.S. Improved estimation of population variance in stratified successive sampling using calibrated weights under non-response. Heliyon 2024, 10, e27738. [Google Scholar] [CrossRef]
  36. Zaman, T.; Bulut, H. Modified ratio estimators using robust regression methods. Commun. Stat. Theory Methods 2019, 48, 2039–2048. [Google Scholar] [CrossRef]
  37. Zaman, T.; Bulut, H. Modified regression estimators using robust regression methods and covariance matrices in stratified random sampling. Commun. Stat. Theory Methods 2020, 49, 3407–3420. [Google Scholar] [CrossRef]
  38. Huber, P.J. Robust estimation of a location parameter. Ann. Math. Stat. 1964, 35, 73–101. [Google Scholar] [CrossRef]
  39. Huber, P.J. Robust regression: Asymptotics, conjectures and Monte Carlo. Ann. Stat. 1973, 1, 799–821. [Google Scholar] [CrossRef]
  40. Hampel, F.R. A general qualitative definition of robustness. Ann. Math. Stat. 1971, 42, 1887–1896. [Google Scholar] [CrossRef]
  41. Tukey, J.W. Exploratory Data Analysis; Addison-Wesley: Boston, MA, USA, 1977. [Google Scholar]
  42. Yohai, V.J. High breakdown-point and high efficiency robust estimates for regression. Ann. Stat. 1987, 15, 642–656. [Google Scholar] [CrossRef]
  43. Rousseeuw, P.J.; Yohai, V. Robust regression by means of S-estimators. In Lecture Notes in Statistics; Springer: New York, NY, USA, 1984; Volume 26, pp. 256–272. [Google Scholar]
  44. Rousseeuw, P.J.; Leroy, A.M. Robust Regression and Outlier Detection; John Wiley and Sons Publication: New York, NY, USA, 1987. [Google Scholar]
  45. Wand, M.P.; Jones, M.C. Kernel Smoothing; Chapman and Hall: London, UK, 1995. [Google Scholar]
  46. Scott, D.W.; Terrell, G.R. Biased and unbiased cross-validation in density estimation. J. Am. Stat. Assoc. 1987, 82, 1131–1146. [Google Scholar] [CrossRef]
  47. Qureshi, M.N.; Khalil, S.; Hanif, M. Joint influence of exponential ratio and exponential product estimator for the estimation of clustered population mean in adaptive cluster sampling. Adv. Appl. Stat. 2018, 53, 13–28. [Google Scholar] [CrossRef]
  48. Subzar, M.; Alqurashi, T.; Chandawat, D.; Tamboli, S.; Raja, T.A.; Attri, A.K.; Wani, S.A. Generalized robust regression techniques and adaptive cluster sampling for efficient estimation of population mean in case of rare and clustered populations. Sci. Rep. 2025, 15, 2069. [Google Scholar] [CrossRef]
Figure 1. First generated population.
Figure 2. Second generated population.
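The tables below report PRE under two fixed bandwidths (h = 0.2 and h = 0.5) and two data-driven selectors: dpik (direct plug-in) and bcv (biased cross-validation, ref. [46]). As background for how the bandwidth h enters the kernel-based estimators, the following is a minimal Nadaraya–Watson smoother (ref. [5]) with a Gaussian kernel on synthetic data. It is an illustrative sketch only, not the authors' local polynomial or robust implementation, and every name and number in it is an assumption for illustration.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    h is the bandwidth: small h tracks the data closely,
    large h oversmooths toward the global mean.
    """
    # pairwise scaled distances, shape (len(x_eval), len(x_train))
    u = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * u ** 2)            # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

# hypothetical smooth signal plus noise
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 3.0, 200))
y = np.sin(2.0 * x) + rng.normal(0.0, 0.2, 200)

for h in (0.2, 0.5):                     # the two fixed bandwidths in the tables
    fit = nadaraya_watson(x, y, x, h)
    print(h, float(np.mean((fit - np.sin(2.0 * x)) ** 2)))
```

In practice, dpik-style plug-in and cross-validation selectors replace the fixed h with a value estimated from the sample, which is why the dpik and bcv columns differ from the h = 0.2 and h = 0.5 columns.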
Table 1. PRE using first generated population with n = 10%.

| Estimator | h = 0.2 | h = 0.5 | dpik | bcv |
|---|---|---|---|---|
| ȳ_BR | 100.0000 | 100.0000 | 100.0000 | 100.0000 |
| ȳ_PM | 100.2809 | 100.2837 | 100.2694 | 100.2715 |
| ȳ_RP(1) | 100.1608 | 100.1609 | 100.8467 | 100.9787 |
| ȳ_RP(2) | 100.1143 | 100.1144 | 100.8000 | 100.9318 |
| ȳ_RP(3) | 100.1198 | 100.1198 | 100.8055 | 100.9373 |
| ȳ_RP(4) | 100.1060 | 100.1061 | 100.7916 | 100.9234 |
| ȳ_RP(5) | 100.0828 | 100.0829 | 100.7683 | 100.9001 |
| ȳ_RP(6) | 100.0995 | 100.0997 | 100.7851 | 100.9169 |
| ȳ_RP(7) | 100.0947 | 100.0948 | 100.7802 | 100.9120 |
| ȳ_PP(1) | 100.4523 | 100.4553 | 101.1391 | 101.2757 |
| ȳ_PP(2) | 100.4001 | 100.4030 | 101.0866 | 101.2230 |
| ȳ_PP(3) | 100.4062 | 100.4090 | 101.0927 | 101.2291 |
| ȳ_PP(4) | 100.3979 | 100.4008 | 101.0843 | 101.2208 |
| ȳ_PP(5) | 100.3741 | 100.3770 | 101.0604 | 101.1968 |
| ȳ_PP(6) | 100.3909 | 100.3939 | 101.0773 | 101.2138 |
| ȳ_PP(7) | 100.3858 | 100.3888 | 101.0722 | 101.2086 |
Table 2. PRE using first generated population with n = 20%.

| Estimator | h = 0.2 | h = 0.5 | dpik | bcv |
|---|---|---|---|---|
| ȳ_BR | 100.0000 | 100.0000 | 100.0000 | 100.0000 |
| ȳ_PM | 100.2800 | 100.2825 | 100.2741 | 100.2756 |
| ȳ_RP(1) | 100.0726 | 100.0727 | 100.7866 | 100.9606 |
| ȳ_RP(2) | 100.0078 | 100.0078 | 100.7212 | 100.8951 |
| ȳ_RP(3) | 100.0038 | 100.0037 | 100.7172 | 100.8909 |
| ȳ_RP(4) | 100.0315 | 100.0315 | 100.7452 | 100.9191 |
| ȳ_RP(5) | 100.0054 | 100.0054 | 100.7188 | 100.8927 |
| ȳ_RP(6) | 100.0324 | 100.0325 | 100.7461 | 100.9200 |
| ȳ_RP(7) | 100.0304 | 100.0304 | 100.7440 | 100.9179 |
| ȳ_PP(1) | 100.3563 | 100.3588 | 101.0718 | 101.2494 |
| ȳ_PP(2) | 100.2890 | 100.2914 | 101.0039 | 101.1814 |
| ȳ_PP(3) | 100.2853 | 100.2877 | 101.0003 | 101.1776 |
| ȳ_PP(4) | 100.3152 | 100.3177 | 101.0304 | 101.2079 |
| ȳ_PP(5) | 100.2889 | 100.2913 | 101.0039 | 101.1813 |
| ȳ_PP(6) | 100.3159 | 100.3183 | 101.0311 | 101.2086 |
| ȳ_PP(7) | 100.3137 | 100.3161 | 101.0289 | 101.2063 |
Table 3. PRE using second generated population with n = 10%.

| Estimator | h = 0.2 | h = 0.5 | dpik | bcv |
|---|---|---|---|---|
| ȳ_BR | 100.0000 | 100.0000 | 100.0000 | 100.0000 |
| ȳ_PM | 100.0549 | 100.0546 | 100.0528 | 100.0526 |
| ȳ_RP(1) | 100.0353 | 100.0353 | 100.3942 | 100.4623 |
| ȳ_RP(2) | 100.0406 | 100.0406 | 100.3995 | 100.4676 |
| ȳ_RP(3) | 100.0388 | 100.0388 | 100.3977 | 100.4658 |
| ȳ_RP(4) | 100.0254 | 100.0254 | 100.3843 | 100.4524 |
| ȳ_RP(5) | 100.0142 | 100.0142 | 100.3730 | 100.4411 |
| ȳ_RP(6) | 100.0134 | 100.0134 | 100.3722 | 100.4403 |
| ȳ_RP(7) | 100.0031 | 100.0031 | 100.3619 | 100.4299 |
| ȳ_PP(1) | 100.0882 | 100.0879 | 100.4473 | 100.5150 |
| ȳ_PP(2) | 100.0947 | 100.0944 | 100.4537 | 100.5215 |
| ȳ_PP(3) | 100.0921 | 100.0918 | 100.4512 | 100.5189 |
| ȳ_PP(4) | 100.0778 | 100.0775 | 100.4368 | 100.5046 |
| ȳ_PP(5) | 100.0665 | 100.0661 | 100.4254 | 100.4931 |
| ȳ_PP(6) | 100.0667 | 100.0664 | 100.4256 | 100.4934 |
| ȳ_PP(7) | 100.0561 | 100.0558 | 100.4151 | 100.4828 |
Table 4. PRE using second generated population with n = 20%.

| Estimator | h = 0.2 | h = 0.5 | dpik | bcv |
|---|---|---|---|---|
| ȳ_BR | 100.0000 | 100.00000 | 100.00000 | 100.00000 |
| ȳ_PM | 100.0001 | 99.99975 | 99.99988 | 99.99966 |
| ȳ_RP(1) | 100.0553 | 100.05528 | 100.37851 | 100.51767 |
| ȳ_RP(2) | 100.0675 | 100.06760 | 100.39082 | 100.53005 |
| ȳ_RP(3) | 100.0790 | 100.07908 | 100.40231 | 100.54158 |
| ȳ_RP(4) | 100.0498 | 100.04978 | 100.37302 | 100.51215 |
| ȳ_RP(5) | 100.0380 | 100.03800 | 100.36123 | 100.50031 |
| ȳ_RP(6) | 100.0297 | 100.02971 | 100.35292 | 100.49199 |
| ȳ_RP(7) | 100.0288 | 100.02878 | 100.35198 | 100.49106 |
| ȳ_PP(1) | 100.0536 | 100.05325 | 100.37689 | 100.51561 |
| ȳ_PP(2) | 100.0669 | 100.06663 | 100.39027 | 100.52906 |
| ȳ_PP(3) | 100.0786 | 100.07834 | 100.40199 | 100.54081 |
| ȳ_PP(4) | 100.0480 | 100.04769 | 100.37134 | 100.51002 |
| ȳ_PP(5) | 100.0362 | 100.03585 | 100.35949 | 100.49813 |
| ȳ_PP(6) | 100.0283 | 100.02798 | 100.35159 | 100.49022 |
| ȳ_PP(7) | 100.0274 | 100.02706 | 100.35067 | 100.48930 |
Table 5. PRE using fisheries population with n = 10%.

| Estimator | h = 0.2 | h = 0.5 | dpik | bcv |
|---|---|---|---|---|
| ȳ_BR | 100.0000 | 100.0000 | 100.0000 | 100.0000 |
| ȳ_PM | 100.8126 | 100.8164 | 100.7849 | 100.7878 |
| ȳ_RP(1) | 100.1455 | 100.1456 | 101.0740 | 101.1641 |
| ȳ_RP(2) | 100.1798 | 100.1798 | 101.1087 | 101.1987 |
| ȳ_RP(3) | 100.2017 | 100.2018 | 101.1308 | 101.2209 |
| ȳ_RP(4) | 100.1630 | 100.1631 | 101.0917 | 101.1818 |
| ȳ_RP(5) | 100.1657 | 100.1658 | 101.0944 | 101.1845 |
| ȳ_RP(6) | 100.1645 | 100.1646 | 101.0932 | 101.1833 |
| ȳ_RP(7) | 100.1525 | 100.1526 | 101.0811 | 101.1712 |
| ȳ_PP(1) | 100.9701 | 100.9740 | 101.9049 | 102.0015 |
| ȳ_PP(2) | 101.0048 | 101.0086 | 101.9399 | 102.0365 |
| ȳ_PP(3) | 101.0280 | 101.0318 | 101.9633 | 102.0599 |
| ȳ_PP(4) | 100.9894 | 100.9933 | 101.9243 | 102.0210 |
| ȳ_PP(5) | 100.9921 | 100.9960 | 101.9270 | 102.0237 |
| ȳ_PP(6) | 100.9914 | 100.9953 | 101.9263 | 102.0230 |
| ȳ_PP(7) | 100.9782 | 100.9821 | 101.9130 | 102.0097 |
Table 6. PRE using fisheries population with n = 20%.

| Estimator | h = 0.2 | h = 0.5 | dpik | bcv |
|---|---|---|---|---|
| ȳ_BR | 100.0000 | 100.0000 | 100.0000 | 100.0000 |
| ȳ_PM | 101.2838 | 101.2882 | 101.2582 | 101.2611 |
| ȳ_RP(1) | 100.2120 | 100.2122 | 101.2973 | 101.4101 |
| ȳ_RP(2) | 100.1872 | 100.1872 | 101.2723 | 101.3848 |
| ȳ_RP(3) | 100.2179 | 100.2179 | 101.3032 | 101.4158 |
| ȳ_RP(4) | 100.2315 | 100.2317 | 101.3170 | 101.4299 |
| ȳ_RP(5) | 100.2285 | 100.2287 | 101.3139 | 101.4268 |
| ȳ_RP(6) | 100.2221 | 100.2223 | 101.3074 | 101.4203 |
| ȳ_RP(7) | 100.2254 | 100.2256 | 101.3108 | 101.4237 |
| ȳ_PP(1) | 101.5051 | 101.5097 | 102.6036 | 102.7237 |
| ȳ_PP(2) | 101.4793 | 101.4837 | 102.5777 | 102.6974 |
| ȳ_PP(3) | 101.5112 | 101.5157 | 102.6099 | 102.7297 |
| ȳ_PP(4) | 101.5261 | 101.5309 | 102.6249 | 102.7451 |
| ȳ_PP(5) | 101.5230 | 101.5277 | 102.6218 | 102.7420 |
| ȳ_PP(6) | 101.5163 | 101.5211 | 102.6150 | 102.7352 |
| ȳ_PP(7) | 101.5198 | 101.5245 | 102.6185 | 102.7387 |
Table 7. PRE using radiations population with n = 10%.

| Estimator | h = 0.2 | h = 0.5 | dpik | bcv |
|---|---|---|---|---|
| ȳ_BR | 100.0000 | 100.0000 | 100.0000 | 100.0000 |
| ȳ_PM | 100.5602 | 100.5638 | 100.5416 | 100.5440 |
| ȳ_RP(1) | 100.0751 | 100.0751 | 100.6680 | 100.6875 |
| ȳ_RP(2) | 100.2685 | 100.2685 | 100.8626 | 100.8820 |
| ȳ_RP(3) | 100.2275 | 100.2276 | 100.8214 | 100.8409 |
| ȳ_RP(4) | 100.0791 | 100.0792 | 100.6720 | 100.6916 |
| ȳ_RP(5) | 100.0812 | 100.0813 | 100.6741 | 100.6937 |
| ȳ_RP(6) | 100.0805 | 100.0805 | 100.6733 | 100.6929 |
| ȳ_RP(7) | 100.0841 | 100.0841 | 100.6770 | 100.6965 |
| ȳ_PP(1) | 100.6423 | 100.6460 | 101.2381 | 101.2626 |
| ȳ_PP(2) | 100.8516 | 100.8551 | 101.4487 | 101.4730 |
| ȳ_PP(3) | 100.8062 | 100.8098 | 101.4030 | 101.4274 |
| ȳ_PP(4) | 100.6469 | 100.6505 | 101.2427 | 101.2672 |
| ȳ_PP(5) | 100.6492 | 100.6529 | 101.2451 | 101.2695 |
| ȳ_PP(6) | 100.6483 | 100.6519 | 101.2441 | 101.2685 |
| ȳ_PP(7) | 100.6523 | 100.6559 | 101.2482 | 101.2726 |
Table 8. PRE using radiations population with n = 20%.

| Estimator | h = 0.2 | h = 0.5 | dpik | bcv |
|---|---|---|---|---|
| ȳ_BR | 100.0000 | 100.0000 | 100.0000 | 100.0000 |
| ȳ_PM | 100.8112 | 100.8118 | 100.7959 | 100.7954 |
| ȳ_RP(1) | 100.0695 | 100.0695 | 100.7571 | 100.7240 |
| ȳ_RP(2) | 100.0581 | 100.0580 | 100.7456 | 100.7124 |
| ȳ_RP(3) | 100.0341 | 100.0340 | 100.7214 | 100.6883 |
| ȳ_RP(4) | 100.0974 | 100.0974 | 100.7852 | 100.7521 |
| ȳ_RP(5) | 100.1043 | 100.1043 | 100.7921 | 100.7590 |
| ȳ_RP(6) | 100.0956 | 100.0956 | 100.7834 | 100.7502 |
| ȳ_RP(7) | 100.0949 | 100.0950 | 100.7827 | 100.7496 |
| ȳ_PP(1) | 100.8843 | 100.8849 | 101.5795 | 101.5453 |
| ȳ_PP(2) | 100.8719 | 100.8724 | 101.5670 | 101.5327 |
| ȳ_PP(3) | 100.8460 | 100.8465 | 101.5409 | 101.5066 |
| ȳ_PP(4) | 100.9138 | 100.9144 | 101.6092 | 101.5750 |
| ȳ_PP(5) | 100.9210 | 100.9216 | 101.6165 | 101.5823 |
| ȳ_PP(6) | 100.9118 | 100.9124 | 101.6073 | 101.5730 |
| ȳ_PP(7) | 100.9112 | 100.9118 | 101.6066 | 101.5724 |
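In all of the tables above, PRE is reported relative to the base estimator ȳ_BR, i.e., PRE(ȳ) = 100 × MSE(ȳ_BR)/MSE(ȳ), so values above 100 indicate an efficiency gain over the base. The following Monte Carlo sketch illustrates that calculation for a plain stratified sample mean versus a stratified regression estimator on a hypothetical linear population; it is not the paper's simulation design, and every quantity in it (strata, slope, noise level, sample fraction) is an assumption chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stratified population: 3 strata, y linearly related to x.
strata_x = [rng.normal(m, 1.0, 600) for m in (2.0, 5.0, 9.0)]
pops = [(x, 1.5 * x + rng.normal(0.0, 0.5, x.size)) for x in strata_x]
Y_bar = np.mean(np.concatenate([y for _, y in pops]))       # true population mean
W = np.array([x.size for x, _ in pops]) / sum(x.size for x, _ in pops)
X_means = np.array([x.mean() for x, _ in pops])             # known auxiliary means

def one_draw(frac=0.1):
    """One stratified SRSWOR draw: stratified mean and regression estimate."""
    mean_est = reg_est = 0.0
    for w, xbar_N, (x, y) in zip(W, X_means, pops):
        idx = rng.choice(x.size, int(frac * x.size), replace=False)
        xs, ys = x[idx], y[idx]
        b = np.polyfit(xs, ys, 1)[0]        # OLS slope within the stratum
        mean_est += w * ys.mean()
        reg_est += w * (ys.mean() + b * (xbar_N - xs.mean()))
    return mean_est, reg_est

ests = np.array([one_draw() for _ in range(2000)])
mse = ((ests - Y_bar) ** 2).mean(axis=0)
pre = 100.0 * mse[0] / mse   # PRE relative to the plain stratified mean
print(pre)                   # pre[0] is exactly 100 by construction
```

With a strong linear relation between y and x, the regression estimator's PRE comes out well above 100, mirroring the pattern the tables show for the proposed estimators against ȳ_BR.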
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Mahmood, R.; Alshanbari, H.M.; Ali, N.; Hanif, M. Robust and Non-Parametric Regression Estimators for Predictive Mean Estimation in Stratified Sampling. Axioms 2026, 15, 134. https://doi.org/10.3390/axioms15020134


