Article

Robust Three-Step Regression Based on Comedian and Its Performance in Cell-Wise and Case-Wise Outliers

1 Department of Mathematical Sciences, Universidad Eafit, Medellín 050022, Colombia
2 Department of Informatics and Systems Engineering, Universidad Eafit, Medellín 050022, Colombia
3 School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
4 Department of Mathematical Sciences, University of South Dakota, Vermillion, SD 57069, USA
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(8), 1259; https://doi.org/10.3390/math8081259
Submission received: 19 June 2020 / Revised: 21 July 2020 / Accepted: 29 July 2020 / Published: 1 August 2020
(This article belongs to the Special Issue Statistical Simulation and Computation)

Abstract

Both cell-wise and case-wise outliers may appear in a real data set at the same time. Few methods have been developed to deal with both types of outliers when formulating a regression model. In this work, a robust estimator is proposed based on a three-step method named 3S-regression, which uses the comedian as a highly robust scatter estimate. An intensive simulation study is conducted to evaluate the performance of the proposed comedian 3S-regression estimator in the presence of cell-wise and case-wise outliers. In addition, a comparison of this estimator with recently developed robust methods is carried out. The proposed method is also extended to the model with continuous and dummy covariates. Finally, a real data set is analyzed to illustrate potential applications.

1. Introduction

Regression models are among the most widely used statistical tools for diverse practitioners [1]. A well-known assumption when inferring the parameters of these models is that the error term follows a normal distribution. However, as discussed in [2], the presence of outliers can affect the estimation of parameters under normality and lead to inaccurate results [3]. Because outliers are frequently present in real data sets, robust estimation methods should be considered in practice to avoid this inaccuracy [4,5,6,7].
A type of model introduced in [8] is known as three-step (3S) regression, which proceeds as follows. In the first step, a univariate filter is applied to remove cell-wise outliers. In the second step, a generalized S-estimator (GSE) is used to down-weight the effect of case-wise outliers. In the third step, the regression coefficients are estimated. Nevertheless, some limitations of the 3S-regression method are mentioned in [9]. For instance, the GSE employed in the second step loses robustness against case-wise outliers when the dimension is greater than ten. In addition, the extended minimum volume ellipsoid (EMVE) estimator, utilized by the GSE as an initial value, is computationally slow and does not scale well to higher dimensions. Hence, a new robust estimator, called the generalized Rocke S-estimator (GRE), is proposed in [9] to replace the GSE in the second step. Furthermore, a cluster-based algorithm introduced in [9] for faster and more reliable sampling when computing the EMVE estimator, named EMVE-C, is used as an initial value for the GRE algorithm. To the best of our knowledge, the comedian [10,11] has not been employed as an initial estimate in 3S-regression methods.
In robust linear regression, the case-wise contamination model is known as the Tukey-Huber contamination (THC) model. This model has been broadly studied in the literature, but its use in practice is not frequent. In the THC model, a small proportion of cases are contaminated. When the contamination occurs cell-wise, the independent contamination (IC) model arises. Note that, in the IC model, a small proportion of individual cells in the covariates are independently contaminated [12]. However, the literature on the IC model is limited. To the best of our knowledge, there are few works that can deal with both types of outliers (case-wise and cell-wise) at the same time.
A classical robust regression for case-wise contamination is the least median of squares (LMS) method. The LMS regression was proposed to optimize the median of the squared residuals and attains a breakdown point of 50%, but it is computationally inefficient [13,14]. Two alternative methods that use iterative strategies are the regressions via the estimation of: (i) the minimum covariance determinant (MCD) [15,16]; and (ii) the iteratively re-weighted least squares (IRLS) [17,18]. The MCD regression minimizes the determinant of the covariance matrix of the central points. The IRLS regression includes additional information regarding error variance and covariance by incorporating a weight matrix into the model estimation, whose diagonal elements depend on a loss function. High-breakdown affine-equivariant estimators down-weight outlying cases, such as the least trimmed squares (LTS) regression [14], S-regression [19], and MM-regression [20]; see also [21,22] for M-estimators in regression. All of these methods work well in practice under the THC model.
A number of authors [8,23,24,25,26,27,28] have proposed robust regression models that are resilient to case-wise and cell-wise outliers by robustifying the components of the covariance matrix in the solution of the least squares (LS) optimization problem. Additionally, the multivariate S-estimator can be incorporated instead of the empirical covariance and mean [24,25]. It has also been shown that, under mild assumptions (including symmetry and independence of the residuals), the 2S-regression estimator [8] is Fisher consistent and asymptotically normal, even if the multivariate S-estimators are not. Based on incomplete data, two kinds of estimators were constructed [26]: (i) the GSE; and (ii) the extended S-estimator (ESE), both of which coincide with the multivariate S-estimator under complete data. Note that the GSE needs a robust initial estimate. Furthermore, the extended EMVE estimator is introduced in [26] as a particular case of the ESE. The EMVE estimator can be considered as an initial value and a generalization of the minimum volume ellipsoid (MVE) estimator proposed in [15]. Moreover, the shooting S-estimator derived in [28] assigns individual weights to each cell in the data table, combining the shooting algorithm [29] and the simple S-regression [19]. Observe that the data may be snipped by replacing cell-wise outliers with missing values (NA) [27]. Moreover, the Gervini-Yohai univariate filter [30] can be used followed by the GSE [26,31]. Notice that the 3S-regression [8] considers an estimator analogous to the one defined in [26], but with a filter that is consistent for a broader range of distributions.
Based on this bibliographical review, the objective of this study is to propose a comedian-three-step (C3S) regression estimator that considers the comedian as the initial robust scatter value for the GRE algorithm. The proposed estimation method: (i) utilizes an adaptive consistent univariate filter to control the effect of extreme cell-wise outlier propagation; (ii) applies the GRE algorithm, but modified to use the sample comedian matrix and the coordinate-wise median as initial robust scatter and location estimates for the filtered data; and (iii) estimates the regression coefficients using the output of the GRE algorithm in the previous step.
This paper is organized as follows. In Section 2, the general context and notations are provided. Then, the consistent factor in the median absolute deviation (MAD), the comedian function, and the empirical comedian covariance matrix are defined. Section 3 introduces the models with continuous and dummy covariates and proposes an estimator based on the C3S-regression, as well as its asymptotic properties. In Section 4, an extensive simulation study is conducted to compare the performance of the proposed estimator with recently developed robust methods. Additionally, in this section, a real data example is used for illustration and to show potential applications. Section 5 describes some conclusions and ideas for possible future works.

2. Comedian Covariance Matrix and Comedian Matrix

In this section, the general context and notations used in this work are presented. The consistent factor in MAD, the comedian function, and the empirical comedian covariance matrix are also defined here.

2.1. General Context and Notations

Let X and Y be two continuous random variables. Then, the MAD of X, the comedian between X and Y, and the correlation median between X and Y (δ) are, respectively, given as [10]
MAD(X) = median(|X − median(X)|),  COM(X, Y) = median((X − median(X))(Y − median(Y))),  δ = COM(X, Y) / (MAD(X) MAD(Y)). (1)
Note that MAD(X) defined in Equation (1) is a robust measure of the dispersion (or scatter) of X, COM(X, Y) is a robust measure of the covariance between X and Y, while δ is a robust measure of the correlation between X and Y. When X = Y, (COM(X, X))^{1/2} can be used as a robust measure of variability, just as the standard deviation (SD) is obtained from the covariance of X, that is, SD(X) = (COV(X, X))^{1/2}. Then, robust measures of the covariance and correlation matrices for any random vector may be obtained by utilizing the robust measures defined in Equation (1). Notice that the usual covariance between X and Y can be obtained as
ς = COV(X, Y) = MAD(X) MAD(Y) g(ϱ) / g(1), (2)
where g is the comedian function stated as g ( ϱ ) = COM( X , Y ) (see Lemma 2.1 in [10]) and ϱ is the correlation coefficient between X and Y. Observe that ϱ may be represented in terms of the correlation median as
ϱ = COR(X, Y) = g^{-1}(g(1) δ). (3)
The comedian function under a non-degenerate bivariate normal distribution was studied in [10], obtaining g(1) = (Φ^{-1}(0.75))², where Φ is the standard normal cumulative distribution function and Φ^{-1} is its inverse, the normal quantile function. In this work, we also extend g(1) to non-normal distributions. Note that Equation (2) can be written as ς = b_X MAD(X) b_Y MAD(Y) g(ϱ), where
g(1) = (b_X b_Y)^{-1}, (4)
and the consistent factors b X and b Y depend upon the marginal distributions of X and Y, respectively. The consistent factors for some distributions have been obtained and are presented in Table 1. Detailed calculations of these factors are available upon request from the authors.
Notice that the comedian can be considered as a robust initial scatter estimate for the GRE algorithm. Let (X_1, Y_1), …, (X_n, Y_n) be independent random vectors following a bivariate distribution. Then, the empirical comedian is defined by
COM̂_n(X, Y) = median_{i ∈ {1, …, n}} (X_i − median̂_n(X)) (Y_i − median̂_n(Y)), (5)
where median̂_n(X) and median̂_n(Y) denote the sample medians of X_1, …, X_n and Y_1, …, Y_n, respectively. Thus, the empirical version of the covariance matrix defined in Equation (2) provides a scatter estimate based on the comedian matrix.
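These sample quantities are straightforward to compute. A minimal sketch in Python (standard library only; the function names are ours, not taken from the paper's R implementation):

```python
import statistics

def mad(x):
    # Empirical MAD: median of absolute deviations from the sample median.
    m = statistics.median(x)
    return statistics.median(abs(v - m) for v in x)

def comedian(x, y):
    # Empirical comedian COM(X, Y): median of cross-products of deviations.
    mx, my = statistics.median(x), statistics.median(y)
    return statistics.median((a - mx) * (b - my) for a, b in zip(x, y))

def correlation_median(x, y):
    # Empirical correlation median: comedian scaled by the two MADs.
    return comedian(x, y) / (mad(x) * mad(y))
```

For x = y, the comedian reduces to MAD(x)², so the correlation median of a variable with itself is 1.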

2.2. The Consistent Factor in MAD

The MAD is a highly robust scatter estimate, which has a 50% breakdown point (the best possible). According to [32], if we want to estimate the SD consistently, the MAD must be multiplied by a correction factor. Thus, an alternative robust estimate of the SD of X is given by Ŝ_X = b_X MAD̂_n(X), where MAD̂_n(X) = median{|X_i − median̂_n(X)|, i ∈ {1, …, n}}. The consistent factor b_X depends exclusively on the distribution of the random variable X. If the marginal distribution of X is unknown, the consistent factor can be estimated via the non-parametric bootstrap method [33]. Let F̂_n be the empirical cumulative distribution function of the random variable X. Then, the bootstrap process used to obtain the consistent factor b_X is summarized in Algorithm 1.
Algorithm 1 Bootstrapping process used to obtain the consistent factor b_X.
1: Generate X_1*, …, X_n* ~ F̂_n(x) randomly.
2: Compute T_n* = g(X_1*, …, X_n*) = Ŝ_n(X_1*, …, X_n*) / MAD̂_n(X_1*, …, X_n*).
3: Repeat Steps 1–2 B times to get T_{n,1}*, …, T_{n,B}*.
4: Evaluate b̂_X = (1/B) Σ_{b=1}^B T_{n,b}*.
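Algorithm 1 can be sketched in Python (standard library only; `bootstrap_consistency_factor` is a hypothetical name, and we take the bootstrap ratio as SD over MAD so that b̂_X · MAD̂_n(X) estimates the SD):

```python
import random
import statistics

def mad(x):
    # Empirical MAD.
    m = statistics.median(x)
    return statistics.median(abs(v - m) for v in x)

def bootstrap_consistency_factor(x, B=500, seed=0):
    # Steps 1-3: resample from the empirical distribution F_n and
    # compute the ratio T* = S(X*) / MAD(X*) on each bootstrap sample.
    rng = random.Random(seed)
    n = len(x)
    ratios = []
    for _ in range(B):
        resample = [rng.choice(x) for _ in range(n)]
        m = mad(resample)
        if m > 0:  # guard against degenerate resamples
            ratios.append(statistics.stdev(resample) / m)
    # Step 4: average the bootstrap replicates.
    return sum(ratios) / len(ratios)
```

For a large normal sample, b̂_X should be close to the normal-theory value 1/Φ^{-1}(0.75) ≈ 1.4826.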

2.3. The Comedian and Empirical Comedian Covariance

The comedian function g(ϱ) is needed to estimate the correlation median defined in Equation (3). The empirical correlation median δ̂_n = COM̂_n(X, Y) (MAD̂_n(X) MAD̂_n(Y))^{-1}, obtained from Equation (5), leads to a robust estimate of the correlation coefficient given by
ϱ̂_n = g^{-1}(g(1) δ̂_n). (6)
The function g was analyzed in [10] when (X, Y) has a bivariate normal distribution, but an explicit form was not obtained. However, it may be approximated through Monte Carlo simulation. We conduct an extensive Monte Carlo simulation study for g via the R software [34]. This simulation was carried out by using the R package MASS and its mvrnorm function. The empirical medians of 10,000,000 random numbers from a bivariate normal distribution were used, with ϱ varying from −1 to 1 in steps of 0.01 when ϱ ∉ [−0.1, 0.1], and in steps of 0.001 when ϱ ∈ [−0.1, 0.1]. The number of replicates is N = 10 and a visualization of the approximation of g is shown in Figure 1.
In general, the exact value of the inverse comedian function g^{-1} cannot be obtained, but it may be estimated through an approximation. A discrete approximation of the comedian function is obtained from the aforementioned simulation study. The expression given in Equation (6) may then be approximated as ϱ̂_n = ĝ^{-1}(g(1) δ̂_n), where ĝ^{-1} is an estimate of g^{-1} obtained by interpolating the approximated points with a cubic spline. To carry out this method, it is necessary to know all of the values corresponding to the support of the inverse comedian function. If the consistent factors of the marginal distributions defined in Equation (4) are estimated properly, then ϱ̂_n for other bivariate distributions can be obtained.
By using the empirical marginal distributions, the consistent factors may be estimated via bootstrapping, as described in Section 2. Thus, from Equation (6), we have ϱ ^ n = g ^ 1 ( ( b ^ X b ^ Y ) 1 δ ^ n ) .
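The simulate-then-invert scheme can be imitated in standard-library Python. Here g is tabulated on a coarse grid by Monte Carlo and inverted by linear interpolation; the paper uses a much finer grid and a cubic spline, so this is only an illustrative sketch with our own function names:

```python
import math
import random
import statistics

def comedian_mc(rho, n=20000, seed=1):
    # Monte Carlo estimate of g(rho): the empirical comedian of a
    # standard bivariate normal pair with correlation rho.
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        xs.append(z1)
        ys.append(rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    mx, my = statistics.median(xs), statistics.median(ys)
    return statistics.median((a - mx) * (b - my) for a, b in zip(xs, ys))

# Tabulate g on a coarse grid of correlations (the fixed seed gives
# common random numbers across grid points, keeping the curve smooth).
GRID = [i / 20.0 for i in range(-20, 21)]
G_VALS = [comedian_mc(r) for r in GRID]

def g_inverse(value):
    # Piecewise-linear stand-in for the cubic-spline inverse of g.
    if value <= G_VALS[0]:
        return -1.0
    for k in range(1, len(GRID)):
        if value <= G_VALS[k]:
            g0, g1 = G_VALS[k - 1], G_VALS[k]
            return GRID[k - 1] + (GRID[k] - GRID[k - 1]) * (value - g0) / (g1 - g0)
    return 1.0
```

As a sanity check, the tabulated endpoint G_VALS[-1] approximates g(1) = (Φ^{-1}(0.75))² ≈ 0.4549.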
Let X = (X_1, …, X_p) be a set of p covariates. Then, a robust version of the empirical covariance between any pair of covariates (X_i, X_j), for i, j ∈ {1, …, p}, can be stated as
Ŝ^c_{X_i X_j} = b̂_{X_i} MAD̂_n(X_i) b̂_{X_j} MAD̂_n(X_j) ϱ̂_n, (7)
where S ^ X i , X j c is an element of the robust version of the empirical covariance matrix S ^ X X c . We also propose to use the expression given in Equation (7) as an initial scatter estimate for the GRE algorithm instead of the EMVE estimator. This proposal is called here the full version of the C3S-regression estimator.
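For the light version, the initial scatter is simply the matrix of pairwise empirical comedians. A standard-library sketch (our own helper, not the paper's implementation):

```python
import statistics

def comedian_matrix(columns):
    # columns: list of p equal-length sequences, one per variable.
    # Returns the p x p matrix of pairwise empirical comedians;
    # each diagonal entry equals the squared MAD of that variable.
    meds = [statistics.median(c) for c in columns]
    p = len(columns)
    M = [[0.0] * p for _ in range(p)]
    for i in range(p):
        for j in range(i, p):
            com = statistics.median(
                (a - meds[i]) * (b - meds[j])
                for a, b in zip(columns[i], columns[j])
            )
            M[i][j] = M[j][i] = com
    return M
```

The full version would additionally rescale each entry by the estimated consistency factors and the interpolated correlation, as in Equation (7).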

3. Comedian Three-Step Regression

In this section, the model with continuous and dummy covariates is introduced. Moreover, the proposed estimator that is based on the 3S-regression, as well as its asymptotic properties, are developed.

3.1. The Proposed Estimator

A multiple regression is used to model the linear relationship between a dependent (response) variable Y and p independent variables (covariates) X = (X_1, …, X_p), with observed values for case i denoted by x_i = (x_i1, …, x_ip). Subsequently, the multiple regression model can be written as
Y_i = β_0 + β_1 x_i1 + ⋯ + β_p x_ip + ε_i = β_0 + x_i′β + ε_i,  i ∈ {1, …, n}, (8)
where the error terms ε_i, for i ∈ {1, …, n}, are independent and identically distributed random variables, which are also independent of the values of the covariates x_i = (x_i1, …, x_ip).
The LS estimates of the parameters (β_0, β) are defined as the solution to the optimization problem that minimizes the sum of squared residuals, that is,
(β̂_0^LS, β̂^LS) = argmin_{(β_0, β) ∈ R^{p+1}} Σ_{i=1}^n (Y_i − β_0 − x_i′β)². (9)
The solution of Equation (9) can be explicitly given by
β̂_0^LS = μ̂_Y − μ̂_X′ β̂^LS,  β̂^LS = Σ̂_XX^{-1} Σ̂_XY, (10)
where Σ ^ X X and Σ ^ X Y are the components of the empirical covariance matrix, and  μ ^ Y and μ ^ X are empirical means of Y and X, respectively.
As suggested by a number of authors [8,23,24,25,26,27,28], the components of the solution stated in Equation (10) can be robustified to immunize the estimator against case-wise and cell-wise outliers. Inspired by [9], we use a modified version of the GRE algorithm to obtain the robust estimates of the means and covariances needed in the solution presented in Equation (10). Our modification is that the GRE algorithm considers the comedian as an initial value instead of the EMVE-C estimate. The robust method basically utilizes the coordinate-wise median and the robust version of the covariance, introduced in Section 2, as the initial location and scatter estimates for the GRE algorithm. The proposed estimator uses the univariate filter given in [8] and the GRE algorithm for incomplete data developed in [9]. Our proposal works similarly to the 3S-regression, but employs a different initial robust estimate for the GRE algorithm in the second step. In the present work, the initial estimates of location and scatter (the coordinate-wise median and a robust estimate of the covariance) are computed after snipping the data. Therefore, the proposed robust regression estimator (C3S-regression) is established as
β̂_0^C3S = m̂_Y − m̂_X′ β̂^C3S,  β̂^C3S = Ŝ_XX^{-1} Ŝ_XY, (11)
where both m ^ and S ^ come from the modified GRE algorithm proposed in this work, and they are computed as in Algorithm 2.
Algorithm 2 Computation of m̂ and Ŝ from the modified GRE algorithm.
1: Filter extreme cell-wise outliers using a univariate filter to prevent the propagation of cell-wise contamination across cases.
2: Compute the coordinate-wise median and the robust version of the covariance matrix (or comedian matrix) as initial robust location and scatter estimates.
3: Down-weight the effect of case-wise outliers by applying the GRE algorithm to compute robust location and scatter estimates with the filtered data from Step 1.
Now, consider a set of n data points with observed covariates {x_1, …, x_n} and the corresponding responses {Y_1, …, Y_n}. Let {z_1, …, z_n} be the joint data with z_i = (Y_i, x_i). In the first step, a univariate filter, as described in [8], is applied to each observed covariate x_j, for j ∈ {1, …, p}. Let Z = (z_1, …, z_n) and let U denote the resulting auxiliary matrix of zeros and ones, with zeros indicating the filtered (missing) entries. Subsequently, based on the GRE algorithm, we obtain
m̂ = m̂_GRE(Z, U),  Ŝ = Ŝ_GRE(Z, U), (12)
where m̂_GRE and Ŝ_GRE are robust location and scatter estimates based on the GRE algorithm for the incomplete data (Z, U). Computation of the 3S-regression and C3S-regression estimates is summarized in Algorithm 3.
Algorithm 3 Computation of 3S-regression and C3S-regression estimates.
1: Snip the data.
2: Apply the GRE algorithm with robust initial location and scatter estimates.
3: Estimate the regression coefficients as in Equation (10).
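Step 3 is a plug-in of the robust location and scatter into the closed-form solution of Equation (10). A self-contained sketch in pure Python (hypothetical helper names), where m and S are any location vector and scatter matrix of the joint vector (Y, X):

```python
def solve(A, b):
    # Solve the linear system A x = b by Gaussian elimination
    # with partial pivoting.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def plug_in_coefficients(m, S):
    # m: robust location of (Y, X1, ..., Xp); S: robust (p+1)x(p+1)
    # scatter of (Y, X). Blocks: S_XX = S[1:, 1:], S_XY = S[1:, 0];
    # slopes beta = S_XX^{-1} S_XY and intercept beta0 = m_Y - m_X' beta.
    Sxx = [row[1:] for row in S[1:]]
    Sxy = [row[0] for row in S[1:]]
    beta = solve(Sxx, Sxy)
    beta0 = m[0] - sum(mi * bi for mi, bi in zip(m[1:], beta))
    return beta0, beta
```

Any robust (m, S) pair, e.g., the output of the modified GRE algorithm, can be plugged in unchanged; the regression step itself is identical for the 3S- and C3S-regression.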
Note that the C3S-regression uses a robustly estimated covariance matrix as an initial scatter value instead of the EMVE estimate. In addition, the biflat ρ function [35] is employed instead of the Tukey bisquare ρ function of the GSE. More details on the definitions and algorithms of the EMVE estimate and the GSE can be found in [26]. Furthermore, the GRE algorithm was studied in [9], showing that, in high dimensions, the Rocke biflat function is more robust than the Tukey bisquare function [35,36].

3.2. Models with Continuous and Dummy Covariates

Notice that the M-regression and 3S-regression were used in [8] to deal with continuous and dummy covariates. There, a 3S-regression was employed to estimate the coefficients of the continuous covariates, whereas an M-regression with the Huber ρ function, given by ρ_H(t) = min(1, t²/2) [37], was considered to estimate the coefficients of the dummy covariates. This is a modification of the M-regression and 3S-regression proposed in [38]. We proceed similarly by using the C3S-regression to estimate the coefficients of the continuous covariates and the M-regression for the dummy coefficients. Consider the model with continuous and dummy covariates defined as
Y_i = β_0 + x_i′β_1 + d_i′β_2 + ε_i,  i ∈ {1, …, n}, (13)
where x_i = (x_i1, …, x_ip_1) and d_i = (d_i1, …, d_ip_2) are a p_1 × 1 vector of continuous covariates and a p_2 × 1 vector of dummy covariates, respectively. Let X = (x_1, …, x_n), D = (d_1, …, d_n), and Y = (Y_1, …, Y_n), where the columns of X and D are linearly independent. More precisely, our method based on the M-regression and C3S-regression works as
(β̂_0^(r), β̂_1^(r)) = h(X, Y − D β̂_2^(r−1)),  β̂_2^(r) = M(D, Y − β̂_0^(r) − X β̂_1^(r)),  r ∈ {1, …, R}, (14)
where h denotes the operator of a C3S-regression in each iteration for (X, Y), while M denotes the operator of an M-regression with no intercept for (D, Y), as stated in Equation (11). To control the effect of propagation of cell-wise outliers, let X̂ be the imputed X, with the filtered entries replaced by the linear predictor using (m̂^(r), Ŝ^(r)), as defined in Equation (12), at the rth iteration of the GRE algorithm. The method presented in Equation (14) needs initial estimates (β̂_0^(0), β̂_1^(0), β̂_2^(0)) to start the algorithm, which runs up to a maximum of R = 20 iterations [8]. Then, we first remove the effect of d_i from the continuous covariates and the response. Let Y¯ = Y − Dt and X¯ = X − DT, where t = M(D, Y) and T is a p_2 × p_1 matrix whose jth column is T_j = M(D, (x_1j, …, x_nj)). Subsequently, the initial estimates are defined by (β̂_0^(0), β̂_1^(0)) = h(X¯, Y¯) and β̂_2^(0) = M(D, Y − β̂_0^(0) − X̂ β̂_1^(0)).

3.3. Asymptotic Properties of the Comedian Three-Step Regression

The strong consistency of the empirical comedian is proved in [10], as well as its asymptotic normality. The strong consistency, asymptotic normality, and regularity assumptions for the GSE were established in [8]. Because the respective estimates under the 3S-regression and C3S-regression are based on the same GSE, independently of the differences in the initial estimates and weight functions, the asymptotic properties of the corresponding estimators from the C3S-regression and 3S-regression are guaranteed. Note that the 3S-regression and C3S-regression become a 2S-regression for a sufficiently large n. Therefore, the estimators obtained from the C3S-regression inherit the asymptotic properties of the estimators obtained from the 2S-regression, just as the 3S-regression does. The properties of the corresponding asymptotic covariance matrix are also found in [8].
Let H be the distribution of (X, Y), (m̂, Ŝ) be the GSE, and (β̂_0^C3S, β̂^C3S) be the estimated C3S-regression coefficients. Subsequently, z_i = (x_i, Y_i) is replaced by ẑ_i = (x̂_i, Y_i) and x̃_i = (1, x_i) by x̃̂_i = (1, x̂_i), where x̂_i is the best linear prediction of x_i. Thus, the asymptotic covariance matrix is estimated through the asymptotic S-estimator variance (ASV) matrix stated in [8] as ASV̂(H) = Ĉ(H)^{-1} D̂(H) Ĉ(H)^{-1}, where Ĉ(H) = (1/n) Σ_{i=1}^n (w(d_n(ẑ_i)) + (2/σ̂²_{ε,n}) w′(d_n(ẑ_i)) r̂_i²) x̃̂_i x̃̂_i′, D̂(H) = (1/n) Σ_{i=1}^n w²(d_n(ẑ_i)) r̂_i² x̃̂_i x̃̂_i′, σ̂_{ε,n} = (Ŝ_YY − β̂_C3S′ Ŝ_XX β̂_C3S)^{1/2}, d_n(ẑ_i) = (ẑ_i − m̂)′ Ŝ^{-1} (ẑ_i − m̂), r̂_i = Y_i − x̃̂_i′ β̂_C3S, and w(d_n(ẑ_i)) = ρ_R′(d_n(ẑ_i)), with ρ_R being the Rocke biflat function.

4. Numerical Studies

In this section, the computational framework and simulation scenarios are described. Subsequently, we report the results of an intensive simulation study, which is conducted to evaluate the statistical performance of the C3S-regression coefficient estimators and to compare the proposed estimator and other existing estimators. In addition, the illustration with real data is provided.

4.1. Computational Framework and Simulation Scenarios

Our simulation study is performed by utilizing the R software on a Hewlett-Packard HP Compaq Pro 6300 SFF computer with an 8-core GenuineIntel Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz. The simulation is similar to the one carried out in [8], using the same criteria for evaluating the performance of the C3S-regression. The method proposed in the present paper is also compared with the performance of the LS regression and the following two robust alternatives:
  • The first one is the 2S-regression [25] that uses an MVE estimate as an initial value. The MVE estimate is computed by means of an iterative subsampling with a concentration step. The MVE estimate is implemented in an R package named rrcov, using the function CovSest with the option method = "bisquare" [39].
  • The second one is the 3S-regression [8] that reduces the high computational burden of uniform subsampling for the EMVE estimate. The GSE with bisquare ρ function is computed by an iterative algorithm that employs the EMVE-C estimate as an initial value. The 3S-regression without modifications is implemented in an R package, named robreg3S, using the function robreg3S as the default option [40]. Nevertheless, the GSE with the EMVE-C estimate as an initial value is implemented in the GSE package, using the function GSE with the option init = "emve_c" [41].
The univariate filter needed by the C3S-regression is implemented in the robreg3S package, while the GRE is computed using the GSE function with the option method = "rocke". The two versions of the C3S-regression to be considered are: (i) the full version, using the comedian covariance matrix; and (ii) the light version, using the raw comedian matrix as an initial scatter estimate. The latter is called the light version because it requires fewer operations to compute than the full version. From now on, C3S-regression refers to both versions, unless otherwise indicated.
Next, the regression model presented in Equation (8) with p = 15 and n ∈ {150, 300, 500, 1000, 5000} is considered. The values of the covariates x_i, for i ∈ {1, …, n}, are generated from a multivariate normal distribution N_p(μ, Σ). We set μ = 0 and Σ_jj = 1 [8], for j ∈ {1, …, p}, without loss of generality, because the GSE used by the C3S-regression is location and scale equivariant. (Note that, from the location equivariance of the GSE, β_0 = 0 can be set.) To address the fact that the C3S-regression and 3S-regression are neither affine equivariant nor regression equivariant, the correlation structure Σ may be used. Observe that this correlation structure is described in [27], with the condition number fixed at 100 and random generation of β as β = Rb. We let R = 10 and b follow a uniform distribution on the unit spherical surface. The response variable Y_i is given by Y_i = x_i′β + ε_i, where the ε_i ~ N(0, σ = 0.5), for i ∈ {1, …, n}, are independent. The scenarios assumed in the simulation study are:
S1 Clean data: the data generation is not altered.
S2 Cell-wise contamination: randomly replace a proportion q of the cells in the covariates by outliers x_ij^cont = E(X_ij) + k SD(X_ij) and of the responses by outliers Y_i^cont = E(Y_i) + k SD(ε_i), where k ∈ {1, …, 10}.
S3 Case-wise contamination: randomly replace a proportion q of the cases by leverage outliers (x_i^cont, Y_i^cont), where x_i^cont = c v and Y_i^cont = x_i^cont′ β + ε_i^cont, with ε_i^cont ~ N(k, σ²), for k ∈ {1, …, 10}. Here, v is the eigenvector corresponding to the smallest eigenvalue of Σ, with length such that (v − μ)′ Σ^{-1} (v − μ) = 1. To compute the value of c ∈ {1, …, 100}, we follow the same process introduced in [8,27]; that is, a Monte Carlo study with the same number of replicates, N = 500. We observe that c = 22 is the value that produces the worst performance of the scatter estimator.
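Scenario S2 can be mimicked with a few lines of standard-library Python (function name and interface are ours):

```python
import random

def contaminate_cells(X, q, k, means, sds, seed=0):
    # Scenario S2: randomly replace a proportion q of the cells X[i][j]
    # by the outlying value means[j] + k * sds[j]; the original matrix
    # is left untouched.
    rng = random.Random(seed)
    Xc = [row[:] for row in X]
    n, p = len(X), len(X[0])
    cells = [(i, j) for i in range(n) for j in range(p)]
    for i, j in rng.sample(cells, round(q * n * p)):
        Xc[i][j] = means[j] + k * sds[j]
    return Xc
```

Case-wise contamination (S3) differs only in sampling whole rows instead of individual cells.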
We let q ∈ {0.01, 0.05, 0.09} for the cell-wise contamination. Given that case-wise outliers are unusual in practice, we consider q = 0.03 for the case-wise contamination. The number of replicates for each setting is N = 1000. In addition, the simulation study is also carried out considering the regression model presented in Equation (13), with p_1 = 12 continuous covariates, p_2 = 3 dummy covariates, and n ∈ {500, 1000}. Then, the performance of the M-regression and C3S-regression is evaluated. The values of the covariates (x_i, d_i), for i ∈ {1, …, n}, are first generated from a multivariate normal distribution N_{p_1+p_2}(0, Σ), where Σ is a randomly generated correlation matrix with a fixed condition number of 100. Subsequently, d_ij is dichotomized at Φ^{-1}(π_j), with π_j ∈ {1/4, 1/3, 1/2}, for j ∈ {1, 2, 3}, respectively. The generation of the model with continuous and dummy covariates follows scenarios S1–S2, and the case-wise contamination follows scenario S3.
Let Σ_1 be the sub-matrix of Σ that quantifies the covariance of the continuous covariates. In this new scenario, we randomly replace a proportion q of the cases in X by leverage outliers (x_i^cont, Y_i^cont), where x_i^cont = c v and Y_i^cont = x_i^cont′ β_1 + d_i′ β_2 + ε_i^cont, with ε_i^cont ~ N(k, σ²) and k ∈ {1, 10}. Here, v is now the eigenvector corresponding to the smallest eigenvalue of Σ_1, with length such that (v − μ)′ Σ_1^{-1} (v − μ) = 1, and the corresponding least favorable case-wise contamination size for the twelve continuous variables is c = 18.
Once again, we consider q ∈ {0.01, 0.05, 0.09} for the cell-wise contamination and q = 0.03 for the case-wise contamination. The number of replicates for each setting is N = 1000. Furthermore, the simulation study is conducted for non-normal covariates to compare the performance of the C3S-regression, 3S-regression, 2S-regression, and LS estimators. For the C3S-regression, the full and light versions of the proposed estimator are considered. The same regression model with p = 15 and n = 500 is used, but the covariates are generated from a non-normal distribution [8]. The covariates X_i, for i ∈ {1, …, n}, are first generated from a multivariate normal distribution with zero mean and covariance matrix Σ, that is, X_i ~ N_p(0, Σ), where, again, Σ is a randomly generated correlation matrix with a fixed condition number of 100. Subsequently, the covariates are transformed by means of (X_i1, …, X_ip) → (G_1^{-1}(Φ(X_i1)), …, G_p^{-1}(Φ(X_ip))). We consider a distribution for G_j as: N(0, 1), with j ∈ {1, 2, 3}; χ²(20), with j ∈ {4, 5, 6}; F(90, 10), with j ∈ {7, 8, 9}; χ²(1), with j ∈ {10, 11, 12}; and Pareto(1, 3), with j ∈ {13, 14, 15}. The scenarios evaluated in this simulation study are as in S1. For the cell-wise contamination, we replace a proportion q = 0.05 of the cells in the covariates by outliers x_ij^cont = k G_j^{-1}(0.999), and of the responses by outliers Y_i^cont = E(Y_i) + k SD(ε_i).

4.2. Simulation Results

The statistical performance in the estimation of regression coefficients due to the effect of cell-wise and case-wise outliers can be evaluated using the empirical mean squared error (MSE), defined as
$$\overline{\mathrm{MSE}} = \frac{1}{Np} \sum_{i=1}^{N} \sum_{j=1}^{p} \big( \hat{\beta}_j^{(i)} - \beta_j^{(i)} \big)^2, \quad (15)$$
where $\hat{\beta}_j^{(i)}$ is the estimate of $\beta_j^{(i)}$ at the $i$th Monte Carlo replicate. Table 2 and Table 3 report the $\overline{\mathrm{MSE}}$ defined in Equation (15) for $k = 1$ in all the settings with $n \in \{500, 1000\}$. The results for $k \in \{5, 10\}$ are omitted because they are similar to those for $k = 1$. Figure 2 and Figure 3 show curves of $\overline{\mathrm{MSE}}$ for cell-wise and case-wise contamination in models with $p = 15$ continuous covariates and $n = 1000$.
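As a minimal sketch, the empirical MSE in Equation (15) can be computed from an $(N \times p)$ array of Monte Carlo estimates; the coefficient values below are purely illustrative.

```python
import numpy as np

def empirical_mse(beta_hat, beta_true):
    """Average of (beta_hat_j^(i) - beta_j^(i))^2 over the N replicates
    (rows) and p coefficients (columns), as in Equation (15)."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    beta_true = np.asarray(beta_true, dtype=float)
    return ((beta_hat - beta_true) ** 2).mean()

# toy example: every estimate off by 0.1 gives an MSE of 0.1^2 = 0.01
mse = empirical_mse(np.full((4, 3), 0.6), np.full((4, 3), 0.5))
```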
Figure 4 and Figure 5 display curves of $\overline{\mathrm{MSE}}$ for cell-wise and case-wise contamination in models with continuous and dummy covariates and $n \in \{500, 1000\}$. Figure 6 shows curves of $\overline{\mathrm{MSE}}$ for cell-wise and case-wise contamination in models with continuous covariates and $n = 500$. Note that, for models with continuous covariates, the M-regression and C3S-regression outperform the other estimators in all of the assumed scenarios for both cell-wise and case-wise contamination. In addition, in the four panels of Figure 6, both versions of the C3S-regression behave almost identically in all settings assumed. The full version of the C3S-regression is a little more robust than its light version, but the estimates of both are almost equal for all contamination settings. The results for $n = 1000$ are similar in the cell-wise contamination settings. In the cell-wise contamination setting with small and moderate contamination proportions ($q \le 0.05$), the C3S-regression is highly robust against moderate and large cell-wise outliers ($k \ge 3$), but less robust against inliers ($k \le 2$). The 3S-regression and C3S-regression perform similarly for moderate and large outliers, but in the presence of inliers ($k \le 3$), the 3S-regression is less robust; see the first two panels of Figure 6.
The 2S-regression and 3S-regression perform similarly in the presence of inliers, as expected from the simulation studies carried out in [8]. However, the 2S-regression breaks down when the proportion of contaminated cells $q$ is such that the propagation of large cell-wise outliers is expected to affect more than 50% of the cases.
For a large contamination proportion ($q = 0.09$), the C3S-regression, 3S-regression, and 2S-regression perform similarly in the presence of inliers ($k \le 3$), but the 3S-regression breaks down for moderate and large cell-wise outliers ($k \ge 4$). However, the C3S-regression is highly robust against large cell-wise outliers ($k \ge 5$), although less robust against moderate outliers. In the case-wise contamination setting, the C3S-regression, 3S-regression, and 2S-regression all perform fairly well and similarly. Nevertheless, the 2S-regression performs best, followed by the 3S-regression and then the C3S-regression.
We also study the performance of the estimators with moderate and large case-wise contamination levels of 10% and 20%, where, with leverage outliers of size 22, the C3S-regression and 3S-regression break down as $k$ increases. In these settings, the C3S-regression outperforms the 3S-regression, but, as expected, the 2S-regression maintains its robustness at any contamination level.
Note that, in practice, it is unusual to find case-wise outliers, and even more so at moderate or large levels. Thus, the loss of robustness of the C3S-regression and 3S-regression is not a practical disadvantage. We also find that, for models with continuous and dummy covariates, the M-regression and C3S-regression outperform the other estimators in all assumed scenarios. Table 4 reports a summary of the performance of the estimators evaluated by $\overline{\mathrm{MSE}}$. The performance of the 3S-regression with non-normal covariates is comparable to that of all the other estimators for clean data. However, both versions of the C3S-regression outperform all other estimators for any contamination size $k$ in the cell-wise contamination setting. In the case of non-normal covariates, the C3S-regression maintains its competitive performance, followed by the 3S-regression, while the 2S-regression, as expected, breaks down in the presence of a moderate or large proportion of cell-wise outliers.
Next, the statistical performance of confidence intervals (CIs) for the regression coefficients, based on the asymptotic covariance matrix described in Subsection 3.3, is evaluated. The asymptotic $100(1-\tau)\%$ CIs for the coefficients of the C3S-regression can be established as
$$\mathrm{CI}(\hat{\beta}_j) = \Big[ \hat{\beta}_j - \Phi^{-1}(1-\tau/2)\sqrt{\widehat{\mathrm{ASV}}(\hat{\beta}_j)/n};\; \hat{\beta}_j + \Phi^{-1}(1-\tau/2)\sqrt{\widehat{\mathrm{ASV}}(\hat{\beta}_j)/n} \Big], \quad j \in \{0, 1, \dots, p\}. \quad (16)$$
The performance of CIs defined in Equation (16) may be evaluated using the empirical mean coverage rate (CR) given by
$$\overline{\mathrm{CR}} = \frac{1}{Np} \sum_{i=1}^{N} \sum_{j=1}^{p} \mathrm{I}\big( \beta_j^{(i)} \in \mathrm{CI}(\hat{\beta}_j^{(i)}) \big) \quad (17)$$
and the empirical mean CI length (CIL) defined as
$$\overline{\mathrm{CIL}} = \frac{1}{Np} \sum_{i=1}^{N} \sum_{j=1}^{p} 2\,\Phi^{-1}(1-\tau/2)\sqrt{\widehat{\mathrm{ASV}}(\hat{\beta}_j^{(i)})/n}. \quad (18)$$
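A sketch of how Equations (16)-(18) combine in practice; the inputs are hypothetical $(N \times p)$ arrays of coefficient estimates and estimated asymptotic variances, not output of the actual C3S-regression.

```python
import numpy as np
from scipy import stats

def coverage_and_length(beta_hat, asv_hat, beta_true, n, tau=0.05):
    """Empirical mean coverage rate (CR) and mean CI length (CIL) of the
    asymptotic CIs, averaged over replicates (rows) and coefficients (columns)."""
    z = stats.norm.ppf(1 - tau / 2)             # Phi^{-1}(1 - tau/2)
    half = z * np.sqrt(np.asarray(asv_hat) / n)  # CI half-width per estimate
    covered = (beta_true >= beta_hat - half) & (beta_true <= beta_hat + half)
    return covered.mean(), (2 * half).mean()

# toy check: exact estimates are always covered
cr, cil = coverage_and_length(
    beta_hat=np.zeros((2, 3)), asv_hat=np.ones((2, 3)),
    beta_true=np.zeros((2, 3)), n=100)
```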
Table 5 reports the average CIL defined in Equation (18) obtained from the C3S-regression and 3S-regression for clean data and for contaminated data with 1% cell-wise ($k = 9$), 5% cell-wise ($k = 6$), 9% cell-wise ($k = 3$), and 3% case-wise ($k = 3$) contamination, for $n \in \{150, 300, 500, 1000, 5000\}$. The results of the LS and 2S-regression estimates are not included, because we are interested in comparing the CIL between the 3S-regression and C3S-regression. The CIL obtained from the C3S-regression is comparable to that of the 3S-regression in all considered scenarios. The CILs obtained from the 3S-regression are shorter than those of the C3S-regression for clean data and for data with small and moderate cell-wise contamination levels. For data with large cell-wise contamination levels or case-wise contamination, the CILs of the C3S-regression are shorter than those of the 3S-regression. Moreover, in every assumed scenario, the CILs of the 3S-regression and C3S-regression decrease as the sample size $n$ increases.
Figure 7 shows the $\overline{\mathrm{CR}}$ defined in Equation (17) for clean data and for contaminated data with 5% cell-wise contamination ($k = 5$) and 3% case-wise contamination ($k = 3$), for sample sizes $n \in \{150, 300, 500, 1000\}$. Although the results for $n = 5000$ are not shown for ease of visualization, the $\overline{\mathrm{CR}}$ values of the C3S-regression and 3S-regression for $n = 5000$ are better than those for $n = 1000$. In the contamination settings, the 3S-regression yields the best CR, that is, the one closest to the nominal level. In general, the CR of the C3S-regression is similar to that of the 3S-regression, and they tend to coincide as the sample size $n$ increases.

4.3. Analysis of Real Data

The airfoil self-noise data set is used for illustration purposes. These data were obtained from a series of aerodynamic and acoustic tests of two- and three-dimensional airfoil blade sections conducted by NASA in an anechoic wind tunnel. The data set comprises airfoils of different sizes tested at various wind tunnel speeds and angles of attack, with $n = 1503$ observations (cases). For this data set, Table 6 shows the five covariates and the response variable along with their statistical summaries. The data set is available at the UCI repository [42]. The aim of this empirical study is to predict the noise generated by an airfoil from its dimensions, the wind speed, and the angle of attack. Specifically, the objective is to explain the scaled sound pressure level.
The data set is fitted with the model given by
$$\log(Y_i) = \beta_0 + \beta_1 \log(X_{1i}) + \beta_2 X_{2i} + \beta_3 X_{3i} + \beta_4 X_{4i} + \beta_5 X_{5i} + \varepsilon_i, \quad i \in \{1, \dots, 1503\},$$
where the log transformation is applied to $X_1$ due to its wide range and high skewness, and to $Y$ in order to improve the adjusted $R^2$. The corresponding parameter estimates are obtained with the C3S-regression (both versions, with the full version computed by bootstrap estimation), 2S-regression, 3S-regression, and LS methods. The regression coefficient estimates and the corresponding p-values are reported in Table 7. Note that the regression coefficients are similar across all the estimators, except for the covariate $X_5$ (that is, the suction side displacement thickness). The coefficients of $X_5$ estimated by the 3S-regression and 2S-regression are similar to each other, but very different from the C3S-regression and LS estimates. For the C3S-regression, $X_5$ is clearly not significant, while for the 2S-regression and 3S-regression it is marginally not significant. However, the LS method indicates that $X_5$ is significant.
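A minimal least-squares sketch of fitting a model of this form. Since the UCI data file is not reproduced here, synthetic stand-in covariates spanning the ranges in Table 6 are used, and the coefficient vector is hypothetical; this illustrates only the LS baseline, not the robust estimators.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1503
X = np.column_stack([
    np.log(rng.uniform(200, 20000, n)),   # log(X1): frequency (Hz)
    rng.uniform(0.0, 22.2, n),            # X2: angle of attack (deg)
    rng.uniform(0.0254, 0.3048, n),       # X3: chord length (m)
    rng.uniform(31.7, 71.3, n),           # X4: free stream velocity (m/s)
    rng.uniform(0.0004, 0.0584, n),       # X5: displacement thickness (m)
])
A = np.column_stack([np.ones(n), X])      # design matrix with intercept
beta_true = np.array([5.0, -0.03, -0.003, -0.3, 0.0006, -1.0])  # hypothetical
log_y = A @ beta_true + 0.01 * rng.standard_normal(n)

# ordinary least squares fit of log(Y) on the transformed covariates
beta_ls, *_ = np.linalg.lstsq(A, log_y, rcond=None)
```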
The squared norm distance, defined as $\mathrm{SND} = n \sum_{j=1}^{p} (\hat{\beta}_{j,A} - \hat{\beta}_{j,B})^2\, \mathrm{MAD}(X_{1j}, \dots, X_{nj})^2$, is used to compare each pair $A, B$ of the four estimators. Table 8 reports the corresponding SND values, which show that the distances between each pair of estimates are not large. This suggests that the data are not contaminated, or that the contamination level is very small (inliers).
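The SND can be computed as follows; this is a sketch of the formula above (reading the scaling as multiplication by the squared column-wise MAD), with toy inputs rather than the airfoil estimates.

```python
import numpy as np

def snd(beta_a, beta_b, X):
    """Squared norm distance between two coefficient vectors beta_a and
    beta_b, with each squared difference scaled by the squared MAD of the
    corresponding column of the n x p design matrix X."""
    n = X.shape[0]
    mad = np.median(np.abs(X - np.median(X, axis=0)), axis=0)  # column MADs
    diff = np.asarray(beta_a) - np.asarray(beta_b)
    return n * np.sum(diff ** 2 * mad ** 2)

# toy example: one column [0, 1, 2, 3, 4] has MAD 1, so a coefficient
# difference of 0.1 over n = 5 cases gives SND = 5 * 0.01 * 1 = 0.05
value = snd([0.1], [0.0], np.arange(5.0).reshape(5, 1))
```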

5. Conclusions and Future Works

We have provided a new way of robustifying the estimation of the parameters of a linear regression model in order to immunize the estimators against case-wise and cell-wise outliers. The main idea was to modify the generalized Rocke S-estimator in order to obtain robust estimators of the corresponding means and covariances. The difference in our proposal was to replace, in the generalized Rocke S-estimator, the initial scatter estimate based on the extended minimum volume ellipsoid by one based on the empirical median. The proposed estimator used a univariate filter introduced in the literature and the generalized Rocke S-estimator modified for incomplete data. Our method worked well and similarly to the 3S-regression, but with a different initial robust estimate for the generalized Rocke S-estimator in the second step. The initial estimates of location and scatter, namely the empirical median and the robust version of the covariance, were computed after snipping the data. We have obtained the following findings:
  • A new method, called comedian three-step (C3S) regression, was proposed, which showed an overall outperformance over recently developed robust methods.
  • An exact correction factor ($b_X$) was calculated in order to consistently estimate the standard deviation by means of the median absolute deviation for the exponential, logistic, and uniform distributions. In addition, a numerical solution for this correction factor was introduced for the Student-t and Weibull distributions.
  • For models with continuous covariates, with a small contamination proportion and large cell-wise outliers, the 3S-regression performed similarly to the C3S-regression. However, in general, the C3S-regression outperformed the 3S-regression as the cell-wise contamination proportion increased.
  • For models with continuous and dummy covariates, the C3S-regression outperformed both the 3S-regression and 2S-regression for different contamination proportions. However, for case-wise outliers, the performance of the three estimators was quite similar.
  • The performance of the full version of the C3S-regression estimator proposed in this work was better than that of its light version. However, the latter is computationally faster and can be used without a significant loss of robustness.
Therefore, we have contributed to the robust statistics literature by modifying the original three-step regression method, introducing a new family of initial estimates based on the comedian. Both our method and the original one are useful for dealing with cell-wise and case-wise outliers simultaneously. Nevertheless, the numerical results showed that the method proposed in the present study overall outperforms recently developed robust methods and performs better for models with continuous and dummy covariates.
The following aspects derived from this paper may be considered for future work:
  • The C3S-regression and 3S-regression estimators work well under cell-wise contamination. However, these estimators do not perform well with moderate and large case-wise contamination levels (for example, between 10% and 20%) as the contamination level increases. Some new kind of shrinkage estimator for the initial scatter estimate should be investigated.
  • A bivariate filter can be considered in the first step in order to snip deviating cells, which could improve the performance of the estimator.
  • A numerical procedure should be developed to calculate the correction factor for any distribution.

Author Contributions

Data curation, H.V., H.L. and M.T.; formal analysis, H.L., M.T., V.L. and Y.L; investigation, H.V., H.L., M.T., V.L. and Y.L.; methodology, H.V., H.L., M.T., V.L. and Y.L.; writing–original draft, H.V., H.L., M.T., V.L. and Y.L.; writing–review and editing, H.L., M.T., V.L. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research was partially supported by the Departamento Administrativo de Ciencia, Tecnología e Innovación (Colciencias), currently Ministerio de Ciencia y Tecnología de Colombia, (Project 7252015), by the Vicerrectoría de Descubrimiento y Creación from the Universidad Eafit (H. Velasco, H. Laniado, and M. Toro), and by FONDECYT (grant 1200525) from the National Agency for Research and Development (ANID) of the Chilean government (V. Leiva).

Acknowledgments

The authors thank the Editors and Reviewers for their constructive comments on an earlier version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Draper, N.R.; Smith, H. Applied Regression Analysis; Wiley: New York, NY, USA, 2014. [Google Scholar]
  2. Andrews, D.F. A robust method for multiple linear regression. Technometrics 1974, 16, 523–531. [Google Scholar] [CrossRef]
  3. Liu, Y.; Mao, G.; Leiva, V.; Liu, S.; Tapia, A. Diagnostic analytics for an autoregressive model under the skew-normal distribution. Mathematics 2020, 8, 693. [Google Scholar] [CrossRef]
  4. Peña, D.; Prieto, F.J. Multivariate outlier detection and robust covariance matrix estimation. Technometrics 2001, 43, 286–310. [Google Scholar] [CrossRef] [Green Version]
  5. Rousseeuw, P.J.; Leroy, A.M. Robust Regression and Outlier Detection; Wiley: New York, NY, USA, 2005. [Google Scholar]
  6. Sánchez, L.; Leiva, V.; Galea, M.; Saulo, H. Birnbaum-Saunders quantile regression models with application to spatial data. Mathematics 2020, 8, 1000. [Google Scholar] [CrossRef]
  7. Athayde, E.; Azevedo, A.; Barros, M.; Leiva, V. Failure rate of Birnbaum-Saunders distributions: Shape, change-point, estimation and robustness. Braz. J. Probab. Stat. 2019, 33, 301–328. [Google Scholar] [CrossRef] [Green Version]
  8. Leung, A.; Zhang, H.; Zamar, R. Robust regression estimation and inference in the presence of cell-wise and case-wise contamination. Comput. Stat. Data Anal. 2016, 99, 1–11. [Google Scholar] [CrossRef] [Green Version]
  9. Leung, A.; Yohai, V.; Zamar, R. Multivariate location and scatter matrix estimation under cell-wise and case-wise contamination. Comput. Stat. Data Anal. 2017, 111, 59–76. [Google Scholar] [CrossRef] [Green Version]
  10. Falk, M. On MAD and comedians. Ann. Inst. Stat. Math. 1997, 49, 615–644. [Google Scholar] [CrossRef]
  11. Di Palma, M.A.; Gallo, M. A co-median approach to detect compositional outliers. J. Appl. Stat. 2016, 43, 2348–2362. [Google Scholar] [CrossRef]
  12. Alqallaf, F.; Van Aelst, S.; Yohai, V.J.; Zamar, R.H. Propagation of outliers in multivariate data. Ann. Stat. 2009, 37, 311–331. [Google Scholar] [CrossRef]
  13. Hampel, F.R. Beyond location parameters: Robust concepts and methods. Bull. Int. Stat. Inst. 1975, 46, 375–382. [Google Scholar]
  14. Rousseeuw, P.J. Least median of squares regression. J. Am. Stat. Assoc. 1984, 79, 871–880. [Google Scholar] [CrossRef]
  15. Rousseeuw, P.J. Multivariate estimation with high breakdown point. Math. Stat. Appl. 1985, 8, 283–297. [Google Scholar]
  16. Rousseeuw, P.J.; Driessen, K.V. A fast algorithm for the minimum covariance determinant estimator. Technometrics 1999, 41, 212–223. [Google Scholar] [CrossRef]
  17. Holland, P.W.; Welsch, R.E. Robust regression using iteratively reweighted least-squares. Commun. Stat. Theory Methods 1977, 6, 813–827. [Google Scholar]
  18. Wager, T.D.; Keller, M.C.; Lacey, S.C.; Jonides, J. Increased sensitivity in neuroimaging analyses using robust regression. Neuroimage 2005, 26, 99–113. [Google Scholar] [CrossRef]
  19. Rousseeuw, P.; Yohai, V. Robust Regression by Means of S-Estimators; Springer: New York, NY, USA, 1984. [Google Scholar]
  20. Yohai, V.J. High breakdown-point and high efficiency robust estimates for regression. Ann. Stat. 1987, 15, 642–656. [Google Scholar] [CrossRef]
  21. Leiva, V.; Sanhueza, A.; Sen, P.K.; Araneda, N. M-procedures in the general multivariate nonlinear regression model. Pak. J. Stat. 2010, 26, 1–13. [Google Scholar]
  22. Sanhueza, A.; Sen, P.K.; Leiva, V. A robust procedure in nonlinear models for repeated measurements. Commun. Stat. Theory Methods 2009, 38, 138–155. [Google Scholar] [CrossRef]
  23. Maronna, R.; Morgenthaler, S. Robust regression through robust covariances. Commun. Stat. Theory Methods 1986, 15, 1347–1365. [Google Scholar] [CrossRef]
  24. Davies, P.L. Asymptotic behaviour of s-estimates of multivariate location parameters and dispersion matrices. Ann. Stat. 1987, 15, 1269–1292. [Google Scholar] [CrossRef]
  25. Croux, C.; Van Aelst, S.; Dehon, C. Bounded influence regression using high breakdown scatter matrices. Ann. Inst. Stat. Math. 2003, 55, 265–285. [Google Scholar] [CrossRef] [Green Version]
  26. Danilov, M.; Yohai, V.J.; Zamar, R.H. Robust estimation of multivariate location and scatter in the presence of missing data. J. Am. Stat. Assoc. 2012, 107, 1178–1186. [Google Scholar] [CrossRef]
  27. Agostinelli, C.; Leung, A.; Yohai, V.J.; Zamar, R.H. Robust estimation of multivariate location and scatter in the presence of cell-wise and case-wise contamination. TEST 2015, 24, 441–461. [Google Scholar] [CrossRef] [Green Version]
  28. Öllerer, V.; Alfons, A.; Croux, C. The shooting s-estimator for robust regression. Comput. Stat. 2016, 31, 829–844. [Google Scholar] [CrossRef] [Green Version]
  29. Fu, W.J. Penalized regressions: The bridge versus the lasso. J. Comput. Graph. Stat. 1998, 7, 397–416. [Google Scholar]
  30. Gervini, D.; Yohai, V.J. A class of robust and fully efficient regression estimators. Ann. Stat. 2002, 30, 583–616. [Google Scholar] [CrossRef]
  31. Farcomeni, A. Robust constrained clustering in presence of entry-wise outliers. Technometrics 2014, 56, 102–111. [Google Scholar] [CrossRef]
  32. Rousseeuw, P.J.; Croux, C. Alternatives to the median absolute deviation. J. Am. Stat. Assoc. 1993, 88, 1273–1283. [Google Scholar] [CrossRef]
  33. Efron, B. Bootstrap Methods: Another Look at the Jackknife. Ann. Stat. 1979, 7, 1–26. [Google Scholar] [CrossRef]
  34. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2017. [Google Scholar]
  35. Rocke, D.M. Robustness properties of s-estimators of multivariate location and shape in high dimension. Ann. Stat. 1996, 24, 1327–1345. [Google Scholar] [CrossRef]
  36. Maronna, R.A.; Martin, D.R.; Yohai, V.J. Robust Statistics: Theory and Methods; Wiley: New York, NY, USA, 2006. [Google Scholar]
  37. Huber, P.J.; Ronchetti, E.M. Robust Statistics; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  38. Maronna, R.A.; Yohai, V.J. Robust regression with both continuous and categorical predictors. J. Stat. Plan. Inference 2000, 89, 197–214. [Google Scholar] [CrossRef]
  39. Todorov, V.; Filzmoser, P. An object-oriented framework for robust multivariate analysis. J. Stat. Softw. 2009, 32, 1–47. [Google Scholar] [CrossRef] [Green Version]
  40. Leung, A.; Zhang, H.; Zamar, R. robreg3S: Three-Step Regression and Inference for Cellwise and Casewise Contamination; R Package Version 0.3; R Foundation for Statistical Computing: Vienna, Austria, 2015. [Google Scholar]
  41. Leung, A.; Danilov, M.; Yohai, V.J.; Zamar, R. GSE: Robust Estimation in the Presence of Cellwise and Casewise Contamination and Missing Data; R Package Version 4.1; R Foundation for Statistical Computing: Vienna, Austria, 2016. [Google Scholar]
  42. Dua, D.; Karra Taniskidou, E. UCI Machine Learning Repository; University of California: Irvine, CA, USA, 2017. [Google Scholar]
Figure 1. Approximation obtained by simulations of the comedian g ( ϱ ) = COM ( X , Y ) as a function of the correlation coefficient ϱ , where g ( 1 ) = ( Φ 1 ( 0.75 ) ) 2 . Source: the authors.
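The curve in Figure 1 can be reproduced by simulation; the sketch below checks the endpoint $\varrho = 1$, using the comedian definition $\mathrm{COM}(X, Y) = \mathrm{med}((X - \mathrm{med}\,X)(Y - \mathrm{med}\,Y))$ of Falk [10]. The sample size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def comedian(x, y):
    """COM(X, Y) = med((X - med(X)) * (Y - med(Y)))."""
    return np.median((x - np.median(x)) * (y - np.median(y)))

# At rho = 1 we have X = Y, so COM(X, X) = MAD(X)^2, which for standard
# normal data equals (Phi^{-1}(0.75))^2, roughly 0.4549, as in Figure 1
x = rng.standard_normal(200_000)
g1 = comedian(x, x)
```

Sweeping $\varrho$ over $[-1, 1]$ with correlated normal pairs traces out the whole curve $g(\varrho)$.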
Figure 2. MSE ¯ for indicated cell-wise contamination values k in models with p = 15 continuous covariates and n = 1000 . Source: the authors.
Figure 3. MSE ¯ for indicated case-wise contamination values k, in models with p = 15 continuous covariates and n = 1000 . Source: the authors.
Figure 4. MSE ¯ for indicated cell-wise and case-wise contamination values k in models with continuous and dummy covariates, and n = 500 . Source: the authors.
Figure 5. MSE ¯ for indicated cell-wise and case-wise contamination values k in models with continuous and dummy covariates and n = 1000 . Source: the authors.
Figure 6. MSE ¯ for indicated cell-wise and case-wise contamination values k in models with continuous covariates and n = 500 . Source: the authors.
Figure 7. CR ¯ for clean data and for cell-wise and case-wise contaminated data with the indicated n. Source: the authors.
Table 1. Consistent factor $b_X$ for the indicated distribution.

| Distribution of X | Notation | $b_X$ |
| --- | --- | --- |
| Exponential | $X \sim \mathrm{Exp}(\lambda)$ | $1/\log((1+\sqrt{5})/2)$ |
| Logistic | $X \sim \mathrm{Logistic}(\mu, s)$ | $\sqrt{3}\,\pi/(3\log(3))$ |
| Normal | $X \sim \mathrm{N}(\mu, \sigma)$ | $1/\Phi^{-1}(3/4)$ |
| Student-t | $X \sim t(\nu)$ * | $\sqrt{\nu/(\nu-2)}/m_{\mathrm{median}}(\nu)$, $\nu > 2$ |
| Uniform | $X \sim \mathrm{U}(a, b)$ | $2/\sqrt{3}$ |
| Weibull | $X \sim \mathrm{Wei}(\alpha, \beta)$ * | $\alpha\sqrt{\Gamma(1+2/\beta) - \Gamma(1+1/\beta)^2}/m_{\mathrm{median}}(\alpha, \beta)$ |

* $m_{\mathrm{median}}$ is the solution in $m$ of the non-linear equations for $t(\nu)$ and $\mathrm{Wei}(\alpha, \beta)$ given, respectively, by
$$\frac{2 m\, \Gamma\big(\frac{\nu+1}{2}\big)}{\sqrt{\pi \nu}\, \Gamma(\nu/2)}\; {}_2F_1\Big(\frac{1}{2}, \frac{\nu+1}{2}; \frac{3}{2}; -\frac{m^2}{\nu}\Big) - \frac{1}{2} = 0,$$
$$\exp\Big(-\Big((\log 2)^{1/\beta} - \frac{m}{\alpha}\Big)^{\beta}\Big) - \exp\Big(-\Big((\log 2)^{1/\beta} + \frac{m}{\alpha}\Big)^{\beta}\Big) - \frac{1}{2} = 0.$$
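In practice, the footnote equations amount to solving $F(\mathrm{med} + m) - F(\mathrm{med} - m) = 1/2$ for the population MAD $m$ directly from the CDF. A sketch using generic `scipy.stats` distributions (this is not the paper's code, and the bracket endpoints are assumptions):

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def b_factor(dist):
    """Consistency factor b_X = SD(X)/MAD(X), where the population MAD m
    solves F(median + m) - F(median - m) = 1/2 for a frozen scipy.stats
    continuous distribution."""
    med = dist.median()
    m = brentq(lambda t: dist.cdf(med + t) - dist.cdf(med - t) - 0.5,
               1e-10, 1e3)  # bracket assumed wide enough for the MAD
    return dist.std() / m

b_normal = b_factor(stats.norm())      # 1/Phi^{-1}(3/4), about 1.4826
b_uniform = b_factor(stats.uniform())  # 2/sqrt(3), about 1.1547
b_t5 = b_factor(stats.t(5))            # numerical case from Table 1
```

For symmetric distributions the equation reduces to $m = F^{-1}(3/4) - F^{-1}(1/2)$, which recovers the closed forms in the table.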
Table 2. Maximum $\overline{\mathrm{MSE}}$ in all of the considered scenarios for models with continuous covariates.

| Estimator | Clean, n = 500 | Clean, n = 1000 | 1% cell-wise, n = 500 | 1% cell-wise, n = 1000 | 5% cell-wise, n = 500 | 5% cell-wise, n = 1000 | 9% cell-wise, n = 500 | 9% cell-wise, n = 1000 | Case-wise, n = 500 | Case-wise, n = 1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| C3SFull | 0.0037 | 0.0017 | 0.0054 | 0.0026 | 0.5017 | 0.4509 | 1.7417 | 1.7287 | 0.0042 | 0.0019 |
| C3S | 0.0037 | 0.0017 | 0.0055 | 0.0026 | 0.5182 | 0.4709 | 1.7999 | 1.7671 | 0.0042 | 0.0019 |
| 3S | 0.0028 | 0.0014 | 0.0105 | 0.0064 | 0.8682 | 0.9009 | 2.1563 | 1.9819 | 0.0033 | 0.0015 |
| 2S | 0.0027 | 0.0014 | 0.0092 | 0.0062 | 3.1863 | 3.0689 | 4.3996 | 4.3861 | 0.0031 | 0.0014 |
| LS | 0.0026 | 0.0013 | 2.3581 | 2.3459 | 4.7799 | 4.7558 | 5.4603 | 5.4615 | 1.3299 | 1.3141 |
Table 3. Maximum $\overline{\mathrm{MSE}}$ in all scenarios for models with continuous and dummy covariates.

| Estimator | Clean, n = 500 | Clean, n = 1000 | 1% cell-wise, n = 500 | 1% cell-wise, n = 1000 | 5% cell-wise, n = 500 | 5% cell-wise, n = 1000 | 9% cell-wise, n = 500 | 9% cell-wise, n = 1000 | Case-wise, n = 500 | Case-wise, n = 1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| C3SFull | 0.0029 | 0.0013 | 0.0039 | 0.0020 | 0.1653 | 0.1212 | 1.5194 | 1.4519 | 0.0031 | 0.0015 |
| C3S | 0.0028 | 0.0013 | 0.0039 | 0.0020 | 0.1700 | 0.1238 | 1.5572 | 1.4845 | 0.0031 | 0.0015 |
| 3S | 0.0067 | 0.0041 | 0.0109 | 0.0076 | 0.5196 | 0.5874 | 1.8069 | 1.7518 | 0.0077 | 0.0051 |
| 2S | 0.0020 | 0.0010 | 0.0042 | 0.0025 | 1.2884 | 1.2638 | 3.8409 | 3.8522 | 0.0023 | 0.0011 |
| LS | 0.0018 | 0.0009 | 2.4618 | 2.4173 | 4.9249 | 4.9155 | 5.6736 | 5.5816 | 0.5230 | 0.5037 |
Table 4. $\overline{\mathrm{MSE}}$ for the indicated estimator with clean data and cell-wise contaminated data.

| Estimator | Clean | Cell-wise, k = 1 | Cell-wise, k = 5 | Cell-wise, k = 10 |
| --- | --- | --- | --- | --- |
| C3SFull | 0.0050 | 0.0295 | 0.0180 | 0.0382 |
| C3S | 0.0040 | 0.0360 | 0.0173 | 0.0392 |
| 3S | 0.0094 | 0.1712 | 0.0242 | 0.0494 |
| 2S | 0.0011 | 6.4893 | 5.3185 | 5.8407 |
| LS | 0.0006 | 5.2217 | 6.4807 | 6.6118 |
Table 5. Average CIL for clean data and for cell-wise and case-wise contamination.

| Size (n) | Clean, C3S | Clean, 3S | 1% cells (k = 9), C3S | 1% cells (k = 9), 3S | 5% cells (k = 6), C3S | 5% cells (k = 6), 3S | 9% cells (k = 3), C3S | 9% cells (k = 3), 3S | 3% cases (k = 3), C3S | 3% cases (k = 3), 3S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 150 | 0.3959 | 0.3507 | 0.3818 | 0.3401 | 0.4496 | 0.5196 | 1.7126 | 1.8352 | 0.3957 | 0.3537 |
| 300 | 0.2824 | 0.2467 | 0.2740 | 0.2414 | 0.2580 | 0.3336 | 1.3654 | 1.2992 | 0.2821 | 0.2498 |
| 500 | 0.2181 | 0.1915 | 0.2118 | 0.1869 | 0.1917 | 0.2533 | 1.1206 | 1.0047 | 0.2189 | 0.1944 |
| 1000 | 0.1543 | 0.1357 | 0.1493 | 0.1323 | 0.1354 | 0.1767 | 0.8204 | 0.7069 | 0.1546 | 0.1375 |
| 5000 | 0.0686 | 0.0605 | 0.0670 | 0.0596 | 0.0609 | 0.0785 | 0.3779 | 0.3150 | 0.0694 | 0.0618 |
Table 6. Description of the variables in the airfoil self-noise data set.

| Variable | Label | Units | Type | Minimum | Mean | Maximum |
| --- | --- | --- | --- | --- | --- | --- |
| X1 | Frequency | Hertz | Covariate | 200 | 2886.38 | 20000 |
| X2 | Angle of attack | Degrees | Covariate | 0.0000 | 6.7823 | 22.2000 |
| X3 | Chord length | Meters | Covariate | 0.0254 | 0.1365 | 0.3048 |
| X4 | Free stream velocity | Meters per second | Covariate | 31.7000 | 50.8607 | 71.3000 |
| X5 | Suction side displacement thickness | Meters | Covariate | 0.0004 | 0.0111 | 0.0584 |
| Y | Scaled sound pressure level | Decibels | Response | 103.38 | 124.836 | 140.987 |
Table 7. Estimates and p-values of the regression coefficients for the airfoil self-noise data set.

| Variable | C3SFull Coeff. | p-value | C3S Coeff. | p-value | 3S Coeff. | p-value | 2S Coeff. | p-value | LS Coeff. | p-value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| log(X1) | −0.0319 | <0.0001 | −0.0319 | <0.0001 | −0.0311 | <0.0001 | −0.0311 | <0.0001 | −0.0290 | <0.0001 |
| X2 | −0.0032 | <0.0001 | −0.0032 | <0.0001 | −0.0034 | <0.0001 | −0.0034 | <0.0001 | −0.0032 | <0.0001 |
| X3 | −0.3299 | <0.0001 | −0.3299 | <0.0001 | −0.3026 | <0.0001 | −0.3026 | <0.0001 | −0.2828 | <0.0001 |
| X4 | 0.0006 | <0.0001 | 0.0006 | <0.0001 | 0.0006 | <0.0001 | 0.0006 | <0.0001 | 0.0007 | <0.0001 |
| X5 | −0.3008 | 0.7186 | −0.3020 | 0.7165 | −0.8505 | 0.2110 | −0.8561 | 0.2690 | −1.3347 | <0.0001 |
Table 8. Pairwise squared norm distance between the estimates for the airfoil self-noise data set.

|  | C3SFull | C3S | 3S | 2S | LS |
| --- | --- | --- | --- | --- | --- |
| C3SFull | – | 4.3055 × 10⁻⁸ | 0.0107 | 0.0107 | 0.0389 |
| C3S |  | – | 0.0107 | 0.0107 | 0.0389 |
| 3S |  |  | – | 3.0751 × 10⁻⁶ | 0.0130 |
| 2S |  |  |  | – | 0.0128 |
| LS |  |  |  |  | – |

Share and Cite

Velasco, H.; Laniado, H.; Toro, M.; Leiva, V.; Lio, Y. Robust Three-Step Regression Based on Comedian and Its Performance in Cell-Wise and Case-Wise Outliers. Mathematics 2020, 8, 1259. https://doi.org/10.3390/math8081259