Article

Separation of the Linear and Nonlinear Covariates in the Sparse Semi-Parametric Regression Model in the Presence of Outliers †

1 Department of Statistics, School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Tehran P.O. Box 14155-6455, Iran
2 Department of Statistics, Faculty of Mathematics, Statistics and Computer Sciences, Semnan University, Semnan P.O. Box 35195-363, Iran
3 Institute of Mathematical Sciences, Faculty of Science, Universiti Malaya, Kuala Lumpur 50603, Malaysia
* Authors to whom correspondence should be addressed.
This paper is an extended version of our paper published in the Proceedings of the 16th Iranian Statistics Conference, Tehran, Iran, 24–26 August 2022; University of Mazandaran: Babolsar, Mazandaran, Iran, 2022; pp. 55–61.
Mathematics 2024, 12(2), 172; https://doi.org/10.3390/math12020172
Submission received: 21 August 2023 / Revised: 14 October 2023 / Accepted: 24 October 2023 / Published: 5 January 2024
(This article belongs to the Topic Mathematical Modeling)

Abstract

Determining which predictor variables have a non-linear effect and which have a linear effect on the response variable is crucial in additive semi-parametric models. This issue has been extensively investigated by many researchers in the area of semi-parametric linear additive models, and various separation methods have been proposed. A common issue that might affect both the estimation and the separation results is the existence of outliers among the observations. To reduce the sensitivity of the results to extreme observations, robust estimation approaches are frequently applied. We propose a robust method for simultaneously identifying the linear and nonlinear components of a semi-parametric linear additive model, even in the presence of outliers in the observations. Additionally, the model is sparse, in that it can determine which explanatory variables are ineffective by giving exact zero estimates for their coefficients. To assess the effectiveness of the proposed method, a comprehensive Monte Carlo simulation study is conducted, along with an application to the Boston housing prices dataset.
MSC:
62G05; 62J07; 62J05

1. Introduction

Semi-parametric linear additive (SLA) models combine the flexibility of non-parametric regression models with the simplicity of linear regression models. These models are broadly used as a popular tool for data analysis in many fields. In SLA models, the mean of the response variable is assumed to depend on some explanatory variables linearly, while it relates to the remaining explanatory variables non-linearly in an additive form.
Suppose $y = (y_1, \dots, y_n)^T$ is the vector of the response variable and $X = (x_1, \dots, x_n)^T$ is the $n \times p$ design matrix with $p$ covariates and $n$ observations $x_i^T = (x_{i1}, x_{i2}, \dots, x_{ip})$. Without loss of generality, assume that $x_i$ is partitioned into $x_{i(1)}^T = (x_{i1}, x_{i2}, \dots, x_{iq})$ and $x_{i(2)}^T = (x_{i(q+1)}, \dots, x_{ip})$ for some $q \in \{1, \dots, p-1\}$. Then, the semi-parametric linear additive model (see, e.g., [1]) is defined as
$$y_i = x_{i(1)}^T \beta + \sum_{j=q+1}^{p} f_j(x_{ij}) + \epsilon_i, \quad i = 1, \dots, n, \tag{1}$$
where $\beta = (\beta_1, \beta_2, \dots, \beta_q)^T$ is a $q$-dimensional vector of unknown parameters, $f_{q+1}, \dots, f_p$ are unknown smooth functions, and the $\epsilon_i$'s are random error terms, which are presumed to be independent of the $x_i$'s. It is assumed that the response and the covariates are centered, so the intercept term is omitted without loss of generality.
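As a concrete illustration of a model of form (1), the following minimal R sketch fits a semi-parametric additive model with two linear and two smooth (spline) terms using the mgcv package; the simulated data, variable names, and the choice of which covariates enter linearly are illustrative assumptions only and are not part of the proposed method.

```r
# Minimal sketch (not the proposed estimator): fitting an SLA-type model
# y = b1*x1 + b2*x2 + f3(x3) + f4(x4) + error with mgcv.
library(mgcv)

set.seed(1)
n   <- 200
dat <- data.frame(x1 = runif(n), x2 = runif(n), x3 = runif(n), x4 = runif(n))
dat$y <- 3 * dat$x1 + 2 * dat$x2 + 5 * sin(2 * pi * dat$x3) +
         10 * dat$x4 * (1 - dat$x4) + rnorm(n, sd = 0.5)

# x1 and x2 enter the linear part; x3 and x4 enter through smooth terms s().
fit <- gam(y ~ x1 + x2 + s(x3) + s(x4), data = dat)
summary(fit)
```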
There are several approaches for the estimation of non-parametric additive models, including the back-fitting technique (see [2]), simultaneous estimation and optimization [3,4,5,6], the mixed-model approach [1,7,8], and the boosting approach [9,10]. Ref. [5] presents a review of some of these methods up to 2006, and [11] performs several comparisons among these techniques. The problem of variable selection and penalized estimation in additive models has been investigated by many researchers [12,13,14,15,16,17,18,19,20,21,22].
An essential concern in practice is to identify the linear and nonlinear parts of the SLA model, i.e., to decide whether each explanatory variable should enter the linear or the nonlinear part of the model. Ref. [23] studied an additive regression model as the standard model by assuming that each of the functions is decomposed into a linear and a nonlinear part. Their proposed estimation approach was a penalized regression scheme based on a group mini-max concave penalty. Ref. [24] considered the additive model and sought to isolate the linear and nonlinear predictors by using two group penalty functions, one enforcing sparsity and the other enforcing linearity on the components. Ref. [25] introduced a model similar to that of [23], while imposing the LASSO and group LASSO penalty functions on the coefficients of the linear parts and on the coefficients of the spline estimator of the nonlinear part, respectively. Ref. [26] introduced a similar additive model and enforced linearity on the spline approximation of the functions using a group penalty on the second derivative of the B-splines. There are also further contributions on the problem of structure recognition and separation of the nonlinear and linear parts of the SLA model [27,28]. Some details of the literature on the proposed separation approaches are given in Section 2.
The presence of outliers, which are unusual observations that fail to follow the pattern of the bulk of the observations, is a frequent problem in fitting models to datasets. In such situations, robust regression approaches are used to mitigate the undesirable effects of the outliers. Some of the most popular robust regression approaches are M-estimation, S-estimation, the least median of squares, and the least trimmed squares; see [29] for more details. Robust methods are well-known statistical techniques for overcoming the complications caused by outliers. The least trimmed squares (LTS) estimator, suggested by Rousseeuw and Leroy [30], is one of the most popular robust regression techniques, as it minimizes the sum of the $h$ smallest squared residuals instead of the whole sum, for a specified positive integer trimming parameter $h \leq n$. The LTS estimator can attain the maximum possible breakdown point (50%) [31]. Several works have studied robust estimation for semi-parametric and non-parametric linear models (see, e.g., [32,33,34]).
In this paper, we consider the effect of outliers on simultaneous separation and estimation methods in SLA models, and we develop an LTS version of the separation and sparse estimation approach suggested by [24]. The paper is organized as follows. Section 2 presents a literature review of some simultaneous separation and estimation approaches. Section 3 contains the general LTS version of the approaches presented in Section 2; we then apply the LTS version of the method proposed by [24] in our implementation. The finite-sample breakdown point of the proposed model is then established, together with a computational algorithm. Comprehensive simulation studies are conducted in Section 4, in which many different criteria are evaluated for six competitive models. The proposed approach is then applied to the Boston housing prices dataset, along with the prediction performance of the different methods. At the same time, we reveal the effect of the outliers using partial residual plots for all competitive schemes.

2. Literature Review of the Separation Methods

In this section, we review the available penalized models that separate the nonlinear and linear parts of the semi-parametric regression model in the literature.

2.1. Group Penalization of the Spline Coefficients

Ref. [23] studied the following additive regression model as the guideline model:
$$y = \sum_{j=1}^{p} f_j(x_j) + \epsilon, \tag{2}$$
and they assumed that each of the functions $f_j$ has a linear and a nonlinear part as follows:
$$f_j(x) = \beta_0 + \beta_j x + \sum_{k=1}^{K_n} \theta_{jk} B_{jk}(x),$$
where $B_{j1}, \dots, B_{jK_n}$ are basis functions. They suggested estimating the model parameters by minimizing the following penalized objective function:
$$L(\beta, \theta) = \frac{1}{2n} \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} - \sum_{j=1}^{p} \sum_{k=1}^{K_n} \theta_{jk} B_{jk}(x_{ij}) \right)^2 + \sum_{j=1}^{p} \rho_\gamma\left( \|\theta_{jn}\|_{A_j}; \sqrt{K_n}\,\lambda \right), \tag{3}$$
where $\rho(\cdot)$ is a penalty function depending on the penalty parameter $\lambda \geq 0$ and a regularization parameter $\gamma$. In objective function (3), Huang et al. [23] considered the mini-max concave penalty function, as follows:
$$\rho_\gamma(t; \lambda) = \lambda \int_0^t \left( 1 - x/(\gamma\lambda) \right)_+ \, dx, \quad t \geq 0.$$
Under some conditions, they proved the consistency of the proposed estimators and studied the correctness of the separation performed by their estimators.
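The integral defining the mini-max concave penalty above has a simple closed form, which the following short R function evaluates; this is a generic illustration of the penalty function itself (with arbitrary example values of the tuning parameters), not code taken from [23].

```r
# Mini-max concave penalty rho_gamma(t; lambda) = lambda * Int_0^t (1 - x/(gamma*lambda))_+ dx,
# written in closed form: lambda*t - t^2/(2*gamma) for t <= gamma*lambda,
# and the constant gamma*lambda^2/2 once t exceeds gamma*lambda.
mcp <- function(t, lambda, gamma) {
  t <- abs(t)
  ifelse(t <= gamma * lambda,
         lambda * t - t^2 / (2 * gamma),
         0.5 * gamma * lambda^2)
}

# Example: the penalty increases and then flattens out beyond gamma * lambda.
curve(mcp(x, lambda = 1, gamma = 3), from = 0, to = 5,
      xlab = "t", ylab = expression(rho[gamma](t)))
```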

2.2. Affine Group Penalization of the Spline Coefficients

Ref. [24] considered model (2), while relaxing the assumption that each of the functions $f_j$ has a linear and a nonlinear part; instead, they separated the linear and nonlinear covariates using the following affine group penalized model:
$$L(\theta) = \frac{1}{2} \sum_{i=1}^{n} \left( y_i - \theta_0 - \sum_{j=1}^{p} \sum_{k=1}^{K_n} \theta_{jk} B_{jk}(x_{ij}) \right)^2 + n\lambda_1 \sum_{j=1}^{p} w_{1j} \|\theta_j\|_{A_j} + n\lambda_2 \sum_{j=1}^{p} w_{2j} \|\theta_j\|_{D_j}, \tag{4}$$
where $\lambda_1 \geq 0$ and $\lambda_2 \geq 0$ are penalty parameters, the $w_{1j}$'s and $w_{2j}$'s are weights chosen appropriately in order to reach suitable model selection consistency, and, for any $K_n \times K_n$ matrix $B$, $\|\theta_j\|_B = (\theta_j^T B \theta_j)^{1/2}$. Ref. [24] suggests the use of $w_{1j} = 1/\|b_j\|_{A_j}$ and $w_{2j} = 1/\|b_j\|_{D_j}$, for an initial estimate $b_j$ of $\theta_j$, $j = 1, \dots, p$. To enforce sparsity and linearity on the functions $f_1, \dots, f_p$, they require that $\|\theta_j\|_{A_j} = 0$ if and only if $\sum_k \theta_{jk} B_{jk}(x) \equiv 0$, and $\|\theta_j\|_{D_j} = 0$ if and only if $\sum_k \theta_{jk} B_{jk}(x)$ is a linear function of $x$. Letting
$$A_j = \left[ \int_0^1 B_{jk}(x) B_{jk'}(x) \, dx \right]_{k, k' = 1}^{K_n} \quad \text{and} \quad D_j = \left[ \int_0^1 B''_{jk}(x) B''_{jk'}(x) \, dx \right]_{k, k' = 1}^{K_n}$$
results in $\|\theta_j\|_{A_j} = \left\| \sum_k \theta_{jk} B_{jk} \right\|$ and $\|\theta_j\|_{D_j} = \left\| \sum_k \theta_{jk} B''_{jk} \right\|$, the $L_2$ norms of the spline approximation and of its second derivative, respectively. They proved that the proposed estimators are asymptotically normal.
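As an illustration of how the matrices $A_j$ and $D_j$ can be computed in practice, the sketch below builds a cubic B-spline basis on $[0, 1]$ with splines::splineDesign and approximates the two Gram matrices by a Riemann sum on a fine grid; the basis dimension and grid size are arbitrary choices made for the example.

```r
# Sketch: Gram matrices A_j (basis functions) and D_j (their second derivatives)
# for a cubic B-spline basis on [0, 1], approximated by a Riemann sum.
library(splines)

Kn    <- 8                                      # number of basis functions (illustrative)
ord   <- 4                                      # order 4 = cubic B-splines
inner <- seq(0, 1, length.out = Kn - ord + 2)   # knots so that the basis has Kn columns
knots <- c(rep(0, ord - 1), inner, rep(1, ord - 1))

grid <- seq(0, 1, length.out = 1000)
B    <- splineDesign(knots, grid, ord = ord)                # basis values
B2   <- splineDesign(knots, grid, ord = ord, derivs = 2)    # second derivatives

dx  <- diff(grid)[1]
A_j <- t(B)  %*% B  * dx    # approximates integral of B_jk(x) B_jk'(x) dx
D_j <- t(B2) %*% B2 * dx    # approximates integral of B''_jk(x) B''_jk'(x) dx

# Norms ||theta||_A and ||theta||_D for an arbitrary coefficient vector:
theta  <- rnorm(Kn)
norm_A <- sqrt(drop(t(theta) %*% A_j %*% theta))
norm_D <- sqrt(drop(t(theta) %*% D_j %*% theta))
```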

2.3. LASSO and Group LASSO Penalization of the Linear and Nonlinear Coefficients

Ref. [25] studied the following scheme as a baseline model:
$$y = Z\alpha + \sum_{j=1}^{p} f_j(x_j) + \epsilon, \tag{5}$$
in which they assumed that there are some covariates $Z$ that must be included in the linear part. Similar to [23], they assumed that
$$f_j(x) = \beta_0 + \beta_j x + \sum_{k=1}^{K_n} \theta_{jk} B_{jk}(x).$$
Then, they enforced the LASSO and group LASSO penalty functions to the coefficients of the linear parts and the coefficients of the spline estimator of the nonlinear part, respectively, as follows:
$$L(\beta, \theta) = \sum_{i=1}^{n} \left( y_i - \beta_0 - Z_i^T \alpha - \sum_{j=1}^{p} \beta_j x_{ij} - \sum_{j=1}^{p} \sum_{k=1}^{K_n} \theta_{jk} B_{jk}(x_{ij}) \right)^2 + \lambda_1 \|\alpha\|_1 + \lambda_2 \|\beta\|_1 + \lambda_3 \sum_{j=1}^{p} \|\theta_j\|. \tag{6}$$

2.4. Group Penalization of the Second Derivative of the B-Splines

Ref. [26] considered (2) as their baseline model and imposed linearity on the spline approximation of the functions $f_1, \dots, f_p$ by minimizing the following penalized objective function, which uses a group penalty on the second derivative of the B-splines:
$$L(\theta) = \sum_{i=1}^{n} \sum_{\ell=1}^{q} \rho_{\tau_\ell}\left( y_i - \theta_0 - \sum_{j=1}^{p} \sum_{k=1}^{K_n} \theta_{jk} B_{jk}(x_{ij}) \right) + nq \sum_{j=1}^{p} P_\lambda\left( \Big\| \sum_{k} \theta_{jk} B''_{jk} \Big\| \right), \tag{7}$$
where $\rho_{\tau_\ell}(\cdot)$ is the quantile regression loss function at level $\tau_\ell$, $\ell = 1, \dots, q$.

3. Robust Penalized Estimation Methods

All of the penalized loss functions (3), (4), (6), and (7) can be written in the following general form:
$$L_m(\eta) = \sum_{i=1}^{n} L_i^m(y_i, \eta) + n \sum_{j=1}^{p} P_j^m(\eta_j), \quad m = 1, \dots, 4, \tag{8}$$
where $L_i^m$ is the loss function of the $i$th observation and $P_j^m$ is the penalty function of the $j$th parameter, $i = 1, \dots, n$, $j = 1, \dots, p$, in the $m$th model, $m = 1, \dots, 4$.
The least trimmed squares (see [35]) penalized loss function associated with the mth model is then as follows:
$$Q_m(u, \eta) = \sum_{i=1}^{n} u_i L_i^m(y_i, \eta) + h \sum_{j=1}^{p} P_j^m(\eta_j), \quad m = 1, \dots, 4, \tag{9}$$
where $u_i$ is a binary indicator specifying whether the $i$th observation is treated as a clean observation or as an outlier, such that $\sum_{i=1}^{n} u_i = h$, $u_i \in \{0, 1\}$ for $i = 1, \dots, n$, and $h \leq n$ is a starting guess for the number of clean observations. Let $U$ be the diagonal matrix with diagonal elements $u = (u_1, u_2, \dots, u_n)$.
The resulting robust sparse semi-parametric linear estimator is obtained by the following optimization problem:
$$\min_{\eta, u} Q_m(u, \eta) \quad \text{s.t.} \quad \sum_{i=1}^{n} u_i = h, \; u_i \in \{0, 1\}. \tag{10}$$
In this work, we only consider the robust version of penalized loss function (4):
$$Q(u, \theta) = \frac{1}{2} \sum_{i=1}^{n} u_i \left( y_i - \theta_0 - \sum_{j=1}^{p} \sum_{k=1}^{K_n} \theta_{jk} B_{jk}(x_{ij}) \right)^2 + h\lambda_1 \sum_{j=1}^{p} w_{1j} \|\theta_j\|_{A_j} + h\lambda_2 \sum_{j=1}^{p} w_{2j} \|\theta_j\|_{D_j}. \tag{11}$$
Hereafter, we refer to scheme (4) as the sparse semi-parametric linear additive (SSLA) model and to scheme (11) as the robust sparse semi-parametric linear additive (RSSLA) model. We also call the special case $\lambda_2 = 0$ of schemes (4) and (11) the sparse nonlinear additive (SNLA) and robust sparse nonlinear additive (RSNLA) models, respectively, because setting $\lambda_2 = 0$ turns the schemes into purely nonlinear additive forms. As an alternative competitor for these schemes, the simple linear LASSO regression is also considered, which is called the sparse linear (SL) model, and its robust version based on the LTS method is called the robust sparse linear (RSL) model in this research.
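To make the roles of $u$, $h$, $\lambda_1$, and $\lambda_2$ in objective (11) concrete, the following R function evaluates (11) for given spline design matrices, Gram matrices, and weights; it is a plain transcription of the formula under the stated input conventions, not the authors' released code.

```r
# Sketch: evaluate the RSSLA objective (11).
#  y      : response vector (length n)
#  Blist  : list of p spline design matrices, each n x Kn
#  theta0 : intercept; Theta: Kn x p matrix of spline coefficients
#  u      : 0/1 indicator vector with sum(u) = h
#  Alist, Dlist : Gram matrices A_j and D_j; w1, w2 : weight vectors
rssla_objective <- function(y, Blist, theta0, Theta, u,
                            lambda1, lambda2, w1, w2, Alist, Dlist) {
  p <- length(Blist)
  h <- sum(u)
  fitted <- theta0 + Reduce(`+`, lapply(seq_len(p),
                                        function(j) Blist[[j]] %*% Theta[, j]))
  loss <- 0.5 * sum(u * (y - fitted)^2)              # trimmed squared-error loss
  pen1 <- sum(sapply(seq_len(p), function(j)         # sparsity penalty ||theta_j||_{A_j}
    w1[j] * sqrt(drop(t(Theta[, j]) %*% Alist[[j]] %*% Theta[, j]))))
  pen2 <- sum(sapply(seq_len(p), function(j)         # linearity penalty ||theta_j||_{D_j}
    w2[j] * sqrt(drop(t(Theta[, j]) %*% Dlist[[j]] %*% Theta[, j]))))
  loss + h * lambda1 * pen1 + h * lambda2 * pen2
}
```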

3.1. The Breakdown Point of the RSSLA Model

The RSSLA estimator is obtained as
$$\hat{\theta}_{\mathrm{RSSLA}} = \underset{\theta}{\operatorname{argmin}} \; \min_{u \in E_h} Q(u, \theta),$$
where
$$E_h = \left\{ u \; ; \; u_i \in \{0, 1\}, \; i = 1, 2, \dots, n, \; \sum_{i=1}^{n} u_i = h \right\}.$$
Conventionally, we take $h = \lceil n\alpha \rceil$, where $\alpha \in (0, 1)$ is the assumed fraction of clean (non-outlying) observations, so that $1 - \alpha$ is a starting guess for the fraction of outlier points. Some researchers propose taking $\alpha = 0.75$ (see [36] for more details). Others have proposed $h = \lfloor n/2 \rfloor + \lfloor (p+1)/2 \rfloor$. The finite-sample breakdown point (FBP; see, e.g., [29]) measures the resistance of an estimator to contamination. For the complete sample $Z$, the FBP of an estimator $S = S(Z)$ is given by
$$\mathrm{BP}(S; Z) = \min\left\{ \frac{m}{n} \; : \; \sup_{Z^*} \|S(Z^*)\|_2 = \infty \right\},$$
where $Z^*$ is a corrupted sample obtained from $Z$ by replacing $m$ of the $n$ observations with arbitrary points. In the following theorem, the FBP of the RSSLA estimator is established.
Theorem 1. 
The FBP of the $\hat{\theta}_{\mathrm{RSSLA}}$ estimator is
$$\mathrm{FBP}(\hat{\theta}_{\mathrm{RSSLA}}; y, X) = \frac{n - h + 1}{n}.$$
Proof. 
Let $(y^*, X^*)$ be a corrupted sample obtained by replacing the last $m \leq n - h$ sample points. Then the number of clean points in $(y^*, X^*)$ is $n - m \geq h$. For an arbitrary such sample $(y^*, X^*)$, we can write
$$\min_{u \in E_h} Q(u, 0) = \min_{u \in E_h} \tfrac{1}{2}\, {y^*}^T U y^* \leq \min_{u \in E_h} \tfrac{1}{2}\, y^T U y \leq h M_y^2,$$
where $M_y = \max_{i=1,\dots,n} |y_i|$.
Let $\theta$ be such that $h\lambda_1 \sum_{j=1}^{p} w_{1j} \|\theta_j\|_{A_j} + h\lambda_2 \sum_{j=1}^{p} w_{2j} \|\theta_j\|_{D_j} \geq h M_y^2 + 1$; then
$$\min_{u \in E_h} Q(u, \theta) \geq h\lambda_1 \sum_{j=1}^{p} w_{1j} \|\theta_j\|_{A_j} + h\lambda_2 \sum_{j=1}^{p} w_{2j} \|\theta_j\|_{D_j} \geq h M_y^2 + 1 > \min_{u \in E_h} Q(u, 0).$$
Since $\min_{u \in E_h} Q(u, \hat{\theta}_{\mathrm{RSSLA}}) \leq \min_{u \in E_h} Q(u, 0)$, we can write
$$\left. h\lambda_1 \sum_{j=1}^{p} w_{1j} \|\theta_j\|_{A_j} + h\lambda_2 \sum_{j=1}^{p} w_{2j} \|\theta_j\|_{D_j} \right|_{\theta = \hat{\theta}_{\mathrm{RSSLA}}} \leq h M_y^2 + 1,$$
and hence $\mathrm{BP}(\hat{\theta}_{\mathrm{RSSLA}}; y, X) \geq \frac{n - h + 1}{n}$.
Let $\Phi$ be the $n \times (p K_n)$ matrix of the $B_{jk}(x_{ij})$'s, $i = 1, \dots, n$, $j = 1, \dots, p$, $k = 1, \dots, K_n$. Change the last $m = n - h + 1$ observations of $(y, X)$ such that the last $m$ rows of $(y, \Phi)$ become $(aM, a e^T)$, with $M > 0$ and $a > 0$, where $e = (e_{j_1}, \dots, e_{j_p})^T$, in which $e_i$ is a vector with 1 as its $i$th element and zero elsewhere, and
$$\max(h - m, 0) \left( \max_{i=1,\dots,n} |y_i| + M \max_{i=1,\dots,n} \|\Phi_i\| \right)^2 + h\lambda_1 (M/p) \sum_{i=1}^{p} w_{1i} \|e_{j_i}\|_{A_i} + h\lambda_2 (M/p) \sum_{i=1}^{p} w_{2i} \|e_{j_i}\|_{D_i} \leq a^2.$$
Let $P(\theta) = h\lambda_1 \sum_{j=1}^{p} w_{1j} \|\theta_j\|_{A_j} + h\lambda_2 \sum_{j=1}^{p} w_{2j} \|\theta_j\|_{D_j}$ and consider the point $\theta_M = (M/p) e$. Now, for the last $m$ sample points, since $y - \Phi\theta_M = aM - aM = 0$, it can be written that
$$\min_{u \in E_h} Q(u, \theta_M) = \begin{cases} \min_{u \in E_{h-m}} (y - \Phi\theta_M)^T U (y - \Phi\theta_M) + P(\theta_M), & h > m, \\ P(\theta_M), & \text{otherwise}. \end{cases}$$
Therefore,
$$\min_{u \in E_h} Q(u, \theta_M) \leq \max(h - m, 0) \left( \max_{i=1,\dots,n} |y_i| + M \max_{i=1,\dots,n} \|\Phi_i\| \right)^2 + P(\theta_M) \leq a^2.$$
Also, for the corrupted sample, we can write
$$\min_{u \in E_h} Q(u, \theta) \geq (Ma - a\,\theta^T e)^2,$$
since at least one of the last $m$ points of the corrupted sample must be in the set of the $h$ smallest residuals. Now, considering $\theta$ such that $|\theta_1| \leq M/2$, it can be seen that
$$\min_{u \in E_h} Q(u, \theta) \geq a^2 (M - \theta^T e)^2 > a^2,$$
since $\theta^T e \leq |\theta_1| \leq M/2$, which contradicts the bound $\min_{u \in E_h} Q(u, \theta_M) \leq a^2$. Thus, we deduce that
$$|\hat{\theta}_{\mathrm{RSSLA}, 1}| > M/2,$$
which means that breakdown occurs as $M$ tends to infinity, i.e., $\mathrm{BP}(\hat{\theta}_{\mathrm{RSSLA}}; y, X) \leq \frac{n - h + 1}{n}$, and the proof is completed. □

3.2. Computational Penalized LTS Algorithm

To find the optimal $u^*$, we would have to search for the minimum of $Q$ over all $\binom{n}{h}$ subsets of size $h$ of the complete set $\{1, \dots, n\}$. Thus, for moderately large sample sizes, obtaining the exact optimum may require too much time and memory. To speed up the fitting of the RSSLA model, an analog of the FAST-LTS algorithm developed by [35] is proposed.
Let $u^k \in E_h$ be the indicator vector obtained at iteration $k$ and $\hat{\theta}_{\mathrm{RSSLA}}^{(k)}$ be the argument that minimizes $Q(u^k, \theta)$ in the $k$th iteration. Then,
$$u_i^{k+1} = \begin{cases} 1, & (e_i^k)^2 \in \left\{ (e^k)_{j:n}^2 \; ; \; j = 1, \dots, h \right\}, \\ 0, & \text{otherwise}, \end{cases}$$
where $(e^k)_{1:n}^2 \leq \dots \leq (e^k)_{n:n}^2$ are the sorted squared residuals at iteration $k$.
It is obvious that
$$Q\left( u^{k+1}, \hat{\theta}_{\mathrm{RSSLA}}^{(k+1)} \right) \leq Q\left( u^{k}, \hat{\theta}_{\mathrm{RSSLA}}^{(k)} \right),$$
and the algorithm continues until convergence.
To increase the chance that the final solution of the algorithm is as close as possible to the optimal solution of $Q(u, \hat{\theta})$, the steps of the algorithm are replicated $s$ times with $s$ starting indicator vectors $u_0^1, \dots, u_0^s$. To decrease the computational cost of the algorithm, the strategy proposed by [35] is applied: only two iterations of the algorithm are performed for each start, obtaining $u_2^1, \dots, u_2^s$; a small number, $k$, of these with the lowest values of $Q(u, \hat{\theta})$ are kept; and each of them is iterated until convergence. The final result is the indicator vector with the minimum value of the optimization problem.
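A minimal R sketch of the concentration-step iteration described above follows; the inner weighted penalized fit is abstracted as a user-supplied function fit_fun (a hypothetical placeholder for any solver of objective (11) with $u$ fixed), which is assumed to return the fitted values for all $n$ observations and the attained objective value.

```r
# Sketch of the penalized C-step iteration (analog of FAST-LTS).
# fit_fun(y, X, u) is a placeholder: it should minimize Q(u, theta) over theta
# and return a list with $fitted (fitted values for all n observations) and
# $objective (the attained value of Q).
penalized_cstep <- function(y, X, h, fit_fun, u0 = NULL,
                            max_iter = 50, tol = 1e-8) {
  n <- length(y)
  u <- if (is.null(u0)) as.numeric(seq_len(n) %in% sample.int(n, h)) else u0
  Q_old <- Inf
  for (k in seq_len(max_iter)) {
    fit  <- fit_fun(y, X, u)                    # minimize Q(u, theta) over theta
    res2 <- (y - fit$fitted)^2                  # squared residuals of all n points
    u    <- as.numeric(rank(res2, ties.method = "first") <= h)   # keep h smallest
    if (Q_old - fit$objective < tol) break      # stop once Q no longer decreases
    Q_old <- fit$objective
  }
  list(fit = fit, u = u, objective = Q_old)
}
```

In practice, this routine would be started from several random indicator vectors, run for two iterations each, and only the most promising starts iterated to convergence, exactly as described above.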

4. Simulation Study

In this section, we present an extensive simulation study to examine the performance of the proposed estimators in the presence of outliers. We consider the simulation scenarios proposed by [24] to generate clean data. The clean data are generated from the model
$$y_i^{\mathrm{clean}} = \sum_{j=1}^{p} f_j(X_{ij}) + \epsilon_i,$$
where $f_1(x) = 5\sin(2\pi x)$, $f_2(x) = 10x(1 - x)$, $f_3(x) = 3x$, $f_4(x) = 2x$, $f_5(x) = 2x$, and $f_j(x) = 0$ for $j = 6, \dots, p$. The errors $\epsilon_i$ are generated from a normal distribution with zero mean and variance $\sigma^2$. The covariates $X_{ij}$ are generated from a multivariate normal distribution with zero mean vector and covariances $\mathrm{Cov}(X_{ij_1}, X_{ij_2}) = 0.5^{|j_1 - j_2|}$, and then the cumulative distribution function of the standard normal distribution is applied to them to transform their range into $[0, 1]$.
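A sketch of the clean-data generation for one scenario is given below, using MASS::mvrnorm for the correlated covariates and pnorm to map them to $[0, 1]$; the seed and the particular values of $n$, $p$, and $\sigma$ are arbitrary example choices.

```r
# Sketch of the clean-data generation for one simulation scenario.
library(MASS)

set.seed(123)
n <- 100; p <- 50; sigma <- 0.2

Sigma <- 0.5^abs(outer(1:p, 1:p, "-"))                   # Cov(X_ij1, X_ij2) = 0.5^|j1 - j2|
X <- pnorm(mvrnorm(n, mu = rep(0, p), Sigma = Sigma))    # map the range into [0, 1]

f1 <- function(x) 5 * sin(2 * pi * x)
f2 <- function(x) 10 * x * (1 - x)
f3 <- function(x) 3 * x
f4 <- function(x) 2 * x
f5 <- function(x) 2 * x

y_clean <- f1(X[, 1]) + f2(X[, 2]) + f3(X[, 3]) + f4(X[, 4]) + f5(X[, 5]) +
           rnorm(n, sd = sigma)                          # f_j = 0 for j = 6, ..., p
```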
The simulation study is performed for $N = 100$ iterations of data generation and estimation. In each iteration, we generate clean training datasets of size $n = 100, 200$, with $p = 50, 100, 200$ covariates and $\sigma = 0.2, 0.5$. The clean training data points are denoted by
$$\left( X_{i1}^{tr}, \dots, X_{ip}^{tr}, y_i^{tr,\mathrm{clean}} \right), \quad i = 1, \dots, n.$$
We further generate $n_{ts} = n/2$ test observations, denoted by
$$\left( X_{i1}^{ts}, \dots, X_{ip}^{ts}, y_i^{ts,\mathrm{clean}} \right), \quad i = 1, \dots, n_{ts}.$$
Then, we contaminate the response values $y_i^{tr,\mathrm{clean}}$ and $y_i^{ts,\mathrm{clean}}$ as follows. From the $n$ ($n_{ts}$) samples, we choose $20\%$ at random and denote this subset by $\mathcal{O}$. Then, for any $i^* \in \mathcal{O}$, we generate $U_{1i^*}$ and $U_{2i^*}$ independently from a uniform distribution over $[0, 1]$. Next, we let
$$y_{i^*}^{\mathrm{outlier}} = y_{i^*}^{\mathrm{clean}} + \left( 2\, I(U_{1i^*} > 0.5) - 1 \right) \left( 2 + U_{2i^*} \right) S_{Y^{\mathrm{clean}}}, \quad \text{for } i^* \in \mathcal{O},$$
where $S_{Y^{\mathrm{clean}}}$ is the sample standard deviation of the clean responses. For a Core i5 10210U CPU (1.60 GHz) with 8 GB of RAM and R version 4.2.1 (64 bit), the mean computation time is 16.84 min for SSLA (with optimization of the BIC for choosing the penalization parameters), 0.26 s for SL, 2.41 min for SNLA, 53.29 s for RSSLA (without optimization of the BIC for choosing the penalization parameters), 0.23 s for RSL, and 4.54 min for RSNLA. Note that the code for the proposed models is developed in R, while for the SL and RSL models, the R package glmnet is used, which implements the main procedures in the C programming language.
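The contamination step can be written as a short helper that shifts a randomly chosen 20% of the responses up or down by $(2 + U_2)$ clean-data standard deviations; this sketch simply transcribes the formula above.

```r
# Sketch of the outlier-contamination scheme: shift 20% of the responses
# up or down by (2 + U2) clean-data standard deviations.
contaminate <- function(y_clean, prop = 0.2) {
  n  <- length(y_clean)
  O  <- sample.int(n, size = round(prop * n))       # indices of contaminated points
  U1 <- runif(length(O)); U2 <- runif(length(O))
  y <- y_clean
  y[O] <- y_clean[O] + (2 * (U1 > 0.5) - 1) * (2 + U2) * sd(y_clean)
  list(y = y, outliers = O)
}

train <- contaminate(y_clean)   # y_clean from the previous sketch
```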
Several criteria are considered in this simulation study to examine the performance of the estimators. The mean integrated square error (MISE) for $f_j$ is defined as
$$\mathrm{MISE}(f_j) = \int_0^1 \left( \hat{f}_j(x) - f_j(x) \right)^2 dx, \quad j = 1, \dots, p.$$
To assess the prediction efficiency of the proposed methods in the presence of outliers, we define the clean data mean square error (CMSE) and the clean data prediction error (CPE) for the training and test datasets, respectively, as follows:
$$\mathrm{CMSE} = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i^{tr} - y_i^{tr,\mathrm{clean}} \right)^2,$$
$$\mathrm{CPE} = \frac{1}{n_{ts}} \sum_{i=1}^{n_{ts}} \left( \hat{y}_i^{ts} - y_i^{ts,\mathrm{clean}} \right)^2.$$
The false negative rate and the false positive rate are also defined as follows:
$$\mathrm{FNR} = \frac{\#\{ j \; ; \; 1 \leq j \leq p, \; f_j \neq 0, \; \hat{f}_j = 0 \}}{\#\{ j \; ; \; 1 \leq j \leq p, \; f_j \neq 0 \}},$$
$$\mathrm{FPR} = \frac{\#\{ j \; ; \; 1 \leq j \leq p, \; f_j = 0, \; \hat{f}_j \neq 0 \}}{\#\{ j \; ; \; 1 \leq j \leq p, \; f_j = 0 \}},$$
where # A stands for the cardinality of the set A.
We also define the false linear rate (FLR) and false non-linear rate (FNLR) criteria as follows, to examine the separation performance of the SSLA and RSSLA models:
$$\mathrm{FLR} = \frac{\#\{ j \; ; \; 1 \leq j \leq p, \; f_j \text{ is not linear}, \; \hat{f}_j \text{ is linear} \}}{\#\{ j \; ; \; 1 \leq j \leq p, \; f_j \text{ is not linear} \}},$$
$$\mathrm{FNLR} = \frac{\#\{ j \; ; \; 1 \leq j \leq p, \; f_j \text{ is linear}, \; \hat{f}_j \text{ is not linear} \}}{\#\{ j \; ; \; 1 \leq j \leq p, \; f_j \text{ is linear} \}},$$
as well as the false outlier rate (FOR) and false non-outlier rate (FNOR) criteria, to examine the outlier detection performance of the robust models:
$$\mathrm{FOR} = \frac{\#\{ i \; ; \; 1 \leq i \leq n, \; y_i \text{ is not an outlier}, \; y_i \text{ is detected as an outlier} \}}{\#\{ i \; ; \; 1 \leq i \leq n, \; y_i \text{ is not an outlier} \}},$$
$$\mathrm{FNOR} = \frac{\#\{ i \; ; \; 1 \leq i \leq n, \; y_i \text{ is an outlier}, \; y_i \text{ is not detected as an outlier} \}}{\#\{ i \; ; \; 1 \leq i \leq n, \; y_i \text{ is an outlier} \}}.$$
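Once the true and estimated classifications are encoded as logical vectors, the selection and separation criteria above reduce to simple proportions; a generic sketch is given below (FOR and FNOR follow the same pattern with outlier indicators in place of the function indicators).

```r
# Sketch: false negative / false positive rates from logical vectors.
#  truth_nonzero : TRUE if f_j is truly nonzero; est_nonzero : TRUE if the fitted f_j is nonzero.
fnr <- function(truth_nonzero, est_nonzero)
  sum(truth_nonzero & !est_nonzero) / sum(truth_nonzero)
fpr <- function(truth_nonzero, est_nonzero)
  sum(!truth_nonzero & est_nonzero) / sum(!truth_nonzero)

# FLR / FNLR use the same pattern with "is linear" indicators instead:
flr <- function(truth_linear, est_linear)
  sum(!truth_linear & est_linear) / sum(!truth_linear)
fnlr <- function(truth_linear, est_linear)
  sum(truth_linear & !est_linear) / sum(truth_linear)
```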
The means and standard errors of all the above criteria are tabulated for all scenarios in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. From Table 1 and Table 2, one can see that the robust separative model RSSLA is the most powerful model for estimating the true regression functions, especially for larger values of n (n = 200 in Table 2), while for n = 100 (Table 1), the RSNLA model is also successful at estimating the nonlinear regression functions. From Table 3, it can be observed that the RSSLA model is almost always more efficient than the other competitors (except the RSNLA model in a few cases) in the sense of the clean data MSE (CMSE). The clean data prediction performance of the RSSLA model is clearly the best among the six models, based on the CPE values tabulated in Table 4. From Table 5, the RSNLA model is the best model based on the FNR criterion, while the best values of the FPR criterion are obtained for the SL model, based on the values in Table 6. However, one can see that the FNR and FPR values for the RSSLA model are better than those of the SSLA model, which means that robust modeling improves the FNR and FPR values in the separative semi-parametric linear model. From Table 7, it can be seen that both the SSLA and RSSLA models have near-zero values of the FLR, while the RSSLA model has significantly lower values of the FNLR than the SSLA model. This shows that robust modeling helps the model to separate the linear and nonlinear covariates more accurately. Finally, from the values of the FOR and FNOR criteria in Table 8, we can deduce that the RSSLA model is the most powerful of the three robust models for the correct detection of the outliers.

5. Case Study

To evaluate the performance of the proposed method for a real dataset, we analyze the Boston housing prices dataset [37,38] with 506 observations and 14 features. The R package MASS [39] contains these data. Here, we consider the median value of the price of the owner-occupied homes in USD 1000 (Median Price) as the response variable, and the following covariates:
  • Crime rate: per capita crime rate by town;
  • Nitrogen Oxides: nitrogen oxide concentration (parts per 10 million);
  • Rooms: average number of rooms per dwelling;
  • Age: proportion of owner-occupied units built prior to 1940;
  • Distances: weighted mean of distances to five Boston employment centers;
  • Lower Status: lower status of the population (percent).
The following model is considered:
$$\text{Median Price} = \mu + f_1(\text{Crime rate}) + f_2(\text{Nitrogen Oxides}) + f_3(\text{Rooms}) + f_4(\text{Age}) + f_5(\text{Distances}) + f_6(\text{Lower Status}) + \epsilon.$$
Leave-one-out cross-validation is considered, using, in all models, only the samples whose squared residuals are below the 90% quantile of the training-set squared residuals (i.e., those not flagged as outliers). We call this criterion trimmed leave-one-out cross-validation (TLOOCV), defined as follows:
$$\mathrm{TLOOCV} = \frac{\sum_{i=1}^{n} u_i \left( y_i - \hat{y}_i^{(-i)} \right)^2}{\sum_{i=1}^{n} u_i},$$
where $\hat{y}_i^{(-i)}$ is the prediction of $y_i$ using all observations except $(X_i, y_i)$, and
$$u_i = \begin{cases} 1, & \left( y_i - \hat{y}_i^{(-i)} \right)^2 < \mathrm{Quantile}_{0.9}\left\{ \left( y_j - \hat{y}_j^{(-j)} \right)^2 \; ; \; j \in \{1, \dots, n\} \setminus \{i\} \right\}, \\ 0, & \text{otherwise}. \end{cases}$$
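A generic sketch of the TLOOCV computation follows; predict_loo is a hypothetical user-supplied function returning the leave-one-out prediction $\hat{y}_i^{(-i)}$ for whichever of the six models is being evaluated.

```r
# Sketch of trimmed leave-one-out cross-validation (TLOOCV).
#  predict_loo(i) is assumed to return the prediction of y[i] from a fit
#  that excludes observation i (placeholder for any of the six models).
tloocv <- function(y, predict_loo, trim_q = 0.9) {
  n    <- length(y)
  yhat <- vapply(seq_len(n), predict_loo, numeric(1))
  res2 <- (y - yhat)^2
  # keep observation i only if its squared LOO residual lies below the 90% quantile
  # of the squared residuals of the remaining observations
  u <- vapply(seq_len(n),
              function(i) as.numeric(res2[i] < quantile(res2[-i], trim_q)),
              numeric(1))
  list(tloocv = sum(u * res2) / sum(u),
       outlier_percent = 100 * (1 - mean(u)))
}
```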
Values of the TLOOCV are presented in Table 9, along with the percent of test points considered as outliers, $100\left(1 - \sum_{i=1}^{n} u_i / n\right)\%$. As one can see from Table 9, the RSSLA model achieves the smallest value of the TLOOCV among all models.
To draw the partial residual plot for the $j$th covariate ($j = 1, \dots, 6$), we compute the residuals of the regression of the response variable against all covariates except the $j$th covariate, and then we plot them against the $j$th covariate. These plots are shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 for all six models. The outliers are the points whose squared residuals are greater than the 90% quantile of the squared residuals.

6. Conclusions and Summary

This research investigates a robust version of the sparse separative semi-parametric linear regression model introduced by [24] by utilizing the LTS regression approach [35]. The proposed framework could also encompass the other separative semi-parametric linear regression models put forth by [23,25,26], as well as upcoming comparable techniques. The simulation analysis demonstrates that the proposed strategy substantially enhances the performance of the sparse separative semi-parametric linear regression model in terms of estimation, prediction, variable selection, and separation. The proposed method outperforms the other robust regression models considered in outlier detection and has the best performance in terms of a trimmed variant of the leave-one-out cross-validation measure when applied to the well-known Boston housing prices dataset [37,38].

Author Contributions

Conceptualization, M.A.; methodology, M.A. and M.R.; software, M.A. and M.R.; validation, M.A., M.R. and N.A.M.; formal analysis, M.A. and M.R.; investigation, M.A.; resources, M.A. and M.R.; data curation, M.A., M.R. and N.A.M.; writing—original draft preparation, M.A., M.R. and N.A.M.; writing—review and editing, M.A., M.R. and N.A.M.; visualization, M.A., M.R. and N.A.M.; supervision, M.A.; project administration, M.A.; funding acquisition, N.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

We want to thank the Ministry of Higher Education Malaysia for their support in funding this research through the Fundamental Research Grant Scheme (FRGS/1/2023/STG06/UM/02/13) awarded to Nur Anisah Mohamed @ A Rahman.

Data Availability Statement

The dataset used is available in the MASS package of the R software (R Foundation for Statistical Computing, Vienna, Austria).

Acknowledgments

The authors would like to thank three anonymous reviewers for their valuable comments and corrections to an earlier version of this paper, which significantly improved the quality of our work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ruppert, D.; Wand, M.P.; Carroll, R.J. Semiparametric Regression; Cambridge University Press: Cambridge, UK, 2003.
  2. Friedman, J.H.; Stuetzle, W. Projection pursuit regression. J. Am. Stat. Assoc. 1981, 76, 817–823.
  3. Marx, B.D.; Eilers, P.H. Direct generalized additive modeling with penalized likelihood. Comput. Stat. Data Anal. 1998, 28, 193–209.
  4. Wood, S.N. Modelling and smoothing parameter estimation with multiple quadratic penalties. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2000, 62, 413–428.
  5. Wood, S.N. Generalized Additive Models: An Introduction with R; Chapman and Hall/CRC: Boca Raton, FL, USA, 2006.
  6. Wood, S.N. Stable and efficient multiple smoothing parameter estimation for generalized additive models. J. Am. Stat. Assoc. 2004, 99, 673–686.
  7. Speed, T. [That BLUP is a good thing: The estimation of random effects]: Comment. Stat. Sci. 1991, 6, 42–44.
  8. Wang, Y. Mixed effects smoothing spline analysis of variance. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 1998, 60, 159–174.
  9. Breiman, L. Prediction games and arcing algorithms. Neural Comput. 1999, 11, 1493–1517.
  10. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232.
  11. Binder, H.; Tutz, G. A comparison of methods for the fitting of generalized additive models. Stat. Comput. 2008, 18, 87–99.
  12. Meier, L.; Van de Geer, S.; Bühlmann, P. High-dimensional additive modeling. Ann. Stat. 2009, 37, 3779–3821.
  13. Ravikumar, P.; Liu, H.; Lafferty, J.; Wasserman, L. Spam: Sparse additive models. In Proceedings of the 20th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 3–6 December 2007; Curran Associates Inc.: Red Hook, NY, USA, 2007; pp. 1201–1208.
  14. Wang, L.; Chen, G.; Li, H. Group SCAD regression analysis for microarray time course gene expression data. Bioinformatics 2007, 23, 1486–1494.
  15. Wang, H.; Xia, Y. Shrinkage estimation of the varying coefficient model. J. Am. Stat. Assoc. 2009, 104, 747–757.
  16. Lin, Y.; Zhang, H.H. Component selection and smoothing in multivariate nonparametric regression. Ann. Stat. 2006, 34, 2272–2297.
  17. Bach, F.R. Consistency of the group lasso and multiple kernel learning. J. Mach. Learn. Res. 2008, 9, 1179–1225.
  18. Huang, J.; Horowitz, J.L.; Wei, F. Variable selection in nonparametric additive models. Ann. Stat. 2010, 38, 2282.
  19. Opsomer, J.D.; Ruppert, D. A root-n consistent backfitting estimator for semiparametric additive modeling. J. Comput. Graph. Stat. 1999, 8, 715–732.
  20. Wang, L.; Liu, X.; Liang, H.; Carroll, R.J. Estimation and variable selection for generalized additive partial linear models. Ann. Stat. 2011, 39, 1827.
  21. Liu, X.; Wang, L.; Liang, H. Estimation and variable selection for semiparametric additive partial linear models (ss-09-140). Stat. Sin. 2011, 21, 1225.
  22. Arashi, M.; Asar, Y.; Yüzbaşi, B. SLASSO: A scaled LASSO for multicollinear situations. J. Stat. Comput. Simul. 2021, 91, 3170–3183.
  23. Huang, J.; Wei, F.; Ma, S. Semiparametric regression pursuit. Stat. Sin. 2012, 22, 1403.
  24. Lian, H.; Liang, H.; Ruppert, D. Separation of covariates into nonparametric and parametric parts in high-dimensional partially linear additive models. Stat. Sin. 2015, 25, 591–607.
  25. Li, X.; Wang, L.; Nettleton, D. Sparse model identification and learning for ultra-high-dimensional additive partially linear models. J. Multivar. Anal. 2019, 173, 204–228.
  26. Liu, H.; Ma, J.; Peng, C. Shrinkage estimation for identification of linear components in composite quantile additive models. Commun. Stat.-Simul. Comput. 2020, 49, 2678–2692.
  27. Kazemi, M.; Shahsavani, D.; Arashi, M. Variable selection and structure identification for ultrahigh-dimensional partially linear additive models with application to cardiomyopathy microarray data. Stat. Optim. Inf. Comput. 2018, 6, 373–382.
  28. Kazemi, M.; Shahsavani, D.; Arashi, M.; Rodrigues, P.C. Identification for partially linear regression model with autoregressive errors. J. Stat. Comput. Simul. 2021, 91, 1441–1454.
  29. Maronna, R.A.; Martin, R.D.; Yohai, V.J.; Salibián-Barrera, M. Robust Statistics: Theory and Methods (with R); John Wiley & Sons: Hoboken, NJ, USA, 2019.
  30. Rousseeuw, P.J.; Leroy, A.M. Robust Regression and Outlier Detection; John Wiley and Sons: New York, NY, USA, 1987.
  31. Rousseeuw, P.J. Least median of squares regression. J. Am. Stat. Assoc. 1984, 79, 871–880.
  32. Roozbeh, M.; Babaie-Kafaki, S.; Naeimi Sadigh, A. A heuristic approach to combat multicollinearity in least trimmed squares regression analysis. Appl. Math. Model. 2018, 57, 105–120.
  33. Amini, M.; Roozbeh, M. Least trimmed squares ridge estimation in partially linear regression models. J. Stat. Comput. Simul. 2016, 86, 2766–2780.
  34. Mahmoud, H.F.F.; Kim, B.J.; Kim, I. Robust nonparametric derivative estimator. Commun. Stat.-Simul. Comput. 2022, 51, 3809–3829.
  35. Rousseeuw, P.J.; Van Driessen, K. Computing LTS regression for large data sets. Data Min. Knowl. Discov. 2006, 12, 29–45.
  36. Alfons, A.; Croux, C.; Gelper, S. Sparse least trimmed squares regression for analyzing high-dimensional large data sets. Ann. Appl. Stat. 2013, 7, 226–248.
  37. Belsley, D.A.; Kuh, E.; Welsch, R.E. Regression Diagnostics. Identifying Influential Data and Sources of Collinearity; Wiley: New York, NY, USA, 1980.
  38. Harrison, D.; Rubinfeld, D.L. Hedonic prices and the demand for clean air. J. Environ. Econ. Manag. 1978, 5, 81–102.
  39. Venables, W.N.; Ripley, B.D. Modern Applied Statistics with S, 4th ed.; Springer: New York, NY, USA, 2002; ISBN 0-387-95457-0. Available online: https://www.stats.ox.ac.uk/pub/MASS4/ (accessed on 15 August 2023).
Figure 1. Partial residual plots for the SSLA model.
Figure 2. Partial residual plots for the RSSLA model.
Figure 3. Partial residual plots for the SNLA model.
Figure 4. Partial residual plots for the RSNLA model.
Figure 5. Partial residual plots for the SL model.
Figure 6. Partial residual plots for the RSL model.
Table 1. The means and standard deviations of MISE values for n = 100 from the simulation study for the 6 models. The standard deviations are shown in subscripts.
p   σ   f   SSLA   RSSLA   SNLA   RSNLA   SL   RSL
50 0.2 f ^ 1 0.334 0.309 0.086 0.119 0.398 0.269 0.239 0.217 2.277 0.697 2.541 1.014
f ^ 2 0.203 0.234 0.058 0.106 0.203 0.152 0.135 0.087 0.213 0.177 0.195 0.037
f ^ 3 0.213 0.146 0.046 0.081 0.276 0.116 0.156 0.098 0.466 0.384 0.357 0.167
f ^ 4 0.150 0.169 0.039 0.050 0.143 0.158 0.084 0.061 0.141 0.193 0.112 0.104
f ^ 5 0.127 0.129 0.043 0.054 0.129 0.117 0.088 0.045 0.102 0.077 0.092 0.079
0.5 f ^ 1 0.444 0.415 0.239 0.202 0.461 0.251 0.333 0.217 2.358 0.751 2.458 0.978
f ^ 2 0.224 0.209 0.138 0.123 0.219 0.189 0.171 0.110 0.198 0.059 0.200 0.050
f ^ 3 0.243 0.182 0.146 0.174 0.275 0.128 0.237 0.115 0.485 0.525 0.388 0.225
f ^ 4 0.152 0.165 0.082 0.085 0.134 0.119 0.096 0.062 0.158 0.195 0.123 0.121
f ^ 5 0.143 0.140 0.080 0.054 0.128 0.104 0.111 0.069 0.121 0.210 0.099 0.107
100 0.2 f ^ 1 1.887 0.000 0.887 0.000 0.609 0.286 0.323 0.255 2.332 0.827 2.329 0.991
f ^ 2 0.185 0.004 0.168 0.000 0.193 0.059 0.152 0.059 0.199 0.089 0.191 0.026
f ^ 3 0.321 0.000 0.197 0.116 0.283 0.085 0.315 0.000 0.500 0.447 0.387 0.221
f ^ 4 0.334 0.491 0.080 0.000 0.109 0.098 0.084 0.024 0.134 0.242 0.097 0.076
f ^ 5 0.080 0.000 0.063 0.000 0.106 0.087 0.085 0.026 0.085 0.054 0.079 0.006
0.5 f ^ 1 0.887 0.001 0.587 0.003 0.608 0.286 0.347 0.258 2.151 0.635 2.221 0.768
f ^ 2 0.186 0.000 0.169 0.001 0.194 0.083 0.154 0.052 0.193 0.041 0.190 0.023
f ^ 3 0.321 0.001 0.229 0.102 0.297 0.058 0.318 0.001 0.414 0.275 0.350 0.174
f ^ 4 0.083 0.000 0.080 0.000 0.102 0.068 0.082 0.024 0.092 0.072 0.085 0.040
f ^ 5 0.085 0.000 0.080 0.000 0.121 0.151 0.091 0.044 0.088 0.005 0.084 0.029
200 0.2 f ^ 1 0.964 0.024 0.383 0.172 0.574 0.281 0.258 0.275 2.045 0.525 2.093 0.719
f ^ 2 0.124 0.003 0.113 0.062 0.170 0.046 0.086 0.109 0.187 0.006 0.189 0.026
f ^ 3 0.307 0.000 0.141 0.101 0.264 0.078 0.213 0.102 0.394 0.237 0.317 0.065
f ^ 4 0.094 0.001 0.068 0.032 0.086 0.040 0.076 0.059 0.110 0.090 0.088 0.054
f ^ 5 0.088 0.002 0.076 0.001 0.095 0.055 0.079 0.020 0.081 0.003 0.080 0.003
0.5 f ^ 1 0.925 0.057 0.606 0.018 0.615 0.314 0.484 0.393 1.973 0.458 2.092 0.584
f ^ 2 0.186 0.001 0.173 0.004 0.179 0.057 0.167 0.055 0.188 0.015 0.191 0.040
f ^ 3 0.281 0.002 0.205 0.098 0.272 0.079 0.234 0.100 0.350 0.130 0.318 0.067
f ^ 4 0.079 0.000 0.062 0.007 0.085 0.033 0.089 0.056 0.107 0.097 0.098 0.092
f ^ 5 0.081 0.003 0.059 0.000 0.089 0.041 0.079 0.005 0.080 0.002 0.080 0.003
Table 2. The means and standard deviations of MISE values for n = 200 from the simulation study for the 6 models. The standard deviations are shown in subscripts.
p   σ   f   SSLA   RSSLA   SNLA   RSNLA   SL   RSL
50 0.2 f ^ 1 0.287 0.686 0.080 0.274 0.363 0.218 0.330 0.098 2.985 0.903 2.675 0.676
f ^ 2 0.167 0.076 0.055 0.044 0.196 0.117 0.082 0.075 0.196 0.033 0.191 0.016
f ^ 3 0.261 0.098 0.065 0.091 0.246 0.135 0.070 0.084 0.579 0.320 0.485 0.268
f ^ 4 0.076 0.040 0.034 0.013 0.175 0.154 0.041 0.046 0.130 0.154 0.099 0.065
f ^ 5 0.080 0.033 0.035 0.001 0.163 0.123 0.046 0.047 0.099 0.079 0.077 0.011
0.5 f ^ 1 0.290 0.252 0.086 0.093 0.345 0.179 0.187 0.139 2.719 0.810 3.008 0.850
f ^ 2 0.168 0.040 0.050 0.033 0.222 0.137 0.116 0.085 0.211 0.121 0.192 0.028
f ^ 3 0.248 0.103 0.058 0.049 0.275 0.185 0.142 0.117 0.520 0.289 0.466 0.253
f ^ 4 0.075 0.015 0.044 0.024 0.222 0.176 0.089 0.052 0.138 0.144 0.093 0.065
f ^ 5 0.080 0.002 0.050 0.027 0.213 0.194 0.098 0.069 0.107 0.105 0.085 0.034
100 0.2 f ^ 1 0.295 0.417 0.076 0.122 0.222 0.138 1.115 0.887 2.607 0.701 2.651 0.940
f ^ 2 0.182 0.023 0.063 0.065 0.162 0.056 0.152 0.059 0.196 0.051 0.188 0.016
f ^ 3 0.275 0.097 0.068 0.092 0.233 0.110 0.207 0.123 0.506 0.332 0.402 0.205
f ^ 4 0.079 0.011 0.048 0.035 0.080 0.022 0.066 0.026 0.096 0.077 0.091 0.057
f ^ 5 0.080 0.000 0.049 0.034 0.082 0.019 0.072 0.029 0.090 0.053 0.083 0.036
0.5 f ^ 1 0.317 0.294 0.112 0.124 0.295 0.204 0.151 0.154 2.413 0.672 2.443 0.731
f ^ 2 0.184 0.016 0.074 0.057 0.144 0.056 0.104 0.064 0.189 0.017 0.188 0.010
f ^ 3 0.299 0.069 0.097 0.083 0.209 0.094 0.120 0.096 0.499 0.295 0.381 0.186
f ^ 4 0.079 0.008 0.065 0.020 0.091 0.043 0.068 0.041 0.103 0.099 0.085 0.032
f ^ 5 0.080 0.000 0.063 0.024 0.094 0.045 0.071 0.018 0.083 0.017 0.079 0.010
200 0.2 f ^ 1 0.289 0.312 0.022 0.084 0.149 0.170 0.026 0.056 1.380 1.280 1.392 1.288
f ^ 2 0.191 0.020 0.011 0.013 0.076 0.078 0.018 0.030 0.108 0.092 0.108 0.093
f ^ 3 0.284 0.088 0.020 0.079 0.098 0.104 0.025 0.052 0.260 0.300 0.190 0.183
f ^ 4 0.077 0.008 0.015 0.027 0.048 0.045 0.019 0.028 0.057 0.064 0.051 0.055
f ^ 5 0.081 0.001 0.023 0.018 0.051 0.052 0.025 0.033 0.051 0.056 0.046 0.040
0.5 f ^ 1 0.112 0.095 0.043 0.055 0.176 0.240 0.061 0.092 1.233 1.201 1.226 1.226
f ^ 2 0.082 0.011 0.036 0.024 0.078 0.080 0.045 0.062 0.102 0.095 0.101 0.093
f ^ 3 0.108 0.057 0.033 0.028 0.110 0.122 0.046 0.065 0.254 0.320 0.196 0.223
f ^ 4 0.038 0.006 0.021 0.008 0.043 0.044 0.030 0.036 0.062 0.093 0.056 0.086
f ^ 5 0.023 0.000 0.015 0.013 0.048 0.049 0.034 0.039 0.055 0.079 0.042 0.040
Table 3. The means and standard deviations of CMSE values from the simulation study for the 6 models. The standard deviations are shown in subscripts.
n   p   σ   SSLA   RSSLA   SNLA   RSNLA   SL   RSL
10050 0.2 2.409 0.875 0.736 1.082 2.921 0.394 1.357 0.843 1.427 0.269 1.405 0.358
0.5 1.657 0.388 1.637 0.333 3.231 0.389 2.126 0.814 2.979 0.528 1.955 1.170
100 0.2 2.438 0.290 1.539 0.741 2.986 0.409 1.277 0.635 1.486 0.334 1.575 0.480
0.5 2.615 0.288 1.495 0.758 3.249 0.395 1.743 0.651 1.774 0.388 1.774 0.402
200 0.2 2.489 0.364 1.613 0.381 2.984 0.410 0.953 0.590 1.617 0.843 1.695 0.501
0.5 2.724 0.286 1.628 0.643 3.162 0.416 1.788 0.719 1.745 0.362 1.842 0.500
20050 0.2 0.887 1.091 0.800 0.272 2.874 0.934 0.894 0.276 1.307 0.220 1.284 0.169
0.5 0.988 0.280 0.825 0.370 3.140 0.331 2.156 1.029 1.512 0.189 1.490 0.235
100 0.2 0.836 0.420 0.338 0.336 1.548 1.050 1.879 1.304 1.326 0.192 1.369 0.262
0.5 1.075 0.273 0.884 0.401 2.621 0.922 1.026 0.461 1.563 0.210 1.627 0.298
200 0.2 0.718 0.346 0.218 0.352 1.710 1.477 0.253 0.082 0.780 0.680 0.814 0.726
0.5 1.023 0.249 0.427 0.214 1.765 1.657 0.573 0.705 0.858 0.814 0.913 0.886
Table 4. The means and standard deviations of CPE values from the simulation study for the 6 models. The standard deviations are shown in subscripts.
n   p   σ   SSLA   RSSLA   SNLA   RSNLA   SL   RSL
10050 0.2 4.193 2.098 0.900 1.260 4.606 1.392 1.602 1.084 1.750 0.403 1.734 0.464
0.5 2.018 0.413 1.993 0.507 5.451 1.546 2.397 0.883 3.026 2.100 2.190 1.345
100 0.2 4.220 2.657 1.472 0.728 3.664 0.964 1.861 0.300 1.948 0.686 1.871 0.567
0.5 2.991 1.106 1.083 1.051 4.073 0.889 1.946 0.785 2.195 0.591 2.167 0.598
200 0.2 2.167 0.306 1.023 0.364 3.441 0.969 1.119 0.664 1.997 0.522 2.006 0.589
0.5 2.034 0.346 0.976 0.564 3.901 0.929 2.107 0.766 2.262 0.645 2.197 0.592
20050 0.2 0.923 1.069 0.910 0.336 7.708 1.816 0.997 1.065 1.450 0.250 1.410 0.263
0.5 1.128 0.334 0.932 0.349 9.292 1.983 2.353 0.973 1.739 0.338 1.654 0.290
100 0.2 0.967 0.434 0.434 0.425 2.070 1.616 1.935 1.333 1.525 0.289 1.599 0.385
0.5 1.182 0.372 1.011 0.468 3.925 1.777 1.136 0.464 1.767 0.296 1.837 0.406
200 0.2 1.030 0.209 0.252 0.389 1.909 1.696 0.837 0.603 0.932 0.827 0.942 0.846
0.5 1.202 0.386 0.443 0.465 2.087 2.017 0.636 0.784 1.002 0.950 1.026 0.987
Table 5. The means and standard deviations of FNR values from the simulation study for the 6 models. The standard deviations are shown in subscripts.
n   p   σ   SSLA   RSSLA   SNLA   RSNLA   SL   RSL
10050 0.2 0.299 0.239 0.147 0.122 0.292 0.165 0.233 0.176 0.572 0.173 0.527 0.193
0.5 0.273 0.210 0.200 0.144 0.288 0.171 0.267 0.169 0.572 0.193 0.533 0.192
100 0.2 0.631 0.056 0.379 0.089 0.562 0.165 0.337 0.176 0.610 0.174 0.628 0.192
0.5 0.610 0.000 0.517 0.096 0.520 0.160 0.472 0.159 0.685 0.171 0.632 0.187
200 0.2 0.642 0.237 0.461 0.166 0.601 0.148 0.403 0.137 0.707 0.174 0.687 0.176
0.5 0.617 0.203 0.542 0.202 0.611 0.154 0.482 0.127 0.692 0.149 0.715 0.171
20050 0.2 0.729 0.118 0.237 0.240 0.068 0.097 0.043 0.073 0.468 0.161 0.403 0.182
0.5 0.699 0.121 0.198 0.160 0.062 0.091 0.052 0.088 0.453 0.174 0.455 0.160
100 0.2 0.791 0.081 0.478 0.174 0.578 0.291 0.320 0.255 0.674 0.264 0.527 0.151
0.5 0.813 0.064 0.395 0.242 0.382 0.282 0.237 0.230 0.555 0.136 0.537 0.153
200 0.2 0.799 0.101 0.411 0.129 0.212 0.217 0.097 0.148 0.322 0.288 0.297 0.280
0.5 0.818 0.093 0.388 0.214 0.210 0.235 0.133 0.188 0.315 0.317 0.317 0.311
Table 6. The means and standard deviations of FPR values from the simulation study for the 6 models. The standard deviations are shown in subscripts.
n   p   σ   SSLA   RSSLA   SNLA   RSNLA   SL   RSL
10050 0.2 0.463 0.216 0.423 0.109 0.493 0.049 0.349 0.054 0.130 0.122 0.180 0.135
0.5 0.542 0.161 0.488 0.087 0.492 0.051 0.383 0.049 0.142 0.121 0.173 0.131
100 0.2 0.251 0.154 0.206 0.052 0.206 0.033 0.150 0.033 0.116 0.105 0.198 0.100
0.5 0.127 0.015 0.106 0.031 0.208 0.034 0.167 0.034 0.079 0.085 0.094 0.075
200 0.2 0.451 0.163 0.317 0.184 0.128 0.021 0.090 0.017 0.047 0.055 0.062 0.068
0.5 0.341 0.244 0.206 0.086 0.125 0.020 0.098 0.016 0.057 0.061 0.064 0.075
20050 0.2 0.445 0.219 0.166 0.119 0.872 0.044 0.676 0.049 0.022 0.010 0.185 0.131
0.5 0.623 0.190 0.181 0.132 0.885 0.038 0.697 0.051 0.023 0.019 0.192 0.136
100 0.2 0.117 0.121 0.103 0.096 0.176 0.231 0.179 0.172 0.032 0.013 0.141 0.099
0.5 0.198 0.175 0.091 0.069 0.360 0.216 0.311 0.135 0.032 0.021 0.109 0.084
200 0.2 0.105 0.107 0.076 0.048 0.141 0.124 0.081 0.090 0.037 0.054 0.046 0.060
0.5 0.205 0.104 0.083 0.066 0.134 0.127 0.084 0.095 0.032 0.053 0.033 0.055
Table 7. The means and standard deviations of FLR and FNLR values from the simulation study for the SSLA and RSSLA models. The standard deviations are shown in subscripts.
Criterion           FLR                  FNLR
n   p   σ           SSLA   RSSLA         SSLA   RSSLA
10050 0.2 0.000 0.000 0.006 0.055 0.549 0.203 0.308 0.288
0.5 0.005 0.052 0.011 0.074 0.422 0.246 0.384 0.265
100 0.2 0.000 0.000 0.000 0.000 0.195 0.163 0.010 0.001
0.5 0.003 0.000 0.000 0.000 0.153 0.127 0.028 0.010
200 0.2 0.000 0.000 0.000 0.000 0.138 0.141 0.014 0.005
0.5 0.002 0.000 0.000 0.000 0.106 0.082 0.017 0.011
20050 0.2 0.000 0.000 0.000 0.000 0.542 0.414 0.125 0.162
0.5 0.000 0.000 0.000 0.000 0.593 0.213 0.163 0.182
100 0.2 0.000 0.000 0.000 0.000 0.319 0.322 0.065 0.134
0.5 0.000 0.000 0.000 0.000 0.571 0.333 0.035 0.103
200 0.2 0.000 0.000 0.000 0.000 0.275 0.249 0.054 0.082
0.5 0.000 0.000 0.000 0.000 0.443 0.218 0.027 0.095
Table 8. The means and standard deviations of FOR and FNOR values from the simulation study for the 3 robust models. The standard deviations are shown in subscripts.
Criterion           FOR                            FNOR
n   p   σ           RSSLA   RSNLA   RSL            RSSLA   RSNLA   RSL
10050 0.2 0.090 0.157 0.122 0.075 0.246 0.152 0.085 0.039 0.093 0.019 0.124 0.038
0.5 0.144 0.073 0.228 0.163 0.316 0.124 0.099 0.018 0.119 0.041 0.142 0.031
100 0.2 0.029 0.049 0.154 0.087 0.220 0.128 0.070 0.012 0.101 0.022 0.118 0.032
0.5 0.000 0.000 0.156 0.081 0.253 0.118 0.062 0.000 0.102 0.020 0.126 0.029
200 0.2 0.017 0.031 0.166 0.085 0.148 0.130 0.065 0.001 0.104 0.021 0.099 0.033
0.5 0.015 0.004 0.170 0.083 0.260 0.138 0.073 0.011 0.105 0.021 0.128 0.034
20050 0.2 0.035 0.047 0.098 0.056 0.136 0.158 0.071 0.011 0.087 0.014 0.096 0.039
0.5 0.041 0.045 0.098 0.047 0.268 0.120 0.073 0.011 0.087 0.012 0.129 0.030
100 0.2 0.026 0.047 0.124 0.065 0.029 0.046 0.069 0.012 0.093 0.016 0.070 0.012
0.5 0.059 0.064 0.130 0.063 0.099 0.091 0.077 0.016 0.095 0.016 0.087 0.023
200 0.2 0.018 0.022 0.078 0.087 0.041 0.076 0.033 0.013 0.056 0.050 0.040 0.045
0.5 0.063 0.070 0.072 0.080 0.085 0.114 0.030 0.010 0.052 0.049 0.049 0.057
Table 9. Trimmed leave-one-out cross-validation and test outlier percent for the 6 models.
Model     TLOOCV    Test Outlier Percent
SSLA      5.32      13.8%
RSSLA     4.78      12.6%
SNLA      5.51      14.4%
RSNLA     6.16      12.4%
SL        11.98     10.5%
RSL       10.81     10.5%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
