Article

Robust Variable Selection with Exponential Squared Loss for the Spatial Single-Index Varying-Coefficient Model

College of Science, China University of Petroleum, Qingdao 266580, China
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(2), 230; https://doi.org/10.3390/e25020230
Submission received: 6 December 2022 / Revised: 21 January 2023 / Accepted: 23 January 2023 / Published: 26 January 2023
(This article belongs to the Special Issue Spatial–Temporal Data Analysis and Its Applications)

Abstract

Since spatial correlation and heterogeneity often coexist in data, we propose a spatial single-index varying-coefficient model. For this model, we develop a robust variable selection method based on spline estimation and the exponential squared loss to estimate the parameters and identify the significant variables. We establish the theoretical properties of the method under some regularity conditions. A block coordinate descent (BCD) algorithm equipped with the concave–convex procedure (CCCP) is designed to solve the resulting non-convex optimization problem. Simulations show that our method performs well even when the observations are noisy or the estimated spatial weight matrix is inaccurate.

1. Introduction

Spatial econometrics is one of the essential branches of econometrics. Its basic task is to incorporate the spatial effects of variables into regional scientific models. The most widely used spatial econometric model is the spatial autoregressive (SAR) model, first proposed by [1], which has been extensively studied and applied in economics, finance, and environmental research.
The SAR model is essentially a parametric model. In practice, however, a purely parametric model cannot fully explain complex economic problems and phenomena. Therefore, to improve the flexibility and applicability of spatial econometric models, non-parametric spatial econometric models have received increasing attention. Ref. [2] studied the SAR model in a non-parametric framework, obtained parameter estimators using generalized method of moments estimation, and proved the consistency and asymptotic properties of the estimators. The instrumental variable method was used by [3] to study semi-parametric varying-coefficient spatial panel data models with endogenous explanatory variables.
In practice, however, data may exhibit spatial correlation and spatial heterogeneity simultaneously, and this spatial heterogeneity cannot be fully captured by either the parametric SAR model or the non-parametric SAR model.
The single-index varying-coefficient model is a generalization of the single-index and varying-coefficient models; it can effectively avoid the "curse of dimensionality" in multidimensional non-parametric regression and has attracted considerable attention. Refs. [4,5] studied the estimation of the single-index varying-coefficient model. Ref. [6] constructed empirical likelihood confidence regions for the single-index varying-coefficient model using the empirical likelihood method. Ref. [7] proposed a profile empirical likelihood ratio statistic, obtained maximum likelihood estimators of the model parameters, and showed that the ratio is asymptotically chi-squared.
In addition, selecting significant explanatory variables is one of the most important problems in statistical learning. Several robust regression methods have been proposed, such as quantile regression, composite quantile regression, and modal regression. Ref. [8] presented a new class of robust regression estimators for linear models based on the exponential squared loss. The idea is as follows: for the linear regression model $y_i = X_i^{T}\beta + \varepsilon_i$, the regression parameter $\beta$ is estimated by minimizing the objective function $\sum_{i=1}^{n}\left[1 - \exp\{-(y_i - X_i^{T}\beta)^2/h\}\right]$, where $h > 0$ controls the degree of robustness of the estimation. For a large $h$, $1 - \exp(-r^2/h) \approx r^2/h$, so in this extreme case the proposed estimator behaves like the least squares estimator. For a small $h$, observations with large $|r|$ have only a small impact on the estimate; hence, a small $h$ limits the influence of outliers and improves the robustness of the estimator. Ref. [8] also pointed out that their method is more robust than other common robust estimators. Ref. [9] constructed robust estimators based on the exponential squared loss for partially linear regression models and proposed a data-driven procedure to select the tuning parameter; simulations showed that the method performs well. Ref. [10] proposed robust variable selection for the high-dimensional single-index varying-coefficient model based on the exponential squared loss, established the theoretical properties of the estimators, and demonstrated the robustness of the method through numerical simulation. Ref. [11] applied the exponential squared loss to robust structure identification and variable selection for partially linear varying-coefficient models and obtained good results.
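To make the role of $h$ concrete, the following short Python snippet (an illustration of ours, not code from [8]) evaluates the exponential squared loss for a typical residual and for an outlier under a small and a large $h$; with small $h$, the outlier's loss saturates near 1 and therefore has bounded influence, while with large $h$ the loss behaves like a rescaled squared error.

```python
import numpy as np

def exp_squared_loss(r, h):
    """Exponential squared loss 1 - exp(-r^2 / h)."""
    return 1.0 - np.exp(-r**2 / h)

residuals = np.array([0.5, 10.0])   # a typical residual and an outlier
for h in (0.5, 100.0):
    loss = exp_squared_loss(residuals, h)
    # Small h: the outlier's loss saturates near 1, so it barely moves the fit.
    # Large h: the loss is approximately r^2 / h, i.e. least-squares-like.
    print(f"h={h:6.1f}  loss(0.5)={loss[0]:.4f}  loss(10)={loss[1]:.4f}")
```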
Inspired by the above work, we introduce the spatial locations of the observed objects into the single-index varying-coefficient model and propose a spatial single-index varying-coefficient model. We also present a variable selection method for this model based on spline estimation and the exponential squared loss function. The method is capable of selecting significant predictors while estimating the regression coefficients. The main contributions of this work are as follows.
  • We propose a novel model: the spatial single-index varying-coefficient model, which can deal with the spatial correlation and spatial heterogeneity of data at the same time.
  • We construct a robust variable selection method for the spatial single-index varying-coefficient model, which uses the exponential squared loss function to resist the influence of strong noise and an inaccurate spatial weight matrix. Furthermore, we present a block coordinate descent (BCD) algorithm to solve the resulting optimization problem.
  • Under reasonable assumptions, we give the theoretical properties of the method. In addition, we verify the robustness and effectiveness of the variable selection method through numerical simulation studies, which show that the method is more robust than other comparative methods in variable selection and parameter estimation when outliers or noise are present in the observations.
The rest of this paper is organized as follows. In Section 2, we develop the methodology for variable selection with the exponential squared loss, and we give the theoretical properties of the proposed method in Section 3. In Section 4, we present the related algorithms. Experimental results are reported in Section 5, and we conclude the paper in Section 6. The details of the proofs of the main theorems are collected in Appendix A.

2. Methodology

2.1. Model Setup

Consider the following spatial single-index varying-coefficient model:
$$y_i = \rho\sum_{j=1}^{n} w_{ij} y_j + g_1\!\left(U_i^{T}\alpha\right) z_{i1} + g_2\!\left(U_i^{T}\alpha\right) z_{i2} + \cdots + g_q\!\left(U_i^{T}\alpha\right) z_{iq} + \varepsilon_i \qquad (1)$$
where $y_i$ is the response variable, $z_i = \left(z_{i1}, z_{i2}, \ldots, z_{iq}\right)^{T}$ is the $q$-dimensional vector of observed covariates, and $U_i = \left(U_{i1}, U_{i2}, \ldots, U_{im}\right)^{T}$ is the $m$-dimensional spatial location variable. $w_{ij}$ is the $(i, j)$ entry of the $n\times n$ spatial weight matrix $W$. $\rho$ and $\alpha = \left(\alpha_1, \alpha_2, \ldots, \alpha_m\right)^{T}$ are the parameters to be estimated. The errors $\varepsilon_i$ are assumed to be independent with mean zero and variance $\sigma^2$, and $g(\cdot)$ is an unknown function. For identifiability of the model, it is assumed that $\|\alpha\| = 1$ and the first nonzero element of $\alpha$ is positive.
It can be seen from model (1) that the spatial single-index varying-coefficient model is a semi-parametric varying-coefficient model in which the unknown functions $g_s$ change with geographical location. When $\rho = 0$, the model reduces to the partially linear single-index varying-coefficient model. When $z_{i1} = 1$ and $g_1\!\left(U_i^{T}\alpha\right) = U_i^{T}\alpha$ while the other $g_s(\cdot) = 0$, the model reduces to the SAR model.
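For intuition, model (1) can be written in reduced form as $y = (I_n - \rho W)^{-1}\left(\sum_s g_s(U^{T}\alpha)\, z_{\cdot s} + \varepsilon\right)$. The following Python sketch generates data from this reduced form under illustrative choices of $W$, $\alpha$, and the coefficient functions; the specific values are assumptions made for the example, not requirements of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, rho = 100, 2, 3, 0.4

# Illustrative ingredients only.
W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
W = W / W.sum(axis=1, keepdims=True)          # row-standardized weight matrix
alpha = np.array([0.6, 0.8])                   # ||alpha|| = 1
U = rng.normal(size=(n, m))                    # spatial locations
Z = rng.normal(size=(n, q))                    # covariates
g = [np.sin, lambda t: 3 * t**2, lambda t: 1 - 6 * t]  # coefficient functions

t = U @ alpha                                  # single index U_i^T alpha
mean_part = sum(g[s](t) * Z[:, s] for s in range(q))
eps = rng.normal(scale=1.0, size=n)
# Reduced form of model (1): y = (I - rho*W)^{-1} (sum_s g_s(t) z_{is} + eps)
y = np.linalg.solve(np.eye(n) - rho * W, mean_part + eps)
```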

2.2. Basis Function Expansion

Since g ( · ) is unknown, we replace g ( · ) with its basis function approximations. The specific estimation steps are as follows:
Step 1. An initial value $\alpha^{0}$ should be given. This paper uses the method proposed by [12]: we roughly estimate $\alpha$ from the linear regression model
$$y_i = U_i^{T}\alpha\, z_{i1} + U_i^{T}\alpha\, z_{i2} + \cdots + U_i^{T}\alpha\, z_{iq} + \varepsilon_i$$
and set the resulting estimate as $\alpha^{0}$, normalized so that $\|\alpha^{0}\| = 1$ and the first nonzero element of $\alpha^{0}$ is positive.
Step 2. Take $a \le k_1 < k_2 < \cdots < k_l \le b$ as $l$ knots on the interval $[a, b]$. With the initial value $\alpha^{0}$, let $t_i = U_i^{T}\alpha^{0}$; then the radial basis of degree $p$ is
$$\delta(t_i) = \left(1,\ t_i,\ t_i^2,\ \ldots,\ t_i^{p-1},\ \left|t_i - k_1\right|^{2p-1},\ \left|t_i - k_2\right|^{2p-1},\ \ldots,\ \left|t_i - k_l\right|^{2p-1}\right)^{T}.$$
Suppose that the coefficient vector of the radial basis for the $s$th function is
$$\gamma_1^{s} = \left(\gamma_{1s0},\ \gamma_{1s1},\ \ldots,\ \gamma_{1s(p-1)},\ \gamma_{1sp},\ \ldots,\ \gamma_{1s(p+l-1)}\right)^{T};$$
then the $s$th unknown function satisfies $g_s(t_i) \approx \delta(t_i)^{T}\gamma_1^{s}$, where $i = 1, 2, \ldots, n$ and $s = 1, 2, \ldots, q$. Substituting the radial basis approximation into model (1), we obtain the following:
$$y_i = \rho\sum_{j=1}^{n} w_{ij} y_j + z_{i1}\,\delta(t_i)^{T}\gamma_1^{1} + z_{i2}\,\delta(t_i)^{T}\gamma_1^{2} + \cdots + z_{iq}\,\delta(t_i)^{T}\gamma_1^{q} + \varepsilon_i \qquad (2)$$
Let $Y = \left(y_1, y_2, \ldots, y_n\right)^{T}$, $\gamma_1 = \left(\gamma_1^{1T}, \gamma_1^{2T}, \ldots, \gamma_1^{qT}\right)^{T}$, $D = \left(D_1, D_2, \ldots, D_n\right)^{T}$ with $D_i = \left(z_{i1}\delta(t_i)^{T}, z_{i2}\delta(t_i)^{T}, \ldots, z_{iq}\delta(t_i)^{T}\right)^{T}$, and $\varepsilon = \left(\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n\right)^{T}$; then the matrix form of model (2) is
$$Y = \rho W Y + D\gamma_1 + \varepsilon \qquad (3)$$
As can be seen from (3), under the radial basis approximation the spatial single-index varying-coefficient model (1) is transformed into the classical SAR model. Since the theory of the SAR model is relatively well developed, an exponential squared loss-based variable selection method for the SAR model can be used to estimate the unknown parameters.
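A minimal sketch of this basis expansion in Python is given below; the helper names and the quantile-based knot placement are ours and only illustrative.

```python
import numpy as np

def radial_basis(t, knots, p=3):
    """delta(t): (1, t, ..., t^{p-1}, |t-k_1|^{2p-1}, ..., |t-k_l|^{2p-1})."""
    t = np.asarray(t)
    poly = np.vstack([t**j for j in range(p)])                    # p polynomial terms
    radial = np.vstack([np.abs(t - k)**(2 * p - 1) for k in knots])
    return np.vstack([poly, radial]).T                            # n x (p + l)

def design_matrix(t, Z, knots, p=3):
    """Rows D_i = (z_i1*delta(t_i)^T, ..., z_iq*delta(t_i)^T)."""
    B = radial_basis(t, knots, p)                                 # n x (p + l)
    # Column s of Z scales the basis block that represents g_s.
    return np.hstack([Z[:, [s]] * B for s in range(Z.shape[1])])

# Example (knots at sample quantiles of the index t; an assumption, not the paper's rule):
# knots = np.quantile(t, np.linspace(0.1, 0.9, 5))
# D = design_matrix(t, Z, knots)
```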

2.3. The Penalized Robust Regression Estimator

Now, we consider variable selection for model (3). To guarantee model identifiability and to improve fitting accuracy and interpretability, we assume, as is usual, that the true regression coefficient vector is sparse, with only a small proportion of nonzero entries [13,14]. It is therefore natural to employ a penalized method that simultaneously selects important variables and estimates the parameter values. The penalized problem is formulated as follows:
$$\min\ L(\gamma_1, \rho) = \frac{1}{n}\sum_{i=1}^{n}\phi_{\gamma_2}\!\left(Y_i - \rho\tilde{Y}_i - D_i\gamma_1\right) + \lambda\sum_{j=1}^{q} P\!\left(\left|\gamma_{1j}\right|\right) \qquad (4)$$
where $\lambda > 0$, $\tilde{Y} = WY$, $\sum_{j=1}^{q} P(|\gamma_{1j}|)$ is a penalty term, and $\phi_{\gamma_2}(\cdot)$ is the exponential squared loss function $\phi_{\gamma_2}(t) = 1 - \exp(-t^2/\gamma_2)$, in which $\gamma_2$ is the tuning parameter controlling the degree of robustness.
Concerning the choice of the penalty term, the lasso or adaptive lasso penalty can be used when there is no extra structural information. Assume that $\hat{\gamma}_1$ is a root-$n$-consistent estimator of $\gamma_1$, for instance, the naive least squares estimator $\hat{\gamma}_1(\mathrm{ols})$. Define the weight vector $\eta$ with $\eta_j = 1/\left|\hat{\gamma}_{1j}\right|^{r}$ ($j = 1, \ldots, q$), $r > 0$; we set $r = 1$ in this paper, as suggested by [15]. The adaptive lasso penalty is then
$$\sum_{j=1}^{q} P\!\left(\left|\gamma_{1j}\right|\right) = \sum_{j=1}^{q}\eta_j\left|\gamma_{1j}\right|. \qquad (5)$$
The objective function of penalized robust regression that consists of exponential squared loss and an adaptive lasso penalty is formulated as
$$\min\ L(\gamma_1, \rho) = \frac{1}{n}\sum_{i=1}^{n}\phi_{\gamma_2}\!\left(Y_i - \rho\tilde{Y}_i - D_i\gamma_1\right) + \lambda\sum_{j=1}^{q}\eta_j\left|\gamma_{1j}\right| \qquad (6)$$
The selection of tuning parameter γ 2 and regularization parameter λ is discussed in Section 4.
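Objective (6) translates directly into code; the sketch below (with our own function names) evaluates the robust loss plus the adaptive lasso penalty for given $(\gamma_1, \rho)$.

```python
import numpy as np

def exp_squared_loss(r, gamma2):
    return 1.0 - np.exp(-r**2 / gamma2)

def penalized_objective(gamma1, rho, Y, Ytilde, D, lam, eta, gamma2):
    """Objective (6): robust loss on residuals plus the adaptive-lasso penalty."""
    resid = Y - rho * Ytilde - D @ gamma1
    return exp_squared_loss(resid, gamma2).mean() + lam * np.sum(eta * np.abs(gamma1))

# Ytilde = W @ Y; eta = 1 / np.abs(gamma1_ols) gives the adaptive-lasso weights.
```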

2.4. Estimation of the Variance of the Noise

Set $H = (I_n - \rho W)^{-1}$; then the variance of the noise is estimated as
$$\hat{\sigma}^2 = \frac{1}{n}\,(Y - HD\gamma_1)^{T}\left(HH^{T}\right)^{-1}(Y - HD\gamma_1), \qquad (7)$$
where $\rho$ and $\gamma_1$ are estimated by the solutions of (6). Since $H$ is a nonsingular matrix, $\left(HH^{T}\right)^{-1} = \left(H^{T}\right)^{-1}H^{-1} = (I_n - \rho W)^{T}(I_n - \rho W)$. Let $u = HD\gamma_1 = (I_n - \rho W)^{-1}(D\gamma_1)$; then $\hat{\sigma}^2$ defined in (7) can be computed as
$$\hat{\sigma}^2 = \frac{1}{n}\left\|(I_n - \rho W)(Y - u)\right\|_2^2 \qquad (8)$$
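In code, (8) can be evaluated without forming $H$ explicitly by solving a linear system for $u$; a sketch (names are ours):

```python
import numpy as np

def sigma2_hat(Y, W, D, gamma1, rho):
    """Noise variance estimate (8): (1/n) * ||(I - rho*W)(Y - u)||^2, u = H D gamma1."""
    n = len(Y)
    A = np.eye(n) - rho * W
    u = np.linalg.solve(A, D @ gamma1)      # u = (I - rho W)^{-1} D gamma1
    r = A @ (Y - u)
    return (r @ r) / n
```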

3. Theoretical Properties

To discuss the theoretical properties, let $\theta = \left(\rho, \gamma_1^{T}\right)^{T}$, and let $\theta_0$, $\alpha_0$, and $g_0(\cdot)$ be the true values of $\theta$, $\alpha$, and $g(\cdot)$. It is assumed that $\alpha_{l0} = 0$, $l = s+1, \ldots, p$, and that $\alpha_{l0}$, $l = 1, \ldots, s$, are the nonzero components of $\alpha_0$. Moreover, we assume that $g_{j0} = 0$, $j = d+1, \ldots, q$, and that $g_{j0}$, $j = 1, \ldots, d$, are the nonzero components of $g_0(\cdot)$. Set $\phi = (\alpha_2, \alpha_3, \ldots, \alpha_m)^{T}$ and $\alpha(\phi) = \left(\sqrt{1 - \|\phi\|^2},\ \phi^{T}\right)^{T}$; the true parameter $\phi_0$ satisfies $\|\phi_0\| < 1$. Hence, $\alpha(\phi)$ is differentiable in a neighborhood of $\phi_0$, and the Jacobian matrix is
$$J_\phi = \begin{pmatrix} -\left(1 - \|\phi\|^2\right)^{-1/2}\phi^{T} \\ I_{m-1} \end{pmatrix}$$
Assumption:
(C1)
The density function $f(t)$ of $U^{T}\alpha$ is uniformly bounded and bounded away from 0 on $T = \{t = U^{T}\alpha\}$. Furthermore, $f(t)$ is assumed to satisfy a Lipschitz condition of order 1 on $T$.
(C2)
The function $g_j(t)$, $j = 1, \ldots, q$, has bounded and continuous derivatives up to order $r\ (\ge 2)$ on $T$, where $g_j(t)$ is the $j$th component of $g(t)$.
(C3)
$E\|U\|^{6} < \infty$, $E\|Z\|^{6} < \infty$, and $E|\varepsilon|^{6} < \infty$.
(C4)
$\{(y_i, U_i, z_i),\ 1 \le i \le n\}$ is a strictly stationary and strongly mixing sequence with mixing coefficient $\gamma(n) = O(\xi^{n})$, where $0 < \xi < 1$.
(C5)
Let $c_1, \ldots, c_K$ be the interior knots of $[a, b]$, where $a = \inf\{t : t \in T\}$ and $b = \sup\{t : t \in T\}$. Moreover, set $c_0 = a$, $c_{K+1} = b$, $h_i = c_i - c_{i-1}$, and $h = \max_i h_i$. Then, there exists a positive constant $C_0$ such that
$$\frac{h}{\min_i h_i} < C_0, \qquad \max_i\left|h_{i+1} - h_i\right| = o\!\left(K^{-1}\right).$$
(C6)
Let $b_n = \max_j\left\{\left|\ddot{p}_{\lambda_j}\!\left(\left|\gamma_{1j0}\right|\right)\right| : \gamma_{1j0} \neq 0\right\}$; then $b_n \to 0$ as $n \to \infty$. Further,
$$\liminf_{n \to \infty}\ \liminf_{\gamma_{1j} \to 0^{+}}\ \lambda_j^{-1}\dot{p}_{\lambda_j}\!\left(\left|\gamma_{1j}\right|\right) > 0, \qquad j = d+1, \ldots, q.$$
(C7)
$I_n - \rho W$ is nonsingular for any $\rho \in \Theta$, where $\Theta$ is a compact parameter space, and the absolute row and column sums of $H(\rho) = (I_n - \rho W)^{-1}$ and $H(\rho)^{-1}$ are uniformly bounded on $\rho \in \Theta$.
(C8)
Let
$$I(\phi, \gamma_1; \gamma_2) = \frac{2}{\gamma_2}\int G(\phi)G^{T}(\phi)\, e^{-r^2/\gamma_2}\left(\frac{2r^2}{\gamma_2} - 1\right) dF(G, y)$$
where $r = Y - (I_n - \rho W)^{-1}D(\phi)\gamma_1 = Y - G(\phi)\gamma_1$ and $G(\phi) = (I_n - \rho W)^{-1}D(\phi)$. Suppose that $I(\phi, \gamma_1; \gamma_2)$ is negative definite.
(C9)
$\Sigma = E\!\left(GG^{T}\right)$ is positive definite.
Under the above preparations, we give the following sampling properties for our proposed estimators. The following theorem presents the consistency of the penalized exponential squared loss estimators.
Theorem 1.
Assume that conditions (C1)–(C9) hold and the number of knots is $K = O\!\left(n^{1/(2r+1)}\right)$. Further, suppose that $\gamma_{2n} - \gamma_{20} = o_p(1)$ for some $\gamma_{20} > 0$ and that $I(\phi_0, \gamma_{10}; \gamma_{20})$ is negative definite. Then,
(i)
$\left\|\hat{\alpha} - \alpha_0\right\| = O_p\!\left(n^{-r/(2r+1)} + a_n\right)$;
(ii)
$\left\|\hat{g}_j(\cdot) - g_{j0}(\cdot)\right\| = O_p\!\left(n^{-r/(2r+1)} + a_n\right)$, for $j = 1, \ldots, q$,
where $a_n = \max_j\left\{\left|\dot{p}_{\lambda_j}\!\left(\left|\gamma_{1j0}\right|\right)\right| : \gamma_{1j0} \neq 0\right\}$, $r$ is defined in condition (C2), and $\dot{p}_\lambda(\cdot)$ denotes the first-order derivative of $p_\lambda(\cdot)$.
In addition, we prove that, under suitable conditions, the consistent estimators must be sparse, as described below.
Theorem 2.
Suppose that conditions (C1)–(C9) hold and the number of knots is $K = O\!\left(n^{1/(2r+1)}\right)$. Assume that $\sqrt{n}\, a_n = O_p(1)$ and $\sqrt{n}\left(\gamma_{2n} - \gamma_{20}\right) = o_p(1)$. Let
$$\lambda_{j(\max)} \to 0, \qquad n^{r/(2r+1)}\lambda_{j(\min)} \to \infty \quad (n \to \infty).$$
Then, with probability approaching 1, $\hat{\alpha}$ and $\hat{g}(\cdot)$ satisfy
(i)
$\hat{\alpha}_l = 0$, $l = s+1, \ldots, p$;
(ii)
$\hat{g}_j(\cdot) = 0$, $j = d+1, \ldots, q$.
We then show that the estimators of the nonzero parameter components have the same asymptotic distribution as the estimators based on the correct submodel. Set
$$\alpha^{*} = (\alpha_1, \ldots, \alpha_s)^{T}, \qquad g^{*}(t) = \left(g_1(t), \ldots, g_d(t)\right)^{T},$$
and let $\alpha_0^{*}$ and $g_0^{*}(t)$ be the true values of $\alpha^{*}$ and $g^{*}$, respectively. The corresponding covariates are denoted by $U_i^{*}$ and $Z_i^{*}$, $i = 1, \ldots, n$. Furthermore, let
$$\Sigma_2 = \mathrm{cov}\!\left(\exp\!\left(-r^2/\gamma_{20}\right)\frac{2r}{\gamma_{20}}\, G_{i1}\right), \qquad \Sigma_1 = \mathrm{diag}\!\left(\ddot{p}_{\lambda_1}\!\left(\left|\gamma_{101}^{*}\right|\right), \ldots, \ddot{p}_{\lambda_d}\!\left(\left|\gamma_{10d}^{*}\right|\right)\right),$$
$$\Delta = \left(\dot{p}_{\lambda_1}\!\left(\left|\gamma_{101}^{*}\right|\right)\mathrm{sign}\!\left(\gamma_{101}^{*}\right), \ldots, \dot{p}_{\lambda_d}\!\left(\left|\gamma_{10d}^{*}\right|\right)\mathrm{sign}\!\left(\gamma_{10d}^{*}\right)\right)^{T},$$
$$I_1\!\left(\phi_{01}^{*}, \gamma_{101}^{*}; \gamma_{20}\right) = \frac{2}{\gamma_{20}}\, E\!\left[\exp\!\left(-r^2/\gamma_{20}\right)\left(\frac{2r^2}{\gamma_{20}} - 1\right)\right] E\!\left(G_{i1}G_{i1}^{T}\right).$$
The following result presents the asymptotic properties of α ^ * .
Theorem 3.
If the assumptions of Theorem 2 hold, we have
$$\sqrt{n}\left[I_1\!\left(\phi_{01}^{*}, \gamma_{101}^{*}; \gamma_{20}\right) + \Sigma_1\right]\left\{\hat{\alpha}^{*} - \alpha_0^{*} + \left[I_1\!\left(\phi_{01}^{*}, \gamma_{101}^{*}; \gamma_{20}\right) + \Sigma_1\right]^{-1}\Delta\right\} \xrightarrow{\;L\;} N\!\left(0,\ J_{\phi_0^{*}}\Sigma_2 J_{\phi_0^{*}}^{T}\right)$$
where "$\xrightarrow{\;L\;}$" denotes convergence in distribution.
Theorems 1 and 2 show that the proposed variable selection procedure is consistent, and Theorems 1 and 3 show that the penalized estimators have the oracle property; that is, they perform as well as if the subset of true zero coefficients were known in advance.

4. Algorithm

In this section, we describe a feasible algorithm for solving (6). A data-driven procedure for $\gamma_2$ and a simple selection method for $\lambda$ are considered. Moreover, an effective optimization algorithm is designed to handle the non-convex and non-differentiable objective function.

4.1. Choice of the Tuning Parameter γ 2

The tuning parameter $\gamma_2$ controls the robustness and efficiency of the proposed robust regression estimators. Ref. [16] proposed a data-driven procedure to choose $\gamma_2$ for ordinary regression; we follow its steps and adapt it to the spatial single-index varying-coefficient model. First, a set of tuning parameters is determined that ensures the proposed penalized robust estimators have an asymptotic breakdown point of 1/2. Then, the tuning parameter with maximum efficiency is selected.
The whole procedures are presented as follows:
Step 1. Initialize $\hat{\rho} = \rho^{(0)}$ and $\hat{\gamma}_1 = \gamma_1^{(0)}$. Set $\rho^{(0)} = \frac{1}{2}$ and take $\gamma_1^{(0)}$ to be a robust estimator. The model $Y = \rho WY + D\gamma_1 + \epsilon$ can then be recast as $Y^{*} = D\gamma_1 + \epsilon$, where $Y^{*} = Y - \rho WY$.
Step 2. Find the pseudo outlier set of the sample:
Let $A_n = \left\{\left(D_1, Y_1^{*}\right), \ldots, \left(D_n, Y_n^{*}\right)\right\}$. Calculate $r_i(\hat{\gamma}_1) = Y_i^{*} - D_i\hat{\gamma}_1$, $i = 1, \ldots, n$, and $S_n = 1.4826 \times \mathrm{median}_i\left|r_i(\hat{\gamma}_1) - \mathrm{median}_j\, r_j(\hat{\gamma}_1)\right|$. Then, take the pseudo outlier set $A_m = \left\{\left(D_i, Y_i^{*}\right) : \left|r_i(\hat{\gamma}_1)\right| \ge 2.5 S_n\right\}$, set $m = \#\left\{1 \le i \le n : \left|r_i(\hat{\gamma}_1)\right| \ge 2.5 S_n\right\}$, and $A_{n-m} = A_n \setminus A_m$.
Step 3. Select the tuning parameter $\gamma_{2n}$: construct $\hat{V}(\gamma_2) = \left\{\hat{I}(\hat{\gamma}_1)\right\}^{-1}\tilde{\Sigma}_2\left\{\hat{I}(\hat{\gamma}_1)\right\}^{-1}$, in which
$$\hat{I}(\hat{\gamma}_1) = \frac{2}{\gamma_2}\left[\frac{1}{n}\sum_{i=1}^{n}\exp\!\left(-r_i^2(\hat{\gamma}_1)/\gamma_2\right)\left(\frac{2r_i^2(\hat{\gamma}_1)}{\gamma_2} - 1\right)\right]\cdot\frac{1}{n}\sum_{i=1}^{n} D_i D_i^{T}$$
$$\tilde{\Sigma}_2 = \mathrm{Cov}\!\left(\exp\!\left(-r_1^2(\hat{\gamma}_1)/\gamma_2\right)\frac{2r_1(\hat{\gamma}_1)}{\gamma_2} D_1, \ \ldots, \ \exp\!\left(-r_n^2(\hat{\gamma}_1)/\gamma_2\right)\frac{2r_n(\hat{\gamma}_1)}{\gamma_2} D_n\right).$$
Let $\gamma_{2n}$ be the minimizer of $\det\!\left(\hat{V}(\gamma_2)\right)$ over the set $G = \{\gamma_2 : \zeta(\gamma_2) \in (0, 1]\}$, where $\zeta(\cdot)$ has the same definition as in [8] and $\det(\cdot)$ denotes the determinant.
Step 4. Update $\hat{\rho}$ and $\hat{\gamma}_1$ as the optimal solution of $\min\sum_{i=1}^{n}\phi_{\gamma_2}\!\left(Y_i - \rho\tilde{Y}_i - D_i\gamma_1\right)$, where $\tilde{Y} = WY$. Repeat Steps 2 to 4 until convergence.
Note that an initial robust estimator $\gamma_1^{(0)}$ is needed in the initialization step above. In practice, we take the LAD estimator as the initial estimator; in this sense, the selection of $\gamma_2$ basically does not depend on $\lambda$. Alternatively, one could select the two parameters $\gamma_2$ and $\lambda$ jointly by cross-validation, as discussed in [8], but this requires heavy computation. Moreover, the candidate interval for $\gamma_2$ is $\{\gamma_2 : \zeta(\gamma_2) \in (0, 1]\}$. In practice, we find the threshold $\gamma_2^{1}$ such that $\zeta\!\left(\gamma_2^{1}\right) = 1$; the chosen $\gamma_2$ is usually located in the interval $\left[5\gamma_2^{1},\ 30\gamma_2^{1}\right]$.
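As an illustration of Step 2, the pseudo outlier set can be computed from the scaled MAD of the residuals as follows (a sketch with our own helper name):

```python
import numpy as np

def pseudo_outliers(Ystar, D, gamma1_hat, c=2.5):
    """Flag observations whose residuals exceed c * (1.4826 * MAD), as in Step 2."""
    r = Ystar - D @ gamma1_hat
    S_n = 1.4826 * np.median(np.abs(r - np.median(r)))
    outlier = np.abs(r) >= c * S_n
    return outlier, S_n      # boolean mask of the pseudo outlier set A_m and the scale S_n
```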

4.2. Choice of the Regularization Parameter λ and η j

Regarding the choice of the regularization parameters $\lambda$ and $\eta_j$ in (6), since $\lambda$ can be absorbed into $\eta_j$, we set $\lambda_j = \lambda\cdot\eta_j$. Many methods can be applied to select $\lambda_j$, such as AIC, BIC, and cross-validation. To ensure consistent variable selection while reducing the computational burden, we choose the regularization parameters by minimizing a BIC-type objective function, as in [16]:
$$\sum_{i=1}^{n}\left[1 - \exp\!\left(-\left(Y_i^{*} - D_i\gamma_1\right)^2/\gamma_{2n}\right)\right] + n\sum_{j=1}^{q}\lambda_j\left|\gamma_{1j}\right| - \sum_{j=1}^{q}\log\!\left(0.5\, n\,\lambda_j\right)\log(n)$$
where $Y_i^{*} = Y_i - \rho\left(W_n Y\right)_i$. This results in $\lambda_j = \frac{\log(n)}{n\left|\gamma_{1j}\right|}$, where $\gamma_{1j}$ can be estimated by the unpenalized exponential squared loss estimator $\tilde{\gamma}_{1j}$, with $\gamma_2$ chosen as described in Section 4.1. Note that this simple choice satisfies the conditions $\sqrt{n}\,\hat{\lambda}_j \to 0$ for $j \le d$ and $\sqrt{n}\,\hat{\lambda}_j \to \infty$ for $j > d$, where $d$ is the number of nonzero components in the true value of $\gamma_1$. Thus, consistent variable selection is ensured by the final estimator.
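In code, this BIC-type choice of $\lambda_j$ is a one-line computation (a sketch; gamma1_tilde denotes the unpenalized exponential squared loss estimate, and the small eps guard is our addition):

```python
import numpy as np

def bic_type_lambdas(gamma1_tilde, n, eps=1e-8):
    """Per-coefficient regularization lambda_j = log(n) / (n * |gamma1_tilde_j|)."""
    return np.log(n) / (n * (np.abs(gamma1_tilde) + eps))   # eps guards a zero estimate
```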

4.3. Block Coordinate Descent (BCD) Algorithm

We now construct an effective algorithm for solving the objective function (6). This is difficult because the optimization problem is non-convex and non-differentiable. We adopt the BCD algorithm proposed by [17] to overcome these challenges. The BCD algorithm framework is shown in Algorithm 1.
Algorithm 1 The block coordinate descent (BCD) algorithm
1. Set initial values $\gamma_1^{0} \in \mathbb{R}^{p}$ and $\rho^{0} \in (0, 1)$;
2. repeat for $k = 0, 1, 2, \ldots$
3. Solve the subproblem in $\rho$ with initial point $\rho^{k}$:
$$\rho^{k+1} \leftarrow \arg\min_{\rho \in [0, 1]} L\!\left(\gamma_1^{k}, \rho\right) \qquad (11)$$
4. Solve the subproblem in $\gamma_1$ with initial value $\gamma_1^{k}$,
$$\min_{\gamma_1 \in \mathbb{R}^{q}} L\!\left(\gamma_1, \rho^{k+1}\right) \qquad (12)$$
to obtain a solution $\gamma_1^{k+1}$, ensuring that $L\!\left(\gamma_1^{k}, \rho^{k+1}\right) - L\!\left(\gamma_1^{k+1}, \rho^{k+1}\right) \ge 0$ and that $\gamma_1^{k+1}$ is a stationary point of $L\!\left(\gamma_1, \rho^{k+1}\right)$.
5. until convergence.
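A compact sketch of this alternating scheme is given below; it uses SciPy's bounded scalar minimizer for the $\rho$ step and leaves the $\gamma_1$ step to a user-supplied solver (e.g., the CCCP/FISTA routine of Section 4.4). The function and parameter names are ours.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bcd(Y, Ytilde, D, lam, eta, gamma2, solve_gamma1, gamma1_0, rho_0=0.5,
        max_iter=50, tol=1e-6):
    """Alternate between the rho subproblem (11) and the gamma1 subproblem (12)."""
    def obj(g1, rho):
        r = Y - rho * Ytilde - D @ g1
        return np.mean(1.0 - np.exp(-r**2 / gamma2)) + lam * np.sum(eta * np.abs(g1))

    gamma1, rho, prev = gamma1_0.copy(), rho_0, np.inf
    for _ in range(max_iter):
        # rho step: one-dimensional bounded minimization over [0, 1]
        rho = minimize_scalar(lambda r: obj(gamma1, r),
                              bounds=(0.0, 1.0), method="bounded").x
        # gamma1 step: handled by a user-supplied solver (e.g. CCCP + FISTA)
        gamma1 = solve_gamma1(gamma1, rho)
        cur = obj(gamma1, rho)
        if abs(prev - cur) < tol:
            break
        prev = cur
    return gamma1, rho
```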

4.4. DC Decomposition and CCCP Algorithm

A key observation for problem (12) is that the exponential squared loss function is a DC function (a difference of convex functions), while the lasso and adaptive lasso penalty functions are convex. As a result, problem (12) is a DC program and can be solved by the following algorithms.
We first show that the exponential squared loss function $\phi_{\gamma_2}(t)$ can be written as the difference of two convex functions:
$$\phi_{\gamma_2}(t) := \phi_{\gamma_2}(t) + v(t) - v(t) := u(t) - v(t) \qquad (13)$$
where $\phi_{\gamma_2}(t) = 1 - e^{-t^2/\gamma_2}$, $v(t) = \frac{1}{3\gamma_2^2} t^4$, and $u(t) = \phi_{\gamma_2}(t) + v(t)$.
Set
$$J_{\mathrm{vex}}(\gamma_1) = \frac{1}{n}\sum_{i=1}^{n} u\!\left(Y_i - \rho^{k}\langle w_i, Y\rangle - D_i\gamma_1\right) + \lambda\sum_{j=1}^{q} P\!\left(\left|\gamma_{1j}\right|\right), \qquad J_{\mathrm{cav}}(\gamma_1) = -\frac{1}{n}\sum_{i=1}^{n} v\!\left(Y_i - \rho^{k}\langle w_i, Y\rangle - D_i\gamma_1\right) \qquad (15)$$
in which $u(\cdot)$ and $v(\cdot)$ are defined in (13), $w_i$ is the $i$th row of the weight matrix $W$, and $\sum_{j=1}^{q} P(|\gamma_{1j}|)$ is a convex penalty in $\gamma_1$. Then, $J_{\mathrm{vex}}(\cdot)$ and $J_{\mathrm{cav}}(\cdot)$ are convex and concave functions, respectively. Subproblem (12) can be recast as
$$\min_{\gamma_1 \in \mathbb{R}^{q}} L\!\left(\gamma_1, \rho^{k}\right) = J_{\mathrm{vex}}(\gamma_1) + J_{\mathrm{cav}}(\gamma_1), \qquad (16)$$
and it can be solved by the concave–convex procedure (CCCP) proposed by [18], as shown in Algorithm 2.
Algorithm 2 The Concave–Convex Procedure
1. Initialize $\gamma_1^{0}$. Set $k = 0$.
2. repeat for $k = 0, 1, 2, \ldots$
3. $\gamma_1^{k+1} = \arg\min_{\gamma_1}\ J_{\mathrm{vex}}(\gamma_1) + \nabla J_{\mathrm{cav}}\!\left(\gamma_1^{k}\right)\cdot\gamma_1$
4. until convergence of $\gamma_1^{k}$.
We focus on the lasso and the adaptive lasso penalties. Since $\nabla J_{\mathrm{cav}}\!\left(\gamma_1^{k}\right)\cdot\gamma_1$ is linear in $\gamma_1$, by the definitions in (15) the objective function of (16) can be expressed as
$$\min_{\gamma_1 \in \mathbb{R}^{q}}\ \psi(\gamma_1) + \lambda\sum_{i=1}^{q} P\!\left(\left|\gamma_{1i}\right|\right), \qquad (17)$$
where $\psi(\gamma_1)$ is a convex and continuously differentiable function and $\sum_{i=1}^{q} P(|\gamma_{1i}|)$ is the lasso penalty $\sum_{i=1}^{q}\left|\gamma_{1i}\right|$ or, more generally, the adaptive lasso penalty $\sum_{i=1}^{q}\eta_i\left|\gamma_{1i}\right|$, $\eta_i \ge 0$, $i = 1, \ldots, q$. Therefore, the efficient ISTA and FISTA algorithms proposed by [19] can be used to solve problems of the form (17) with the lasso penalty. Each ISTA iteration is simply $\gamma_1^{k} = \Theta_L\!\left(\gamma_1^{k-1}\right)$, where $L$ is the (generally unknown) Lipschitz constant. FISTA is an accelerated version of ISTA that was shown by [19] to have a better convergence rate in theory and in practice. Ref. [17] extended it to the adaptive lasso penalty while preserving numerical efficiency.
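For the (adaptive) lasso penalty, each ISTA/FISTA update reduces to a gradient step on $\psi$ followed by componentwise soft-thresholding; the sketch below shows a generic proximal-gradient step of this kind (not the authors' exact implementation).

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_step(gamma1, grad_psi, L, lam, eta):
    """One ISTA step for (17): gradient step on psi, then prox of the adaptive lasso."""
    y = gamma1 - grad_psi(gamma1) / L
    return soft_threshold(y, lam * eta / L)   # eta = 1 for the plain lasso
```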
Now consider solving subproblem (11) to update $\rho^{k}$. Since problem (11) minimizes a univariate function, we employ the classical golden section search algorithm with parabolic interpolation (see [20] for details).
Following Beck and Teboulle, the objective values generated by FISTA for subproblem (16) of CCCP converge to the optimal value at the rate $O\!\left(1/k^2\right)$, where $k$ is the iteration count. The usual termination criterion of ISTA and FISTA is $\left\|\gamma_1^{k} - \gamma_1^{k-1}\right\| / \max\!\left(\left\|\gamma_1^{k}\right\|, 1\right) \le \mathrm{tol}_{\gamma_1}$, where $\mathrm{tol}_{\gamma_1}$ is a small positive tolerance. Algorithm 1 terminates when either $\left\|\gamma_1^{k} - \gamma_1^{k+1}\right\| \le \epsilon_1$ or $\left|L\!\left(\gamma_1^{k}\right) - L\!\left(\gamma_1^{k+1}\right)\right| \le \epsilon_2$. Therefore, to obtain an $\epsilon$-optimal solution, the FISTA algorithm requires $O\!\left(1/\sqrt{\epsilon}\right)$ iterations, and the gradient $\nabla\psi(\gamma_1)$ of (17) is computed at each iteration. Suppose that the BCD algorithm converges within a fixed number of iterations and that the CCCP procedure terminates after at most $m$ iterations within each BCD iteration. Since $O(np)$ operations are needed to compute the gradient $\nabla\psi(\gamma_1)$, the total computational complexity is $O\!\left(mnp/\sqrt{\epsilon}\right)$.

5. Simulation Studies

In this section, we conduct numerical studies to illustrate the performance of the proposed method, including the cases of normal data and noisy data.

5.1. Simulation Sampling

The data are generated from model (1). We set $\alpha = \left(\alpha_1, \alpha_2, 0_q\right)^{T}$, where $(\alpha_1, \alpha_2)$ is generated from a two-dimensional normal distribution with mean vector $(0.6, 0.8)$ and covariance matrix $0.01\cdot I_2$, $I_2$ being the $2\times 2$ identity matrix, and $0_q$ is the $q$-dimensional zero vector. The sample size is $n \in \{25, 144, 324\}$, and the spatial coefficient $\rho$ is generated from the uniform distribution on the interval $\left[\rho_1 - 0.1,\ \rho_1 + 0.1\right]$, where $\rho_1 \in \{0.8, 0.5, 0.2\}$. For comparison, we also consider $\rho = 0$, meaning no spatial dependence, in which case model (1) reduces to the ordinary single-index varying-coefficient model.
The noise follows $\varepsilon \sim N\!\left(0, \sigma^2 I_n\right)$, in which $\sigma^2$ is generated from the uniform distribution on the interval $\left[\sigma_1 - 0.1,\ \sigma_1 + 0.1\right]$ with $\sigma_1 \in \{1, 2\}$. We also consider the case where there are outliers in the response; the error term then follows the mixed normal distribution $\left(1 - \delta_1\right)\cdot N(0, 1) + \delta_1\cdot N\!\left(10, 6^2\right)$, where $\delta_1 \in \{0.01, 0.05\}$. The covariates $z_{ij}$ are drawn independently from the normal distribution $N(0, 1)$, and the spatial weight matrix is $W_n = I_R \otimes B_m$, where $B_m = \frac{1}{m-1}\left(1_m 1_m^{T} - I_m\right)$, $\otimes$ is the Kronecker product, and $1_m$ is the $m$-dimensional column vector of ones. We take $m = 2$ and $R \in \{20, 100\}$.
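The block-diagonal weight matrix $W_n = I_R \otimes B_m$ can be built with a Kronecker product; a short sketch (our own code) is:

```python
import numpy as np

def group_weight_matrix(R, m):
    """W_n = I_R kron B_m with B_m = (1/(m-1)) * (1_m 1_m^T - I_m)."""
    B = (np.ones((m, m)) - np.eye(m)) / (m - 1)
    return np.kron(np.eye(R), B)

W = group_weight_matrix(R=20, m=2)   # 40 x 40 block-diagonal weight matrix
```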
Moreover, we construct the spatial location information using two-dimensional plane coordinates. Take a square to represent the geographical region, set the lower left corner of the square as the origin, and establish a rectangular coordinate system along the horizontal and vertical directions. Each side is divided into $h - 1$ equal segments, and the corresponding division points are connected along the horizontal and vertical axes to form $h \times h$ crossing points (including the division points on each side of the square). Each crossing point is a geographical location. Setting the sample size to $n = h^2$, the geographical location coordinate $U_i = \left(u_{i1}, u_{i2}, \ldots, u_{iq}\right)^{T}$ is expressed as
$$U_i = \left(0.5\,\mathrm{mod}(i-1, h),\ 0.5\,\mathrm{floor}(i-1, h),\ 0, \ldots, 0\right)^{T}$$
where mod and floor denote MATLAB built-in functions: $\mathrm{mod}(i-1, h)$ is the remainder of $i-1$ divided by $h$, and $\mathrm{floor}(i-1, h)$ is the integer part of the quotient of $i-1$ divided by $h$. We set
$$g(t) = \left(g_1(t), \ldots, g_q(t)\right)^{T} = \left(\sin(t),\ 3t^2,\ 1 - 6t,\ 0, \ldots, 0\right)^{T}$$
The true surface of the three coefficient functions is shown in Figure 1.
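The same grid of spatial locations and the nonzero coefficient functions can be written in Python as follows (a sketch mirroring the MATLAB-style description above; only the first two coordinates of $U_i$ are nonzero).

```python
import numpy as np

def grid_locations(h, q):
    """U_i = (0.5*mod(i-1,h), 0.5*floor((i-1)/h), 0, ..., 0)^T for i = 1, ..., h^2."""
    i = np.arange(h * h)
    U = np.zeros((h * h, q))
    U[:, 0] = 0.5 * (i % h)
    U[:, 1] = 0.5 * (i // h)
    return U

# First three coefficient functions; the remaining q - 3 are identically zero.
g_funcs = [np.sin, lambda t: 3 * t**2, lambda t: 1 - 6 * t]
```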
Another important issue for the spatial single-index varying-coefficient model is the estimation of the weight matrix $W$. Since $W \in \mathbb{R}^{n\times n}$ encodes the correlation between every pair of observations, an accurate estimate of $W$ is usually difficult to obtain in practical applications. To examine the effect of an inaccurately estimated $W$, we randomly remove 30%, 50%, and 80% of the non-zero weights from each row of the true weight matrix $W$, respectively.
For each simulation setting, all of the results shown below are averaged over 100 replications to avoid unintended effects. We adopt the node selection method proposed by [12], with step = 10 and the number of radial basis functions $p = 3$.

5.2. Simulation Results

The evaluation criteria are as follows. For the index parameter, we use the median of squared errors (MedSE) proposed by [21], defined here as the median over replications of $\|\alpha - \hat{\alpha}\|^2$, where $\|\alpha\| = \left(\sum_i \alpha_i^2\right)^{1/2}$ and $\hat{\alpha}$ is the estimator of $\alpha$. The square root of the mean deviation (MAISE) is used as the evaluation index for the unknown functions:
$$\mathrm{MAISE} = \left[\frac{1}{mcn}\sum_{i=1}^{mcn}\frac{1}{n}\sum_{j=1}^{n}\left(g_t\!\left(U_j^{T}\alpha\right) - \delta\!\left(U_j^{T}\hat{\alpha}\right)^{T}\hat{\gamma}_1^{t}\right)^2\right]^{1/2}, \qquad t = 1, 2, \ldots, q,$$
where $mcn$ is the total number of simulation replications and $g_t(\cdot)$ is the $t$th unknown function of the model. The smaller the value of each index, the more accurate the parameter estimation and the better the fit of the unknown functions.
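For reference, the two criteria can be computed as in the following sketch, where alpha_hats stacks the estimates over replications and the g arrays hold the true and fitted function values at the design points (names are ours):

```python
import numpy as np

def medse(alpha_hats, alpha_true):
    """Median over replications of ||alpha_hat - alpha_true||^2."""
    err = np.sum((alpha_hats - alpha_true) ** 2, axis=1)   # one row per replication
    return np.median(err)

def maise(g_true_vals, g_hat_vals):
    """Root of the average (over replications and design points) squared deviation."""
    return np.sqrt(np.mean((g_true_vals - g_hat_vals) ** 2))
```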
Table 1 reports the estimated coefficients for the spatial single-index varying-coefficient model with $q = 5$, no penalty term, and Gaussian noise in $y$, where "E", "S", and "L" indicate the exponential squared loss, the square loss, and the LAD loss, respectively. All three loss functions yield nonzero estimates of $\alpha_1$ and $\alpha_2$ that are close to the true values (the means of the true values of $\alpha_1$ and $\alpha_2$ are 0.6 and 0.8, respectively). Comparatively, the model with the square loss produces the most accurate estimates. As the sample size increases, all three loss functions give accurate estimates of $\alpha$ and $\sigma^2$.
Table 2 presents the estimated coefficients when the dimension is comparatively close to the sample size. Results similar to those in Table 1 are observed, although less accurate than in the $q = 5$ case; since the sample size is small relative to the dimension, this is as expected.
Table 3 reports the results when the observations of $y$ contain outliers. Compared with the square loss and the LAD loss, the model with the exponential squared loss shows an advantage in parameter estimation in terms of MedSE, especially when the sample size is large.
Table 4 lists the estimated coefficients with an inaccurate weight matrix $W$. Compared with the results for normal data (Table 1), the MedSE values increase and the estimates $\hat{\rho}$ and $\hat{\sigma}^2$ deteriorate for all loss functions overall. In particular, when a given proportion (30%, 50%, or 80%) of the nonzero weights of $W$ is removed, MedSE increases with the proportion removed and decreases as the sample size $n$ increases, for each of the three loss functions. The exponential squared loss has the lowest MedSE among the three loss functions.
Correspondingly, Tables 5–8 show the variable selection results compared with the other loss functions. The average number of zero coefficients correctly identified is labeled "Correct", while "Incorrect" denotes the average number of nonzero coefficients incorrectly identified as zero. "$\tilde{\ell}_1$", "$\ell_1$", and "null" denote the adaptive lasso penalty, the lasso penalty, and no penalty term, respectively.
Table 5 shows the variable selection results of the lasso and the adaptive lasso regularizers on normal data with $q = 5$. In almost all of the tested cases, the model with the exponential squared loss and the lasso or adaptive lasso penalty (i.e., E + $\ell_1$, E + $\tilde{\ell}_1$) identifies more of the true zero coefficients ("Correct") and has much lower MedSE.
Similar results are observed when the dimension is close to the sample size, as presented in Table 6. In the tested cases with $n = 324$ and $q = 200$, the models with the $\ell_1$ and $\tilde{\ell}_1$ penalties correctly identify almost all of the zero coefficients. This performance of the proposed exponential squared loss with the lasso or adaptive lasso penalty exceeds our expectations.
Tables 7 and 8 list the variable selection results with noise in the observations and with an inaccurate weight matrix. The model with the exponential squared loss and the lasso or adaptive lasso penalty (i.e., E + $\ell_1$, E + $\tilde{\ell}_1$) identifies many more of the true zero coefficients ("Correct") and has much lower MedSE. Compared with the results in the normal cases (Table 5), the superiority of E + $\ell_1$ and E + $\tilde{\ell}_1$ is even more evident.
For the fitted coefficient function surfaces, we select the replication at the median position among the 100 repeated experiments as representative. Here we present the case of $h = 16$ on normal data; the fitted surfaces of $\hat{g}_1$, $\hat{g}_2$, and $\hat{g}_3$ are shown in Figure 2.
The fits show that the model recovers the unknown coefficient functions very well, which indicates that, even with limited samples, the spatial single-index varying-coefficient model based on radial basis functions and the exponential squared loss fits well. The fitted coefficient functions also perform well in the other cases.
We also report the fitting index MAISE for $h = 16, 18$ on normal data in Table 9. As the total number of spatial objects increases, the MAISE of the unknown functions decreases; that is, the fit keeps improving. Similarly, the MAISE of $y$ also decreases, indicating that, for the model as a whole, the fitted values move closer to the real data.
We also compare the fitted coefficient function surfaces when the observations of $y$ contain outliers. We again select the replication at the median of the 100 repetitions and take the fitted surface of $g_3$ as an example. When $\rho_1 = 0.5$, $\sigma_1 = 1$, $\delta_1 = 0.05$, the fits of the loss functions with the adaptive lasso are shown in Figure 3, which shows that our method performs better. The same conclusion holds for the case of a noisy weight matrix $W$; Figure 4 illustrates the results when 50% of the nonzero weights are removed.

6. Summary

In this paper, we propose a novel model, the spatial single-index varying-coefficient model, and introduce a robust variable selection method for it based on spline estimation and the exponential squared loss. The theoretical properties of the proposed estimators are established under reasonable assumptions. We design a BCD algorithm equipped with a CCCP procedure to efficiently solve the non-convex and non-differentiable optimization problem arising in the variable selection procedure. Numerical studies show that our proposed method is particularly robust and applicable when the observations and the weight matrix are noisy.

Author Contributions

Conceptualization, Y.S. and Y.W.; methodology, Y.W.; software, Y.W.; validation, Z.W. and Y.S.; formal analysis, Y.W.; investigation, Z.W.; resources, Y.S.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W., Z.W. and Y.S.; visualization, Y.W.; supervision, Y.S.; project administration, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key Research and Development Program of China (2021YFA1000102).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The main abbreviations used in this work are as follows:
SAR model: Spatial autoregressive model;
BCD algorithm: Block coordinate descent algorithm;
DC function: Difference of two convex functions;
CCCP: Concave–convex procedure;
ISTA: Iterative shrinkage-thresholding algorithm;
FISTA: Fast iterative shrinkage-thresholding algorithm;
MedSE: Median of squared errors;
MAISE: Square root of mean deviation.

Appendix A. Proofs

Appendix A.1. The Related Lemmas

Lemma A1
(Convexity Lemma). Let $\{h_n(t) : t \in T\}$ be a sequence of random convex functions defined on a convex, open subset $T$ of $\mathbb{R}^{d}$. Assume $h(t)$ is a real-valued function on $T$ for which $h_n(t) \to h(t)$ in probability for each $t \in T$. Then, for each compact subset $J$ of $T$,
$$\sup_{t \in J}\left|h_n(t) - h(t)\right| \to 0 \quad \text{in probability}.$$
The function $h(\cdot)$ is necessarily convex on $T$.
Proof of Lemma A1.
For this well-known convexity lemma, there are many versions of proof, one of which can be referred to [22]. □
Lemma A2.
If $g_j(t)$, $j = 1, \ldots, q$, satisfy condition (C2), then there exists a constant $C > 0$, depending only on $M$, such that
$$\sup_{t \in T}\left|g_j(t) - \delta^{T}(t)\gamma_{1j}\right| \le C K^{-r}.$$
Proof of Lemma A2.
The proof of Lemma A2 is similar to the proof of Corollary 6.21 in [23]. □

Appendix A.2. Proof of the Main Theorems

Proof of Theorem 1. 
Let
$$\eta = n^{-r/(2r+1)} + a_n, \qquad \phi = \phi_0 + \eta t_1, \qquad \gamma_1 = \gamma_{10} + \eta t_2, \qquad t = \left(t_1^{T}, t_2^{T}\right)^{T}.$$
(i) Let
$$Q(\phi, \gamma_1) = \sum_{i=1}^{n}\exp\!\left(-\left(Y_i - G_i^{T}(\phi)\gamma_1\right)^2/\gamma_2\right).$$
We will show that, for any given $\epsilon > 0$, there exists a large constant $C$ such that
$$P\!\left\{\inf_{\|t\| = C} Q\!\left(\phi_0 + \eta t_1,\ \gamma_{10} + \eta t_2\right) > Q\!\left(\phi_0, \gamma_{10}\right)\right\} \ge 1 - \epsilon, \qquad (A1)$$
where the true values of $\phi$ and $\gamma_1$ are $\phi_0$ and $\gamma_{10}$. Let
$$W_n(\phi, \gamma_1) = \sum_{i=1}^{n}\frac{2}{\gamma_2}\exp\!\left(-\left(Y_i - G_i^{T}(\phi)\gamma_1\right)^2/\gamma_2\right)\left(Y_i - G_i(\phi)^{T}\gamma_1\right)\dot{G}_i(\phi)^{T}\gamma_1\, J_{\phi}^{T} U_i$$
and
$$V_n(\phi, \gamma_1) = \sum_{i=1}^{n}\frac{2}{\gamma_2}\exp\!\left(-\left(Y_i - G_i^{T}(\phi)\gamma_1\right)^2/\gamma_2\right)\left(Y_i - G_i(\phi)^{T}\gamma_1\right)G_i(\phi),$$
where $\dot{G}_i^{T}(\phi)\big|_{\phi_0} = (I_n - \rho W)^{-1}\dot{D}(\phi)$. Let $L_n(\tau) = K^{-1}\left[\ell_n(\phi, \gamma_1) - \ell_n(\phi_0, \gamma_{10})\right]$. Then, through a Taylor expansion and a simple calculation, we obtain
$$\begin{aligned} L_n(\tau) &= \frac{1}{K}\left[\ell_n\!\left(\phi_0 + \eta\tau_1,\ \gamma_{10} + \eta\tau_2\right) - \ell_n\!\left(\phi_0, \gamma_{10}\right)\right] \\ &\le -\frac{1}{K}\,\eta\left(W_n\!\left(\phi_0, \gamma_{10}\right)^{T},\ V_n\!\left(\phi_0, \gamma_{10}\right)^{T}\right)\left(t_1, t_2\right)^{T} - \frac{1}{2K}\left(t_1, t_2\right)^{T} I\!\left(\phi_0, \gamma_{10}\right)\left(t_1, t_2\right)\, n\eta^2\left(1 + o_p(1)\right) \\ &\quad - \frac{n}{K}\sum_{j=1}^{d}\left[p_{\lambda_j}\!\left(\left|\gamma_{1j0} + \eta\tau_{2j}\right|\right) - p_{\lambda_j}\!\left(\left|\gamma_{1j0}\right|\right)\right] \\ &= S_1 + S_2 + S_3 + o_p(1). \end{aligned}$$
Notice that $S_1 = O_p\!\left(1 + n^{r/(2r+1)} a_n\right)\|t\|$ and $S_2 = O_p\!\left(nK^{-1}\eta^2\right)\|t\|^2 = O_p\!\left(1 + 2n^{r/(2r+1)} a_n\right)\|t\|^2$. Hence, by selecting a sufficiently large $C$, $S_2$ dominates $S_1$ uniformly in $\|t\| = C$. Moreover, invoking $p_\lambda(0) = 0$ and the standard Taylor expansion argument, we obtain
$$S_3 \le \sum_{j=1}^{d}\left[n\eta\,\dot{p}_{\lambda_j}\!\left(\left|\gamma_{1j0}\right|\right)\mathrm{sgn}\!\left(\gamma_{1j0}\right)\left|t_{2j}\right| + n\eta^2\,\ddot{p}_{\lambda_j}\!\left(\left|\gamma_{1j0}\right|\right)t_{2j}^{2}\{1 + o(1)\}\right] \le \sqrt{s}\, nK^{-1}\eta\, a_n\|t\| + nK^{-1}\eta^2 b_n\|t\|^2.$$
Then, it is clear that $S_3$ is also dominated by $S_2$ uniformly in $\|t\| = C$. Therefore, by selecting a sufficiently large $C$, (A1) holds. Hence, there exist local minimizers $\hat{\phi}$ and $\hat{\gamma}_1$ such that
$$\left\|\hat{\gamma}_1 - \gamma_{10}\right\| = O_p(\eta), \qquad \left\|\hat{\phi} - \phi_0\right\| = O_p(\eta).$$
A direct calculation then gives $\left\|\hat{\alpha} - \alpha_0\right\| = O_p(\eta)$, which finishes the proof of (i).
(ii) Note that
$$\begin{aligned}\left\|\hat{g}_j(t) - g_{j0}(t)\right\|^2 &= \int_T\left(\hat{g}_j(t) - g_{j0}(t)\right)^2 dt = \int_T\left(\delta^{T}(t)\hat{\gamma}_{1j} - \delta^{T}(t)\gamma_{1j0} + R_j(t)\right)^2 dt \\ &\le 2\int_T\left(\delta^{T}(t)\hat{\gamma}_{1j} - \delta^{T}(t)\gamma_{1j0}\right)^2 dt + 2\int_T R_j^2(t)\, dt \\ &= 2\left(\hat{\gamma}_{1j} - \gamma_{1j0}\right)^{T} H\left(\hat{\gamma}_{1j} - \gamma_{1j0}\right) + 2\int_T R_j^2(t)\, dt.\end{aligned}$$
Then, invoking $\|H\| = O(1)$, a simple calculation shows
$$\left(\hat{\gamma}_{1k} - \gamma_{1k0}\right)^{T} H\left(\hat{\gamma}_{1k} - \gamma_{1k0}\right) = O_p\!\left(n^{-2r/(2r+1)} + a_n^2\right). \qquad (A2)$$
In addition, it is easy to show that
$$\int_T R_j^2(t)\, dt = O_p\!\left(n^{-2r/(2r+1)}\right). \qquad (A3)$$
Combining (A2) and (A3) finishes the proof of (ii). □
Proof of Theorem 2 
(i) From $\lambda_{\max} \to 0$, it is easy to show that $a_n = 0$ for large $n$. Then, by Theorem 1, it suffices to show that, for any $\phi_j$ satisfying $\phi_j - \phi_{j0} = O_p\!\left(n^{-r/(2r+1)}\right)$, $j = 1, \ldots, s-1$, and some given small $\epsilon = C n^{-r/(2r+1)}$ and $j = s, \ldots, p-1$, as $n \to \infty$, with probability approaching one, $\partial Q(\phi, \gamma)/\partial\phi_j > 0$ for $0 < \phi_j < \epsilon$ and $\partial Q(\phi, \gamma)/\partial\phi_j < 0$ for $-\epsilon < \phi_j < 0$. Let
$$Q_n(\phi, \gamma_1) = \sum_{i=1}^{n}\exp\!\left(-\left(Y_i - G_i(\phi)\gamma_1\right)^2/\gamma_2\right); \qquad (A4)$$
a simple calculation shows that
$$\frac{\partial\ell(\phi, \gamma_1)}{\partial\phi_j} = -\frac{\partial Q_n(\phi, \gamma_1)}{\partial\phi_j} = -\sum_{i=1}^{n}\frac{2}{\gamma_2}\exp\!\left(-\left(Y_i - G_i^{T}(\phi)\gamma_1\right)^2/\gamma_2\right)\left(Y_i - G_i(\phi)^{T}\gamma_1\right)\dot{G}_i(\phi)^{T}\gamma_1\, e_{\phi_j}^{T} U_i$$
where $e_{\phi_j} = \left(-\left(1 - \|\phi\|^2\right)^{-1/2}\phi_j,\ 0, \ldots, 0, 1, 0, \ldots, 0\right)^{T}$ with the $(j+1)$th component equal to 1. Under conditions (C1), (C2), and (C3) and Theorem 1, it is easy to show that
$$\frac{\partial\ell(\phi, \gamma)}{\partial\phi_j} = O_p\!\left(n^{r/(2r+1)}\right),$$
so the sign of the derivative is completely determined by that of $\phi_j$. Then, $\partial Q(\phi, \gamma_1)/\partial\phi_j > 0$ for $0 < \phi_j < \epsilon$ and $\partial Q(\phi, \gamma_1)/\partial\phi_j < 0$ for $-\epsilon < \phi_j < 0$ hold. This finishes the proof of (i).
(ii) Applying arguments similar to those in the proof of (i), we obtain, with probability approaching one, $\hat{\gamma}_{1j} = 0$, $j = d+1, \ldots, q$. Then, invoking
$$\sup_t\left\|\delta(t)\right\| = O(1),$$
the result of the theorem follows from $\hat{g}_j(t) = \delta^{T}(t)\hat{\gamma}_{1j}$. □
Proof of Theorem 3.
By Theorems 1 and 2, as $n \to \infty$, with probability approaching one, $\ell(\phi, \gamma_1)$ reaches a local maximizer at $\left(\hat{\phi}^{*T}, 0^{T}\right)$ and $\left(\hat{\gamma}_1^{*T}, 0^{T}\right)$. Let
$$Q_{1n}(\phi, \gamma_1) = \frac{\partial\ell(\phi, \gamma_1)}{\partial\phi^{*}}, \qquad Q_{2n}(\phi, \gamma_1) = \frac{\partial\ell(\phi, \gamma_1)}{\partial\gamma_1^{*}}.$$
Then, $\left(\hat{\phi}^{*T}, 0^{T}\right)$ and $\left(\hat{\gamma}_1^{*T}, 0^{T}\right)$ satisfy
$$\frac{1}{n}Q_{1n}\!\left(\left(\hat{\phi}^{*T}, 0^{T}\right), \left(\hat{\gamma}_1^{*T}, 0^{T}\right)\right) = \frac{2}{n}\sum_{i=1}^{n}\frac{2}{\gamma_2}\exp\!\left(-\left(Y_i - G_i^{T}(\phi)\gamma_1\right)^2/\gamma_2\right)\left(Y_i - G_i^{*T}\!\left(\hat{\phi}_0^{*}\right)\hat{\gamma}_1^{*}\right)\dot{G}_i^{*T}\!\left(\hat{\phi}^{*}\right)\hat{\gamma}_1^{*}\, J_{\hat{\phi}^{*}}^{T} U_i^{*} - V_1 = 0$$
where
$$V_1 = \left(\dot{p}_{\lambda_1}\!\left(\hat{\gamma}_1^{1}\right)\hat{\gamma}_1^{1T} H\hat{\gamma}_1^{1}, \ \ldots, \ \dot{p}_{\lambda_d}\!\left(\hat{\gamma}_1^{d}\right)\hat{\gamma}_1^{dT} H\hat{\gamma}_1^{d}\right)^{T}.$$
Applying a Taylor expansion to $\dot{p}_{\lambda_j}\!\left(\hat{\gamma}_{1j}\right)$, we have
$$\dot{p}_{\lambda_j}\!\left(\hat{\gamma}_{1j}\right) = \dot{p}_{\lambda_j}\!\left(\gamma_{1j0}\right) + \left[\ddot{p}_{\lambda_j}\!\left(\gamma_{1j0}\right) + o_p(1)\right]\left(\hat{\gamma}_{1j} - \gamma_{1j0}\right).$$
Furthermore, condition (C6) implies that $\ddot{p}_{\lambda_j}\!\left(\gamma_{1j0}\right) = o_p(1)$, and note that $\dot{p}_{\lambda_j}\!\left(\gamma_{1j0}\right) = 0$ as $\lambda_{\max} \to 0$. From Theorems 1 and 2, we have
$$\dot{p}_{\lambda_j}\!\left(\hat{\gamma}_{1j}\right)\hat{\gamma}_{1j}^{T} H\hat{\gamma}_{1j} = o_p\!\left(\left\|\hat{\gamma}_1^{*} - \gamma_{10}^{*}\right\|\right).$$
Therefore, a simple calculation shows that
$$\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - G_i^{*T}\!\left(\hat{\phi}_0^{*}\right)\hat{\gamma}_1^{*}\right)G_i^{*}\!\left(\hat{\phi}^{*}\right) = \frac{1}{n}\sum_{i=1}^{n}\left[\varepsilon_i + R^{T}\!\left(\alpha_0^{*T} U_i^{*}\right)Z_i^{*} - G_i^{*T}\!\left(\phi_0^{*}\right)\left(\hat{\gamma}_1^{*} - \gamma_{10}^{*}\right) - \left(G_i^{*}\!\left(\hat{\phi}_0^{*}\right) - G_i^{*}\!\left(\phi_0^{*}\right)\right)^{T}\hat{\gamma}_1^{*}\right]\left[G_i^{*}\!\left(\phi_0^{*}\right) + G_i^{*}\!\left(\hat{\phi}^{*}\right) - G_i^{*}\!\left(\phi_0^{*}\right)\right].$$
Let
$$\Lambda_n = \frac{1}{n}\sum_{i=1}^{n} G_i^{*}\!\left(\phi_0^{*}\right)\left[\varepsilon_i + R^{T}\!\left(U_i^{*T}\alpha_0^{*}\right)Z_i^{*}\right].$$
Then, from conditions (C8) and (C9), Theorem 1, and $\sup_t\|\delta(t)\| = O(1)$, we obtain
$$\hat{\gamma}_1^{*} - \gamma_{10}^{*} = \left[\Phi_n + o_p(1)\right]^{-1}\left[\Lambda_n - \Psi_n\!\left(\hat{\phi}^{*} - \phi_0^{*}\right)\right].$$
Thus, we have
$$\frac{\partial\ell_n\!\left(\left(\hat{\phi}^{*T}, 0^{T}\right), \left(\hat{\gamma}_1^{*T}, 0^{T}\right)\right)}{\partial\phi_j^{*}} = \frac{\partial Q_n\!\left(\left(\phi_0^{*T}, 0^{T}\right), \left(\gamma_{10}^{*T}, 0^{T}\right)\right)}{\partial\phi_j^{*}} + \sum_{l=1}^{s-1}\left[\frac{\partial^2 Q_n\!\left(\left(\phi_0^{*T}, 0^{T}\right), \left(\gamma_{10}^{*T}, 0^{T}\right)\right)}{\partial\phi_j^{*}\,\partial\phi_l^{*}} + o_p(1)\right]\left(\hat{\phi}_l^{*} - \phi_{0l}^{*}\right)$$
where $Q_n(\phi, \gamma_1)$ is defined in (A4). By the definition of $J_{\phi_0^{*}}$, $\hat{\alpha}^{*} - \alpha_0^{*} = J_{\phi_0^{*}}\left(\hat{\phi}^{*} - \phi_0^{*}\right) + O_p\!\left(n^{-1}\right)$. Since $\sqrt{n}\left(\gamma_{2n} - \gamma_{20}\right) = o_p(1)$, the result follows from Slutsky's lemma and the central limit theorem. This completes the proof of Theorem 3. □

References

1. Cliff, A.D. Spatial Autocorrelation; Technical Report; Pion: London, UK, 1973.
2. Su, L. Semiparametric GMM estimation of spatial autoregressive models. J. Econom. 2012, 167, 543–560.
3. Zhang, Y.; Shen, D. Estimation of semi-parametric varying-coefficient spatial panel data models with random-effects. J. Stat. Plan. Inference 2015, 159, 64–80.
4. Fan, J.; Yao, Q.; Cai, Z. Adaptive varying-coefficient linear models. J. R. Stat. Soc. Ser. B 2003, 65, 57–80.
5. Lu, Z.; Tjøstheim, D.; Yao, Q. Adaptive varying-coefficient linear models for stochastic processes: Asymptotic theory. Stat. Sin. 2007, 17, 177-S35.
6. Xue, L.; Wang, Q. Empirical likelihood for single-index varying-coefficient models. Bernoulli 2012, 18, 836–856.
7. Huang, Z.; Zhang, R. Profile empirical-likelihood inferences for the single-index-coefficient regression model. Stat. Comput. 2013, 23, 455–465.
8. Wang, X.; Jiang, Y.; Huang, M.; Zhang, H. Robust variable selection with exponential squared loss. J. Am. Stat. Assoc. 2013, 108, 632–643.
9. Jiang, Y. Robust estimation in partially linear regression models. J. Appl. Stat. 2015, 42, 2497–2508.
10. Song, Y.; Jian, L.; Lin, L. Robust exponential squared loss-based variable selection for high-dimensional single-index varying-coefficient model. J. Comput. Appl. Math. 2016, 308, 330–345.
11. Wang, K.; Lin, L. Robust structure identification and variable selection in partial linear varying coefficient models. J. Stat. Plan. Inference 2016, 174, 153–168.
12. Yu, Y.; Ruppert, D. Penalized spline estimation for partially linear single-index models. J. Am. Stat. Assoc. 2002, 97, 1042–1054.
13. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
14. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
15. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429.
16. Wang, H.; Li, G.; Jiang, G. Robust regression shrinkage and consistent variable selection through the LAD-lasso. J. Bus. Econ. Stat. 2007, 25, 347–355.
17. Song, Y.; Liang, X.; Zhu, Y.; Lin, L. Robust variable selection with exponential squared loss for the spatial autoregressive model. Comput. Stat. Data Anal. 2021, 155, 107094.
18. Yuille, A.L.; Rangarajan, A. The concave–convex procedure (CCCP). In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 3–8 December 2001; Volume 14.
19. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
20. Forsythe, G.E. Computer Methods for Mathematical Computations; Prentice-Hall: Hoboken, NJ, USA, 1977; Volume 259.
21. Liang, H.; Li, R. Variable selection for partially linear models with measurement errors. J. Am. Stat. Assoc. 2009, 104, 234–248.
22. Pollard, D. Asymptotics for least absolute deviation regression estimators. Econom. Theory 1991, 7, 186–199.
23. Schumaker, L. Spline Functions: Basic Theory; Cambridge Mathematical Library: Cambridge, UK, 1981.
Figure 1. Real surfaces of coefficient functions.
Figure 2. Estimated surfaces of coefficient functions with exponential squared loss.
Figure 3. Comparison of ĝ3 when y has outliers.
Figure 4. Comparison of ĝ3 in the case of a noisy weighting matrix W.
Table 1. Estimation with no regularizer on normal data (q = 5).
Table 1. Estimation with no regularizer on normal data (q = 5).
n = 25, q = 5n = 144, q = 5n = 324, q = 5
E+nullS+nullL+nullE+nullS+nullL+nullE+nullS+nullL+null
ρ 1 = 0.8 , σ 1 = 1
α 1 0.800.610.830.500.660.280.530.620.67
α 2 0.880.610.750.780.770.660.740.790.96
ρ ^ 0.800.940.750.900.800.910.880.890.84
σ ^ 2 0.450.780.820.790.720.700.860.710.80
MedSE0.280.440.700.220.230.420.170.160.19
ρ 1 = 0.5 , σ 1 = 1
α 1 0.690.650.640.490.660.280.530.630.67
α 2 0.820.610.880.780.760.720.740.800.96
ρ ^ 0.520.610.500.680.400.740.620.650.55
σ ^ 2 0.440.820.710.830.710.740.890.730.82
MedSE0.220.450.770.220.230.410.170.160.19
ρ 1 = 0.2 , σ 1 = 1
α 1 0.670.660.570.510.650.320.530.650.65
α 2 0.810.590.960.800.750.720.760.810.98
ρ ^ 0.140.000.270.330.000.500.190.260.16
σ ^ 2 0.440.820.680.870.700.780.910.760.84
MedSE0.230.490.720.210.230.410.170.160.22
ρ 1 = 0 , σ 1 = 1
α 1 0.680.660.610.510.660.320.530.650.65
α 2 0.810.600.950.810.730.700.760.810.98
ρ ^ 0.000.000.220.190.000.360.030.110.04
σ ^ 2 0.440.830.690.880.720.790.910.760.84
MedSE0.220.470.700.210.250.390.160.160.22
ρ 1 = 0.8 , σ 1 = 2
α 1 0.730.771.370.410.790.340.580.630.42
α 2 0.740.270.640.660.650.550.690.680.96
ρ ^ 0.860.980.620.950.830.970.930.940.90
σ ^ 2 1.833.265.783.203.082.683.532.893.25
MedSE0.461.001.950.490.520.900.380.330.48
ρ 1 = 0.5 , σ 1 = 2
α 1 0.720.790.660.430.780.230.590.650.42
α 2 0.740.320.900.680.680.540.720.701.01
ρ ^ 0.580.680.500.770.290.860.700.740.62
σ ^ 2 1.873.493.053.443.033.003.743.083.45
MedSE0.460.981.590.480.480.950.380.320.48
ρ 1 = 0.2 , σ 1 = 2
α 1 0.760.770.530.460.780.190.600.680.46
α 2 0.760.351.060.720.650.650.750.721.01
ρ ^ 0.140.000.390.390.000.610.210.320.23
σ ^ 2 1.883.523.003.682.993.193.883.243.57
MedSE0.450.971.510.460.500.840.360.320.48
ρ 1 = 0 , σ 1 = 2
α 1 0.770.780.570.470.790.230.590.680.47
α 2 0.770.341.070.720.640.550.760.721.04
ρ ^ 0.000.000.310.230.000.520.010.140.07
σ ^ 2 1.883.543.043.743.093.323.903.263.60
MedSE0.450.971.500.460.520.890.350.330.48
Table 2. Estimation with no regularizer on normal data when the dimension is close to the sample size.
Table 2. Estimation with no regularizer on normal data when the dimension is close to the sample size.
n = 25 , q = 20 n = 144 , q = 80 n = 324 , q = 200
E+nullS+nullL+nullE+nullS+nullL+nullE+nullS+nullL+null
ρ 1 = 0.8 , σ 1 = 1
α 1 0.740.390.240.040.49−0.210.780.720.62
α 2 0.670.182.811.050.882.150.860.860.71
ρ ^ 0.840.930.500.540.800.500.800.800.50
σ ^ 2 0.180.533.540.300.261.000.370.411.68
MedSE2.792.207.922.781.724.561.441.662.21
ρ 1 = 0.5 , σ 1 = 1
α 1 0.720.390.100.190.490.060.750.680.56
α 2 0.650.271.760.940.831.680.840.840.65
ρ ^ 0.610.660.500.460.540.500.520.530.50
σ ^ 2 0.170.560.440.230.240.470.360.400.72
MedSE2.872.043.301.571.712.431.401.601.62
ρ 1 = 0.2 , σ 1 = 1
α 1 0.700.370.080.170.500.240.750.670.55
α 2 0.640.401.450.930.791.600.840.850.60
ρ ^ 0.450.040.500.000.310.500.130.190.50
σ ^ 2 0.180.570.220.220.240.500.360.400.73
MedSE3.161.712.401.591.822.501.411.591.64
ρ 1 = 0 , σ 1 = 1
α 1 0.710.370.060.200.500.260.750.680.61
α 2 0.640.381.470.940.801.600.840.850.62
ρ ^ 0.350.000.500.000.160.500.000.040.50
σ ^ 2 0.180.570.210.230.240.520.360.400.77
MedSE3.191.762.421.571.812.621.411.601.84
ρ 1 = 0.8 , σ 1 = 2
α 1 0.580.08−0.47−1.050.29−1.220.780.680.54
α 2 0.45−0.572.661.310.873.600.910.710.72
ρ ^ 0.770.970.500.610.810.500.840.870.50
σ ^ 2 4.182.2712.432.201.064.131.631.636.24
MedSE8.374.419.855.683.599.272.973.244.38
ρ 1 = 0.5 , σ 1 = 2
α 1 0.630.11−0.38−0.430.33−0.450.750.680.57
α 2 1.23−0.232.891.280.782.710.930.710.62
ρ ^ 0.640.600.500.470.590.500.600.610.50
σ ^ 2 2.332.441.881.761.002.031.731.683.08
MedSE5.363.806.834.343.555.022.973.263.35
ρ 1 = 0.2 , σ 1 = 2
α 1 0.650.13−0.81−0.150.36−0.170.750.690.65
α 2 1.14−0.042.370.880.742.480.960.720.51
ρ ^ 0.370.000.500.000.310.500.260.220.50
σ ^ 2 1.892.430.951.951.032.011.911.723.17
MedSE5.123.435.013.423.664.813.073.303.32
ρ 1 = 0 , σ 1 = 2
α 1 0.640.12−0.80−0.100.35−0.140.760.700.64
α 2 0.58−0.182.230.880.762.470.930.720.53
ρ ^ 0.210.000.500.000.150.500.010.010.50
σ ^ 2 0.782.460.972.091.032.071.731.713.24
MedSE5.913.724.903.423.644.902.983.303.40
Table 3. Estimation with no regularizer when the observations of y have outliers.
Table 3. Estimation with no regularizer when the observations of y have outliers.
n = 25 , q = 5 n = 144 , q = 5 n = 324 , q = 5
E+nullS+nullL+nullE+nullS+nullL+nullE+nullS+nullL+null
ρ 1 = 0.5 , σ 1 = 1, δ 1 = 0.01
α 1 0.710.700.590.520.530.540.520.600.42
α 2 0.490.761.200.730.940.860.720.661.14
ρ ^ 0.450.640.500.670.540.510.590.490.55
σ ^ 2 0.800.820.730.870.860.981.021.090.91
MedSE0.460.340.510.300.250.210.180.220.35
ρ 1 = 0.5 , σ 1 = 2 , δ 1 = 0.01
α 1 1.060.690.640.450.400.450.590.610.35
α 2 0.140.861.580.621.280.800.710.541.38
ρ ^ 0.480.690.660.760.590.700.670.520.63
σ ^ 2 3.843.242.823.723.483.984.214.463.79
MedSE1.520.701.070.550.630.560.390.400.67
ρ 1 = 0.5 , σ 1 = 1 , δ 1 = 0.05
α 1 0.881.010.670.570.740.390.420.580.58
α 2 0.430.721.010.560.781.000.650.511.05
ρ ^ 0.600.750.500.750.850.600.670.740.65
σ ^ 2 2.643.724.323.753.024.444.044.643.83
MedSE0.670.591.070.730.450.470.350.480.28
ρ 1 = 0.5 , σ 1 = 2 , δ 1 = 0.05
α 1 1.091.000.740.560.610.610.480.610.41
α 2 0.230.831.580.431.130.730.640.401.42
ρ ^ 0.390.760.500.770.820.500.690.690.64
σ ^ 2 5.026.176.236.135.677.627.087.946.33
MedSE0.980.831.610.870.590.720.460.640.67
Table 4. Estimation with no regularizer with noisy weighting matrix w.
Table 4. Estimation with no regularizer with noisy weighting matrix w.
n = 25 , q = 5 n = 144 , q = 5 n = 324 , q = 5
E+nullS+nullL+nullE+nullS+nullL+nullE+nullS+nullL+null
Remove 30% nonzero weights
α 1 0.590.580.170.430.320.470.550.650.46
α 2 0.590.971.630.820.800.970.730.781.12
ρ ^ 0.610.550.520.700.570.500.650.540.53
σ ^ 2 1.051.130.991.081.081.031.141.091.22
MedSE0.480.451.100.350.330.490.200.250.25
Remove 50% nonzero weights
α 1 0.670.570.160.440.300.310.520.660.54
α 2 0.610.901.480.730.811.150.740.791.10
ρ ^ 0.540.480.500.640.490.410.570.490.49
σ ^ 2 1.071.170.971.111.111.091.181.121.25
MedSE0.410.261.120.340.370.640.190.270.31
Remove 80% nonzero weights
α 1 0.680.630.260.460.330.240.580.700.48
α 2 0.590.941.290.710.821.310.770.781.19
ρ ^ 0.400.340.510.520.340.370.420.330.36
σ ^ 2 1.151.300.961.261.241.141.341.261.40
MedSE0.500.400.890.400.400.780.200.290.33
Table 5. Variable section with regularizer on normal data ( q = 5 ), E: the exponential loss; S: the square loss; L: the LAD loss; l 1 : the lasso penalty; and ˜ 1 : the adaptive lasso penalty.
Table 5. Variable section with regularizer on normal data ( q = 5 ), E: the exponential loss; S: the square loss; L: the LAD loss; l 1 : the lasso penalty; and ˜ 1 : the adaptive lasso penalty.
n = 25 , q = 5 n = 324 , q = 5
E + l 1 E + ˜ 1 S + l 1 S + ˜ 1 L + l 1 L + ˜ 1 E + l 1 E + ˜ 1 S + l 1 S + ˜ 1 L + l 1 L + ˜ 1
ρ 1 = 0.8 , σ 1 = 1
Correct4.005.004.005.000.003.005.005.005.005.005.005.00
Incorrect0.000.000.000.000.000.000.000.000.000.000.001.00
ρ ^ 0.990.860.860.970.730.820.890.880.880.890.890.92
MedSE0.420.370.440.361.430.540.140.140.200.160.210.24
ρ 1 = 0.5 , σ 1 = 1
Correct4.004.003.005.005.004.005.005.005.005.005.005.00
Incorrect0.000.000.000.000.000.000.000.000.000.000.001.00
ρ ^ 0.520.570.580.810.510.460.500.580.580.570.560.68
MedSE0.240.310.450.400.490.430.170.110.150.160.230.22
ρ 1 = 0.2 , σ 1 = 1
Correct4.004.003.005.005.003.005.005.005.005.005.005.00
Incorrect0.000.000.001.000.000.000.000.000.000.000.001.00
ρ ^ 0.260.380.400.610.370.200.330.350.350.310.350.49
MedSE0.240.320.470.420.500.520.140.120.160.160.220.21
ρ 1 = 0 , σ 1 = 1
Correct4.004.003.005.005.003.005.005.005.005.005.005.00
Incorrect0.000.000.001.000.000.000.000.000.000.000.001.00
ρ ^ 0.000.060.090.300.010.000.000.000.000.000.000.11
MedSE0.240.310.460.440.560.550.140.130.170.160.170.21
ρ 1 = 0.8 , σ 1 = 2
Correct4.002.001.003.000.001.005.005.005.004.002.004.00
Incorrect0.000.000.001.000.000.001.000.000.000.000.001.00
ρ ^ 0.880.900.920.990.860.880.940.920.920.940.920.96
MedSE0.530.670.940.652.091.050.320.290.320.360.450.44
ρ 1 = 0.5 , σ 1 = 2
Correct4.002.001.002.003.003.005.005.005.004.002.004.00
Incorrect0.000.000.001.001.000.001.000.000.000.000.001.00
ρ ^ 0.450.640.690.890.510.520.660.630.650.640.620.81
MedSE0.520.670.960.700.990.950.310.290.310.350.460.48
ρ 1 = 0.2 , σ 1 = 2
Correct4.002.001.001.002.003.005.005.005.004.002.003.00
Incorrect0.000.000.001.001.000.001.000.000.000.000.001.00
ρ ^ 0.030.450.510.760.500.500.380.370.390.340.400.57
MedSE0.550.690.970.761.131.020.290.300.330.340.470.48
ρ 1 = 0 , σ 1 = 2
Correct4.002.001.001.002.002.005.005.005.004.003.003.00
Incorrect0.000.000.001.001.000.001.000.000.000.000.001.00
ρ ^ 0.000.100.180.490.030.410.000.000.000.000.000.21
MedSE0.510.680.960.821.091.210.280.310.360.340.380.50
Table 6. Variable section with regularizer on normal data when the dimension is close to the sample size, E: the exponential loss; S: the square loss; L: the LAD loss; l 1 : the lasso penalty; and ˜ 1 : the adaptive lasso penalty.
Table 6. Variable section with regularizer on normal data when the dimension is close to the sample size, E: the exponential loss; S: the square loss; L: the LAD loss; l 1 : the lasso penalty; and ˜ 1 : the adaptive lasso penalty.
n = 25 , q = 20 n = 324 , q = 200
E + l 1 E + ˜ 1 S + l 1 S + ˜ 1 L + l 1 L + ˜ 1 E + l 1 E + ˜ 1 S + l 1 S + ˜ 1 L + l 1 L + ˜ 1
ρ 1 = 0.8 , σ 1 = 1
Correct7.009.005.006.008.0016.00195.00200.00187.00195.00180.00187.00
Incorrect1.001.000.000.001.000.001.000.000.000.000.001.00
ρ ^ 0.810.820.890.880.580.690.840.870.850.880.650.73
MedSE2.891.383.382.061.390.561.230.531.521.361.691.53
ρ 1 = 0.5 , σ 1 = 1
Correct6.0010.005.004.0017.0013.00197.00200.00192.00196.00199.00197.00
Incorrect0.001.000.000.000.000.000.000.000.000.000.001.00
ρ ^ 0.500.250.550.610.510.420.620.550.540.610.500.50
MedSE2.021.413.382.200.640.761.050.521.431.330.881.00
ρ 1 = 0.2 , σ 1 = 1
Correct5.0010.005.005.0013.0014.00197.00200.00191.00193.00199.00198.00
Incorrect1.001.000.000.000.000.000.000.001.000.000.001.00
ρ ^ 0.550.000.400.470.500.230.480.340.400.480.500.50
MedSE2.981.393.472.450.900.741.110.521.411.360.911.02
ρ 1 = 0 , σ 1 = 1
Correct9.0011.005.004.0013.0013.00197.00200.00192.00193.00200.00196.00
Incorrect1.001.000.000.000.000.000.000.001.000.000.001.00
ρ ^ 0.550.000.000.120.380.000.130.000.030.160.500.49
MedSE1.781.243.412.281.040.821.070.521.421.360.921.13
ρ 1 = 0.8 , σ 1 = 2
Correct6.006.005.001.005.0013.00162.00172.00133.00137.00156.00138.00
Incorrect1.000.000.000.001.000.000.001.001.000.000.001.00
ρ ^ 0.810.921.000.980.760.730.940.890.890.940.730.73
MedSE3.333.657.305.302.300.922.281.842.992.712.302.78
ρ 1 = 0.5 , σ 1 = 2
Correct8.006.004.001.009.009.00160.00173.00134.00135.00185.00173.00
Incorrect1.001.000.000.000.000.000.001.001.000.000.001.00
ρ ^ 0.770.530.910.900.600.470.740.630.610.750.500.50
MedSE2.523.397.635.951.871.442.341.812.962.771.641.94
ρ 1 = 0.2 , σ 1 = 2
Correct7.007.004.001.008.009.00162.00167.00129.00130.00181.00173.00
Incorrect1.001.000.000.000.000.000.001.001.000.001.001.00
ρ ^ 0.240.100.680.790.630.150.530.460.460.550.500.50
MedSE2.873.427.625.981.841.472.331.832.972.811.721.99
ρ 1 = 0 , σ 1 = 2
Correct8.006.004.001.009.009.00144.00172.00131.00131.00178.00171.00
Incorrect1.000.000.000.000.000.000.001.001.000.001.001.00
ρ ^ 0.000.000.320.540.500.000.350.000.000.260.500.50
MedSE2.983.607.655.701.951.582.731.832.992.861.872.11
Table 7. Variable selection with regularizer when the observations y have outliers, E: the exponential loss; S: the square loss; L: the LAD loss; l 1 : the lasso penalty; and ˜ 1 : the adaptive lasso penalty.
Table 7. Variable selection with regularizer when the observations y have outliers, E: the exponential loss; S: the square loss; L: the LAD loss; l 1 : the lasso penalty; and ˜ 1 : the adaptive lasso penalty.
n = 25 , q = 5 n = 324 , q = 5
E + l 1 E + ˜ 1 S + l 1 S + ˜ 1 L + l 1 L + ˜ 1 E + l 1 E + ˜ 1 S + l 1 S + ˜ 1 L + l 1 L + ˜ 1
ρ 1 = 0.5 , σ 1 = 1 , δ 1 = 0.01
Correct4.004.004.005.004.003.005.005.005.005.005.005.00
Incorrect0.000.000.001.000.000.000.000.000.000.001.000.00
ρ ^ 0.770.640.630.770.700.790.740.660.660.700.640.61
MedSE0.480.320.480.360.620.530.140.140.170.180.310.30
σ ^ 2
ρ 1 = 0.5 , σ 1 = 2 , δ 1 = 0.01
Correct3.001.002.003.001.001.005.005.005.005.003.003.00
Incorrect0.000.000.000.002.000.000.000.000.000.001.000.00
ρ ^ 0.580.470.520.710.510.760.570.560.580.640.500.50
MedSE0.610.970.930.791.191.170.180.350.350.340.660.63
ρ 1 = 0.5 , σ 1 = 1 , δ 1 = 0.05
Correct1.004.003.003.000.003.003.004.004.004.004.005.00
Incorrect0.001.001.001.001.001.000.001.000.000.000.000.00
ρ ^ 0.780.730.730.900.770.880.830.750.790.790.830.81
MedSE0.690.870.920.691.750.660.380.300.350.320.520.22
ρ 1 = 0.5 , σ 1 = 2 , δ 1 = 0.05
Correct1.003.001.003.000.003.005.004.004.004.004.003.00
Incorrect0.000.001.001.001.000.000.001.000.000.001.000.00
ρ ^ 0.650.420.410.830.510.790.750.590.640.670.630.57
MedSE0.961.281.260.741.930.540.360.460.460.460.750.54
Table 8. Variable selection with regularizer and noisy weighting matrix w, E: the exponential loss; S: the square loss; L: the LAD loss; l 1 : the lasso penalty; and ˜ 1 : the adaptive lasso penalty.
Table 8. Variable selection with regularizer and noisy weighting matrix w, E: the exponential loss; S: the square loss; L: the LAD loss; l 1 : the lasso penalty; and ˜ 1 : the adaptive lasso penalty.
n = 25 , q = 5 n = 324 , q = 5
E + l 1 E + ˜ 1 S + l 1 S + ˜ 1 L + l 1 L + ˜ 1 E + l 1 E + ˜ 1 S + l 1 S + ˜ 1 L + l 1 L + ˜ 1
Remove 30% nonzero weights
Correct4.005.005.005.000.005.005.005.005.005.005.005.00
Incorrect0.000.000.000.001.000.001.000.000.000.000.000.00
ρ ^ 0.540.530.500.640.280.480.570.550.550.500.550.56
MedSE0.500.220.410.320.900.280.160.180.260.160.170.28
Remove 50% nonzero weights
Correct4.005.002.004.002.004.005.005.005.005.005.005.00
Incorrect0.000.000.000.001.000.000.000.000.000.000.000.00
ρ ^ 0.550.360.370.540.160.350.380.410.400.350.420.46
MedSE0.570.430.630.360.860.400.090.220.290.170.120.28
Remove 80% nonzero weights
Correct4.005.005.003.000.005.005.004.004.005.004.004.00
Incorrect0.000.000.000.001.000.000.000.000.000.000.000.00
ρ ^ 0.370.470.430.890.500.720.520.480.460.370.580.49
MedSE0.630.350.500.770.910.420.200.480.500.290.360.41
Table 9. Results of MAISE for the total number of different spatial objects.

        h = 16    h = 18
ĝ1      0.0437    0.0388
ĝ2      0.0542    0.0539
ĝ3      0.0515    0.0496
ŷ       0.0566    0.0548
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
