Article

Sparse Robust Weighted Expectile Screening for Ultra-High-Dimensional Data

Xianjun Wu, Pingping Han and Mingqiu Wang *
1 School of Statistics and Data Science, Qufu Normal University, Qufu 273100, China
2 School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan 430073, China
* Author to whom correspondence should be addressed.
Axioms 2025, 14(5), 340; https://doi.org/10.3390/axioms14050340
Submission received: 2 April 2025 / Revised: 21 April 2025 / Accepted: 25 April 2025 / Published: 28 April 2025
(This article belongs to the Section Mathematical Physics)

Abstract

This paper investigates robust feature screening for ultra-high-dimensional data in the presence of outliers and heterogeneity. Considering the susceptibility of likelihood methods to outliers, we propose a Sparse Robust Weighted Expectile Regression (SRoWER) method that combines the $L_2E$ criterion with expectile regression. By utilizing the IHT algorithm, our method effectively incorporates the correlations among covariates and enables joint feature screening. The proposed approach is robust against heavy-tailed errors and outliers in the data. Simulation studies and a real data analysis are provided to demonstrate the superior performance of the SRoWER method when dealing with outlier-contaminated explanatory variables and/or heavy-tailed error distributions.

1. Introduction

With the exponential growth of data sets in various fields over the past two decades, numerous methods have been proposed to address the issue of coefficient sparsity in high-dimensional statistical models, such as bridge regression [1], the LASSO [2], the SCAD and other folded-concave penalties [3], and the Dantzig selector [4]. While these methods have demonstrated their effectiveness both theoretically and practically, real-world scenarios present new challenges, such as identifying disease-causing genes among millions of other genes or analyzing the key factors driving stock price fluctuations from vast amounts of business data. To tackle ultra-high dimensional data, a range of techniques has emerged. One notable technique is Sure Independence Screening (SIS), initially developed by Fan and Lv [5] for screening out irrelevant factors before conducting variable selection in ultra-high dimensional linear models. There are numerous further developments based on SIS [6,7,8,9]. Despite their computational efficiency, however, these methods overlook the correlations among covariates. Consequently, additional procedures have been proposed to address this limitation, including ISIS [6], FR [10], and SMLE [11].
The aforementioned approaches, which are all based on the maximum likelihood function or Pearson's correlation, become invalid in the presence of outliers. Therefore, robust methods have been extensively studied in the literature. Although quantile regression [12] is effective in handling heterogeneous data, its significantly higher computational cost compared with least squares motivates the investigation of asymmetric least squares (ALS) regression (i.e., expectile regression [13,14,15,16]). ALS regression provides a more comprehensive interpretation of the conditional distribution than the ordinary least squares (OLS) method by allocating different squared error losses to positive and negative residuals. Moreover, its smooth differentiability greatly reduces computational costs and facilitates theoretical research. Building upon ALS and quantile regression, numerous methods have been proposed to address heterogeneous data in high dimensions, such as [17,18] for variable selection and [19,20,21,22,23,24] for feature screening. The study of [25] proposed an expectile partial correlation screening (EPCS) procedure to sequentially identify important variables for expectile regressions in ultra-high dimensions, and proved that this procedure leads to a sure screening set. Another robust parametric technique, DPD-SIS [26,27], has been developed for ultra-high-dimensional linear regression models and generalized linear models. This approach is based on the robust minimum density power divergence estimator [28], but it is still limited to marginal screening and does not account for the correlations between features. In addition, the DPD-SIS cannot handle heterogeneity, which is often a feature of ultra-high-dimensional data.
In the context of heterogeneity and outliers in the data, we propose a new method called Robust Weighted Expectile Regression (RoWER), which combines the $L_2E$ criterion with expectile regression to achieve robustness and address heterogeneity. Furthermore, we develop a sparsity-restricted RoWER (SRoWER) approach for feature screening. Under general assumptions, we show that the SRoWER enjoys the sure screening property. Numerical studies validate the robustness and efficacy of the SRoWER. Our SRoWER method has three advantages: (1) it provides more reliable screening results, particularly in the presence of outliers in both the covariates and the response; (2) in the case of heteroscedasticity, it yields superior performance in estimation and feature screening, as demonstrated in the simulation studies; (3) it can be efficiently solved by an iterative hard-thresholding-based algorithm.
The remaining sections of this article are organized as follows. Section 2 introduces the model and the RoWER method. In Section 3, we present the SRoWER method for feature screening and establish its sure independent screening property. Section 4 presents simulation studies and a real data analysis that evaluate the finite sample performance of the SRoWER method. Concluding remarks are provided in Section 5. The proofs of the main results can be found in Appendix A.

2. Model and Method

2.1. $L_2E$ Criterion for the Asymmetric Normal Distribution

To address the problem that likelihood methods are sensitive to outliers, Scott [29] proposed the $L_2E$ method, whose objective function is
$$\int_{-\infty}^{+\infty}f(v\mid\theta)^2\,dv-\frac{2}{n}\sum_{i=1}^{n}f(v_i\mid\theta), \tag{1}$$
where $f(v\mid\theta)$ is a parametric probability density function (pdf) of a random variable $V$ and $v_i$, $i=1,\ldots,n$, is a given random sample.
Here, we assume that $V$ follows the asymmetric normal distribution, i.e., $V\sim AN(v;\mu_\tau,\sigma^2,\tau)$. The corresponding pdf is
$$f(v;\mu_\tau,\sigma^2,\tau)=\frac{2}{\sqrt{\pi\sigma^2}}\,\frac{\sqrt{\tau(1-\tau)}}{\sqrt{\tau}+\sqrt{1-\tau}}\exp\left\{-\rho_\tau\!\left(\frac{v-\mu_\tau}{\sigma}\right)\right\},$$
where $\rho_\tau(t)=\omega_\tau(t)t^2$ is the asymmetric squared error loss [13] and $\omega_\tau(t)=|\tau-I(t\le 0)|$, with $I(\cdot)$ being the indicator function. Moreover, $\mu_\tau$, $\sigma$ and $\tau$ are the location, scale and asymmetry parameters, respectively. The following proposition gives the $L_2E$ criterion of the asymmetric normal distribution.
Proposition 1. 
Suppose $V\sim AN(v;\mu_\tau,\sigma^2,\tau)$; then, the $L_2E$ criterion is
$$\frac{1}{\sqrt{\pi\sigma^2}}\,\frac{\sqrt{2\tau(1-\tau)}}{\sqrt{\tau}+\sqrt{1-\tau}}\left[1-2\sqrt{2}\cdot\frac{1}{n}\sum_{i=1}^{n}\exp\left\{-\rho_\tau\!\left(\frac{v_i-\mu_\tau}{\sigma}\right)\right\}\right].$$
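For illustration, the following Python sketch encodes the asymmetric squared-error loss and the asymmetric normal density above, and checks the closed-form value of $\int f^2$ used in Proposition 1 by numerical integration. The function names and the particular parameter values are illustrative assumptions only, not part of the methodology.

```python
import numpy as np
from scipy.integrate import quad

def rho_tau(t, tau):
    """Asymmetric squared-error loss: rho_tau(t) = |tau - I(t <= 0)| * t^2."""
    return np.abs(tau - (t <= 0)) * t**2

def an_pdf(v, mu, sigma2, tau):
    """Density of the asymmetric normal distribution AN(mu, sigma2, tau)."""
    sigma = np.sqrt(sigma2)
    const = 2.0 / np.sqrt(np.pi * sigma2) * np.sqrt(tau * (1 - tau)) / (np.sqrt(tau) + np.sqrt(1 - tau))
    return const * np.exp(-rho_tau((v - mu) / sigma, tau))

tau, mu, sigma2 = 0.05, 1.0, 2.0
total, _ = quad(lambda v: an_pdf(v, mu, sigma2, tau), -np.inf, np.inf)    # should be 1
sq, _ = quad(lambda v: an_pdf(v, mu, sigma2, tau) ** 2, -np.inf, np.inf)  # integral of f^2
closed_form = np.sqrt(2 * tau * (1 - tau)) / (np.sqrt(np.pi * sigma2) * (np.sqrt(tau) + np.sqrt(1 - tau)))
print(total, sq, closed_form)   # sq and closed_form agree
```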

2.2. RoWER

We consider the $\tau$-mean [13] of a random variable $Z\in\mathbb{R}$,
$$E_\tau(Z)=\arg\min_{a\in\mathbb{R}}E\{\rho_\tau(Z-a)\},\quad\tau\in(0,1).$$
In fact, the $\tau$-mean corresponds to Efron's $w$-mean [30], where $w=\tau/(1-\tau)$. In economics, the $\tau$-mean is also called the $\tau$-expectile. Let $y=(y_1,\ldots,y_n)^T$ be the $n$-dimensional response vector and $X=(x_1,\ldots,x_n)^T$ be the $n\times p$ design matrix with $x_i=(x_{i1},\ldots,x_{ip})^T$, $i=1,\ldots,n$. The ALS regression solves
$$\arg\min_{\beta\in\mathbb{R}^p}\sum_{i=1}^{n}\rho_\tau(y_i-x_i^T\beta),$$
which degenerates to the OLS regression when $\tau=0.5$.
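The smoothness of $\rho_\tau$ makes the ALS problem easy to solve numerically, for instance by iteratively reweighted least squares. The short sketch below is one such illustrative solver; the function name, starting value and stopping rule are our assumptions rather than a prescribed algorithm.

```python
import numpy as np

def expectile_regression(X, y, tau, n_iter=100, tol=1e-8):
    """Solve argmin_beta sum_i rho_tau(y_i - x_i' beta) by iteratively
    reweighted least squares: weights |tau - I(residual <= 0)| are
    recomputed from the current residuals at each step."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting value
    for _ in range(n_iter):
        w = np.abs(tau - (y - X @ beta <= 0))            # asymmetric weights
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted normal equations
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta

# toy usage: tau = 0.5 gives constant weights and reproduces the OLS fit
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)); y = X @ np.array([3.0, 1.5, 1.0]) + rng.normal(size=200)
print(expectile_regression(X, y, tau=0.5))
```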
Consider the following linear model
$$y=X\beta^{\tau}+\varepsilon^{\tau}, \tag{2}$$
where $\beta^{\tau}=(\beta_1(\tau),\ldots,\beta_p(\tau))^T$ is a $p$-dimensional parameter vector and $\varepsilon^{\tau}=(\varepsilon_{1\tau},\ldots,\varepsilon_{n\tau})^T$ is a vector of $n$ independent errors satisfying $E_\tau(\varepsilon_{i\tau}\mid x_i)=0$, $i=1,\ldots,n$, for some $\tau\in(0,1)$. We adopt the sparsity assumption on $\beta^{\tau}$, that is, the regression coefficient vector $\beta^{\tau}$ has many zero components. In model (2), it is crucial to understand that varying $\tau$ allows the coefficient vector $\beta^{\tau}$ to vary, so that different locations of the conditional distribution can be modeled. For convenience, the superscript $\tau$ of $\beta^{\tau}$ and $\varepsilon^{\tau}$ is omitted in the following when no confusion arises.
By substituting $\mu_\tau=x_i^T\beta$ and $\sigma=1$ into (1) and disregarding the terms that are independent of $\beta$, we obtain the following loss function:
$$L_n(\beta)=-\frac{1}{n}\sum_{i=1}^{n}\exp\{-\rho_\tau(y_i-x_i^T\beta)\}. \tag{3}$$
However, (3) may not be strictly convex, so we propose a new loss function in the following Proposition 2 by Taylor’s expansion and logarithmic transformation.
Proposition 2. 
Given a consistent estimator $\tilde\beta$ of $\beta$, minimizing (3) is transformed into minimizing the following loss
$$D_n(\beta)=\sum_{i=1}^{n}\pi_i(\tilde\beta)\rho_\tau(y_i-x_i^T\beta), \tag{4}$$
where
$$\pi_i(\tilde\beta)=\frac{\exp\{-\rho_\tau(y_i-x_i^T\tilde\beta)\}}{\sum_{l=1}^{n}\exp\{-\rho_\tau(y_l-x_l^T\tilde\beta)\}},\quad i=1,\ldots,n,$$
which is abbreviated as $\pi_i$.
Here, the $\pi_i$'s can be treated as the weights of the asymmetric least squares loss, and the loss (4) is referred to as the RoWER. When $\tau=0.5$, the RoWER degenerates to a weighted least squares regression. This paper chooses the consistent estimator $\tilde\beta$ to be $\beta^L$ defined in Lemma A5. We assume that the $\pi_i$'s are bounded below by a positive constant.
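The weights and the RoWER loss are straightforward to compute once a pilot estimate $\tilde\beta$ is available; the sketch below is illustrative (the shift by the minimum residual loss is only a numerical-stability device and cancels in the ratio defining $\pi_i$; the function names are ours).

```python
import numpy as np

def rho_tau(t, tau):
    """Asymmetric squared-error loss |tau - I(t <= 0)| t^2."""
    return np.abs(tau - (t <= 0)) * t**2

def rower_weights(X, y, beta_tilde, tau):
    """pi_i(beta_tilde) of (4): observations with large asymmetric residuals at
    the pilot estimate receive exponentially small weight, downweighting outliers."""
    r = rho_tau(y - X @ beta_tilde, tau)
    w = np.exp(-(r - r.min()))          # shift by the minimum for numerical stability
    return w / w.sum()

def rower_loss(beta, X, y, pi, tau):
    """D_n(beta) = sum_i pi_i * rho_tau(y_i - x_i' beta)."""
    return np.sum(pi * rho_tau(y - X @ beta, tau))
```

Any consistent pilot estimate, such as a LASSO-type fit, can be plugged in as beta_tilde.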

3. The SRoWER and Sure Screening Property

Let $M$ be any subset of $\{1,\ldots,p\}$, which corresponds to a submodel with regression coefficient vector $\beta_M=(\beta_j,\ j\in M)^T$ and design matrix $x_M=(x_{1M},\ldots,x_{nM})^T$, where $x_{iM}=(x_{ij},\ j\in M)^T$. In addition, let $\|\cdot\|_2$ be the $L_2$-norm and $\|\cdot\|_0$ be the $L_0$-norm, which counts the number of non-zero components of a vector. The size of model $M$ is denoted by $s(M)$. The true model is represented by $M_*=\{j:\beta_j^*\neq 0\}$, with $\beta^*$ being the true regression coefficient vector and $s(M_*)=\|\beta^*\|_0=k_0$.

3.1. The IHT Algorithm

For the objective function $D_n(\beta)$, assuming that $\beta$ is sparse with $s(M_*)=k_0\le k$ for some known $k$, the RoWER method with sparsity restriction (SRoWER) yields an estimator of $\beta$ defined as
$$\hat\beta_{[k]}=\arg\min_{\beta}D_n(\beta)\quad\text{subject to}\quad\|\beta\|_0\le k, \tag{5}$$
and $\hat M=\{j:\hat\beta_{[k],j}\neq 0\}$ stands for the set of indices of the non-zero components of $\hat\beta_{[k]}$.
For feature screening, the goal is to retain a relatively small number of features out of the $p$ candidates. Many algorithms have been proposed for problems of this type; for example, Mallat and Zhang [31] proposed the matching pursuit algorithm, and the hard thresholding method proposed by Blumensath and Davies [32] is particularly effective for linear models. We now follow the idea of the iterative hard thresholding (IHT) algorithm to compute the SRoWER estimate. For $\gamma$ within a neighborhood of a given $\beta$, the IHT uses the following approximation of $D_n(\cdot)$:
$$Q_n(\gamma;\beta)=D_n(\beta)+(\gamma-\beta)^TT_n(\beta)+(u/2)\|\gamma-\beta\|_2^2, \tag{6}$$
where
$$T_n(\beta)=\frac{\partial D_n(\beta)}{\partial\beta}=-\sum_{i=1}^{n}2\pi_i\,\omega_{\tau,i}(\beta)(y_i-x_i^T\beta)x_i,$$
$\omega_{\tau,i}(\beta)=|\tau-I(y_i\le x_i^T\beta)|$, and $u>0$ is a scale parameter. Denote $T_n(\beta)=(T_{nj}(\beta),\ j=1,\ldots,p)^T$.
By (6), an approximate solution of (5) can be obtained by the following iterative procedure:
$$\beta^{(t+1)}=\arg\min_{\gamma}Q_n(\gamma;\beta^{(t)})\quad\text{subject to}\quad\|\gamma\|_0\le k. \tag{7}$$
The optimization in (7) is equivalent to
$$\beta^{(t+1)}=\arg\min_{\gamma}\left\|\gamma-u^{-1}\{u\beta^{(t)}-T_n(\beta^{(t)})\}\right\|_2^2\quad\text{subject to}\quad\|\gamma\|_0\le k. \tag{8}$$
Without the constraint $\|\gamma\|_0\le k$, the analytic solution of (8) is $\tilde\gamma=\beta^{(t)}-u^{-1}T_n(\beta^{(t)})$. Under the sparsity restriction, however, $\beta^{(t+1)}$ is obtained by keeping the $k$ components of $\tilde\gamma$ that are largest in absolute value and setting the others to zero, i.e.,
$$V(\tilde\gamma;k)=(V(\tilde\gamma_1;r),\ldots,V(\tilde\gamma_p;r))^T,$$
where $r$ is the $k$-th largest component of $(|\tilde\gamma_1|,\ldots,|\tilde\gamma_p|)^T$ and $V(\gamma;r)=\gamma I(|\gamma|\ge r)$ is a hard thresholding function. Given the sparse solution $\beta^{(t)}$ obtained at the $t$-th iteration, iterating (8) is equivalent to iterating
$$\beta^{(t+1)}=V\big(\beta^{(t)}-u^{-1}T_n(\beta^{(t)});k\big).$$
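A compact implementation of this iteration takes the form below. This is an illustrative sketch: the gradient expression follows $T_n(\beta)$ above, while the function names, default iteration cap, and the change-based stopping rule (matching the tolerance used later in Section 4.1) are our own choices.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k components of v that are largest in absolute value, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def srower_iht(X, y, pi, tau, k, u, beta0, max_iter=200, tol=1e-3):
    """Iterative hard thresholding for the sparsity-restricted RoWER:
    beta <- V(beta - u^{-1} T_n(beta); k), with T_n the gradient of D_n."""
    beta = beta0.copy()
    for _ in range(max_iter):
        resid = y - X @ beta
        w = np.abs(tau - (resid <= 0))               # omega_{tau,i}(beta)
        grad = -2.0 * X.T @ (pi * w * resid)         # T_n(beta)
        beta_new = hard_threshold(beta - grad / u, k)
        if np.linalg.norm(beta_new - beta) <= tol:
            return beta_new
        beta = beta_new
    return beta
```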
The ultra-high dimensional case is often faced with a huge amount of computational work, including matrix operations. However, the use of the thresholding function alleviates this issue. Moreover, the update naturally incorporates information on the correlations between predictors. Theorem 1 shows that the value of $D_n(\cdot)$ decreases as the number of iterations increases.
Theorem 1. 
Let $\{\beta^{(t)}\}$ be the sequence obtained by (7) and $\lambda_{\max}$ be the maximum eigenvalue of $X^TX$. If $u/2-\bar c\,\lambda_{\max}\ge 0$ with $\bar c=\max(\tau,1-\tau)$, then the value of $D_n(\cdot)$ decreases as the number of iterations increases, i.e., $D_n(\beta^{(t+1)})\le D_n(\beta^{(t)})$.

3.2. Sure Screening Property

This subsection establishes the sure screening property of the feature screening procedure based on the SRoWER method. Define
$$\mathbb{M}_+^k=\{M: M_*\subseteq M,\ s(M)\le k\},$$
$$\mathbb{M}_-^k=\{M: M_*\not\subseteq M,\ s(M)\le k\}$$
as the collections of over-fitted models and under-fitted models, respectively. When $p$, $k_0$, $k$ and $\beta^*$ vary with the sample size $n$, we provide the asymptotic property of $\hat\beta_{[k]}$. Additionally, we make the following assumptions, some of which are purely technical and only help us understand the SRoWER method theoretically.
(A1)
$\log p=O(n^{\alpha})$ for some $0\le\alpha<1$.
(A2)
There exist $w_1,w_2>0$ and some non-negative constants $\eta_1,\eta_2$, such that
$$\min_{j\in M_*}|\beta_j^*|\ge w_1n^{-\eta_1}$$
and
$$k_0\le k\le w_2n^{\eta_2}.$$
(A3)
There exists a constant $\eta_3>0$ such that $\max_{1\le i\le n}\max_{1\le j\le p}|x_{ij}|\le\eta_3$.
(A4)
Suppose that the random errors $\varepsilon_i$ are i.i.d. sub-Gaussian random variables satisfying $E_\tau(\varepsilon_i\mid x_i)=0$, $i=1,\ldots,n$.
(A5)
Let $\delta=\beta-\beta^*$ and
$$\tfrac{1}{2}R_n=D_n(\beta)-D_n(\beta^*)-\delta^TT_n(\beta^*).$$
There exists a constant $v>0$ such that, for sufficiently large $n$,
$$R_n\ge vn\|\delta_{M_*}\|_2^2$$
for $\|\delta_{M_*^c}\|_1\le 3\|\delta_{M_*}\|_1$, with $M_*^c$ being the complement of $M_*$.
Condition (A1) allows $p$ to diverge exponentially with $n$, which is a common setting in the ultra-high dimensional literature. The two requirements in Condition (A2) are crucial for establishing the sure screening property. The former implies that the signals of the true model are stronger than the random errors, so they are detectable. The latter implies that the sparsity of $\beta$ makes sure screening possible with $s(\hat M)=k\ge k_0$. Condition (A3) is a regular condition for the theoretical derivation. Condition (A4) is the same as the assumption of [17]. Condition (A5) is similar to that of [11].
Theorem 2. 
Suppose that Conditions (A1)–(A5) are satisfied with $2\eta_1+\eta_2<1-\alpha$. Let $\hat M$ be the estimated model obtained by the SRoWER with size $k$; then, we have
$$P\{M_*\subseteq\hat M\}\to 1,\quad\text{as } n\to\infty.$$
By using feature screening, important features that are highly associated with the response variable can be kept in $\hat M$. However, it should be noted that there is no explicit choice of $k$, because it depends on the dimension. Note that the IHT algorithm needs an initial estimate $\beta^{(0)}$. To further enhance computational efficiency, the LASSO estimate is chosen as the initial value of the iterations. The following theorem shows that, with the initial value $\beta^{(0)}$ obtained by the LASSO, the IHT-implemented SRoWER satisfies the sure independent screening property within a finite number of iterations.
Theorem 3. 
Let $\beta^{(t)}$ be the $t$-th update of the IHT procedure, let the scale parameter satisfy $u\ge\xi nk$ for some $\xi>0$, and let $M^{(t)}=\{j:\beta_j^{(t)}\neq 0\}$ be the set of screened features. The initial value of the iteration is
$$\beta^{(0)}=\arg\min_{\beta}\{D_n(\beta)+n\lambda\|\beta\|_1\},$$
where $\lambda$ satisfies $\lambda\ge n^{-(1-\alpha)/2}$ and $\lambda=o(n^{-(\eta_1+\eta_2)})$. Then, under Conditions (A1)–(A5), for any finite $t\ge 1$, we have
$$P\{M_*\subseteq M^{(t)}\}\to 1,\quad\text{as } n\to\infty.$$
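The initializer in Theorem 3 is an $\ell_1$-penalized RoWER fit. One simple way to compute it is a proximal-gradient (soft-thresholding) analogue of the IHT step, sketched below for illustration; the names, step size and tolerance are assumptions, with $u$ taken at least as large as $2\bar c\lambda_{\max}$ in the spirit of Theorem 1.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rower_lasso(X, y, pi, tau, lam, u, max_iter=500, tol=1e-6):
    """Proximal-gradient sketch for beta^(0) = argmin_beta D_n(beta) + n*lam*||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(max_iter):
        resid = y - X @ beta
        grad = -2.0 * X.T @ (pi * np.abs(tau - (resid <= 0)) * resid)   # T_n(beta)
        beta_new = soft_threshold(beta - grad / u, n * lam / u)
        if np.linalg.norm(beta_new - beta) <= tol:
            return beta_new
        beta = beta_new
    return beta
```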

3.3. The Choice of k

For the SRoWER method, we need a prespecified $k$, such as $\lfloor\log(n)n^{1/3}\rfloor$ [4,6,11]. Here, we treat $k$ as a tuning parameter that controls model complexity, and we determine $k$ by minimizing the following EBIC score:
$$EBIC(k)=\log\left\{\frac{1}{n}\sum_{i=1}^{n}\pi_i(\tilde\beta)\rho_\tau(y_i-x_{iM}^T\beta_M)\right\}+k\,\frac{\log(n)\log(p)}{2n},$$
where $k=s(M)$. The EBIC was proposed by [33] for model selection over large model spaces. Here, we use it to determine $k$ so that the SRoWER can be compared with the EPCS proposed by [25], which also uses the EBIC for model selection.
Note that the EBIC selector for determining $k$ in principle requires searching over $k=1,2,\ldots,p$. To balance computation and model selection accuracy in practice, we minimize $EBIC(k)$ over $k=1,\ldots,\lfloor\log(n)n^{1/3}\rfloor$.
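Schematically, the tuning of $k$ can be carried out as in the following sketch, where fit(k) stands for any routine returning a $k$-sparse SRoWER estimate (for example, the IHT sketch in Section 3.1). The helper names and the literal form of the penalty follow our reading of the display above.

```python
import numpy as np

def rho_tau(t, tau):
    """Asymmetric squared-error loss."""
    return np.abs(tau - (t <= 0)) * t**2

def select_k_by_ebic(X, y, pi, tau, fit, k_max):
    """Scan k = 1, ..., k_max and pick the k minimizing
    EBIC(k) = log{(1/n) sum_i pi_i rho_tau(y_i - x_i' beta_k)} + k log(n) log(p) / (2n),
    where beta_k = fit(k) is a k-sparse fit."""
    n, p = X.shape
    best_k, best_score, best_beta = None, np.inf, None
    for k in range(1, k_max + 1):
        beta_k = fit(k)
        score = np.log(np.sum(pi * rho_tau(y - X @ beta_k, tau)) / n) \
                + k * np.log(n) * np.log(p) / (2 * n)
        if score < best_score:
            best_k, best_score, best_beta = k, score, beta_k
    return best_k, best_beta

# In practice, k_max is taken as floor(log(n) * n**(1/3)).
```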

4. Numerical Studies

4.1. Simulation Studies

In this subsection, the finite sample performance of the SRoWER is evaluated using simulation studies and compared with the EPCS [25] and the SMLE [11] based on expectile regression (i.e., the SRoWER with $\pi_i=1$, $i=1,\ldots,n$). The IHT algorithm is used to carry out feature screening based on the SRoWER, and the iteration is stopped when $\|\beta^{(t)}-\beta^{(t-1)}\|_2\le 10^{-3}$.
We take $p=500$, $n=100,200$, and expectile levels $\tau=0.5,0.05$, which correspond to mean regression and an extreme expectile regression, respectively. All simulation results are based on 200 replications (with standard deviations in parentheses). To evaluate the performance of the screening approaches, we use three criteria: the number of true positive variables (TP), the percentage of correctly fitted models (CF), and the root mean-squared error (RMSE) $\|\hat\beta_{[k]}-\beta^*\|_2$.
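For concreteness, the three criteria can be computed per replication as in the sketch below (an illustrative helper; "correctly fitted" is read here as the selected support coinciding exactly with the true support).

```python
import numpy as np

def screening_metrics(beta_hat, beta_true):
    """TP: number of truly active variables retained; CF: 1 if the selected
    support equals the true support, else 0; RMSE: ||beta_hat - beta*||_2."""
    selected = set(np.flatnonzero(beta_hat))
    truth = set(np.flatnonzero(beta_true))
    tp = len(selected & truth)
    cf = int(selected == truth)
    rmse = float(np.linalg.norm(beta_hat - beta_true))
    return tp, cf, rmse
```

Averaged over the 200 replications, these quantities correspond to the TP, CF and RMSE columns of Tables 1-6.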
Example 1. 
Consider the linear model
$$y_i=3x_{i6}+1.5x_{i12}+x_{i15}+x_{i20}+[\varepsilon_i-E_\tau(\varepsilon_i)],\quad i=1,\ldots,n,$$
where the candidate features $x_i$ are i.i.d. generated from the multivariate normal distribution $N_p(0,\Sigma)$ with $\Sigma=(\sigma_{jk})$ and $\sigma_{jk}=\rho^{|j-k|}$. We set $\rho=0$ and $0.5$. The true model is $M_*=\{6,12,15,20\}$. $\varepsilon_i$ is generated from the standard Gumbel distribution (Gumbel), the standard normal distribution (Normal), and the $t$ distribution with three degrees of freedom (T), respectively.
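A data-generating sketch for Example 1 is given below for illustration (the helper names are ours; the Gumbel and t cases are handled analogously to the normal case, and the centring term $E_\tau(\varepsilon_i)$ is approximated by a Monte Carlo expectile).

```python
import numpy as np

def expectile_1d(z, tau, n_iter=100):
    """tau-expectile of a sample via the fixed-point iteration
    m <- sum(w*z)/sum(w), with w = |tau - I(z <= m)|."""
    m = z.mean()
    for _ in range(n_iter):
        w = np.abs(tau - (z <= m))
        m = np.sum(w * z) / np.sum(w)
    return m

def generate_example1(n, p, rho, tau, error="normal", seed=0):
    """AR(1)-correlated normal covariates, true model {6, 12, 15, 20} (1-based),
    and noise centred at its tau-expectile so that E_tau(eps) = 0."""
    rng = np.random.default_rng(seed)
    Sigma = float(rho) ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[[5, 11, 14, 19]] = [3.0, 1.5, 1.0, 1.0]      # 0-based positions of {6, 12, 15, 20}
    if error == "normal":
        draw = lambda size: rng.normal(size=size)
    elif error == "gumbel":
        draw = lambda size: rng.gumbel(size=size)
    else:                                             # t distribution with 3 degrees of freedom
        draw = lambda size: rng.standard_t(df=3, size=size)
    eps = draw(n)
    shift = expectile_1d(draw(200_000), tau)          # Monte Carlo estimate of E_tau(eps)
    y = X @ beta + (eps - shift)
    return X, y, beta
```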
Example 1 considers a relatively simple case. The simulation results are given in Table 1 and Table 2. For $\tau=0.5$, all three considered methods screen in almost all important features under the three error distributions. No single method dominates the other two in all cases, but the SRoWER performs better than the SMLE and the EPCS in most instances. At the extreme expectile ($\tau=0.05$), however, the SRoWER performs much better than the SMLE and the EPCS in terms of RMSE, except for the Gumbel case with $n=200$. In addition, all results improve when the sample size increases from 100 to 200.
Example 2. 
For the linear model in Example 1, to examine the robustness of the SRoWER, we consider the case where there are outliers in the covariates. We first generate data as in Example 1. Next, for 10% of the observations, we artificially add outliers generated from $N(0,5^2)$ to 50 randomly chosen covariates. The other settings are the same as those in Example 1.
Example 2 considers the case where both the covariates and the response have outliers. The simulation results are shown in Table 3 and Table 4. We can see that the SRoWER has the smallest RMSE compared with the SMLE and the EPCS. The three considered methods have similar performance in variable selection, except for the case of T, where both the SRoWER and the SMLE perform better than the EPCS in terms of CF.
Example 3. 
Here we consider a heterogeneous model. We first generate $(z_1,\ldots,z_p)^T$ from the multivariate normal distribution $N_p(0,\Sigma)$ with $\Sigma=(\sigma_{jk})$ and $\sigma_{jk}=\rho^{|j-k|}$. We set $\rho=0$ and $0.5$. Let $x_1=\Phi(z_1)$ and $x_j=z_j$, $j=2,3,\ldots,p$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. The response is then simulated from the following normal linear heteroscedastic model
$$y_i=3x_{i6}+1.5x_{i12}+x_{i15}+x_{i20}+(0.7x_{i1})[\varepsilon_i-E_\tau(\varepsilon_i)],$$
where $\varepsilon_i\sim N(0,1)$. The other settings are the same as those in Example 2.
From Table 5 and Table 6, we can see that the conclusions are similar to those of Examples 1 and 2. Hence, the SRoWER performs well even when outliers are present in the heterogeneous model.

4.2. Real Data Example

This subsection applies the SRoWER method for feature screening to the Mid-Atlantic wage data with 3000 observations and 8 predictors from [34], which are available in the 'ISLR' package in R. A total of eight predictors (two continuous and six categorical) are considered. The continuous variables are the year in which the wage information was recorded (year) and the age of the worker (age). The categorical factors are as follows: marital status (marital), with levels 1. Never Married, 2. Married, 3. Widowed, 4. Divorced, and 5. Separated; race (race), with levels 1. White, 2. Black, 3. Asian, and 4. Other; education (education), with levels 1. <HS Grad, 2. HS Grad, 3. Some College, 4. College Grad, and 5. Advanced Degree; type of job (jobclass), with levels 1. Industrial and 2. Information; health level of the worker (health), with levels 1. ≤Good and 2. ≥Very Good; and whether the worker has health insurance (health_ins), with levels 1. Yes and 2. No. We use dummy variables to represent the six categorical variables. Therefore, there are 16 covariates, and the response is the logarithm of wage. Following the setup of [35], to demonstrate the application in high dimensions, we extend the data by introducing the following artificial covariates:
$$X_j=\frac{Z_j+2W}{3},\quad j=17,\ldots,500,$$
where the $Z_j$ are standard normal random variables and $W$ follows the standard uniform distribution.
To assess the prediction performance of the SRoWER, EPCS, and SMLE, we randomly generated 100 partitions of the full data, dividing the data into two parts, where $n_{tr}=1500$ samples are treated as training data and the remaining $n_{te}=1500$ samples are treated as test data. We report the averages of the model size (Size), the number of selected noise variables (SNV), and the expectile prediction error (EPE) at $\tau=0.5$ and $0.05$, where the EPE is computed on the test data as
$$EPE_\tau=\frac{1}{n_{te}}\sum_{i=1}^{n_{te}}\pi_i(\tilde\beta)\rho_\tau(y_i-\hat y_i),$$
where $\hat y_i=x_i^T\hat\beta_\tau$ is the $\tau$-expectile estimate of the $i$-th observation, with $\hat\beta_\tau$ calculated from the training data set, and $y_i$ is the $i$-th observation in the test set.
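Schematically, the evaluation proceeds as in the sketch below: the design is extended with the artificial covariates (following our reading of the displayed formula), the data are split in half, and the weighted expectile prediction error is computed on the test half. All names are illustrative, and the test-set weights are assumed to be the RoWER weights evaluated at a pilot estimate.

```python
import numpy as np

def extend_design(X0, p_total=500, seed=0):
    """Append artificial covariates X_j = (Z_j + 2*W)/3 for j > ncol(X0),
    with Z_j standard normal and a single shared uniform W per observation."""
    rng = np.random.default_rng(seed)
    n, p0 = X0.shape
    W = rng.uniform(size=(n, 1))
    Z = rng.normal(size=(n, p_total - p0))
    return np.hstack([X0, (Z + 2 * W) / 3])

def expectile_prediction_error(y_test, y_pred, pi_test, tau):
    """EPE_tau = (1/n_te) * sum_i pi_i * rho_tau(y_i - yhat_i)."""
    r = y_test - y_pred
    return np.mean(pi_test * np.abs(tau - (r <= 0)) * r**2)
```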
The results are reported in Table 7. For $\tau=0.5$, the SRoWER and the SMLE perform similarly in terms of EPE and SNV, while the SRoWER includes about one more variable than the SMLE. Although the model sizes of the SRoWER and the EPCS are similar, the EPE of the EPCS is the largest among the three methods. For $\tau=0.05$, the SRoWER performs best, while the EPCS performs worst. The selected model sizes vary with $\tau$, which indicates heteroscedasticity of the model. This conclusion agrees with the results of [36].

5. Conclusions

To deal with heterogeneity and outliers in the covariates and/or the response, this paper proposes the RoWER method, which is further applied to screen features in ultra-high dimensional data. We have also proposed an iterative hard-thresholding algorithm for implementing the feature screening procedure and established the sure screening property for the SRoWER method. Simulation studies and a real data analysis verify that the SRoWER method not only reduces the huge computational effort required for ultra-high dimensional data, but also shows excellent robustness for heterogeneous data. Compared with ISIS [6], the SRoWER naturally accounts for the joint effects between features and inherits the computational efficiency of the SMLE. Based on the proposed method, robust feature screening for classification data presents a promising direction for future research.

Author Contributions

Conceptualization, M.W.; methodology, X.W., P.H. and M.W.; software, X.W.; formal analysis, X.W. and P.H.; writing—original draft preparation, X.W.; writing—review and editing, P.H. and M.W.; supervision, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (12271294), and the Natural Science Foundation of Shandong Province (ZR2024MA089).

Data Availability Statement

The data set is available in the 'ISLR' package in R.

Acknowledgments

The authors are grateful to the editor and reviewers for their valuable comments and suggestions. We also sincerely thank Yundong Tu for providing their codes for us.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Proposition 1. 
It is seen that
$$\begin{aligned}\int_{-\infty}^{+\infty}f^2\,du&=\int_{-\infty}^{+\infty}\frac{4\tau(1-\tau)}{\pi\sigma^2\left(1+2\sqrt{\tau(1-\tau)}\right)}\exp\left\{-2\rho_\tau\!\left(\frac{u-\mu_\tau}{\sigma}\right)\right\}du\\ &=\frac{4\tau(1-\tau)}{\pi\sigma^2\left(1+2\sqrt{\tau(1-\tau)}\right)}\int_{-\infty}^{+\infty}\exp\left\{-\frac{2|\tau-I(u\le\mu_\tau)|(u-\mu_\tau)^2}{\sigma^2}\right\}du\\ &=\frac{4\tau(1-\tau)}{\pi\sigma^2\left(1+2\sqrt{\tau(1-\tau)}\right)}\times\left[\int_{\mu_\tau}^{+\infty}\exp\left\{-\frac{2\tau(u-\mu_\tau)^2}{\sigma^2}\right\}du+\int_{-\infty}^{\mu_\tau}\exp\left\{-\frac{2(1-\tau)(u-\mu_\tau)^2}{\sigma^2}\right\}du\right].\end{aligned}$$
Note that
$$\int_{\mu_\tau}^{+\infty}\exp\left\{-\frac{2\tau(u-\mu_\tau)^2}{\sigma^2}\right\}du=\frac{\sigma}{2}\sqrt{\frac{\pi}{2\tau}},$$
$$\int_{-\infty}^{\mu_\tau}\exp\left\{-\frac{2(1-\tau)(u-\mu_\tau)^2}{\sigma^2}\right\}du=\frac{\sigma}{2}\sqrt{\frac{\pi}{2(1-\tau)}}.$$
Then, we have
$$\int_{-\infty}^{+\infty}f^2\,du=\frac{4\tau(1-\tau)}{\pi\sigma^2\left(1+2\sqrt{\tau(1-\tau)}\right)}\left[\frac{\sigma}{2}\sqrt{\frac{\pi}{2\tau}}+\frac{\sigma}{2}\sqrt{\frac{\pi}{2(1-\tau)}}\right]=\frac{\left(\sqrt{\tau}+\sqrt{1-\tau}\right)\sqrt{2\tau(1-\tau)}}{\sigma\sqrt{\pi}\left(1+2\sqrt{\tau(1-\tau)}\right)}=\frac{1}{\sqrt{\pi\sigma^2}}\,\frac{\sqrt{2\tau(1-\tau)}}{\sqrt{\tau}+\sqrt{1-\tau}}.$$
Hence, the $L_2E$ criterion for the asymmetric normal distribution is
$$\frac{1}{\sqrt{\pi\sigma^2}}\,\frac{\sqrt{2\tau(1-\tau)}}{\sqrt{\tau}+\sqrt{1-\tau}}\left[1-2\sqrt{2}\cdot\frac{1}{n}\sum_{i=1}^{n}\exp\left\{-\rho_\tau\!\left(\frac{v_i-\mu_\tau}{\sigma}\right)\right\}\right].$$
The proposition is proved. □
Proof of Proposition 2. 
By Taylor's expansion, for $\beta$ in a neighborhood of $\tilde\beta$ we have
$$\log\{-L_n(\beta)\}-\log\{-L_n(\tilde\beta)\}\approx-\sum_{i=1}^{n}\pi_i(\tilde\beta)\left[\rho_\tau(y_i-x_i^T\beta)-\rho_\tau(y_i-x_i^T\tilde\beta)\right],$$
where
$$\pi_i(\tilde\beta)=\frac{\exp\{-\rho_\tau(y_i-x_i^T\tilde\beta)\}}{\sum_{l=1}^{n}\exp\{-\rho_\tau(y_l-x_l^T\tilde\beta)\}}.$$
Therefore, the minimization of $L_n(\beta)$ is transformed into the minimization of the following objective function:
$$D_n(\beta)=\sum_{i=1}^{n}\pi_i(\tilde\beta)\rho_\tau(y_i-x_i^T\beta).$$
The proposition is then proved. □
Before proving Theorem 1, we give the following lemmas.
Lemma A1 
([17]). The asymmetric least squares loss $\rho_\tau(\cdot)$ is continuously differentiable, but is not twice differentiable at zero when $\tau\neq 0.5$. Moreover, for any $t,t_0\in\mathbb{R}$ and $\tau\in(0,1)$, we have
$$\underline{c}\,(t-t_0)^2\le\rho_\tau(t)-\rho_\tau(t_0)-\rho_\tau'(t_0)(t-t_0)\le\bar c\,(t-t_0)^2,$$
where $\underline{c}=\min(\tau,1-\tau)$ and $\bar c=\max(\tau,1-\tau)$, which confirms that $\rho_\tau$ is strongly convex.
Lemma A2 
([17]). For any $t,t_0\in\mathbb{R}$ and $\tau\in(0,1)$, we have
$$2\underline{c}\,|t-t_0|\le|\rho_\tau'(t)-\rho_\tau'(t_0)|\le 2\bar c\,|t-t_0|.$$
It follows from the lemma that $\rho_\tau'(\cdot)$ is Lipschitz continuous.
Lemma A3 
([17]). Let $Z_1,\ldots,Z_n\in\mathbb{R}$ be i.i.d. sub-Gaussian random variables, and let $K=\max_i\|Z_i\|_{SG}$ be the sub-Gaussian norm, $i=1,\ldots,n$, where $\|Z_i\|_{SG}=\sup_{s\ge 1}s^{-1/2}(E|Z_i|^s)^{1/s}$. Then, for any $a=(a_1,\ldots,a_n)^T\in\mathbb{R}^n$ and $h\ge 0$, we have
$$P\left(\left|\sum_{i=1}^{n}a_iZ_i\right|\ge h\right)\le 2\exp\left(-\frac{Ch^2}{K^2\|a\|_2^2}\right),$$
where $C>0$ is a constant.
Lemma A4 
([17]). Let $Z$ be a sub-Gaussian random variable, $Z_+=\max(Z,0)$ and $Z_-=\max(-Z,0)$; then, the random variables $Z_+$ and $Z_-$ are also sub-Gaussian. For any $b_1,b_2\in\mathbb{R}$, $b_1Z_++b_2Z_-$ is sub-Gaussian.
Proof of Theorem 1. 
The IHT algorithm based on $Q_n(\cdot)$ is defined by
$$Q_n(\gamma;\beta)=D_n(\beta)+(\gamma-\beta)^TT_n(\beta)+(u/2)\|\gamma-\beta\|_2^2,$$
$$\beta^{(t+1)}=\arg\min_{\gamma}Q_n(\gamma;\beta^{(t)})\quad\text{subject to}\quad\|\gamma\|_0\le k.$$
By Lemma A1 and the assumption $u/2-\bar c\lambda_{\max}\ge 0$, we have
$$\begin{aligned}D_n(\beta^{(t)})&=Q_n(\beta^{(t)};\beta^{(t)})\ge Q_n(\beta^{(t+1)};\beta^{(t)})\\ &=D_n(\beta^{(t+1)})-\left[D_n(\beta^{(t+1)})-D_n(\beta^{(t)})-(\beta^{(t+1)}-\beta^{(t)})^TT_n(\beta^{(t)})\right]+(u/2)\|\beta^{(t+1)}-\beta^{(t)}\|_2^2\\ &\ge D_n(\beta^{(t+1)})-\bar c\,\|X(\beta^{(t+1)}-\beta^{(t)})\|_2^2+(u/2)\|\beta^{(t+1)}-\beta^{(t)}\|_2^2\\ &\ge D_n(\beta^{(t+1)})+(u/2-\bar c\lambda_{\max})\|\beta^{(t+1)}-\beta^{(t)}\|_2^2\ge D_n(\beta^{(t+1)}).\end{aligned}$$
This proves that $D_n(\beta^{(t)})$ decreases after each iteration. □
Lemma A5. 
For some $\lambda>0$, denote $\beta^L=\arg\min_{\beta}\{D_n(\beta)+n\lambda\|\beta\|_1\}$. Under the conditions of Theorem 3, we have
$$P\{\|\beta^L-\beta^*\|_1\le 16v^{-1}\lambda k_0\}\to 1,\quad n\to\infty,$$
where $v$ is the same as defined in Condition (A5).
Proof of Lemma A5. 
Based on the definition of β L , we have
$$D_n(\beta^L)+n\lambda\|\beta^L\|_1\le D_n(\beta^*)+n\lambda\|\beta^*\|_1,$$
that is,
$$D_n(\beta^L)-D_n(\beta^*)\le n\lambda\|\beta^*\|_1-n\lambda\|\beta^L\|_1.$$
Denote
$$\tfrac{1}{2}R_n=D_n(\beta^L)-D_n(\beta^*)-(\beta^L-\beta^*)^TT_n(\beta^*)$$
and $\delta^L=\beta^L-\beta^*$; it follows that
$$n^{-1}R_n\le 2n^{-1}\sum_{i=1}^{n}2\pi_i\omega_{\tau,i}(\beta^*)(y_i-x_i^T\beta^*)x_i^T\delta^L+2\lambda\|\beta^*\|_1-2\lambda\|\beta^L\|_1\le 2n^{-1}\max_{1\le j\le p}|T_{nj}(\beta^*)|\cdot\|\delta^L\|_1+2\lambda\|\beta^*\|_1-2\lambda\|\beta^L\|_1.$$
We now derive a bound on $T_n(\beta^*)$. Define the event
$$\mathcal{A}=\left\{\max_{1\le j\le p}|T_{nj}(\beta^*)|\le n\lambda/2\right\}.$$
By Lemma A3 and Condition (A3), for each $j\in\{1,\ldots,p\}$, we have
$$P\big(|T_{nj}(\beta^*)|>n\lambda/2\big)=P\left(\left|\sum_{i=1}^{n}\pi_ix_{ij}\zeta_i\right|>n\lambda/2\right)\le 2\exp\left\{-\frac{Cn^2\lambda^2}{4K^2\sum_{i=1}^{n}(\pi_ix_{ij})^2}\right\}\le 2\exp\left\{-\frac{Cn\lambda^2}{4K^2\eta_3^2}\right\},$$
where $\zeta_i=2\omega_{\tau,i}(\beta^*)(y_i-x_i^T\beta^*)$, which is sub-Gaussian by Condition (A4) and Lemma A4. Hence, for any $\alpha\in(0,1)$ and some generic positive constants $c$ and $a$,
$$P(\mathcal{A}^c)\le\sum_{j=1}^{p}P\big(|T_{nj}(\beta^*)|>n\lambda/2\big)\le 2p\exp\left\{-\frac{Cn\lambda^2}{4K^2\eta_3^2}\right\}\le\exp\{an^{\alpha}-c\lambda^2n\}\to 0,$$
as $n\to\infty$. This implies $P(\mathcal{A})\to 1$. Therefore, under the event $\mathcal{A}$, we have
$$n^{-1}R_n\le\lambda\|\delta^L\|_1+2\lambda\|\beta^*\|_1-2\lambda\|\beta^L\|_1.$$
This further implies that
$$n^{-1}R_n+\lambda\|\delta^L\|_1\le 2\lambda\sum_{j=1}^{p}\left\{|\beta_j^L-\beta_j^*|+|\beta_j^*|-|\beta_j^L|\right\}=2\lambda\sum_{j\in M_*}\left\{|\beta_j^L-\beta_j^*|+|\beta_j^*|-|\beta_j^L|\right\}\le 4\lambda\|\delta_{M_*}^L\|_1.$$
Since $R_n\ge 0$ by Lemma A1, we have $\|\delta^L\|_1\le 4\|\delta_{M_*}^L\|_1$, which leads to $\|\delta_{M_*^c}^L\|_1\le 3\|\delta_{M_*}^L\|_1$. By Conditions (A5) and (A2) and the Cauchy inequality, it follows that
$$\|\delta_{M_*}^L\|_1^2\le k_0\|\delta_{M_*}^L\|_2^2\le k_0v^{-1}(n^{-1}R_n)\le 4v^{-1}k_0\lambda\|\delta_{M_*}^L\|_1,$$
which gives rise to $\|\delta_{M_*}^L\|_1\le 4v^{-1}\lambda k_0$. Thus, under the event $\mathcal{A}$, we have
$$\|\delta^L\|_1=\|\delta_{M_*^c}^L\|_1+\|\delta_{M_*}^L\|_1\le 4\|\delta_{M_*}^L\|_1\le 16v^{-1}\lambda k_0.$$
The lemma is proved. □
Proof of Theorem 2. 
Let $\hat\beta_M$ be the estimator of $\beta$ obtained by the SRoWER based on model $M$. If $P\{\hat M\in\mathbb{M}_+^k\}\to 1$, the theorem is proved. Thus, it suffices to show that
$$P\left\{\min_{M\in\mathbb{M}_-^k}D_n(\hat\beta_M)\le\max_{M\in\mathbb{M}_+^k}D_n(\hat\beta_M)\right\}\to 0,$$
as $n\to\infty$.
For any $M\in\mathbb{M}_-^k$, define $M'=M\cup M_*\in\mathbb{M}_+^{2k}$. Consider $\beta_{M'}$ close to $\beta_{M'}^*$ such that $\|\beta_{M'}-\beta_{M'}^*\|_2=w_1n^{-\eta_1}$ for some $w_1,\eta_1>0$. When $n$ is sufficiently large, $\beta_{M'}$ falls into a small neighborhood of $\beta_{M'}^*$. By Lemma A1, we have
$$\begin{aligned}D_n(\beta_{M'}^*)-D_n(\beta_{M'})&=\sum_{i=1}^{n}\pi_i\left[\rho_\tau(y_i-x_{i,M'}^T\beta_{M'}^*)-\rho_\tau(y_i-x_{i,M'}^T\beta_{M'})\right]\\ &\le\sum_{i=1}^{n}\left[\pi_i(\beta_{M'}-\beta_{M'}^*)^T\zeta_ix_{i,M'}-\underline{c}\,\pi_i\big(x_{i,M'}^T(\beta_{M'}-\beta_{M'}^*)\big)^2\right]\\ &\le\sum_{i=1}^{n}\pi_i(\beta_{M'}-\beta_{M'}^*)^T\zeta_ix_{i,M'}-\underline{c}\,\underline{\pi}\,\lambda_{\min}w_1^2n^{-2\eta_1},\end{aligned}$$
where $\zeta_i=2\omega_{\tau,i}(\beta_{M'}^*)(y_i-x_{i,M'}^T\beta_{M'}^*)$, $\lambda_{\min}$ is the smallest eigenvalue of $x_{M'}^Tx_{M'}$, and $\underline{\pi}$ is the lower bound of $\{\pi_i,1\le i\le n\}$. By Lemma A3, we have
$$\begin{aligned}P\{D_n(\beta_{M'}^*)-D_n(\beta_{M'})\ge 0\}&\le P\left\{\sum_{i=1}^{n}\pi_i(\beta_{M'}-\beta_{M'}^*)^T\zeta_ix_{i,M'}\ge\underline{\pi}\,\underline{c}\,\lambda_{\min}w_1^2n^{-2\eta_1}\right\}\\ &\le 2\exp\left\{-\frac{\underline{c}^2\underline{\pi}^2\lambda_{\min}^2w_1^4n^{-4\eta_1}}{4K^2\sum_{i=1}^{n}\pi_i^2\big((\beta_{M'}-\beta_{M'}^*)^Tx_{i,M'}\big)^2}\right\}\le 2\exp\left\{-\frac{\underline{c}^2\underline{\pi}^2\lambda_{\min}^2w_1^2n^{-4\eta_1}}{4K^2\lambda_{\max}n^{-2\eta_1}}\right\},\end{aligned}$$
which leads to
$$P\{D_n(\beta_{M'}^*)\ge D_n(\beta_{M'})\}\le 2\exp\left\{-\frac{\underline{c}^2\underline{\pi}^2\lambda_{\min}^2w_1^2n^{-4\eta_1}}{4K^2\lambda_{\max}n^{-2\eta_1}}\right\}.$$
Thus, by the Bonferroni inequality and Condition (A1),
$$P\left\{D_n(\beta^*)\ge\min_{M\in\mathbb{M}_-^k}D_n(\beta_{M'})\right\}\le\sum_{M\in\mathbb{M}_-^k}P\{D_n(\beta_{M'}^*)\ge D_n(\beta_{M'})\}\le 2p^k\exp\{-bn^{1-2\eta_1}\}\le 2\exp\{cw_2n^{\eta_2}n^{\alpha}-bn^{1-2\eta_1}\}=o(1),$$
where $b$ is some generic positive constant. Due to the convexity of $D_n(\beta_{M'})$ in $\beta_{M'}$, the above conclusion holds for any $\beta_{M'}$ such that $\|\beta_{M'}-\beta_{M'}^*\|_2\ge w_1n^{-\eta_1}$.
For any $M\in\mathbb{M}_-^k$, let $\dot\beta_{M'}$ be $\hat\beta_M$ augmented with zeros corresponding to the elements in $M_*\setminus M$. By Condition (A2), it is seen that
$$\|\dot\beta_{M'}-\beta_{M'}^*\|_2\ge\|\beta_{M_*\setminus M}^*\|_2\ge w_1n^{-\eta_1}.$$
Consequently,
$$P\left\{\max_{M\in\mathbb{M}_+^k}D_n(\hat\beta_M)\ge\min_{M\in\mathbb{M}_-^k}D_n(\hat\beta_M)\right\}\le P\left\{D_n(\beta^*)\ge\min_{M\in\mathbb{M}_-^k}D_n(\dot\beta_{M'})\right\}=o(1).$$
The theorem is proved. □
Proof of Theorem 3. 
Let $q=\min_{j\in M_*}|\beta_j^*|$. By Condition (A2), $q\ge w_1n^{-\eta_1}$, and it suffices to prove that, for any $t\ge 0$,
$$P\left\{\|\beta^{(t)}-\beta^*\|_{\infty}\le q/2\right\}\to 1.$$
This is clearly implied by
$$\|\beta^{(t)}-\beta^*\|_{\infty}=o_P(q).$$
We use mathematical induction to prove this claim.
Step 1. 
When $t=0$, the initial value of the iteration, $\beta^{(0)}$, is defined as the LASSO-type estimator $\beta^L$ of Lemma A5. By Lemma A5, we have
$$P\{\|\beta^{(0)}-\beta^*\|_1\le 16v^{-1}\lambda k_0\}\to 1,\quad\text{as } n\to\infty.$$
Under Condition (A2), we have $k_0=O(n^{\eta_2})$ and $q^{-1}=O(n^{\eta_1})$. Since $\lambda=o(n^{-(\eta_1+\eta_2)})$, it follows that $\lambda k_0=o(q)$. Hence, the claim holds when $t=0$.
Step 2. 
Suppose the claim holds for $t-1$; we show that it also holds for $t$, $t\ge 1$. By the definition of $V$, $\beta^{(t)}=V(\gamma^{(t)};k)$, where $\gamma^{(t)}=\beta^{(t-1)}-u^{-1}T_n(\beta^{(t-1)})$; then
$$\|\beta^{(t)}-\beta^*\|_{\infty}=\|V(\gamma^{(t)};k)-\beta^*\|_{\infty}\le\|\gamma^{(t)}-\beta^*\|_{\infty}.$$
Thus, the claim follows once we show $\|\gamma^{(t)}-\beta^*\|_{\infty}=o_P(q)$. Note that
$$\|\gamma^{(t)}-\beta^*\|_{\infty}\le\|\beta^{(t-1)}-\beta^*\|_{\infty}+u^{-1}\|T_n(\beta^{(t-1)})\|_{\infty}.$$
Lemma A2 implies that, for any $j\in\{1,\ldots,p\}$, we have
$$|T_{nj}(\beta^{(t-1)})-T_{nj}(\beta^*)|\le 2\bar c\sum_{i=1}^{n}|x_i^T(\beta^{(t-1)}-\beta^*)|\,|\pi_ix_{ij}|,$$
which further implies
$$u^{-1}|T_{nj}(\beta^{(t-1)})|\le u^{-1}|T_{nj}(\beta^*)|+2u^{-1}\bar c\,\|\beta^{(t-1)}-\beta^*\|_{\infty}\max_{1\le j\le p}\sum_{i=1}^{n}\sum_{h\in M_t}|x_{ij}x_{ih}|\le u^{-1}\|T_n(\beta^*)\|_{\infty}+2u^{-1}nk\eta_3^2\,\|\beta^{(t-1)}-\beta^*\|_{\infty},$$
where $M_t=M^{(t-1)}\cup M_*$ with $s(M_t)\le 2k$. Meanwhile, the tail bound in the proof of Lemma A5 gives rise to $\|T_n(\beta^*)\|_{\infty}=O_P(n\lambda)$. By $\lambda n^{\eta_1+\eta_2}=o(1)$ and $u\ge\xi nk$, we get $u^{-1}\|T_n(\beta^*)\|_{\infty}=o_P(q)$. In addition, by the induction hypothesis,
$$2u^{-1}nk\eta_3^2\,\|\beta^{(t-1)}-\beta^*\|_{\infty}\le 2(\xi nk)^{-1}nk\eta_3^2\,\|\beta^{(t-1)}-\beta^*\|_{\infty}=o_P(q),$$
so we have $\|\gamma^{(t)}-\beta^*\|_{\infty}=o_P(q)$. Thus, the second step of the mathematical induction is completed, and the theorem is proved. □

References

  1. Frank, L.E.; Friedman, J.H. A statistical view of some chemometrics regression tools. Technometrics 1993, 35, 109–135. [Google Scholar] [CrossRef]
  2. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
  3. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  4. Candes, E.; Tao, T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007, 35, 2313–2351. [Google Scholar]
  5. Fan, J.; Lv, J. Sure independence screening for ultrahigh dimensional feature space. J. R. Stat. Soc. Ser. B Stat. Methodol. 2008, 70, 849–911. [Google Scholar] [CrossRef]
  6. Fan, J.; Samworth, R.; Wu, Y. Ultrahigh dimensional feature selection: Beyond the linear model. J. Mach. Learn. Res. 2009, 10, 2013–2038. [Google Scholar]
  7. Zhu, L.P.; Li, L.; Li, R.; Zhu, L.X. Model-free feature screening for ultrahigh-dimensional data. J. Am. Stat. Assoc. 2011, 106, 1464–1475. [Google Scholar] [CrossRef]
  8. Li, R.; Zhong, W.; Zhu, L. Feature screening via distance correlation learning. J. Am. Stat. Assoc. 2012, 107, 1129–1139. [Google Scholar] [CrossRef]
  9. Fan, J.; Song, R. Sure independence screening in generalized linear models with NP-dimensionality. Ann. Stat. 2010, 38, 3567–3604. [Google Scholar] [CrossRef]
  10. Wang, H. Forward regression for ultra-high dimensional variable screening. J. Am. Stat. Assoc. 2009, 104, 1512–1524. [Google Scholar] [CrossRef]
  11. Xu, C.; Chen, J. The sparse MLE for ultrahigh-dimensional feature screening. J. Am. Stat. Assoc. 2014, 109, 1257–1269. [Google Scholar] [CrossRef] [PubMed]
  12. Koenker, R. Quantile Regression; Cambridge University Press: New York, NY, USA, 2005. [Google Scholar]
  13. Newey, W.K.; Powell, J.L. Asymmetric least squares estimation and testing. Econom. J. Econom. Soc. 1987, 55, 819–847. [Google Scholar] [CrossRef]
  14. Zhao, J.; Chen, Y.; Zhang, Y. Expectile regression for analyzing heteroscedasticity in high dimension. Stat. Probab. Lett. 2018, 137, 304–311. [Google Scholar] [CrossRef]
  15. Ciuperca, G. Variable selection in high-dimensional linear model with possibly asymmetric errors. Comput. Stat. Data Anal. 2021, 155, 107112. [Google Scholar] [CrossRef]
  16. Song, S.; Lin, Y.; Zhou, Y. Linear expectile regression under massive data. Fundam. Res. 2021, 1, 574–585. [Google Scholar] [CrossRef]
  17. Gu, Y.; Zou, H. High-dimensional generalizations of asymmetric least squares regression and their applications. Ann. Stat. 2016, 44, 2661–2694. [Google Scholar] [CrossRef]
  18. Wang, L.; Wu, Y.; Li, R. Quantile regression for analyzing heterogeneity in ultra-high dimension. J. Am. Stat. Assoc. 2012, 107, 214–222. [Google Scholar] [CrossRef]
  19. He, X.; Wang, L.; Hong, H.G. Quantile-adaptive model-free variable screening for high-dimensional heterogeneous data. Ann. Stat. 2013, 41, 342–369. [Google Scholar] [CrossRef]
  20. Wu, Y.; Yin, G. Conditional quantile screening in ultrahigh-dimensional heterogeneous data. Biometrika 2015, 102, 65–76. [Google Scholar] [CrossRef]
  21. Zhong, W.; Zhu, L.; Li, R.; Cui, H. Regularized quantile regression and robust feature screening for single index models. Stat. Sin. 2016, 26, 69–95. [Google Scholar] [CrossRef]
  22. Ma, Y.; Li, Y.; Lin, H. Concordance measure-based feature screening and variable selection. Stat. Sin. 2017, 27, 1967–1985. [Google Scholar] [CrossRef]
  23. Chen, L.P. A note of feature screening via a rank-based coefficient of correlation. Biom. J. 2023, 65, 2100373. [Google Scholar] [CrossRef]
  24. Chen, L.P. Feature screening via concordance indices for left-truncated and right-censored survival data. J. Stat. Plan. Inference 2024, 232, 106153. [Google Scholar] [CrossRef]
  25. Tu, Y.; Wang, S. Variable screening and model averaging for expectile regressions. Oxf. Bull. Econ. Stat. 2023, 85, 574–598. [Google Scholar] [CrossRef]
  26. Ghosh, A.; Ponzi, E.; Sandanger, T.; Thoresen, M. Robust sure independence screening for nonpolynomial dimensional generalized linear models. Scand. J. Stat. 2023, 50, 1232–1262. [Google Scholar] [CrossRef]
  27. Ghosh, A.; Thoresen, M. A robust variable screening procedure for ultra-high dimensional data. Stat. Methods Med Res. 2021, 30, 1816–1832. [Google Scholar] [CrossRef]
  28. Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M. Robust and efficient estimation by minimising a density power divergence. Biometrika 1998, 85, 549–559. [Google Scholar] [CrossRef]
  29. Scott, D.W. Parametric statistical modeling by minimum integrated square error. Technometrics 2001, 43, 274–285. [Google Scholar] [CrossRef]
  30. Efron, B. Regression percentiles using asymmetric squared error loss. Stat. Sin. 1991, 1, 93–125. [Google Scholar]
  31. Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef]
  32. Blumensath, T.; Davies, M.E. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274. [Google Scholar] [CrossRef]
  33. Chen, J.; Chen, Z. Extended Bayesian information criteria for model selection with large model spaces. Biometrika 2008, 95, 759–771. [Google Scholar] [CrossRef]
  34. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer-Verlag: New York, NY, USA, 2013. [Google Scholar]
  35. Fan, J.; Ma, Y.; Dai, W. Nonparametric independence screening in sparse ultra-high-dimensional varying coefficient models. J. Am. Stat. Assoc. 2014, 109, 1270–1284. [Google Scholar] [CrossRef] [PubMed]
  36. Wang, M.; Kang, X.; Liang, J.; Wang, K.; Wu, Y. Heteroscedasticity identification and variable selection via multiple quantile regression. J. Stat. Comput. Simul. 2024, 94, 297–314. [Google Scholar] [CrossRef]
Table 1. Simulation results of Example 1 for τ = 0.5.

| ρ | ε | n | Method | TP | CF | RMSE |
|---|---|---|--------|----|----|------|
| 0 | Normal | 100 | SRoWER | 4 (0) | 0.880 (0.326) | 0.985 (0.427) |
|  |  |  | SMLE | 4 (0) | 0.910 (0.287) | 1.092 (0.418) |
|  |  |  | EPCS | 4 (0) | 0.935 (0.247) | 0.951 (0.424) |
|  |  | 200 | SRoWER | 4 (0) | 0.95 (0.218) | 0.514 (0.077) |
|  |  |  | SMLE | 4 (0) | 0.95 (0.218) | 0.693 (0.162) |
|  |  |  | EPCS | 4 (0) | 0.95 (0.218) | 0.696 (0.162) |
|  | Gumbel | 100 | SRoWER | 3.995 (0.071) | 0.880 (0.326) | 1.153 (0.349) |
|  |  |  | SMLE | 4.000 (0.000) | 0.905 (0.294) | 1.518 (0.514) |
|  |  |  | EPCS | 3.990 (0.100) | 0.920 (0.272) | 1.390 (0.416) |
|  |  | 200 | SRoWER | 4 (0) | 0.900 (0.301) | 0.756 (0.156) |
|  |  |  | SMLE | 4 (0) | 0.965 (0.184) | 0.877 (0.216) |
|  |  |  | EPCS | 4 (0) | 0.965 (0.184) | 0.922 (0.244) |
|  | T | 100 | SRoWER | 3.900 (0.375) | 0.765 (0.425) | 1.665 (0.587) |
|  |  |  | SMLE | 3.845 (0.471) | 0.795 (0.405) | 2.351 (0.914) |
|  |  |  | EPCS | 3.795 (0.578) | 0.765 (0.425) | 2.450 (1.143) |
|  |  | 200 | SRoWER | 3.980 (0.199) | 0.91 (0.287) | 0.854 (0.241) |
|  |  |  | SMLE | 3.960 (0.262) | 0.92 (0.272) | 1.260 (0.451) |
|  |  |  | EPCS | 3.945 (0.320) | 0.92 (0.272) | 1.286 (0.457) |
| 0.5 | Normal | 100 | SRoWER | 4 (0) | 0.935 (0.247) | 0.714 (0.245) |
|  |  |  | SMLE | 4 (0) | 0.945 (0.229) | 0.834 (0.330) |
|  |  |  | EPCS | 4 (0) | 0.955 (0.208) | 0.741 (0.327) |
|  |  | 200 | SRoWER | 4 (0) | 0.905 (0.294) | 0.594 (0.094) |
|  |  |  | SMLE | 4 (0) | 0.975 (0.157) | 0.322 (0.214) |
|  |  |  | EPCS | 4 (0) | 0.985 (0.122) | 0.290 (0.178) |
|  | Gumbel | 100 | SRoWER | 3.995 (0.071) | 0.90 (0.301) | 0.943 (0.379) |
|  |  |  | SMLE | 3.995 (0.071) | 0.91 (0.287) | 1.175 (0.501) |
|  |  |  | EPCS | 3.975 (0.186) | 0.91 (0.287) | 1.187 (0.519) |
|  |  | 200 | SRoWER | 4 (0) | 0.905 (0.294) | 0.695 (0.169) |
|  |  |  | SMLE | 4 (0) | 0.975 (0.157) | 0.570 (0.289) |
|  |  |  | EPCS | 4 (0) | 0.990 (0.100) | 0.397 (0.167) |
|  | T | 100 | SRoWER | 3.870 (0.429) | 0.775 (0.419) | 1.924 (0.580) |
|  |  |  | SMLE | 3.780 (0.551) | 0.770 (0.422) | 2.724 (1.067) |
|  |  |  | EPCS | 3.525 (0.795) | 0.550 (0.499) | 2.711 (0.907) |
|  |  | 200 | SRoWER | 3.970 (0.222) | 0.845 (0.363) | 0.986 (0.277) |
|  |  |  | SMLE | 3.955 (0.252) | 0.950 (0.218) | 0.665 (0.234) |
|  |  |  | EPCS | 3.940 (0.327) | 0.935 (0.247) | 0.729 (0.223) |
Table 2. Simulation results of Example 1 for τ = 0.05.

| ρ | ε | n | Method | TP | CF | RMSE |
|---|---|---|--------|----|----|------|
| 0 | Normal | 100 | SRoWER | 3.995 (0.071) | 0.725 (0.448) | 1.565 (0.422) |
|  |  |  | SMLE | 4.000 (0.000) | 0.760 (0.428) | 2.187 (0.661) |
|  |  |  | EPCS | 3.995 (0.071) | 0.510 (0.501) | 3.067 (0.951) |
|  |  | 200 | SRoWER | 4 (0) | 0.725 (0.448) | 0.891 (0.229) |
|  |  |  | SMLE | 4 (0) | 0.690 (0.464) | 1.635 (0.589) |
|  |  |  | EPCS | 4 (0) | 0.540 (0.500) | 2.031 (0.697) |
|  | Gumbel | 100 | SRoWER | 3.980 (0.140) | 0.875 (0.332) | 1.050 (0.192) |
|  |  |  | SMLE | 3.995 (0.071) | 0.950 (0.218) | 0.932 (0.291) |
|  |  |  | EPCS | 3.995 (0.071) | 0.830 (0.377) | 1.391 (0.393) |
|  |  | 200 | SRoWER | 4 (0) | 0.885 (0.320) | 0.601 (0.111) |
|  |  |  | SMLE | 4 (0) | 0.985 (0.122) | 0.422 (0.131) |
|  |  |  | EPCS | 4 (0) | 0.960 (0.196) | 0.619 (0.184) |
|  | T | 100 | SRoWER | 3.120 (1.049) | 0.295 (0.457) | 4.441 (1.483) |
|  |  |  | SMLE | 3.325 (0.907) | 0.235 (0.425) | 7.941 (3.112) |
|  |  |  | EPCS | 3.120 (1.025) | 0.065 (0.247) | 9.146 (3.500) |
|  |  | 200 | SRoWER | 3.765 (0.558) | 0.400 (0.491) | 2.918 (1.069) |
|  |  |  | SMLE | 3.800 (0.540) | 0.180 (0.385) | 7.217 (2.362) |
|  |  |  | EPCS | 3.800 (0.530) | 0.125 (0.332) | 7.767 (2.407) |
| 0.5 | Normal | 100 | SRoWER | 3.985 (0.122) | 0.79 (0.408) | 1.355 (0.364) |
|  |  |  | SMLE | 3.990 (0.141) | 0.83 (0.377) | 1.746 (0.645) |
|  |  |  | EPCS | 3.880 (0.487) | 0.50 (0.501) | 2.710 (0.887) |
|  |  | 200 | SRoWER | 4 (0) | 0.665 (0.473) | 0.962 (0.251) |
|  |  |  | SMLE | 4 (0) | 0.725 (0.448) | 1.752 (0.515) |
|  |  |  | EPCS | 4 (0) | 0.705 (0.457) | 1.733 (0.499) |
|  | Gumbel | 100 | SRoWER | 4.00 (0.000) | 0.890 (0.314) | 0.979 (0.246) |
|  |  |  | SMLE | 4.00 (0.000) | 0.935 (0.247) | 1.035 (0.253) |
|  |  |  | EPCS | 3.92 (0.405) | 0.635 (0.483) | 1.563 (0.618) |
|  |  | 200 | SRoWER | 4 (0) | 0.765 (0.425) | 0.774 (0.120) |
|  |  |  | SMLE | 4 (0) | 0.935 (0.247) | 0.670 (0.301) |
|  |  |  | EPCS | 4 (0) | 0.935 (0.247) | 0.541 (0.372) |
|  | T | 100 | SRoWER | 3.260 (0.926) | 0.270 (0.445) | 4.738 (1.573) |
|  |  |  | SMLE | 3.415 (0.852) | 0.265 (0.442) | 7.569 (2.619) |
|  |  |  | EPCS | 2.615 (1.159) | 0.065 (0.247) | 8.153 (2.855) |
|  |  | 200 | SRoWER | 3.725 (0.593) | 0.425 (0.496) | 2.296 (0.635) |
|  |  |  | SMLE | 3.770 (0.518) | 0.220 (0.415) | 7.199 (2.344) |
|  |  |  | EPCS | 3.460 (0.801) | 0.145 (0.353) | 6.908 (2.204) |
Table 3. Simulation results of Example 2 for τ = 0.5.

| ρ | ε | n | Method | TP | CF | RMSE |
|---|---|---|--------|----|----|------|
| 0 | Normal | 100 | SRoWER | 3.685 (0.713) | 0.645 (0.480) | 1.695 (0.569) |
|  |  |  | SMLE | 3.770 (0.537) | 0.685 (0.466) | 2.771 (1.039) |
|  |  |  | EPCS | 3.700 (0.716) | 0.690 (0.464) | 2.880 (0.987) |
|  |  | 200 | SRoWER | 3.940 (0.277) | 0.755 (0.431) | 1.492 (0.503) |
|  |  |  | SMLE | 3.935 (0.285) | 0.820 (0.385) | 2.216 (0.806) |
|  |  |  | EPCS | 3.915 (0.372) | 0.820 (0.385) | 2.150 (0.711) |
|  | Gumbel | 100 | SRoWER | 3.655 (0.720) | 0.635 (0.483) | 1.553 (0.521) |
|  |  |  | SMLE | 3.680 (0.663) | 0.695 (0.462) | 2.986 (1.355) |
|  |  |  | EPCS | 3.605 (0.782) | 0.675 (0.470) | 2.806 (0.871) |
|  |  | 200 | SRoWER | 3.825 (0.464) | 0.725 (0.448) | 1.296 (0.551) |
|  |  |  | SMLE | 3.820 (0.478) | 0.780 (0.415) | 2.050 (0.688) |
|  |  |  | EPCS | 3.805 (0.528) | 0.805 (0.397) | 1.456 (0.585) |
|  | T | 100 | SRoWER | 3.495 (0.821) | 0.560 (0.498) | 2.030 (0.712) |
|  |  |  | SMLE | 3.445 (0.819) | 0.585 (0.494) | 3.532 (1.271) |
|  |  |  | EPCS | 3.380 (0.927) | 0.540 (0.500) | 3.123 (1.117) |
|  |  | 200 | SRoWER | 3.895 (0.380) | 0.675 (0.470) | 1.611 (0.518) |
|  |  |  | SMLE | 3.805 (0.508) | 0.785 (0.412) | 1.831 (0.558) |
|  |  |  | EPCS | 3.830 (0.482) | 0.810 (0.393) | 2.013 (0.693) |
| 0.5 | Normal | 100 | SRoWER | 3.735 (0.646) | 0.660 (0.475) | 1.740 (0.586) |
|  |  |  | SMLE | 3.760 (0.587) | 0.715 (0.453) | 3.033 (0.881) |
|  |  |  | EPCS | 3.660 (0.760) | 0.690 (0.464) | 3.292 (1.296) |
|  |  | 200 | SRoWER | 3.950 (0.240) | 0.645 (0.480) | 1.588 (0.421) |
|  |  |  | SMLE | 3.860 (0.437) | 0.690 (0.464) | 2.469 (0.457) |
|  |  |  | EPCS | 3.855 (0.525) | 0.745 (0.437) | 2.582 (0.500) |
|  | Gumbel | 100 | SRoWER | 3.735 (0.622) | 0.655 (0.477) | 1.839 (0.567) |
|  |  |  | SMLE | 3.725 (0.601) | 0.690 (0.464) | 2.801 (1.184) |
|  |  |  | EPCS | 3.685 (0.615) | 0.670 (0.471) | 2.463 (1.101) |
|  |  | 200 | SRoWER | 3.885 (0.377) | 0.640 (0.481) | 1.617 (0.487) |
|  |  |  | SMLE | 3.840 (0.394) | 0.675 (0.470) | 2.523 (0.661) |
|  |  |  | EPCS | 3.820 (0.468) | 0.715 (0.453) | 2.624 (0.710) |
|  | T | 100 | SRoWER | 3.455 (0.801) | 0.445 (0.498) | 2.114 (0.549) |
|  |  |  | SMLE | 3.425 (0.792) | 0.475 (0.501) | 3.765 (1.072) |
|  |  |  | EPCS | 3.100 (1.070) | 0.405 (0.492) | 3.186 (1.153) |
|  |  | 200 | SRoWER | 3.910 (0.304) | 0.520 (0.501) | 1.949 (0.523) |
|  |  |  | SMLE | 3.775 (0.525) | 0.685 (0.466) | 2.756 (0.671) |
|  |  |  | EPCS | 3.670 (0.703) | 0.660 (0.475) | 2.877 (0.735) |
Table 4. Simulation results of Example 2 for τ = 0.05.

| ρ | ε | n | Method | TP | CF | RMSE |
|---|---|---|--------|----|----|------|
| 0 | Normal | 100 | SRoWER | 3.530 (0.918) | 0.385 (0.488) | 2.090 (0.576) |
|  |  |  | SMLE | 3.755 (0.638) | 0.535 (0.500) | 4.484 (1.384) |
|  |  |  | EPCS | 3.575 (0.865) | 0.325 (0.470) | 4.070 (1.368) |
|  |  | 200 | SRoWER | 3.75 (0.671) | 0.550 (0.499) | 1.406 (0.587) |
|  |  |  | SMLE | 3.98 (0.140) | 0.490 (0.501) | 4.210 (1.145) |
|  |  |  | EPCS | 3.87 (0.463) | 0.495 (0.501) | 3.615 (1.053) |
|  | Gumbel | 100 | SRoWER | 3.490 (0.930) | 0.395 (0.490) | 1.649 (0.537) |
|  |  |  | SMLE | 3.745 (0.626) | 0.630 (0.484) | 4.217 (1.552) |
|  |  |  | EPCS | 3.535 (0.856) | 0.475 (0.501) | 3.409 (1.670) |
|  |  | 200 | SRoWER | 3.755 (0.606) | 0.600 (0.491) | 1.370 (0.578) |
|  |  |  | SMLE | 3.885 (0.391) | 0.655 (0.477) | 3.338 (1.196) |
|  |  |  | EPCS | 3.785 (0.592) | 0.675 (0.470) | 3.088 (1.085) |
|  | T | 100 | SRoWER | 2.795 (1.127) | 0.165 (0.372) | 3.357 (0.890) |
|  |  |  | SMLE | 2.985 (1.082) | 0.185 (0.389) | 8.103 (2.779) |
|  |  |  | EPCS | 2.735 (1.184) | 0.070 (0.256) | 8.865 (2.696) |
|  |  | 200 | SRoWER | 3.44 (0.900) | 0.34 (0.475) | 2.686 (0.759) |
|  |  |  | SMLE | 3.67 (0.635) | 0.17 (0.377) | 7.480 (2.210) |
|  |  |  | EPCS | 3.56 (0.692) | 0.09 (0.287) | 7.741 (2.436) |
| 0.5 | Normal | 100 | SRoWER | 3.610 (0.966) | 0.40 (0.491) | 1.966 (0.582) |
|  |  |  | SMLE | 3.795 (0.524) | 0.54 (0.500) | 4.450 (1.279) |
|  |  |  | EPCS | 3.365 (1.170) | 0.30 (0.459) | 4.641 (1.729) |
|  |  | 200 | SRoWER | 3.835 (0.547) | 0.480 (0.501) | 1.556 (0.400) |
|  |  |  | SMLE | 3.925 (0.346) | 0.515 (0.501) | 3.623 (0.840) |
|  |  |  | EPCS | 3.760 (0.628) | 0.545 (0.499) | 3.360 (0.939) |
|  | Gumbel | 100 | SRoWER | 3.700 (0.702) | 0.435 (0.497) | 1.739 (0.621) |
|  |  |  | SMLE | 3.785 (0.529) | 0.685 (0.466) | 3.233 (1.333) |
|  |  |  | EPCS | 3.560 (0.895) | 0.505 (0.501) | 3.989 (1.709) |
|  |  | 200 | SRoWER | 3.835 (0.509) | 0.530 (0.500) | 1.620 (0.572) |
|  |  |  | SMLE | 3.910 (0.335) | 0.620 (0.487) | 3.390 (1.064) |
|  |  |  | EPCS | 3.770 (0.607) | 0.665 (0.473) | 3.121 (1.015) |
|  | T | 100 | SRoWER | 2.815 (1.099) | 0.115 (0.320) | 3.167 (0.756) |
|  |  |  | SMLE | 2.960 (0.992) | 0.155 (0.363) | 8.137 (2.316) |
|  |  |  | EPCS | 2.265 (1.068) | 0.025 (0.157) | 8.689 (2.627) |
|  |  | 200 | SRoWER | 3.445 (0.843) | 0.305 (0.462) | 2.703 (0.689) |
|  |  |  | SMLE | 3.620 (0.662) | 0.155 (0.363) | 7.437 (2.066) |
|  |  |  | EPCS | 3.115 (0.998) | 0.145 (0.353) | 8.051 (1.960) |
Table 5. Simulation results of Example 3 for τ = 0.5.

| ρ | n | Method | TP | CF | RMSE |
|---|---|--------|----|----|------|
| 0 | 100 | SRoWER | 3.81 (0.579) | 0.690 (0.464) | 1.600 (0.642) |
|  |  | SMLE | 3.86 (0.471) | 0.665 (0.473) | 3.671 (1.112) |
|  |  | EPCS | 3.83 (0.532) | 0.670 (0.471) | 3.186 (1.061) |
|  | 200 | SRoWER | 3.945 (0.25) | 0.785 (0.412) | 0.968 (0.513) |
|  |  | SMLE | 3.965 (0.21) | 0.765 (0.425) | 1.956 (0.784) |
|  |  | EPCS | 3.965 (0.21) | 0.815 (0.389) | 1.762 (0.859) |
| 0.5 | 100 | SRoWER | 3.895 (0.495) | 0.65 (0.478) | 1.432 (0.566) |
|  |  | SMLE | 3.915 (0.372) | 0.64 (0.481) | 2.811 (0.826) |
|  |  | EPCS | 3.845 (0.559) | 0.70 (0.459) | 2.841 (0.937) |
|  | 200 | SRoWER | 3.970 (0.198) | 0.675 (0.470) | 1.325 (0.557) |
|  |  | SMLE | 3.980 (0.140) | 0.665 (0.473) | 1.968 (0.585) |
|  |  | EPCS | 3.965 (0.184) | 0.675 (0.470) | 2.221 (0.641) |
Table 6. Simulation results of Example 3 for τ = 0.05.

| ρ | n | Method | TP | CF | RMSE |
|---|---|--------|----|----|------|
| 0 | 100 | SRoWER | 3.610 (0.907) | 0.545 (0.499) | 1.189 (0.659) |
|  |  | SMLE | 3.845 (0.531) | 0.530 (0.500) | 4.331 (1.956) |
|  |  | EPCS | 3.725 (0.783) | 0.515 (0.501) | 4.003 (1.500) |
|  | 200 | SRoWER | 3.750 (0.735) | 0.600 (0.491) | 0.947 (0.715) |
|  |  | SMLE | 3.945 (0.250) | 0.535 (0.500) | 4.572 (1.206) |
|  |  | EPCS | 3.900 (0.401) | 0.560 (0.498) | 3.445 (1.388) |
| 0.5 | 100 | SRoWER | 3.73 (0.735) | 0.435 (0.497) | 1.602 (0.523) |
|  |  | SMLE | 3.92 (0.323) | 0.550 (0.499) | 3.820 (1.434) |
|  |  | EPCS | 3.73 (0.692) | 0.520 (0.501) | 3.843 (1.253) |
|  | 200 | SRoWER | 3.950 (0.240) | 0.49 (0.501) | 1.427 (0.527) |
|  |  | SMLE | 3.985 (0.122) | 0.53 (0.500) | 3.478 (1.163) |
|  |  | EPCS | 3.905 (0.383) | 0.54 (0.500) | 3.212 (0.929) |
Table 7. Expectile prediction error (EPE), model size (Size), and selected noise variables (SNV) over 100 repetitions and their standard errors (in parentheses) for the wage data.

| τ | Method | EPE | Size | SNV |
|---|--------|-----|------|-----|
| 0.5 | SRoWER | 0.041 (0.002) | 6.63 (1.107) | 0.06 (0.239) |
|  | SMLE | 0.042 (0.002) | 5.44 (0.978) | 0.04 (0.243) |
|  | EPCS | 0.048 (0.005) | 6.45 (0.687) | 0.00 (0.000) |
| 0.05 | SRoWER | 0.017 (0.001) | 4.60 (1.511) | 0.33 (0.620) |
|  | SMLE | 0.030 (0.014) | 6.03 (2.422) | 1.12 (1.565) |
|  | EPCS | 0.275 (0.183) | 10.11 (3.127) | 3.62 (2.107) |