Search Results (2)

Search Parameters:
Keywords = adaptive E-NET penalty

24 pages, 418 KB  
Article
Robust Variable Selection and Regularization in Quantile Regression Based on Adaptive-LASSO and Adaptive E-NET
by Innocent Mudhombo and Edmore Ranganai
Computation 2022, 10(11), 203; https://doi.org/10.3390/computation10110203 - 21 Nov 2022
Cited by 1 | Viewed by 2129
Abstract
Although variable selection and regularization procedures have been extensively considered in the literature for the quantile regression (QR) scenario via penalization, many such procedures fail to deal simultaneously with data aberrations in the design space, namely, high leverage points (X-space outliers) and collinearity challenges. Some high leverage points, referred to as collinearity-influential observations, tend to adversely alter the eigenstructure of the design matrix by inducing or masking collinearity. Therefore, the literature recommends that the problems of collinearity and high leverage points be dealt with simultaneously. In this article, we suggest adaptive LASSO and adaptive E-NET penalized QR procedures (QR-ALASSO and QR-AE-NET), in which the weights are based on a QR estimator, as remedies. We extend this methodology to the penalized weighted QR versions of the WQR-LASSO and WQR-E-NET procedures we suggested earlier. In the literature, adaptive weights are based on the RIDGE regression (RR) parameter estimator. Although the use of this estimator may be plausible at the L1 estimator (QR at τ = 0.5) for a symmetrical distribution, it may not be so at extreme quantile levels. Therefore, we use a QR-based estimator to derive adaptive weights. We carried out a comparative study of QR-LASSO, QR-E-NET, and the procedures we suggest here, viz., QR-ALASSO, QR-AE-NET, and the weighted adaptive penalized procedures (WQR-ALASSO and WQR-AE-NET). The simulation study results show that QR-ALASSO, QR-AE-NET, WQR-ALASSO and WQR-AE-NET generally outperform their non-adaptive counterparts.
At predictor matrices with collinearity-inducing points under normality, QR-ALASSO and QR-AE-NET outperform the non-adaptive procedures in the unweighted scenarios as follows: in all 16 cases (100%) with respect to correctly selected (shrunk) zero coefficients; in 88% of cases with respect to correctly fitted models; and in 81% with respect to prediction. In the weighted penalized WQR scenarios, WQR-ALASSO and WQR-AE-NET outperform their non-adaptive versions 75% of the time with respect to both correctly fitted models and correctly shrunk zero coefficients, and 63% of the time with respect to prediction. At predictor matrices with collinearity-masking points under normality, QR-ALASSO and QR-AE-NET outperform the non-adaptive procedures in the unweighted scenarios as follows: in prediction, 100% and 88% of the time, respectively; with respect to correctly fitted models, 100% and 50% of the time (with ties in the remaining 50%); and with respect to correctly shrunk zero coefficients, 100% of the time. In the weighted scenario, WQR-ALASSO and WQR-AE-NET outperform their respective non-adaptive versions as follows: with respect to prediction, both 63% of the time; with respect to correctly fitted models, 88% of the time; and with respect to correctly shrunk zero coefficients, 100% of the time. At predictor matrices with collinearity-inducing points under the t-distribution, QR-ALASSO and QR-AE-NET outperform their respective non-adaptive procedures in the unweighted scenarios as follows: in prediction, 100% and 75% of the time, respectively; with respect to correctly fitted models, 88% of the time each; and with respect to correctly shrunk zero coefficients, 88% and 100% of the time. Additionally, comparing WQR-ALASSO and WQR-AE-NET with their unweighted versions, the former outperform the latter in all respective cases with respect to prediction, whilst there is no clear "winner" with respect to the other two measures.
Overall, WQR-ALASSO generally outperforms all other models with respect to all measures. At the predictor matrix with collinearity-masking points under the t-distribution, all adaptive versions outperformed their respective non-adaptive versions with respect to all metrics. In the unweighted scenarios, QR-ALASSO and QR-AE-NET dominate their non-adaptive versions as follows: in prediction, 63% and 75% of the time, respectively; with respect to correctly fitted models, 100% and 38% of the time (with ties in the remaining 62%); and with respect to correctly shrunk zero coefficients, 100% of the time. In the weighted scenarios, the adaptive versions outperformed their non-adaptive versions with respect to prediction 62% of the time in both cases, while the reverse holds with respect to correctly fitted models and correctly shrunk zero coefficients; WQR-ALASSO and WQR-AE-NET dominate their respective non-adaptive versions with respect to correctly fitted models 62% of the time and with respect to correctly shrunk zero coefficients 100% of the time in both cases. At the design matrix with both collinearity and high leverage points under heavy-tailed distributions (t-distributions with d(1;6) degrees of freedom), the dominance of the adaptive procedures over the non-adaptive ones is again evident. In the unweighted scenarios, QR-ALASSO and QR-AE-NET outperform their non-adaptive versions as follows: in prediction, 75% and 62% of the time, respectively; with respect to correctly fitted models, 100% and 88% of the time; and with respect to correctly shrunk zero coefficients, 100% of the time in both cases.
In the weighted scenarios, WQR-ALASSO and WQR-AE-NET dominate their non-adaptive versions as follows: with respect to prediction, 100% of the time in both cases; and with respect to both correctly fitted models and correctly shrunk zero coefficients, 88% of the time in both cases. Results from applying the suggested procedures to real-life data sets are broadly in line with the simulation study results.
(This article belongs to the Section Computational Engineering)
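The core idea of the abstract above — deriving adaptive LASSO weights from an initial QR fit rather than a RIDGE fit, then solving a weighted-L1-penalized quantile regression — can be sketched as follows. This is a minimal illustration on synthetic data with a plain proximal-subgradient solver, not the authors' implementation; the solver, tuning values, and data are assumptions for demonstration only.

```python
import numpy as np

def quantile_regression(X, y, tau, lam=0.0, w=None, lr=0.01, n_iter=5000):
    """Penalized quantile regression: proximal subgradient steps on the
    check loss rho_tau(r) = r * (tau - 1{r < 0}) with a weighted L1
    penalty.  With weights w derived from an initial QR fit this is the
    adaptive LASSO (QR-ALASSO); with lam = 0 it is plain QR."""
    n, p = X.shape
    beta = np.zeros(p)
    if w is None:
        w = np.ones(p)
    for _ in range(n_iter):
        r = y - X @ beta
        # subgradient of the check loss
        g = -X.T @ np.where(r >= 0, tau, tau - 1.0) / n
        z = beta - lr * g
        # soft-threshold step for the weighted L1 penalty
        beta = np.sign(z) * np.maximum(np.abs(z) - lr * lam * w, 0.0)
    return beta

# Synthetic data: sparse coefficients, heavy-tailed (t) errors.
rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 1.5, 0.0, 0.0, 1.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)

tau = 0.5
beta_init = quantile_regression(X, y, tau)        # unpenalized QR fit
weights = 1.0 / (np.abs(beta_init) + 1e-6)        # QR-based adaptive weights
beta_alasso = quantile_regression(X, y, tau, lam=0.05, w=weights)
```

Coefficients with small initial QR estimates receive large weights and are shrunk to zero, while large coefficients are only lightly penalized — the property that distinguishes the adaptive from the non-adaptive penalty.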

23 pages, 633 KB  
Article
Combination of Ensembles of Regularized Regression Models with Resampling-Based Lasso Feature Selection in High Dimensional Data
by Abhijeet R Patil and Sangjin Kim
Mathematics 2020, 8(1), 110; https://doi.org/10.3390/math8010110 - 10 Jan 2020
Cited by 24 | Viewed by 4994
Abstract
In high-dimensional data, the performance of various classifiers depends largely on the selection of important features. Most individual classifiers with existing feature selection (FS) methods do not perform well on highly correlated data. Obtaining important features with an FS method and selecting the best-performing classifier is a challenging task in high-throughput data. In this article, we propose a combination of resampling-based least absolute shrinkage and selection operator (LASSO) feature selection (RLFS) and ensembles of regularized regression models (ERRM) capable of dealing with data with high correlation structures. The ERRM boosts the prediction accuracy using the top-ranked features obtained from RLFS. The RLFS applies the LASSO penalty under the sure independence screening (SIS) condition to select the top k ranked features. The ERRM includes five individual penalty-based classifiers: LASSO, adaptive LASSO (ALASSO), elastic net (ENET), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). It is built on the ideas of bagging and rank aggregation. Through simulation studies and application to smokers' cancer gene expression data, we demonstrate that the proposed combination of ERRM with RLFS achieves superior accuracy and geometric mean.
(This article belongs to the Special Issue Uncertainty Quantification Techniques in Statistics)
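The RLFS step described in the abstract above — refitting the LASSO on resamples and ranking features by how often they are selected — can be sketched as follows. This is a minimal illustration on synthetic regression data with a plain ISTA solver, not the authors' code; the solver, parameter values, and data are assumptions for demonstration only.

```python
import numpy as np

def lasso(X, y, lam, n_iter=1000):
    """LASSO via ISTA: gradient step on the squared loss followed by
    soft-thresholding at lam / L, where L bounds the gradient's
    Lipschitz constant."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # spectral-norm bound
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

def rlfs(X, y, lam=0.2, B=50, k=5, seed=0):
    """Resampling-based LASSO feature selection: fit the LASSO on B
    bootstrap resamples, count how often each feature is selected
    (nonzero), and return the top-k features by selection frequency."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.integers(0, n, n)
        counts += lasso(X[idx], y[idx], lam) != 0
    return np.argsort(-counts)[:k]

# Synthetic data: only the first three of 20 features are informative.
rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, 2.0, 1.5]
y = X @ beta_true + rng.normal(size=n)

selected = rlfs(X, y, lam=0.2, B=50, k=5)
```

Resampling stabilizes the selection: features that survive the penalty across many bootstrap samples are ranked ahead of those picked up only by chance in a single fit.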
