A Two-Stage Feature Selection Approach Based on Artificial Bee Colony and Adaptive LASSO in High-Dimensional Data
Abstract
1. Introduction
2. Related Studies
2.1. Filtering Method, Wrapper Technique, and Embedded Algorithm
2.2. Metaheuristic-Based Feature Selection
2.3. Hybrid Feature-Selection Approaches
3. Materials and Methods
3.1. Linear Regression Model
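For reference, all of the selection methods discussed below operate on the standard multiple linear regression model (cf. Kutner et al.):

$$ y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n), $$

where y is the n × 1 response vector, X is the n × p design matrix, β is the p × 1 vector of coefficients, and ε is the error vector; feature selection amounts to deciding which components of β are nonzero.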
3.2. Stepwise Regression
3.3. Least Angle Regression (LARS)
- Start with r = y and all coefficients set to zero, where r is the residual.
- Identify the feature x_j that has the highest correlation with the residual (equivalently, the feature that forms the least angle with the residual).
- Move the coefficient of that feature in the direction of its correlation with r until some other feature x_k has the same correlation with the residual.
- Move in the direction equiangular to both features.
- Repeat these steps until all the features are included in the model.
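As a minimal illustration of these steps, the sketch below traces the LARS path on simulated data with scikit-learn's lars_path (a hypothetical usage example, not the authors' code; variable names are illustrative):

```python
import numpy as np
from sklearn.linear_model import lars_path

# Simulated data: 3 informative features out of 10
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta_true = np.zeros(10)
beta_true[:3] = 1.5
y = X @ beta_true + 0.5 * rng.standard_normal(50)

# method="lar" runs plain Least Angle Regression (no lasso modification);
# `active` records the order in which features enter the model, and `coefs`
# holds the coefficient values along the equiangular path.
alphas, active, coefs = lars_path(X, y, method="lar")
print("Features entered the model in this order:", active)
```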
3.4. Regularization Methods
3.4.1. Least Absolute Shrinkage and Selection Operator (LASSO)
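For reference, Tibshirani's LASSO estimator solves the L1-penalized least-squares problem

$$ \hat{\beta}^{\mathrm{lasso}} = \arg\min_{\beta} \left\{ \lVert y - X\beta \rVert_2^2 + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert \right\}, $$

where λ ≥ 0 controls the amount of shrinkage; a sufficiently large λ drives some coefficients exactly to zero, which is what makes the LASSO a feature-selection method.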
3.4.2. Adaptive LASSO
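For reference, Zou's adaptive LASSO replaces the uniform penalty with data-driven weights,

$$ \hat{\beta}^{\mathrm{adaptive}} = \arg\min_{\beta} \left\{ \lVert y - X\beta \rVert_2^2 + \lambda \sum_{j=1}^{p} \hat{w}_j \lvert \beta_j \rvert \right\}, \qquad \hat{w}_j = \frac{1}{\lvert \hat{\beta}_j^{\mathrm{init}} \rvert^{\gamma}}, \quad \gamma > 0, $$

where the initial estimate (e.g., OLS or ridge) must be consistent; features with large initial coefficients are penalized less, which underlies the oracle properties shown by Zou.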
3.5. Artificial Bee Colony Optimization (ABC)
3.6. The Proposed ABC-ADLASSO Method for Feature Selection
- Representation of Bees: Each bee in the ABC algorithm represents a candidate solution, encoded as a binary vector that corresponds to a subset of features. For example, given a dataset with 100 features, a bee might be represented as the vector [1, 0, 0, 1, 0, 1, 0, …, 1], where each 1 indicates a selected feature (see the encoding sketch after this list).
- Objective Function: Choosing an appropriate objective function is critical to the accuracy and effectiveness of the optimization. The Extended Bayesian Information Criterion (ExBIC) was used as the fitness function for the proposed feature-selection method. ExBIC is a model-selection criterion developed specifically for high-dimensional data and commonly used for feature selection [36]. It is effective in controlling false positives while balancing model fit against model complexity, and is defined by Equation (12); a standard form is reproduced after this list.
- The Control Parameters for ABC: The following parameters were set by trial and error (a sketch of the resulting search loop follows this list):
  - Number of food sources: SN, equal to the number of features in the data.
  - Maximum number of iterations: 100.
  - Limit: 10, i.e., the number of times a food source may be revisited without improvement before it is abandoned.
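A minimal sketch of the binary bee encoding described above (hypothetical variable names, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)
p = 100                            # number of candidate features
bee = rng.integers(0, 2, size=p)   # binary vector, e.g., [1, 0, 0, 1, ...]
selected = np.flatnonzero(bee)     # indices of the features marked with a 1
print(f"{bee.sum()} of {p} features selected, e.g.:", selected[:8])
```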
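For a candidate model s with ν(s) selected features, the ExBIC of Chen and Chen [36] takes the standard form

$$ \mathrm{ExBIC}_{\gamma}(s) = -2 \log L(\hat{\theta}_s) + \nu(s) \log n + 2\gamma \log \binom{p}{\nu(s)}, \qquad 0 \le \gamma \le 1, $$

where the last term penalizes the size of the model space and is what controls false positives when p is large (the paper's Equation (12) may differ in notation).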
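Putting these pieces together, the following is a hedged sketch of a binary ABC search using these control parameters and a Gaussian-likelihood ExBIC as the fitness function. For brevity it keeps only the employed-bee and scout phases (the onlooker phase with roulette-wheel selection, and the adaptive-LASSO second stage of ABC-ADLASSO, are omitted), so it illustrates the mechanics rather than reproducing the authors' implementation:

```python
import numpy as np
from math import lgamma, log

def log_binom(p, k):
    """Log of the binomial coefficient C(p, k)."""
    return lgamma(p + 1) - lgamma(k + 1) - lgamma(p - k + 1)

def ebic(X, y, mask, gamma=0.5):
    """Gaussian-likelihood ExBIC of the OLS fit on the masked features."""
    k, n = int(mask.sum()), len(y)
    if k == 0 or k >= n:
        return np.inf
    Xs = X[:, mask.astype(bool)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    if rss <= 0:
        return np.inf
    # -2 log L up to constants is n*log(RSS/n) for a Gaussian model
    return n * log(rss / n) + k * log(n) + 2 * gamma * log_binom(X.shape[1], k)

def abc_select(X, y, max_iter=100, limit=10, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    SN = p                                    # food sources = number of features
    food = rng.integers(0, 2, size=(SN, p))   # binary bees: 1 = feature selected
    fit = np.array([ebic(X, y, b) for b in food])
    trials = np.zeros(SN, dtype=int)
    for _ in range(max_iter):
        # Employed-bee phase: perturb each source by flipping one random bit
        for i in range(SN):
            cand = food[i].copy()
            cand[rng.integers(p)] ^= 1
            f = ebic(X, y, cand)
            if f < fit[i]:
                food[i], fit[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources stuck for `limit` rounds
        for i in np.where(trials >= limit)[0]:
            food[i] = rng.integers(0, 2, size=p)
            fit[i] = ebic(X, y, food[i])
            trials[i] = 0
    best = int(np.argmin(fit))
    return food[best].astype(bool), float(fit[best])
```

In the two-stage scheme, the boolean mask returned by abc_select would then be used to restrict the design matrix before fitting the adaptive LASSO on the surviving features.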
4. Simulation Study
- Scenario 1: p = 60 and σ = 1.5. The rows of the data matrix X are independent. Within each row, the first 10 features and the remaining 50 features form two mutually independent blocks. The pairwise correlation between the r-th and d-th components is governed by ρ = 0.5 for r, d = 1, …, 10, and likewise by ρ = 0.5 for r, d = 11, …, 60.
- Scenario 2: Identical to Scenario 1, except that ρ = 0.90.
- Scenario 3: Identical to Scenario 1, except that p = 100.
- Scenario 4: Identical to Scenario 2, except that p = 100.
- Scenario 5: p = 60 and σ = 1.5. The features are generated in two correlated groups, one for j = 1, 2, …, 5 and one for j = 6, 7, …, 10, inducing a grouping effect among the first 10 features. The true coefficients are 1.5 for the first 10 components and 0 for the remaining components.
- Scenario 6: Identical to Scenario 5, except that p = 100.
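For concreteness, a sketch of a Scenario 1-style data generator follows. It assumes equicorrelated Gaussian blocks and true coefficients of 1.5 on the first 10 features; both are assumptions, since the exact generating equations for this scenario are not fully specified above:

```python
import numpy as np

def make_scenario1(n=50, p=60, rho=0.5, sigma=1.5, seed=0):
    rng = np.random.default_rng(seed)

    def correlated_block(size):
        # Equicorrelated block: corr(x_r, x_d) = rho for r != d (assumed form)
        cov = np.full((size, size), rho)
        np.fill_diagonal(cov, 1.0)
        return rng.multivariate_normal(np.zeros(size), cov, size=n)

    # Features 1-10 and 11-p form two mutually independent blocks
    X = np.hstack([correlated_block(10), correlated_block(p - 10)])
    beta = np.zeros(p)
    beta[:10] = 1.5            # assumed true signal, as stated for Scenario 5
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta
```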
5. Simulation Results
- Real dataset application
- Real dataset application results
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Sancar, N.; Onakpojeruo, E.P.; Inan, D.; Uzun, O.D. Adaptive Elastic Net Based on Modified PSO for Variable Selection in Cox Model with High-Dimensional Data: A Comprehensive Simulation Study. IEEE Access 2023, 11, 127302–127316. [Google Scholar] [CrossRef]
- Jain, R.; Xu, W. HDSI: High dimensional selection with interactions algorithm on feature selection and testing. PLoS ONE 2021, 16, e0246159. [Google Scholar] [CrossRef] [PubMed]
- Amini, A.A.; Wainwright, M.J. High-dimensional analysis of semidefinite relaxations for sparse principal components. In Proceedings of the IEEE International Symposium on Information Theory ISIT 2008, Toronto, ON, Canada, 6–11 July 2008; pp. 2454–2458. [Google Scholar]
- Holtzman, G.; Soffer, A.; Vilenchik, D. A greedy anytime algorithm for sparse PCA. In Proceedings of the 33rd Conference on Learning Theory (COLT 2020), Graz, Austria, 9–12 July 2020; pp. 1939–1956. [Google Scholar]
- Rouhi, A.; Nezamabadi-Pour, H. Feature Selection in High-Dimensional Data. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; Volume 1123, pp. 85–128. Available online: https://link.springer.com/chapter/10.1007/978-3-030-34094-0_5 (accessed on 7 October 2024).
- Pudjihartono, N.; Fadason, T.; Kempa-Liehr, A.W.; O’Sullivan, J.M. A Review of Feature Selection Methods for Machine Learning-Based Disease Risk Prediction. Front. Bioinform. 2022, 2, 927312. [CrossRef] [PubMed]
- Curreri, F.; Fiumara, G.; Xibilia, M.G. Input Selection Methods for Soft Sensor Design: A Survey. Future Internet 2020, 12, 97. Available online: https://www.mdpi.com/1999-5903/12/6/97/htm (accessed on 25 October 2024). [CrossRef]
- Maseno, E.M.; Wang, Z. Hybrid Wrapper Feature Selection Method Based on Genetic Algorithm and Extreme Learning Machine for Intrusion Detection. J. Big Data 2024, 11, 24. [Google Scholar] [CrossRef]
- Bohrer, J.S.; Dorn, M. Enhancing Classification with Hybrid Feature Selection: A Multi-Objective Genetic Algorithm for High-Dimensional Data. Expert Syst. Appl. 2024, 255, 124518. [Google Scholar] [CrossRef]
- Owoc, M.L. Usability of Honeybee Algorithms in Practice. In Towards Nature-Inspired Sustainable Development; IFIP Advances in Information and Communication Technology; Springer: Cham, Switzerland, 2024; Volume 693, pp. 161–176. Available online: https://link.springer.com/chapter/10.1007/978-3-031-61069-1_12 (accessed on 15 October 2024).
- Stamadianos, T.; Taxidou, A.; Marinaki, M.; Marinakis, Y. Swarm Intelligence and Nature-Inspired Algorithms for Solving Vehicle Routing Problems: A Survey. Oper. Res. 2024, 24, 47. Available online: https://link.springer.com/article/10.1007/s12351-024-00862-5 (accessed on 15 October 2024). [CrossRef]
- Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report TR06; Computer Engineering Department, Engineering Faculty, Erciyes University: Kayseri, Türkiye, 2005. [Google Scholar]
- Karaboga, D.; Kaya, E. An Adaptive and Hybrid Artificial Bee Colony Algorithm (aABC) for ANFIS Training. Appl. Soft Comput. 2016, 49, 423–436. [Google Scholar] [CrossRef]
- Nozohour-Leilabady, B.; Fazelabdolabadi, B. On the Application of Artificial Bee Colony (ABC) Algorithm for Optimization of Well Placements in Fractured Reservoirs: Efficiency Comparison with the Particle Swarm Optimization (PSO) Methodology. Petroleum 2016, 2, 79–89. [Google Scholar] [CrossRef]
- Yarat, S.; Senan, S.; Orman, Z. A Comparative Study on PSO with Other Metaheuristic Methods. In International Series in Operations Research and Management Science; Springer: Cham, Switzerland, 2021; Volume 306, pp. 49–72. Available online: https://link.springer.com/chapter/10.1007/978-3-030-70281-6_4 (accessed on 15 October 2024).
- Theng, D.; Bhoyar, K.K. Feature Selection Techniques for Machine Learning: A Survey of More Than Two Decades of Research. Knowl. Inf. Syst. 2024, 66, 1575–1637. Available online: https://link.springer.com/article/10.1007/s10115-023-02010-5 (accessed on 7 October 2024). [CrossRef]
- Liu, X.Y.; Liang, Y.; Wang, S.; Yang, Z.Y.; Ye, H.S. A Hybrid Genetic Algorithm with Wrapper-Embedded Approaches for Feature Selection. IEEE Access 2018, 6, 22863–22874. [Google Scholar] [CrossRef]
- Guyon, I.; Elisseeff, A. An Introduction to Variable and Feature Selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
- Yerlikaya-Özkurt, F.; Taylan, P. Enhancing Classification Modeling Through Feature Selection and Smoothness: A Conic-Fused Lasso Approach Integrated with Mean Shift Outlier Modelling. J. Dyn. Games 2024, 12, 1–23. [CrossRef]
- Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
- Huang, J.; Ma, S.; Zhang, C.H. Adaptive Lasso for Sparse High-Dimensional Regression Models. Stat. Sin. 2008, 18, 1603–1618. [Google Scholar]
- Zou, H. The Adaptive Lasso and Its Oracle Properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429. Available online: https://www.tandfonline.com/doi/abs/10.1198/016214506000000735 (accessed on 25 October 2024). [CrossRef]
- Zhang, Z.; Tong, T.; Fang, Y.; Zheng, J.; Zhang, X.; Niu, C.; Li, J.; Zhang, X.; Xue, D. Genome-Wide Identification of Barley ABC Genes and Their Expression in Response to Abiotic Stress Treatment. Plants 2020, 9, 1281. [Google Scholar] [CrossRef]
- Garg, S.; Kaur, K.; Batra, S.; Aujla, G.S.; Morgan, G.; Kumar, N.; Zomaya, A.Y.; Ranjan, R. En-ABC: An Ensemble Artificial Bee Colony Based Anomaly Detection Scheme for Cloud Environment. J. Parallel Distrib. Comput. 2020, 135, 219–233. [Google Scholar] [CrossRef]
- Hancer, E.; Xue, B.; Karaboga, D.; Zhang, M. A Binary ABC Algorithm Based on Advanced Similarity Scheme for Feature Selection. Appl. Soft Comput. 2015, 36, 334–348. [Google Scholar] [CrossRef]
- Chamchuen, S.; Siritaratiwat, A.; Fuangfoo, P.; Suthisopapan, P.; Khunkitti, P. High-Accuracy Power Quality Disturbance Classification Using the Adaptive ABC-PSO as Optimal Feature Selection Algorithm. Energies 2021, 14, 1238. [Google Scholar] [CrossRef]
- Guo, Y.; Zhang, C. A Hybrid Artificial Bee Colony Algorithm for Satisfiability Problems Based on Tabu Search. In Proceedings of the 3rd IEEE International Conference on Computer and Communications (ICCC 2017), Chengdu, China, 13–16 October 2017; IEEE: New York, NY, USA, 2018; pp. 2226–2230. [Google Scholar]
- Gu, T.; Chen, H.; Chang, L.; Li, L. Intrusion Detection System Based on Improved ABC Algorithm with Tabu Search. IEEJ Trans. Electr. Electron. Eng. 2019, 14, 1652–1660. Available online: https://onlinelibrary.wiley.com/doi/full/10.1002/tee.22987 (accessed on 15 October 2024). [CrossRef]
- Kiliçarslan, S.; Dönmez, E. Improved Multi-Layer Hybrid Adaptive Particle Swarm Optimization Based Artificial Bee Colony for Optimizing Feature Selection and Classification of Microarray Data. Multimed. Tools Appl. 2024, 83, 67259–67281. Available online: https://link.springer.com/article/10.1007/s11042-023-17234-4 (accessed on 7 October 2024). [CrossRef]
- Kumar, H. Decision Making for Hotel Selection Using Rough Set Theory: A Case Study of Indian Hotels. Int. J. Appl. Eng. Res. 2018, 13, 3988–3998. [Google Scholar]
- Kutner, M.H.; Nachtsheim, C.J.; Neter, J.; Li, W. Applied Linear Statistical Models, 5th ed.; McGraw-Hill: New York, NY, USA, 2005. [Google Scholar]
- Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least Angle Regression. Ann. Statist. 2004, 32, 407–499. [Google Scholar] [CrossRef]
- Sirimongkolkasem, T.; Drikvandi, R. On Regularisation Methods for Analysis of High Dimensional Data. Ann. Data Sci. 2019, 6, 737–763. Available online: https://link.springer.com/article/10.1007/s40745-019-00209-4 (accessed on 26 November 2024). [CrossRef]
- Akay, B.; Karaboga, D.; Gorkemli, B.; Kaya, E. A Survey on the Artificial Bee Colony Algorithm Variants for Binary, Integer, and Mixed Integer Programming Problems. Appl. Soft Comput. 2021, 106, 107351. [Google Scholar] [CrossRef]
- Bansal, J.C.; Joshi, S.K.; Sharma, H. Modified Global Best Artificial Bee Colony for Constrained Optimization Problems. Comput. Electr. Eng. 2018, 67, 365–382. [Google Scholar] [CrossRef]
- Chen, J.; Chen, Z. Extended Bayesian Information Criteria for Model Selection with Large Model Spaces. Biometrika 2008, 95, 759–771. [Google Scholar] [CrossRef]
- Communities and Crime—UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/dataset/183/communities+and+crime (accessed on 24 October 2024).
- Neshat, M.; Alexander, B.; Sergiienko, N.Y.; Wagner, M. Optimization of Large Wave Farms Using a Multi-Strategy Evolutionary Framework. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference, Cancún, Mexico, 8–12 July 2020. [Google Scholar]
- van der Putten, P. Insurance Company Benchmark (COIL 2000) [Dataset]; UCI Machine Learning Repository. [CrossRef]
- Federal Reserve Bank of St. Louis. Federal Reserve Economic Data (FRED). Available online: https://fred.stlouisfed.org (accessed on 27 November 2024).
| Scenario | Method | Sensitivity | Specificity | Accuracy |
|---|---|---|---|---|
| Scenario 1 (n = 50, p = 60, ρ = 0.50) | ABC-ADLASSO | 0.879 | 0.958 | 0.914 |
| | AD_LASSO | 0.715 | 0.892 | 0.817 |
| | LASSO | 0.686 | 0.887 | 0.799 |
| | STEPWISE | 0.652 | 0.857 | 0.795 |
| | LARS | 0.633 | 0.861 | 0.789 |
| Scenario 2 (n = 50, p = 60, ρ = 0.90) | ABC-ADLASSO | 0.911 | 0.961 | 0.924 |
| | AD_LASSO | 0.742 | 0.919 | 0.846 |
| | LASSO | 0.674 | 0.893 | 0.802 |
| | STEPWISE | 0.564 | 0.731 | 0.633 |
| | LARS | 0.643 | 0.890 | 0.786 |
| Scenario 3 (n = 50, p = 100, ρ = 0.50) | ABC-ADLASSO | 0.928 | 0.973 | 0.938 |
| | AD_LASSO | 0.784 | 0.925 | 0.909 |
| | LASSO | 0.627 | 0.849 | 0.742 |
| | STEPWISE | 0.577 | 0.721 | 0.609 |
| | LARS | 0.615 | 0.856 | 0.743 |
| Scenario 4 (n = 50, p = 100, ρ = 0.90) | ABC-ADLASSO | 0.905 | 0.961 | 0.922 |
| | AD_LASSO | 0.734 | 0.905 | 0.825 |
| | LASSO | 0.600 | 0.868 | 0.734 |
| | STEPWISE | 0.482 | 0.659 | 0.527 |
| | LARS | 0.617 | 0.872 | 0.755 |
| Scenario 5 (n = 50, p = 60, grouping effect) | ABC-ADLASSO | 0.923 | 0.974 | 0.931 |
| | AD_LASSO | 0.739 | 0.916 | 0.859 |
| | LASSO | 0.573 | 0.831 | 0.744 |
| | STEPWISE | 0.432 | 0.630 | 0.508 |
| | LARS | 0.548 | 0.862 | 0.783 |
| Scenario 6 (n = 50, p = 100, grouping effect) | ABC-ADLASSO | 0.935 | 0.984 | 0.943 |
| | AD_LASSO | 0.793 | 0.933 | 0.850 |
| | LASSO | 0.552 | 0.824 | 0.735 |
| | STEPWISE | 0.337 | 0.442 | 0.411 |
| | LARS | 0.524 | 0.846 | 0.731 |
| Communities and Crime Dataset | Metric | Mean | Standard Deviation | Median | Min | Max | IQR (25th–75th Percentile) |
|---|---|---|---|---|---|---|---|
| ABC-ADLASSO | Adjusted R² | 0.769 | 0.023 | 0.764 | 0.739 | 0.796 | 0.750–0.791 |
| | RMSE | 0.100 | 0.014 | 0.091 | 0.073 | 0.116 | 0.084–0.113 |
| | MAE | 0.076 | 0.013 | 0.080 | 0.057 | 0.095 | 0.065–0.084 |
| AD_LASSO | Adjusted R² | 0.575 | 0.047 | 0.580 | 0.517 | 0.643 | 0.532–0.607 |
| | RMSE | 0.202 | 0.021 | 0.199 | 0.175 | 0.247 | 0.186–0.211 |
| | MAE | 0.161 | 0.022 | 0.166 | 0.123 | 0.190 | 0.146–0.174 |
| LASSO | Adjusted R² | 0.496 | 0.014 | 0.497 | 0.472 | 0.515 | 0.486–0.507 |
| | RMSE | 0.239 | 0.007 | 0.243 | 0.232 | 0.268 | 0.225–0.249 |
| | MAE | 0.192 | 0.006 | 0.190 | 0.184 | 0.202 | 0.187–0.197 |
| STEPWISE | Adjusted R² | 0.320 | 0.006 | 0.318 | 0.311 | 0.330 | 0.316–0.324 |
| | RMSE | 0.321 | 0.005 | 0.317 | 0.312 | 0.330 | 0.318–0.324 |
| | MAE | 0.227 | 0.002 | 0.227 | 0.224 | 0.230 | 0.225–0.228 |
| LARS | Adjusted R² | 0.460 | 0.007 | 0.461 | 0.448 | 0.470 | 0.456–0.464 |
| | RMSE | 0.249 | 0.006 | 0.247 | 0.240 | 0.260 | 0.245–0.252 |
| | MAE | 0.206 | 0.005 | 0.210 | 0.198 | 0.224 | 0.203–0.217 |
| Large-Scale Wave Energy Farm Dataset | Metric | Mean | Standard Deviation | Median | Min | Max | IQR (25th–75th Percentile) |
|---|---|---|---|---|---|---|---|
| ABC-ADLASSO | Adjusted R² | 0.695 | 0.010 | 0.690 | 0.684 | 0.708 | 0.689–0.700 |
| | RMSE | 0.110 | 0.002 | 0.113 | 0.108 | 0.119 | 0.109–0.115 |
| | MAE | 0.084 | 0.002 | 0.084 | 0.082 | 0.087 | 0.083–0.085 |
| AD_LASSO | Adjusted R² | 0.616 | 0.006 | 0.614 | 0.608 | 0.623 | 0.610–0.619 |
| | RMSE | 0.187 | 0.005 | 0.187 | 0.180 | 0.192 | 0.185–0.190 |
| | MAE | 0.147 | 0.005 | 0.147 | 0.140 | 0.152 | 0.145–0.150 |
| LASSO | Adjusted R² | 0.527 | 0.008 | 0.529 | 0.517 | 0.536 | 0.521–0.530 |
| | RMSE | 0.279 | 0.006 | 0.274 | 0.270 | 0.285 | 0.277–0.282 |
| | MAE | 0.205 | 0.007 | 0.204 | 0.197 | 0.214 | 0.200–0.209 |
| STEPWISE | Adjusted R² | 0.369 | 0.006 | 0.370 | 0.361 | 0.379 | 0.365–0.372 |
| | RMSE | 0.523 | 0.005 | 0.520 | 0.508 | 0.535 | 0.522–0.530 |
| | MAE | 0.447 | 0.008 | 0.445 | 0.438 | 0.460 | 0.442–0.449 |
| LARS | Adjusted R² | 0.425 | 0.005 | 0.428 | 0.418 | 0.432 | 0.421–0.429 |
| | RMSE | 0.355 | 0.006 | 0.356 | 0.347 | 0.364 | 0.349–0.360 |
| | MAE | 0.273 | 0.007 | 0.272 | 0.265 | 0.283 | 0.267–0.278 |
| Insurance Company Benchmark (COIL 2000) Dataset | Metric | Mean | Standard Deviation | Median | Min | Max | IQR (25th–75th Percentile) |
|---|---|---|---|---|---|---|---|
| ABC-ADLASSO | Adjusted R² | 0.636 | 0.012 | 0.628 | 0.620 | 0.653 | 0.624–0.647 |
| | RMSE | 0.113 | 0.002 | 0.113 | 0.110 | 0.118 | 0.112–0.115 |
| | MAE | 0.087 | 0.002 | 0.087 | 0.084 | 0.092 | 0.085–0.089 |
| AD_LASSO | Adjusted R² | 0.611 | 0.005 | 0.612 | 0.600 | 0.619 | 0.608–0.615 |
| | RMSE | 0.170 | 0.003 | 0.171 | 0.165 | 0.176 | 0.168–0.172 |
| | MAE | 0.140 | 0.003 | 0.140 | 0.135 | 0.146 | 0.138–0.142 |
| LASSO | Adjusted R² | 0.566 | 0.004 | 0.568 | 0.559 | 0.574 | 0.563–0.570 |
| | RMSE | 0.228 | 0.003 | 0.228 | 0.223 | 0.235 | 0.225–0.230 |
| | MAE | 0.183 | 0.003 | 0.183 | 0.180 | 0.191 | 0.181–0.186 |
| STEPWISE | Adjusted R² | 0.476 | 0.004 | 0.477 | 0.470 | 0.484 | 0.473–0.479 |
| | RMSE | 0.522 | 0.003 | 0.523 | 0.514 | 0.527 | 0.520–0.525 |
| | MAE | 0.428 | 0.003 | 0.428 | 0.421 | 0.432 | 0.426–0.430 |
| LARS | Adjusted R² | 0.514 | 0.005 | 0.514 | 0.505 | 0.522 | 0.510–0.518 |
| | RMSE | 0.468 | 0.004 | 0.470 | 0.462 | 0.475 | 0.466–0.471 |
| | MAE | 0.378 | 0.004 | 0.379 | 0.372 | 0.385 | 0.376–0.381 |
| Federal Reserve Economic Data (FRED) | Metric | Mean | Standard Deviation | Median | Min | Max | IQR (25th–75th Percentile) |
|---|---|---|---|---|---|---|---|
| ABC-ADLASSO | Adjusted R² | 0.758 | 0.013 | 0.754 | 0.736 | 0.776 | 0.746–0.766 |
| | RMSE | 0.112 | 0.002 | 0.117 | 0.093 | 0.133 | 0.103–0.123 |
| | MAE | 0.089 | 0.002 | 0.092 | 0.067 | 0.107 | 0.077–0.097 |
| AD_LASSO | Adjusted R² | 0.703 | 0.009 | 0.700 | 0.680 | 0.720 | 0.690–0.710 |
| | RMSE | 0.169 | 0.003 | 0.168 | 0.148 | 0.188 | 0.158–0.178 |
| | MAE | 0.139 | 0.003 | 0.140 | 0.120 | 0.160 | 0.130–0.150 |
| LASSO | Adjusted R² | 0.625 | 0.017 | 0.620 | 0.603 | 0.643 | 0.613–0.633 |
| | RMSE | 0.228 | 0.006 | 0.231 | 0.210 | 0.250 | 0.220–0.240 |
| | MAE | 0.188 | 0.006 | 0.185 | 0.165 | 0.205 | 0.175–0.195 |
| STEPWISE | Adjusted R² | 0.483 | 0.008 | 0.487 | 0.465 | 0.505 | 0.475–0.495 |
| | RMSE | 0.414 | 0.006 | 0.410 | 0.392 | 0.434 | 0.402–0.422 |
| | MAE | 0.325 | 0.007 | 0.324 | 0.303 | 0.345 | 0.313–0.333 |
| LARS | Adjusted R² | 0.555 | 0.016 | 0.552 | 0.532 | 0.572 | 0.542–0.562 |
| | RMSE | 0.416 | 0.005 | 0.418 | 0.398 | 0.438 | 0.408–0.428 |
| | MAE | 0.317 | 0.005 | 0.319 | 0.299 | 0.339 | 0.309–0.329 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).