Metaheuristic-Driven Feature Selection for Human Activity Recognition on KU-HAR Dataset Using XGBoost Classifier
Abstract
1. Introduction
- This research introduces a simple and effective feature engineering strategy for preprocessing the KU-HAR dataset;
- It employs two metaheuristic algorithms (MHAs), Golden Jackal Optimization (GJO) and War Strategy Optimization (WARSO), for feature selection and dimensionality reduction;
- It demonstrates the effectiveness of a traditional classifier, XGBoost (XGB), in achieving superior HAR performance in terms of accuracy, F-score, precision, recall, and Area Under the Curve (AUC);
- It applies Shapley Additive Explanations (SHAP), an explainable AI technique, to interpret model predictions and assess feature importance, including an in-depth analysis of misclassifications.
2. Materials and Methods
2.1. Dataset
2.2. Data Preprocessing
2.3. The Extreme Gradient Boosting Classifier
2.4. The Metaheuristic Algorithms
2.4.1. Golden Jackal Optimization
2.4.2. War Strategy Optimization
2.4.3. Justification of the Choice of the Classifier and Optimizers
2.4.4. Optimizer Problem Development
Algorithm 1: MHA-based optimization for each training fold.
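The pseudocode of Algorithm 1 is not reproduced above, so the following minimal Python sketch illustrates the per-fold search it describes: each candidate solution encodes four XGB hyperparameters plus a 48-bit feature mask, and its fitness combines validation error with a feature-count penalty. The fitness weighting `alpha`, the population size, the iteration budget, and the best-guided update rule are illustrative assumptions; in the paper, the GJO and WARSO position-update equations drive the search instead of the placeholder shown here.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

N_FEATURES = 48  # statistical features extracted from KU-HAR (Table 5)

def decode(vector):
    """Split a flat [0, 1] solution vector into XGB hyperparameters and a feature mask."""
    params = dict(
        n_estimators=int(round(100 + vector[0] * 200)),   # 100 to 300
        learning_rate=0.001 + vector[1] * 0.199,          # 0.001 to 0.2
        max_depth=int(round(3 + vector[2] * 4)),          # 3 to 7
        min_child_weight=int(round(1 + vector[3] * 9)),   # 1 to 10
    )
    mask = vector[4:] > 0.5                               # 48 binary feature flags
    return params, mask

def fitness(vector, X_train, y_train, alpha=0.99):
    """Illustrative fitness (assumed): weighted validation error plus feature-ratio penalty."""
    params, mask = decode(vector)
    if not mask.any():
        return 1.0  # reject empty feature subsets
    clf = XGBClassifier(**params, n_jobs=-1, random_state=0)
    acc = cross_val_score(clf, X_train[:, mask], y_train, cv=3).mean()
    return alpha * (1 - acc) + (1 - alpha) * mask.sum() / N_FEATURES

def optimize_fold(X_train, y_train, pop_size=10, iterations=30, seed=0):
    """Generic population-based search standing in for the GJO/WARSO update rules."""
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, 4 + N_FEATURES))
    scores = np.array([fitness(v, X_train, y_train) for v in pop])
    best, best_score = pop[scores.argmin()].copy(), scores.min()
    for _ in range(iterations):
        # Placeholder update: drift toward the current best with random perturbation.
        pop = np.clip(pop + 0.2 * (best - pop) + rng.normal(0, 0.1, pop.shape), 0, 1)
        scores = np.array([fitness(v, X_train, y_train) for v in pop])
        if scores.min() < best_score:
            best, best_score = pop[scores.argmin()].copy(), scores.min()
    return decode(best)  # best hyperparameters and selected-feature mask for this fold
```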
2.4.5. Performance Evaluation
2.4.6. SHAP Explanation
3. Results
3.1. Optimization Outcomes
3.2. Classification Outcomes
3.3. Time Complexity
3.4. SHAP Analysis
4. Discussion
- Question: How does statistical feature engineering of time-series sensor data affect model efficiency and generalization in human activity recognition? Answer: Statistical feature engineering extracts key characteristics from raw sensor data, such as the mean, median, RMS, SD, and MAD (see Table 4), which reduces data dimensionality and computational cost. This simplification helps the model train faster and generalize better, improving the efficiency and performance of human activity recognition.
- Question: How effectively do metaheuristic algorithms enhance feature selection for human activity recognition using the KU-HAR dataset? Answer: The metaheuristic algorithms (GJO and WARSO) effectively reduced feature dimensionality while simultaneously tuning the XGBoost hyperparameters, yielding a more compact and accurate HAR model.
- Question: Can an optimized XGB classifier deliver competitive performance in human activity recognition? Answer: Yes. The optimized XGB models (GJO-XGB and WARSO-XGB) achieved competitive performance: WARSO-XGB reached 94.02% accuracy, outperforming several traditional models and approaching deep learning benchmarks, while GJO-XGB achieved 93.55% accuracy using only the 23 features selected by the MHA. Changing the random seed to 20 further increased performance to 93.80% (GJO-XGB) and 94.19% (WARSO-XGB).
- Question: To what extent can SHAP interpret model predictions and analyze misclassifications in human activity recognition? Answer: SHAP effectively interprets model predictions by identifying influential features through bar plots and explaining individual decisions through decision and waterfall plots. It also provides insight into both correct and incorrect classifications (a minimal usage sketch follows this list).
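As referenced above, a minimal sketch of the SHAP workflow. It assumes a fitted multiclass XGBClassifier `model`, a pandas DataFrame `X_test` carrying the Table 5 feature names, and a recent version of the `shap` package; the class index 11 ("Walk") is chosen arbitrarily for illustration.

```python
import shap

# TreeExplainer computes exact SHAP values efficiently for tree ensembles like XGBoost.
explainer = shap.TreeExplainer(model)
explanation = explainer(X_test)  # Explanation of shape (samples, features, classes)

# Global importance for class 11 ("Walk"): mean |SHAP value| per feature, as a bar plot.
shap.plots.bar(explanation[:, :, 11])

# Local explanation: a waterfall plot of the first test sample for the same class.
# (Decision plots, via shap.decision_plot, aggregate many samples in a similar way.)
shap.plots.waterfall(explanation[0, :, 11])
```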
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Akter, M.; Ansary, S.; Khan, M.A.M.; Kim, D. Human activity recognition using attention-mechanism-based deep learning feature combination. Sensors 2023, 23, 5715. [Google Scholar] [CrossRef] [PubMed]
- Pavliuk, O.; Mishchuk, M.; Strauss, C. Transfer learning approach for human activity recognition based on continuous wavelet transform. Algorithms 2023, 16, 77. [Google Scholar] [CrossRef]
- Martínez-Villaseñor, L.; Ponce, H. A concise review on sensor signal acquisition and transformation applied to human activity recognition and human–robot interaction. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719853987. [Google Scholar] [CrossRef]
- Reyes-Ortiz, J.; Anguita, D.; Ghio, A.; Oneto, L.; Parra, X. Human Activity Recognition Using Smartphones [Dataset]; UCI Machine Learning Repository: Irvine, CA, USA, 2013. [Google Scholar] [CrossRef]
- Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity recognition using cell phone accelerometers. ACM SIGKDD Explor. Newsl. 2011, 12, 74–82. [Google Scholar] [CrossRef]
- Sztyler, T.; Stuckenschmidt, H. On-body localization of wearable devices: An investigation of position-aware activity recognition. In Proceedings of the 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom), Sydney, Australia, 14–19 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–9. [Google Scholar]
- Nahid, A.A.; Sikder, N.; Rafi, I. KU-HAR: An Open Dataset for Human Activity Recognition (Version 5) [Data Set]; Mendeley Data: London, UK, 2021. [Google Scholar] [CrossRef]
- AlMuhaideb, S.; AlAbdulkarim, L.; AlShahrani, D.M.; AlDhubaib, H.; AlSadoun, D.E. Achieving more with less: A lightweight deep learning solution for advanced human activity recognition (har). Sensors 2024, 24, 5436. [Google Scholar] [CrossRef] [PubMed]
- Sikder, N.; Nahid, A.A. KU-HAR: An open dataset for heterogeneous human activity recognition. Pattern Recognit. Lett. 2021, 146, 46–54. [Google Scholar] [CrossRef]
- Al-Qaness, M.A.; Helmi, A.M.; Dahou, A.; Elaziz, M.A. The applications of metaheuristics for human activity recognition and fall detection using wearable sensors: A comprehensive analysis. Biosensors 2022, 12, 821. [Google Scholar] [CrossRef] [PubMed]
- Guo, X.; Kim, Y.; Ning, X.; Min, S.D. Enhancing the Transformer Model with a Convolutional Feature Extractor Block and Vector-Based Relative Position Embedding for Human Activity Recognition. Sensors 2025, 25, 301. [Google Scholar] [CrossRef]
- Kumar, P.; Suresh, S. Deep-HAR: An ensemble deep learning model for recognizing the simple, complex, and heterogeneous human activities. Multimed. Tools Appl. 2023, 82, 30435–30462. [Google Scholar] [CrossRef]
- Sarkar, D.; Bali, R.; Sharma, T.; Sarkar, D.; Bali, R.; Sharma, T. Feature engineering and selection. In Practical Machine Learning with Python: A Problem-Solver’s Guide to Building Real-World Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2018; pp. 177–253. [Google Scholar]
- Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
- Sarker, P.; Ksibi, A.; Jamjoom, M.M.; Choi, K.; Nahid, A.A.; Samad, M.A. Breast cancer prediction with feature-selected XGB classifier, optimized by metaheuristic algorithms. J. Big Data 2025, 12, 78. [Google Scholar] [CrossRef]
- Hamadneh, T.; Batiha, B.; Al-Refai, O.; Ibraheem, I.K.; Smerat, A.; Montazeri, Z.; Dehghani, M.; Aribowo, W.; Malik Madhloom AL-Salih, A.A.; Ahmed, M.A. Program Manager Optimization Algorithm: A New Method for Engineering Applications. Int. J. Intell. Eng. Syst. 2025, 18, 746. [Google Scholar]
- Hamadneh, T.; Batiha, B.; Gharib, G.M.; Montazeri, Z.; Dehghani, M.; Aribowo, W.; Majeed, M.A.; Ahmed, M.A.; Jawad, R.K.; Ibraheem, I.K.; et al. Makeup artist optimization algorithm: A novel approach for engineering design challenges. Int. J. Intell. Eng. Syst. 2025, 18, 484–493. [Google Scholar] [CrossRef]
- Hamadneh, T.; Batiha, B.; Gharib, G.M.; Montazeri, Z.; Dehghani, M.; Aribowo, W.; Zalzala, A.M.; Jawad, R.K.; Ahmed, M.A.; Ibraheem, I.K.; et al. Perfumer optimization algorithm: A novel human-inspired metaheuristic for solving optimization tasks. Int. J. Intell. Eng. Syst. 2025, 18, 633–643. [Google Scholar] [CrossRef]
- Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
- Ayyarao, T.S.; Ramakrishna, N.S.S.; Elavarasan, R.M.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War strategy optimization algorithm: A new effective metaheuristic algorithm for global optimization. IEEE Access 2022, 10, 25073–25105. [Google Scholar] [CrossRef]
- Bentéjac, C.; Csörgő, A.; Martínez-Muñoz, G. A comparative analysis of gradient boosting algorithms. Artif. Intell. Rev. 2021, 54, 1937–1967. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4765–4774. [Google Scholar]
- Shapley, L.S. A value for n-person games. In Contributions to the Theory of Games II; Princeton University Press: Princeton, NJ, USA, 1953. [Google Scholar]
Type | SL | Author | Model | Accuracy (%) |
---|---|---|---|---|
Traditional Classifier | 01 | Sikder et al. [9] | RF | 89.67 |
Traditional Classifier | 02 | Al-Qaness et al. [10] | AO + RF | 88.53 |
Deep Learning Model | 03 | Akter et al. [1] | AM-DLFC & LGBM | 96.86 |
Deep Learning Model | 04 | Guo et al. [11] | vRPE + CFEB | 96.80 |
Deep Learning Model | 05 | Pavliuk et al. [2] | DenseNet121 + MWT | 97.48 |
Deep Learning Model | 06 | Kumar et al. [12] | Deep-HAR model | 99.98 |
Columns | Description | Columns | Description |
---|---|---|---|
1–300 | Accelerometer X-axis readings | 901–1200 | Gyroscope X-axis readings |
301–600 | Accelerometer Y-axis readings | 1201–1500 | Gyroscope Y-axis readings |
601–900 | Accelerometer Z-axis readings | 1501–1800 | Gyroscope Z-axis readings |
1801 | Activity class ID (0–17) | | |
1802 | Channel length | | |
1803 | Subsample serial number | | |
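As a rough illustration of the column layout above, the sketch below loads the subsampled KU-HAR CSV and reshapes each row into its six 300-sample channels. The file name is an assumption; substitute the name of the downloaded Mendeley Data file.

```python
import numpy as np
import pandas as pd

# Column layout from the table above: six contiguous channels of 300 readings each,
# followed by the activity class ID, channel length, and subsample serial number.
data = pd.read_csv("KU-HAR_time_domain_subsamples.csv", header=None)  # file name assumed

signals = data.iloc[:, :1800].to_numpy()             # raw accelerometer/gyroscope readings
labels = data.iloc[:, 1800].astype(int).to_numpy()   # activity class IDs (0-17)

# Reshape each row into (channel, sample): acc_x, acc_y, acc_z, gyro_x, gyro_y, gyro_z.
channels = signals.reshape(-1, 6, 300)
print(channels.shape, labels.shape)  # (n_subsamples, 6, 300) (n_subsamples,)
```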
Class | ID | Description | Duration | Samples | Subsamples |
---|---|---|---|---|---|
Stand | 0 | Standing still | 1 min | 91 | 1886 |
Sit | 1 | Sitting still | 1 min | 90 | 1874 |
Talk–sit | 2 | Talking while sitting | 1 min | 86 | 1797 |
Talk–stand | 3 | Talking while standing | 1 min | 88 | 1866 |
Stand–sit * | 4 | Sit-to-stand transitions | 5 times | 339 | 2178 |
Lay | 5 | Lying down | 1 min | 87 | 1813 |
Lay–stand * | 6 | Lie-to-stand transitions | 5 times | 148 | 1762 |
Pick | 7 | Picking up object | 10 times | 105 | 1333 |
Jump | 8 | Jumping in place | 10 times | 130 | 666 |
Push-up | 9 | Push-ups (wide hands) | 5 times | 111 | 480 |
Sit-up | 10 | Sit-ups (straight legs) | 5 times | 121 | 1005 |
Walk | 11 | Walking 20 m | ≈12 s | 188 | 882 |
Walk-back | 12 | Walking backward | ≈20 s | 50 | 317 |
Walk-circle | 13 | Walking in circle | ≈20 s | 35 | 259 |
Run | 14 | Running 20 m | ≈7 s | 146 | 595 |
Stair-up | 15 | Going upstairs | ≈1 min | 53 | 798 |
Stair-down | 16 | Going downstairs | ≈50 s | 57 | 781 |
Table tennis | 17 | Playing table tennis | 1 min | 20 | 458 |
SL | Feature | Formula |
---|---|---|
01 | Mean | $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$ |
02 | Median | Middle value of sorted $x$ |
03 | Root Mean Square (RMS) | $\sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$ |
04 | Minimum | $\min_i(x_i)$ |
05 | Maximum | $\max_i(x_i)$ |
06 | Standard Deviation (SD) | $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2}$ |
07 | Range | $\max_i(x_i) - \min_i(x_i)$ |
08 | Mean Absolute Deviation (MAD) | $\frac{1}{N}\sum_{i=1}^{N}\lvert x_i - \bar{x}\rvert$ |
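Assuming the windows are arranged as an `(n_windows, 6, 300)` array, as in the loading sketch above, the eight Table 4 statistics can be computed per channel with NumPy; a minimal sketch:

```python
import numpy as np

def extract_features(channels):
    """Compute the eight Table 4 statistics along the sample axis of an
    (n_windows, n_channels, n_samples) array -> (n_windows, n_channels * 8)."""
    mean = channels.mean(axis=2)
    median = np.median(channels, axis=2)
    rms = np.sqrt((channels ** 2).mean(axis=2))
    mn = channels.min(axis=2)
    mx = channels.max(axis=2)
    sd = channels.std(axis=2)
    rng = mx - mn
    mad = np.abs(channels - mean[..., None]).mean(axis=2)  # mean absolute deviation
    # Stack in Table 4 order, then flatten channel-major: 6 channels x 8 stats = 48 features.
    feats = np.stack([mean, median, rms, mn, mx, sd, rng, mad], axis=2)
    return feats.reshape(channels.shape[0], -1)
```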
Serial | Name | Serial | Name | Serial | Name | Serial | Name |
---|---|---|---|---|---|---|---|
0 | mean_acc_x | 12 | max_acc_y | 24 | mean_gyro_x | 36 | max_gyro_y |
1 | median_acc_x | 13 | std_acc_y | 25 | median_gyro_x | 37 | std_gyro_y |
2 | rms_acc_x | 14 | range_acc_y | 26 | rms_gyro_x | 38 | range_gyro_y |
3 | min_acc_x | 15 | mad_acc_y | 27 | min_gyro_x | 39 | mad_gyro_y |
4 | max_acc_x | 16 | mean_acc_z | 28 | max_gyro_x | 40 | mean_gyro_z |
5 | std_acc_x | 17 | median_acc_z | 29 | std_gyro_x | 41 | median_gyro_z |
6 | range_acc_x | 18 | rms_acc_z | 30 | range_gyro_x | 42 | rms_gyro_z |
7 | mad_acc_x | 19 | min_acc_z | 31 | mad_gyro_x | 43 | min_gyro_z |
8 | mean_acc_y | 20 | max_acc_z | 32 | mean_gyro_y | 44 | max_gyro_z |
9 | median_acc_y | 21 | std_acc_z | 33 | median_gyro_y | 45 | std_gyro_z |
10 | rms_acc_y | 22 | range_acc_z | 34 | rms_gyro_y | 46 | range_gyro_z |
11 | min_acc_y | 23 | mad_acc_z | 35 | min_gyro_y | 47 | mad_gyro_z |
48 | Target Class: activity | | | | | | |
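The 48 names in Table 5 follow a fixed statistic-by-channel pattern, so they can be regenerated programmatically; a two-line sketch:

```python
stats = ["mean", "median", "rms", "min", "max", "std", "range", "mad"]
channels = ["acc_x", "acc_y", "acc_z", "gyro_x", "gyro_y", "gyro_z"]
# Channel-major ordering reproduces the serials above: 0 = mean_acc_x, ..., 47 = mad_gyro_z.
feature_names = [f"{s}_{c}" for c in channels for s in stats]
```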
Model | Train F-Score (%) | Test F-Score (%) |
---|---|---|
CatBoost (CB) | 98.50 | 91.90 |
XGBoost (XGB) | 100.00 | 92.55 |
LightGBM (LGBM) | 100.00 | 92.37 |
Gradient Boosting (GB) | 99.11 | 90.33 |
Hyperparameter | Name | Type | Range |
---|---|---|---|
Number of Estimators | n_estimators | Integer | 100 to 300 |
Learning Rate | learning_rate | Float | 0.001 to 0.2 |
Max Depth | max_depth | Integer | 3 to 7 |
Minimum Child Weight | min_child_weight | Integer | 1 to 10 |
Feature Selection Mask | features | Binary vector (length = 48) | 0 or 1 per feature |
Optimizer | Fold | Selected Features | n_estimators | learning_rate | max_depth | min_child_weight |
---|---|---|---|---|---|---|
GJO | 1 | 5, 6, 7, 11, 14, 23, 27, 32, 34, 38, 41 | 279 | 0.116176736 | 7 | 1 |
GJO | 2 | 5, 10, 15, 20, 32, 34, 39, 41 | 299 | 0.110572987 | 7 | 1 |
GJO | 3 | 6, 8, 10, 20, 21, 27, 30, 32, 36 | 272 | 0.084533705 | 7 | 1 |
GJO | 4 | 6, 7, 10, 14, 20, 30, 32, 34, 45, 47 | 186 | 0.126345040 | 7 | 1 |
GJO | 5 | 5, 7, 10, 14, 15, 34, 45 | 253 | 0.193675087 | 7 | 1 |
GJO | 6 | 5, 7, 8, 10, 23, 24, 34, 41 | 274 | 0.123926983 | 7 | 1 |
GJO | 7 | 1, 8, 14, 20, 21, 23, 34 | 280 | 0.179786016 | 7 | 1 |
GJO | 8 | 5, 10, 14, 24, 34, 45 | 266 | 0.132827173 | 7 | 1 |
GJO | 9 | 5, 6, 14, 20, 23, 32, 34, 36, 41, 45 | 280 | 0.099581423 | 7 | 1 |
GJO | 10 | 5, 14, 20, 27, 32, 34, 38, 41 | 278 | 0.143630097 | 7 | 1 |
WARSO | 1 | 1, 5, 16, 17, 18, 21, 22, 23, 24, 25, 29, 30, 35, 36, 40, 41, 44, 46 | 225 | 0.171671644 | 6 | 8 |
WARSO | 2 | 1, 5, 15, 16, 17, 21, 23, 25, 26, 29, 30, 32, 35, 36, 37, 40, 41, 44, 46 | 219 | 0.177963083 | 5 | 2 |
WARSO | 3 | 13, 16, 21, 23, 24, 25, 29, 30, 32, 35, 37, 41, 43, 46 | 233 | 0.200000000 | 6 | 8 |
WARSO | 4 | 1, 5, 6, 15, 16, 17, 18, 23, 24, 25, 26, 27, 28, 29, 30, 35, 36, 40, 41, 43, 44, 46 | 214 | 0.200000000 | 6 | 7 |
WARSO | 5 | 5, 15, 16, 17, 18, 21, 23, 24, 25, 26, 29, 30, 35, 36, 40, 41, 43, 44, 46 | 254 | 0.177088074 | 6 | 9 |
WARSO | 6 | 1, 5, 15, 16, 17, 18, 21, 23, 24, 25, 29, 30, 35, 36, 40, 41, 43, 44, 46 | 230 | 0.200000000 | 6 | 8 |
WARSO | 7 | 1, 4, 5, 6, 7, 8, 9, 10, 11, 15, 16, 17, 26, 30, 31, 33, 38, 39, 40, 42, 43, 44, 47 | 208 | 0.200000000 | 4 | 2 |
WARSO | 8 | 2, 5, 9, 10, 11, 13, 14, 16, 31, 32, 33, 34, 36, 38, 40, 44, 46, 47 | 211 | 0.172988329 | 6 | 2 |
WARSO | 9 | 2, 5, 8, 11, 15, 18, 21, 22, 23, 24, 25, 26, 29, 30, 31, 38, 40, 41, 43 | 230 | 0.130140028 | 5 | 2 |
WARSO | 10 | 1, 9, 11, 12, 13, 15, 16, 21, 24, 25, 26, 32, 35, 36, 37, 38, 40, 41, 43, 44, 45 | 211 | 0.200000000 | 6 | 7 |
Optimizer | Selected Features | No. of Features | n_estimators | learning_rate | max_depth | min_child_weight |
---|---|---|---|---|---|---|
GJO | 1, 5, 6, 7, 8, 10, 11, 14, 15, 20, 21, 23, 24, 27, 30, 32, 34, 36, 38, 39, 41, 45, 47 | 23 | 280 | 0.116176736 | 7 | 1 |
WARSO | 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47 | 44 | 230 | 0.2 | 6 | 2 |
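To illustrate how such a configuration would be reused, the sketch below refits XGBoost with the best GJO setting from the table above. The variable names are illustrative; `X_train`, `y_train`, `X_test`, and `y_test` are assumed to be the 48-feature matrices and labels produced by the feature-extraction step.

```python
from xgboost import XGBClassifier

# Best GJO configuration from the table above: 23 selected features,
# n_estimators = 280, learning_rate ≈ 0.1162, max_depth = 7, min_child_weight = 1.
gjo_features = [1, 5, 6, 7, 8, 10, 11, 14, 15, 20, 21, 23, 24,
                27, 30, 32, 34, 36, 38, 39, 41, 45, 47]

model = XGBClassifier(n_estimators=280, learning_rate=0.116176736,
                      max_depth=7, min_child_weight=1,
                      objective="multi:softprob", random_state=0)
model.fit(X_train[:, gjo_features], y_train)
accuracy = model.score(X_test[:, gjo_features], y_test)  # test accuracy on 23 features
```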
Models | Fold | Train Accuracy | Train F-Score | Train Precision | Train Recall | Test Accuracy | Test F-Score | Test Precision | Test Recall |
---|---|---|---|---|---|---|---|---|---|
GJO-XGB | 1 | 100% | 100% | 100% | 100% | 93.54% | 92.28% | 93.18% | 91.58% |
GJO-XGB | 2 | 100% | 100% | 100% | 100% | 93.41% | 92.02% | 92.72% | 91.44% |
GJO-XGB | 3 | 100% | 100% | 100% | 100% | 93.93% | 92.82% | 93.42% | 92.32% |
GJO-XGB | 4 | 100% | 100% | 100% | 100% | 93.35% | 91.93% | 92.37% | 91.60% |
GJO-XGB | 5 | 100% | 100% | 100% | 100% | 93.67% | 92.39% | 92.92% | 91.98% |
GJO-XGB | 6 | 100% | 100% | 100% | 100% | 93.77% | 92.34% | 92.52% | 92.19% |
GJO-XGB | 7 | 100% | 100% | 100% | 100% | 93.67% | 91.85% | 92.28% | 91.48% |
GJO-XGB | 8 | 100% | 100% | 100% | 100% | 93.35% | 92.16% | 93.06% | 91.46% |
GJO-XGB | 9 | 100% | 100% | 100% | 100% | 93.43% | 92.16% | 93.20% | 91.40% |
GJO-XGB | 10 | 100% | 100% | 100% | 100% | 93.38% | 92.46% | 93.13% | 91.91% |
WARSO-XGB | 1 | 100% | 100% | 100% | 100% | 94.17% | 93.03% | 93.84% | 92.39% |
WARSO-XGB | 2 | 100% | 100% | 100% | 100% | 94.47% | 93.54% | 94.04% | 93.12% |
WARSO-XGB | 3 | 100% | 100% | 100% | 100% | 94.23% | 93.06% | 93.57% | 92.62% |
WARSO-XGB | 4 | 100% | 100% | 100% | 100% | 93.72% | 92.47% | 92.72% | 92.32% |
WARSO-XGB | 5 | 100% | 100% | 100% | 100% | 93.98% | 92.75% | 93.25% | 92.35% |
WARSO-XGB | 6 | 100% | 100% | 100% | 100% | 93.65% | 92.28% | 92.79% | 91.85% |
WARSO-XGB | 7 | 100% | 100% | 100% | 100% | 93.83% | 92.15% | 92.69% | 91.71% |
WARSO-XGB | 8 | 100% | 100% | 100% | 100% | 93.88% | 92.97% | 93.46% | 92.55% |
WARSO-XGB | 9 | 100% | 100% | 100% | 100% | 94.23% | 93.24% | 94.22% | 92.52% |
WARSO-XGB | 10 | 100% | 100% | 100% | 100% | 94.07% | 93.28% | 94.11% | 92.61% |
Model | Train Accuracy | Train F-Score | Train Precision | Train Recall | Test Accuracy | Test F-Score | Test Precision | Test Recall |
---|---|---|---|---|---|---|---|---|
GJO-XGB | 0.000 | 0.000 | 0.000 | 0.000 | 0.200 | 0.285 | 0.388 | 0.336 |
WARSO-XGB | 0.000 | 0.000 | 0.000 | 0.000 | 0.259 | 0.455 | 0.589 | 0.399 |
Models | Train Accuracy | Train F-Score | Train Precision | Train Recall | Test Accuracy | Test F-Score | Test Precision | Test Recall |
---|---|---|---|---|---|---|---|---|
GJO-XGB | 100% | 100% | 100% | 100% | 93.55% | 92.24% | 92.88% | 91.74% |
WARSO-XGB | 100% | 100% | 100% | 100% | 94.02% | 92.88% | 93.47% | 92.40% |
Model | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Fold 6 | Fold 7 | Fold 8 | Fold 9 | Fold 10 | Mean |
---|---|---|---|---|---|---|---|---|---|---|---|
GJO-XGB | 0.99675 | 0.99715 | 0.99754 | 0.99729 | 0.99663 | 0.99736 | 0.99816 | 0.99724 | 0.99703 | 0.99579 | 0.99709 |
WARSO-XGB | 0.99712 | 0.99717 | 0.99736 | 0.99710 | 0.99718 | 0.99754 | 0.99805 | 0.99651 | 0.99639 | 0.99547 | 0.99699 |
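The per-fold AUC values above are computed over the 18 activity classes. A minimal sketch of a one-vs-rest AUC computation, reusing `model` and `gjo_features` from the earlier sketch; macro averaging is an assumption, since the paper's exact averaging scheme is not restated here.

```python
from sklearn.metrics import roc_auc_score

# One-vs-rest AUC averaged over the 18 activity classes (macro averaging assumed).
proba = model.predict_proba(X_test[:, gjo_features])
auc = roc_auc_score(y_test, proba, multi_class="ovr", average="macro")
print(f"Test AUC: {auc:.5f}")
```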
Models | Random Seed | Train Accuracy (%) | Train F-Score (%) | Train Precision (%) | Train Recall (%) | Test Accuracy (%) | Test F-Score (%) | Test Precision (%) | Test Recall (%) |
---|---|---|---|---|---|---|---|---|---|
GJO-XGB | 5 | 100 | 100 | 100 | 100 | 93.57 | 92.25 | 92.94 | 91.71 |
GJO-XGB | 10 | 100 | 100 | 100 | 100 | 93.58 | 92.28 | 92.85 | 91.81 |
GJO-XGB | 15 | 100 | 100 | 100 | 100 | 93.60 | 92.35 | 92.96 | 91.86 |
GJO-XGB | 20 | 100 | 100 | 100 | 100 | 93.80 | 92.54 | 93.19 | 92.02 |
GJO-XGB | 25 | 100 | 100 | 100 | 100 | 93.64 | 92.31 | 92.97 | 91.81 |
GJO-XGB | 30 | 100 | 100 | 100 | 100 | 93.60 | 92.32 | 92.86 | 91.91 |
GJO-XGB | 35 | 100 | 100 | 100 | 100 | 93.64 | 92.37 | 92.99 | 91.92 |
GJO-XGB | 40 | 100 | 100 | 100 | 100 | 93.59 | 92.22 | 92.90 | 91.67 |
GJO-XGB | 45 | 100 | 100 | 100 | 100 | 93.56 | 92.28 | 92.92 | 91.76 |
GJO-XGB | 50 | 100 | 100 | 100 | 100 | 93.64 | 92.34 | 92.95 | 91.85 |
WARSO-XGB | 5 | 100 | 100 | 100 | 100 | 94.00 | 92.80 | 93.36 | 92.37 |
WARSO-XGB | 10 | 100 | 100 | 100 | 100 | 93.93 | 92.72 | 93.21 | 92.32 |
WARSO-XGB | 15 | 100 | 100 | 100 | 100 | 94.05 | 92.83 | 93.35 | 92.40 |
WARSO-XGB | 20 | 100 | 100 | 100 | 100 | 94.19 | 93.11 | 93.64 | 92.69 |
WARSO-XGB | 25 | 100 | 100 | 100 | 100 | 93.94 | 92.69 | 93.24 | 92.26 |
WARSO-XGB | 30 | 100 | 100 | 100 | 100 | 93.99 | 92.74 | 93.26 | 92.34 |
WARSO-XGB | 35 | 100 | 100 | 100 | 100 | 94.02 | 92.82 | 93.38 | 92.40 |
WARSO-XGB | 40 | 100 | 100 | 100 | 100 | 93.92 | 92.64 | 93.15 | 92.21 |
WARSO-XGB | 45 | 100 | 100 | 100 | 100 | 93.95 | 92.74 | 93.30 | 92.29 |
WARSO-XGB | 50 | 100 | 100 | 100 | 100 | 94.12 | 93.00 | 93.60 | 92.51 |
Fold | GJO-XGB Training Time (s) | GJO-XGB Testing Time (s) | WARSO-XGB Training Time (s) | WARSO-XGB Testing Time (s) |
---|---|---|---|---|
1 | 31.88 | 0.62 | 42.59 | 0.54 |
2 | 28.31 | 0.62 | 29.01 | 0.62 |
3 | 41.44 | 0.81 | 29.06 | 0.43 |
4 | 41.59 | 0.71 | 30.77 | 0.42 |
5 | 41.85 | 1.08 | 29.22 | 0.47 |
6 | 45.59 | 0.79 | 29.67 | 0.99 |
7 | 40.84 | 0.85 | 28.54 | 0.39 |
8 | 40.99 | 0.72 | 30.06 | 0.43 |
9 | 41.03 | 0.82 | 30.25 | 0.41 |
10 | 40.50 | 1.08 | 29.19 | 0.43 |
Mean | 39.40 | 0.81 | 30.84 | 0.51 |
SL | Model | Accuracy (%) | F-Score (%) | Precision (%) | Recall (%) |
---|---|---|---|---|---|
Traditional Classifier Models | | | | | |
01 [9] | RF | 89.67 | 87.59 | – | – |
02 [10] | AO + RF | 88.53 | – | – | – |
Deep Learning Models | | | | | |
03 [1] | AM-DLFC and LGBM | 96.86 | 96.92 | – | – |
04 [11] | vRPE + CFEB | 96.80 | 97.50 | – | – |
05 [2] | DenseNet121 + MWT | 97.48 | 97.52 | 97.62 | 97.41 |
06 [12] | Deep-HAR model | 99.98 | 98.96 | 97.38 | 100 |
Proposed Models | | | | | |
01 | Only XGB | 93.81 | 92.64 | 93.23 | 92.16 |
02 | GJO-XGB | 93.55 | 92.24 | 92.88 | 91.74 |
03 | WARSO-XGB | 94.02 | 92.88 | 93.47 | 92.40 |
04 | GJO-XGB (random seed = 20) | 93.80 | 92.54 | 93.19 | 92.02 |
05 | WARSO-XGB (random seed = 20) | 94.19 | 93.11 | 93.64 | 92.69 |