Search Results (172)

Search Parameters:
Keywords = heterogeneous ensemble learning

26 pages, 829 KiB  
Article
Enhanced Face Recognition in Crowded Environments with 2D/3D Features and Parallel Hybrid CNN-RNN Architecture with Stacked Auto-Encoder
by Samir Elloumi, Sahbi Bahroun, Sadok Ben Yahia and Mourad Kaddes
Big Data Cogn. Comput. 2025, 9(8), 191; https://doi.org/10.3390/bdcc9080191 - 22 Jul 2025
Abstract
Face recognition (FR) in unconstrained conditions remains an open research topic and an ongoing challenge. Facial images exhibit diverse expressions, occlusions, variations in illumination, and heterogeneous backgrounds. This work aims to produce an accurate and robust system for enhanced security and surveillance. A parallel hybrid deep learning model for feature extraction and classification is proposed. An ensemble of three parallel feature-extraction models learns the most representative features using CNNs and RNNs. 2D LBP and 3D Mesh LBP features are computed on face images and fed as input to two RNNs. A stacked autoencoder (SAE) merges the feature vectors extracted from the three parallel CNN-RNN layers. We tested the designed 2D/3D CNN-RNN framework on four standard datasets and achieved an accuracy of 98.9%. The hybrid deep learning model significantly improves FR compared with similar state-of-the-art methods. The proposed model was also tested on a human crowd dataset captured under unconstrained conditions, and the results were very promising, with an accuracy of 95%. Furthermore, our model shows an 11.5% improvement over similar hybrid CNN-RNN architectures, demonstrating its robustness in complex environments where faces can undergo different transformations.
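The fusion step described in the abstract, three parallel feature branches merged by a stacked autoencoder before classification, can be pictured with a minimal PyTorch sketch. The branch feature size, latent size, and number of identities below are placeholder assumptions, and the code illustrates the general idea rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Assumed sizes (placeholders, not taken from the paper).
BRANCH_DIM, LATENT_DIM, NUM_IDENTITIES = 256, 128, 100

class SAEFusion(nn.Module):
    """Merge three parallel branch feature vectors with a stacked
    autoencoder and classify the latent code."""
    def __init__(self):
        super().__init__()
        fused = 3 * BRANCH_DIM
        self.encoder = nn.Sequential(          # stacked encoder layers
            nn.Linear(fused, 512), nn.ReLU(),
            nn.Linear(512, LATENT_DIM), nn.ReLU())
        self.decoder = nn.Sequential(          # mirror decoder for reconstruction
            nn.Linear(LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, fused))
        self.classifier = nn.Linear(LATENT_DIM, NUM_IDENTITIES)

    def forward(self, f_cnn, f_rnn_2dlbp, f_rnn_3dlbp):
        fused = torch.cat([f_cnn, f_rnn_2dlbp, f_rnn_3dlbp], dim=1)
        code = self.encoder(fused)
        return self.classifier(code), self.decoder(code)  # logits + reconstruction

# Dummy branch outputs standing in for the CNN and the two LBP-fed RNNs.
model = SAEFusion()
feats = [torch.randn(8, BRANCH_DIM) for _ in range(3)]
logits, recon = model(*feats)
print(logits.shape, recon.shape)  # torch.Size([8, 100]) torch.Size([8, 768])
```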

17 pages, 2307 KiB  
Article
DeepBiteNet: A Lightweight Ensemble Framework for Multiclass Bug Bite Classification Using Image-Based Deep Learning
by Doston Khasanov, Halimjon Khujamatov, Muksimova Shakhnoza, Mirjamol Abdullaev, Temur Toshtemirov, Shahzoda Anarova, Cheolwon Lee and Heung-Seok Jeon
Diagnostics 2025, 15(15), 1841; https://doi.org/10.3390/diagnostics15151841 - 22 Jul 2025
Viewed by 27
Abstract
Background/Objectives: The accurate identification of insect bites from skin images is challenging due to the fine gradations among diverse bite types, variability in human skin response, and inconsistencies in image quality. Methods: In this work, we introduce DeepBiteNet, a new ensemble-based deep learning model designed to perform robust multiclass classification of insect bites from RGB images. Our model aggregates three semantically diverse convolutional neural networks—DenseNet121, EfficientNet-B0, and MobileNetV3-Small—using a stacked meta-classifier designed to combine their predicted outcomes into an integrated, discriminatively strong output. Our technique balances heterogeneous feature representation with suppression of individual model biases. The model was trained and evaluated on a hand-collected set of 1932 labeled images representing eight classes, consisting of common bites such as mosquito, flea, and tick bites, as well as unaffected skin. Our domain-specific augmentation pipeline introduced realistic variability in lighting, occlusion, and skin tone, thereby improving generalizability. Results: DeepBiteNet achieved a training accuracy of 89.7%, a validation accuracy of 85.1%, and a test accuracy of 84.6%, and surpassed fifteen benchmark CNN architectures on all key indicators, namely precision (0.880), recall (0.870), and F1-score (0.875). Optimized for mobile deployment with quantization and TensorFlow Lite, the model enables rapid on-device computation and eliminates reliance on cloud-based processing. Conclusions: Our work shows how ensemble learning, when carefully designed and combined with realistic data augmentation, can improve the reliability and usability of automatic insect bite diagnosis. DeepBiteNet forms a promising foundation for future integration with mobile health (mHealth) solutions and may support early diagnosis and triage in dermatologically underserved regions.
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnostics and Analysis 2024)
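The stacking scheme summarized above, three heterogeneous CNN backbones whose class probabilities feed a meta-classifier, can be sketched roughly as follows. The backbone choices mirror the abstract; the logistic-regression meta-learner, the 224-pixel input, the untrained weights, and the dummy labels are illustrative assumptions rather than details from the paper (assumes a recent torchvision).

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

NUM_CLASSES = 8  # eight bite/skin classes, per the abstract

def backbone(name):
    """Build one backbone with its classification head resized to 8 classes."""
    if name == "densenet121":
        net = models.densenet121(weights=None)
        net.classifier = nn.Linear(net.classifier.in_features, NUM_CLASSES)
    elif name == "efficientnet_b0":
        net = models.efficientnet_b0(weights=None)
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, NUM_CLASSES)
    else:  # mobilenet_v3_small
        net = models.mobilenet_v3_small(weights=None)
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, NUM_CLASSES)
    return net.eval()

nets = [backbone(n) for n in ("densenet121", "efficientnet_b0", "mobilenet_v3_small")]

# Base-model probabilities become the meta-learner's input features.
images = torch.randn(16, 3, 224, 224)               # dummy batch standing in for bite images
labels = torch.randint(0, NUM_CLASSES, (16,)).numpy()
with torch.no_grad():
    meta_features = torch.cat(
        [torch.softmax(net(images), dim=1) for net in nets], dim=1).numpy()

meta = LogisticRegression(max_iter=1000).fit(meta_features, labels)  # stacked meta-classifier
print(meta.predict(meta_features[:4]))
```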

32 pages, 2529 KiB  
Article
Cloud Adoption in the Digital Era: An Interpretable Machine Learning Analysis of National Readiness and Structural Disparities Across the EU
by Cristiana Tudor, Margareta Florescu, Persefoni Polychronidou, Pavlos Stamatiou, Vasileios Vlachos and Konstadina Kasabali
Appl. Sci. 2025, 15(14), 8019; https://doi.org/10.3390/app15148019 - 18 Jul 2025
Viewed by 121
Abstract
As digital transformation accelerates across Europe, cloud computing plays an increasingly central role in modernizing public services and private enterprises. Yet adoption rates vary markedly among EU member states, reflecting deeper structural differences in digital capacity. This study employs explainable machine learning to uncover the drivers of national cloud adoption across 27 EU countries using harmonized panel datasets spanning 2014–2021 and 2014–2024. A methodological pipeline combining Random Forests (RF), XGBoost, Support Vector Machines (SVM), and Elastic Net regression is implemented, with model tuning conducted via nested cross-validation. Among individual models, Elastic Net and SVM delivered superior predictive performance, while a stacked ensemble achieved the best overall accuracy (MAE = 0.214, R² = 0.948). The most interpretable model, a standardized RF with country fixed effects, attained an MAE of 0.321 and an R² of 0.864, making it well suited for policy analysis. Variable importance analysis reveals that the density of ICT specialists is the strongest predictor of adoption, followed by broadband access and higher education. Fixed-effect modeling confirms significant national heterogeneity, with countries such as Finland and Luxembourg consistently leading adoption, while Bulgaria and Romania exhibit structural barriers. Partial dependence and SHAP analyses reveal nonlinear complementarities between digital skills and infrastructure. A hierarchical clustering of countries reveals three distinct digital maturity profiles, offering tailored policy pathways. These results directly support the EU Digital Decade's strategic targets and provide actionable insights for advancing inclusive and resilient digital transformation across the Union.
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)
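A rough scikit-learn sketch of the kind of pipeline described above: a stacked ensemble over random forest, gradient boosting, SVM, and Elastic Net, tuned inside nested cross-validation. The data are synthetic, gradient boosting stands in for XGBoost, and the grid is illustrative; this is not the authors' exact setup.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the harmonized country-level panel features.
X, y = make_regression(n_samples=200, n_features=12, noise=0.3, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),   # stand-in for XGBoost
        ("svm", make_pipeline(StandardScaler(), SVR())),
        ("enet", make_pipeline(StandardScaler(), ElasticNet())),
    ],
    final_estimator=ElasticNet(),
)

# Nested CV: the inner GridSearchCV tunes the meta-learner's penalty,
# the outer loop gives an unbiased performance estimate.
inner = GridSearchCV(stack, {"final_estimator__alpha": [0.1, 1.0, 10.0]},
                     cv=KFold(3, shuffle=True, random_state=0))
outer_scores = cross_val_score(inner, X, y,
                               cv=KFold(5, shuffle=True, random_state=0), scoring="r2")
print("nested-CV R^2: %.3f +/- %.3f" % (outer_scores.mean(), outer_scores.std()))
```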

22 pages, 9057 KiB  
Article
A Multi-Stage Framework for Kawasaki Disease Prediction Using Clustering-Based Undersampling and Synthetic Data Augmentation: Cross-Institutional Validation with Dual-Center Clinical Data in Taiwan
by Heng-Chih Huang, Chuan-Sheng Hung, Chun-Hung Richard Lin, Yi-Zhen Shie, Cheng-Han Yu and Ting-Hsin Huang
Bioengineering 2025, 12(7), 742; https://doi.org/10.3390/bioengineering12070742 - 7 Jul 2025
Viewed by 407
Abstract
Kawasaki disease (KD) is a rare yet potentially life-threatening pediatric vasculitis that, if left undiagnosed or untreated, can result in serious cardiovascular complications. Its heterogeneous clinical presentation poses diagnostic challenges, as cases often fail to meet the classical criteria, increasing the risk of oversight. Leveraging routine laboratory tests with AI offers a promising strategy for enhancing early detection. However, due to the extremely low prevalence of KD, conventional models often struggle with severe class imbalance, limiting their ability to achieve both high sensitivity and specificity in practice. To address this issue, we propose a multi-stage AI-based predictive framework that incorporates clustering-based undersampling, data augmentation, and stacking ensemble learning. The model was trained and internally tested on clinical blood and urine test data from Chang Gung Memorial Hospital (CGMH, n = 74,641; 2010–2019), and externally validated using an independent dataset from Kaohsiung Medical University Hospital (KMUH, n = 1582; 2012–2020), thereby supporting cross-institutional generalizability. At a fixed recall rate of 95%, the model achieved a specificity of 97.5% and an F1-score of 53.6% on the CGMH test set, and a specificity of 74.7% with an F1-score of 23.4% on the KMUH validation set. These results underscore the model's ability to maintain high specificity even under sensitivity-focused constraints, while still delivering clinically meaningful predictive performance. This balance of sensitivity and specificity highlights the framework's practical utility for real-world KD screening.
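A minimal sketch of the general recipe above, clustering-based undersampling of the overwhelming negative class followed by a stacking ensemble, on synthetic imbalanced data. The cluster count, base learners, and meta-model are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Heavily imbalanced synthetic data standing in for lab-test features.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.995, 0.005],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Clustering-based undersampling: keep one representative (the point nearest
# each centroid) per cluster of the majority class.
maj = X_tr[y_tr == 0]
n_keep = int(10 * (y_tr == 1).sum())
km = KMeans(n_clusters=n_keep, n_init=4, random_state=0).fit(maj)
reps = np.array([maj[np.argmin(np.linalg.norm(maj - c, axis=1))] for c in km.cluster_centers_])

X_bal = np.vstack([reps, X_tr[y_tr == 1]])
y_bal = np.concatenate([np.zeros(len(reps)), np.ones(int((y_tr == 1).sum()))])

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_bal, y_bal)
print(classification_report(y_te, stack.predict(X_te), digits=3))
```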

28 pages, 10204 KiB  
Article
Wildfire Susceptibility Mapping in Greece Using Ensemble Machine Learning
by Panagiotis Symeonidis, Thanasis Vafeiadis, Dimosthenis Ioannidis and Dimitrios Tzovaras
Earth 2025, 6(3), 75; https://doi.org/10.3390/earth6030075 - 5 Jul 2025
Viewed by 385
Abstract
This study explores the use of ensemble machine learning models to develop wildfire susceptibility maps (WFSMs) in Greece, focusing on their application as regressors. We provide a continuous assessment of wildfire risk, enhancing the interpretability and accuracy of predictions. Two key metrics were developed: Ensemble Mean and Ensemble Max. This dual-metric approach improves predictive robustness and provides critical insights for wildfire management strategies. The ensemble models effectively handle complex, high-dimensional data, addressing challenges such as overfitting and data heterogeneity. Utilizing advanced techniques such as XGBoost, GBM, LightGBM, and CatBoost regressors, our research demonstrates the potential of these methods to enhance wildfire risk estimation. The Ensemble Mean model classified 50% of the land as low risk and 21% as high risk, while the Ensemble Max model identified 38% as low risk and 33% as high risk. Notably, 83% of wildfires between 2000 and 2024 occurred in areas marked as high risk by both models, underscoring their effectiveness. This approach offers significant potential to mitigate wildfires' environmental, economic, and social impacts, enhance climate resilience, and strengthen preparedness for future wildfire events.
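The two ensemble metrics described above reduce to averaging and taking the maximum of the member regressors' susceptibility predictions. A small sketch with scikit-learn stand-ins for the boosting libraries named in the abstract (the data and the risk threshold are illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, HistGradientBoostingRegressor,
                              RandomForestRegressor, ExtraTreesRegressor)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for wildfire conditioning factors and a 0-1 susceptibility target.
X, y = make_regression(n_samples=1500, n_features=10, noise=0.2, random_state=1)
y = (y - y.min()) / (y.max() - y.min())
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

members = [GradientBoostingRegressor(random_state=1),
           HistGradientBoostingRegressor(random_state=1),
           RandomForestRegressor(random_state=1),
           ExtraTreesRegressor(random_state=1)]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in members])

ensemble_mean = preds.mean(axis=1)   # consensus susceptibility estimate
ensemble_max = preds.max(axis=1)     # precautionary (worst-case) estimate

# Classify into risk bands, e.g. high risk above 0.7 (threshold is illustrative).
print("high-risk share, mean metric:", (ensemble_mean > 0.7).mean())
print("high-risk share, max metric: ", (ensemble_max > 0.7).mean())
```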

26 pages, 4486 KiB  
Article
Predicting Groundwater Level Dynamics and Evaluating the Impact of the South-to-North Water Diversion Project Using Stacking Ensemble Learning
by Hangyu Wu, Rong Liu, Chuiyu Lu, Qingyan Sun, Chu Wu, Lingjia Yan, Wen Lu and Hang Zhou
Sustainability 2025, 17(13), 6120; https://doi.org/10.3390/su17136120 - 3 Jul 2025
Viewed by 314
Abstract
This study aims to improve the accuracy and interpretability of deep groundwater level forecasting in Cangzhou, a typical overexploitation area in the North China Plain. To address the limitations of traditional models and existing machine learning approaches, we develop a Stacking ensemble learning framework that integrates meteorological, spatial, and anthropogenic variables, together with lagged groundwater levels that reflect aquifer memory. The model combines six heterogeneous base learners with a meta-model to enhance prediction robustness. Performance evaluation shows that the ensemble model consistently outperforms individual models in accuracy, generalization, and spatial adaptability. Scenario-based simulations are further conducted to assess the effects of the South-to-North Water Diversion Project. The results indicate that the diversion project significantly mitigates groundwater depletion, with the most overexploited zones showing water level recovery of up to 17 m compared to the no-diversion scenario. Feature importance analysis confirms that lagged water levels and pumping volumes are the dominant predictors, aligning with groundwater system dynamics. These findings demonstrate the effectiveness of ensemble learning in modeling complex groundwater behavior and provide a practical tool for water resource regulation. The proposed framework is adaptable to other groundwater-stressed regions and supports dynamic policy design for sustainable groundwater management.
(This article belongs to the Special Issue Sustainable Water Management in Rapid Urbanization)
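A compact sketch of the setup outlined above: lagged groundwater levels added as predictors to capture aquifer memory, then a stacking ensemble of heterogeneous regressors. The lag depth, the four base learners, and the ridge meta-model are illustrative assumptions, not the paper's six-learner configuration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Synthetic monthly series standing in for groundwater level, rainfall, and pumping.
rng = np.random.default_rng(0)
n = 360
df = pd.DataFrame({
    "level": np.cumsum(rng.normal(0, 0.3, n)) - 40,
    "rain": rng.gamma(2.0, 20.0, n),
    "pumping": rng.normal(50, 8, n),
})
for k in (1, 2, 3):                       # lagged levels encode aquifer memory
    df[f"level_lag{k}"] = df["level"].shift(k)
df = df.dropna()

X = df.drop(columns="level").values
y = df["level"].values
split = int(0.8 * len(df))                # keep the time order: train on the past only
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0)),
                ("knn", KNeighborsRegressor()),
                ("svr", SVR())],
    final_estimator=Ridge())
stack.fit(X_tr, y_tr)
print("test R^2:", round(stack.score(X_te, y_te), 3))
```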

21 pages, 3747 KiB  
Article
An Optimized Multi-Stage Framework for Soil Organic Carbon Estimation in Citrus Orchards Based on FTIR Spectroscopy and Hybrid Machine Learning Integration
by Yingying Wei, Xiaoxiang Mo, Shengxin Yu, Saisai Wu, He Chen, Yuanyuan Qin and Zhikang Zeng
Agriculture 2025, 15(13), 1417; https://doi.org/10.3390/agriculture15131417 - 30 Jun 2025
Viewed by 352
Abstract
Soil organic carbon (SOC) is a critical indicator of soil health and carbon sequestration potential. Accurate, efficient, and scalable SOC estimation is essential for sustainable orchard management and climate-resilient agriculture. However, traditional visible–near-infrared (Vis–NIR) spectroscopy often suffers from limited chemical specificity and weak adaptability in heterogeneous soil environments. To overcome these limitations, this study develops a five-stage modeling framework that systematically integrates Fourier Transform Infrared (FTIR) spectroscopy with hybrid machine learning techniques for non-destructive SOC prediction in citrus orchard soils. The proposed framework includes (1) FTIR spectral acquisition; (2) a comparative evaluation of nine spectral preprocessing techniques; (3) dimensionality reduction via three representative feature selection algorithms, namely the Successive Projections Algorithm (SPA), Competitive Adaptive Reweighted Sampling (CARS), and Principal Component Analysis (PCA); (4) regression modeling using six machine learning algorithms, namely Random Forest (RF), Support Vector Regression (SVR), Gray Wolf Optimized SVR (SVR-GWO), Partial Least Squares Regression (PLSR), Principal Component Regression (PCR), and the Back-propagation Neural Network (BPNN); and (5) comprehensive performance assessment and the identification of the optimal modeling pathway. The results showed that second-derivative (SD) preprocessing significantly enhanced the spectral signal-to-noise ratio. Among feature selection methods, the SPA reduced over 300 spectral bands to 10 informative wavelengths, enabling efficient modeling with minimal information loss. The SD + SPA + RF pipeline achieved the highest prediction performance (R² = 0.84, RMSE = 4.67 g/kg, and RPD = 2.51), outperforming the PLSR and BPNN models. This study presents a reproducible and scalable FTIR-based modeling strategy for SOC estimation in orchard soils. Its adaptive preprocessing, effective variable selection, and ensemble learning integration offer a robust solution for real-time, cost-effective, and transferable carbon monitoring, advancing precision soil sensing in orchard ecosystems.
(This article belongs to the Section Agricultural Technology)
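The winning pipeline reported above (second-derivative preprocessing, wavelength selection, then random forest regression) can be approximated as below. A Savitzky-Golay second derivative stands in for the SD step and a simple importance-based selection of 10 bands stands in for SPA; the synthetic spectra and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic FTIR-like spectra: two absorption peaks whose heights track SOC, plus baseline offsets.
rng = np.random.default_rng(2)
n_samples, n_bands = 120, 300
wn = np.linspace(0, 1, n_bands)
soc = rng.uniform(5, 40, n_samples)                      # SOC in g/kg
peaks = (np.exp(-((wn - 0.3) ** 2) / 0.001)[None, :] * soc[:, None] * 0.02
         + np.exp(-((wn - 0.7) ** 2) / 0.002)[None, :] * soc[:, None] * 0.01)
spectra = peaks + rng.normal(0, 0.05, (n_samples, n_bands)) + rng.uniform(0, 1, (n_samples, 1))

# Second-derivative (SD) preprocessing suppresses the baseline and sharpens bands.
sd = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)

# Stand-in for SPA: keep the 10 most informative bands ranked by RF importance.
X_tr, X_te, y_tr, y_te = train_test_split(sd, soc, random_state=2)
ranker = RandomForestRegressor(n_estimators=300, random_state=2).fit(X_tr, y_tr)
top10 = np.argsort(ranker.feature_importances_)[-10:]

rf = RandomForestRegressor(n_estimators=300, random_state=2).fit(X_tr[:, top10], y_tr)
print("held-out R^2:", round(r2_score(y_te, rf.predict(X_te[:, top10])), 3))
```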

20 pages, 6086 KiB  
Article
Analysis of Evolutionary Characteristics and Prediction of Annual Runoff in Qianping Reservoir
by Xiaolong Kang, Haoming Yu, Chaoqiang Yang, Qingqing Tian and Yadi Wang
Water 2025, 17(13), 1902; https://doi.org/10.3390/w17131902 - 26 Jun 2025
Viewed by 330
Abstract
Under the combined influence of climate change and human activities, the non-stationarity of reservoir runoff has significantly intensified, posing challenges for traditional statistical models to accurately capture its multi-scale abrupt changes. This study focuses on Qianping (QP) Reservoir and systematically integrates climate-driven mechanisms with machine learning approaches to uncover the patterns of runoff evolution and develop high-precision prediction models. The findings offer a novel paradigm for adaptive reservoir operation under non-stationary conditions. In this paper, we employ methods including extreme-point symmetric mode decomposition (ESMD), Bayesian ensemble time series decomposition (BETS), and cross-wavelet transform (XWT) to investigate the variation trends and mutation features of the annual runoff in QP Reservoir. Additionally, four models—ARIMA, LSTM, LSTM-RF, and LSTM-CNN—are utilized for runoff prediction and analysis. The results indicate that (1) the annual runoff of QP Reservoir exhibits a quasi-8.25-year mid-short-term cycle and a quasi-13.20-year long-term cycle on an annual scale; (2) using Bayesian estimators based on abrupt change year detection and trend variation algorithms, an abrupt change point with a probability of 79.1% was identified in 1985, with a confidence interval spanning 1984 to 1986; (3) cross-wavelet analysis indicates that the periodic associations between the annual runoff of QP Reservoir and climate-driving factors exhibit spatiotemporal heterogeneity: the AMO, AO, and PNA show multi-scale synergistic interactions; the DMI and ENSO display only phase-specific weak coupling; and solar sunspot activity modulates runoff over long-term cycles; and (4) the NSE values of the ARIMA, LSTM, LSTM-RF, and LSTM-CNN models all exceed 0.945, the RMSE values are below 0.477 × 10⁹ m³, and the MAE values are below 0.297 × 10⁹ m³. Among the four, the LSTM-RF model demonstrates the highest accuracy and the most stable predicted fluctuations, indicating that future annual runoff will continue to fluctuate but with a decreasing amplitude.
(This article belongs to the Section Hydrology)

15 pages, 1882 KiB  
Article
Predicting Rheological Properties of Asphalt Modified with Mineral Powder: Bagging, Boosting, and Stacking vs. Single Machine Learning Models
by Haibing Huang, Zujie Xu, Xiaoliang Li, Bin Liu, Xiangyang Fan, Haonan Ding and Wen Xu
Materials 2025, 18(12), 2913; https://doi.org/10.3390/ma18122913 - 19 Jun 2025
Viewed by 346
Abstract
This study systematically compares the predictive performance of single machine learning (ML) models (KNN, Bayesian ridge regression, decision tree) and ensemble learning methods (bagging, boosting, stacking) for quantifying the rheological properties of mineral powder-modified asphalt, specifically the complex shear modulus (G*) and the phase angle (δ). We used two emulsifiers and three mineral powders to fabricate the modified emulsified asphalt and conducted rheological property tests. Dynamic shear rheometer (DSR) test data were preprocessed using the local outlier factor (LOF) algorithm, followed by K-fold cross-validation (K = 5) and Bayesian optimization to tune model hyperparameters. Because traditional single ML models struggle to characterize these properties accurately, an innovative stacking model was developed that integrates predictions from four heterogeneous base learners—KNN, decision tree (DT), random forest (RF), and XGBoost—with a Bayesian ridge regression meta-learner. This framework uniquely employs cross-validated predictions from the base models as input features for the meta-learner, reducing information leakage and enhancing generalization. Results demonstrate that ensemble models significantly outperform single models, with the stacking model achieving the highest accuracy (R² = 0.9727 for G* and R² = 0.9990 for δ). Shapley additive explanations (SHAP) analysis reveals temperature and mineral powder type as key factors, addressing the “black box” limitation of ML in materials science. This study validates the stacking model as a robust framework for optimizing asphalt mixture design, offering insights into material selection and pavement performance improvement.
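The stacking design described above, out-of-fold predictions from KNN, decision tree, random forest, and a boosting model feeding a Bayesian ridge meta-learner, maps closely onto the sketch below. Gradient boosting stands in for XGBoost, the data are synthetic, and the LOF cleaning and Bayesian tuning steps are omitted.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold, cross_val_predict, train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for DSR test data (temperature, frequency, binder/powder descriptors -> G*).
X, y = make_regression(n_samples=400, n_features=8, noise=0.1, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

bases = [KNeighborsRegressor(), DecisionTreeRegressor(random_state=3),
         RandomForestRegressor(random_state=3), GradientBoostingRegressor(random_state=3)]
cv = KFold(n_splits=5, shuffle=True, random_state=3)

# Out-of-fold predictions become the meta-learner's features, which limits
# information leakage from the base models into the meta level.
meta_train = np.column_stack([cross_val_predict(m, X_tr, y_tr, cv=cv) for m in bases])
meta_model = BayesianRidge().fit(meta_train, y_tr)

# At test time the base models are refit on all training data.
meta_test = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in bases])
print("stacked R^2:", round(r2_score(y_te, meta_model.predict(meta_test)), 3))
```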

29 pages, 3472 KiB  
Article
Modeling of Battery Storage of Photovoltaic Power Plants Using Machine Learning Methods
by Rad Stanev, Tanyo Tanev, Venizelos Efthymiou and Chrysanthos Charalambous
Energies 2025, 18(12), 3210; https://doi.org/10.3390/en18123210 - 19 Jun 2025
Viewed by 408
Abstract
The massive integration of variable renewable energy sources (RESs) creates a growing need for new power system architectures with wide implementation of distributed battery energy storage systems (BESSs), which support power system stability, energy management, and control. This research presents a methodology and realization of a set of 11 BESS models based on different machine learning methods. The performance of the proposed models is tested using real-life BESS data, after which a comparative evaluation is presented and conclusions about the models' performance are drawn. This study compares the results of feedforward neural networks (FNNs), a homogeneous ensemble of FNNs, multiple linear regression, multiple linear regression with polynomial features, decision-tree-based models such as XGBoost, CatBoost, and LightGBM, and heterogeneous ensembles of decision-tree models in the day-ahead forecasting of an existing real-life BESS in a PV power plant. A Bayesian hyperparameter search is proposed and implemented for all of the included models. The main objectives of this study are to propose hyperparameter optimization for the included models, to determine the optimal training period for the available data, and to identify the best model among those considered. Additional objectives are to compare the test results of heterogeneous and homogeneous ensembles, as well as grid search and Bayesian hyperparameter optimization. Also, as part of the deep learning FNN analysis, a customized early stopping function is introduced. The results show that the heterogeneous ensemble model with three decision-tree models and linear regression as the main model achieves the highest average R² of 0.792 and the second-best nRMSE of 0.669% using a 30-day training period. CatBoost provides the best nRMSE of 0.662% for a 30-day training period and a competitive R² of 0.772. This study underscores the significance of model selection and training period optimization for improving battery performance forecasting in energy management systems. The trained models or pipelines from this study could potentially serve as a foundation for transfer learning in future studies.
(This article belongs to the Topic Smart Solar Energy Systems)
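The Bayesian hyperparameter search mentioned above can be illustrated with Optuna tuning a gradient-boosting day-ahead forecaster on synthetic data. The search space, the model, and the objective are illustrative assumptions rather than the paper's configuration.

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for BESS day-ahead features (PV forecast, load, prior state of charge).
X, y = make_regression(n_samples=500, n_features=10, noise=0.2, random_state=4)
cv = KFold(n_splits=5, shuffle=True, random_state=4)

def objective(trial):
    # Each trial proposes a hyperparameter combination and returns its CV score.
    model = GradientBoostingRegressor(
        n_estimators=trial.suggest_int("n_estimators", 100, 600),
        learning_rate=trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        max_depth=trial.suggest_int("max_depth", 2, 6),
        random_state=4)
    return cross_val_score(model, X, y, cv=cv, scoring="r2").mean()

study = optuna.create_study(direction="maximize")   # TPE sampler: Bayesian-style search
study.optimize(objective, n_trials=30)
print("best R^2:", round(study.best_value, 3), "with", study.best_params)
```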

26 pages, 2591 KiB  
Article
RHAD: A Reinforced Heterogeneous Anomaly Detector for Robust Industrial Control System Security
by Xiaopeng Han, Yukun Niu, Zhigang Cao, Ding Zhou and Bo Liu
Electronics 2025, 14(12), 2440; https://doi.org/10.3390/electronics14122440 - 16 Jun 2025
Viewed by 366
Abstract
Industrial Control Systems (ICS) are increasingly targeted by sophisticated and evolving cyberattacks, while conventional static defense mechanisms and isolated intrusion detection models often lack the robustness required to cope with such dynamic threats. To overcome these limitations, we propose RHAD (Reinforced Heterogeneous Anomaly Detector), a resilient and adaptive anomaly detection framework specifically designed for ICS environments. RHAD combines a heterogeneous ensemble of detection models with a confidence-aware scheduling mechanism guided by reinforcement learning (RL), alongside a time-decaying sliding-window voting strategy to enhance detection accuracy and temporal robustness. The proposed architecture establishes a modular collaborative framework that enables dynamic and fine-grained protection for industrial network traffic. At its core, the RL-based scheduler leverages the Proximal Policy Optimization (PPO) algorithm to dynamically assign model weights and orchestrate container-level executor replacement in real time, driven by network state observations and runtime performance feedback. We evaluate RHAD using two publicly available ICS datasets—SCADA and WDT—achieving 99.19% accuracy with an F1-score of 0.989 on SCADA, and 98.35% accuracy with an F1-score of 0.987 on WDT. These results significantly outperform state-of-the-art deep learning baselines, confirming RHAD's robustness under class imbalance conditions. Thus, RHAD provides a promising foundation for resilient ICS security and shows strong potential for broader deployment in cyber-physical systems.
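The time-decaying sliding-window vote mentioned above can be illustrated in a few lines of NumPy: recent detector verdicts count more than older ones, and each detector's vote is scaled by its scheduler-assigned weight. The decay constant, window length, and weights below are placeholders, not values from the paper.

```python
import numpy as np

def windowed_vote(verdicts, model_weights, window=10, decay=0.8):
    """Time-decaying sliding-window vote over per-step anomaly verdicts.

    verdicts: array of shape (steps, n_models), entries 1 (anomaly) or 0 (benign).
    model_weights: confidence weights for the heterogeneous detectors.
    Returns 1 if the decayed, weighted anomaly mass exceeds the benign mass.
    """
    recent = verdicts[-window:]
    ages = np.arange(len(recent))[::-1]            # 0 = newest observation
    time_w = decay ** ages                         # exponential decay with age
    w = time_w[:, None] * np.asarray(model_weights)[None, :]
    anomaly_mass = (w * recent).sum()
    benign_mass = (w * (1 - recent)).sum()
    return int(anomaly_mass > benign_mass)

# Three detectors; the newest steps mostly flag an anomaly.
history = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 1, 1], [1, 0, 1]])
print(windowed_vote(history, model_weights=[0.5, 0.3, 0.2], window=4))  # -> 1
```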

30 pages, 1869 KiB  
Review
Clinical Applications of Artificial Intelligence in Periodontology: A Scoping Review
by Georgios S. Chatzopoulos, Vasiliki P. Koidou, Lazaros Tsalikis and Eleftherios G. Kaklamanos
Medicina 2025, 61(6), 1066; https://doi.org/10.3390/medicina61061066 - 10 Jun 2025
Viewed by 2036
Abstract
Background and Objectives: This scoping review aimed to identify and synthesize current evidence on the clinical applications of artificial intelligence (AI) in periodontology, focusing on its potential to improve diagnosis, treatment planning, and patient care. Materials and Methods: A comprehensive literature search was conducted using electronic databases including PubMed-MEDLINE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Scopus, and Web of Science™ Core Collection. Studies were included if they met predefined PICO criteria relating to AI applications in periodontology. Due to the heterogeneity of study designs, imaging modalities, and outcome measures, a scoping review approach was employed rather than a systematic review. Results: A total of 6394 articles were initially identified and screened. The review revealed significant interest in utilizing AI, particularly convolutional neural networks (CNNs), for various periodontal applications. Studies demonstrated the potential of AI models to accurately detect and classify alveolar bone loss, intrabony defects, furcation involvements, gingivitis, dental biofilm, and calculus from dental radiographs and intraoral images. AI systems often achieved diagnostic accuracy, sensitivity, and specificity comparable to or exceeding those of dental professionals. Various CNN architectures and methodologies, including ensemble models and task-specific designs, showed promise in enhancing periodontal disease assessment and management. Conclusions: AI, especially deep learning techniques, holds considerable potential to revolutionize periodontology by improving the accuracy and efficiency of diagnostic and treatment planning processes. While challenges remain, including the need for further research with larger and more diverse datasets, the reviewed evidence supports the integration of AI technologies into dental practice to aid clinicians and ultimately improve patient outcomes.
(This article belongs to the Section Dentistry and Oral Health)

23 pages, 1601 KiB  
Article
Level-Wise Feature-Guided Cascading Ensembles for Credit Scoring
by Yao Zou and Guanghua Cheng
Symmetry 2025, 17(6), 914; https://doi.org/10.3390/sym17060914 - 10 Jun 2025
Viewed by 326
Abstract
Accurate credit scoring models are essential for financial risk management, yet conventional approaches often fail to address the complexities of high-dimensional, heterogeneous credit data, particularly in capturing nonlinear relationships and hierarchical dependencies, ultimately compromising predictive performance. To overcome these limitations, this paper introduces the level-wise feature-guided cascading ensemble (LFGCE) model, a novel framework that integrates hierarchical feature selection with cascading ensemble learning to systematically uncover latent feature hierarchies. The LFGCE framework leverages symmetry principles in its cascading architecture, where each ensemble layer maintains structural symmetry in processing its assigned feature subset while asymmetrically contributing to the final prediction through hierarchical information fusion. The LFGCE model operates through two synergistic mechanisms: (1) a hierarchical feature selection strategy that quantifies feature importance and partitions the feature space into progressively predictive subsets, thereby reducing dimensionality while preserving discriminative information, and (2) a cascading ensemble architecture where each layer specializes in learning risk patterns from its assigned feature subset, while iteratively incorporating outputs from preceding layers to enable cross-level information fusion. This dual process of hierarchical feature refinement and layered ensemble learning allows the LFGCE to extract deep, robust representations of credit risk. Empirical validation on four public credit datasets (Australian Credit, German Credit, Japan Credit, and Taiwan Credit) demonstrates that the LFGCE achieves an average AUC improvement of 0.23% over XGBoost (Python 3.13) and 0.63% over deep neural networks, confirming its superior predictive accuracy.
(This article belongs to the Special Issue Symmetric Studies of Distributions in Statistical Models)
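The cascading mechanism described above, with layers trained on progressively less predictive feature subsets and each layer also consuming the previous layer's output, can be sketched as follows. Ranking features with a random forest and using logistic-regression layers are simplifying assumptions, not the paper's design.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)

# Rank features once, then split them into three tiers by importance.
imp = RandomForestClassifier(n_estimators=200, random_state=5).fit(X_tr, y_tr).feature_importances_
tiers = np.array_split(np.argsort(imp)[::-1], 3)

def cascade(X_fit, y_fit, X_eval):
    """Each layer sees its feature tier plus the previous layer's probability."""
    prev_fit = np.empty((len(X_fit), 0))
    prev_eval = np.empty((len(X_eval), 0))
    for tier in tiers:
        layer = LogisticRegression(max_iter=1000)
        layer.fit(np.hstack([X_fit[:, tier], prev_fit]), y_fit)
        prev_fit = layer.predict_proba(np.hstack([X_fit[:, tier], prev_fit]))[:, [1]]
        prev_eval = layer.predict_proba(np.hstack([X_eval[:, tier], prev_eval]))[:, [1]]
    return prev_eval.ravel()

print("cascade AUC:", round(roc_auc_score(y_te, cascade(X_tr, y_tr, X_te)), 3))
```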

27 pages, 16706 KiB  
Article
Examination of Landslide Susceptibility Modeling Using Ensemble Learning and Factor Engineering
by Lizhou Zhang, Siqiao Ye, Deping He, Linfeng Wang, Weiping Li, Bijing Jin and Taorui Zeng
Appl. Sci. 2025, 15(11), 6192; https://doi.org/10.3390/app15116192 - 30 May 2025
Viewed by 488
Abstract
Current research lacks an in-depth exploration of ensemble learning and factor engineering in landslide susceptibility modeling. In the Three Gorges Reservoir area of China, a region prone to frequent landslides that endanger lives and infrastructure, this study advances landslide susceptibility prediction by integrating ensemble learning with systematic factor engineering. Four homogeneous ensemble models (random forest, XGBoost, LightGBM, and CatBoost) and two heterogeneous ensembles (bagging and stacking) were implemented to evaluate 14 influencing factors. The key results identify the Digital Elevation Model (DEM) as the dominant factor, while the stacking ensemble achieved superior performance (AUC = 0.876), outperforming single models by 4.4%. Iterative factor elimination and hyperparameter tuning increased the high-susceptibility zones in the stacking predictions to 42.54% and enhanced XGBoost's low-susceptibility classification accuracy from 12.96% to 13.57%. The optimized models were used to generate a high-resolution landslide susceptibility map, identifying 23.8% of the northern and central regions as high-susceptibility areas, compared with only 9.3% of the eastern and southern regions classified as low-susceptibility zones. This methodology improved prediction accuracy by 12–18% compared with single models, providing actionable insights for landslide risk mitigation.
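The iterative factor elimination step mentioned above can be approximated with recursive feature elimination under cross-validation. The random-forest ranker and the synthetic stand-in for the 14 conditioning factors are illustrative assumptions, not the study's data or exact procedure.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for 14 landslide conditioning factors (DEM, slope, rainfall, ...).
X, y = make_classification(n_samples=1200, n_features=14, n_informative=6, random_state=6)

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=6),
    step=1,                                   # drop one factor per iteration
    cv=StratifiedKFold(5),
    scoring="roc_auc")
selector.fit(X, y)

print("factors kept:", selector.n_features_)
print("retained factor indices:", list(selector.get_support(indices=True)))
```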

19 pages, 742 KiB  
Review
Artificial Intelligence-Based Models for Automated Bone Age Assessment from Posteroanterior Wrist X-Rays: A Systematic Review
by Isidro Miguel Martín Pérez, Sofia Bourhim and Sebastián Eustaquio Martín Pérez
Appl. Sci. 2025, 15(11), 5978; https://doi.org/10.3390/app15115978 - 26 May 2025
Viewed by 986
Abstract
Introduction: Bone-age assessment using posteroanterior left hand–wrist radiographs is indispensable in pediatric endocrinology and forensic age determination. Traditional methods—the Greulich–Pyle atlas and Tanner–Whitehouse scoring—are time-consuming, operator-dependent, and prone to inter- and intra-observer variability. Aim: To systematically review the performance of AI-based models for automated bone-age estimation from left PA hand–wrist radiographs. Materials and Methods: A systematic review, registered in PROSPERO (CRD42024619808), was carried out in MEDLINE (PubMed), Google Scholar, ELSEVIER (Scopus), EBSCOhost, Cochrane Library, Web of Science (WoS), IEEE Xplore, and ProQuest for original studies published between 2019 and 2024. Two independent reviewers extracted study characteristics and outcomes, assessed methodological quality via the Newcastle–Ottawa Scale, and evaluated bias using ROBINS-E. Results: Seventy-seven studies met the inclusion criteria, encompassing convolutional neural networks, ensemble and hybrid models, and transfer-learning approaches. Commercial systems (e.g., BoneXpert®, Physis®, VUNO Med®-BoneAge) achieved mean absolute errors of 2–31.8 months—significantly surpassing Greulich–Pyle and Tanner–Whitehouse benchmarks—and reduced reading times by up to 87%. Common limitations included demographic bias, heterogeneous imaging protocols, and scarce external validation. Conclusions: AI-based approaches have substantially advanced automated bone-age estimation, delivering clinical-grade speed and mean absolute errors below 6 months. To ensure equitable, generalizable performance, future work must prioritize demographically diverse training cohorts, implement bias-mitigation strategies, and perform local calibration against region-specific standards.
(This article belongs to the Special Issue Radiology and Biomedical Imaging in Musculoskeletal Research)
