Search Results (141)

Search Parameters:
Keywords = CNN-LSTM-ML

26 pages, 1854 KB  
Review
Machine Learning Techniques for Battery State of Health Prediction: A Comparative Review
by Leila Mbagaya, Kumeshan Reddy and Annelize Botes
World Electr. Veh. J. 2025, 16(11), 594; https://doi.org/10.3390/wevj16110594 - 28 Oct 2025
Viewed by 513
Abstract
Accurate estimation of the state of health (SOH) of lithium-ion batteries is essential for the safe and efficient operation of electric vehicles (EVs). Conventional approaches, including Coulomb counting, electrochemical impedance spectroscopy, and equivalent circuit models, provide useful insights but face practical limitations such as error accumulation, high equipment requirements, and limited applicability across different conditions. These challenges have encouraged the use of machine learning (ML) methods, which can model nonlinear relationships and temporal degradation patterns directly from cycling data. This paper reviews four machine learning algorithms that are widely applied in SOH estimation: support vector regression (SVR), random forest (RF), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Their methodologies, advantages, limitations, and recent extensions are discussed with reference to the existing literature. To complement the review, MATLAB-based simulations were carried out using the NASA Prognostics Center of Excellence (PCoE) dataset. Training was performed on three cells (B0006, B0007, B0018), and testing was conducted on an unseen cell (B0005) to evaluate cross-battery generalisation. The results show that the LSTM model achieved the highest accuracy (RMSE = 0.0146, MAE = 0.0118, R2 = 0.980), followed by CNN and RF, both of which provided acceptable accuracy with errors below 2% SOH. SVR performed less effectively (RMSE = 0.0457, MAPE = 4.80%), reflecting its difficulty in capturing sequential dependencies. These outcomes are consistent with findings in the literature, indicating that deep learning models are better suited for modelling long-term battery degradation, while ensemble approaches such as RF remain competitive when supported by carefully engineered features. This review also identifies ongoing and future research directions, including the use of optimisation algorithms for hyperparameter tuning, transfer learning for adaptation across battery chemistries, and explainable AI to improve interpretability. Overall, LSTM and hybrid models that combine complementary methods (e.g., CNN-LSTM) show strong potential for deployment in battery management systems, where reliable SOH prediction is important for safety, cost reduction, and extending battery lifetime. Full article
(This article belongs to the Section Storage Systems)
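
A minimal sketch (assuming NumPy and scikit-learn; the array names are placeholders rather than the review's own code) of how the reported error metrics for a held-out cell such as B0005 can be computed:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

def soh_metrics(y_true, y_pred):
    """Error metrics reported in the review for an SOH test set."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))           # root-mean-square error
    mae = mean_absolute_error(y_true, y_pred)                  # mean absolute error
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100   # mean absolute percentage error
    r2 = r2_score(y_true, y_pred)                              # coefficient of determination
    return {"RMSE": rmse, "MAE": mae, "MAPE_%": mape, "R2": r2}

# Hypothetical usage with predictions for the unseen cell (e.g., B0005):
# print(soh_metrics(soh_true_b0005, soh_pred_b0005))
```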

29 pages, 2242 KB  
Systematic Review
Artificial Intelligence for Optimizing Solar Power Systems with Integrated Storage: A Critical Review of Techniques, Challenges, and Emerging Trends
by Raphael I. Areola, Abayomi A. Adebiyi and Katleho Moloi
Electricity 2025, 6(4), 60; https://doi.org/10.3390/electricity6040060 - 25 Oct 2025
Viewed by 673
Abstract
The global transition toward sustainable energy has significantly accelerated the deployment of solar power systems. Yet, the inherent variability of solar energy continues to present considerable challenges in ensuring its stable and efficient integration into modern power grids. As the demand for clean and dependable energy sources intensifies, the integration of artificial intelligence (AI) with solar systems, particularly those coupled with energy storage, has emerged as a promising and increasingly vital solution. This review explores the practical applications of machine learning (ML), deep learning (DL), fuzzy logic, and emerging generative AI models, focusing on their roles in areas such as solar irradiance forecasting, energy management, fault detection, and overall operational optimisation. Alongside these advancements, the review also addresses persistent challenges, including data limitations, difficulties in model generalization, and the integration of AI in real-time control scenarios. We included peer-reviewed journal articles published between 2015 and 2025 that apply AI methods to PV + ESS, with empirical evaluation. We excluded studies lacking evaluation against baselines or those focusing solely on PV or ESS in isolation. We searched IEEE Xplore, Scopus, Web of Science, and Google Scholar up to 1 July 2025. Two reviewers independently screened titles/abstracts and full texts; disagreements were resolved via discussion. Risk of bias was assessed with a custom tool evaluating validation method, dataset partitioning, baseline comparison, overfitting risk, and reporting clarity. Results were synthesized narratively by grouping AI techniques (forecasting, MPPT/control, dispatch, data augmentation). We screened 412 records and included 67 studies published between 2018 and 2025, following a documented PRISMA process. The review revealed that AI-driven techniques significantly enhance performance in solar + battery energy storage system (BESS) applications. In solar irradiance and PV output forecasting, deep learning models, in particular long short-term memory (LSTM) and hybrid convolutional neural network–LSTM (CNN–LSTM) architectures, repeatedly outperform conventional statistical methods, achieving significantly lower Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) and higher R-squared values. Smarter energy dispatch and market-based storage decisions are made possible by reinforcement learning and deep reinforcement learning frameworks, which increase economic returns and lower curtailment risks. Furthermore, hybrid metaheuristic–AI optimisation improves control tuning and system sizing with increased efficiency and convergence. In conclusion, AI enables transformative gains in forecasting, dispatch, and optimisation for solar-BESSs. Future efforts should focus on explainable, robust AI models, standardized benchmark datasets, and real-world pilot deployments to ensure scalability, reliability, and stakeholder trust. Full article
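
As a hedged illustration of the hybrid CNN–LSTM forecasters highlighted in this review, the sketch below builds a small Keras model; the window length, feature set, and layer sizes are assumptions, not any reviewed study's architecture.

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(timesteps=24, n_features=4):
    """Toy next-step irradiance forecaster: Conv1D layers extract local temporal
    patterns, an LSTM captures longer-range dependencies."""
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),   # e.g., 24 past hourly steps x 4 weather features
        layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
        layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
        layers.LSTM(64),
        layers.Dense(1),                                # next-step irradiance estimate
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
```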

47 pages, 36851 KB  
Article
Comparative Analysis of ML and DL Models for Data-Driven SOH Estimation of LIBs Under Diverse Temperature and Load Conditions
by Seyed Saeed Madani, Marie Hébert, Loïc Boulon, Alexandre Lupien-Bédard and François Allard
Batteries 2025, 11(11), 393; https://doi.org/10.3390/batteries11110393 - 24 Oct 2025
Viewed by 398
Abstract
Accurate estimation of lithium-ion battery (LIB) state of health (SOH) underpins safe operation, predictive maintenance, and lifetime-aware energy management. Despite recent advances in machine learning (ML), systematic benchmarking across heterogeneous real-world cells remains limited, often confounded by data leakage and inconsistent validation. Here, we establish a leakage-averse, cross-battery evaluation framework encompassing 32 commercial LIBs (B5–B56) spanning diverse cycling histories and temperatures (≈4 °C, 24 °C, 43 °C). Models ranging from classical regressors to ensemble trees and deep sequence architectures were assessed under blocked 5-fold GroupKFold splits using RMSE, MAE, R2 with confidence intervals, and inference latency. The results reveal distinct stratification among model families. Sequence-based architectures—CNN–LSTM, GRU, and LSTM—consistently achieved the highest accuracy (mean RMSE ≈ 0.006; per-cell R2 up to 0.996), demonstrating strong generalization across regimes. Gradient-boosted ensembles such as LightGBM and CatBoost delivered competitive mid-tier accuracy (RMSE ≈ 0.012–0.015) yet unrivaled computational efficiency (≈0.001–0.003 ms), confirming their suitability for embedded applications. Transformer-based hybrids underperformed, while approximately one-third of cells exhibited elevated errors linked to noise or regime shifts, underscoring the necessity of rigorous evaluation design. Collectively, these findings establish clear deployment guidelines: CNN–LSTM and GRU are recommended where robustness and accuracy are paramount (cloud and edge analytics), while LightGBM and CatBoost offer optimal latency–efficiency trade-offs for embedded controllers. Beyond model choice, the study highlights data curation and leakage-averse validation as critical enablers for transferable and reliable SOH estimation. This benchmarking framework provides a robust foundation for future integration of ML models into real-world battery management systems. Full article
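
A minimal sketch of the leakage-averse split idea, assuming scikit-learn's GroupKFold; `X`, `y`, and `cell_id` are placeholders for cycle features, SOH labels, and per-cycle battery identifiers, and the regressor is a stand-in for the benchmarked models.

```python
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.ensemble import GradientBoostingRegressor

def cross_battery_rmse(X, y, cell_id, n_splits=5):
    """All cycles from one cell stay on the same side of every fold,
    so each model is always scored on cells it never saw in training."""
    cv = GroupKFold(n_splits=n_splits)
    model = GradientBoostingRegressor()
    scores = cross_val_score(model, X, y, groups=cell_id, cv=cv,
                             scoring="neg_root_mean_squared_error")
    return -scores  # per-fold RMSE on held-out cells
```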

34 pages, 385 KB  
Review
Machine Learning in MRI Brain Imaging: A Review of Methods, Challenges, and Future Directions
by Martyna Ottoni, Anna Kasperczuk and Luis M. N. Tavora
Diagnostics 2025, 15(21), 2692; https://doi.org/10.3390/diagnostics15212692 - 24 Oct 2025
Viewed by 673
Abstract
In recent years, machine learning (ML) has been increasingly used in many fields, including medicine. Magnetic resonance imaging (MRI) is a non-invasive and effective diagnostic technique; however, manual image analysis is time-consuming and prone to human variability. In response, ML models have been developed to support MRI analysis, particularly in segmentation and classification tasks. This work presents an updated narrative review of ML applications in brain MRI, with a focus on tumor classification and segmentation. A literature search was conducted in PubMed and Scopus databases and Mendeley Catalog (MC)—a publicly accessible bibliographic catalog linked to Elsevier’s Scopus indexing system—covering the period from January 2020 to April 2025. The included studies focused on patients with primary or secondary brain neoplasms and applied machine learning techniques to MRI data for classification or segmentation purposes. Only original research articles written in English and reporting model validation were considered. Studies using animal models, non-imaging data, lacking proper validation, or without accessible full texts (e.g., abstract-only records or publications unavailable through institutional access) were excluded. In total, 108 studies met all inclusion criteria and were analyzed qualitatively. In general, models based on convolutional neural networks (CNNs) were found to dominate current research due to their ability to extract spatial features directly from imaging data. Reported classification accuracies ranged from 95% to 99%, while Dice coefficients for segmentation tasks varied between 0.83 and 0.94. Hybrid architectures (e.g., CNN-SVM, CNN-LSTM) achieved strong results in both classification and segmentation tasks, with accuracies above 95% and Dice scores around 0.90. Transformer-based models, such as the Swin Transformer, reached the highest performance, up to 99.9%. Despite high reported accuracy, challenges remain regarding overfitting, generalization to real-world clinical data, and lack of standardized evaluation protocols. Transfer learning and data augmentation were frequently applied to mitigate limited data availability, while radiomics-based models introduced new avenues for personalized diagnostics. ML has demonstrated substantial potential in enhancing brain MRI analysis and supporting clinical decision-making. Nevertheless, further progress requires rigorous clinical validation, methodological standardization, and comparative benchmarking to bridge the gap between research settings and practical deployment. Full article
(This article belongs to the Special Issue Brain/Neuroimaging 2025–2026)
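
For reference, a small sketch (NumPy assumed; the mask arrays are hypothetical) of the Dice coefficient used to score the segmentation results quoted above:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice score for binary segmentation masks; 1.0 means perfect overlap."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# A reported Dice of ~0.90 means the predicted tumour mask overlaps the
# reference annotation by roughly 90% of their combined size.
```
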
16 pages, 363 KB  
Article
Machine Learning-Enhanced Last-Mile Delivery Optimization: Integrating Deep Reinforcement Learning with Queueing Theory for Dynamic Vehicle Routing
by Tsai-Hsin Jiang and Yung-Chia Chang
Appl. Sci. 2025, 15(21), 11320; https://doi.org/10.3390/app152111320 - 22 Oct 2025
Viewed by 513
Abstract
We present the ML-CALMO framework, which integrates machine learning with queueing theory for last-mile delivery optimization under dynamic conditions. The system combines Long Short-Term Memory (LSTM) demand forecasting, Convolutional Neural Network (CNN) traffic prediction, and Deep Q-Network (DQN)-based routing with theoretical stability guarantees. Evaluation on modern benchmarks, including the 2022 Multi-Depot Dynamic VRP with Stochastic Road Capacity (MDDVRPSRC) dataset and real-world compatible data from OSMnx-based spatial extraction, demonstrates measurable improvements: 18.5% reduction in delivery time and +8.9 pp (≈12.2% relative) gain in service efficiency compared to current state-of-the-art methods, with statistical significance (p < 0.01). Critical limitations include (1) computational requirements that necessitate mid-range GPU hardware, (2) performance degradation under rapid parameter changes (drift rate > 0.5/min), and (3) validation limited to simulation environments. The framework provides a foundation for integrating predictive machine learning with operational guarantees, though field deployment requires addressing identified scalability and robustness constraints. All code, data, and experimental configurations are publicly available for reproducibility. Full article
(This article belongs to the Section Transportation and Future Mobility)
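
A toy illustration of the queueing-theoretic stability condition such a framework can build on; the paper's exact queueing model is not given here, so the M/M/c-style utilisation check below is an assumption.

```python
def is_stable(arrival_rate, service_rate_per_vehicle, n_vehicles):
    """A delivery zone served by n_vehicles stays stable only while
    utilisation rho = arrival_rate / (n_vehicles * service_rate) < 1."""
    rho = arrival_rate / (n_vehicles * service_rate_per_vehicle)
    return rho < 1.0, rho

# e.g., 40 orders/h, 6 deliveries/h per vehicle, 8 vehicles -> rho ~ 0.83 (stable)
print(is_stable(40, 6, 8))
```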

29 pages, 3574 KB  
Article
CBATE-Net: An Accurate Battery Capacity and State-of-Health (SoH) Estimation Tool for Energy Storage Systems
by Fazal Ur Rehman, Concettina Buccella and Carlo Cecati
Energies 2025, 18(20), 5533; https://doi.org/10.3390/en18205533 - 21 Oct 2025
Viewed by 422
Abstract
In battery energy storage systems, accurately estimating battery capacity and state of health is crucial to ensure satisfactory operation and system efficiency and reliability. However, these tasks present particular challenges under irregular charge–discharge conditions, such as those encountered in renewable energy integration and electric vehicles, where heterogeneous cycling accelerates degradation. This study introduces a hybrid deep learning framework to address these challenges. It combines convolutional layers for localized feature extraction, bidirectional recurrent units for sequential learning and a temporal attention mechanism. The proposed hybrid deep learning model, termed CBATE-Net, uses ensemble averaging to improve stability and emphasizes degradation-critical intervals. The framework was evaluated using voltage, current and temperature signals from four benchmark lithium-ion cells across complete life cycles, as part of the NASA dataset. The results demonstrate that the proposed method can accurately track both smooth and abrupt capacity fade while maintaining stability near the end of the life cycle, an area in which conventional models often struggle. Integrating feature learning, temporal modelling and robustness enhancements in a unified design provides the framework with the ability to make accurate and interpretable predictions, making it suitable for deployment in real-world battery energy storage applications. Full article
(This article belongs to the Section D: Energy Storage and Application)
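
The sketch below is a rough Keras analogue of the ingredients named in the abstract (convolutional feature extraction, bidirectional recurrence, temporal attention); the layer sizes and attention formulation are assumptions, not the published CBATE-Net design.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_capacity_estimator(timesteps=200, n_signals=3):
    """Toy capacity/SOH regressor over voltage, current, and temperature windows."""
    inp = layers.Input(shape=(timesteps, n_signals))
    x = layers.Conv1D(32, 5, padding="same", activation="relu")(inp)        # local feature extraction
    x = layers.Bidirectional(layers.GRU(32, return_sequences=True))(x)      # sequential learning
    scores = layers.Dense(1, activation="tanh")(x)                          # simple additive temporal attention
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])
    out = layers.Dense(1)(context)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```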

19 pages, 4178 KB  
Article
Gait Event Detection and Gait Parameter Estimation from a Single Waist-Worn IMU Sensor
by Roland Stenger, Hawzhin Hozhabr Pour, Jonas Teich, Andreas Hein and Sebastian Fudickar
Sensors 2025, 25(20), 6463; https://doi.org/10.3390/s25206463 - 19 Oct 2025
Viewed by 684
Abstract
Changes in gait are associated with an increased risk of falling and may indicate the presence of movement disorders related to neurological diseases or age-related weakness. Continuous monitoring based on inertial measurement unit (IMU) sensor data can effectively estimate gait parameters that reflect changes in gait dynamics. Monitoring using a waist-level IMU sensor is particularly useful for assessing such data, as it can be conveniently worn as a sensor-integrated belt or observed through a smartphone application. Our work investigates the efficacy of estimating gait events and gait parameters based on data collected from a waist-worn IMU sensor. The results are compared to measurements obtained using a GAITRite® system as reference. We evaluate two machine learning (ML)-based methods. Both ML methods are structured as sequence to sequence (Seq2Seq). The efficacy of both approaches in accurately determining gait events and parameters is assessed using a dataset comprising 17,643 recorded steps from 69 subjects, who performed a total of 3588 walks, each covering approximately 4 m. Results indicate that the Convolutional Neural Network (CNN)-based algorithm outperforms the long short-term memory (LSTM) method, achieving a detection accuracy of 98.94% for heel strikes (HS) and 98.65% for toe-offs (TO), with a mean error (ME) of 0.09 ± 4.69 cm in estimating step lengths. Full article
(This article belongs to the Section Wearables)

37 pages, 8530 KB  
Article
AI-Driven Optimization of Plastic-Based Mortars Incorporating Industrial Waste for Modern Construction
by Aïssa Rezzoug
Buildings 2025, 15(20), 3751; https://doi.org/10.3390/buildings15203751 - 17 Oct 2025
Viewed by 289
Abstract
Cementitious composites with recycled plastic often suffer from reduced strength. This study explores the partial substitution of cement with industrial by-products in plastic-based mortar mixes (PBMs) to enhance performance while reducing environmental impact. To achieve this, five hybrid machine learning (ML) models (CNN-LSTM, XGBoost-PSO, SVM + K-Means, SVM-PSO, and XGBoost + K-Means) were developed to predict flexural strength, production cost, and CO2 emissions using a large dataset compiled from peer-reviewed sources. The CNN-LSTM model consistently outperformed the other approaches, showing high predictive capability for both mechanical and sustainability-related outputs. Sensitivity analysis revealed that water content and superplasticizer dosage are the most influential factors in improving flexural strength, while excessive cement and plastic waste were found to negatively impact performance. The proposed ML framework was also successful in estimating production cost and CO2 emissions, demonstrating strong alignment between predicted and actual values. Beyond mechanical and environmental predictions, the framework was extended through the RA-PSO model to estimate compressive and tensile strengths with high reliability. To support practical adoption, the study proposes a graphical user interface (GUI) that allows engineers and researchers to efficiently evaluate durability, cost, and environmental indicators. In addition, the establishment of an open-access data-sharing platform is recommended to encourage broader utilization of PBMs in the production of paving blocks and non-structural masonry units. Overall, this work highlights the potential of hybrid ML approaches to optimize sustainable cementitious composites, bridging the gap between performance requirements and environmental responsibility. Full article

36 pages, 3174 KB  
Review
A Bibliometric-Systematic Literature Review (B-SLR) of Machine Learning-Based Water Quality Prediction: Trends, Gaps, and Future Directions
by Jeimmy Adriana Muñoz-Alegría, Jorge Núñez, Ricardo Oyarzún, Cristian Alfredo Chávez, José Luis Arumí and Lien Rodríguez-López
Water 2025, 17(20), 2994; https://doi.org/10.3390/w17202994 - 17 Oct 2025
Viewed by 983
Abstract
Predicting the quality of freshwater, both surface and groundwater, is essential for the sustainable management of water resources. This study collected 1822 articles from the Scopus database (2000–2024) and filtered them using Topic Modeling to create the study corpus. The B-SLR analysis identified exponential growth in scientific publications since 2020, indicating that this field has reached a stage of maturity. The results showed that the predominant techniques for predicting water quality, both for surface and groundwater, fall into three main categories: (i) ensemble models, with Bagging and Boosting representing 43.07% and 25.91%, respectively, particularly random forest (RF), light gradient boosting machine (LightGBM), and extreme gradient boosting (XGB), along with their optimized variants; (ii) deep neural networks such as long short-term memory (LSTM) and convolutional neural network (CNN), which excel at modeling complex temporal dynamics; and (iii) traditional algorithms like artificial neural network (ANN), support vector machines (SVMs), and decision tree (DT), which remain widely used. Current trends point towards the use of hybrid and explainable architectures, with increased application of interpretability techniques. Emerging approaches such as Generative Adversarial Network (GAN) and Group Method of Data Handling (GMDH) for data-scarce contexts, Transfer Learning for knowledge reuse, and Transformer architectures that outperform LSTM in time series prediction tasks were also identified. Furthermore, the most studied water bodies (e.g., rivers, aquifers) and the most commonly used water quality indicators (e.g., WQI, EWQI, dissolved oxygen, nitrates) were identified. The B-SLR and Topic Modeling methodology provided a more robust, reproducible, and comprehensive overview of AI/ML/DL models for freshwater quality prediction, facilitating the identification of thematic patterns and research opportunities. Full article
(This article belongs to the Special Issue Machine Learning Applications in the Water Domain)

29 pages, 2068 KB  
Article
Voice-Based Early Diagnosis of Parkinson’s Disease Using Spectrogram Features and AI Models
by Danish Quamar, V. D. Ambeth Kumar, Muhammad Rizwan, Ovidiu Bagdasar and Manuella Kadar
Bioengineering 2025, 12(10), 1052; https://doi.org/10.3390/bioengineering12101052 - 29 Sep 2025
Viewed by 1014
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder that significantly affects motor functions, including speech production. Voice analysis offers a less invasive, faster and more cost-effective approach for diagnosing and monitoring PD over time. This research introduces an automated system to distinguish between PD and non-PD individuals based on speech signals using state-of-the-art signal processing and machine learning (ML) methods. A publicly available voice dataset (Dataset 1, 81 samples) containing speech recordings from PD patients and non-PD individuals was used for model training and evaluation. Additionally, a small supplementary dataset (Dataset 2, 15 samples) was created, although excluded from the experiments, to illustrate potential future extensions of this work. Features such as Mel-frequency cepstral coefficients (MFCCs), spectrograms, Mel spectrograms and waveform representations were extracted to capture key vocal impairments related to PD, including diminished vocal range, weak harmonics, elevated spectral entropy and impaired formant structures. These extracted features were used to train and evaluate several ML models, including support vector machine (SVM), XGBoost and logistic regression, as well as deep learning (DL) architectures such as deep neural networks (DNN), convolutional neural networks (CNN) combined with long short-term memory (LSTM), CNN + gated recurrent unit (GRU) and bidirectional LSTM (BiLSTM). Experimental results show that DL models, particularly BiLSTM, outperform traditional ML models, achieving 97% accuracy and an AUC of 0.95. The comprehensive feature extraction from both datasets enabled robust classification of PD and non-PD speech signals. These findings highlight the potential of integrating acoustic features with DL methods for early diagnosis and monitoring of Parkinson’s disease. Full article
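
A hedged sketch of the spectral feature extraction step described above, assuming librosa; the file path, sampling rate, and pooling choices are illustrative rather than the paper's exact settings.

```python
import numpy as np
import librosa

def extract_voice_features(wav_path, sr=22050, n_mfcc=13):
    """MFCC summary statistics for classical ML models plus a log-Mel
    spectrogram that DL models (CNN-LSTM, BiLSTM, ...) can consume directly."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)        # (n_mfcc, frames)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    mfcc_stats = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
    return mfcc_stats, log_mel
```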

52 pages, 3501 KB  
Review
The Role of Artificial Intelligence and Machine Learning in Advancing Civil Engineering: A Comprehensive Review
by Ali Bahadori-Jahromi, Shah Room, Chia Paknahad, Marwah Altekreeti, Zeeshan Tariq and Hooman Tahayori
Appl. Sci. 2025, 15(19), 10499; https://doi.org/10.3390/app151910499 - 28 Sep 2025
Cited by 1 | Viewed by 2201
Abstract
The integration of artificial intelligence (AI) and machine learning (ML) has revolutionised civil engineering, enhancing predictive accuracy, decision-making, and sustainability across domains such as structural health monitoring, geotechnical analysis, transportation systems, water management, and sustainable construction. This paper presents a detailed review of peer-reviewed publications from the past decade, employing bibliometric mapping and critical evaluation to analyse methodological advances, practical applications, and limitations. A novel taxonomy is introduced, classifying AI/ML approaches by civil engineering domain, learning paradigm, and adoption maturity to guide future development. Key applications include pavement condition assessment, slope stability prediction, traffic flow forecasting, smart water management, and flood forecasting, leveraging techniques such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), Support Vector Machines (SVMs), and hybrid physics-informed neural networks (PINNs). The review highlights challenges, including limited high-quality datasets, absence of AI provisions in design codes, integration barriers with IoT-based infrastructure, and computational complexity. While explainable AI tools like SHAP and LIME improve interpretability, their practical feasibility in safety-critical contexts remains constrained. Ethical considerations, including bias in training datasets and regulatory compliance, are also addressed. Promising directions include federated learning for data privacy, transfer learning for data-scarce regions, digital twins, and adherence to FAIR data principles. This study underscores AI as a complementary tool, not a replacement, for traditional methods, fostering a data-driven, resilient, and sustainable built environment through interdisciplinary collaboration and transparent, explainable systems. Full article
(This article belongs to the Section Civil Engineering)

35 pages, 3558 KB  
Article
Realistic Performance Assessment of Machine Learning Algorithms for 6G Network Slicing: A Dual-Methodology Approach with Explainable AI Integration
by Sümeye Nur Karahan, Merve Güllü, Deniz Karhan, Sedat Çimen, Mustafa Serdar Osmanca and Necaattin Barışçı
Electronics 2025, 14(19), 3841; https://doi.org/10.3390/electronics14193841 - 27 Sep 2025
Viewed by 626
Abstract
As 6G networks become increasingly complex and heterogeneous, effective classification of network slicing is essential for optimizing resources and managing quality of service. While recent advances demonstrate high accuracy under controlled laboratory conditions, a critical gap exists between algorithm performance evaluation under idealized conditions and their actual effectiveness in realistic deployment scenarios. This study presents a comprehensive comparative analysis of two distinct preprocessing methodologies for 6G network slicing classification: Pure Raw Data Analysis (PRDA) and Literature-Validated Realistic Transformations (LVRTs). We evaluate the impact of these strategies on algorithm performance, resilience characteristics, and practical deployment feasibility to bridge the laboratory–reality gap in 6G network optimization. Our experimental methodology involved testing eleven machine learning algorithms—including traditional ML, ensemble methods, and deep learning approaches—on a dataset comprising 10,000 network slicing samples (expanded to 21,033 through realistic transformations) across five network slice types. The LVRT methodology incorporates realistic operational impairments including market-driven class imbalance (9:1 ratio), multi-layer interference patterns, and systematic missing data reflecting authentic 6G deployment challenges. The experimental results revealed significant differences in algorithm behavior between the two preprocessing approaches. Under PRDA conditions, deep learning models achieved perfect accuracy (100% for CNN and FNN), while traditional algorithms ranged from 60.9% to 89.0%. However, LVRT results exposed dramatic performance variations, with accuracies spanning from 58.0% to 81.2%. Most significantly, we discovered that algorithms achieving excellent laboratory performance experience substantial degradation under realistic conditions, with CNNs showing an 18.8% accuracy loss (dropping from 100% to 81.2%), FNNs experiencing an 18.9% loss (declining from 100% to 81.1%), and Naive Bayes models suffering a 34.8% loss (falling from 89% to 58%). Conversely, SVM (RBF) and Logistic Regression demonstrated counter-intuitive resilience, improving by 14.1 and 10.3 percentage points, respectively, under operational stress, demonstrating superior adaptability to realistic network conditions. This study establishes a resilience-based classification framework enabling informed algorithm selection for diverse 6G deployment scenarios. Additionally, we introduce a comprehensive explainable artificial intelligence (XAI) framework using SHAP analysis to provide interpretable insights into algorithm decision-making processes. The XAI analysis reveals that Packet Loss Budget emerges as the dominant feature across all algorithms, while Slice Jitter and Slice Latency constitute secondary importance features. Cross-scenario interpretability consistency analysis demonstrates that CNN, LSTM, and Naive Bayes achieve perfect or near-perfect consistency scores (0.998–1.000), while SVM and Logistic Regression maintain high consistency (0.988–0.997), making them suitable for regulatory compliance scenarios. In contrast, XGBoost shows low consistency (0.106) despite high accuracy, requiring intensive monitoring for deployment. 
This research contributes essential insights for bridging the critical gap between algorithm development and deployment success in next-generation wireless networks, providing evidence-based guidelines for algorithm selection based on accuracy, resilience, and interpretability requirements. Our findings establish quantitative resilience boundaries: algorithms achieving >99% laboratory accuracy exhibit 58–81% performance under realistic conditions, with CNN and FNN maintaining the highest absolute accuracy (81.2% and 81.1%, respectively) despite experiencing significant degradation from laboratory conditions. Full article
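
As a hedged illustration of the SHAP-based interpretability analysis described above, the sketch below ranks features by mean absolute SHAP value; the synthetic data, feature names, and XGBoost model are stand-ins for the paper's pipeline.

```python
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
feature_names = ["packet_loss_budget", "slice_jitter", "slice_latency", "bandwidth"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # toy labels dominated by the first feature

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)               # per-sample, per-feature contributions
mean_abs = np.abs(shap_values).mean(axis=0)          # global importance ranking
for name, score in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```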

17 pages, 571 KB  
Systematic Review
Artificial Intelligence in Predictive Healthcare: A Systematic Review
by Abeer Al-Nafjan, Amaal Aljuhani, Arwa Alshebel, Asma Alharbi and Atheer Alshehri
J. Clin. Med. 2025, 14(19), 6752; https://doi.org/10.3390/jcm14196752 - 24 Sep 2025
Viewed by 1631
Abstract
Background/Objectives: Today, artificial intelligence (AI) and machine learning (ML) significantly enhance predictive analytics in the healthcare landscape, enabling timely and accurate predictions that lead to proactive interventions, personalized treatment plans, and ultimately improved patient care. As healthcare systems increasingly adopt data-driven approaches, the integration of AI and data analysis has garnered substantial interest, as reflected in the growing number of publications highlighting innovative applications of AI in clinical settings. This review synthesizes recent evidence on application areas, commonly used models, metrics, and challenges. Methods: We conducted a systematic literature review using the Web of Science and Google Scholar databases, covering studies published from 2021 to 2025 that apply a diverse range of AI and ML techniques to disease prediction. Results: Twenty-two studies met the inclusion criteria. The most frequently used machine learning approaches were tree-based ensemble models (e.g., Random Forest, XGBoost, LightGBM) for structured clinical data, and deep learning architectures (e.g., CNN, LSTM) for imaging and time-series tasks. Evaluation most commonly relied on AUROC, F1-score, accuracy, and sensitivity. Key challenges remain regarding data privacy, integration with clinical workflows, model interpretability, and the necessity for high-quality representative datasets. Conclusions: Future research should focus on developing interpretable models that clinicians can understand and trust, implementing robust privacy-preserving techniques to safeguard patient data, and establishing standardized evaluation frameworks to effectively assess model performance. Full article

38 pages, 2833 KB  
Systematic Review
Customer Churn Prediction: A Systematic Review of Recent Advances, Trends, and Challenges in Machine Learning and Deep Learning
by Mehdi Imani, Majid Joudaki, Ali Beikmohammadi and Hamid Reza Arabnia
Mach. Learn. Knowl. Extr. 2025, 7(3), 105; https://doi.org/10.3390/make7030105 - 21 Sep 2025
Viewed by 4652
Abstract
Background: Customer churn significantly impacts business revenues. Machine Learning (ML) and Deep Learning (DL) methods are increasingly adopted to predict churn, yet a systematic synthesis of recent advancements is lacking. Objectives: This systematic review evaluates ML and DL approaches for churn prediction, identifying trends, challenges, and research gaps from 2020 to 2024. Data Sources: Six databases (Springer, IEEE, Elsevier, MDPI, ACM, Wiley) were searched via Lens.org for studies published between January 2020 and December 2024. Study Eligibility Criteria: Peer-reviewed original studies applying ML/DL techniques for churn prediction were included. Reviews, preprints, and non-peer-reviewed works were excluded. Methods: Screening followed PRISMA 2020 guidelines. A two-phase strategy identified 240 studies for bibliometric analysis and 61 for detailed qualitative synthesis. Results: Ensemble methods (e.g., XGBoost, LightGBM) remain dominant in ML, while DL approaches (e.g., LSTM, CNN) are increasingly applied to complex data. Challenges include class imbalance, interpretability, concept drift, and limited use of profit-oriented metrics. Explainable AI and adaptive learning show potential but limited real-world adoption. Limitations: No formal risk of bias or certainty assessments were conducted. Study heterogeneity prevented meta-analysis. Conclusions: ML and DL methods have matured as key tools for churn prediction, yet gaps remain in interpretability, real-world deployment, and business-aligned evaluation. Systematic Review Registration: Registered retrospectively in OSF. Full article

15 pages, 330 KB  
Article
Detecting Diverse Seizure Types with Wrist-Worn Wearable Devices: A Comparison of Machine Learning Approaches
by Louis Faust, Jie Cui, Camille Knepper, Mona Nasseri, Gregory Worrell and Benjamin H. Brinkmann
Sensors 2025, 25(17), 5562; https://doi.org/10.3390/s25175562 - 6 Sep 2025
Viewed by 1679
Abstract
Objective: To evaluate the feasibility and effectiveness of wrist-worn wearable devices combined with machine learning (ML) approaches for detecting a diverse array of seizure types beyond generalized tonic–clonic (GTC), including focal, generalized, and subclinical seizures. Materials and Methods: Twenty-eight patients undergoing inpatient video-EEG monitoring at Mayo Clinic were concurrently monitored using Empatica E4 wrist-worn devices. These devices captured accelerometry, blood volume pulse, electrodermal activity, skin temperature, and heart rate. Seizures were annotated by neurologists. The data were preprocessed to experiment with various segment lengths (10 s and 60 s) and multiple feature sets. Three ML strategies, XGBoost, deep learning models (LSTM, CNN, Transformer), and ROCKET, were evaluated using leave-one-patient-out cross-validation. Performance was assessed using area under the receiver operating characteristic curve (AUROC), seizure-wise recall (SW-Recall), and false alarms per hour (FA/h). Results: Detection performance varied by seizure type and model. GTC seizures were detected most reliably (AUROC = 0.86, SW-Recall = 0.81, FA/h = 3.03). Hyperkinetic and tonic seizures showed high SW-Recall but also high FA/h. Subclinical and aware-dyscognitive seizures exhibited the lowest SW-Recall and highest FA/h. MultiROCKET and XGBoost performed best overall, though no single model was optimal for all seizure types. Longer segments (60 s) generally reduced FA/h. Feature set effectiveness varied, with multi-biosignal sets improving performance across seizure types. Conclusions: Wrist-worn wearables combined with ML can extend seizure detection beyond GTC seizures, though performance remains limited for non-motor types. Optimizing model selection, feature sets, and segment lengths, and minimizing false alarms, are key to clinical utility and real-world adoption. Full article
(This article belongs to the Section Wearables)
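
A minimal sketch of the leave-one-patient-out evaluation protocol, assuming scikit-learn and XGBoost; `X`, `y`, and `patient_id` are placeholders for windowed biosignal features, seizure labels, and per-window patient identifiers, and the classifier is a stand-in for the models compared in the paper.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

def leave_one_patient_out_auroc(X, y, patient_id):
    """Each fold trains on all other patients and tests on the held-out one,
    so the averaged AUROC reflects cross-patient generalisation."""
    aurocs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patient_id):
        if len(np.unique(y[test_idx])) < 2:
            continue  # AUROC is undefined if the held-out patient has no seizures
        clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
        clf.fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        aurocs.append(roc_auc_score(y[test_idx], prob))
    return float(np.mean(aurocs))
```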