Search Results (60)

Search Parameters:
Keywords = fair machine learning (ML)

27 pages, 2446 KB  
Article
Machine Learning & Artificial Intelligence Powered Credit Scoring Models for Islamic Microfinance Institutions: A Blockchain Approach
by Mohammad Mushfiqul Haque Mukit, Fakhrul Hasan, Tonmoy Choudhury, Amer Al Fadli and Abubaker Fadul
Risks 2026, 14(1), 12; https://doi.org/10.3390/risks14010012 - 5 Jan 2026
Viewed by 257
Abstract
Islamic Microfinance Institutions (IMFIs) face distinct credit-scoring difficulties because they must follow Shariah principles, which combine the prohibition of riba with requirements for fair financial dealings. Conventional credit scoring models exhibit two shortcomings: poor capability to incorporate non-financial behavioral data and inadequate support for IMFIs’ requirements. We use machine learning coupled with blockchain technology to create an adaptive, Shariah-compliant credit scoring method that addresses the problems of standard evaluation systems. Using a dataset of 1275 farmers with 52 weeks of transaction data, we implemented and compared three ML models: Linear Regression, Random Forest, and Gradient Boosting. Data preparation involved addressing 53% missing transaction data, followed by aggregating weekly financial activity for predictive evaluation. Our analysis shows that the Random Forest model produced the best results, with an R-squared value of 0.87 and a Mean Squared Error (MSE) of 12.4. In binary creditworthiness classification, Gradient Boosting delivered an F1 score of 0.91, with precision of 0.89 and recall of 0.93. Blockchain integration protects data through secure mechanisms that preserve Islamic financial integrity and promote transparency. The research shows how ML and blockchain technology enable fundamental changes in IMFIs by delivering higher predictive accuracy, operational enhancements, and full transparency. The conceptual framework guides ethical financial-inclusion strategy by offering a solution for marginalized communities while remaining consistent with global sustainability objectives. The research establishes foundational elements for implementing cutting-edge technologies within IMFIs, promoting economic growth and building confidence in Shariah-compliant financial systems. Full article
(This article belongs to the Special Issue Artificial Intelligence Risk Management)
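
As a rough illustration of the model comparison described in this abstract, the sketch below fits the three named model families and reports the same metric types (R-squared, MSE, precision/recall/F1). The data is entirely synthetic: the feature matrix, repayment score, and creditworthiness label are stand-ins for the paper's aggregated 52-week transaction features, which are not reproduced here.

```python
# Illustrative sketch: comparing the three model families named in the
# abstract on synthetic data standing in for weekly transaction features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier
from sklearn.metrics import (mean_squared_error, r2_score,
                             f1_score, precision_score, recall_score)

rng = np.random.default_rng(0)
n = 1275  # mirrors the farmer count reported in the abstract
X = rng.normal(size=(n, 10))  # stand-in for aggregated 52-week features
y_score = X @ rng.normal(size=10) + rng.normal(scale=2.0, size=n)  # synthetic repayment score
y_label = (y_score > np.median(y_score)).astype(int)  # synthetic creditworthiness label

X_tr, X_te, ys_tr, ys_te, yl_tr, yl_te = train_test_split(
    X, y_score, y_label, random_state=0)

# Regression comparison: R-squared and MSE, as in the abstract.
for name, model in [("Linear Regression", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(random_state=0))]:
    pred = model.fit(X_tr, ys_tr).predict(X_te)
    print(f"{name}: R2={r2_score(ys_te, pred):.2f}, "
          f"MSE={mean_squared_error(ys_te, pred):.2f}")

# Binary creditworthiness classification: precision, recall, F1.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, yl_tr)
pred = clf.predict(X_te)
print(f"Gradient Boosting: P={precision_score(yl_te, pred):.2f}, "
      f"R={recall_score(yl_te, pred):.2f}, F1={f1_score(yl_te, pred):.2f}")
```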

22 pages, 1143 KB  
Review
AI-Enabled Precision Nutrition in the ICU: A Narrative Review and Implementation Roadmap
by George Briassoulis and Efrossini Briassouli
Nutrients 2026, 18(1), 110; https://doi.org/10.3390/nu18010110 - 28 Dec 2025
Viewed by 549
Abstract
Background: Artificial intelligence (AI) is increasingly used in intensive care units (ICUs) to enable personalized care, real-time analytics, and decision support. Nutritional therapy—a major determinant of ICU outcomes—often remains delayed or non-individualized. Objective: This study aimed to review current and emerging AI applications in ICU nutrition, highlighting clinical potential, implementation barriers, and ethical considerations. Methods: We conducted a narrative review of English-language literature (January 2018–November 2025) searched in PubMed/MEDLINE, Scopus, and Web of Science, complemented by a pragmatic Google Scholar sweep and backward/forward citation tracking. We focused on machine learning (ML), deep learning (DL), natural language processing (NLP), and reinforcement learning (RL) applications for energy/protein estimation, feeding tolerance prediction, complication prevention, and adaptive decision support in critical-care nutrition. Results: AI models can estimate energy/protein needs, optimize EN/PN initiation and composition, predict gastrointestinal (GI) intolerance and metabolic complications, and adapt therapy in real time. RL and multi-omics integration enable precision nutrition by leveraging longitudinal physiology and biomarker trajectories. Key barriers are data quality/standardization, interoperability, model interpretability, staff training, and governance (privacy, fairness, accountability). Conclusions: With high-quality data, robust oversight, and clinician education, AI can complement human expertise to deliver safer, more targeted ICU nutrition. Implementation should prioritize transparency, equity, and workflow integration. Full article
(This article belongs to the Special Issue Nutritional Support for Critically Ill Patients)

11 pages, 914 KB  
Review
Artificial Intelligence and Innovation in Oral Health Care Sciences: A Conceptual Review
by Marco Dettori, Demetrio Lamloum, Peter Lingström and Guglielmo Campus
Healthcare 2025, 13(24), 3327; https://doi.org/10.3390/healthcare13243327 - 18 Dec 2025
Viewed by 573
Abstract
Background/Objectives: Artificial intelligence (AI) has rapidly evolved from experimental algorithms to transformative tools in clinical dentistry. Between 2020 and 2025, advances in machine learning (ML) and deep learning (DL) have reshaped diagnostic imaging, caries detection, prosthodontic design, and teledentistry, while raising new ethical and regulatory challenges. This study aimed to provide a comprehensive bibliometric and conceptual review of AI applications in dental care, highlighting research trends, thematic clusters, and future directions for equitable and responsible integration of AI technologies. In addition, the review further considers the implications of AI adoption for patient-centered care, including its potential role in supporting shared decision-making processes in oral healthcare. Methods: A comprehensive search was conducted in PubMed, Scopus, and Embase for articles published between January 2020 and October 2025 using AI-related keywords in dentistry. Eligible records were analyzed using VOSviewer (v.1.6.20) to map co-occurrence networks of keywords, authors, and citations. A narrative synthesis complemented the bibliometric mapping, emphasizing conceptual and ethical dimensions of AI adoption in oral health care. Results: A total of 50 documents met the inclusion criteria. Bibliometric network visualization showed that the largest and most interconnected clusters were centered around the keywords “artificial intelligence,” “machine learning,” and “deep learning,” reflecting the technological backbone of AI-based applications in dentistry. Thematic evolution analysis indicated increasing interest in generative and multimodal AI models, explainability, and fairness in clinical deployment. Conclusions: AI has become a core driver of innovation in dentistry, enabling precision diagnostics and personalized care. However, responsible translation requires robust validation, transparency, and ethical oversight. Future research should integrate interdisciplinary approaches linking AI performance, patient outcomes, and equity in oral health. Full article

44 pages, 1431 KB  
Article
Balancing Fairness and Accuracy in Machine Learning-Based Probability of Default Modeling via Threshold Optimization
by Essodjolo Kpatcha
J. Risk Financial Manag. 2025, 18(12), 724; https://doi.org/10.3390/jrfm18120724 - 17 Dec 2025
Viewed by 850
Abstract
This study presents a fairness-aware framework for modeling the Probability of Default (PD) in individual credit scoring, explicitly addressing the trade-off between predictive accuracy and fairness. As machine learning (ML) models become increasingly prevalent in financial decision-making, concerns around bias and transparency have grown, particularly when improvements in fairness are achieved at the expense of predictive performance. To mitigate these issues, we propose a model-agnostic, post-processing threshold optimization framework that adjusts classification cut-offs using a tunable parameter, enabling institutions to balance fairness and performance objectives. This approach does not require model retraining and supports a scalarized optimization of fairness–performance trade-offs. We conduct extensive experiments with logistic regression, random forests, and XGBoost, evaluating predictive accuracy using Balanced Accuracy alongside fairness metrics such as Statistical Parity Difference and Equal Opportunity Difference. Results demonstrate that the proposed framework can substantially improve fairness outcomes with minimal impact on predictive reliability. In addition, we analyze model-specific trade-off behaviors and introduce diagnostic tools, including quadrant-based and ratio-based analyses, to guide threshold selection under varying institutional priorities. Overall, the framework offers a scalable, interpretable, and regulation-aligned solution for deploying responsible credit risk models, contributing to the broader goal of ethical and equitable financial decision-making. Full article
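
A minimal sketch of the post-processing idea in this abstract: given predicted PDs from any upstream model, search for group-specific cut-offs that maximize a scalarized objective of Balanced Accuracy minus a tunable weight times the absolute Statistical Parity Difference. The objective form, the weight `lam`, the threshold grid, and the data below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of model-agnostic, post-processing threshold optimization for
# fairness: one classification cut-off per group, tuned via a weight lam.
import numpy as np
from itertools import product
from sklearn.metrics import balanced_accuracy_score

def spd(approved, group):
    """Statistical Parity Difference: approval-rate gap between groups."""
    return approved[group == 0].mean() - approved[group == 1].mean()

def optimize_thresholds(pd_scores, y_default, group, lam=0.5,
                        grid=np.linspace(0.1, 0.9, 17)):
    best, best_obj = None, -np.inf
    for t0, t1 in product(grid, grid):  # one candidate cut-off per group
        thresh = np.where(group == 0, t0, t1)
        approved = (pd_scores < thresh).astype(int)  # approve if PD below cut-off
        pred_default = 1 - approved
        obj = (balanced_accuracy_score(y_default, pred_default)
               - lam * abs(spd(approved, group)))
        if obj > best_obj:
            best, best_obj = (t0, t1), obj
    return best

# Synthetic PDs with a deliberate group bias, standing in for real scores.
rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)
pd_scores = np.clip(rng.beta(2, 5, n) + 0.08 * group, 0, 1)
y_default = rng.binomial(1, pd_scores)
print(optimize_thresholds(pd_scores, y_default, group, lam=0.5))
```

Because the search operates only on scores and labels, no retraining of the upstream model is needed, which matches the model-agnostic framing of the abstract.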

27 pages, 3549 KB  
Systematic Review
Machine Learning Approaches for Evaluating STEM Education Projects Within Artificial Intelligence in Education: A Systematic Review of Frameworks and Rubrics
by Catalina Muñoz-Collazos and Carolina González-Serrano
Appl. Sci. 2025, 15(23), 12812; https://doi.org/10.3390/app152312812 - 3 Dec 2025
Viewed by 712
Abstract
Objectively evaluating STEM education projects is essential for ensuring fairness, consistency, and evidence-based instructional decisions. Recent interest in data-informed approaches highlights the use of standardized frameworks, rubric-based assessment, and computational techniques to support more transparent evaluation practices. This systematic review examines how machine learning (ML) techniques—within the broader field of Artificial Intelligence in Education (AIED)—contribute to the evaluation of STEM projects. Following Kitchenham’s guidelines and PRISMA 2020, searches were conducted across Scopus, Web of Science, ScienceDirect, and IEEE Xplore, resulting in 39 studies published between 2020 and 2025. The findings show that current STEM frameworks emphasize disciplinary integration, inquiry, creativity, and collaboration, while rubrics operationalize these principles through measurable criteria. ML techniques have been applied to classification, prediction, and multidimensional analysis; however, these computational approaches remain largely independent from established frameworks and rubric structures. Existing ML models demonstrate feasibility for modeling evaluative indicators but do not yet integrate pedagogical constructs within automated assessment pipelines. By synthesizing evidence across frameworks, rubrics, and ML techniques, this review clarifies the methodological landscape and identifies opportunities to advance scalable, transparent, and pedagogically aligned evaluation practices. The results provide a conceptual foundation for future work aimed at developing integrative and trustworthy ML-supported evaluation systems in STEM education. Full article

30 pages, 875 KB  
Article
AspectFL: Aspect-Oriented Programming for Trustworthy and Compliant Federated Learning Systems
by Anas AlSobeh, Amani Shatnawi and Aws Magableh
Information 2025, 16(12), 1048; https://doi.org/10.3390/info16121048 - 1 Dec 2025
Cited by 2 | Viewed by 674
Abstract
Federated learning (FL) has emerged as a paradigm-shifting approach for collaborative machine learning (ML) while preserving data privacy. However, existing FL frameworks face significant challenges in ensuring trustworthiness, regulatory compliance, and security across heterogeneous institutional environments. We introduce AspectFL, a novel aspect-oriented programming (AOP) framework that seamlessly integrates trust, compliance, and security concerns into FL systems through cross-cutting aspect weaving. Our framework implements four core aspects: FAIR (Findability, Accessibility, Interoperability, Reusability) compliance, security threat detection and mitigation, provenance tracking, and institutional policy enforcement. AspectFL employs a sophisticated aspect weaver that intercepts FL execution at critical joinpoints, enabling dynamic policy enforcement and real-time compliance monitoring without modifying core learning algorithms. We demonstrate AspectFL’s effectiveness through experiments on healthcare and financial datasets, including a detailed and reproducible evaluation on the real-world MIMIC-III dataset. Our results, reported with 95% confidence intervals and validated with appropriate statistical tests, show significant improvements in model performance, with a 4.52% and 0.90% increase in Area Under the Curve (AUC) for the healthcare and financial scenarios, respectively. Furthermore, we present a detailed ablation study, a comparative benchmark against existing FL frameworks, and an empirical scalability analysis, demonstrating the practical viability of our approach. AspectFL achieves high FAIR compliance scores (0.762), robust security (0.798 security score), and consistent policy adherence (over 84%), establishing a new standard for trustworthy FL. Full article
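
A toy illustration of the aspect-weaving idea, assuming a Python decorator as the weaver: before/after advice is attached to an FL joinpoint (here, the client-update step) to add provenance logging and update clipping without modifying the learning code. All names below (aspect, log_provenance, clip_update) are hypothetical and far simpler than AspectFL's four production aspects.

```python
# Minimal AOP-style sketch: weave cross-cutting concerns around an FL
# joinpoint via a decorator, leaving the core learning function untouched.
import functools
import numpy as np

AUDIT_LOG = []  # stand-in for provenance tracking

def aspect(before=None, after=None):
    """Wrap a function with before/after advice, AOP-style."""
    def weave(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if before:
                before(fn.__name__, args, kwargs)
            result = fn(*args, **kwargs)
            if after:
                result = after(fn.__name__, result)
            return result
        return wrapper
    return weave

def log_provenance(name, args, kwargs):
    AUDIT_LOG.append(f"joinpoint={name} invoked")

def clip_update(name, update):
    # Security advice: bound the update norm to blunt poisoning attempts.
    norm = np.linalg.norm(update)
    return update / max(1.0, norm / 5.0)

@aspect(before=log_provenance, after=clip_update)
def client_update(weights, local_data):
    # Core learning step stays aspect-free (a dummy gradient step here).
    return weights - 0.1 * np.sign(local_data.mean(axis=0))

w = client_update(np.zeros(4), np.random.default_rng(2).normal(size=(32, 4)))
print(w, AUDIT_LOG)
```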

39 pages, 3961 KB  
Article
Traditional Machine Learning Outperforms EEGNet for Consumer-Grade EEG Emotion Recognition: A Comprehensive Evaluation with Cross-Dataset Validation
by Carlos Rodrigo Paredes Ocaranza, Bensheng Yun and Enrique Daniel Paredes Ocaranza
Sensors 2025, 25(23), 7262; https://doi.org/10.3390/s25237262 - 28 Nov 2025
Viewed by 1151
Abstract
Objective. Consumer-grade EEG devices have the potential for widespread brain–computer interface deployment but pose significant challenges for emotion recognition due to reduced spatial coverage and the variable signal quality encountered in uncontrolled deployment environments. While deep learning approaches have employed increasingly complex architectures, their efficacy on noisy consumer-grade signals and their cross-system generalizability remain unexplored. We present a comprehensive systematic comparison of the EEGNet architecture, which has become a benchmark model for consumer-grade EEG analysis, versus traditional machine learning, examining when and why domain-specific feature engineering outperforms end-to-end learning in resource-constrained scenarios. Approach. We conducted comprehensive within-dataset evaluation using the DREAMER dataset (23 subjects, Emotiv EPOC 14-channel) and challenging cross-dataset validation (DREAMER→SEED-VII transfer). Traditional ML employed domain-specific feature engineering (statistical, frequency-domain, and connectivity features) with random forest classification. Deep learning employed both optimized and enhanced EEGNet architectures, specifically designed for low-channel consumer EEG systems. For cross-dataset validation, we implemented progressive domain adaptation combining anatomical channel mapping, CORAL adaptation, and TCA subspace learning. Statistical validation included 345 comprehensive evaluations with fivefold cross-validation × 3 seeds × 23 subjects, Wilcoxon signed-rank tests, and Cohen’s d effect size calculations. Main results. Traditional ML achieved superior within-dataset performance (F1 = 0.945 ± 0.034 versus 0.567 for EEGNet architectures, p < 0.000001, Cohen’s d = 3.863, a 67% improvement) across 345 evaluations. Cross-dataset validation demonstrated good performance (F1 = 0.619 versus 0.007) through systematic domain adaptation. Progressive improvements included anatomical channel mapping (5.8× improvement), CORAL domain adaptation (2.7× improvement), and TCA subspace learning (4.5× improvement). Feature analysis revealed that inter-channel connectivity patterns contributed 61% of the discriminative power. Traditional ML demonstrated superior computational efficiency (95% faster training, 10× faster inference) and excellent stability (CV = 0.036). Fairness validation experiments showed that the advantage of traditional ML persists even with minimal feature engineering (F1 = 0.842 vs. 0.646 for enhanced EEGNet), and robustness analysis revealed that deep learning degrades more under consumer-grade noise conditions (17% vs. <1% degradation). Significance. These findings challenge the assumption that architectural complexity universally improves biosignal processing performance in consumer-grade applications. Through the comparison of traditional ML against the EEGNet consumer-grade architecture, we highlight that domain-specific feature engineering and lightweight adaptation techniques can provide superior accuracy, stability, and practical deployment capabilities for consumer-grade EEG emotion recognition. While our empirical comparison focused on EEGNet, the underlying principles regarding data efficiency, noise robustness, and the value of domain expertise could extend to comparisons with other complex architectures facing similar constraints in further research. This comprehensive domain adaptation framework enables robust cross-system deployment, addressing critical gaps in real-world BCI applications. Full article
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (3rd Edition))
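
Of the adaptation steps named in the abstract, CORAL (correlation alignment, Sun et al., 2016) is the most compact to sketch: recolor source features so that their second-order statistics match the target domain's. The sketch below uses synthetic feature matrices rather than DREAMER or SEED-VII data.

```python
# Minimal CORAL sketch: whiten source features, then recolor them with the
# target covariance so the two domains share second-order statistics.
import numpy as np
from scipy import linalg

def coral(Xs, Xt, eps=1e-5):
    """Align source covariance to target covariance."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # Whiten the centered source, then recolor with the target covariance.
    Xs_white = (Xs - Xs.mean(0)) @ linalg.inv(linalg.sqrtm(Cs).real)
    return Xs_white @ linalg.sqrtm(Ct).real + Xt.mean(0)

rng = np.random.default_rng(3)
Xs = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))  # source features
Xt = rng.normal(size=(150, 8)) @ rng.normal(size=(8, 8))  # target features
Xs_aligned = coral(Xs, Xt)
print(np.allclose(np.cov(Xs_aligned, rowvar=False),
                  np.cov(Xt, rowvar=False), atol=1.0))
```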

21 pages, 3073 KB  
Review
Relevance and Evolution of Benchmarking in Computer Systems: A Comprehensive Historical and Conceptual Review
by Isaac Zablah, Lilian Sosa-Díaz and Antonio Garcia-Loureiro
Computers 2025, 14(12), 516; https://doi.org/10.3390/computers14120516 - 26 Nov 2025
Viewed by 983
Abstract
Benchmarking has been central to performance evaluation for more than four decades. Reinhold P. Weicker’s 1990 survey in IEEE Computer offered an early, rigorous critique of standard benchmarks, warning about pitfalls that continue to surface in contemporary practice. This review synthesizes the evolution from classical synthetic benchmarks (Whetstone, Dhrystone) and application kernels (LINPACK) to modern suites (SPEC CPU2017), domain-specific metrics (TPC), data-intensive and graph workloads (Graph500), and Artificial Intelligence/Machine Learning (AI/ML) benchmarks (MLPerf, TPCx-AI). We emphasize energy and sustainability (Green500, SPECpower, MLPerf Power), reproducibility (artifacts, environments, rules), and domain-specific representativeness, especially in biomedical and bioinformatics contexts. Building upon Weicker’s methodological cautions, we formulate a concise checklist for fair, multidimensional, reproducible benchmarking and identify open challenges and future directions. Full article

21 pages, 2680 KB  
Review
Big Data and AI-Enabled Construction of a Novel Gemstone Database: Challenges, Methodologies, and Future Perspectives
by Yu Zhang and Guanghai Shi
Minerals 2025, 15(11), 1149; https://doi.org/10.3390/min15111149 - 31 Oct 2025
Viewed by 1450
Abstract
Gemstone samples, as objects of study in gemology, carry rich geological information and cultural value, playing an irreplaceable role in teaching, research, and public science communication. In the current age of big data, machine learning and artificial intelligence techniques based on gemstone databases have emerged as a cutting-edge area of gemology. However, traditional gemstone databases have three major limitations: an absence of standardized data schemas, incomplete core datasets (e.g., records of synthetic and treated gemstones and inclusion characteristics), and poor data interoperability. These deficiencies hinder the application of advanced technologies, such as machine learning (ML) and AI techniques. This paper reviews gemstone data and applications, as well as existing gem-related sample databases, and proposes a framework for a new gemstone database based on standardization (FAIR principles), integration (blockchain technology), and dynamism (real-time updates). This framework could transform the gemstone industry, shifting it from “experience-driven” to “data-driven” practices. Powered by big data technology, this novel database will revolutionize gemological research, jewelry authentication, market transactions, and educational outreach, fostering innovation in academic research and practical applications. Full article

24 pages, 751 KB  
Review
Integrating Advanced Metabolomics and Machine Learning for Anti-Doping in Human Athletes
by Mohannad N. AbuHaweeleh, Ahmad Hamdan, Jawaher Al-Essa, Shaikha Aljaal, Nasser Al Saad, Costas Georgakopoulos, Francesco Botre and Mohamed A. Elrayess
Metabolites 2025, 15(11), 696; https://doi.org/10.3390/metabo15110696 - 27 Oct 2025
Viewed by 1827
Abstract
The ongoing challenge of doping in sports has triggered the adoption of advanced scientific strategies for the detection and prevention of doping abuse. This review examines the potential of integrating metabolomics aided by artificial intelligence (AI) and machine learning (ML) for profiling small-molecule metabolites across biological systems to advance anti-doping efforts. While traditional targeted detection methods serve a primarily forensic role—providing legally defensible evidence by directly identifying prohibited substances—metabolomics offers complementary insights by revealing both exogenous compounds and endogenous physiological alterations that may persist beyond direct drug detection windows, rather than serving as an alternative to routine forensic testing. High-throughput platforms such as UHPLC-HRMS and NMR, coupled with targeted and untargeted metabolomic workflows, can provide comprehensive datasets that help discriminate between doped and clean athlete profiles. However, the complexity and dimensionality of these datasets necessitate sophisticated computational tools. ML algorithms, including supervised models like XGBoost and multi-layer perceptrons, and unsupervised methods such as clustering and dimensionality reduction, enable robust pattern recognition, classification, and anomaly detection. These approaches enhance both the sensitivity and specificity of diagnostic screening and optimize resource allocation. Case studies illustrate the value of integrating metabolomics and ML—for example, detecting recombinant human erythropoietin (r-HuEPO) use via indirect blood markers and uncovering testosterone and corticosteroid abuse with extended detection windows. Future progress will rely on interdisciplinary collaboration, open-access data infrastructure, and continuous methodological innovation to fully realize the complementary role of these technologies in supporting fair play and athlete well-being. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Metabolomics)
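
To make the abstract's "anomaly detection" role concrete, here is a minimal unsupervised sketch: fit an outlier detector on a clean reference population of metabolite profiles and flag deviating samples. The features, the spiked markers, and the choice of IsolationForest are illustrative assumptions, not a method prescribed by the review.

```python
# Illustrative anomaly-detection sketch on synthetic metabolite profiles:
# train on a "clean" reference cohort, flag samples that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)
clean = rng.normal(0, 1, size=(300, 20))  # reference metabolite profiles
doped = rng.normal(0, 1, size=(10, 20))
doped[:, :3] += 4.0  # shift a few marker metabolites in the test samples

model = IsolationForest(contamination=0.05, random_state=0).fit(clean)
flags = model.predict(doped)  # -1 = anomalous, 1 = inlier
print(f"flagged {np.sum(flags == -1)} of {len(doped)} spiked samples")
```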

27 pages, 1802 KB  
Perspective
Toward Artificial Intelligence in Oncology and Cardiology: A Narrative Review of Systems, Challenges, and Opportunities
by Visar Vela, Ali Yasin Sonay, Perparim Limani, Lukas Graf, Besmira Sabani, Diona Gjermeni, Andi Rroku, Arber Zela, Era Gorica, Hector Rodriguez Cetina Biefer, Uljad Berdica, Euxhen Hasanaj, Adisa Trnjanin, Taulant Muka and Omer Dzemali
J. Clin. Med. 2025, 14(21), 7555; https://doi.org/10.3390/jcm14217555 - 24 Oct 2025
Viewed by 1691
Abstract
Background: Artificial intelligence (AI), the overarching field that includes machine learning (ML) and its subfield deep learning (DL), is rapidly transforming clinical research by enabling the analysis of high-dimensional data and automating the output of diagnostic and prognostic tests. As clinical trials become increasingly complex and costly, ML-based approaches (especially DL for image and signal data) offer promising solutions, although they require new approaches in clinical education. Objective: To explore current and emerging AI applications in oncology and cardiology, highlight real-world use cases, and discuss the challenges and future directions for responsible AI adoption. Methods: This narrative review summarizes various aspects of AI technology in clinical research, exploring its promise, use cases, and limitations. The review was based on a literature search in PubMed covering publications from 2019 to 2025. Search terms included “artificial intelligence”, “machine learning”, “deep learning”, “oncology”, “cardiology”, “digital twin”, and “AI-ECG”. Preference was given to studies presenting validated or clinically applicable AI tools, while non-English articles, conference abstracts, and gray literature were excluded. Results: AI demonstrates significant potential in improving diagnostic accuracy, facilitating biomarker discovery, and detecting disease at an early stage. In clinical trials, AI improves patient stratification, site selection, and virtual simulations via digital twins. However, challenges remain in data harmonization, model validation, cross-disciplinary training, fairness, explainability, and the robustness of the gold standards against which AI models are built. Conclusions: The integration of AI in clinical research can enhance efficiency, reduce costs, and facilitate clinical research, as well as lead the way toward personalized medicine. Realizing this potential requires robust validation frameworks, transparent model interpretability, and collaborative efforts among clinicians, data scientists, and regulators. Interoperable data systems and cross-disciplinary education will be critical to enabling the integration of scalable, ethical, and trustworthy AI into healthcare. Full article
(This article belongs to the Section Clinical Research Methods)

22 pages, 6925 KB  
Article
Adaptive Urban Heat Mitigation Through Ensemble Learning: Socio-Spatial Modeling and Intervention Analysis
by Wanyun Ling and Liyang Chu
Buildings 2025, 15(21), 3820; https://doi.org/10.3390/buildings15213820 - 23 Oct 2025
Viewed by 727
Abstract
Urban Heat Islands (UHIs) are intensifying under climate change, exacerbating thermal exposure risks for socially vulnerable populations. While the role of urban environmental features in shaping UHI patterns is well recognized, their differential impacts on diverse social groups remain underexplored—limiting the development of equitable, context-sensitive mitigation strategies. To address this challenge, we employ an interpretable ensemble machine learning framework to quantify how vegetation, water proximity, and built form influence UHI exposure across social strata and simulate the outcomes of alternative urban interventions. Drawing on data from 1660 Dissemination Areas in Vancouver, we model UHI across seasonal and diurnal contexts, integrating environmental variables with socio-demographic indicators to evaluate both thermal and equity outcomes. Our ensemble AutoML framework demonstrates strong predictive accuracy across these contexts (R2 up to 0.79), providing reliable estimates of UHI dynamics. Results reveal that increasing vegetation cover consistently delivers the strongest cooling benefits (up to 2.95 °C) while advancing social equity, though fairness improvements become consistent only when vegetation intensity exceeds 1.3 times the baseline level. Water-related features yield additional cooling of approximately 1.15–1.5 °C, whereas built-form interventions yield trade-offs between cooling efficacy and fairness. Notably, modest reductions in building coverage or road density can meaningfully enhance distributional justice with limited thermal compromise. These findings underscore the importance of tailoring mitigation strategies not only for climatic impact but also for social equity. Our study offers a scalable analytical approach for designing just and effective urban climate adaptations, advancing both environmental sustainability and inclusive urban resilience in the face of intensifying heat risks. Full article
(This article belongs to the Special Issue Advancing Urban Analytics and Sensing for Sustainable Cities)
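
The intervention analysis described in this abstract can be sketched in a few lines: fit an ensemble regressor to predict UHI intensity from environmental features, then rescale the vegetation feature (here by the 1.3× factor the abstract mentions) and compare predictions. Everything below is synthetic; only the Dissemination Area count and the scaling factor echo the abstract.

```python
# Sketch of ensemble-based intervention analysis: predict UHI intensity,
# then simulate a vegetation increase and measure the predicted cooling.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
n = 1660  # mirrors the Dissemination Area count in the abstract
veg = rng.uniform(0, 1, n)          # vegetation cover fraction
water_dist = rng.uniform(0, 5, n)   # distance to water (km)
built = rng.uniform(0, 1, n)        # building coverage fraction
uhi = 4.0 - 3.0 * veg + 0.3 * water_dist + 2.0 * built + rng.normal(0, 0.3, n)

X = np.column_stack([veg, water_dist, built])
model = GradientBoostingRegressor(random_state=0).fit(X, uhi)

X_intervened = X.copy()
X_intervened[:, 0] = np.clip(X[:, 0] * 1.3, 0, 1)  # boost vegetation 1.3x
cooling = model.predict(X) - model.predict(X_intervened)
print(f"mean predicted cooling: {cooling.mean():.2f} °C")
```

Equity analysis would follow the same pattern, splitting the predicted cooling by socio-demographic strata before and after the simulated intervention.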

31 pages, 1634 KB  
Systematic Review
Machine Learning Techniques for Nematode Microscopic Image Analysis: A Systematic Review
by Jose Luis Jimenez, Prem Gandhi, Devadharshini Ayyappan, Adrienne Gorny, Weimin Ye and Edgar Lobaton
AgriEngineering 2025, 7(11), 356; https://doi.org/10.3390/agriengineering7110356 - 22 Oct 2025
Viewed by 1467
Abstract
Farmers rely on nematode analysis for critical crop management decisions, yet traditional detection and classification methods remain subjective, labor-intensive, and time-consuming. Advances in Machine Learning (ML) and Deep Learning (DL) offer scalable solutions for automating microscopy-based nematode analyses. This systematic literature review (SLR) analyzed 44 articles published between 2018 and 2024 on ML/DL-based nematode image analysis, selected from 1460 records screened across Web of Science, IEEE Xplore, Agricola, and supplemental Google Scholar searches. The quality of reporting was examined by considering dataset documentation and code availability. The results were synthesized narratively, as diversity in datasets, tasks, and metrics precluded a meta-analysis. Performance was primarily reported using accuracy, precision, recall, F1-score, Dice coefficient, Intersection over Union (IoU), and average precision (AP). CNNs were the most commonly used architectures, with models such as YOLO providing the best detection performance. Transformer-based models excelled in dense segmentation and counting. Despite strong performance, challenges include limited training data, occlusion, and inconsistent metric reporting across tasks. Although ML/DL models hold promise for scalable nematode analysis, future research should prioritize real-world validation, diverse nematode datasets, and standardized benchmark datasets to enable fair and reproducible model comparison. Full article

34 pages, 1960 KB  
Article
Quantum-Inspired Hybrid Metaheuristic Feature Selection with SHAP for Optimized and Explainable Spam Detection
by Qusai Shambour, Mahran Al-Zyoud and Omar Almomani
Symmetry 2025, 17(10), 1716; https://doi.org/10.3390/sym17101716 - 13 Oct 2025
Cited by 1 | Viewed by 904
Abstract
The rapid growth of digital communication has intensified spam-related threats, including phishing and malware, which employ advanced evasion tactics. Traditional filtering methods struggle to keep pace, driving the need for sophisticated machine learning (ML) solutions. The effectiveness of ML models hinges on selecting high-quality input features, especially in high-dimensional datasets where irrelevant or redundant attributes impair performance and computational efficiency. Guided by principles of symmetry to achieve an optimal balance between model accuracy, complexity, and interpretability, this study proposes an Enhanced Hybrid Quantum-Inspired Firefly and Artificial Bee Colony (EHQ-FABC) algorithm for feature selection in spam detection. EHQ-FABC leverages the Firefly Algorithm’s local exploitation and the Artificial Bee Colony’s global exploration, augmented with quantum-inspired principles to maintain search space diversity and a symmetrical balance between exploration and exploitation. It eliminates redundant attributes while preserving predictive power. For interpretability, Shapley Additive Explanations (SHAP) are employed to ensure symmetry in explanation, meaning features with equal contributions are assigned equal importance, providing a fair and consistent interpretation of the model’s decisions. Evaluated on the ISCX-URL2016 dataset, EHQ-FABC reduces features by over 76%, retaining only 17 of 72 features, while matching or outperforming filter, wrapper, embedded, and metaheuristic methods. Tested across ML classifiers like CatBoost, XGBoost, Random Forest, Extra Trees, Decision Tree, K-Nearest Neighbors, Logistic Regression, and Multi-Layer Perceptron, EHQ-FABC achieves a peak accuracy of 99.97% with CatBoost and robust results across tree ensembles, neural, and linear models. SHAP analysis highlights features like domain_token_count and NumberOfDotsinURL as key for spam detection, offering actionable insights for practitioners. EHQ-FABC provides a reliable, transparent, and efficient symmetry-aware solution, advancing both accuracy and explainability in spam detection. Full article
(This article belongs to the Section Computer)
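
The interpretability step pairs naturally with a short sketch: fit a classifier on a retained feature subset and rank the features by mean absolute SHAP value. The metaheuristic selector itself is out of scope here, so a simple variance-based mask stands in for EHQ-FABC, and synthetic data stands in for ISCX-URL2016; only the 17-of-72 retention ratio echoes the abstract.

```python
# Sketch of post-selection SHAP ranking: classify on a reduced feature set,
# then rank retained features by mean |SHAP| contribution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
import shap  # third-party: pip install shap

X, y = make_classification(n_samples=500, n_features=72, n_informative=17,
                           random_state=0)
keep = np.argsort(X.var(axis=0))[-17:]  # stand-in mask: retain 17 of 72 features
clf = GradientBoostingClassifier(random_state=0).fit(X[:, keep], y)

explainer = shap.TreeExplainer(clf)
# Reshape defensively so both array- and list-shaped SHAP outputs work.
vals = np.abs(np.asarray(explainer.shap_values(X[:, keep]))).reshape(-1, len(keep))
ranking = np.argsort(vals.mean(axis=0))[::-1]
print("most influential retained features:", keep[ranking[:5]])
```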

52 pages, 3501 KB  
Review
The Role of Artificial Intelligence and Machine Learning in Advancing Civil Engineering: A Comprehensive Review
by Ali Bahadori-Jahromi, Shah Room, Chia Paknahad, Marwah Altekreeti, Zeeshan Tariq and Hooman Tahayori
Appl. Sci. 2025, 15(19), 10499; https://doi.org/10.3390/app151910499 - 28 Sep 2025
Cited by 5 | Viewed by 6185
Abstract
The integration of artificial intelligence (AI) and machine learning (ML) has revolutionised civil engineering, enhancing predictive accuracy, decision-making, and sustainability across domains such as structural health monitoring, geotechnical analysis, transportation systems, water management, and sustainable construction. This paper presents a detailed review of peer-reviewed publications from the past decade, employing bibliometric mapping and critical evaluation to analyse methodological advances, practical applications, and limitations. A novel taxonomy is introduced, classifying AI/ML approaches by civil engineering domain, learning paradigm, and adoption maturity to guide future development. Key applications include pavement condition assessment, slope stability prediction, traffic flow forecasting, smart water management, and flood forecasting, leveraging techniques such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), Support Vector Machines (SVMs), and hybrid physics-informed neural networks (PINNs). The review highlights challenges, including limited high-quality datasets, absence of AI provisions in design codes, integration barriers with IoT-based infrastructure, and computational complexity. While explainable AI tools like SHAP and LIME improve interpretability, their practical feasibility in safety-critical contexts remains constrained. Ethical considerations, including bias in training datasets and regulatory compliance, are also addressed. Promising directions include federated learning for data privacy, transfer learning for data-scarce regions, digital twins, and adherence to FAIR data principles. This study underscores AI as a complementary tool, not a replacement, for traditional methods, fostering a data-driven, resilient, and sustainable built environment through interdisciplinary collaboration and transparent, explainable systems. Full article
(This article belongs to the Section Civil Engineering)