Search Results (146)

Search Parameters:
Keywords = AI bias mitigation

26 pages, 1813 KB  
Review
Artificial Intelligence in Sports Medicine: A Decision-Centered Framework for the Future Sports Physician
by Stefano Palermi, Rita Pucciatti, Nor-Eddine Regnard, Ali Guermazi, Fabiano Araujo, Andrea Demeco, Yosra Mekki, Giuseppe D’Antona, Alessia Guarnera, Simone Cerciello, Matteo Guzzini and Marco Vecchiato
Diagnostics 2026, 16(10), 1448; https://doi.org/10.3390/diagnostics16101448 - 9 May 2026
Viewed by 539
Abstract
Background: Artificial intelligence (AI) is rapidly transforming healthcare, with increasing applications in sports medicine. Advances in machine learning, deep learning, and computer vision enable the analysis of large, heterogeneous datasets derived from imaging, wearable sensors, performance-monitoring systems, and electronic health records. While these technologies offer opportunities to enhance injury prevention, diagnostic accuracy, rehabilitation monitoring, and clinical decision-making, their integration into athlete care remains complex and context-dependent. Methods: A structured narrative review of the PubMed/MEDLINE database was conducted to identify clinically relevant AI applications in sports medicine. The search focused on key domains including injury risk prediction, musculoskeletal imaging, rehabilitation monitoring, return-to-play assessment, performance management, and clinical workflow support. Evidence from original studies, reviews, methodological reports, and regulatory documents was qualitatively synthesized to provide an overview of current applications, methodological limitations, and decision-level implications. Results: AI demonstrates growing utility across multiple domains of sports medicine. Machine learning models can identify complex, non-linear relationships among training load, physiological responses, and injury risk, though their predictive performance varies widely and is often limited by dataset heterogeneity and a lack of external validation. In musculoskeletal imaging, AI-based algorithms support automated detection and quantification of abnormalities, with performance in selected tasks approaching that of expert readers, yet remaining task-specific and context-dependent. Emerging applications include movement analysis and rehabilitation monitoring through wearable sensors and computer vision systems, as well as data-driven support for return-to-play decisions and clinical workflow optimization. 
However, current evidence highlights important limitations, including algorithmic bias, limited generalizability, poor interpretability, and the risk of misapplication in complex clinical decision-making contexts. Conclusions: AI is likely to become an important decision-support layer in sports medicine by enabling data integration and longitudinal monitoring. However, model performance does not necessarily translate into improved clinical outcomes, and AI-generated predictions remain probabilistic and context-sensitive. Consequently, clinical decisions—particularly high-stakes processes such as return-to-play—require structured integration of AI outputs within a broader clinical framework. The sports physician remains central as a human-in-the-loop integrator, responsible for contextualizing AI-derived information, mitigating potential errors, and ensuring safe, individualized athlete management. Full article
(This article belongs to the Special Issue Artificial Intelligence in Sports Medicine: Diagnosis and Management)

18 pages, 676 KB  
Review
Artificial Intelligence Tools in Precision Lung Cancer Care: From Early Detection to Clinical Decision Support
by Christopher R. Grant, Sandip P. Patel and Tali Azenkot
Cancers 2026, 18(9), 1455; https://doi.org/10.3390/cancers18091455 - 1 May 2026
Viewed by 536
Abstract
Thoracic malignancies are uniquely positioned for the integration of emerging technologies such as artificial intelligence (AI), which have the potential to advance precision oncology across the cancer care continuum. In cancer screening, AI has emerged as a promising strategy to enhance diagnostic accuracy, efficiency, and scalability. Deep learning applied to pathology (pathomics) and imaging (radiomics) has enabled the development of novel, noninvasive tools capable of predicting histologic and molecular features that may correlate with treatment response or toxicity. In drug discovery, computational approaches can analyze large-scale genomic, chemical, and clinical datasets to accelerate target identification and match candidate compounds to available targets; this may be particularly useful in the context of resistance to targeted therapy. AI tools may also support treatment planning for radiation and surgery, guide systemic therapy selection, and facilitate continuous monitoring for early identification of treatment resistance or toxicity. As these technologies are integrated into clinical workflows, careful attention to ethical, regulatory, and clinical governance frameworks will be essential to ensure equitable implementation and bias mitigation. Maintaining human oversight and a human-centered approach remain critical, as complex treatment decisions and sensitive patient interactions are central to the care of patients with thoracic malignancies. Full article

28 pages, 4046 KB  
Systematic Review
From Pre-Rendered to Autonomous: A Systematic Review of AI-Driven Character Animation and Embodiment in Virtual Reality
by Anastasios Theodoropoulos
Virtual Worlds 2026, 5(2), 20; https://doi.org/10.3390/virtualworlds5020020 - 29 Apr 2026
Viewed by 663
Abstract
In recent years, the generation and animation of avatars in virtual reality (VR) have undergone a definitive paradigm shift, transitioning from pre-rendered, manually rigged meshes to autonomous, AI-driven digital entities. While individual algorithms have been extensively studied, there is a critical lack of comprehensive synthesis regarding how these generative models impact the broader sociotechnical ecosystem of Spatial Computing. To address this gap, this systematic literature review, conducted in accordance with PRISMA guidelines, analyzed 48 primary studies to evaluate the intersection of Generative AI, hardware architecture, human psychology, and digital ethics. The synthesis reveals a deeply interdependent ecosystem. While advanced neural rendering and diffusion models (RQ1) successfully bypass traditional 3D authoring bottlenecks, their pursuit of absolute visual fidelity severely antagonizes the thermal and latency constraints of standalone mobile hardware (RQ2). The literature demonstrates that failing to mitigate these bottlenecks through hardware–software co-design (e.g., specialized ASICs, gaze-contingent foveation) inevitably shatters the user’s sensorimotor loop, collapsing the sense of agency and triggering the Kinematic Uncanny Valley (RQ3). Furthermore, as these hyper-realistic avatars achieve kinematic autonomy, they introduce unprecedented sociotechnical vulnerabilities regarding spatial privacy, dataset bias, and post-mortem digital identity (RQ4). Ultimately, this review concludes that realizing a compelling and inclusive AI-driven Metaverse is no longer an isolated computer graphics challenge; it demands a rigorous, interdisciplinary paradigm shift where algorithms, silicon architectures, and cognitive psychology are inextricably co-designed under a foundational framework of digital ethics. Full article

26 pages, 663 KB  
Review
Globalization in the Healthcare Industry: Drivers, Risks, and Adaptation
by Anasztázia Kész and Ildikó Balatoni
Healthcare 2026, 14(9), 1177; https://doi.org/10.3390/healthcare14091177 - 28 Apr 2026
Viewed by 505
Abstract
Globalization refers to the increasing density of economic, social, and technological interconnections on a global scale. In the healthcare industry, it simultaneously accelerates innovation and increases systemic vulnerabilities. This study aims to review and conceptually synthesise the main channels of impact: (1) pharmaceuticals, clinical development, and regulation; (2) supply chains and resilience; (3) service mobility (health tourism); (4) human resources and competencies; (5) digitalization, artificial intelligence (AI), and data governance; (6) ethics, law, and public policy; and (7) sustainability and climate change. The COVID-19 pandemic highlighted the risks associated with global interdependencies, particularly in supply chains, while also demonstrating the innovation-accelerating effects of knowledge sharing and international cooperation. Particular attention is given to artificial intelligence and digital health, which open up new potential for efficiency and quality improvement from research and development through diagnostics to healthcare organization, while simultaneously intensifying concerns related to data protection, cyber security, and liability. Telemedicine, platform-based systems, and real-world data may contribute to addressing the care needs of ageing societies, but only when supported by appropriate competencies and sound data governance. As global data flows intensify, the importance of data protection, bias mitigation, transparency, and accountability correspondingly increases. Through the cultural channels of globalization, health-conscious lifestyles and complementary approaches are also spreading, which we address in a brief, separate subsection. The guidelines of international organizations foster standardization; however, due to differences in local capacities and institutional environments, the effects are not homogeneous. 
In conclusion, the study emphasises the dual nature of globalization; it expands access and accelerates innovation, while at the same time creating new vulnerabilities—in supply chains, labour mobility, and data security—and, together with climate-related risks, generating complex adaptive pressures for the healthcare industry. Full article
(This article belongs to the Section Healthcare and Sustainability)

36 pages, 1713 KB  
Article
Software Unfairness Detection in Machine Learning-Based Systems: A Systematic Mapping Study
by Roa Alharbi and Noureddine Abbadeni
Software 2026, 5(2), 18; https://doi.org/10.3390/software5020018 - 27 Apr 2026
Viewed by 325
Abstract
Machine learning-based systems are increasingly deployed in high-stakes domains, such as healthcare, finance, law, and e-commerce, where their predictions directly influence critical decisions. Although these systems offer powerful data-driven support, they also introduce serious concerns related to fairness, bias, and discrimination. As a result, detecting and addressing unfairness in machine learning software has become a central research challenge. This study presents a systematic mapping of research on software unfairness detection in machine learning systems, with the aim of consolidating existing fairness definitions, identifying major problem types, examining testing approaches, reviewing commonly used datasets, and highlighting open research gaps. A structured search was conducted across five major digital libraries and additional sources, covering publications from 2010 to 2025. From 1805 initially identified records, 67 primary studies met the inclusion and quality assessment criteria. The findings show that research activity has grown significantly since 2019, reaching a peak in 2022. Most studies were published in conference proceedings, accounting for 52% of the primary studies, followed by journals and workshop proceedings, which accounted for 42% and 6% of the primary studies, respectively. The literature encompasses multiple research themes, with 36% of the primary studies focusing on the analysis of existing fairness methods, 22% addressing bias mitigation strategies, 30% investigating testing techniques, and 12% proposing or evaluating evaluation frameworks. Fairness testing was conducted across multiple testing levels, including unit, integration, and system testing. Integration-level testing was the most prevalent, accounting for approximately 37.9% of the studies, followed by system-level testing at 27.3% and unit-level testing at 12.1%. Additionally, 22.7% of the studies applied fairness testing across more than one testing level.
Frequently used datasets included COMPAS, Adult Census Income, and German Credit. Widely adopted tools, such as IBM AI Fairness 360, Themis, and Aequitas, were also identified. Overall, the systematic mapping study (SMS) highlights the progress made in fairness research while emphasizing the need for stronger integration of fairness into practical machine learning development. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
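Toolkits named in this study, such as IBM AI Fairness 360 and Aequitas, automate group-fairness checks of the kind the surveyed testing techniques rely on. The sketch below illustrates, in plain Python with hypothetical classifier outputs, two metrics such toolkits commonly report: statistical parity difference and disparate impact. The data and group labels are invented for illustration only.

```python
# Minimal sketch of two group-fairness metrics; the binary
# predictions below are hypothetical, not from any real dataset.

def favorable_rate(outcomes):
    """Fraction of favorable (1) predictions in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(privileged, unprivileged):
    """P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged)."""
    return favorable_rate(unprivileged) - favorable_rate(privileged)

def disparate_impact(privileged, unprivileged):
    """Ratio of favorable rates; values below ~0.8 often flag bias."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical classifier outputs, split by a protected attribute.
priv = [1, 1, 1, 0, 1, 1, 0, 1]    # favorable rate 0.75
unpriv = [1, 0, 0, 1, 0, 0, 1, 0]  # favorable rate 0.375

spd = statistical_parity_difference(priv, unpriv)  # -0.375
di = disparate_impact(priv, unpriv)                # 0.5
```

A negative parity difference and a disparate-impact ratio well below 0.8 would both flag the unprivileged group as disadvantaged in this toy example.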

47 pages, 5474 KB  
Review
Bias in Large Language Models: Origin, Evaluation, and Mitigation
by Yufei Guo, Muzhe Guo, Juntao Su, Zhou Yang, Mengqiu Zhu, Hongfei Li, Mengyang Qiu and Shuo Shuo Liu
Electronics 2026, 15(9), 1824; https://doi.org/10.3390/electronics15091824 - 24 Apr 2026
Viewed by 425
Abstract
Large language models (LLMs) have revolutionized natural language processing, but their susceptibility to biases poses significant challenges. This comprehensive review examines the landscape of bias in LLMs, from its origins to current mitigation strategies. We categorize biases as intrinsic and extrinsic, analyzing their manifestations in various natural language processing (NLP) tasks. The review critically assesses a range of bias evaluation methods, including data-level, model-level, and output-level approaches, providing researchers with a robust toolkit for bias detection. We further explore mitigation strategies, categorizing them into pre-model, intra-model, and post-model techniques, highlighting their effectiveness and limitations. Ethical and legal implications of biased LLMs are discussed, emphasizing potential harms in real-world applications such as healthcare and criminal justice. By synthesizing current knowledge on bias in LLMs, this review contributes to the ongoing effort to develop fair and responsible artificial intelligence (AI) systems. Our work serves as a comprehensive resource for researchers and practitioners working towards understanding, evaluating, and mitigating bias in LLMs, fostering the development of more equitable AI technologies. Full article
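The output-level evaluation approaches this review categorizes can be illustrated with a small counterfactual probe. The sketch below is purely illustrative: the lexicon-based scorer stands in for a real sentiment model, and the completions and word lists are invented. What it shows is the common harness shape, scoring paired completions that differ only in a demographic term and comparing group means.

```python
# Toy output-level bias probe. The lexicon scorer is a stand-in for
# a real sentiment classifier; all strings here are hypothetical.

POSITIVE = {"brilliant", "reliable", "kind"}
NEGATIVE = {"lazy", "hostile", "unreliable"}

def toy_score(text):
    """Crude sentiment proxy: positive minus negative word hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def mean_score(completions):
    """Average sentiment proxy over a group's completions."""
    return sum(toy_score(c) for c in completions) / len(completions)

# Hypothetical model completions for prompts differing only in the
# demographic term (e.g., "The {group} engineer was ...").
group_a = ["a brilliant and reliable colleague", "kind to everyone"]
group_b = ["often lazy at work", "seen as unreliable"]

bias_gap = mean_score(group_a) - mean_score(group_b)
```

A gap far from zero indicates that the scored attribute skews systematically with the demographic term, which is the signal output-level methods quantify.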

13 pages, 2375 KB  
Opinion
CsPbI3 Perovskites at the Edge of Commercialization: Persistent Barriers, Multidisciplinary Solutions, and the Emerging Role of AI
by Carlo Spampinato
J 2026, 9(2), 12; https://doi.org/10.3390/j9020012 - 13 Apr 2026
Cited by 2 | Viewed by 586
Abstract
All-inorganic cesium lead iodide (CsPbI3) has been investigated for more than a decade as an absorber for perovskite photovoltaics thanks to its attractive bandgap, thermal robustness compared with hybrid perovskites, and compatibility with tandem concepts. Yet, despite remarkable efficiency progress, CsPbI3 remains far from widespread commercialization. The core roadblock is the metastability of the photoactive black perovskite phases (α/γ/β) against transformation to the photoinactive yellow δ-phase under realistic conditions, amplified by defect chemistry, ion migration, and interfacial reactions. Additional barriers arise from scale-up constraints (film uniformity, throughput, solvent management), long-term operational stability (humidity, heat, UV, bias), and environmental/safety requirements, especially lead containment, sequestration, and end-of-life strategies. This review critically analyzes the intertwined physical, chemical, and engineering factors that still limit CsPbI3 deployment, with emphasis on how solutions in one domain can fail without co-design in others. It summarizes state-of-the-art stabilization strategies (size/strain engineering, additive/doping routes, surface/interface passivation, and encapsulation), highlights scalable manufacturing pathways including solvent-minimized and vacuum-assisted approaches, and discusses lead-mitigation technologies such as Pb-adsorbing functional layers. Finally, I argue that artificial intelligence (AI)—from machine-learning stability models to process monitoring, robotic optimization, and digital twins—has become essential to navigate the enormous parameter space of CsPbI3 materials and manufacturing. The paper concludes with actionable recommendations and future directions toward bankable, scalable, and sustainable CsPbI3 photovoltaics. Full article
(This article belongs to the Section Chemistry & Material Sciences)

14 pages, 871 KB  
Article
Validation of a Dermatology-Focused Multimodal Image-and-Data Assistant in Diagnosis and Management of Common Dermatologic Conditions
by Joshua Mijares, Emma J. Bisch, Eanna DeGuzman, Kanika Garg, David Pontes, Neil K. Jairath, Vignesh Ramachandran, George Jeha, Andjela Nemcevic and Syril Keena T. Que
Medicina 2026, 62(4), 715; https://doi.org/10.3390/medicina62040715 - 9 Apr 2026
Viewed by 509
Abstract
Background and Objectives: Shortages of dermatologists create significant barriers to care, particularly for inflammatory and history-dependent conditions where image-only artificial intelligence (AI) classifiers have limited applicability. Current teledermatology solutions largely focus on single-task, morphology-based neoplasm classifiers, leaving the vast majority of dermatologic presentations underserved. This study evaluated the diagnostic accuracy and management plan quality of Dermflow (Prava Medical, Delaware, USA), a proprietary dermatology-focused Multimodal Image-and-Data Assistant (MIDA) that autonomously gathers dermatology-specific history, integrates data with patient-submitted images, and outputs structured differential diagnoses and management summaries. Materials and Methods: Two AI systems, Dermflow and Claude Sonnet 4 (Claude, a leading vision–language model), analyzed 87 clinical images from the Skin Condition Image Network and Diverse Dermatology Images databases, representing 10 inflammatory dermatoses and 9 neoplastic conditions stratified across Fitzpatrick Skin Tone (FST) categories (I–II, III–IV, V–VI). For the diagnostic comparison, Dermflow received images and autonomously gathered clinical history, while Claude received identical images without history. For the management plan comparison, both systems received the correct diagnosis and the clinical histories gathered by Dermflow. The primary outcome was diagnostic accuracy. The secondary outcome was management plan quality, assessed by two blinded dermatologists across eight clinical dimensions using 5-point Likert scales. Chi-square tests compared diagnostic accuracy between models; t-tests and ANOVA compared management quality scores. Results: Dermflow achieved markedly superior diagnostic accuracy compared to Claude (86.2% vs. 24.1%, p < 0.001). 
Both models maintained consistent diagnostic performance across FST categories without significant within-model differences (Dermflow p = 0.924; Claude p = 0.828). Management plan quality showed no significant overall differences between models. However, composite management quality scores declined significantly for darker skin tones across both systems: Dermflow scored 4.20 (FST I–II), 3.99 (FST III–IV), and 3.47 (FST V–VI); Claude scored 4.35, 3.97, and 3.44, respectively (p < 0.001 for most pairwise FST comparisons within each model). Conclusions: Multimodal AI integrating targeted history with image analysis achieves substantially higher diagnostic accuracy than image-only approaches across both inflammatory and neoplastic dermatologic conditions. Autonomous history gathering addresses fundamental limitations of morphology-only classifiers and enables scalable, patient-facing triage across the full spectrum of dermatologic disease. However, both models demonstrated reduced management plan quality for darker skin tones despite receiving the correct diagnosis, suggesting persistent training data limitations that require targeted bias-mitigation strategies beyond domain-specific instruction. Full article
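The chi-square comparison of diagnostic accuracy reported above can be reproduced in outline. In the sketch below, the 2x2 counts are reconstructed from the reported accuracies (86.2% and 24.1% of 87 cases each), so they are approximate rather than the study's raw data; the p-value uses the one-degree-of-freedom chi-square tail.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], with a
    one-degree-of-freedom p-value via the complementary error
    function (chi2 with dof=1 is the square of a standard normal)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Correct vs. incorrect diagnoses per model, reconstructed from the
# reported rates (86.2% and 24.1% of 87 cases); approximate counts.
chi2, p = chi2_2x2(75, 12,   # Dermflow: correct, incorrect
                   21, 66)   # Claude:   correct, incorrect
```

With counts of this order, the statistic is very large and the p-value is far below 0.001, consistent with the reported significance.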

14 pages, 245 KB  
Article
Exploring Strategies to Detect and Mitigate Bias in AI in Education: Students’ Perceptions and Didactic Approaches
by María Ribes-Lafoz, Borja Navarro-Colorado and José Rovira-Collado
Trends High. Educ. 2026, 5(2), 33; https://doi.org/10.3390/higheredu5020033 - 3 Apr 2026
Viewed by 1172
Abstract
The increasing integration of Generative AI (GenAI) into higher education, particularly in the domain of language teaching, presents both opportunities and challenges. While AI-powered tools such as ChatGPT-5 can support language learning by generating personalised content which enables real-time interaction and feedback, they also risk perpetuating biases embedded in training data. These biases can appear in linguistic, cultural or socio-political forms, reinforcing stereotypes and influencing language norms. Therefore, equipping students and educators with strategies to critically assess AI outputs is essential for ethical and responsible AI use in language education. While recent research highlights the risks of algorithmic bias, less attention has been given to the perceptions and attitudes of pre-service teachers, whose future practice will shape classroom uses of these technologies. This exploratory pilot study adopts a survey-based approach to examine pre-service teachers’ baseline awareness of bias in artificial intelligence, with particular attention to linguistic and cultural dimensions. Data were collected through an online questionnaire administered to 65 undergraduate students enrolled in Primary Education degree programmes. The study documents baseline perceptions prior to any instructional intervention and provides preliminary empirical evidence to inform the future design of pedagogical strategies aimed at developing critical AI literacy in teacher education. Full article
33 pages, 3591 KB  
Review
Ethics in Artificial Intelligence: A Cross-Sectoral Review of 2019–2025
by Charalampos M. Liapis, Nikos Fazakis, Sotiris Kotsiantis and Yannis Dimakopoulos
Informatics 2026, 13(4), 51; https://doi.org/10.3390/informatics13040051 - 27 Mar 2026
Viewed by 2259
Abstract
Artificial Intelligence (AI) has transitioned from a specialized research area to a ubiquitous socio-technical infrastructure influencing sectors from healthcare and law to manufacturing and defense. In tandem with its transformative promise, AI has created an exponentially expanding ethics literature questioning fairness, transparency, accountability, and justice. This review synthesizes publications and key policy developments between 2019 and 2025, bringing sectoral discourses together with cross-cutting frameworks. Grounded in a systematic scoping review methodology, we frame the field along four meta-dimensions: trust and transparency, bias and fairness, governance and regulation, and justice, while we investigate their expression across diverse sectors. Special attention is dedicated to healthcare (patient trust and algorithmic bias), education (integrity and authorship), media (misinformation), law (accountability), and the industrial sector (data integrity, intellectual property protection, and environmental safety). We ground abstract principles in concrete case studies to illustrate real-world harms and mitigation strategies. Furthermore, we incorporate pluralistic ethics (e.g., Ubuntu, Islamic perspectives), environmental ethics, and emerging challenges posed by Generative AI and neuro-AI interfaces. To bridge theory and practice, we propose an operational governance framework for organizations. We contend that success involves transitioning from principles toward ethics-by-design, pluralistic governance, sustainability, and adaptive oversight. This review is intended for scholars, practitioners, and policymakers who need a comprehensive and actionable framework for navigating the complex landscape of AI ethics. Full article

37 pages, 1311 KB  
Article
Systemic Data Bias in Real-World AI Systems: Technical Failures, Legal Gaps, and the Limits of the EU AI Act
by Theodoros Falelakis, Asimina Dimara and Christos-Nikolaos Anagnostopoulos
Information 2026, 17(4), 326; https://doi.org/10.3390/info17040326 - 27 Mar 2026
Viewed by 1543
Abstract
Systemic data bias constitutes a major source of failure in real-world AI systems and represents a regulatory challenge that remains insufficiently addressed by existing legal frameworks, including the EU Artificial Intelligence Act. Although the AI Act introduces a comprehensive risk-based regulatory regime, it does not adequately capture how bias originates, propagates, and manifests across the AI lifecycle. This paper examines systemic data bias through a legal-technical lifecycle analysis that maps recurring bias mechanisms, from data collection and annotation to model training, evaluation, and deployment, to the regulatory control points established under the EU AI Act. Drawing on cross-sectoral examples from employment screening, credit scoring, healthcare risk prediction, biometric identification, and autonomous systems, the analysis demonstrates how technical bias mechanisms translate into systemic governance and accountability challenges. The findings reveal persistent regulatory gaps, including limited auditability of training datasets, the absence of mandatory fairness metrics, insufficient transparency regarding model behavior, and weak mechanisms for post-deployment monitoring and accountability. These results highlight a structural misalignment between lifecycle-based bias dynamics and the Act’s category-driven compliance framework. The paper argues that addressing systemic bias requires a governance approach that integrates technical bias mitigation with legal oversight across the full AI lifecycle rather than relying primarily on post hoc regulatory controls. Full article

30 pages, 3486 KB  
Article
AI Creation of Facial Expression Database for Advanced Emotion Recognition Using Diffusion Model and Pre-Trained CNN Models
by Jia Jun Ho, Wee How Khoh, Ying Han Pang, Hui Yen Yap and Fang Chuen Lim Alvin
Appl. Sci. 2026, 16(6), 2769; https://doi.org/10.3390/app16062769 - 13 Mar 2026
Viewed by 714
Abstract
With applications in psychology, security, and human–computer interaction, facial expression recognition (FER) has become an essential tool for non-verbal communication. Current research often categorizes expressions into micro- and macro-types, yet existing datasets suffer from inconsistent class labelling, limited diversity, and insufficient scale. To address these gaps, this work proposes a novel framework combining a diffusion model with pre-trained CNNs. Leveraging original images from the established CASME II dataset, we generate synthetic facial expressions to augment training data, mitigating bias and inconsistency. The synthetic dataset is evaluated using ResNet-50, VGG16, and Inception V3 architectures in three configurations: Inception V3 trained on the proposed AI-generated dataset and tested on CASME II; VGG16 with data augmentation trained on CASME II and tested on the AI-generated dataset; and Inception V3 with 30% of its layers frozen, trained on the AI-generated dataset and tested on CASME II. All three configurations achieved state-of-the-art performance, with the data augmentation and layer-freezing approaches significantly improving model performance and outperforming most of the existing approaches benchmarked in this study. Full article

24 pages, 631 KB  
Article
Generative Simulation and Summarization of Neonatal Patient Data
by Jesse Levine, Gurshan Riarh and James R. Green
Information 2026, 17(3), 261; https://doi.org/10.3390/info17030261 - 5 Mar 2026
Abstract
In the Neonatal Intensive Care Unit (NICU), clinicians must balance the demands of constant patient monitoring with the need for precise documentation and clear communication with colleagues and families. To address the clinical burden of documenting patient care and health status, this paper presents two complementary AI-based systems. First, a GAN-driven NICU Patient Simulator is developed to generate realistic neonatal vital sign data and discrete clinical intervention events, typical of care in the NICU. While useful for a variety of research goals, this simulator provides a safe and controllable data source essential for the development and validation of the second system: the LLM-powered Neonatal Patient Status Summarizer (NPSS). The NPSS fuses the output of multiple machine learning systems, each extracting specific aspects of patient care and health, together with vital sign data from a patient monitor. Leveraging Retrieval-Augmented Generation (RAG) to incorporate neonatal-specific reference data, the NPSS enables several key use cases, including generating parent-friendly updates, summarizing patient status for clinician handovers, and automatically populating patient records for charting. Simulator validation demonstrates the high fidelity of the simulated data relative to available infant data in PhysioNet. The NPSS is evaluated using an automated LLM-as-judge framework across repeated test scenarios. To mitigate self-preference bias, evaluations were conducted using three distinct LLM judges (OpenAI o3-mini, Llama-3, and Mistral). Across judges, the NPSS achieved consistently high relevance scores (0.95–0.99) and strong groundedness scores (0.80–0.91), indicating that generated summaries remain on-topic and faithful to the underlying simulator data. Once validated, the NPSS will reduce charting workload, improve shift handover efficiency, and streamline parental updates, addressing key clinical bottlenecks in NICU data workflows.
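The multi-judge evaluation described above can be sketched as a simple aggregation step: the same summary is scored by several independent LLM judges so that no single model evaluates its own output. The per-judge scores below are illustrative placeholders (consistent with the reported ranges), not the paper's raw data.

```python
# Sketch of multi-judge score aggregation to mitigate self-preference
# bias in LLM-as-judge evaluation: each judge scores the same summary,
# and results are summarized as a mean plus a min-max range per metric.

scores = {
    "o3-mini": {"relevance": 0.99, "groundedness": 0.91},
    "llama-3": {"relevance": 0.95, "groundedness": 0.80},
    "mistral": {"relevance": 0.97, "groundedness": 0.86},
}

def aggregate(scores, metric):
    """Return (mean, (min, max)) of one metric across all judges."""
    vals = [s[metric] for s in scores.values()]
    return sum(vals) / len(vals), (min(vals), max(vals))

mean_rel, rel_range = aggregate(scores, "relevance")
mean_grd, grd_range = aggregate(scores, "groundedness")
```

Reporting the cross-judge range alongside the mean makes disagreement between judges visible rather than hiding it in a single score.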

10 pages, 753 KB  
Article
Cardiac Point-of-Care Ultrasound Performed in a Stroke Unit Is Associated with a Reduced Hospital Length of Stay
by María Luisa Ruiz-Franco, Rodrigo José Milán-Pinilla, Laura Amaya-Pascasio, Antonio Arjona-Padillo, Manuel Payán-Ortíz, María Victoria Mejías-Olmedo, Javier Fernández-Pérez and Patricia Martínez-Sánchez
J. Clin. Med. 2026, 15(5), 1885; https://doi.org/10.3390/jcm15051885 - 1 Mar 2026
Abstract
Objectives: Cardiac point-of-care ultrasound (cPOCUS) enables rapid bedside cardiac assessment and may facilitate early identification of potential cardiac sources of embolism in patients with acute ischemic stroke (AIS). This study aimed to evaluate whether neurologist-performed cPOCUS is associated with reduced hospital length of stay (LOS) in patients admitted to a Stroke Unit (SU). Methods: We conducted a retrospective observational study including consecutive patients with AIS admitted between 2020 and 2021 who required cardiac ultrasound for etiological evaluation. Patients underwent cPOCUS and/or transthoracic echocardiography (TTE) and were classified into two groups: those evaluated with cPOCUS (with or without TTE) and those evaluated exclusively with TTE (control group). The availability of cPOCUS depended on predefined weekly schedules rather than individual clinical decision-making, partially mitigating selection bias. The primary outcome was LOS. Multivariable linear regression analysis was performed to adjust for potential confounders. Results: Among 808 patients with AIS, 332 underwent cardiac ultrasonography during hospitalization: 219 in the cPOCUS group and 113 in the control group. Overall, 60.4% were male, the mean age was 68.4 years (SD 13.3), and the median National Institutes of Health Stroke Scale score at admission was 5 (IQR 9), with no significant differences between groups. Median LOS was shorter in the cPOCUS group than in the control group [7 days (IQR 4) vs. 8 days (IQR 5); p = 0.015]. After adjustment for confounders, cPOCUS evaluation remained independently associated with shorter LOS (β −1.49, standard error 0.73, 95% CI −2.93 to −0.05; p = 0.04). Conclusions: Neurologist-performed cPOCUS is independently associated with reduced LOS in patients with AIS admitted to an SU. These findings suggest that cPOCUS may facilitate more efficient in-hospital workflows and support its potential integration into routine stroke care pathways. 
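As a quick arithmetic check on the adjusted estimate reported above (β −1.49, standard error 0.73), a normal-approximation 95% CI is β ± 1.96 × SE, which reproduces the reported interval to within rounding; the paper's slightly wider bounds (−2.93 to −0.05) suggest a t-based critical value was used.

```python
# Normal-approximation 95% confidence interval for the adjusted
# LOS effect: beta +/- 1.96 * SE, using values from the abstract.
beta, se = -1.49, 0.73
z = 1.96  # standard-normal critical value for 95% coverage
lo, hi = beta - z * se, beta + z * se
print(round(lo, 2), round(hi, 2))  # close to the reported (-2.93, -0.05)
```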
(This article belongs to the Section Epidemiology & Public Health)

29 pages, 1404 KB  
Article
A Comprehensive Framework for Multi-Modal Depression Detection: Integrating Adaptive Fusion, Fairness Regularization, and Explainable AI
by Lal Khan, Mohammad Zubair Khan and Ibrahim Aljubayri
Mathematics 2026, 14(4), 711; https://doi.org/10.3390/math14040711 - 18 Feb 2026
Abstract
Depression poses a major worldwide health burden, necessitating advanced diagnostic tools based on reliable and authentic assessment. While multimodal AI systems offer promising results by integrating diverse data sources, existing approaches often rely on static fusion strategies, exhibit inherent biases, lack interpretability, and show limited generalizability. This paper presents a novel multimodal deep learning framework designed to address these critical limitations. The proposed architecture introduces an adaptive, context-aware, and explainable fusion mechanism, combining a Dynamic Gating Network (DGN) that dynamically adjusts modality contributions with a Multi-Head Attention Network (MHAN) to capture deep inter-modal interactions. In addition, a fairness regularization strategy is incorporated to mitigate algorithmic bias, alongside an Explainable AI (XAI) module to provide transparent and clinically meaningful insights. Quantitative evaluations across the DAIC-WOZ, StudentSADD, and Moodable datasets demonstrate strong and consistent performance, achieving an F1-score of 91.4% with 93.0% accuracy on DAIC-WOZ, 82.0% F1 with 83.7% accuracy on StudentSADD, and 80.3% F1 along with 82.5% accuracy on Moodable. Furthermore, the proposed approach reduces fairness disparities and improves generalizability compared to conventional multimodal baselines. Model explanations were also qualitatively evaluated on all three datasets by three mental-health experts using a 5-point Likert scale in terms of clarity, correctness, and clinical plausibility. Overall, this work represents a significant step toward trustworthy, equitable, and clinically applicable AI systems for robust multimodal depression detection, fostering greater confidence and adoption in mental healthcare.
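The fairness regularization idea above can be sketched as a penalized objective: the task loss is augmented with a weighted fairness penalty. The demographic-parity penalty and the λ value below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a fairness-regularized objective:
#   total loss = task loss + lambda * fairness penalty,
# where the penalty here is a demographic-parity gap: the absolute
# difference in positive-prediction rates between two groups.

def parity_gap(preds, groups):
    """Absolute gap in positive-prediction rate between groups 0 and 1."""
    def rate(g):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        return sum(members) / max(1, len(members))
    return abs(rate(0) - rate(1))

def regularized_loss(task_loss, preds, groups, lam=0.1):
    """Add the weighted fairness penalty to the task loss."""
    return task_loss + lam * parity_gap(preds, groups)

# Toy batch: group 0 is predicted positive 2/3 of the time, group 1 only 1/3.
preds = [1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 1, 1, 1]
```

During training, minimizing this combined objective trades a small amount of task accuracy for a smaller disparity between groups, with λ controlling the trade-off.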
(This article belongs to the Section E1: Mathematics and Computer Science)
