Search Results (308)

Search Parameters:
Keywords = human-aligned AI

43 pages, 2712 KB  
Review
A Comprehensive Survey of Cybersecurity Threats and Data Privacy Issues in Healthcare Systems
by Ramsha Qureshi and Insoo Koo
Appl. Sci. 2026, 16(3), 1511; https://doi.org/10.3390/app16031511 - 2 Feb 2026
Abstract
The rapid digital transformation of healthcare has improved clinical efficiency, patient engagement, and data accessibility, but it has also introduced significant cybersecurity and data privacy challenges. Healthcare IT systems increasingly rely on interconnected networks, electronic health records (EHRs), telemedicine platforms, cloud infrastructures, and Internet of Medical Things (IoMT) devices, which collectively expand the attack surface for cyber threats. This scoping review maps and synthesizes recent evidence on cybersecurity risks in healthcare, including ransomware, data breaches, insider threats, and vulnerabilities in legacy systems, and examines key data privacy concerns related to patient confidentiality, regulatory compliance, and secure data governance. We also review contemporary security strategies, including encryption, multi-factor authentication, zero-trust architecture, blockchain-based approaches, AI-enabled threat detection, and compliance frameworks such as HIPAA and GDPR. Persistent challenges include integrating robust security with clinical usability, protecting resource-limited hospital environments, and managing human factors such as staff awareness and policy adherence. Overall, the findings suggest that effective healthcare cybersecurity requires a multi-layered defense combining technical controls, continuous monitoring, governance and regulatory alignment, and sustained organizational commitment to security culture. Future research should prioritize adaptive security models, improved standardization, and privacy-preserving analytics to protect patient data in increasingly complex healthcare ecosystems. Full article

31 pages, 3327 KB  
Article
Can Generative AI Co-Evolve with Human Guidance and Display Non-Utilitarian Moral Behavior?
by Rafael Lahoz-Beltra
Computation 2026, 14(2), 40; https://doi.org/10.3390/computation14020040 - 2 Feb 2026
Abstract
The growing presence of autonomous AI systems, such as self-driving cars and humanoid robots, raises critical ethical questions about how these technologies should make moral decisions. Most existing moral machine (MM) models rely on secular, utilitarian principles, which prioritize the greatest good for the greatest number but often overlook the religious and cultural values that shape moral reasoning across different traditions. This paper explores how theological perspectives, particularly those from Christian, Islamic, and East Asian ethical frameworks, can inform and enrich algorithmic ethics in autonomous systems. By integrating these religious values, the study proposes a more inclusive approach to AI decision making that respects diverse beliefs. A key innovation of this research is the use of large language models (LLMs), such as ChatGPT (GPT-5.2), to design, with human guidance, MM architectures that incorporate these ethical systems. Through Python 3 scripts, the paper demonstrates how autonomous machines, e.g., vehicles and humanoid robots, can make ethically informed decisions based on different religious principles. The aim is to contribute to the development of AI systems that are not only technologically advanced but also culturally sensitive and ethically responsible, ensuring that they align with a wide range of theological values in morally complex situations. Full article
(This article belongs to the Section Computational Social Science)
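The non-utilitarian behavior the paper investigates can be illustrated with a minimal sketch (not the author's Python scripts): a toy dilemma chooser that applies either a utilitarian casualty-minimizing rule or a simple deontological constraint. The option names, casualty counts, and duty flags below are invented for illustration.

```python
# Illustrative sketch only: a toy moral-dilemma chooser that can follow
# either a utilitarian rule or a simple deontological constraint.
# All options, counts, and duty flags here are hypothetical.

def choose(options, framework="utilitarian"):
    """Pick an option; each option is (name, casualties, violates_duty)."""
    if framework == "utilitarian":
        # Minimize total harm regardless of how it is caused.
        return min(options, key=lambda o: o[1])[0]
    if framework == "deontological":
        # Never pick an option that actively violates a duty,
        # even if it would reduce casualties.
        permitted = [o for o in options if not o[2]]
        pool = permitted if permitted else options
        return min(pool, key=lambda o: o[1])[0]
    raise ValueError(f"unknown framework: {framework}")

dilemma = [
    ("swerve", 1, True),   # fewer casualties, but actively redirects harm
    ("stay",   3, False),  # more casualties, no active intervention
]

print(choose(dilemma, "utilitarian"))    # -> swerve
print(choose(dilemma, "deontological"))  # -> stay
```

The two rules disagree on the same dilemma, which is the kind of divergence from purely utilitarian behavior the paper studies.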

11 pages, 194 KB  
Article
Transforming Relational Care Values in AI-Mediated Healthcare: A Text Mining Analysis of Patient Narrative
by So Young Lee
Healthcare 2026, 14(3), 371; https://doi.org/10.3390/healthcare14030371 - 2 Feb 2026
Abstract
Background: This study examined how patients and caregivers perceive and experience AI-based care technologies through text mining analysis. The goal was to identify major themes, sentiments, and value-oriented interpretations embedded in their narratives and to understand how these perceptions align with key dimensions of patient-centered care. Methods: A corpus of publicly available narratives describing experiences with AI-based care was compiled from online communities. Natural language processing techniques were applied, including descriptive term analysis, topic modeling using Latent Dirichlet Allocation, and sentiment profiling based on a Korean lexicon. Emergent topics and emotional patterns were mapped onto domains of patient-centered care such as information quality, emotional support, autonomy, and continuity. Results: The analysis revealed a three-phase evolution of care values over time. In the early phase of AI-mediated care, patient narratives emphasized disruption of relational care, with negative themes such as reduced human connection, privacy concerns, safety uncertainties, and usability challenges, accompanied by emotions of fear and frustration. During the transitional phase, positive themes including convenience, improved access, and reassurance from diagnostic accuracy emerged alongside persistent emotional ambivalence, reflecting uncertainty regarding responsibility and control. In the final phase, care values were restored and strengthened, with sentiment patterns shifting toward trust and relief as AI functions became supportive of clinical care, while concerns related to depersonalization and surveillance diminished. Conclusions: Patients and caregivers experience AI-based care as both beneficial and unsettling. Perceptions improve when AI enhances efficiency and information flow without compromising relational aspects of care. Ensuring transparency, explainability, opportunities for human contact, and strong data protections is essential for aligning AI with principles of patient-centered care. Based on a small-scale qualitative dataset of patient narratives, this study offers an exploratory, value-oriented interpretation of how relational care evolves in AI-mediated healthcare contexts. In this study, care-ethics values are used as an analytical lens to operationalize key principles of patient-centered care within AI-mediated healthcare contexts. Full article
(This article belongs to the Section Digital Health Technologies)
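The lexicon-based sentiment profiling step in the methods can be sketched roughly as below. The study used a Korean lexicon; the toy English lexicon and the example narratives here are placeholders, not the paper's data.

```python
# Minimal sketch of lexicon-based sentiment profiling over care narratives.
# The lexicon and texts are invented stand-ins for illustration.
import re
from collections import Counter

LEXICON = {
    "fear": -1, "frustration": -1, "impersonal": -1, "unsafe": -1,
    "trust": 1, "relief": 1, "convenient": 1, "reassuring": 1,
}

def sentiment_profile(narratives):
    """Count positive/negative lexicon hits across a list of texts."""
    counts = Counter()
    for text in narratives:
        for token in re.findall(r"[a-z]+", text.lower()):
            score = LEXICON.get(token)
            if score is not None:
                counts["positive" if score > 0 else "negative"] += 1
    return counts

early = ["The chatbot felt impersonal and unsafe.", "Mostly fear and frustration."]
late = ["Diagnosis was reassuring; I trust it now.", "Convenient and a relief."]
print(sentiment_profile(early))  # negative-dominant, like the early phase
print(sentiment_profile(late))   # positive-dominant, like the final phase
```

Comparing profiles across time windows is one simple way to surface the phase shift the study describes.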
24 pages, 4127 KB  
Article
Harnessing AI, Virtual Landscapes, and Anthropomorphic Imaginaries to Enhance Environmental Science Education at Jökulsárlón Proglacial Lagoon, Iceland
by Jacquelyn Kelly, Dianna Gielstra, Tomáš J. Oberding, Jim Bruno and Stephanie Cosentino
Glacies 2026, 3(1), 3; https://doi.org/10.3390/glacies3010003 - 1 Feb 2026
Abstract
Introductory environmental science courses offer non-STEM students an entry point to address global challenges such as climate change and cryosphere preservation. Aligned with the International Year of Glacier Preservation and the Decade of Action for Cryospheric Sciences, this mixed-method, IRB-exempt study applied the Curriculum Redesign and Artificial Intelligence-Facilitated Transformation (CRAFT) model for course redesign. The project leveraged a human-centered AI approach to create anthropomorphized, place-based narratives for online learning. Generative AI is used to amend immersive virtual learning environments (VLEs) that animate glacial forces (water, rock, and elemental cycles) through narrative-driven virtual reality (VR) experiences. Students explored Iceland’s Jökulsárlón Glacier Lagoon via self-guided field simulations led by an imaginary water droplet, designed to foster environmental awareness and a sense of place. Data collection included a five-point Likert-scale survey and thematic coding of student comments. Findings revealed strong positive sentiment: 87.1% enjoyment of the imaginaries, 82.5% agreement on supporting connection to places, and 82.0% endorsement of their role in reinforcing spatial and systems thinking. Thematic analysis confirmed that anthropomorphic imaginaries enhanced emotional engagement and conceptual understanding of glacial processes, situating glacier preservation within geographic and global contexts. This AI-enhanced, multimodal approach demonstrates how narrative-based VR can make complex cryospheric concepts accessible for non-STEM learners, promoting early engagement with climate science and environmental stewardship. Full article

36 pages, 5431 KB  
Article
Explainable AI-Driven Quality and Condition Monitoring in Smart Manufacturing
by M. Nadeem Ahangar, Z. A. Farhat, Aparajithan Sivanathan, N. Ketheesram and S. Kaur
Sensors 2026, 26(3), 911; https://doi.org/10.3390/s26030911 - 30 Jan 2026
Abstract
Artificial intelligence (AI) is increasingly adopted in manufacturing for tasks such as automated inspection, predictive maintenance, and condition monitoring. However, the opaque, black-box nature of many AI models remains a major barrier to industrial trust, acceptance, and regulatory compliance. This study investigates how explainable artificial intelligence (XAI) techniques can be used to systematically open and interpret the internal reasoning of AI systems commonly deployed in manufacturing, rather than to optimise or compare model performance. A unified explainability-centred framework is proposed and applied across three representative manufacturing use cases encompassing heterogeneous data modalities and learning paradigms: vision-based classification of casting defects, vision-based localisation of metal surface defects, and unsupervised acoustic anomaly detection for machine condition monitoring. Diverse models are intentionally employed as representative black-box decision-makers to evaluate whether XAI methods can provide consistent, physically meaningful explanations independent of model architecture, task formulation, or supervision strategy. A range of established XAI techniques, including Grad-CAM, Integrated Gradients, Saliency Maps, Occlusion Sensitivity, and SHAP, are applied to expose model attention, feature relevance, and decision drivers across visual and acoustic domains. The results demonstrate that XAI enables alignment between model behaviour and physically interpretable defect and fault mechanisms, supporting transparent, auditable, and human-interpretable decision-making. By positioning explainability as a core operational requirement rather than a post hoc visual aid, this work contributes a cross-modal framework for trustworthy AI in manufacturing, aligned with Industry 5.0 principles, human-in-the-loop oversight, and emerging expectations for transparent and accountable industrial AI systems. Full article
(This article belongs to the Section Intelligent Sensors)
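Occlusion sensitivity, one of the XAI techniques the study applies, can be sketched on a toy 1-D input: mask a patch, re-score, and record how much the model's output drops. The stand-in "model" below is not one of the paper's networks; it simply responds to two input positions so the attribution is easy to verify.

```python
# Sketch of occlusion sensitivity on a 1-D input with a toy scorer.
# The model and input are invented for illustration.

def occlusion_map(image, model, patch=2, fill=0.0):
    """Return per-position score drops when a patch is occluded."""
    base = model(image)
    drops = []
    for start in range(0, len(image) - patch + 1):
        occluded = list(image)
        for i in range(start, start + patch):
            occluded[i] = fill
        drops.append(base - model(occluded))
    return drops

# Toy "defect detector": responds only to intensity at positions 2 and 3.
def toy_model(x):
    return x[2] + x[3]

img = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
heat = occlusion_map(img, toy_model)
print(heat)  # largest drop where the occluder covers positions 2-3
```

The peak of the map coincides with the regions the model actually uses, which is exactly the alignment check the paper performs between model attention and physical defect locations.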

21 pages, 621 KB  
Article
Truth Is Better Generated than Annotated: Hierarchical Prompt Engineering and Adaptive Evaluation for Reliable Synthetic Knowledge Dialogues
by Hyeongju Ju, EunKyeong Lee, Junyoung Kang, JaKyoung Kim and Dongsuk Oh
Appl. Sci. 2026, 16(3), 1387; https://doi.org/10.3390/app16031387 - 29 Jan 2026
Abstract
Large Language Models (LLMs) have demonstrated exceptional performance in knowledge-based dialogue generation and text evaluation. Synthetic data serves as a cost-effective alternative for generating high-quality datasets. However, it is often plagued by hallucinations, inconsistencies, and self-anthropomorphized responses. Concurrently, manual construction of knowledge-based dialogue datasets remains bottlenecked by prohibitive costs and inherent human subjectivity. To address these multifaceted challenges, we propose ACE (Automatic Construction of Knowledge-Grounded and Engaging Human–AI Conversation Dataset), a hybrid method using hierarchical prompt engineering. This approach mitigates hallucinations and self-personalization while maintaining response consistency. Furthermore, existing human and automated evaluation methods struggle to assess critical factors like factual accuracy and coherence. To overcome this, we introduce the Truthful Answer Score (TAS), a novel metric specifically designed for knowledge-based dialogue evaluation. Our experimental results demonstrate that the ACE dataset achieves higher quality than existing benchmarks, such as Wizard of Wikipedia (WoW) and FaithDial. Additionally, TAS aligns more closely with human judgment, offering a more reliable and scalable evaluation framework. Our findings demonstrate that leveraging LLMs through systematic prompting can substantially reduce reliance on human annotation while simultaneously elevating the quality and reliability of synthetic datasets. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
24 pages, 1289 KB  
Article
Designing Understandable and Fair AI for Learning: The PEARL Framework for Human-Centered Educational AI
by Sagnik Dakshit, Kouider Mokhtari and Ayesha Khalid
Educ. Sci. 2026, 16(2), 198; https://doi.org/10.3390/educsci16020198 - 28 Jan 2026
Abstract
As artificial intelligence (AI) is increasingly used in classrooms, tutoring systems, and learning platforms, it is essential that these tools are not only powerful, but also easy to understand, fair, and supportive of real learning. Many current AI systems can generate fluent responses or accurate predictions, yet they often fail to clearly explain their decisions, reflect students’ cultural contexts, or give learners and educators meaningful control. This gap can reduce trust and limit the educational value of AI-supported learning. This paper introduces the PEARL framework, a human-centered approach for designing and evaluating explainable AI in education. PEARL is built around five core principles: Pedagogical Personalization (adapting support to learners’ levels and curriculum goals), Explainability and Engagement (providing clear, motivating explanations in everyday language), Attribution and Accountability (making AI decisions traceable and justifiable), Representation and Reflection (supporting fairness, diversity, and learner self-reflection), and Localized Learner Agency (giving learners control over how AI explains and supports them). Unlike many existing explainability approaches that focus mainly on technical performance, PEARL emphasizes how students, teachers, and administrators experience and make sense of AI decisions. The framework is demonstrated through simulated examples using an AI-based tutoring system, showing how PEARL can improve feedback clarity, support different stakeholder needs, reduce bias, and promote culturally relevant learning. The paper also introduces the PEARL Composite Score, a practical evaluation tool that helps assess how well educational AI systems align with ethical, pedagogical, and human-centered principles. This study includes a small exploratory mixed-methods user study (N = 17) evaluating example AI tutor interactions; no live classroom deployment was conducted. Together, these contributions offer a practical roadmap for building educational AI systems that are not only effective, but also trustworthy, inclusive, and genuinely supportive of human learning. Full article
(This article belongs to the Section Technology Enhanced Education)
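The PEARL Composite Score is the paper's own instrument, and its exact formula is not given here. A plausible shape, assuming a weighted mean of per-principle ratings on a 0-5 scale, might look like the sketch below; the weights, scale, and example ratings are all assumptions.

```python
# Hypothetical sketch of a composite score over PEARL's five principles.
# The weighting scheme and 0-5 rating scale are assumptions, not the
# paper's published definition.

PRINCIPLES = [
    "pedagogical_personalization",
    "explainability_engagement",
    "attribution_accountability",
    "representation_reflection",
    "localized_learner_agency",
]

def composite_score(ratings, weights=None):
    """Weighted mean of per-principle ratings; equal weights by default."""
    if weights is None:
        weights = {p: 1.0 for p in PRINCIPLES}
    total_w = sum(weights[p] for p in PRINCIPLES)
    return sum(ratings[p] * weights[p] for p in PRINCIPLES) / total_w

tutor = {
    "pedagogical_personalization": 4.0,
    "explainability_engagement": 3.5,
    "attribution_accountability": 3.0,
    "representation_reflection": 4.5,
    "localized_learner_agency": 2.5,
}
print(round(composite_score(tutor), 2))  # -> 3.5
```

A single scalar like this makes systems comparable at a glance, while the per-principle ratings preserve where a given tutor falls short.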

26 pages, 1472 KB  
Review
Mapping Human–AI Relationships: Intellectual Structure and Conceptual Insights
by Nelson Alfonso Gómez-Cruz, Dorys Yaneth Rodríguez Castro, Fabiola Rey-Sarmiento, Rodrigo Zarate-Torres and Alvaro Moncada Niño
Technologies 2026, 14(2), 83; https://doi.org/10.3390/technologies14020083 - 28 Jan 2026
Abstract
As artificial intelligence (AI) becomes increasingly integrated into organizational processes to enhance efficiency, decision-making, and innovation, aligning AI systems with human teams remains a major challenge to realizing their full potential. Although academic interest is growing, the conceptual landscape of human–AI relationships remains fragmented. This study employs a bibliometric co-word analysis of 4093 peer-reviewed documents indexed in Scopus to map the intellectual structure of the field. Using a strategic diagram, we assess the relevance and maturity of five major thematic clusters identified in the field. Results highlight the structural dominance of Human–AI Interactions (Centrality: 1595), Human–AI Collaboration (1150), and Teaming and Augmentation (1131) as foundational themes, while Conversational AI (655) and Ethics and Responsibility (431) emerge as specialized domains. Based on the analysis, we propose a conceptual framework that classifies human–AI relationships into four categories—symbiotic, augmented, assisted, and substituted intelligence—according to the level of AI autonomy and human involvement. Rather than providing prescriptive guidance for practitioners, this framework is intended primarily as a scholarly contribution that clarifies the conceptual landscape and supports future theoretical and empirical work. While potential implications for organizational contexts can be inferred, these are secondary to the study’s main goal of offering a research-based synthesis of the field. Ultimately, our work contributes to academic consolidation by offering conceptual clarity and highlighting opportunities for future research, while underscoring the critical need for ethical alignment and interdisciplinary dialogue to guide future AI adoption. Full article
(This article belongs to the Section Information and Communication Technologies)
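The co-word step behind such a bibliometric analysis can be sketched as counting keyword co-occurrences across documents and summing each keyword's link strengths, a rough stand-in for the Callon-style centrality reported in strategic diagrams. The keyword lists below are invented examples, not the Scopus corpus.

```python
# Sketch of co-word analysis: keyword co-occurrence counts and per-keyword
# link strength. The document keyword lists are invented.
from collections import Counter
from itertools import combinations

docs = [
    ["human-ai interaction", "trust", "collaboration"],
    ["human-ai interaction", "collaboration", "teaming"],
    ["conversational ai", "trust", "human-ai interaction"],
]

cooc = Counter()
for kws in docs:
    # Each unordered keyword pair in a document counts as one co-occurrence.
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

strength = Counter()
for (a, b), n in cooc.items():
    strength[a] += n
    strength[b] += n

print(strength.most_common(1))  # "human-ai interaction" has the most links
```

Scaling the same counting to thousands of documents, then clustering the co-occurrence network, is what yields the thematic clusters and centrality figures the abstract reports.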

45 pages, 2361 KB  
Article
CAPTURE: A Stakeholder-Centered Iterative MLOps Lifecycle
by Michal Slupczynski, René Reiners and Stefan Decker
Appl. Sci. 2026, 16(3), 1264; https://doi.org/10.3390/app16031264 - 26 Jan 2026
Abstract
Current ML lifecycle frameworks provide limited support for continuous stakeholder alignment and infrastructure evolution, particularly in sensor-based AI systems. We present CAPTURE, a seven-phase framework (Consult, Articulate, Protocol, Terraform, Utilize, Reify, Evolve) that integrates stakeholder-centered requirements engineering with MLOps practices to address these gaps. The framework was synthesized from four established standards (ISO/IEC 22989, ISO 9241-210, CRISP-ML(Q), SE4ML) and validated through a longitudinal five-year case study of a psychomotor skill learning system alongside semi-structured interviews with ten domain experts. The evaluation demonstrates that CAPTURE supports governance of iterative development and strategic evolution through explicit decision gates. Expert assessments confirm the necessity of the intermediate stakeholder-alignment layer and substantiate the participatory modeling approach. By connecting technical MLOps with human-centered design, CAPTURE reduces the risk that sensor-based AI systems become ungoverned, non-compliant, or misaligned with user needs over time. Full article
20 pages, 1415 KB  
Article
Decoding How Articulation and Pauses Influence Pronunciation Proficiency in Korean Learners of English
by Tae-Jin Yoon, Seunghee Han and Seunghee Ha
Behav. Sci. 2026, 16(2), 179; https://doi.org/10.3390/bs16020179 - 26 Jan 2026
Abstract
This study investigates how temporal fluency cues shape human ratings of L2 English pronunciation in Korean learners, using a large read-speech corpus annotated with five-point pronunciation scores. We focus on two timing-derived measures—articulation rate (AR) and mean silence duration (SilMean)—and examine whether these cues predict (i) articulation-accuracy ratings and (ii) prosody/fluency ratings. To account for dependencies in corpus data and to control for key learner- and task-level covariates, we fitted cumulative link mixed models with random intercepts for speakers and scripts, including proficiency band (ability), age, gender, and test type as fixed effects. Across models, faster articulation and shorter silent intervals were associated with higher articulation ratings, and a combined model including both AR and SilMean provided the best fit (lowest AIC). Temporal cues were even more strongly associated with prosody ratings, supporting construct alignment between timing measures and the prosody dimension of the rubric. Marginal predicted probabilities illustrate how the likelihood of receiving high ratings (score ≥ 4) increases with AR across proficiency and linguistic-complexity strata (with SilMean held constant), and how long silent intervals reduce these probabilities when AR is held constant. These findings indicate that temporal organization provides robust information about perceived pronunciation quality in read L2 speech and underscore the importance of construct-aware modeling when developing AI-based scoring and feedback systems trained on human-labeled data. Full article
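How a cumulative link (proportional-odds) model turns the timing predictors into the reported P(score ≥ 4) curves can be sketched with invented thresholds and coefficients. The paper's models also include random intercepts for speakers and scripts plus covariates, all omitted here; the numbers below are illustrative only.

```python
# Sketch of marginal predicted probabilities from a cumulative link model:
# P(Y <= k) = logistic(alpha_k - eta), with eta the linear predictor.
# Thresholds and coefficients are invented; random effects are omitted.
import math

THRESHOLDS = {1: -2.0, 2: -0.5, 3: 1.0, 4: 2.5}  # alpha_k cutpoints, 5-point scale

def p_at_least_4(ar, sil_mean, b_ar=1.2, b_sil=-2.0):
    """P(score >= 4) = 1 - P(Y <= 3), given AR and mean silence duration."""
    eta = b_ar * ar + b_sil * sil_mean
    p_le_3 = 1.0 / (1.0 + math.exp(-(THRESHOLDS[3] - eta)))
    return 1.0 - p_le_3

# Faster articulation raises P(score >= 4) at fixed SilMean,
# and longer silences lower it at fixed AR:
slow = p_at_least_4(ar=2.0, sil_mean=0.5)
fast = p_at_least_4(ar=4.0, sil_mean=0.5)
print(round(slow, 3), round(fast, 3))
```

Sweeping `ar` while holding `sil_mean` constant reproduces the shape of the marginal probability plots the abstract describes.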

27 pages, 4789 KB  
Article
Assessing Interaction Quality in Human–AI Dialogue: An Integrative Review and Multi-Layer Framework for Conversational Agents
by Luca Marconi, Luca Longo and Federico Cabitza
Mach. Learn. Knowl. Extr. 2026, 8(2), 28; https://doi.org/10.3390/make8020028 - 26 Jan 2026
Abstract
Conversational agents are transforming digital interactions across various domains, including healthcare, education, and customer service, thanks to advances in large language models (LLMs). As these systems become more autonomous and ubiquitous, understanding what constitutes high-quality interaction from a user perspective is increasingly critical. Despite growing empirical research, the field lacks a unified framework for defining, measuring, and designing user-perceived interaction quality in human–artificial intelligence (AI) dialogue. Here, we present an integrative review of 125 empirical studies published between 2017 and 2025, spanning text-, voice-, and LLM-powered systems. Our synthesis identifies three consistent layers of user judgment: a pragmatic core (usability, task effectiveness, and conversational competence), a social–affective layer (social presence, warmth, and synchronicity), and an accountability and inclusion layer (transparency, accessibility, and fairness). These insights are formalised into a four-layer interpretive framework—Capacity, Alignment, Levers, and Outcomes—operationalised via a Capacity × Alignment matrix that maps distinct success and failure regimes. It also identifies design levers such as anthropomorphism, role framing, and onboarding strategies. The framework consolidates constructs, positions inclusion and accountability as central to quality, and offers actionable guidance for evaluation and design. This research redefines interaction quality as a dialogic construct, shifting the focus from system performance to co-orchestrated, user-centred dialogue quality. Full article

31 pages, 706 KB  
Article
Applying Action Research to Developing a GPT-Based Assistant for Construction Cost Code Verification in State-Funded Projects in Vietnam
by Quan T. Nguyen, Thuy-Binh Pham, Hai Phong Bui and Po-Han Chen
Buildings 2026, 16(3), 499; https://doi.org/10.3390/buildings16030499 - 26 Jan 2026
Abstract
Cost code verification in state-funded construction projects remains a labor-intensive and error-prone task, particularly given the structural heterogeneity of project estimates and the prevalence of malformed codes, inconsistent units of measurement (UoMs), and locally modified price components. This study evaluates a deterministic GPT-based assistant designed to automate Vietnam’s regulatory verification. The assistant was developed and iteratively refined across four Action Research cycles. Also, the system enforces strict rule sequencing and dataset grounding via Python-governed computations. Rather than relying on probabilistic or semantic reasoning, the system performs strictly deterministic checks on code validity, UoM alignment, and unit price conformity in material (MTR), labor (LBR), and machinery (MCR), given the provincial unit price books (UPBs). Deterministic equality is evaluated either on raw numerical values or on values transformed through explicitly declared, rule-governed operations, preserving auditability without introducing tolerance-based or inferential reasoning. A dedicated exact-match mechanism, which is activated only when a code is invalid, enables the recovery of typographical errors only when a project item’s full price vector exactly matches a normative entry. Using twenty real construction estimates (16,100 rows) and twelve controlled error-injection cases, the study demonstrates that the assistant executes verification steps with high reliability across diverse spreadsheet structures, avoiding ambiguity and maintaining full auditability. Deterministic extraction and normalization routines facilitate robust handling of displaced headers, merged cells, and non-standard labeling, while structured reporting provides line-by-line traceability aligned with professional verification workflows. Practitioner feedback confirms that the system reduces manual tracing effort, improves evaluation consistency, and supports documentation compliance with human judgment. This research contributes a framework for large language model (LLM)-orchestrated verification, demonstrating how Action Research can align AI tools with domain expectations. Furthermore, it establishes a methodology for deploying LLMs in safety-critical and regulation-driven environments. Limitations—including narrow diagnostic scope, unlisted quotation exclusion, single-province UPB compliance, and sensitivity to extreme spreadsheet irregularities—define directions for future deterministic extensions. Overall, the findings illustrate how tightly constrained LLM configurations can augment, rather than replace, professional cost verification practices in public-sector construction. Full article
(This article belongs to the Special Issue Knowledge Management in the Building and Construction Industry)
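The deterministic checks and exact-match recovery described in the abstract can be sketched as follows. The UPB entries, codes, and field names below are invented; the point is the rule ordering: code validity first, then UoM alignment, then MTR/LBR/MCR price conformity, with recovery attempted only for invalid codes.

```python
# Sketch of deterministic cost-code verification against a unit price book
# (UPB). Entries and codes are invented; no tolerances, no inference.

UPB = {
    "AB.11110": {"uom": "m3", "mtr": 120.0, "lbr": 45.0, "mcr": 10.0},
    "AB.11120": {"uom": "m2", "mtr": 80.0,  "lbr": 30.0, "mcr": 5.0},
}

def verify(row):
    """Return (status, resolved_code) for one project estimate row."""
    code = row["code"]
    if code not in UPB:
        # Exact-match recovery: only when the code is invalid, and only if
        # the full (uom, mtr, lbr, mcr) vector matches exactly one entry.
        hits = [c for c, e in UPB.items()
                if (row["uom"], row["mtr"], row["lbr"], row["mcr"])
                == (e["uom"], e["mtr"], e["lbr"], e["mcr"])]
        if len(hits) == 1:
            return "recovered", hits[0]
        return "invalid_code", None
    ref = UPB[code]
    if row["uom"] != ref["uom"]:
        return "uom_mismatch", code
    for k in ("mtr", "lbr", "mcr"):
        if row[k] != ref[k]:
            return "price_mismatch", code
    return "ok", code

print(verify({"code": "AB.11110", "uom": "m3", "mtr": 120.0, "lbr": 45.0, "mcr": 10.0}))
print(verify({"code": "AB.1111O", "uom": "m3", "mtr": 120.0, "lbr": 45.0, "mcr": 10.0}))
```

The second call shows the recovery path: a typo'd code (letter O for zero) is resolved because its full price vector matches exactly one UPB entry.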

23 pages, 7737 KB  
Article
Training Agents for Strategic Curling Through a Unified Reinforcement Learning Framework
by Yuseong Son, Jaeyoung Park and Byunghwan Jeon
Mathematics 2026, 14(3), 403; https://doi.org/10.3390/math14030403 - 23 Jan 2026
Abstract
Curling presents a challenging continuous-control problem in which shot outcomes depend on long-horizon interactions between complex physical dynamics, strategic intent, and opponent responses. Despite recent progress in applying reinforcement learning (RL) to games and sports, curling lacks a unified environment that jointly supports stable, rule-consistent simulation, structured state abstraction, and scalable agent training. To address this gap, we introduce a comprehensive learning framework for curling AI, consisting of a full-sized simulation environment, a task-aligned Markov decision process (MDP) formulation, and a two-phase training strategy designed for stable long-horizon optimization. First, we propose a novel MDP formulation that incorporates stone configuration, game context, and dynamic scoring factors, enabling an RL agent to reason simultaneously about physical feasibility and strategic desirability. Second, we present a two-phase curriculum learning procedure that significantly improves sample efficiency: Phase 1 trains the agent to master delivery mechanics by rewarding accurate placement around the tee line, while Phase 2 transitions to strategic learning with score-based rewards that encourage offensive and defensive planning. This staged training stabilizes policy learning and reduces the difficulty of direct exploration in the full curling action space. We integrate this MDP and training procedure into a unified Curling RL Framework, built upon a custom simulator designed for stability, reproducibility, and efficient RL training, together with a self-play mechanism tailored for strategic decision-making. Agent policies are optimized using Soft Actor–Critic (SAC), an entropy-regularized off-policy algorithm designed for continuous control. As a case study, we compare the learned agent’s shot patterns with elite match records from the men’s division of the Le Gruyère AOP European Curling Championships 2023, using 6512 extracted shot images.
Experimental results demonstrate that the proposed framework learns diverse, human-like curling shots and outperforms ablated variants across both learning curves and head-to-head evaluations. Beyond curling, our framework provides a principled template for developing RL agents in physics-driven, strategy-intensive sports environments. Full article
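The two-phase curriculum described in this abstract can be sketched as a phase-switched reward function. This is an illustrative sketch only: the tee coordinates, the exponential distance shaping, and the score-difference reward are placeholder assumptions, not the paper's exact design.

```python
import math

# Sketch of a two-phase curriculum reward: Phase 1 shapes delivery
# mechanics (placement near the tee), Phase 2 switches to a strategic,
# score-based signal. All constants here are illustrative assumptions.

TEE = (0.0, 0.0)  # tee target in sheet coordinates (placeholder)

def phase1_reward(stone_xy):
    """Phase 1: reward accurate placement around the tee line."""
    d = math.dist(stone_xy, TEE)
    return math.exp(-d)  # approaches 1 as the stone nears the tee

def phase2_reward(score_delta):
    """Phase 2: strategic reward from the end-score difference."""
    return float(score_delta)

def curriculum_reward(phase, stone_xy=None, score_delta=0):
    return phase1_reward(stone_xy) if phase == 1 else phase2_reward(score_delta)

print(round(curriculum_reward(1, stone_xy=(0.0, 0.0)), 3))  # 1.0 at the tee
```

Staging the reward this way narrows exploration early on (dense placement feedback) before exposing the agent to the sparser strategic signal.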
(This article belongs to the Special Issue Applications of Intelligent Game and Reinforcement Learning)
29 pages, 1072 KB  
Systematic Review
Ethical Responsibility in Medical AI: A Semi-Systematic Thematic Review and Multilevel Governance Model
by Domingos Martinho, Pedro Sobreiro, Andreia Domingues, Filipa Martinho and Nuno Nogueira
Healthcare 2026, 14(3), 287; https://doi.org/10.3390/healthcare14030287 - 23 Jan 2026
Viewed by 303
Abstract
Background: Artificial intelligence (AI) is transforming medical practice, enhancing diagnostic accuracy, personalisation, and clinical efficiency. However, this transition raises complex ethical challenges related to transparency, accountability, fairness, and human oversight. This study examines how the literature conceptualises and distributes ethical responsibility in AI-assisted healthcare. Methods: This semi-systematic, theory-informed thematic review was conducted in accordance with the PRISMA 2020 guidelines. Publications from 2020 to 2025 were retrieved from PubMed, ScienceDirect, IEEE Xplore databases, and MDPI journals. A semi-quantitative keyword-based scoring model was applied to titles and abstracts to determine their relevance. High-relevance studies (n = 187) were analysed using an eight-category ethical framework: transparency and explainability, regulatory challenges, accountability, justice and equity, patient autonomy, beneficence–non-maleficence, data privacy, and the impact on the medical profession. Results: The analysis revealed a fragmented ethical landscape in which technological innovation frequently outpaces regulatory harmonisation and shared accountability structures. Transparency and explainability were the dominant concerns (34.8%). Significant gaps in organisational responsibility, equitable data practices, patient autonomy, and professional redefinition were reported. A multilevel ethical responsibility model was developed, integrating micro (clinical), meso (institutional), and macro (regulatory) dimensions, articulated through both ex ante and ex post perspectives. Conclusions: AI requires governance frameworks that integrate ethical principles, regulatory alignment, and epistemic justice in medicine. This review proposes a multidimensional model that bridges normative ethics and operational governance.
Future research should explore empirical, longitudinal, and interdisciplinary approaches to assess the real impact of AI on clinical practice, equity, and trust. Full article
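The semi-quantitative keyword-based scoring step this review describes can be sketched as a weighted keyword tally over titles and abstracts. The keyword list, weights, and relevance threshold below are illustrative assumptions, not the authors' actual scoring model.

```python
# Sketch of a semi-quantitative keyword-scoring screen for titles and
# abstracts. Keywords, weights, and the high-relevance cutoff are
# hypothetical placeholders for illustration.

KEYWORDS = {  # keyword -> weight (assumed)
    "transparency": 3, "accountability": 3, "explainability": 3,
    "fairness": 2, "privacy": 2, "autonomy": 2, "governance": 1,
}
HIGH_RELEVANCE = 5  # assumed minimum score for full-text analysis

def relevance_score(text):
    """Sum the weights of ethics keywords found in a title/abstract."""
    t = text.lower()
    return sum(w for kw, w in KEYWORDS.items() if kw in t)

abstract = "We study transparency and accountability in clinical AI."
print(relevance_score(abstract))                     # 6
print(relevance_score(abstract) >= HIGH_RELEVANCE)   # True
```

A screen like this yields a reproducible, auditable inclusion criterion before the qualitative thematic analysis begins.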
46 pages, 4076 KB  
Review
A Review of AI-Driven Engineering Modelling and Optimization: Methodologies, Applications and Future Directions
by Jian-Ping Li, Nereida Polovina and Savas Konur
Algorithms 2026, 19(2), 93; https://doi.org/10.3390/a19020093 - 23 Jan 2026
Viewed by 203
Abstract
Engineering is undergoing a significant transformation driven by the integration of artificial intelligence (AI) into engineering optimization, spanning design, analysis, and operational efficiency across numerous disciplines. This review synthesizes the current landscape of AI-driven optimization methodologies and their impacts on engineering applications. In the literature, several frameworks for AI-based engineering optimization have been identified: (1) machine learning models are trained as objective and constraint functions for optimization problems; (2) machine learning techniques are used to improve the efficiency of optimization algorithms; (3) neural networks approximate complex simulation models such as finite element analysis (FEA) and computational fluid dynamics (CFD), making it possible to optimize complex engineering systems; and (4) machine learning predicts design parameters/initial solutions that are subsequently optimized. Fundamental AI technologies, such as artificial neural networks and deep learning, are examined in this paper, along with commonly used AI-assisted optimization strategies. Representative applications of AI-driven engineering optimization are surveyed across multiple fields, including mechanical and aerospace engineering, civil engineering, electrical and computer engineering, chemical and materials engineering, energy, and management. These studies demonstrate how AI enables significant improvements in computational modelling, predictive analytics, and generative design while effectively handling complex multi-objective constraints. Despite these advancements, challenges remain in areas such as data quality, model interpretability, and computational cost, particularly in real-time environments.
Through a systematic analysis of recent case studies and emerging trends, this paper provides a critical assessment of the state of the art and identifies promising research directions, including physics-informed neural networks, digital twins, and human–AI collaborative optimization frameworks. The findings highlight AI’s potential to redefine engineering optimization paradigms, while emphasizing the need for robust, scalable, and ethically aligned implementations. Full article
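Framework (3) above, where a cheap learned model stands in for an expensive simulation during optimization, can be sketched minimally as follows. The "simulation", the quadratic surrogate, and the grid search are toy assumptions for illustration; a real system would fit a neural network to FEA/CFD outputs and use a proper optimizer.

```python
# Minimal surrogate-based optimization sketch: sample an expensive
# simulation sparsely, fit a cheap surrogate, then optimize over the
# surrogate instead of the simulation. All details are illustrative.

def expensive_simulation(x):
    return (x - 2.0) ** 2 + 1.0  # stand-in for an FEA/CFD evaluation

# 1) Sample the simulation sparsely to build training data.
samples = [(x, expensive_simulation(x)) for x in [0.0, 1.0, 3.0, 4.0]]

# 2) "Train" a surrogate -- here, exact quadratic (Lagrange)
#    interpolation through three samples; in practice a neural
#    network would be fitted to many samples.
def surrogate(x):
    (x0, y0), (x1, y1), (x2, y2) = samples[:3]
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

# 3) Optimize over the cheap surrogate, not the simulation.
best = min((x / 100 for x in range(0, 401)), key=surrogate)
print(round(best, 2))  # 2.0 -- recovers the simulation's true optimum
```

The payoff is that the optimizer issues thousands of queries to the surrogate while the expensive simulation is evaluated only a handful of times.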
(This article belongs to the Special Issue AI-Driven Engineering Optimization)