Search Results (4,300)

Search Parameters:
Keywords = language design

35 pages, 2376 KB  
Article
Efficient Word-Level Sign Language Recognition Using Quantized Spatiotemporal Deep Learning for Low-Power Microcontrollers
by Samuel Longwani Kimpinde and Peter O. Olukanmi
Algorithms 2026, 19(4), 248; https://doi.org/10.3390/a19040248 - 25 Mar 2026
Abstract
Deploying efficient sign language recognition models on edge devices advances inclusive, affordable, and privacy-preserving human–computer interaction. Yet most state-of-the-art architectures target server-class hardware and fail under the strict memory, computation, and energy constraints of microcontrollers. This work introduces S3D-Conv1D, a separable spatiotemporal architecture for isolated word-level sign language recognition, tailored for TinyML deployment. While the idea of separating spatial and temporal processing has been explored in earlier models, the novelty here lies in a deployment pipeline designed from the outset for microcontroller-class constraints: every operator has native INT8 support in TensorFlow Lite, CMSIS-NN, and NNoM; the architecture achieves full integer-only execution with competitive accuracy; and the evaluation scale (100 and 300 classes) substantially exceeds prior TinyML sign language recognition studies. Evaluations on datasets show that S3D-Conv1D achieves 98.96% float32 accuracy on WLASL100 with stable cross-dataset generalization (82.5% on SemLex100). After INT8 quantization, accuracy remains high (98.7% on WLASL100) while compressing to 883 KB, the smallest across all evaluated architectures. An ultralight variant further reduces size to 24.7 KB while sustaining 98.5% accuracy on WLASL100 and 77.2% on WLASL300. Quantization-aware training improves stability, particularly at larger vocabulary scales. Among baselines, S3D achieves strong performances but negligible compression (30.3 MB) due to non-quantization-friendly operators. The MobileNet variant generalizes better with 99.4% on WLASL100 and 97.6% accuracy on SemLex100 but remains large at 2.71 MB in INT8 form. CNN + RNN and e-LSTM depend on unsupported recurrent or attention operators. In contrast, S3D-Conv1D meets all operator compatibility requirements, delivers full INT8 execution with a compact sub-1 MB footprint, and real-time performance. 
These results demonstrate that competitive word-level sign language recognition is achievable under embedded constraints when architectural design prioritizes quantization stability, operator compatibility, and deployment feasibility from the outset.
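The compression behind S3D-Conv1D's 883 KB footprint comes from INT8 quantization. The paper's exact TensorFlow Lite pipeline isn't reproduced here, but the core idea, symmetric per-tensor INT8 quantization of float weights, can be sketched in a few lines (the synthetic weight matrix is illustrative):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]
    with a single scale factor, as integer-only runtimes expect."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, scale)))
assert err <= scale / 2 + 1e-7   # rounding error is at most half a step
print(f"compression: {w.nbytes / q.nbytes:.0f}x, max error: {err:.5f}")
```

Storing INT8 instead of float32 gives a 4x size reduction per tensor before any architectural slimming; the abstract's sub-1 MB and 24.7 KB figures additionally reflect the model's small operator set.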

24 pages, 1460 KB  
Perspective
From Sensing to Sense-Making: A Framework for On-Person Intelligence with Wearable Biosensors and Edge LLMs
by Tad T. Brunyé, Mitchell V. Petrimoulx and Julie A. Cantelon
Sensors 2026, 26(7), 2034; https://doi.org/10.3390/s26072034 - 25 Mar 2026
Abstract
Wearable biosensors increasingly stream multi-channel physiological and behavioral data outside the laboratory, yet most deployments still end in dashboards or threshold alarms that leave interpretation open to the user. In high-stakes domains, such as military, emergency response, aviation, industry, and elite sport, the constraint is rarely data availability but the cognitive effort required to convert noisy signals into timely, actionable decisions. We argue for on-person cognitive co-pilots: systems that integrate multimodal sensing, compute probabilistic state estimates on devices, synthesize those states with task and environmental context using locally hosted large language models (LLMs), and deliver recommendations through attention-appropriate cues that preserve autonomy. Enabling conditions include mature wearable sensing, edge artificial intelligence (AI) accelerators, tiny machine learning (TinyML) pipelines, privacy-preserving learning, and open-weight LLMs capable of local deployment with retrieval and guardrails. However, critical research gaps remain across layers: sensor validity under real-world conditions, uncertainty calibration and fusion under distribution shift, verification of LLM-mediated reasoning, interaction design that avoids alarm fatigue and automation bias, and governance models that protect privacy and consent in constrained settings. We propose a layered technical framework and research agenda grounded in cognitive engineering and human–automation interaction. Our core claim is that local, uncertainty-aware reasoning is an architectural prerequisite for trustworthy, low-latency augmentation in isolated, confined, and extreme environments. Full article
(This article belongs to the Special Issue Sensors in 2026)

27 pages, 9896 KB  
Article
Refer-ASV: Referring Multi-Object Tracking in Autonomous Surface Vehicle Navigation Scenes
by Bin Xue, Qiang Yu, Kun Ding, Ying Wang, Shiming Xiang and Chunhong Pan
J. Imaging 2026, 12(4), 145; https://doi.org/10.3390/jimaging12040145 - 25 Mar 2026
Abstract
Water-surface perception is critical for autonomous surface vehicle navigation, where reliable tracking of task-relevant objects is essential for safe and robust operation. Referring multi-object tracking (RMOT) provides a flexible tracking paradigm by allowing users to specify objects of interest through natural language. However, existing RMOT benchmarks are mainly designed for ground or satellite scenes and fail to capture the distinctive visual and semantic characteristics of water-surface environments, including strong reflections, severe illumination variations, weak motion constraints, and a high proportion of small objects. To address this gap, we introduce Refer-ASV, the first RMOT dataset tailored for ASV navigation in complex water-surface scenes. Refer-ASV is constructed from real-world ASV videos and features diverse navigation scenes and fine-grained vessel categories. To facilitate systematic evaluation on Refer-ASV, we further propose RAMOT, an end-to-end baseline framework that enhances visual–language alignment and robustness throughout the tracking pipeline in challenging maritime environments. Experimental results show that RAMOT achieves a HOTA score of 39.97 on Refer-ASV, outperforming existing methods. Additional experiments on Refer-KITTI demonstrate its generalization ability across different scenes.
(This article belongs to the Section Computer Vision and Pattern Recognition)

16 pages, 897 KB  
Data Descriptor
A Dataset Capturing Decision Processes, Tool Interactions and Provenance Links in Autonomous AI Agents
by Yasser Hmimou, Mohamed Tabaa, Azeddine Khiat and Zineb Hidila
Data 2026, 11(4), 66; https://doi.org/10.3390/data11040066 - 25 Mar 2026
Abstract
Agent-based systems built on large language models (LLMs) increasingly rely on complex internal reasoning processes, tool interactions, and memory mechanisms. However, the internal decision-making dynamics of such agents remain difficult to observe, analyze, and compare in a systematic manner. To address this limitation, we present AgentSec, a curated dataset of structured agent interaction traces designed to support the analysis of agent-level reasoning and action behaviors. The dataset consists of 30 deterministic and non-redundant scenario instances, each capturing a complete agent interaction session under a fixed and validated schema. Quantitatively, the 30 released sessions comprise 67 decision nodes and 45 tool calls (73.3% successful), with provenance graphs exhibiting an average depth of 4.53 (max 7) and a maximum branching factor of 3. Scenarios are organized according to a predefined taxonomy of agent behavioral patterns, including tool success and failure modes, fallback strategies, memory conflicts and overwrites, decision rollbacks, and provenance branching structures. Each scenario encodes a distinct analytical case rather than a parametric variation, enabling focused and interpretable study of agent decision-making processes. AgentSec provides detailed records of decision traces, tool calls, memory updates, and provenance relations, and is intended to facilitate reproducible research on agent behavior analysis, auditing, and evaluation. The dataset is released alongside its schema, scenario manifest, and validation tooling to support reuse and extension by the research community. Rather than serving as a large-scale performance benchmark, AgentSec is explicitly designed as a diagnostic and unit-test suite for auditing agent-level reasoning logic and provenance consistency under controlled structural conditions. Full article

15 pages, 2768 KB  
Article
Accurate Multi-Page Document Retrieval by Effectively Fusing Context Information Across Pages
by Bing Qian, Kaiwei Deng, Yuexin Wu, Jianming Zhang, Juanjuan Sun, Yanru Xue and Chunxiao Fan
Electronics 2026, 15(7), 1353; https://doi.org/10.3390/electronics15071353 - 25 Mar 2026
Abstract
Visual retrievers such as visual retrieval-augmented generation (RAG) have recently emerged as a powerful model for retrieving multimodal documents without the need to convert page images into text. Existing visual retrievers typically encode every document page separately, ignoring the inherent rich context information across pages within multi-page documents. However, some crucial semantic information often spans multiple pages in a document, and should be effectively encoded for better retrieval. To address this problem, this paper proposes a novel approach utilizing dynamically fusing visual context (DFVC), which adaptively encodes the semantic information across pages. In the proposed DFVC approach, a lightweight plug-and-play adapter is designed; in addition, a contrastive loss function incorporating the positive fused embedding vectors and negative embedding vectors is designed to constrain the adapter, allowing it to learn the weights for the context pages. Together, the designed adapter and loss function allow the retriever to effectively encode useful semantic information across pages while excluding distracting noise. The proposed DFVC is validated on commonly used challenging multi-page document benchmarks. Extensive experimental results demonstrate that it significantly boosts retrieval performance. In addition, the proposed DFVC is highly parameter-efficient since it employs frozen vision-language backbones, allowing it to be easily integrated into existing visual RAG pipelines for finer document retrieval. Full article
(This article belongs to the Special Issue Advances in AI for Data Analytics and Intelligent Systems)
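The DFVC abstract describes a contrastive loss over positive fused embeddings and negative embeddings without giving its exact form. A generic InfoNCE-style objective from that family can be sketched as follows (the function name and temperature value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.07):
    """InfoNCE-style contrastive loss on cosine similarities: the query is
    pulled toward the positive embedding and pushed from the negatives."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    q, p, n = unit(query), unit(positive), unit(np.atleast_2d(negatives))
    logits = np.concatenate(([q @ p], n @ q)) / tau
    logits -= logits.max()                    # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

q = np.array([1.0, 0.0])
# Positive identical to the query, one orthogonal negative: loss near zero.
print(info_nce(q, np.array([1.0, 0.0]), np.array([[0.0, 1.0]])))
```

Minimizing a loss of this shape is what lets an adapter learn to weight context pages so that fused embeddings stay close to relevant content and far from distractors.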

35 pages, 2917 KB  
Article
Generative AI-Assisted Automation of Clinical Data Processing: A Methodological Framework for Streamlining Behavioral Research Workflows
by Marta Lilia Eraña-Díaz, Alejandra Rosales-Lagarde, Iván Arango-de-Montis and José Alejandro Velázquez-Monzón
Informatics 2026, 13(4), 48; https://doi.org/10.3390/informatics13040048 - 25 Mar 2026
Abstract
This article presents a methodological framework for automating clinical data processing workflows using Generative Artificial Intelligence (AI) as an interactive co-developer. We demonstrate how Large Language Models (LLMs), specifically ChatGPT and Claude, can assist researchers in designing, implementing, and deploying complete ETL (Extract, Transform, Load) pipelines without requiring advanced programming or DevOps expertise. Using a dataset of 102 participants from a nonverbal expression study as a proof-of-concept, we show how AI-assisted automation transforms FaceReader video analysis outputs during the Cyberball paradigm into structured, analysis-ready datasets through containerized workflows orchestrated via Docker and n8n. The resulting framework successfully processes all 102 datasets, generating machine learning outputs to validate pipeline execution stability (rather than clinical predictivity), and deploys interactive visualization dashboards, tasks that would normally require significant manual effort and specialized technical expertise. This work establishes a replicable methodology for integrating Generative AI into research data management workflows, with implications for accelerating scientific discovery across behavioral and medical research domains.

20 pages, 944 KB  
Article
Psychometric Properties and Factor Structure of the Polish ChEDE-Q in a Community Sample of Adolescents: Associations with BMI
by Małgorzata Wąsacz, Damian Frej, Danuta Ochojska and Marta Kopańska
Nutrients 2026, 18(7), 1028; https://doi.org/10.3390/nu18071028 - 24 Mar 2026
Abstract
Background: The Child and Adolescent Eating Disorder Examination Questionnaire (ChEDE-Q) is a widely used self-report screening instrument for assessing eating disorder psychopathology in young people. Evidence on the psychometric properties of the Polish-language version remains limited. This pilot study evaluated the internal consistency, dimensional structure, and BMI-related convergent validity of the Polish ChEDE-Q in a regional youth sample. Methods: A cross-sectional design was used, including 200 participants aged 10–18 years. Item characteristics and data quality were examined. Internal consistency was assessed using Cronbach’s alpha and McDonald’s omega. Dimensional structure was evaluated with exploratory factor analysis (EFA) based on a polychoric correlation matrix and confirmatory factor analysis (CFA) comparing one-factor, four-factor, and bifactor models. Convergent validity was examined using Spearman’s rank correlations with BMI and linear regression analyses with BMI z-scores. Results: The global score showed high internal consistency (α = 0.898; ω = 0.900). Subscale reliability ranged from acceptable to high. EFA supported a multidimensional solution. In CFA, the bifactor model showed the best fit among the tested alternatives (CFI = 0.742; TLI = 0.681; RMSEA = 0.122; SRMR = 0.084), but none of the tested models achieved fully satisfactory absolute fit. The global score correlated positively with BMI (rho = 0.282; p < 0.001) and was significantly associated with BMI z-score in regression analysis (B = 0.334; p < 0.001). Conclusions: The Polish ChEDE-Q global score demonstrated strong internal consistency and preliminary BMI-related convergent validity. The findings provide initial support for a general factor and for using the global score in screening-oriented research; however, the pilot character of the study and the suboptimal absolute fit indices indicate that further validation in larger and more heterogeneous samples is required.
(This article belongs to the Special Issue Advances in Eating Disorders: Nutritional Perspectives)
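The reliability figures in this abstract (α = 0.898; ω = 0.900) come from standard internal-consistency formulas. As a quick sketch, Cronbach's alpha for an item-score matrix can be computed directly from item and total-score variances (the synthetic data below is illustrative, not from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Perfectly parallel items yield alpha = 1; independent items, near 0.
x = np.random.default_rng(0).normal(size=(200, 1))
print(cronbach_alpha(np.hstack([x, x, x])))  # 1.0 up to float rounding
```

Values above roughly 0.9, as reported for the global score, are conventionally read as high internal consistency.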

29 pages, 1513 KB  
Article
Restorative Urban Development: Creating Social Capacity Through Black Modernist Architecture
by Eric Harris and Kathy Dixon
Sustainability 2026, 18(7), 3186; https://doi.org/10.3390/su18073186 - 24 Mar 2026
Abstract
Black Modernist architecture offers a powerful yet underexamined pathway for advancing restorative capacity in American cities. This paper argues that Black Modernism functions as a restorative design methodology, addressing social, economic, and ecological harm imposed on Black communities through slavery, racial capitalism, urban renewal, and infrastructural violence. Grounded in the restorative economics framework pioneered by O’Hara, the paper explores the role Black Modernism plays in sustaining sink capacities defined as the social, ecological, and emotional processes that absorb stress, pollution, waste, and trauma. Conventional economic models ignore these capacities, despite their necessity for economic productivity. Black communities, like all marginalized communities, have historically been forced to provide them without compensation. Situating Black Modernist architecture within this framework, the paper demonstrates how Black architects have designed buildings and landscapes that restore dignity, memory, health, and cultural identity, thereby expanding community sink capacities. Drawing on the works of various scholars, the paper examines case studies from Washington, DC, Atlanta, and Chicago, which reveal how Black communities have borne the burden of unremunerated restorative labor while shaping the American built environment. The paper positions Black Modernism as both a design language and a political–economic intervention, challenging architectural value systems that privilege monumental production over community restoration. It concludes by proposing a Restorative Design Framework that integrates Black Modernist principles with restorative economics, offering policy and planning pathways that recognize cultural labor, emotional restoration, and community well-being as essential components of sustainable urban development. Full article
(This article belongs to the Collection Toward a Restorative Economy)

24 pages, 9125 KB  
Article
Decoupled Dual-Stage Generation to Balance Factuality and Empathy in Customer-Support Dialogue Systems
by Serynn Kim, Hongseok Choi and Jin-Xia Huang
Appl. Sci. 2026, 16(7), 3123; https://doi.org/10.3390/app16073123 - 24 Mar 2026
Abstract
In practical customer-support dialogue systems, responses must simultaneously deliver factually grounded information and context-appropriate empathy, yet existing single-stage generation models often exhibit specialization bias, favoring one objective at the expense of the other. To address this limitation, we propose a dual-stage generation framework that explicitly decouples factual grounding from empathetic modulation. Our primary configuration follows a fact-to-empathy order, in which the system first generates a fact-centric draft via structured query interpretation and optional retrieval-augmented generation, then applies empathy-aware tuning conditioned on inferred emotion type, intensity, and empathy necessity. To enable deployment in resource-constrained environments, only the query interpretation module is explicitly trained using knowledge distillation, allowing the overall system to operate with compact 4B–8B backbone language models. Furthermore, we construct a customer-support dialogue dataset designed to reflect realistic interactions involving both informational and emotional demands. Extensive experiments with compact models show that the proposed approach generally improves key dimensions of empathetic response quality while maintaining overall factual performance, thereby helping mitigate the representational entanglement empirically observed in single-stage baselines. Both quantitative metrics and scenario-based analyses confirm that decoupled generation enables a more balanced integration of factuality and empathy than single-stage generation. These results suggest that dual-stage generation provides a practical and extensible foundation for deployable, real-world customer-support dialogue systems. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

20 pages, 3963 KB  
Article
CalcTutor: Multi-Agent LLM Grading of Handwritten Mathematics with RAG-Grounded Feedback for Adaptive Learning Support
by Le Ying Tan, Buyuan Zhu, Shiyu Hu, Ankit Mishra, Darren J. Yeo and Kang Hao Cheong
Mathematics 2026, 14(7), 1094; https://doi.org/10.3390/math14071094 - 24 Mar 2026
Abstract
Personalized instruction remains a major bottleneck in higher education, especially in large classes where timely, individualized feedback is difficult to achieve. Existing automation typically relies on rigid rule-based pipelines or computationally heavy deep learning models, making it difficult to simultaneously achieve interpretability, instructional usability, and scalable deployment. In this study, we present CalcTutor, a generative-AI-based assessment and feedback system designed to support open-ended handwritten calculus problem solving. The system organizes instructional support through three coordinated components: (1) a multi-agent large language model (LLM) mechanism that evaluates solution processes and produces diagnostic feedback, (2) a retrieval-augmented generation (RAG) pipeline that links diagnosed difficulties to aligned instructional materials, and (3) real-time learner analytics for both students and instructors, forming an integrated instructional support workflow rather than an automated answer-checking tool. In offline evaluation and a pilot classroom deployment, the multi-agent grader achieved a weighted agreement accuracy of 0.931 and an F1-score of 0.934 on 1055 handwritten solutions. Participant feedback and workflow testing indicated that CalcTutor can be stably integrated into routine classroom use and enables students to interpret and act upon the provided feedback. These results indicate that automated assessment, diagnostic feedback, and targeted review can operate coherently within a single instructional process that supports instructor-led assessment practices. Using undergraduate calculus as an application domain for open-ended handwritten mathematical assessment, the study demonstrates the operational feasibility of a closed-loop assessment–feedback–revision workflow and provides a deployable instructional infrastructure for formative instructional support in real classroom contexts. Full article

28 pages, 436 KB  
Review
Sustainable Computing Education in African Higher Education: A Critical Synthesis and Context-Aware Framework for Practice
by Kehinde Aruleba and Ebenezer Esenogho
Sustainability 2026, 18(7), 3170; https://doi.org/10.3390/su18073170 - 24 Mar 2026
Abstract
Sustainable computing is now a mainstream expectation of the profession, yet its treatment in higher education remains uneven, and often reflects assumptions of stable power, affordable connectivity, and frequent hardware refresh. This conceptual paper offers a critical synthesis of the misalignment between globally promoted sustainability competencies and the infrastructural realities of African higher education. We argue that when curricula designed for resource-abundant settings are adopted without adaptation in contexts shaped by energy volatility, high data costs, and complex device ecologies, a design–reality gap emerges: students may learn the language of sustainability but lack the practical competence to engineer resilient, resource-aware systems. Employing an explanatory synthesis of two evidence pools, i.e., global work on sustainable computing education and Africa-focused scholarship on infrastructure constraints, we propose the Context-Aware Sustainable Computing Education Framework. The framework integrates three dimensions of reform: pedagogy that shifts from awareness to context-aware action competence through constraint-led challenges, curriculum reform that embeds frugal computing and lifecycle stewardship as technical rigour within core modules, and an infrastructure-as-driver stance that treats the campus energy and device environment as a living laboratory for responsible trade-offs. We conclude with tiered implementation pathways, showing how departments can progress from minimum viable changes to institutional approaches. The synthesis positions African universities as credible contributors to global thinking on resilient computing under tightening resource constraints. Full article
(This article belongs to the Section Sustainable Education and Approaches)

33 pages, 4356 KB  
Systematic Review
Large Language Models in Sustainable Energy Systems: A Systematic Review on Modeling, Optimization, Governance, and Alignment to Sustainable Development Goals
by T. A. Alka, M. Suresh, Santanu Mandal, Walter Leal Filho and Raghu Raman
Energies 2026, 19(6), 1588; https://doi.org/10.3390/en19061588 - 23 Mar 2026
Abstract
Sustainable energy systems (SESs) support intelligent modeling, automation, and governance that enable energy access, infrastructure innovation, and climate resilience. Despite their potential, their integration with large language models (LLMs) raises concerns regarding energy intensity, transparency, equity, and regulation. This study adopts a mixed-methods review combining a BERTopic-based thematic analysis and case-based synthesis to examine applications of LLMs in energy modeling, optimization, etc., and to assess their alignment with the United Nations Sustainable Development Goals. These applications support SDG 7 (Affordable and Clean Energy) by improving access to energy knowledge and decision support, SDG 9 (Industry, Innovation and Infrastructure) through intelligent and scalable digital infrastructure, and SDG 13 (Climate Action) by climate-responsive planning and operational efficiency. The findings reveal that modular, agent-based LLM workflows enhance energy modeling and regulatory compliance. However, sustainability trade-offs necessitate responsible Artificial Intelligence (AI) governance emphasizing transparency, ethical design, and inclusivity. This review informs policy and practice by suggesting that LLMs offer potential value for sustainable energy application deployment within responsible AI governance frameworks that emphasize ethical design, accountability, and equitable access. The study provides future research directions using the ADO (antecedents–decisions–outcomes) framework, emphasizing regulatory readiness, ethical design, and inclusive governance aligned with SDGs 7, 9, and 13, among others. Full article
(This article belongs to the Special Issue Sustainable Energy Systems: Progress, Challenges and Prospects)

20 pages, 1238 KB  
Article
Perceived Usability as a Factor Associated with Clinical Outcomes in Mobile Health Diabetes Management: A Bayesian Mediation and Equity Analysis
by Oscar Eduardo Rodríguez Montes, María del Carmen Gogeascoechea-Trejo and Clara Bermúdez-Tamayo
J. Clin. Med. 2026, 15(6), 2465; https://doi.org/10.3390/jcm15062465 - 23 Mar 2026
Abstract
Background: While mobile health (mHealth) interventions show promise for type 2 diabetes management, mechanisms linking user experience to clinical outcomes remain poorly understood. We hypothesized that perceived usability may mediate associations between patient characteristics and short-term clinical changes, with implications for health equity in digital interventions. Methods: Secondary analysis of the intervention arm from a randomized controlled trial in urban Mexican primary care (ClinicalTrials.gov NCT05924516). Participants used a diabetes self-management mobile application for 90 days. We assessed usability with the validated Computer System Usability Questionnaire (CSUQ; 16 items, 7-point scale) and measured clinical changes in body mass index (BMI), systolic blood pressure (SBP), and HbA1c. Bayesian mediation analysis (literature-informed priors) examined interface quality as a mediator of age-related clinical effects. Item-level analysis identified educational disparities in specific usability dimensions using independent t-tests adjusted for multiple comparisons. Results: Mean overall usability was 5.20/7 (SD = 0.89, 74th percentile). Interface quality mediated 39% of the age–SBP association. Participants experiencing high usability (≥6) versus low usability showed a BMI reduction of −0.78 vs. −0.21 kg/m² (Cohen’s d = 0.56) and an SBP reduction of −7.3 vs. −1.2 mmHg (Cohen’s d = 0.51). No mediation effect was observed for HbA1c change. Users with ≤primary education (41% of the sample) scored 1.9 points lower on error messages (3.2 vs. 5.1, p < 0.01) and 1.4 points lower on help documentation (3.6 vs. 5.0, p < 0.03). These disparities persisted after controlling for age and baseline severity. Conclusions: Perceived usability may represent a mechanistic pathway linking user experience to clinical outcomes. Higher usability scores were associated with clinically meaningful improvements in cardiometabolic parameters.
Educational disparities in understanding error messages and help documentation represent modifiable design barriers. Implementing contextual error explanations with visual examples and plain-language help content may enhance both clinical effectiveness and equity in digital diabetes interventions. Full article
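As an illustrative aside, the "mediated 39% of the age–SBP association" figure corresponds to the standard proportion-mediated quantity in a simple mediation model: the indirect effect a·b (predictor → mediator times mediator → outcome) divided by the total effect a·b + c′. A minimal sketch with made-up path coefficients (the abstract does not report the coefficients themselves):

```python
def proportion_mediated(a: float, b: float, c_prime: float) -> float:
    """Share of the total effect carried by the indirect path:
    indirect effect a*b over the sum of indirect (a*b) and
    direct (c') effects. Assumes effects have the same sign."""
    indirect = a * b
    return indirect / (indirect + c_prime)

# Illustrative, made-up coefficients for an
# age -> interface quality -> SBP pathway; yields roughly 0.39.
share = proportion_mediated(a=0.5, b=0.26, c_prime=0.2)
```

Note that this quantity is only interpretable when the indirect and direct effects point in the same direction; with opposing signs it can exceed 1 or go negative, which Bayesian mediation analyses typically report via posterior intervals instead.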
(This article belongs to the Special Issue Clinical Management for Metabolic Syndrome and Obesity)

13 pages, 1443 KB  
Article
Comparative Quality Assessment of Artificial Intelligence in Patient Education on Platelet-Rich Plasma (PRP) Therapy
by Jonas Krueckel, Dominik Szymski, Nura Ahmad, David Schiffelholz, Johannes Weber, Siska Buchhorn, Tomas Buchhorn, Kai Fehske, Siegmund Lang, Volker Alt and Franz Hilber
J. Pers. Med. 2026, 16(3), 173; https://doi.org/10.3390/jpm16030173 - 23 Mar 2026
Abstract
Background: Platelet-rich plasma (PRP) therapy is increasingly used for musculoskeletal conditions, yet patients seeking supplementary information online encounter resources of variable quality. Large language models (LLMs) such as ChatGPT and Google Gemini may support patient education, but their performance in answering common patient questions about PRP therapy has not been well characterized. Methods: This study compared the quality of responses generated by ChatGPT-4, ChatGPT-3.5, and Google Gemini to common PRP-related patient questions. Ten frequently asked PRP-related questions were identified through a structured search of online sources, PubMed, Google Trends, and AI-assisted query generation. Each question was submitted to the three LLMs using a standardized prompt designed to elicit clear and empathetic responses. Five orthopedic surgeons, blinded to model identity, assessed each answer using a previously published four-tier rating framework. Secondary metrics included exhaustiveness, clarity, empathy, and response length. Results: All models produced mostly satisfactory answers. ChatGPT-3.5 received the highest proportion of excellent ratings (70%), compared with 40% for ChatGPT-4 and 22% for Gemini, and outperformed both models in overall quality. The most common limitation across models was insufficient detail. ChatGPT-4 and Gemini performed similarly in several categories, although Gemini was rated lower in empathy and comprehensiveness. Overall differences between models were statistically significant. Conclusions: Commonly available LLMs were able to provide mostly satisfactory responses to patient questions about PRP. However, important limitations remained, particularly with respect to detail and individualization. These tools may support initial patient information-seeking, but they should complement rather than replace expert medical counseling. Full article
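As a rough illustration of how blinded multi-rater tier assessments like these can be summarized into the reported per-model percentages, the sketch below tallies ratings on a four-tier scale (the tier names here are assumptions; the published rating framework may label its tiers differently):

```python
from collections import Counter

# Assumed tier labels for a four-tier rating framework (hypothetical names).
TIERS = ["excellent", "satisfactory_minor", "satisfactory_major", "unsatisfactory"]

def rating_distribution(ratings: list[str]) -> dict[str, float]:
    """Fraction of blinded rater assessments falling in each tier
    for one model, across all questions and raters."""
    counts = Counter(ratings)
    n = len(ratings)
    return {tier: counts.get(tier, 0) / n for tier in TIERS}

# E.g. 7 of 10 pooled ratings in the top tier -> a 70% "excellent" rate,
# comparable to the figure reported for ChatGPT-3.5.
dist = rating_distribution(["excellent"] * 7 + ["satisfactory_minor"] * 3)
```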

36 pages, 5956 KB  
Article
A Knowledge-Augmented Two-Stage Workflow for Architectural Concept-to-Massing Generation and Evaluation
by Shangci Sun and Yao Fu
Buildings 2026, 16(6), 1265; https://doi.org/10.3390/buildings16061265 - 23 Mar 2026
Abstract
Large language models (LLMs) and diffusion-based image generators can rapidly produce architectural ideas and imagery, yet translating conceptual narratives into massing composition is often implicit and difficult to reproduce. In this paper, we present a knowledge-augmented two-stage workflow for architectural concept-to-massing generation and evaluation. The outputs are represented as axonometric massing proxy images, which serve as 2D visual proxies for early-stage massing refinement rather than editable 3D models. The workflow integrates a prototype library and Knowledge Graph (KG) routing to map narrative cues into executable strategy and operation tokens and compile stage-specific prompts. Stage 1 produces structural concept sketches emphasizing legible composition, while Stage 2 generates axonometric massing proxy images conditioned on Stage 1 sketches to stabilize composition across candidates. Under a fixed sampling budget, candidates are ranked using a rubric-based scoring protocol with Top-K selection, and evaluation signals can be written back to update prompt compilation iteratively. Across diverse project briefs, ablation studies demonstrate that knowledge augmentation improves constraint compliance and composition readability while maintaining controlled diversity for early exploration. We report expert ratings together with paired statistical tests to support reproducible comparisons. Full article
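As an illustrative aside, the "rubric-based scoring protocol with Top-K selection" described above amounts to aggregating per-criterion rubric scores for each candidate and keeping the K best under the fixed sampling budget. A minimal sketch (criterion names and equal weighting are assumptions, not taken from the paper):

```python
def rubric_score(subscores: dict[str, float]) -> float:
    """Aggregate per-criterion rubric scores (e.g. constraint compliance,
    composition readability) into one scalar by simple averaging."""
    return sum(subscores.values()) / len(subscores)

def top_k(candidates: list, scores: list[float], k: int = 3) -> list:
    """Rank candidates by rubric score, descending, and keep the top K."""
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [candidate for candidate, _ in ranked[:k]]

# E.g. keep the 2 best of 4 generated massing candidates:
best = top_k(["cand_a", "cand_b", "cand_c", "cand_d"],
             [2.0, 9.0, 5.0, 7.0], k=2)
```

In the workflow as described, the scores of the selected candidates could then be written back to update prompt compilation for the next iteration.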
