Search Results (403)

Search Parameters:
Keywords = process to improve collaborative work

16 pages, 1267 KB  
Article
Differentially Private Federated Learning with Adaptive Clipping Thresholds
by Jianhua Liu, Yanglin Zeng, Zhongmei Wang, Weiqing Zhang and Yao Tong
Future Internet 2026, 18(3), 148; https://doi.org/10.3390/fi18030148 (registering DOI) - 14 Mar 2026
Abstract
Under non-independent and identically distributed (Non-IID) conditions, significant variations exist in local model updates across clients and training phases during the collaborative modeling process of differential privacy federated learning (DP-FL). Fixed clipping thresholds and noise scales struggle to accommodate these diverse update differences, leading to mismatches between local update intensity and noise perturbations. This imbalance results in data privacy leaks and suboptimal model accuracy. To address this, we propose a differential privacy federated learning method based on adaptive clipping thresholds. During each communication round, the server adaptively estimates the global clipping threshold for that round using a quantile strategy based on the statistical distribution of client update norms. Simultaneously, clients adaptively adjust their noise scales according to the clipping threshold magnitude, enabling dynamic matching of clipping intensity and noise perturbation across training phases and clients. The novelty of this work lies in a quantile-driven, round-wise global clipping adaptation that synchronizes sensitivity bounding and noise calibration across heterogeneous clients, enabling improved privacy–utility behavior under a fixed privacy accountant. In experiments on rail damage datasets, our proposed method slightly reduces the attacker’s MIA ROC-AUC by 0.0033 and 0.0080 compared with Fed-DPA and DP-FedAvg, respectively, indicating stronger privacy protection, while improving average accuracy by 1.55% and 3.35% and achieving faster, more stable convergence. We further validate its effectiveness on CIFAR-10 under non-IID partitions.
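The round-wise quantile strategy described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the quantile parameter `q`, and the rule that noise standard deviation scales with the round's threshold are all assumptions for illustration.

```python
import numpy as np

def quantile_clip_threshold(update_norms, q=0.5):
    """Server side (sketch): estimate the round's global clipping threshold
    as the q-th quantile of the reported client update norms."""
    return float(np.quantile(update_norms, q))

def clip_and_noise(update, threshold, noise_multiplier=1.0, rng=None):
    """Client side (sketch): clip the local update to `threshold` in L2 norm,
    then add Gaussian noise whose scale tracks the threshold (the clipped
    update's sensitivity bound)."""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, threshold / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * threshold, size=update.shape)
    return clipped + noise
```

Because the threshold is re-estimated from the current round's norm distribution, clipping intensity and noise scale move together as client updates shrink over training, which is the matching behavior the abstract describes.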

42 pages, 1151 KB  
Review
Active Learning in University Physics for Sustainable Higher Education: Effective Components, Mechanisms, and SDG-Aligned Competency Pathways—A Multidimensional Review
by Fan Xiao, Chenglong Wang and Jun Jiang
Sustainability 2026, 18(6), 2791; https://doi.org/10.3390/su18062791 - 12 Mar 2026
Abstract
Active learning has increasingly been adopted as an evidence-aligned approach to improving learning quality in university physics—a domain characterized by high conceptual abstraction, persistent misconceptions, and substantial variability in student performance. Evidence from physics education research indicates that active-learning designs can outperform lecture-dominant instruction in conceptual learning and student engagement; however, reported effects vary substantially across instructional settings and implementation models. Here, empirical studies and review-level syntheses are integrated to delineate (i) the instructional components that most reliably underpin successful active learning, (ii) the mechanisms through which these components influence learning processes and outcomes, and (iii) the boundary conditions that moderate effectiveness across higher-education contexts. The synthesis is further situated within sustainability-oriented higher education by linking physics active-learning designs to competence development relevant to quality education, climate literacy, and collaborative problem solving. Evidence spanning flipped classroom implementations, peer instruction, collaborative problem solving, inquiry- and project-based approaches, and technology-enhanced formats is organized into a component–mechanism–outcome framework structured along cognitive, affective, and behavioral pathways. Two deliverables are advanced: an integrative mechanism model connecting instructional components to mediating processes, learning outcomes, and sustainability-aligned competencies, and an operational toolbox that translates the evidence into actionable design heuristics, measurement options, and scaling considerations. By redirecting attention from “which strategy works” to “which components work, how, and under what conditions,” the review aims to support instructors, departments, and institutions seeking scalable, evidence-aligned active learning in university physics.
(This article belongs to the Special Issue STEM Education and Innovative Methodologies for Sustainability)

18 pages, 1182 KB  
Article
Co-MedGraphRAG: A Collaborative Large–Small Model Medical Question-Answering Framework Enhanced by Knowledge Graph Reasoning
by Sizhe Chen and Tao Chen
Information 2026, 17(3), 247; https://doi.org/10.3390/info17030247 - 2 Mar 2026
Abstract
Large language models (LLMs) have demonstrated significant capabilities in natural language processing (NLP), but they often encounter challenges in the medical domain. This can result in insufficient alignment between generated answers and user intent, as well as factual deviations. To address these issues, we propose Co-MedGraphRAG, a novel framework combining knowledge graph reasoning with large–small model collaboration, aimed at improving the structural grounding and interpretability of medical responses. The framework operates through a multi-stage collaborative mechanism to augment question answering. First, a large language model constructs a question-specific knowledge graph (KG) containing pending entities (denoted as “none”) to explicitly define known and unknown variables. Subsequently, a hybrid reasoning strategy is employed to populate the pending entities, thereby completing the question-specific knowledge graph. Finally, this graph serves as critical structured evidence, combined with the original question, to augment the large language model in generating the final answer, implemented using Qwen2.5-7B and GLM4-9B in this paper. To evaluate the generated answers, we introduce a larger-parameter LLM (GPT-4o) to assess performance across five dimensions and compute an overall score. Experiments on three medical datasets demonstrate that Co-MedGraphRAG achieves consistent improvements in relevance, practicality, and structured knowledge support compared with mainstream Retrieval-Augmented Generation (RAG) frameworks. This work serves as a reference for researchers and developers designing medical question-answering frameworks and exploring decision-support applications.
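The pending-entity mechanism can be pictured with a toy sketch. Everything below is an assumption for illustration, not the paper's code: the `PENDING` sentinel, the triple representation, and the `resolver` callable standing in for the hybrid retrieval/small-model reasoning step.

```python
# Toy sketch of a question-specific KG whose pending entities ("none")
# are filled by a secondary resolver before the completed graph is
# handed back to the LLM as structured evidence.
PENDING = "none"

def build_question_kg(question_triples):
    """The 'large model' step (stubbed): emit triples, some with PENDING slots."""
    return [tuple(t) for t in question_triples]

def complete_kg(kg, resolver):
    """The hybrid-reasoning step (stubbed): fill each PENDING tail entity
    via `resolver`, a callable standing in for retrieval or a small model."""
    completed = []
    for head, rel, tail in kg:
        if tail == PENDING:
            tail = resolver(head, rel)
        completed.append((head, rel, tail))
    return completed

# Hypothetical mini knowledge base for the resolver to consult.
knowledge_base = {("aspirin", "treats"): "headache"}
kg = build_question_kg([("aspirin", "treats", PENDING)])
filled = complete_kg(kg, lambda h, r: knowledge_base.get((h, r), PENDING))
# filled == [("aspirin", "treats", "headache")]
```

The point of the sentinel is that known and unknown variables are explicit in the graph, so the final answer-generation prompt can cite which slots were resolved and from where.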

17 pages, 14849 KB  
Article
A Collaborative Robotic System for Autonomous Object Handling with Natural User Interaction
by Federico Neri, Gaetano Lettera, Giacomo Palmieri and Massimo Callegari
Robotics 2026, 15(3), 49; https://doi.org/10.3390/robotics15030049 - 27 Feb 2026
Abstract
In Industry 5.0, the transition from fixed traditional automation to flexible human–robot collaboration (HRC) requires interfaces that are both intuitive and efficient. This paper introduces a novel, multimodal control system for autonomous object handling, specifically designed to enhance natural user interaction in dynamic work environments. The system integrates a 6-Degrees of Freedom (DoF) collaborative robot (UR5e) with a hand-eye RGB-D vision system to achieve robust autonomy. The core technical contribution lies in a vision pipeline utilizing deep learning for object detection and point cloud processing for accurate 6D pose estimation, enabling advanced tasks such as human-aware object handover directly onto the operator’s hand. Crucially, an Automatic Speech Recognition (ASR) module is incorporated, providing a Natural Language Understanding (NLU) layer that allows operators to issue real-time commands for task modification, error correction and object selection. Experimental results demonstrate that this multimodal approach offers a streamlined workflow aiming to improve operational flexibility compared to traditional HMIs, while enhancing the perceived naturalness of the collaborative task. The system establishes a framework for highly responsive and intuitive human–robot workspaces, advancing the state of the art in natural interaction for collaborative object manipulation.
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

27 pages, 9877 KB  
Article
An A*-DWA Algorithm Enhanced Laser SLAM System for Orchard Navigation: Design and Performance Analysis
by Hongsen Wang, Xiuhua Zhang, Zheng Huang, Yongwei Yuan, Degang Kong and Shanshan Li
Agriculture 2026, 16(4), 469; https://doi.org/10.3390/agriculture16040469 - 18 Feb 2026
Abstract
To address the key limitations of existing laser SLAM (Simultaneous Localization and Mapping) navigation systems in orchards—insufficient safety margins, unsmooth trajectories, poor dynamic obstacle adaptability, and high energy consumption—this study proposes an A* (A-Star)-DWA (Dynamic Window Approach) collaborative optimization algorithm integrated into an orchard-specific laser SLAM framework. Three core enhancements were added to the global A* planner: (1) obstacle–vertex adjacency checks (maintaining ~1 m minimum safety distance, meeting 0.8–1.2 m orchard machinery requirements); (2) redundant node elimination (reducing unnecessary turns and energy use); (3) an obstacle density metric integrated into the heuristic function (optimizing node expansion efficiency). For the local DWA planner, key parameters (azimuth weight, obstacle distance weight, prediction horizon, etc.) were calibrated to orchard scenarios and tracked robot kinematics, with a lightweight “deviate → avoid → rejoin global path” mechanism for real-time obstacle avoidance. A three-stage path smoothing process (Bresenham verification + cubic spline interpolation + curvature constraint optimization) further improved trajectory quality. The A*-DWA framework synergizes A*’s global optimality (overcoming DWA’s local minima) and DWA’s real-time obstacle avoidance (compensating for A*’s static limitation). Validation via MATLAB/Gazebo/RViz simulations and field tests in the “Xinli No. 7” pear orchard confirmed superior performance: 100% obstacle avoidance success rate (vs. 85.0–92.0% for comparative algorithms), 0.36–0.45 s response time (57.7–71.1% shorter), 1.05–1.15 m safety distance (far exceeding 0.60–0.82 m of existing methods); field tests show 10% lower energy consumption than traditional A*, 0.011 m mean lateral deviation (straight segments), and 65% reduced peak turning deviation (0.14 m). This work resolves multidimensional orchard navigation challenges, enhances agricultural robot efficiency, safety, and adaptability, and provides a practical basis for smart agriculture advancement.
(This article belongs to the Special Issue Application of Smart Technologies in Orchard Management)
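A density-augmented A* heuristic of the kind the abstract mentions can be sketched as follows. The window definition (Chebyshev neighborhood), the `density_weight`, and the function names are assumptions for illustration; the paper's exact metric may differ.

```python
import math

def obstacle_density(cell, obstacles, radius=2):
    """Fraction of grid cells within `radius` (Chebyshev distance) of `cell`
    that are obstacles (illustrative density metric)."""
    x, y = cell
    window = [(x + dx, y + dy)
              for dx in range(-radius, radius + 1)
              for dy in range(-radius, radius + 1)]
    return sum(c in obstacles for c in window) / len(window)

def heuristic(cell, goal, obstacles, density_weight=5.0):
    """Euclidean distance-to-goal inflated by local obstacle density,
    biasing A* node expansion toward open corridors between tree rows."""
    return math.dist(cell, goal) + density_weight * obstacle_density(cell, obstacles)
```

Note that adding a density penalty makes the heuristic inadmissible in the textbook sense, trading strict optimality for safer, lower-effort paths, which matches the safety-margin emphasis of the abstract.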

14 pages, 1935 KB  
Article
The Cardiologist Driving Synthetic AI: The TIMA Method for Clinically Supervised Synthetic Data Generation
by Gianmarco Parise, Roberto Ceravolo, Fabiana Lucà, Michele Massimo Gulizia, Cecilia Tetta, Orlando Parise, Federico Nardi, Massimo Grimaldi and Sandro Gelsomino
J. Clin. Med. 2026, 15(4), 1351; https://doi.org/10.3390/jcm15041351 - 9 Feb 2026
Abstract
Background/Objectives: Synthetic artificial intelligence (AI) is increasingly used in cardiovascular medicine to generate realistic clinical data from limited samples while preserving patient privacy. Despite its promise, concerns remain regarding the clinical reliability of synthetic datasets, which hampers their integration into routine practice. This article introduces the TIMA method (Team-Implementation Multidisciplinary Approach), designed to involve clinicians directly in every phase of synthetic data development. The objective of this work is to describe the TIMA framework and to illustrate how structured clinician–data scientist collaboration can enhance the clinical robustness and plausibility of synthetic AI outputs. Methods: The TIMA approach structures the synthetic data generation workflow around continuous interaction between clinicians and data scientists. Cardiologists define clinical constraints, verify inter-variable relationships, and assess the coherence and plausibility of generated records. The framework is illustrated through multiple cardiology use cases, including atrial fibrillation risk prediction and surgical mortality estimation in infective endocarditis, to demonstrate its adaptability across different clinical contexts. Each phase includes iterative validation steps aimed at ensuring alignment with established clinical knowledge rather than reporting quantitative performance outcomes. Results: Application of the TIMA framework supported the development of synthetic datasets that adhered more closely to clinical logic and domain-specific constraints. Clinician–data scientist collaboration enabled early detection of implausible variable interactions, improved interpretability of synthetic data patterns, and enhanced internal consistency across different cardiology-oriented scenarios. Conclusions: TIMA represents a scalable and replicable methodological model for integrating synthetic AI into cardiology by embedding clinical expertise throughout the data generation process. Its structured, multidisciplinary workflow supports the production of synthetic data that is not only statistically coherent but also clinically meaningful, thereby strengthening trust and reliability in AI-assisted cardiovascular research.

13 pages, 2395 KB  
Article
Engineering the Future of Heart Failure Therapeutics: Integrating 3D Printing, Silicone Molding, and Translational Development for Implantable Cardiac Devices
by Carleigh Eagle, Aarti Desai, Michael Franklin, Robert Pooley, Elizabeth Johnson, Shawn Robinson, Mark Lopez and Rohan Goswami
Bioengineering 2026, 13(2), 192; https://doi.org/10.3390/bioengineering13020192 - 8 Feb 2026
Abstract
Three-dimensional (3D) anatomic modeling derived from high-resolution medical imaging, such as computed tomography (CT) and magnetic resonance imaging (MRI), has been increasingly adopted in preclinical testing and device development. This white paper describes a cardiac-specific workflow that integrates 3D printing and silicone molding for support device development and procedural simulation. Patient-derived computed tomography angiography data were segmented using FDA-cleared medical modeling software to isolate the left ventricular anatomy and were further processed in computer-aided design (CAD) to ensure accurate physiological wall thickness and structural fidelity. Material jetting 3D printing was performed on a Stratasys J750 using material distributions designed to mimic the mechanical properties of myocardium, thereby approximating myocardial compliance. In parallel, stereolithography apparatus molds were designed from the left ventricle CAD model to cast transparent, pliable left ventricular models in Sorta-Clear™ 18 silicone. The 3D-printed models preserved intricate morphological detail and were suitable for mechanical manipulation and device deployment studies, whereas silicone models offered tunable mechanical properties, transparency for visualization, and durability for repeated use. Together, these complementary modalities provided rapid manufacturing capability and application-relevant physical representation. Case-specific parameters, strengths, and limitations of both models in enhancing patient care and device testing are highlighted, with relevance to heart failure applications. Current knowledge gaps, workflow and integration challenges, and future opportunities are identified, positioning this work as a reference framework for continued innovation in anatomic modeling. Within the collaborative framework of Mayo Clinic’s Anatomic Modeling Unit and Simulation Center, this integrated modeling workflow demonstrates the value of multidisciplinary collaboration between engineers and clinicians. Clinically, these patient-specific left ventricular models may enable pre-procedural device sizing and positioning and may support simulation of mechanical circulatory support (MCS) deployment while identifying possible anatomic constraints prior to intervention. This workflow has direct applicability in advanced heart failure patients undergoing MCS support, such as the Impella axillary MCS device or the durable LVAD, with potential to reduce procedural uncertainty while reducing complications and improving peri-procedural outcomes. Additionally, these models also serve as high-accuracy educational tools, enabling trainees and multidisciplinary care teams to visualize and possibly rehearse procedural steps while gaining hands-on experience in a risk-free environment.

22 pages, 4238 KB  
Article
Tailored Annealing for Interfacial Design and Mechanical Optimization of Cu18150/Al1060/Cu18150 Trilayer Composites
by Yuchao Zhao, Mahmoud Ebrahimi, Linfeng Wu, Shokouh Attarilar and Qudong Wang
Metals 2026, 16(2), 176; https://doi.org/10.3390/met16020176 - 1 Feb 2026
Abstract
Copper–aluminum layered composites offer a promising combination of high conductivity, light weight, and cost-effectiveness, making them attractive for applications in electric vehicles, electronics, and power transmission. However, achieving reliable interfacial bonding while avoiding excessive work hardening and brittle intermetallic formation remains a significant challenge. In this study, a Cu18150/Al1060/Cu18150 trilayer composite was fabricated through a three-stage high-temperature oxygen-free rolling process. Subsequently, the produced composite was subjected to annealing treatments to systematically investigate the effects of rolling passes and annealing temperature/time on interfacial evolution and mechanical behavior. Results indicate that rolling passes primarily influence interfacial topography and defect distribution. Fewer passes lead to wavy, mechanically bonded interfaces, while more passes improve flatness but reduce intermetallic continuity. Annealing temperature critically governs diffusion kinetics; temperatures up to 400 °C promote the formation of a uniform Al2Cu layer, whereas 450 °C accelerates the growth of brittle Al4Cu9, thickening the intermetallic layer to 18 μm and compromising toughness. Annealing duration further modulates diffusion mechanisms, with short-term (0.5 h) treatments favoring defect-assisted diffusion, resulting in a porous, rapidly thickened layer. In contrast, longer annealing (≥1 h) shifts toward lattice diffusion, which densifies the interface but risks excessive brittle phase formation if prolonged. Mechanical performance evolves accordingly; as-rolled strength increases with the number of rolling passes, but at the expense of ductility. Annealing transforms bonding from a mechanical to a metallurgical condition, shifting fracture from delamination to collaborative failure. The identified optimal process, single-pass rolling followed by annealing at 420 °C for 1 h, yields a balanced interfacial structure of Al2Cu, AlCu, and Al4Cu9 phases, achieving a tensile strength of 258.9 MPa and an elongation of 28.2%, thereby satisfying the target performance criteria (≥220 MPa and ≥20%).

12 pages, 257 KB  
Brief Report
Developing a Public Health Quality Tool for Mobile Health Clinics to Assess and Improve Care
by Nancy E. Oriol, Josephina Lin, Jennifer Bennet, Darien DeLorenzo, Mary Kathryn Fallon, Delaney Gracy, Caterina Hill, Madge Vasquez, Anthony Vavasis, Mollie Williams and Peggy Honoré
Int. J. Environ. Res. Public Health 2026, 23(2), 141; https://doi.org/10.3390/ijerph23020141 - 23 Jan 2026
Abstract
This report describes the development and deployment of the Public Health Quality Tool (PHQTool), an online resource designed to help mobile health clinics (MHCs) assess and improve the quality of their public health services. MHCs provide essential clinical and public health services to underserved populations but have historically lacked tools to assess and improve the quality of their work. To address this gap, the PHQTool was developed as an online, evidence-based, self-assessment resource for MHCs, hosted on the Mobile Health Map (MHMap) platform. This report documents the collaborative development process of the PHQTool and presents preliminary evaluation findings related to usability and relevance among mobile health clinics. Drawing from national public health frameworks and Honore et al.’s established public health quality aims, the PHQTool focuses on six aims most relevant to mobile care: Equitable, Health Promoting, Proactive, Transparent, Effective, and Efficient. Selection of the six quality aims was guided by explicit criteria developed through pilot testing and stakeholder feedback. The six aims were those that could be directly implemented through mobile clinic practices and were feasible to assess within diverse mobile clinic contexts. The remaining three aims (“population-centered,” “risk-reducing,” and “vigilant”) were determined to be less directly actionable at the program level or required system-wide or data infrastructure beyond the scope of individual mobile clinics. Development included expert consultation, pilot testing, and iterative refinement informed by user feedback. The tool allows clinics to evaluate practices, identify improvement goals, and track progress over time. Since implementation, 82 MHCs representing diverse organizational types have used the PHQTool, reporting high usability and identifying common improvement areas such as outreach, efficiency, and equity-driven service delivery. Across pilot and post-pilot implementation phases, a majority of respondents agreed or strongly agreed that the tool was user-friendly, relevant to their work, and appropriately scoped for mobile clinic practice. Usability and acceptance were assessed using descriptive statistics, including percentage agreement across Likert-scale items as well as qualitative feedback collected during structured debriefs. Reported findings reflect self-reported perceptions of feasibility, clarity, and relevance rather than inferential statistical comparisons. The PHQTool facilitates systematic quality assessment within the mobile clinic sector and supports consistent documentation of public health efforts. By providing a standardized, accessible framework for evaluation, it contributes to broader efforts to strengthen evidence-based quality improvement and promote accountability in MHCs.
(This article belongs to the Special Issue Advances and Trends in Mobile Healthcare)
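The percentage-agreement statistic used for the PHQTool's Likert items is simple to compute; a minimal sketch follows, with the function name and the choice of "agree"/"strongly agree" response labels as illustrative assumptions.

```python
def percent_agreement(responses, agree_levels=("agree", "strongly agree")):
    """Descriptive statistic: percentage of respondents selecting an
    'agree' level on a Likert-scale item (no inferential testing)."""
    agree = sum(r in agree_levels for r in responses)
    return 100.0 * agree / len(responses)

# Hypothetical item responses from four clinics:
# percent_agreement(["agree", "neutral", "strongly agree", "disagree"]) == 50.0
```

This matches the report's framing: the figures summarize self-reported perceptions, not statistical comparisons between groups.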
16 pages, 559 KB  
Commentary
Design Justice in Action: Co-Developing an HIV and Substance Use Linkage Intervention with Young Adults Involved in the Carceral System
by Sheridan Sweet, Nicole McCaffery, Jerry Jiang, Robert W. S. Coulter, James E. Egan, Janet Myers, Martha Shumway, Marina Tolou-Shams and Emily F. Dauria
Soc. Sci. 2026, 15(1), 55; https://doi.org/10.3390/socsci15010055 - 22 Jan 2026
Abstract
To redress systemically biased approaches to health interventions and service design, it is critical that public health researchers employ frameworks that are intentional in their approach to recognizing and working against existing power structures to advance equity in public health. Design Justice represents an approach to design which centers marginalized people and uses collaborative design processes to address community needs and challenges. The purpose of this paper is to describe our process for applying a Design Justice framework to Project XX. Project XX is a study funded by XX designed to develop and test an eHealth-enhanced peer navigation intervention to improve engagement in substance use and HIV-related services for young adults with recent carceral system involvement. We situate the project within the theoretical foundation of Design Justice and community-engaged research, describe its development and implementation, and analyze the application of Design Justice principles from an implementation science perspective by overlaying them with Stanford University’s Center for Dissemination and Implementation’s five key dimensions of dissemination and implementation methods. We highlight successes, challenges, and lessons learned, offering recommendations to guide more equitable and inclusive approaches for future research and practice.
(This article belongs to the Special Issue Public Health and Social Change)

44 pages, 4883 KB  
Article
Mapping the Role of Artificial Intelligence and Machine Learning in Advancing Sustainable Banking
by Alina Georgiana Manta, Claudia Gherțescu, Roxana Maria Bădîrcea, Liviu Florin Manta, Jenica Popescu and Mihail Olaru
Sustainability 2026, 18(2), 618; https://doi.org/10.3390/su18020618 - 7 Jan 2026
Abstract
The convergence of artificial intelligence (AI), machine learning (ML), blockchain, and big data analytics is transforming the governance, sustainability, and resilience of modern banking ecosystems. This study provides a multivariate bibliometric analysis using Principal Component Analysis (PCA) of research indexed in Scopus and Web of Science to explore how decentralized digital infrastructures and AI-driven analytical capabilities contribute to sustainable financial development, transparent governance, and climate-resilient digital societies. Findings indicate a rapid increase in interdisciplinary work integrating Distributed Ledger Technology (DLT) with large-scale data processing, federated learning, privacy-preserving computation, and intelligent automation—tools that can enhance financial inclusion, regulatory integrity, and environmental risk management. Keyword network analyses reveal blockchain’s growing role in improving data provenance, security, and trust—key governance dimensions for sustainable and resilient financial systems—while AI/ML and big data analytics dominate research on predictive intelligence, ESG-related risk modeling, customer well-being analytics, and real-time decision support for sustainable finance. Comparative analyses show distinct emphases: Web of Science highlights decentralized architectures, consensus mechanisms, and smart contracts relevant to transparent financial governance, whereas Scopus emphasizes customer-centered analytics, natural language processing, and high-throughput data environments supporting inclusive and equitable financial services. Patterns of global collaboration demonstrate strong internationalization, with Europe, China, and the United States emerging as key hubs in shaping sustainable and digitally resilient banking infrastructures. By mapping intellectual, technological, and collaborative structures, this study clarifies how decentralized intelligence—enabled by the fusion of AI/ML, blockchain, and big data—supports secure, scalable, and sustainability-driven financial ecosystems. The results identify critical research pathways for strengthening financial governance, enhancing climate and social resilience, and advancing digital transformation, which contributes to more inclusive, equitable, and sustainable societies.
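The PCA step in a bibliometric analysis of this kind reduces a document–keyword count matrix to a few latent axes. A minimal numpy-only sketch is shown below; the toy matrix values and the SVD-based implementation are assumptions for illustration (the study's actual pipeline and settings are not specified in the abstract).

```python
import numpy as np

def pca_components(X, k=2):
    """Project a document-by-keyword count matrix onto its top-k
    principal components via SVD of the column-centered matrix."""
    Xc = X - X.mean(axis=0)              # center each keyword column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                 # document scores in reduced space

# Hypothetical 4-document x 3-keyword count matrix:
X = np.array([[3, 0, 1],
              [2, 1, 0],
              [0, 4, 2],
              [1, 3, 3]], dtype=float)
scores = pca_components(X, k=2)          # shape (4, 2)
```

Documents that load similarly on the leading components cluster together, which is how PCA surfaces the thematic groupings (e.g., DLT-centric vs. analytics-centric research) described in the abstract.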

30 pages, 3006 KB  
Article
MiRA: A Zero-Shot Mixture-of-Reasoning Agents Framework for Multimodal Answering of Science Questions
by Fawaz Alsolami, Asmaa Alrayzah and Rayyan Najam
Appl. Sci. 2026, 16(1), 372; https://doi.org/10.3390/app16010372 - 29 Dec 2025
Abstract
Multimodal question answering (QA) involves integrating information from both visual and textual inputs and requires models that can reason compositionally and accurately across modalities. Existing approaches, including fine-tuned vision–language models and prompting-based methods, often suffer from limited generalization, poor interpretability, and reliance on task-specific data. In this work, we propose a Mixture-of-Reasoning Agents (MiRA) framework for zero-shot multimodal reasoning. MiRA decomposes the reasoning process across three specialized agents—Visual Analyzing, Text Comprehending, and Judge—which consolidate multimodal evidence. Each agent operates independently using pretrained language models, enabling structured, interpretable reasoning without supervised training or task-specific adaptation. Evaluated on the ScienceQA benchmark, MiRA achieves 96.0% accuracy, surpassing all zero-shot methods, outperforming few-shot GPT-4o models by more than 18% on image-based questions, and achieving performance comparable to the best fine-tuned systems. The analysis further shows that the Judge agent consistently improves the reliability of individual agent outputs, and that strong linear correlations (r > 0.95) exist between image-specific accuracy and overall performance across models. We identify a previously unreported and robust pattern in which performance on image-specific tasks strongly predicts overall task success. We also conduct detailed error analyses for each agent, highlighting complementary strengths and failure modes. These results demonstrate that modular agent collaboration with zero-shot reasoning provides highly accurate multimodal QA, establishing a new paradigm for zero-shot multimodal AI and offering a principled framework for future research in generalizable AI. Full article
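The agent decomposition described in the abstract can be sketched structurally as follows. This is a toy stand-in, not MiRA's implementation: in the paper each agent wraps a pretrained (vision-)language model, whereas here the agent functions, the keyword cues, and the confidence values are all invented to show only the consolidate-then-judge control flow.

```python
# Toy sketch of a mixture-of-reasoning-agents pipeline with a judge.
# Each stand-in agent returns an (answer, confidence) pair.

def visual_agent(question, image_caption):
    # Stand-in: "reads" the image caption for an answer cue.
    return ("B", 0.9) if "two magnets" in image_caption else ("A", 0.4)

def text_agent(question, context):
    # Stand-in: "reads" the textual context for an answer cue.
    return ("B", 0.7) if "opposite poles attract" in context else ("A", 0.5)

def judge(question, verdicts):
    # Consolidate evidence: weight each agent's answer by its confidence
    # and return the answer with the highest total weight.
    tally = {}
    for answer, conf in verdicts:
        tally[answer] = tally.get(answer, 0.0) + conf
    return max(tally, key=tally.get)

question = "Will these two magnets attract or repel each other?"
image_caption = "a diagram of two magnets with opposite poles facing"
context = "opposite poles attract; like poles repel."

verdicts = [visual_agent(question, image_caption),
            text_agent(question, context)]
answer = judge(question, verdicts)
```

The design point the sketch preserves is that the Judge never sees the raw inputs, only the independent agents' verdicts, which is what makes the reasoning modular and inspectable.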
(This article belongs to the Special Issue Deep Learning and Its Applications in Natural Language Processing)

21 pages, 4682 KB  
Article
Research on “Extraction–Injection–Locking” Collaborative Prevention and Control Technology for Coal Mine Gas Disasters
by Ting Lu, Xuefeng Zhang and Gang Liu
Processes 2026, 14(1), 115; https://doi.org/10.3390/pr14010115 - 29 Dec 2025
Abstract
In response to the issues of low synergy efficiency between gas extraction and water injection, unclear procedural connections, and high costs in coal mine gas disaster prevention, this paper proposes a collaborative prevention technology for coal mine gas disasters termed “pump–injection–lock.” First, based on the kinetics of gas desorption in gas-bearing coal under different water-bearing conditions, an optimization model for the sequence of gas extraction and high-pressure water injection was developed. This model reduced the gas desorption rate in the experimental area by 32.5% and increased the effective extraction radius of boreholes by 18.7%. Second, based on the coupling relationship between water lock formation pressure, interfacial tension, and pore structure, a criterion model for process transition was constructed, enabling quantifiable identification of the transition node between the “pump” and “injection” stages. The duration of the water lock’s inhibition of gas release was extended by over 25% compared to conventional water injection. Finally, by integrating the multiple effects of high-pressure water injection—enhanced permeability, softening, displacement, and flow limitation—a “multi-purpose” synergistic pathway was established. This increased the pre-drainage gas concentration in the test working face by 40%, increased the pure gas extraction volume by 28%, and reduced gas over-limit incidents by over 50%. Experiments and industrial trials demonstrated that applying this technology in the 15# coal seam of Yixin Coal Mine shortened the gas extraction period by 36%, reduced borehole engineering work by 72.8%, eliminated gas over-limit incidents during mining, and cumulatively generated economic benefits exceeding 425 million yuan in the same year, significantly improving the efficiency and cost-effectiveness of gas disaster prevention. Full article

16 pages, 385 KB  
Review
How to Prevent Suicide in Older Patients with a Neurocognitive Disorder: A Scoping Review Leading to the Development of a Clinical Guide for Healthcare Workers
by Sylvie Lapierre, Cécile Bardon, Charles Viau-Quesnel, Jean Vézina, Rock-André Blondin, Catherine Gagnon, Isabelle Lafleur, Christophe Marchand-Pellerin, Myriam Gauvreau and Nicole Poirier
Healthcare 2026, 14(1), 36; https://doi.org/10.3390/healthcare14010036 - 23 Dec 2025
Abstract
Background/Objective: Healthcare professionals working with individuals living with neurocognitive disorders (NCD) express the need for training to prevent suicidal behaviors in this population. Accordingly, this paper describes the process used to develop a suicide prevention clinical guide for use in geriatric care settings. Methods: The project involved three steps. First, a team of researchers conducted a scoping review of empirical studies on suicide among older adults with NCD, focusing on prevalence, risk and protective factors, assessment, and practical interventions. Second, based on these findings, the team created a clinical guide that helps healthcare professionals assess needs and suicide risk and formulate action plans to improve well-being, ensure safety, and reduce the risk of suicide. Results: The guide was finalized after 18 months of deliberation. It enables professionals to structure their evaluation so that no relevant aspect is overlooked and protective factors are reinforced. It emphasizes shared responsibilities and interdisciplinary collaboration. It recommends that professionals conduct a personalized clinical assessment of unmet needs to reduce distress. During the third step, the guide was evaluated through a pilot study involving post-training focus groups and interviews with professionals who used it in clinical practice. Conclusions: Participants’ feedback was integrated into the final version of the Guide, and the results indicated that it helped dispel misconceptions about the low risk of suicide among patients with NCD, whose suicidality is frequently misinterpreted as mere disruptive behavior. Organizational barriers represent the main challenge professionals may face when using the Guide. Full article

21 pages, 956 KB  
Article
How to Harness LLMs in Project-Based Learning: Empirical Evidence for Individual Autonomy and Moderate Constraints in Engineering Education
by Xiaoyu Yi, Wenkai Feng, Yali He and Fei Wang
Systems 2025, 13(12), 1112; https://doi.org/10.3390/systems13121112 - 10 Dec 2025
Abstract
The integration of large language models (LLMs) into project-based learning (PBL) holds significant potential for addressing enduring pedagogical challenges in engineering education, such as providing scalable, personalized support during complex problem-solving. Grounded in Self-Determination Theory (SDT), this study investigates how different LLM usage strategies impact student learning within a blended engineering geology PBL context. A one-semester quasi-experiment (N = 120) employed a 2 (usage mode: individual/shared) × 2 (interaction restriction: restricted/unrestricted) factorial design. Mixed-methods data, including surveys, interaction logs, and reflective reports, were analyzed to assess learning engagement, psychological needs satisfaction, cognitive interaction levels, and project outcomes. Results demonstrate that the individual use strategy significantly outperformed shared use in enhancing engagement, needs satisfaction, higher-order cognitive interactions, and final project scores. The restricted interaction strategy effectively served as a metacognitive scaffold, optimizing the learning process by promoting deliberate planning. Notably, individual autonomy did not undermine collaboration but enhanced it by improving the quality of individual contributions to group work. Students also developed robust critical verification habits to navigate LLM “hallucinations.” This research identifies “individual autonomy” as the core mechanism and “moderate constraint” as a crucial design principle for LLM integration, providing an empirically supported framework for harnessing generative AI to foster both motivational and cognitive outcomes in engineering PBL. Full article
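The 2 × 2 factorial comparison at the heart of the study above can be sketched numerically. The project scores below are invented placeholders, not the study's data; the sketch only shows how cell means yield a main effect of usage mode and an interaction term.

```python
# Minimal sketch of a 2 (usage mode) x 2 (interaction restriction) factorial
# analysis via cell means. All scores are invented for illustration.
from statistics import mean

scores = {  # (usage_mode, restriction) -> project scores in that cell
    ("individual", "restricted"):   [86, 88, 90, 88],
    ("individual", "unrestricted"): [82, 84, 83, 85],
    ("shared", "restricted"):       [78, 80, 79, 81],
    ("shared", "unrestricted"):     [74, 76, 75, 77],
}

cell_means = {cell: mean(v) for cell, v in scores.items()}

# Main effect of usage mode: individual vs. shared, averaged over restriction.
individual = mean(m for (u, _), m in cell_means.items() if u == "individual")
shared = mean(m for (u, _), m in cell_means.items() if u == "shared")
main_effect_mode = individual - shared

# Interaction: does the benefit of restriction differ across usage modes?
interaction = ((cell_means[("individual", "restricted")]
                - cell_means[("individual", "unrestricted")])
               - (cell_means[("shared", "restricted")]
                  - cell_means[("shared", "unrestricted")]))
```

A full analysis would add significance testing (e.g. a two-way ANOVA), but the cell-mean contrasts above are the quantities such a design estimates.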
