Search Results (5,085)

Search Parameters:
Keywords = fair

23 pages, 2640 KiB  
Article
DenseNet-Based Classification of EEG Abnormalities Using Spectrograms
by Lan Wei and Catherine Mooney
Algorithms 2025, 18(8), 486; https://doi.org/10.3390/a18080486 - 5 Aug 2025
Abstract
Electroencephalogram (EEG) analysis is essential for diagnosing neurological disorders but typically requires expert interpretation and significant time. Purpose: This study aims to automate the classification of normal and abnormal EEG recordings to support clinical diagnosis and reduce manual workload. Automating the initial screening of EEGs can help clinicians quickly identify potential neurological abnormalities, enabling timely intervention and guiding further diagnostic and treatment strategies. Methodology: We utilized the Temple University Hospital EEG dataset to develop a DenseNet-based deep learning model. To enable a fair comparison of different EEG representations, we used three input types: signal images, spectrograms, and scalograms. To reduce dimensionality and simplify computation, we focused on two channels: T5 and O1. For interpretability, we applied Local Interpretable Model-agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the EEG regions influencing the model’s predictions. Key Findings: Among the input types, spectrogram-based representations achieved the highest classification accuracy, indicating that time-frequency features are especially effective for this task. The model demonstrated strong performance overall, and the integration of LIME and Grad-CAM provided transparent explanations of its decisions, enhancing interpretability. This approach offers a practical and interpretable solution for automated EEG screening, contributing to more efficient clinical workflows and better understanding of complex neurological conditions. Full article
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)
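The spectrogram input the study found most effective can be sketched as follows. This is an illustrative sketch, not the paper's actual preprocessing: the sampling rate, window length, and synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative sketch (not the paper's pipeline): converting one EEG
# channel into the time-frequency spectrogram image a DenseNet would
# classify. Sampling rate, window size, and the signal are assumed.
fs = 250                                  # Hz, an assumed EEG sampling rate
t = np.arange(0, 10, 1 / fs)              # 10 s of signal
# Synthetic stand-in for a T5 or O1 channel: 10 Hz alpha rhythm + noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# One-second windows with 50% overlap yield a 2-D time-frequency image
freqs, times, Sxx = spectrogram(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)
print(Sxx.shape)  # (n_frequencies, n_time_windows)
```

The resulting `Sxx` array is what would be rendered as an image and fed to the network.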

32 pages, 1045 KiB  
Review
Nanoparticle Uptake and Crossing by Human In Vitro Models of Intestinal Barriers: A Scoping Review
by Chiara Ritarossi, Valentina Prota, Francesca De Battistis, Chiara Laura Battistelli, Isabella De Angelis, Cristina Andreoli and Olimpia Vincentini
Nanomaterials 2025, 15(15), 1195; https://doi.org/10.3390/nano15151195 - 5 Aug 2025
Abstract
The Caco-2 in vitro model of the intestinal barrier is a well-established system for the investigation of the intestinal fate of orally ingested chemicals and drugs, and it has been used for over ten years by pharmaceutical industries as a model for absorption in preclinical studies. The Caco-2 model shows a fair correlation with in vivo drug absorption, though some inherent biases remain unresolved. Its main limitation lies in the lack of structural complexity, as it does not replicate the diverse cell types and mucus layer present in the human intestinal epithelium. Consequently, the development of advanced in vitro models of the intestinal barrier that more closely resemble the physiology of the human intestinal epithelium has increased the potential applications of these models. Recently, Caco-2-based advanced intestinal models have proven effective in predicting nanomaterial uptake and transport across the intestinal barrier. The aim of this review is to provide a state-of-the-art overview of human in vitro intestinal barrier models for the study of translocation/uptake of nanoparticles relevant for oral exposure, including inorganic nanomaterials, micro/nanoplastics, and fiber nanomaterials. The main effects of the above-mentioned nanomaterials on the intestinal barrier are also reported. Full article
(This article belongs to the Special Issue Nanosafety and Nanotoxicology: Current Opportunities and Challenges)

23 pages, 930 KiB  
Article
The Principle of Shared Utilization of Benefits Applied to the Development of Artificial Intelligence
by Camilo Vargas-Machado and Andrés Roncancio Bedoya
Philosophies 2025, 10(4), 87; https://doi.org/10.3390/philosophies10040087 (registering DOI) - 5 Aug 2025
Abstract
This conceptual position is based on the diagnosis that artificial intelligence (AI) accentuates existing economic and geopolitical divides in communities in the Global South, which provide data without receiving rewards. Based on bioethical precedents of fair distribution of genetic resources, it is proposed to transfer the principle of benefit-sharing to the emerging algorithmic governance in the context of AI. From this discussion, the study reveals an algorithmic concentration in the Global North. This dynamic generates political, cultural, and labor asymmetries. Regarding the methodological design, the research was qualitative, with an interpretive paradigm and an inductive method, applying documentary review and content analysis techniques. In addition, two theoretical and two analytical categories were used. As a result, six emerging categories were identified that serve as pillars of the studied principle and are capable of reversing the gaps: equity, accessibility, transparency, sustainability, participation, and cooperation. At the end of the research, it was confirmed that AI, without a solid ethical framework, concentrates benefits in dominant economies. If this trend does not change, the Global South will become dependent, and its data will lack equitable returns. Benefit-sharing is therefore proposed as a normative basis for fair, transparent, and participatory international governance. Full article

30 pages, 825 KiB  
Review
Predictive Analytics in Human Resources Management: Evaluating AIHR’s Role in Talent Retention
by Ana Maria Căvescu and Nirvana Popescu
AppliedMath 2025, 5(3), 99; https://doi.org/10.3390/appliedmath5030099 (registering DOI) - 5 Aug 2025
Abstract
This study explores the role of artificial intelligence (AI) in human resource management (HRM), with a focus on recruitment, employee retention, and performance optimization. Through a PRISMA-based systematic literature review, the paper examines machine learning algorithms, including XGBoost, SVM, random forest, and linear regression, in decision-making related to employee-attrition prediction and talent management. The findings suggest that these technologies can automate HR processes, reduce bias, and personalize employee experiences. However, the implementation of AI in HRM also presents challenges, including data privacy concerns, algorithmic bias, and organizational resistance. To address these obstacles, the study highlights the importance of adopting ethical AI frameworks, ensuring transparency in decision-making, and developing effective integration strategies. Future research should focus on improving explainability, minimizing algorithmic bias, and promoting fairness in AI-driven HR practices. Full article

21 pages, 469 KiB  
Systematic Review
The Effectiveness of Virtual Reality in Improving Balance and Gait in People with Parkinson’s Disease: A Systematic Review
by Sofia Fernandes, Bruna Oliveira, Sofia Sacadura, Cristina Rakasi, Isabel Furtado, João Paulo Figueiredo, Rui Soles Gonçalves and Anabela Correia Martins
Sensors 2025, 25(15), 4795; https://doi.org/10.3390/s25154795 - 4 Aug 2025
Abstract
Background: Virtual reality (VR), often used with motion sensors, provides interactive tools for physiotherapy aimed at enhancing motor functions. This systematic review examined the effects of VR-based interventions, alone or combined with conventional physiotherapy (PT), on balance and gait in individuals with Parkinson’s disease (PD). Methods: Following PRISMA guidelines, eight randomized controlled trials (RCTs) published between January 2019 and April 2025 were included. Interventions lasted between 5 and 12 weeks and were grouped as VR alone or VR combined with PT. Methodological quality was assessed using the PEDro Scale. Results: Of the 31 comparisons for balance and gait, 30 were favored by the experimental group, with 12 reaching statistical significance. Secondary outcomes (function, cognition, and quality of life) showed mixed results, with 6 comparisons favoring the experimental group (3 statistically significant) and 4 favoring the control group (1 statistically significant). Overall, the studies showed fair to good quality and a moderate risk of bias. Conclusions: VR-based interventions, particularly when combined with PT, show promise for improving balance and gait in PD. However, the evidence is limited by the small number of studies, heterogeneity of protocols, and methodological constraints. More rigorous, long-term trials are needed to clarify their therapeutic potential. Full article

19 pages, 2795 KiB  
Article
State Analysis of Grouped Smart Meters Driven by Interpretable Random Forest
by Zhongdong Wang, Zhengbo Zhang, Weijiang Wu, Zhen Zhang, Xiaolin Xu and Hongbin Li
Electronics 2025, 14(15), 3105; https://doi.org/10.3390/electronics14153105 - 4 Aug 2025
Abstract
Accurate evaluation of the operational status of smart meters, as the critical interface between the power grid and its users, is essential for ensuring fairness in power transactions. This highlights the importance of implementing rotation management practices based on meter status. However, the traditional expiration-based rotation method has become inadequate due to the extended service life of modern smart meters, necessitating a shift toward status-driven targeted management. Existing multifactor comprehensive assessment methods often face challenges in balancing accuracy and interpretability. To address these limitations, this study proposes a novel method for analyzing the status of smart meter groups using an interpretable random forest model. The approach incorporates an expert-knowledge-guided grouping assessment strategy, develops a multi-source heterogeneous feature set with strong correlations to meter status, and enhances the random forest model with the SHAP (SHapley Additive exPlanations) interpretability framework. Compared to conventional methods, the proposed approach demonstrates superior efficiency and reliability in predicting the failure rates of smart meter groups within distribution network areas, offering robust support for the maintenance and management of smart meters. Full article
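The core idea of an interpretable tree ensemble over a multi-source feature set can be sketched as follows. Note the hedge: the paper pairs the random forest with the SHAP framework; the forest's built-in impurity importances below are a lighter-weight stand-in, and the feature names, data, and failure rule are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hedged sketch of the status-assessment idea: a random forest trained
# on a multi-source feature set, with per-feature attributions. The
# paper uses SHAP for interpretability; built-in impurity importances
# stand in here. Feature names, data, and the rule are invented.
rng = np.random.default_rng(0)
features = ["error_rate", "service_years", "load_ratio", "temp_cycles"]
X = rng.random((500, len(features)))
# Toy ground truth: high error rate plus long service drives failures
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in zip(features, forest.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

The importances recover that the (synthetic) error-rate feature dominates, which is the kind of explanation the SHAP framework generalizes per prediction.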

25 pages, 5488 KiB  
Article
Biased by Design? Evaluating Bias and Behavioral Diversity in LLM Annotation of Real-World and Synthetic Hotel Reviews
by Maria C. Voutsa, Nicolas Tsapatsoulis and Constantinos Djouvas
AI 2025, 6(8), 178; https://doi.org/10.3390/ai6080178 - 4 Aug 2025
Abstract
As large language models (LLMs) gain traction among researchers and practitioners, particularly in digital marketing for tasks such as customer feedback analysis and automated communication, concerns remain about the reliability and consistency of their outputs. This study investigates annotation bias in LLMs by comparing human and AI-generated annotation labels across sentiment, topic, and aspect dimensions in hotel booking reviews. Using the HRAST dataset, which includes 23,114 real user-generated review sentences and a synthetically generated corpus of 2000 LLM-authored sentences, we evaluate inter-annotator agreement between a human expert and three LLMs (ChatGPT-3.5, ChatGPT-4, and ChatGPT-4-mini) as a proxy for assessing annotation bias. Our findings show high agreement among LLMs, especially on synthetic data, but only moderate to fair alignment with human annotations, particularly in sentiment and aspect-based sentiment analysis. LLMs display a pronounced neutrality bias, often defaulting to neutral sentiment in ambiguous cases. Moreover, annotation behavior varies notably with task design, as manual, one-to-one prompting produces higher agreement with human labels than automated batch processing. The study identifies three distinct AI biases—repetition bias, behavioral bias, and neutrality bias—that shape annotation outcomes. These findings highlight how dataset complexity and annotation mode influence LLM behavior, offering important theoretical, methodological, and practical implications for AI-assisted annotation and synthetic content generation. Full article
(This article belongs to the Special Issue AI Bias in the Media and Beyond)
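The "moderate to fair alignment" language maps onto chance-corrected agreement statistics such as Cohen's kappa, which can be computed in a few lines. The labels below are invented toy data, not the HRAST corpus.

```python
from sklearn.metrics import cohen_kappa_score

# Minimal sketch of inter-annotator agreement: Cohen's kappa between a
# human's and an LLM's sentiment labels. Toy data, not the HRAST set.
human = ["pos", "neg", "neu", "neu", "pos", "neg", "neu", "pos"]
llm   = ["pos", "neg", "neu", "pos", "pos", "neu", "neu", "pos"]

kappa = cohen_kappa_score(human, llm)
print(f"kappa = {kappa:.2f}")  # chance-corrected, unlike raw % agreement
```

On common interpretation scales a kappa near 0.6 counts as "moderate" agreement, which is why kappa-style statistics rather than raw percent agreement underpin conclusions like the study's.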

8 pages, 844 KiB  
Opinion
Flawed Metrics, Damaging Outcomes: A Rebuttal to the RI2 Integrity Index Targeting Top Indonesian Universities
by Muhammad Iqhrammullah, Derren D. C. H. Rampengan, Muhammad Fadhlal Maula and Ikhwan Amri
Publications 2025, 13(3), 36; https://doi.org/10.3390/publications13030036 - 4 Aug 2025
Abstract
The Research Integrity Risk Index (RI2), introduced as a tool to identify universities at risk of compromised research integrity, adopts an overly reductive methodology by combining retraction rates and delisted journal proportions into a single, equally weighted composite score. While its stated aim is to promote accountability, this commentary critiques the RI2 index for its flawed assumptions, lack of empirical validation, and disproportionate penalization of institutions in low- and middle-income countries. We examine how RI2 misinterprets retractions, misuses delisting data, and fails to account for diverse academic publishing environments, particularly in Indonesia, where many high-performing universities are unfairly categorized as “high risk” or “red flag.” The index’s uncritical reliance on opaque delisting decisions, combined with its fixed equal-weighting formula, produces volatile and context-insensitive scores that do not accurately reflect the presence or severity of research misconduct. Moreover, RI2 has gained significant media attention and policy influence despite being based on an unreviewed preprint, with no transparent mechanism for institutional rebuttal or contextual adjustment. By comparing RI2 classifications with established benchmarks such as the Scimago Institution Rankings and drawing from lessons in global development metrics, we argue that RI2, although conceptually innovative, should remain an exploratory framework. It requires rigorous scientific validation before being adopted as a global standard. We also propose flexible weighting schemes, regional calibration, and transparent engagement processes to improve the fairness and reliability of institutional research integrity assessments. Full article

16 pages, 3099 KiB  
Article
Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control with Spatio-Temporal Attention Mechanism
by Wenzhe Jia and Mingyu Ji
Appl. Sci. 2025, 15(15), 8605; https://doi.org/10.3390/app15158605 (registering DOI) - 3 Aug 2025
Abstract
Traffic congestion in large-scale road networks significantly impacts urban sustainability. Traditional traffic signal control methods lack adaptability to dynamic traffic conditions. Recently, deep reinforcement learning (DRL) has emerged as a promising solution for optimizing signal control. This study proposes a Multi-Agent Deep Reinforcement Learning (MADRL) framework for large-scale traffic signal control. The framework employs spatio-temporal attention networks to extract relevant traffic patterns and a hierarchical reinforcement learning strategy for coordinated multi-agent optimization. The problem is formulated as a Markov Decision Process (MDP) with a novel reward function that balances vehicle waiting time, throughput, and fairness. We validate our approach on simulated large-scale traffic scenarios using SUMO (Simulation of Urban Mobility). Experimental results demonstrate that our framework reduces vehicle waiting time by 25% compared to baseline methods while maintaining scalability across different road network sizes. The proposed spatio-temporal multi-agent reinforcement learning framework effectively optimizes large-scale traffic signal control, providing a scalable and efficient solution for smart urban transportation. Full article

21 pages, 1800 KiB  
Article
GAPSO: Cloud-Edge-End Collaborative Task Offloading Based on Genetic Particle Swarm Optimization
by Wu Wen, Yibin Huang, Zhong Xiao, Lizhuang Tan and Peiying Zhang
Symmetry 2025, 17(8), 1225; https://doi.org/10.3390/sym17081225 - 3 Aug 2025
Abstract
In the 6G era, the proliferation of smart devices has led to explosive growth in data volume. Traditional cloud computing can no longer meet the demand for efficient processing of large amounts of data. Edge computing can solve the energy loss problems caused by transmission delay and multi-level forwarding in cloud computing by processing data close to the data source. In this paper, we propose a cloud–edge–end collaborative task offloading strategy with task response time and execution energy consumption as the optimization targets under a limited resource environment. The tasks generated by smart devices can be processed by three kinds of computing nodes: user devices, edge servers, and cloud servers. The computing nodes are constrained by bandwidth and computing resources. For the target optimization problem, a genetic particle swarm optimization algorithm considering three layers of computing nodes is designed. The task offloading optimization is performed by introducing (1) an opposition-based learning algorithm, (2) adaptive inertia weights, and (3) adjustive acceleration coefficients. All metaheuristic algorithms adopt a symmetric training method to ensure fairness and consistency in evaluation. Through experimental simulation, compared with the classic evolutionary algorithm, our algorithm reduces the objective function value by about 6–12% and has higher convergence speed, accuracy, and stability. Full article
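The particle swarm core of such an algorithm can be sketched in a few lines. This is a bare-bones sketch under stated assumptions: the genetic operators, opposition-based learning, and the real delay/energy model are omitted, and the quadratic cost is a toy stand-in for the paper's objective.

```python
import numpy as np

# Bare-bones PSO sketch of the search loop: particles explore candidate
# offloading plans, steered by personal and global bests under a
# decaying inertia weight. The paper's genetic operators and real
# delay/energy objective are omitted; the cost below is a toy.
rng = np.random.default_rng(1)

def cost(x):
    # Placeholder for weighted response time + energy consumption
    return np.sum(x ** 2, axis=-1)

n_particles, dim, iters = 30, 5, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), cost(pos)
gbest = pbest[pbest_val.argmin()].copy()

for it in range(iters):
    w = 0.9 - 0.5 * it / iters                      # decaying inertia weight
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    val = cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best cost: {cost(gbest):.4f}")  # approaches the optimum at 0
```

The decaying inertia weight is one of the adaptive mechanisms the paper builds on; its hybrid adds genetic crossover/mutation on top of this loop.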

36 pages, 699 KiB  
Article
A Framework of Indicators for Assessing Team Performance of Human–Robot Collaboration in Construction Projects
by Guodong Zhang, Xiaowei Luo, Lei Zhang, Wei Li, Wen Wang and Qiming Li
Buildings 2025, 15(15), 2734; https://doi.org/10.3390/buildings15152734 - 2 Aug 2025
Abstract
The construction industry has been troubled by a shortage of skilled labor and safety accidents in recent years. Therefore, more and more robots are introduced to undertake dangerous and repetitive jobs, so that human workers can concentrate on higher-value and creative problem-solving tasks. Nevertheless, although human–robot collaboration (HRC) shows great potential, most existing evaluation methods still focus on the single performance of either the human or robot, and systematic indicators for a whole HRC team remain insufficient. To fill this research gap, the present study constructs a comprehensive evaluation framework for HRC team performance in construction projects. Firstly, a detailed literature review is carried out, and three theories are integrated to build 33 indicators preliminarily. Afterwards, an expert questionnaire survey (N = 15) is adopted to revise and verify the model empirically. The survey yielded a Cronbach’s alpha of 0.916, indicating excellent internal consistency. The indicators rated highest in importance were task completion time (µ = 4.53) and dynamic separation distance (µ = 4.47) on a 5-point scale. Eight indicators were excluded due to mean importance ratings falling below the 3.0 threshold. The framework is formed with five main dimensions and 25 concrete indicators. Finally, an AHP-TOPSIS method is used to evaluate the HRC team performance. The AHP analysis reveals that Safety (weight = 0.2708) is prioritized over Productivity (weight = 0.2327) by experts, establishing a safety-first principle for successful HRC deployment. The framework is demonstrated through a case study of a human–robot plastering team, whose team performance scored as fair. This shows that the framework can help practitioners find out the advantages and disadvantages of HRC team performance and provide targeted improvement strategies. 
Furthermore, the framework offers construction managers a scientific basis for deciding robot deployment and team assignment, thus promoting safer, more efficient, and more creative HRC in construction projects. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
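The TOPSIS scoring step behind the evaluation can be sketched as follows. The Safety (0.2708) and Productivity (0.2327) weights come from the paper's AHP analysis; the remaining three weights and the candidate-team score matrix are invented so the example runs end to end, and all criteria are treated as benefit criteria.

```python
import numpy as np

# Hedged TOPSIS sketch: rank candidate HRC teams by closeness to the
# ideal solution. Only the first two weights are from the paper; the
# rest, and the score matrix, are illustrative assumptions.
weights = np.array([0.2708, 0.2327, 0.2000, 0.1600, 0.1365])
matrix = np.array([        # rows: hypothetical teams, cols: dimensions
    [4.2, 3.8, 4.0, 3.5, 3.9],
    [3.6, 4.4, 3.2, 4.1, 3.0],
    [4.8, 4.0, 4.5, 3.9, 4.2],
])

norm = matrix / np.linalg.norm(matrix, axis=0)     # vector normalization
v = norm * weights
ideal, anti = v.max(axis=0), v.min(axis=0)         # ideal / anti-ideal points
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)                # higher = better team
print(closeness.round(3))
```

The closeness coefficient in [0, 1] is what a verbal grade such as "fair" would be mapped from.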

24 pages, 1855 KiB  
Article
AI-Driven Panel Assignment Optimization via Document Similarity and Natural Language Processing
by Rohit Ramachandran, Urjit Patil, Srinivasaraghavan Sundar, Prem Shah and Preethi Ramesh
AI 2025, 6(8), 177; https://doi.org/10.3390/ai6080177 - 1 Aug 2025
Abstract
Efficient and accurate panel assignment is critical in expert and peer review processes. Traditional methods—based on manual preferences or heuristic rules—often introduce bias, inconsistency, and scalability challenges. We present an automated framework that combines transformer-based document similarity modeling with optimization-based reviewer assignment. Using the all-mpnet-base-v2 model (version 3.4.1), our system computes semantic similarity between proposal texts and reviewer documents, including CVs and Google Scholar profiles, without requiring manual input from reviewers. These similarity scores are then converted into rankings and integrated into an Integer Linear Programming (ILP) formulation that accounts for workload balance, conflicts of interest, and role-specific reviewer assignments (lead, scribe, reviewer). The method was tested across 40 researchers in two distinct disciplines (Chemical Engineering and Philosophy), each with 10 proposal documents. Results showed high self-similarity scores (0.65–0.89), strong differentiation between unrelated fields (−0.21 to 0.08), and comparable performance between reviewer document types. The optimization consistently prioritized top matches while maintaining feasibility under assignment constraints. By eliminating the need for subjective preferences and leveraging deep semantic analysis, our framework offers a scalable, fair, and efficient alternative to manual or heuristic assignment processes. This approach can support large-scale review workflows while enhancing transparency and alignment with reviewer expertise. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
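The assignment stage can be sketched under simplifying assumptions: given a reviewer-by-proposal similarity matrix (invented here; in the paper it comes from all-mpnet-base-v2 embeddings of CVs and proposals), find the one-to-one pairing maximizing total similarity. The paper's ILP additionally models roles, workload balance, and conflicts of interest; the Hungarian method below covers only the basic case.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Simplified assignment sketch: one reviewer per proposal, maximizing
# total similarity. Scores are invented; the paper's ILP adds roles,
# workload limits, and conflict-of-interest constraints.
similarity = np.array([
    [0.82, 0.10, 0.05],   # reviewer 0 vs. proposals 0..2
    [0.15, 0.77, 0.30],
    [0.08, 0.25, 0.69],
])

reviewers, proposals = linear_sum_assignment(-similarity)  # negate to maximize
for r, p in zip(reviewers, proposals):
    print(f"reviewer {r} -> proposal {p} (similarity {similarity[r, p]:.2f})")
```

A conflict of interest can be handled in this simplified setting by assigning the forbidden pair a large negative score before solving.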

30 pages, 1293 KiB  
Article
Obstacles and Drivers of Sustainable Horizontal Logistics Collaboration: Analysis of Logistics Providers’ Behaviour in Slovenia
by Ines Pentek and Tomislav Letnik
Sustainability 2025, 17(15), 7001; https://doi.org/10.3390/su17157001 - 1 Aug 2025
Abstract
The logistics industry faces challenges from evolving consumer expectations, technological advances, sustainability demands, and market disruptions. Logistics collaboration is in theory perceived as one of the most promising solutions to these issues, but there are still many challenges that need to be better understood and addressed. While vertical collaboration among supply chain actors is well advanced, horizontal collaboration among competing service providers remains under-explored. This study developed a novel methodology based on the COM-B behaviour-change framework to better understand the main challenges, opportunities, capabilities, and drivers that would motivate competing companies to exploit the potential of horizontal logistics collaboration. A survey was designed and conducted among 71 logistics service providers in Slovenia, chosen for its fragmented market and low willingness to collaborate. Statistical analysis reveals cost reduction (M = 4.21/5) and improved vehicle utilization (M = 4.29/5) as the primary motivators. On the other hand, maintaining company reputation (M = 4.64/5), fair resource sharing (M = 4.20/5), and transparency of logistics processes (M = 4.17/5) all persist as key enabling conditions. These findings underscore the pivotal role of behavioural drivers and suggest strategies that combine economic incentives with targeted trust-building measures. Future research should employ experimental designs in diverse national contexts and integrate vertical–horizontal approaches to validate causal pathways and advance theory. Full article

12 pages, 441 KiB  
Article
Diagnostic Value of Point-of-Care Ultrasound for Sarcopenia in Geriatric Patients Hospitalized for Hip Fracture
by Laure Mondo, Chloé Louis, Hinda Saboul, Laetitia Beernaert and Sandra De Breucker
J. Clin. Med. 2025, 14(15), 5424; https://doi.org/10.3390/jcm14155424 - 1 Aug 2025
Abstract
Introduction: Sarcopenia is a systemic condition linked to increased morbidity and mortality in older adults. Point-of-Care Ultrasound (POCUS) offers a rapid, bedside method to assess muscle mass. This study evaluates the diagnostic accuracy of POCUS compared to Dual-energy X-ray Absorptiometry (DXA), the gold standard method, and explores its prognostic value in older patients undergoing surgery for hip fractures. Patients and Methods: In this prospective, single-center study, 126 patients aged ≥ 70 years and hospitalized with hip fractures were included. Sarcopenia was defined according to the revised 2018 EWGSOP2 criteria. Muscle mass was assessed by the Appendicular Skeletal Muscle Mass Index (ASMI) using DXA and by the thickness of the rectus femoris (RF) muscle using POCUS. Results: Of the 126 included patients, 52 had both DXA and POCUS assessments, and 43% of them met the diagnostic criteria for sarcopenia or severe sarcopenia. RF muscle thickness measured by POCUS was significantly associated with ASMI (R2 = 0.30; p < 0.001). POCUS showed fair diagnostic accuracy in women (AUC 0.652) and excellent accuracy in men (AUC 0.905). Optimal diagnostic thresholds according to Youden’s index were 5.7 mm for women and 9.3 mm for men. Neither RF thickness, ASMI, nor sarcopenia status predicted mortality or major postoperative complications. Conclusions: POCUS is a promising, accessible tool for diagnosing sarcopenia in older adults with hip fractures. Nonetheless, its prognostic utility remains uncertain and should be further evaluated in long-term studies. Full article
(This article belongs to the Special Issue The “Orthogeriatric Fracture Syndrome”—Issues and Perspectives)
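The Youden's-index threshold selection used for those cut-offs can be sketched as follows. The measurements below are synthetic stand-ins, not the study's data; the study's actual cut-offs were 5.7 mm (women) and 9.3 mm (men).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Sketch of threshold selection: pick the muscle-thickness cut-off
# maximizing Youden's J = sensitivity + specificity - 1.
# All data here is synthetic and purely illustrative.
rng = np.random.default_rng(2)
sarcopenic = rng.normal(5.0, 1.0, 40)        # thinner muscle (assumed)
healthy = rng.normal(7.5, 1.2, 60)
thickness = np.concatenate([sarcopenic, healthy])
label = np.concatenate([np.ones(40), np.zeros(60)])   # 1 = sarcopenia

# Lower thickness suggests disease, so use the negated value as score
fpr, tpr, thresholds = roc_curve(label, -thickness)
j = tpr - fpr                                # Youden's index per threshold
cutoff = -thresholds[j.argmax()]
auc = roc_auc_score(label, -thickness)
print(f"AUC = {auc:.2f}, cut-off = {cutoff:.1f} mm")
```

The chosen cut-off lands between the two group means, mirroring how a sex-specific threshold would be derived from each subgroup's ROC curve.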

15 pages, 1515 KiB  
Article
Ontology-Based Data Pipeline for Semantic Reaction Classification and Research Data Management
by Hendrik Borgelt, Frederick Gabriel Kitel and Norbert Kockmann
Computers 2025, 14(8), 311; https://doi.org/10.3390/computers14080311 - 1 Aug 2025
Abstract
Catalysis research is complex and interdisciplinary, involving diverse physical effects and challenging data practices. Research data often captures only selected aspects, such as specific reactants and products, limiting its utility for machine learning and the implementation of FAIR (Findable, Accessible, Interoperable, Reusable) workflows. To improve this, semantic structuring through ontologies is essential. This work extends the established ontologies by refining logical relations and integrating semantic tools such as the Web Ontology Language or the Shape Constraint Language. It incorporates application programming interfaces from chemical databases, such as the Kyoto Encyclopedia of Genes and Genomes and the National Institute of Health’s PubChem database, and builds upon established ontologies. A key innovation lies in automatically decomposing chemical substances through database entries and chemical identifier representations to identify functional groups, enabling more generalized reaction classification. Using new semantic functionality, functional groups are flexibly addressed, improving the classification of reactions such as saponification and ester cleavage with simultaneous oxidation. A graphical interface (GUI) supports user interaction with the knowledge graph, enabling ontological reasoning and querying. This approach demonstrates improved specificity of the newly established ontology over its predecessors and offers a more user-friendly interface for engaging with structured chemical knowledge. Future work will focus on expanding ontology coverage to support a wider range of reactions in catalysis research. Full article
