Search Results (1,034)

Search Parameters:
Keywords = high fairness

24 pages, 1855 KiB  
Article
AI-Driven Panel Assignment Optimization via Document Similarity and Natural Language Processing
by Rohit Ramachandran, Urjit Patil, Srinivasaraghavan Sundar, Prem Shah and Preethi Ramesh
AI 2025, 6(8), 177; https://doi.org/10.3390/ai6080177 (registering DOI) - 1 Aug 2025
Abstract
Efficient and accurate panel assignment is critical in expert and peer review processes. Traditional methods—based on manual preferences or heuristic rules—often introduce bias, inconsistency, and scalability challenges. We present an automated framework that combines transformer-based document similarity modeling with optimization-based reviewer assignment. Using the all-mpnet-base-v2 model (version 3.4.1), our system computes semantic similarity between proposal texts and reviewer documents, including CVs and Google Scholar profiles, without requiring manual input from reviewers. These similarity scores are then converted into rankings and integrated into an Integer Linear Programming (ILP) formulation that accounts for workload balance, conflicts of interest, and role-specific reviewer assignments (lead, scribe, reviewer). The method was tested across 40 researchers in two distinct disciplines (Chemical Engineering and Philosophy), each with 10 proposal documents. Results showed high self-similarity scores (0.65–0.89), strong differentiation between unrelated fields (−0.21 to 0.08), and comparable performance between reviewer document types. The optimization consistently prioritized top matches while maintaining feasibility under assignment constraints. By eliminating the need for subjective preferences and leveraging deep semantic analysis, our framework offers a scalable, fair, and efficient alternative to manual or heuristic assignment processes. This approach can support large-scale review workflows while enhancing transparency and alignment with reviewer expertise. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
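The similarity-then-optimize pipeline this abstract describes can be sketched in a few lines. Note the assumptions: the paper uses real all-mpnet-base-v2 embeddings, while the vectors below are toy placeholders, and the greedy role pass is a deliberate simplification of the paper's ILP formulation.

```python
# Sketch of similarity-ranked reviewer assignment. The embedding vectors are
# made-up placeholders (real ones come from a sentence-transformer model),
# and the greedy role assignment stands in for the paper's ILP solver.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings: one proposal, three candidate reviewers.
proposal = [0.9, 0.1, 0.2]
reviewers = {"r1": [0.8, 0.2, 0.1], "r2": [0.1, 0.9, 0.3], "r3": [0.7, 0.3, 0.2]}

# Convert similarity scores into a ranking (highest similarity first).
ranking = sorted(reviewers, key=lambda r: cosine(proposal, reviewers[r]), reverse=True)

# Greedy role assignment: one role per reviewer, best matches get lead roles.
roles = ["lead", "scribe", "reviewer"]
assignment = dict(zip(roles, ranking))
```

A real implementation would replace the greedy pass with an ILP that also encodes workload balance and conflict-of-interest constraints.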

16 pages, 2412 KiB  
Article
Measuring Equitable Prosperity in the EU-27: Introducing the IDDO, a Composite Index of Growth and Income Inequality (2005–2024)
by Narcis Eduard Mitu and George Teodor Mitu
World 2025, 6(3), 103; https://doi.org/10.3390/world6030103 - 1 Aug 2025
Abstract
This article introduces the Index of Distributive and Developmental Outlook (IDDO), a composite indicator designed to jointly assess economic performance and income inequality across EU-27 Member States. While GDP per capita is widely used to evaluate national prosperity, and the Gini coefficient captures income distribution, their separate use often obscures the interaction between growth and equity—an essential dimension of sustainable development. To address this gap, the IDDO integrates normalized values of both indicators using arithmetic and geometric means. The study applies the IDDO to a longitudinal dataset covering the years 2005, 2014, and 2024, allowing for comparative and temporal analysis. Based on IDDO scores, countries are classified into four development types: balanced development, growth with inequality, equity with stagnation, and dual vulnerability. Results show that while some Member States, such as Luxembourg, Czechia, and Slovenia, maintain consistently high IDDO levels, others—including Bulgaria, Romania, and Latvia—exhibit persistent challenges in aligning growth with equitable outcomes. The findings underscore the need for cohesion policies that prioritize not only economic convergence but also distributive fairness. The IDDO provides a practical and adaptable tool for diagnosing development patterns, benchmarking performance, and informing policy design within the EU framework. Full article
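A composite index in the spirit of the IDDO can be sketched as follows. The normalization ranges and the single country's figures below are illustrative assumptions, not values from the paper; the arithmetic and geometric means are the two combination rules the abstract names.

```python
# Minimal sketch of a growth-plus-equity composite index: min-max normalize
# GDP per capita, invert the normalized Gini (lower inequality scores higher),
# then combine with arithmetic and geometric means. Ranges and inputs are
# hypothetical, not the paper's data.
import math

def minmax(x, lo, hi):
    return (x - lo) / (hi - lo)

def composite_index(gdp, gini, gdp_range=(10_000, 100_000), gini_range=(20.0, 40.0)):
    g = minmax(gdp, *gdp_range)           # higher GDP -> higher score
    e = 1.0 - minmax(gini, *gini_range)   # lower Gini -> higher score
    arithmetic = (g + e) / 2
    geometric = math.sqrt(g * e)          # penalizes imbalance between the two pillars
    return arithmetic, geometric

# Illustrative country: high GDP per capita, moderate inequality.
arith, geom = composite_index(gdp=60_000, gini=28.0)
```

The geometric mean is always at most the arithmetic mean, so a country scoring well on one pillar but poorly on the other is pulled down more sharply, which is the usual rationale for including it.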

21 pages, 2909 KiB  
Article
Novel Federated Graph Contrastive Learning for IoMT Security: Protecting Data Poisoning and Inference Attacks
by Amarudin Daulay, Kalamullah Ramli, Ruki Harwahyu, Taufik Hidayat and Bernardi Pranggono
Mathematics 2025, 13(15), 2471; https://doi.org/10.3390/math13152471 - 31 Jul 2025
Abstract
Malware evolution presents growing security threats for resource-constrained Internet of Medical Things (IoMT) devices. Conventional federated learning (FL) often suffers from slow convergence, high communication overhead, and fairness issues in dynamic IoMT environments. In this paper, we propose FedGCL, a secure and efficient FL framework integrating contrastive graph representation learning for enhanced feature discrimination, a Jain-index-based fairness-aware aggregation mechanism, an adaptive synchronization scheduler to optimize communication rounds, and secure aggregation via homomorphic encryption within a Trusted Execution Environment. We evaluate FedGCL on four benchmark malware datasets (Drebin, Malgenome, Kronodroid, and TUANDROMD) using 5 to 15 graph neural network clients over 20 communication rounds. Our experiments demonstrate that FedGCL achieves 96.3% global accuracy within three rounds and converges to 98.9% by round twenty—reducing required training rounds by 45% compared to FedAvg—while incurring only approximately 10% additional computational overhead. By preserving patient data privacy at the edge, FedGCL enhances system resilience without sacrificing model performance. These results indicate FedGCL’s promise as a secure, efficient, and fair federated malware detection solution for IoMT ecosystems. Full article
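The Jain index underlying FedGCL's fairness-aware aggregation has a standard closed form: for allocations x_1..x_n it is (Σx)² / (n·Σx²), equal to 1 for a perfectly even allocation and 1/n in the most skewed case. How FedGCL folds the index into client weighting is not specified in the abstract, so only the index itself is shown:

```python
# Jain's fairness index: 1.0 for perfectly equal allocations, 1/n when a
# single participant receives everything.
def jain_index(xs):
    n = len(xs)
    total = sum(xs)
    return total * total / (n * sum(x * x for x in xs))

equal = jain_index([1.0, 1.0, 1.0, 1.0])   # perfectly fair
skewed = jain_index([4.0, 0.0, 0.0, 0.0])  # maximally unfair for n = 4
```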

23 pages, 1447 KiB  
Article
Heat Risk Perception and Vulnerability in Puerto Rico: Insights for Climate Adaptation in the Caribbean
by Brenda Guzman-Colon, Zack Guido, Claudia P. Amaya-Ardila, Laura T. Cabrera-Rivera and Pablo A. Méndez-Lázaro
Int. J. Environ. Res. Public Health 2025, 22(8), 1197; https://doi.org/10.3390/ijerph22081197 - 31 Jul 2025
Abstract
Extreme heat poses growing health risks in tropical regions, yet public perception of this threat remains understudied in the Caribbean. This study examines how residents in Puerto Rico perceived heat-related health risks and how these perceptions relate to vulnerability and protective behaviors during the extreme heat events of the summer of 2020. We conducted a cross-sectional telephone survey of 500 adults across metropolitan and non-metropolitan areas of Puerto Rico, using stratified probability sampling. The questionnaire assessed heat risk perception, sociodemographic characteristics, health status, prior heat exposure, and heat-related behaviors. While most participants expressed concern about climate change and high temperatures, fewer than half perceived heat as a high level of personal health risk. Higher levels of risk perception were significantly associated with being male, aged 50–64, unemployed, and in fair health, having multiple chronic conditions, and prior experience with heat-related symptoms. Those with symptoms were nearly five times more likely to report high levels of risk perception (OR = 4.94, 95% CI: 2.93–8.34). In contrast, older adults (65+), despite their higher level of vulnerability, reported lower levels of risk perception and fewer symptoms. Nighttime heat exposure was widespread and strongly associated with heat-related symptoms. Common coping strategies included the use of fans and air conditioning, though economic constraints and infrastructure instability limited access. The findings highlight the disparity between actual and perceived vulnerability, particularly among older adults. Public health strategies should focus on risk communication tailored to vulnerable groups and address barriers to heat adaptation. Strengthening heat resilience in Puerto Rico requires improved infrastructure, equitable access to cooling, and targeted outreach. Full article
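The abstract reports an odds ratio of 4.94 (95% CI 2.93–8.34) for high risk perception among respondents with prior heat-related symptoms. For readers unfamiliar with the statistic, this is how an odds ratio and its Wald confidence interval are computed from a 2×2 table; the counts below are made up for illustration and are not the study's data.

```python
# Odds ratio and 95% Wald CI from a 2x2 table (illustrative counts only).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed with/without outcome; c, d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 80/40 exposed with/without high perception, 60/120 unexposed.
or_, lo, hi = odds_ratio_ci(a=80, b=40, c=60, d=120)
```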

28 pages, 1334 KiB  
Review
Evaluating Data Quality: Comparative Insights on Standards, Methodologies, and Modern Software Tools
by Theodoros Alexakis, Evgenia Adamopoulou, Nikolaos Peppes, Emmanouil Daskalakis and Georgios Ntouskas
Electronics 2025, 14(15), 3038; https://doi.org/10.3390/electronics14153038 - 30 Jul 2025
Viewed by 4
Abstract
In an era of exponential data growth, ensuring high data quality has become essential for effective, evidence-based decision making. This study presents a structured and comparative review of the field by integrating data classifications, quality dimensions, assessment methodologies, and modern software tools. Unlike earlier reviews that focus narrowly on individual aspects, this work synthesizes foundational concepts with formal frameworks, including the Findable, Accessible, Interoperable, and Reusable (FAIR) principles and the ISO/IEC 25000 series on software and data quality. It further examines well-established assessment models, such as Total Data Quality Management (TDQM), Data Warehouse Quality (DWQ), and High-Quality Data Management (HDQM), and critically evaluates commercial platforms in terms of functionality, AI integration, and adaptability. A key contribution lies in the development of conceptual mappings that link data quality dimensions with FAIR indicators and maturity levels, offering a practical reference model. The findings also identify gaps in current tools and approaches, particularly around cost-awareness, explainability, and process adaptability. By bridging theory and practice, the study contributes to the academic literature while offering actionable insights for building scalable, standards-aligned, and context-sensitive data quality management strategies. Full article

31 pages, 1317 KiB  
Article
Privacy-Preserving Clinical Decision Support for Emergency Triage Using LLMs: System Architecture and Real-World Evaluation
by Alper Karamanlıoğlu, Berkan Demirel, Onur Tural, Osman Tufan Doğan and Ferda Nur Alpaslan
Appl. Sci. 2025, 15(15), 8412; https://doi.org/10.3390/app15158412 - 29 Jul 2025
Viewed by 190
Abstract
This study presents a next-generation clinical decision-support architecture for Clinical Decision Support Systems (CDSS) focused on emergency triage. By integrating Large Language Models (LLMs), Federated Learning (FL), and low-latency streaming analytics within a modular, privacy-preserving framework, the system addresses key deployment challenges in high-stakes clinical settings. Unlike traditional models, the architecture processes both structured (vitals, labs) and unstructured (clinical notes) data to enable context-aware reasoning with clinically acceptable latency at the point of care. It leverages big data infrastructure for large-scale EHR management and incorporates digital twin concepts for live patient monitoring. Federated training allows institutions to collaboratively improve models without sharing raw data, ensuring compliance with GDPR/HIPAA, and FAIR principles. Privacy is further protected through differential privacy, secure aggregation, and inference isolation. We evaluate the system through two studies: (1) a benchmark of 750+ USMLE-style questions validating the medical reasoning of fine-tuned LLMs; and (2) a real-world case study (n = 132, 75.8% first-pass agreement) using de-identified MIMIC-III data to assess triage accuracy and responsiveness. The system demonstrated clinically acceptable latency and promising alignment with expert judgment on reviewed cases. The infectious disease triage case demonstrates low-latency recognition of sepsis-like presentations in the ED. This work offers a scalable, audit-compliant, and clinician-validated blueprint for CDSS, enabling low-latency triage and extensibility across specialties. Full article
(This article belongs to the Special Issue Large Language Models: Transforming E-health)

25 pages, 1319 KiB  
Article
Beyond Performance: Explaining and Ensuring Fairness in Student Academic Performance Prediction with Machine Learning
by Kadir Kesgin, Salih Kiraz, Selahattin Kosunalp and Bozhana Stoycheva
Appl. Sci. 2025, 15(15), 8409; https://doi.org/10.3390/app15158409 - 29 Jul 2025
Viewed by 118
Abstract
This study addresses fairness in machine learning for student academic performance prediction using the UCI Student Performance dataset. We comparatively evaluate logistic regression, Random Forest, and XGBoost, integrating the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance and 5-fold cross-validation for robust model training. A comprehensive fairness analysis is conducted, considering sensitive attributes such as gender, school type, and socioeconomic factors, including parental education (Medu and Fedu), cohabitation status (Pstatus), and family size (famsize). Using the AIF360 library, we compute the demographic parity difference (DP) and Equalized Odds Difference (EO) to assess model biases across diverse subgroups. Our results demonstrate that XGBoost achieves high predictive performance (accuracy: 0.789; F1 score: 0.803) while maintaining low bias for socioeconomic attributes, offering a balanced approach to fairness and performance. A sensitivity analysis of bias mitigation strategies further enhances the study, advancing equitable artificial intelligence in education by incorporating socially relevant factors. Full article
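The two fairness metrics named here, demographic parity difference and equalized odds difference, are provided out of the box by AIF360; the plain-Python versions below show what they measure on toy data (one common definition of equalized odds difference is used, as the exact AIF360 variant is not quoted in the abstract).

```python
# Demographic parity difference: gap in positive-prediction rates between
# two groups. Equalized odds difference: largest gap in TPR or FPR.
def demographic_parity_diff(y_pred, group):
    rate = lambda g: sum(p for p, s in zip(y_pred, group) if s == g) / group.count(g)
    return abs(rate(1) - rate(0))

def equalized_odds_diff(y_true, y_pred, group):
    def rate(g, label):  # mean prediction among members of group g with true label
        sel = [p for t, p, s in zip(y_true, y_pred, group) if s == g and t == label]
        return sum(sel) / len(sel)
    tpr_gap = abs(rate(1, 1) - rate(0, 1))
    fpr_gap = abs(rate(1, 0) - rate(0, 0))
    return max(tpr_gap, fpr_gap)

# Toy data: group 1 receives positive predictions more often.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
dp = demographic_parity_diff(y_pred, group)
eo = equalized_odds_diff(y_true, y_pred, group)
```

A value of 0 for either metric means the model treats the two subgroups identically on that criterion.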
(This article belongs to the Special Issue Challenges and Trends in Technology-Enhanced Learning)

22 pages, 3267 KiB  
Article
Identifying Deformation Drivers in Dam Segments Using Combined X- and C-Band PS Time Series
by Jonas Ziemer, Jannik Jänichen, Gideon Stein, Natascha Liedel, Carolin Wicker, Katja Last, Joachim Denzler, Christiane Schmullius, Maha Shadaydeh and Clémence Dubois
Remote Sens. 2025, 17(15), 2629; https://doi.org/10.3390/rs17152629 - 29 Jul 2025
Viewed by 177
Abstract
Dams play a vital role in securing water and electricity supplies for households and industry, and they contribute significantly to flood protection. Regular monitoring of dam deformations holds fundamental socio-economic and ecological importance. Traditionally, this has relied on time-consuming in situ techniques that offer either high spatial or temporal resolution. Persistent Scatterer Interferometry (PSI) addresses these limitations, enabling high-resolution monitoring in both domains. Sensors such as TerraSAR-X (TSX) and Sentinel-1 (S-1) have proven effective for deformation analysis with millimeter accuracy. Combining TSX and S-1 datasets enhances monitoring capabilities by leveraging the high spatial resolution of TSX with the broad coverage of S-1. This improves monitoring by increasing PS point density, reducing revisit intervals, and facilitating the detection of environmental deformation drivers. This study aims to investigate two objectives: first, we evaluate the benefits of a spatially and temporally densified PS time series derived from TSX and S-1 data for detecting radial deformations in individual dam segments. To support this, we developed the TSX2StaMPS toolbox, integrated into the updated snap2stamps workflow for generating single-master interferogram stacks using TSX data. Second, we identify deformation drivers using water level and temperature as exogenous variables. The five-year study (2017–2022) was conducted on a gravity dam in North Rhine-Westphalia, Germany, which was divided into logically connected segments. The results were compared to in situ data obtained from pendulum measurements. Linear models demonstrated a fair agreement between the combined time series and the pendulum data (R2 = 0.5; MAE = 2.3 mm). Temperature was identified as the primary long-term driver of periodic deformations of the gravity dam. Following the filling of the reservoir, the variance in the PS data increased from 0.9 mm to 3.9 mm in RMSE, suggesting that water level changes are more responsible for short-term variations in the SAR signal. Upon full impoundment, the mean deformation amplitude decreased by approximately 1.7 mm toward the downstream side of the dam, which was attributed to the higher water pressure. The last five meters of water level rise resulted in higher feature importance due to interaction effects with temperature. The study concludes that integrating multiple PS datasets for dam monitoring is beneficial, particularly for dams where few PS points can be identified using one sensor or where pendulum systems are not installed. Identifying the drivers of deformation is feasible and can be incorporated into existing monitoring frameworks. Full article
(This article belongs to the Special Issue Dam Stability Monitoring with Satellite Geodesy II)
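The agreement figures quoted above (R2 = 0.5; MAE = 2.3 mm) follow the standard definitions of the coefficient of determination and mean absolute error. A sketch with made-up deformation samples, not the study's measurements:

```python
# Standard R^2 and MAE between a reference series (pendulum) and an estimate
# (PSI time series). All values below are hypothetical, in millimeters.
def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r_squared(y, yhat):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

pendulum = [1.0, 2.0, 3.0, 4.0]   # reference deformations (hypothetical)
psi      = [1.2, 1.8, 3.3, 3.9]   # PSI-derived estimates (hypothetical)
```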

17 pages, 2708 KiB  
Review
Review of Optical Imaging in Coronary Artery Disease Diagnosis
by Naeif Almagal, Niall Leahy, Foziyah Alqahtani, Sara Alsubai, Hesham Elzomor, Paolo Alberto Del Sole, Ruth Sharif and Faisal Sharif
J. Cardiovasc. Dev. Dis. 2025, 12(8), 288; https://doi.org/10.3390/jcdd12080288 - 29 Jul 2025
Viewed by 192
Abstract
Optical Coherence Tomography (OCT) is a light-based intravascular imaging modality that provides a high-resolution, cross-sectional view of coronary arteries. It enables useful anatomic and, increasingly, physiological evaluation of coronary artery disease (CAD). This review critically examines the growing application of OCT in assessing coronary artery physiology, beyond its initial mainstay application in anatomical imaging. OCT provides precise information on plaque morphology, which can help identify vulnerable plaques, and is most valuable in informing percutaneous coronary interventions (PCIs), including stent implantation and optimization. The combination of OCT with functional measurements, such as the optical flow ratio and OCT-based fractional flow reserve (OCT-FFR), permits a more complete assessment of coronary stenoses, which may improve diagnostic accuracy and revascularization decision-making. Recent developments in OCT technology have also enhanced the accuracy of coronary function measurement. These innovations may support optimal patient treatment by enabling more personalized and individualized treatment options; however, it is critical to recognize the limitations of OCT and to distinguish between hypothetical advantages and empirical outcomes. This review evaluates the existing uses, technological solutions, and future trends in OCT-based physiological imaging and evaluation, explains how these advances can benefit the treatment of CAD, and gives a fair representation relative to other imaging applications. Full article

34 pages, 1954 KiB  
Article
A FAIR Resource Recommender System for Smart Open Scientific Inquiries
by Syed N. Sakib, Sajratul Y. Rubaiat, Kallol Naha, Hasan H. Rahman and Hasan M. Jamil
Appl. Sci. 2025, 15(15), 8334; https://doi.org/10.3390/app15158334 - 26 Jul 2025
Viewed by 192
Abstract
A vast proportion of scientific data remains locked behind dynamic web interfaces, often called the deep web—inaccessible to conventional search engines and standard crawlers. This gap between data availability and machine usability hampers the goals of open science and automation. While registries like FAIRsharing offer structured metadata describing data standards, repositories, and policies aligned with the FAIR (Findable, Accessible, Interoperable, and Reusable) principles, they do not enable seamless, programmatic access to the underlying datasets. We present FAIRFind, a system designed to bridge this accessibility gap. FAIRFind autonomously discovers, interprets, and operationalizes access paths to biological databases on the deep web, regardless of their FAIR compliance. Central to our approach is the Deep Web Communication Protocol (DWCP), a resource description language that represents web forms, HyperText Markup Language (HTML) tables, and file-based data interfaces in a machine-actionable format. Leveraging large language models (LLMs), FAIRFind combines a specialized deep web crawler and web-form comprehension engine to transform passive web metadata into executable workflows. By indexing and embedding these workflows, FAIRFind enables natural language querying over diverse biological data sources and returns structured, source-resolved results. Evaluation across multiple open-source LLMs and database types demonstrates over 90% success in structured data extraction and high semantic retrieval accuracy. FAIRFind advances existing registries by turning linked resources from static references into actionable endpoints, laying a foundation for intelligent, autonomous data discovery across scientific domains. Full article

16 pages, 1145 KiB  
Review
Beyond Global Metrics: The U-Smile Method for Explainable, Interpretable, and Transparent Variable Selection in Risk Prediction Models
by Katarzyna B. Kubiak, Agata Konieczna, Anna Tyranska-Fobke and Barbara Więckowska
Appl. Sci. 2025, 15(15), 8303; https://doi.org/10.3390/app15158303 - 25 Jul 2025
Viewed by 109
Abstract
Variable selection (VS) is a critical step in developing predictive binary classification (BC) models. Many traditional methods for assessing the added value of a candidate variable provide global performance summaries and lack an interpretable graphical summary of results. To address this limitation, we developed the U-smile method, a residual-based, post hoc evaluation approach for assessing prediction improvements and worsening separately for events and non-events. The U-smile method produces three families of interpretable BA-RB-I coefficients at three levels of generality and a standardized graphical summary through U-smile and prediction improvement–worsening (PIW) plots, enabling transparent, interpretable, and explainable VS. Validated in balanced and imbalanced BC scenarios, the method proved robust to class imbalance and collinearity, and more sensitive than traditional metrics in detecting subtle but meaningful effects. Moreover, the method’s intuitive visual output (U-smile plot) facilitates the rapid communication of results to non-technical stakeholders, bridging the gap between data science and applied decision-making. The U-smile method supports both local and global evaluations and complements existing explainable machine learning (XML) and artificial intelligence (XAI) tools without overlapping in their functions. The U-smile method offers a transparency-enhancing and human-oriented approach for ethical and fair VS, making it highly suited for high-stakes domains, e.g., healthcare and public health. Full article

19 pages, 19333 KiB  
Article
A m-RGA Scheduling Algorithm Based on High-Performance Switch System and Simulation Application
by Bowen Cheng and Weibin Zhou
Electronics 2025, 14(15), 2971; https://doi.org/10.3390/electronics14152971 - 25 Jul 2025
Viewed by 177
Abstract
High-speed switching chips are key components of network core devices in the high-performance computing paradigm, and their scheduling algorithm performance directly influences the throughput, latency, and fairness within the system. However, traditional scheduling algorithms often encounter issues such as high implementation complexity and high communication overhead when dealing with bursty traffic. To address bottlenecks in high-speed switching chip scheduling, we propose a low-complexity and high-performance scheduling algorithm called m-RGA, where m represents a priority mechanism. First, by monitoring the historical service time and load level of the VOQs at the port, the priority of the VOQs is dynamically adjusted to enhance the efficient matching and fair allocation of port resources. Additionally, we prove that an algorithm achieving a 2× speedup under a constant traffic model can simultaneously guarantee throughput and latency, making this algorithm theoretically as excellent as any maximum matching algorithm. Through simulation, we demonstrate that m-RGA outperforms Highest Rank First (HRF) arbitration in terms of latency under non-uniform and bursty traffic patterns. Full article
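The abstract says VOQ priority is adjusted from historical service time and load level but does not give the update rule, so the following is a hypothetical sketch, not m-RGA itself: priority grows with queue occupancy and with time since last service, one common way to combine those two signals so that loaded queues win arbitration while starved queues still age upward.

```python
# Hypothetical VOQ priority sketch (NOT the paper's m-RGA rule): weight
# current occupancy against time since the queue was last served.
def voq_priority(occupancy, last_served, now, w_load=1.0, w_age=0.5):
    return w_load * occupancy + w_age * (now - last_served)

# (occupancy, cycle last served) per virtual output queue, hypothetical values.
voqs = {"voq0": (10, 2), "voq1": (3, 0), "voq2": (10, 8)}
now = 10
best = max(voqs, key=lambda q: voq_priority(voqs[q][0], voqs[q][1], now))
```

With these weights, voq0 wins: it is as loaded as voq2 but has waited longer, illustrating how the aging term prevents starvation of equally loaded queues.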

20 pages, 333 KiB  
Article
Interprofessional Collaboration in Obstetric and Midwifery Care—Multigroup Comparison of Midwives’ and Physicians’ Perspective
by Anja Alexandra Schulz and Markus Antonius Wirtz
Healthcare 2025, 13(15), 1798; https://doi.org/10.3390/healthcare13151798 - 24 Jul 2025
Viewed by 155
Abstract
Background: Interprofessional collaboration (IPC) is considered fundamental for integrated, high-quality woman-centered care. This study analyzes concordance/differences in the perspectives of midwives and physicians on IPC and Equitable Communication (EC) in prenatal/postpartum (PPC) and birth care (BC). Methods: The short form of the ICS Scale (ICS-R with eight items) adapted for the midwifery context, and the EC scale (three items) were completed by 293 midwives and 215 physicians in Germany. Profession- and setting-specific differences were analyzed using t-tests and ANOVA with repeated measurements. Confirmatory factor analysis with nested model comparisons tests the fairness of the scales. Results: Midwives’ ratings of all IPC aspects were systematically lower than physicians’ in both care settings (variance component professional group: η²p = 0.227/0.318), especially for EC (d = 1.22–1.41). Both groups rated EC higher in BC. The setting effect was less pronounced among physicians for the ICS-R items than among midwives. Violations of test fairness reveal validity deficiencies when using the aggregated EC sum score for group comparisons. Conclusions: Fundamental professional differences were found in the IPC assessment between physicians and midwives. The results enhance the understanding of IPC dynamics and provide starting points for action to leverage IPC’s potential for woman-centered care. Full article
(This article belongs to the Special Issue Midwifery-Led Care and Practice: Promoting Maternal and Child Health)
26 pages, 338 KiB  
Article
ChatGPT as a Stable and Fair Tool for Automated Essay Scoring
by Francisco García-Varela, Miguel Nussbaum, Marcelo Mendoza, Carolina Martínez-Troncoso and Zvi Bekerman
Educ. Sci. 2025, 15(8), 946; https://doi.org/10.3390/educsci15080946 - 23 Jul 2025
Viewed by 329
Abstract
The evaluation of open-ended questions is typically performed by human instructors using predefined criteria to uphold academic standards. However, manual grading presents challenges, including high costs, rater fatigue, and potential bias, prompting interest in automated essay scoring systems. While automated essay scoring tools can assess content, coherence, and grammar, discrepancies between human and automated scoring have raised concerns about their reliability as standalone evaluators. Large language models like ChatGPT offer new possibilities, but their consistency and fairness in feedback remain underexplored. This study investigates whether ChatGPT can provide stable and fair essay scoring: specifically, whether identical student responses receive consistent evaluations across multiple AI interactions using the same criteria. The study was conducted in two marketing courses at an engineering school in Chile, involving 40 students. Results showed that ChatGPT, when unprompted or given only minimal guidance, produced volatile grades and shifting criteria. Incorporating the instructor's rubric reduced this variability but did not eliminate it. Only after providing an example-rich rubric, a standardized output format, low temperature settings, and a normalization process based on decision tables did ChatGPT-4o demonstrate consistent and fair grading. Based on these findings, we developed a scalable algorithm that automatically generates normalized grading rubrics and decision tables for new questions with minimal human input, extending the accessibility and reliability of automated assessment. Full article
(This article belongs to the Section Technology Enhanced Education)
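The paper's actual rubrics and decision tables are not reproduced in the abstract. The following is only a minimal sketch of the general idea, assuming a hypothetical decision table that maps averaged rubric criterion scores onto fixed grade bands; `TABLE`, its thresholds, and the criterion names are all invented for illustration:

```python
def normalize_grade(criterion_scores, decision_table):
    """Map averaged rubric criterion scores onto a fixed grade band.

    criterion_scores: dict of criterion name -> raw score (0-10 here).
    decision_table: list of (threshold, grade) rows, highest threshold
    first; the first row whose threshold the average meets decides.
    """
    avg = sum(criterion_scores.values()) / len(criterion_scores)
    for threshold, grade in decision_table:
        if avg >= threshold:
            return grade
    return decision_table[-1][1]

# Hypothetical decision table (thresholds and grades are invented):
TABLE = [(9.0, "A"), (7.5, "B"), (6.0, "C"), (0.0, "F")]
scores = {"content": 8.0, "coherence": 7.5, "grammar": 9.0}
grade = normalize_grade(scores, TABLE)  # average ~8.17 falls in the "B" band
```

A fixed table like this makes the raw-score-to-grade mapping deterministic, which is one plausible way a normalization step can damp run-to-run variability in an LLM grader's raw outputs.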
18 pages, 1138 KiB  
Article
Intelligent Priority-Aware Spectrum Access in 5G Vehicular IoT: A Reinforcement Learning Approach
by Adeel Iqbal, Tahir Khurshaid and Yazdan Ahmad Qadri
Sensors 2025, 25(15), 4554; https://doi.org/10.3390/s25154554 - 23 Jul 2025
Abstract
Efficient and intelligent spectrum access is crucial for meeting the diverse Quality of Service (QoS) demands of Vehicular Internet of Things (V-IoT) systems in next-generation cellular networks. This work proposes RL-PASM, a novel reinforcement learning (RL)-based priority-aware spectrum management framework: a centralized, self-learning controller operating through Roadside Units (RSUs). RL-PASM dynamically allocates spectrum resources across three traffic classes: high-priority (HP), low-priority (LP), and best-effort (BE). This work compares four RL algorithms: Q-Learning (QL), Double Q-Learning (DQL), Deep Q-Network (DQN), and Actor-Critic (AC) methods. The environment is modeled as a discrete-time Markov Decision Process (MDP), and a context-sensitive reward function guides fairness-preserving decisions for access, preemption, coexistence, and hand-off. Extensive simulations conducted under realistic vehicular load conditions evaluate performance across key metrics, including throughput, delay, energy efficiency, fairness, blocking probability, and interruption probability. Unlike prior approaches, RL-PASM introduces a unified multi-objective reward formulation and centralized RSU-based control to support adaptive priority-aware access in dynamic vehicular environments. Simulation results confirm that RL-PASM balances throughput, latency, fairness, and energy efficiency, demonstrating its suitability for scalable and resource-constrained deployments. The results also show that DQN achieves the highest average throughput, followed by vanilla QL. DQL and AC maintain high fairness and low average interruption probability. QL achieves the lowest average delay and the highest energy efficiency, making it a suitable candidate for edge-constrained vehicular deployments. By selecting the appropriate RL method, RL-PASM offers a robust and adaptable solution for scalable, intelligent, and priority-aware spectrum access in vehicular communication infrastructures. Full article
(This article belongs to the Special Issue Emerging Trends in Next-Generation mmWave Cognitive Radio Networks)
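The abstract does not give RL-PASM's exact state, action, or reward design, so the following is only a generic tabular Q-Learning sketch of the kind of update such a framework rests on; the states, action names, and reward value are invented placeholders, not the paper's formulation:

```python
def q_learning_update(Q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.9):
    """One tabular Q-Learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
    return Q[(state, action)]

# Hypothetical actions named after the three traffic classes
# (state and action labels invented for illustration):
ACTIONS = ["grant_HP", "grant_LP", "grant_BE"]
Q = {(s, a): 0.0 for s in ["idle", "busy"] for a in ACTIONS}
q_learning_update(Q, "idle", "grant_HP", reward=1.0, next_state="busy",
                  actions=ACTIONS)
```

The same update rule underlies Double Q-Learning (with two tables to reduce maximization bias) and, with a function approximator replacing the table, DQN.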