Search Results (6,256)

Search Parameters:
Keywords = FAIR

15 pages, 2907 KB  
Article
GeoCetus: A Multi-Decadal Open Geospatial Infrastructure for the Continuous Monitoring of Marine Strandings in Italy
by Alessio Di Lorenzo, Ludovica Di Renzo, Chiara Profico, Daniela Profico, Vincenzo Olivieri and Sergio Guccione
Animals 2026, 16(9), 1323; https://doi.org/10.3390/ani16091323 (registering DOI) - 26 Apr 2026
Abstract
Marine turtle and cetacean strandings along the Italian coastline represent critical ecological events that require systematic documentation, yet historical data have suffered from fragmentation and poor accessibility across heterogeneous archives. GeoCetus addresses this gap by providing a unified national framework for the centralized collection, management, and open visualization of these data. The platform’s architecture integrates a spatially enabled database with a modern RESTful API, utilizing automated workflows to push data to a public GitHub.com repository. This system unifies historical and contemporary datasets, comprising over 4700 georeferenced records dating back to 1999, while ensuring data quality through structured validation, qualified contributors and reverse geocoding. The results demonstrate a significant improvement in data interoperability and democratization, with the dataset expanding by an average of 150–300 new records annually under a CC-BY-SA license. By adhering to FAIR Data Principles, GeoCetus offers the necessary infrastructure to support real-time operational responses and reproducible ecological analyses. We conclude that this standardized, machine-readable approach is essential for evidence-based national conservation strategies and effective environmental monitoring. Full article
(This article belongs to the Section Animal System and Management)
20 pages, 976 KB  
Article
Decoupling Fairness Perception from Grading Validity in Digitally Mediated Peer Assessment: A Two-Stage fsQCA Study
by Duen-Huang Huang and Yu-Cheng Wang
Information 2026, 17(5), 411; https://doi.org/10.3390/info17050411 (registering DOI) - 25 Apr 2026
Abstract
Artificial intelligence (AI) has become increasingly embedded in technology-enhanced learning environments, where peer assessment now serves both instructional and analytic purposes. Beyond allocating feedback and grades, it also produces data that is later interpreted through learning analytics systems. In practice, visible indicators such as students’ fairness perceptions and the degree of agreement among peer raters are often treated as signs that the assessment process is functioning effectively. However, these indicators do not necessarily correspond to grading validity. Students may regard a peer assessment process as fair even when peer-generated ratings remain weakly aligned with expert judgement. This study, therefore, examines whether the socio-technical configurations associated with high perceived fairness in a digitally mediated peer assessment environment also correspond to criterion-referenced grading validity. Data were collected from 215 undergraduate students enrolled in an Artificial Intelligence Foundations course over two consecutive semesters at a university in Taiwan, with instructor ratings serving as an external expert reference within the course context, rather than as a universal ground truth. Because anonymity conditions and semester were fully confounded in the study design, differences linked to anonymity should not be interpreted as isolated causal effects. A two-stage fuzzy-set Qualitative Comparative Analysis (fsQCA) was used. In the first stage, three equifinal configurations associated with high perceived fairness were identified. In the second stage, these configurations were examined against four grading objectivity outcomes: peer–instructor alignment, peer convergence, familiarity bias, and leniency bias. The findings show that fairness perception and grading validity are only partially aligned. 
Configurations anchored in explicit criterion transparency consistently supported both experiential legitimacy and evaluative accuracy. By contrast, one configuration was associated with high peer convergence while remaining weakly aligned with instructor standards, a pattern described here as false objectivity; this context-dependent configurational finding warrants further investigation across other settings. The study contributes to research on digitally enhanced assessment and learning analytics by showing that fairness perception, peer convergence, and grading validity should be treated as analytically distinct dimensions of assessment quality. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
47 pages, 5459 KB  
Review
Bias in Large Language Models: Origin, Evaluation, and Mitigation
by Yufei Guo, Muzhe Guo, Juntao Su, Zhou Yang, Mengqiu Zhu, Hongfei Li, Mengyang Qiu and Shuo Shuo Liu
Electronics 2026, 15(9), 1824; https://doi.org/10.3390/electronics15091824 - 24 Apr 2026
Abstract
Large language models (LLMs) have revolutionized natural language processing, but their susceptibility to biases poses significant challenges. This comprehensive review examines the landscape of bias in LLMs, from its origins to current mitigation strategies. We categorize biases as intrinsic and extrinsic, analyzing their manifestations in various natural language processing (NLP) tasks. The review critically assesses a range of bias evaluation methods, including data-level, model-level, and output-level approaches, providing researchers with a robust toolkit for bias detection. We further explore mitigation strategies, categorizing them into pre-model, intra-model, and post-model techniques, highlighting their effectiveness and limitations. Ethical and legal implications of biased LLMs are discussed, emphasizing potential harms in real-world applications such as healthcare and criminal justice. By synthesizing current knowledge on bias in LLMs, this review contributes to the ongoing effort to develop fair and responsible artificial intelligence (AI) systems. Our work serves as a comprehensive resource for researchers and practitioners working towards understanding, evaluating, and mitigating bias in LLMs, fostering the development of more equitable AI technologies. Full article
45 pages, 1775 KB  
Review
Symmetry-Preserving Contact Interaction Approaches: An Overview of Meson and Diquark Form Factors
by Laura Xiomara Gutiérrez-Guerrero and Roger José Hernández-Pinto
Particles 2026, 9(2), 45; https://doi.org/10.3390/particles9020045 (registering DOI) - 24 Apr 2026
Abstract
We present an updated overview of the symmetry-preserving contact interaction model in hadronic physics, which was developed a little over a decade ago to describe the mass spectrum and internal structure of mesons and diquarks composed of light and heavy quarks. Over the years, the contact interaction model has evolved into a framework capable of treating both ground and excited states, providing a simple yet consistent approach to nonperturbative QCD. In this review, we examine the mass spectrum and elastic form factors of forty mesons with different spins and parities, together with their corresponding diquark partners. Importantly, we update the comparison of contact interaction predictions using recent results from the literature, offering a fresh perspective on the model’s performance, strengths, and limitations. The analysis presented here refines previous conclusions and supports the contact interaction model as a practical tool for hadron structure studies, with potential applications to baryons and multiquark states. We also present comparisons with other theoretical models and approaches, including lattice quantum chromodynamics, and comment on future prospects in view of ongoing and planned experimental programs regarding hadron structure. In particular, forthcoming measurements at FAIR together with future studies at Jefferson Lab and the Electron Ion Collider are expected to provide key insights into hadron structure, with FAIR offering indirect constraints via hadron spectroscopy, hadronic interactions, and in-medium properties; high-precision data on meson structure and form factors from Jefferson Lab and the Electron Ion Collider will provide valuable benchmarks with which to confront predictions based on the contact interaction model. Full article
(This article belongs to the Special Issue Strong QCD and Hadron Structure)
19 pages, 5808 KB  
Article
Speedcubing as a Tool for Sustainable Social Development: Sport, Educational and Psychological Implications
by Mariusz Dzieńkowski, Piotr Tokarski, Karol Łazaruk, Małgorzata Plechawska-Wójcik, Karolina Rybak, Tomasz Zientarski and Anna Katarzyna Mazurek-Kusiak
Sustainability 2026, 18(9), 4222; https://doi.org/10.3390/su18094222 - 23 Apr 2026
Abstract
Speedcubing, the competitive practice of solving the Rubik’s Cube as quickly as possible, has gained global popularity both as a sporting and an educational activity. Aside from its recreational value, speedcubing may contribute to broader social and developmental outcomes. This study aims to examine the potential of speedcubing as a tool for sustainable social development, concentrating on its educational, psychological, and social implications and its relationship to selected United Nations Sustainable Development Goals (SDGs). An anonymous online survey consisting of 26 items (22 used for the main analysis and 4 demographic items) was conducted among 112 participants associated with the speedcubing community, including active competitors, coaches, and parents. The questionnaire addressed accessibility, cognitive and social competencies, and perceived educational and social benefits, as well as user preferences regarding digital tools supporting learning. The results indicate that participation in speedcubing supports the development of analytical thinking, problem-solving skills, perseverance, and self-control. Respondents also emphasized its educational value, accessibility, and role in fostering fair play and social integration. These findings suggest that speedcubing may contribute to several Sustainable Development Goals (SDGs), particularly SDG 3 (Good Health and Well-being), SDG 4 (Quality Education), SDG 11 (Sustainable Cities and Communities), and SDG 12 (Responsible Consumption and Production). Full article
26 pages, 10442 KB  
Article
Resource-Adaptive Semantic Transmission and Client Scheduling for OFDM-Based V2X Communications
by Jiahao Liu, Yuanle Chen, Wei Wu and Feng Tian
Sensors 2026, 26(9), 2615; https://doi.org/10.3390/s26092615 - 23 Apr 2026
Abstract
Proportional fair scheduling in OFDM-based vehicle-to-everything (V2X) uplink causes the resource-block allocation of each vehicle to vary from slot to slot, yet conventional semantic encoders produce a fixed number of output tokens regardless of the instantaneous channel capacity. When the encoder output exceeds the slot budget, transmitted features are truncated and the resulting federated learning gradient is corrupted—a problem that affected 23% of training rounds for non-line-of-sight vehicles in our experiments. The difficulty is worsened by a spatial pattern common in urban deployments: vehicles at congested intersections suffer the poorest propagation conditions while carrying the training data most relevant to safety, and throughput-driven client selection excludes them in favor of vehicles with strong channels but uninformative scenes. We address both issues within a single framework for OFDM-based V2X federated learning. On the transmission side, a Sensing-Guided Adaptive Modulation (SGAM) module derives a per-slot token budget from the current resource-block allocation and selects tokens through differentiable Gumbel-TopK pruning with a hard capacity clip, so the transmitted token count stays within the slot budget. On the scheduling side, a Channel-Decoupled Federated Learning (CDFL) module partitions clients independently by channel quality and data complexity, selects diverse representatives per partition via facility location optimization, and corrects for partition-size imbalance through inverse propensity weighting during model aggregation. Experiments on NuScenes with 20 non-IID vehicular clients under realistic OFDM channel simulation demonstrate a Macro-F1 of 0.710 (+8.7 points over the Oort-adapted baseline), zero budget violations throughout training, and a 75% reduction in training variance; the worst-class F1 more than doubles relative to FedAvg. Full article
(This article belongs to the Special Issue Challenges and Future Trends of UAV Communications)
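For context, the proportional fair rule named in the abstract above is a standard scheduling metric: in each slot, serve the user maximizing the ratio of its instantaneous achievable rate to its smoothed historical throughput. A minimal illustrative sketch of that textbook rule (function names and the smoothing factor are hypothetical, not the paper's code):

```python
def pf_select(inst_rates, avg_throughputs):
    """Return the index of the user maximizing the proportional fair
    metric: instantaneous rate / smoothed average throughput."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_throughputs[i])

def pf_update(avg, rate, served, alpha=0.1):
    """Exponentially weighted update of one user's average throughput;
    unserved users decay toward zero, raising their future priority."""
    return (1 - alpha) * avg + alpha * (rate if served else 0.0)
```

A user with a momentarily strong channel relative to its own history wins the slot, which is exactly why the per-vehicle resource-block share varies from slot to slot as the abstract describes.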
28 pages, 426 KB  
Systematic Review
Narrative and Challenge in Single-Player RPGs: A 1990–2025 Player-Centered Systematic Review
by João Antunes, Vítor Carvalho and José Miguel Domingues
Digital 2026, 6(2), 33; https://doi.org/10.3390/digital6020033 - 23 Apr 2026
Abstract
Single-player role-playing games (RPGs) combine two promises that do not always align: delivering a compelling narrative experience (world, characters, choices, and consequences) while sustaining a demanding ludic trajectory in which players face obstacles, master systems, and progress over time. This Systematic Literature Review (SLR) synthesizes existing evidence on the evolution of narrative and challenge in single-player RPGs from a player-centered perspective, with particular attention paid to immersion, engagement, flow, and perceived agency. A multi-database search strategy was conducted across Google Scholar, Scopus, IEEE Xplore, and the ACM Digital Library using query strings targeting narrative/agency, challenge and dynamic difficulty adjustment (DDA), adaptive difficulty, and the historical evolution of RPG narrative design, following a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-reported selection flow and Rayyan-supported screening. From 423 identified records, duplicates and non-eligible records were removed through staged screening, yielding 43 reports sought for retrieval; because six were not accessible in full text at consolidation, the synthesis was conducted on 37 full-text articles. The findings indicate (i) a predominance of work on narrative and agency, where agency is framed as a design effect rather than merely the presence of explicit branching choices; (ii) a recent rise in challenge/adaptation research, frequently tied to flow, fairness, and differentiated player profiles; and (iii) the emergence of artificial intelligence (AI)-driven approaches, including non-player character (NPC) systems, combat AI, reinforcement learning, and large language model (LLM)-based narrative control, which amplify core design trade-offs between narrative coherence and perceived agency. 
Beyond synthesizing a dispersed body of literature, the review contributes an integrated player-centered analytical framework that brings together narrative, challenge, and player experience, while also highlighting the need for more consistent measurement practices, stronger comparative designs, and longer-term empirical work in single-player RPG research. Full article
29 pages, 704 KB  
Systematic Review
Reassessing Minimum Wage Impacts: What the Spanish Case Contributes to International Evidence
by Manuela Adelaida de Paz-Báñez, Celia Sánchez-López and María José Asensio-Coto
Sustainability 2026, 18(9), 4206; https://doi.org/10.3390/su18094206 - 23 Apr 2026
Abstract
Minimum wage policies have become a central instrument for promoting social and economic sustainability by ensuring sufficient income to cover basic needs and reduce inequalities. They align with recent predistribution approaches in the literature and with goal 10.4 of the United Nations 2030 Agenda. In the European context, these policies are explicitly embedded within the sustainable development and just transition agenda, where the European Union emphasises that securing fair wages is a necessary condition for inclusive, balanced and equality-enhancing growth. At the same time, the methodological debate has evolved from early time-series-based approaches to a new generation of quasi-experimental studies, which provide more rigorous and less biased evidence. Within this framework, Spain represents a relevant case due to the scale and persistence of its minimum wage reforms since 2019, yet the Spanish case has lacked a systematic synthesis comparable to those available for other advanced economies (e.g., Germany, the UK, the USA). This article offers the first systematic synthesis of empirical evidence on the effects of the minimum wage in Spain from the 1990s to 2025, following the PRISMA 2020 methodology. This process yielded a large number of articles, from which an initial selection of 249 was made. Following the full screening and eligibility assessment, 34 articles were retained. The results allow for an analysis of the current state of research on the effects of the minimum wage across multiple dimensions, especially on employment and inequality. Other aspects, such as productivity, prices, other business adjustments, administrative obstacles, and public finances, are still poorly addressed in the available literature. In any case, this synthesis is a valuable exercise in clarifying the relationship between minimum wage policies and the transformation of labour markets. Full article
(This article belongs to the Special Issue Innovation in Circular Economy and Sustainable Development)
37 pages, 7664 KB  
Article
Joint Congestion Control Evaluation for MPTCP and MPQUIC over Multi-Link Backhauls with eMBB and mMTC-Like Traffic
by Roberto Picchi and Daniele Tarchi
Electronics 2026, 15(9), 1797; https://doi.org/10.3390/electronics15091797 - 23 Apr 2026
Abstract
Multi-link terrestrial backhauls create a shared transport environment in which heterogeneous multipath protocols compete for the same forwarding resources while reacting to congestion with different control logics. In this paper, we investigate this problem in a 5G Integrated Access and Backhaul (IAB) scenario where an IAB node aggregates traffic from multiple User Equipments (UEs) and forwards it toward the core network over two terrestrial backhaul paths. We focus on the coexistence of Multipath TCP (MPTCP) and Multipath QUIC (MPQUIC), evaluating how cross-protocol Congestion Control (CC) pairings affect performance. Specifically, all feasible BBR, CUBIC, and Reno cross-pairings are assessed under symmetric and asymmetric dual-backhaul conditions, considering Enhanced Mobile Broadband (eMBB) and dense low-rate traffic regimes representative of mMTC-like operation. The analysis considers throughput, Jain’s fairness index, jitter, and packet loss to identify the trade-offs of each CC pairing. Results show that CC selection is a first-order design factor in MPTCP/MPQUIC coexistence over shared backhauls. No single pairing is uniformly optimal across all metrics: some configurations provide more balanced throughput sharing, others improve fairness, while the most favorable solutions for jitter do not necessarily maximize transport efficiency. These findings identify CC pairing as a tuning dimension for multi-link backhaul systems based on heterogeneous multipath transports. Full article
(This article belongs to the Section Computer Science & Engineering)
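Jain's fairness index, one of the metrics named in the abstract above, is defined as (Σx)² / (n · Σx²): it equals 1.0 for a perfectly equal allocation and 1/n when a single flow takes everything. A minimal sketch for reference (illustrative only, not the paper's evaluation code):

```python
def jain_fairness(throughputs):
    """Jain's fairness index over per-flow throughputs:
    (sum x)^2 / (n * sum x^2), in (0, 1]; 1.0 = perfect equality."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))
```

The index is scale-invariant, so doubling every flow's throughput leaves the fairness score unchanged.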
21 pages, 1193 KB  
Article
Multiscale Learning for Accurate Recognition of Subtle Motion Actions: Toward Unobtrusive AI-Based Occupational Health Monitoring
by Ciro Mennella, Umberto Maniscalco, Massimo Esposito and Aniello Minutolo
Electronics 2026, 15(9), 1794; https://doi.org/10.3390/electronics15091794 - 23 Apr 2026
Abstract
The integration of artificial intelligence with unobtrusive sensing technologies is transforming occupational health monitoring by enabling continuous, objective assessment of worker activities in real industrial environments. This study focuses on the accurate recognition of subtle motion actions within logistics workflows using multichannel optical motion-capture data. We investigate several deep learning architectures commonly employed for temporal motion analysis, including tCNN, Transformer, CNN–LSTM, and ConvLSTM. To enhance robustness and fairness across workers with varying movement styles, a subject-independent evaluation protocol is adopted, and a multiscale temporal learning strategy is explored to better capture fine-grained and low-saliency actions. Experimental results show that the proposed multiscale tCNN achieves the highest accuracy, obtaining per-class recall between 73% and 83% and an overall accuracy of approximately 79%, consistently outperforming recurrent and attention-based architectures. These findings demonstrate the effectiveness of multiscale convolution-based temporal modeling for recognizing subtle motion actions and highlight the potential of combining optical motion capture with AI analytics to support unobtrusive, reliable occupational health monitoring in smart industry environments. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning Techniques for Healthcare)
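Per-class recall, the headline metric in the abstract above, is the fraction of each class's true instances that the model labels correctly. A minimal illustrative computation (hypothetical helper name, not the paper's evaluation code):

```python
def per_class_recall(y_true, y_pred):
    """For each class c: (# samples of class c predicted as c) /
    (# samples of class c). Assumes every class appears in y_true."""
    return {c: sum(t == p == c for t, p in zip(y_true, y_pred)) /
               sum(t == c for t in y_true)
            for c in set(y_true)}
```

Reporting the per-class range (here 73–83%) guards against a high overall accuracy that hides a poorly recognized minority action class.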
21 pages, 908 KB  
Article
Hierarchical Semantic Transmission and Lyapunov-Optimized Online Scheduling for the Internet of Vehicles
by Le Jiang, Yani Guo, Wenzhao Zhang, Penghao Wang and Shujun Han
Sensors 2026, 26(9), 2606; https://doi.org/10.3390/s26092606 - 23 Apr 2026
Abstract
The inherent redundancy in vehicle sensor data, coupled with constrained onboard resources and stringent latency requirements, renders traditional bit-oriented transmission paradigms inefficient for autonomous-driving perception tasks. Semantic communication offers a promising direction by shifting the focus from bit-level fidelity to task-level information delivery. In this paper, we propose a unified framework that integrates hierarchical transmission and online scheduling for Internet of Vehicles (IoV)-oriented collaborative perception. The proposed hierarchy separates information into two complementary layers: a coarse metadata layer (object bounding boxes) for latency-critical awareness, and fine-grained visual semantics (multi-scale region-of-interest (ROI) patches) for perception-intensive tasks. We formulate an online scheduling problem that jointly exploits Age of Information (AoI) and Channel State Information (CSI) to dynamically decide what to transmit and at what fidelity under per-frame budget constraints. To address cross-scheme fairness, we report resource utilization under a fixed kbps/fps physical budget and evaluate robustness using a combination of a lightweight task-proxy metric and COCO-style Average Recall (AR100) under ROI-only evaluation. The hierarchical transmission architecture, combined with AoI awareness, reduces global semantic staleness by approximately 78%. The Lyapunov-based online scheduler enables intelligent, signal-to-noise ratio (SNR)-adaptive switching between coarse and fine semantic levels, ensuring robust perception under varying channel quality. Under strict physical-budget constraints and unreliable channel conditions, joint source-channel coding (JSCC) exhibits significantly stronger task robustness than conventional schemes: at 0 dB SNR, the task-proxy detection rate improves by nearly 47 percentage points over the uncoded baseline. Full article
(This article belongs to the Section Sensor Networks)
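Age of Information (AoI), which the scheduler above exploits, measures the staleness of the freshest delivered update: it grows by one each slot and resets to the in-flight delay when a new sample arrives. A toy sketch under those standard definitions (illustrative only, not the paper's implementation):

```python
def aoi_trace(deliveries, horizon):
    """Per-slot AoI. deliveries maps a slot to the generation slot of
    the update delivered in it; in other slots the age grows by 1."""
    age, trace = 0, []
    for t in range(horizon):
        if t in deliveries:
            age = t - deliveries[t]  # age of the just-delivered sample
        else:
            age += 1
        trace.append(age)
    return trace
```

An AoI-aware scheduler prefers transmitting for sources whose age trace has grown large, which is how the framework reduces global semantic staleness.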
25 pages, 750 KB  
Article
M2AML: Metric-Based Model-Agnostic Meta-Learning for Few-Shot Classification
by Xiaoming Han, Dianxi Shi, Zhen Wang and Shaowu Yang
Entropy 2026, 28(5), 484; https://doi.org/10.3390/e28050484 - 23 Apr 2026
Abstract
Model-Agnostic Meta-Learning (MAML) and Prototypical Networks (ProtoNet) establish the foundational paradigms for few-shot classification. However, MAML suffers from optimization instability caused by reconstructing classification boundaries for every new task. Conversely, ProtoNet lacks the internal mathematical capacity necessary for task-specific parameter adaptation under domain shifts. To reconcile these structural limitations, we introduce Metric-based Model-Agnostic Meta-Learning (M2AML). By completely excising the parameterized classification layer from the episodic adaptation sequence, our framework replaces traditional inner-loop classification with a dynamic self-exclusive geometric similarity metric. Substituting functional mappings with spatial distance optimizations efficiently resolves evaluation conflicts, thereby establishing perfectly synchronized inner and outer learning rates alongside substantially accelerated adaptation steps. Extensive experiments across mini-ImageNet, tiered-ImageNet, and CIFAR-FS validate our approach against a comprehensive array of established algorithms. To ensure strictly fair comparative evaluations, we meticulously reproduce the MAML, ProtoNet, and Proto-MAML baselines. Empirical results demonstrate that M2AML achieves state-of-the-art performance across most evaluation settings, delivering absolute accuracy improvements ranging from 0.1% to 2.1% over existing leading models. Full article
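For context, the nearest-prototype rule that ProtoNet popularized, and on which a metric-based inner loop like the one described above builds, classifies a query embedding by its distance to per-class mean embeddings. A minimal sketch of that standard rule (illustrative only, not the paper's code):

```python
import math

def prototypes(support):
    """support: {class_label: [embedding vectors]} -> per-class mean
    embedding (the class prototype)."""
    return {c: [sum(dim) / len(vecs) for dim in zip(*vecs)]
            for c, vecs in support.items()}

def classify(query, protos):
    """Assign the query to the class with the nearest prototype
    (Euclidean distance)."""
    return min(protos, key=lambda c: math.dist(query, protos[c]))
```

Because classification is a distance computation rather than a learned output layer, no classifier weights need to be rebuilt per task, which is the structural motivation the abstract gives for replacing the parameterized classification layer.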
16 pages, 613 KB  
Review
Digital Exclusion or Zero Hunger? A Sustainability Review of Ethical AI in Fragile Contexts
by Dalal Iriqat and Yara Ashour
Sustainability 2026, 18(9), 4171; https://doi.org/10.3390/su18094171 - 22 Apr 2026
Abstract
In contemporary debates on the United Nations Sustainable Development Goals, there is growing recognition that artificial intelligence (AI) may contribute meaningfully to SDG 2 (Zero Hunger), particularly by enhancing the efficiency of food aid distribution and resource allocation. However, such optimism must be critically situated within the broader institutional and ethical contexts in which AI operates. This study argues that the effectiveness of AI in conflict-affected settings is contingent not only on technical capacity but also on governance structures, ethical safeguards, and institutional trust, dimensions closely aligned with SDG 16 (Peace, Justice, and Strong Institutions). Using the Gaza Strip as a case study, this article demonstrates that AI-driven food assistance mechanisms may inadvertently reinforce structural vulnerabilities. Specifically, algorithmic targeting of aid risks deepening dependency, exacerbating digital exclusion, and weakening already fragile governance systems. The absence of robust data accountability frameworks further complicates these dynamics, raising concerns about transparency, fairness, and long-term sustainability. The findings caution against privileging technical efficiency at the expense of socio-political stability; rather, they highlight that the sustainability of AI interventions in humanitarian contexts fundamentally depends on the credibility and legitimacy of institutions. Accordingly, this study proposes a conceptual model for AI in hunger relief and digital humanitarianism that integrates technical innovation with institutional accountability and social trust. Methodologically, the study is a narrative review informed by structured searching, examining the influence of AI on food security interventions in fragile contexts and applying a combined ethical-governance and sustainability lens to assess current applications and risks.
This research advances a broader analytical framework that moves beyond purely technical interpretations of AI, emphasizing its role as a socio-political tool, and identifies five key pillars for sustainable AI governance: data sovereignty, algorithmic accountability, inclusive system design, community-led governance, and market integrity. Full article
(This article belongs to the Special Issue Achieving Sustainability Goals Through Artificial Intelligence)
45 pages, 7599 KB  
Systematic Review
Educational Measurement with Emerging Technologies: A Systematic Review Through Evidentiary Lens on Granularity and Constructing Measures Theory
by Linwei Yu, Gary K. W. Wong, Bingjie Zhang and Feifei Wang
Educ. Sci. 2026, 16(4), 661; https://doi.org/10.3390/educsci16040661 - 21 Apr 2026
Viewed by 154
Abstract
Emerging technologies (ETs), such as AI and reality techniques, are reshaping educational measurement. However, existing studies remain dispersed and are rarely synthesized in ways that clarify how ETs participate in the evidentiary work of educational measurement. Guided by PRISMA 2020, we systematically reviewed 933 empirical studies published between 2016 and 2025 in formal educational settings. We coded studies by (a) grain size (micro, meso, macro), (b) Constructing Measures Theory building blocks (construct map, item design, outcome space, measurement model), and (c) ET category. Results showed a strong concentration at the micro level (88.88%) and in outcome space and measurement model work (86.80% combined), indicating that ET-enabled innovation has focused primarily on transforming performances into indicators and modeling those indicators for interpretation and decision-making. Learning analytics and educational data mining, machine learning and deep learning, and automated scoring and feedback systems were the dominant ET clusters. These findings point to an uneven development of ET-enabled educational measurement. Included studies also indicate recurring concerns about transparency, fairness, and governance, concerns linked to the field’s main areas of ET-enabled concentration. We therefore argue for closer alignment among construct claims, evidence, modeling, and intended use, and offer implications for developers, researchers, and education practitioners. Full article
(This article belongs to the Special Issue The State of the Art and the Future of Education)

17 pages, 1005 KB  
Article
“No Fair!”: Children’s Perceptions of Fairness in Merit-Based Distributions
by Meltem Yucel, Madeline Brence and Amrisha Vaish
Behav. Sci. 2026, 16(4), 617; https://doi.org/10.3390/bs16040617 - 21 Apr 2026
Viewed by 208
Abstract
Recent research by Yucel and colleagues suggests that children perceive equality-based fairness violations (resources being distributed unequally) as less serious than prototypical moral harms, but that making the harmful consequences of unfairness salient shifts these judgments toward the moral domain. We examined whether merit-based fairness violations (someone receiving less than they earned) would similarly shift judgments toward the moral domain by making the injustice more salient. Replicating prior work, 4-year-old children (N = 62) rated prototypical moral violations as significantly more severe than equality-based fairness violations, which were rated as similar in severity to conventional violations. Contrary to predictions, merit-based fairness violations also showed this pattern: they were judged as less severe than prototypical moral violations and as similar in severity to both equality-based fairness violations and conventional violations. Children also did not consistently group either type of fairness violation with moral or conventional violations. These findings contribute to a growing body of evidence that children’s (and adults’) perceptions of fairness, whether equality-based or merit-based, are more nuanced than previously thought, and that unfairness may not spontaneously be treated like other, more prototypical moral norm violations. Full article
(This article belongs to the Special Issue Social Cognition and Cooperative Behavior)
