Search Results (4,129)

Search Parameters:
Keywords = trust models

18 pages, 5351 KB  
Article
Dual-Factor Adaptive Robust Aggregation for Secure Federated Learning in IoT Networks
by Zuan Song, Wuzheng Tan, Hailong Wang, Guilong Zhang and Jian Weng
Future Internet 2026, 18(4), 201; https://doi.org/10.3390/fi18040201 - 10 Apr 2026
Abstract
Federated Learning (FL) has been widely adopted in privacy-sensitive and distributed environments. However, training stability becomes significantly challenged when differential privacy (DP) noise and Byzantine client behaviors coexist, as these heterogeneous perturbations jointly introduce time-varying distortions to model updates. Existing approaches typically address privacy and robustness in isolation. Under DP constraints, noise injection increases gradient variance and obscures the distinction between benign and adversarial updates, causing many robust aggregation methods to misclassify normal clients or fail to detect malicious ones. As a result, their effectiveness degrades substantially in practical IoT environments where noise and attacks interact. In this work, we propose a dual-factor adaptive and robust aggregation framework (DARA) to improve the stability of FL under such combined disturbances. DARA adjusts the differential privacy noise scale by jointly considering local update magnitudes and training-round dynamics, aiming to mitigate noise-induced bias under a fixed privacy budget. Meanwhile, a direction-aware weighted aggregation scheme assigns continuous trust weights based on cosine similarity between updates, thereby suppressing the influence of potentially anomalous or adversarial clients. We conduct extensive experiments on multiple benchmark datasets to evaluate DARA under differential privacy constraints and Byzantine attack scenarios. The results indicate that DARA achieves favorable robustness and convergence behavior compared with representative aggregation baselines, while maintaining competitive model accuracy. Full article
(This article belongs to the Special Issue Federated Learning: Challenges, Methods, and Future Directions)
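The abstract does not give DARA's exact formulas, but the direction-aware weighting it describes (continuous trust weights from cosine similarity between updates) can be sketched as follows. The coordinate-wise median reference, the clipping of negative similarities, and the function name are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def trust_weighted_aggregate(updates):
    """Aggregate client updates, down-weighting those whose direction
    deviates from a robust reference update (illustrative sketch)."""
    U = np.stack(updates)                        # shape: (clients, params)
    ref = np.median(U, axis=0)                   # robust reference direction
    # cosine similarity of each client update to the reference
    norms = np.linalg.norm(U, axis=1) * np.linalg.norm(ref) + 1e-12
    cos = U @ ref / norms
    w = np.clip(cos, 0.0, None)                  # opposing directions get zero trust
    w = w / (w.sum() + 1e-12)                    # normalize to continuous trust weights
    return w @ U                                 # trust-weighted aggregate

# benign clients point one way; one adversarial client points the opposite way
benign = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([1.1, 0.9])]
attack = [np.array([-10.0, -10.0])]
agg = trust_weighted_aggregate(benign + attack)  # stays close to the benign mean
```

Because the adversarial update's cosine similarity to the reference is negative, its weight is clipped to zero and the aggregate stays near the benign consensus.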

26 pages, 372 KB  
Article
Attitudes Toward Sexual and Digital Consent and Institutional Distrust as Determinants of Gender-Based Violence Prevention: Evidence from an Urban Adult Population
by Esperanza García Uceda, Diana Valero Errazu and Jesús C. Aguerri
Int. J. Environ. Res. Public Health 2026, 23(4), 480; https://doi.org/10.3390/ijerph23040480 - 10 Apr 2026
Abstract
Gender-based and sexual violence are major public health concerns, and norms about consent are central to their prevention. This study examines how attitudes toward sexual consent relate to digital sexual consent and to the occasional feeling of distrust in public consent campaigns and institutions. We conducted a cross-sectional online survey embedded in the evaluation of a municipal consent campaign in Zaragoza (Spain). Adults (N = 404; 56.7% women) completed a 14-item short version of the Sexual Consent Scale–Revised, two items on digital sexual consent, and three items on institutional reluctance (perceived “sermonizing” tone, distrust in effectiveness, and lack of personal identification with the message). Correlation and multiple regression models with robust standard errors were estimated, controlling for gender, age, education, income, relationship status, and social media use. Attitudes toward sexual consent were strongly and positively associated with digital sexual consent. Gender was the most consistent sociodemographic correlate: men showed less egalitarian attitudes than women across all consent measurements. Institutional reluctance was systematically related to less supportive consent attitudes: perceiving institutional messages as exaggerated or personally irrelevant predicted lower support for sexual and digital consent norms, whereas trust in the campaign’s effectiveness was associated with more egalitarian attitudes. The findings support the continuity between sexual and digital consent and highlight gender and institutional trust as key determinants for the prevention of gender-based and sexual violence. Public health and social policies should integrate digital consent into consent education and co-design campaigns that minimize defensive reactions and rebuild trust in institutions. Full article
24 pages, 965 KB  
Article
Bridging the Strategy–Execution Gap in Digital Process Transformation: An Organizational Development Process Model from a Chinese Brewery Case
by Yunlu Cai and Siti Rohaida Mohamed Zainal
Adm. Sci. 2026, 16(4), 184; https://doi.org/10.3390/admsci16040184 - 10 Apr 2026
Abstract
This study explains how strategy–execution gaps become self-reinforcing during digital process transformation in layered manufacturing organizations. Drawing on an embedded qualitative process study of a large Chinese brewery’s transformation (2020–2024), we triangulate 10 semi-structured interviews across hierarchical levels with longitudinal public disclosures to reconstruct the initiative timeline and trace mechanisms across change phases. The analysis shows that platform-based process governance can scale faster than shared meaning and dialog, producing frontline sensemaking gaps and formalistic, top-down communication. These conditions thin employee voice and weaken feedback closure, which in turn erodes the legitimacy of organizational diagnosis and fragments implementation support. As interface problems are handled through local workarounds, management intensifies visibility-based monitoring, further suppressing voice and reinforcing the execution gap. We develop an organizational development process model that centers feedback closure and diagnosis legitimacy as bridging mechanisms linking soft change dynamics (meaning, trust, voice) with hard digital governance (process standards, data infrastructures, monitoring). The model offers actionable implications for leaders to build closure and legitimate diagnosis as operational capabilities throughout transformation. Full article

31 pages, 1306 KB  
Article
Governing Forest Rights Mortgage Loans Through Hybrid Governance: Institutional Innovation and Organizational Mediation in China’s Collective Forest Regions
by Liushan Fan, Wenlan Wang, Yuanzhu Wei, Yongbo Lai and Xingwei Ye
Forests 2026, 17(4), 464; https://doi.org/10.3390/f17040464 - 10 Apr 2026
Abstract
Forest Rights Mortgage Loans (FRMLs) have grown quickly in China’s collective forest areas, even though the basic conditions for this type of lending remain far from ideal. In many places, forest holdings are small and scattered, property rights are complex and not fully consolidated, and channels for disposing of collateral are limited. Under these circumstances, the Fulin Loan Model (FLM) in Fujian provides a useful case for understanding how forest-rights lending can still function in practice. Drawing on fieldwork, semi-structured interviews, and process tracing, this study explores both how the model was established and how it has been sustained over time. The analysis suggests that the FLM is neither a straightforward market-based lending tool nor merely a top-down policy arrangement. Rather, it relies on a more mixed form of governance in which local government support, banking procedures, and village-level social relations are brought together through specific organizational arrangements. These arrangements help lower the costs of early institutional experimentation, distribute and manage lending risks, and translate locally rooted trust into a form of credit support that formal financial institutions can recognize. As a single-case study, the FLM points to one possible way in which rural finance can be made workable under conditions of incomplete markets and strong social embeddedness. Full article

15 pages, 631 KB  
Article
How Digital Stress and eHealth Literacy Relate to Missed Nursing Care and Willingness to Use AI Decision Support
by Emilia Clej, Adelina Mavrea, Camelia Fizedean, Alina Doina Tănase, Adrian Cosmin Ilie and Alina Tischer
Healthcare 2026, 14(8), 996; https://doi.org/10.3390/healthcare14080996 - 10 Apr 2026
Abstract
Background: Digitalization and artificial intelligence-supported clinical decision support systems (AI-DSS), defined here as tools that generate patient-specific alerts, risk estimates, prioritization prompts, documentation suggestions, or related recommendation outputs intended to support rather than replace professional nursing judgment, can improve clinical decision-making, yet they may also amplify technostress and burnout, with downstream effects on missed nursing care and implementation readiness. Methods: We surveyed 239 registered nurses from a tertiary-care hospital in Timișoara, Romania (January–March 2025), including critical care (n = 60) and general wards (n = 179). Measures included a 15-item technostress scale, eHEALS, Maslach Burnout Inventory–Human Services Survey (MBI-HSS), Safety Attitudes Questionnaire (SAQ) teamwork and safety climate subscales, a 10-item missed nursing care inventory, and a six-item AI-DSS acceptance scale reflecting perceived usefulness, trust, and stated willingness to use such tools if available as an attitudinal readiness outcome rather than as routine observed use. Multivariable regression, exploratory mediation models, cluster analysis, and exploratory ROC analysis were performed. Results: Higher technostress was associated with higher emotional exhaustion (r = 0.52) and more missed care (r = 0.41), whereas eHealth literacy correlated with higher AI-DSS acceptance (r = 0.35) and lower technostress (r = −0.34). In adjusted models, technostress (per 10 points) was associated with higher missed care (β = 0.28, p < 0.001) (equivalent to 0.14 points per 5-point increase) and higher odds of low AI-DSS acceptance (OR = 1.38, p = 0.001), while eHealth literacy was associated with lower odds of low acceptance (OR = 0.71 per 5 points, p < 0.001). Burnout and the safety climate statistically accounted for approximately 35% of the technostress–missed care association. 
Three workflow phenotypes were identified, with the high-strain/low-literacy cluster showing the most missed care (3.5 ± 1.8) and the lowest AI acceptance (19.7 ± 5.2). An exploratory in-sample ROC model for intention to leave achieved an AUC of 0.82. Conclusions: Higher technostress clustered with worse nurse well-being, more care omissions, and lower AI-DSS acceptance, whereas eHealth literacy appeared protective. Interventions combining digital skills support, usability-focused redesign, and a stronger safety climate may reduce missed care and support safer AI implementation. Full article

28 pages, 5386 KB  
Review
Baseline Load Estimation Using Intelligent Performance Quantification for Incentive-Based Demand Response Programs
by Suhaib Sajid, Bin Li, Bing Qi, Badia Berehman, Qi Guo, Muhammad Athar and Ali Muqtadir
Energies 2026, 19(8), 1851; https://doi.org/10.3390/en19081851 - 9 Apr 2026
Abstract
Incentive-based demand response (DR) programs rely on accurate and trustworthy quantification of customer performance to ensure fair compensation and market efficiency. Estimating the customer baseline load is an important part of this process. It shows how much electricity would be used if there were no DR occurrence. Unlike conventional load forecasting, baseline modeling is inherently unobservable, economically sensitive, and vulnerable to strategic manipulation. With the growing penetration of distributed energy resources, electric vehicles, and intelligent control technologies, traditional baseline estimation approaches face increasing limitations. This paper offers a thorough and future-oriented synthesis of baseline load estimation for incentive-based DR strategies. Current approaches are carefully classified into rule-based, statistical, probabilistic, machine learning (ML), and hybrid intelligence techniques, and their appropriateness for various DR services and client categories is rigorously evaluated. Beyond modeling accuracy, this paper emphasizes market-oriented requirements, including incentive compatibility, simplicity, transparency, privacy preservation, and deployment feasibility. Furthermore, emerging digital trust enablers such as blockchain and FL are reviewed, along with baseline-free and baseline-light alternatives for performance evaluation. Finally, open research challenges and future directions toward interpretable, robust, and market-ready baseline intelligence are discussed. Full article
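As an illustration of the rule-based family this review classifies, a common "high X of Y" customer baseline averages the X highest-consumption days among the Y most recent non-event days. The particular parameters (3 of 10) and the function name below are illustrative, not values taken from the paper:

```python
def high_x_of_y_baseline(daily_loads, x=3, y=10):
    """Rule-based customer baseline: average the x highest-usage days
    among the y most recent non-event days (illustrative sketch).

    daily_loads: per-day consumption for non-event days, oldest to newest.
    """
    recent = daily_loads[-y:]                   # y most recent non-event days
    top = sorted(recent, reverse=True)[:x]      # x highest-usage days among them
    return sum(top) / len(top)

# example: baseline (in kWh/day) computed before a DR event
loads = [20, 22, 21, 25, 24, 23, 26, 22, 21, 24, 27]
baseline = high_x_of_y_baseline(loads)          # averages 27, 26, 25 -> 26.0
```

Demand reduction during an event is then credited as the gap between this baseline and metered load, which is exactly why the review stresses manipulation resistance: customers can inflate `daily_loads` on non-event days to raise their baseline.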

29 pages, 2319 KB  
Article
Machine Learning-Based Approach for Malicious Node Security and Trust Provision in 5G-Enabled VANET
by Samuel Kofi Erskine
AI 2026, 7(4), 136; https://doi.org/10.3390/ai7040136 - 9 Apr 2026
Abstract
This research utilizes machine learning (ML)-based malicious node detection techniques to incorporate security and trustworthiness into fifth-generation (5G) Vehicular Ad hoc Network (VANET) systems, in contrast to traditional methods that do not employ such techniques. VANETs are vulnerable due to vehicle mobility, network openness, and the conventional network architecture, so security and trust management using modern methodologies such as ML has become a paramount concern for 5G-enabled VANET integration. Traditional security methods cannot completely identify malicious nodes in a VANET, and their processing delays are longer. This research therefore uses a VANET malicious-node dataset designed for real-time detection of malicious nodes and attacks. The proposed ML methodology combines a Random Forest (RF) with optimized ensemble classifiers (XGBoost and LightGBM), supported by an RF Trust Extended Authentication (TEA) scheme for security and trustworthiness. We simulate vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) mobility, communication behaviors, and trust metrics to assess the accuracy of malicious-vehicular-node features for identifying and detecting attacks, including False Injection, Sybil, blackhole, and Denial-of-Service (DoS). The proposed methodology also identifies these attack patterns, providing a realistic dataset for Intelligent Transportation System (ITS) research, which traditional VANET methods do not. We compared the performance of the proposed method with other literature-standard ML and RF methods using accuracy, confusion matrices, precision, recall, and F1-score. 
The proposed ML method achieves 99% accuracy in classifying malicious vehicular nodes and predicting both attack classes (False Injection, Sybil, blackhole, and DoS) and benign classes, with precision, recall, and F1-score of 100% each and a trustworthiness score of 100%, whereas standard VANET methods achieve an accuracy of only 95%, with precision, recall, and F1-score of 98%, and no confusion matrix to confirm model performance. Full article
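The comparison metrics named in this entry follow standard definitions over a binary (malicious/benign) confusion matrix. A minimal sketch, with illustrative counts rather than the paper's actual results:

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard classification metrics from a binary confusion matrix,
    treating 'malicious node' as the positive class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)                   # flagged nodes that were truly malicious
    recall = tp / (tp + fn)                      # malicious nodes that were caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# illustrative confusion matrix for malicious-node detection
acc, prec, rec, f1 = binary_metrics(tp=95, fp=2, fn=3, tn=100)
```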
24 pages, 1584 KB  
Review
From Dialogue Systems to Autonomous Agents: A Modeling Framework for Ethical Generative AI in Healthcare
by James C. L. Chow and Kay Li
Information 2026, 17(4), 361; https://doi.org/10.3390/info17040361 - 9 Apr 2026
Abstract
The advancement of generative artificial intelligence (GAI) in healthcare is driving a transition from dialogue-based medical chatbots to workflow-embedded clinical AI agents. These agentic systems incorporate persistent state management, coordinated tool invocation, and bounded autonomy, enabling multi-step reasoning within institutional processes. As a result, traditional response-level evaluation frameworks are insufficient for understanding system behavior. This review provides a conceptual synthesis of the evolution from conversational systems to agentic architectures and proposes a system-level modeling framework for ethical clinical AI agents. We identify core architectural dimensions, including autonomy gradients, state persistence, tool orchestration, workflow coupling, and human–AI co-agency, and examine how these features reshape bias propagation pathways, error cascade dynamics, trust calibration, and accountability structures. Emphasizing that ethical risks emerge from longitudinal system interactions rather than isolated outputs, we argue for embedding fairness constraints, transparency mechanisms, and lifecycle governance directly within AI design. By outlining trajectory-level evaluation strategies, equity-aware development approaches, collaborative oversight models, and adaptive regulatory frameworks, this paper establishes a foundation for the responsible and trustworthy integration of agentic AI in healthcare. Full article
(This article belongs to the Special Issue Modeling in the Era of Generative AI)

35 pages, 934 KB  
Review
Blockchain-Enabled Federated Learning: A Survey on System Design, Key Challenges, and Future Directions
by Lingzi Zhu, Bo Zhao and Rao Peng
Electronics 2026, 15(8), 1572; https://doi.org/10.3390/electronics15081572 - 9 Apr 2026
Abstract
The rapid advancement of artificial intelligence relies on massive high-quality data, yet increasingly stringent data privacy regulations have exacerbated the problem of data silos. Federated learning enables collaborative training under privacy protection by exchanging model parameters rather than transmitting raw data. Nevertheless, its traditional centralized architecture still suffers from limitations such as single points of failure, lack of trust, and insufficient incentives. The integration of blockchain and federated learning opens new pathways for decentralized, auditable, and secure machine learning systems. This paper systematically reviews research progress in blockchain-enabled federated learning, analyzing technological evolution from three perspectives: system architecture, incentive mechanisms, and privacy enhancement. It further explores critical challenges including efficiency bottlenecks, storage overhead, and the inherent tension between transparency and privacy, while identifying key research directions for building scalable, efficient, and trustworthy decentralized learning systems. Full article
(This article belongs to the Special Issue Data Privacy Protection in Blockchain Systems)

18 pages, 1606 KB  
Article
Multi-Scale Dynamic Perception and Context Guidance Modulation for Efficient Deepfake Detection
by Yuanqing Ding, Fanliang Bu and Hanming Zhai
Electronics 2026, 15(8), 1569; https://doi.org/10.3390/electronics15081569 - 9 Apr 2026
Abstract
Deepfake technology poses significant threats to information authenticity and social trust, necessitating effective detection methods. However, existing detection approaches predominantly rely on high-complexity network architectures that, while accurate in controlled environments, suffer from prohibitive computational costs that hinder deployment in resource-constrained scenarios such as social media platforms. To address this efficiency-accuracy dilemma, we propose a lightweight face forgery detection method that systematically learns multi-scale forgery traces. Our approach features a four-stage lightweight architecture that hierarchically extracts features from local textures to global semantics, mimicking the human visual system. Within each stage, a multi-scale dynamic perception mechanism divides feature channels into parallel groups equipped with lightweight attention modules to capture forgery cues spanning pixel-level anomalies, local structures, regional patterns, and semantic inconsistencies. Furthermore, rather than relying on conventional feature fusion that risks suppressing subtle artifacts, we introduce a novel Context-Guided Dynamic Convolution. This mechanism uses mid-level spatial anomalies as active anchors to dynamically modulate high-level semantic filters, with the goal of mitigating the disconnect between semantic content and forgery evidence. Our model achieves strong performance, yielding an AUC of 91.98% on FaceForensics++ and 93.50% on DeepFake Detection Challenge, outperforming current state-of-the-art lightweight methods. Furthermore, compared to heavy Vision Transformers, our model achieves a superior performance-efficiency trade-off, requiring only 3.06 M parameters and 1.36 G FLOPs, making it highly suitable for real-time, resource-constrained deployment. Full article
(This article belongs to the Section Electronic Multimedia)

28 pages, 1354 KB  
Article
From Delivery Delays to AI-Mediated Escalation Failures: A BERTopic Analysis of Complaints About Risk and Trust in E-Commerce Marketplaces (2019–2025)
by Munise Hayrun Sağlam
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 116; https://doi.org/10.3390/jtaer21040116 - 9 Apr 2026
Abstract
Automated customer service and algorithmic governance are common in digital marketplaces, yet trust can erode when logistics, refunds, and escalation fail. Complaint-based risk and trust narratives in Turkey’s e-commerce marketplaces are analyzed for January 2019–December 2025 using 118,173 de-identified Turkish and English texts from Şikayetvar, a leading Turkish online consumer-complaint portal, and reviews of official marketplace apps on Google Play and the Apple App Store. BERTopic is implemented in Python with multilingual transformer embeddings, UMAP, HDBSCAN, and c-TF-IDF representations. The selected model identifies 35 micro-topics grouped into five macro-themes: fulfillment disruptions, remediation frictions, product-integrity risks, escalation failures, and governance threats. Monthly probability-weighted prevalence is estimated, and marketplace differences are evaluated with divergence measures, permutation tests, and multinomial regression controlling for time and language. Changepoint tests indicate a shift toward fulfillment grievances in April 2020, rising governance threats from June 2022, and increasing escalation failures linked to automated support from February 2023. These patterns suggest that barriers to human escalation convert operational incidents into platform-level trust judgments, offering monitoring signals for service recovery, marketplace governance, and AI oversight. By isolating escalation failures as a distinct complaint domain, the study links service automation to procedural justice mechanisms that translate operational breakdowns into platform-level trust and risk judgments. Full article
(This article belongs to the Section Data Science, AI, and e-Commerce Analytics)
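The c-TF-IDF representation this pipeline uses scores each term by its frequency within a topic, discounted by its frequency across all topics. The sketch below captures the idea in pure Python; the exact formulation (and normalization) used by BERTopic may differ, and the toy topics are invented for illustration:

```python
import math
from collections import Counter

def c_tf_idf(class_docs):
    """Class-based TF-IDF sketch: score = tf(term, class) * log(1 + A / f(term)),
    where f is the term's total frequency and A the average tokens per class.
    class_docs maps a topic id to the concatenated tokens of its documents."""
    tf = {c: Counter(tokens) for c, tokens in class_docs.items()}
    f = Counter()                                # term frequency across all classes
    for counts in tf.values():
        f.update(counts)
    avg_words = sum(len(t) for t in class_docs.values()) / len(class_docs)
    return {
        c: {term: count * math.log(1 + avg_words / f[term])
            for term, count in counts.items()}
        for c, counts in tf.items()
    }

# toy complaint topics (hypothetical): refunds vs. delivery
topics = {
    0: "refund delayed refund missing refund".split(),
    1: "delivery late delivery lost".split(),
}
scores = c_tf_idf(topics)
```

Terms frequent within one topic but rare overall get the highest scores, which is what makes the per-topic keyword lists (e.g., "refund" for a remediation-friction topic) interpretable.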

21 pages, 1405 KB  
Article
Trust-Aware and Energy-Efficient Federated Learning for Secure Sensor Networks at the Edge
by Manuel J. C. S. Reis
Sensors 2026, 26(8), 2307; https://doi.org/10.3390/s26082307 - 9 Apr 2026
Abstract
The widespread adoption of large-scale sensor networks in privacy-sensitive and safety-critical applications has intensified the demand for secure, trustworthy, and energy-efficient learning mechanisms at the network edge. Federated learning has emerged as a promising paradigm for privacy preservation by enabling collaborative model training without sharing raw sensor data. However, most existing federated approaches inadequately address trust management, communication efficiency, and energy constraints, which are critical in real-world sensor-based systems. This paper proposes a trust-aware and energy-efficient federated learning framework specifically designed for secure sensor networks operating in resource-constrained edge environments. The proposed approach integrates lightweight trust metrics, trust-driven model aggregation, and adaptive communication scheduling to mitigate the impact of unreliable or malicious nodes while reducing unnecessary energy expenditure. By dynamically weighting client contributions based on trust and participation efficiency, the framework enhances robustness and learning stability under heterogeneous sensing conditions. Experimental results show that the proposed method maintains significantly higher accuracy under adversarial participation while reducing communication overhead and cumulative energy consumption. In particular, the framework improves model accuracy by up to 3.2% under heterogeneous conditions, reduces communication overhead by 28%, and decreases cumulative energy consumption by 31% compared with conventional federated learning approaches. Full article
(This article belongs to the Special Issue Sensor Security and Beyond)
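The abstract describes weighting client participation by trust and energy efficiency but does not specify the scheduler. A minimal sketch of one plausible scheme, in which clients are ranked by trust per unit of energy and selected greedily under an energy budget (all thresholds, scores, and names here are assumptions):

```python
def select_clients(clients, budget):
    """Greedy trust/energy-aware client selection for one FL round
    (illustrative sketch, not the paper's algorithm).

    clients: list of (name, trust in [0, 1], energy_cost per round).
    budget:  total energy allowed for the round.
    """
    # rank by trust earned per unit of energy spent
    ranked = sorted(clients, key=lambda c: c[1] / c[2], reverse=True)
    chosen, spent = [], 0.0
    for name, trust, cost in ranked:
        if trust > 0.5 and spent + cost <= budget:   # skip low-trust nodes
            chosen.append(name)
            spent += cost
    return chosen

clients = [("a", 0.9, 1.0), ("b", 0.4, 0.5), ("c", 0.8, 2.0), ("d", 0.95, 1.5)]
picked = select_clients(clients, budget=3.0)         # -> ["a", "d"]
```

Client "b" is excluded despite its low cost because its trust falls below the cut-off, which mirrors the paper's goal of keeping unreliable or malicious nodes out of aggregation while capping energy use.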

30 pages, 2444 KB  
Systematic Review
The Decentralized AI Ecosystem in Healthcare: A Systematic Review of Technologies, Governance, and Implementation
by Antonio Pesqueira, Carmen Cucul, Thomas Egelhof, Stephanie Fuchs, Leilei Tang, Natalia Sofia and Andreia de Bem Machado
Systems 2026, 14(4), 414; https://doi.org/10.3390/systems14040414 - 9 Apr 2026
Abstract
This research examines decentralized artificial intelligence, the emerging ecosystem of models that are developed and run across a distributed network of computers. The focus is on understanding these models in the healthcare context, with attention to their core components: technologies, governance frameworks, and real-world applications. A systematic literature review was conducted, analyzing peer-reviewed studies from PubMed, Scopus, and Web of Science to map the current landscape of the field. The primary objective was to synthesize the current research on decentralized approaches in healthcare, including core approaches like federated learning and blockchain-based AI models, as well as emerging concepts such as agentic AI and decentralized autonomous organizations (DAOs), to comprehend their application in clinical and operational settings. The research assesses the maturity of these implementations, ranging from pilot programs to large-scale organizational deployments. It also identifies the key computational and technical methods and platforms used, and the key benefits and challenges influencing their adoption. The findings underscore the pivotal role of the decentralized paradigm in addressing the fundamental limitations of traditional AI, including data privacy, trust, institutional silos, and regulatory complexity. Insights are also offered for healthcare providers, technology developers, researchers, and policymakers aiming to navigate and leverage decentralized AI to build more equitable, efficient, and collaborative healthcare systems. Full article
(This article belongs to the Special Issue Leveraging AI Algorithms to Enhance Healthcare Systems)

25 pages, 595 KB  
Article
Reimagining SDG 17 in Africa Through the Marshall Plan Paradigm: A Conceptual Framework for Equitable and Sustainable Global Partnerships
by Olusiji Adebola Lasekan, Margot Teresa Godoy Pena and Blessy Sarah Mathew
Sustainability 2026, 18(8), 3688; https://doi.org/10.3390/su18083688 - 8 Apr 2026
Abstract
This study develops a conceptual framework for reimagining Sustainable Development Goal 17 (SDG 17) in Africa through a reinterpretation of the Marshall Plan’s governance logic. The primary focus is to address persistent failures in development partnerships—namely, fragmentation, weak coordination, power asymmetries, and limited institutional capacity—by proposing a structured model of partnership governance. Using a theory-building methodology grounded in historical analysis and documentary evidence, the study applies a systematic adaptation logic in which core governance mechanisms from the Marshall Plan are re-specified to reflect African institutional realities. These mechanisms—coordination, mutual accountability, collective action, state capacity, and trust—are translated into eight operational pillars: co-development, institutional strengthening, structural transformation, regional integration, blended finance, digital public infrastructure, knowledge co-production, and resilience. The framework conceptualizes SDG 17 as a meta-governance system that aligns actors, institutions, and resources across sectors. By moving from historical abstraction to context-sensitive application, the study contributes a coherent, Africa-centered governance model that enhances partnership effectiveness and informs post-2030 development policy. Full article
(This article belongs to the Special Issue Latest Review Papers in Development Goals Towards Sustainability 2026)

37 pages, 1897 KB  
Article
A Bayesian Feature Weighting Model with Simplex-Constrained Dirichlet and Contamination-Aware Priors for Noisy Medical Data
by Mehmet Ali Cengiz, Zeynep Öztürk and Abdulmohsen Alharthi
Mathematics 2026, 14(8), 1243; https://doi.org/10.3390/math14081243 - 8 Apr 2026
Abstract
Feature weighting plays a central role in medical classification by enhancing predictive accuracy, interpretability, and clinical trust through the explicit quantification of variable relevance. Despite their widespread use, existing filter-, wrapper-, and embedded-based feature weighting methods are predominantly deterministic and exhibit pronounced sensitivity to label noise and outliers, which are pervasive in real-world medical data. This often results in unstable importance estimates and unreliable clinical interpretations. In this work, we introduce a novel Bayesian feature weighting model that fundamentally departs from existing approaches by jointly integrating simplex-constrained Dirichlet priors for global feature weights, hierarchical shrinkage priors for coefficient regularization, and contamination-aware priors for explicit modeling of label noise within a single coherent probabilistic framework. Unlike conventional Bayesian feature selection or robust classification models, the proposed formulation yields globally interpretable feature weights defined on the probability simplex, while simultaneously providing full posterior uncertainty quantification and robustness to both mislabeled observations and aberrant feature values through principled influence control. Comprehensive simulation studies across diverse contamination scenarios, together with applications to multiple real-world medical datasets, demonstrate that the proposed model consistently outperforms classical and state-of-the-art baselines in terms of discrimination, probabilistic calibration, and stability of feature-importance estimates. These results highlight the practical and methodological significance of the proposed framework as a robust, uncertainty-aware, and interpretable solution for medical decision making under noisy data conditions. Full article
(This article belongs to the Special Issue Statistical Machine Learning: Models and Its Applications)
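The abstract above describes the model only at a high level. As a rough illustration of the ingredients it names, the following minimal sketch (not the authors' implementation; the toy data, priors, and sampler settings are all assumptions) fits a feature-weighted logistic regression whose global feature weights live on the probability simplex under a Dirichlet prior, with a Gaussian shrinkage prior on coefficients and an epsilon-contamination likelihood that allows for flipped labels, sampled with a simple random-walk Metropolis scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Map unconstrained parameters onto the probability simplex.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def log_dirichlet_prior(w, alpha=1.5):
    # Dirichlet(alpha) log-density on the simplex, up to a constant.
    # (The softmax change-of-variables Jacobian is omitted for brevity.)
    return np.sum((alpha - 1.0) * np.log(w))

def contaminated_log_likelihood(w, beta, X, y, eps=0.05):
    # Feature-weighted logistic regression with a contamination mixture:
    # each label is correct with prob (1 - eps) and flipped with prob eps,
    # which bounds the influence of mislabeled observations.
    logits = (X * w) @ beta
    p = 1.0 / (1.0 + np.exp(-logits))
    p_obs = (1 - eps) * np.where(y == 1, p, 1 - p) \
          + eps * np.where(y == 1, 1 - p, p)
    return np.sum(np.log(p_obs + 1e-12))

# Toy data: 3 informative features out of 5.
n, d = 200, 5
X = rng.normal(size=(n, d))
true_beta = np.array([2.0, -2.0, 1.5, 0.0, 0.0])
y = (X @ true_beta + rng.normal(scale=0.5, size=n) > 0).astype(int)

def log_post(z, beta):
    w = softmax(z)
    return (log_dirichlet_prior(w)
            + contaminated_log_likelihood(w, beta, X, y)
            - 0.5 * np.sum(beta ** 2))  # N(0, 1) shrinkage prior on beta

# Random-walk Metropolis over (z, beta); weights are softmax(z).
z, beta = np.zeros(d), np.zeros(d)
cur = log_post(z, beta)
samples = []
for t in range(4000):
    z_new = z + 0.1 * rng.normal(size=d)
    b_new = beta + 0.1 * rng.normal(size=d)
    prop = log_post(z_new, b_new)
    if np.log(rng.uniform()) < prop - cur:
        z, beta, cur = z_new, b_new, prop
    if t >= 2000:  # keep post-burn-in draws
        samples.append(softmax(z))

w_mean = np.mean(samples, axis=0)
print(np.round(w_mean, 3))  # posterior-mean feature weights; sum to 1
```

Because every draw lies on the simplex, the posterior-mean weights are nonnegative and sum to one by construction, giving globally comparable feature importances with full posterior uncertainty, which is the interpretability property the abstract emphasizes.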