
Search Results (69)

Search Parameters:
Keywords = causal model of trust

25 pages, 1851 KB  
Article
Where to Start? Participatory Systems Mapping for Place-Based Service Integration in the City of Casey
by Matt Healey, Joseph Lea and Vanessa Hammond
Systems 2026, 14(4), 407; https://doi.org/10.3390/systems14040407 - 7 Apr 2026
Viewed by 321
Abstract
Place-based approaches have gained significant attention as a means of addressing entrenched disadvantage through collaborative, locally responsive service delivery, yet implementation has yielded mixed results and the systemic factors that facilitate or impede inter-organisational collaboration remain inadequately understood. This study applied participatory systems mapping as part of a systemic inquiry to identify leverage points for place-based integrated service delivery in the City of Casey, an outer-metropolitan municipality in Melbourne, Australia. Twenty-one representatives from the Casey Futures Partnership engaged in group model building workshops, co-producing a causal loop diagram containing 33 factors and 104 directional connections. The resulting map was analysed using a blended analytical approach combining network metrics with the Action Scales Model. Funding availability and criteria emerged as the most central factor within the system, while belief-level factors, including territorial behaviour and resource and collaboration mindset, were found to be substantially shaped by upstream structural conditions. Factors combining network influence with deeper system positioning and amenability to local action included awareness of community needs and priorities, trust and willingness to collaborate from funders, inter-organisational communication, and advocacy effectiveness. The findings support multi-level place-based approaches that address underlying beliefs and structural conditions alongside operational improvements.

30 pages, 1979 KB  
Article
Design Consistency and Aesthetic Experience in Digital Health Communication: A Mixed-Method Study of Lifestyle Medicine Product Ecosystems
by Yuexing Wang and Xin Ma
Healthcare 2026, 14(7), 964; https://doi.org/10.3390/healthcare14070964 - 7 Apr 2026
Viewed by 270
Abstract
Background/Objectives: Digital health ecosystems increasingly integrate content, behavioral interventions, and commercial offerings across multiple platforms. While design consistency is established as critical for trust in commercial contexts, its associations with health behavior change and objective health outcomes remain underexplored. This study examined how cross-platform design consistency and aesthetic experience are associated with behavioral adoption through psychological pathways and investigated relationships between design-driven adoption and objective health outcomes. Methods: A convergent mixed-method design comprised five integrated studies: systematic content analysis of short-form videos (N = 200), expert evaluation and user testing (N = 33), a cross-sectional survey (N = 186), semi-structured interviews (N = 15), and a 3-month longitudinal health outcome analysis (N = 143). Structural equation modeling tested pathways from design features through psychological mediators and COM-B components (capability, opportunity, motivation) to behavioral adoption and health outcomes. Results: Design consistency was significantly associated with trust (β = 0.52), perceived value (β = 0.68), and reduced perceived risk (β = −0.41; all p < 0.001). Aesthetic experience predicted emotional resonance (β = 0.71, p < 0.001) and moderated design–trust associations. COM-B components mediated 75% of the intention-to-adoption pathway (total indirect effect = 0.51, p < 0.001). High-adoption users showed clinically meaningful improvements in weight (−2.8 kg, d = 0.89), HbA1c (−0.7%, d = 0.65), fasting glucose (−0.9 mmol/L, d = 0.72), and LDL-C (−0.4 mmol/L, d = 0.51) over three months. Conclusions: Within a single, influencer-centered Chinese digital health ecosystem, design consistency and aesthetic experience were significantly associated with trust, psychological readiness, and behavioral adoption. These findings are observational; randomized controlled trials and multi-site replication are required to establish causal mechanisms and assess generalizability.

27 pages, 1493 KB  
Article
Emergency Alert and Warning Systems and Their Impact on Sustainable Disaster Preparedness and Awareness in the Philippines: A SEM–ANN Analysis
by Charmine Sheena R. Saflor and Kyla Kudhal
Sustainability 2026, 18(7), 3590; https://doi.org/10.3390/su18073590 - 6 Apr 2026
Viewed by 325
Abstract
Emergency Alert and Warning Systems (EAWSs) are essential components of sustainable disaster risk reduction, providing communities with timely information to prepare for and respond to impending hazards. In the Philippines, one of the world’s most disaster-prone countries, earthquakes, typhoons, and other natural hazards occur frequently. However, national statistics from 2018 indicated that only 40% of Filipinos considered themselves well prepared for disasters, while 31% reported being slightly prepared or not prepared at all. This study investigates the perceived effectiveness of EAWSs in enhancing disaster awareness and preparedness among Filipino residents. Guided by the Theory of Planned Behavior (TPB), the research develops an integrated framework to examine behavioral, technical, and perceptual factors influencing preparedness intentions. Data were collected from 200 respondents through a structured survey. Structural Equation Modeling (SEM) was employed to identify significant linear relationships among the constructs, while an Artificial Neural Network (ANN) analysis was subsequently applied to capture nonlinear patterns and rank the relative importance of key predictors. Unlike previous studies that rely solely on SEM or descriptive approaches, the combined SEM–ANN framework enables a more comprehensive understanding of both causal relationships and complex behavioral dynamics influencing disaster preparedness. The findings reveal that behavioral intention, system reliability, message clarity, and trust in EAWS substantially affect individuals’ preparedness behavior and risk mitigation actions. These results underscore the importance of strengthening EAWS design and communication strategies to support long-term disaster resilience. The study provides practical insights for national agencies, local governments, and policymakers on refining emergency communication systems and developing sustainable, evidence-based disaster preparedness initiatives. 

32 pages, 1364 KB  
Article
XRL-LLM: Explainable Reinforcement Learning Framework for Voltage Control
by Shrenik Jadhav, Birva Sevak and Van-Hai Bui
Energies 2026, 19(7), 1789; https://doi.org/10.3390/en19071789 - 6 Apr 2026
Viewed by 314
Abstract
Reinforcement learning (RL) agents are increasingly deployed for voltage control in power distribution networks. However, their opaque decision-making creates a significant trust barrier, limiting their adoption in safety-sensitive operational settings. This paper presents XRL-LLM, a novel framework that generates natural language explanations for RL control decisions by combining game-theoretic feature attribution (KernelSHAP) with large language model (LLM) reasoning grounded in power systems domain knowledge. We deployed a Proximal Policy Optimization (PPO) agent on an IEEE 33-bus network to coordinate capacitor banks and on-load tap changers, successfully reducing voltage violations by 90.5% across diverse loading conditions. To make these decisions interpretable, KernelSHAP identifies the most influential state features. These features are then processed by a domain-context-engineered LLM prompt that explicitly encodes network topology, device specifications, and ANSI C84.1 voltage limits. Evaluated via G-Eval across 30 scenarios, XRL-LLM achieves an explanation quality score of 4.13/5. This represents a 33.7% improvement over template-based generation and a 67.9% improvement over raw SHAP outputs, delivering statistically significant gains in accuracy, actionability, and completeness (p<0.001, Cohen’s d values up to 4.07). Additionally, a physics-grounded counterfactual verification procedure, which perturbs the underlying power flow model, confirms a causal faithfulness of 0.81 under critical loading. Finally, five ablation studies yield three broader insights. First, structured domain context engineering produces synergistic quality gains that exceed any single knowledge component, demonstrating that prompt composition matters more than the choice of foundational model. Second, even an open-source 8B-parameter model outperforms templates given the same prompt, confirming the framework’s backbone-agnostic value. Most importantly, counterfactual faithfulness increases alongside load severity, indicating that post hoc attributions are most reliable in the high-stakes regimes where trustworthy explanations matter most.
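The abstract above attributes RL control decisions with KernelSHAP, which approximates Shapley values. As an illustration of the quantity KernelSHAP estimates (not the paper's implementation), the sketch below computes exact Shapley values for a toy two-feature coalition value function; the features and payoffs are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all coalitions of the remaining features."""
    values = []
    all_features = set(range(n_features))
    for i in range(n_features):
        phi = 0.0
        others = all_features - {i}
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = set(subset)
                weight = (factorial(len(s)) * factorial(n_features - len(s) - 1)
                          / factorial(n_features))
                phi += weight * (value_fn(s | {i}) - value_fn(s))
        values.append(phi)
    return values

# Toy value function: hypothetical features 0 (bus voltage) and 1 (tap position)
def v(coalition):
    score = 0.0
    if 0 in coalition:
        score += 2.0
    if 1 in coalition:
        score += 1.0
    if 0 in coalition and 1 in coalition:
        score += 0.5  # interaction term, split evenly between the two features
    return score

print(shapley_values(v, 2))  # → [2.25, 1.25]
```

The attributions sum to the full-coalition value (3.5), the efficiency property that makes Shapley-based rankings attractive for explanation. KernelSHAP recovers the same quantities by weighted regression rather than exhaustive enumeration, which is what makes it tractable for real state spaces.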

42 pages, 1024 KB  
Review
From Concrete to Code: A Survey of AI-Driven Transportation Infrastructure, Security, and Human Interaction
by Nuri Alperen Kose, Kubra Kose and Fan Liang
Sensors 2026, 26(7), 2219; https://doi.org/10.3390/s26072219 - 3 Apr 2026
Viewed by 498
Abstract
The transition to AI-driven Cyber–Physical Systems has fundamentally reshaped transportation, introducing systemic risks that transcend traditional physical boundaries. Unlike prior reviews focused on isolated technological domains, this survey proposes a novel “End-to-End” analytical framework that models the causal propagation of vulnerabilities from physical sensing hardware to human cognitive responses. Synthesizing 140 research contributions (2017–2025), we evaluate the paradigm shift from deterministic control to Generative AI and Large Language Models (Transportation 5.0). To substantiate our framework, we introduce a structured cross-layer threat matrix and mathematically formalize the technology–cognition cascade, explicitly mapping how physical layer perturbations, such as optical jamming, bypass digital edge security to trigger hazardous behavioral reactions in human drivers. We conclude that ensuring the resilience of next-generation infrastructure requires a unified analytical architecture that formally bounds hardware constraints, algorithmic safety, and human trust.

29 pages, 7418 KB  
Article
EvoDropX: Evolutionary Optimization of Feature Corruption Sequences for Faithful Explanations of Transformer Models
by Dhiraj Kumar Singh and Conor Ryan
Algorithms 2026, 19(3), 187; https://doi.org/10.3390/a19030187 - 2 Mar 2026
Viewed by 324
Abstract
As deep learning models become increasingly integrated into critical decision-making systems, the need for explainable Artificial Intelligence (xAI) has grown paramount to ensure transparency, accountability, and trust. Post hoc explainability methods, which analyse trained models to interpret their predictions without modifying the underlying architecture, have become increasingly important, especially in fields such as healthcare and finance. Modern xAI techniques often produce feature importance rankings that fail to capture the true causal influence of features, particularly in transformer-based models. Recent quantitative metrics, such as Symmetric Relevance Gain (SRG), which measures the area between the feature corruption performance curves of the Most Important Feature (MIF) and the Least Important Feature (LIF), provide a more rigorous basis for evaluating explanation fidelity. In this study, we first show that existing xAI methods exhibit consistently poor performance under the SRG criterion when explaining transformer-based text classifiers. To address these limitations, we introduce EvoDropX, a novel framework that formulates explanation as an optimisation problem. EvoDropX leverages Grammatical Evolution (GE) to evolve sequences of feature corruption with the explicit objective of maximising SRG, thereby identifying features that most strongly influence model predictions. EvoDropX provides interventional, input–output (behavioural) explanations and does not attempt to infer or interpret internal model mechanisms. Through comprehensive experiments across multiple datasets (IMDb movie reviews (IMDB), Stanford Sentiment Treebank (SST-2), Amazon Polarity (AP)), multiple transformer models (Bidirectional Encoder Representations from Transformers (BERT), RoBERTa, DistilBERT), and multiple metrics (SRG, MIF, LIF, Counterfactual Conciseness (CFC)), we demonstrate that EvoDropX significantly outperforms all state-of-the-art (SOTA) xAI baselines, including Attention-Aware Layer-Wise Relevance Propagation for Transformers (AttnLRP), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME), when evaluated using intervention-based faithfulness criteria. Notably, EvoDropX achieves a 74.77% improvement in SRG over the best-performing baseline on the IMDB dataset with the BERT model, with consistent improvements observed across all dataset-model pairs. Finally, qualitative and linguistic analyses reveal that EvoDropX captures both sentiment-bearing terms and their structural relationships within sentences, yielding explanations that are both faithful and interpretable.
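The abstract above defines SRG as the area between the MIF and LIF feature-corruption performance curves. A minimal sketch of that quantity, assuming curves sampled at equal corruption steps; the curve values below are invented for illustration and are not from the paper:

```python
def auc(curve):
    """Trapezoidal area under a performance curve sampled at unit steps."""
    return sum((a + b) / 2.0 for a, b in zip(curve, curve[1:]))

def srg(mif_curve, lif_curve):
    """Area between the LIF and MIF corruption curves. A faithful
    importance ranking degrades the model quickly when its most
    important features are corrupted (steep MIF curve) and slowly
    when its least important features are corrupted (flat LIF curve),
    so a larger gap indicates higher explanation fidelity."""
    return auc(lif_curve) - auc(mif_curve)

# Hypothetical accuracy after corrupting 0, 1, 2 ranked features
mif = [1.0, 0.4, 0.2]   # removing top-ranked features hurts fast
lif = [1.0, 0.95, 0.9]  # removing bottom-ranked features barely hurts
print(srg(mif, lif))    # larger gap -> more faithful ranking
```

Framing explanation as maximising this gap, as EvoDropX does, turns faithfulness from a post hoc evaluation metric into the search objective itself.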

23 pages, 591 KB  
Article
From ESG Signals to Sustainable Relationships: A Strategic Perspective on Perceived Sustainability Awareness, Dual-Path Value, and Long-Term Trust
by Yoon Joo Park
Sustainability 2026, 18(5), 2179; https://doi.org/10.3390/su18052179 - 24 Feb 2026
Viewed by 424
Abstract
This study examines how consumers’ perceptions of corporate environmental, social, and governance (ESG) performance are statistically associated with sustainable relational outcomes within a structured cognitive and relational framework. Drawing on signaling theory and perceived value theory, we propose and empirically test a sequential mediation model in which perceived ESG performance is positively associated with perceived sustainability awareness (PSA), PSA is subsequently associated with dual-path value perceptions (cognitive and socio-emotional value), and these value perceptions are positively related to long-term trust (LTT) and value co-creation (VCC). In addition, the moderating role of signal credibility on the ESG–PSA relationship is examined. Using survey data from 278 South Korean consumers and structural equation modeling, the results indicate that perceived ESG performance is significantly positively associated with PSA, which in turn is positively associated with both cognitive and socio-emotional value. These value dimensions independently and positively relate to long-term trust, which is in turn associated with value co-creation. Contrary to expectations derived from signaling theory, signal credibility does not significantly moderate the ESG–PSA relationship, suggesting that ESG signals may function as baseline legitimacy cues within the South Korean institutional context, where sustainability norms are relatively institutionalized. Overall, the findings suggest that ESG effectiveness does not operate through direct persuasion but is consistent with a multi-stage cognitive and relational framework. By distinguishing sustainability awareness from ESG perception and decomposing value perceptions into dual paths, this study advances theoretical understanding of how ESG signals may be internalized and statistically linked to sustainable firm–consumer relationships. From a managerial perspective, the results highlight the strategic importance of designing ESG initiatives and communications that enhance sustainability awareness and support long-term trust as foundations for engagement and co-creation. Given the cross-sectional design, the proposed sequential structure should be interpreted as associative rather than definitive causal evidence.

69 pages, 31002 KB  
Review
Next-Gen Explainable AI (XAI) for Federated and Distributed Internet of Things Systems: A State-of-the-Art Survey
by Aristeidis Karras, Anastasios Giannaros, Natalia Amasiadi and Christos Karras
Future Internet 2026, 18(2), 83; https://doi.org/10.3390/fi18020083 - 4 Feb 2026
Viewed by 1108
Abstract
Background: Explainable Artificial Intelligence (XAI) is deployed in Internet of Things (IoT) ecosystems for smart cities and precision agriculture, where opaque models can compromise trust, accountability, and regulatory compliance. Objective: This survey investigates how XAI is currently integrated into distributed and federated IoT architectures and identifies systematic gaps in evaluation under real-world resource constraints. Methods: A structured search across IEEE Xplore, ACM Digital Library, ScienceDirect, SpringerLink, and Google Scholar targeted publications related to XAI, IoT, edge/fog computing, smart cities, smart agriculture, and federated learning. Relevant peer-reviewed works were synthesized along three dimensions: deployment tier (device, edge/fog, cloud), explanation scope (local vs. global), and validation methodology. Results: The analysis reveals a persistent resource–interpretability gap: computationally intensive explainers are frequently applied on constrained edge and federated platforms without explicitly accounting for latency, memory footprint, or energy consumption. Only a minority of studies quantify privacy–utility effects or address causal attribution in sensor-rich environments, limiting the reliability of explanations in safety- and mission-critical IoT applications. Contribution: To address these shortcomings, the survey introduces a hardware-centric evaluation framework with the Computational Complexity Score (CCS), Memory Footprint Ratio (MFR), and Privacy–Utility Trade-off (PUT) metrics and proposes a hierarchical IoT–XAI reference architecture, together with the conceptual Internet of Things Interpretability Evaluation Standard (IOTIES) for cross-domain assessment. Conclusions: The findings indicate that IoT–XAI research must shift from accuracy-only reporting to lightweight, model-agnostic, and privacy-aware explanation pipelines that are explicitly budgeted for edge resources and aligned with the needs of heterogeneous stakeholders in smart city and agricultural deployments.
(This article belongs to the Special Issue Human-Centric Explainability in Large-Scale IoT and AI Systems)

36 pages, 642 KB  
Article
Sustainable Trade Credit Access: The Role of Digital Transformation Under the Resource Dependence Theory
by Yang Xu, Yun Che, Xu Tian, Shuai Zhang and Yu Zhang
Sustainability 2026, 18(3), 1174; https://doi.org/10.3390/su18031174 - 23 Jan 2026
Viewed by 647
Abstract
This paper constructs a two-way fixed effects model using data from 4623 Chinese A-share listed enterprises from 2011 to 2022, confirming that firm digital transformation can enhance access to sustainable trade credit. Specifically, for every 1% increase in the standard deviation of digital transformation, the trade credit obtained by enterprises increases by 2.14% in relation to their average value. We employed instrumental variable (IV) and propensity score matching (PSM) methods, utilizing the Broadband China pilot policy as a quasi-natural experiment to conduct a multi-period propensity score matching-difference in differences (PSM-DID) analysis to address potential issues of reverse causality and sample selection bias. Mechanism analysis indicates that the diversification of supplier structures, R&D innovation, and market share facilitated by digitalization are three main channels. This effect is particularly significant in state-owned enterprises, mature enterprises, and those with higher social trust. Finally, the study also found that the spillover effects of digital transformation encourage client enterprises to allocate credit resources to downstream firms, thereby promoting the sustainable development of supply chain finance. Furthermore, the digital transformation primarily alleviates short-term credit challenges for enterprises and reduces their reliance on bank credit.

23 pages, 633 KB  
Article
Artificial Intelligence Governance in Smart Cities: A Causal Model of Citizen Sustainability Co-Creation Through Acceptance, Trust, and Adaptability
by Lersak Phothong, Anupong Sukprasert and Nantana Ngamtampong
Sustainability 2026, 18(2), 1109; https://doi.org/10.3390/su18021109 - 21 Jan 2026
Viewed by 827
Abstract
Urban sustainability has become a defining governance challenge as smart cities increasingly integrate artificial intelligence (AI) into public service delivery and decision-making. While AI-enabled systems promise efficiency and responsiveness, growing concerns regarding trust, legitimacy, and citizen engagement suggest that technological adoption alone does not guarantee sustainable urban outcomes. Existing studies have largely emphasized technological performance or individual adoption, paying limited attention to the governance mechanisms through which AI acceptance translates into sustainability co-creation. To address this gap, this study develops and empirically examines the AI–Urban Citizen Sustainability Co-Creation Framework (AI–CSCF) within the context of smart cities in Thailand. A quantitative survey was conducted with 1002 citizens across three smart city settings, and structural equation modeling (SEM) was employed to examine the relationships among AI acceptance, trust in AI, citizen adaptability, and sustainability co-creation. The results indicate that AI acceptance functions as a foundational condition shaping trust in AI and citizen adaptability, through which its influence on sustainability co-creation is indirectly transmitted. Trust in AI emerges as a key mediating mechanism linking AI-enabled governance to participatory sustainability outcomes. These findings underscore the importance of human-centered and trustworthy AI governance that strengthens citizen trust, enhances adaptive capacities, and positions citizens as active co-creators of sustainable urban development aligned with SDG 11.
(This article belongs to the Section Sustainable Urban and Rural Development)

33 pages, 2238 KB  
Article
Impact of Autonomic Computing on Process Industry
by Walter Quadrini, Simone Arena, Sofia Teocchi, Francesco Alessandro Cuzzola and Marco Taisch
Sustainability 2026, 18(2), 847; https://doi.org/10.3390/su18020847 - 14 Jan 2026
Viewed by 371
Abstract
Traditional sustainability frameworks in large-scale production systems, such as Process Industry (PI) ones, often overlook operational resilience, creating a “resiliency gap” where systems optimized for efficiency remain vulnerable to disruptions. This study addresses this gap by proposing and empirically validating a Quadruple Bottom Line (4BL) framework that integrates resilience as the fourth pillar alongside economic, environmental, and social goals. The purpose is to evaluate the impact that Autonomic Computing (AC) can have from this perspective. A Procedural Action Research (PAR) methodology was conducted across four distinct PI industrial cases (asphalt, steel, pharma, and aluminum). The ECOGRAI framework was used to qualitatively link companies’ strategic objectives to shop-floor Key Performance Indicators (KPIs), guiding the assessment of AC systems. The results show business-level benefits following the introduction of AC systems, which were implemented to enhance resilience by managing ML model drift. Key findings include reductions in plant downtime, decreases in waste (steel), reductions in gas consumption, and improved operator trust. This research provides empirical evidence that AC can make resilience an actionable component of industrial strategy, leading to measurable improvements across all four pillars of the 4BL framework. Its contribution is methodological and operational, aiming to demonstrate feasibility and causal plausibility.
(This article belongs to the Special Issue Large-Scale Production Systems: Sustainable Manufacturing and Service)

18 pages, 343 KB  
Article
The Bidirectional Relationship Between Audit Fees and Corporate Reputation: Panel Evidence from South African Listed Firms
by Omobolade Stephen Ogundele and Lethiwe Nzama-Sithole
J. Risk Financial Manag. 2026, 19(1), 35; https://doi.org/10.3390/jrfm19010035 - 4 Jan 2026
Viewed by 758
Abstract
As corporate accountability, credibility, transparency, and stakeholders’ trust continue to attract global attention, this study examines how corporate reputation influences audit fees and whether audit fees, in turn, shape reputation. Specifically, it examines the bidirectional relationship between audit fees and corporate reputation in South African listed firms. The study draws on three theories: capital reputation, stakeholder, and agency theory. Using panel data from sixteen listed firms over a period of ten years (2015–2024), this study employs panel regression analysis and two-step system generalised method of moments (System GMM) estimates to account for endogeneity, heterogeneity, and dynamic relationships. Data were sourced from the annual reports and accounts of the selected firms. The results from the fixed effects model indicate that corporate reputation exerts a statistically significant and positive influence on audit fees. Conversely, findings from the random effects model reveal that audit fees positively influence corporate reputation. The two-step GMM result confirms a bidirectional causal relationship, as lagged corporate reputation significantly influences subsequent audit fees, while lagged audit fees also significantly influence future corporate reputation. This study recommends that boards of directors view the audit not as an expense but as a strategic investment in sustaining stakeholder trust and reputation. Among other things, policymakers and regulators should also strengthen audit market transparency to ensure that audit pricing reflects genuine reputational considerations and audit quality.
(This article belongs to the Section Business and Entrepreneurship)
32 pages, 3384 KB  
Review
A Survey of the Application of Explainable Artificial Intelligence in Biomedical Informatics
by Hassan Eshkiki, Farinaz Tanhaei, Fabio Caraffini and Benjamin Mora
Appl. Sci. 2025, 15(24), 12934; https://doi.org/10.3390/app152412934 - 8 Dec 2025
Viewed by 1876
Abstract
This review investigates the application of Explainable Artificial Intelligence (XAI) in biomedical informatics, encompassing domains such as medical imaging, genomics, and electronic health records. Through a systematic analysis of 43 peer-reviewed articles, we examine current trends, as well as the strengths and limitations of methodologies currently used in real-world healthcare settings. Our findings highlight a growing interest in XAI, particularly in medical imaging, yet reveal persistent challenges in clinical adoption, including issues of trust, interpretability, and integration into decision-making workflows. We identify critical gaps in existing approaches and underscore the need for more robust, human-centred, and intrinsically interpretable models, with only 44% of the papers studied proposing human-centred validations. Furthermore, we argue that fairness and accountability, which are key to the acceptance of AI in clinical practice, can be supported by the use of post hoc tools for identifying potential biases but ultimately require the implementation of complementary fairness-aware or causal approaches alongside evaluation frameworks that prioritise clinical relevance and user trust. This review provides a foundation for advancing XAI research on the development of more transparent, equitable, and clinically meaningful AI systems for use in healthcare.
(This article belongs to the Special Issue Application of Artificial Intelligence in Biomedical Informatics)

29 pages, 3769 KB  
Systematic Review
Illuminating Industry Evolution: Reframing Artificial Intelligence Through Transparent Machine Reasoning
by Albérico Travassos Rosário and Joana Carmo Dias
Information 2025, 16(12), 1044; https://doi.org/10.3390/info16121044 - 1 Dec 2025
Cited by 3 | Viewed by 851
Abstract
As intelligent systems become increasingly embedded in industrial ecosystems, the demand for transparency, reliability, and interpretability has intensified. This study investigates how explainable artificial intelligence (XAI) contributes to enhancing accountability, trust, and human–machine collaboration across industrial contexts transitioning from Industry 4.0 to Industry 5.0. To achieve this objective, a systematic bibliometric literature review (LRSB) was conducted following the PRISMA framework, analysing 98 peer-reviewed publications indexed in Scopus. This methodological approach enabled the identification of major research trends, theoretical foundations, and technical strategies that shape the development and implementation of XAI within industrial settings. The findings reveal that explainability is evolving from a purely technical requirement to a multidimensional construct integrating ethical, social, and regulatory dimensions. Techniques such as counterfactual reasoning, causal modelling, and hybrid neuro-symbolic frameworks are shown to improve interpretability and trust while aligning AI systems with human-centric and legal principles, notably those outlined in the EU AI Act. The bibliometric analysis further highlights the increasing maturity of XAI research, with strong scholarly convergence around transparency, fairness, and collaborative intelligence. By reframing artificial intelligence through the lens of transparent machine reasoning, this study contributes to both theory and practice. It advances a conceptual model linking explainability with measurable indicators of trustworthiness and accountability, and it offers a roadmap for developing responsible, human-aligned AI systems in the era of Industry 5.0. Ultimately, the study underscores that fostering explainability not only enhances functional integrity but also strengthens the ethical and societal legitimacy of AI in industrial transformation. Full article
(This article belongs to the Special Issue Advances in Information Studies)

24 pages, 1396 KB  
Article
Drivers of Efficient Destination Management in Times of Transition: Key Findings for Destination Development Management and Marketing Organisations (DDMMOs)
by Iordanis Katemliadis, Andreas Papatheodorou, Maria Doumi and Nicholas Karachalis
Tour. Hosp. 2025, 6(5), 244; https://doi.org/10.3390/tourhosp6050244 - 13 Nov 2025
Viewed by 1328
Abstract
This paper reflects on the results of a survey and aims to illuminate the operations of Destination Development, Management and Marketing Organisations (DDMMOs) by identifying distinct Key Performance Areas (KPAs) and their associated indicators, and by examining how these areas influence one another. Various linkages were explored between Enablers and Results performance areas, both within and across these categories. Multivariate statistical techniques such as Structural Equation Modelling (SEM), together with Analysis of Variance (ANOVA), chi-square tests, Pearson correlation, and other descriptive statistical methods, yielded several insightful findings. The authors developed a research model that operates at the observation level, measures all latent variables, and tests all hypothesised dependencies. The model investigates causal relationships among the variables to understand how each contributes to overall performance. The researchers created a questionnaire based on the EFQM framework, comprising seven constructs and 72 indicators rated on a five-point Likert scale (1–5). Of the 141 questionnaires distributed, 128 were considered valid and formed the sample for this research. All respondents were experienced employees or managers of DDMMOs in various roles. The results revealed that Leadership is one of the most valuable functions a DDMMO can provide, and that when stakeholders trust the DDMMO, it operates more efficiently. The optimal size and ownership structure should be tailored to the specific needs of the destination, which can also influence how it manages its response. Furthermore, this paper revealed a link between sustainability and performance. The effectiveness of DDMMOs will largely determine their impact on the local economy and society. The research model developed, together with the insights revealed, is a testament to the practical relevance of this paper. Full article
