Search Results (375)

Search Parameters:
Keywords = Four Assurances

29 pages, 1501 KB  
Review
Sustainability Reporting Between Financial Market Forces and Regulatory Mandates: A Global Bibliometric Analysis
by Anissa Naouar, Hajer Zarrouk and Teheni El Ghak
Int. J. Financial Stud. 2026, 14(4), 82; https://doi.org/10.3390/ijfs14040082 - 1 Apr 2026
Viewed by 679
Abstract
This study examines the evolution of sustainability reporting research by integrating financial market dynamics, regulatory frameworks, and digital transformation into a unified analytical lens. It explores how these forces shape the credibility, comparability, and strategic relevance of sustainability disclosure. A bibliometric analysis of 683 publications indexed in the Web of Science (2006–2025) was conducted. Performance indicators and science-mapping techniques were applied to identify the intellectual structure of the field. Four major thematic clusters were detected: (i) corporate social responsibility and disclosure performance, (ii) governance and accountability, (iii) regulatory and institutional frameworks, and (iv) financial market and digital innovation drivers. Findings reveal that disclosure, corporate social responsibility, and performance remain the field’s core anchors, while governance, accountability, innovation, and strategy increasingly shape reporting credibility. Sustainability reporting reduces information asymmetry, lowers financing costs, and builds stakeholder trust; however, persistent fragmentation, greenwashing, and weak assurance highlight the need for global harmonization. Regulatory initiatives and market instruments are converging to institutionalize sustainability disclosure. The study advances a policy and managerial agenda advocating stronger governance oversight, harmonized disclosure frameworks, and technology-enabled assurance mechanisms to enhance transparency, accountability, and investor confidence. Full article
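The science-mapping step described in this abstract rests on counting keyword co-occurrences across indexed records. A minimal sketch in Python, with invented keyword sets standing in for Web of Science records (the study's actual bibliometric tooling is not implied):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs):
    """Count how often each pair of keywords appears together in one record."""
    pairs = Counter()
    for keywords in docs:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy records standing in for indexed publications.
records = [
    {"disclosure", "csr", "performance"},
    {"disclosure", "governance"},
    {"csr", "performance", "governance"},
]
matrix = cooccurrence(records)
print(matrix[("csr", "performance")])  # 2
```

Clustering the resulting pair counts, for example by modularity on the co-occurrence graph, is what surfaces thematic clusters like the four reported above.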

25 pages, 26208 KB  
Article
Analysis of Forest Ecosystem Service Clusters and Influencing Factors Based on SOFM and XGBoost Models
by Yong Cao, Hao Wang, Ziwei Zhang, Cheng Wang, Zhili Xu and Bin Dong
Forests 2026, 17(4), 439; https://doi.org/10.3390/f17040439 - 1 Apr 2026
Viewed by 281
Abstract
This study focuses on the Dabie Mountain Comprehensive Station in Anhui Province, constructing a multi-scale analytical framework and integrating remote sensing and socio-economic data to systematically assess the spatiotemporal evolution of ecosystem service bundles (ESBs) and landscape ecological risks using SOFM, XGBoost, and SHAP models. The research categorizes ecosystem service functions into four types: water conservation core areas, carbon storage–habitat optimization areas, carbon storage–water production composite areas, and multifunctional synergy areas. From 2013 to 2023, the proportion of multifunctional synergy areas increased from 39.85% to 42.86%, while carbon storage–habitat optimization areas and water conservation core areas decreased by 28,035.47 hm² and 2118.8 hm², respectively, indicating significant spatial restructuring of regional ecosystem service functions. The landscape ecological risk exhibits a pattern of “medium risk dominance with high-low polarization,” where high-risk areas overlap with urban expansion zones, and low-risk areas are concentrated in ecological conservation zones. Quantitative analysis reveals that climatic factors (e.g., annual precipitation) dominate the risk patterns in water conservation core areas and ecological conservation zones, topographic factors (e.g., elevation) influence regional spatial differentiation, and socio-economic factors (e.g., nighttime light index) significantly affect agricultural production core areas. The findings elucidate the evolutionary patterns of ecosystem service functions and the mechanisms of risk formation in the Dabie Mountain region, providing a scientific basis and technical support for regional land use optimization, ecosystem function enhancement, and ecological security assurance. Full article
(This article belongs to the Section Forest Ecology and Management)
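The SOFM step in the abstract above groups spatial units into service bundles. A minimal 1-D self-organizing map sketch with toy 2-D samples; the study's grid size, inputs, and training schedule are not reproduced here:

```python
import math
import random

def train_som(data, n_units=4, epochs=50, lr0=0.5, seed=0):
    """Minimal 1-D self-organizing map: each unit is pulled toward the
    samples for which it is the best-matching unit (BMU), with a Gaussian
    neighbourhood that shrinks as training proceeds."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = max(0.5, (n_units / 2) * (1 - t / epochs))
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(units[i], x)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                units[i] = [w + lr * h * (v - w) for w, v in zip(units[i], x)]
    return units

# Two toy "service bundles" in 2-D; real inputs would be multi-service rasters.
samples = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.85], [0.95, 0.9]]
trained = train_som(samples)
```

After training, each spatial unit is assigned to its best-matching map unit, and those assignments form the bundle clusters.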

40 pages, 4626 KB  
Review
A Systematic Lifecycle-Referenced Capability Mapping of MLOps Platforms for Energy Forecasting
by Xun Zhao, Zheng Grace Ma and Bo Nørregaard Jørgensen
Information 2026, 17(4), 328; https://doi.org/10.3390/info17040328 - 28 Mar 2026
Viewed by 360
Abstract
Accurate energy forecasting is essential for maintaining power system reliability, integrating renewable generation, and ensuring market stability. Although machine learning has improved forecasting accuracy, its operational deployment depends on Machine Learning Operations (MLOps) platforms that automate and scale the entire lifecycle of energy data pipelines. However, the capabilities of existing MLOps platforms for energy forecasting have not been systematically compared. This study adopts a PRISMA-informed review process to identify relevant end-to-end MLOps platforms for energy forecasting and then maps their documented capabilities using an established energy forecasting pipeline lifecycle as the reference structure. A total of 256 records were screened across vendor documentation, open-source repositories, and academic literature, of which 13 MLOps platforms were selected for comparative capability analysis. Platform capabilities are organised and presented across an end-to-end lifecycle covering project setup and governance, data ingestion and management, model development and experimentation, deployment and serving, and monitoring and feedback. Commercial platforms such as Amazon SageMaker and Google Vertex AI generally provide stronger end-to-end integration and production readiness, while open-source platforms such as Kubeflow and ClearML offer modular flexibility that typically requires additional integration effort to achieve end-to-end operation. The mapping identifies four priority areas where platform support remains limited, namely (i) governance workflow automation, (ii) automated data quality validation, (iii) feature management, and (iv) deployment and monitoring support under nonstationary conditions. These findings indicate that platform selection for energy forecasting should be treated as a lifecycle capability decision, balancing end-to-end integration, operational assurance, and long-term flexibility. Full article
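The lifecycle-referenced capability mapping in this abstract can be pictured as a boolean matrix over the five lifecycle stages. The platform names and capability values below are placeholders, not the paper's documented findings for any real platform:

```python
LIFECYCLE = ["setup_governance", "data_management", "model_development",
             "deployment_serving", "monitoring_feedback"]

# Hypothetical capability matrix; values are illustrative placeholders.
capabilities = {
    "PlatformA": {"setup_governance": True, "data_management": True,
                  "model_development": True, "deployment_serving": True,
                  "monitoring_feedback": False},
    "PlatformB": {"setup_governance": False, "data_management": True,
                  "model_development": True, "deployment_serving": False,
                  "monitoring_feedback": False},
}

def lifecycle_coverage(platform):
    """Fraction of lifecycle stages a platform supports out of the box."""
    caps = capabilities[platform]
    return sum(caps[stage] for stage in LIFECYCLE) / len(LIFECYCLE)

print(lifecycle_coverage("PlatformA"))  # 0.8
```

A coverage score like this makes the trade-off in the abstract concrete: an integrated commercial stack scores high out of the box, while a modular open-source stack scores lower until integration work fills the gaps.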

34 pages, 4009 KB  
Article
Optimal Operation Strategy for Island Multi-Energy Microgrids Considering the Water-Energy Nexus of Wastewater Treatment and Desalination
by Wang Pan, Wei Zhang and Dong Han
Sustainability 2026, 18(7), 3297; https://doi.org/10.3390/su18073297 - 28 Mar 2026
Viewed by 219
Abstract
Island regions face dual challenges of renewable energy accommodation and freshwater scarcity, severely constraining operational economy and reliability. However, existing research regards wastewater treatment and seawater desalination as isolated subsystems, overlooking the significant synergistic potential in their water-energy nexus. This paper proposes a novel optimal operation framework for standalone island multi-energy microgrids, constructing a water-energy coupled system that integrates wastewater treatment, seawater desalination, hydrogen electrolysis, methanation, and diversified energy storage. A hierarchical collaborative dynamic weighting mechanism is proposed to facilitate system coupling coordination. At the system macro-level, a Sigmoid-based adaptive strategy responds to real-time operating conditions by dynamically adjusting the weighting ratios of four-dimensional objectives; at the water system micro-level, the load allocation between wastewater treatment and seawater desalination is optimized through a continuous regulation mechanism. This method establishes a framework to maximize the coupling coordination between wastewater treatment and seawater desalination, fully exploiting the flexible load characteristics of water treatment facilities to mitigate renewable energy fluctuations. Simulation results from a case study validate the effectiveness of the proposed strategy; the method achieves collaborative and efficient system operation alongside water-energy security assurance and significantly reduces the total system operating cost by 76,259.14 CNY compared to traditional methods. Full article
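The Sigmoid-based dynamic weighting described above can be sketched as a logistic response per objective followed by normalization. Signal names and the steepness parameter are illustrative assumptions, not the paper's calibrated values:

```python
import math

def sigmoid(x, k=5.0):
    """Logistic response; k controls how sharply a weight reacts to its signal."""
    return 1.0 / (1.0 + math.exp(-k * x))

def adaptive_weights(signals):
    """Map per-objective condition signals (roughly in [-1, 1]) to
    weights that sum to 1. Objective names are illustrative."""
    raw = {name: sigmoid(s) for name, s in signals.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

# E.g. a renewable surplus pushes up the accommodation objective's weight.
w = adaptive_weights({"cost": 0.0, "renewable_accommodation": 0.8,
                      "water_security": -0.2, "reliability": 0.1})
```

The normalization keeps the four objective weights a valid convex combination while letting real-time conditions shift emphasis between them.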

34 pages, 1413 KB  
Systematic Review
A Systematic Review of Safety-Driven Approaches in Human–Robot Collaborative Systems
by Akhtar Khan, Maaz Akhtar, Sheheryar Mohsin Qureshi, Muzzamil Mustafa, Naser A. Alsaleh and Imran Ahmad
Sensors 2026, 26(7), 2079; https://doi.org/10.3390/s26072079 - 27 Mar 2026
Viewed by 607
Abstract
Human–robot collaboration (HRC) is advancing rapidly at the intersection of robotics and generative artificial intelligence (GenAI). This paper presents a PRISMA-based systematic review of 103 studies on the role of generative models, including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), diffusion models, and Large Language Models (LLMs), in improving the safety, trust, and adaptability of collaborative robotics. The review identifies four major thematic areas of GenAI-based safety frameworks: data-driven simulation to synthesize hazards, predictive reasoning to forecast human motion, adaptive control to reduce risks dynamically, and trust-aware cognition to explain human–robot interaction. Findings indicate that generative models transform robotic safety from reactive mechanisms into proactive, contextual, and interpretable systems. Nevertheless, real-time performance, interpretability, standard benchmarking, and ethical assurance remain open challenges. The paper proposes a taxonomy linking generative modeling layers to the physical, cognitive, and ethical aspects of HRC safety, and outlines a roadmap toward certifiable hybrid systems combining generative foresight with deterministic control. This synthesis provides a foundation for developing transparent, adaptive, and trustworthy collaborative robotic systems. Full article
(This article belongs to the Special Issue Feature Review Papers in Sensors and Robotics)

18 pages, 1781 KB  
Article
Design and Characterisation of a Polyvinyl Chloride (PVC) Tissue-Mimicking Polymer Phantom for Quantitative Shear Wave Elastography Validation
by Wadhhah Aldehani, Sarah Louise Savaridas, Cheng Wei and Luigi Manfredi
Polymers 2026, 18(7), 797; https://doi.org/10.3390/polym18070797 - 26 Mar 2026
Viewed by 383
Abstract
A polyvinyl chloride (PVC)-based tissue-mimicking polymer phantom was developed and mechanically characterised to replicate stiffness ranges relevant to breast elastography and to provide a controlled platform for evaluating shear wave elastography (SWE) measurements. SWE provides quantitative stiffness information that complements B-mode ultrasound in breast imaging. However, measurement variability related to operator technique and tissue continues to limit confidence in clinical interpretation. This study evaluates the reproducibility of SWE using custom-fabricated PVC-based breast phantoms with mechanically defined stiffness properties. Two PVC-based breast phantoms with identical geometry and different background stiffnesses were scanned using a single ultrasound system under a fixed SWE protocol. Each phantom contained four embedded inclusions representing clinically relevant stiffness categories. Six breast imagers independently acquired repeated SWE measurements in transverse and longitudinal planes, blinded to lesion identity and ground truth. Inter-operator reproducibility was assessed using intraclass correlation coefficients, and was high across both phantom backgrounds, with low intra-operator variability following quality assurance exclusion of one dataset due to sampling error. Measurement variability was lowest for solid inclusions and increased for the cyst-like inclusion in the stiffer background. SWE measurements consistently preserved the relative stiffness ordering of inclusions, although absolute values differed systematically from mechanically derived ground-truth stiffness. These findings demonstrate that PVC-based polymer phantoms provide a stable and reproducible platform for evaluating SWE measurement behaviour under controlled conditions. By isolating operator and acquisition effects from biological variability, this polymer-based framework supports methodological standardisation and structured operator training in breast elastography. 
Full article
(This article belongs to the Special Issue Polymers for Biomedical Engineering and Clinical Innovation)
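Inter-operator reproducibility in the abstract above is assessed with intraclass correlation coefficients. A sketch of the one-way random-effects form ICC(1,1) on an invented ratings table; the study may have used a different ICC variant:

```python
def icc1(ratings):
    """One-way random-effects ICC(1,1) for a table of shape
    [subjects][raters]: (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)   # between-subject
    msw = sum((x - means[i]) ** 2                               # within-subject
              for i, row in enumerate(ratings) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented stiffness readings (kPa) from 3 operators on 4 inclusions.
table = [[10.0, 10.5, 9.8],
         [25.0, 24.2, 25.6],
         [60.1, 58.9, 61.0],
         [120.0, 118.5, 121.2]]
agreement = icc1(table)
```

Large between-inclusion spread with small per-operator spread drives the coefficient toward 1, which is what "high inter-operator reproducibility" quantifies.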

12 pages, 255 KB  
Review
Reporting Standards and Quality Assurance Methods for Pancreatoduodenectomy in Randomised Controlled Trials: A Structured Narrative Review
by Abdullah K. Malik, Bishow B. Karki, Balaji Mahendran, John A. G. Moir, Shailesh V. Shrikhande, Andrew M. Smith, Deborah D. Stocken, Natalie S. Blencowe and Samir Pathak
J. Clin. Med. 2026, 15(6), 2455; https://doi.org/10.3390/jcm15062455 - 23 Mar 2026
Viewed by 373
Abstract
Background: Surgical interventions are complex and comprise multiple components, creating difficulties when considering how they might be described, standardised, and monitored (i.e., quality assurance) within randomised controlled trials (RCTs). Consolidated Standards of Reporting Trials – Non-Pharmacological Treatment (CONSORT-NPT) provides specific recommendations to improve the quality, transparency, and replicability of RCTs involving a surgical intervention. This structured narrative review explores and summarizes the reporting of quality assurance measures for surgical interventions in RCTs, using pancreatoduodenectomy (PD) as an exploratory case study. Methods: Searches for RCTs of PD were undertaken in PubMed, Medline, the Cochrane Central Register of Controlled Trials, and the Cochrane Database of Systematic Reviews from 2020 to 2024. Pancreatoduodenectomy (PD) was deconstructed into its constituent components (n = 40), and selected RCTs were scrutinised to explore reporting of quality assurance measures against the deconstructed components as described in CONSORT-NPT. Results: Of 189 screened articles, 37 RCTs were included, reporting on 5659 patients across 16 countries. No studies described all components of PD, and four did not report any components at all. Nine studies described some form of standardisation, and three measured adherence to standards, using intra-operative photographs. Minimum surgeon and centre volumes were specified in two and six trials, respectively. Conclusions: Quality assurance measures were poorly reported in selected RCTs involving PD, creating uncertainty in interpreting results. To enhance the design of future RCTs, a wider consensus regarding the core components of a PD is required. This will facilitate subsequent consideration of how these might need to be reported in future pancreatic surgical RCTs. Full article
18 pages, 820 KB  
Article
Pathways to Green AI: Information Disclosure of Artificial Intelligence Within the ESG Framework of Commercial Entities
by Junkai Chen
Sustainability 2026, 18(6), 2922; https://doi.org/10.3390/su18062922 - 17 Mar 2026
Viewed by 377
Abstract
Strengthening transparency has emerged as a pivotal issue in promoting the responsible development of artificial intelligence (AI). As the prevailing framework for corporate information disclosure, Environmental, Social, and Governance (ESG) reporting shares an inherent synergy with AI governance; both are rooted in the pursuit of sustainable development and the disclosure of specific matters to investors and broader stakeholders. This study analyzes the status of AI information disclosure in the ESG reports of listed companies across the United States, Europe, and China, finding that: (1) ESG reports have emerged as a primary channel for business organizations to disclose AI-related information; (2) significant disparities exist in disclosure levels across four key AI-related domains—development, application, manufacturing, and consumption; and (3) disclosure density varies considerably across the E, S, and G dimensions, with the Governance (G) pillar exhibiting the most comprehensive information. Based on an empirical analysis of the ESG-AI disclosure framework, this study proposes an optimization scheme for ESG-AI reporting. The scheme clearly defines mandatory ESG-AI disclosure obligations for listed companies and employs the “comply or explain” mechanism to balance corporate transparency with operational efficiency. Adhering to the “Double Materiality” principle, it discloses model training energy consumption and ecological impacts under Environmental (E) matters, addresses employment, employee training, marketing labeling, and customer privacy under Social (S) matters, and elaborates on corporate AI strategies, risk management protocols, and governance policies under Governance (G) matters.
Regarding procedural safeguards, taking China as a case study, centralized disclosure could be implemented through the National Enterprise Credit Information Publicity System, complemented by an assurance system for listed company reports to enhance the accessibility and accuracy of information disclosure. Full article

31 pages, 7528 KB  
Article
Shield Machine Attitude Prediction Method Based on Causal Graph Convolutional Network
by Liang Zeng, Xingao Yan, Chenning Zhang, Xue Wang and Shanshan Wang
Algorithms 2026, 19(3), 224; https://doi.org/10.3390/a19030224 - 16 Mar 2026
Viewed by 248
Abstract
Accurately predicting and controlling the attitude of a shield tunneling machine is critical for quality assurance in shield tunneling projects. Existing prediction methods utilize historical data to construct a machine learning framework that predicts future attitude deviations. However, these methods are poorly interpretable and offer little practical engineering guidance. To address these shortcomings, this study proposes a causal graph convolutional network (C-GCN-GRU), a deep learning method aimed at improving the interpretability of shield attitude prediction. Causal relationships between key attitude features of the shield machine are identified and quantified with the PCMCI+ method. The discovered causal relationships are converted into adjacency matrices and fed into a model combining GCN and GRU layers with multi-head causal attention to forecast the shield machine attitude. Results on a dataset from the Karnaphuli River Tunnel Project in Bangladesh show that the C-GCN-GRU model predicts the four variables characterizing shield attitude and position more accurately than four comparable models and provides decision support for attitude and position adjustments in shield tunnels. Full article
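Feeding discovered causal links into a graph convolutional network typically starts from a normalized adjacency matrix. A generic sketch, with an invented edge list standing in for PCMCI+ output:

```python
def normalized_adjacency(n, edges):
    """Build the symmetrically normalized adjacency D^-1/2 (A + I) D^-1/2
    commonly fed to a graph convolutional network. `edges` lists directed
    causal links (src, dst) such as a PCMCI+-style search might return."""
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        a[i][i] = 1.0                      # self-loops
    for src, dst in edges:
        a[src][dst] = 1.0
        a[dst][src] = 1.0                  # symmetrize for the GCN
    deg = [sum(row) for row in a]
    return [[a[i][j] / (deg[i] * deg[j]) ** 0.5 for j in range(n)]
            for i in range(n)]

# Four attitude/position variables with two illustrative causal links.
adj = normalized_adjacency(4, [(0, 1), (2, 3)])
```

Each GCN layer then multiplies node features by this matrix, so information only mixes along the causally linked variables.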

23 pages, 1013 KB  
Article
The Mediating Role of Audit Quality in the Relationship Between IFRS Adoption and Financial Reporting: Evidence from Big Four Auditing Firms in an Emerging Market
by Mohammad Zaid Alaskar, Abdulrahman Alomair, Abubkr Ahmed Elhadi Abdelraheem and Asaad Mubarak Hussien Musa
J. Risk Financial Manag. 2026, 19(3), 182; https://doi.org/10.3390/jrfm19030182 - 4 Mar 2026
Viewed by 662
Abstract
This paper set out to investigate how the Big Four auditing firms in Saudi Arabia perceive the impact of International Financial Reporting Standards (IFRS) adoption on financial reporting practices, addressing ongoing debate in the literature regarding whether IFRS adoption consistently enhances reporting practices across different institutional contexts. Further, the study investigates whether audit quality (AQ) mediates the relationship between IFRS adoption and financial reporting quality (FRQ). To address these questions, a structured questionnaire was circulated among auditors and quality assurance auditors working in the Saudi branches of the Big Four. Responses were examined using partial least squares (PLS) analysis. The results showed that IFRS has a clear positive effect on the qualitative characteristics of financial reporting, aligning with evidence from earlier studies. The results also underscored the mediating role played by AQ in reinforcing the benefits of IFRS on reporting practices. The findings carry significant ramifications, specifically for major stakeholders, including regulatory authorities, financiers, board members, senior executives, and investors. Full article
(This article belongs to the Special Issue Accounting Information and Capital Markets)

47 pages, 2418 KB  
Review
Beyond Next-Token Prediction: A Standards-Aligned Survey of Autoregressive LLM Failure Modes, Deployment Patterns, and the Potential Role of World Models
by Lorenzo Ricciardi Celsi and James McCann
Electronics 2026, 15(5), 966; https://doi.org/10.3390/electronics15050966 - 26 Feb 2026
Viewed by 731
Abstract
This paper is a focused, standards-aligned survey of where autoregressive (AR) large language models (LLMs) tend to break down when deployed inside industrial informatics workflows that must satisfy long-horizon objectives, hard constraints, traceability, and functional-safety obligations (e.g., IEC 61508/ISO 26262/ISO 21448). Rather than claiming new algorithms or experiments, we synthesize and organize prior work into (i) a control-oriented taxonomy of four AR failure modes that recur in practice (compounding error, myopic objectives, data brittleness/hallucinations, and scaling/latency inefficiencies), (ii) a catalog of standards-compatible deployment patterns that mitigate these issues (human-gated LLM-in-the-loop, retrieval + verification pipelines, planner-of-record architectures, and runtime assurance envelopes), and (iii) an operational decision framework (criteria table with observable proxies, a stepwise decision procedure, and worked examples) for deciding when token-centric mitigations are sufficient versus when state/world-model components become warranted. Joint Embedding Predictive Architectures (JEPA) and Hierarchical JEPA (H-JEPA) are proposed as representative state-predictive architectures, with discussion explicitly bounded by currently available empirical evidence; we note that the published evidence base is currently concentrated on vision/multimodal benchmarks and that industrial control validation remains limited. To make evidence boundaries transparent, we introduce (a) a survey method (scope, inclusion/exclusion criteria, and data-extraction fields), (b) a comparison matrix across representative prior systems, and (c) an evidence map that links each deployment pattern to peer-reviewed empirical findings and system reports. Full article
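The stepwise decision procedure mentioned above can be caricatured as a small rule function. The proxies and thresholds here are invented illustrations of the idea, not the survey's actual criteria table:

```python
def needs_world_model(horizon_steps, hard_constraints, error_growth):
    """Toy version of a stepwise decision procedure: escalate from
    token-centric mitigations toward state/world-model components when
    long horizons, hard constraints, or compounding error appear.
    Thresholds and proxy names are illustrative, not from the survey."""
    if horizon_steps <= 10 and not hard_constraints:
        return "token-centric mitigations"            # short, soft tasks
    if error_growth == "compounding" or hard_constraints:
        return "add state/world-model components"
    return "retrieval + verification pipeline"

print(needs_world_model(5, False, "bounded"))
```

The point of encoding the procedure, even crudely, is that each branch maps to an observable proxy an engineer can actually measure before committing to an architecture.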

24 pages, 1146 KB  
Article
Synchronizing Concurrent Security Modernization Programs: A Systems Integration Framework for Post-Quantum Cryptography, Zero Trust Architecture, and AI Security
by Robert Campbell
Systems 2026, 14(3), 233; https://doi.org/10.3390/systems14030233 - 25 Feb 2026
Viewed by 417
Abstract
Large organizations face a critical systems integration challenge when executing multiple concurrent security modernization programs. This paper examines the U.S. Department of Defense’s simultaneous implementation of three transformational initiatives—post-quantum cryptography (PQC) migration, Zero Trust Architecture (ZTA) deployment, and AI security assurance—each operating under separate governance structures, timelines, and compliance frameworks. Through systematic evidence synthesis of 59 sources (47 policy/standards documents and 12 performance benchmarks), we identify cross-program dependencies that create integration failures when programs operate in isolation. We propose a shared modernization substrate—a four-layer infrastructure architecture (Cryptographic Services, Identity Management, Analytics Pipeline, Policy Orchestration) that enables coordinated execution while preserving program independence. The framework addresses the fundamental systems challenge of achieving interoperability across programs with misaligned schedules and competing resource demands. We introduce a five-level Triad Convergence Maturity Model (TCMM) with operationalized indicators enabling repeatable organizational assessment. Illustrative application to three DoD modernization contexts demonstrates the framework’s ability to differentiate maturity levels. Performance analysis synthesizes published benchmark data: enterprise PQC latency overhead is modest (measured), while tactical environment estimates of 158–383% overhead are derived from benchmark extrapolation under packet-loss assumptions (modeled). Scenario modeling suggests that coordinated incident response through the substrate architecture could substantially reduce risk exposure windows compared to siloed approaches (modeled). 
The framework transforms fragmented program execution into synchronized systems modernization, offering practical guidance for chief information officers, program managers, and enterprise architects managing concurrent technology transitions. Full article

27 pages, 2061 KB  
Article
Delphi-Based Expert Evaluation of the XR2Learn Hybrid Instructional Design Framework for XR Education
by Christoforos Karachristos, Theodora Kouvara, Vasilis Zafeiropoulos, Theofanis Orphanoudakis, Giorgos Anastasakis, Alessandra Antonaci, Ioannis Chatzigiannakis, Maria Paola Conte, Angelica Marsico, Sindi Devole, Silvia Giordano and Matteo Besenzoni
Computers 2026, 15(2), 131; https://doi.org/10.3390/computers15020131 - 19 Feb 2026
Cited by 1 | Viewed by 429
Abstract
Extended Reality (XR) has reshaped how learning can be structured, yet its integration into formal curricula continues to lag behind its technological potential. Established instructional design models such as ADDIE and ASSURE provide stable planning structures, but were not developed to address the spatial, embodied and interactive characteristics of immersive environments. The XR2Learn framework was developed to bridge this gap by combining structured instructional planning with XR-specific pedagogical considerations. This study presents a multi-round Delphi-based expert evaluation of XR2Learn, involving twenty specialists in instructional design and XR-enhanced education. Experts assessed the framework across four dimensions: validity, clarity, usability and suitability. Qualitative feedback was thematically analyzed and subsequently quantified to establish consensus. The findings show strong agreement regarding the framework’s pedagogical grounding, logical structure and alignment with established instructional design practices. At the same time, experts identified limitations related to practical enactment, accessibility and the explicit integration of XR-specific learning constructs such as presence and social interaction. Overall, the results position XR2Learn as a framework at a transitional stage, moving from conceptual formulation toward practical instructional use. The study provides the first systematic expert validation of XR2Learn and outlines targeted directions for its refinement as a robust instructional design framework for XR-based education. Full article
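Delphi consensus of the kind quantified above is commonly operationalized as an agreement share plus an interquartile-range cap. A sketch with illustrative thresholds; the study's exact consensus rule may differ:

```python
from statistics import median

def consensus(ratings, agree_threshold=4, pct=0.75, max_iqr=1.0):
    """Common Delphi consensus rule (illustrative parameters): at least
    `pct` of experts rate the item >= `agree_threshold` on a 5-point
    scale, and the interquartile range stays within `max_iqr`."""
    share = sum(r >= agree_threshold for r in ratings) / len(ratings)
    s = sorted(ratings)
    q1 = median(s[: len(s) // 2])            # lower half
    q3 = median(s[(len(s) + 1) // 2:])       # upper half
    return share >= pct and (q3 - q1) <= max_iqr

print(consensus([5, 4, 4, 5, 4, 3, 5, 4]))  # True
```

Items failing either condition are fed back to the panel in the next Delphi round, which is how multi-round designs converge.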

19 pages, 1244 KB  
Article
Anomaly Detection as a Key Driver of Digital Forensic Resilience: Empirical Evidence from Critical Infrastructure Experts
by Marija Gombar, Darko Možnik and Mirjana Pejić Bach
Systems 2026, 14(2), 213; https://doi.org/10.3390/systems14020213 - 17 Feb 2026
Viewed by 655
Abstract
Ensuring strategic resilience in critical infrastructures, supported by a machine learning approach, requires moving beyond compliance checklists and post-incident analysis toward proactive, intelligence-based approaches. This study introduces the Forensic Resilience Operational Model (FROM), a systems thinking framework designed to embed forensic intelligence into the resilience cycle of complex socio-technical systems. To quantify this integration, the study investigates the extent to which four operational pillars (forensic readiness, anomaly detection, governance and privacy safeguards, and structured intelligence integration) affect forensic resilience, using empirical survey data from 212 cybersecurity professionals across critical infrastructure sectors. We deploy Partial Least Squares Structural Equation Modelling (PLS-SEM) to investigate these relationships, and the results confirm that anomaly detection is the strongest contributor to forensic resilience, followed by structured intelligence integration and forensic readiness. Governance safeguards, while comparatively weaker, provide the necessary legitimacy and assurance of compliance. Supported by sector-specific case studies in the maritime, financial, and CERT domains, the findings highlight both the adaptability of the proposed FROM and the operational constraints encountered in real-world contexts. The study contributes to the field of systems-oriented strategic management by demonstrating that, when systematically embedded, forensic intelligence enhances adaptive capacity, supports predictive decision-making, and strengthens resilience in environments characterized by uncertainty and high complexity. Full article
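As a rough illustration of the structural step in such an analysis, the sketch below estimates standardised path weights for four pillar scores against a resilience outcome using ordinary least squares. This is a simplified stand-in for PLS-SEM (which estimates latent constructs from survey items), and all data are synthetic; the coefficients, seed, and variable names are assumptions, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 212  # matching the study's sample size

# Synthetic composite scores for the four pillars (hypothetical data)
anomaly    = rng.normal(size=n)
intel      = rng.normal(size=n)
readiness  = rng.normal(size=n)
governance = rng.normal(size=n)
# Assumed population weights: anomaly detection strongest, governance weakest
resilience = (0.45 * anomaly + 0.30 * intel + 0.20 * readiness
              + 0.10 * governance + rng.normal(scale=0.5, size=n))

def std_path_coefficients(X, y):
    """Standardised regression weights: a rough analogue of PLS-SEM
    structural path coefficients for a single endogenous construct."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

X = np.column_stack([anomaly, intel, readiness, governance])
paths = std_path_coefficients(X, resilience)  # one weight per pillar
```

Ranking the recovered weights reproduces the ordering the abstract reports (anomaly detection first), because standardised weights scale with each predictor's assumed contribution.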

37 pages, 8361 KB  
Article
A Proactive Resource Pre-Allocation Framework for Anti-Jamming in Field-Deployed Communication Networks: An Evidence Theory Approach
by Haotian Yu, Xin Guan and Lang Ruan
Electronics 2026, 15(4), 846; https://doi.org/10.3390/electronics15040846 - 16 Feb 2026
Viewed by 325
Abstract
This study addresses the challenge of anticipatory resource allocation in field-deployed communication networks under dynamic unmanned aerial vehicle jamming. In such scenarios, energy supply is severely constrained and cannot be replenished in real time, necessitating a one-time resource pre-allocation that must remain effective throughout the mission. To overcome these limitations, we propose a novel optimization framework consisting of four integrated components: (1) independent threat assessment via trajectory-coverage spatial mapping using digital elevation models and ray-tracing algorithms, (2) evidence-theoretic fusion of heterogeneous information sources, including objective intelligence data and subjective expert knowledge, (3) jamming distribution modeling through dedicated probability transformation algorithms for fixed-interval and continuous random jamming modes, and (4) decoupled resource-confidence optimization solved via convex programming. By employing evidence discount factors and Dempster's combination rule, the framework quantifies reliability disparities, integrates multiple heterogeneous sources, and uses theoretically derived, forward-computable models, combining binomial distributions, piecewise cubic Hermite interpolation, and uniform distribution assumptions, to efficiently convert threat basic probability assignments into jamming-duration probability density functions. Extensive Monte Carlo simulations demonstrate significant improvement in mission assurance metrics, with consistent performance under diverse uncertainties. The approach is also validated in cross-domain applications using Bohai rescue data, confirming its utility in resource-limited, highly uncertain environments. Full article
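The fusion step named in this abstract (evidence discounting followed by Dempster's combination rule) can be sketched directly. The frame of discernment, mass values, and discount factors below are illustrative assumptions, not figures from the paper:

```python
from itertools import product

def discount(m, alpha):
    """Shafer discounting: scale each mass by the reliability factor alpha
    and move the remaining 1 - alpha to the whole frame (total ignorance)."""
    theta = frozenset().union(*m)  # the frame of discernment
    d = {s: alpha * v for s, v in m.items()}
    d[theta] = d.get(theta, 0.0) + (1.0 - alpha)
    return d

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination of two basic probability
    assignments, normalised by the total conflicting mass."""
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2  # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sources assessing a jamming threat: J = "jammed", C = "clear"
J, C, theta = frozenset("J"), frozenset("C"), frozenset("JC")
intel  = {J: 0.7, C: 0.2, theta: 0.1}  # objective intelligence data
expert = {J: 0.5, C: 0.3, theta: 0.2}  # subjective expert knowledge
# Discount factors encode the assumed reliability disparity between sources
fused = dempster(discount(intel, 0.9), discount(expert, 0.8))
```

Because the intelligence source is discounted less, the fused assignment leans toward its verdict while retaining some mass on the full frame, which is the reliability-weighting behaviour the framework relies on.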
