
Search Results (74)

Search Parameters:
Keywords = audit log

26 pages, 2185 KB  
Article
Visually Sustainable but Spatially Broken? A Two-Level Assessment of How Generative AI Encodes Sustainable Urban Design Principles
by Sanghoon Jung
Sustainability 2026, 18(6), 2943; https://doi.org/10.3390/su18062943 - 17 Mar 2026
Abstract
Generative AI enables rapid visualization of sustainable urban design scenarios, yet the question of whether these outputs encode sustainability as operable spatial logic, rather than merely depicting it as a visual impression, remains underexplored. This study proposes a two-level assessment framework that scores the same sustainability dimensions at both the visual-representation level and the spatial-logic level, treating the systematic decoupling between the two as a form of visual greenwashing: system-induced representational distortion rather than deliberate misrepresentation. Using AI-workflow reports from two site-based urban design studios (47 students, 12 teams, 36 coded scenes), the framework integrates rubric-based scoring with qualitative process tracing of breakdown–repair logs. Results show that image-level scores consistently outperform logic-level scores across all five dimensions, with the gap most severe in mobility hierarchy and walkability and smallest in green/blue infrastructure. Case analysis reveals that breakdowns arise from failures in program encoding, urban-scale coherence, functional-boundary demarcation, and relational-condition matching, and that students deploy multi-stage repair pipelines, including prompt restructuring, tool switching, reference injection, and external-source compositing, to re-inject collapsed spatial logic. These findings reframe AI-assisted urban design as repair-centered workmanship rather than automated production. The study proposes three guardrails to prevent visual sustainability from substituting for spatial-logic sustainability: image–logic paired submission, design audit trail formalization, and gap-based red-flag review.
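The third guardrail lends itself to a simple sketch: score each sustainability dimension at both levels and flag those whose image–logic gap exceeds a threshold. The dimension names, 0–10 scale, and threshold below are illustrative assumptions, not the authors' rubric:

```python
def red_flags(image_scores, logic_scores, gap_threshold=2.0):
    """Flag dimensions where the image-level score exceeds the
    logic-level score by more than gap_threshold (hypothetical rule)."""
    return {
        dim: image_scores[dim] - logic_scores[dim]
        for dim in image_scores
        if image_scores[dim] - logic_scores[dim] > gap_threshold
    }

# Example: five dimensions scored 0-10 at both levels.
image = {"mobility": 8, "walkability": 9, "green_blue": 7,
         "density": 6, "mixed_use": 7}
logic = {"mobility": 4, "walkability": 5, "green_blue": 6,
         "density": 5, "mixed_use": 4}
print(red_flags(image, logic))  # mobility, walkability, mixed_use flagged
```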

36 pages, 5029 KB  
Article
Option-C Verified Semantic Digital Twins for Decarbonized, Pressure-Reliable Central Business District Hospitals
by Zhe Wei
Buildings 2026, 16(6), 1096; https://doi.org/10.3390/buildings16061096 - 10 Mar 2026
Abstract
Central business district (CBD) hospitals must sustain reliable pressure relationships in critical rooms while reducing whole-facility carbon under tight space and disruption constraints. We developed an ontology-grounded semantic digital twin that normalizes building automation system (BAS) and building management system (BMS) telemetry into a unified semantic store consistent with Brick Schema, enabling portable asset discovery via query and thereby supporting forecasting, anomaly detection, and multi-objective optimization without dependence on vendor point naming conventions. Whole-facility impacts were verified using International Performance Measurement and Verification Protocol Option C–style measurement and verification with an S0-calibrated baseline model and residual-based savings attribution. Relative to the baseline (S0), the intervention (S3) produced a step increase in the critical-room pressure-compliance pass rate, tighter room-to-corridor differential-pressure (ΔP) control across airborne infection isolation and open room strata, and intent-aligned ventilation delivery (air changes per hour ratio distribution concentrated near unity; p < 0.05 where letter groups differ). Operational-state discrimination improved (AUC 0.649→0.696) and issue-resolution times shortened (left-shifted cumulative distribution function), indicating reduced service burden. Option C verification showed energy residuals shifting negative under S3, consistent with net savings versus baseline expectations. Across progressive maturity (S0→S3), time-to-value and burden fractions decreased, carbon intensity (tCO₂e m⁻²) decreased, long-tail exposure compressed (log-scale horizon), and composite performance indices increased (p < 0.05). These results demonstrate a verifiable pathway to pressure-reliable, decarbonized hospital operations at the whole-facility boundary while making the semantic layer’s utility explicit through query-driven, ontology-grounded asset discovery.
We present an IPMVP Option-C–verifiable semantic digital-twin governance framework that links audited operational evidence (telemetry → actions → verification) to whole-facility energy and carbon outcomes while maintaining critical-room pressure-relationship reliability. Optimization benchmarking (including quantum annealing) is used as supporting decision-support evaluation, rather than as the central contribution.

26 pages, 409 KB  
Article
Unified Data Governance in Heterogeneous Database Environments: An API-Driven Architecture for Multi-Platform Policy Enforcement
by Maryam Abbasi, Paulo Váz, José Silva, Filipe Cardoso, Filipe Sá and Pedro Martins
Data 2026, 11(3), 54; https://doi.org/10.3390/data11030054 - 7 Mar 2026
Abstract
Modern organizations increasingly rely on heterogeneous database environments that combine relational, document-oriented, and key-value storage systems to optimize performance for diverse application requirements. However, this technological diversity creates significant challenges for implementing consistent data governance policies, regulatory compliance, and access control across disparate systems. Traditional governance approaches that operate within individual database silos fail to provide unified policy enforcement and create compliance gaps that expose organizations to regulatory and operational risks. This paper presents a novel API-driven architecture that enables unified data governance across heterogeneous database environments without requiring database-specific modifications or vendor lock-in. The proposed framework implements a centralized governance layer that coordinates policy enforcement across PostgreSQL, MongoDB, and Amazon DynamoDB systems through RESTful API interfaces. Key architectural components include differentiated access control through hierarchical API key management, automated compliance workflows for regulatory requirements such as GDPR, real-time audit trail generation, and comprehensive data quality monitoring with automated improvement mechanisms. Comprehensive experimental evaluation demonstrates the framework’s effectiveness across multiple operational dimensions. The system achieved 95.2% accuracy in access control enforcement across different data classification levels, while automated GDPR compliance workflows demonstrated 98.6% success rates with average processing times of 2.9 h. Performance evaluation reveals acceptable overhead characteristics with linear scaling patterns for PostgreSQL operations (R² = 0.89), consistent sub-20 ms response times for MongoDB logging operations, and sustained throughput rates ranging from 38.9 to 142.7 requests per second across the integrated system.
Data quality improvements ranged from 16.1% to 34.3% across accuracy, completeness, consistency, and timeliness dimensions over a 12-week monitoring period, with accuracy improving by 17.8 percentage points, completeness by 13.2 percentage points, consistency by 19.7 percentage points, and timeliness by 24.5 percentage points. The duplicate detection system achieved 94.6% precision and 95.6% recall across various duplicate types, including cross-database redundancy identification. The results demonstrate that API-driven governance architectures can effectively address the persistent challenges of policy fragmentation in multi-database environments while maintaining operational performance and enabling measurable improvements in data quality and regulatory compliance. The framework provides a practical migration path for organizations seeking to implement comprehensive governance capabilities without replacing existing database infrastructure investments.
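A centralized governance layer of the kind described (hierarchical API keys, per-backend policy, audit trail) can be caricatured in a few lines. All names, classification levels, and the key-level ordering are hypothetical; the sketch only shows the pattern of enforcing policy once, above the heterogeneous backends:

```python
class GovernanceLayer:
    """Central policy layer: backends register their permitted data
    classifications, and every access decision is logged in one place."""
    LEVELS = {"public": 0, "internal": 1, "confidential": 2}

    def __init__(self):
        self.backends = {}   # backend name -> allowed classifications
        self.audit_log = []  # unified audit trail across all backends

    def register(self, name, allowed_classifications):
        self.backends[name] = allowed_classifications

    def authorize(self, backend, api_key_level, classification):
        # Hierarchical keys: a key of level N may read classifications <= N.
        allowed = (classification in self.backends.get(backend, ())
                   and api_key_level >= self.LEVELS[classification])
        self.audit_log.append((backend, classification, allowed))
        return allowed
```

The same `authorize` call would front PostgreSQL, MongoDB, or DynamoDB alike, which is the point of the pattern: policy lives above the stores, not inside each one.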
(This article belongs to the Section Information Systems and Data Management)

22 pages, 641 KB  
Article
Risk-Based AI Assurance Framework
by Aoun E. Muhammad and Kin-Choong Yow
Information 2026, 17(3), 263; https://doi.org/10.3390/info17030263 - 5 Mar 2026
Abstract
The aim of this research is to present a risk-based AI assurance framework that produces quantifiable metrics for auditors and stakeholders to make deployment decisions with evidence-driven assurance of traceability, explainability, accountability, and reproducibility. Our proposed framework incorporates a risk-severity core with additional modifiers to accommodate the context, governance obligations, technical and environmental exposure, and residual risk relevant to the AI model. This multi-tiered technique enables stakeholders and governance teams to operationalize safe-deployment assurance. The final Assurance Adequacy Score (AAS) comprises a Governance Readiness Score (GRS) along with two additional indices that quantify the traceability and explainability of the AI model. The Traceability Adequacy Index (TAI) is calculated by evaluating attributes such as dataset and model versioning, pipeline logging, model audit completeness, and reproducibility. An Explainability Adequacy Index (EAI) is calculated by evaluating attributes such as the fidelity of local and global explanations, stability, faithfulness of the explanation provided, robustness, coverage, and human comprehension. This architecture integrates risk assessment and enables continued AI assurance through a bottleneck principle whereby the readiness of the AI model is confined by the weaker of the indices. Finally, a tiered gate mechanism is applied to the Assurance Adequacy Score to enforce minimum assurance floors for high-risk AI systems. The evaluation conducted on multi-domain AI models demonstrates the ability of the Risk-Based AI Assurance Framework (RBAAF) to yield stable and consistent readiness decisions under sensitivity analysis and re-scoring.
The use cases demonstrate that even comparable risk levels can lead to significantly different deployment outcomes depending on assurance maturity, and that design-specific improvements in traceability or explainability can shift gate outcomes. Combining governance regulations with standardized, quantifiable traceability and explainability scores enables stakeholders to evaluate an AI system for accountable, regulation-compliant deployment.
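The bottleneck principle and tiered gates reduce to a small amount of arithmetic. The sketch below assumes a plain min over GRS, TAI, and EAI and invents the floor values; the paper's actual combination rule and thresholds may differ:

```python
def assurance_adequacy_score(grs, tai, eai):
    """Bottleneck principle: readiness is confined by the weakest index.
    Taking the plain minimum is an illustrative assumption."""
    return min(grs, tai, eai)

def gate_decision(aas, risk_tier):
    """Tiered gate mechanism: higher-risk systems face higher assurance
    floors. The floor values here are hypothetical."""
    floors = {"low": 0.5, "medium": 0.65, "high": 0.8}
    return "deploy" if aas >= floors[risk_tier] else "hold"
```

Under this rule, a strong GRS cannot compensate for a weak EAI, which is exactly the behavior the bottleneck principle is meant to enforce.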

41 pages, 815 KB  
Article
XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(2), 43; https://doi.org/10.3390/jcp6020043 - 2 Mar 2026
Abstract
High-risk Artificial Intelligence (AI) systems deployed in cybersecurity and privacy-critical contexts must satisfy not only demanding performance targets but also stringent obligations for transparency, accountability, and human oversight under the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Existing approaches often treat these concerns in isolation as follows: Explainable Artificial Intelligence (XAI) methods are added ad hoc to machine learning pipelines, while governance and regulatory frameworks remain largely conceptual and weakly connected to the concrete artefacts produced in practice. This article proposes XAI-Compliance-by-Design, a modular framework that integrates XAI techniques, compliance-by-design principles and trustworthy Machine Learning Operations (MLOps) practices into a unified architecture for high-risk AI systems in cybersecurity and privacy domains. The framework follows a dual-flow design that couples an upstream technical pipeline (data, model, explanation, and monitoring) with a downstream governance pipeline (policy, oversight, audit, and decision-making), orchestrated by a Compliance-by-Design Engine and a technical–regulatory correspondence matrix aligned with the GDPR, the AI Act, and ISO/IEC 42001. The framework is instantiated and evaluated through an end-to-end, Python-based proof of concept using a synthetic, intrusion detection system (IDS)-inspired anomaly detection scenario with a Random Forest (RF) classifier, Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), drift indicators, and tamper-evident evidence bundles and decision dossiers. The results show that, even in a modest, toy setting, the framework systematically produces verifiable artefacts that support auditability and accountability across the model lifecycle. 
By linking explanation reports, drift statistics and compliance logs to concrete regulatory provisions, the approach illustrates how organisations operating high-risk AI for cybersecurity and privacy can move from model-centric optimisation to evidence-centric governance. The article discusses how the proposed framework can be generalised to real-world high-risk AI applications, contributing to the operationalisation of European digital sovereignty in AI governance. This article does not introduce a new intrusion detection algorithm; instead, it proposes an evidence-centric governance pipeline that captures decision provenance and compliance artefacts so that decisions can be audited and justified against regulatory obligations.
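Tamper-evident evidence bundles are commonly built as hash chains, and that idea can be sketched with the standard library alone (the record layout here is an assumption, not the paper's artefact schema):

```python
import hashlib
import json

def append_evidence(chain, artefact):
    """Append an artefact (e.g. an explanation report or drift statistic)
    to a hash-chained log; editing any earlier record breaks every
    subsequent hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"artefact": artefact, "prev": prev},
                   sort_keys=True).encode()).hexdigest()
    chain.append({"artefact": artefact, "prev": prev, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash from the chain head; False on any tampering."""
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256(
            json.dumps({"artefact": rec["artefact"], "prev": prev},
                       sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

Canonical JSON (`sort_keys=True`) keeps the digest stable regardless of dict insertion order, which matters when evidence is re-serialized during audits.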
(This article belongs to the Section Security Engineering & Applications)

21 pages, 372 KB  
Review
Open-Source Large Language Models in Education: A Narrative Review of Evidence, Pedagogical Roles, and Learning Outcomes
by Michael Pin-Chuan Lin, Jing-Yuan Huang, Daniel H. Chang, Gerald Tembrevilla, G. Michael Bowen, Eric Poitras, Vasudevan Janarthanan and Jeeho Ryoo
AI Educ. 2026, 2(1), 4; https://doi.org/10.3390/aieduc2010004 - 27 Feb 2026
Abstract
Open-source large language models (LLMs) are increasingly explored in educational contexts due to their transparency, adaptability, and alignment with institutional governance and equity considerations. Despite growing interest, empirical research on how open-source LLMs are deployed in education and what evidence currently supports their integration remains limited and fragmented. This paper presents a state-of-the-art narrative review of peer-reviewed, human empirical studies examining the use of open-source LLMs in education. Guided by three questions, the review synthesizes how open-source LLMs are deployed across instructional contexts, what learner-related evidence is reported, and how teachers engage in human–AI collaboration. The reviewed literature is concentrated in higher education, particularly within computer science and programming domains, with applications focused on post-class tutoring, guidance, and formative feedback. Learner perceptions are generally positive, but evidence linking open-source LLM use to measurable learning outcomes remains emerging and inconsistent. Through interpretive synthesis, the review articulates a four-role model—Designer, Facilitator, Monitor, and Evaluator—that captures how teacher agency is enacted across AI-supported instructional workflows. This review maps recurring orchestration dimensions, decision points, and tensions that characterize early implementations, and it proposes a minimal orchestration reporting scaffold (configuration, boundaries, logging, adjudication) intended to support auditability and cross-study comparison as the empirical base develops.
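The proposed reporting scaffold (configuration, boundaries, logging, adjudication) could be carried as a small record type; the field contents below are illustrative, not the review's schema:

```python
from dataclasses import dataclass, field

@dataclass
class OrchestrationReport:
    """Minimal reporting scaffold: what was configured, what the LLM was
    and was not allowed to decide, what happened, and how disagreements
    between human and AI were resolved."""
    configuration: dict                          # model, version, prompts
    boundaries: dict                             # scope of LLM authority
    logging: list = field(default_factory=list)  # per-interaction records
    adjudication: str = ""                       # disagreement-resolution rule

    def log(self, event):
        self.logging.append(event)
```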

17 pages, 321 KB  
Article
Algorithmic Profiling of Operational Risk: A Data-Driven Predictive Model for Micro-Enterprise Solvency Assessment
by Jazmín Pérez-Salazar, Nicolás Márquez and Cristian Vidal-Silva
Computers 2026, 15(2), 135; https://doi.org/10.3390/computers15020135 - 22 Feb 2026
Abstract
The persistent financial exclusion of micro-enterprises is fundamentally driven by information asymmetry, as traditional credit scoring models rely heavily on audited financial statements that small entities rarely possess. To address this “thin-file” challenge, this study proposes a shift from asset-based valuation to behavioral algorithmic profiling, hypothesizing that high-frequency operational risk patterns can serve as more informative proxies for solvency than static liquidity ratios. Using an Extreme Gradient Boosting (XGBoost) architecture on a synthetic dataset of 5000 micro-enterprise transaction logs, we develop a predictive framework that extracts latent features such as supply chain latency, inventory turnover consistency, and digital footprint intensity. The proposed model achieves an Area Under the Curve (AUC) of 0.94, outperforming traditional linear baselines and achieving performance levels above those commonly reported in micro-enterprise solvency prediction studies. The results indicate that operational stability emerges as a strong indicator of repayment capacity within the evaluated context, outperforming static liquidity-based measures. These findings suggest that computational intelligence approaches grounded in high-frequency operational data may contribute to mitigating information asymmetries in micro-enterprise credit assessment, particularly in environments characterized by limited financial disclosure, although further empirical validation is required prior to large-scale deployment.

21 pages, 1499 KB  
Article
A Conceptual Framework for Sustainable Pollution Control in Informal Economies with Generative AI
by Akira Nagamatsu, Yuji Tou and Chihiro Watanabe
Sustainability 2026, 18(3), 1703; https://doi.org/10.3390/su18031703 - 6 Feb 2026
Abstract
Intangible environmental externalities in informal economies are hard to detect, attribute, and regulate because transaction records and evidentiary trails are fragmented. This conceptual paper reframes pollution control from improving model performance to designing institutions for verifiability and examines how generative AI (GAI) can both strengthen and undermine that verifiability. Integrating transaction-structure theory, institutional economics, and digital-governance research, we derive four propositions: (P1) standardized, interoperable evidence and hybrid auditing allow GAI to lower verification costs; (P2) opaque, multi-tier transactions and concentrated data control enable plausible falsification; (P3) detection reduces pollution only when linked to remediation through enforcement capacity; and (P4) incentives must reward verified, not merely claimed, circularity to deter greenwashing. We illustrate feasibility and boundary conditions through three precedents: Amazon’s unit-level identifiers and sustainability labeling, India’s CPCB extended producer responsibility portal for plastic packaging, and Brazil’s nationwide e-invoicing infrastructure (NF-e/SPED). The framework offers actionable design principles, testable hypotheses, and measurable indicators (evidence linkage, audit-log completeness, time-to-remediation) for future empirical work. The framework is intended to support analytic generalization for policy and practice across contexts.
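Two of the proposed indicators are directly computable once evidence records exist. The field names and time units below are assumptions for illustration:

```python
from statistics import median

def audit_log_completeness(required_fields, records):
    """Share of audit records that carry every required evidence field."""
    complete = sum(all(r.get(f) is not None for f in required_fields)
                   for r in records)
    return complete / len(records)

def median_time_to_remediation(detected, remediated):
    """Median delay (e.g. in days) between detection of a violation and
    its verified remediation."""
    return median(r - d for d, r in zip(detected, remediated))
```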

31 pages, 11526 KB  
Review
Transferability and Robustness in Proximal and UAV Crop Imaging
by Jayme Garcia Arnal Barbedo
Agronomy 2026, 16(3), 364; https://doi.org/10.3390/agronomy16030364 - 2 Feb 2026
Abstract
AI-driven imaging is becoming central to crop monitoring, with proximal and unmanned aerial vehicle (UAV) platforms now routinely used for disease and stress detection, yield estimation, canopy structure, and fruit counting. Yet, as these models move from plots to farms, the main bottleneck is no longer raw accuracy but robustness under distribution shift. Systems trained in one field, season, cultivar, or sensor often fail when the scene, sensor, protocol, or timing changes in realistic ways. This review synthesizes recent advances on robustness and transferability in proximal and UAV imaging, drawing on a corpus of 42 core studies across field crops, orchards, greenhouse environments, and multi-platform phenotyping. Shift types are organized into four axes, namely scene, sensor, protocol, and time. The article also maps the empirical evidence on when RGB imaging alone is sufficient and when multispectral, hyperspectral, or thermal modalities can potentially improve robustness. This serves as a basis to synthesize acquisition and evaluation practices that often matter more than architectural tweaks, which include phenology-aware flight planning, radiometric standardization, metadata logging, and leave-one-field/season-out splits. Adaptation options are consolidated into a practical symptom/remedy roadmap, ranging from lightweight normalization and small target-set fine-tuning to feature alignment, unsupervised domain adaptation, style translation, and test-time updates. Finally, a benchmark and dataset agenda are outlined with emphasis on object-oriented splits, cross-sensor and cross-scale collections, and longitudinal datasets where the same fields are followed across seasons under different management regimes. 
The goal is to outline practices and evaluation protocols that support progress toward deployable and auditable systems, noting that such claims require standardized out-of-distribution testing and transparent reporting as emphasized in the benchmark specification and experiment suite proposed here.
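A leave-one-field-out split, one of the evaluation practices recommended above, is straightforward to implement; the record layout (field identifier first) is an assumption:

```python
def leave_one_field_out(samples):
    """Yield (held_out_field, train_indices, test_indices) splits where
    each field is held out entirely, so evaluation measures transfer to
    unseen fields rather than within-field accuracy."""
    fields = sorted({s[0] for s in samples})
    for held_out in fields:
        train = [i for i, s in enumerate(samples) if s[0] != held_out]
        test = [i for i, s in enumerate(samples) if s[0] == held_out]
        yield held_out, train, test
```

The same pattern gives leave-one-season-out splits by keying on the season identifier instead of the field identifier.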

23 pages, 8113 KB  
Article
Estimating H I Mass Fraction in Galaxies with Bayesian Neural Networks
by Joelson Sartori, Cristian G. Bernal and Carlos Frajuca
Galaxies 2026, 14(1), 10; https://doi.org/10.3390/galaxies14010010 - 2 Feb 2026
Abstract
Neutral atomic hydrogen (H I) regulates galaxy growth and quenching, but direct 21 cm measurements remain observationally expensive and affected by selection biases. We develop Bayesian neural networks (BNNs)—a type of neural model that returns both a prediction and an associated uncertainty—to infer the H I mass, log10(MHI), from widely available optical properties (e.g., stellar mass, apparent magnitudes, and diagnostic colors) and simple structural parameters. For continuity with the photometric gas fraction (PGF) literature, we also report the gas-to-stellar-mass ratio, log10(G/S), where explicitly noted. Our dataset is a reproducible cross-match of SDSS DR12, the MPA–JHU value-added catalogs, and the 100% ALFALFA release, resulting in 31,501 galaxies after quality controls. To ensure fair evaluation, we adopt fixed train/validation/test partitions and an additional sky-holdout region to probe domain shift, i.e., how well the model extrapolates to sky regions that were not used for training. We also audit features to avoid information leakage and benchmark the BNNs against deterministic models, including a feed-forward neural network baseline and gradient-boosted trees (GBTs, a standard tree-based ensemble method in machine learning). Performance is assessed using mean absolute error (MAE), root-mean-square error (RMSE), and probabilistic diagnostics such as the negative log-likelihood (NLL, a loss that rewards models that assign high probability to the observed H I masses), reliability diagrams (plots comparing predicted probabilities to observed frequencies), and empirical 68%/95% coverage. The Bayesian models achieve point accuracy comparable to the deterministic baselines while additionally providing calibrated prediction intervals that adapt to stellar mass, surface density, and color. 
This enables galaxy-by-galaxy uncertainty estimation and prioritization for 21 cm follow-up that explicitly accounts for predicted uncertainties (“risk-aware” target selection). Overall, the results demonstrate that uncertainty-aware machine-learning methods offer a scalable and reproducible route to inferring galactic H I content from widely available optical data.
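The empirical 68%/95% coverage diagnostic mentioned above has a compact definition: the fraction of true values falling inside the ±1σ and ±2σ predictive intervals, which should approach 0.68 and 0.95 for a well-calibrated model. A sketch, assuming Gaussian predictive intervals:

```python
def empirical_coverage(y_true, y_pred, y_sigma):
    """Return the fraction of targets inside the ±1-sigma and ±2-sigma
    predictive intervals; calibrated BNNs should land near (0.68, 0.95)."""
    n = len(y_true)
    in1 = sum(abs(t - m) <= s
              for t, m, s in zip(y_true, y_pred, y_sigma))
    in2 = sum(abs(t - m) <= 2 * s
              for t, m, s in zip(y_true, y_pred, y_sigma))
    return in1 / n, in2 / n
```

Coverage well below the nominal levels signals overconfident intervals; well above, overly cautious ones.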

55 pages, 2886 KB  
Article
Hybrid AI and LLM-Enabled Agent-Based Real-Time Decision Support Architecture for Industrial Batch Processes: A Clean-in-Place Case Study
by Apolinar González-Potes, Diego Martínez-Castro, Carlos M. Paredes, Alberto Ochoa-Brust, Luis J. Mena, Rafael Martínez-Peláez, Vanessa G. Félix and Ramón A. Félix-Cuadras
AI 2026, 7(2), 51; https://doi.org/10.3390/ai7020051 - 1 Feb 2026
Abstract
A hybrid AI and LLM-enabled architecture is presented for real-time decision support in industrial batch processes, where supervision still relies heavily on human operators and ad hoc SCADA logic. Unlike algorithmic contributions proposing novel AI methods, this work addresses the practical integration and deployment challenges arising when applying existing AI techniques to safety-critical industrial environments with legacy PLC/SCADA infrastructure and real-time constraints. The framework combines deterministic rule-based agents, fuzzy and statistical enrichment, and large language models (LLMs) to support monitoring, diagnostic interpretation, preventive maintenance planning, and operator interaction with minimal manual intervention. High-frequency sensor streams are collected into rolling buffers per active process instance; deterministic agents compute enriched variables, discrete supervisory states, and rule-based alarms, while an LLM-driven analytics agent answers free-form operator queries over the same enriched datasets through a conversational interface. The architecture is instantiated and deployed in the Clean-in-Place (CIP) system of an industrial beverage plant and evaluated following a case study design aimed at demonstrating architectural feasibility and diagnostic behavior under realistic operating regimes rather than statistical generalization. Three representative multi-stage CIP executions—purposively selected from 24 runs monitored during a six-month deployment—span nominal baseline, preventive-warning, and diagnostic-alert conditions. The study quantifies stage-specification compliance, state-to-specification consistency, and temporal stability of supervisory states, and performs spot-check audits of numerical consistency between language-based summaries and enriched logs. 
Results in the evaluated CIP deployment show a high proportion of time within specification in sanitizing stages (100% compliance across the evaluated runs), coherent and mostly stable supervisory states in variable alkaline conditions (state-specification consistency Γs ≥ 0.98), and data-grounded conversational diagnostics in real time (median numerical error below 3% in audited samples), without altering the existing CIP control logic. These findings suggest that the architecture can be transferred to other industrial cleaning and batch operations by reconfiguring process-specific rules and ontologies, though empirical validation in other process types remains future work. The contribution lies in demonstrating how to bridge the gap between AI theory and industrial practice through careful system architecture, data transformation pipelines, and integration patterns that enable reliable AI-enhanced decision support in production environments, offering a practical path toward AI-assisted process supervision with explainable conversational interfaces that support preventive maintenance decision-making and equipment health monitoring.
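The rolling-buffer-plus-rule-agent pattern can be sketched as follows; the stage specification, window length, and escalation rule (warn on any excursion, alert when the whole window is out of spec) are illustrative assumptions, not the deployed CIP rules:

```python
from collections import deque

class StageMonitor:
    """Rolling buffer per active process instance; a deterministic rule
    agent escalates from nominal to preventive-warning to diagnostic-alert
    as readings drift outside the stage specification."""
    def __init__(self, low, high, window=5):
        self.low, self.high = low, high
        self.buffer = deque(maxlen=window)

    def push(self, reading):
        self.buffer.append(reading)
        out = [r for r in self.buffer if not (self.low <= r <= self.high)]
        if len(self.buffer) == self.buffer.maxlen and len(out) == len(self.buffer):
            return "diagnostic-alert"    # entire window out of spec
        if out:
            return "preventive-warning"  # transient excursion
        return "nominal"
```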

15 pages, 712 KB  
Article
Stage-Aware Governance of Large Language Models: Managing Uncertainty and Human Oversight in AI-Assisted Literature Review Systems
by Junic Kim and Haeyong Shin
Systems 2026, 14(2), 153; https://doi.org/10.3390/systems14020153 - 31 Jan 2026
Viewed by 543
Abstract
This study proposes a stage-aware governance framework for large language models (LLMs) that structures human oversight and accountability across different decision stages in AI-assisted literature review systems. LLMs are increasingly embedded in systematic review workflows, yet how human oversight and accountability should be structured across decision stages remains unclear. This study evaluates three LLMs in a controlled two-stage literature review workflow—title-and-abstract screening and eligibility assessment—using identical evidence inputs and fixed inclusion criteria, with outputs benchmarked against expert consensus under fully reproducible conditions with standardized prompts and comprehensive logging. While the LLMs closely matched expert decisions during screening (precision 0.83–0.91; F1 up to 0.89; Cohen’s κ 0.65–0.85), performance degraded substantially at the eligibility stage (F1 0.58–0.65; κ 0.52–0.62), indicating increased epistemic uncertainty when fine-grained criteria must be inferred from abstract-level information. Importantly, disagreements clustered in borderline cases rather than reflecting random error, supporting a stage-aware governance approach in which LLMs automate high-throughput screening while inter-model disagreement is operationalized as an actionable uncertainty signal that triggers human oversight in more consequential decision stages. These findings highlight the need for explicit oversight thresholds, responsibility allocation, and auditability in the responsible deployment of AI-assisted decision systems for evidence synthesis. Full article
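The core governance mechanism, using inter-model disagreement as an uncertainty signal that routes records to a human reviewer, can be sketched as a simple voting gate. The model names and vote labels below are hypothetical placeholders, not the study's actual models or prompts.

```python
# Illustrative sketch of disagreement-gated routing: records on which the
# models disagree are escalated to a human reviewer instead of being
# auto-decided. Model names ("m1".."m3") and labels are hypothetical.

def route_decision(votes):
    """votes: dict mapping model name -> 'include' / 'exclude'.
    Returns ('auto', label) on unanimity, else ('human', None)."""
    labels = set(votes.values())
    if len(labels) == 1:
        return ("auto", labels.pop())
    return ("human", None)

screening = [
    {"m1": "include", "m2": "include", "m3": "include"},
    {"m1": "include", "m2": "exclude", "m3": "include"},  # borderline case
]
decisions = [route_decision(v) for v in screening]
# The first record is auto-included; the second is escalated to a human.
```

A stage-aware deployment would apply a stricter gate (e.g. escalating any non-unanimous record) at the eligibility stage, where the study observed the largest performance degradation, while tolerating automation at high-throughput screening.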
(This article belongs to the Special Issue Ethics and Governance of Artificial Intelligence (AI) Systems)

21 pages, 9102 KB  
Article
A Lightweight Edge AI Framework for Adaptive Traffic Signal Control in Mid-Sized Philippine Cities
by Alex L. Maureal, Franch Maverick A. Lorilla and Ginno L. Andres
Sustainability 2026, 18(3), 1147; https://doi.org/10.3390/su18031147 - 23 Jan 2026
Viewed by 714
Abstract
Mid-sized Philippine cities commonly rely on fixed-time traffic signal plans that cannot respond to short-term, demand-driven surges, resulting in measurable idle time at stop lines, increased delay, and unnecessary emissions. While adaptive signal control has demonstrated performance benefits, many existing solutions depend on centralized infrastructure and high-bandwidth connectivity, limiting their applicability for resource-constrained local government units (LGUs). This study reports a field deployment of TrafficEZ, a lightweight edge AI signal controller that reallocates green splits locally using traffic-density approximations derived from cabinet-mounted cameras. The controller follows a macroscopic, cycle-level control abstraction consistent with Transportation System Models (TSMs) and does not rely on stationary flow–density–speed (fundamental diagram) assumptions. The system estimates queued demand and discharge efficiency on-device and updates green time each cycle without altering cycle length, intergreen intervals, or pedestrian safety timings. A quasi-experimental pre–post evaluation was conducted at three signalized intersections in El Salvador City using an existing 125 s, three-phase fixed-time plan as the baseline. Observed field results show average per-vehicle delay reductions of 18–32%, with reclaimed effective green translating into approximately 50–200 additional vehicles per hour served at the busiest approaches. Box-occupancy durations shortened, indicating reduced spillback risk, while conservative idle-time estimates imply corresponding CO2 savings during peak periods. Because all decisions run locally within the signal cabinet, operation remained robust during backhaul interruptions and supported incremental, intersection-by-intersection deployment; per-cycle actions were logged to support auditability and governance reporting.
These findings demonstrate that density-driven edge AI can deliver practical mobility, reliability, and sustainability gains for LGUs while supporting evidence-based governance and performance reporting. Full article
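The abstract's control abstraction, redistributing green time among phases in proportion to observed density while keeping cycle length, intergreens, and minimum greens fixed, can be sketched as follows. The parameter names and values (intergreen and minimum-green durations, density inputs) are assumptions for illustration; only the 125 s cycle comes from the text, and this is not the TrafficEZ algorithm itself.

```python
# Minimal sketch of cycle-level green reallocation. The fixed cycle length
# and intergreen times are preserved; only the split of the available green
# budget among phases changes, proportionally to per-phase density estimates.
# INTERGREEN_S and MIN_GREEN_S are assumed values, not from the deployment.
# Assumes the budget is at least n * MIN_GREEN_S.

CYCLE_S = 125          # fixed cycle length from the baseline plan
INTERGREEN_S = 5       # per-phase intergreen (amber + all-red), kept fixed
MIN_GREEN_S = 10       # safety floor per phase, kept fixed

def reallocate_green(densities):
    """densities: per-phase queue-density estimates (e.g. from a camera).
    Returns per-phase green times summing to the available green budget."""
    n = len(densities)
    budget = CYCLE_S - n * INTERGREEN_S
    total = sum(densities) or n  # all-zero densities fall back to even split
    raw = [budget * d / total for d in densities]
    # Enforce the minimum green, then shave the excess off phases with slack.
    greens = [max(MIN_GREEN_S, g) for g in raw]
    excess = sum(greens) - budget
    if excess > 0:
        slack = [g - MIN_GREEN_S for g in greens]
        total_slack = sum(slack)
        greens = [g - excess * s / total_slack for g, s in zip(greens, slack)]
    return greens

greens = reallocate_green([0.6, 0.3, 0.1])
# Greens sum to the budget: cycle length and intergreens are untouched.
assert abs(sum(greens) + 3 * INTERGREEN_S - CYCLE_S) < 1e-6
```

Keeping the reallocation at cycle granularity is what lets the controller run entirely inside the signal cabinet: each cycle needs only the latest density estimates, with no backhaul dependency.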

11 pages, 370 KB  
Communication
Engineering Explainable AI Systems for GDPR-Aligned Decision Transparency: A Modular Framework for Continuous Compliance
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(1), 7; https://doi.org/10.3390/jcp6010007 - 30 Dec 2025
Viewed by 1107
Abstract
Explainability is increasingly expected to support not only interpretation, but also accountability, human oversight, and auditability in high-risk Artificial Intelligence (AI) systems. However, in many deployments, explanations are generated as isolated technical reports, remaining weakly connected to decision provenance, governance actions, audit logs, and regulatory documentation. This short communication introduces XAI-Compliance-by-Design, a modular engineering framework for explainable artificial intelligence (XAI) systems that routes explainability outputs and related technical traces into structured, audit-ready evidence throughout the AI lifecycle, designed to align with key obligations under the European Union Artificial Intelligence Act (EU AI Act) and the General Data Protection Regulation (GDPR). The framework specifies (i) a modular architecture that separates technical evidence generation from governance consumption through explicit interface points for emitting, storing, and querying evidence, and (ii) a Technical–Regulatory Correspondence Matrix—a mapping table linking regulatory anchors to concrete evidence artefacts and governance triggers. As this communication does not report measured results, it also introduces an Evidence-by-Design evaluation protocol defining measurable indicators, baseline configurations, and required artefacts to enable reproducible empirical validation in future work. Overall, the contribution is a practical blueprint that clarifies what evidence must be produced, where it is generated in the pipeline, and how it supports continuous compliance and auditability efforts without relying on post hoc explanations. Full article
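The Technical–Regulatory Correspondence Matrix idea, mapping each regulatory anchor to the evidence artefacts and governance triggers expected for it, can be illustrated as a small lookup structure. The anchor names, artefact types, and triggers below are illustrative guesses at plausible entries, not the framework's actual schema.

```python
# Illustrative sketch of a correspondence matrix: each regulatory anchor
# maps to the audit-ready evidence artefacts it requires and the governance
# action it triggers. Entries are hypothetical examples, not the paper's.

MATRIX = {
    "GDPR Art. 22 (automated decisions)": {
        "evidence": ["per-decision explanation record", "decision provenance log"],
        "trigger": "human review on contestation",
    },
    "EU AI Act Art. 12 (record-keeping)": {
        "evidence": ["audit log entry", "model version hash"],
        "trigger": "periodic compliance export",
    },
}

def evidence_for(anchor):
    """Return the artefacts an audit would expect for a regulatory anchor."""
    return MATRIX[anchor]["evidence"]

def anchors_missing_evidence(produced):
    """Given the set of artefact types a pipeline actually emits, list the
    anchors whose evidence obligations are not fully covered."""
    return [a for a, row in MATRIX.items()
            if not set(row["evidence"]) <= set(produced)]
```

A check like `anchors_missing_evidence` is what turns the matrix from documentation into continuous compliance: a gap between artefacts produced and artefacts required becomes a machine-detectable governance trigger rather than a post hoc finding.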
(This article belongs to the Special Issue Data Protection and Privacy)
Show Figures

Figure 1

38 pages, 5997 KB  
Article
Blockchain-Enhanced Network Scanning and Monitoring (BENSAM) Framework
by Syed Wasif Abbas Hamdani, Kamran Ali and Zia Muhammad
Blockchains 2026, 4(1), 1; https://doi.org/10.3390/blockchains4010001 - 26 Dec 2025
Viewed by 525
Abstract
In recent years, the convergence of advanced technologies has enabled real-time data access and sharing across diverse devices and networks, significantly amplifying cybersecurity risks. For organizations with digital infrastructures, network security is crucial for mitigating potential cyber-attacks. Such organizations establish security policies to protect systems and data, but employees may intentionally or unintentionally bypass these policies, rendering the network vulnerable to internal and external threats. Detecting these policy violations is challenging, requiring frequent manual system checks for compliance. This paper addresses key challenges in safeguarding digital assets against evolving threats, including rogue access points, man-in-the-middle attacks, denial-of-service (DoS) incidents, unpatched vulnerabilities, and AI-driven automated exploits. We propose the Blockchain-Enhanced Network Scanning and Monitoring (BENSAM) Framework, a multi-layered system that integrates advanced network scanning with a structured database for asset management, policy-driven vulnerability detection, and remediation planning. Key enhancements include device profiling, user activity monitoring, network forensics, intrusion detection capabilities, and multi-format report generation. By incorporating blockchain technology and leveraging immutable ledgers and smart contracts, the framework ensures tamper-proof audit trails, decentralized verification of policy compliance, and automated real-time responses to violations such as alerts; actual device isolation is performed by external controllers such as SDN or NAC systems. The research provides a detailed literature review on blockchain applications in domains such as IoT, healthcare, and vehicular networks. A working prototype of the proposed BENSAM framework was developed that demonstrates end-to-end network scanning, device profiling, traffic monitoring, policy enforcement, and blockchain-based immutable logging.
This implementation is publicly released and is available on GitHub. It analyzes common network vulnerabilities (e.g., open ports, remote access, and disabled firewalls), attacks (including spoofing, flooding, and DDoS), and outlines policy enforcement methods. Moreover, the framework anticipates emerging challenges from AI-driven attacks such as adversarial evasion, data poisoning, and transformer-based threats, positioning the system for the future integration of adaptive mechanisms to counter these advanced intrusions. This blockchain-enhanced approach streamlines security analysis, extends the framework for AI threat detection with improved accuracy, and reduces administrative overhead by integrating multiple security tools into a cohesive, trustworthy, reliable solution. Full article
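The tamper-proof audit trail concept at the heart of BENSAM can be illustrated with a minimal hash chain: each log entry commits to the hash of its predecessor, so any retroactive edit breaks verification. This is a sketch of the general technique, using assumed event fields, not the framework's actual blockchain or smart-contract implementation.

```python
import hashlib
import json

# Minimal sketch of tamper-evident logging: each entry embeds the hash of
# the previous entry, so editing any past record invalidates the chain.
# Event fields ("device", "violation") are hypothetical examples.

class AuditChain:
    def __init__(self):
        self.entries = []          # list of (record, digest) pairs
        self.last_hash = "0" * 64  # genesis sentinel

    @staticmethod
    def _digest(record):
        payload = json.dumps(record, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def append(self, event):
        record = {"event": event, "prev": self.last_hash}
        digest = self._digest(record)
        self.entries.append((record, digest))
        self.last_hash = digest

    def verify(self):
        """Recompute every hash; any edit or reordering returns False."""
        prev = "0" * 64
        for record, digest in self.entries:
            if record["prev"] != prev or self._digest(record) != digest:
                return False
            prev = digest
        return True

log = AuditChain()
log.append({"device": "10.0.0.5", "violation": "firewall disabled"})
log.append({"device": "10.0.0.9", "violation": "open port 23"})
assert log.verify()
log.entries[0][0]["event"]["violation"] = "none"  # simulated tampering
assert not log.verify()
```

A blockchain deployment replaces the single in-process chain with a replicated ledger and smart-contract validation, so that no single administrator can rewrite the violation history.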
