Search Results (361)

Search Parameters:
Keywords = enforcement gap

22 pages, 2649 KB  
Article
Operational Anomaly Screening in Permanent Basic Farmland Using Optimized Remote Sensing Semantic Segmentation: Implications for Sustainable Land Stewardship
by Jianwen Wang, Yujie Wang, Jiahao Cheng, Caiyun Gao, Wei Rong, Nan Wang and Jian Hu
Sustainability 2026, 18(9), 4292; https://doi.org/10.3390/su18094292 (registering DOI) - 26 Apr 2026
Abstract
Cropland protection enforcement is central to food security and sustainable land management, yet small-scale encroachments within Permanent Basic Farmland (PBF) boundaries frequently evade conventional field surveys and reactive inspection regimes. Existing remote sensing approaches rely mainly on comprehensive land-cover classification or bi-temporal change detection, which often generate alerts beyond the regulatory scope and require annotation efforts that limit county-scale deployment. To address this gap, this study reframes PBF monitoring as a boundary-constrained anomaly screening task, defined as the detection of surface conditions that deviate from expected cultivation norms within legally defined parcels. To operationalise this task, we adapt a DeepLabv3+-based segmentation pipeline by incorporating an auxiliary edge branch and a composite loss to improve sensitivity to minority-class anomalies and preserve fragmented parcel boundaries. The model is trained on the LoveDA dataset and evaluated in Mancheng District, Hebei Province, China, without site-specific fine-tuning. Multi-temporal imagery from 2021 to 2023 is further used as a post hoc consistency check to distinguish persistent anomalies from transient surface conditions, rather than to model temporal dynamics explicitly. Cross-regional zero-shot evaluation further examines model robustness under heterogeneous environmental conditions. Benchmarked against five comparison architectures, the adapted pipeline achieves a Recall of 61.25%, representing a 10.24 percentage-point improvement over DeepLabv3+ and expanding the set of candidate encroachments for field verification. This result should be interpreted in terms of screening sensitivity rather than overall segmentation optimisation. The outputs are intended as preliminary screening leads that support, rather than replace, expert review. The principal contribution of this study therefore lies in reframing PBF monitoring as an operational anomaly-screening task aligned with enforcement needs, rather than in proposing a fundamentally new segmentation architecture.
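The composite loss mentioned in the abstract above is not specified in this listing; a minimal numpy sketch of one common formulation (class-weighted cross-entropy blended with soft Dice, which upweights rare anomaly pixels and so favours recall) might look as follows. The weights, `alpha`, and array shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def composite_loss(probs, target, class_weights, alpha=0.5, eps=1e-6):
    """Hypothetical composite segmentation loss: class-weighted cross-entropy
    blended with soft Dice. Upweighting the rare class trades precision for
    recall, matching the screening-sensitivity framing of the abstract.

    probs         : (C, H, W) predicted per-class probabilities
    target        : (H, W) integer class labels
    class_weights : (C,) per-class weights (large for the anomaly class)
    """
    C = probs.shape[0]
    onehot = np.eye(C)[target].transpose(2, 0, 1)            # (C, H, W)
    # weighted cross-entropy, averaged over pixels
    ce = -(class_weights[:, None, None] * onehot
           * np.log(probs + 1e-12)).sum(0).mean()
    # soft Dice per class, then averaged over classes
    inter = (probs * onehot).sum(axis=(1, 2))
    dice = (2 * inter + eps) / (probs.sum(axis=(1, 2))
                                + onehot.sum(axis=(1, 2)) + eps)
    return alpha * ce + (1 - alpha) * (1 - dice.mean())

# toy 2x2 mask: class 1 is the rare "anomaly" class
target = np.array([[0, 0], [0, 1]])
weights = np.array([1.0, 5.0])
perfect = np.eye(2)[target].transpose(2, 0, 1)               # ideal prediction
uniform = np.full((2, 2, 2), 0.5)                            # uninformative
assert composite_loss(perfect, target, weights) < composite_loss(uniform, target, weights)
```

A perfect prediction drives both terms to (near) zero, while an uninformative one is penalised most heavily on the upweighted anomaly class.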
24 pages, 1531 KB  
Article
SS-RIME: A Scale-Stabilized Approach to EEG Cognitive Workload Classification
by Kais Khaldi, Afrah Alanazi, Inam Alanazi, Sahar Almenwer and Anis Mohamed
Sensors 2026, 26(9), 2679; https://doi.org/10.3390/s26092679 (registering DOI) - 25 Apr 2026
Abstract
Accurate and interpretable assessment of cognitive workload from EEG remains a central challenge in neuroergonomics and real-time human–machine interaction. To address the limitations of existing Empirical Mode Decomposition (EMD) and Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) approaches, particularly their instability, limited neuroscientific grounding, and sensitivity to amplitude fluctuations, this paper introduces Scale-Stabilized Relative Intrinsic Mode Energy (SS-RIME), a theoretically motivated and physiologically informed feature extraction framework. SS-RIME integrates instantaneous frequency stabilization to enforce a consistent oscillatory hierarchy across subjects, delta (1–4 Hz) and theta (4–7.5 Hz) spectral weighting based on established frontal-midline activity, and cross-IMF energy normalization to reduce amplitude-driven variability. Applied to 64-channel EEG recorded during N-back tasks, the proposed framework achieved high performance, outperforming both classical machine-learning baselines and deep learning models such as EEGNet, DeepConvNet, and ShallowConvNet. SS-RIME yielded accuracies of 99.12±0.41% (0 vs. 2-back), 97.84±0.63% (0 vs. 3-back), and 92.31±1.12% (2 vs. 3-back), demonstrating strong cross-subject generalization. Theta-dominant IMFs over frontal midline regions emerged as the most discriminative components, supporting the neuroscientific validity of the stabilized and spectrally weighted Hilbert–Huang representation. With an inference time below 20 ms per epoch, SS-RIME is computationally efficient and suitable for real-time neuroergonomics applications, providing a robust, explainable, and physiologically grounded solution for EEG-based cognitive workload decoding while addressing key methodological gaps in prior EMD/CEEMDAN and deep learning approaches.
(This article belongs to the Section Intelligent Sensors)
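As a rough illustration of the cross-IMF energy normalization idea in the SS-RIME abstract (this is not the authors' pipeline: the decomposition, frequency stabilization, and delta/theta weighting values are omitted or invented here), relative intrinsic mode energies can be computed as:

```python
import numpy as np

def relative_imf_energy(imfs, weights=None):
    """Energy of each decomposition component, normalized across components:
    E_k / sum_j E_j. Optional spectral weights (e.g., boosting delta/theta
    components) are illustrative assumptions, re-normalized after weighting."""
    energy = np.sum(imfs ** 2, axis=1)       # E_k = sum_n imf_k[n]^2
    rel = energy / energy.sum()              # cross-IMF normalization
    if weights is not None:
        rel = rel * np.asarray(weights)
        rel = rel / rel.sum()
    return rel

# toy components standing in for IMFs from an EMD-style decomposition
t = np.linspace(0.0, 1.0, 512, endpoint=False)
imfs = np.stack([
    np.sin(2 * np.pi * 2 * t),               # slow, high-amplitude component
    0.5 * np.sin(2 * np.pi * 6 * t),         # theta-band-like component
    0.25 * np.sin(2 * np.pi * 20 * t),       # fast, low-amplitude component
])
rel = relative_imf_energy(imfs)
```

Because the energies are normalized across components, the feature is invariant to global amplitude scaling, which is the property the abstract credits with reducing amplitude-driven variability.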
21 pages, 627 KB  
Review
Flexibility and Controllability in Low-Voltage Distribution Grids Under High PV Penetration
by Fredrik Ege Abrahamsen, Ian Norheim and Kjetil Obstfelder Uhlen
Energies 2026, 19(9), 2072; https://doi.org/10.3390/en19092072 - 24 Apr 2026
Abstract
The rapid integration of distributed solar photovoltaic (PV) generation is reshaping low-voltage distribution grids (LVDGs), creating voltage rise, reverse power flow, and congestion challenges for distribution system operators (DSOs). Flexibility in generation and demand, broadly understood as the capability to adjust generation or consumption in response to variability and uncertainty in net load, is increasingly central to cost-effective grid operation under high PV penetration. This review examines flexibility and controllability options in LVDGs, focusing on voltage regulation methods, supply- and demand-side flexibility resources, and market-based coordination mechanisms. The Norwegian Regulation on Quality of Supply (FoL) provides the regulatory context: it enforces 1 min average voltage compliance, stricter than the 10 min averaging window of EN 50160, making short-duration voltage excursions operationally significant and directly influencing the trade-off between curtailment, grid reinforcement, and local flexibility measures. Inverter-based active–reactive power control emerges as the most cost-effective overvoltage mitigation option, complemented by local battery energy storage systems (BESS) and demand response for congestion relief and energy shifting. Key gaps include limited LV observability, insufficient application of quasi-static time series (QSTS) assessment in planning, and underdeveloped DSO-aggregator coordination frameworks. Combined inverter control, feeder-end storage, and demand-side flexibility can defer costly reinforcements, particularly in rural 230 V IT feeders where voltage constraints dominate.
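Inverter-based active–reactive power control of the kind this review highlights is commonly realized as a piecewise-linear volt-var droop. A minimal sketch follows; the deadband, saturation voltages, and `q_max` below are illustrative placeholders, not values taken from FoL or EN 50160:

```python
def volt_var_q(v_pu, v_dead=(0.98, 1.02), v_min=0.92, v_max=1.08, q_max=0.44):
    """Piecewise-linear volt-var droop: inject reactive power below the
    deadband, absorb it above the deadband, saturating at +/- q_max
    (all quantities in per unit; breakpoints are illustrative)."""
    lo, hi = v_dead
    if v_pu >= hi:                           # overvoltage: absorb Q
        return -q_max * min((v_pu - hi) / (v_max - hi), 1.0)
    if v_pu <= lo:                           # undervoltage: inject Q
        return q_max * min((lo - v_pu) / (lo - v_min), 1.0)
    return 0.0                               # inside deadband: no Q exchange

# at nominal voltage no reactive power is exchanged; at the saturation
# voltage the inverter absorbs its full reactive capability
q_nominal = volt_var_q(1.00)
q_high = volt_var_q(1.08)
```

Under a short averaging window such as FoL's 1 min rule, a fast local droop of this shape is attractive precisely because it reacts within a cycle of the controller rather than waiting for upstream coordination.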
20 pages, 860 KB  
Article
The Enforcement of Intimate Image Offences and the Effectiveness of Victim Services in Taiwan: A Qualitative Study Using Reflexive Thematic Analysis
by Wen-Ling Hung
Int. J. Environ. Res. Public Health 2026, 23(4), 525; https://doi.org/10.3390/ijerph23040525 - 18 Apr 2026
Abstract
(1) Background: The non-consensual dissemination of intimate images constitutes a severe form of online gender-based violence (OGBV) that inflicts profound harm on victims’ sexual privacy, psychological well-being, and social functioning. Taiwan enacted comprehensive legislative reforms in 2023—commonly referred to as the “Four Acts on Sexual Violence Prevention”—to strengthen criminal responses and expand victim protection mechanisms. However, the extent to which these reforms have translated into effective frontline practice remains insufficiently examined. (2) Methods: This qualitative study employed reflexive thematic analysis to investigate frontline professionals’ experiences with enforcing intimate image offence legislation and delivering victim support services. Semi-structured, in-depth interviews were conducted with 20 practitioners, including social workers, police officers, prosecutors, and lawyers. (3) Results: Three superordinate themes emerged across macro, meso, and micro structural levels. At the macro level, limited public awareness and persistent victim-blaming attitudes undermine prevention, help-seeking, and reporting. At the meso level, legislative fragmentation, challenges in preserving and analysing digital evidence, and inter-agency coordination gaps constrain enforcement capacity. At the micro level, procedural delays, risks of secondary victimization, and perceived inadequacies in compensation and support mechanisms weaken victims’ trust in institutional responses. (4) Conclusions: While Taiwan’s legislative reforms represent a significant institutional advancement, legal reform alone is insufficient to address digital sexual violence effectively. Comprehensive responses require integrated public education initiatives, enhanced inter-agency coordination, strengthened digital investigation capacity, and trauma-informed victim protection practices across all structural levels. In particular, the findings underscore an urgent public health need to establish rapid digital evidence preservation and takedown mechanisms to limit the proliferation of non-consensual sexual images and mitigate the associated mental health harms among victims.
17 pages, 765 KB  
Article
From Cognitive Necessity to Cognitive Choice: Higher Education Assessment and Learning in the Age of Generative AI
by Matthew Montebello
AI Educ. 2026, 2(2), 12; https://doi.org/10.3390/aieduc2020012 - 16 Apr 2026
Abstract
The widespread adoption of generative artificial intelligence in higher education has intensified debates around assessment, authorship, and academic integrity. This paper argues that such debates obscure a more fundamental pedagogical shift, namely, the decoupling of assessment performance from cognitive engagement. Historically, assessment functioned not only as a measure of learning, but also as a structural mechanism that implicitly enforced cognitive engagement. With the advent of GenAI, learners can increasingly produce assessment outputs without necessarily engaging in the cognitive processes traditionally associated with learning. As a result, cognitive engagement has shifted from being a pedagogical necessity to an intentional learner choice. This paper conceptualises this shift as the cognitive engagement gap, wherein successful assessment completion no longer reliably indicates learning or epistemic development. Through a theory-informed conceptual analysis, the paper examines how GenAI reconfigures learning processes, challenges the validity of assessment as a proxy for learning, and exposes long-standing assumptions embedded in assessment-centred pedagogies. In response, the paper proposes a Cognitive Engagement-Centred Assessment (CECA) framework, offering principled guidance for designing assessment that foregrounds cognitive processes, metacognition, and learning assurance in AI-mediated environments. The paper concludes by positioning GenAI not as a threat to assessment, but as a catalyst for more intentional, transparent, and learning-centred pedagogical design.
17 pages, 1880 KB  
Article
Efficient Seismic Event Extraction via Lightweight DoG Enhancement and Spatial Consistency Constraints for Oil and Gas Exploration
by Ruilong Suo, Jingong Zhang, Tao Zhang, Feng Zhang, Bolong Wang, Zhaoyu Zhang, Dawei Ren and Yitao Lei
Processes 2026, 14(8), 1268; https://doi.org/10.3390/pr14081268 - 16 Apr 2026
Abstract
The automatic extraction of seismic reflection events is fundamental to seismic interpretation and structural identification in oil and gas exploration, particularly for large-scale regional surveys and preliminary basin-scale assessments. Although the B-COSFIRE (Bar-Combination of Shifted Filter Responses) method has demonstrated strong capability in detecting ridge-like structures, its application in large-scale seismic processing is limited by high computational cost and complex filter bank configuration. Conventional edge detectors such as the Canny operator are computationally efficient but often produce fragmented and noise-sensitive results in low signal-to-noise ratio (SNR) seismic data because they rely solely on local gradient information and ignore the spatial continuity of geological horizons. To overcome these limitations, this study proposes a lightweight and computationally efficient framework for rapid seismic event extraction. The method simplifies the B-COSFIRE architecture by replacing its configurable filter bank with a Difference-of-Gaussian (DoG) operator, which enhances ridge-like reflection features while suppressing background interference through a center–surround mechanism. Furthermore, a Spatial Consistency Constraint (SCC) module is introduced to enforce lateral continuity using directional morphological closing operations. This strategy reconstructs disrupted reflection segments and converts isolated detection responses into spatially coherent linear structures. Adaptive thresholding and skeletonization are then applied to obtain single-pixel-wide reflection contours suitable for geological interpretation and regional structural analysis. The proposed method was evaluated using both synthetic seismic models (Ricker wavelet convolution with Gaussian noise, σ = 0.15) and real post-stack seismic profiles characterized by low SNR conditions. Experimental results demonstrate that the proposed method achieves a Precision of 0.9527, Recall of 1.0000, and F1-score of 0.9758 on synthetic data, outperforming both the standard Canny detector (F1: 0.8972) and B-COSFIRE (F1: 0.7311). The Continuity Index reaches 261.00 pixels, substantially higher than Canny (223.67 pixels) and B-COSFIRE (66.86 pixels). Notably, B-COSFIRE exhibits a severely imbalanced detection profile (Precision: 0.5762, Recall: 1.0000), indicating excessive false positives that undermine its practical utility. The proposed method additionally achieves the lowest runtime (0.024 s per profile), representing a 44× speedup over B-COSFIRE (1.039 s), while requiring no training data. Overall, the proposed framework provides a practical and efficient solution for automated seismic event extraction. With only a small number of geologically interpretable parameters and strong robustness across different datasets, the method is well-suited for large-scale seismic data processing and preliminary structural assessment in underexplored regions, enabling rapid first-pass evaluation of extensive survey areas before detailed interpretation and reservoir characterization. These characteristics make the method particularly suitable for computer-assisted interpretation workflows in industrial oil and gas exploration. Unlike prior approaches that treat seismic event extraction as a generic edge detection problem, the proposed framework explicitly encodes geological prior knowledge—specifically, the lateral continuity of stratigraphic interfaces—as a morphological constraint, bridging the gap between image processing methodology and geophysical interpretation requirements.
(This article belongs to the Topic Advanced Technology for Oil and Nature Gas Exploration)
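The Difference-of-Gaussian enhancement named in the abstract above is a standard center–surround operation; a self-contained numpy sketch follows. The sigmas are arbitrary illustrative choices, and the full pipeline's SCC, thresholding, and skeletonization stages are omitted.

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: 1-D convolution along rows, then columns."""
    k = gaussian_kernel1d(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def dog_enhance(img, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround enhancement: a narrow blur minus a wide blur
    boosts ridge-like features and suppresses smooth background."""
    return gaussian_blur(img, sigma_center) - gaussian_blur(img, sigma_surround)

# synthetic profile: one bright horizontal "reflection event" in a flat field
img = np.zeros((64, 64))
img[32, :] = 1.0
resp = dog_enhance(img)
```

On this toy section the response is strongly positive along the event row and vanishes in the smooth background, which is the behaviour the abstract attributes to the center–surround mechanism.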
13 pages, 1485 KB  
Article
CAHT: A Constraint-Aware Heterogeneous Transformer for Real-Time Multi-Robot Task Allocation in Warehouse Environments
by Shengshuo Gong and Oleg Varlamov
Algorithms 2026, 19(4), 312; https://doi.org/10.3390/a19040312 - 16 Apr 2026
Abstract
The NP-hard coordination of heterogeneous robots for time-windowed warehouse tasks remains challenging: metaheuristics are precise but slow, whereas neural methods cannot handle heterogeneous constraints, leading to infeasible allocations. This paper presents the Constraint-Aware Heterogeneous Transformer (CAHT), a lightweight encoder–decoder architecture that performs end-to-end task assignment and sequencing in a single forward pass. The central innovation is a dynamic feasibility masking mechanism that enforces capacity and energy constraints directly within the softmax computation, eliminating infeasible allocations at the architectural level. This is complemented by a spatial-bias Transformer encoder and a two-stage supervised–reinforcement learning training paradigm using ALNS-generated labels. Experiments across four problem scales (5–20 robots, 50–200 tasks) demonstrate that CAHT achieves objective values within 7–13% of the ALNS reference while being 29–91× faster (23–104 ms vs. 2–3 s). Constraint violation rates remain below 6%, with time-window satisfaction above 94%. Ablation analysis identifies dynamic masking as the dominant contribution (+213% degradation upon removal), and cross-scale generalization reveals that the optimality gap decreases from 13.0% to 10.7% as the problem scale grows. With only 0.91 M parameters, CAHT occupies a new trade-off point on the Pareto frontier, offering a practical path toward real-time autonomous warehouse coordination.
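The dynamic feasibility masking the abstract credits as its central innovation is, in essence, a masked softmax: infeasible robot-task pairs are driven to probability zero inside the normalization rather than filtered afterwards. A minimal numpy sketch, with the feasibility test itself (capacity or energy checks) left abstract:

```python
import numpy as np

def masked_softmax(logits, feasible):
    """Softmax over feasible entries only: infeasible logits are set to
    -inf before normalization, so they receive exactly zero probability
    and can never be selected -- a structural, not learned, guarantee."""
    masked = np.where(feasible, logits, -np.inf)
    z = masked - masked.max()                # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# four candidate tasks; task 1 would violate a capacity/energy constraint
logits = np.array([2.0, 1.0, 3.0, 0.5])
feasible = np.array([True, False, True, True])
p = masked_softmax(logits, feasible)
```

Because the mask acts inside the softmax, constraint satisfaction does not depend on how well the network has been trained, which is consistent with the large degradation the abstract reports when the mask is ablated.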
19 pages, 12679 KB  
Article
Lightweight Semantic-Guided FCOS for In-Line Micro-Defect Inspection in Semiconductor Manufacturing
by Tao Zhang, Shichang Yan and Gaoe Qin
Micromachines 2026, 17(4), 473; https://doi.org/10.3390/mi17040473 - 14 Apr 2026
Abstract
The relentless miniaturization of semiconductor components and Printed Circuit Boards (PCBs) has rendered Automated Optical Inspection (AOI) of micro-defects a critical bottleneck in modern manufacturing and metrology. While in-line inspection systems offer economically viable and scalable quality control solutions, they impose stringent constraints on both inference latency and detection robustness—particularly for diminutive, sparsely distributed defects (e.g., mouse bites, pinholes) amidst complex, repetitive circuit topologies. To bridge this gap, we present a semantic-enhanced FCOS framework specifically engineered for micro-defect inspection. Our approach introduces two synergistic innovations: (1) a Semantic-Guided Upsampling Unit (SGU) that adaptively reweights channel–spatial features to reconcile the semantic disparity between shallow textural details and deep contextual representations; and (2) a Sparse Center-ness Calibration (SCC) module that enforces high-confidence, spatially sparse supervision to sharpen localization precision and suppress false positives. The SGU is integrated within a Progressive Semantic-Enhanced Feature Pyramid Network (PSE-FPN) that extends multi-scale representations to stride-4 (P2) resolution, while the SCC module is embedded directly into the detection head. Comprehensive evaluations on MS COCO and the real-world DeepPCB dataset validate the efficacy of our design. On COCO, our model achieves 41.8% AP with real-time throughput of 28 FPS on a single NVIDIA 1080Ti GPU. A lightweight variant further attains 41.6% AP at 42 FPS, accommodating high-throughput production environments. For PCB defect detection, the framework delivers 98.7% mAP@0.5, substantially outperforming contemporary detectors. These results demonstrate that semantics-aware, lightweight architectures enable scalable, real-time quality assurance in semiconductor manufacturing.
(This article belongs to the Special Issue Emerging Technologies and Applications for Semiconductor Industry)
21 pages, 4182 KB  
Article
Incremental Pavement Distress Classification in UAV-Based Remote Sensing via Analytic Geometric Alignment
by Quanziang Wang, Xin Li, Jiangjun Peng, Xixi Jia and Renzhen Wang
Remote Sens. 2026, 18(8), 1141; https://doi.org/10.3390/rs18081141 - 12 Apr 2026
Abstract
Automated pavement distress classification using high-resolution Unmanned Aerial Vehicle (UAV) imagery is pivotal for intelligent transportation systems. However, long-term UAV monitoring faces a continuous stream of evolving distress types and changing remote sensing background textures, necessitating Class-Incremental Learning (CIL) capabilities. Existing methods struggle to balance stability and plasticity, especially under the severe storage limitations typical of local edge stations in air–ground collaborative systems. This data scarcity leads to catastrophic forgetting and confusion among fine-grained distress categories. To address these challenges, we propose a data-efficient approach named Analytic Geometric Alignment (AGA). Our framework mainly consists of three key components. First, to overcome the optimization gap between the feature extractor and the fixed geometric target, we introduce a Subspace-Aware Analytic Initialization (SAI) that computes a closed-form projection to instantly align the feature subspace with the ETF manifold before each task training. Second, on this aligned basis, a Decoupled Geometric Adapter (DGA) is incorporated to facilitate continuous non-linear adaptation to complex aerial textures. Finally, for stable incremental training, we design a Memory-Prioritized Regression (MPR) loss to enforce tighter geometric constraints on replay samples, significantly enhancing model stability. Extensive experiments on the UAV-PDD2023 dataset demonstrate that AGA significantly outperforms state-of-the-art methods, showcasing excellent robustness and data efficiency.
30 pages, 1474 KB  
Review
Dynamic Virtual Power Plants: Resource Coordination for Measured Inertia and Fast Frequency Services
by Yitong Wang, Yutian Huang, Gang Lei, Allen Wang and Jianguo Zhu
Appl. Sci. 2026, 16(8), 3731; https://doi.org/10.3390/app16083731 - 10 Apr 2026
Abstract
This paper reviews recent work on dynamic virtual power plants (DVPPs) using an Energy–Information–Market framework. It addresses the important problem of how DVPPs can support low-inertia power system operation and feeder-level stability under high renewable penetration. First, system-level studies on low-inertia operation and frequency control are used to frame quantitative requirements on rate of change of frequency, nadir, and quasi-steady-state limits. Second, energy-layer models are surveyed, including participation-factor-based DVPP controllers, grid-forming architectures, model-free frequency regulation, and robust frequency-constrained scheduling for allocating virtual inertia and fast frequency response (FFR) across distributed energy resource fleets. Third, information-layer and market-layer models are reviewed, covering stochastic and robust bidding, distribution locational marginal price-based clearing, peer-to-peer and community markets, privacy-preserving coordination, and emerging governance and cybersecurity schemes for DVPP participation. Across these strands, much of the literature remains centred on steady-state active and reactive power dispatch, with dynamic security enforced as constraints rather than formulated as verifiable and tradable services. This review identifies gaps in dynamic metrics and benchmarks, forecasting of available inertia and FFR capacity, market-physics co-design, multi-aggregator interaction, and experimentally validated DVPP implementations. These findings suggest that DVPPs can “sell stability” at the feeder level only through co-designed control, information, and market mechanisms and outline a research roadmap for this purpose.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
28 pages, 4886 KB  
Article
Equivariant Transition Matrices for Explainable Deep Learning: A Lie Group Linearization Approach
by Pavlo Radiuk, Oleksander Barmak, Leonid Bedratyuk and Iurii Krak
Mach. Learn. Knowl. Extr. 2026, 8(4), 92; https://doi.org/10.3390/make8040092 - 6 Apr 2026
Abstract
Deep learning systems deployed in regulated settings require explanations that are accurate and stable under nuisance transformations, yet classical post hoc transition matrices rely on fidelity-only fitting that fails to guarantee consistent explanations under spatial rotations or other group actions. In this work, we propose Equivariant Transition Matrices, a post hoc approach that augments transition matrices with Lie-group-aware structural constraints to bridge this research gap. Our method estimates infinitesimal generators in the formal and mental feature spaces, enforces an approximate intertwining relation at the Lie algebra level, and solves the resulting convex Least-Squares problem via singular value decomposition for small networks or implicit operators for large systems. We introduce diagnostics for symmetry validation and an unsupervised strategy for regularization weight selection. On a controlled synthetic benchmark, our approach reduces the symmetry defect from 13,100 to 0.0425 while increasing the mean squared error marginally from 0.00367 to 0.00524. On the MNIST dataset, the symmetry defect decreases by 72.6 percent (141.19 to 38.65) with changes in structural similarity and peak signal-to-noise ratio below 0.03 percent and 0.06 percent, respectively. These results demonstrate that explanation-level equivariance can be reliably imposed post-training, providing geometrically consistent interpretations for fixed deep models.
(This article belongs to the Special Issue Trustworthy AI: Integrating Knowledge, Retrieval, and Reasoning)
20 pages, 920 KB  
Article
A Legal Framework for Mitigating Soil Pollution Risk in Rwanda: Transitioning from Reactive Regulation to Proactive Governance
by Alida Chrystella Gwiza, Ming Yu and Donatus Dunee
Sustainability 2026, 18(7), 3458; https://doi.org/10.3390/su18073458 - 2 Apr 2026
Abstract
Agricultural soil contamination increasingly threatens food security, environmental health, and rural livelihoods in Rwanda. However, the country’s laws and regulations remain largely ineffective and reactive. Existing environmental legislation broadly addresses pollution but lacks a clear, risk-based framework for the protection, monitoring, and remediation of soil. This study assesses the adequacy of Rwanda’s current legal and institutional frameworks for managing soil pollution and develops a governance structure to enhance agricultural sustainability. It employs a qualitative desk-based methodology that combines doctrinal legal analysis, comparative environmental governance review, and interdisciplinary literature synthesis to evaluate Rwanda’s regulatory frameworks alongside established models from China, Brazil, and Kenya. The analysis highlights critical gaps, including the absence of soil-specific standards, poor institutional coordination, and inadequate systems for early risk detection and liability enforcement. The research proposes a legally mandated, multi-phase soil risk management process that includes inquiry, monitoring, assessment, mitigation, and adaptive oversight, drawing on insights from previous studies. The findings suggest that incorporating preventive, risk-based measures into national legislation can improve environmental governance, lower long-term remediation costs, and promote sustainable agricultural practices. Conducted from mid-2024 to late 2025, this study advances environmental law and sustainability by providing a context-specific framework for regulating soil pollution applicable to Rwanda and other developing economies. It also contributes to the global dialogue on risk-based environmental governance and provides a model for improving soil protection laws in emerging regulatory settings.

27 pages, 2899 KB  
Review
A Global Review of Highly Pathogenic Avian Influenza (HPAI) and Control Strategies in Nepal
by Deepak Subedi, Sameer Thakur, Madhav Paudel, Parikshya Gurung, Sujan Kafle, Suman Bhattarai, Abhisek Niraula, Hari Marasini, Milan Kandel, Surendra Karki, Anand Tiwari and Sumit Jyoti
Zoonotic Dis. 2026, 6(2), 11; https://doi.org/10.3390/zoonoticdis6020011 - 1 Apr 2026
Viewed by 1047
Abstract
Highly pathogenic avian influenza (HPAI) is a transboundary and zoonotic viral disease affecting poultry and wild birds in many countries worldwide. Globally, HPAI outbreaks have led to the death or culling of hundreds of millions of birds over the past two decades and have caused nearly 1000 confirmed human H5N1 infections, with a case fatality rate of approximately 50%. Asia and Europe remain among the most affected regions, with recurrent outbreaks linked to intensive poultry production, live bird markets, and migratory bird pathways. In Nepal, HPAI has been reported since 2009, with more than 320 outbreaks recorded and over 2.7 million birds lost, alongside one confirmed human fatality. Control measures rely largely on stamping out, movement restrictions, and surveillance; however, gaps in farm-level biosecurity, informal cross-border poultry trade, and limited vaccination use continue to sustain vulnerability. Strengthened multisectoral coordination under a One Health framework, integrating veterinary and public health surveillance, molecular monitoring, community awareness, and risk-based biosecurity enforcement, is essential to reduce the impact of HPAI and mitigate future zoonotic and pandemic risks. Full article

20 pages, 2468 KB  
Article
WEDGE-Net: Wavelet-Driven Memory-Efficient Anomaly Detection for Industrial Edge Computing
by Joon-Min Park and Gye-Young Kim
Sensors 2026, 26(7), 2154; https://doi.org/10.3390/s26072154 - 31 Mar 2026
Viewed by 444
Abstract
As deep learning-based Anomaly Detection (AD) transitions from theoretical research to industrial application, the focus is shifting towards operational efficiency and economic viability on edge devices. While recent studies have achieved remarkable detection accuracy on standard benchmarks, they often rely on heavy memory banks or complex backbones, which pose challenges for deployment in resource-constrained manufacturing environments. Furthermore, real-world inspection lines often present distinct challenges—such as environmental noise and strict latency requirements—that are not fully addressed by accuracy-centric metrics. To bridge the gap between high-performance research models and practical edge deployment, we introduce WEDGE-Net. Our approach is designed to balance structural precision with extreme memory efficiency. We decouple anomaly detection into two specialized streams: (1) a Frequency Stream (DWT) that physically filters out environmental noise to isolate structural defects, and (2) a Context Stream where a Semantic Module explicitly guides feature extraction to enforce object consistency. By synthesizing these two modalities, WEDGE-Net effectively suppresses high-frequency noise while enhancing structural-feature compactness. To validate operational stability, we conducted a robustness analysis of the ‘Tile’ category, which poses a challenging task for distinguishing defects from high-frequency textures. In this stress test, WEDGE-Net demonstrated superior resistance to environmental noise compared to conventional methods. Experimental results on the MVTec AD dataset demonstrate that WEDGE-Net achieves a mean image-level AUROC of 97.82% and an inference speed of 686.5 FPS (measured on an RTX 4090 GPU) under an extreme 1% memory-compression setting. Notably, our method demonstrates superior efficiency, achieving a 2.1× inference speedup over the widely adopted comparative model (PatchCore-10%) while maintaining competitive detection accuracy (e.g., 100% AUROC on Transistor). We hope this work serves as a practical reference for implementing real-time industrial inspection on resource-constrained edge devices. Full article
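The abstract's core idea in the Frequency Stream — using a discrete wavelet transform to separate low-frequency structure from high-frequency noise — can be illustrated with a minimal one-dimensional sketch. The paper's actual wavelet family, decomposition depth, and 2-D feature pipeline are not given here; the function name `haar_dwt_1d` and the toy ramp-plus-noise signal below are illustrative assumptions, not WEDGE-Net's implementation.

```python
import numpy as np

def haar_dwt_1d(x):
    """Single-level Haar DWT: split a signal into approximation
    (low-frequency, structural) and detail (high-frequency, noise-like)
    coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:            # pad to even length by repeating the last sample
        x = np.append(x, x[-1])
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass: scaled local averages
    detail = (even - odd) / np.sqrt(2)   # high-pass: scaled local differences
    return approx, detail

# A smooth "structural" ramp plus high-frequency alternating noise.
n = 64
signal = np.linspace(0, 1, n) + 0.05 * (-1) ** np.arange(n)

approx, detail = haar_dwt_1d(signal)

# The ramp survives in the approximation; the alternating noise is
# concentrated in the detail coefficients, which a frequency stream
# could attenuate before feature extraction.
print(len(approx), len(detail))  # 32 coefficients each for a 64-sample signal
```

Dropping or thresholding `detail` and reconstructing from `approx` is the simplest form of the noise suppression the abstract describes; a real pipeline would apply a 2-D DWT to image feature maps rather than a 1-D transform to a toy signal.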

25 pages, 720 KB  
Article
From Hybrid Commons to Trilateral Treaty: A Four-Stage Allocation Framework for the Salween River Basin
by Thomas Stephen Ramsey, Weijun He, Liang Yuan, Qingling Peng, Min An, Lei Wang, Feiya Xiang, Sher Ali and Ribesh Khanal
Water 2026, 18(7), 795; https://doi.org/10.3390/w18070795 - 27 Mar 2026
Viewed by 326
Abstract
Transboundary river basins face water stress exacerbated by data scarcity and political instability, and most allocation models require ideal conditions that ordinarily do not exist. This study operationalizes Water Diplomacy Theory (WDT) for data-scarce, conflict-prone basins through quantifiable allocation rules—a critical gap as 310 transboundary basins worldwide face similar challenges. We address: (1) How can a four-stage allocation framework reduce basin-wide water stress under varying Institutional Capacity (IC), Data Transparency (DT), and Stakeholder Inclusion (SI)? (2) What treaty provisions achieve bindingness under upstream-downstream power asymmetries? (3) How does this framework advance beyond existing models in equity, efficiency, and adaptive capacity? We synthesize Water Diplomacy Theory with Hydro-political Security Complex Theory to construct a novel four-stage framework: initial allocation with ecological floors, conditional reallocation triggers, interannual water banking, and satellite-verified compliance. Drawing on 14 treaty precedents and 30-year hydrological data for the Salween River, we embed these rules in an open-source water banking model. Results demonstrate that increasing IC from low to high reduces basin-wide water stress by 34% (±7%, 95% CI) under drought conditions. Stakeholder Inclusion decreases allocation conflicts by 52%. Water banking outperforms priority rules by 23% across climate scenarios. Cooperation becomes self-enforcing when IC exceeds 0.55. The novelty and contributions of our study to the existing literature are: (1) the first operationalization of a hybrid commons-to-treaty transition with 85.7% empirically grounded clauses; (2) evidence that binding cooperative treaty design is achievable in weak-state contexts through institutional design; and (3) a portable template for data-scarce, conflict-affected basins. Full article
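The interannual water banking stage the abstract describes — carrying unused entitlement forward as a credit and drawing on it in dry years — can be sketched in a few lines. The entitlement, bank cap, and inflow figures below are illustrative assumptions, not the study's Salween allocations, and this is not the paper's open-source model.

```python
def run_water_bank(inflows, entitlement, bank_cap):
    """Carry unused entitlement forward as a banked credit (capped at
    bank_cap) and draw it down to cover shortfalls in deficit years."""
    bank = 0.0
    deliveries = []
    for inflow in inflows:
        available = min(inflow, entitlement)
        if available < entitlement:
            # Dry year: draw down the bank to cover the shortfall.
            draw = min(bank, entitlement - available)
            bank -= draw
            deliveries.append(available + draw)
        else:
            # Wet year: bank the surplus, up to the storage cap.
            bank = min(bank_cap, bank + (inflow - entitlement))
            deliveries.append(entitlement)
    return deliveries, bank

# Illustrative series: two wet years followed by a drought year.
deliveries, remaining = run_water_bank([120, 110, 60],
                                       entitlement=100, bank_cap=25)
print(deliveries, remaining)
```

With these toy numbers the two wet years fill the bank, and the drought-year shortfall is partly covered by the banked credit — the smoothing effect the abstract compares favourably against fixed priority rules.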
