Search Results (454)

Search Parameters:
Keywords = formal verification

15 pages, 769 KB  
Perspective
Concurrent/Interleaved TMS–fMRI as an MR-Guided Framework for Target Engagement
by Chiara Di Fazio and Sara Palermo
Appl. Sci. 2026, 16(9), 4135; https://doi.org/10.3390/app16094135 - 23 Apr 2026
Viewed by 84
Abstract
Concurrent/interleaved transcranial magnetic stimulation combined with functional MRI (TMS–fMRI) enables causal perturbation of targeted cortical regions while measuring whole-brain MR-based responses during stimulation. This perspective argues that the main translational value of concurrent/interleaved TMS–fMRI lies in operationalizing target engagement and network-level propagation as measurable endpoints, bridging stimulation “dose” to clinically meaningful effects. Rather than proposing a validated gold-standard protocol, we frame concurrent/interleaved TMS–fMRI as a measurement-driven translational approach in which MRI-informed targeting and MR-based readouts can be integrated to quantify target engagement under clearly specified methodological and quality-control conditions. This perspective specifically aims to make explicit an intermediate verification step that remains only partially formalized in current clinical neuromodulation workflows. We propose that MRI-based neuronavigation should move beyond template coordinates toward individualized anatomical and network-informed targeting, with the aim of improving precision, reproducibility, and safety. Building on the field’s evolution from technical feasibility to emerging clinical applications, we outline a staged framework from feasibility to biomarker potential, summarize representative protocol archetypes, and provide pragmatic recommendations for reporting and study design to improve comparability. This framework is intended to guide future concurrent/interleaved TMS–fMRI studies toward biomarker-ready designs and more clinically informative network neuromodulation. We further distinguish offline MRI-informed targeting from potential future real-time or closed-loop implementations, and we emphasize that current biomarker claims should remain proportional to the still heterogeneous evidence base.
(This article belongs to the Special Issue MR-Based Neuroimaging, 2nd Edition)

32 pages, 653 KB  
Article
Synthesis of Decision Logic for Predictive Maintenance of a Marine Diesel Engine Based on Unconditional Control-Reliability Indicators
by Dmitry Tukeev, Olga Afanaseva and Aleksandr Khatrusov
Eng 2026, 7(5), 190; https://doi.org/10.3390/eng7050190 - 23 Apr 2026
Viewed by 108
Abstract
This paper proposes a formal framework for synthesizing multi-stage condition-based maintenance (CBM) decision logic for marine diesel monitoring systems. The design object is treated not as a single threshold or classifier output, but as an implementable decision logic with explicit stages of data-quality gating, thresholding, confirmation, fusion, and temporal filtering. Decision quality is evaluated using unconditional control-reliability indicators (CRIs) under a prescribed prior probability of rare abnormal events within a unified Monte Carlo verification protocol. Within a simplified Gaussian surrogate model, we compare baseline thresholding, repeated-measurement averaging, within-path confirmation, and measurement-level fusion. For the reported reference configuration, averaging five repeated measurements yields the largest reduction in the raw error criterion, “2 out of 3” confirmation provides a smaller but consistent improvement, and two-path multi-fidelity fusion is beneficial only after calibration toward the more informative path. The results show that, under rare abnormal events and limited measurement accuracy, decision quality is determined primarily by calibration of the multi-stage channel-level logic rather than by thresholding alone.
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research 2026)
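
To make the comparison above concrete, here is a minimal Monte Carlo sketch in the spirit of the abstract's surrogate study: a Gaussian surrogate with a rare abnormal state, scored by the unconditional error rate under a single threshold, five-sample averaging, and "2 out of 3" confirmation. All parameter values (prior, means, threshold) are illustrative assumptions, not the paper's reference configuration.

```python
# Monte Carlo sketch of the decision-rule comparison; all parameters are
# illustrative assumptions, not the paper's reference configuration.
import numpy as np

rng = np.random.default_rng(0)
P_ABNORMAL = 1e-3                    # assumed prior of the rare abnormal event
MU_N, MU_A, SIGMA = 0.0, 3.0, 2.0    # surrogate Gaussian parameters
THR = 1.5                            # decision threshold

def decision_error(n_meas, rule):
    abnormal = rng.random() < P_ABNORMAL
    x = rng.normal(MU_A if abnormal else MU_N, SIGMA, size=n_meas)
    if rule == "average":            # averaging repeated measurements
        alarm = x.mean() > THR
    elif rule == "2of3":             # within-path confirmation
        alarm = (x > THR).sum() >= 2
    else:                            # baseline single threshold
        alarm = x[0] > THR
    return alarm != abnormal         # unconditional (CRI-style) decision error

def error_rate(n_meas, rule, trials=100_000):
    return sum(decision_error(n_meas, rule) for _ in range(trials)) / trials

print("single threshold:", error_rate(1, "single"))
print("average of 5    :", error_rate(5, "average"))
print("2 out of 3      :", error_rate(3, "2of3"))
```

With these toy numbers the averaging rule gives the lowest unconditional error, confirmation an intermediate one, matching the qualitative ordering reported in the abstract.
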
81 pages, 3148 KB  
Article
Global Virtual Prosumer Framework for Secure Cross-Border Energy Transactions Using IoT, Multi-Agent Intelligence, and Blockchain Smart Contracts
by Nikolaos Sifakis
Information 2026, 17(4), 396; https://doi.org/10.3390/info17040396 - 21 Apr 2026
Viewed by 165
Abstract
Global decarbonization and the rapid growth of distributed energy resources increase the need for information-centric mechanisms that can support secure, scalable, cross-border coordination under heterogeneous technical and regulatory conditions. This paper proposes a Global Virtual Prosumer (GVP) framework that integrates IoT sensing, multi-agent coordination, and permissioned blockchain smart contracts to operationalize cross-border energy services as auditable service commitments rather than physical power exchange. Building on prior work that validated MAS-based power management and blockchain-secured operation within individual Virtual Prosumers, the present contribution lies in the cross-border coordination layer and its associated contractual and evaluation mechanisms, not in the constituent technologies themselves. A layered IoT–AI–blockchain architecture is introduced, where off-chain optimization produces allocations and admissibility indicators and on-chain contracts enforce identity, feasibility guards, delegation and partner-assignment rules, oracle verification, and settlement-time compliance outcomes. The contractual lifecycle is formalized through four smart-contract algorithms covering trade registration, conditional delegation, cooperative fulfillment, and cross-border settlement with explicit failure semantics and event-based audit trails. The framework is evaluated on a global case study with seven Virtual Prosumers and quantified using contract-centric KPIs that capture registration-time rejections, settlement success versus non-compliance, oracle-driven failure attribution, and full lifecycle traceability. The results demonstrate internal consistency of the proposed lifecycle and the practical value of KPI-driven accountability for cross-border energy service coordination. At the same time, the evaluation is based on synthetic parameterization and an emulated contract environment; realistic deployment constraints—including consensus latency, cross-region communication reliability, and regulatory overlap—are discussed as explicit limitations and directions for future empirical validation.
(This article belongs to the Special Issue IoT, AI, and Blockchain: Applications, Security, and Perspectives)
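
As a reading aid for the contractual lifecycle described above, the following Python sketch models the four stages (registration, conditional delegation, cooperative fulfillment, settlement) as a guarded state machine whose event log stands in for the audit trail. The class and guard names are hypothetical; the paper's actual algorithms are on-chain smart contracts, not Python.

```python
# Hypothetical off-chain model of the four-stage lifecycle named in the
# abstract; names and guards are mine, not the paper's contract code.
from enum import Enum, auto

class State(Enum):
    REGISTERED = auto()
    DELEGATED = auto()
    FULFILLED = auto()
    SETTLED = auto()
    FAILED = auto()

class Trade:
    def __init__(self, seller, buyer, kwh, deadline):
        self.seller, self.buyer, self.kwh = seller, buyer, kwh
        self.deadline = deadline
        self.state = State.REGISTERED
        self.events = [("Registered", seller, buyer, kwh)]  # audit trail

    def delegate(self, partner, feasible):
        # conditional delegation: a feasibility guard decides admissibility
        if self.state is State.REGISTERED and feasible:
            self.state = State.DELEGATED
            self.events.append(("Delegated", partner))
        else:
            self._fail("delegation_guard")

    def fulfill(self, oracle_confirms):
        # oracle verification stands in for measured delivery
        if self.state is State.DELEGATED and oracle_confirms:
            self.state = State.FULFILLED
            self.events.append(("Fulfilled",))
        else:
            self._fail("oracle_mismatch")

    def settle(self, now):
        # explicit failure semantics for settlement-time non-compliance
        if self.state is State.FULFILLED and now <= self.deadline:
            self.state = State.SETTLED
            self.events.append(("Settled",))
        else:
            self._fail("settlement_deadline")

    def _fail(self, reason):
        self.state = State.FAILED
        self.events.append(("Failed", reason))

t = Trade("VP-Chile", "VP-Japan", kwh=500, deadline=100)
t.delegate("VP-Kenya", feasible=True)
t.fulfill(oracle_confirms=True)
t.settle(now=90)
print(t.state, t.events[-1])   # State.SETTLED ('Settled',)
```
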
24 pages, 1327 KB  
Article
VeriFed: Temporally Consistent Continuous Cross-Chain Data Federation
by Kun Hao, Meng Bi and Yuliang Ma
Entropy 2026, 28(4), 478; https://doi.org/10.3390/e28040478 - 21 Apr 2026
Viewed by 233
Abstract
Cross-chain analytics increasingly demand continuous joins across ledgers with asynchronous state evolution. Existing solutions, however, typically assume static snapshots or neglect temporal alignment, yielding semantically inconsistent results when epochs drift. This paper introduces VeriFed, a system for temporally consistent continuous cross-chain joins. We formalize the problem of snapshot-aligned continuous joins, design a Unified Adapter Layer (UAL) to align finalized snapshots across heterogeneous protocols, and develop incremental verification that composes per-chain proofs into a global summary via the Epoch Attestation Mesh (EAM) and the Delta-Linked Proof Forest (DLPF). To sustain high-throughput execution, VeriFed further adopts an incremental multi-objective optimizer that balances latency and monetary cost. Experiments on Ethereum transaction data with a simulated wide-area network (WAN) demonstrate that VeriFed achieves sub-second per-epoch latency (approx. 38 ms) and reduces verification overhead by orders of magnitude compared to state-of-the-art baselines, while effectively detecting tampering with zero false positives. These results confirm consistent efficiency and verifiability under continuous updates.
(This article belongs to the Section Information Theory, Probability and Statistics)
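
The core idea of a snapshot-aligned continuous join can be illustrated with a small sketch: buffer per-chain records by finalized epoch and emit join results only up to the minimum finalized epoch across chains, so epoch drift cannot produce inconsistent output. The names below are mine, not VeriFed's API, and the proof machinery (EAM/DLPF) is omitted.

```python
# Illustrative snapshot-aligned join; names are mine, not VeriFed's API.
from collections import defaultdict

class AlignedJoin:
    def __init__(self, chains=("A", "B")):
        self.buffers = {c: defaultdict(list) for c in chains}
        self.finalized = {c: -1 for c in chains}

    def ingest(self, chain, epoch, key, value):
        self.buffers[chain][epoch].append((key, value))

    def finalize(self, chain, epoch):
        self.finalized[chain] = max(self.finalized[chain], epoch)
        return self._emit()

    def _emit(self):
        # join only epochs finalized on every chain, preventing epoch drift
        watermark = min(self.finalized.values())
        out = []
        for e in sorted(set(self.buffers["A"]) & set(self.buffers["B"])):
            if e > watermark:
                break
            rows_b = dict(self.buffers["B"].pop(e))
            for k, v in self.buffers["A"].pop(e):
                if k in rows_b:
                    out.append((e, k, v, rows_b[k]))
        return out

j = AlignedJoin()
j.ingest("A", 1, "tx9", "sent"); j.ingest("B", 1, "tx9", "received")
j.finalize("A", 1)                  # B not finalized yet: nothing emitted
print(j.finalize("B", 1))           # [(1, 'tx9', 'sent', 'received')]
```
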

35 pages, 1381 KB  
Article
Formality Requirements in the Era of Smart Contracts: A Mixed-Methods Analysis of Emerging Challenges
by Nabeel Mahdi Althabhawi, Ra’ed Fawzi Aburoub, Rizal Rahman, Faris Kamil Hasan Mihna and Hazim Akram Sallal
Information 2026, 17(4), 393; https://doi.org/10.3390/info17040393 - 21 Apr 2026
Viewed by 229
Abstract
Smart contracts raise persistent challenges regarding compliance with traditional contract formalities, including writing, signature, notarization, and in certain transactions, registration. These issues are particularly significant in high-value and public-facing transactions such as real estate, where formalities determine legal validity, evidentiary sufficiency and publicity effects. While existing scholarly work has examined these challenges from either doctrinal or technological perspectives, limited attention has been given to how the functional roles of formalities interact with blockchain architecture, practitioner perceptions and institutional legal frameworks. This study addresses this gap through a mixed-methods approach combining doctrinal legal analysis with qualitative socio-legal research based on 27 semi-structured interviews with legal professionals including attorneys, judges, and academic scholars. The analysis is grounded in a civil law framework, with particular reference to the Jordanian legal system, while references to the European Union’s eIDAS Regulation are used illustratively to demonstrate regulatory approaches to digital authentication. The findings demonstrate that blockchain-based systems can effectively support the evidentiary and attribution functions of contractual formalities through cryptographic verification, consensus mechanisms, and automated execution. However, they do not independently satisfy formalities that perform cautionary, constitutive, protective or public order functions, namely notarization and registration, which remain dependent on institutional validation and legal recognition. The analysis further shows that practitioner concerns reflect not only doctrinal constraints but also institutional roles and varying levels of technical familiarity. To address these limitations, the study proposes a function-based analytical framework for evaluating smart contract formalities and identifies two complementary pathways for legal adaptation: (i) institutional integration, including registry-linkage systems and hybrid contracts; and (ii) technological adaptation, including digital authentication frameworks and legal oracles that connect on-chain execution to off-chain legal conditions. The study concludes that challenges to smart contract formalities arise not solely from technological limitations, but from the interaction between legal doctrine, institutional structures, and system design. It advances a functional framework for aligning automation with the evidentiary, protective, and publicity functions of contractual formalities.
(This article belongs to the Special Issue Recent Advances in Smart Contract and Blockchain Analysis)

25 pages, 13360 KB  
Article
An RT-Supervised Simulation-to-Simulation Framework for Path Loss Radio Map Prediction Based on Geographic Environmental Information
by Hanpeng Huai, Linsong Feng, Zhe Yuan, Yishun Li, Botao Han, Qingyu Cheng and Guoxuan He
Electronics 2026, 15(8), 1750; https://doi.org/10.3390/electronics15081750 - 21 Apr 2026
Viewed by 243
Abstract
Efficient and approximate evaluation of urban coverage is important for wireless network planning. While standard statistical propagation models are fast, they do not directly describe the physical environment of a specific urban scene and consequently often fail to accurately capture local blockage and site-specific propagation effects. Ray tracing can model these effects more directly, but becomes costly when testing many tiles, frequencies, and transmitter heights simultaneously. To address this problem, the present study investigates the use of an RT-supervised simulation-to-simulation tile-based learning framework for path loss prediction based on geographic environmental information. This methodology first builds realistic 3D city scenes from geographic data, then uses offline ray tracing to generate supervision labels across multiple carrier frequencies and base-station heights. Each city region is divided into 500 m by 500 m tiles, which are then further discretized into 125 by 125 grids. For each tile, raster priors, such as occupancy, normalized height, and a valid-ground mask, are prepared. During training and inference, the model input is organized as an 8-channel raster tensor together with a 2D condition vector for frequency and transmitter height. The raster tensor combines three stored environment priors and five online-generated transmitter-related feature maps. By utilizing masked supervision, the network learns the excess loss residual exclusively on valid outdoor pixels, and the final path loss map is reconstructed by combining the residual prediction with the FSPL prior. The final model in this work was trained on 134,317 samples and validated on 33,589 samples. In the in-city setting, used as a preliminary verification before subsequent cross-city experiments, it achieved an MAE of 5.0116 dB and an RMSE of 9.3182 dB. On the formal cross-city test with a completely unseen target city, it achieved an MAE of 4.8536 dB and an RMSE of 9.3504 dB. These results demonstrate that the proposed framework can provide a stable tile-level approximation of RT-generated path loss maps under multiple conditions. Because both training labels and evaluation references are generated by RT rather than drive-test measurements, the present study should be understood as a simulation-to-simulation surrogate framework rather than a direct validation of real-world propagation accuracy.
(This article belongs to the Topic AI-Driven Wireless Channel Modeling and Signal Processing)
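
A minimal sketch of the reconstruction step described above, under assumed tile geometry and carrier frequency: the model predicts an excess-loss residual on valid pixels only, and the final map is that residual added to an FSPL prior. The mask, residual values, and the 3.5 GHz carrier are illustrative placeholders, not the paper's data.

```python
# Residual-plus-FSPL reconstruction with masked error, on placeholder data.
import numpy as np

def fspl_db(dist_m, freq_hz):
    # free-space path loss in dB: 20 log10(d) + 20 log10(f) - 147.55
    return (20 * np.log10(np.maximum(dist_m, 1.0))
            + 20 * np.log10(freq_hz) - 147.55)

H = W = 125                                       # one 500 m tile, 4 m pixels
yy, xx = np.mgrid[0:H, 0:W]
dist = np.hypot(yy - H // 2, xx - W // 2) * 4.0   # transmitter at tile center
prior = fspl_db(dist, 3.5e9)                      # FSPL prior, assumed carrier

mask = dist > 8.0                                 # stand-in valid-outdoor mask
residual_pred = np.zeros((H, W))                  # placeholder for CNN output
residual_ref = np.random.default_rng(0).normal(8, 3, (H, W))  # fake RT label

path_loss = prior + residual_pred                 # reconstructed path loss map
masked_mae = np.abs(residual_pred - residual_ref)[mask].mean()
print(f"masked MAE: {masked_mae:.2f} dB")
```
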

22 pages, 507 KB  
Article
HyperCross: A Semantic-Aware Zero-Knowledge Indexing Framework for Cross-Chain Data
by Kun Hao and Yuliang Ma
Electronics 2026, 15(8), 1741; https://doi.org/10.3390/electronics15081741 - 20 Apr 2026
Viewed by 147
Abstract
The transition from isolated distributed ledgers to a unified “Internet of Value” is hindered by the lack of efficient, verifiable, and privacy-preserving cross-chain data retrieval mechanisms. While asset bridging has matured, generalized data indexing remains a critical bottleneck, constrained by the semantic gap between heterogeneous storage layouts and the prohibitive verification tax of cryptographic proofs. In this paper, we present HyperCross, a novel semantic-aware zero-knowledge indexing framework designed to bridge this divide. We first formalize the heterogeneous cross-chain storage optimization problem (HCCSOP) and prove its NP-completeness. To tackle this, HyperCross employs a synergistic tri-layered architecture. At the semantic layer, we introduce a unified data abstraction (UDA) that leverages category-theoretic functors and schema morphisms to ensure mathematically rigorous state mapping for both simple assets and complex smart contract logic. At the indexing layer, a zero-knowledge learning index (ZKLI) shifts prediction intelligence to the client side, integrating zk-SNARKs with silent oblivious transfer to achieve constant-time verification (O(1)) while concealing access patterns. Finally, a multi-level cache (MLC) utilizes predictive prefetching with Δ-bounded staleness to mask network latency. Extensive evaluations demonstrate that HyperCross reduces query latency by 2.4× and storage overhead by 40% compared to state-of-the-art baselines, establishing a scalable foundation for data-intensive inter-chain applications.
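
The learned-index half of ZKLI can be illustrated without the cryptography: fit a model from keys to positions, record its maximum error, and answer lookups by searching only an error-bounded window. The sketch below uses a hand-rolled linear fit on synthetic keys and omits the zk-SNARK and oblivious-transfer layers entirely.

```python
# Toy learned index: linear model plus error-bounded window search.
import bisect

keys = sorted(range(0, 100_000, 7))       # stand-in for indexed block keys

# least-squares fit of position ≈ a*key + b over the sorted keys
n = len(keys)
mean_k = sum(keys) / n
mean_p = (n - 1) / 2
cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys))
var = sum((k - mean_k) ** 2 for k in keys)
a = cov / var
b = mean_p - a * mean_k
max_err = max(abs(i - (a * k + b)) for i, k in enumerate(keys))  # error bound

def lookup(key):
    guess = int(a * key + b)
    lo = max(0, guess - int(max_err) - 1)
    hi = min(n, guess + int(max_err) + 2)
    i = lo + bisect.bisect_left(keys[lo:hi], key)  # search only the window
    return i if i < n and keys[i] == key else None

print(lookup(7 * 1234), lookup(5))   # position of an existing key, then None
```
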

13 pages, 555 KB  
Essay
Governing Generative AI in Healthcare: A Normative Conceptual Framework for Epistemic Authority, Trust, and the Architecture of Responsibility
by Fatma Eren Akgün and Metin Akgün
Healthcare 2026, 14(8), 1098; https://doi.org/10.3390/healthcare14081098 - 20 Apr 2026
Viewed by 302
Abstract
Background/Objectives: Large language models (LLMs) such as ChatGPT are rapidly being integrated into healthcare for tasks ranging from clinical documentation to diagnostic support. Current ethical discussions focus predominantly on bias, privacy, and accuracy, leaving three critical governance questions unresolved: What kind of knowledge does an LLM output represent in clinical reasoning? When is a clinician’s or patient’s trust in that output justified? Who bears responsibility when an AI-informed decision leads to patient harm? This study proposes the Epistemic Authority–Trust–Responsibility (ETR) Architecture, a normative conceptual framework that addresses these three questions as an integrated governance challenge. Methods: The framework was developed through normative conceptual analysis—a method that constructs governance proposals by synthesising philosophical principles, ethical theories, and empirical evidence. The literature was identified through structured searches of PubMed, PhilPapers, and EUR-Lex (January 2020–March 2026), drawing on the philosophy of medical knowledge, the ethics of trust and testimony, and the moral philosophy of responsibility. Results: The ETR Architecture produces four outputs: (i) a four-tier classification system that distinguishes LLM outputs—from administrative drafts to clinical evidence claims—and matches each tier to appropriate verification requirements; (ii) the concept of the ‘epistemic placebo’, formally defined as a governance measure that creates a documented appearance of compliance while lacking at least one operative element of genuine oversight; (iii) a model specifying four conditions under which trust in healthcare AI is justified; (iv) four testable hypotheses with associated research designs connecting governance design to trust calibration and patient safety. Conclusions: The 2025–2027 regulatory transition period offers a critical window for shaping how healthcare institutions govern AI. We argue that deploying LLMs without explicitly classifying their outputs and building appropriate oversight risks allowing governance norms to be set by technology vendors rather than by evidence-informed, patient-centred policy.
(This article belongs to the Special Issue AI-Driven Healthcare: Transforming Patient Care and Outcomes)

23 pages, 7844 KB  
Article
Explainable Logic-Driven Firewall Anomaly Detection with Knowledge Graph Visualization and Machine Learning Validation
by Abdelrahman Osman Elfaki, Abdulhadi Albluwi, Amer Aljaedi and Mohamed Hussien Mohamed Nerma
Electronics 2026, 15(8), 1714; https://doi.org/10.3390/electronics15081714 - 17 Apr 2026
Viewed by 306
Abstract
Firewall policy misconfigurations remain a major source of security vulnerabilities in modern networks, particularly as firewall rule sets grow in size and complexity. Such misconfigurations, commonly referred to as firewall anomalies, can lead to unintended access control behavior and undermine network security. In this paper, we propose a formal logic rule-based framework for the systematic detection and investigation of firewall anomalies, supported by knowledge graph-based visualization. First-order logic (FOL) is employed to precisely model firewall rules and to define major anomaly types, including shadowing, redundancy, correlation, generalization, and irrelevance, in both single and distributed firewall environments. The proposed framework introduces explicit and comprehensive logical definitions for each anomaly type, enabling deterministic, interpretable, and complete detection of rule conflicts and overlaps. Complex anomalies, particularly correlation and generalization, are systematically decomposed into well-defined logical cases to facilitate the accurate identification of subtle, order-dependent interactions among firewall rules. To enhance usability and analysis, firewall rules and detected anomalies are represented using Neo4j knowledge graphs, providing intuitive visual insights into rule relationships and anomaly causes. The effectiveness of the proposed approach is validated using a real operational backbone network dataset collected from Stanford University’s campus network. Experimental results demonstrate the framework’s ability to accurately detect both simple and complex firewall anomalies under realistic network conditions. To further validate the proposed logic rules, a machine learning-based evaluation was conducted. The findings confirm their effectiveness in accurately characterizing firewall anomalies. Unlike machine learning or heuristic-based methods, the proposed approach does not require training data and guarantees formal correctness and explainability. These features make it a robust and practical solution for firewall policy verification and network security management.
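
Two of the anomaly definitions translate directly into executable checks. The sketch below is my own simplified encoding of the FOL idea, with address spaces abstracted as integer ranges: an earlier rule that matches a superset of a later rule's traffic shadows it if the actions differ and makes it redundant if they agree.

```python
# Simplified shadowing/redundancy checks; my encoding, not the paper's FOL.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: range          # address ranges abstracted as Python ranges
    dst: range
    action: str         # "allow" or "deny"

def subsumes(a: range, b: range) -> bool:
    return a.start <= b.start and b.stop <= a.stop

def matches_superset(rx: Rule, ry: Rule) -> bool:
    return subsumes(rx.src, ry.src) and subsumes(rx.dst, ry.dst)

def anomalies(policy):
    for j, ry in enumerate(policy):
        for rx in policy[:j]:               # only earlier rules can shadow
            if matches_superset(rx, ry):
                kind = "shadowing" if rx.action != ry.action else "redundancy"
                yield (kind, rx, ry)

policy = [
    Rule(range(0, 256), range(0, 1024), "deny"),
    Rule(range(10, 20), range(80, 81), "allow"),    # shadowed by rule 0
    Rule(range(0, 100), range(0, 100), "deny"),     # redundant with rule 0
]
for kind, rx, ry in anomalies(policy):
    print(kind, ry)
```
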

26 pages, 1879 KB  
Article
NEF-DHR: A Non-Equivalent Functional Dynamic Heterogeneous Redundancy Architecture for Endogenous Safety and Security
by Bingbing Jiang, Yilin Kang and Hanzhi Cai
Entropy 2026, 28(4), 463; https://doi.org/10.3390/e28040463 - 17 Apr 2026
Viewed by 150
Abstract
Endogenous safety and security (ESS), which advocates for designing systems that are inherently safe and secure by nature, has emerged as a pivotal paradigm for addressing the inherent vulnerabilities of information systems. The Dynamic Heterogeneous Redundancy (DHR) architecture serves as its typical implementation by introducing dynamic, heterogeneous, redundant executors with equivalent function (EF) into the information system. However, the functional equivalence property explicitly connects the system’s output to that of the individual executors, thereby creating potential security risks that adversaries could exploit. In addition, EF-DHR faces an inherent contradiction between functional equivalence and heterogeneous implementations (HIS), leading to high engineering costs and limited applicability. To address these problems, this paper proposes the Non-Equivalent Functional DHR (NEF-DHR) architecture, leveraging function secret sharing (FSS) theory to replace EF executors with NEF components, which fundamentally eliminates the EF-HIS contradiction. Specifically, we propose the concept of ‘terminal executor output information entropy loss’ to formalize the risk of output information interception by adversaries and theoretically prove that NEF-DHR improves unpredictability and resistance to attacks. Experimental results further validate that NEF-DHR exhibits lower error rates under various attack levels, with enhanced robustness and superior ESS performance. Additionally, we generalize the DHR architecture based on three core properties (indistinguishability, output recoverability, verification) and classify ESS into three types with corresponding DHR variants. This work advances the application of entropy theory in ESS and provides a novel entropy-enhanced solution for the large-scale deployment of DHR security systems.
(This article belongs to the Section Complexity)
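
The entropy argument can be illustrated numerically. In the sketch below (my construction, using 2-of-2 XOR sharing as a stand-in for FSS), an adversary intercepting an EF executor's output learns the full output entropy, while a single NEF share is statistically independent of the system output, so the empirical mutual information collapses to near zero.

```python
# Empirical mutual information between the system output and an intercepted
# component output, for EF (output itself) vs. NEF (one XOR share).
import math, random
from collections import Counter

random.seed(1)

def mutual_info(xs, ys):
    n = len(xs)
    pj, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pj.items())

ys = [random.getrandbits(4) for _ in range(200_000)]   # 4-bit system outputs
ef_leak = ys                                           # EF executor emits y
masks = [random.getrandbits(4) for _ in ys]
nef_leak = [y ^ r for y, r in zip(ys, masks)]          # one XOR share of y

print(f"EF-DHR : I(y; intercepted) ≈ {mutual_info(ys, ef_leak):.3f} bits")
print(f"NEF-DHR: I(y; intercepted) ≈ {mutual_info(ys, nef_leak):.3f} bits")
```

The first line approaches the full 4 bits of output entropy; the second stays near zero, which is the "entropy loss" gap the abstract formalizes.
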

13 pages, 1961 KB  
Proceeding Paper
Blockchain-Based Secure Data Sharing in Cybersecurity: A Framework for Protecting Sensitive Information
by Raneem Khaled AlFadhel and Mohammad Ali A. Hammoudeh
Comput. Sci. Math. Forum 2026, 13(1), 2; https://doi.org/10.3390/cmsf2026013002 - 15 Apr 2026
Viewed by 10
Abstract
With the growing volume of sensitive data stored and processed in cloud environments, conventional security models are no longer sufficient to guarantee privacy, integrity, and trust. This paper proposes a blockchain-based framework that integrates Zero-Knowledge Proofs (ZKPs) and homomorphic encryption (HE) to enable secure and privacy-preserving data sharing. ZKPs are employed to verify user access rights without exposing identities or underlying information, while HE allows computations to be performed directly on encrypted data, ensuring confidentiality is preserved throughout the data lifecycle. The proposed framework addresses the limitations of existing approaches that either lack encrypted computation capabilities or expose sensitive data during processing. Formal and informal analyses demonstrate the feasibility of the model in terms of encryption time, ZKP verification latency, and computation overhead. The framework is designed to be applied initially in the healthcare sector and aligns with national digital transformation initiatives such as Saudi Vision 2030.
(This article belongs to the Proceedings of The 1st International Conference on Emerging Tech & Innovation (ICETI))
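
The additive property that makes HE useful here can be shown with textbook Paillier at toy parameters: the product of two ciphertexts decrypts to the sum of the plaintexts, so an untrusted party can aggregate encrypted values without ever seeing them. This is a pedagogical sketch, not the paper's implementation, and the key sizes are far too small to be secure.

```python
# Textbook Paillier (g = n + 1 variant) with insecure toy primes.
import math, random

p, q = 101, 113                 # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)            # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (1 + m * n) * pow(r, n, n2) % n2     # g^m * r^n mod n^2

def decrypt(c):
    x = pow(c, lam, n2)
    return (x - 1) // n * mu % n                # L(c^lam mod n^2) * mu mod n

c1, c2 = encrypt(20), encrypt(22)
c_sum = c1 * c2 % n2            # homomorphic addition of the plaintexts
print(decrypt(c_sum))           # -> 42, computed without decrypting inputs
```
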

15 pages, 806 KB  
Article
Relational Capacity and Fragmented Authority: Coordination and Power in Indonesia’s Decentralized Regulatory Governance
by Heny Sulistiyowati, Muhammad Saleh S. Ali and Imam Mujahidin Fahmid
Sustainability 2026, 18(8), 3780; https://doi.org/10.3390/su18083780 - 10 Apr 2026
Viewed by 366
Abstract
This study examines how coordination, power, and interdependence shape regulatory governance in the decentralized edible bird’s nest (EBN) sector in Pulang Pisau, Indonesia. While decentralization is often associated with improved responsiveness and local adaptability, it frequently produces fragmented regulatory systems in which authority is distributed without effective coordination. Using an actor-centered qualitative design combined with the MACTOR method, this study analyzes influence–dependence relations, objective alignment, and coordination bottlenecks across key actors. The findings show that regulatory performance is shaped less by formal mandates than by relational positioning within the governance system. Actors controlling technical verification and documentary gateways occupy high-influence positions, while licensing authorities remain operationally dependent. Although most actors share common objectives—such as hygiene, quality assurance, and traceability—these are pursued through fragmented procedures, resulting in coordination failures and regulatory inequality. Producers bear the greatest compliance burdens despite having limited influence over regulatory processes. The study introduces the concept of relational administrative capacity to explain how decentralized governance outcomes depend on the alignment of authority, expertise, and procedural sequencing across interdependent actors. The findings suggest that improving regulatory performance requires strengthening coordination architectures rather than adding new rules.

30 pages, 4987 KB  
Article
AT-BSS: A Broker Selection Strategy for Efficient Cross-Shard Processing in Sharded IoT–Blockchain Systems
by Yue Su, Yang Xiang, Kien Nguyen and Hiroo Sekiya
Sensors 2026, 26(8), 2296; https://doi.org/10.3390/s26082296 - 8 Apr 2026
Viewed by 377
Abstract
The deep integration of the Internet of Things (IoT) and blockchain technology enables emerging applications in multi-party collaboration and trusted data sharing. However, the scalability constraints of blockchain networks remain a major bottleneck when handling high-frequency interactions in IoT–blockchain systems. Sharding addresses this challenge by partitioning the blockchain network into parallel sub-networks. Nevertheless, it introduces significant coordination overhead for cross-shard transactions. Among mitigation strategies, Broker-based mechanisms (e.g., BrokerChain) have attracted increasing attention for their efficiency in handling cross-shard communication by reducing verification overhead and communication latency. Despite these advantages, existing research typically treats the Broker group as a fixed configuration, neglecting the impact of Broker selection on system performance. To bridge this gap, this paper proposes the Accumulative Activity–Temporal Liveness Broker Selection Strategy (AT-BSS) to optimize cross-shard transaction processing in sharded IoT–blockchains. Specifically, we formally characterize the Accumulative Activity and Temporal Liveness of accounts in the account–transaction network and use these two metrics to identify accounts that maximize transaction-aggregation efficiency. We implement AT-BSS on the BlockEmulator platform and evaluate it against two baselines, namely, ABChain and BrokerChain. Under different settings of the number of Brokers (BrokerNum), number of shards (ShardNum), transaction arrival rate (InjectSpeed), and maximum block size (MaxBlockSize), AT-BSS consistently outperforms both baselines in terms of Transactions Per Second (TPS), Transaction Confirmation Latency (TCL), and Cross-shard Transaction Ratio (CTX). Compared with ABChain, AT-BSS achieves up to 15.5% higher TPS and reduces TCL and CTX by up to 80.2% and 28.7%, respectively. The gains over BrokerChain are even more pronounced, with TPS improvements of up to 229% and reductions of up to 97.7% in TCL and 80.5% in CTX.
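
The selection idea can be sketched with made-up formulas in place of the paper's exact metric definitions: count each account's transactions (Accumulative Activity), measure the fraction of epochs in which it appears (a proxy for Temporal Liveness), and take the top-k accounts by a weighted combination as the Broker group.

```python
# Illustrative Broker scoring; the formulas are mine, not the paper's.
from collections import defaultdict

# (epoch, sender, receiver) transaction log
txs = [(1, "a", "b"), (1, "a", "c"), (2, "a", "d"),
       (2, "b", "c"), (3, "a", "b"), (3, "c", "a")]

activity = defaultdict(int)          # Accumulative Activity: tx count
epochs_seen = defaultdict(set)       # epochs in which the account appears
for epoch, s, r in txs:
    for acct in (s, r):
        activity[acct] += 1
        epochs_seen[acct].add(epoch)

latest = max(e for e, _, _ in txs)

def score(acct, alpha=0.5):
    liveness = len(epochs_seen[acct]) / latest   # fraction of epochs active
    return alpha * activity[acct] + (1 - alpha) * liveness

brokers = sorted(activity, key=score, reverse=True)[:2]
print(brokers)    # ['a', 'b']: 'a' is both busiest and active in every epoch
```
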

21 pages, 320 KB  
Article
Xenoepistemics
by Jordi Vallverdú
Philosophies 2026, 11(2), 57; https://doi.org/10.3390/philosophies11020057 - 8 Apr 2026
Viewed by 309
Abstract
Epistemology remains tacitly anthropocentric: it treats knowledge as something produced and validated through human cognitive capacities such as understanding, intuition, and transparent justification. Yet contemporary science and artificial intelligence increasingly depend on non-human systems that generate mathematically valid results, empirically successful models, and operationally reliable inferences that no human can fully survey or interpret. This article develops xenoepistemics, a structural theory of non-anthropocentric knowledge. The central claim is that epistemic evaluation must be reformulated in terms of system-level properties—reliability, robustness, counterfactual sensitivity, and domain transfer—rather than mentalistic notions such as belief or understanding. I offer (i) a definition of xenoepistemic systems as systems that track structure in a target domain without requiring human-style semantic access; (ii) a minimal account of epistemic agency without minds that avoids trivialization; and (iii) a non-circular trust framework that distinguishes empirical success from epistemic legitimacy using independent validation regimes. This paper addresses a reflexive worry—that a human-authored theory cannot dethrone human epistemology—by separating standpoint from object: xenoepistemics is articulated by humans but is not about human cognition. I discuss the pragmatic value of xenoepistemic knowledge production, the limits of independent verification for opaque systems, domain-relative thresholds for xenoepistemic authority, and the problem of constitutionally human-inaccessible knowledge. Finally, I diagnose and formalize the Marcusian regress paradox: recurrent goalpost-shifting, whereby every machine competence is reclassified as irrelevant once achieved. Xenoepistemics reframes this debate by treating non-human knowledge as a present reality requiring new norms, not as a future curiosity.
(This article belongs to the Special Issue Intelligent Inquiry into Intelligence)

48 pages, 578 KB  
Article
Invariant Threshold Symmetry in Bipolar Fuzzy Quasi-Subalgebras of Sheffer–Nelson Algebras
by Amal S. Alali, Tahsin Oner, Ravi Kumar Bandaru, Rajesh Neelamegarajan, Ibrahim Senturk and Ebrar Gunel
Symmetry 2026, 18(4), 613; https://doi.org/10.3390/sym18040613 - 5 Apr 2026
Viewed by 252
Abstract
This paper develops a rigorous algebraic framework for quasi-substructures in Sheffer-based Nelson algebras, extending the landscape of fuzzy algebraic theory. By systematically introducing (∈, ∈∨q)-bipolar fuzzy quasi-subalgebras and ideals, we analyze their structural properties through generalized belongingness and quasi-coincidence relations. We formalise invariant threshold symmetry as the condition g⁺(χ) + |g⁻(χ)| = c for a constant c ∈ [0, 2] and every χ ∈ Ω (Definition 10) and prove its structural preservation within (∈, ∈∨q)-bipolar fuzzy quasi-subalgebras (Theorem 4, supported by Theorems 3, 15 and 16). This enables a balanced dual evaluation of positive and negative information. Characterization theorems are established via level subsets, revealing how quasi-substructure properties are governed by bounds at critical membership values. Equivalence results unify classical and bipolar fuzzy perspectives, demonstrating that algebraic constraints preserve structural coherence across crisp and fuzzy environments. Algorithmic verification procedures are provided for practical validation in finite systems, and illustrative examples highlight applications in uncertainty modeling and decision support. Overall, the proposed theory formalizes bipolar fuzzy structures in Sheffer-based Nelson algebras, utilizing invariant threshold symmetry, level-set decomposition, and crisp equivalence to evaluate dual information.
(This article belongs to the Special Issue Algebras and Symmetry in Fuzzy Set Theory)
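
For finite structures, the invariant threshold symmetry condition g⁺(χ) + |g⁻(χ)| = c is directly checkable, in the spirit of the paper's algorithmic verification procedures. The membership values below are invented for illustration.

```python
# Finite check of invariant threshold symmetry; example values are invented.
g_pos = {"x0": 0.7, "x1": 0.4, "x2": 0.9}     # positive membership degrees
g_neg = {"x0": -0.5, "x1": -0.8, "x2": -0.3}  # negative membership degrees

def threshold_constant(gp, gn, tol=1e-9):
    """Return c if g⁺(χ) + |g⁻(χ)| is constant over Ω with c in [0, 2]."""
    totals = [gp[x] + abs(gn[x]) for x in gp]
    c = totals[0]
    if all(abs(t - c) <= tol for t in totals) and 0.0 <= c <= 2.0:
        return c
    return None

print(threshold_constant(g_pos, g_neg))   # -> 1.2, so the symmetry holds
```
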