Search Results (82)

Search Parameters:
Keywords = theoretical proof and implementation

35 pages, 504 KB  
Article
Introducing a Resolvable Network-Based SAT Solver Using Monotone CNF–DNF Dualization and Resolution
by Gábor Kusper and Benedek Nagy
Mathematics 2026, 14(2), 317; https://doi.org/10.3390/math14020317 - 16 Jan 2026
Viewed by 300
Abstract
This paper is a theoretical contribution that introduces a new reasoning framework for SAT solving based on resolvable networks (RNs). RNs provide a graph-based representation of propositional satisfiability in which clauses are interpreted as directed reaches between disjoint subsets of Boolean variables (nodes). Building on this framework, we introduce a novel RN-based SAT solver, called RN-Solver, which replaces local assignment-driven branching with global reasoning over token distributions. Token distributions, interpreted as truth assignments, are generated by monotone CNF–DNF dualization applied to white (all-positive) clauses. New white clauses are derived via resolution along private-pivot chains, and the solver’s progression is governed by a taxonomy of token distributions (black-blocked, terminal, active, resolved, and non-resolved). The main results establish the soundness and completeness of the RN-Solver. Experimentally, the solver performs very well on pigeonhole formulas, where the separation between white and black clauses enables effective global reasoning. In contrast, its current implementation performs poorly on random 3-SAT instances, highlighting both practical limitations and significant opportunities for optimization and theoretical refinement. The presented RN-Solver implementation is a proof of concept that validates the underlying theory, not a state-of-the-art competitive solver. One promising direction is the generalization of strongly connected components from directed graphs to resolvable networks. Finally, the token-based perspective naturally suggests a connection to token-superposition Petri net models.
(This article belongs to the Special Issue Graph Theory and Applications, 3rd Edition)
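The monotone CNF–DNF dualization step used by the solver above has a simple combinatorial core: the terms of the dual DNF of a monotone CNF are exactly the minimal hitting sets of its clauses. The brute-force Python sketch below illustrates that correspondence on all-positive ("white") clauses; it is an illustrative toy under that reading, not the RN-Solver's dualization routine, and the function name is an assumption.

```python
from itertools import combinations

def monotone_cnf_to_dnf(clauses):
    """Dualize a monotone CNF (a list of sets of positive variables) into its
    minimal DNF: the minimal hitting sets of the clause family.
    Brute force and exponential; for illustration only."""
    variables = sorted(set().union(*clauses))
    terms = []
    for size in range(1, len(variables) + 1):
        for cand in combinations(variables, size):
            cand = set(cand)
            # A term must intersect (hit) every clause ...
            if all(cand & clause for clause in clauses):
                # ... and be minimal: no smaller term already found inside it.
                if not any(t <= cand for t in terms):
                    terms.append(cand)
    return terms

# Example: (a or b) and (b or c)  dualizes to  b or (a and c)
print(monotone_cnf_to_dnf([{"a", "b"}, {"b", "c"}]))
```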
46 pages, 687 KB  
Article
A Next-Generation Cyber-Range Framework for O-RAN and 6G Security Validation
by Evangelos Chaskos, Nicholas Kolokotronis and Stavros Shiaeles
Future Internet 2026, 18(1), 29; https://doi.org/10.3390/fi18010029 - 4 Jan 2026
Viewed by 345
Abstract
The evolution towards an Open Radio Access Network (O-RAN) and 6G introduces unprecedented openness and intelligence in mobile networks, alongside significant security challenges. Current cyber-ranges (CRs) are not prepared to address the disaggregated architecture, numerous open interfaces, AI/ML-driven RAN Intelligent Controllers (RICs), and O-Cloud dependencies of O-RAN, nor the now-established 6G paradigms of AI-native operations and pervasive Zero Trust Architectures (ZTAs). This paper identifies a critical validation gap and proposes a novel theoretical framework for a next-generation CR, specifically architected to address the unique complexities of O-RAN’s disaggregated components, open interfaces, and advanced 6G security paradigms. Our framework features a modular architecture enabling high-fidelity emulation of O-RAN components and interfaces, integrated AI/ML security testing, and native support for ZTA validation. We also conceptualize a novel Federated Cyber-Range (FCR) architecture for enhanced scalability and specialized testing. By systematically linking identified threats to CR requirements and illustrating unique, practical O-RAN-specific exercises, this research lays foundational work for developing CRs capable of proactively assessing and strengthening the security of O-RAN and future 6G systems, while also outlining key implementation challenges. We validate the framework’s feasibility through a proof-of-concept A1 malicious policy injection exercise.
(This article belongs to the Special Issue Secure and Trustworthy Next Generation O-RAN Optimisation)
19 pages, 850 KB  
Article
Natural-Language Relay Control for a SISO Thermal Plant: A Proof-of-Concept with Validation Against a Conventional Hysteresis Controller
by Sebastian Rojas-Ordoñez, Mikel Segura, Veronica Mendoza, Unai Fernandez and Ekaitz Zulueta
Appl. Sci. 2025, 15(24), 12986; https://doi.org/10.3390/app152412986 - 9 Dec 2025
Viewed by 469
Abstract
This paper presents a proof-of-concept for a natural-language-based closed-loop controller that regulates the temperature of a simple single-input single-output (SISO) thermal process. The key idea is to express a relay-with-hysteresis policy in plain English and let a local large language model (LLM) interpret sensor readings and output a binary actuation command at each sampling step. Beyond interface convenience, we demonstrate that natural language can serve as a valid medium for modeling physical reality and executing deterministic reasoning in control loops. We implement a compact plant model and compare two controllers: a conventional coded relay and an LLM-driven controller prompted with the same logic and constrained to a single-token output. The workflow integrates schema validation, retries, and a safe fallback, while a stepwise evaluator checks agreement with the baseline. In a long-horizon (1000-step) simulation, the language controller reproduces the hysteresis behavior with matching switching patterns. Furthermore, sensitivity and ablation studies demonstrate the system’s robustness to measurement noise and the LLM’s ability to correctly execute the hysteresis policy, thereby preserving the theoretical robustness inherent to this control law. This work demonstrates that, for slow thermal dynamics, natural-language policies can achieve comparable performance to classical relay systems while providing a transparent, human-readable interface and facilitating rapid iteration.
(This article belongs to the Section Computing and Artificial Intelligence)
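For readers unfamiliar with the baseline being replicated, the following minimal Python sketch shows a relay-with-hysteresis loop on a toy first-order thermal plant. It mirrors the conventional coded controller the paper compares against, not the LLM-driven variant; the plant constants, setpoint, and band width are illustrative assumptions.

```python
# Minimal sketch of a coded relay-with-hysteresis baseline on a toy
# first-order SISO thermal plant; all numeric constants are assumptions.
def relay_step(temp, setpoint, heater_on, band=1.0):
    """Classic relay with hysteresis: switch only at the band edges."""
    if temp <= setpoint - band:
        return True            # too cold -> heater on
    if temp >= setpoint + band:
        return False           # too hot  -> heater off
    return heater_on           # inside the band: keep previous state

def plant_step(temp, heater_on, ambient=20.0, gain=40.0, dt=1.0, tau=100.0):
    """Toy first-order thermal dynamics, integrated with an Euler step."""
    power = gain if heater_on else 0.0
    return temp + dt * ((ambient - temp) + power) / tau

temp, heater = 20.0, False
for step in range(1000):               # long-horizon run, as in the paper
    heater = relay_step(temp, setpoint=35.0, heater_on=heater)
    temp = plant_step(temp, heater)
print(f"final temperature: {temp:.2f}")
```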
34 pages, 2006 KB  
Article
Selective Learnable Discounting in Deep Evidential Semantic Mapping
by Dongfeng Hu, Zhiyuan Li, Junhao Chen and Jian Xu
Electronics 2025, 14(23), 4602; https://doi.org/10.3390/electronics14234602 - 24 Nov 2025
Viewed by 468
Abstract
In autonomous driving and mobile robotics applications, constructing accurate and reliable three-dimensional semantic maps poses significant challenges in resolving conflicts and uncertainties among multi-frame observations in complex environments. Traditional deterministic fusion methods struggle to effectively quantify and process uncertainties in observations, while existing evidential deep learning approaches, despite providing uncertainty modeling frameworks, still exhibit notable limitations when dealing with spatially varying observation quality. This paper proposes a selective learnable discounting method for deep evidential semantic mapping that introduces a lightweight selective α-Net network based on the EvSemMap framework proposed by Kim and Seo. The network can adaptively detect noisy regions and predict pixel-level discounting coefficients based on input image features. Unlike traditional global discounting strategies, this work employs a theoretically principled scaling discounting formula, ẽ_k(x) = α(x)·e_k(x), that conforms to Dempster–Shafer theory, implementing a selective adjustment mechanism that reduces evidence reliability only in noisy regions while preserving original evidence strength in clean regions. Theoretical proofs verify three core properties of the proposed method: evidence preservation under discounting (ensuring no loss of classification accuracy), validity of uncertainty redistribution (effectively suppressing overconfidence in noisy regions), and optimality of the discount coefficients (matching the theoretical optimum α*(x) = 1 − N(x), where N is the noise mask). Experimental results demonstrate that the method achieves a 43.1% improvement in Expected Calibration Error (ECE) for noisy regions and a 75.4% improvement overall, with α-Net attaining an IoU of 1.0 with noise masks on the constructed synthetic dataset—which includes common real-scenario noise types (e.g., motion blur, abnormal illumination, and sensor noise) and where RGB features correlate with observation quality—thereby fully realizing the selective discounting design objective. Combined with additional optimization via temperature calibration techniques, this method provides an effective uncertainty management solution for deep evidential semantic mapping in complex scenarios.
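A minimal NumPy sketch of the scaling-discounting idea, assuming a standard evidential head where the Dirichlet concentration per class is e_k + 1: positive scaling ẽ_k(x) = α(x)·e_k(x) leaves the per-pixel argmax unchanged (the preservation property) while raising predictive uncertainty wherever α(x) < 1. The α map is supplied by hand here; in the paper it is predicted by the α-Net.

```python
import numpy as np

# Sketch of selective scaling discounting for evidential outputs:
# evidence e_k(x) >= 0 per class, Dirichlet concentration a_k = e_k + 1,
# predictive uncertainty u = K / sum_k(a_k). The alpha map in [0, 1] is
# hand-set here; the paper predicts it per pixel with the alpha-Net.
def discount_evidence(evidence, alpha_map):
    """evidence: (H, W, K) non-negative; alpha_map: (H, W) in [0, 1]."""
    return alpha_map[..., None] * evidence     # e~_k(x) = alpha(x) * e_k(x)

def dirichlet_uncertainty(evidence):
    K = evidence.shape[-1]
    return K / (evidence.sum(axis=-1) + K)     # u = K / sum_k(e_k + 1)

rng = np.random.default_rng(0)
ev = rng.gamma(2.0, 2.0, size=(4, 4, 3))       # toy per-pixel evidence, K=3
alpha = np.ones((4, 4))
alpha[:2] = 0.2                                # pretend the top half is noisy
ev_d = discount_evidence(ev, alpha)
# argmax is unchanged (no accuracy loss); uncertainty rises where alpha < 1
assert (ev.argmax(-1) == ev_d.argmax(-1)).all()
print(dirichlet_uncertainty(ev).mean(), dirichlet_uncertainty(ev_d).mean())
```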
19 pages, 349 KB  
Article
AI-Enabled ESG Compliance Audit for Stakeholders
by Eid M. Alotaibi and Abdulaziz M. Alwathnani
Sustainability 2025, 17(21), 9513; https://doi.org/10.3390/su17219513 - 25 Oct 2025
Cited by 1 | Viewed by 1837
Abstract
Environmental, social, and governance (ESG) disclosures face credibility risks due to Scope 2 Greenhouse Gas (GHG) reports lacking standardized compliance checks, raising concerns about their reliability. This study therefore develops and evaluates an AI-enabled artefact for ESG compliance auditing. This artefact applies natural language processing (NLP) to extract reported values, implements rule-based checks grounded in the GHG Protocol, and produces transparent output. A design science research (DSR) approach guided the design, demonstration, and evaluation of the artefact, which was applied to sustainability reports from five technology companies. The results revealed that it replicates auditor judgments and reduces workload by over ninety percent in the sample. These findings serve as a proof-of-concept for automation in ESG compliance auditing. The theoretical contributions include extending the literature on AI in ESG auditing by reframing its role from producing interpretive scores to enabling transparent compliance verification. This study also demonstrates how DSR can help produce artefacts that embed rule-based logic into ESG assurance with rigor and practical relevance. The practical contributions include highlighting how a lightweight tool can enable auditors, regulators, boards, and investors to screen disclosures and benchmark credibility without sacrificing professional judgment.
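The artefact itself is not public, but the shape of the pipeline the abstract describes (NLP extraction of reported values followed by rule-based checks) can be sketched as below. The regex, field names, and tolerance are hypothetical stand-ins, not the authors' rules.

```python
import re

# Hedged sketch of the pipeline's shape: pull a reported Scope 2 figure out
# of disclosure text, then run a simple rule-based consistency check.
# The regex, the dual-reporting rule, and the tolerance are assumptions.
PATTERN = re.compile(r"Scope\s*2\D{0,40}?([\d,]+(?:\.\d+)?)\s*(tCO2e|MWh)", re.I)

def extract_scope2(text):
    """Return (value, unit) for the first Scope 2 figure found, else None."""
    m = PATTERN.search(text)
    if not m:
        return None
    return float(m.group(1).replace(",", "")), m.group(2)

def check_location_vs_market(location_based, market_based, tolerance=10.0):
    """The GHG Protocol asks for dual reporting; flag implausible gaps."""
    ratio = market_based / location_based if location_based else float("inf")
    status = "pass" if 1 / tolerance <= ratio <= tolerance else "review"
    return {"ratio": round(ratio, 2), "status": status}

text = "In FY2024 our Scope 2 (location-based) emissions were 12,450 tCO2e."
print(extract_scope2(text))
print(check_location_vs_market(12450.0, 9800.0))
```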
25 pages, 1288 KB  
Article
An Analysis of Implied Volatility, Sensitivity, and Calibration of the Kennedy Model
by Dalma Tóth-Lakits, Miklós Arató and András Ványolos
Mathematics 2025, 13(21), 3396; https://doi.org/10.3390/math13213396 - 24 Oct 2025
Viewed by 661
Abstract
The Kennedy model provides a flexible and mathematically consistent framework for modeling the term structure of interest rates, leveraging Gaussian random fields to capture the dynamics of forward rates. In earlier work, we developed both theoretical results—including novel proofs of the martingale property, connections between the Kennedy and HJM frameworks, and parameter estimation theory—and practical calibration methods using maximum likelihood, Radon–Nikodym derivatives, and numerical optimization (stochastic gradient descent) on simulated and real par swap rate data. This study extends that analysis in several directions. We derive detailed formulas for the volatilities implied by the Kennedy model and investigate their asymptotic properties. A comprehensive sensitivity analysis is conducted to evaluate the impact of key parameters on derivative prices. We implement an industry-standard Monte Carlo method, tailored to the conditional distribution of the Kennedy field, to efficiently generate scenarios consistent with observed initial forward curves. Furthermore, we present closed-form pricing formulas for various interest rate derivatives, including zero-coupon bonds, caplets, floorlets, swaplets, and the par swap rate. A key advantage of these results is that the formulas are expressed explicitly in terms of the initial forward curve and the original parameters of the Kennedy model, which ensures both analytical tractability and consistency with market-observed data. These closed-form expressions can be directly utilized in calibration procedures, substantially accelerating multidimensional nonlinear optimization algorithms. Moreover, given an observed initial forward curve, the model provides significantly more accurate pricing formulas, enhancing both theoretical precision and practical applicability. Finally, we calibrate the Kennedy model to market-observed caplet prices. The findings provide valuable insights into the practical applicability and robustness of the Kennedy model in real-world financial markets.
(This article belongs to the Special Issue Modern Trends in Mathematics, Probability and Statistics for Finance)
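As a point of reference for what "expressed explicitly in terms of the initial forward curve" means, the sketch below prices the most basic such instrument, a zero-coupon bond, from a tabulated forward curve via P(0, T) = exp(−∫₀ᵀ f(0, u) du). This is generic interest-rate mathematics, not the Kennedy-specific caplet or swaplet formulas derived in the paper.

```python
import numpy as np

# Textbook building block: a zero-coupon bond price from the initial
# forward curve, P(0,T) = exp(-integral_0^T f(0,u) du), computed with
# trapezoidal quadrature on a tabulated curve. Generic rate math only.
def zcb_price(grid, fwd, T):
    """grid: increasing times; fwd: f(0, u) on grid; T: maturity <= grid[-1]."""
    mask = grid <= T
    t = np.append(grid[mask], T)
    f = np.append(fwd[mask], np.interp(T, grid, fwd))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
    return float(np.exp(-integral))

grid = np.linspace(0.0, 10.0, 101)
fwd = 0.02 + 0.001 * grid                 # toy upward-sloping forward curve
print(zcb_price(grid, fwd, 5.0))          # ~ exp(-(0.02*5 + 0.001*12.5))
```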
27 pages, 19715 KB  
Article
Applying Computational Engineering Modeling to Analyze the Social Impact of Conflict and Violent Events
by Felix Schwebel, Sebastian Meynen and Manuel García-Herranz
Entropy 2025, 27(10), 1003; https://doi.org/10.3390/e27101003 - 26 Sep 2025
Viewed by 1057
Abstract
Understanding the societal impacts of armed conflict remains challenging due to limitations in current models, which often apply fixed-radius buffers or composite indices that obscure critical dynamics. These approaches struggle to account for indirect effects, cumulative damage, and context-specific vulnerabilities, especially the question of why similar events produce vastly different outcomes across regions. We introduce a novel computational framework that applies principles from engineering and material science to conflict analysis. Communities are modeled as elastic plates, “social fabrics”, whose physical properties (thickness, elasticity, coupling) are derived from spatial socioeconomic indicators. Conflict events are treated as external forces that deform this fabric, enabling the simulation of how repeated shocks propagate and accumulate. Using a custom Python-based finite element analysis implementation, we demonstrate how heterogeneous data sources can be integrated into a unified, interpretable model. Validation tests confirm theoretical behaviors, while a proof-of-concept application to Nigeria (2018) reveals emergent patterns of spillover, nonlinear accumulation, and context-sensitive impacts. This framework offers a rigorous method to distinguish structural vulnerability from external shocks and provides a tool for understanding how conflict interacts with local conditions, bridging physical modeling and social science to better capture the multifaceted nature of conflict impacts.
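A heavily simplified toy of the "social fabric" analogy, assuming a finite-difference membrane in place of the paper's finite element implementation: local stiffness varies across the grid as a stand-in for socioeconomic indicators, and identical point forces (conflict events) produce larger deformations where the fabric is weaker. All coefficients are illustrative assumptions.

```python
import numpy as np

# Toy membrane sketch of the "social fabric" idea: stiffness varies over a
# grid, point forces stand in for conflict events, and the deformation is
# obtained with Jacobi sweeps on the finite-difference Poisson problem
# stiffness * (-laplacian u) = f. Not the authors' FEA implementation.
def deform(stiffness, forces, iters=2000):
    u = np.zeros_like(forces)
    for _ in range(iters):
        nbrs = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = nbrs + forces / (4.0 * stiffness)           # weaker fabric gives more
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0   # clamped boundary
    return u

n = 32
stiffness = np.ones((n, n))
stiffness[:, : n // 2] = 0.25                  # fragile western half
forces = np.zeros((n, n))
forces[8, 8] = forces[8, 24] = 1.0             # two identical shocks
u = deform(stiffness, forces)
print(u[8, 8] / u[8, 24])   # same shock, larger deformation on the fragile side
```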
77 pages, 8596 KB  
Review
Smart Grid Systems: Addressing Privacy Threats, Security Vulnerabilities, and Demand–Supply Balance (A Review)
by Iqra Nazir, Nermish Mushtaq and Waqas Amin
Energies 2025, 18(19), 5076; https://doi.org/10.3390/en18195076 - 24 Sep 2025
Cited by 1 | Viewed by 2455
Abstract
The smart grid (SG) plays a central role in the modern energy landscape by integrating digital technologies, the Internet of Things (IoT), and Advanced Metering Infrastructure (AMI) to enable bidirectional energy flow, real-time monitoring, and enhanced operational efficiency. However, these advancements also introduce critical challenges related to data privacy, cybersecurity, and operational balance. This review critically evaluates SG systems, beginning with an analysis of data privacy vulnerabilities, including Man-in-the-Middle (MITM), Denial-of-Service (DoS), and replay attacks, as well as insider threats, exemplified by incidents such as the 2023 Hydro-Québec cyberattack and the 2024 blackout in Spain. The review further details the SG architecture and its key components, including smart meters (SMs), control centers (CCs), aggregators, smart appliances, and renewable energy sources (RESs), while emphasizing essential security requirements such as confidentiality, integrity, availability, secure storage, and scalability. Various privacy preservation techniques are discussed, including cryptographic tools like Homomorphic Encryption, Zero-Knowledge Proofs, and Secure Multiparty Computation, anonymization and aggregation methods such as differential privacy and k-Anonymity, as well as blockchain-based approaches and machine learning solutions. Additionally, the review examines pricing models and their resolution strategies, Demand–Supply Balance Programs (DSBPs) utilizing optimization, game-theoretic, and AI-based approaches, and energy storage systems (ESSs) encompassing lead–acid, lithium-ion, sodium-sulfur, and sodium-ion batteries, highlighting their respective advantages and limitations. By synthesizing these findings, the review identifies existing research gaps and provides guidance for future studies aimed at advancing secure, efficient, and sustainable smart grid implementations.
(This article belongs to the Special Issue Smart Grid and Energy Storage)
33 pages, 2389 KB  
Systematic Review
Integration of Blockchain in Accounting and ESG Reporting: A Systematic Review from an Oracle-Based Perspective
by Giulio Caldarelli
J. Risk Financial Manag. 2025, 18(9), 491; https://doi.org/10.3390/jrfm18090491 - 3 Sep 2025
Viewed by 2594
Abstract
The Bitcoin network is a sophisticated accounting system that facilitates consensus and verification of transactions through cryptographic proof, eliminating the need for a central authority. Given its success, the underlying technology, generally referred to as blockchain, has been proposed as a means to improve legacy accounting and reporting systems. However, integrating real-world data into a blockchain requires the use of oracles: third-party systems that, if poorly selected, may be less decentralized and transparent, potentially undermining the expected benefits. Through a systematic review of the existing literature, this study investigates whether research articles on the integration of blockchain technology in accounting and reporting have addressed the limitations posed by oracles, under the rationale that the omission of oracles constitutes a theoretical bias. Furthermore, this study examines oracle-based solutions proposed for reporting applications and classifies them based on their intended purpose. While the overall consideration of oracles remains limited, the findings indicate a steadily increasing interest in their role and implications within accounting, auditing, and ESG-related blockchain implementations. This growing attention is particularly evident in ESG reporting, where permissioned blockchains and attestation mechanisms are increasingly being examined as practical responses to data verification challenges.
22 pages, 1556 KB  
Review
Quantum Cardiovascular Medicine: From Hype to Hope—A Critical Review of Real-World Applications
by Marek Tomala and Maciej Kłaczyński
J. Clin. Med. 2025, 14(17), 6029; https://doi.org/10.3390/jcm14176029 - 26 Aug 2025
Viewed by 2033
Abstract
Context: As quantum technologies advance with innovations in cardiovascular medicine, it can be challenging to distinguish genuine clinical progress from mere ideas. Moving technology from proof of concept demonstrated in the lab to routine clinical use also involves difficult transitions, complicated by the excitement and hype that come with any new technology. Aim: This work aims to assess what quantum technologies are available in cardiovascular medicine for real-world use, to identify which applications are closer to clinically relevant translation, and to differentiate realistic advances from those not yet realized. Methods: A narrative review was conducted using PubMed, EMBASE, Scopus, Web of Science, IEEE Xplore, and arXiv. While real-world use of technologies was prioritized, we included all theoretical literature, regardless of date of publication. Search terms were a combination of the vocabulary of quantum technologies and the vocabulary of cardiovascular medicine. Peer-reviewed publications included primary research, reviews, theoretical works, and conference proceedings. Two reviewers independently screened all citations, and any disagreements were resolved through consensus discussion. Results: We identified three core application areas: (1) quantum sensing, such as cardiac magnetometry, where there is potential for SQUID magnetocardiography to be used for detecting cardiomyopathy; (2) quantum computing for cardiovascular risk prediction; and (3) next-generation quantum sensors for mobile cardiac imaging. Conclusions: Quantum technology in cardiovascular medicine shows modest promise in select applications, most notably magnetocardiography. To go from “hype to hope”, clinical trials will be required to identify application domains where the benefits of quantum technologies outweigh the challenges of implementation in clinical practice.
(This article belongs to the Section Cardiology)
23 pages, 933 KB  
Review
Leveraging Multimodal Foundation Models in Biliary Tract Cancer Research
by Yashbir Singh, Jesper B. Andersen, Quincy A. Hathaway, Diana V. Vera-Garcia, Varekan Keishing, Sudhakar K. Venkatesh, Sara Salehi, Davide Povero, Michael B. Wallace, Gregory J. Gores, Yujia Wei, Natally Horvat, Bradley J. Erickson and Emilio Quaia
Tomography 2025, 11(9), 96; https://doi.org/10.3390/tomography11090096 - 25 Aug 2025
Cited by 4 | Viewed by 1809
Abstract
This review explores how multimodal foundation models (MFMs) are transforming biliary tract cancer (BTC) research. BTCs are aggressive malignancies with poor prognosis, presenting unique challenges due to difficult diagnostic methods, molecular complexity, and rarity. Importantly, intrahepatic cholangiocarcinoma (iCCA), perihilar cholangiocarcinoma (pCCA), and distal bile duct cholangiocarcinoma (dCCA) represent fundamentally distinct clinical entities, with iCCA presenting as mass-forming lesions amenable to biopsy and targeted therapies, while pCCA manifests as infiltrative bile duct lesions with challenging diagnosis and primarily palliative management approaches. MFMs offer potential to advance research by integrating radiological images, histopathology, multi-omics profiles, and clinical data into unified computational frameworks, with applications tailored to these distinct BTC subtypes. Key applications include enhanced biomarker discovery that identifies previously unrecognizable cross-modal patterns, potential for improving currently limited diagnostic accuracy—though validation in BTC-specific cohorts remains essential—accelerated drug repurposing, and advanced patient stratification for personalized treatment. Despite promising results, challenges such as data scarcity, high computational demands, and clinical workflow integration remain to be addressed. Future research should focus on standardized data protocols, architectural innovations, and prospective validation studies. The integration of artificial intelligence (AI)-based methodologies offers new solutions for these historically challenging malignancies. However, current evidence for BTC-specific applications remains largely theoretical, with most studies limited to proof-of-concept designs or related cancer types. Comprehensive clinical validation studies and prospective trials demonstrating patient benefit are essential prerequisites for clinical implementation. The timeline for evidence-based clinical adoption likely extends 7–10 years, contingent on successful completion of validation studies addressing current evidence gaps.
(This article belongs to the Section Cancer Imaging)
37 pages, 2286 KB  
Article
Parameterised Quantum SVM with Data-Driven Entanglement for Zero-Day Exploit Detection
by Steven Jabulani Nhlapo, Elodie Ngoie Mutombo and Mike Nkongolo Wa Nkongolo
Computers 2025, 14(8), 331; https://doi.org/10.3390/computers14080331 - 15 Aug 2025
Viewed by 2377
Abstract
Zero-day attacks pose a persistent threat to computing infrastructure by exploiting previously unknown software vulnerabilities that evade traditional signature-based network intrusion detection systems (NIDSs). To address this limitation, machine learning (ML) techniques offer a promising approach for enhancing anomaly detection in network traffic. This study evaluates several ML models on a labeled network traffic dataset, with a focus on zero-day attack detection. Ensemble learning methods, particularly eXtreme gradient boosting (XGBoost), achieved perfect classification, identifying all 6231 zero-day instances without false positives and maintaining efficient training and prediction times. While classical support vector machines (SVMs) performed modestly at 64% accuracy, their performance improved to 98% with the use of the borderline synthetic minority oversampling technique (SMOTE) and SMOTE + edited nearest neighbours (SMOTEENN). To explore quantum-enhanced alternatives, a quantum SVM (QSVM) is implemented using three-qubit and four-qubit quantum circuits simulated on the aer_simulator_statevector. The QSVM achieved high accuracy (99.89%) and strong F1-scores (98.95%), indicating that nonlinear quantum feature maps (QFMs) can increase sensitivity to zero-day exploit patterns. Unlike prior work that applies standard quantum kernels, this study introduces a parameterised quantum feature encoding scheme, where each classical feature is mapped using a nonlinear function tuned by a set of learnable parameters. Additionally, a sparse entanglement topology is derived from mutual information between features, ensuring a compact and data-adaptive quantum circuit that aligns with the resource constraints of noisy intermediate-scale quantum (NISQ) devices. Our contribution lies in formalising a quantum circuit design that enables scalable, expressive, and generalisable quantum architectures tailored for zero-day attack detection. This extends beyond conventional usage of QSVMs by offering a principled approach to quantum circuit construction for cybersecurity. While these findings are obtained via noiseless simulation, they provide a theoretical proof of concept for the viability of quantum ML (QML) in network security. Future work should target real quantum hardware execution and adaptive sampling techniques to assess robustness under decoherence, gate errors, and dynamic threat environments.
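A hedged Qiskit sketch of the two circuit ingredients the abstract names: a per-feature nonlinear encoding with learnable parameters and a sparse entanglement layout chosen from feature mutual information. The arctan map, gate sequence, and pair list are assumptions, not the paper's exact circuit; the resulting kernel entry would be evaluated as |⟨φ(x)|φ(x′)⟩|².

```python
import numpy as np
from qiskit import QuantumCircuit

# Sketch only: a learnable nonlinear per-feature encoding plus a sparse,
# data-driven entanglement layout. The arctan map and the gate choices
# are assumptions; the paper's exact circuit is not reproduced here.
def feature_map(x, w, b, entangle_pairs):
    """x: classical features; w, b: learnable per-feature parameters;
    entangle_pairs: qubit pairs with high feature mutual information."""
    qc = QuantumCircuit(len(x))
    for i, xi in enumerate(x):
        qc.h(i)
        qc.ry(2.0 * np.arctan(w[i] * xi + b[i]), i)   # nonlinear, tunable
    for a, b_ in entangle_pairs:                       # sparse, data-adaptive
        qc.cx(a, b_)
        qc.rz(np.pi / 4, b_)
        qc.cx(a, b_)
    return qc

x = np.array([0.3, -1.2, 0.7])                         # three-qubit example
qc = feature_map(x, w=np.ones(3), b=np.zeros(3), entangle_pairs=[(0, 2)])
print(qc.draw())
```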
21 pages, 739 KB  
Article
Bit-Parallel Implementations of Neural Network Activation Functions in Onboard Computing Systems
by Mikhail Khachumov
Electronics 2025, 14(12), 2348; https://doi.org/10.3390/electronics14122348 - 8 Jun 2025
Viewed by 804
Abstract
This study generalizes and further develops methods for efficiently implementing artificial neural networks (ANNs) in the onboard computers of mobile robotic systems with limited resources, including unmanned aerial vehicles (UAVs). The neural networks are sped up by constructing a new unbounded activation function called “s-parabola”, which meets the requirement of twice-differentiability and has lower computational complexity than sigmoid-based functions. An additional contribution to this acceleration comes from activation functions based on bit-parallel computational circuits. A comprehensive review of modern publications in this subject area is provided. For autonomous problem-solving using ANNs directly on board an unmanned aerial vehicle, a trade-off between the speed and accuracy of the resulting solutions must be achieved. For this reason, we propose using fast bit-parallel circuits with limited digit capacity. Activation functions are represented and computed by transforming Jack Volder’s CORDIC iterative algorithms for trigonometric functions and Georgy Pukhov’s bit-analog calculations. Two statements are formulated, the proofs of which are based on the equivalence of the results obtained using the two approaches. We also provide theoretical and experimental estimates of the computational complexity of the algorithms achieved with different operand summation schemes.
(This article belongs to the Section Computer Science & Engineering)
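For context, rotation-mode CORDIC, the Volder algorithm the paper transforms, computes sine and cosine with only shifts, adds, and a small angle table; the float-based Python sketch below shows the iteration and the gain correction. It is the textbook algorithm, not the author's bit-parallel circuit.

```python
import math

# Generic rotation-mode CORDIC for sin/cos (Volder). In hardware the
# 2**-i factors are bit shifts; here floats keep the sketch short. The
# iteration count n sets the precision/speed trade-off the paper tunes.
ANGLES = [math.atan(2.0 ** -i) for i in range(16)]
GAIN = math.prod(math.cos(a) for a in ANGLES)    # cumulative scale correction

def cordic_sin_cos(theta, n=16):
    x, y, z = 1.0, 0.0, theta        # start on the x-axis, residual angle z
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0  # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return GAIN * y, GAIN * x        # (sin, cos) after gain correction

s, c = cordic_sin_cos(math.pi / 5)
print(s, math.sin(math.pi / 5))      # agree to ~4-5 decimals at n=16
```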
23 pages, 2008 KB  
Article
Graph-Theoretic Detection of Anomalies in Supply Chains: A PoR-Based Approach Using Laplacian Flow and Sheaf Theory
by Hsiao-Chun Han and Der-Chen Huang
Mathematics 2025, 13(11), 1795; https://doi.org/10.3390/math13111795 - 28 May 2025
Cited by 2 | Viewed by 2390
Abstract
Based on Graph Balancing Theory, this study proposes an anomaly detection algorithm, the Supply Chain Proof of Relation (PoR), applied to enterprise procurement networks formalized as weighted directed graphs. A mathematical framework is constructed by integrating Laplacian flow conservation and the Sheaf topological coherence principle to identify anomalous nodes whose local characteristics deviate significantly from the global features of the supply network. PoR was empirically implemented on a dataset comprising 856 Taiwanese enterprises, successfully detecting 56 entities exhibiting abnormal behavior. Anomaly intensity was visualized through trend plots, revealing nodes with rapidly increasing deviations. To validate the effectiveness of this detection, the study further analyzed the correlation between internal and external performance metrics. The results demonstrate that anomalous nodes exhibit near-zero correlations, in contrast to the significant correlations observed in normal nodes—indicating a disruption of information consistency. This research establishes a graph-theoretic framework for anomaly detection, presents a mathematical model independent of training data, and highlights the linkage between structural deviations and informational distortions. By incorporating Sheaf Theory, the study enhances the analytical depth of topological consistency. Moreover, this work demonstrates the observability of flow conservation violations within a highly complex, non-physical system such as the supply chain. It completes a logical integration of Sheaf Coherence, Graph Balancing, and High-Dimensional Anomaly Projection, and achieves a cross-mapping between Graph Structural Deviations and Statistical Inconsistencies in weighted directed graphs. This contribution advances the field of graph topology-based statistical anomaly detection, opening new avenues for the methodological integration between physical systems and economic networks.
(This article belongs to the Special Issue Graph Theory: Advanced Algorithms and Applications, 2nd Edition)
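The flow-conservation intuition behind PoR admits a compact sketch: in a weighted directed procurement graph, a node's normalized divergence (the gap between inflow and outflow) should stay small, and persistent one-sidedness flags it for inspection. The NumPy toy below scores nodes this way; it omits the paper's sheaf-consistency layer and trend analysis, and the ranking heuristic is an assumption.

```python
import numpy as np

# Flow-conservation toy behind the PoR idea: score each node of a weighted
# directed procurement graph by its normalized divergence. The paper's
# sheaf-consistency layer and trend analysis are not reproduced here.
def divergence_scores(W):
    """W[i, j] = procurement flow from firm i to firm j (n x n, >= 0)."""
    inflow = W.sum(axis=0)            # column sums: what each node receives
    outflow = W.sum(axis=1)           # row sums: what each node ships out
    total = inflow + outflow
    with np.errstate(divide="ignore", invalid="ignore"):
        score = np.where(total > 0, np.abs(inflow - outflow) / total, 0.0)
    return score                      # 0 = balanced, 1 = entirely one-sided

rng = np.random.default_rng(1)
W = rng.uniform(0, 10, (6, 6))
np.fill_diagonal(W, 0.0)
W[:, 3] *= 5.0                        # node 3 suddenly hoards inflow
print(np.argsort(divergence_scores(W))[::-1])  # node 3 should rank first
```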
13 pages, 321 KB  
Article
An Alternative Framework for Dynamic Mode Decomposition with Control
by Gyurhan Nedzhibov
AppliedMath 2025, 5(2), 60; https://doi.org/10.3390/appliedmath5020060 - 23 May 2025
Viewed by 2010
Abstract
Dynamic mode decomposition with control (DMDc) is a widely used technique for analyzing dynamic systems influenced by external control inputs. It is a recent development and an extension of dynamic mode decomposition (DMD) tailored for input–output systems. In this work, we investigate and analyze an alternative approach for computing DMDc. Compared to the traditional formulation, the proposed method restructures the computation by decoupling the influence of the state and control components, allowing for a more modular and interpretable implementation. The algorithm avoids compound operator approximations typical of standard approaches, which makes it potentially more efficient in real-time applications or systems with streaming data. The new scheme aims to improve computational efficiency while maintaining the reliability and accuracy of the decomposition. We provide a theoretical proof that the dynamic modes produced by the proposed method are exact eigenvectors of the corresponding Koopman operator. Compared to the standard DMDc approach, the new algorithm is shown to be more efficient, requiring fewer calculations and less memory. Numerical examples are presented to demonstrate the theoretical results and illustrate potential applications of the modified approach. The results highlight the promise of this alternative formulation for advancing data-driven modeling and control in various engineering and scientific domains.
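For reference, the standard DMDc computation (Proctor, Brunton, and Kutz) that the proposed method restructures can be written in a few lines of NumPy for small, full-rank systems: stack states and inputs, solve a least-squares problem via the pseudoinverse, and split the result into A and B. The SVD truncation used for large systems, and the paper's alternative decoupled scheme, are omitted.

```python
import numpy as np

# Standard DMDc baseline: with snapshots X, X' and inputs U obeying
# x_{k+1} ~ A x_k + B u_k, recover [A B] as a least-squares fit through
# the pseudoinverse of the stacked data matrix. Small, full-rank version;
# large systems would truncate with an SVD first.
def dmdc(X, Xp, U):
    """X, Xp: (n, m) state snapshot pairs; U: (q, m) control inputs."""
    Omega = np.vstack([X, U])           # stacked state-and-input data
    G = Xp @ np.linalg.pinv(Omega)      # [A B] in the least-squares sense
    n = X.shape[0]
    return G[:, :n], G[:, n:]           # A is (n, n), B is (n, q)

rng = np.random.default_rng(2)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])
X = np.zeros((2, 50))
U = rng.normal(size=(1, 50))
for k in range(49):                     # simulate a toy controlled system
    X[:, k + 1] = A_true @ X[:, k] + (B_true @ U[:, k:k + 1]).ravel()
A_hat, B_hat = dmdc(X[:, :-1], X[:, 1:], U[:, :-1])
print(np.round(A_hat, 3), np.round(B_hat, 3))   # recovers A_true, B_true
```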