Entropy, Volume 27, Issue 11 (November 2025) – 94 articles

Cover Story: Active matter defines a class of biologically inspired systems whose single units can transform stored or ambient energy into self-propulsion. Often, due to activity, elongated units arrange into extended polar patterns disrupted by topological defects, such as asters or vortices, which promote coherent motion around their cores and influence irreversibility. Although ubiquitous, the role played by these defects is not yet fully understood. Here, we study how defects emerge and affect morphology, compressibility, and irreversibility in active dumbbell systems. We find that, when separated through motility-induced phase separation (MIPS), soft dumbbells slide over each other, producing blurred hexatic domains, strong compression effects, and extended polarization patterns, in turn mirrored by distinctive entropy production profiles.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 793 KB  
Article
Quantum Digital Signature Using Entangled States for Network
by Changho Hong, Youn-Chang Jeong, Osung Kwon and Se-Wan Ji
Entropy 2025, 27(11), 1179; https://doi.org/10.3390/e27111179 - 20 Nov 2025
Viewed by 253
Abstract
We propose an entanglement-based quantum digital signature (QDS) protocol optimized for quantum networks. The protocol follows the Lamport-inspired QDS paradigm but eliminates QKD post-processing by signing and verifying with raw conclusive keys, thereby reducing latency and implementation complexity. We provide a finite-size security analysis of robustness, unforgeability, and non-repudiation. Under standard fiber-loss and detector models, simulations show a consistent signature rate advantage over a representative Lamport-inspired QDS baseline across metro-to-regional distances. The proposed protocol is practical for near-term deployment while preserving end-to-end, finite-key security guarantees.
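The Lamport-inspired paradigm the protocol builds on can be illustrated by its classical ancestor. The following is a minimal hash-based one-time signature sketch; it is a toy illustration only, and the quantum protocol replaces the random secret keys with raw conclusive keys derived from entanglement, which this classical sketch does not capture:

```python
import hashlib
import secrets

def keygen(bits=8):
    # One-time key: a pair of random secrets per message bit;
    # the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def sign(sk, msg_bits):
    # Reveal the secret matching each message bit.
    return [sk[i][b] for i, b in enumerate(msg_bits)]

def verify(pk, msg_bits, sig):
    # Each revealed secret must hash to the committed value for that bit.
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (b, s) in enumerate(zip(msg_bits, sig)))

sk, pk = keygen()
m = [1, 0, 1, 1, 0, 0, 1, 0]
sig = sign(sk, m)
assert verify(pk, m, sig)
assert not verify(pk, [1 - m[0]] + m[1:], sig)  # a flipped bit fails
```

As in the paper's setting, security rests on the verifier being unable to invert the committed values; the key is used once and then discarded.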
(This article belongs to the Special Issue New Advances in Quantum Communications and Quantum Computing)
27 pages, 3034 KB  
Article
An Intelligent Bearing Fault Transfer Diagnosis Method Based on Improved Domain Adaption
by Jinli Che, Liqing Fang, Qiao Ma, Guibo Yu, Xiaoting Sun and Xiujie Zhu
Entropy 2025, 27(11), 1178; https://doi.org/10.3390/e27111178 - 20 Nov 2025
Viewed by 299
Abstract
Aiming to tackle the challenge of feature transfer in cross-domain fault diagnosis for rolling bearings, an enhanced domain adaptation-based intelligent fault diagnosis method is proposed. This method systematically combines multi-layer multi-kernel MMD with adversarial domain classification. Specifically, we extend alignment to multiple network layers, whereas previous work typically applied MMD to fewer layers or used single-kernel variants. Initially, a one-dimensional convolutional neural network (1D-CNN) is utilized to extract features from both the source and target domains, thereby enhancing the diagnostic model's cross-domain adaptability through shared feature learning. Subsequently, to address the distribution differences in feature extraction, the multi-layer multi-kernel maximum mean discrepancy (ML-MK MMD) method is employed to quantify the distribution disparity between the source and target domain features, with the objective of extracting domain-invariant features. Moreover, to further mitigate domain shift, a novel loss function is developed by integrating ML-MK MMD with a domain classifier loss, which optimizes the alignment of feature distributions between the two domains. Ultimately, testing on target domain samples demonstrates that the proposed method effectively extracts domain-invariant features, significantly reduces the distribution gap between the source and target domains, and thereby enhances cross-domain diagnostic performance.
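The multi-kernel MMD at the core of such methods can be sketched numerically. Below is a minimal single-layer version with an assumed fixed bank of Gaussian kernel bandwidths; the paper's ML-MK MMD additionally sums this statistic over the features of several network layers, which is not modeled here:

```python
import numpy as np

def mk_mmd2(X, Y, sigmas=(0.5, 1.0, 2.0)):
    """Biased multi-kernel MMD^2 between samples X and Y,
    using a sum of Gaussian kernels with bandwidths `sigmas`."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return sum(np.exp(-d2 / (2 * s ** 2)) for s in sigmas)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (200, 2))        # "source-domain" features
tgt_same = rng.normal(0.0, 1.0, (200, 2))   # same distribution
tgt_shift = rng.normal(2.0, 1.0, (200, 2))  # shifted distribution
assert mk_mmd2(src, tgt_shift) > mk_mmd2(src, tgt_same)
```

Minimizing such a statistic over network parameters, alongside an adversarial domain-classifier loss, is what drives the domain-invariant feature learning described above.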
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis: From Theory to Applications)
33 pages, 9505 KB  
Article
The Evolution of the Linkage Among Geopolitical Risk, the US Dollar Index, Crude Oil Prices, and Gold Prices at Multiple Scales: A Wavelet Transform-Based Dynamic Transfer Entropy Network Method
by Hanru Yang, Sufang An, Zhiliang Dong and Xiaojuan Dong
Entropy 2025, 27(11), 1177; https://doi.org/10.3390/e27111177 - 20 Nov 2025
Viewed by 2704
Abstract
In recent years, the correlation mechanisms between geopolitical risks and financial markets have drawn considerable attention from both academic circles and investment communities. However, their multiscale, nonlinear interactive characteristics still require further investigation. To address this, this paper proposes a dynamic nonlinear causal information network that combines a wavelet transform model with the transfer entropy method. We select the geopolitical risk index, the US dollar index, Brent and WTI crude oil prices, COMEX gold futures, and London gold price time series as the research objects. The results suggest that the network's structure changes over time at different time scales. On the one hand, COMEX gold (London gold) acts as the major causal information transmitter (receiver) at all scales; both of their highest values appear at the mid-scale. The US dollar index plays a bridging role in information transmission, and this mediating ability decreases with increasing time scale. On the other hand, causal information transmission is fastest at the short scale and slowest at the mid-scale. The complexity and systematic risk of the causal network decrease with increasing time scale. Importantly, at the short scale (D1), the information transmission speed slowed during the Russian–Ukrainian conflict and decreased further after the start of the Israel–Hamas conflict. Systematic risk has increased annually since 2018. This study provides a multiscale perspective on the nonlinear causal relationship between geopolitical risk and financial markets and serves as a reference for policy-makers and investors.
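Transfer entropy on binned series is the building block of such causal networks. A plug-in histogram sketch follows; the bin count and single-step lag are illustrative choices, not the paper's wavelet-scale pipeline:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in transfer entropy from x to y at lag 1 (in bits):
    the conditional mutual information I(y_t ; x_{t-1} | y_{t-1})
    on quantile-binned series."""
    xb = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yb = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = list(zip(yb[1:], yb[:-1], xb[:-1]))   # (y_t, y_{t-1}, x_{t-1})
    n = len(triples)
    p3 = Counter(triples)
    p_yy = Counter((a, b) for a, b, _ in triples)   # (y_t, y_{t-1})
    p_yx = Counter((b, c) for _, b, c in triples)   # (y_{t-1}, x_{t-1})
    p_y = Counter(b for _, b, _ in triples)         # y_{t-1}
    te = 0.0
    for (a, b, c), m in p3.items():
        te += (m / n) * np.log2((m / n) * (p_y[b] / n)
                                / ((p_yy[(a, b)] / n) * (p_yx[(b, c)] / n)))
    return te

rng = np.random.default_rng(1)
x = rng.normal(size=5001)
y = np.roll(x, 1) + 0.1 * rng.normal(size=5001)    # y is driven by lagged x
te_xy = transfer_entropy(x[1:], y[1:], bins=3)
te_yx = transfer_entropy(y[1:], x[1:], bins=3)
assert te_xy > te_yx                               # directionality recovered
```

Applying such pairwise estimates across all series, separately on each wavelet detail scale and within rolling windows, yields the time-varying directed networks the paper analyzes.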
(This article belongs to the Section Multidisciplinary Applications)
40 pages, 12246 KB  
Article
Nonlinear Stochastic Dynamics of the Intermediate Dispersive Velocity Equation with Soliton Stability and Chaos
by Samad Wali, Maham Munawar, Atef Abdelkader, Adil Jhangeer and Mudassar Imran
Entropy 2025, 27(11), 1176; https://doi.org/10.3390/e27111176 - 20 Nov 2025
Viewed by 277
Abstract
This paper examines the nonlinear behavior of the generalized stochastic intermediate dispersive velocity (SIdV) equation, which has been widely analyzed in a noise-free deterministic framework but has yet to be studied in any depth in the presence of varying forcing strengths and noise types, in particular how it switches between periodic, quasi-periodic, and chaotic regimes. A stochastic wave transformation reduces the equation to simpler ordinary differential equations, making it feasible to analyze soliton robustness under deterministic and stochastic conditions. Lyapunov exponents, power spectra, recurrence quantification, correlation dimension, entropy measures, return maps, and basin stability are then used to measure the effect of white, Brownian, and colored noise on attractor formation, system stability, and spectral correlations. Order–chaos transitions as well as noise-induced complexity are further characterized by bifurcation diagrams and Lyapunov spectra. These results improve the theoretical understanding of stochastic nonlinear waves and offer insights useful in control engineering, energy harvesting, optical communications, and signal processing applications.
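The largest Lyapunov exponent, used here to separate periodic from chaotic regimes, can be illustrated on a standard toy system. The sketch below uses the logistic map rather than the SIdV equation itself, so it only demonstrates the diagnostic, not the paper's model:

```python
import numpy as np

def lyapunov_logistic(r, n=10000, burn=100):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the orbit average of log|f'(x)| = log|r (1 - 2x)|."""
    x = 0.3
    for _ in range(burn):           # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        s += np.log(abs(r * (1 - 2 * x)))
    return s / n

assert lyapunov_logistic(3.2) < 0   # periodic regime: negative exponent
assert lyapunov_logistic(4.0) > 0.6 # chaotic regime (exact value is ln 2)
```

A positive exponent signals exponential divergence of nearby orbits; for the stochastic SIdV system the same sign test is applied to trajectories of the reduced ODEs under each noise type.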
(This article belongs to the Special Issue Nonlinear Dynamics of Complex Systems)
24 pages, 2224 KB  
Article
Positivity-Preserving Hybridizable Discontinuous Galerkin Scheme for Solving PNP Model
by Diana Morales and Zhiliang Xu
Entropy 2025, 27(11), 1175; https://doi.org/10.3390/e27111175 - 20 Nov 2025
Viewed by 190
Abstract
We introduce a hybridizable discontinuous Galerkin (HDG) scheme for solving the Poisson–Nernst–Planck (PNP) equations. The log-density formulation introduced by Metti et al. ("Energetically stable discretizations for charge transport and electrokinetic models", J. Comput. Phys. 2016, 306, 1–18) is utilized to ensure the positivity of the densities of the charged particles. We further prove that our fully discrete scheme is energy stable and mass conserving. Numerical simulations are provided to demonstrate the accuracy of the scheme in one and two spatial dimensions. A derivation of an HDG-DG space–time scheme is given, with implementation and convergence analysis left to future work.
(This article belongs to the Special Issue Modeling, Analysis, and Computation of Complex Fluids)
33 pages, 2187 KB  
Article
Glymphatic Clearance in the Optic Nerve: A Multidomain Electro-Osmotic Model
by Shanfeng Xiao, Huaxiong Huang, Robert Eisenberg, Zilong Song and Shixin Xu
Entropy 2025, 27(11), 1174; https://doi.org/10.3390/e27111174 - 20 Nov 2025
Viewed by 359
Abstract
Effective metabolic waste clearance and maintenance of ionic homeostasis are essential for the health and normal function of the central nervous system (CNS). To understand its mechanism and the role of fluid flow, we develop a multidomain electro-osmotic model of optic-nerve microcirculation (as a part of the CNS) that couples hydrostatic and osmotic fluid transport with electro-diffusive solute movement across axons, glia, the extracellular space (ECS), and arterial/venous/capillary perivascular spaces (PVS). Cerebrospinal fluid enters the optic nerve via the arterial perivascular space (PVS-A) and passes through both the glial compartment and the ECS before exiting through the venous perivascular space (PVS-V). Exchanges across astrocytic endfeet are essential, and they occur via two distinct and coupled paths: through AQP4 on glial membranes and through gaps between glial endfeet, thus establishing a mechanistic substrate for two modes of glymphatic transport, at rest and during stimulus-evoked perturbations. Parameter sweeps show that lowering AQP4-mediated fluid permeability or PVS permeability elevates pressure, suppresses radial exchange (due mainly to the hydrostatic pressure difference between the lateral surface and the center of the optic nerve), and slows clearance, effects most pronounced for solutes reliant on PVS-V export. The model reproduces baseline and stimulus-evoked flow and demonstrates that PVS-mediated export is the primary clearance route for both small and moderate solutes. Small molecules (e.g., Aβ) clear faster because rapid ECS diffusion broadens their distribution and enhances ECS–PVS exchange, whereas moderate species (e.g., tau monomers/oligomers) have low ECS diffusivity, depend on trans-endfoot transfer, and clear more slowly via PVS-V convection. Our framework can also be used to explain the sleep–wake effect mechanistically: enlarging ECS volume (as occurs in sleep) or permeability increases trans-interface flux and accelerates waste removal. Together, these results provide a unified physical picture of glymphatic transport in the optic nerve, yield testable predictions for how AQP4 function, PVS patency, and sleep modulate size-dependent clearance, and offer guidance for targeting impaired waste removal in neurological disease.
(This article belongs to the Special Issue Modeling, Analysis, and Computation of Complex Fluids)
13 pages, 6602 KB  
Article
Deep Learning of the Biswas–Chatterjee–Sen Model
by José F. S. Neto, David S. M. Alencar, Lenilson T. Brito, Gladstone A. Alves, Francisco Welington S. Lima, Antônio M. Filho, Ronan S. Ferreira and Tayroni F. A. Alves
Entropy 2025, 27(11), 1173; https://doi.org/10.3390/e27111173 - 20 Nov 2025
Viewed by 238
Abstract
We investigate the critical properties of kinetic continuous opinion dynamics using deep learning techniques. The system consists of N continuous spin variables in the interval [−1,1]. Dense neural networks are trained on spin configuration data generated via kinetic Monte Carlo simulations, accurately identifying the critical point on both square and triangular lattices. Classical unsupervised learning with principal component analysis reproduces the magnetization and allows estimation of critical exponents. Additionally, variational autoencoders are implemented to study the phase transition through the loss function, which behaves as an order parameter. A correlation function between real and reconstructed data is defined and found to be universal at the critical point.
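The observation that unsupervised PCA reproduces the magnetization can be checked on synthetic data. The sketch below uses toy spin configurations with an assumed bias and noise level, not the paper's kinetic Monte Carlo data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "configurations": N continuous spins in [-1, 1] with a per-sample
# mean bias, mimicking ordered (|m| large) and disordered (m ~ 0) states.
N, samples = 64, 500
m_true = rng.uniform(-1, 1, samples)
configs = np.clip(m_true[:, None] + 0.3 * rng.normal(size=(samples, N)), -1, 1)

X = configs - configs.mean(axis=0)            # center the data
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]                               # projection on first principal axis
mag = configs.mean(axis=1)                    # magnetization per configuration
corr = abs(np.corrcoef(pc1, mag)[0, 1])
assert corr > 0.9                             # PC1 tracks the order parameter
```

The dominant variance direction is the uniform (all-spins) mode, so the leading principal component is, up to sign and scale, the magnetization; this is the mechanism behind the PCA result quoted in the abstract.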
(This article belongs to the Special Issue Entropy-Based Applications in Sociophysics, Third Edition)
11 pages, 263 KB  
Article
The Knudsen Layer in the Heat Transport Beyond the Fourier Law: Application to the Wave Propagation at Nanoscale
by Isabella Carlomagno and Antonio Sellitto
Entropy 2025, 27(11), 1172; https://doi.org/10.3390/e27111172 - 20 Nov 2025
Viewed by 209
Abstract
In agreement with the second law of thermodynamics, a new theoretical model for the description of heat transfer at the nanoscale in a rigid body is derived. The model introduces the concept of the Knudsen layer into non-equilibrium thermodynamics in order to better investigate how phonon–boundary scattering may influence heat propagation at the nanoscale. This paper, in particular, investigates the influence of the Knudsen layer on the propagation speed of thermal waves.
(This article belongs to the Section Thermodynamics)
13 pages, 547 KB  
Article
A Quantum Proxy Signature Scheme Without Restrictions on the Identity and Number of Verifiers
by Siyu Xiong
Entropy 2025, 27(11), 1171; https://doi.org/10.3390/e27111171 - 19 Nov 2025
Viewed by 203
Abstract
Quantum digital signatures (QDS) establish a framework for information-theoretically secure authentication in quantum networks. As a specialized extension of QDS, quantum proxy signatures facilitate secure delegation of signing privileges in distributed quantum environments. However, existing schemes require the predefinition of verifier identities at the system setup phase, which fundamentally constrains their deployment in real-world scenarios. To address this constraint, we propose a quantum proxy signature scheme supporting verification by arbitrary parties without pre-registration while maintaining information-theoretic security guarantees. This work presents a constructive approach to mitigating verification constraints in quantum proxy signature architectures.
(This article belongs to the Section Quantum Information)
27 pages, 1621 KB  
Article
Dynamic Behavior Analysis of Complex-Configuration Organic Rankine Cycle Systems Using a Multi-Time-Scale Dynamic Modeling Framework
by Jinao Shen and Youyi Li
Entropy 2025, 27(11), 1170; https://doi.org/10.3390/e27111170 - 19 Nov 2025
Viewed by 251
Abstract
Organic Rankine Cycle (ORC) systems with complex configurations exhibit strong thermo-mechanical–electrical–magnetic coupling, making dynamic analysis computationally demanding. This study proposes a multi-time-scale modeling framework that partitions the system into second-, decisecond-, and hybrid-scale subsystems for separate computation, reducing simulation time while maintaining accuracy. Dynamic models are developed for heat exchangers, expanders, pumps, generators, and converters. The method is validated on a basic ORC system using operational data, achieving a mean absolute error of 2.12%, well within the ±5% tolerance. It is then applied to a series dual-loop ORC and a multi-heat-source ORC with series heat exchangers. Results indicate that the dual-loop configuration enhances disturbance rejection to both sink and heat-source fluctuations, while dual-heat-source system dynamics are predominantly governed by the second heat source. The framework enables efficient, accurate simulation of complex ORC architectures and provides a robust basis for advanced control strategy development.
(This article belongs to the Section Thermodynamics)
28 pages, 11361 KB  
Article
Unveiling Self-Organization and Emergent Phenomena in Urban Transportation Systems via Multilayer Network Analysis
by Hongqing Bao, Xia Luo, Xuan Li and Yiyang Zhao
Entropy 2025, 27(11), 1169; https://doi.org/10.3390/e27111169 - 19 Nov 2025
Viewed by 300
Abstract
In the absence of system-wide planning and coordination, emerging mobility services have been integrated into urban transportation systems as independent network layers. Meanwhile, their interactions with traditional public transit give rise to complex self-organizing patterns in population mobility, manifested as coopetitive dynamics. To systematically analyze this phenomenon, this study constructs a four-layer temporal network, consisting of ride-hailing, metro, combined, and potential layers, based on a vectorized multilayer network model and inter-layer mapping relationships. An analytical framework is then developed using node strength, cosine similarity, and rich-club coefficients, along with two newly proposed indicators: the intermodal index and the node importance coefficient. The results reveal, for the first time, a spontaneously emergent intermodal phenomenon between ride-hailing and metro networks, manifested through both cross-day modal substitution and intra-day intermodal chains. The analysis further demonstrates that when sufficiently large and homogeneous demand cohorts are present, this phenomenon can emerge even on non-working days. Based on its characteristics, a method is developed to identify intermodal nodes across different transport networks. Furthermore, the study uncovers a time-varying multicentric hierarchical structure within the metro network, characterized by small-scale core rich nodes and larger-scale secondary rich-node clusters. Overall, this study provides novel insights into the formation, coordination, and optimization of intermodal urban transport networks.
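The rich-club coefficient used in the framework measures how densely high-degree nodes interconnect. A minimal sketch on a toy hub-and-spoke network follows (un-normalized coefficient; the study's temporal, multilayer setting is not modeled):

```python
def rich_club(edges, k):
    """Fraction of possible links present among nodes of degree > k."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    rich = {n for n, d in deg.items() if d > k}
    if len(rich) < 2:
        return 0.0
    e = sum(1 for u, v in edges if u in rich and v in rich)
    return 2 * e / (len(rich) * (len(rich) - 1))

# Toy network: three interconnected hubs (0, 1, 2), each with three leaves.
edges = [(0, 1), (1, 2), (0, 2)]
edges += [(h, 100 + 10 * h + i) for h in (0, 1, 2) for i in (1, 2, 3)]
assert rich_club(edges, k=3) == 1.0   # the degree-5 hubs form a full clique
```

In practice the raw coefficient is normalized against degree-preserving random rewirings before claiming a rich-club effect; the hierarchical structure reported in the abstract comes from tracking this quantity over time and across layers.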
(This article belongs to the Section Complexity)
22 pages, 324 KB  
Article
Quantum Gravity Spacetime: Universe vs. Multiverse
by Massimo Tessarotto and Claudio Cremaschini
Entropy 2025, 27(11), 1168; https://doi.org/10.3390/e27111168 - 19 Nov 2025
Viewed by 489
Abstract
Starting from the realization that the theory of quantum gravity (QG) cannot be deterministic due to its intrinsic quantum nature, the requirement is posed that QG should fulfill a suitable Heisenberg Generalized Uncertainty Principle (GUP), to be expressed as a local relationship determined from first principles and cast in covariant 4-tensor form. We prove that such a principle also imposes a physical realizability condition, denoted as the "quantum covariance criterion", which provides a possible selection rule for physically admissible spacetimes. Such a requirement is not met by most current QG theories (e.g., string theory, Geometrodynamics, loop quantum gravity, GUP and minimum-length theories), which are based on the so-called multiverse representation of spacetime, in which the variational tensor field coincides with the spacetime metric tensor. However, an alternative is provided by theories characterized by a universe representation, namely one in which the variational tensor field differs from the unique "background" metric tensor. It is shown that the latter theories satisfy the said Heisenberg GUP and also fulfill the aforementioned physical realizability condition.
17 pages, 820 KB  
Article
Polar Coding and Early SIC Decoding for Uplink Heterogeneous NOMA
by Chu-Jung Wu, Chien-Ying Lin and Yu-Chih Huang
Entropy 2025, 27(11), 1167; https://doi.org/10.3390/e27111167 - 18 Nov 2025
Viewed by 326
Abstract
In modern communication systems, packets with different blocklengths often coexist, presenting new challenges for interference management and decoding. In scenarios where short-packet transmissions must meet strict latency and reliability requirements, conventional interference cancellation decoding strategies may be insufficient, especially when coexisting with long-packet services. This work proposes a novel interleaver design for polar codes that enables early decoding in successive interference cancellation (SIC) frameworks. To support this capability, a minimal yet essential modification to the interleaver used in the 5G New Radio (NR) polar coding scheme is introduced. This tailored interleaver facilitates the reliable recovery of short-packet signals before the complete decoding of coexisting long packets, substantially improving early decoding performance. Importantly, the proposed modification retains compatibility with the overall 5G NR polar code structure, ensuring practical implementability. Simulation results demonstrate that our approach yields significantly enhanced decoding accuracy in heterogeneous traffic scenarios representative of next-generation wireless systems.
(This article belongs to the Special Issue Next-Generation Channel Coding: Theory and Applications)
20 pages, 2804 KB  
Article
Towards a Global Scale Quantum Information Network: A Study Applied to Satellite-Enabled Distributed Quantum Computing
by Laurent de Forges de Parny, Luca Paccard, Mathieu Bertrand, Luca Lazzarini, Valentin Leloup, Raphael Aymeric, Agathe Blaise, Stéphanie Molin, Pierre Besancenot, Cyrille Laborde and Mathias van den Bossche
Entropy 2025, 27(11), 1166; https://doi.org/10.3390/e27111166 - 18 Nov 2025
Viewed by 509
Abstract
Recent developments have reported on the feasibility of interconnecting small quantum registers in a quantum information network a few meters in scale for distributed quantum computing purposes. Small quantum processors in a network represent a promising solution to the scalability problem of manipulating more than thousands of noise-free qubits. Here, we propose and assess a satellite-enabled distributed quantum computing system at the French national scale based on existing infrastructures in Paris and Nice. We consider a system composed of both a ground and a space segment, allowing for the distribution of end-to-end entanglement between Alice in Paris and Bob in Nice, each owning a few-qubit processor composed of trapped ions. In the context of quantum computing, this entanglement resource can be used for the teleportation of a qubit state or for gate teleportation. After developing a model, we numerically assess the entanglement distribution rate and fidelity generated by this space-based quantum information network and discuss concrete use cases and service performance levels in the framework of distributed quantum computing. We obtain 90 end-to-end entangled photon pairs distributed over a satellite pass of 331 s, which can perform a teleportation-based controlled-Z operation with a fidelity of at most 82%.
(This article belongs to the Section Quantum Information)
22 pages, 3086 KB  
Article
Nonclassicality and Coherent Error Detection via Pseudo-Entropy
by Assaf Katz, Shalom Bloch and Eliahu Cohen
Entropy 2025, 27(11), 1165; https://doi.org/10.3390/e27111165 - 17 Nov 2025
Viewed by 376
Abstract
Pseudo-entropy is a complex-valued generalization of entanglement entropy defined on non-Hermitian transition operators and induced by post-selection. We present a simulation-based protocol for detecting nonclassicality and coherent errors in quantum circuits using the pseudo-entropy measure Sˇ, focusing on its imaginary part Im Sˇ as a diagnostic tool. Our method enables resource-efficient classification of phase-coherent errors, such as those from miscalibrated CNOT gates, even under realistic noise conditions. By quantifying the transition between classical-like and quantum-like behavior through threshold analysis, we provide theoretical benchmarks for error classification that can inform hardware calibration strategies. Numerical simulations demonstrate that 55% of the parameter space remains classified as classical-like (below classification thresholds) at hardware-calibrated sensitivity levels, with statistical significance confirmed through rigorous sensitivity analysis. Robustness to noise and comparison with standard entropy-based methods are demonstrated in simulation. While hardware validation remains necessary, this work bridges theoretical concepts of nonclassicality with practical quantum error classification frameworks, providing a foundation for experimental quantum computing applications.
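The pseudo-entropy Sˇ can be computed directly from its definition on a transition matrix. A small two-qubit sketch follows; the states are illustrative, not the circuits studied in the paper:

```python
import numpy as np

def pseudo_entropy(psi, phi, dA=2, dB=2):
    """Complex-valued pseudo-entropy of subsystem A for the (generally
    non-Hermitian) transition matrix tau = |psi><phi| / <phi|psi>."""
    tau = np.outer(psi, phi.conj()) / (phi.conj() @ psi)
    # Partial trace over B: reshape to (a, b, a', b') and contract b = b'.
    tau_A = np.trace(tau.reshape(dA, dB, dA, dB), axis1=1, axis2=3)
    lam = np.linalg.eigvals(tau_A).astype(complex)
    lam = lam[np.abs(lam) > 1e-12]
    return -np.sum(lam * np.log(lam))   # principal branch of the complex log

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
# With phi = psi this reduces to the ordinary entanglement entropy, ln 2.
assert np.isclose(pseudo_entropy(bell, bell), np.log(2))

# For a generic pre-/post-selected pair the result is genuinely complex;
# its imaginary part is the diagnostic quantity discussed in the abstract.
rng = np.random.default_rng(3)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
S = pseudo_entropy(psi, phi)
```

Note that the normalization by the overlap makes tau trace-one regardless of state norms, so the eigenvalues of the reduced transition matrix always sum to one.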
33 pages, 2581 KB  
Article
Information-Theoretic ESG Index Direction Forecasting: A Complexity-Aware Framework
by Kadriye Nurdanay Öztürk and Öyküm Esra Yiğit
Entropy 2025, 27(11), 1164; https://doi.org/10.3390/e27111164 - 17 Nov 2025
Viewed by 850
Abstract
Sustainable finance exhibits non-linear dynamics, regime shifts, and distributional drift that challenge conventional forecasting, particularly in volatile emerging markets. Conventional models, which often overlook this structural complexity, can struggle to produce stable or reliable probabilistic forecasts. To address this challenge, this study introduces [...] Read more.
Sustainable finance exhibits non-linear dynamics, regime shifts, and distributional drift that challenge conventional forecasting, particularly in volatile emerging markets. Conventional models, which often overlook this structural complexity, can struggle to produce stable or reliable probabilistic forecasts. To address this challenge, this study introduces a complexity-aware forecasting framework that operationalizes information-theoretic meta features, Shannon entropy (SE), permutation entropy (PE) and Kullback–Leibler (KL) divergence to make Environmental, Social, and Governance (ESG) index forecasting more stable, probabilistically accurate, and operationally reliable. Applied in an emerging-market setting using Türkiye’s ESG index as a natural stress test, the framework was benchmarked against a macro-technical baseline with a calibrated XGBoost classifier under a strictly chronological, leakage-controlled nested cross-validation protocol and evaluated on a strictly held-out test set. In development, the framework achieved statistically significant improvements in both stability and calibration, reducing fold-level dispersion (by 40.4–66.6%) across all metrics and enhancing probability-level alignment with Brier score reduced by 0.0140 and the ECE by 0.0287. Furthermore, a meta-analytic McNemar’s test confirmed a significant reduction in misclassifications across the development folds. On the strictly held-out test set, the framework’s superiority was confirmed by a statistically significant reduction in classification errors (exact McNemar p < 0.001), alongside strong gains in imbalance-robust metrics such as BAcc (0.618, +12.8%) and the MCC (0.288, +38.5%), achieving an F1-score of 0.719. 
Overall, the findings indicate that explicitly representing the market’s informational state and transitions yields more stable, well-calibrated, and operationally reliable forecasts in regime-shifting financial environments, supporting enhanced robustness and practical deployability. Full article
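As an illustration of the information-theoretic meta-features named in the abstract, the sketch below shows common histogram- and ordinal-pattern-based estimators of Shannon entropy, permutation entropy, and KL divergence. The function names, bin counts, and smoothing constant are illustrative choices, not the paper's implementation.

```python
import numpy as np
from collections import Counter
from math import factorial, log

def shannon_entropy(x, bins=10):
    """Shannon entropy (nats) of a histogram estimate of x's distribution."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def permutation_entropy(x, order=3):
    """Normalized permutation entropy from ordinal patterns of length `order`."""
    patterns = Counter(
        tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)
    )
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / log(factorial(order)))

def kl_divergence(x, y, bins=10, eps=1e-9):
    """KL divergence between histogram estimates of two windows (a drift signal)."""
    lo = min(x.min(), y.min())
    hi = max(x.max(), y.max())
    p, edges = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=edges)
    p = (p + eps) / (p + eps).sum()  # smoothing avoids log(0) on empty bins
    q = (q + eps) / (q + eps).sum()
    return float((p * np.log(p / q)).sum())
```

Computed over rolling windows of returns, such features summarize the market's informational state and distributional drift for a downstream classifier.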

15 pages, 949 KB  
Article
Utility–Leakage Trade-Off for Federated Representation Learning
by Yuchen Liu, Onur Günlü, Yuanming Shi and Youlong Wu
Entropy 2025, 27(11), 1163; https://doi.org/10.3390/e27111163 - 15 Nov 2025
Viewed by 296
Abstract
Federated representation learning (FRL) is a promising technique for learning shared data representations that capture general features across decentralized clients without sharing raw data. However, there is a risk of sensitive information leaking from the learned representations. The conventional differential privacy (DP) mechanism protects the entire dataset through randomization (adding noise or randomized response) at the cost of degraded learning performance. Motivated by the fact that some data may be public or non-private and only sensitive information (e.g., race) needs protection, we investigate information-theoretic protection of specific sensitive information for FRL. To characterize the trade-off between utility and sensitive information leakage, we adopt mutual-information-based metrics to measure both utility and sensitive information leakage, and propose a method that maximizes utility while restricting sensitive information leakage to below any positive value ϵ via a local DP mechanism. Simulations demonstrate that our scheme achieves the best utility–leakage trade-off among baseline schemes and, more importantly, can adjust the trade-off between leakage and utility by controlling the noise level in local DP. Full article
(This article belongs to the Special Issue Information-Theoretic Approaches for Machine Learning and AI)
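The local DP mechanism mentioned above can be illustrated with a standard Laplace mechanism applied to a representation vector: clipping bounds the sensitivity of the release, and Laplace noise scaled to sensitivity/ε yields an ε-LDP guarantee. The clipping bound, sensitivity accounting, and function name are illustrative assumptions, not the scheme proposed in the paper.

```python
import numpy as np

def laplace_mechanism(z, epsilon, clip=1.0, rng=None):
    """ε-local-DP release of a representation vector z.

    Clipping each coordinate to [-clip, clip] bounds the L1 sensitivity of
    the release by 2 * clip * len(z); adding Laplace noise with scale
    sensitivity / epsilon then gives an epsilon-LDP guarantee. Smaller
    epsilon means less leakage but noisier (lower-utility) representations.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.clip(np.asarray(z, dtype=float), -clip, clip)
    sensitivity = 2.0 * clip * z.size
    return z + rng.laplace(scale=sensitivity / epsilon, size=z.size)
```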

25 pages, 2149 KB  
Article
A Multi-Objective Framework for Biomethanol Process Integration in Sugarcane Biorefineries Under a Multiperiod MILP Superstructure
by Victor Fernandes Garcia, Reynaldo Palacios-Bereche and Adriano Viana Ensinas
Entropy 2025, 27(11), 1162; https://doi.org/10.3390/e27111162 - 15 Nov 2025
Viewed by 271
Abstract
The growing demand for renewable energy positions biorefineries as key to enhancing biofuel competitiveness. This study proposes a novel MILP superstructure integrating resource seasonality, process selection, and heat integration to optimize biomethanol production in a sugarcane biorefinery. A multi-objective optimization balancing net present value (NPV) and avoided CO2 emissions reveals that energy integration improves environmental performance with limited economic impact. The model estimates the production of up to 66.85 kg of biomethanol/ton sugarcane from bagasse gasification, 40.7 kg e-methanol/ton sugarcane via CO2 hydrogenation, and 3.68 kg of biomethane/ton sugarcane from biogas upgrading. Hydrogen production through biomethane reforming and photovoltaic-powered electrolysis increases methanol output without raising emissions. The integrated system achieves energy efficiencies of up to 57.3% and enables the avoidance of up to 493 kg of CO2/ton sugarcane over the planning horizon. When thermal integration is excluded, efficiency drops by 8% and net energy production per area falls by 11%, due to the need to divert bagasse to cogeneration. Although economic challenges remain, CO2 remuneration ranging from USD 3.27 to USD 129.79 per ton could ensure project viability. These findings highlight the role of integrated energy systems in enabling sustainable and economically feasible sugarcane biorefineries. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Energy Systems)

15 pages, 475 KB  
Article
Unveiling Sudden Transitions Between Classical and Quantum Decoherence in the Hyperfine Structure of Hydrogen Atoms
by Kamal Berrada and Smail Bougouffa
Entropy 2025, 27(11), 1161; https://doi.org/10.3390/e27111161 - 15 Nov 2025
Viewed by 339
Abstract
This paper investigates the dynamics of quantum and classical geometric correlations in the hyperfine structure of the hydrogen atom under pure dephasing noise, focusing on the interplay between entangled initial states and environmental effects. We employ the Lindblad master equation to model dephasing, deriving differential equations for the density matrix elements to capture the evolution of the system. The study explores various entangled initial states, characterized by parameters a1, a2, and a3, and their impact on correlation dynamics under different dephasing rates Γ. A trace distance approach is utilized to quantify classical and quantum geometric correlations, offering comparative insights into their behavior. Numerical analysis reveals a transition point where classical and quantum correlations equalize, followed by distinct decay and stabilization phases, influenced by initial coherence along the z-axis. Our results reveal a universal sudden transition from classical to quantum decoherence, consistent with observations in other open quantum systems. They highlight how initial state preparation and dephasing strength critically influence the stability of quantum and classical correlations, with direct implications for quantum metrology and the development of noise-resilient quantum technologies. By focusing on the hyperfine structure of hydrogen, this study addresses a timely and relevant problem, bridging fundamental quantum theory with experimentally accessible atomic systems and emerging quantum applications. Full article
(This article belongs to the Special Issue Quantum Information and Quantum Computation)
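The trace distance used here to quantify geometric correlations has a standard closed form, T(ρ, σ) = ½ Tr|ρ − σ|, which for Hermitian matrices reduces to half the sum of the absolute eigenvalues of the difference. A minimal sketch (not the paper's code):

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance T = 0.5 * Tr|rho - sigma| of two density matrices."""
    eigs = np.linalg.eigvalsh(rho - sigma)  # Hermitian difference -> real spectrum
    return float(0.5 * np.abs(eigs).sum())
```

For orthogonal pure states the distance is 1 (perfectly distinguishable); it decays toward 0 as dephasing drives the states together.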

20 pages, 1484 KB  
Article
Post-Quantum Secure Lightweight Revocable IBE with Decryption Key Exposure Resistance
by Dandan Zhang, Hongwei Ju, Zixuan Yan, Shanqiang Feng and Fengyin Li
Entropy 2025, 27(11), 1160; https://doi.org/10.3390/e27111160 - 14 Nov 2025
Viewed by 371
Abstract
Revocable Identity-Based Encryption (RIBE) can dynamically revoke users whose secret keys have been compromised, ensuring a system’s backward security. An RIBE scheme with decryption key exposure resistance (DKER) guarantees the confidentiality of ciphertext during any time period in which the decryption key remains undisclosed. Existing RIBE schemes with DKER generate O(rlog(N/r)) ciphertexts for each plaintext message. Redundant ciphertexts impose significant computational burdens on users and substantial communication overhead on the system. To reduce the high computation and communication overhead of existing schemes, this paper proposes a dual-key combination trapdoor generation method. Based on the proposed method, an indirect RIBE scheme with DKER is constructed, reducing ciphertext redundancy and improving computational and communication efficiency. Firstly, this paper proposes a dual-key combination trapdoor generation mechanism. By constructing an Inhomogeneous Small Integer Solution (ISIS) instance, the Key Generation Center (KGC) generates and distributes short bases to users as their identity keys. Subsequently, based on the constructed ISIS instance, a new inverse ISIS instance is derived. Furthermore, during each time period, the KGC generates short bases for all non-revoked users as their time keys. By linearly combining the identity key with the corresponding time key, every non-revoked user can derive a re-randomized decryption key, achieving controlled key derivation. Secondly, based on the proposed method, a Post-Quantum Secure, Lightweight RIBE scheme with DKER (PQS-LRIBE-DKER) is constructed. For every non-revoked user, the identity key and time key serve as the user secret key and key update, respectively. Controllable key derivation enables indirect revocation in the scheme. By adopting indirect revocation, the PQS-LRIBE-DKER scheme achieves a single ciphertext per plaintext message, significantly reducing the sender’s computational load and the system’s communication overhead. Finally, under the hardness assumptions of the Learning with Errors (LWE) and ISIS problems, we prove that the proposed scheme achieves selective identity security in the standard model. Full article

21 pages, 7497 KB  
Article
Robust Deep Active Learning via Distance-Measured Data Mixing and Adversarial Training
by Shinan Song, Xing Wang, Shike Dong and Jingyan Jiang
Entropy 2025, 27(11), 1159; https://doi.org/10.3390/e27111159 - 14 Nov 2025
Viewed by 274
Abstract
Accurate uncertainty estimation in unlabeled data represents a fundamental challenge in active learning. Traditional deep active learning approaches suffer from a critical limitation: uncertainty-based selection strategies tend to concentrate excessively around noisy decision boundaries, while diversity-based methods may miss samples that are crucial for decision-making. This over-reliance on confidence metrics when employing deep neural networks as backbone architectures often results in suboptimal data selection. We introduce Distance-Measured Data Mixing (DM2), a novel framework that estimates sample uncertainty through distance-weighted data mixing to capture inter-sample relationships and the underlying data manifold structure. This approach enables informative sample selection across the entire data distribution while maintaining focus on near-boundary regions without overfitting to the most ambiguous instances. To address noise and instability issues inherent in boundary regions, we propose a boundary-aware feature fusion mechanism integrated with fast gradient adversarial training. This technique generates adversarial counterparts of selected near-boundary samples and trains them jointly with the original instances, thereby enhancing model robustness and generalization capabilities under complex or imbalanced data conditions. Comprehensive experiments across diverse tasks, model architectures, and data modalities demonstrate that our approach consistently surpasses strong uncertainty-based and diversity-based baselines while significantly reducing the number of labeled samples required for effective learning. Full article
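The fast gradient adversarial training mentioned in the abstract is typically based on FGSM-style perturbations: each selected near-boundary sample is perturbed by a small step in the direction that increases the loss. A minimal sketch for a logistic model follows; the model form and function names are illustrative assumptions (the paper's backbone is a deep network).

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.1):
    """Fast-gradient-sign adversarial counterpart of x for a logistic model.

    Moves x by eps in the sign of the loss gradient with respect to the
    input, mirroring how near-boundary samples are hardened during training.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid probability
    grad_x = (p - y) * w                    # d(log loss)/dx for this model
    return x + eps * np.sign(grad_x)
```

Training jointly on `x` and `fgsm(x, ...)` is the usual way such perturbations are folded back into the objective.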

41 pages, 2717 KB  
Review
Quantum Shannon Information Theory—Design of Communication, Ciphers, and Sensors
by Osamu Hirota
Entropy 2025, 27(11), 1158; https://doi.org/10.3390/e27111158 - 14 Nov 2025
Viewed by 420
Abstract
One of the key aspects of Shannon theory is that it provides guidance for designing the most efficient systems, such as minimizing errors and clarifying the limits of coding. This theory has seen great development in the decades since 1948. It has played a vital role in enabling the development of modern ultra-fast, stable, and highly dependable information and communication systems. Shannon theory is supported by statistical communication theories such as detection and estimation theory. The theory of communication systems that transmit Shannon information using quantum media is called quantum Shannon information theory, and research on it began in the 1960s. A theoretical formulation comparable to conventional Shannon theory has been completed. Its important role is to suggest where the application of quantum effects can surpass existing communication performance. It would be meaningless if performance, efficiency, and utility were to deteriorate due to quantum effects, even if a certain new function were gained. This paper suggests that there are various limitations to applying quantum Shannon information theory for the benefit of real-world communication systems and presents a theoretical framework for achieving the ultimate goal. Finally, as examples, we present a perfectly secure cipher that overcomes the Shannon impossibility theorem without degrading communication performance, together with corresponding sensors. Full article
(This article belongs to the Section Quantum Information)

24 pages, 2475 KB  
Article
Adaptive Belief Rule Base Modeling of Complex Industrial Systems Based on Sigmoid Functions
by Haolan Huang, Shucheng Feng, Jingying Li, Tianshu Guan and Hailong Zhu
Entropy 2025, 27(11), 1157; https://doi.org/10.3390/e27111157 - 14 Nov 2025
Viewed by 264
Abstract
In response to the challenges posed by multifactorial nonlinear relationships and uncertainties, and to address the limitations of the existing Belief Rule Base (BRB) in nonlinear fitting, uncertainty representation, and parameter optimization, this paper presents an improved reliable modeling method using a nonlinear belief rule base (R-NBRB). First, the linear inference mechanism is replaced by a smooth nonlinear S-function. This replacement better adapts to nonlinear dynamics in complex industrial systems. Second, attribute reliability is quantified through a reliability assessment method. Data, reliability, and expert knowledge are integrated using the Evidential Reasoning (ER) algorithm. Uncertainty is expressed in the form of belief degrees. Finally, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm is applied to optimize the inference parameters. Decision bias caused by insufficient expert knowledge is thereby reduced. Experiments were conducted on a task involving the detection of a petroleum pipeline leak. The mean squared error (MSE) of the R-NBRB model is only 0.2569. This represents a 28.24% reduction compared with the BRB model. The proposed method’s effectiveness and adaptability in complex industrial situations are confirmed. Full article
(This article belongs to the Section Complexity)
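One common way to realize a smooth nonlinear S-function in place of piecewise-linear rule matching is a product of two sigmoids, which forms a smooth bump around a rule center. The sketch below is a generic illustration of that idea; the steepness and width are assumed parameters, not the paper's calibrated inference mechanism.

```python
import numpy as np

def sigmoid_matching(x, center, steepness=5.0, half_width=1.0):
    """Smooth matching degree of input x to a rule centered at `center`.

    The product of a rising sigmoid (left edge) and a falling sigmoid
    (right edge) gives a differentiable bump: near 1 at the center,
    decaying smoothly away from it. Parameters are illustrative.
    """
    left = 1.0 / (1.0 + np.exp(-steepness * (x - (center - half_width))))
    right = 1.0 / (1.0 + np.exp(steepness * (x - (center + half_width))))
    return left * right
```

Such smooth activations keep the rule base differentiable, which also eases parameter optimization by evolutionary strategies like CMA-ES.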

17 pages, 2324 KB  
Article
Road Agglomerate Fog Detection Method Based on the Fusion of SURF and Optical Flow Characteristics from UAV Perspective
by Fuyang Guo, Haiqing Liu, Mengmeng Zhang, Mengyuan Jing and Xiaolong Gong
Entropy 2025, 27(11), 1156; https://doi.org/10.3390/e27111156 - 14 Nov 2025
Viewed by 246
Abstract
Road agglomerate fog seriously threatens driving safety, making real-time fog state detection crucial for implementing reliable traffic control measures. With the advantages of an aerial perspective and a broad field of view, UAVs have emerged as a novel solution for road agglomerate fog monitoring. This paper proposes an agglomerate fog detection method based on the fusion of SURF and optical flow characteristics. To synthesize an adequate agglomerate fog sample set, a novel network named FogGAN is presented, which injects physical cues into the generator using a limited number of field-collected fog images. Taking the region of interest (ROI) for agglomerate fog detection in the UAV image as the basic unit, SURF is employed to describe static texture features, optical flow is employed to capture frame-to-frame motion characteristics, and a multi-feature fusion approach based on Bayesian theory is subsequently introduced. Experimental results demonstrate the effectiveness of FogGAN through its capability to generate a more realistic dataset of agglomerate fog sample images. Furthermore, the proposed SURF and optical flow fusion method achieves higher precision, recall, and F1-score on UAV-perspective images than XGBoost-based and survey-informed fusion methods. Full article
(This article belongs to the Section Multidisciplinary Applications)

25 pages, 5674 KB  
Article
Supervised and Unsupervised Learning with Numerical Computation for the Wolfram Cellular Automata
by Kui Tuo, Shengfeng Deng, Yuxiang Yang, Yanyang Wang, Qiuping Wang, Wei Li and Wenjun Zhang
Entropy 2025, 27(11), 1155; https://doi.org/10.3390/e27111155 - 14 Nov 2025
Viewed by 421
Abstract
The local rules of elementary cellular automata (ECA) with one-dimensional three-cell neighborhoods are represented by eight-bit binary numbers that encode deterministic update rules. This class of systems is also commonly referred to as the Wolfram cellular automata. These automata are widely utilized to investigate self-organization phenomena and the dynamics of complex systems. In this work, we employ numerical simulations and computational methods to investigate the asymptotic density and dynamical evolution mechanisms in Wolfram automata. We explore alternative initial conditions under which certain Wolfram rules generate similar fractal patterns over time, even when starting from a single active site. Our results reveal the relationship between the asymptotic density and the initial density of selected rules. Furthermore, we apply both supervised and unsupervised learning methods to identify the configurations associated with different Wolfram rules. The supervised learning methods effectively identify the configurations of various Wolfram rules, while unsupervised methods like principal component analysis and autoencoders can approximately cluster configurations of different Wolfram rules into distinct groups, yielding results that align well with simulated density outputs. Machine learning methods offer significant advantages in identifying different Wolfram rules, as they can effectively distinguish highly similar configurations that are challenging to differentiate manually. Full article
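Elementary cellular automata as described here are straightforward to simulate: the eight-bit rule number is a truth table indexed by the three-cell neighborhood read as a binary number. A minimal sketch, with periodic boundaries and the lattice size as illustrative choices:

```python
import numpy as np

def eca_step(state, rule):
    """One synchronous update of an elementary CA under a Wolfram rule (0-255)."""
    l = np.roll(state, 1)                 # left neighbors (periodic boundary)
    r = np.roll(state, -1)                # right neighbors
    idx = 4 * l + 2 * state + r           # neighborhood as a 3-bit number
    table = (rule >> np.arange(8)) & 1    # rule's truth table: bit k -> output
    return table[idx]

def run_eca(rule, steps, size=101):
    """Evolve from a single active center site; track density per step."""
    state = np.zeros(size, dtype=int)
    state[size // 2] = 1
    densities = [state.mean()]
    for _ in range(steps):
        state = eca_step(state, rule)
        densities.append(state.mean())
    return state, densities
```

Rule 90 (output = left XOR right) run from a single seed traces the familiar Sierpinski pattern; the `densities` list is the raw material for asymptotic-density studies like those in the paper.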

22 pages, 3322 KB  
Article
Research on Integrated Modularization of Supercritical Carbon Dioxide System for Aircraft Carrier Nuclear Power
by Shengya Hou, Junren Chen, Fengyuan Zhang and Qiguo Yang
Entropy 2025, 27(11), 1154; https://doi.org/10.3390/e27111154 - 14 Nov 2025
Viewed by 568
Abstract
This paper presents a novel integrated nuclear-powered supercritical carbon dioxide (S-CO2) system for aircraft carriers, replacing the conventional secondary-loop steam Rankine cycle with a regenerative S-CO2 power cycle. The system comprises two modules: a nuclear reactor module and an S-CO2 power module. Comprehensive thermodynamic, economic, and compactness analyses were conducted, using exergy efficiency, levelized energy cost (LEC), and heat transfer area per unit power output (APR) as objective functions for optimization. Parameter analysis revealed the influence of key operating parameters on system performance, and a multi-objective optimization approach based on genetic algorithms was employed to determine optimal system parameters. The results indicate that the system achieves an exergy efficiency of 45%, an APR of 0.168 m2 kW−1, and an LEC of 2.1 cents/(kW·h). This high compactness, combined with superior thermodynamic and economic performance, underscores the feasibility of the S-CO2 system for integration into nuclear-powered aircraft carriers, offering significant potential to enhance their overall performance and operational efficiency. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Energy Systems)

22 pages, 556 KB  
Article
On the Shortfall of Tail-Based Entropy and Its Application to Capital Allocation
by Pingyun Li and Chuancun Yin
Entropy 2025, 27(11), 1153; https://doi.org/10.3390/e27111153 - 13 Nov 2025
Viewed by 325
Abstract
We introduce and study the shortfall of tail-based entropy (STE), a tail-sensitive risk functional that combines expected shortfall (ES) and tail-based entropy (TE). Beyond the tail mean, STE imposes a rank-dependent penalty on tail variability, thereby capturing both the magnitude and variability of tail risk under extremes. The framework encompasses several shortfall-type measures as special cases, such as Gini shortfall, extended Gini shortfall, shortfall of cumulative residual entropy, shortfall of right-tail deviation, and shortfall of cumulative residual Tsallis entropy. We provide equivalent characterizations of STE, derive sufficient conditions for coherence, and establish monotonicity with respect to tail-variability order. As an application, we investigate STE-based capital allocation, deriving closed-form allocation formulas under elliptical and extended skew-normal distributions, along with several illustrative special cases. Finally, an empirical analysis with insurance company data illustrates the implementation and evaluates the performance of the allocation rule. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
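The building blocks of STE, expected shortfall plus a rank-dependent penalty on tail variability, admit simple empirical estimators. The sketch below uses the Gini mean difference of the tail as the variability term, with `alpha` and the penalty weight `lam` as illustrative parameters; the paper's closed-form allocation formulas are not reproduced here.

```python
import numpy as np

def expected_shortfall(losses, alpha=0.95):
    """Empirical ES: mean of the worst (1 - alpha) fraction of losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * losses.size))
    return float(losses[k:].mean()) if k < losses.size else float(losses[-1])

def tail_gini(losses, alpha=0.95):
    """Gini mean difference of the losses beyond the empirical VaR."""
    srt = np.sort(np.asarray(losses, dtype=float))
    tail = srt[int(np.ceil(alpha * srt.size)):]
    n = tail.size
    if n < 2:
        return 0.0
    # rank-based formula for the Gini mean difference of a sorted sample
    return float((2.0 / (n * (n - 1))) * np.sum((2 * np.arange(1, n + 1) - n - 1) * tail))

def gini_shortfall(losses, alpha=0.95, lam=0.25):
    """Shortfall-type measure: tail mean plus a penalty on tail variability."""
    return expected_shortfall(losses, alpha) + lam * tail_gini(losses, alpha)
```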

21 pages, 4687 KB  
Article
Research on Image Encryption with Multi-Level Keys Based on a Six-Dimensional Memristive Chaotic System
by Xiaobin Zhang, Yaxuan Chai, Shitao Xiang and Shaozhen Li
Entropy 2025, 27(11), 1152; https://doi.org/10.3390/e27111152 - 13 Nov 2025
Viewed by 284
Abstract
To address the security of digital images, this paper proposes a novel image encryption algorithm based on a six-dimensional memristive chaotic system. First, the algorithm uses the Secure Hash Algorithm 256 (SHA-256) to generate a hash value, from which the initial dynamic key is derived. Next, it integrates Zigzag scrambling, chaotic index scrambling, and diffusion operations to form an encryption scheme with multiple rounds of scrambling and diffusion. In this framework, after each encryption operation, a part of the dynamic key is changed according to the input parameters, and the six-dimensional memristive chaotic system continues iterating to generate the pseudo-random sequence for the next operation. Finally, the proposed algorithm is evaluated using indicators including information entropy, histograms, the Number of Pixels Change Rate (NPCR) and Unified Average Changing Intensity (UACI), encryption time, and so on. The results show that the information entropy of the encrypted image reaches 7.9979; its Chi-square statistic is 186.6875; the average NPCR and UACI are 99.6111% and 33.4643%, respectively; and the encryption time is 0.342 s for the 256 × 256 Cameraman image. These results indicate that the algorithm not only encrypts images effectively but also resists many conventional attacks. Full article
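The NPCR and UACI indicators reported above have standard definitions for 8-bit images: NPCR is the percentage of pixel positions that differ between two cipher images, and UACI is their mean absolute intensity difference normalized by 255. A minimal sketch (function names are illustrative):

```python
import numpy as np

def npcr(c1, c2):
    """Number of Pixels Change Rate between two cipher images, in percent."""
    return 100.0 * float(np.mean(c1 != c2))

def uaci(c1, c2):
    """Unified Average Changing Intensity for 8-bit images, in percent."""
    return 100.0 * float(np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0))
```

For a good cipher, NPCR approaches 100% and UACI approaches 33.46% when the two ciphertexts come from plaintexts differing in a single pixel, which is what the reported averages are measuring.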

20 pages, 8724 KB  
Article
An Outlier Suppression and Adversarial Learning Model for Anomaly Detection in Multivariate Time Series
by Wei Zhang, Ting Li, Ping He, Yuqing Yang and Shengrui Wang
Entropy 2025, 27(11), 1151; https://doi.org/10.3390/e27111151 - 13 Nov 2025
Viewed by 395
Abstract
Multivariate time series anomaly detection is a critical task in modern engineering, with applications spanning environmental monitoring, network security, and industrial systems. While reconstruction-based methods have shown promise, they often suffer from overfitting and fail to adequately distinguish between normal and anomalous data, limiting their generalization capabilities. To address these challenges, we propose the AOST model, which integrates adversarial learning with an outlier suppression mechanism within a Transformer framework. The model introduces an outlier suppression attention mechanism to enhance the distinction between normal and anomalous data points, thereby improving sensitivity to deviations. Additionally, a dual-decoder generative adversarial architecture is employed to enforce consistent data distribution learning, enhancing robustness and generalization. A novel anomaly scoring strategy based on longitudinal differences further refines detection accuracy. Extensive experiments on four public datasets (SWaT, WADI, SMAP, and PSM) demonstrate the model’s superior performance, achieving an average F1 score of 88.74%, which surpasses existing state-of-the-art methods. These results underscore the effectiveness of AOST in advancing multivariate time series anomaly detection. Full article
(This article belongs to the Section Signal and Data Analysis)

23 pages, 1540 KB  
Article
Learning in Probabilistic Boolean Networks via Structural Policy Gradients
by Pedro Juan Rivera Torres
Entropy 2025, 27(11), 1150; https://doi.org/10.3390/e27111150 - 13 Nov 2025
Viewed by 338
Abstract
We revisit Probabilistic Boolean Networks as trainable function approximators. The key obstacle, non-differentiable structural choices (which predictors to read and which Boolean operators to apply), is addressed by casting the PBN’s structure as a stochastic policy whose parameters are optimized with score-function (REINFORCE) gradients. Continuous output heads (logistic/linear/softmax or policy logits) are trained with ordinary gradients. We call the resulting model a Learning PBN. We formalize the Learning Probabilistic Boolean Network, derive unbiased structural gradients with variance reduction, and prove a universal approximation property over discretized inputs. Empirically, Learning Probabilistic Boolean Networks approach ANN performance across classification (accuracy ↑), regression (RMSE ↓), representation quality via clustering (ARI ↑), and reinforcement learning (return ↑) while yielding interpretable, rule-like internal units. We analyze the effect of binning resolution, operator sets, and unit counts, and show how the learned logic stabilizes as training progresses. Our results indicate that PBNs can serve as general-purpose learners, competitive with ANNs in tabular/noisy regimes, without sacrificing interpretability. Full article
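The score-function (REINFORCE) treatment of non-differentiable structural choices can be illustrated on a single Bernoulli gate deciding whether a predictor is read. The sigmoid parameterization, mean-reward baseline, and sample count below are illustrative assumptions, not the paper's full estimator.

```python
import numpy as np

def reinforce_step(theta, reward_fn, rng, n_samples=64, lr=0.1):
    """One score-function (REINFORCE) update of a Bernoulli structural gate.

    theta parameterizes p = sigmoid(theta), the probability that the
    predictor is included. With the sigmoid chain rule, the score of a
    sampled gate g is d log Bernoulli(g; p) / d theta = g - p; subtracting
    a mean-reward baseline reduces the variance of the gradient estimate.
    """
    p = 1.0 / (1.0 + np.exp(-theta))
    gates = rng.random(n_samples) < p
    rewards = np.array([reward_fn(bool(g)) for g in gates], dtype=float)
    baseline = rewards.mean()
    grad = np.mean((rewards - baseline) * (gates.astype(float) - p))
    return theta + lr * grad
```

Under a reward that pays 1 whenever the gate is on, repeated updates drive p toward 1, which is the basic mechanism for learning which predictors and operators a PBN should use.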
