Entropy, Volume 27, Issue 5 (May 2025) – 90 articles

Cover Story (view full-size image): This picture shows the uncertainty sets associated with received codewords in a communication system. Each set represents a possible received signal, and to achieve capacity at a given level of confidence, the sets are tightly packed in space and partially overlap. In this paper, we demonstrate that the highest rate of communication at a given level of confidence corresponds to a non-stochastic notion of mutual information between transmitted and received signals, expressed in terms of the quantization of the range of uncertainty of one signal given knowledge of the other. This provides a generalization of Kolmogorov’s capacity to sets with partial overlap and an information-theoretic interpretation of this quantity that admits the possibility of decoding errors, as in classical information theory, while retaining the worst-case, non-stochastic character of Kolmogorov’s approach. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for published papers, which are available in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 2167 KiB  
Article
Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits
by Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh and Deniz Gündüz
Entropy 2025, 27(5), 541; https://doi.org/10.3390/e27050541 - 20 May 2025
Abstract
We consider the distributed stochastic gradient descent problem, where a main node distributes gradient calculations among n workers. By assigning tasks to all workers and waiting only for the k fastest ones, the main node can trade off the algorithm’s error with its runtime by gradually increasing k as the algorithm evolves. However, this strategy, referred to as adaptive k-sync, neglects the cost of unused computations and of communicating models to workers that exhibit straggling behavior. We propose a cost-efficient scheme that assigns tasks only to k workers, and gradually increases k. To learn which workers are the fastest while assigning gradient calculations, we introduce the use of a combinatorial multi-armed bandit model. Assuming workers have exponentially distributed response times with different means, we provide both empirical and theoretical guarantees on the regret of our strategy, i.e., the extra time spent learning the mean response times of the workers. Furthermore, we propose and analyze a strategy that is applicable to a large class of response time distributions. Compared to adaptive k-sync, our scheme achieves significantly lower errors with the same computational effort and less downlink communication, while being slower. Full article
(This article belongs to the Special Issue Information-Theoretic Approaches for Machine Learning and AI)
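To make the bandit idea concrete, here is a minimal simulation sketch (ours, not the authors' code): it assumes exponentially distributed response times with unknown means and uses a simple lower-confidence-bound rule to pick the k seemingly fastest of n workers each round; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, rounds = 10, 3, 2000
true_means = rng.uniform(0.5, 3.0, size=n)  # hypothetical mean response times

counts, sums, total_time = np.zeros(n), np.zeros(n), 0.0
for t in range(1, rounds + 1):
    # Optimistic (lower-confidence-bound) estimate of each mean response time;
    # unexplored workers get -inf so they are tried first.
    means = sums / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    lcb = np.where(counts > 0, means - bonus, -np.inf)
    chosen = np.argsort(lcb)[:k]            # the k seemingly fastest workers

    samples = rng.exponential(true_means[chosen])
    total_time += samples.max()             # round ends with the slowest of the k
    counts[chosen] += 1
    sums[chosen] += samples

print("chosen most often:", sorted(np.argsort(-counts)[:k]))
print("truly fastest:   ", sorted(np.argsort(true_means)[:k]))
print(f"average round time: {total_time / rounds:.3f}")
```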
23 pages, 2258 KiB  
Article
Research on Cold Chain Logistics Joint Distribution Vehicle Routing Optimization Based on Uncertainty Entropy and Time-Varying Network
by Huaixia Shi, Yu Hong, Qinglei Zhang and Jiyun Qin
Entropy 2025, 27(5), 540; https://doi.org/10.3390/e27050540 - 20 May 2025
Abstract
The sharing economy is an inevitable trend in cold chain logistics. Most cold chain logistics enterprises are small and operate independently, with limited collaboration. Joint distribution is key to integrating cold chain logistics and the sharing economy. It aims to share logistics resources, provide collective customer service, and optimize distribution routes. However, existing studies have overlooked uncertainty factors in joint distribution optimization. To address this, we propose the Cold Chain Logistics Joint Distribution Vehicle Routing Problem with Time-Varying Network (CCLJDVRP-TVN). This model integrates traffic congestion uncertainty and constructs a time-varying network to reflect real-world conditions. The solution combines simulated annealing strategies with genetic algorithms and uses an entropy mechanism to handle uncertainty, improving global search performance. The method was applied to optimize vehicle routing for three cold chain logistics companies in Beijing. The results show reductions in logistics costs of 18.3%, carbon emissions of 15.8%, and fleet size of 12.5%. The approach also effectively addresses the impact of congestion and uncertainty on distribution. This study offers valuable theoretical support for optimizing joint distribution and managing uncertainties in cold chain logistics. Full article
(This article belongs to the Section Multidisciplinary Applications)
13 pages, 1039 KiB  
Article
A Two-State Random Walk Model of Sperm Search on Confined Domains
by Martin Bier, Maciej Majka and Cameron Schmidt
Entropy 2025, 27(5), 539; https://doi.org/10.3390/e27050539 - 19 May 2025
Abstract
Mammalian fertilization depends on sperm successfully navigating a spatially and chemically complex microenvironment in the female reproductive tract. This process is often conceptualized as a competitive race, but is better understood as a collective random search. Sperm within an ejaculate exhibit a diverse distribution of motility patterns, with some moving in relatively straight lines and others following tightly turning trajectories. Here, we present a two-state random walk model in which sperm switch from high-persistence-length to low-persistence-length motility modes. In reproductive biology, such a switch is often recognized as “hyperactivation”. We study a circularly symmetric setup with sperm emerging at the center and searching a finite-area disk. We explore the implications of switching on search efficiency. The first proposed model describes an adaptive search strategy in which sperm achieve improved spatial coverage without cell-to-cell or environment-to-cell communication. The second model that we study adds a small amount of environment-to-cell communication. The models resemble macroscopic search-and-rescue tactics, but without organization or networked communication. Our findings provide a quantitative framework linking sperm motility patterns to efficient search strategies, offering insights into sperm physiology and the stochastic search dynamics of self-propelled particles. Full article
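A minimal sketch of the two-state search model described above (our illustration, not the authors' code): walkers start at the center of a disk, move with small angular diffusion (high persistence) and switch to large angular diffusion (tight turning, "hyperactivation") after a fixed number of steps; the target location, step counts, and noise levels are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_fraction(switch_step, n_walkers=100, R=50.0, target=(30.0, 0.0),
                 r_capture=2.0, max_steps=2000):
    """Fraction of walkers that find a small target disk before max_steps."""
    hits = 0
    for _ in range(n_walkers):
        x = y = 0.0
        theta = rng.uniform(0, 2 * np.pi)
        for step in range(max_steps):
            # high persistence before the switch, tight turning after it
            sigma = 0.05 if step < switch_step else 0.8
            theta += rng.normal(0, sigma)
            x += np.cos(theta); y += np.sin(theta)
            r = np.hypot(x, y)
            if r > R:                       # reflect at the disk boundary
                x *= (2 * R - r) / r; y *= (2 * R - r) / r
            if np.hypot(x - target[0], y - target[1]) < r_capture:
                hits += 1
                break
    return hits / n_walkers

for s in (0, 50, 200, 2000):  # 0 = always tight turns, 2000 = never switch
    print(f"switch after {s:4d} steps -> hit fraction {hit_fraction(s):.2f}")
```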
24 pages, 5161 KiB  
Article
Fixed-Time Cooperative Formation Control of Heterogeneous Systems Under Multiple Constraints
by Yandong Li, Wei Zhao, Ling Zhu, Zehua Zhang and Yuan Guo
Entropy 2025, 27(5), 538; https://doi.org/10.3390/e27050538 - 17 May 2025
Abstract
This paper studies the fixed-time formation-tracking control problem for a heterogeneous multi-agent system (MAS) consisting of six unmanned aerial vehicles (UAVs) and three unmanned ground vehicles (UGVs) under actuator attacks, external disturbances, and input saturation. First, a distributed sliding mode estimator and controller tailored for UAV-UGV heterogeneous systems are proposed based on sliding mode techniques. Second, by integrating repulsive potential functions with sliding manifolds, a distributed fixed-time adaptive sliding mode control protocol is designed. This protocol ensures collision avoidance while enabling the MAS to track desired trajectories and achieve a predefined formation configuration within a fixed time. The fixed-time stability of the closed-loop system is rigorously proven via Lyapunov theory. Full article
(This article belongs to the Section Complexity)
19 pages, 380 KiB  
Article
Bootstrap Confidence Intervals for Multiple Change Points Based on Two-Stage Procedures
by Li Hou, Baisuo Jin, Yuehua Wu and Fangwei Wang
Entropy 2025, 27(5), 537; https://doi.org/10.3390/e27050537 - 17 May 2025
Abstract
This paper investigates the construction of confidence intervals for multiple change points in linear regression models. First, we detect multiple change points by performing variable selection on blocks of the input sequence; second, we re-estimate their exact locations in a refinement step. Specifically, we exploit an orthogonal greedy algorithm to recover the number of change points consistently in the cutting stage, and employ the sup-Wald-type test statistic to determine the locations of multiple change points in the refinement stage. Based on a two-stage procedure, we propose bootstrapping the estimated centered error sequence, which can accommodate unknown magnitudes of changes and ensure the asymptotic validity of the proposed bootstrapping method. This enables us to construct confidence intervals using the empirical distribution of the resampled data. The proposed method is illustrated with simulations and real data examples. Full article
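The flavor of the bootstrap step can be shown on a deliberately simplified problem (our sketch, not the paper's procedure): a single change in mean, a least-squares location estimate, and a residual bootstrap built from the centered estimated errors.

```python
import numpy as np

rng = np.random.default_rng(2)

def changepoint(x):
    """Least-squares estimate of a single change in mean."""
    n, c = len(x), np.cumsum(x)
    taus = np.arange(5, n - 5)
    m1, m2 = c[taus - 1] / taus, (c[-1] - c[taus - 1]) / (n - taus)
    # minimizing the RSS equals maximizing the between-segment term
    return taus[np.argmax(taus * m1**2 + (n - taus) * m2**2)]

n, tau_true = 200, 120
x = np.r_[rng.normal(0, 1, tau_true), rng.normal(1.0, 1, n - tau_true)]

tau_hat = changepoint(x)
m1, m2 = x[:tau_hat].mean(), x[tau_hat:].mean()
resid = np.r_[x[:tau_hat] - m1, x[tau_hat:] - m2]
resid -= resid.mean()                      # centered estimated errors

# Residual bootstrap: resample errors, rebuild the series, re-estimate.
boot = []
for _ in range(500):
    e = rng.choice(resid, size=n, replace=True)
    xb = np.r_[np.full(tau_hat, m1), np.full(n - tau_hat, m2)] + e
    boot.append(changepoint(xb))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate {tau_hat}, 95% bootstrap CI [{lo:.0f}, {hi:.0f}], true {tau_true}")
```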
12 pages, 11669 KiB  
Article
Using Nearest-Neighbor Distributions to Quantify Machine Learning of Materials’ Microstructures
by Jeffrey M. Rickman, Katayun Barmak, Matthew J. Patrick and Godfred Adomako Mensah
Entropy 2025, 27(5), 536; https://doi.org/10.3390/e27050536 - 17 May 2025
Abstract
Machine learning strategies for the semantic segmentation of materials’ micrographs, such as U-Net, have been employed in recent years to enable the automated identification of grain-boundary networks in polycrystals. For example, most recently, this architecture has allowed researchers to address the long-standing problem of automated image segmentation of thin-film microstructures in bright-field TEM micrographs. Such approaches are typically based on the minimization of a binary cross-entropy loss function that compares constructed images to a ground truth at the pixel level over many epochs. In this work, we quantify the rate at which the underlying microstructural features embodied in the grain-boundary network, as described stereologically, are also learned in this process. In particular, we assess the rate of microstructural learning in terms of the moments of the k-th nearest-neighbor pixel distributions and associated metrics, including a microstructural cross-entropy, that embody the spatial correlations among the pixels through a hierarchy of n-point correlation functions. From the moments of these distributions, we obtain so-called learning functions that highlight the rate at which the important topological features of a grain-boundary network appear. It is found that the salient features of network structure emerge after relatively few epochs, suggesting that grain size, network topology, etc., are learned early (as measured in epochs) during the segmentation process. Full article
(This article belongs to the Section Multidisciplinary Applications)
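The nearest-neighbor statistics themselves are easy to reproduce. The sketch below (ours; the synthetic point sets stand in for segmented boundary pixels) computes moments of the k-th nearest-neighbor distance distribution with a k-d tree.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

def knn_moments(points, k=4):
    """Mean and variance of the k-th nearest-neighbor distance distribution."""
    tree = cKDTree(points)
    # query k+1 neighbors because the nearest neighbor of a point is itself
    d, _ = tree.query(points, k=k + 1)
    dk = d[:, k]
    return dk.mean(), dk.var()

# Hypothetical stand-ins: "ground truth" boundary pixels on a jittered grid,
# and a "prediction" that is the truth plus noise and dropout.
gx, gy = np.meshgrid(np.arange(0, 100, 5), np.arange(0, 100, 5))
truth = np.c_[gx.ravel(), gy.ravel()] + rng.normal(0, 0.3, (gx.size, 2))
keep = rng.random(len(truth)) > 0.2
pred = truth[keep] + rng.normal(0, 1.5, (keep.sum(), 2))

for name, pts in [("truth", truth), ("prediction", pred)]:
    m, v = knn_moments(pts)
    print(f"{name:10s} 4th-NN distance: mean {m:.2f}, var {v:.2f}")
```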
23 pages, 5007 KiB  
Article
DFST-UNet: Dual-Domain Fusion Swin Transformer U-Net for Image Forgery Localization
by Jianhua Yang, Anjun Xie, Tao Mai and Yifang Chen
Entropy 2025, 27(5), 535; https://doi.org/10.3390/e27050535 - 17 May 2025
Abstract
Image forgery localization is critical in defending against the malicious manipulation of image content, and is attracting increasing attention worldwide. In this paper, we propose a Dual-domain Fusion Swin Transformer U-Net (DFST-UNet) for image forgery localization. DFST-UNet is built on a U-shaped encoder–decoder architecture. Swin Transformer blocks are integrated into the U-Net architecture to capture long-range context information and perceive forged regions at different scales. Considering the fact that high-frequency forgery information is an essential clue for forgery localization, a dual-stream encoder is proposed to comprehensively expose forgery clues in both the RGB domain and the frequency domain. A novel high-frequency feature extractor module (HFEM) is designed to extract robust high-frequency features. A hierarchical attention fusion module (HAFM) is designed to effectively fuse the dual-domain features. Extensive experimental results demonstrate the superiority of DFST-UNet over the state-of-the-art methods in the task of image forgery localization. Full article
(This article belongs to the Section Signal and Data Analysis)
18 pages, 1062 KiB  
Article
Investigation of the Internal Structure of Hard-to-Reach Objects Using a Hybrid Algorithm on the Example of Walls
by Rafał Brociek, Józef Szczotka, Mariusz Pleszczyński, Francesca Nanni and Christian Napoli
Entropy 2025, 27(5), 534; https://doi.org/10.3390/e27050534 - 16 May 2025
Abstract
The article presents research on the application of computed tomography with an incomplete dataset to the problem of examining the internal structure of walls. The case of incomplete information in computed tomography often occurs in various applications, e.g., when examining large or hard-to-reach objects. Algorithms dedicated to this type of problem can be used to detect anomalies (defects, cracks) in walls, among other applications. Situations of this type may occur, for example, in old buildings, where special caution should be exercised. The approach presented in the article consists of a non-standard solution to the problem of reconstructing the internal structure of the tested object. The classical approach involves constructing an appropriate system of equations based on X-rays, the solution of which describes the structure. However, this approach has a drawback: solving such systems of equations is computationally very complex, because the algorithms used, combined with incomplete information, converge very slowly. In this article, we propose a different approach that eliminates this problem. To simulate the structure of the tested object, we use a hybrid algorithm that combines a metaheuristic optimization algorithm (the Group Teaching Optimization Algorithm) with a numerical optimization method (the Hooke–Jeeves method). In order to solve the considered inverse problem, a functional measuring the fit of the model to the measurement data is created. The hybrid algorithm presented in this paper was used to find the minimum of this functional. This paper also shows computational examples illustrating the effectiveness of the algorithms. Full article
(This article belongs to the Special Issue Inverse Problems: Advanced Methods and Innovative Applications)
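The numerical half of the hybrid is the classical Hooke–Jeeves pattern search. Below is a compact, generic implementation applied to a toy two-parameter fit functional (the metaheuristic GTOA stage and the tomographic forward model are omitted; the matrix and data are illustrative).

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Derivative-free pattern search: exploratory moves plus pattern moves."""
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        # Exploratory move: probe each coordinate in both directions.
        xe, fe = x.copy(), fx
        for i in range(len(x)):
            for d in (step, -step):
                trial = xe.copy(); trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        if fe < fx:                      # pattern move along the improving direction
            x_new = xe + (xe - x)
            f_new = f(x_new)
            x, fx = (x_new, f_new) if f_new < fe else (xe, fe)
        else:
            step *= shrink               # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

# Toy inverse problem: recover two attenuation coefficients from
# line-integral "measurements" (names and setup are illustrative only).
true_mu = np.array([0.3, 0.8])
A = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5], [3.0, 0.5]])  # path lengths
y = A @ true_mu
fit = lambda mu: np.sum((A @ mu - y) ** 2)   # functional measuring model fit

mu_hat, val = hooke_jeeves(fit, x0=[1.0, 1.0])
print("recovered:", np.round(mu_hat, 3), "residual:", f"{val:.2e}")
```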
19 pages, 2005 KiB  
Article
Network Risk Diffusion and Resilience in Emerging Stock Markets
by Jiang-Cheng Li, Yi-Zhen Xu and Chen Tao
Entropy 2025, 27(5), 533; https://doi.org/10.3390/e27050533 - 16 May 2025
Abstract
With the acceleration of globalization, the connections between emerging market economies are becoming increasingly intricate, making it crucial to understand the mechanisms of risk transmission. This study employs the transfer entropy model to analyze risk diffusion and network resilience across ten emerging market countries. The findings reveal that Brazil, Mexico, and Saudi Arabia are the primary risk exporters, while countries such as India, South Africa, and Indonesia predominantly act as risk receivers. The research highlights the profound impact of major events such as the 2008 global financial crisis and the 2020 COVID-19 pandemic on risk diffusion, with risk diffusion peaking during the pandemic. Additionally, the study underscores the importance of network resilience, suggesting that certain levels of noise and shocks can enhance resilience and improve network stability. While the global economy gradually recovered following the 2008 financial crisis, the post-pandemic recovery has been slower, with external shocks and noise presenting long-term challenges to network resilience. This study emphasizes the importance of understanding network resilience and risk diffusion mechanisms, offering new insights for managing risk transmission in future global economic crises. Full article
(This article belongs to the Special Issue Complexity in Financial Networks)
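A bare-bones transfer entropy estimator shows the direction-finding idea (our sketch; the paper's estimator and data processing may differ): discretize two return series into quantile bins and measure how much the past of one reduces uncertainty about the next step of the other.

```python
import numpy as np

rng = np.random.default_rng(4)

def transfer_entropy(x, y, bins=3):
    """TE_{x->y}: information x(t) adds about y(t+1) beyond y(t), in bits.

    Series are discretized into quantile bins; probabilities come from
    simple plug-in counts, so this is a rough small-sample estimator.
    """
    qx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    qy = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    y1, y0, x0 = qy[1:], qy[:-1], qx[:-1]
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p_abc = ((y1 == a) & (y0 == b) & (x0 == c)).mean()
                if p_abc == 0:
                    continue
                p_a_bc = p_abc / ((y0 == b) & (x0 == c)).mean()
                p_a_b = ((y1 == a) & (y0 == b)).mean() / (y0 == b).mean()
                te += p_abc * np.log2(p_a_bc / p_a_b)
    return te

# Coupled toy "markets": y follows x with a one-step lag.
n = 5000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.normal() * 0.5

print(f"TE x->y: {transfer_entropy(x, y):.3f} bits")
print(f"TE y->x: {transfer_entropy(y, x):.3f} bits")
```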
23 pages, 4430 KiB  
Article
Exergetic Analysis and Design of a Mechanical Compression Stage—Application for a Cryogenic Air Separation Plant
by Adalia Andreea Percembli (Chelmuș), Arthur Dupuy, Lavinia Grosu, Daniel Dima and Alexandru Dobrovicescu
Entropy 2025, 27(5), 532; https://doi.org/10.3390/e27050532 - 16 May 2025
Abstract
This study focuses on the compression area of a cryogenic air separation unit (ASU). The mechanism of exergy consumption in the compressor was revealed. The influence of the compression ratio and of the isentropic efficiency per stage gives arguments for a proper choice of these decisional parameters. For the purchase cost of the compressor, an exergoeconomic correlation based on the exergetic product, represented by the compression ratio and the isentropic efficiency as the Second Law coefficient of performance, was used instead of the common thermo-economic one based only on the cost of materials. The impact of the suction temperature on the compressor operating performance is shown, bridging the gap between the compression stage and the associated intercooler. After optimization of the global system, a specific exergy destruction is assigned to each inter-stage compression cooler. To fit this optimum exergy consumption, a design procedure for the inter-stage and final coolers based on the number of heat transfer units (NTU-ε) method and the number of exergy units destroyed (NEUD) is shown. Graphs are provided that make the application of the method straightforward and much easier to use than the usual logarithmic mean temperature difference approach. A 25% increase in the compression ratio per stage leads to a decrease in the exergy efficiency of 3%, while the purchase cost of the compressor rises by 80%. An increase in the isentropic efficiency of the compressor from 0.7 to 0.85 leads to an increase in the exergetic performance coefficient of 21%, while the compressor purchase cost triples. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Energy Systems)
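The per-stage exergy bookkeeping can be illustrated with ideal-gas air numbers (ours, not the paper's plant data): isentropic outlet temperature from the pressure ratio, actual work from the isentropic efficiency, and exergy destruction from the Gouy–Stodola theorem.

```python
import numpy as np

# Ideal-gas air, per-stage exergy accounting (illustrative numbers only).
T0, cp, R = 298.15, 1005.0, 287.0   # dead-state K, J/(kg K), J/(kg K)

def stage(T_in, ratio, eta_s):
    """Outlet temperature, specific work, and exergy destruction of one stage."""
    T_out_s = T_in * ratio ** (R / cp)          # isentropic outlet temperature
    w = cp * (T_out_s - T_in) / eta_s           # actual specific work (J/kg)
    T_out = T_in + w / cp                       # actual outlet temperature
    # Entropy generated (the pressure term cancels against the isentropic path):
    s_gen = cp * np.log(T_out / T_out_s)
    return T_out, w, T0 * s_gen                 # Gouy-Stodola: Ex_d = T0 * s_gen

for ratio, eta in [(3.0, 0.85), (3.75, 0.85), (3.0, 0.70)]:
    T_out, w, ex_d = stage(300.0, ratio, eta)
    print(f"ratio {ratio:.2f}, eta_s {eta:.2f}: "
          f"w = {w/1e3:6.1f} kJ/kg, Ex_d = {ex_d/1e3:5.1f} kJ/kg, "
          f"exergy efficiency = {1 - ex_d / w:.3f}")
```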
46 pages, 1415 KiB  
Article
Higher Algebraic K-Theory of Causality
by Sridhar Mahadevan
Entropy 2025, 27(5), 531; https://doi.org/10.3390/e27050531 - 16 May 2025
Abstract
Causal discovery involves searching intractably large spaces. Decomposing the search space into classes of observationally equivalent causal models is a well-studied avenue to making discovery tractable. This paper studies the topological structure underlying causal equivalence to develop a categorical formulation of Chickering’s transformational characterization of Bayesian networks. A homotopic generalization of the Meek–Chickering theorem on the connectivity structure within causal equivalence classes and a topological representation of Greedy Equivalence Search (GES) that moves from one equivalence class of models to the next are described. Specifically, this work defines causal models as propable symmetric monoidal categories (cPROPs), which define a functor category CP from a coalgebraic PROP P to a symmetric monoidal category C. Such functor categories were first studied by Fox, who showed that they define the right adjoint of the inclusion of Cartesian categories in the larger category of all symmetric monoidal categories. cPROPs are an algebraic theory in the sense of Lawvere. cPROPs are related to previous categorical causal models, such as Markov categories and affine CDU categories, which can be viewed as defined by cPROP maps specifying the semantics of comonoidal structures corresponding to the “copy-delete” mechanisms. This work characterizes Pearl’s structural causal models (SCMs) in terms of Cartesian cPROPs, where the morphisms that define the endogenous variables are purely deterministic. A higher algebraic K-theory of causality is developed by studying the classifying spaces of observationally equivalent causal cPROP models, constructing their simplicial realization through the nerve functor. It is shown that Meek–Chickering causal DAG equivalence generalizes to induce a homotopic equivalence across observationally equivalent cPROP functors. A homotopic generalization of the Meek–Chickering theorem is presented, where covered edge reversals connecting equivalent DAGs induce natural transformations between homotopically equivalent cPROP functors and correspond to an equivalence structure on the corresponding string diagrams. The Grothendieck group completion of cPROP causal models is defined using the Grayson–Quillen construction, relating the classifying space of cPROP causal equivalence classes to classifying spaces of an induced groupoid. A real-world domain modeling genetic mutations in cancer is used to illustrate the framework in this paper. Full article
(This article belongs to the Special Issue Causal Graphical Models and Their Applications)
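The covered-edge moves at the heart of the Meek–Chickering machinery are simple to state computationally: an edge x → y is covered when parents(y) = parents(x) ∪ {x}, and reversing a covered edge preserves Markov equivalence. A minimal check (our sketch):

```python
def parents(dag, v):
    return {u for u, w in dag if w == v}

def covered_edges(dag):
    """Edges x -> y with parents(y) = parents(x) | {x} (Chickering)."""
    return [(x, y) for x, y in dag if parents(dag, y) == parents(dag, x) | {x}]

# Example DAG over {a, b, c}: a -> b, a -> c, b -> c
dag = [("a", "b"), ("a", "c"), ("b", "c")]
print(covered_edges(dag))   # [('a', 'b'), ('b', 'c')] - both are covered

# Reversing a covered edge yields a Markov-equivalent DAG (same skeleton,
# same v-structures; here there are none), and the move is reversible:
dag2 = [e for e in dag if e != ("b", "c")] + [("c", "b")]
print(covered_edges(dag2))  # [('a', 'c'), ('c', 'b')] - ('c', 'b') is covered
```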
10 pages, 1838 KiB  
Article
A Monte Carlo Study of Dynamic Phase Transitions Observed in the Kinetic S = 1 Ising Model on Nonregular Lattices
by Yusuf Yüksel
Entropy 2025, 27(5), 530; https://doi.org/10.3390/e27050530 - 16 May 2025
Abstract
In the present paper, we discuss the thermodynamic and dynamic phase transition properties of the kinetic Blume–Capel model with spin-1, defined on non-regular lattices, namely decorated simple cubic, decorated triangular, and decorated square (Lieb) lattice geometries. Benefiting from the recent results obtained for the thermodynamic phase transitions of the aforementioned lattice topologies [Azhari, M. and Yu, U., J. Stat. Mech. (2022) 033204], we explore the variation of the dynamic order parameter, dynamic scaling variance, and dynamic magnetic susceptibility as functions of the amplitude, bias, and period of the oscillating field sequence. According to the simulations, a second-order dynamic phase transition takes place at a critical field period for the systems with zero bias. Particular emphasis has also been devoted to metamagnetic anomalies emerging in the dynamic paramagnetic phase. In this regard, the generic two-peak symmetric behavior of the dynamic response functions has been found in the slow critical dynamics (i.e., dynamic paramagnetic) regime. Our results show that the characteristics of the dynamic phase transitions observed in the kinetic Ising model on regular lattices can be extended to such non-regular lattices with a larger spin value. Full article
(This article belongs to the Special Issue Ising Model—100 Years Old and Still Attractive)
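The dynamic order parameter computation follows the standard kinetic Monte Carlo recipe. The sketch below (ours) uses a plain square lattice rather than the paper's decorated lattices, simply to show how Q is accumulated over cycles of the oscillating field; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Kinetic spin-1 Blume-Capel model, H = -sum s_i s_j + D sum s_i^2 - h(t) sum s_i
L, D, T = 12, 0.0, 1.0           # lattice size, crystal field, temperature
h0, cycles, P = 2.0, 30, 100     # field amplitude, cycles, sweeps per cycle

s = rng.choice([-1, 0, 1], size=(L, L))

def sweep(s, h):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.choice([-1, 0, 1])
        nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = (-(new - s[i, j]) * nn + D * (new**2 - s[i, j]**2)
              - h * (new - s[i, j]))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = new

Q_cycles = []
for c in range(cycles):
    m_sum = 0.0
    for step in range(P):
        h = h0 * np.cos(2 * np.pi * step / P)   # oscillating field, zero bias
        sweep(s, h)
        m_sum += s.mean()
    if c >= 10:                                 # discard transient cycles
        Q_cycles.append(m_sum / P)              # cycle-averaged magnetization

print(f"dynamic order parameter |Q| = {abs(np.mean(Q_cycles)):.3f}")
```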
15 pages, 429 KiB  
Article
A Note on the Relativistic Transformation Properties of Quantum Stochastic Calculus
by John E. Gough
Entropy 2025, 27(5), 529; https://doi.org/10.3390/e27050529 - 15 May 2025
Abstract
We present a simple argument to derive the transformation of the quantum stochastic calculus formalism between inertial observers and derive the quantum open system dynamics for a system moving in a vacuum (or, more generally, a coherent) quantum field under the usual Markov approximation. We argue, however, that, for uniformly accelerated open systems, the formalism must break down as we move from a Fock representation over the algebra of field observables over all of Minkowski space to the restriction to the algebra of observables over a Rindler wedge. This leads to quantum noise having a unitarily inequivalent non-Fock representation: in particular, the latter is a thermal representation at the Unruh temperature. The unitary inequivalence is ultimately a consequence of the underlying flat noise spectrum approximation for the fundamental quantum stochastic processes. We derive the quantum stochastic limit for a uniformly accelerated (two-level) detector and establish an open system description of the relaxation to thermal equilibrium at the Unruh temperature. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness V)
28 pages, 7273 KiB  
Article
Comparative Study on Flux Solution Methods of Discrete Unified Gas Kinetic Scheme
by Wenqiang Guo
Entropy 2025, 27(5), 528; https://doi.org/10.3390/e27050528 - 15 May 2025
Abstract
In this work, the Simpson method is proposed to calculate the interface flux of the discrete unified gas kinetic scheme (DUGKS) from the distribution function at the nodes and the midpoint of the interface; the resulting scheme is denoted Simpson–DUGKS. Moreover, the optimized DUGKS and Simpson–DUGKS considering the force term are derived. Then, the original DUGKS, optimized DUGKS, and Simpson–DUGKS are compared and analyzed in theory. Finally, numerical tests are performed under different grid numbers (N). In steady unidirectional flows (Couette flow and Poiseuille flow), the three methods are stable under different Courant–Friedrichs–Lewy (CFL) numbers, and the calculated L2 errors are the same. In the Taylor–Green vortex flow, the L2 error of the optimized DUGKS is the smallest with respect to the analytical solution of velocity, but the largest with respect to the analytical solution of density. In the lid-driven cavity flow, the results of the optimized DUGKS deviate more from the reference results in terms of accuracy, especially in the case of a small grid number. In terms of computational efficiency, it should be noted that the computational time of the optimized DUGKS increases by about 40% compared with the original DUGKS when CFL = 0.1 and N = 16, and the calculation time of Simpson–DUGKS is reduced by about 59% compared with the original DUGKS when CFL = 0.95 and N = 16. Full article
(This article belongs to the Special Issue Mesoscopic Fluid Mechanics)
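The Simpson evaluation itself is elementary: the face flux integral is approximated from endpoint and midpoint values. A generic convergence check (ours; f is an arbitrary smooth stand-in, not a DUGKS distribution function):

```python
import numpy as np

# Simpson's rule over a cell face: the integral of f on [a, b] is approximated
# from endpoint and midpoint values as (b - a) / 6 * (f(a) + 4 f(m) + f(b)).
def simpson_flux(fa, fm, fb, h):
    return h / 6.0 * (fa + 4.0 * fm + fb)

# Convergence check against an exact integral: f(x) = exp(x) on [0, h].
for h in (0.5, 0.25, 0.125):
    a, b = 0.0, h
    approx = simpson_flux(np.exp(a), np.exp((a + b) / 2), np.exp(b), h)
    exact = np.exp(b) - np.exp(a)
    print(f"h = {h:5.3f}: error = {abs(approx - exact):.2e}")
# Errors shrink by ~2^5 per halving, matching Simpson's O(h^5) local accuracy.
```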
23 pages, 1158 KiB  
Article
Quantum Exact Response Theory Based on the Dissipation Function
by Enrico Greppi and Lamberto Rondoni
Entropy 2025, 27(5), 527; https://doi.org/10.3390/e27050527 - 15 May 2025
Abstract
The exact response theory based on the Dissipation Function applies to general dynamical systems and has yielded excellent results in various applications. In this article, we propose a method to apply it to quantum mechanics. In many quantum systems, it has not yet been possible to overcome the perturbative approach, and the most developed theory is the linear one. Extensions of the exact response theory developed in the field of nonequilibrium molecular dynamics could prove useful in quantum mechanics, as perturbations of small systems or far-from-equilibrium states cannot always be treated as small perturbations. Here, we introduce a quantum analogue of the classical Dissipation Function. We then derive a quantum expression for the exact calculation of time-dependent expectation values of observables, in a form analogous to that of the classical theory. We restrict our analysis to finite-dimensional Hilbert spaces, for the sake of simplicity, and we apply our method to specific examples, like qubit systems, for which exact results can be obtained by standard techniques. This way, we prove the consistency of our approach with the existing methods, where they apply. Although not required for open systems, we propose a self-adjoint version of our Dissipation Operator, obtaining a second equivalent expression of response, where the contribution of an anti-self-adjoint operator appears. We conclude by using the new formalism to solve the Lindblad equations, obtaining exact results for a specific case of qubit decoherence, and suggesting possible future developments of this work. Full article
(This article belongs to the Special Issue Quantum Nonstationary Systems—Second Edition)
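For comparison with such exact results, a standard reference is direct numerical integration of the Lindblad equation. This sketch (ours, not the paper's dissipation-function formalism) integrates qubit dephasing with RK4 and tracks an expectation value:

```python
import numpy as np

# Qubit dephasing via the Lindblad equation,
#   drho/dt = -i[H, rho] + gamma (L rho L^+ - {L^+ L, rho}/2),  with L = sigma_z.
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
H = 0.5 * sz                     # qubit splitting (illustrative units)
gamma = 0.1
L = sz

def drho(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

rho = 0.5 * np.array([[1, 1], [1, 1]], complex)   # |+><+|, fully coherent start
dt, steps = 0.01, 1000
for n in range(steps):
    k1 = drho(rho)
    k2 = drho(rho + 0.5 * dt * k1)
    k3 = drho(rho + 0.5 * dt * k2)
    k4 = drho(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    if n % 250 == 0:
        print(f"t = {n*dt:5.2f}  <sigma_x> = {np.trace(rho @ sx).real:+.4f}")
# The coherence <sigma_x> decays at rate 2*gamma while populations stay fixed.
```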
29 pages, 18881 KiB  
Article
A Novel Entropy-Based Approach for Thermal Image Segmentation Using Multilevel Thresholding
by Thaweesak Trongtirakul, Karen Panetta, Artyom M. Grigoryan and Sos S. Agaian
Entropy 2025, 27(5), 526; https://doi.org/10.3390/e27050526 - 14 May 2025
Abstract
Image segmentation is a fundamental challenge in computer vision, transforming complex image representations into meaningful, analyzable components. While entropy-based multilevel thresholding techniques, including Otsu, Shannon, fuzzy, Tsallis, Rényi, and Kapur approaches, have shown potential in image segmentation, they encounter significant limitations when processing thermal images, such as poor spatial resolution, low contrast, lack of color and texture information, and susceptibility to noise and background clutter. This paper introduces a novel adaptive unsupervised entropy algorithm (A-Entropy) to enhance multilevel thresholding for thermal image segmentation. Our key contributions include (i) an image-dependent thermal enhancement technique specifically designed for thermal images to improve visibility and contrast in regions of interest, (ii) a so-called A-Entropy concept for unsupervised thermal image thresholding, and (iii) a comprehensive evaluation using the Benchmarking IR Dataset for Surveillance with Aerial Intelligence (BIRDSAI). Experimental results demonstrate the superiority of our proposal compared to other state-of-the-art methods on the BIRDSAI dataset, which comprises both real and synthetic thermal images with substantial variations in scale, contrast, background clutter, and noise. Comparative analysis indicates improved segmentation accuracy and robustness compared to traditional entropy-based methods. The framework’s versatility suggests promising applications in brain tumor detection, optical character recognition, thermal energy leakage detection, and face recognition. Full article
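For orientation, the classical Kapur criterion that this line of work builds on picks the threshold maximizing the summed Shannon entropies of foreground and background; the A-Entropy construction itself is in the paper. A single-threshold sketch on a synthetic "thermal" image (ours):

```python
import numpy as np

rng = np.random.default_rng(6)

def kapur_threshold(hist):
    """Single-threshold Kapur method: maximize the sum of the Shannon
    entropies of the background and foreground gray-level distributions."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        q0, q1 = p[:t] / w0, p[t:] / w1
        h = (-(q0[q0 > 0] * np.log(q0[q0 > 0])).sum()
             - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum())
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Synthetic "thermal image": dim background plus a hot object, heavy noise.
img = rng.normal(60, 12, (128, 128))
img[40:80, 50:90] += 90
img = np.clip(img, 0, 255).astype(np.uint8)

hist = np.bincount(img.ravel(), minlength=256).astype(float)
t = kapur_threshold(hist)
mask = img >= t
print(f"Kapur threshold: {t}, segmented fraction: {mask.mean():.3f}")
```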
15 pages, 1914 KiB  
Article
Nonequilibrium Steady States in Active Systems: A Helmholtz–Hodge Perspective
by Horst-Holger Boltz and Thomas Ihle
Entropy 2025, 27(5), 525; https://doi.org/10.3390/e27050525 - 14 May 2025
Abstract
We revisit the question of the existence of a potential function, the Cole–Hopf transform of the stationary measure, for nonequilibrium steady states, in particular those found in active matter systems. This has been the subject of ongoing research for more than fifty years, but continues to be relevant. In particular, we want to make a connection to some recent work on the theory of Helmholtz–Hodge decompositions and address the recently suggested notion of typical trajectories in such systems. Full article
(This article belongs to the Collection Foundations of Statistical Mechanics)
39 pages, 4030 KiB  
Article
Reference Point and Grid Method-Based Evolutionary Algorithm with Entropy for Many-Objective Optimization Problems
by Qi Leng, Bo Shan and Chong Zhou
Entropy 2025, 27(5), 524; https://doi.org/10.3390/e27050524 - 14 May 2025
Abstract
In everyday scenarios, there are many challenges involving multi-objective optimization. As the count of objective functions rises to four or beyond, the problem’s complexity intensifies considerably, often making it challenging for traditional algorithms to arrive at satisfactory solutions. The reference-point-based non-dominated sorting genetic algorithm (NSGA-III) and the grid-based evolutionary algorithm (GrEA) are two prevalent algorithms for many-objective optimization. These two algorithms preserve population diversity by employing reference point and grid mechanisms, respectively. However, they still have limitations when addressing many-objective optimization problems. Due to the uniform distribution of reference points, reference point-based methods do not perform well on problems with an irregular Pareto front, while grid-based methods do not achieve good results on problems with a regular Pareto front because of the uneven partition of grids. To address the limitations of both approaches on regular and irregular problems alike, a reference point- and grid-based evolutionary algorithm with entropy, denoted RGEA, is proposed for many-objective optimization. Entropy is introduced to measure the shape of the Pareto front of a many-objective optimization problem. In RGEA, a parameter α is introduced to determine the interval for calculating the entropy value. By comparing the current entropy value with the maximum entropy value, either the reference point-based method or the grid-based method is selected. To verify the performance of the proposed algorithm, a comprehensive experiment was designed on several popular test suites with 3 to 10 objectives, comparing RGEA against six algorithms without adaptive technology and six algorithms with adaptive technology. The experimental results show that RGEA achieves good performance. Full article
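The switching idea can be mimicked in a few lines (our sketch; RGEA's actual entropy definition, interval parameter α, and cutoff differ): track the Shannon entropy of the population's occupancy of objective-space grid cells and fall back to the grid mechanism when it drops well below the maximum seen so far.

```python
import numpy as np

rng = np.random.default_rng(7)

def occupancy_entropy(F, divisions=10):
    """Shannon entropy of the population's occupancy of objective-space
    grid cells; a uniform spread over many cells gives high entropy."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    cells = np.clip(((F - lo) / (hi - lo + 1e-12) * divisions).astype(int),
                    0, divisions - 1)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# Toy run: populations drift from a uniform front to a clustered one.
H_max = 0.0
for gen in range(6):
    t = rng.random(200) ** (1 + gen)         # increasing clustering near t = 0
    F = np.c_[t, 1 - t]                      # bi-objective trade-off front
    H = occupancy_entropy(F)
    H_max = max(H_max, H)
    mech = "reference points" if H >= 0.9 * H_max else "grid"
    print(f"gen {gen}: H = {H:.2f} (max {H_max:.2f}) -> {mech}")
# The 0.9 cutoff and the clustering schedule are illustrative, not RGEA's.
```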
22 pages, 1850 KiB  
Article
Tail Risk Spillover Between Global Stock Markets Based on Effective Rényi Transfer Entropy and Wavelet Analysis
by Jingjing Jia
Entropy 2025, 27(5), 523; https://doi.org/10.3390/e27050523 - 14 May 2025
Abstract
To examine the spillover of tail-risk information across global stock markets, we select nine major stock markets for the period spanning from June 2014 to May 2024 as the sample data. First, we employ effective Rényi transfer entropy to measure the tail-risk information spillover. Second, we construct a Diebold–Yilmaz connectedness table to explore the overall characteristics of tail-risk information spillover across the global stock markets. Third, we integrate wavelet analysis with effective Rényi transfer entropy to assess the multi-scale characteristics of the information spillover. Our findings lead to several key conclusions: (1) US and European stock markets are the primary sources of tail-risk information spillover, while Asian stock markets predominantly act as net information receivers; (2) the intensity of tail-risk information spillover is most pronounced between markets at the medium-high trading frequency, and as trading frequency decreases, information spillover becomes more complex; (3) across all trading frequencies, the US stock market emerges as the most influential, while the Japanese stock market is the most vulnerable. China’s stock market, in contrast, demonstrates relative independence. Full article
(This article belongs to the Special Issue Complexity in Financial Networks)
18 pages, 286 KiB  
Article
The Physics and Metaphysics of Social Powers: Bridging Cognitive Processing and Social Dynamics, a New Perspective on Power Through Active Inference
by Mahault Albarracin, Sonia de Jager and David Hyland
Entropy 2025, 27(5), 522; https://doi.org/10.3390/e27050522 - 14 May 2025
Abstract
Power operates across multiple scales, from physical action to complex social dynamics, and is constrained by fundamental principles. In the social realm, power is shaped by interactions and cognitive capacity: socially-facilitated empowerment enhances an agent’s information-processing ability, either by delegating tasks or leveraging collective resources. This computational advantage expands access to policies and buffers against vulnerabilities, amplifying an individual’s or group’s influence. In active inference (AIF), social power emerges from the capacity to attract attention and process information effectively. Our semantic habitat—narratives, ideologies, representations, etc.—functions through attentional scripts that coordinate social behavior. Shared scripts shape power dynamics by structuring collective attention. Speculative scripts serve as cognitive tools for low-risk learning, allowing agents to explore counterfactuals and refine predictive models. However, dominant scripts can reinforce misinformation, echo chambers, and power imbalances by directing collective attention toward self-reinforcing policies. We argue that power through scripts stems not only from associations with influential agents but also from the ability to efficiently process information, creating a feedback loop of increasing influence. This reframes power beyond traditional material and cultural dimensions, towards an informational and computational paradigm—what we term possibilistic power, i.e., the capacity to explore and shape future trajectories. Understanding these mechanisms has critical implications for political organization and technological foresight. Full article
(This article belongs to the Special Issue Active Inference in Cognitive Neuroscience)
22 pages, 287 KiB  
Article
Two-Step Estimation Procedure for Parametric Copula-Based Regression Models for Semi-Competing Risks Data
by Qingmin Zhang, Bowen Duan, Małgorzata Wojtyś and Yinghui Wei
Entropy 2025, 27(5), 521; https://doi.org/10.3390/e27050521 - 13 May 2025
Abstract
Non-terminal and terminal events in semi-competing risks data are typically associated and may be influenced by covariates. We employed regression modeling for semi-competing risks data under a copula-based framework to evaluate the effects of covariates on the two events and the association between them. Due to the complexity of the copula structure, we propose a new method that integrates a novel two-step algorithm with the Bound Optimization by Quadratic Approximation (BOBYQA) method. This approach effectively mitigates the influence of initial values and demonstrates greater robustness. Simulations validate the performance of the proposed method. We further applied it to real data from the Amsterdam Cohort Study (ACS), where some improvements could be found. Full article
23 pages, 3783 KiB  
Article
Design of Covert Communication Waveform Based on Phase Randomization
by Wenjie Zhou, Zhenyong Wang, Jun Shi and Qing Guo
Entropy 2025, 27(5), 520; https://doi.org/10.3390/e27050520 - 13 May 2025
Abstract
Covert wireless communication is designed to securely transmit hidden information between two devices. Its primary objective is to conceal the existence of transmitted data, rendering communication signals difficult for unauthorized parties to detect, intercept, or decipher during transmission. In this paper, we propose a Noise-like Multi-Carrier Random Phase Communication System (NRPCS) to enhance covert wireless communication by significantly complicating the detection and interception of transmitted signals. The proposed system utilizes bipolar modulation and Cyclic Code Shift Keying (CCSK) modulation, complemented by a random sequence generation mechanism, to increase the randomness and complexity of the transmitted signals. A mathematical model of the NRPCS waveform is formulated, and detailed analyses of the system’s time-domain basis functions, correlation properties, and power spectral characteristics are conducted to substantiate its noise-like behavior. Simulation results indicate that, compared to traditional fixed-frequency transmission methods, NRPCS substantially improves both the Low Probability of Detection (LPD) and the Low Probability of Interception (LPI). Further research results demonstrate that unauthorized eavesdroppers are unable to effectively demodulate signals without knowledge of the employed modulation scheme, thus significantly enhancing the overall security of communication. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives)
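CCSK itself is compact to demonstrate (our sketch, not the NRPCS waveform): each symbol selects a cyclic shift of a shared base sequence, a random per-symbol carrier phase stands in for phase randomization, and the receiver recovers the shift by circular correlation.

```python
import numpy as np

rng = np.random.default_rng(8)

# Cyclic Code Shift Keying: each symbol of log2(N) bits selects a cyclic
# shift of a shared base sequence; the receiver recovers the shift by
# circular correlation. The random per-symbol phase leaves the correlation
# magnitude intact, so demodulation still works.
N = 64                                    # chips per symbol -> 6 bits/symbol
base = rng.choice([-1.0, 1.0], size=N)    # shared pseudo-random base sequence

def ccsk_mod(symbols):
    out = []
    for s in symbols:
        chip = np.roll(base, s).astype(complex)
        chip *= np.exp(1j * rng.uniform(0, 2 * np.pi))   # random carrier phase
        out.append(chip)
    return np.concatenate(out)

def ccsk_demod(rx):
    Fb = np.fft.fft(base)
    syms = []
    for blk in rx.reshape(-1, N):
        corr = np.fft.ifft(np.fft.fft(blk) * np.conj(Fb))  # circular correlation
        syms.append(int(np.argmax(np.abs(corr))))
    return syms

symbols = rng.integers(0, N, size=20)
tx = ccsk_mod(symbols)
rx = tx + (rng.normal(size=tx.shape) + 1j * rng.normal(size=tx.shape)) * 0.8
print("symbol errors:", sum(int(a != b) for a, b in zip(ccsk_demod(rx), symbols)))
```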
2 pages, 342 KiB  
Correction
Correction: Klemann et al. Quantifying the Resilience of a Healthcare System: Entropy and Network Science Perspectives. Entropy 2024, 26, 21
by Désirée Klemann, Windi Winasti, Fleur Tournois, Helen Mertens and Frits van Merode
Entropy 2025, 27(5), 519; https://doi.org/10.3390/e27050519 - 13 May 2025
Abstract
In the original publication [...] Full article
(This article belongs to the Section Multidisciplinary Applications)
46 pages, 510 KiB  
Article
Towards Nonlinearity: The p-Regularity Theory
by Ewa Bednarczuk, Olga Brezhneva, Krzysztof Leśniewski, Agnieszka Prusińska and Alexey A. Tret’yakov
Entropy 2025, 27(5), 518; https://doi.org/10.3390/e27050518 - 12 May 2025
Abstract
We present recent advances in the analysis of nonlinear problems involving singular (degenerate) operators. The results are obtained within the framework of p-regularity theory, which has been successfully developed over the past four decades. We illustrate the theory with applications to degenerate problems in various areas of mathematics, including optimization and differential equations. In particular, we address the problem of describing the tangent cone to the solution set of nonlinear equations in singular cases. The structure of p-factor operators is used to propose optimality conditions and to construct novel numerical methods for solving degenerate nonlinear equations and optimization problems. The numerical methods presented in this paper represent the first approaches targeting solutions to degenerate problems such as the Van der Pol differential equation, boundary-value problems with small parameters, and partial differential equations where Poincaré’s method of small parameters fails. Additionally, these methods may be extended to nonlinear degenerate dynamical systems and other related problems. Full article
(This article belongs to the Section Complexity)
18 pages, 356 KiB  
Article
Entropy of the Quantum–Classical Interface: A Potential Metric for Security
by Sarah Chehade, Joel A. Dawson, Stacy Prowell and Ali Passian
Entropy 2025, 27(5), 517; https://doi.org/10.3390/e27050517 - 12 May 2025
Abstract
Hybrid quantum–classical systems are emerging as key platforms in quantum computing, sensing, and communication technologies, but the quantum–classical interface (QCI)—the boundary enabling these systems—introduces unique and largely unexplored security vulnerabilities. This position paper proposes using entropy-based metrics to monitor and enhance security, specifically at the QCI. We present a theoretical security outline that leverages well-established information-theoretic entropy measures, such as Shannon entropy, von Neumann entropy, and quantum relative entropy, to detect anomalous behaviors and potential breaches at the QCI. By linking entropy fluctuations to scenarios of practical relevance—including quantum key distribution, quantum sensing, and hybrid control systems—we promote the potential value and applicability of entropy-based security monitoring. While explicitly acknowledging practical limitations and theoretical assumptions, we argue that entropy-based metrics provide a complementary approach to existing security methods, inviting further empirical studies and theoretical refinements that can strengthen future quantum technologies. Full article
(This article belongs to the Section Quantum Information)
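A minimal version of such monitoring (our sketch; the signals and the alert threshold are illustrative, not a vetted security baseline): estimate a Shannon-entropy baseline for interface telemetry and flag windows that deviate sharply.

```python
import numpy as np

rng = np.random.default_rng(9)

def shannon_entropy_bits(window):
    """Shannon entropy (bits/symbol) of a window of byte observations."""
    counts = np.bincount(window, minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy QCI telemetry: readout bytes should look near-uniform; an "attack"
# (here, a stuck or biased channel) collapses the entropy.
baseline = [shannon_entropy_bits(rng.integers(0, 256, 1024)) for _ in range(50)]
mu, sigma = np.mean(baseline), np.std(baseline)

normal = rng.integers(0, 256, 1024)
biased = rng.choice([0, 7, 255], size=1024, p=[0.6, 0.3, 0.1])  # tampered

for name, w in [("normal", normal), ("tampered", biased)]:
    h = shannon_entropy_bits(w)
    flag = "ALERT" if abs(h - mu) > 4 * sigma else "ok"
    print(f"{name:9s}: H = {h:5.2f} bits (baseline {mu:.2f} +/- {sigma:.2f}) {flag}")
```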
21 pages, 7300 KiB  
Article
Public Opinion Propagation Prediction Model Based on Dynamic Time-Weighted Rényi Entropy and Graph Neural Network
by Qiujuan Tong, Xiaolong Xu, Jianke Zhang and Huawei Xu
Entropy 2025, 27(5), 516; https://doi.org/10.3390/e27050516 - 12 May 2025
Abstract
Current methods for public opinion propagation prediction struggle to jointly model temporal dynamics, structural complexity, and dynamic node influence in evolving social networks. To overcome these limitations, this paper proposes a public opinion dissemination prediction model based on the integration of dynamic time-weighted Rényi entropy (DTWRE) and graph neural networks. By incorporating a time-weighted mechanism, the model devises two tiers of Rényi entropy metrics—local node entropy and global time-step entropy—to effectively quantify the uncertainty and complexity of network topology at different time points. Simultaneously, by integrating DTWRE features with high-dimensional node embeddings generated by Node2Vec and utilizing GraphSAGE to construct a spatiotemporal fusion modeling framework, the model achieves precise prediction of link formation and key node identification in public opinion dissemination. The model was validated on multiple public opinion datasets, and the results indicate that, compared to baseline methods, it exhibits significant advantages in several evaluation metrics such as AUC, thereby fully demonstrating the effectiveness of the dynamic time-weighted mechanism in capturing the temporal evolution of public opinion dissemination and the dynamic changes in network structure. Full article
(This article belongs to the Special Issue Information-Theoretic Approaches for Machine Learning and AI)
18 pages, 417 KiB  
Article
Comparing Singlet Testing Schemes
by George Cowperthwaite and Adrian Kent
Entropy 2025, 27(5), 515; https://doi.org/10.3390/e27050515 - 11 May 2025
Abstract
We compare schemes for testing whether two parties share a two-qubit singlet state. The first, standard, scheme tests Braunstein–Caves (or CHSH) inequalities, comparing the correlations of local measurements drawn from a fixed finite set against the quantum predictions for a singlet. The second, alternative, scheme tests the correlations of local measurements, drawn randomly from the set of those that are θ-separated on the Bloch sphere, against the quantum predictions. We formulate each scheme as a hypothesis test and then evaluate the test power in a number of adversarial scenarios involving an eavesdropper altering or replacing the singlet qubits. We find the ‘random measurement’ test to be superior in most natural scenarios. Full article
(This article belongs to the Special Issue Editorial Board Members' Collection Series on Quantum Entanglement)
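The quantum prediction being tested is the singlet correlation E(a, b) = −a · b for spin measurements along unit vectors a and b. The sketch below (ours) draws fully random measurement pairs, a simplification of the paper's θ-separated scheme, and checks simulated outcome statistics against that prediction:

```python
import numpy as np

rng = np.random.default_rng(10)

def singlet_outcomes(a, b, shots):
    """Sample outcome pairs with the singlet statistics E(a, b) = -a . b."""
    p_same = (1 - a @ b) / 2               # P(both outcomes equal) = (1 + E)/2
    same = rng.random(shots) < p_same
    x = rng.choice([-1, 1], size=shots)
    return x, np.where(same, x, -x)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

shots, n_pairs = 200, 100
devs = []
for a, b in zip(random_unit_vectors(n_pairs), random_unit_vectors(n_pairs)):
    x, y = singlet_outcomes(a, b, shots)
    devs.append(np.mean(x * y) + a @ b)    # observed E minus predicted -a.b
print(f"mean |deviation| from E = -a.b: {np.mean(np.abs(devs)):.3f}")
print(f"statistical scale ~ 1/sqrt(shots) = {1 / np.sqrt(shots):.3f}")
```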
25 pages, 461 KiB  
Article
A Deflationary Account of Information in Terms of Probability
by Riccardo Manzotti
Entropy 2025, 27(5), 514; https://doi.org/10.3390/e27050514 - 11 May 2025
Abstract
In this paper, I argue that information is nothing more than an abstract object; therefore, it does not exist fundamentally. It is neither a concrete physical entity nor a form of “stuff” that “flows” through communication channels or that is “carried” by vehicles [...] Read more.
In this paper, I argue that information is nothing more than an abstract object; therefore, it does not exist fundamentally. It is neither a concrete physical entity nor a form of “stuff” that “flows” through communication channels or that is “carried” by vehicles or that is stored in memories, messages, books, or brains—these are misleading metaphors. To support this thesis, I adopt three different approaches. First, I present a series of concrete cases that challenge our commonsensical belief that information is a real entity. Second, I apply Eleaticism (the principle that entities lacking causal efficacy do not exist). Finally, I provide a mathematical derivation showing that information reduces to probability and is therefore unnecessary both ontologically and epistemically. In conclusion, I maintain that information is a causally redundant epistemic construct that does not exist fundamentally, regardless of its remarkable epistemic convenience. What, then, is information? It is merely a very efficient way of describing reality—a manner of speaking, nothing more. Full article
(This article belongs to the Special Issue Integrated Information Theory and Consciousness II)
20 pages, 24982 KiB  
Article
A Novel One-Dimensional Chaotic System for Image Encryption in Network Transmission Through Base64 Encoding
by Linqing Huang, Qingye Huang, Han Chen, Shuting Cai, Xiaoming Xiong and Jian Yang
Entropy 2025, 27(5), 513; https://doi.org/10.3390/e27050513 - 10 May 2025
Abstract
Continuous advancements in digital image transmission technology within network environments have heightened the necessity for secure, convenient, and well-suited image encryption systems. Base64 encoding converts raw data into printable ASCII characters, enabling stable transmission across various communication protocols. In this paper, base64 encoding is introduced into image encryption to pursue high application value in network transmission. First, a novel one-dimensional discrete chaotic system (1D-LSCM) with complex chaotic behavior is introduced and extensively tested. Second, a new multi-image encryption algorithm based on the proposed 1D-LSCM and base64 encoding is presented. Technically, three original grayscale images are combined into a color image and encoded as base64 characters. To pursue high plaintext sensitivity, the original image is input to the SHA-256 hash function and its output is used to influence the generated keystream employed in the permutation and diffusion process. After scrambling and diffusion operations, the base64 ciphertext is obtained. Finally, results derived from comprehensive tests show that our proposed algorithm has remarkable security and encryption efficiency. Full article
(This article belongs to the Section Multidisciplinary Applications)
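The overall pipeline (chaotic keystream, SHA-256-driven plaintext sensitivity, base64 on the wire) can be sketched in a few lines (ours: the map below is a generic logistic-sine stand-in, not the paper's 1D-LSCM, and a plain XOR replaces the full permutation-diffusion stage):

```python
import base64, hashlib
import numpy as np

def keystream(seed_bytes, n):
    """Byte keystream from a 1D logistic-sine style chaotic map (a generic
    stand-in for the paper's 1D-LSCM, whose exact map we do not reproduce)."""
    digest = hashlib.sha256(seed_bytes).digest()
    x = int.from_bytes(digest[:8], "big") / 2**64          # initial state in (0, 1)
    r = 3.99
    out = bytearray()
    for _ in range(200):                                   # discard transient
        x = (r * x * (1 - x) + np.sin(np.pi * x)) % 1.0
    for _ in range(n):
        x = (r * x * (1 - x) + np.sin(np.pi * x)) % 1.0
        out.append(int(x * 256) % 256)
    return bytes(out)

key = b"shared secret"
plain = b"three grayscale images would be packed here"

# SHA-256 of the plaintext perturbs the keystream (plaintext sensitivity),
# mirroring the abstract's design; the hash must travel with the ciphertext.
seed = key + hashlib.sha256(plain).digest()
cipher = bytes(p ^ k for p, k in zip(plain, keystream(seed, len(plain))))
wire = base64.b64encode(hashlib.sha256(plain).digest() + cipher)
print("base64 ciphertext:", wire.decode()[:60], "...")

# Receiver: split off the hash, regenerate the keystream, XOR back.
raw = base64.b64decode(wire)
h, c = raw[:32], raw[32:]
recovered = bytes(p ^ k for p, k in zip(c, keystream(key + h, len(c))))
assert recovered == plain and hashlib.sha256(recovered).digest() == h
print("recovered:", recovered.decode())
```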
22 pages, 6099 KiB  
Article
Fault Diagnosis of Planetary Gearbox Based on Hierarchical Refined Composite Multiscale Fuzzy Entropy and Optimized LSSVM
by Xin Xia and Xiaolu Wang
Entropy 2025, 27(5), 512; https://doi.org/10.3390/e27050512 - 10 May 2025
Abstract
Efficient extraction and classification of fault features remain critical challenges in planetary gearbox fault diagnosis. A fault diagnosis framework is proposed that integrates hierarchical refined composite multiscale fuzzy entropy (HRCMFE) for feature extraction and a gray wolf optimization (GWO)-optimized least squares support vector machine (LSSVM) for classification. Firstly, the HRCMFE is developed for feature extraction, which combines the segmentation advantage of hierarchical entropy (HE) and the computational stability advantage of refined composite multiscale fuzzy entropy (RCMFE). Secondly, the hyperparameters of LSSVM are optimized by GWO using a proposed fitness function. Finally, fault diagnosis of the planetary gearbox is achieved by the optimized LSSVM using the HRCMFE-extracted features. Simulation and experimental study results indicate that the proposed method demonstrates superior effectiveness in both feature discriminability and diagnosis accuracy. Full article
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis: From Theory to Applications)
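Fuzzy entropy, the building block that HRCMFE refines, is short to implement. This sketch (ours; HRCMFE's hierarchical and refined-composite layers are omitted, and the signals are synthetic) shows why it separates regular from irregular vibration-like signals:

```python
import numpy as np

rng = np.random.default_rng(11)

def fuzzy_entropy(x, m=2, r_frac=0.2):
    """FuzzyEn(m, r): negative log ratio of average exponential pattern
    similarities at template lengths m+1 and m (plug-in estimator)."""
    r = r_frac * x.std()

    def phi(mm):
        # all length-mm templates, baseline-removed as in fuzzy entropy
        X = np.lib.stride_tricks.sliding_window_view(x, mm)
        X = X - X.mean(axis=1, keepdims=True)
        # Chebyshev distances between all template pairs
        d = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** 2) / r)
        n = len(X)
        return (sim.sum() - n) / (n * (n - 1))   # exclude self-matches

    return -np.log(phi(m + 1) / phi(m))

# A periodic signal is more regular (lower entropy) than broadband noise;
# an impulsive "faulty" signal sits in between (all signals illustrative).
t = np.linspace(0, 10, 800)
signals = {
    "healthy (tone)": np.sin(2 * np.pi * 3 * t),
    "faulty (tone + impulses)": np.sin(2 * np.pi * 3 * t)
        + (rng.random(t.size) < 0.02) * 3.0,
    "noise": rng.normal(size=t.size),
}
for name, s in signals.items():
    print(f"{name:26s} FuzzyEn = {fuzzy_entropy(s):.3f}")
```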