Entropy, Volume 27, Issue 2 (February 2025) – 87 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
16 pages, 392 KiB  
Article
Partition Function Zeros of Paths and Normalization Zeros of ASEPS
by Zdzislaw Burda and Desmond A. Johnston
Entropy 2025, 27(2), 183; https://doi.org/10.3390/e27020183 - 10 Feb 2025
Viewed by 163
Abstract
We exploit the equivalence between the partition function of an adsorbing Dyck walk model and the Asymmetric Simple Exclusion Process (ASEP) normalization to obtain the thermodynamic limit of the locus of the ASEP normalization zeros from a conformal map. We discuss the equivalence between this approach and using an electrostatic analogy to determine the locus, both in the case of the ASEP and the random allocation model. Full article
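For readers who want to experiment: for any finite model of this kind, the partition function is a polynomial in the fugacity, Z(z) = Σ_N c_N z^N, so its complex zeros can be located numerically. A minimal sketch follows; the Catalan-number coefficients are illustrative stand-ins for a Dyck-walk weight sequence, not the paper's model.

```python
import numpy as np

# Illustrative coefficients c_N; Z(z) = sum_N c_N z^N is a polynomial in the
# fugacity z. Catalan numbers stand in for Dyck-walk counts here.
c = np.array([1.0, 1.0, 2.0, 5.0, 14.0, 42.0, 132.0])

# numpy.roots expects coefficients ordered from the highest power down.
zeros = np.roots(c[::-1])

# In the thermodynamic limit the zeros accumulate on a curve in the complex
# plane; for a finite system we simply print the finite-size loci.
for z0 in sorted(zeros, key=abs):
    print(f"zero at {z0:.4f}  (|z| = {abs(z0):.4f})")
```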
7 pages, 377 KiB  
Article
NPA Hierarchy and Extremal Criterion in the Simplest Bell Scenario
by Satoshi Ishizaka
Entropy 2025, 27(2), 182; https://doi.org/10.3390/e27020182 - 9 Feb 2025
Viewed by 192
Abstract
It is difficult to establish an analytical criterion that identifies the boundaries of quantum correlations, even for the simplest Bell scenario. Here, we briefly reviewed a plausible analytical criterion and found a way to confirm the extremal conditions from another direction. For that purpose, we analyzed the Navascués–Pironio–Acín (NPA) hierarchy to study its algebraic structure and found that the problem could not be simplified using the 1+AB level alone. However, under the plausible criterion, the 1+AB and second levels for correlations were equal, and the extremal condition in the simplest Bell scenario could be replaced by that of the 1+AB level. Thus, the correctness of the plausible criterion was verified, which also explains its simplicity: what previously seemed merely plausible is now more certain. Full article
(This article belongs to the Section Quantum Information)
25 pages, 1981 KiB  
Article
Sequence-Aware Vision Transformer with Feature Fusion for Fault Diagnosis in Complex Industrial Processes
by Zhong Zhang, Ming Xu, Song Wang, Xin Guo, Jinfeng Gao and Aiguo Patrick Hu
Entropy 2025, 27(2), 181; https://doi.org/10.3390/e27020181 - 8 Feb 2025
Viewed by 229
Abstract
Industrial fault diagnosis faces unique challenges with high-dimensional data, long time-series, and complex couplings, which are characterized by significant information entropy and intricate information dependencies inherent in datasets. Traditional image processing methods are effective for local feature extraction but often miss global temporal patterns, crucial for accurate diagnosis. While deep learning models like Vision Transformer (ViT) capture broader temporal features, they struggle with varying fault causes and time dependencies inherent in industrial data, where adding encoder layers may even hinder performance. This paper proposes a novel global and local feature fusion sequence-aware ViT (GLF-ViT), modifying feature embedding to retain sampling point correlations and preserve more local information. By fusing global features from the classification token with local features from the encoder, the algorithm significantly enhances complex fault diagnosis. Experimental analyses on data segment length, network depth, feature fusion and attention head receptive field validate the approach, demonstrating that a shallower encoder network is better suited for high-dimensional time-series fault diagnosis in complex industrial processes compared to deeper networks. The proposed method outperforms state-of-the-art algorithms on the Tennessee Eastman (TE) dataset and demonstrates excellent performance when further validated on a power transmission fault dataset. Full article
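A schematic sketch of the global–local fusion step described above: the classification token supplies the global feature, pooled encoder patch tokens supply the local feature, and the two are concatenated before classification. All dimensions and names here are hypothetical, not the GLF-ViT architecture itself.

```python
import torch
import torch.nn as nn

class GlobalLocalFusionHead(nn.Module):
    """Fuse the global [CLS] token with pooled local patch tokens (sketch)."""

    def __init__(self, dim: int = 256, num_classes: int = 21):
        super().__init__()
        # 21 = e.g. 20 fault classes + normal operation (hypothetical).
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, 1 + num_patches, dim); token 0 is the [CLS] token.
        global_feat = tokens[:, 0]               # global summary from [CLS]
        local_feat = tokens[:, 1:].mean(dim=1)   # pooled local patch features
        fused = torch.cat([global_feat, local_feat], dim=-1)
        return self.classifier(fused)

# Example: output of a shallow encoder for a batch of 8 windowed sequences.
head = GlobalLocalFusionHead()
logits = head(torch.randn(8, 1 + 49, 256))       # -> shape (8, 21)
```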
25 pages, 2729 KiB  
Article
Cloud Model-Based Adaptive Time-Series Information Granulation Algorithm and Its Similarity Measurement
by Hailan Chen, Xuedong Gao, Qi Wu and Ruojin Huang
Entropy 2025, 27(2), 180; https://doi.org/10.3390/e27020180 - 8 Feb 2025
Viewed by 183
Abstract
To efficiently reduce the dimensionality of time series and enhance the efficiency of subsequent data-mining tasks, this study introduces cloud model theory to propose a novel information granulation method and its corresponding similarity measurement. First, we present an information granulation validity index of time series (IGV) based on the entropy and expectation of the cloud model. Taking IGV as the granulation target for time series, an adaptive information granulation algorithm for time series (CMAIG) is proposed, which can transform a time series into a granular time series consisting of several normal clouds without pre-specifying the number of information granules, achieving efficient dimensionality reduction. Then, a new similarity measurement method (CMAIG_ECM) is designed to calculate the similarity between two granular time series. Finally, the hierarchical clustering algorithm based on the proposed time series information granulation method and granular time series similarity measurement method (CMAIG_ECM_HC) is carried out on some UCR datasets and a real stock dataset, and experimental studies demonstrate that CMAIG_ECM_HC has superior performance in clustering time series with different shapes and trends. Full article
(This article belongs to the Section Multidisciplinary Applications)
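For orientation, a normal-cloud information granule in cloud model theory is described by three descriptors: expectation Ex, entropy En, and hyper-entropy He. Below is a minimal sketch of the standard backward cloud transformation that fits these descriptors to one window of a series; this is the textbook estimator, not necessarily the exact variant inside CMAIG.

```python
import numpy as np

def backward_cloud(x: np.ndarray):
    """Estimate (Ex, En, He) of a normal cloud from samples x (sketch)."""
    ex = x.mean()
    # First-order absolute central moment estimator for the entropy En.
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()
    # Hyper-entropy He measures the dispersion of En itself.
    he = np.sqrt(max(x.var(ddof=1) - en**2, 0.0))
    return ex, en, he

window = np.random.default_rng(0).normal(loc=1.2, scale=0.3, size=200)
print(backward_cloud(window))  # one granule summarizing the whole window
```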
24 pages, 1299 KiB  
Article
Occupation Times on the Legs of a Diffusion Spider
by Paavo Salminen and David Stenlund
Entropy 2025, 27(2), 179; https://doi.org/10.3390/e27020179 - 8 Feb 2025
Viewed by 112
Abstract
We study the joint moments of occupation times on the legs of a diffusion spider. Specifically, we give a recursive formula for the Laplace transform of the joint moments, which extends earlier results for a one-dimensional diffusion. For a Bessel spider, of which the Brownian spider is a special case, our approach yields an explicit formula for the joint moments of the occupation times. Full article
(This article belongs to the Special Issue The Random Walk Path of Pál Révész in Probability)
15 pages, 3622 KiB  
Article
Analysis of Aftershocks from California and Synthetic Series by Using Visibility Graph Algorithm
by Alejandro Muñoz-Diosdado, Ana María Aguilar-Molina, Eric Eduardo Solis-Montufar and José Alberto Zamora-Justo
Entropy 2025, 27(2), 178; https://doi.org/10.3390/e27020178 - 8 Feb 2025
Viewed by 276
Abstract
The use of the Visibility Graph Algorithm (VGA) has proven to be a valuable tool for analyzing both real and synthetic seismicity series. Specifically, VGA transforms time series into a network representation in which structural properties such as node connectivity, clustering, and community structure can be quantitatively measured, thereby revealing underlying correlations and dynamics that may remain hidden in traditional linear or spectral analyses. This transformation of time series into complex networks provides a new approach to analyzing seismic dynamics, allowing scientists to extract trends and behaviors that classical time-series analysis may not capture. On the other hand, many studies attempt to find viable trends in order to identify preparation mechanisms prior to a strong earthquake or to analyze the aftershocks. In this work, the seismic activity of Southern California was analyzed, focusing only on the significant earthquakes. For this purpose, seismic series preceding and following each earthquake were constructed using a windowing method with different overlaps, and the slope of the connectivity (k) versus magnitude (M) graph (k-M slope) and the average degree were computed from the mapped complex networks. The results revealed a significant decrease in these parameters after each earthquake, due to the contribution of the aftershocks of the main event. Interestingly, when the study was extended to synthetic seismicity series, the same behavior was observed for both the k-M slope and the average degree. This finding suggests that the spring-block model reproduces a relaxation mechanism following a large-magnitude event similar to that of real seismic aftershocks, although this conclusion contrasts with conclusions drawn by other researchers. These results highlight the utility of VGA in studying events that precede and follow major earthquakes. This technique may be used to extract useful trends in seismicity, which could eventually be employed for a deeper understanding and possible forecasting of seismic behavior. Full article
(This article belongs to the Special Issue Time Series Analysis in Earthquake Complex Networks)
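For concreteness, here is a compact sketch of the natural visibility criterion together with the two statistics used above, the average degree and the k-M slope. The O(n²) construction and the toy "magnitudes" are for illustration only.

```python
import numpy as np

def visibility_degrees(y: np.ndarray) -> np.ndarray:
    """Degrees of the natural visibility graph of series y (O(n^2) sketch)."""
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            # (a, y[a]) sees (b, y[b]) iff every intermediate point lies
            # strictly below the straight line joining them.
            line = y[b] + (y[a] - y[b]) * (b - np.arange(a + 1, b)) / (b - a)
            if np.all(y[a + 1:b] < line):
                deg[a] += 1
                deg[b] += 1
    return deg

mags = np.random.default_rng(1).gumbel(4.0, 0.5, size=500)  # toy "magnitudes"
k = visibility_degrees(mags)
slope, intercept = np.polyfit(mags, k, 1)   # k-M slope via least squares
print(f"average degree = {k.mean():.2f}, k-M slope = {slope:.2f}")
```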
14 pages, 591 KiB  
Article
Punctuation Patterns in Finnegans Wake by James Joyce Are Largely Translation-Invariant
by Krzysztof Bartnicki, Stanisław Drożdż, Jarosław Kwapień and Tomasz Stanisz
Entropy 2025, 27(2), 177; https://doi.org/10.3390/e27020177 - 7 Feb 2025
Viewed by 327
Abstract
The complexity characteristics of texts written in natural languages are significantly related to the rules of punctuation. In particular, the distances between punctuation marks, measured by the number of words, quite universally follow the family of Weibull distributions known from survival analysis. However, the values of the two parameters marking specific forms of these distributions distinguish specific languages. This is such a strong constraint that the punctuation distributions of texts translated from the original language into another adopt quantitative characteristics of the target language. All these changes take place within Weibull distributions such that the corresponding hazard functions are always increasing. Recent research shows that James Joyce’s famous novel Finnegans Wake is subject to such an extreme distribution from the Weibull family that the corresponding hazard function is clearly decreasing. At the same time, the distances between sentence-ending punctuation marks, which determine the variability of sentence length, have an almost perfect multifractal organization to an extent found nowhere else in the literature thus far. In the present contribution, based on several available translations (Dutch, French, German, Polish, and Russian) of Finnegans Wake, it is shown that the punctuation characteristics of this work remain largely translation-invariant, contrary to the common cases. These observations may constitute further evidence that Finnegans Wake is a translinguistic work in this respect as well, in line with Joyce’s original intention. Full article
(This article belongs to the Special Issue Complexity Characteristics of Natural Language)
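A minimal sketch of the kind of measurement described above: collect the word counts between consecutive punctuation marks and fit a two-parameter Weibull distribution, whose shape parameter k determines whether the hazard function increases (k > 1) or decreases (k < 1). The file path is hypothetical and the tokenization is deliberately crude.

```python
import re
import numpy as np
from scipy.stats import weibull_min

text = open("finnegans_wake.txt", encoding="utf-8").read()  # hypothetical path

# Word counts between consecutive punctuation marks.
chunks = re.split(r"[.,;:!?…]", text)
distances = np.array([len(c.split()) for c in chunks if c.split()])

# Fit a two-parameter Weibull (location fixed at 0).
shape, _, scale = weibull_min.fit(distances, floc=0)
print(f"Weibull shape k = {shape:.3f}, scale = {scale:.2f}")

# The hazard function is increasing for k > 1 and decreasing for k < 1;
# the claim above is that Finnegans Wake sits in the unusual k < 1 regime.
print("hazard is", "increasing" if shape > 1 else "decreasing")
```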
4 pages, 159 KiB  
Editorial
An Entropy Approach to Interdependent Human–Machine Teams
by William Lawless and Ira S. Moskowitz
Entropy 2025, 27(2), 176; https://doi.org/10.3390/e27020176 - 7 Feb 2025
Viewed by 276
Abstract
Overview of recent developments in the field [...] Full article
25 pages, 1726 KiB  
Article
Fault Diagnosis of Semi-Supervised Electromechanical Transmission Systems Under Imbalanced Unlabeled Sample Class Information Screening
by Chaoge Wang, Pengpeng Jia, Xinyu Tian, Xiaojing Tang, Xiong Hu and Hongkun Li
Entropy 2025, 27(2), 175; https://doi.org/10.3390/e27020175 - 6 Feb 2025
Viewed by 277
Abstract
In the health monitoring of electromechanical transmission systems, the collected state data typically consist of only a minimal amount of labeled data, with the vast majority remaining unlabeled. Consequently, deep learning-based diagnostic models face a scarcity of labeled data alongside an abundance of unlabeled data. Traditional semi-supervised deep learning methods based on pseudo-label self-training, while alleviating the scarcity of labeled data to some extent, neglect the reliability of pseudo-label information, the accuracy of feature extraction from unlabeled data, and the imbalance in sample selection. To address these issues, this paper proposes a novel semi-supervised fault diagnosis method under imbalanced unlabeled sample class information screening. First, an information screening mechanism for unlabeled data based on active learning is established. This mechanism discriminates based on the variability of intrinsic feature information in fault samples, accurately screening out unlabeled samples located near decision boundaries that are difficult to separate clearly. Label information is then assigned to the screened unlabeled data by combining their maximum membership degree in the classification space of the supervised model with interaction with the active learning expert system. Second, a cost-sensitive function driven by data imbalance is constructed to address the class imbalance in unlabeled sample screening, adaptively adjusting the weights of different class samples during model training to guide the training of the supervised model. Finally, through dynamic optimization of the supervised model and of the feature extraction capability for unlabeled samples, the recognition ability of the diagnostic model for unlabeled samples is significantly enhanced. Validation on two datasets, encompassing a total of 12 experimental scenarios, demonstrates that in scenarios with only a small amount of labeled data, the proposed method improves diagnostic accuracy by more than 10% compared to existing typical methods, fully validating its effectiveness and superiority in practical applications. Full article
20 pages, 7780 KiB  
Article
Statistical Characteristics of Strong Earthquake Sequence in Northeastern Tibetan Plateau
by Ying Wang, Rui Wang, Peng Han, Tao Zhao, Miao Miao, Lina Su, Zhaodi Jin and Jiancang Zhuang
Entropy 2025, 27(2), 174; https://doi.org/10.3390/e27020174 - 6 Feb 2025
Viewed by 269
Abstract
As the forefront of inland extension on the Indian plate, the northeastern Tibetan Plateau, marked by low strain rates and high stress levels, is one of the regions with the highest seismic risk. Analyzing seismicity through statistical methods holds significant scientific value for understanding tectonic conditions and assessing earthquake risk. However, seismic monitoring capacity in this region remains limited, and earthquake frequency is low, complicating efforts to improve earthquake catalogs through enhanced identification and localization techniques. Bi-scale empirical probability integral transformation (BEPIT), a statistical method, can address these data gaps by supplementing missing events shortly after moderate to large earthquakes, resulting in a more reliable statistical data set. In this study, we analyzed six earthquake sequences with magnitudes of MS ≥ 6.0 that occurred in northeastern Tibet since 2009, following the upgrade of the regional seismic network. Using BEPIT, we supplemented short-term missing aftershocks in these sequences, creating a more complete earthquake catalog. ETAS model parameters and b values for these sequences were then estimated using maximum likelihood methods to analyze parameter variability across sequences. The findings indicate that the b value is low, reflecting relatively high regional stress. The background seismicity rate is very low, with most mainshocks in these sequences being background events rather than foreshock-driven events. The p-parameter of the ETAS model is high, indicating that aftershocks decay relatively quickly, while the α-parameter is also elevated, suggesting that aftershocks are predominantly induced by the mainshock. These conditions suggest that earthquake prediction in this region is challenging through seismicity analysis alone, and alternative approaches integrating non-seismic data, such as electromagnetic and fluid monitoring, may offer more viable solutions. This study provides valuable insights into earthquake forecasting in the northeastern Tibetan Plateau. Full article
(This article belongs to the Special Issue Time Series Analysis in Earthquake Complex Networks)
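For reference, the b value mentioned above is conventionally estimated with Aki's maximum-likelihood formula, b = log10(e) / (⟨M⟩ − (Mc − ΔM/2)), where Mc is the completeness magnitude and ΔM the binning width. A sketch on a toy Gutenberg–Richter-like catalog (not the study's data):

```python
import numpy as np

def aki_b_value(mags: np.ndarray, mc: float, dm: float = 0.1) -> float:
    """Aki (1965) maximum-likelihood b value above completeness magnitude mc.

    dm is the magnitude binning width; the dm/2 term corrects for binning.
    """
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Toy catalog: exponentially distributed magnitudes above Mc = 3.0.
catalog = np.random.default_rng(2).exponential(0.45, size=2000) + 3.0
print(f"b = {aki_b_value(catalog, mc=3.0):.2f}")
```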
21 pages, 3281 KiB  
Article
Multi-Space Feature Fusion and Entropy-Based Metrics for Underwater Image Quality Assessment
by Baozhen Du, Hongwei Ying, Jiahao Zhang and Qunxin Chen
Entropy 2025, 27(2), 173; https://doi.org/10.3390/e27020173 - 6 Feb 2025
Viewed by 351
Abstract
In marine remote sensing, underwater images play an indispensable role in ocean exploration, owing to their richness in information and intuitiveness. However, underwater images often encounter issues such as color shifts, loss of detail, and reduced clarity, leading to a decline in image quality. Therefore, it is critical to study precise and efficient methods for assessing underwater image quality. A no-reference multi-space feature fusion and entropy-based metric for underwater image quality assessment (MFEM-UIQA) is proposed in this paper. Considering the color shifts of underwater images, the chrominance difference map is created from the chrominance space and statistical features are extracted. Moreover, considering the information representation capability of entropy, entropy-based multi-channel mutual information features are extracted to further characterize chrominance features. For the luminance space features, contrast features from luminance images based on gamma correction and luminance uniformity features are extracted. In addition, logarithmic Gabor filtering is applied to the luminance space images for subband decomposition, and entropy-based mutual information of the subbands is captured. Furthermore, underwater image noise features, multi-channel dispersion information, and visibility features are extracted to jointly represent the perceptual features. The experiments demonstrate that the proposed MFEM-UIQA surpasses state-of-the-art methods. Full article
(This article belongs to the Collection Entropy in Image Analysis)
15 pages, 810 KiB  
Article
Dynamical Complexity in Geomagnetically Induced Current Activity Indices Using Block Entropy
by Adamantia Zoe Boutsi, Constantinos Papadimitriou, Georgios Balasis, Christina Brinou, Emmeleia Zampa and Omiros Giannakis
Entropy 2025, 27(2), 172; https://doi.org/10.3390/e27020172 - 6 Feb 2025
Viewed by 506
Abstract
Geomagnetically Induced Currents (GICs) are a manifestation of space weather events at ground level. GICs have the potential to cause power failures in electric grids. The GIC index is a proxy of the ground geoelectric field derived solely from geomagnetic field data. Information theory can be used to shed light on the dynamics of complex systems, such as the coupled solar wind–magnetosphere–ionosphere–ground system. We performed block entropy analysis of the GIC activity indices at middle-latitude European observatories around the St. Patrick’s Day March 2015 intense magnetic storm and Mother’s Day (or Gannon) May 2024 superintense storm. We found that the GIC index values were generally higher for the May 2024 storm, indicating elevated risk levels. Furthermore, the entropy values of the SYM-H and GIC indices were higher in the time interval before the storms than during the storms, indicating transition from a system with lower organization to one with higher organization. These findings, including the temporal dynamics of the entropy and GIC indices, highlight the potential of this method to reveal pre-storm susceptibility and relaxation processes. This study not only adds to the knowledge of geomagnetic disturbances but also provides valuable practical implications for space weather forecasting and geospatial risk assessment. Full article
(This article belongs to the Section Complexity)
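A minimal sketch of the block entropy computation underlying this kind of analysis: symbolize the index time series, count overlapping length-n blocks, and take the Shannon entropy of the block distribution. The two toy intervals below are hypothetical and only illustrate why an organized (storm-time) interval yields lower entropy than a disorganized one.

```python
import numpy as np
from collections import Counter

def block_entropy(symbols: str, block_len: int) -> float:
    """Shannon entropy (in bits) of overlapping blocks of length block_len."""
    blocks = [symbols[i:i + block_len]
              for i in range(len(symbols) - block_len + 1)]
    counts = Counter(blocks)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical two-symbol coding of a GIC index: 1 = above threshold.
rng = np.random.default_rng(3)
quiet = "".join(rng.choice(list("01"), size=5000))   # disorganized interval
storm = "0" * 4000 + "1" * 1000                      # highly organized interval
for name, s in [("pre-storm", quiet), ("storm", storm)]:
    print(name, block_entropy(s, block_len=5))
```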
18 pages, 966 KiB  
Article
Mean Field Initialization of the Annealed Importance Sampling Algorithm for an Efficient Evaluation of the Partition Function Using Restricted Boltzmann Machines
by Arnau Prat Pou, Enrique Romero, Jordi Martí and Ferran Mazzanti
Entropy 2025, 27(2), 171; https://doi.org/10.3390/e27020171 - 6 Feb 2025
Viewed by 375
Abstract
Probabilistic models in physics often require the evaluation of normalized Boltzmann factors, which in turn implies the computation of the partition function Z. Obtaining the exact value of Z, though, becomes a forbiddingly expensive task as the system size increases. A possible way to tackle this problem is to use the Annealed Importance Sampling (AIS) algorithm, which provides a tool to stochastically estimate the partition function of the system. The nature of AIS allows for an efficient and parallel implementation in Restricted Boltzmann Machines (RBMs). In this work, we evaluate the partition function of magnetic spin and spin-like systems mapped into RBMs using AIS. So far, the standard application of the AIS algorithm starts from the uniform probability distribution and uses a large number of Monte Carlo steps to obtain reliable estimations of Z following an annealing process. We show that both the quality of the estimation and the cost of the computation can be significantly improved by using a properly selected mean-field starting probability distribution. We perform a systematic analysis of AIS in both small- and large-sized problems, and compare the results to exact values in problems where these are known. As a result, we propose two successful strategies that work well in all the problems analyzed. We conclude that these are good starting points to estimate the partition function with AIS with a relatively low computational cost. The procedures presented are not linked to any learning process, and therefore do not require a priori knowledge of a training dataset. Full article
(This article belongs to the Section Statistical Physics)
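A toy sketch of AIS with a mean-field starting distribution, in the spirit of the strategy described above but on a tiny spin chain, small enough that Z can also be enumerated exactly for comparison; the RBM mapping and the paper's specific mean-field choices are not reproduced here.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n, J = 10, 0.3
h = rng.normal(0.0, 0.5, n)                # local fields of a toy spin chain

def neg_E(s):                              # -E(s) for s in {-1, +1}^n
    return (h * s).sum() + J * (s[:-1] * s[1:]).sum()

# Mean-field base: independent spins with P(s_i = +1) = sigmoid(2 h_i).
p_up = 1.0 / (1.0 + np.exp(-2.0 * h))

def log_q(s):                              # log-prob under the base; Z_0 = 1
    return np.log(np.where(s > 0, p_up, 1.0 - p_up)).sum()

# Exact log Z by brute-force enumeration (2^10 states) for validation.
logZ_exact = np.log(sum(np.exp(neg_E(np.array(s)))
                        for s in product((-1, 1), repeat=n)))

n_chains, betas = 200, np.linspace(0.0, 1.0, 101)
s = np.where(rng.random((n_chains, n)) < p_up, 1, -1)
log_w = np.zeros(n_chains)
for b_prev, b in zip(betas[:-1], betas[1:]):
    for c in range(n_chains):
        # AIS weight increment for p_beta(s) ~ q(s)^(1-beta) * exp(-E(s))^beta.
        log_w[c] += (b - b_prev) * (neg_E(s[c]) - log_q(s[c]))
        for i in rng.integers(0, n, n):    # one Metropolis sweep at beta = b
            s_new = s[c].copy()
            s_new[i] = -s_new[i]
            d = ((1 - b) * (log_q(s_new) - log_q(s[c]))
                 + b * (neg_E(s_new) - neg_E(s[c])))
            if np.log(rng.random()) < d:
                s[c] = s_new

logZ_ais = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
print(f"log Z  exact: {logZ_exact:.3f}   AIS (mean-field start): {logZ_ais:.3f}")
```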
9 pages, 437 KiB  
Article
Critical Relaxation in the Quantum Yang–Lee Edge Singularity
by Yue-Mei Sun, Xinyu Wang and Liang-Jun Zhai
Entropy 2025, 27(2), 170; https://doi.org/10.3390/e27020170 - 6 Feb 2025
Viewed by 286
Abstract
We study the relaxation dynamics near the critical points of the Yang–Lee edge singularities (YLESs) in the quantum Ising chain in an imaginary longitudinal field with a polarized initial state. We find that scaling behaviors are manifested in the relaxation process after a non-universal transient time. We show that for the paramagnetic Hamiltonian, the magnetization oscillates periodically with a period inversely proportional to the gap between the two lowest energy levels; for the ferromagnetic Hamiltonian, the magnetization decays to a saturated value; while for the critical Hamiltonian, the magnetization increases linearly. A scaling theory is developed to describe these scaling properties. In this theory, we show that for small- and medium-sized systems, the scaling behavior is described by the (0+1)-dimensional YLES. Full article
(This article belongs to the Special Issue Non-Equilibrium Quantum Many-Body Dynamics)
14 pages, 10376 KiB  
Article
R Version of the Kedem–Katchalsky–Peusner Equations for Liquid Interface Potentials in a Membrane System
by Andrzej Ślęzak and Sławomir M. Grzegorczyn
Entropy 2025, 27(2), 169; https://doi.org/10.3390/e27020169 - 6 Feb 2025
Viewed by 216
Abstract
Peusner’s network thermodynamics (PNT) is an important way of describing processes in nonequilibrium thermodynamics. PNT allows energy transport and conversion processes in membrane systems to be described. This conversion concerns internal energy transformation into free and dissipated energies linked with the membrane transport of solutes. A transformation of the Kedem–Katchalsky (K-K) equations into the R variant of Kedem–Katchalsky–Peusner (K-K-P) equations was developed for the transport of binary electrolytic solutions through a membrane. The procedure was verified for a system in which a membrane Ultra Flo 145 Dialyser separated aqueous NaCl solutions. Peusner coefficients were calculated by the transformation of the K-K coefficients. Next, the coupling coefficients of the membrane processes and energy fluxes for electrolyte solutions transported through the membrane were calculated based on the Peusner coefficients. The efficiency of energy conversion in the membrane transport processes was estimated, and this coefficient increased nonlinearly with the increase in the solute concentration in the membrane. In addition, the energy fluxes as functions of ionic current density for constant solute fluxes were also investigated for membrane transport processes in the Ultra Flo 145 Dialyser membrane. Full article
(This article belongs to the Special Issue Thermodynamic Modelling in Membrane, 2nd Edition)
22 pages, 3949 KiB  
Article
Hidden Markov Neural Networks
by Lorenzo Rimella and Nick Whiteley
Entropy 2025, 27(2), 168; https://doi.org/10.3390/e27020168 - 5 Feb 2025
Viewed by 282
Abstract
We define an evolving in-time Bayesian neural network called a Hidden Markov Neural Network, which addresses the crucial challenge in time-series forecasting and continual learning: striking a balance between adapting to new data and appropriately forgetting outdated information. This is achieved by modelling the weights of a neural network as the hidden states of a Hidden Markov model, with the observed process defined by the available data. A filtering algorithm is employed to learn a variational approximation of the evolving-in-time posterior distribution over the weights. By leveraging a sequential variant of Bayes by Backprop, enriched with a stronger regularization technique called variational DropConnect, Hidden Markov Neural Networks achieve robust regularization and scalable inference. Experiments on MNIST, dynamic classification tasks, and next-frame forecasting in videos demonstrate that Hidden Markov Neural Networks provide strong predictive performance while enabling effective uncertainty quantification. Full article
(This article belongs to the Special Issue Advances in Probabilistic Machine Learning)
12 pages, 265 KiB  
Article
The Information Loss Problem and Hawking Radiation as Tunneling
by Baocheng Zhang, Christian Corda and Qingyu Cai
Entropy 2025, 27(2), 167; https://doi.org/10.3390/e27020167 - 5 Feb 2025
Viewed by 372
Abstract
In this paper, we review some methods that have tried to solve the information loss problem. In particular, we revisit the solution based on Hawking radiation as tunneling and provide a detailed statistical interpretation of the black hole entropy in terms of the quantum tunneling probability of Hawking radiation from the black hole. In addition, we show that black hole evaporation is governed by a time-dependent Schrödinger equation that sends pure states into pure states rather than into mixed states (Hawking had originally established that the final result would be mixed states). This is further confirmation of the fact that black hole evaporation is unitary. Full article
(This article belongs to the Special Issue Black Hole Information Problem: Challenges and Perspectives)
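For context, the tunneling calculation referenced above (in the Parikh–Wilczek framework) ties the emission probability directly to the change in Bekenstein–Hawking entropy. In natural units, for a Schwarzschild black hole of mass M emitting a quantum of energy ω, the standard result reads (whether the paper uses exactly this form is not stated in the abstract):

```latex
\Gamma \sim e^{-2\,\operatorname{Im} I}
       = \exp\!\left[-8\pi\omega\left(M-\frac{\omega}{2}\right)\right]
       = e^{\Delta S_{\mathrm{BH}}},
\qquad
\Delta S_{\mathrm{BH}} = 4\pi (M-\omega)^2 - 4\pi M^2 .
```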
17 pages, 1487 KiB  
Article
Perceptual Complexity as Normalized Shannon Entropy
by Norberto M. Grzywacz
Entropy 2025, 27(2), 166; https://doi.org/10.3390/e27020166 - 5 Feb 2025
Viewed by 252
Abstract
Complexity is one of the most important variables in how the brain performs decision making based on esthetic values. Multiple definitions of perceptual complexity have been proposed, with one of the most fruitful being the Normalized Shannon Entropy one. However, the Normalized Shannon Entropy definition has theoretical gaps that we address in this article. Focusing on visual perception, we first address whether normalization fully corrects for the effects of measurement resolution on entropy. The answer is negative, but the remaining effects are minor, and we propose alternate definitions of complexity, correcting this problem. Related to resolution, we discuss the ideal spatial range in the computation of spatial complexity. The results show that this range must be small but not too small. Furthermore, it is suggested by the analysis of this range that perceptual spatial complexity is based solely on translational isometry. Finally, we study how the complexities of distinct visual variables interact. We argue that the complexities of the variables of interest to the brain’s visual system may not interact linearly because of interclass correlation. But the interaction would be linear if the brain weighed complexities as in Kempthorne’s λ-Bayes-based compromise problem. We finish by listing several experimental tests of these theoretical ideas on complexity. Full article
(This article belongs to the Section Complexity)
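A minimal sketch of the Normalized Shannon Entropy definition discussed above, H/H_max computed from a histogram of a visual variable; varying n_bins illustrates the measurement-resolution dependence that the article examines. The variable and its distribution are hypothetical.

```python
import numpy as np

def normalized_shannon_entropy(values: np.ndarray, n_bins: int) -> float:
    """H / H_max in [0, 1] for a sample of a visual variable (sketch).

    n_bins plays the role of the measurement resolution discussed above.
    """
    counts, _ = np.histogram(values, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(n_bins))

rng = np.random.default_rng(5)
luminance = rng.beta(2.0, 5.0, size=10_000)   # hypothetical image variable
for n_bins in (8, 32, 128):
    print(n_bins, round(normalized_shannon_entropy(luminance, n_bins), 3))
```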
18 pages, 5598 KiB  
Article
Crack-Detection Algorithm Integrating Multi-Scale Information Gain with Global–Local Tight–Loose Coupling
by Yun Bai, Zhiyao Li, Runqi Liu, Jiayi Feng and Biao Li
Entropy 2025, 27(2), 165; https://doi.org/10.3390/e27020165 - 5 Feb 2025
Viewed by 336
Abstract
In this study, an improved target-detection model based on information theory is proposed to address the difficulties of crack-detection tasks, such as slender target shapes, blurred boundaries, and complex backgrounds. By introducing a multi-scale information gain mechanism and a global–local feature coupling strategy, the model has significantly improved feature extraction and expression capabilities. Experimental results show that, on a single-crack dataset, the model’s mAP@50 and mAP@50-95 are 1.6% and 0.8% higher than the baseline model RT-DETR, respectively; on a multi-crack dataset, these two indicators are improved by 1.2% and 1.0%, respectively. The proposed method shows good robustness and detection accuracy in complex scenarios, providing new ideas and technical support for in-depth research in the field of crack detection. Full article
24 pages, 1315 KiB  
Article
The Nonlinear Dynamics and Chaos Control of Pricing Games in Group Robot Systems
by Chen Wang, Yi Sun, Ying Han and Chao Zhang
Entropy 2025, 27(2), 164; https://doi.org/10.3390/e27020164 - 4 Feb 2025
Viewed by 343
Abstract
System stability control in resource allocation is a critical issue in group robot systems. Against this backdrop, this study investigates the nonlinear dynamics and chaotic phenomena that arise during pricing games among finitely rational group robots and proposes control strategies to mitigate chaotic behaviors. A system model and a business model for group robots are developed based on market mechanism mapping, and the dynamics of resource allocation are formulated as a second-order discrete nonlinear system using game theory. Numerical simulations reveal that small perturbations in system parameters, such as pricing adjustment speed, product demand coefficients, and resource substitution coefficients, can induce chaotic behaviors. To address these chaotic phenomena, a control method combining state feedback and parameter adjustment is proposed. This approach dynamically tunes the state feedback intensity of the system via a control parameter M, thereby delaying bifurcations and suppressing chaotic behaviors. It ensures that the distribution of system eigenvalues satisfies stability conditions, allowing control over unstable periodic orbits and period-doubling bifurcations. Simulation results demonstrate that the proposed control method effectively delays period-doubling bifurcations and stabilizes unstable periodic orbits in chaotic attractors. The stability of the system’s Nash equilibrium is significantly improved, and the parameter range for equilibrium pricing is expanded. These findings provide essential theoretical foundations and practical guidance for the design and application of group robot systems. Full article
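A generic one-dimensional illustration of the state-feedback control idea described above, blending the uncontrolled map with the current state through a control parameter M; the logistic-style map is a hypothetical stand-in for the paper's second-order pricing system.

```python
import numpy as np

def controlled_orbit(f, x0: float, M: float, n: int = 2000) -> np.ndarray:
    """Iterate x' = (1 - M) f(x) + M x; M = 0 is the uncontrolled system."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = (1.0 - M) * f(x[t]) + M * x[t]
    return x

price_map = lambda p: 3.9 * p * (1.0 - p)     # chaotic stand-in dynamics
for M in (0.0, 0.2, 0.5):
    tail = controlled_orbit(price_map, 0.3, M)[-200:]
    # Few distinct tail values => periodic/fixed; many => still chaotic.
    print(f"M = {M}: {len(np.unique(tail.round(6)))} distinct tail values")
```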
20 pages, 1923 KiB  
Article
PRG4CNN: A Probabilistic Model Checking-Driven Robustness Guarantee Framework for CNNs
by Yang Liu and Aohui Fang
Entropy 2025, 27(2), 163; https://doi.org/10.3390/e27020163 - 3 Feb 2025
Viewed by 411
Abstract
As an important kind of DNN (deep neural network), the CNN (convolutional neural network) has made remarkable progress and been widely used in the vision and decision-making of autonomous robots. Nonetheless, in many scenarios, even a minor perturbation in the input of a CNN may lead to serious errors, which means CNNs lack robustness. Formal verification is an effective method to guarantee the robustness of CNNs. Existing works predominantly concentrate on local robustness verification, which requires considerable time and space. Probabilistic robustness quantifies the robustness of CNNs and is a practical mode of potential measurement. The state of the art in probabilistic robustness verification is a test-driven approach, which relies on manually deciding whether a DNN satisfies probabilistic robustness and does not involve robustness repair. Robustness repair can further improve the robustness of CNNs. To address this issue, we propose a probabilistic model checking-driven robustness guarantee framework for CNNs, i.e., PRG4CNN. This is the first automated and complete framework for guaranteeing the probabilistic robustness of CNNs. It comprises four steps, as follows: (1) modeling a CNN as an MDP (Markov decision process) by model learning, (2) specifying the probabilistic robustness of the CNN via a PCTL (Probabilistic Computational Tree Logic) formula, (3) verifying the probabilistic robustness with a probabilistic model checker, and (4) repairing probabilistic robustness by counterexample-guided sensitivity analysis if probabilistic robustness does not hold on the CNN. We conduct experiments on CNNs of various scales trained on the handwriting dataset MNIST, and demonstrate the effectiveness of PRG4CNN. Full article
(This article belongs to the Special Issue Information-Theoretic Methods for Trustworthy Machine Learning)
19 pages, 506 KiB  
Article
Unique Method for Prognosis of Risk of Depressive Episodes Using Novel Measures to Model Uncertainty Under Data Privacy
by Barbara Pękala, Dawid Kosior, Wojciech Rząsa, Katarzyna Garwol and Janusz Czuma
Entropy 2025, 27(2), 162; https://doi.org/10.3390/e27020162 - 3 Feb 2025
Viewed by 498
Abstract
The research described in this paper focuses on key aspects of learning from data concerning the symptoms of depression and how to prevent it. The computer support system designed for that purpose combines data privacy protection from various sources and uncertainty modeling, especially for incomplete data. The mentioned aspects are key to real-life medical diagnostic problems. From among the different paradigms of machine learning, a federated learning-based approach was chosen as the most suitable to take up the challenge. Importantly, computer support in medical diagnostics often requires algorithms that are appropriate for processing data expressing uncertainty and that can ensure high-quality diagnostics. To achieve this goal, a novel decision-making algorithm is used that employs interval entropy measures based on the theory of interval-valued fuzzy sets. Such an approach enables one to take into account diagnostic uncertainty, express it exactly, and interpret it easily. Furthermore, the applied classification technique offers the possibility of a straightforward explanation of the diagnosis, which is a situation required by many physicians. The presented solution combines innovative technological approaches with practical user needs, fostering the development of more effective tools in mental health prevention. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)
11 pages, 1082 KiB  
Article
Fiducial Inference in Linear Mixed-Effects Models
by Jie Yang, Xinmin Li, Hongwei Gao and Chenchen Zou
Entropy 2025, 27(2), 161; https://doi.org/10.3390/e27020161 - 3 Feb 2025
Viewed by 340
Abstract
We develop a novel framework for fiducial inference in linear mixed-effects (LME) models, with the standard deviation of random effects reformulated as coefficients. The exact fiducial density is derived as the equilibrium measure of a reversible Markov chain over the parameter space. The density is equivalent in form to a Bayesian LME with a noninformative prior, while the underlying fiducial structure adds new benefits that unify the inference of random effects and all other parameters in a neat and simultaneous way. Our fiducial LME needs no additional tests or statistics for zero variance and is more suitable for small sample sizes. In simulation and empirical analysis, our confidence intervals (CIs) are comparable to those based on Bayesian and likelihood-profiling methods, and our inference for the variance of random effects has power competitive with the likelihood ratio test. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
15 pages, 16805 KiB  
Article
Performance Improvement for Discretely Modulated Continuous-Variable Measurement-Device-Independent Quantum Key Distribution with Imbalanced Modulation
by Zehui Liu, Jiandong Bai, Fengchao Li, Yijun Li, Yan Tian and Wenyuan Liu
Entropy 2025, 27(2), 160; https://doi.org/10.3390/e27020160 - 3 Feb 2025
Viewed by 460
Abstract
The modulation mode at the transmitters plays a crucial role in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol. However, in practical applications, differences in the modulation schemes between two transmitters can inevitably impact protocol performance, particularly when using discrete modulation with four-state or eight-state formats. This work primarily investigates the effect of imbalanced modulation at the transmitters on the security of the CV-MDI-QKD protocol under both symmetric and asymmetric distance scenarios. By employing imbalanced discrete modulation maps and numerical convex optimization techniques, the proposed CV-MDI-QKD protocol achieves a notably higher secret key rate and outperforms existing protocols in terms of maximum transmission distance. Specifically, simulation results demonstrate that the secret key rate and maximum transmission distance are boosted by approximately 77.77% and 24.3%, respectively, compared to the original protocol. This novel and simplified modulation method can be seamlessly implemented in existing experimental setups without requiring equipment modifications. Furthermore, it provides a practical approach to enhancing protocol performance and enabling cost-effective applications in secure quantum communication networks under real-world environments. Full article
(This article belongs to the Section Quantum Information)
11 pages, 3819 KiB  
Article
Improved CNN Prediction Based Reversible Data Hiding for Images
by Yingqiang Qiu, Wanli Peng and Xiaodan Lin
Entropy 2025, 27(2), 159; https://doi.org/10.3390/e27020159 - 3 Feb 2025
Viewed by 367
Abstract
This paper proposes a reversible data hiding (RDH) scheme for images with an improved convolutional neural network (CNN) predictor (ICNNP) that consists of three modules for feature extraction, pixel prediction, and complexity prediction. Because the ICNNP predicts the complexity of each pixel during the embedding process, the proposed scheme achieves superior performance compared to a CNNP-based scheme. Specifically, an input image is first split into two sub-images, a “Circle” sub-image and a “Square” sub-image, and each sub-image is used with the ICNNP to predict the other. The prediction errors of pixels are then sorted based on the predicted pixel complexities, and the sorted prediction errors with the least complexity are selected for low-distortion data embedding with a traditional histogram-shifting technique. Experimental results show that the proposed ICNNP achieves better rate-distortion performance than the CNNP, demonstrating its effectiveness. Full article
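A minimal sketch of the traditional histogram-shifting step that such schemes apply to the sorted prediction errors (embedding side only; the CNN-based prediction and complexity sorting above are the paper's contribution and are not reproduced here).

```python
import numpy as np

def hs_embed(errors: np.ndarray, bits):
    """Classic histogram shifting on prediction errors (embed only, sketch).

    Embeds one bit into each error equal to the peak value; errors between
    the peak and the first empty (zero) bin are shifted by 1 to make room.
    """
    e = errors.copy()
    hist_vals, hist_counts = np.unique(e, return_counts=True)
    peak = hist_vals[hist_counts.argmax()]
    zero = peak + 1
    while zero in hist_vals:                    # first empty bin above peak
        zero += 1
    e[(e > peak) & (e < zero)] += 1             # free the bin at peak + 1
    carriers = np.flatnonzero(errors == peak)[: len(bits)]
    e[carriers] += np.asarray(bits[: len(carriers)])  # peak -> peak + bit
    return e, peak, zero

errs = np.random.default_rng(6).integers(-5, 6, size=1000)
marked, peak, zero = hs_embed(errs, bits=[1, 0, 1, 1, 0])
print(f"peak bin {peak}, zero bin {zero}, capacity {np.sum(errs == peak)} bits")
```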
13 pages, 5323 KiB  
Article
Solutions of a Class of Switch Dynamical Systems
by Marius-F. Danca
Entropy 2025, 27(2), 158; https://doi.org/10.3390/e27020158 - 2 Feb 2025
Viewed by 460
Abstract
In this paper, the solutions of a class of switch dynamical systems are investigated. The right-hand side of the underlying equations is discontinuous with respect to the state variable, with the discontinuity represented by jump-discontinuous functions such as the signum or Heaviside functions. A novel approach to the solutions of this class of discontinuous equations is presented: the initial value problem is restated as a differential inclusion via Filippov’s regularization, after which, via approximate selection results, the differential inclusion is transformed into a continuous, single-valued differential equation. Besides existence, a sufficient condition for uniqueness, the strengthened one-sided Lipschitz condition, is also introduced. The important issue of the numerical integration of this class of equations is addressed, emphasizing by examples the errors that can appear if the discontinuity problem is neglected. The example of a mechanical system, a preloaded compliance system, is considered along with other examples. Full article
22 pages, 722 KiB  
Article
Levy Noise Affects Ornstein–Uhlenbeck Memory
by Iddo Eliazar
Entropy 2025, 27(2), 157; https://doi.org/10.3390/e27020157 - 2 Feb 2025
Viewed by 297
Abstract
This paper investigates the memory of the Ornstein–Uhlenbeck process (OUP) via three ratios of the OUP increments: signal-to-noise, noise-to-noise, and tail-to-tail. Intuition suggests the following points: (1) changing the noise that drives the OUP from Gauss to Levy will not affect the memory, as both noises share the common ‘independent increments’ property; (2) changing the auto-correlation of the OUP from exponential to slowly decaying will affect the memory, as the change yields a process with long-range correlations; and (3) with regard to Levy driving noise, the greater the noise fluctuations, the noisier the prediction of the OUP increments. This paper shows that intuition is plain wrong. Indeed, a detailed analysis establishes that for each of the three above-mentioned points, the very converse holds. Hence, Levy noise has a significant and counter-intuitive effect on Ornstein–Uhlenbeck memory. Full article
(This article belongs to the Collection Foundations of Statistical Mechanics)
31 pages, 1195 KiB  
Article
Machine Learning Predictors for Min-Entropy Estimation
by Javier Blanco-Romero, Vicente Lorenzo, Florina Almenares Mendoza and Daniel Díaz-Sánchez
Entropy 2025, 27(2), 156; https://doi.org/10.3390/e27020156 - 2 Feb 2025
Viewed by 338
Abstract
This study investigates the application of machine learning predictors for the estimation of min-entropy in random number generators (RNGs), a key component in cryptographic applications where accurate entropy assessment is essential for cybersecurity. Our research indicates that these predictors, and indeed any predictor that leverages sequence correlations, primarily estimate average min-entropy, a metric not extensively studied in this context. We explore the relationship between average min-entropy and the traditional min-entropy, focusing on their dependence on the number of target bits being predicted. Using data from generalized binary autoregressive models, a subset of Markov processes, we demonstrate that machine learning models (including a hybrid of convolutional and recurrent long short-term memory layers and the transformer-based GPT-2 model) outperform traditional NIST SP 800-90B predictors in certain scenarios. Our findings underscore the importance of considering the number of target bits in min-entropy assessment for RNGs and highlight the potential of machine learning approaches in enhancing entropy estimation techniques for improved cryptographic security. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
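The core predictor-based estimate is easy to state: if a predictor attains average per-symbol accuracy p_max on held-out generator output, the estimated min-entropy is −log2(p_max) bits per symbol. Below is a toy order-1 Markov predictor as a sketch; the paper's predictors are CNN/LSTM hybrids and GPT-2, and the subtleties about average min-entropy and multi-bit targets are not captured here.

```python
import numpy as np

def markov_predictor_min_entropy(bits: np.ndarray) -> float:
    """Min-entropy estimate (bits/symbol) from an order-1 Markov predictor."""
    train, test = bits[: len(bits) // 2], bits[len(bits) // 2 :]
    # Most likely successor of each context bit, learned on the first half.
    pred = {c: np.bincount(train[1:][train[:-1] == c], minlength=2).argmax()
            for c in (0, 1)}
    hits = np.mean([pred[test[i]] == test[i + 1]
                    for i in range(len(test) - 1)])
    p_max = max(hits, 0.5)                    # never below random guessing
    return -np.log2(p_max)

rng = np.random.default_rng(7)
iid = rng.integers(0, 2, 100_000)
print("iid bits   :", round(markov_predictor_min_entropy(iid), 3))
biased = (rng.random(100_000) < 0.8).astype(int)  # independent but biased
print("biased bits:", round(markov_predictor_min_entropy(biased), 3))
```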
17 pages, 784 KiB  
Article
Effects of Multiplicative Noise in Bistable Dynamical Systems
by Sara C. Quintanilha Valente, Rodrigo da Costa Lima Bruni, Zochil González Arenas and Daniel G. Barci
Entropy 2025, 27(2), 155; https://doi.org/10.3390/e27020155 - 2 Feb 2025
Viewed by 359
Abstract
This study explores the escape dynamics of bistable systems influenced by multiplicative noise, extending the classical Kramers rate formula to scenarios involving state-dependent diffusion in asymmetric potentials. Using a generalized stochastic calculus framework, we derive an analytical expression for the escape rate and corroborate it with numerical simulations. The results highlight the critical role of the equilibrium potential Ueq(x), which incorporates noise intensity, stochastic prescription, and diffusion properties. We show how asymmetries and stochastic calculus prescriptions influence transition rates and equilibrium configurations. Using path integral techniques and weak noise approximations, we analyze the interplay between noise and potential asymmetry, uncovering phenomena such as barrier suppression and metastable state decay. The agreement between numerical and analytical results underscores the robustness of the proposed framework. This work provides a comprehensive foundation for studying noise-induced transitions in stochastic systems, offering insights into a broad range of applications in physics, chemistry, and biology. Full article
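For orientation, the additive-noise limit that the paper generalizes is the classical Kramers formula for the overdamped escape rate out of a potential well (friction set to 1). A minimal sketch evaluated on a quartic double well follows; the multiplicative-noise and prescription-dependent corrections of the paper are omitted.

```python
import numpy as np

def kramers_rate(U_a2: float, U_b2: float, dU: float, D: float) -> float:
    """Classical Kramers escape rate for additive noise (weak-noise limit).

    U_a2 = U''(x_a) > 0 at the minimum, U_b2 = U''(x_b) < 0 at the barrier,
    dU = U(x_b) - U(x_a) the barrier height, D the noise intensity.
    """
    return np.sqrt(U_a2 * abs(U_b2)) / (2.0 * np.pi) * np.exp(-dU / D)

# Quartic double well U(x) = x^4/4 - x^2/2: minima at +/-1, barrier at 0,
# so U''(+/-1) = 2, U''(0) = -1, and barrier height dU = 1/4.
print(kramers_rate(U_a2=2.0, U_b2=-1.0, dU=0.25, D=0.05))
```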
15 pages, 4130 KiB  
Article
Protocells Either Synchronize or Starve
by Marco Villani and Roberto Serra
Entropy 2025, 27(2), 154; https://doi.org/10.3390/e27020154 - 2 Feb 2025
Viewed by 341
Abstract
Two different processes take place in self-reproducing protocells, i.e., (i) cell reproduction by fission and (ii) duplication of the genetic material. One major problem is indeed that of assuring that the two processes take place at the same pace, i.e., that they synchronize, which is a necessary condition for sustainable growth. In previous theoretical works, using dynamical models, we had shown that such synchronization can spontaneously emerge, generation after generation, under a broad set of hypotheses about the architecture of the protocell, the nature of the self-replicating molecules, and the types of kinetic equations. However, an important class of cases (quadratic or higher-order self-replication) did not synchronize in the models we had used, but could actually lead to divergence of the concentration of replicators. We show here that this behavior is due to a simplification of the previous models, i.e., the “buffering” hypothesis, which assumes instantaneous equilibrium of the internal and external concentrations of those compounds which can cross the cell membrane. That divergence disappears if we make use of more realistic dynamical models, with finite transmembrane diffusion rates of the precursors of replicators. Full article
(This article belongs to the Section Entropy and Biology)