Entropy, Volume 27, Issue 9 (September 2025) – 61 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
23 pages, 348 KB  
Article
Two Types of Geometric Jensen–Shannon Divergences
by Frank Nielsen
Entropy 2025, 27(9), 947; https://doi.org/10.3390/e27090947 - 11 Sep 2025
Abstract
The geometric Jensen–Shannon divergence (G-JSD) has gained popularity in machine learning and information sciences thanks to its closed-form expression between Gaussian distributions. In this work, we introduce an alternative definition of the geometric Jensen–Shannon divergence tailored to positive densities which does not normalize geometric mixtures. This novel divergence is termed the extended G-JSD, as it applies to the more general case of positive measures. We explicitly report the gap between the extended G-JSD and the G-JSD when considering probability densities, and show how to express the G-JSD and extended G-JSD using the Jeffreys divergence and the Bhattacharyya distance or Bhattacharyya coefficient. The extended G-JSD is proven to be an f-divergence, which is a separable divergence satisfying information monotonicity and invariance in information geometry. We derive a corresponding closed-form formula for the two types of G-JSDs when considering the case of multivariate Gaussian distributions that is often met in applications. We consider Monte Carlo stochastic estimations and approximations of the two types of G-JSD using the projective γ-divergences. Although the square root of the JSD yields a metric distance, we show that this is no longer the case for the two types of G-JSD. Finally, we explain how these two types of geometric JSDs can be interpreted as regularizations of the ordinary JSD.
(This article belongs to the Section Information Theory, Probability and Statistics)
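For orientation, a minimal sketch of the normalized geometric mixture and the (normalized) G-JSD as defined in the earlier literature; the paper's extended variant omits the normalizer. The notation below is our own shorthand, not quoted from the article:

```latex
% Normalized geometric mixture of densities p, q with weight alpha:
\[
  G_\alpha(p,q)(x) = \frac{p(x)^{1-\alpha}\, q(x)^{\alpha}}{Z_\alpha(p,q)},
  \qquad
  Z_\alpha(p,q) = \int p(x)^{1-\alpha}\, q(x)^{\alpha}\,\mathrm{d}x .
\]
% The geometric JSD replaces the arithmetic mixture of the ordinary JSD
% by the geometric mixture at alpha = 1/2:
\[
  \mathrm{JS}^{G}(p,q)
  = \tfrac{1}{2}\,\mathrm{KL}\!\left(p \,\middle\|\, G_{1/2}(p,q)\right)
  + \tfrac{1}{2}\,\mathrm{KL}\!\left(q \,\middle\|\, G_{1/2}(p,q)\right).
\]
```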
14 pages, 498 KB  
Article
Analytic Solutions and Entropy Production of the Double-Diffusive Equation System
by Imre Ferenc Barna and László Mátyás
Entropy 2025, 27(9), 946; https://doi.org/10.3390/e27090946 - 10 Sep 2025
Abstract
We investigate the partial differential equation system which describes the double-diffusion convection phenomena with the reduction formalism. Double-diffusion refers to the situation in which two scalar quantities with different diffusivities, such as heat and solute concentration, contribute to density gradients within a fluid under the influence of gravity. The time-dependent self-similar trial function is applied, and analytic results are presented for the dynamical variables and analyzed in detail. Additionally, the entropy production is derived. In the second part of the study we investigate the role of an additional heat source.
(This article belongs to the Special Issue Dissipative Physical Dynamics)
28 pages, 2519 KB  
Article
On the Entropy-Based Localization of Inequality in Probability Distributions
by Rajeev Rajaram, Nathan Ritchey and Brian Castellani
Entropy 2025, 27(9), 945; https://doi.org/10.3390/e27090945 - 10 Sep 2025
Abstract
We present a novel method for localizing inequality within probability distributions by applying a recursive Hahn decomposition to the degree of uniformity—a measure derived from the exponential of Shannon entropy. This approach partitions the probability space into disjoint regions exhibiting progressively sharper deviations from uniformity, enabling structural insights into how and where inequality is concentrated. To demonstrate its broad applicability, we apply the method to both standard and contextualized systems: the discrete binomial and continuous exponential distributions serve as canonical cases, while two hypothetical examples illustrate domain-specific applications. In the first, we analyze localized risk concentrations in disease contraction data, revealing targeted zones of epidemiological disparity. In the second, we uncover stress localization in a non-uniformly loaded beam, demonstrating the method’s relevance to physical systems with spatial heterogeneity. This decomposition reveals aspects of structural disparity that are often obscured by scalar summaries. The resulting recursive tree offers a multi-scale representation of informational non-uniformity, capturing the emergence and localization of inequality across the distribution. The framework may have implications for understanding entropy localization, transitions in informational structure, and the dynamics of heterogeneous systems.
(This article belongs to the Section Information Theory, Probability and Statistics)
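As a pointer to the quantity the abstract builds on, a minimal sketch of a degree-of-uniformity measure derived from the exponential of Shannon entropy; the normalization by support size is our assumption for illustration, not necessarily the paper's exact definition:

```python
# Hedged sketch: exp(Shannon entropy) relative to the support size
# (perplexity over number of outcomes) equals 1 iff the distribution
# is uniform, and decreases as probability mass concentrates.
import numpy as np

def degree_of_uniformity(p: np.ndarray) -> float:
    """exp(H)/n for a discrete distribution p; 1.0 for the uniform case."""
    p = p[p > 0]                         # drop zero-probability outcomes
    H = -np.sum(p * np.log(p))           # Shannon entropy in nats
    return float(np.exp(H) / len(p))

print(degree_of_uniformity(np.ones(8) / 8))                   # 1.0 (uniform)
print(degree_of_uniformity(np.array([0.7, 0.1, 0.1, 0.1])))   # ~0.64 (concentrated)
```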
11 pages, 1243 KB  
Editorial
Five Fristonian Formulae
by Thomas Parr, Giovanni Pezzulo, Rosalyn Moran, Maxwell Ramstead, Axel Constant and Anjali Bhat
Entropy 2025, 27(9), 944; https://doi.org/10.3390/e27090944 - 10 Sep 2025
Abstract
This paper is the contribution of the editorial team for a special issue designed to celebrate the scientific contributions of Karl Friston on his 65th birthday.
27 pages, 1002 KB  
Article
Exergy Efficiency of Closed and Unsteady-Flow Systems
by Yunus A. Çengel and Mehmet Kanoğlu
Entropy 2025, 27(9), 943; https://doi.org/10.3390/e27090943 - 10 Sep 2025
Abstract
Exergy efficiency is viewed as the degree of approaching reversible operation, with a value of 100 percent for a reversible process characterized by zero entropy generation or, equivalently, zero exergy destruction, since $X_{\mathrm{destroyed}} = T_0 S_{\mathrm{gen}}$. As such, exergy efficiency becomes a measure of thermodynamic perfection. There are different conceptual definitions of exergy efficiency, the most common ones being (1) the ratio of exergy output to exergy input, $\eta_{\mathrm{ex}} = X_{\mathrm{output}}/X_{\mathrm{input}} = 1 - (X_{\mathrm{destroyed}} + X_{\mathrm{loss}})/X_{\mathrm{input}}$; (2) the ratio of the product exergy to fuel exergy, $\eta_{\mathrm{ex}} = X_{\mathrm{product}}/X_{\mathrm{fuel}} = 1 - (X_{\mathrm{destroyed}} + X_{\mathrm{loss}})/X_{\mathrm{fuel}}$; and (3) the ratio of exergy recovered to exergy expended, $\eta_{\mathrm{ex}} = X_{\mathrm{recovered}}/X_{\mathrm{expended}} = 1 - X_{\mathrm{destroyed}}/X_{\mathrm{expended}}$. Most exergy efficiency definitions are formulated with steady-flow systems in mind, and they are generally applied to systems in steady operation, such as power plants and refrigeration systems, whose exergy content remains constant. If these definitions are to be used for closed and unsteady-flow systems, the terms need to be interpreted broadly to account for the exergy change of the systems as exergy input or output, as appropriate. In this paper, general exergy efficiency relations are developed for closed and unsteady-flow systems and their use is demonstrated with applications. Also, the practicality of the use of the term exergy loss $X_{\mathrm{loss}}$ is questioned, and limitations on the definition $\eta_{\mathrm{ex}} = W_{\mathrm{act,out}}/W_{\mathrm{rev,out}}$ are discussed.
(This article belongs to the Special Issue Thermodynamic Optimization of Energy Systems)
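A hedged numerical illustration of definition (3), with assumed values (100 kJ of exergy expended, 20 kJ destroyed; these numbers are not from the article):

```latex
\[
  \eta_{\mathrm{ex}}
  = \frac{X_{\mathrm{recovered}}}{X_{\mathrm{expended}}}
  = 1 - \frac{X_{\mathrm{destroyed}}}{X_{\mathrm{expended}}}
  = 1 - \frac{20\ \mathrm{kJ}}{100\ \mathrm{kJ}}
  = 0.80 .
\]
```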
28 pages, 7844 KB  
Article
Three-Dimensional Sound Source Localization with Microphone Array Combining Spatial Entropy Quantification and Machine Learning Correction
by Guangneng Li, Feiyu Zhao, Wei Tian and Tong Yang
Entropy 2025, 27(9), 942; https://doi.org/10.3390/e27090942 - 9 Sep 2025
Abstract
In recent years, with the popularization of intelligent scene monitoring, sound source localization (SSL) has become a major means of indoor monitoring and target positioning. However, existing sound source localization solutions are difficult to extend to multi-source and three-dimensional scenarios. To address this, this paper proposes a three-dimensional sound source localization technology based on an eight-microphone array. Specifically, the method employs a rectangular eight-microphone array and captures Direction-of-Arrival (DOA) information via the direct path relative transfer function (DP-RTF). It introduces spatial entropy to quantify the uncertainty caused by the exponentially growing number of DOA combinations as the number of sound sources increases, while further reducing the spatial entropy of sound source localization through geometric intersection. This solves the problem that traditional sound source localization methods cannot be applied to multi-source and three-dimensional scenarios. On the other hand, machine learning is used to eliminate coordinate deviations caused by DOA estimation errors of the DP-RTF and by deviations in the microphone geometric parameters. Both simulation experiments and real-scene experiments show that the positioning error of the proposed method in three-dimensional scenarios is about 10.0 cm.
31 pages, 892 KB  
Article
Federated Learning over MU-MIMO Vehicular Networks
by Maria Raftopoulou, José Mairton B. da Silva, Jr., Remco Litjens, H. Vincent Poor and Piet Van Mieghem
Entropy 2025, 27(9), 941; https://doi.org/10.3390/e27090941 - 9 Sep 2025
Abstract
Many algorithms related to vehicular applications, such as enhanced perception of the environment, benefit from frequent updates and the use of data from multiple vehicles. Federated learning is a promising method to improve the accuracy of algorithms in the context of vehicular networks. However, limited communication bandwidth, varying wireless channel quality, and potential latency requirements may impact the number of vehicles selected for training per communication round and their assigned radio resources. In this work, we characterize the vehicles participating in federated learning based on their importance to the learning process and their use of wireless resources. We then address the joint vehicle selection and resource allocation problem, considering multi-cell networks with multi-user multiple-input multiple-output (MU-MIMO)-capable base stations and vehicles. We propose a “vehicle-beam-iterative” algorithm to approximate the solution to the resulting optimization problem. We then evaluate its performance through extensive simulations, using realistic road and mobility models, for the task of object classification of European traffic signs. Our results indicate that MU-MIMO improves the convergence time of the global model. Moreover, the application-specific accuracy targets are reached faster in scenarios where the vehicles have the same training data set sizes than in scenarios where the data set sizes differ.
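For context, a minimal sketch of the standard FedAvg aggregation step, which weights local updates by data set size and shows why unequal sizes skew the global update; this is generic background, not the paper's vehicle-beam-iterative algorithm:

```python
# Hedged sketch: FedAvg-style aggregation of per-vehicle model updates,
# weighted by local data set size (assumed toy values below).
import numpy as np

def fedavg(updates: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    """Weighted average of local updates, one weight per participating vehicle."""
    w = np.asarray(n_samples, dtype=float)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

# Three vehicles; the third holds far more data and dominates the average:
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(fedavg(updates, n_samples=[100, 100, 800]))   # -> [0.9, 0.9]
```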
13 pages, 1976 KB  
Article
Determining the Upper-Bound on the Code Distance of Quantum Stabilizer Codes Through the Monte Carlo Method Based on Fully Decoupled Belief Propagation
by Zhipeng Liang, Zicheng Wang, Zhengzhong Yi, Fusheng Yang and Xuan Wang
Entropy 2025, 27(9), 940; https://doi.org/10.3390/e27090940 - 9 Sep 2025
Abstract
The code distance is a critical parameter of quantum stabilizer codes (QSCs), and determining it—whether exactly or approximately—is known to be an NP-complete problem. However, its upper bound can be determined efficiently by some methods such as the Monte Carlo method. Leveraging the Monte Carlo method, we propose an algorithm to compute the upper bound on the code distance of a given QSC using fully decoupled belief propagation combined with ordered statistics decoding (FDBP-OSD). Our algorithm demonstrates high precision: for various QSCs with known distances, the computed upper bounds match the actual values. Additionally, we explore upper bounds for the minimum weight of logical X operators in the Z-type Tanner-graph-recursive-expansion (Z-TGRE) code and the Chamon code—an XYZ product code constructed from three repetition codes. The results on Z-TGRE codes align with theoretical analysis, while the results on Chamon codes suggest that XYZ product codes may achieve a code distance of $O(N^{2/3})$, which supports the conjecture of Leverrier et al.
(This article belongs to the Special Issue Quantum Error Correction and Fault-Tolerance)
20 pages, 2671 KB  
Article
Multivariate Time Series Anomaly Detection Based on Inverted Transformer with Multivariate Memory Gate
by Yuan Ma, Weiwei Liu, Changming Xu, Luyi Bai, Ende Zhang and Junwei Wang
Entropy 2025, 27(9), 939; https://doi.org/10.3390/e27090939 - 8 Sep 2025
Abstract
In the industrial IoT, it is vital to detect anomalies in multivariate time series, yet this task faces numerous challenges, including highly imbalanced datasets, complex and high-dimensional data, and large disparities across variables. Despite the recent surge of proposals for deep learning-based methods, these approaches typically treat the multivariate data at each point in time as a single token, weakening the personalized features and dependency relationships between variables. As a result, their performance tends to degrade under highly imbalanced conditions, and reconstruction-based models are prone to overfitting abnormal patterns, leading to excessive reconstruction of anomalous inputs. In this paper, we propose ITMMG, an inverted Transformer with a multivariate memory gate. ITMMG employs an inverted token embedding strategy and multivariate memory to capture deep dependencies among variables and the normal patterns of individual variables. The experimental results demonstrate that the proposed method exhibits superior performance in terms of detection accuracy and robustness compared with existing baseline methods across a range of standard time series anomaly detection datasets, and that it significantly reduces the probability of misclassifying anomalous samples during reconstruction.
(This article belongs to the Section Information Theory, Probability and Statistics)
24 pages, 3114 KB  
Article
GNSS Interference Identification Driven by Eye Pattern Features: ICOA–CNN–ResNet–BiLSTM Optimized Deep Learning Architecture
by Chuanyu Wu, Yuanfa Ji and Xiyan Sun
Entropy 2025, 27(9), 938; https://doi.org/10.3390/e27090938 - 7 Sep 2025
Abstract
In this study, the key challenges faced by global navigation satellite systems (GNSSs) in the field of security are addressed, and an eye diagram-based deep learning framework for intelligent classification of interference types is proposed. GNSS signals are first transformed into two-dimensional eye diagrams, enabling a novel visual representation wherein interference types are distinguished through entropy-centric feature analysis. Specifically, the quantification of information entropy within these diagrams serves as a theoretical foundation for extracting salient discriminative features, reflecting the structural complexity and uncertainty of the underlying signal distortions. We designed a hybrid architecture that integrates spatial feature extraction, gradient stability enhancement, and temporal dynamics modeling, combining the advantages of a convolutional neural network, a residual network, and a bidirectional long short-term memory network. To further improve model performance, we propose an improved coati optimization algorithm (ICOA), which combines chaotic mapping, an elite perturbation mechanism, and an adaptive weighting strategy for hyperparameter optimization. Compared with mainstream optimization methods, this algorithm improves convergence accuracy by more than 30%. Experimental results on jamming datasets (continuous wave, chirp, pulse, frequency-modulated, amplitude-modulated, and spoofing interference) demonstrate that our method achieved strong performance in terms of accuracy, precision, recall, F1 score, and specificity, with values of 98.02%, 97.09%, 97.24%, 97.14%, and 99.65%, respectively, representing improvements of 1.98%, 2.80%, 6.10%, 4.59%, and 0.33% over the next-best model. This study provides an efficient, entropy-aware, intelligent, and practically feasible solution for GNSS interference identification.
(This article belongs to the Section Signal and Data Analysis)
20 pages, 1690 KB  
Article
3V-GM: A Tri-Layer “Point–Line–Plane” Critical Node Identification Algorithm for New Power Systems
by Yuzhuo Dai, Min Zhao, Gengchen Zhang and Tianze Zhao
Entropy 2025, 27(9), 937; https://doi.org/10.3390/e27090937 - 7 Sep 2025
Abstract
With the increasing penetration of renewable energy, the stochastic and intermittent nature of its generation increases operational uncertainty and vulnerability, posing significant challenges for grid stability. However, traditional algorithms typically identify critical nodes by focusing solely on the network topology or power flow, or by combining the two, which leads to the inaccurate and incomplete identification of essential nodes. To address this, we propose the Three-Dimensional Value-Based Gravity Model (3V-GM), which integrates structural and electrical–physical attributes across three layers. In the plane layer, we combine each node’s global topological position with its real-time supply–demand voltage state. In the line layer, we introduce an electrical coupling distance to quantify the strength of electromagnetic interactions between nodes. In the point layer, we apply eigenvector centrality to detect latent hub nodes whose influence is not immediately apparent. The performance of our proposed method was evaluated by examining the change in the load loss rate as nodes were sequentially removed. To assess the effectiveness of the 3V-GM approach, simulations were conducted on the IEEE 39 system, as well as six other benchmark networks. The simulations were performed using Python scripts, with operational parameters such as bus voltages, active and reactive power flows, and branch impedances obtained from standard test cases provided by MATPOWER v7.1. The results consistently show that removing the same number of nodes identified by 3V-GM leads to a greater load loss compared to the six baseline methods. This demonstrates the superior accuracy and stability of our approach. Additionally, an ablation experiment, which decomposed and recombined the three layers, further highlights the unique contribution of each component to the overall performance.
(This article belongs to the Section Complexity)
22 pages, 415 KB  
Article
Infodemic Source Detection with Information Flow: Foundations and Scalable Computation
by Zimeng Wang, Chao Zhao, Qiaoqiao Zhou, Chee Wei Tan and Chung Chan
Entropy 2025, 27(9), 936; https://doi.org/10.3390/e27090936 - 6 Sep 2025
Abstract
We consider the problem of identifying the source of a rumor in a network, given only a snapshot observation of infected nodes after the rumor has spread. Classical approaches, such as the maximum likelihood (ML) and joint maximum likelihood (JML) estimators based on the conventional Susceptible–Infectious (SI) model, exhibit degeneracy, failing to uniquely identify the source even in simple network structures. To address these limitations, we propose a generalized estimator that incorporates independent random observation times. To capture the structure of information flow beyond graphs, our formulations consider rate constraints on the rumor and the multicast capacities for cyclic polylinking networks. Furthermore, we develop forward elimination and backward search algorithms for rate-constrained source detection and validate their effectiveness and scalability through comprehensive simulations. Our study establishes a rigorous and scalable foundation for infodemic source detection.
(This article belongs to the Special Issue Applications of Information Theory to Machine Learning)
9 pages, 1589 KB  
Article
Application of the Three-Group Model to the 2024 US Elections
by Miron Kaufman, Sanda Kaufman and Hung T. Diep
Entropy 2025, 27(9), 935; https://doi.org/10.3390/e27090935 - 6 Sep 2025
Abstract
Political polarization in Western democracies has accelerated in the last decade, with negative social consequences. Research across disciplines on its antecedents, manifestations, and societal impacts is hindered by social systems’ complexity: their constant flux impedes tracing the causes of observed trends and predicting consequences, hampering mitigation. Social physics models exploit a characteristic of complex systems: what seems chaotic at one observation level may exhibit patterns at a higher level. Therefore, dynamic modeling of complex systems allows anticipation of possible events. We use this approach to anticipate 2024 US election results. We consider the highly polarized Democrats and Republicans, and the Independents fluctuating between them. We generate average group-stance scenarios in time and explore how polarization and depolarization might have affected 2024 voting outcomes. We find that reducing polarization might advantage the larger voting group. We also explore ways to reduce polarization, and their potential effects on election results. The results inform on the perils of polarization trends and on possibilities of changing course.
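To make the modeling approach concrete, a generic sketch of coupled three-group stance dynamics; the equations, couplings, and values below are illustrative assumptions, not the authors' model:

```python
# Hedged sketch: three coupled group stances (D, R, I) relaxing toward
# saturating social influence; an antagonistic D-R coupling K_dr drives
# polarization. NOT the paper's equations, just a minimal illustration.
import numpy as np

def simulate(K_dr=-2.0, K_di=0.3, K_ri=0.3, steps=2000, dt=0.01):
    s = np.array([0.5, -0.5, 0.0])          # initial stances: D, R, I
    for _ in range(steps):
        d, r, i = s
        ds = np.array([
            -d + np.tanh(K_dr * r + K_di * i),    # Democrats
            -r + np.tanh(K_dr * d + K_ri * i),    # Republicans
            -i + np.tanh(K_di * d + K_ri * r),    # Independents in between
        ])
        s = s + dt * ds                      # forward-Euler integration
    return s

print(simulate())              # strong antagonism: D and R settle near +/-0.96
print(simulate(K_dr=-0.3))     # weak antagonism: stances decay toward consensus at 0
```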
23 pages, 5122 KB  
Article
Time-Varying Autoregressive Models: A Novel Approach Using Physics-Informed Neural Networks
by Zhixuan Jia and Chengcheng Zhang
Entropy 2025, 27(9), 934; https://doi.org/10.3390/e27090934 - 4 Sep 2025
Abstract
Time series models are widely used to examine temporal dynamics and uncover patterns across diverse fields. A commonly employed approach for modeling such data is the (Vector) Autoregressive (AR/VAR) model, in which each variable is represented as a linear combination of its own and others’ lagged values. However, the traditional (V)AR framework relies on the key assumption of stationarity, that autoregressive coefficients remain constant over time, which is often violated in practice, especially in systems affected by structural breaks, seasonal fluctuations, or evolving causal mechanisms. To overcome this limitation, Time-Varying (Vector) Autoregressive (TV-AR/TV-VAR) models have been developed, enabling model parameters to evolve over time and thus better capturing non-stationary behavior. Conventional approaches to estimating such models, including generalized additive modeling and kernel smoothing techniques, often require strong assumptions about basis functions, which can restrict their flexibility and applicability. To address these challenges, we introduce a novel framework that leverages physics-informed neural networks (PINN) to model TV-AR/TV-VAR processes. The proposed method extends the PINN framework to time series analysis by reducing reliance on explicitly defined physical structures, thereby broadening its applicability. Its effectiveness is validated through simulations on synthetic data and an empirical study of real-world health-related time series.
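To fix ideas, a minimal sketch of a TV-AR(1) process and a rolling least-squares baseline estimator of its time-varying coefficient; this is a simple kernel-style baseline, not the paper's PINN estimator:

```python
# Hedged sketch: simulate y_t = a(t) * y_{t-1} + noise with a slowly varying
# coefficient, then recover a(t) by rolling-window ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
T = 2000
a_true = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(T) / T)   # slowly varying a(t)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true[t] * y[t - 1] + 0.1 * rng.standard_normal()

# Rolling-window OLS: a_hat(t) = sum(y_t * y_{t-1}) / sum(y_{t-1}^2)
w = 100
a_hat = np.full(T, np.nan)
for t in range(w, T):
    ylag, ycur = y[t - w:t - 1], y[t - w + 1:t]
    a_hat[t] = ylag @ ycur / (ylag @ ylag)

print(np.nanmean(np.abs(a_hat - a_true)))   # small average estimation error
```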
24 pages, 4429 KB  
Article
Ascertaining Susceptibilities in Smart Contracts: A Quantum Machine Learning Approach
by Amulyashree Sridhar, Kalyan Nagaraj, Shambhavi Bangalore Ravi and Sindhu Kurup
Entropy 2025, 27(9), 933; https://doi.org/10.3390/e27090933 - 4 Sep 2025
Abstract
The current research aims to discover applications of QML approaches in realizing liabilities within smart contracts. These contracts are essential commodities of the blockchain interface and are also decisive in developing decentralized products. But liabilities in smart contracts could result in unfamiliar system failures. Presently, static detection tools are utilized to discover liabilities. However, they can produce false findings due to their dependency on predefined rules. In addition, these rules can often be superseded, failing to generalize to new contracts. Likewise, the detection of liabilities with ML approaches has certain limitations with contract size due to storage and performance issues. Nevertheless, employing QML approaches could be beneficial as they do not necessitate any preconceived rules. They learn from data attributes during the training process and are employed as alternatives to ML approaches in terms of storage and performance. The present study employs four QML approaches, namely, QNN, QSVM, VQC, and QRF, for discovering susceptibilities. Experimentation revealed that the QNN model surpasses the other approaches in detecting liabilities, with a performance accuracy of 82.43%. To further validate its feasibility and performance, the model was assessed on a separate test dataset, the SolidiFI data, and the outcomes remained consistent. Additionally, the performance of the model was statistically validated using McNemar’s test.
(This article belongs to the Section Information Theory, Probability and Statistics)
20 pages, 7914 KB  
Article
Channel Estimation for Intelligent Reflecting Surface Empowered Coal Mine Wireless Communication Systems
by Yang Liu, Kaikai Guo, Xiaoyue Li, Bin Wang and Yanhong Xu
Entropy 2025, 27(9), 932; https://doi.org/10.3390/e27090932 - 4 Sep 2025
Abstract
The confined space of coal mines, characterized by curved tunnels with rough surfaces and a variety of deployed production equipment, induces severe signal attenuation and interruption, which significantly degrades the accuracy of conventional channel estimation algorithms applied in coal mine wireless communication systems. To address these challenges, we propose a modified Bilinear Generalized Approximate Message Passing (mBiGAMP) algorithm enhanced by intelligent reflecting surface (IRS) technology to improve channel estimation accuracy in coal mine scenarios. Due to the presence of abundant coal-carrying belt conveyors, we establish a hybrid channel model integrating both fast-varying and quasi-static components to accurately model the unique propagation environment in coal mines. Specifically, the fast-varying channel captures the varying signal paths affected by moving conveyors, while the quasi-static channel represents stable direct links. Since this hybrid structure necessitates an augmented factor graph, we introduce two additional factor nodes and variable nodes to characterize the distinct message-passing behaviors and then rigorously derive the mBiGAMP algorithm. Simulation results demonstrate that the proposed mBiGAMP algorithm achieves superior channel estimation accuracy in dynamic conveyor-affected coal mine scenarios compared with other state-of-the-art methods, showing significant improvements in both separated and cascaded channel estimation. Specifically, when the NMSE is $10^{-3}$, the SNR of mBiGAMP is improved by approximately 5 dB, 6 dB, and 14 dB compared with the Dual-Structure Orthogonal Matching Pursuit (DS-OMP), Parallel Factor (PARAFAC), and Least Squares (LS) algorithms, respectively. We also verify the convergence behavior of the proposed mBiGAMP algorithm across the operational signal-to-noise ratio range. Furthermore, we investigate the impact of the number of pilots on channel estimation performance, which reveals that the proposed mBiGAMP algorithm requires fewer pilots than other methods to accurately recover channel state information while preserving estimation fidelity.
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
21 pages, 375 KB  
Review
Sherlock Holmes Doesn’t Play Dice: The Mathematics of Uncertain Reasoning When Something May Happen, That You Are Not Even Able to Figure Out
by Guido Fioretti
Entropy 2025, 27(9), 931; https://doi.org/10.3390/e27090931 - 4 Sep 2025
Abstract
While Evidence Theory (also known as Dempster–Shafer Theory, or Belief Functions Theory) is being increasingly used in data fusion, its potentialities in the Social and Life Sciences are often obscured by a lack of awareness of its distinctive features. In particular, with this paper I stress that an extended version of Evidence Theory can express the uncertainty deriving from the fear that events may materialize that one is not even able to figure out. By contrast, Probability Theory must limit itself to the possibilities that a decision-maker is currently envisaging. I compare this extended version of Evidence Theory to cutting-edge extensions of Probability Theory, such as imprecise and sub-additive probabilities, as well as unconventional versions of Information Theory that are employed in data fusion and the transmission of cultural information. A possible application to creative usage of Large Language Models is outlined, and further extensions to multi-agent interactions are sketched.
(This article belongs to the Section Information Theory, Probability and Statistics)
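For readers new to Evidence Theory, a minimal sketch of Dempster's rule of combination on a two-element frame, where mass assigned to the whole frame expresses uncommitted belief (the "something I cannot figure out" ingredient); the toy masses below are assumptions for illustration:

```python
# Hedged sketch: Dempster's rule on the frame {A, B}. Focal sets are strings;
# "AB" is the whole frame, carrying ignorance rather than committed belief.
def combine(m1: dict, m2: dict) -> dict:
    def meet(x, y):                      # intersection of two focal sets
        return "".join(sorted(set(x) & set(y)))
    out, conflict = {}, 0.0
    for x, mx in m1.items():
        for y, my in m2.items():
            z = meet(x, y)
            if z:
                out[z] = out.get(z, 0.0) + mx * my
            else:
                conflict += mx * my      # mass on contradictory evidence
    return {z: v / (1.0 - conflict) for z, v in out.items()}

m1 = {"A": 0.6, "AB": 0.4}               # 0.4 left uncommitted (ignorance)
m2 = {"B": 0.5, "AB": 0.5}
print(combine(m1, m2))                   # ~{'A': 0.43, 'B': 0.29, 'AB': 0.29}
```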
22 pages, 1017 KB  
Article
Optimized Generalized LDPC Convolutional Codes
by Li Deng, Kai Tao, Zhiping Shi, You Zhang, Yinlong Shi, Jian Wang, Tian Liu and Yongben Wang
Entropy 2025, 27(9), 930; https://doi.org/10.3390/e27090930 - 4 Sep 2025
Abstract
In this paper, some optimized encoding and decoding schemes are proposed for generalized LDPC convolutional codes (GLDPC–CCs). In terms of the encoding scheme, a flexible doping method is proposed, which replaces multiple single parity check (SPC) nodes with one generalized check (GC) node. Different types of BCH codes can be selected as the GC node by adjusting the number of SPC nodes to be replaced. Moreover, by fine-tuning the truncated bits and the extended parity check bits, or by reasonably adjusting the GC node distribution, the performance of GLDPC–CCs can be further improved. In terms of the decoding scheme, a hybrid layered normalized min-sum (HLNMS) decoding algorithm is proposed, where layered normalized min-sum (LNMS) decoding is used for SPC nodes and Chase–Pyndiah decoding is adopted for GC nodes. Based on an analysis of the decoding convergence of GC and SPC nodes, an adaptive weight factor that changes with the decoding iterations is designed for GC nodes, aiming to further improve the decoding performance. In addition, an early-stop decoding strategy is proposed based on a minimum amplitude threshold of mutual information in order to reduce the decoding complexity. The simulation results verify the superiority of the proposed scheme for GLDPC–CCs over the prior art, which has great application potential in optical communication systems.
(This article belongs to the Special Issue LDPC Codes for Communication Systems)
24 pages, 2585 KB  
Article
Comprehensive Examination of Unrolled Networks for Solving Linear Inverse Problems
by Yuxi Chen, Xi Chen, Arian Maleki and Shirin Jalali
Entropy 2025, 27(9), 929; https://doi.org/10.3390/e27090929 - 3 Sep 2025
Abstract
Unrolled networks have become prevalent in various computer vision and imaging tasks. Although they have demonstrated remarkable efficacy in solving specific computer vision and computational imaging tasks, their adaptation to other applications presents considerable challenges. This is primarily due to the multitude of design decisions that practitioners working on new applications must navigate, each potentially affecting the network’s overall performance. These decisions include selecting the optimization algorithm, defining the loss function, and determining the deep architecture, among others. Compounding the issue, evaluating each design choice requires time-consuming simulations to train, fine-tune the neural network, and optimize its performance. As a result, the process of exploring multiple options and identifying the optimal configuration becomes time-consuming and computationally demanding. The main objectives of this paper are (1) to unify some ideas and methodologies used in unrolled networks to reduce the number of design choices a user has to make, and (2) to report a comprehensive ablation study to discuss the impact of each of the choices involved in designing unrolled networks and present practical recommendations based on our findings. We anticipate that this study will help scientists and engineers to design unrolled networks for their applications and diagnose problems within their networks efficiently.
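As background on what "unrolled" means here, a minimal sketch of ISTA for a sparse linear inverse problem, truncated to K iterations; learned variants such as LISTA make the operators and thresholds trainable. This is a generic example, not the paper's specific architecture:

```python
# Hedged sketch: K iterations of ISTA for min ||Ax - y||^2 + lam*||x||_1.
# Each iteration corresponds to one "layer" of an unrolled network.
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_unrolled(A, y, lam=0.05, K=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(K):                        # K layers of gradient step + prox
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]
x_hat = ista_unrolled(A, A @ x_true)
print(np.round(x_hat[[3, 40, 77]], 2))        # roughly recovers the spikes (shrunk)
```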
13 pages, 952 KB  
Article
Sensor Fusion for Target Detection Using LLM-Based Transfer Learning Approach
by Yuval Ziv, Barouch Matzliach and Irad Ben-Gal
Entropy 2025, 27(9), 928; https://doi.org/10.3390/e27090928 - 3 Sep 2025
Abstract
This paper introduces a novel sensor fusion approach for the detection of multiple static and mobile targets by autonomous mobile agents. Unlike previous studies that rely on theoretical sensor models treated as independent, the proposed methodology leverages real-world sensor data, which is transformed into sensor-specific probability maps—object detection estimates for optical data, and averaged point-cloud intensities converted by a dedicated deep learning model for LIDAR—before being integrated through a large language model (LLM) framework. We introduce a methodology based on LLM transfer learning (LLM-TLFT) to create a robust global probability map enabling efficient swarm management and target detection in challenging environments. The paper focuses on real data obtained from two types of sensors, light detection and ranging (LIDAR) sensors and optical sensors, and demonstrates significant improvement in performance compared to existing methods (Independent Opinion Pool, CNN, GPT-2 with deep transfer learning) in terms of precision, recall, and computational efficiency, particularly in scenarios with high noise and sensor imperfections. A significant advantage of the proposed approach is the possibility of interpreting dependencies between different sensors. In addition, model compression using knowledge-based distillation was performed (distilled TLFT), which yielded satisfactory results for the deployment of the proposed approach on edge devices.
20 pages, 952 KB  
Article
Noise-Robust-Based Clock Parameter Estimation and Low-Overhead Time Synchronization in Time-Sensitive Industrial Internet of Things
by Long Tang, Fangyan Li, Zichao Yu and Haiyong Zeng
Entropy 2025, 27(9), 927; https://doi.org/10.3390/e27090927 - 3 Sep 2025
Abstract
Time synchronization is critical for task-oriented and time-sensitive Industrial Internet of Things (IIoT) systems. Nevertheless, achieving high-precision synchronization with low communication overhead remains a key challenge due to the constrained resources of IIoT devices. In this paper, we propose a single-timestamp time synchronization scheme that significantly reduces communication overhead by utilizing the access point’s (AP’s) mechanism of periodically collecting sensor device data. The reduced communication overhead alleviates network congestion, which is essential for achieving low end-to-end latency in synchronized IIoT networks. Furthermore, to mitigate the impact of random delay noise on clock parameter estimation, we propose a noise-robust Maximum Likelihood Estimation (NR-MLE) algorithm that jointly optimizes synchronization accuracy and resilience to random delays. Specifically, we decompose the collected timestamp matrix into two low-rank matrices and use gradient descent to minimize reconstruction error and regularization, approximating the true signal and removing noise. The denoised timestamp matrix is then used to jointly estimate clock skew and offset via MLE, and the corresponding Cramér–Rao Lower Bounds (CRLBs) are derived. The simulation results demonstrate that the NR-MLE algorithm achieves higher clock parameter estimation accuracy than conventional MLE and exhibits strong robustness against increasing noise levels.
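To illustrate the final estimation step, a minimal sketch of clock skew and offset estimation by least squares (the Gaussian MLE) on synthetic timestamps; the low-rank denoising stage of NR-MLE is only noted in a comment, and all numeric values are assumptions:

```python
# Hedged sketch: fit t_device = theta * t_ref + phi + noise by least squares,
# which is the MLE under i.i.d. Gaussian delay noise. NR-MLE would first
# denoise the timestamp matrix via low-rank factorization before this step.
import numpy as np

rng = np.random.default_rng(2)
t_ref = np.linspace(0.0, 10.0, 50)                  # AP collection instants (s)
theta, phi = 1.0 + 20e-6, 0.5e-3                    # true skew and offset (assumed)
t_dev = theta * t_ref + phi + 1e-5 * rng.standard_normal(50)

Xmat = np.column_stack([t_ref, np.ones_like(t_ref)])
sol, *_ = np.linalg.lstsq(Xmat, t_dev, rcond=None)  # OLS = Gaussian MLE
theta_hat, phi_hat = sol
print(theta_hat - 1.0, phi_hat)                     # ~2e-5 skew, ~5e-4 offset
```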
23 pages, 5687 KB  
Article
Benchmarking Static Analysis for PHP Applications Security
by Jiazhen Zhao, Kailong Zhu, Canju Lu, Jun Zhao and Yuliang Lu
Entropy 2025, 27(9), 926; https://doi.org/10.3390/e27090926 - 3 Sep 2025
Abstract
PHP is the most widely used server-side programming language, but it remains highly susceptible to diverse classes of vulnerabilities. Static Application Security Testing (SAST) tools are commonly adopted for vulnerability detection; however, their evaluation lacks systematic criteria capable of quantifying information loss and uncertainty in analysis. Existing approaches, often based on small real-world case sets or heuristic sampling, fail to control experimental entropy within test cases. This uncontrolled variability makes it difficult to measure the information gain provided by different tools and to accurately differentiate their performance under varying levels of structural and semantic complexity. In this paper, we develop a systematic evaluation framework for PHP SAST tools, designed to provide accurate and comprehensive assessments of their vulnerability detection capabilities. The framework explicitly isolates key factors influencing data flow analysis, enabling evaluation over four progressive dimensions with controlled information diversity. Using a benchmark instance, we validate the framework’s feasibility and show how it reduces evaluation entropy, enabling more reliable measurement of detection capabilities. Our results highlight the framework’s ability to reveal limitations in current SAST tools, offering actionable insights for their future improvement.
15 pages, 266 KB  
Article
Structural Complexity as a Directional Signature of System Evolution: Beyond Entropy
by Donglu Shi
Entropy 2025, 27(9), 925; https://doi.org/10.3390/e27090925 - 3 Sep 2025
Abstract
We propose a universal framework for understanding system evolution based on structural complexity, offering a directional signature that applies across physical, chemical, and biological domains. Unlike entropy, which is constrained by its definition in closed, equilibrium systems, we introduce Kolmogorov Complexity (KC) and Fractal Dimension (FD) as quantifiable, scalable metrics that capture the emergence of organized complexity in open, non-equilibrium systems. We examine two major classes of systems: (1) living systems, revisiting Schrödinger’s insight that biological growth may locally reduce entropy while increasing structural order, and (2) irreversible natural processes such as oxidation, diffusion, and material aging. We formalize a Universal Law, expressed as a non-decreasing function Ω(t) = α·KC(t) + β·FD(t), which parallels the Second Law of Thermodynamics but tracks the rise in algorithmic and geometric complexity. This framework integrates principles from complexity science, providing a robust, mathematically grounded lens for describing the directional evolution of systems across scales, from crystals to cognition.
(This article belongs to the Section Complexity)
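To make Ω(t) concrete, a hedged sketch with stand-in estimators: compressed size as a Kolmogorov Complexity proxy and a box-counting slope as a Fractal Dimension estimate. The weights α, β and the estimators themselves are assumptions for illustration, not the paper's prescriptions:

```python
# Hedged sketch: Omega = alpha*KC + beta*FD with crude stand-in estimators.
import zlib
import numpy as np

def kc_proxy(data: bytes) -> int:
    """Compressed size in bytes: a crude upper-bound proxy for KC."""
    return len(zlib.compress(data, 9))

def fd_boxcount(y: np.ndarray, grids=(2, 4, 8, 16, 32)) -> float:
    """Box-counting estimate of the fractal dimension of a 1D signal's graph."""
    x = np.linspace(0.0, 1.0, len(y))
    y = (y - y.min()) / (np.ptp(y) + 1e-12)          # rescale graph to unit square
    counts = [len({(int(xi * k), int(yi * k)) for xi, yi in zip(x, y)})
              for k in grids]
    slope, _ = np.polyfit(np.log(grids), np.log(counts), 1)
    return float(slope)

rng = np.random.default_rng(3)
walk = np.cumsum(rng.standard_normal(4096))          # an irregular trajectory
alpha, beta = 1.0, 100.0                             # assumed weights
omega = alpha * kc_proxy(walk.tobytes()) + beta * fd_boxcount(walk)
print(omega)                                         # FD of a random-walk graph ~1.5
```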
21 pages, 753 KB  
Article
Learnable Convolutional Attention Network for Unsupervised Knowledge Graph Entity Alignment
by Weishan Cai and Wenjun Ma
Entropy 2025, 27(9), 924; https://doi.org/10.3390/e27090924 - 3 Sep 2025
Abstract
The success of current entity alignment (EA) tasks largely depends on the supervision information provided by labeled data. Considering the cost of labeled data, most supervised methods are challenging to apply in practical scenarios. Therefore, an increasing number of works based on contrastive learning, active learning, or other deep learning techniques have been developed to address the performance bottleneck caused by the lack of labeled data. However, existing unsupervised EA methods still face certain limitations: either their modeling complexity is high or they fail to balance the effectiveness and practicality of alignment. To overcome these issues, we propose a learnable convolutional attention network for unsupervised entity alignment, named LCA-UEA. Specifically, LCA-UEA performs convolution operations before the attention mechanism, ensuring the acquisition of structural information and avoiding the superposition of redundant information. Then, to efficiently filter out invalid neighborhood information of aligned entities, LCA-UEA designs a relation structure reconstruction method based on potential matching relations, thereby enhancing the usability and scalability of the EA method. Notably, a similarity function based on consistency is proposed to better measure the similarity of candidate entity pairs. Finally, we conducted extensive experiments on three datasets of different sizes and types (cross-lingual and monolingual) to verify the superiority of LCA-UEA. Experimental results demonstrate that LCA-UEA significantly improves alignment accuracy, outperforming 25 supervised or unsupervised methods and improving on the best baseline by 6.4% in Hits@1 in the best case.
(This article belongs to the Special Issue Entropy in Machine Learning Applications, 2nd Edition)
34 pages, 2491 KB  
Article
Simulating Public Opinion: Comparing Distributional and Individual-Level Predictions from LLMs and Random Forests
by Fernando Miranda and Pedro Paulo Balbi
Entropy 2025, 27(9), 923; https://doi.org/10.3390/e27090923 - 2 Sep 2025
Abstract
Understanding and modeling the flow of information in human societies is essential for capturing phenomena such as polarization, opinion formation, and misinformation diffusion. Traditional agent-based models often rely on simplified behavioral rules that fail to capture the nuanced and context-sensitive nature of human decision-making. In this study, we explore the potential of Large Language Models (LLMs) as data-driven, high-fidelity agents capable of simulating individual opinions under varying informational conditions. Conditioning LLMs on real survey data from the 2020 American National Election Studies (ANES), we investigate their ability to predict individual-level responses across a spectrum of political and social issues in a zero-shot setting, without any training on the survey outcomes. Using Jensen–Shannon distance to quantify divergence in opinion distributions and F1-score to measure predictive accuracy, we compare LLM-generated simulations to those produced by a supervised Random Forest model. While performance at the individual level is comparable, LLMs consistently produce aggregate opinion distributions closer to the empirical ground truth. These findings suggest that LLMs offer a promising new method for simulating complex opinion dynamics and modeling the probabilistic structure of belief systems in computational social science.
(This article belongs to the Section Multidisciplinary Applications)
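For reference, the two evaluation metrics named in the abstract, computed with standard libraries on toy data (the distributions and labels below are assumptions, not ANES values):

```python
# Hedged sketch: Jensen-Shannon distance for distributional agreement and
# macro F1 for individual-level predictive accuracy, on made-up data.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.metrics import f1_score

empirical = np.array([0.45, 0.35, 0.20])      # observed answer shares (assumed)
simulated = np.array([0.50, 0.30, 0.20])      # model-generated answer shares
print(jensenshannon(empirical, simulated))    # distributional divergence

y_true = [0, 1, 1, 2, 0, 1]                   # individual answers (assumed)
y_pred = [0, 1, 2, 2, 0, 1]
print(f1_score(y_true, y_pred, average="macro"))
```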
35 pages, 638 KB  
Article
On the Relativity of Quantumness as Implied by Relativity of Arithmetic and Probability
by Marek Czachor
Entropy 2025, 27(9), 922; https://doi.org/10.3390/e27090922 - 2 Sep 2025
Abstract
A hierarchical structure of isomorphic arithmetics is defined by a bijection $g_{\mathbb{R}}:\mathbb{R}\to\mathbb{R}$. It entails a hierarchy of probabilistic models, with probabilities $p_k = g^{(k)}(p)$, where $g$ is the restriction of $g_{\mathbb{R}}$ to the interval $[0,1]$, $g^{(k)}$ is the $k$th iterate of $g$, and $k$ is an arbitrary integer (positive, negative, or zero; $g^{(0)}(x)=x$). The relation between $p$ and $g^{(k)}(p)$, $k>0$, is analogous to the one between probability and a neural activation function. For $k\gg 1$, $g^{(k)}(p)$ is essentially white noise (all processes are equally probable). The choice of $k=0$ is physically as arbitrary as the choice of the origin of a line in space; hence what we regard as experimental binary probabilities, $p_{\mathrm{exp}}$, can be given by any $k$, $p_{\mathrm{exp}} = g^{(k)}(p)$. Quantum binary probabilities are defined by $g(p)=\sin^2\!\left(\frac{\pi}{2}p\right)$. With this concrete form of $g$, one finds that any two neighboring levels of the hierarchy are related to each other in a quantum–subquantum relation. In this sense, any model in the hierarchy is probabilistically quantum in appropriate arithmetic and calculus, and the other way around: any model is subquantum in appropriate arithmetic and calculus. Probabilities involving more than two events are constructed by means of trees of binary conditional probabilities. We discuss from this perspective singlet-state probabilities and Bell inequalities. We find that singlet-state probabilities involve simultaneously three levels of the hierarchy: quantum, hidden, and macroscopic. As a by-product of the analysis, we discover a new (arithmetic) interpretation of the Fubini–Study geodesic distance.
(This article belongs to the Special Issue Quantum Measurement)
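A small numerical illustration of the map in the abstract: iterating the inverse map $g^{-1}(p) = \frac{2}{\pi}\arcsin\sqrt{p}$ drives every $p$ in $(0,1)$ toward the fair-coin value $1/2$; which sign of $k$ in the paper's hierarchy this corresponds to depends on its convention:

```python
# Hedged sketch: g(p) = sin^2(pi*p/2) and its inverse. Repeated application
# of the inverse flattens any binary probability toward 1/2 (fair coin),
# the "all processes equally probable" regime mentioned in the abstract.
import numpy as np

def g(p):
    return np.sin(np.pi * p / 2.0) ** 2

def g_inv(p):
    return (2.0 / np.pi) * np.arcsin(np.sqrt(p))

p = 0.9
for k in range(1, 6):
    p = g_inv(p)
    print(k, p)                               # 0.795, 0.701, ... toward 0.5

assert abs(g(g_inv(0.3)) - 0.3) < 1e-12       # inverse-pair sanity check
```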
18 pages, 6001 KB  
Article
A Graph Contrastive Learning Method for Enhancing Genome Recovery in Complex Microbial Communities
by Guo Wei and Yan Liu
Entropy 2025, 27(9), 921; https://doi.org/10.3390/e27090921 - 31 Aug 2025
Abstract
Accurate genome binning is essential for resolving microbial community structure and functional potential from metagenomic data. However, existing approaches—primarily reliant on tetranucleotide frequency (TNF) and abundance profiles—often perform sub-optimally in the face of complex community compositions, low-abundance taxa, and long-read sequencing datasets. To address these limitations, we present MBGCCA, a novel metagenomic binning framework that synergistically integrates graph neural networks (GNNs), contrastive learning, and information-theoretic regularization to enhance binning accuracy, robustness, and biological coherence. MBGCCA operates in two stages: (1) multimodal information integration, where TNF and abundance profiles are fused via a deep neural network trained using a multi-view contrastive loss, and (2) self-supervised graph representation learning, which leverages assembly graph topology to refine contig embeddings. The contrastive learning objective follows the InfoMax principle by maximizing mutual information across augmented views and modalities, encouraging the model to extract globally consistent and high-information representations. By aligning perturbed graph views while preserving topological structure, MBGCCA effectively captures both global genomic characteristics and local contig relationships. Comprehensive evaluations using both synthetic and real-world datasets—including wastewater and soil microbiomes—demonstrate that MBGCCA consistently outperforms state-of-the-art binning methods, particularly in challenging scenarios marked by sparse data and high community complexity. These results highlight the value of entropy-aware, topology-preserving learning for advancing metagenomic genome reconstruction.
(This article belongs to the Special Issue Network-Based Machine Learning Approaches in Bioinformatics)
34 pages, 10418 KB  
Article
Entropy-Fused Enhanced Symplectic Geometric Mode Decomposition for Hybrid Power Quality Disturbance Recognition
by Chencheng He, Wenbo Wang, Xuezhuang E, Hao Yuan and Yuyi Lu
Entropy 2025, 27(9), 920; https://doi.org/10.3390/e27090920 - 30 Aug 2025
Abstract
Electrical networks face operational challenges from disturbances that degrade power quality. Since disturbance signatures directly affect classifier performance, optimized feature selection becomes critical for accurate power quality assessment. The pursuit of robust feature extraction inevitably constrains the dimensionality of the discriminative feature set, yet if the feature vector dimension is too high, the complexity of the recognition model increases and the recognition speed drops. Building upon these requirements, in this paper we propose a feature extraction framework that combines improved symplectic geometric mode decomposition, refined generalized multiscale quantum entropy, and refined generalized multiscale reverse dispersion entropy. Firstly, based on the intrinsic properties of power quality disturbance (PQD) signals, the embedding dimension of symplectic geometric mode decomposition and the adaptive mode component screening method are improved, and the PQD signal undergoes tri-band decomposition via improved symplectic geometric mode decomposition (ISGMD), yielding distinct high-frequency, medium-frequency, and low-frequency components. Secondly, with the enhanced symplectic geometric mode decomposition as a foundation, the perturbation features are extracted by combining refined generalized multiscale quantum entropy and refined generalized multiscale reverse dispersion entropy to construct high-precision, low-dimensional feature vectors. Finally, a double-layer composite power quality disturbance model is constructed with a deep extreme learning machine algorithm to identify power quality disturbance signals. After analysis and comparison, the proposed method is found to be effective even in a strong-noise environment with a single interference, and the average recognition accuracy across different noise environments is 97.3%. Under complex conditions involving multiple types of mixed perturbations, the average recognition accuracy remains above 96%. Compared with the existing CNN + LSTM method, the recognition accuracy of the proposed method is improved by 3.7%. In addition, its recognition accuracy in scenarios with small data samples is significantly better than that of traditional methods, such as single CNN and LSTM models. The experimental results of the simulation and measured data show that the combined feature extraction methodology reliably extracts discriminative feature vectors from PQD signals, that the double-layer combined classification model further enhances recognition capability, and that the proposed strategy outperforms traditional methods in classification accuracy and robustness while offering a degree of noise resistance: in a 30 dB white-noise environment, the average classification accuracy is 99.10% on a simulation database containing 63 PQD types and 99.03% on test data from a hardware platform, with the approach’s dependability further evidenced by rigorous validation experiments.
24 pages, 1687 KB  
Article
A Novel Co-Designed Multi-Domain Entropy and Its Dynamic Synapse Classification Approach for EEG Seizure Detection
by Guanyuan Feng, Jiawen Li, Yicheng Zhong, Shuang Zhang, Xin Liu, Mang I Vai, Kaihan Lin, Xianxian Zeng, Jun Yuan and Rongjun Chen
Entropy 2025, 27(9), 919; https://doi.org/10.3390/e27090919 - 30 Aug 2025
Abstract
Automated electroencephalography (EEG) seizure detection is of clear value in clinical medicine. However, current approaches often lack comprehensive feature extraction and rely on generic classifier architectures, which limits their effectiveness in complex real-world scenarios. To overcome this traditional coupling between feature representation and classifier [...] Read more.
Automated electroencephalography (EEG) seizure detection is of clear value in clinical medicine. However, current approaches often lack comprehensive feature extraction and rely on generic classifier architectures, which limits their effectiveness in complex real-world scenarios. To overcome this traditional coupling between feature representation and classifier development, this study proposes DySC-MDE, an end-to-end co-designed framework for seizure detection. A novel multi-domain entropy (MDE) representation is constructed at the feature level based on amplitude-sensitive permutation entropy (ASPE), which adopts entropy-based quantifiers to characterize the nonlinear dynamics of EEG signals across diverse domains. Specifically, ASPE is extended into three distinct variants, refined composite multiscale ASPE (RCMASPE), discrete wavelet transform-based hierarchical ASPE (HASPE-DWT), and time-shift multiscale ASPE (TSMASPE), to represent various temporal and spectral dynamics of EEG signals. At the classifier level, a dynamic synapse classifier (DySC) is proposed to align with the structure of the MDE features. In particular, DySC includes three parallel, specialized processing pathways, each tailored to a specific entropy variant. Their outputs are adaptively fused through a dynamic synaptic gating mechanism, which enhances the model’s ability to integrate heterogeneous information sources. To fully evaluate the effectiveness of the proposed method, extensive experiments are conducted on two public datasets using cross-validation. For the binary classification task, DySC-MDE achieves accuracies of 97.50% and 98.93% and F1-scores of 97.58% and 98.87% on the Bonn and CHB-MIT datasets, respectively. Moreover, in the three-class task, the proposed method maintains a high F1-score of 96.83%, demonstrating strong discriminative performance and generalization across categories. These results show that jointly optimizing nonlinear dynamic feature representations and structure-aware classifiers can further improve the analysis of complex epileptic EEG signals, opening a novel direction for robust seizure detection. Full article
(This article belongs to the Special Issue Entropy Analysis of ECG and EEG Signals)
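ASPE and its RCMASPE, HASPE-DWT, and TSMASPE extensions are contributions of the paper, so the sketch below instead implements the closely related, well-established weighted permutation entropy, which likewise injects amplitude information into ordinal patterns by weighting each pattern with the local variance of its embedding vector; the function name and defaults are illustrative assumptions, not the authors' ASPE.

```python
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, m=3, delay=1):
    """Permutation entropy with each ordinal pattern weighted by the
    variance (amplitude) of its embedding vector, normalized to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * delay
    weights = {}
    total = 0.0
    for i in range(n):
        v = x[i:i + (m - 1) * delay + 1:delay]   # embedding vector of length m
        pat = tuple(np.argsort(v))               # ordinal (permutation) pattern
        w = float(np.var(v))                     # amplitude information
        weights[pat] = weights.get(pat, 0.0) + w
        total += w
    if total == 0.0:                             # constant signal
        return 0.0
    p = np.array([w / total for w in weights.values()])
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))

# A regular sine scores low; white noise scores close to 1.
t = np.linspace(0, 10, 2000)
print(weighted_permutation_entropy(np.sin(2 * np.pi * t)))
print(weighted_permutation_entropy(np.random.randn(2000)))
```

Multiscale, hierarchical, and time-shift variants of this quantity are then obtained by applying it to coarse-grained, wavelet-subband, or shifted copies of the signal, which is the general pattern the paper's three ASPE extensions follow.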
16 pages, 1358 KB  
Article
The Hungry Daemon: Does an Energy-Harvesting Active Particle Have to Obey the Second Law of Thermodynamics?
by Simon Bienewald, Diego M. Fieguth and James R. Anglin
Entropy 2025, 27(9), 918; https://doi.org/10.3390/e27090918 - 30 Aug 2025
Abstract
Thought experiments like Maxwell’s Demon or the Smoluchowski–Feynman Ratchet can help in pursuing the microscopic origin of the Second Law of Thermodynamics. Here we present a mechanical system more sophisticated than a ratchet: a Hamiltonian (non-Brownian) active particle that can harvest [...] Read more.
Thought experiments like Maxwell’s Demon or the Smoluchowski–Feynman Ratchet can help in pursuing the microscopic origin of the Second Law of Thermodynamics. Here we present a mechanical system more sophisticated than a ratchet: a Hamiltonian (non-Brownian) active particle that can harvest energy from an environment which may be in thermal equilibrium at a single temperature. We show that while a phenomenological description would seem to allow the system to operate as a Perpetual Motion Machine of the Second Kind, a full mechanical analysis confirms that this is impossible, and that perpetual energy harvesting within a mechanical system can only occur if the environment has an energetic population inversion similar to that of a lasing medium. Full article
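The closing analogy with a lasing medium can be made concrete with a textbook relation (not a result derived in the abstract): for a two-level environment with energy gap ΔE, a Boltzmann occupation ratio with the upper level more populated corresponds to a negative effective temperature, which is precisely the regime in which sustained energy extraction stops contradicting the Second Law.

```latex
% Boltzmann occupation ratio of a two-level environment with gap \Delta E:
\frac{N_2}{N_1} \;=\; e^{-\Delta E / (k_B T_{\mathrm{eff}})}
\qquad\Longrightarrow\qquad
T_{\mathrm{eff}} \;=\; -\,\frac{\Delta E}{k_B \ln (N_2/N_1)} .
% Population inversion, N_2 > N_1, gives \ln(N_2/N_1) > 0 and hence
% T_{\mathrm{eff}} < 0: an inverted environment is not a single-temperature
% thermal bath, so harvesting energy from it does not violate the Second Law.
```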