Entropy, Volume 27, Issue 10 (October 2025) – 93 articles

Cover Story: This cover illustrates the study’s framework of combining EEG entropy analysis with machine learning to distinguish psychogenic non-epileptic seizures (PNES) from epileptic seizures (ES). Entropy measures capture changes in neural complexity across interictal and preictal states, showing that entropy in PNES is higher during interictal periods but lower during preictal periods compared with ES. The study workflow includes data collection, preprocessing, entropy computation, channel importance, and classification. This study underscores how entropy dynamics can improve diagnostic differentiation between PNES and ES.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official version of record. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
31 pages, 1868 KB  
Article
Information Content and Maximum Entropy of Compartmental Systems in Equilibrium
by Holger Metzler and Carlos A. Sierra
Entropy 2025, 27(10), 1085; https://doi.org/10.3390/e27101085 - 21 Oct 2025
Abstract
Mass-balanced compartmental systems defy classical deterministic entropy measures since both metric and topological entropy vanish in dissipative dynamics. By interpreting open compartmental systems as absorbing continuous-time Markov chains that describe the random journey of a single representative particle, we allow established information-theoretic principles to be applied to this particular type of deterministic dynamical system. In particular, path entropy quantifies the uncertainty of complete trajectories, while entropy rates measure the average uncertainty of instantaneous transitions. Using Shannon’s information entropy, we derive closed-form expressions for these quantities in equilibrium and extend the maximum entropy principle (MaxEnt) to the problem of model selection in compartmental dynamics. This information-theoretic framework not only provides a systematic way to address equifinality but also reveals hidden structural properties of complex systems such as the global carbon cycle.
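The entropy rate described above has a compact analogue that is easy to compute for a small chain. A minimal numpy sketch (not the authors' code; the paper works with continuous-time chains, and the two-state discrete-time transition matrix here is an arbitrary example):

```python
import numpy as np

def entropy_rate(P):
    """Shannon entropy rate (nats per step) of an ergodic Markov chain
    with row-stochastic transition matrix P."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    # H = sum_i pi_i * H(P_i.), with the convention 0 * log 0 = 0.
    logs = np.where(P > 0, np.log(P), 0.0)
    return float(-(pi[:, None] * P * logs).sum())

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(entropy_rate(P))  # ≈ 0.386 nats per step
```

The path-entropy and MaxEnt machinery of the paper builds on exactly this kind of per-transition uncertainty, averaged over the particle's stationary occupation.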
23 pages, 3142 KB  
Article
Cross-Group EEG Emotion Recognition Based on Phase Space Reconstruction Topology
by Xuanpeng Zhu, Mu Zhu, Dong Li and Yu Song
Entropy 2025, 27(10), 1084; https://doi.org/10.3390/e27101084 - 20 Oct 2025
Abstract
Due to the interference of artifacts and the nonlinearity of electroencephalogram (EEG) signals, the extraction of representational features has become a challenge in EEG emotion recognition. In this work, we reduce the dimensionality of phase space trajectories by introducing local linear embedding (LLE), which projects the trajectories onto a 2-D plane while preserving their local topological structure, and we construct 16 topological features from different perspectives to quantitatively describe, at multiple scales, the nonlinear dynamic patterns induced by emotions. Using independent feature evaluation, we select core features with significant discriminative power and combine the activation patterns of brain topography with model gain ranking to optimize the electrode channels. Validation on the SEED and HIED datasets yielded subject-dependent average accuracies of 90.33% for normal-hearing subjects (3-Class) and 77.17% for hearing-impaired subjects (4-Class), and we also used differential entropy (DE) features to explore the potential of integrating topological features. By quantifying topological features, the 6-Class task achieved an average accuracy of 77.5% in distinguishing emotions across different subject groups.
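The first step of such an analysis, reconstructing a phase-space trajectory from a single channel, can be sketched directly. A hedged illustration (the embedding dimension, delay, and sine-wave stand-in signal are arbitrary choices, not values from the paper):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Takens delay embedding: map a 1-D signal to points
    [x[t], x[t+tau], ..., x[t+(m-1)*tau]] in m-dimensional phase space."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

x = np.sin(np.linspace(0, 8 * np.pi, 500))   # stand-in for one EEG channel
traj = delay_embed(x, m=3, tau=7)            # 3-D phase-space trajectory
print(traj.shape)                            # (486, 3)
```

The paper then projects such trajectories to 2-D with LLE and extracts topological features; the embedding above is only the common preprocessing step.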
23 pages, 321 KB  
Article
Nonlinear Shrinkage Estimation of Higher-Order Moments for Portfolio Optimization Under Uncertainty in Complex Financial Systems
by Wanbo Lu and Zhenzhong Tian
Entropy 2025, 27(10), 1083; https://doi.org/10.3390/e27101083 - 20 Oct 2025
Abstract
This paper develops a nonlinear shrinkage estimation method for higher-order moment matrices within a multifactor model framework and establishes its asymptotic consistency under high-dimensional settings. The approach extends the nonlinear shrinkage methodology from covariance to higher-order moments, thereby mitigating the “curse of dimensionality” and alleviating estimation uncertainty in high-dimensional settings. Monte Carlo simulations demonstrate that, compared with linear shrinkage estimation, the proposed method substantially reduces mean squared errors (MSEs) and achieves greater Percentage Relative Improvement in Average Loss (PRIAL) for covariance and cokurtosis estimates; relative to sample estimation, it delivers significant gains in mitigating uncertainty for covariance, coskewness, and cokurtosis. An empirical portfolio analysis incorporating higher-order moments shows that, when the asset universe is large, portfolios based on the nonlinear shrinkage estimator outperform those constructed using linear shrinkage and sample estimators, achieving higher annualized return and Sharpe ratio with lower kurtosis and maximum drawdown, thus providing stronger resilience against uncertainty in complex financial systems. In smaller asset universes, nonlinear shrinkage portfolios perform on par with their linear shrinkage counterparts. These findings highlight the potential of nonlinear shrinkage techniques to reduce uncertainty in higher-order moment estimation and to improve portfolio performance across diverse and complex investment environments.
(This article belongs to the Special Issue Complexity and Synchronization in Time Series)
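The linear shrinkage baseline that the paper's nonlinear method generalizes is simple to sketch for the covariance case. An illustrative numpy snippet (the identity target and fixed intensity are textbook choices, not the paper's estimator):

```python
import numpy as np

def linear_shrink(S, lam):
    """Linear shrinkage of a sample covariance S toward a scaled
    identity target: the classical baseline that nonlinear shrinkage
    refines by treating each eigenvalue separately."""
    p = S.shape[0]
    mu = np.trace(S) / p                  # average sample eigenvalue
    return (1 - lam) * S + lam * mu * np.eye(p)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))         # n << p: ill-conditioned regime
S = np.cov(X, rowvar=False)               # singular sample covariance
S_shrunk = linear_shrink(S, lam=0.5)
print(np.linalg.eigvalsh(S).min(), np.linalg.eigvalsh(S_shrunk).min())
```

With n = 20 observations of p = 50 assets the sample covariance is singular; shrinkage pulls its smallest eigenvalues away from zero, which is what stabilizes downstream portfolio weights.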
24 pages, 2934 KB  
Article
Selected Methods for Designing Monetary and Fiscal Targeting Rules Within the Policy Mix Framework
by Agnieszka Przybylska-Mazur
Entropy 2025, 27(10), 1082; https://doi.org/10.3390/e27101082 - 19 Oct 2025
Abstract
In the existing literature, targeting rules are typically determined separately for monetary and fiscal policy. This article proposes a framework for determining targeting rules that account for the policy mix of both monetary and fiscal policy. The aim of this study is to compare selected optimization methods used to derive targeting rules as solutions to a constrained minimization problem. The constraints are defined by a model that incorporates a monetary and fiscal policy mix. The optimization methods applied include the linear–quadratic regulator, Bellman dynamic programming, and Euler’s calculus of variations. The resulting targeting rules are solutions to a discrete-time optimization problem with a finite horizon and without discounting. The derived rules allow for the calculation of optimal values for the interest rate and the balance-to-GDP ratio, which ensure price stability, a stable debt-to-GDP ratio, and the desired GDP growth dynamics. Notably, all the optimization methods used yield the same optimal vector of decision variables, and the specific method applied does not affect the form of the targeting rules.
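The finite-horizon, undiscounted linear–quadratic regulator mentioned above reduces to a backward Riccati recursion. A generic numpy sketch (scalar toy dynamics, not the paper's policy-mix model):

```python
import numpy as np

def lqr_gains(A, B, Q, R, T):
    """Finite-horizon discrete-time LQR without discounting:
    backward Riccati recursion, returning feedback gains K_t
    for the rule u_t = -K_t x_t."""
    P = Q.copy()                          # terminal cost P_T = Q
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                    # ordered t = 0 .. T-1

# Scalar toy system x_{t+1} = x_t + u_t with unit state/control weights.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.eye(1); R = np.eye(1)
K0 = lqr_gains(A, B, Q, R, T=50)[0]
print(K0)  # ≈ 0.618: the stationary gain (golden-ratio Riccati fixed point)
```

The paper's point that LQR, Bellman dynamic programming, and the calculus of variations all yield the same decision rule is visible here: the recursion above is exactly the Bellman backward pass for the quadratic value function.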
20 pages, 363 KB  
Article
A Set of Master Variables for the Two-Star Random Graph
by Pawat Akara-pipattana and Oleg Evnin
Entropy 2025, 27(10), 1081; https://doi.org/10.3390/e27101081 - 19 Oct 2025
Abstract
The two-star random graph is the simplest exponential random graph model with nontrivial interactions between the graph edges. We propose a set of auxiliary variables that control the thermodynamic limit where the number of vertices N tends to infinity. Such ‘master variables’ are usually highly desirable in treatments of ‘large N’ statistical field theory problems. For the dense regime, when a finite fraction of all possible edges are filled, this construction recovers the mean-field solution of Park and Newman, but with explicit control over the 1/N corrections. We use this advantage to compute the first subleading correction to the Park–Newman result, which encodes the finite, nonextensive contribution to the free energy. For the sparse regime with a finite mean degree, we obtain a very compact derivation of the Annibale–Courtney solution, originally developed with the use of functional integrals, which are comfortably bypassed in our treatment.
(This article belongs to the Section Statistical Physics)
20 pages, 2135 KB  
Article
Coupled Dynamics of Information–Epidemic Spreading with Resource Allocation and Transmission on Multi-Layer Networks
by Qian Yin, Zhishuang Wang, Kaiyao Wang and Zhiyong Hong
Entropy 2025, 27(10), 1080; https://doi.org/10.3390/e27101080 - 19 Oct 2025
Abstract
The spread of epidemic-associated panic information through online social platforms, as well as the allocation and utilization of therapeutic defensive resources in reality, directly influences the transmission of infectious diseases. Moreover, how to reasonably allocate resources to effectively suppress epidemic spread remains a problem that requires further investigation. To address this, we construct a coupled three-layer network framework to explore the complex co-evolutionary mechanisms among false panic information, therapeutic defensive resource transmission, and disease propagation. In the model, individuals can obtain therapeutic defensive resources either through centralized distribution by government agencies or through interpersonal assistance, while the presence of false panic information reduces the willingness of neighbors to share resources. Using the microscopic Markov chain approach, we formulate the dynamical equations of the system and analyze the epidemic threshold. Furthermore, systematic simulation analyses are carried out to evaluate how panic information, resource-sharing willingness, centralized distribution strategies, and resource effectiveness affect epidemic prevalence and threshold levels. For example, under a representative parameter setting, the infection prevalence decreases from 0.18 under the random allocation strategy to 0.03 when resources are allocated exclusively to infected individuals. Moreover, increasing the total supply of resources under high treatment efficiency raises the epidemic threshold by approximately 2.5 times, effectively delaying the outbreak. These quantitative results highlight the significant role of allocation strategies, resource supply, and treatment efficiency in suppressing epidemic transmission.
(This article belongs to the Special Issue Information Spreading Dynamics in Complex Networks)
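The epidemic threshold computed via the microscopic Markov chain approach generalizes a classical single-layer result: the critical infection rate scales with the inverse of the contact network's largest eigenvalue. A baseline sketch (single layer only; the paper's three-layer coupled threshold is more involved):

```python
import numpy as np

def sis_threshold(adj, mu):
    """Classical quenched mean-field / MMCA epidemic threshold for a
    single contact layer: beta_c = mu / lambda_max(A), with recovery
    rate mu and adjacency matrix A. The coupled three-layer threshold
    in the paper reduces to this form when the other layers are absent."""
    lam_max = np.linalg.eigvalsh(adj.astype(float)).max()
    return mu / lam_max

# Complete graph on 4 nodes: lambda_max = 3.
A = np.ones((4, 4)) - np.eye(4)
print(sis_threshold(A, mu=0.3))  # ≈ 0.1
```

Raising the threshold, e.g. by resource allocation as in the paper, means pushing beta_c upward so that a given infection rate falls below it.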
21 pages, 2677 KB  
Article
Compatibility of a Competition Model for Explaining Eye Fixation Durations During Free Viewing
by Carlos M. Gómez, María A. Altahona-Medina, Gabriela Barrera and Elena I. Rodriguez-Martínez
Entropy 2025, 27(10), 1079; https://doi.org/10.3390/e27101079 - 18 Oct 2025
Abstract
Inter-saccadic times or eye fixation durations (EFDs) are relatively stable at around 250 ms, equivalent to four saccades per second. However, the mean and standard deviation are not sufficient to describe the frequency histogram distribution of EFD. The exGaussian has been proposed for fitting the EFD histograms. The present report fits a competition model (C model) between the saccadic and the fixation network to the EFD histograms. This model operates at a conceptual level (the computational level in Marr’s classification). Both models were fitted to EFDs from an open database of 179,473 eye fixations. The C model, along with the exGaussian model, proved compatible with the observed EFD distributions. The two parameters of the C model can be ascribed to (i) a refractory period for new saccades, modeled by a sigmoid equation (the A parameter), and (ii) the ps parameter, related to the continuous competition between the saccadic network (linked to the saliency map) and the eye-fixation network, modeled through a geometric probability density function. The model suggests that competition between neural networks may be an organizational property of brain networks that facilitates decision processes for action and perception. In visual scene scanning, the C-model dynamics justify the early post-saccadic stability of the foveated image and the subsequent exploration of a broad region of the observed image. The code to extract the data and run the model is provided in the Supplementary Materials. Additionally, the entropy of EFD is reported.
(This article belongs to the Special Issue Dynamics in Biological and Social Networks)
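The exGaussian density used as the comparison model above is a Gaussian convolved with an exponential. A self-contained sketch of the density and a sanity check that it integrates to one (the parameter values are arbitrary illustrations, not the paper's fits):

```python
import math
import numpy as np

def exgauss_pdf(x, mu, sigma, tau):
    """Ex-Gaussian density: a Gaussian (mean mu, sd sigma) convolved
    with an exponential of mean tau; a standard model for skewed
    fixation-duration histograms."""
    z = (x - mu) / sigma - sigma / tau
    phi = 0.5 * math.erfc(-z / math.sqrt(2))   # standard normal CDF
    return (1.0 / tau) * math.exp((mu - x) / tau + sigma**2 / (2 * tau**2)) * phi

xs = np.arange(0.0, 1500.0, 0.5)               # EFD grid in ms
pdf = np.array([exgauss_pdf(x, mu=200.0, sigma=30.0, tau=100.0) for x in xs])
print(pdf.sum() * 0.5)                          # ≈ 1: the density integrates to one
```

Fitting mu, sigma, and tau to an EFD histogram (e.g. by maximum likelihood) is the exGaussian procedure against which the C model is compared.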
18 pages, 2486 KB  
Article
Optimization of Exergy Output Rate in a Supercritical CO2 Brayton Cogeneration System
by Jiachi Shan, Shaojun Xia and Qinglong Jin
Entropy 2025, 27(10), 1078; https://doi.org/10.3390/e27101078 - 18 Oct 2025
Abstract
To address low energy utilization efficiency and severe exergy destruction from direct discharge of high-temperature turbine exhaust, this study proposes a supercritical CO2 Brayton cogeneration system with a series-connected hot water heat exchanger for stepwise waste heat recovery. Based on finite-time thermodynamics, a physical model is established that provides a more realistic framework by incorporating finite-temperature-difference heat transfer, irreversible compression, and expansion losses. Aiming to maximize the exergy output rate under the constraint of fixed total thermal conductance, the decision variables, including working fluid mass flow rate, pressure ratio, and thermal conductance distribution ratio, are optimized. Optimization yields a 16.06% increase in exergy output rate compared with the baseline design. The optimal parameter combination is a mass flow rate of 79 kg/s and a pressure ratio of 5.64, with thermal conductance allocation increased for the regenerator and cooler and decreased for the heater. The obtained results could provide theoretical guidance for enhancing energy efficiency and sustainability in S-CO2 cogeneration systems, with potential applications in industrial waste heat recovery and power generation.
(This article belongs to the Special Issue Thermodynamic Optimization of Energy Systems)
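The exergy output rate being maximized builds on the Carnot-factor exergy of a heat flow. A one-liner sketch with illustrative temperatures (not values from the paper):

```python
def heat_exergy_rate(q_dot, t_hot, t0):
    """Exergy rate (kW) carried by a heat flow q_dot (kW) delivered at
    temperature t_hot (K) with environment temperature t0 (K):
    Ex = Q * (1 - T0/T), the Carnot factor."""
    return q_dot * (1.0 - t0 / t_hot)

# 100 kW of turbine-exhaust heat at 600 K, environment at 300 K:
print(heat_exergy_rate(100.0, 600.0, 300.0))  # → 50.0 kW
```

The system-level objective sums such terms over the power output and the hot-water heat delivery, which is why allocating conductance among heater, regenerator, and cooler shifts the optimum.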
17 pages, 478 KB  
Article
A Bayesian Model for Paired Data in Genome-Wide Association Studies with Application to Breast Cancer
by Yashi Bu, Min Chen, Zhenyu Xuan and Xinlei Wang
Entropy 2025, 27(10), 1077; https://doi.org/10.3390/e27101077 - 18 Oct 2025
Abstract
Complex human diseases, including cancer, are linked to genetic factors. Genome-wide association studies (GWASs) are powerful for identifying genetic variants associated with cancer but are limited by their reliance on case–control data. We propose approaches to expanding GWAS by using tumor and paired normal tissues to investigate somatic mutations. We apply penalized maximum likelihood estimation for single-marker analysis and develop a Bayesian hierarchical model to integrate multiple markers, identifying SNP sets grouped by genes or pathways, improving detection of moderate-effect SNPs. Applied to breast cancer data from The Cancer Genome Atlas (TCGA), both single- and multiple-marker analyses identify associated genes, with multiple-marker analysis providing more consistent results with external resources. The Bayesian model significantly increases the chance of new discoveries.
26 pages, 5031 KB  
Article
Analysis of Price Dynamic Competition and Stability in Cross-Border E-Commerce Supply Chain Channels Empowered by Blockchain Technology
by Le-Bin Wang, Jian Chai and Lu-Ying Wen
Entropy 2025, 27(10), 1076; https://doi.org/10.3390/e27101076 - 16 Oct 2025
Abstract
Based on the perspective of multi-stage dynamic competition, this study constructs a discrete dynamic model of price competition between the “direct sales” and “resale” channels in cross-border e-commerce (CBEC) under three blockchain deployment modes. Drawing on nonlinear dynamics theory, the Nash equilibrium of the system and its stability conditions are examined. Using numerical simulations, the effects of factors such as the channel price adjustment speed, tariff rate, and commission ratio on the dynamic evolution, entropy, and stability of the system under the empowerment of blockchain technology are investigated. Furthermore, the impact of noise factors on system stability and the corresponding chaos control strategies are further analyzed. This study finds that a single-channel deployment tends to induce asymmetric system responses, whereas dual-channel collaborative deployment helps enhance strategic coordination. An increase in price adjustment speed, tariffs, and commission rates can drive the system’s pricing dynamics from a stable state into chaos, thereby raising its entropy, while the adoption of blockchain technology tends to weaken dynamic stability. Therefore, after deploying blockchain technology, each channel should make its pricing decisions more cautiously. Moderate noise can exert a stabilizing effect, whereas excessive disturbances may cause the system to diverge. Hence, enterprises should carefully assess the magnitude of disturbances and capitalize on the positive effects brought about by moderate fluctuations. In addition, the delayed feedback control method can effectively suppress chaotic fluctuations and enhance system stability, demonstrating strong adaptability across different blockchain deployment modes.
(This article belongs to the Section Multidisciplinary Applications)
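Discrete price-competition maps of this kind (Bischi–Naimzada-style gradient adjustment) are easy to iterate numerically. A generic two-channel sketch with illustrative parameters, not the paper's model or calibration:

```python
import numpy as np

def step(p, alpha, a, c, d):
    """One iteration of a two-channel gradient price-adjustment map:
    each channel nudges its price along its marginal profit, for
    linear demand q_i = a - p_i + d * p_j and unit cost c."""
    p1, p2 = p
    g1 = a + c - 2 * p1 + d * p2          # marginal profit, channel 1
    g2 = a + c - 2 * p2 + d * p1          # marginal profit, channel 2
    return np.array([p1 + alpha * p1 * g1, p2 + alpha * p2 * g2])

p = np.array([1.0, 1.0])
for _ in range(2000):                      # slow adjustment: stable regime
    p = step(p, alpha=0.1, a=3.0, c=1.0, d=0.5)
print(p)  # both prices near the Nash equilibrium (a + c) / (2 - d) ≈ 2.667
```

Raising the adjustment speed alpha past a flip bifurcation sends such a map through period doubling into chaos, which is the stability-loss mechanism the abstract analyzes.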
27 pages, 21611 KB  
Article
Aggregation in Ill-Conditioned Regression Models: A Comparison with Entropy-Based Methods
by Ana Helena Tavares, Ana Silva, Tiago Freitas, Maria Costa, Pedro Macedo and Rui A. da Costa
Entropy 2025, 27(10), 1075; https://doi.org/10.3390/e27101075 - 16 Oct 2025
Abstract
Despite the advances in data analysis methodologies in recent decades, most traditional regression methods cannot be directly applied to large-scale data. Although aggregation methods are specifically designed to deal with large-scale data, their performance may be strongly reduced in ill-conditioned problems (due to collinearity issues). This work compares the performance of a recent approach based on normalized entropy, a concept from information theory and info-metrics, with bagging and magging, two well-established aggregation methods in the literature, providing valuable insights for regression analysis with large-scale data. While the results reveal a similar performance between methods in terms of prediction accuracy, the approach based on normalized entropy largely outperforms the other methods in terms of precision accuracy, even with a smaller number of groups and observations per group, which represents an important advantage in inference problems with large-scale data. This work also alerts to the risk of using the OLS estimator, particularly under collinearity scenarios, given that data scientists frequently use linear models as a simplified view of reality in big data analysis and that the OLS estimator is routinely used in practice. Beyond the promising findings of the simulation study, our estimation and aggregation strategies show strong potential for real-world applications in fields such as econometrics, genomics, environmental sciences, and machine learning, where data challenges such as noise and ill-conditioning are persistent.
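The OLS fragility under collinearity, and bagging as one aggregation remedy, can be shown in a few lines. An illustrative numpy sketch (a toy near-collinear design, not the paper's simulation setup; magging and the normalized-entropy estimator are not shown):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 1e-4 * rng.standard_normal(n)    # nearly collinear regressor
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.standard_normal(n)

print(np.linalg.cond(X))                   # huge condition number: ill-conditioned

# Bagging: average OLS fits over bootstrap resamples of the rows.
coefs = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    coefs.append(beta)
beta_bag = np.mean(coefs, axis=0)
print(beta_bag.sum())                      # the identified sum beta1 + beta2 ≈ 2
```

The individual coefficients swing wildly from resample to resample (the ill-conditioned direction), while their sum stays stable; this is exactly the precision problem the normalized-entropy approach targets.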
6 pages, 172 KB  
Editorial
Advances in Quantum Computation in NISQ Era
by Xu-Dan Xie, Xiaoming Zhang, Balint Koczor and Xiao Yuan
Entropy 2025, 27(10), 1074; https://doi.org/10.3390/e27101074 - 15 Oct 2025
Abstract
Realizing a universal, fault-tolerant quantum computer remains challenging with current technology [...]
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)
19 pages, 419 KB  
Article
Information-Theoretic Analysis of Selected Water Force Fields: From Molecular Clusters to Bulk Properties
by Rodolfo O. Esquivel, Hazel Vázquez-Hernández and Alexander Pérez de La Luz
Entropy 2025, 27(10), 1073; https://doi.org/10.3390/e27101073 - 15 Oct 2025
Abstract
We present a comprehensive information-theoretic evaluation of three widely used rigid water models (TIP3P, SPC, and SPC/ε) through systematic analysis of water clusters ranging from single molecules to 11-molecule aggregates. Five fundamental descriptors—Shannon entropy, Fisher information, disequilibrium, LMC complexity, and Fisher–Shannon complexity—were calculated in both position and momentum spaces to quantify electronic delocalizability, localization, uniformity, and structural sophistication. Clusters containing 1, 3, 5, 7, 9, and 11 molecules (denoted 1 M, 3 M, 5 M, 7 M, 9 M, and 11 M) were selected to balance computational tractability with representative scaling behavior. Molecular dynamics simulations validated the force fields against experimental bulk properties (density, dielectric constant, self-diffusion coefficient), while statistical analysis using Shapiro–Wilk normality tests and Student’s t-tests ensured robust discrimination between models. Our results reveal distinct scaling behaviors that correlate with experimental accuracy: SPC/ε demonstrates superior electronic structure representation with optimal entropy–information balance and enhanced complexity measures, while TIP3P shows excessive localization and reduced complexity that worsen with increasing cluster size. The transferability from clusters to bulk properties is established through systematic convergence of information-theoretic measures toward bulk-like behavior. The methodology establishes information-theoretic analysis as a useful tool for comprehensive force field evaluation.
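The descriptors named above have simple discrete analogues. A hedged sketch (discrete stand-ins for the continuous position/momentum-space functionals; the LMC complexity is shown in its common exponential form, one of several conventions):

```python
import numpy as np

def descriptors(p):
    """Shannon entropy H, disequilibrium D (squared distance from the
    uniform distribution), and an LMC-style complexity C = e^H * D
    for a discrete probability vector p."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = float(-(p[p > 0] * np.log(p[p > 0])).sum())
    D = float(((p - 1.0 / n) ** 2).sum())
    return H, D, np.exp(H) * D

H, D, C = descriptors([0.25, 0.25, 0.25, 0.25])
print(H, D, C)  # ln 4 ≈ 1.386, 0, 0: uniform = maximal entropy, zero complexity
```

Perfectly uniform (maximal entropy) and perfectly localized distributions both have low complexity; structured intermediates score highest, which is what makes these measures discriminating between force fields.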
18 pages, 10279 KB  
Article
Hypergraph Representation Learning with Weighted- and Clustering-Biased Random Walks
by Li Liang, Shi-Ming Cai and Shi-Cai Gong
Entropy 2025, 27(10), 1072; https://doi.org/10.3390/e27101072 - 15 Oct 2025
Abstract
Hypergraphs are powerful tools for modeling complex systems because they naturally encode higher-order interactions. However, most existing hypergraph representation-learning methods still struggle to capture such high-order structures, particularly in heterogeneous hypergraphs, which results in suboptimal performance on structure-sensitive tasks such as node classification. This paper presents WCRW-MLP, a new framework that integrates a Weighted- and Clustering-Biased Random Walk (WCRW) with a multi-layer perceptron. WCRW extends second-order random walks by introducing node-pair co-occurrence weights and triadic-closure clustering bias, enabling the walk to favor structurally significant and locally cohesive regions of the hypergraph. The resulting walk sequences are processed with Skip-gram to obtain high-quality structural embeddings, which are then concatenated with node attributes and fed into an MLP for classification. Experiments on several real-world hypergraph benchmarks show that WCRW-MLP consistently surpasses state-of-the-art baselines, validating both the efficacy of the proposed biasing strategy and the overall framework. These results demonstrate that explicitly modeling co-occurrence strength and local clustering is crucial for effective hypergraph embedding.
(This article belongs to the Topic Computational Complex Networks)
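The core sampling primitive, a walk biased by edge weights, can be sketched in a few lines. An illustrative first-order version (WCRW itself adds second-order memory and triadic-closure bias on top of this; the toy weight matrix is arbitrary):

```python
import numpy as np

def weighted_walk(W, start, length, rng):
    """Sample a random walk on a weighted graph: at each step the next
    node is drawn proportionally to the outgoing edge weights. This is
    the simplest form of co-occurrence-weight biasing."""
    walk = [start]
    for _ in range(length - 1):
        w = W[walk[-1]]
        walk.append(int(rng.choice(len(w), p=w / w.sum())))
    return walk

# Toy 4-node weighted adjacency (symmetric, zero diagonal).
W = np.array([[0, 3, 1, 0],
              [3, 0, 1, 1],
              [1, 1, 0, 2],
              [0, 1, 2, 0]], dtype=float)
walk = weighted_walk(W, start=0, length=10, rng=np.random.default_rng(7))
print(walk)  # a length-10 node sequence following weighted edges
```

Corpora of such walks are what Skip-gram consumes to produce the structural embeddings described in the abstract.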
10 pages, 635 KB  
Article
Impact of Homophily in Adherence to Anti-Epidemic Measures on the Spread of Infectious Diseases in Social Networks
by Piotr Bentkowski and Tomasz Gubiec
Entropy 2025, 27(10), 1071; https://doi.org/10.3390/e27101071 - 15 Oct 2025
Abstract
We investigate how homophily in adherence to anti-epidemic measures affects the final size of epidemics in social networks. Using a modified SIR model, we divide agents into two behavioral groups—compliant and non-compliant—and introduce transmission probabilities that depend asymmetrically on the behavior of both the infected and susceptible individuals. We simulate epidemic dynamics on two types of synthetic networks with tunable inter-group connection probability: stochastic block models (SBM) and networks with triadic closure (TC) that better capture local clustering. Our main result reveals a counterintuitive effect: under conditions where compliant infected agents significantly reduce transmission, increasing the separation between groups may lead to a higher fraction of infections in the compliant population. This paradoxical outcome emerges only in networks with clustering (TC), not in SBM, suggesting that local network structure plays a crucial role. These findings highlight that increasing group separation does not always confer protection, especially when behavioral traits amplify within-group transmission.
(This article belongs to the Special Issue Spreading Dynamics in Complex Networks)
16 pages, 1206 KB  
Article
Contrast Analysis on Spin Transport of Multi-Periodic Exotic States in the XXZ Chain
by Shixian Jiang, Jianpeng Liu and Yongqiang Li
Entropy 2025, 27(10), 1070; https://doi.org/10.3390/e27101070 - 15 Oct 2025
Abstract
Quantum spin transport in integrable systems reveals rich nonequilibrium phenomena that challenge the conventional hydrodynamic framework. Recent advances in ultracold atom experiments with state preparation and single-site addressing have enabled the understanding of this anomalous behavior. In particular, the full universality characterization of exotic initial states, as well as their measurement representation, remains unknown. By employing tensor network and contrast methods, we systematically investigate spin transport in the quantum XXZ spin chain and extract dynamical scaling exponents emerging from two paradigmatic and experimentally attainable initial states, i.e., multi-periodic domain-wall (MPDW) and spin-helix (SH) states. Our results for anisotropy parameters Δ ∈ [0, 1.2] demonstrate clearly impeded transport and a growing difference between the two states with increasing Δ. Large-scale and consistent simulations confirm the contrast method as a viable scaling-extraction approach for exotic states with periodicity within experimentally accessible timescales. Our work establishes a foundation for studying initial memory and the corresponding relations of emergent transport behavior in nonequilibrium quantum systems, opening avenues for the identification of their unique universality classes.
(This article belongs to the Special Issue Emergent Phenomena in Quantum Many-Body Systems)
21 pages, 512 KB  
Article
A Decision Tree Classification Algorithm Based on Two-Term RS-Entropy
by Ruoyue Mao, Xiaoyang Shi and Zhiyan Shi
Entropy 2025, 27(10), 1069; https://doi.org/10.3390/e27101069 - 14 Oct 2025
Viewed by 428
Abstract
Classification is an important task in the field of machine learning. Decision tree algorithms are a popular choice for handling classification tasks due to their high accuracy, simple algorithmic process, and good interpretability. Traditional decision tree algorithms, such as ID3, C4.5, and CART, differ primarily in their node-splitting criteria. Shannon entropy, Gini index, and mean squared error are all examples of measures that can be used as splitting criteria. However, their performance varies on different datasets, making it difficult to determine the optimal splitting criterion. As a result, the algorithms lack flexibility. In this paper, we introduce the concept of generalized entropy from information theory, which unifies many splitting criteria under one free parameter, as the split criterion for decision trees. We propose a new decision tree algorithm called RSE (RS-Entropy decision tree). Additionally, we improve upon a two-term information measure method by incorporating penalty terms and coefficients into the split criterion, leading to a new decision tree algorithm called RSEIM (RS-Entropy Information Method). In theory, the improved algorithms RSE and RSEIM are more flexible due to the presence of multiple free parameters. In experiments conducted on several datasets, using genetic algorithms to optimize the parameters, our proposed RSE and RSEIM methods significantly outperform traditional decision tree methods in terms of classification accuracy without increasing the complexity of the resulting trees. Full article
(This article belongs to the Section Multidisciplinary Applications)
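The one-parameter unification described above can be illustrated with the Tsallis entropy family, a well-known generalized entropy (the paper's two-term RS-entropy, its penalty terms, and coefficients are not reproduced here): the limit q → 1 recovers Shannon entropy and q = 2 yields the Gini impurity, so a single free parameter spans classical splitting criteria. A minimal sketch:

```python
import math
from collections import Counter

def tsallis_entropy(labels, q):
    """Tsallis entropy of a label list: q -> 1 recovers Shannon
    entropy (in nats), q = 2 gives the Gini impurity."""
    n = len(labels)
    probs = [c / n for c in Counter(labels).values()]
    if abs(q - 1.0) < 1e-9:
        return -sum(p * math.log(p) for p in probs)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

def split_gain(parent, left, right, q):
    """Impurity reduction of a binary split under the chosen q."""
    w = len(left) / len(parent)
    return tsallis_entropy(parent, q) - (
        w * tsallis_entropy(left, q) + (1 - w) * tsallis_entropy(right, q))
```

A search over q (the paper optimizes its parameters with genetic algorithms) would then pick the splitting criterion that maximizes validation accuracy for the dataset at hand.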
11 pages, 275 KB  
Article
Relativistic Limits on the Discretization and Temporal Resolution of a Quantum Clock
by Tommaso Favalli
Entropy 2025, 27(10), 1068; https://doi.org/10.3390/e27101068 - 14 Oct 2025
Viewed by 305
Abstract
We provide a brief discussion regarding relativistic limits on the discretization and temporal resolution of time values in a quantum clock. Our clock is characterized by a time observable chosen to be the complement of a bounded and discrete Hamiltonian that can have an equally spaced or a generic spectrum. In the first case, the time observable can be described by a Hermitian operator, and we find a limit in the discretization for the time eigenvalues. Nevertheless, in both cases, the time observable can be described by a POVM, and, by increasing the number of time states, we show how the bound on the minimum time quantum can be reduced and identify the conditions under which the clock values can be treated as continuous. Finally, we find a limit for the temporal resolution of our time observable when the clock is used (together with light signals) in a relativistic framework for the measurement of spacetime distances. Full article
(This article belongs to the Special Issue Time in Quantum Mechanics)
15 pages, 10451 KB  
Article
Noise Robustness of Transcript-Based Estimators for Properties of Interactions
by Manuel Adams and Klaus Lehnertz
Entropy 2025, 27(10), 1067; https://doi.org/10.3390/e27101067 - 14 Oct 2025
Viewed by 297
Abstract
We investigate the robustness of transcript-based estimators for properties of interactions against various types of noise, ranging from colored noise to isospectral noise. We observe that all estimators are sensitive to symmetric and asymmetric contamination at signal-to-noise ratios that are orders of magnitude higher than those typically encountered in real-world applications. While different coupling regimes can still be distinguished and characterized sufficiently well, the strong impact of noise on the estimator for the direction of interaction can lead to severe misinterpretations of the underlying coupling structure. Full article
(This article belongs to the Special Issue Ordinal Patterns-Based Tools and Their Applications)
16 pages, 1101 KB  
Article
Analysis of Complex Network Attack and Defense Game Strategies Under Uncertain Value Criterion
by Chaoqi Fu and Zhuoying Shi
Entropy 2025, 27(10), 1066; https://doi.org/10.3390/e27101066 - 14 Oct 2025
Viewed by 339
Abstract
The study of attack–defense game decision making in critical infrastructure systems confronting intelligent adversaries, grounded in complex network theory, has emerged as a prominent topic in the field of network security. Most existing research centers on game-theoretic analysis under conditions of complete information and assumes that the attacker and defender share congruent criteria for evaluating target values. However, in reality, asymmetric value perception may lead to different evaluation criteria for both the offensive and defensive sides. This paper examines the game problem wherein the attacker and defender possess distinct target value evaluation criteria. The research findings reveal that both the attacker and defender have their own “advantage ranges” for value assessment, and topological heterogeneity is the reason for this phenomenon. Within their respective advantage ranges, the attacker or defender can adopt clear-cut strategies to secure optimal benefits—without needing to consider their opponents’ decisions. Outside these ranges, we explore how the attacker can leverage small-sample detection outcomes to probabilistically infer defenders’ strategies, and we further analyze the attackers’ preference strategy selections under varying acceptable security thresholds and penalty coefficients. The research results deliver more practical solutions for games involving uncertain value criteria. Full article
(This article belongs to the Section Complexity)
17 pages, 1117 KB  
Article
High-Efficiency Lossy Source Coding Based on Multi-Layer Perceptron Neural Network
by Yuhang Wang, Weihua Chen, Linjing Song, Zhiping Xu, Dan Song and Lin Wang
Entropy 2025, 27(10), 1065; https://doi.org/10.3390/e27101065 - 14 Oct 2025
Viewed by 337
Abstract
With the rapid growth of data volume in sensor networks, lossy source coding systems are required to achieve high-efficiency data compression with low distortion under limited transmission bandwidth. However, conventional compression algorithms rely on a two-stage framework with high computational complexity and frequently struggle to balance compression performance with generalization ability. To address these issues, an end-to-end lossy compression method is proposed in this paper. The approach integrates an enhanced belief propagation algorithm with a multi-layer perceptron neural network, aiming to introduce a novel joint optimization architecture described as “encoding-structured encoding-decoding”. In addition, a quantization module incorporating random perturbation and the straight-through estimator is designed to address the non-differentiability of the quantization process. Simulation results demonstrate that the proposed system significantly improves compression performance while offering superior generalization and reconstruction quality. Furthermore, the designed neural architecture is both simple and efficient, reducing system complexity and enhancing feasibility for practical deployment. Full article
(This article belongs to the Special Issue Next-Generation Channel Coding: Theory and Applications)
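The quantization module described above, random perturbation plus the straight-through estimator (STE), can be sketched without a deep learning framework (function names here are illustrative, not from the paper): inference rounds to a uniform grid, while training substitutes additive uniform noise of matching magnitude, so the surrogate is differentiable with gradient 1, which is exactly the STE backward pass.

```python
import random

def quantize(x, step=0.1):
    """Inference-time hard quantizer: snap x to a uniform grid."""
    return step * round(x / step)

def quantize_train(x, rng, step=0.1):
    """Training-time surrogate: additive uniform noise in
    [-step/2, step/2] mimics quantization error while keeping the
    map differentiable with gradient 1 (the straight-through
    estimator's backward pass is simply the identity)."""
    return x + rng.uniform(-step / 2, step / 2)
```

The design choice is the standard STE trade-off: the rounded and noisy maps agree in distribution of error magnitude, but only the noisy one lets gradients flow during end-to-end training.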
18 pages, 1960 KB  
Article
CasDacGCN: A Dynamic Attention-Calibrated Graph Convolutional Network for Information Popularity Prediction
by Bofeng Zhang, Yanlin Zhu, Zhirong Zhang, Kaili Liao, Sen Niu, Bingchun Li and Haiyan Li
Entropy 2025, 27(10), 1064; https://doi.org/10.3390/e27101064 - 14 Oct 2025
Viewed by 487
Abstract
Information popularity prediction is a critical problem in social network analysis. With the increasing prevalence of social platforms, accurate prediction of the diffusion process has become increasingly important. Existing methods mainly rely on graph neural networks to model structural relationships, but they are often insufficient in capturing the complex interplay between temporal evolution and local cascade structures, especially in real-world scenarios involving sparse or rapidly changing cascades. To address this issue, we propose the Cascading Dynamic attention-calibrated Graph Convolutional Network, named CasDacGCN. It enhances prediction performance through spatiotemporal feature fusion and adaptive representation learning. The model integrates snapshot-level local encoding, global temporal modeling, cross-attention mechanisms, and a hypernetwork-based sample-wise calibration strategy, enabling flexible modeling of multi-scale diffusion patterns. Results from experiments demonstrate that the proposed model consistently surpasses existing approaches on two real-world datasets, validating its effectiveness in popularity prediction tasks. Full article
24 pages, 9099 KB  
Article
Dynamic MAML with Efficient Multi-Scale Attention for Cross-Load Few-Shot Bearing Fault Diagnosis
by Qinglei Zhang, Yifan Zhang, Jiyun Qin, Jianguo Duan and Ying Zhou
Entropy 2025, 27(10), 1063; https://doi.org/10.3390/e27101063 - 14 Oct 2025
Viewed by 416
Abstract
Accurate bearing fault diagnosis under various operational conditions presents significant challenges, mainly due to the limited availability of labeled data and the domain mismatches across different operating environments. In this study, an adaptive meta-learning framework (AdaMETA) is proposed, which combines dynamic task-aware model-agnostic meta-learning (DT-MAML) with efficient multi-scale attention (EMA) modules to enhance the model’s ability to generalize and improve diagnostic performance in small-sample bearing fault diagnosis across different load scenarios. Specifically, a hierarchical encoder equipped with C-EMA is introduced to effectively capture multi-scale fault features from vibration signals, greatly improving feature extraction under constrained data conditions. Furthermore, DT-MAML dynamically adjusts the inner-loop learning rate based on task complexity, promoting efficient adaptation to diverse tasks and mitigating domain bias. Comprehensive experimental evaluations on the CWRU bearing dataset, conducted under carefully designed cross-domain scenarios, demonstrate that AdaMETA achieves superior accuracy (up to 99.26%) and robustness compared to traditional meta-learning and classical diagnostic methods. Additional ablation studies and noise interference experiments further validate the substantial contribution of the EMA module and the dynamic learning rate components. Full article
24 pages, 7771 KB  
Article
Cross-Domain OTFS Detection via Delay–Doppler Decoupling: Reduced-Complexity Design and Performance Analysis
by Mengmeng Liu, Shuangyang Li, Baoming Bai and Giuseppe Caire
Entropy 2025, 27(10), 1062; https://doi.org/10.3390/e27101062 - 13 Oct 2025
Viewed by 373
Abstract
In this paper, a reduced-complexity cross-domain iterative detection for orthogonal time frequency space (OTFS) modulation is proposed that exploits channel properties in both time and delay–Doppler domains. Specifically, we first show that in the time-domain effective channel, the path delay only introduces interference among samples in adjacent time slots, while the Doppler becomes a phase term that does not affect the channel sparsity. This investigation indicates that the effects of delay and Doppler can be decoupled and treated separately. This “band-limited” matrix structure further motivates us to apply a reduced-size linear minimum mean square error (LMMSE) filter to eliminate the effect of delay in the time domain, while exploiting the cross-domain iteration for minimizing the effect of Doppler by noticing that time and Doppler form a Fourier dual pair. Furthermore, we apply eigenvalue decomposition to the reduced-size LMMSE estimator, which makes the computational complexity independent of the number of cross-domain iterations, thus significantly reducing the computational complexity. The bias evolution and variance evolution are derived to evaluate the average MSE performance of the proposed scheme, which shows that the proposed estimators suffer from only negligible estimation bias in both time and DD domains. In particular, the state (MSE) evolution is compared with bounds to verify the effectiveness of the proposed scheme. Simulation results demonstrate that the proposed scheme achieves almost the same error performance as the optimal detection, but at significantly reduced complexity. Full article
24 pages, 6365 KB  
Article
Synergizing High-Quality Tourism Development and Digital Economy: A Coupling Coordination Analysis in Chinese Prefecture-Level Cities
by Yuyan Luo, Yue Wang, Ziqi Pan, Huilin Li, Bin Lai and Yong Qin
Entropy 2025, 27(10), 1061; https://doi.org/10.3390/e27101061 - 12 Oct 2025
Viewed by 458
Abstract
The rapid development of the digital economy (DE) provides a new driving force for high-quality tourism development (HQTD). How to coordinate HQTD and DE is an urgent issue to be resolved. In this study, the coupling coordination degree (CCD) between HQTD and DE in Chinese prefecture-level cities is analysed using the CCD model, and the factors driving CCD are identified by Shapley additive explanations (SHAP). The results show that (1) Chinese city-level HQTD and DE show a rising trend from 2010 to 2019. The national average rises from 0.1807 and 0.2434 in 2010 to 0.2318 and 0.4113 in 2019, respectively, with HQTD’s development lagging noticeably behind DE. (2) CCD exhibits marked inter-regional disparities and intra-regional clustering. The northwest region has the lowest values, with many cities’ CCD below 0.5, indicating an imbalanced status. In 2019, all cities in the eastern region are in a balanced status, with Shanghai exceeding 0.8. (3) Total social retail sales per capita and percentage of tertiary sector are the key drivers of CCD; economic development and urbanisation rate exhibit a non-linear relationship with CCD. The CCD in developed cities in the east and north is driven by consumption, whereas the northwest region is primarily influenced by factors related to labour capital. Based on these conclusions, some policy implications are provided for the synergistic development of HQTD and DE. Full article
(This article belongs to the Section Multidisciplinary Applications)
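The CCD model mentioned above is commonly written in the two-subsystem form below (a standard formulation; the paper's indicator construction and weights are assumptions here): the coupling degree C measures how balanced the two subsystem scores are, T is their weighted comprehensive level, and D = √(C·T) combines both.

```python
import math

def coupling_coordination(u1, u2, alpha=0.5, beta=0.5):
    """Coupling coordination degree D in [0, 1] for two subsystem
    scores u1, u2 in [0, 1] (e.g. HQTD and DE indices)."""
    if u1 + u2 == 0:
        return 0.0
    c = 2.0 * math.sqrt(u1 * u2) / (u1 + u2)  # coupling degree C
    t = alpha * u1 + beta * u2                # comprehensive level T
    return math.sqrt(c * t)
```

With the 2010 national averages quoted in the abstract (0.1807 for HQTD, 0.2434 for DE) and equal weights, D ≈ 0.46, below the 0.5 level the abstract associates with an imbalanced status.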
23 pages, 7050 KB  
Article
Secure and Efficient Lattice-Based Ring Signcryption Scheme for BCCL
by Yang Zhang, Pengxiao Duan, Chaoyang Li, Haseeb Ahmad and Hua Zhang
Entropy 2025, 27(10), 1060; https://doi.org/10.3390/e27101060 - 12 Oct 2025
Viewed by 370
Abstract
Blockchain-based cold chain logistics (BCCL) systems establish a new logistics data-sharing mechanism with blockchain technology, which eliminates the traditional data-island problem and promotes cross-institutional data interoperability. However, security vulnerabilities, risks of data loss, exposure of private information, and particularly the emergence of quantum-based attacks pose heightened threats to the existing BCCL framework. This paper first introduces a transaction privacy-preserving (TPP) model for BCCL that combines the blockchain with a ring signcryption scheme to strengthen the security of the data exchange process. Then, a lattice-based ring signcryption (LRSC) scheme is proposed. LRSC utilizes the lattice assumption to enhance resistance against quantum attacks while employing ring mechanisms to safeguard the anonymity and privacy of the actual signer. It also executes signature and encryption algorithms simultaneously to improve execution efficiency. Moreover, the formal security proof shows that LRSC provides confidentiality and unforgeability for the signer. Experimental findings indicate that the LRSC scheme achieves higher efficiency compared with comparable approaches. The proposed TPP model and LRSC scheme effectively facilitate cross-institutional logistics data exchange and enhance the utilization of logistics information via the BCCL system. Full article
18 pages, 357 KB  
Article
Exact ODE Framework for Classical and Quantum Corrections for the Lennard-Jones Second Virial Coefficient
by Zhe Zhao, Alfredo González-Calderón, Jorge Adrián Perera-Burgos, Antonio Estrada, Horacio Hernández-Anguiano, Celia Martínez-Lázaro and Yanmei Li
Entropy 2025, 27(10), 1059; https://doi.org/10.3390/e27101059 - 11 Oct 2025
Viewed by 438
Abstract
The second virial coefficient (SVC) of the Lennard-Jones fluid is a cornerstone of molecular theory, yet its calculation has traditionally relied on the complex integration of the pair potential. This work introduces a fundamentally different approach by reformulating the problem in terms of ordinary differential equations (ODEs). For the classical component of the SVC, we generalize the confluent hypergeometric and Weber–Hermite equations. For the first quantum correction, we present entirely new ODEs and their corresponding exact-analytical solutions. The most striking result of this framework is the discovery that these ODEs can be transformed into Schrödinger-like equations. The classical term corresponds to a harmonic oscillator, while the quantum correction includes additional inverse-power potential terms. This formulation not only provides a versatile method for expressing the virial coefficient through a linear combination of functions (including Kummer, Weber, and Whittaker functions) but also reveals a profound and previously unknown mathematical structure underlying a classical thermodynamic property. Full article
(This article belongs to the Collection Foundations of Statistical Mechanics)
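For context, the traditional route the abstract contrasts with is direct integration of the pair potential. In reduced units (r in σ, T* = kT/ε, B2* = B2/(2πσ³/3)) the classical coefficient is B2*(T*) = -3 ∫₀^∞ (e^(-u*(r)/T*) - 1) r² dr with u*(r) = 4(r^(-12) - r^(-6)). A numerical sketch of that baseline (a plain Riemann sum, not the paper's ODE method):

```python
import math

def lj_b2_reduced(t_star, r_max=20.0, n=100000):
    """Reduced Lennard-Jones second virial coefficient B2*(T*) by
    direct Riemann-sum integration of the defining integral."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        u = 4.0 * (r ** -12 - r ** -6)            # reduced LJ potential
        total += (math.exp(-u / t_star) - 1.0) * r * r * dr
    return -3.0 * total
```

Below the Boyle temperature (T*_B ≈ 3.42) the coefficient is negative, as attraction dominates; above it, the repulsive core wins and B2* turns positive.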
16 pages, 2423 KB  
Article
Exploring Ohm’s Law: The Randomness of Determinism
by Angel Cuadras, Marina Cuadras-Alba and Gaia Cuadras-Alba
Entropy 2025, 27(10), 1058; https://doi.org/10.3390/e27101058 - 11 Oct 2025
Viewed by 903
Abstract
Ohm’s law has become ubiquitous in numerous scientific and technical disciplines. Generally, the subject is introduced to students in secondary school as fundamental technical knowledge. The present study proposes a visual model to help pre-university and university students understand Ohm’s law in terms of electron transport in solids. The objective is to clarify the correlation between electron movement in solids, as depicted by a current, and the energy of the system, which is introduced by the electric field and the material’s structure. The approach’s originality lies in its strategy for describing electron trajectory randomization, which enables a relationship to be established between the material’s structure and its resistivity. Moreover, the description of electron transport and scattering processes is presented in terms of different types of entropy. It shows that electron trajectories maximize trajectory entropy and that thermal entropy has a quadratic relationship with configurational entropy. The determinism of Ohm’s law is thus inferred from statistical entropy. Full article
(This article belongs to the Section Multidisciplinary Applications)
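The trajectory-randomization picture lends itself to a Drude-style toy simulation (a classroom-level sketch in the article's spirit, not the authors' model): between collisions the field accelerates the electron, each collision randomizes its velocity, and what survives averaging is a drift velocity proportional to the field, i.e., Ohm's law.

```python
import random

def drift_velocity(field_acc, scatter_p, steps=200000, seed=0):
    """Drude-style toy model: the field adds field_acc to the electron
    velocity at every step; with probability scatter_p a collision first
    randomizes the velocity (zero-mean thermal reset). The long-run
    average velocity is the drift velocity behind Ohm's law."""
    rng = random.Random(seed)
    v, total = 0.0, 0.0
    for _ in range(steps):
        if rng.random() < scatter_p:
            v = rng.gauss(0.0, 1.0)  # scattering erases drift memory
        v += field_acc               # acceleration by the field
        total += v
    return total / steps
```

Halving scatter_p (longer free flight between collisions) roughly doubles the drift, mirroring how resistivity tracks the scattering structure of the material.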
17 pages, 2165 KB  
Article
Seizure Type Classification Based on Hybrid Feature Engineering and Mutual Information Analysis Using Electroencephalogram
by Yao Miao
Entropy 2025, 27(10), 1057; https://doi.org/10.3390/e27101057 - 11 Oct 2025
Viewed by 443
Abstract
Epilepsy has diverse seizure types that challenge diagnosis and treatment, requiring automated and accurate classification to improve patient outcomes. Traditional electroencephalogram (EEG)-based diagnosis relies on manual interpretation, which is subjective and inefficient, particularly for multi-class differentiation in imbalanced datasets. This study aims to develop a hybrid framework for automated multi-class seizure type classification using segment-wise EEG processing and multi-band feature engineering to enhance precision and address data challenges. EEG signals from the TUSZ dataset were segmented into 1-s windows with 0.5-s overlaps, followed by the extraction of multi-band features, including statistical measures, sample entropy, wavelet energies, Hurst exponent, and Hjorth parameters. The mutual information (MI) approach was employed to select the optimal features, and seven machine learning models (SVM, KNN, DT, RF, XGBoost, CatBoost, LightGBM) were evaluated via 10-fold stratified cross-validation with a class balancing strategy. The results showed the following: (1) XGBoost achieved the highest performance (accuracy: 0.8710, F1 score: 0.8721, AUC: 0.9797), with γ-band features dominating importance. (2) Confusion matrices indicated robust discrimination but noted overlaps in focal subtypes. This framework advances seizure type classification by integrating multi-band features and the MI method, which offers a scalable and interpretable tool for supporting clinical epilepsy diagnostics. Full article
(This article belongs to the Section Signal and Data Analysis)
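The MI-based feature selection step can be sketched with a generic empirical estimator over discretized values (the binning scheme and the TUSZ pipeline specifics are not reproduced here): features whose discretized values share more information with the seizure-type label rank higher.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) in bits between two
    equal-length discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

In practice each continuous feature (e.g. a γ-band sample entropy) would first be binned, scored against the seizure-type labels, and only the top-ranked features passed to the seven classifiers.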
33 pages, 2247 KB  
Article
An Information-Theoretic Framework for Understanding Learning and Choice Under Uncertainty
by Jae Hyung Woo, Lakshana Balaji and Alireza Soltani
Entropy 2025, 27(10), 1056; https://doi.org/10.3390/e27101056 - 11 Oct 2025
Viewed by 403
Abstract
Although information theory is widely used in neuroscience, its application has primarily been limited to the analysis of neural activity, with much less emphasis on behavioral data. This is despite the fact that the discrete nature of behavioral variables in many experimental settings—such as choice and reward outcomes—makes them particularly well-suited to information-theoretic analysis. In this study, we provide a framework for how behavioral metrics based on conditional entropy and mutual information can be used to infer an agent’s decision-making and learning strategies under uncertainty. Using simulated reinforcement-learning models as ground truth, we illustrate how information-theoretic metrics can reveal the underlying learning and choice mechanisms. Specifically, we show that these metrics can uncover (1) a positivity bias, reflected in higher learning rates for rewarded compared to unrewarded outcomes; (2) gradual, history-dependent changes in the learning rates indicative of metaplasticity; (3) adjustments in choice strategies driven by reward harvest rate; and (4) the presence of alternative learning strategies and their interaction. Overall, our study highlights how information theory can leverage the discrete, trial-by-trial structure of many cognitive tasks, with the added advantage of being parameter-free as opposed to more traditional methods such as logistic regression. Information theory thus offers a versatile framework for investigating neural and computational mechanisms of learning and choice under uncertainty—with potential for further extension. Full article
(This article belongs to the Special Issue Information-Theoretic Principles in Cognitive Systems)
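The conditional-entropy metrics described above operate directly on trial sequences; a minimal sketch (the variable pairing below is an illustrative choice, not the paper's exact metric):

```python
import math
from collections import Counter

def conditional_entropy(conds, targets):
    """H(target | cond) in bits from paired discrete trial data.
    0 bits means the conditioning variable (e.g. previous choice
    and reward) fully determines the target (e.g. the next choice)."""
    n = len(conds)
    pc = Counter(conds)
    pct = Counter(zip(conds, targets))
    return -sum((k / n) * math.log2(k / pc[c]) for (c, _), k in pct.items())

# A deterministic win-stay/lose-shift agent: the previous (choice,
# reward) pair pins down the next choice exactly.
prev_trial = [("A", 1), ("A", 0), ("B", 1), ("B", 0)]
next_choice = ["A", "B", "B", "A"]
```

For a noisier agent the value rises toward the unconditional choice entropy, so the gap between the two quantifies how strongly outcome history drives behavior, with no model parameters to fit.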