Entropy, Volume 28, Issue 1 (January 2026) – 131 articles

Cover Story: Visual and generative data systems must ensure perceptual realism, in addition to low distortion and low latency. This work studies the fundamental trade-offs between rate, distortion, perception, and latency through a Shannon-theoretic framework based on randomized distributed function computation. We derive finite-blocklength achievable limits that quantify the cost of enforcing a target output probability distribution, which establishes how realism impacts rate and latency. The framework is extended to scenarios with shared side information and strong information-theoretic secrecy against an attacker, including settings with perfect realism constraints. These fundamental results provide principled design guidelines for low-latency, realism-aware, and secure data reconstruction systems.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 783 KB  
Article
Some New Maximally Chaotic Discrete Maps
by Hyojeong Choi, Gangsan Kim, Hong-Yeop Song, Sangung Shin, Chulho Lee and Hongjun Noh
Entropy 2026, 28(1), 131; https://doi.org/10.3390/e28010131 - 22 Jan 2026
Viewed by 134
Abstract
In this paper, we first prove (Theorem 1) that any two inputs producing the same output in a symmetric pair of discrete skew tent maps always have the same parity, meaning that they are either both even or both odd. Building on this property, we then propose (Definition 1) a new discrete chaotic map and prove (Theorem 2) that the proposed map is a bijection for all control parameters. We further prove (Theorem 3) that the discrete Lyapunov exponent (dLE) of the proposed map is not only positive but also approaches the maximum value among all permutation maps over the integers {0, 1, …, 2^m − 1} as m grows. In other words, (Corollary 1) the proposed map asymptotically achieves the highest possible chaotic divergence among the permutation maps over the integers {0, 1, …, 2^m − 1}. To provide further evidence that the proposed map is highly chaotic, we conclude with results from numerical experiments: we calculate the approximate and permutation entropy of the output integer sequences, and we report NIST SP 800-22 test results and correlation properties of some derived binary sequences. Full article
(This article belongs to the Special Issue Discrete Math in Coding Theory, 2nd Edition)
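The abstract reports the permutation entropy of the output integer sequences. As a quick reference, a minimal normalized permutation-entropy computation (the standard Bandt–Pompe ordinal-pattern estimator, not the authors' code) can be sketched as:

```python
import math

def permutation_entropy(seq, order=3):
    """Normalized Bandt-Pompe permutation entropy of a 1-D sequence:
    0 = fully predictable ordering, 1 = all ordinal patterns equally likely."""
    counts = {}
    for i in range(len(seq) - order + 1):
        window = seq[i:i + order]
        # Ordinal pattern: the argsort of the window (ties broken by position).
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * math.log2(c / total) for c in counts.values())
    return h / math.log2(math.factorial(order))
```

A monotone sequence yields entropy 0, while a highly chaotic permutation map should drive this value toward 1.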
26 pages, 911 KB  
Article
Logarithmic-Size Post-Quantum Linkable Ring Signatures Based on Aggregation Operations
by Minghui Zheng, Shicheng Huang, Deju Kong, Xing Fu, Qiancheng Yao and Wenyi Hou
Entropy 2026, 28(1), 130; https://doi.org/10.3390/e28010130 - 22 Jan 2026
Viewed by 95
Abstract
Linkable ring signatures are a type of ring signature scheme that can protect the anonymity of signers while allowing the public to verify whether the same signer has signed the same message multiple times. This functionality makes linkable ring signatures suitable for applications such as cryptocurrencies and anonymous voting systems, achieving the dual goals of identity privacy protection and misuse prevention. However, existing post-quantum linkable ring signature schemes often suffer from issues such as excessive linear data growth caused by the adoption of post-quantum signature algorithms, and high circuit complexity resulting from the use of post-quantum zero-knowledge proof protocols. To address these issues, a logarithmic-size post-quantum linkable ring signature scheme based on aggregation operations is proposed. The scheme constructs a Merkle tree from ring members’ public keys via a hash algorithm to achieve logarithmic-scale signing and verification operations. Moreover, it introduces, for the first time, a post-quantum aggregate signature scheme to replace post-quantum zero-knowledge proof protocols, thereby effectively avoiding the construction of complex circuits. Scheme analysis confirms that the proposed scheme meets the correctness requirements of linkable ring signatures. In terms of security, the scheme satisfies the anonymity, unforgeability, and linkability requirements of linkable ring signatures. Moreover, the aggregation process does not leak information about the signing members, ensuring strong privacy protection. Experimental results demonstrate that, when the ring size scales to 1024 members, our scheme outperforms the existing Dilithium-based logarithmic post-quantum ring signature scheme, with nearly 98.25% lower signing time, 98.90% lower verification time, and 99.81% smaller signature size. Full article
(This article belongs to the Special Issue Quantum Information Security)
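The scheme hashes ring members' public keys into a Merkle tree to obtain logarithmic-size operations. As a rough illustration of that first step only — a generic SHA-256 Merkle root with last-node duplication on odd levels, one common convention, not the paper's exact construction:

```python
import hashlib

def merkle_root(public_keys):
    """Merkle root over a list of byte-string public keys."""
    level = [hashlib.sha256(pk).digest() for pk in public_keys]
    while len(level) > 1:
        if len(level) % 2:            # odd number of nodes:
            level.append(level[-1])   # duplicate the last one
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

A membership proof then needs only the O(log n) sibling hashes along the leaf-to-root path, which is where the logarithmic scaling comes from.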
3 pages, 121 KB  
Editorial
Advances in Modern Channel Coding
by Yongpeng Wu and Peihong Yuan
Entropy 2026, 28(1), 129; https://doi.org/10.3390/e28010129 - 22 Jan 2026
Viewed by 75
Abstract
Channel coding has long stood at the core of reliable communications, shaping the evolution of modern information and communication systems [...] Full article
(This article belongs to the Special Issue Advances in Modern Channel Coding)
12 pages, 3550 KB  
Article
Percolation with Distance-Dependent Site Occupational Probabilities
by Eleftherios Lambrou and Panos Argyrakis
Entropy 2026, 28(1), 128; https://doi.org/10.3390/e28010128 - 22 Jan 2026
Viewed by 150
Abstract
We introduce a new method for preparing a percolation system by employing an inverse percolation model. Unlike standard percolation, where the site occupancy is uniform, the new model imposes a distance-dependent probability of site removal, where sites closer to the lattice center have a higher probability of being removed and are more prone to damage as compared to those at the periphery of the system. The variation in this removal probability is a function of the distance (d) from the central point. Thus, the central point plays a key role. This is reflected in our effort to model the role of a tumor cell and its surroundings (the tumor microenvironment). The tumor causes a decrease in the concentration of key elements, such as O2 (resulting in hypoxia) and Ca, in the region close to it, which in turn is an impediment to the efficiency of radiotherapy and chemotherapy. This decrease is the largest in sites adjacent to the tumor and smaller away from the tumor. Such change in the concentrations of these elements is vital in the mechanism of cancer therapies. Starting from a fully occupied lattice, we introduce a distance-dependent removal probability q(d). The value of q(d) is 1 at and next to the tumor (center) and decreases linearly away from it to a limiting value qp, which is the value of q at the lattice boundaries. We investigate the system properties as a function of qp and observe a significant decrease in the critical percolation threshold pc as qp decreases, falling from the standard value of pc=0.5927 to approximately pc=0.20. Furthermore, we demonstrate that the size of the spanning cluster and the total number of clusters exhibit a strong dependence on qp as well. Full article
(This article belongs to the Special Issue Percolation in the 21st Century)
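The lattice-preparation rule described in the abstract can be sketched as follows — a minimal toy implementation under my own reading of it, with the removal probability interpolated linearly between 1 at the center and qp at the corners (the paper's exact q(d) normalization may differ):

```python
import random

def damaged_lattice(L, qp, seed=0):
    """Start from a fully occupied L x L lattice and remove each site with
    probability q(d) = 1 + (qp - 1) * d / d_max, where d is the Euclidean
    distance from the central site (the 'tumor') and d_max the
    center-to-corner distance."""
    rng = random.Random(seed)
    c = (L - 1) / 2
    d_max = (2 * c * c) ** 0.5
    lattice = [[1] * L for _ in range(L)]
    for i in range(L):
        for j in range(L):
            d = ((i - c) ** 2 + (j - c) ** 2) ** 0.5
            q = 1 + (qp - 1) * d / d_max
            if rng.random() < q:
                lattice[i][j] = 0     # site removed (damaged)
    return lattice
```

Standard percolation analysis (spanning-cluster detection, cluster counts) can then be run on the resulting configuration as a function of qp.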
13 pages, 1265 KB  
Article
The Physical Spectrum of a Driven Jaynes–Cummings Model
by Luis Medina-Dozal, Alejandro R. Urzúa, Irán Ramos-Prieto, Ricardo Román-Ancheyta, Francisco Soto-Eguibar, Héctor M. Moya-Cessa and José Récamier
Entropy 2026, 28(1), 127; https://doi.org/10.3390/e28010127 - 21 Jan 2026
Viewed by 217
Abstract
We analyze the time-dependent physical spectrum of a driven Jaynes–Cummings model in which both the two-level system and the quantized cavity mode are subject to coherent classical driving. The time-dependent Hamiltonian is mapped, via well-defined unitary transformations, onto an effective stationary Jaynes–Cummings form. Within this framework, we derive closed-form expressions for the two-time correlation functions of both the atomic and field operators. These correlation functions are subsequently used to evaluate the time-dependent physical spectrum according to the Eberly–Wódkiewicz definition, which properly accounts for finite spectral resolution and transient emission dynamics. We show that the external driving leads to substantial modifications of the atomic spectral response, including controllable frequency shifts and asymmetric line shapes. Importantly, we identify a regime in which the driving parameters are chosen such that the coherent displacement induced in the cavity field exactly cancels out the initial coherent amplitude. In this limit, the system dynamics reduce to that of an effectively vacuum-initialized Jaynes–Cummings model, and the standard vacuum Rabi splitting is recovered. This behavior provides a clear and physically transparent interpretation of the spectral features as arising from coherent field displacement rather than from modifications of the underlying atom–cavity coupling. Full article
(This article belongs to the Special Issue Quantum Nonstationary Systems—Second Edition)
12 pages, 27122 KB  
Article
Orientation-Modulated Hyperuniformity in Frustrated Vicsek–Kuramoto Systems
by Yichen Lu, Tong Zhu, Yingshan Guo, Yunyun Li and Zhigang Zheng
Entropy 2026, 28(1), 126; https://doi.org/10.3390/e28010126 - 21 Jan 2026
Viewed by 112
Abstract
In the study of disordered hyperuniformity, which bridges ordered and disordered states and has broad implications in physics and biology, active matter systems offer a rich platform for spontaneous pattern formation. This work investigates frustrated Vicsek–Kuramoto systems, where frustration induces complex collective behaviors, to explore how hyperuniform states arise. We numerically analyze the phase diagram via the structure factor S(q) and the density variance δρ2R. Results show that recessive lattice states exhibit Class I hyperuniformity under high coupling strength and intermediate frustration, emerging from the interplay of frustration-induced periodicity and active motion, characterized by dynamic, drifting rotation centers rather than static order. Notably, global hyperuniformity emerges from the spatial complementarity of orientation subgroups that are individually non-hyperuniform, a phenomenon termed “orientation-modulated hyperuniformity”. This work establishes frustration as a novel mechanism for generating hyperuniform states in active matter, highlighting how anisotropic interactions can yield global order from disordered components, with potential relevance to biological systems and material science. Full article
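Hyperuniformity here is diagnosed via the structure factor S(q). For reference, a minimal S(q) for a 2-D point pattern (the standard definition, not the authors' implementation):

```python
import numpy as np

def structure_factor(points, q):
    """S(q) = |sum_j exp(-i q . r_j)|^2 / N for an (N, 2) array of positions.
    Hyperuniform patterns have S(q) -> 0 as |q| -> 0."""
    phases = np.asarray(points) @ np.asarray(q, dtype=float)
    return abs(np.exp(-1j * phases).sum()) ** 2 / len(points)
```

For a perfect lattice, S(q) shows Bragg peaks at reciprocal-lattice vectors and vanishes elsewhere; Class I hyperuniformity corresponds to S(q) ~ q^a with a > 1 at small q.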
28 pages, 3616 KB  
Article
Optimization of Cryogenic Gas Separation Systems Based on Exergetic Analysis—The Claude–Heylandt Cycle for Oxygen Separation
by Dănuț-Cristian Urduza, Lavinia Grosu, Alexandru Serban, Adalia Andreea Percembli (Chelmuș) and Alexandru Dobrovicescu
Entropy 2026, 28(1), 125; https://doi.org/10.3390/e28010125 - 21 Jan 2026
Viewed by 122
Abstract
In cryogenic air liquefaction systems, a major share of the mechanical energy consumption is associated with exergy destruction caused by heat transfer in recuperative heat exchangers. This study investigated the exergetic optimization of cryogenic gas separation systems by focusing on the Claude–Heylandt cycle as an advanced structural modification of the classical Linde–Hampson scheme. An exergy-based analysis demonstrates that minimizing mechanical energy consumption requires a progressive reduction in the temperature difference between the hot forward stream and the cold returning stream toward the cold end of the heat exchanger. This condition was achieved by extracting a fraction of the high-pressure stream and expanding it in a parallel expander, thereby creating a controlled imbalance in the heat capacities between the two streams. The proposed configuration reduces the share of exergy destruction associated with heat transfer in the recuperative heat exchanger from 14% to 3.5%, leading to a fourfold increase in the exergetic efficiency, together with a 3.6-fold increase in the liquefied air fraction compared with the Linde–Hampson cycle operating under identical conditions. The effects of key decision parameters, including the compression pressure, imposed temperature differences, and expander inlet temperature, were systematically analyzed. The study was further extended by integrating an air separation column into the Claude–Heylandt cycle and optimizing its configuration based on entropy generation minimization. The optimal liquid-air feeding height and threshold number of rectification trays were identified, beyond which further structural complexity yielded no thermodynamic benefit. The results highlight the effectiveness of exergy-based optimization as a unified design criterion for both cryogenic liquefaction and separation processes. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Industrial Energy Systems, 2nd Edition)
16 pages, 1206 KB  
Article
HASwinNet: A Swin Transformer-Based Denoising Framework with Hybrid Attention for mmWave MIMO Systems
by Xi Han, Houya Tu, Jiaxi Ying, Junqiao Chen and Zhiqiang Xing
Entropy 2026, 28(1), 124; https://doi.org/10.3390/e28010124 - 20 Jan 2026
Viewed by 213
Abstract
Millimeter-wave (mmWave) massive multiple-input, multiple-output (MIMO) systems are a cornerstone technology for integrated sensing and communication (ISAC) in sixth-generation (6G) mobile networks. These systems provide high-capacity backhaul while simultaneously enabling high-resolution environmental sensing. However, accurate channel estimation remains highly challenging due to intrinsic noise sensitivity and clustered sparse multipath structures. These challenges are particularly severe under limited pilot resources and low signal-to-noise ratio (SNR) conditions. To address these difficulties, this paper proposes HASwinNet, a deep learning (DL) framework designed for mmWave channel denoising. The framework integrates a hierarchical Swin Transformer encoder for structured representation learning. It further incorporates two complementary branches. The first branch performs sparse token extraction guided by angular-domain significance. The second branch focuses on angular-domain refinement by applying discrete Fourier transform (DFT), squeeze-and-excitation (SE), and inverse DFT (IDFT) operations. This generates a mask that highlights angularly coherent features. A decoder combines the outputs of both branches with a residual projection from the input to yield refined channel estimates. Additionally, we introduce an angular-domain perceptual loss during training. This enforces spectral consistency and preserves clustered multipath structures. Simulation results based on the Saleh–Valenzuela (S–V) channel model demonstrate that HASwinNet achieves significant improvements in normalized mean squared error (NMSE) and bit error rate (BER). It consistently outperforms convolutional neural network (CNN), long short-term memory (LSTM), and U-Net baselines. Furthermore, experiments with reduced pilot symbols confirm that HASwinNet effectively exploits angular sparsity. The model retains a consistent advantage over baselines even under pilot-limited conditions. These findings validate the scalability of HASwinNet for practical 6G mmWave backhaul applications. They also highlight its potential in ISAC scenarios where accurate channel recovery supports both communication and sensing. Full article
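The headline metric above is NMSE. For reference, the standard definition for a channel estimate (a generic formula, not the paper's code):

```python
import numpy as np

def nmse_db(H, H_hat):
    """Normalized MSE in dB: 10*log10( ||H - H_hat||_F^2 / ||H||_F^2 )."""
    err = np.linalg.norm(H - H_hat) ** 2
    return 10 * np.log10(err / np.linalg.norm(H) ** 2)
```

An estimate within 10% of the true channel in Frobenius norm corresponds to an NMSE of −20 dB.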
42 pages, 1665 KB  
Article
What Is a Pattern in Statistical Mechanics? Formalizing Structure and Patterns in One-Dimensional Spin Lattice Models with Computational Mechanics
by Omar Aguilar
Entropy 2026, 28(1), 123; https://doi.org/10.3390/e28010123 - 20 Jan 2026
Viewed by 298
Abstract
This work formalizes the notions of structure and pattern for three distinct one-dimensional spin-lattice models (finite-range Ising, solid-on-solid, and three-body), using information- and computation-theoretic methods. We begin by presenting a novel derivation of the Boltzmann distribution for finite one-dimensional spin configurations embedded in infinite ones. We next recast this distribution as a stochastic process, thereby enabling us to analyze each spin-lattice model within the theory of computational mechanics. In this framework, the process’s structure is quantified by excess entropy E (predictable information) and statistical complexity Cμ (stored information), and the process’s structure-generating mechanism is specified by its ϵ-machine. To assess compatibility with statistical mechanics, we compare the configurations jointly determined by the information measures and ϵ-machines to typical configurations drawn from the Boltzmann distribution, and we find agreement. We also include a self-contained primer on computational mechanics and provide code implementing the information measures and spin-model distributions. Full article
(This article belongs to the Special Issue Ising Model—100 Years Old and Still Attractive)
17 pages, 1848 KB  
Article
Complexity and Robustness of Public–Private Partnership Networks
by Na Zhao, Xiongfei Jiang and Ling Bai
Entropy 2026, 28(1), 122; https://doi.org/10.3390/e28010122 - 20 Jan 2026
Viewed by 222
Abstract
Public–private partnership (PPP) has been increasingly adopted to deliver infrastructure and public services around the world. As an emerging public procurement mode, PPP has drawn considerable attention from both academia and industry. We construct a PPP shareholder network for China and analyze its topological complexity, robustness, and geographic structure. We find that the PPP shareholder network exhibits small-world behavior and a heavy-tailed degree distribution. Using multiple centrality measures, we investigate the network's robustness under various attack strategies. The results show that targeted attacks destroy the network more efficiently than random attacks, especially degree-based and betweenness-based attacks. Geographically, the network exhibits a hierarchical spatial structure in which Beijing is the central hub and provincial capitals are regional centers. Our research has significant implications for policy-making to improve supervision of enterprises involved in PPP projects. Full article
(This article belongs to the Special Issue Complexity in Financial Networks)
11 pages, 3825 KB  
Article
Physiological Noise in Cardiorespiratory Time-Varying Interactions
by Dushko Lukarski, Dushko Stavrov and Tomislav Stankovski
Entropy 2026, 28(1), 121; https://doi.org/10.3390/e28010121 - 19 Jan 2026
Viewed by 133
Abstract
Systems in nature are rarely isolated, and different influences can perturb their states. Dynamic noise in physiological systems can cause fluctuations and changes on different levels, often leading to qualitative transitions. In this study, we explore how to detect and extract physiological noise, in the sense of dynamic noise, from measurements of biological oscillatory systems. Moreover, because biological systems often have deterministic time-varying dynamics, we consider how to detect the dynamic physiological noise while simultaneously following the time-variability of the deterministic part. To achieve this, we use dynamical Bayesian inference to model stochastic differential equations that describe the phase dynamics of interacting oscillators. We apply this methodological framework to cardio-respiratory signals in which the subjects' breathing varies in a predefined manner, including free spontaneous, sine, ramped, and aperiodic breathing patterns. The statistical results showed significant differences in the physiological noise of the respiration dynamics across the different breathing patterns. The effect of the perturbed breathing was not translated through the interactions onto the dynamic noise of the cardiac dynamics. This cardio-respiratory application demonstrates the potential of the methodological framework for applications to other physiological systems more generally. Full article
13 pages, 994 KB  
Article
Privacy-Preserving Average-Tracking Control for Multi-Agent Systems with Constant Reference Signals
by Wei Jiang and Cheng-Lin Liu
Entropy 2026, 28(1), 120; https://doi.org/10.3390/e28010120 - 19 Jan 2026
Viewed by 226
Abstract
This paper addresses the average-tracking control problem for multi-agent systems subject to constant reference signals. By introducing auxiliary signals generated from the states and delayed states of agents, a novel privacy-preserving integral-type average-tracking algorithm is proposed. Leveraging the frequency-domain analysis approach, delay-dependent sufficient and necessary conditions for ensuring asymptotic average-tracking convergence are derived. Furthermore, the proposed algorithm is extended to tackle the average-tracking control problem with mismatched reference signals, and a corresponding delay-dependent sufficient condition is established to guarantee privacy-preserving average-tracking convergence. Numerical simulations are conducted to verify the effectiveness of the developed algorithms. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
26 pages, 439 KB  
Article
Robust Distributed High-Dimensional Regression: A Convoluted Rank Approach
by Mingcong Wu
Entropy 2026, 28(1), 119; https://doi.org/10.3390/e28010119 - 19 Jan 2026
Viewed by 128
Abstract
This paper investigates robust high-dimensional convoluted rank regression in distributed environments. We propose an estimation method suitable for sparse regimes, which remains effective under heavy-tailed errors and outliers, as it does not impose moment assumptions on the noise distribution. To facilitate scalable computation, we develop a local linear approximation algorithm, enabling fast and stable optimization in high-dimensional settings and across distributed systems. Our theoretical results provide non-asymptotic error bounds for both one-round and multi-round communication schemes, explicitly quantifying how estimation accuracy improves with additional communication rounds. Specifically, after a number of communication rounds (logarithmic in the number of machines), the proposed estimator achieves the minimax-optimal convergence rate, up to logarithmic factors. Extensive simulations further demonstrate stable performance across a wide range of error distributions, with accurate coefficient estimation and reliable support recovery. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
22 pages, 2774 KB  
Article
Uncovering Neural Learning Dynamics Through Latent Mutual Information
by Arianna Issitt, Alex Merino, Lamine Deen, Ryan T. White and Mackenzie J. Meni
Entropy 2026, 28(1), 118; https://doi.org/10.3390/e28010118 - 19 Jan 2026
Viewed by 220
Abstract
We study how convolutional neural networks reorganize information during learning in natural image classification tasks by tracking mutual information (MI) between inputs, intermediate representations, and labels. Across VGG-16, ResNet-18, and ResNet-50, we find that label-relevant MI grows reliably with depth while input MI depends strongly on architecture and activation, indicating that “compression’’ is not a universal phenomenon. Within convolutional layers, label information becomes increasingly concentrated in a small subset of channels; inference-time knockouts, shuffles, and perturbations confirm that these high-MI channels are functionally necessary for accuracy. This behavior suggests a view of representation learning driven by selective concentration and decorrelation rather than global information reduction. Finally, we show that a simple dependence-aware regularizer based on the Hilbert–Schmidt Independence Criterion can encourage these same patterns during training, yielding small accuracy gains and consistently faster convergence. Full article
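The regularizer mentioned at the end is based on the Hilbert–Schmidt Independence Criterion. A minimal (biased) empirical HSIC with Gaussian kernels — a generic estimator with an illustrative bandwidth parameter sigma, not the authors' exact regularizer:

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC between samples X (n, dx) and Y (n, dy):
    tr(K H L H) / (n - 1)^2 with Gaussian kernels; larger = more dependent."""
    n = X.shape[0]
    def gram(Z):
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
        return np.exp(-d2 / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K, L = gram(X) @ H, gram(Y) @ H
    return np.trace(K @ L) / (n - 1) ** 2
```

Used as a penalty between layer activations, such a term discourages redundant statistical dependence across representations during training.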
21 pages, 1205 KB  
Article
Reassessing China’s Regional Modernization Based on a Grey-Based Evaluation Framework and Spatial Disparity Analysis
by Wenhao Zhou, Hongxi Lin, Zhiwei Zhang and Siyu Lin
Entropy 2026, 28(1), 117; https://doi.org/10.3390/e28010117 - 19 Jan 2026
Viewed by 225
Abstract
Understanding regional disparities in Chinese modernization is essential for achieving coordinated and sustainable development. This study develops a multi-dimensional evaluation framework, integrating grey relational analysis, entropy weighting, and TOPSIS to assess provincial modernization across China from 2018 to 2023. The framework operationalizes Chinese-style modernization through five dimensions: population quality, economic strength, social development, ecological sustainability, innovation and governance, capturing both material and institutional aspects of development. Using K-Means clustering, kernel density estimation, and convergence analysis, the study examines spatial and temporal patterns of modernization. Results reveal pronounced regional heterogeneity: eastern provinces lead in overall modernization but display internal volatility, central provinces exhibit gradual convergence, and western provinces face widening disparities. Intra-regional analysis highlights uneven development even within geographic clusters, reflecting differential access to resources, governance capacity, and innovation infrastructure. These findings are interpreted through modernization theory, linking observed patterns to governance models, regional development trajectories, and policy coordination. The proposed framework offers a rigorous, data-driven tool for monitoring modernization progress, diagnosing regional bottlenecks, and informing targeted policy interventions. This study demonstrates the methodological value of integrating grey system theory with multi-criteria decision-making and clustering analysis, providing both theoretical insights and practical guidance for advancing balanced and sustainable Chinese-style modernization. Full article
(This article belongs to the Section Multidisciplinary Applications)
24 pages, 24392 KB  
Article
Peer Reporting: Sampling Design and Unbiased Estimates
by Kang Wen, Jianhong Mou and Xin Lu
Entropy 2026, 28(1), 116; https://doi.org/10.3390/e28010116 - 18 Jan 2026
Viewed by 119
Abstract
The Ego-Centric Sampling Method (ECM) leverages individual-level reports about peers to estimate population proportions within social networks, offering strong privacy protection without requiring full network data. However, the conventional ECM estimator is unbiased only under the restrictive assumption of a homogeneous network, where node degrees are uniform and uncorrelated with attributes. To overcome this limitation, we introduce the Activity Ratio Corrected ECM estimator (ECMac), which exploits network reciprocity to recast the population–proportion problem into an equivalent formulation in edge space. This reformulation relies solely on ego–peer data and explicitly corrects for degree–attribute dependencies, yielding unbiased and stable estimates even in highly heterogeneous networks. Simulations and analyses on real-world networks show that ECMac reduces estimation error by up to 70% compared with the conventional ECM. Our results establish a theoretically grounded and practically scalable framework for unbiased inference in network-based sampling designs. Full article
(This article belongs to the Special Issue Complexity of Social Networks)
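The degree–attribute bias the paper targets can be reproduced with a toy calculation. Below, every node reports all of its peers, so node i appears in the pooled reports once per incident edge; the naive proportion is therefore degree-weighted and overshoots whenever high-degree nodes carry the attribute. A generic inverse-degree (Hansen–Hurwitz-style) reweighting removes that bias. This illustrates the problem only; the ECMac estimator's activity-ratio correction is defined in the paper itself.

```python
def peer_report_estimates(degrees, attrs):
    """Pooled peer reports: node i is reported degrees[i] times.
    Returns (naive proportion, inverse-degree-corrected proportion)."""
    reported = sum(d * a for d, a in zip(degrees, attrs))  # flagged reports
    total = sum(degrees)                                   # all reports
    naive = reported / total
    # weight each report by 1/degree of the reported node: node i then
    # contributes d_i * (1/d_i) = 1, recovering the node-level proportion
    corrected = (sum(d * (a / d) for d, a in zip(degrees, attrs))
                 / sum(d * (1.0 / d) for d in degrees))
    return naive, corrected

# 20% of nodes carry the attribute, but attribute nodes have degree 8 vs. 2
degrees = [8] * 20 + [2] * 80
attrs = [1] * 20 + [0] * 80
naive, corrected = peer_report_estimates(degrees, attrs)
```

Here the naive pooled estimate inflates the true 20% prevalence to 50%, exactly the homogeneity failure the abstract describes.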

21 pages, 486 KB  
Article
Extended Arimoto–Blahut Algorithms for Bistatic Integrated Sensing and Communications Systems
by Tian Jiao, Yanlin Geng, Zhiqiang Wei and Zai Yang
Entropy 2026, 28(1), 115; https://doi.org/10.3390/e28010115 - 18 Jan 2026
Viewed by 128
Abstract
Integrated Sensing and Communication (ISAC) has emerged as a cornerstone technology for next-generation wireless networks, where accurate performance evaluation is essential. In such systems, the capacity–distortion function provides a fundamental measure of the trade-off between communication and sensing performance, making its computation a problem of significant interest. However, the associated optimization problem is often constrained by non-convexity, which poses considerable challenges for deriving effective solutions. In this paper, we propose extended Arimoto–Blahut (AB) algorithms to solve the non-convex optimization problem associated with the capacity–distortion trade-off in bistatic ISAC systems. Specifically, we introduce auxiliary variables to transform non-convex distortion constraints in the optimization problem into linear constraints, prove that the reformulated linearly constrained optimization problem maintains the same optimal solution as the original problem, and develop extended AB algorithms for both squared error distortion and logarithmic loss distortion. The numerical results validate the effectiveness of the proposed algorithms. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)
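The extended algorithms build on the classical Arimoto–Blahut iteration for channel capacity. A minimal version of that baseline, without the paper's distortion constraints and auxiliary variables, looks like this, checked here against a binary symmetric channel whose capacity is 1 − H(0.1) ≈ 0.531 bits:

```python
import math

def blahut_arimoto(W, iters=200):
    """Baseline Arimoto-Blahut iteration for C = max_p I(X;Y).
    W[x][y] = P(y|x). Returns (capacity in bits, optimal input distribution)."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx
    for _ in range(iters):
        # q[x][y]: posterior of input x given output y under current p
        q = [[0.0] * ny for _ in range(nx)]
        for y in range(ny):
            z = sum(p[x] * W[x][y] for x in range(nx))
            for x in range(nx):
                q[x][y] = p[x] * W[x][y] / z if z > 0 else 0.0
        # p[x] proportional to exp(sum_y W[x][y] * log q[x][y])
        r = [math.exp(sum(W[x][y] * math.log(q[x][y])
                          for y in range(ny) if W[x][y] > 0))
             for x in range(nx)]
        z = sum(r)
        p = [ri / z for ri in r]
    # mutual information at the final input distribution
    cap = 0.0
    for x in range(nx):
        for y in range(ny):
            if W[x][y] > 0:
                py = sum(p[xx] * W[xx][y] for xx in range(nx))
                cap += p[x] * W[x][y] * math.log(W[x][y] / py, 2)
    return cap, p

cap, p_opt = blahut_arimoto([[0.9, 0.1], [0.1, 0.9]])  # BSC, crossover 0.1
```

The paper's contribution is to graft linearized distortion constraints onto this alternating scheme so that the same fixed-point machinery computes the capacity–distortion trade-off.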

30 pages, 2546 KB  
Article
Entropy and Normalization in MCDA: A Data-Driven Perspective on Ranking Stability
by Ewa Roszkowska
Entropy 2026, 28(1), 114; https://doi.org/10.3390/e28010114 - 18 Jan 2026
Viewed by 172
Abstract
Normalization is a critical step in Multiple-Criteria Decision Analysis (MCDA) because it transforms heterogeneous criterion values into comparable information. This study examines normalization techniques through the lens of entropy, highlighting how criterion data structure shapes normalization behavior and ranking stability within TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). Seven widely used normalization procedures are analyzed regarding mathematical properties, sensitivity to extreme values, treatment of benefit and cost criteria, and rank reversal. Normalization is treated as a source of uncertainty in MCDA outcomes, as different schemes can produce divergent rankings under identical decision settings. Shannon entropy is employed as a descriptive measure of information dispersion and structural uncertainty, capturing the heterogeneity and discriminatory potential of criteria rather than serving as a weighting mechanism. An illustrative experiment with ten alternatives and four criteria (two high-entropy, two low-entropy) demonstrates how entropy mediates normalization effects. Seven normalization schemes are examined, including vector, max, linear Sum, and max–min procedures. For vector, max, and linear sum, cost-type criteria are treated using either linear inversion or reciprocal transformation, whereas max–min is implemented as a single method. This design separates the choice of normalization form from the choice of cost-criteria transformation, allowing a cleaner identification of their respective contributions to ranking variability. The analysis shows that normalization choice alone can cause substantial differences in preference values and rankings. High-entropy criteria tend to yield stable rankings, whereas low-entropy criteria amplify sensitivity, especially with extreme or cost-type data. 
These findings position entropy as a key mediator linking data structure with normalization-induced ranking variability and highlight the need to consider entropy explicitly when selecting normalization procedures. Finally, a practical entropy-based method for choosing normalization techniques is introduced to enhance methodological transparency and ranking robustness in MCDA. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)
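Four of the normalization forms discussed, together with the normalized Shannon entropy used to characterize a criterion's dispersion, can be sketched directly. The data are illustrative; the paper's seven schemes also cover the reciprocal and linear-inversion cost transformations, omitted here.

```python
import math

def vector_norm(col):
    return [x / math.sqrt(sum(v * v for v in col)) for x in col]

def max_norm(col):
    return [x / max(col) for x in col]

def sum_norm(col):
    return [x / sum(col) for x in col]

def max_min_norm(col):
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

def normalized_entropy(col):
    """Shannon entropy of the sum-normalized column, scaled to [0, 1]:
    1 means values are evenly spread, near 0 means one value dominates."""
    p = sum_norm(col)
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return h / math.log(len(col))

high = normalized_entropy([5, 6, 5, 6])       # balanced criterion
low = normalized_entropy([1, 1, 1, 100])      # one extreme value
scaled = max_min_norm([2, 5, 9])              # maps to [0, 1]
```

The contrast between `high` and `low` is the mechanism the abstract describes: a low-entropy criterion, dominated by an extreme value, is exactly where the choice of normalization scheme swings the final ranking.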

34 pages, 2594 KB  
Article
Variational Deep Alliance: A Generative Auto-Encoding Approach to Longitudinal Data Analysis
by Shan Feng, Wenxian Xie and Yufeng Nie
Entropy 2026, 28(1), 113; https://doi.org/10.3390/e28010113 - 18 Jan 2026
Viewed by 132
Abstract
Rapid advancements in the field of deep learning have had a profound impact on a wide range of scientific studies. This paper incorporates the power of deep neural networks to learn complex relationships in longitudinal data. The novel generative approach, Variational Deep Alliance (VaDA), is established, where an “alliance” is formed across repeated measurements via the strength of Variational Auto-Encoder. VaDA models the generating process of longitudinal data with a unified and well-structured latent space, allowing outcomes prediction, subjects clustering and representation learning simultaneously. The integrated model can be inferred efficiently within a stochastic Auto-Encoding Variational Bayes framework, which is scalable to large datasets and can accommodate variables of mixed type. Quantitative comparisons to those baseline methods are considered. VaDA shows high robustness and generalization capability across various synthetic scenarios. Moreover, a longitudinal study based on the well-known CelebFaces Attributes dataset is carried out, where we show its usefulness in detecting meaningful latent clusters and generating high-quality face images. Full article

18 pages, 2971 KB  
Article
First Experimental Measurements of Biophotons from Astrocytes and Glioblastoma Cell Cultures
by Luca De Paolis, Elisabetta Pace, Chiara Maria Mazzanti, Mariangela Morelli, Francesca Di Lorenzo, Lucio Tonello, Catalina Curceanu, Alberto Clozza, Maurizio Grandi, Ivan Davoli, Angelo Gemignani, Paolo Grigolini and Maurizio Benfatto
Entropy 2026, 28(1), 112; https://doi.org/10.3390/e28010112 - 17 Jan 2026
Viewed by 201
Abstract
Biophotons are non-thermal and non-bioluminescent ultraweak photon emissions, first hypothesised by Gurwitsch as a regulatory mechanism in cell division, and then experimentally observed in living organisms. Today, two main hypotheses explain their origin: stochastic decay of excited molecules and coherent electromagnetic fields produced in biochemical processes. Recent interest focuses on the role of biophotons in cellular communication and disease monitoring. This study presents the first campaign of biophoton emission measurements from cultured astrocytes and glioblastoma cells, conducted at Fondazione Pisana per la Scienza (FPS) using two ultra-sensitive setups developed in collaboration between the National Laboratories of Frascati (LNF-INFN) and the University of Rome II Tor Vergata. The statistical analyses of the collected data revealed a clear separation between cellular signals and dark noise, confirming the high sensitivity of the apparatus. The Diffusion Entropy Analysis (DEA) was applied to the data to uncover dynamic patterns, revealing anomalous diffusion and long-range memory effects that may be related to intercellular signaling and cellular communication. These findings support the hypothesis that biophoton emissions encode rich information beyond intensity, reflecting metabolic and pathological states. The differences revealed by applying the Diffusion Entropy Analysis to the biophotonic signals of Astrocytes and Glioblastoma are highlighted and discussed in the paper. This work lays the groundwork for future studies on neuronal cultures and proposes biophoton dynamics as a promising tool for non-invasive diagnostics and the study of cellular communication. Full article
(This article belongs to the Section Entropy and Biology)
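Diffusion Entropy Analysis itself is compact: partial sums of the signal over windows of length t form a diffusion process, and the Shannon entropy of the displacement histogram grows as S(t) ≈ A + δ ln t, with δ = 0.5 for ordinary diffusion and δ ≠ 0.5 signaling anomalous scaling. Below is a rough sketch (fixed-width histogram, least-squares fit, a plain random walk as input); the published analysis is more careful about binning and window statistics.

```python
import math, random

def dea_exponent(signal, t_values):
    """Estimate the DEA scaling exponent delta from S(t) ~ A + delta * ln(t)."""
    S = []
    for t in t_values:
        # overlapping-window displacements of the diffusion process
        disp = [sum(signal[i:i + t]) for i in range(len(signal) - t)]
        counts = {}
        for d in disp:                      # unit-width histogram
            b = math.floor(d)
            counts[b] = counts.get(b, 0) + 1
        n = len(disp)
        S.append(-sum((c / n) * math.log(c / n) for c in counts.values()))
    # least-squares slope of S against ln t
    xs = [math.log(t) for t in t_values]
    mx, my = sum(xs) / len(xs), sum(S) / len(S)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, S))
            / sum((x - mx) ** 2 for x in xs))

random.seed(1)
walk = [random.choice([-1, 1]) for _ in range(20000)]
delta = dea_exponent(walk, [4, 8, 16, 32, 64])  # expect ~0.5 for ordinary diffusion
```

A biophoton count series exhibiting long-range memory would return δ significantly different from the 0.5 recovered here for memoryless steps.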

34 pages, 2540 KB  
Article
Way More than the Sum of Their Parts: From Statistical to Structural Mixtures
by James P. Crutchfield
Entropy 2026, 28(1), 111; https://doi.org/10.3390/e28010111 - 16 Jan 2026
Viewed by 156
Abstract
We show that mixtures comprising multicomponent systems typically are much more structurally complex than the sum of their parts; sometimes, infinitely more complex. We contrast this with the more familiar notion of statistical mixtures, demonstrating how statistical mixtures miss key aspects of emergent hierarchical organization. This leads us to identify a new kind of structural complexity inherent in multicomponent systems and to draw out broad consequences for system ergodicity. Full article
(This article belongs to the Section Statistical Physics)

15 pages, 4568 KB  
Article
Influences of Annealing Treatment on Soft Magnetic Properties, Mechanical Properties and Microstructure of Fe24.94Co24.94Ni24.94Al24.94Si0.24 High-Entropy Alloy
by Shiqi Zhang, Pin Jiang, Xuanbo Shi, Xiaohua Tan and Hui Xu
Entropy 2026, 28(1), 110; https://doi.org/10.3390/e28010110 - 16 Jan 2026
Viewed by 151
Abstract
In order to meet the ever-growing demand in modern power electronics, the advanced soft magnetic materials (SMMs) are required to exhibit both excellent soft magnetic performance and mechanical properties. In this work, the effects of an annealing treatment on the soft magnetic properties, mechanical properties and microstructure of the Fe24.94Co24.94Ni24.94Al24.94Si0.24 high-entropy alloy (HEA) are investigated. The as-cast HEA consists of a body-centered cubic (BCC) matrix phase and spherical B2 nanoprecipitates with a diameter of approximately 5 nm, where a coherent relationship is established between the B2 phase and the BCC matrix. After annealing at 873 K, the alloy retains both the BCC and B2 phases, with their coherent relationship preserved; besides the spherical B2 nanoprecipitates, rod-shaped B2 nanoprecipitates are also observed. After the annealing treatment, the saturation magnetization (Ms) of the alloy varies slightly within the range of 103–113 Am2/kg, which may be induced by the precipitation of this rod-shaped nanoprecipitate phase in the alloy. The increase in the coercivity (Hc) of annealed HEA is due to the inhomogeneous grain distribution, increased lattice misfit and high dislocation density induced by the annealing. The nanoindentation result reveals that the hardness after annealing at 873 K exhibits a 25% improvement compared with the hardness of as-cast HEA, which is mainly due to dislocation strengthening and precipitation strengthening. This research finding can provide guidance for the development of novel ferromagnetic HEAs, so as to meet the demands for materials with excellent soft magnetic properties and superior mechanical properties in the field of sustainable electrical energy. Full article
(This article belongs to the Special Issue Recent Advances in High Entropy Alloys)

24 pages, 3852 KB  
Review
Ions at Helium Interfaces: A Review
by Paul Leiderer
Entropy 2026, 28(1), 109; https://doi.org/10.3390/e28010109 - 16 Jan 2026
Viewed by 163
Abstract
Ions in liquid helium exist in their simplest form in two configurations, as negatively charged “electron bubbles” (electrons in a void of about 35 Å in diameter) and as positive “snowballs” (He+ ions surrounded by a sphere of solid helium, about 14 Å in diameter). Here, we give an overview of studies with these ions when they are trapped at interfaces between different helium phases, i.e., the “free” surface between liquid and vapor, but also the interfaces between liquid and solid helium at high pressure and between phase-separated 3He-4He mixtures below the tricritical point. Three cases are discussed: (i) if the energy barrier provided by the interface is of the order of the thermal energy kBT, the ions can pass from one phase to the other with characteristic trapping times at the interface, which are in qualitative agreement with the existing theories; (ii) if the energy barrier is sufficiently high, the ions are trapped at the interface for extended periods of time, forming 2D Coulomb systems with intriguing properties; and (iii) at high electric fields and high ion densities, an electrohydrodynamic instability takes place, which is a model for critical phenomena. Full article

25 pages, 1436 KB  
Article
Entropy-Augmented Forecasting and Portfolio Construction at the Industry-Group Level: A Causal Machine-Learning Approach Using Gradient-Boosted Decision Trees
by Gil Cohen, Avishay Aiche and Ron Eichel
Entropy 2026, 28(1), 108; https://doi.org/10.3390/e28010108 - 16 Jan 2026
Viewed by 266
Abstract
This paper examines whether information-theoretic complexity measures enhance industry-group return forecasting and portfolio construction within a machine-learning framework. Using daily data for 25 U.S. GICS industry groups spanning more than three decades, we augment gradient-boosted decision tree models with Shannon entropy and fuzzy entropy computed from recent return dynamics. Models are estimated at weekly, monthly, and quarterly horizons using a strictly causal rolling-window design and translated into two economically interpretable allocation rules: a maximum-profit strategy and a minimum-risk strategy. Results show that the top-performing strategy, the weekly maximum-profit model augmented with Shannon entropy, achieves an accumulated return exceeding 30,000%, substantially outperforming both the baseline model and the fuzzy-entropy variant. On monthly and quarterly horizons, entropy and fuzzy entropy generate smaller but robust improvements by maintaining lower volatility and better downside protection. Industry allocations display stable and economically interpretable patterns: profit-oriented strategies concentrate primarily in cyclical and growth-sensitive industries such as semiconductors, automobiles, technology hardware, banks, and energy, while minimum-risk strategies consistently favor defensive industries including utilities, food, beverage and tobacco, real estate, and consumer staples. Overall, the results demonstrate that entropy-based complexity measures improve both economic performance and interpretability, yielding industry-rotation strategies that are simultaneously more profitable, more stable, and more transparent. Full article
(This article belongs to the Special Issue Entropy, Artificial Intelligence and the Financial Markets)
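The Shannon-entropy feature can be illustrated with a small helper that bins a window of recent returns and measures the dispersion of the histogram. The bin count and window length here are arbitrary choices; the paper's exact feature construction, and its fuzzy-entropy variant, follow their own definitions.

```python
import math

def return_entropy(returns, bins=10):
    """Shannon entropy (bits) of a histogram of a return window:
    low when returns cluster in one regime, high when dispersed."""
    lo, hi = min(returns), max(returns)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for r in returns:
        j = min(int((r - lo) / (hi - lo) * bins), bins - 1)
        counts[j] += 1
    n = len(returns)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

calm = return_entropy([0.001] * 50)                       # one regime -> 0 bits
dispersed = return_entropy([i / 100 for i in range(50)])  # spread -> ~log2(10) bits
```

Appended to the usual price-based predictors, a rolling value of this kind is the sort of complexity feature the gradient-boosted trees consume at each horizon.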

15 pages, 2092 KB  
Article
Improved NB Model Analysis of Earthquake Recurrence Interval Coefficient of Variation for Major Active Faults in the Hetao Graben and Northern Marginal Region
by Jinchen Li and Xing Guo
Entropy 2026, 28(1), 107; https://doi.org/10.3390/e28010107 - 16 Jan 2026
Viewed by 160
Abstract
This study presents an improved Nishenko–Buland (NB) model to address systematic biases in estimating the coefficient of variation for earthquake recurrence intervals based on a normalizing function TTave. Through Monte Carlo simulations, we demonstrate that traditional NB methods significantly underestimate the coefficient of variation when applied to limited paleoseismic datasets, with deviations reaching between 30 and 40% for small sample sizes. We developed a linear transformation and iterative optimization approach that corrects these statistical biases by standardizing recurrence interval data from different sample sizes to conform to a common standardized distribution. Application to 26 fault segments across 15 major active faults in the Hetao graben system yields a corrected coefficient of variation of α = 0.381, representing a 24% increase over the traditional method (α0 = 0.307). This correction demonstrates that conventional approaches systematically underestimate earthquake recurrence variability, potentially compromising seismic hazard assessments. The improved model successfully eliminates sampling bias through iterative convergence, providing more reliable parameters for probability distributions in renewal-based earthquake forecasting. Full article
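The small-sample bias the paper corrects is easy to reproduce: normalizing each fault's intervals by its own sample mean (T/T_ave) before pooling shrinks the apparent spread. Below is a hypothetical Monte Carlo with lognormal recurrence intervals of true CV 0.5; the paper's actual model and its iterative correction are more elaborate.

```python
import math, random

def pooled_cv(true_cv, events_per_fault, n_faults=4000, seed=7):
    """CV of recurrence intervals estimated by pooling T / T_ave across
    faults, each fault contributing only a small paleoseismic sample."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + true_cv ** 2))  # lognormal with CV = true_cv
    mu = -0.5 * sigma * sigma                         # scale cancels after normalizing
    pooled = []
    for _ in range(n_faults):
        t = [rng.lognormvariate(mu, sigma) for _ in range(events_per_fault)]
        t_ave = sum(t) / len(t)
        pooled.extend(x / t_ave for x in t)
    mean = sum(pooled) / len(pooled)                  # = 1 by construction
    var = sum((x - mean) ** 2 for x in pooled) / len(pooled)
    return math.sqrt(var) / mean

cv_small = pooled_cv(0.5, 3)    # few paleoearthquakes per fault: biased low
cv_large = pooled_cv(0.5, 30)   # longer records: bias mostly disappears
```

With three events per fault the pooled CV lands well below the true 0.5, the same direction of bias the abstract reports for limited paleoseismic datasets.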

28 pages, 5111 KB  
Article
A Novel Parallel-Preheating Supercritical CO2 Brayton Cycle for Waste Heat Recovery from Offshore Gas Turbines: Energy, Exergy, and Economic Analysis Under Variable Loads
by Dianli Qu, Jia Yan, Xiang Xu and Zhan Liu
Entropy 2026, 28(1), 106; https://doi.org/10.3390/e28010106 - 16 Jan 2026
Viewed by 184
Abstract
Supercritical carbon dioxide (SC-CO2) power cycles offer a promising solution for offshore platforms’ gas turbine waste heat recovery due to their compact design and high thermal efficiency. This study proposes a novel parallel-preheating recuperated Brayton cycle (PBC) using SC-CO2 for waste heat recovery on offshore gas turbines. An integrated energy, exergy, and economic (3E) model was developed and showed good predictive accuracy (deviations < 3%). The comparative analysis indicates that the PBC significantly outperforms the simple recuperated Brayton cycle (SBC). Under 100% load conditions, the PBC achieves a net power output of 4.55 MW, while the SBC reaches 3.28 MW, representing a power output increase of approximately 27.9%. In terms of thermal efficiency, the PBC reaches 36.7%, compared to 21.5% for the SBC, marking an improvement of about 41.4%. Additionally, the electricity generation cost of the PBC is 0.391 CNY/kWh, whereas that of the SBC is 0.43 CNY/kWh, corresponding to a cost reduction of approximately 21.23%. Even at 30% gas turbine load, the PBC maintains high thermoelectric and exergy efficiencies of 30.54% and 35.43%, respectively, despite a 50.8% reduction in net power from full load. The results demonstrate that the integrated preheater effectively recovers residual flue gas heat, enhancing overall performance. To meet the spatial constraints of offshore platforms, we maintained a pinch-point temperature difference of approximately 20 K in both the preheater and heater by adjusting the flow split ratio. This approach ensures a compact system layout while balancing cycle thermal efficiency with economic viability. This study offers valuable insights into the PBC’s variable-load performance and provides theoretical guidance for its practical optimization in engineering applications. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Energy Systems)

16 pages, 6698 KB  
Article
Sustainable High Corrosion Resistance in High-Concentration NaCl Solutions for Refractory High-Entropy Alloys with High Strength and Good Plasticity
by Shunhua Chen, Xinxin Liu, Chong Li, Wuji Wang and Xiaokang Yue
Entropy 2026, 28(1), 105; https://doi.org/10.3390/e28010105 - 15 Jan 2026
Viewed by 244
Abstract
Among corrosive environments, Cl− is one of the most aggressive anions, which can cause electrochemical corrosion and the resultant failures of alloys, and the increase in Cl− concentration will further deteriorate the passive film in many conventional alloys. Here, we report single-phase Nb25Mo25Ta25Ti20W5Cx (x = 0.1, 0.3, 0.8 at.%) refractory high-entropy alloys (RHEAs) with excellent corrosion resistance in high-concentration NaCl solutions. According to potentiodynamic polarization, electrochemical impedance spectroscopy, corroded morphology and the current–time results, the RHEAs demonstrate even better corrosion resistance with the increase in NaCl concentration to 23.5 wt.%, significantly superior to 304 L stainless steel. Typically, the corrosion current density (icorr) and over-passivation potential (Et) reached the lowest and highest value, respectively, in the 23.5 wt.% NaCl solution, and the icorr (2.36 × 10−7 A/cm2) of Nb25Mo25Ta25Ti20W5C0.1 alloy is nearly two orders of magnitude lower than that of 304 L stainless steel (1.75 × 10−5 A/cm2). The excellent corrosion resistance results from the formation of passive films with fewer defects and more stable oxides. Moreover, with the addition of the appropriate C element, the RHEAs also demonstrated improved strength and plasticity simultaneously; for example, the Nb25Mo25Ta25Ti20W5C0.3 alloy exhibited an average yield strength of 1368 MPa and a plastic strain of 19.7%. The present findings provide useful guidance to address the conflict between the excellent corrosion resistance and high strength of advanced alloys. Full article
(This article belongs to the Special Issue Recent Advances in High Entropy Alloys)

17 pages, 19420 KB  
Article
Lévy Diffusion Under Power-Law Stochastic Resetting
by Jianli Liu, Yunyun Li and Fabio Marchesoni
Entropy 2026, 28(1), 104; https://doi.org/10.3390/e28010104 - 15 Jan 2026
Viewed by 187
Abstract
We investigated the diffusive dynamics of a Lévy walk subject to stochastic resetting through combined numerical and theoretical approaches. Under exponential resetting, the process mean squared displacement (MSD) undergoes a sharp transition from free superdiffusive behavior with exponent γ0 to a steady-state saturation regime. In contrast, power-law resetting with exponent β exhibits three asymptotic MSD regimes: free superdiffusion for β<1, superdiffusive scaling with a linearly β-decreasing exponent for 1<β<γ0+1, and localization characterized by finite steady-state plateaus for β>γ0+1. MSD scaling laws derived via renewal theory-based analysis demonstrate excellent agreement with numerical simulations. These findings offer new insights for optimizing search strategies and controlling transport processes in non-equilibrium environments. Full article
(This article belongs to the Section Statistical Physics)
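A free Lévy walk (no resetting) is straightforward to simulate and already reproduces the superdiffusive baseline that resetting then modifies: with flight durations drawn from a Pareto law with index α in (1, 2) and unit speed, theory gives MSD ~ t^(3−α). A minimal sketch follows, with hypothetical parameters; the paper adds the power-law resetting mechanism on top of this free process.

```python
import math, random

def levy_walk_msd(alpha, t_samples, n_walkers=2000, seed=3):
    """Ensemble MSD of a 1D Levy walk: unit speed, flight durations
    tau ~ Pareto(alpha) with tau >= 1, direction +-1 per flight."""
    rng = random.Random(seed)
    times = sorted(t_samples)
    msd = {t: 0.0 for t in times}
    for _ in range(n_walkers):
        t, x, i = 0.0, 0.0, 0
        while i < len(times):
            tau = (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto draw
            v = rng.choice((-1.0, 1.0))
            while i < len(times) and times[i] <= t + tau:
                # position at the sample time, inside the current flight
                msd[times[i]] += (x + v * (times[i] - t)) ** 2
                i += 1
            x += v * tau
            t += tau
    return {t: s / n_walkers for t, s in msd.items()}

m = levy_walk_msd(1.5, [100.0, 1000.0])
# effective exponent between t = 100 and t = 1000; theory predicts 3 - 1.5 = 1.5
gamma = math.log(m[1000.0] / m[100.0]) / math.log(10.0)
```

Under the power-law resetting studied in the paper, this superdiffusive exponent is progressively clipped toward a plateau as the resetting exponent β grows.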

17 pages, 1692 KB  
Article
A Multi-Object Tracking Method with an Unscented Kalman Filter on a Lie Group Manifold
by Xinyu Wang, Li Liu and Fanzhang Li
Entropy 2026, 28(1), 103; https://doi.org/10.3390/e28010103 - 15 Jan 2026
Viewed by 231
Abstract
Multi-object tracking (MOT) has attracted increasing attention and achieved remarkable progress. However, accurately tracking objects with homogeneous appearance, heterogeneous motion, and heavy occlusion remains a challenge because of two problems: (1) missing association due to recognizing an object as background and (2) false prediction caused by the predominant utilization of linear motion models and the insufficient discriminability of object appearance representations. To address these challenges, this paper proposes a lightweight, generic, and appearance-independent MOT method with an unscented Kalman filter (UKF) on a Lie group called LUKF-Track. The method utilizes detection boxes across the entire range of scores in data association and matches objects across frames by employing a motion model, where the propagation and prediction of object states are formulated using a UKF on the Lie group. LUKF-Track achieves state-of-the-art results on three public benchmarks, MOT17, MOT20, and DanceTrack, which are characterized by highly nonlinear object motion and severe occlusions. Full article
(This article belongs to the Special Issue Lie Group Machine Learning)

38 pages, 3177 KB  
Review
Unveiling Scale-Dependent Statistical Physics: Connecting Finite-Size and Non-Equilibrium Systems for New Insights
by Agustín Pérez-Madrid and Iván Santamaría-Holek
Entropy 2026, 28(1), 99; https://doi.org/10.3390/e28010099 - 14 Jan 2026
Viewed by 355
Abstract
A scale-dependent effective temperature emerges as a unifying principle in the statistical physics of apparently different phenomena, namely quantum confinement in finite-size systems and non-equilibrium effects in thermodynamic systems. This concept effectively maps these inherently complex systems onto equilibrium states, thereby enabling the direct application of standard statistical physics methods. By offering a framework to analyze these systems as effectively at equilibrium, our approach provides powerful new tools that significantly expand the scope of the field. Just as the constant speed of light in Einstein’s theory of special relativity necessitates a relative understanding of space and time, our fixed ratio of energy to temperature suggests a fundamental rescaling of both quantities that allows us to recognize shared patterns across diverse materials and situations. Full article
(This article belongs to the Section Statistical Physics)
