Entropy, Volume 28, Issue 4 (April 2026) – 116 articles

Cover Story: This cover illustrates a unified framework in which interacting spin systems function as both quantum thermal machines and quantum batteries. On the left, a spin-chain heat engine absorbs heat from a hot bath and converts part of it into work through a cyclic process, while the remaining heat is rejected to a cold bath. On the right, the same system operates as a quantum battery, where energy is stored via external driving and extracted as useful work (ergotropy). The central quantum bridge highlights coherence and many-body correlations as key resources enhancing both energy conversion and storage, establishing a unified quantum platform for thermodynamics applications. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
29 pages, 489 KB  
Article
A Sequential Design for Extreme Quantile Estimation Under Binary Sampling
by Michel Broniatowski and Emilie Miranda
Entropy 2026, 28(4), 479; https://doi.org/10.3390/e28040479 - 21 Apr 2026
Viewed by 287
Abstract
We propose a sequential design method aiming at the estimation of an extreme quantile based on a sample of binary data corresponding to peaks over a given threshold. This study is motivated by an industrial challenge in material reliability and consists of estimating a failure quantile from trials whose outcomes are reduced to indicators of whether the specimen has failed at the tested stress levels. The proposed approach relies on a splitting strategy that decomposes the target extreme probability into a product of higher-order conditional probabilities, enabling a progressive exploration of the tail of the distribution through sampling under truncated laws. We consider GEV and Weibull models for the underlying distribution, and the sequential estimation of their parameters is carried out using an enhanced maximum likelihood procedure specifically adapted to binary data, addressing the substantial uncertainty inherent to such limited information. Full article
(This article belongs to the Special Issue Statistical Inference: Theory and Methods)
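To make the splitting strategy above concrete (notation ours, not the authors'): for an increasing sequence of thresholds $s_1 < s_2 < \dots < s_m$ reaching into the tail, the extreme exceedance probability factorizes as

$$P(X > s_m) = P(X > s_1)\,\prod_{j=1}^{m-1} P\big(X > s_{j+1} \mid X > s_j\big),$$

so each conditional factor is of moderate order and can be estimated in turn from binary failure indicators collected under the truncated law of $X$ given $X > s_j$.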

24 pages, 1327 KB  
Article
VeriFed: Temporally Consistent Continuous Cross-Chain Data Federation
by Kun Hao, Meng Bi and Yuliang Ma
Entropy 2026, 28(4), 478; https://doi.org/10.3390/e28040478 - 21 Apr 2026
Viewed by 399
Abstract
Cross-chain analytics increasingly demand continuous joins across ledgers with asynchronous state evolution. Existing solutions, however, typically assume static snapshots or neglect temporal alignment, yielding semantically inconsistent results when epochs drift. This paper introduces VeriFed, a system for temporally consistent continuous cross-chain joins. We formalize the problem of snapshot-aligned continuous joins, design a Unified Adapter Layer (UAL) to align finalized snapshots across heterogeneous protocols, and develop incremental verification that composes per-chain proofs into a global summary via the Epoch Attestation Mesh (EAM) and the Delta-Linked Proof Forest (DLPF). To sustain high-throughput execution, VeriFed further adopts an incremental multi-objective optimizer that balances latency and monetary cost. Experiments on Ethereum transaction data with a simulated wide-area network (WAN) demonstrate that VeriFed achieves sub-second per-epoch latency (approx. 38 ms) and reduces verification overhead by orders of magnitude compared to state-of-the-art baselines, while effectively detecting tampering with zero false positives. These results confirm consistent efficiency and verifiability under continuous updates. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

31 pages, 1181 KB  
Article
A Discrete Informational Framework for Classical Gravity: Ledger Foundations and Galaxy Rotation Curve Constraints
by Megan Simons, Elshad Allahyarov and Jonathan Washburn
Entropy 2026, 28(4), 477; https://doi.org/10.3390/e28040477 - 20 Apr 2026
Viewed by 354
Abstract
The weak-field, quasi-static regime of gravity is commonly described by the Newton–Poisson equation as an effective response law. We construct this response within a cost-first discrete variational framework. The Recognition Composition Law (RCL) uniquely selects a reciprocal closure cost within the restricted quadratic symmetric composition class; together with the discrete ledger axioms AX1–AX5 (including conservation) and standard DEC refinement, the Newton–Poisson baseline is then recovered in the instantaneous-closure limit. Conditional on Assumption AS1 (scale-free latency) and Assumption AS2 (causal frequency–wavenumber ansatz), allowing finite equilibration introduces fractional memory into the response, yielding a scale-free modification of the source–potential relation characterized by a power-law kernel $w_{\mathrm{ker}}(k) = 1 + C\,(k_0/k)^{\alpha}$ in Fourier space. The kernel exponent $\alpha = \tfrac{1}{2}(1-\varphi^{-1}) \approx 0.191$, where $\varphi = (1+\sqrt{5})/2$, is derived from self-similarity of the discrete ledger closure; the amplitude $C = \varphi^{-2} \approx 0.382$ is identified as a hypothesis from a three-channel factorization argument. We evaluate this quasi-static kernel-motivated response against SPARC galaxy rotation curves under a strict global-only protocol (fixed $M/L = 1$, no per-galaxy tuning, conservative $\sigma_{\mathrm{tot}}$), using a controlled multiplicative surrogate for the full nonlocal disk operator implied by the kernel. In this deliberately over-constrained setting, the surrogate interface achieves $\mathrm{median}(\chi^2/N) = 3.06$ over 147 galaxies (2933 points), outperforming a strict global-only NFW benchmark and remaining less efficient than MOND under identical constraints. The analysis is restricted to the non-relativistic, quasi-static sector and should be read as a falsifier-oriented galactic-regime consistency check of the scaling window, not as a relativistic completion or a claim of Solar System viability without additional UV regularization/screening. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)

8 pages, 272 KB  
Article
A Perturbation Subsampling Method for Massive Censored Data
by Yan Tian and Jiaxin Song
Entropy 2026, 28(4), 476; https://doi.org/10.3390/e28040476 - 20 Apr 2026
Viewed by 199
Abstract
With the advancement of information technology, large-scale data have become increasingly common. Subsampling methods for the statistical analysis of such data require computing a sampling probability for each observation, a process that can be computationally intensive. In this paper, we extend the perturbed subsampling approach to the Cox proportional hazards model, a model widely used in survival analysis, in order to address the statistical analysis of large-scale survival data. Specifically, we propose a perturbed subsampling algorithm for this model. The effectiveness of the proposed method is evaluated through simulation studies and real-data analysis. Full article
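As a rough sketch of the perturbation-subsampling idea on a Cox-type model (the simulated data, variable names, and crude gradient-descent fit below are our illustrative assumptions, not the authors' algorithm): draw a small uniform subsample, attach i.i.d. mean-one random weights, refit the weighted partial likelihood, and use the spread across replicates for inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated survival data (illustrative only)
n, p = 100_000, 3
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, -0.3, 0.2])
t_event = rng.exponential(1.0 / np.exp(X @ beta_true))
t_cens = rng.exponential(2.0, size=n)
time = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)      # 1 = event observed

def weighted_cox_grad(beta, X, time, delta, w):
    """Gradient of the negative weighted Cox partial log-likelihood (Breslow)."""
    order = np.argsort(-time)                  # descending time: cumsums = risk sets
    Xo, do, wo = X[order], delta[order], w[order]
    we = wo * np.exp(Xo @ beta)
    cum_we = np.cumsum(we)
    cum_wex = np.cumsum(we[:, None] * Xo, axis=0)
    return -np.sum((wo * do)[:, None] * (Xo - cum_wex / cum_we[:, None]), axis=0)

def perturbed_fit(seed, rate=0.01, lr=0.5, iters=300):
    r = np.random.default_rng(seed)
    keep = r.random(n) < rate                  # uniform subsample
    w = r.exponential(1.0, size=keep.sum())    # i.i.d. mean-1 perturbation weights
    beta = np.zeros(p)
    for _ in range(iters):                     # plain gradient descent, for brevity
        beta -= lr * weighted_cox_grad(beta, X[keep], time[keep],
                                       delta[keep], w) / keep.sum()
    return beta

betas = np.array([perturbed_fit(s) for s in range(20)])
print("point estimate:", betas.mean(axis=0))
print("replicate spread (basis for standard errors):", betas.std(axis=0))
```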

22 pages, 21234 KB  
Article
MFAFENet: A Multi-Sensor Collaborative and Multi-Scale Feature Information Adaptive Fusion Network for Spindle Rotational Error Classification in CNC Machine Tools
by Fei Wang, Lin Song, Pengfei Wang, Ping Deng and Tianwei Lan
Entropy 2026, 28(4), 475; https://doi.org/10.3390/e28040475 - 20 Apr 2026
Viewed by 221
Abstract
Accurate classification of spindle rotational errors is critical for ensuring machining precision and operational reliability of CNC machine tools. However, existing methods face challenges in extracting discriminative feature information from vibration signals due to small inter-class differences and complex electromechanical interference. This paper proposes a novel deep learning model, MFAFENet, based on multi-sensor collaboration and multi-scale feature information adaptive fusion. Vibration signals from three mounting positions are transformed into time-frequency representations via the short-time Fourier transform (STFT). The proposed network adaptively fuses multi-scale feature information from parallel branches with different kernel sizes through a branch attention mechanism. An efficient channel attention module is then incorporated to recalibrate channel-wise feature responses. The cross-entropy loss function is employed to optimize the network parameters during training. Experiments on a spindle reliability test bench demonstrate that MFAFENet achieves 93.37% average test accuracy, outperforming other comparative methods. Ablation and comparative studies confirm the effectiveness of each module and the clear advantage of adaptive fusion over fixed-weight multi-scale methods. Multi-sensor fusion further improves accuracy by 7.23% over the best single-sensor setup. The proposed method establishes an effective end-to-end mapping between vibration signals and rotational errors, providing a promising solution for high-precision spindle condition monitoring. Full article
(This article belongs to the Section Multidisciplinary Applications)
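For orientation, the sketch below shows the kind of STFT preprocessing the abstract describes; the signal, sampling rate, and window parameters are invented for illustration and are not taken from the paper.

```python
# Turn a raw vibration channel into a time-frequency image via the STFT.
import numpy as np
from scipy.signal import stft

fs = 25_600                                   # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
vibration = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)

f, tau, Z = stft(vibration, fs=fs, nperseg=256, noverlap=192)
tf_image = np.abs(Z)                          # |STFT| magnitude, one input "channel"
print(tf_image.shape)                         # (frequency bins, time frames)
```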

38 pages, 11591 KB  
Article
A Simple Understanding of Quantum Electrodynamics Using Bohmian Trajectories: Detecting Non-Ontic Photons
by Juan José Seoane, Abdelilah Benali and Xavier Oriols
Entropy 2026, 28(4), 474; https://doi.org/10.3390/e28040474 - 20 Apr 2026
Viewed by 270
Abstract
The use of Bohmian mechanics as a practical tool for modeling non-relativistic quantum phenomena of matter provides clear evidence of its success, not only as a way to interpret the foundations of quantum mechanics, but also as a computational framework. In the literature, it is frequently argued that such a realistic view—based on deterministic trajectories—cannot account for phenomena involving the “creation” and “annihilation” of photons. In this paper, by revisiting and rehabilitating earlier proposals, we show how quantum optics can be modeled using Bohmian trajectories for electrons in physical space, together with well-defined electromagnetic fields evolving in time. By paying special attention to an experimental scenario demonstrating partition noise for photons, and to how the Born rule emerges in this context, the paper pursues two main goals. First, it validates the use of this simple Bohmian framework for pedagogical and computational purposes in understanding and visualizing quantum electrodynamics phenomena. Second, given that measurements are ultimately indicated on matter pointers, it clarifies what it means to measure photon or electromagnetic-field properties, even when they are considered non-ontic elements. Full article
(This article belongs to the Special Issue Quantum Foundations: 100 Years of Born’s Rule)

28 pages, 1541 KB  
Article
An Entropy-Based Framework for Hybrid Coalitions in Game Theory—Part I: Human Arbitration
by Salomé A. Sepúlveda-Fontaine and José M. Amigó
Entropy 2026, 28(4), 473; https://doi.org/10.3390/e28040473 - 20 Apr 2026
Viewed by 413
Abstract
Classical Game Theory underpins much of AI and multi-agent research, but hybrid Human–AI systems require a framework in which execution authority can alternate within a digital environment. We introduce Neo-Game Theory, an extension of Classical Game Theory for hybrid Human–AI coalitions operating under Virtual Nature, the algorithmic analogue of classical (physical) Nature. The framework combines a lexicographic coalition utility with a delegation rule based on the Jensen–Shannon divergence between Human and AI policies. Two thresholds define agreement, contextual, and disagreement regions. In the contextual region, execution follows a scenario-specific rule. Apart from the theory, in this paper we develop the first regime, Human arbitration, in which the AI learns by observation and frequency matching while the Human retains final execution authority. We establish the axiomatic basis of the framework and characterize a frequency-convergence equilibrium, providing the foundation for later extensions and computational validation. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
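A minimal sketch of the delegation rule described above, assuming hypothetical policies and threshold values (the paper's actual thresholds and action spaces are not given here):

```python
# Route execution by the Jensen-Shannon divergence between Human and AI
# policies over a shared action set, using two thresholds.
import numpy as np
from scipy.spatial.distance import jensenshannon

human_policy = np.array([0.60, 0.30, 0.10])   # hypothetical action distributions
ai_policy    = np.array([0.50, 0.35, 0.15])

# scipy returns the JS *distance*; square it to obtain the divergence.
jsd = jensenshannon(human_policy, ai_policy, base=2) ** 2

T_AGREE, T_DISAGREE = 0.01, 0.20              # assumed thresholds, not the paper's values
if jsd <= T_AGREE:
    region = "agreement"                      # either agent may execute
elif jsd >= T_DISAGREE:
    region = "disagreement"                   # Human arbitration: Human executes
else:
    region = "contextual"                     # scenario-specific rule applies
print(f"JSD = {jsd:.4f} -> {region}")
```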

50 pages, 1551 KB  
Article
Causally Informative Entropic Inequalities within Families of Distributions with Shared Marginals
by Daniel Chicharro
Entropy 2026, 28(4), 472; https://doi.org/10.3390/e28040472 - 20 Apr 2026
Viewed by 273
Abstract
The joint probability distribution of observable variables from a system is constrained by the underlying causal structure. In the presence of hidden variables, untestable independencies that involve hidden variables lead to testable causally-imposed inequality constraints for observable variables, whose violation can reject the compatibility of a causal structure with data. One type of causally informative inequalities is entropic inequalities, which appear in the space of entropic terms associated with the distribution of observable variables. We derive a new type of minimum information (minInf) entropic inequalities that substantially increases causal inference power. These new entropic inequalities appear when considering the constraints that the causal structure imposes on entropic terms determined by information minimization within families of distributions that preserve sets of marginals shared with the original distribution. We introduce a new family of minInf data processing inequalities and a procedure to recursively combine different types of data processing inequalities to create tighter testable entropic inequalities. We extensively illustrate the applicability of this procedure in the instrumental causal scenario, integrating the new inequalities with standard instrumental entropic inequalities constructed with multivariate instrumental sets. We also provide additional examples with other types of entropic inequalities, such as the Information Causality and Groups-Decomposition inequalities. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

22 pages, 2068 KB  
Article
Conditional Agglomeration in China’s Northeast Rust Belt: Density, Structural Orientation, and Ownership-Mixing Entropy
by Omar Abu Risha, Jifan Ren, Mohammed Ismail Alhussam and Mohamad Ali Alhussam
Entropy 2026, 28(4), 471; https://doi.org/10.3390/e28040471 - 20 Apr 2026
Viewed by 238
Abstract
Northeast China’s rust-belt cities have faced persistent concerns about stagnating labor productivity amid structural change. This paper examines how the productivity payoff to urban density depends on local economic structure and ownership composition using an annual panel of prefecture-level cities. We estimate two-way fixed-effects models with city and year effects and city-clustered standard errors, complemented by dynamic specifications and additional robustness checks. The results show a robust positive within-city association between population density and labor productivity. This density premium is structure-conditioned: the productivity payoff to density is significantly larger in city-years that are more industry-oriented. Information-theoretic measures further show that sectoral and ownership composition matter in distinct ways. A normalized entropy measure based on 19 all-city sectoral employment categories is positively associated with labor productivity, while its interaction with density is negative and significant, indicating that the density premium is weaker in more sectorally balanced city-years. A normalized four-category ownership entropy measure, constructed from SOE, private/self-employed, collective, and other employment shares, is positively associated with labor productivity and interacts positively with density, indicating a stronger density–productivity association in city-years with a more balanced ownership composition. Collectively, the findings suggest that urban density is not a uniform engine of productivity: its payoff depends on whether dense city economies are organized around productive sectoral linkages and a sufficiently balanced ownership environment. Overall, the evidence supports a conditional agglomeration view in which productivity dynamics in Northeast China reflect the interaction of density, structural orientation, sectoral dispersion, and ownership mixing. Full article
(This article belongs to the Special Issue Complexity in Urban Systems)
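Both composition measures can be computed with the standard normalized Shannon entropy; the sketch below uses invented employment shares purely for illustration.

```python
import numpy as np

def normalized_entropy(shares):
    """Shannon entropy of a share vector, normalized by ln(K) to lie in [0, 1]."""
    p = np.asarray(shares, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum() / np.log(len(shares))

sector_emp = np.random.default_rng(1).random(19)   # 19 sectoral employment shares (toy)
ownership_emp = [0.35, 0.45, 0.05, 0.15]           # SOE, private, collective, other (toy)
print(normalized_entropy(sector_emp), normalized_entropy(ownership_emp))
```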

24 pages, 455 KB  
Article
Fragmentation of Nuclear Remnants in Electron–Nucleus Collisions at High Energy as a Nonextensive Process
by Ting-Ting Duan, Sahanaa Büriechin, Hai-Ling Lao, Fu-Hu Liu and Khusniddin K. Olimov
Entropy 2026, 28(4), 470; https://doi.org/10.3390/e28040470 - 20 Apr 2026
Viewed by 234
Abstract
Utilizing a partitioning method based on equal (or unequal) probabilities—without incorporating the alpha-cluster (α-cluster) model—allows for the derivation of diverse topological configurations of nuclear fragments resulting from fragmentation. Subsequently, we predict the multiplicity distribution of nuclear fragments for specific excited nuclei, such as $^{9}\mathrm{Be}^{*}$, $^{12}\mathrm{C}^{*}$, and $^{16}\mathrm{O}^{*}$, which can be formed as nuclear remnants in electron–nucleus (eA) collisions at high energy. Based on the α-cluster model, an α-cluster structure may result in deviations in the multiplicity distributions of nuclear fragments with charge $Z=2$, compared to those predicted by the partitioning methods. Furthermore, in the framework of Tsallis statistics, the nonextensive generalized temperature, entropy index, and q-entropy are obtained from the multiplicity distribution of nuclear fragments with a given charge number. Our work shows that fragmentation of nuclear remnants in electron–nucleus collisions at high energy is a nonextensive process. Full article
(This article belongs to the Special Issue Complexity in High-Energy Physics: A Nonadditive Entropic Perspective)

22 pages, 7955 KB  
Article
Speed Ratio in a Novel Multilayer Traffic Network for Urban Congestion Relief and Efficiency Gain
by Wenna Liu and Bo Yang
Entropy 2026, 28(4), 469; https://doi.org/10.3390/e28040469 - 20 Apr 2026
Viewed by 292
Abstract
Based on observations of real-world transport systems such as bus–subway systems, street–motorway networks, and rail–air transport frameworks, in which high-speed layers are typically constructed above pre-existing low-speed networks to alleviate congestion and improve efficiency, this study proposes a method for constructing multilayer transport networks by strategically deploying the high-speed layer according to node betweenness centrality in the underlying low-speed network. The concept of speed ratio is introduced to quantify the speed difference within the multilayer network. The multilayer network is then embedded in a user-equilibrium flow assignment model based on the Bureau of Public Roads (BPR) function. Utilizing network efficiency, high-speed layer utilization ratio, and proportion of congested edges as metrics, we analyze the impact of (1) the inter-layer speed ratio, (2) the low-speed-layer topology, and (3) interlayer transfer costs on system performance. Key findings indicate that, under a given traffic demand, increasing the inter-layer speed ratio elevates network efficiency while shifting congestion from lower to upper layers; incorporating long-range connections improves efficiency and alleviates traffic congestion; and introducing an interlayer travel speed may enhance efficiency in specific parameter regimes. Full article
(This article belongs to the Special Issue Complexity in Urban Systems)
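For reference, the Bureau of Public Roads (BPR) link-cost function used in user-equilibrium assignment takes the standard form

$$t_e(v_e) = t_e^0\left[1 + \beta\left(\frac{v_e}{c_e}\right)^{\gamma}\right],$$

where $t_e^0$ is the free-flow travel time of edge $e$ (inversely proportional to its layer's speed), $v_e$ the assigned flow, and $c_e$ the capacity; the common defaults $\beta = 0.15$, $\gamma = 4$ are quoted here as an assumption, since the paper's calibration may differ.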

22 pages, 3182 KB  
Article
Modeling and Dynamic Analysis of Trust Decay in Social Media Based on Triadic Closure Structure
by Yao Qu, Changjing Wang and Qi Tian
Entropy 2026, 28(4), 468; https://doi.org/10.3390/e28040468 - 20 Apr 2026
Viewed by 346
Abstract
Trust decay in social media is a serious threat to user experience and platform ecology. To address this problem, this paper focuses on triadic closure in the infrastructure of social networks and explores its mechanism in preventing trust decay. Based on a systematic comparison of the ER random graph, the BA scale-free network, a forest fire model, and complete-graph approaches, two core metrics, the trust decay risk index and the trust resilience index, are proposed. Combined with structural indices such as the clustering coefficient, the average path length, and the triangular closure number and its growth rate, a quantitative relationship between network structure evolution and trust decay risk is established. It is found that the forest fire model exhibits optimal structural trust resilience owing to its power-law growth characteristics of high clustering, short path lengths, and triangular closure, and that the dynamic mechanism of trust decay differs significantly across network growth modes. The validity of the theoretical framework is further supported by verification on data from the Sina Weibo follower (attention-relationship) network. The analysis framework for network growth and evolution based on triadic closure, together with the risk and resilience indicators defined in this paper, provides a computable theoretical tool for understanding and predicting trust evolution in social media from the perspective of network structure. Full article

12 pages, 409 KB  
Article
The Rényi Entropy and Entropic Cosmology
by S. I. Kruglov
Entropy 2026, 28(4), 467; https://doi.org/10.3390/e28040467 - 20 Apr 2026
Viewed by 510
Abstract
Entropic cosmology with the Rényi entropy of the apparent horizon, $S_R = (1/\alpha)\ln(1+\alpha S_{BH})$, where $S_{BH}$ is the Bekenstein–Hawking entropy, is studied. By virtue of the thermodynamics–gravity correspondence, a model of dark energy is investigated. The generalized Friedmann equations for the Friedmann–Lemaître–Robertson–Walker spatially flat universe with a barotropic matter fluid are obtained. We compute the dark energy density $\rho_D$, pressure $p_D$, and the deceleration parameter $q$ of the universe. For some model parameters, the normalized matter density parameter $\Omega_{m0} \approx 0.315$ and the deceleration parameter $q_0 \approx -0.535$ for the current epoch are found, in agreement with the Planck data. Making use of the thermodynamics–gravity correspondence, we describe the late-time acceleration of the universe. The entropic cosmology considered here is equivalent to a cosmology based on teleparallel gravity with a definite function $F(T)$. The Hubble parameters are in approximate agreement (within 5 percent) with the observational Hubble data for redshifts $0.07 \le z \le 1.75$ at the entropy parameter $\alpha \approx 0.305\,GH_0^2$. Full article
(This article belongs to the Section Statistical Physics)
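For context, the quoted Rényi entropy reduces to the Bekenstein–Hawking entropy as the nonextensivity parameter vanishes:

$$S_R = \frac{1}{\alpha}\ln\!\big(1+\alpha S_{BH}\big) = S_{BH} - \frac{\alpha}{2}S_{BH}^2 + \mathcal{O}(\alpha^2) \longrightarrow S_{BH} \quad (\alpha \to 0),$$

so it is the $\alpha$-dependent corrections that drive the modified Friedmann equations and the dark-energy behaviour described above.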

31 pages, 551 KB  
Article
Frequentist and Bayesian Predictive Inference for the Log-Logistic Distribution Under Progressive Type-II Censoring
by Ziteng Zhang and Wenhao Gui
Entropy 2026, 28(4), 466; https://doi.org/10.3390/e28040466 - 18 Apr 2026
Viewed by 262
Abstract
This paper investigates the prediction of unobserved future failure times for the heavy-tailed Log-Logistic distribution under Progressive Type-II censoring. We first develop point and interval estimates for the unknown parameters using both frequentist maximum likelihood and Bayesian approaches. For predicting future failures, we derive three distinct point predictors: the Best Unbiased Predictor (BUP), the Conditional Median Predictor (CMP), and the Bayesian Predictor (BP). Corresponding prediction intervals are constructed using frequentist pivotal quantities, Bayesian Equal-Tailed Intervals (ETIs), and Highest Posterior Density (HPD) methods. The Bayesian procedures are implemented via Markov chain Monte Carlo (MCMC) sampling. We evaluate the finite-sample performance of the proposed methodologies through a Monte Carlo simulation study and further validate them using two real-world datasets, namely bladder cancer remission times and guinea pig survival times. The numerical results indicate that the proposed BP, particularly under the empirical prior, provides the most accurate and stable overall performance for point prediction, while the frequentist predictors become less reliable in extreme heavy-tailed settings. For interval prediction, the Bayesian HPD method consistently outperforms the alternatives, substantially reducing interval lengths for right-skewed data while maintaining the nominal coverage probability. Full article

24 pages, 1336 KB  
Article
Haken-Entropy-Based Analysis of the Synergy Among Financial Support, Technological Innovation, and Industrial Upgrading
by Yue Zhang, Jinchuan Ke and Jingqi He
Entropy 2026, 28(4), 465; https://doi.org/10.3390/e28040465 - 17 Apr 2026
Viewed by 394
Abstract
This study reveals the internal mechanism of the synergetic evolution of financial support, technological innovation, and industrial upgrading from the perspective of system synergy. It aims to provide a theoretical basis and reference for promoting benign interactions among these elements, thereby driving high-quality economic development. During the research process, an evaluation indicator system was constructed based on China’s industrial development data, utilizing the entropy method to determine indicator weights and the Haken model to analyze synergy effects. In a methodological innovation, this study identifies the system’s order parameters to derive the potential function. Through this approach, it systematically analyzes the dynamic evolution characteristics and synergetic mechanisms of the composite system. The research results indicate that the three systems have formed a mutually promoting and closely coupled compound synergetic mechanism, rather than following a single linear transmission path. The overall synergy level presents a medium-to-low development trend, following an asymmetric U-shaped evolution trajectory that first decreases and then slowly recovers. Furthermore, the degree of synergy exhibits an inverse relationship with the volatility of the subsystems, suggesting that the stability of synergy is highly susceptible to external forces and remains in a state of dynamic flux. Full article

45 pages, 4870 KB  
Article
A Novel Version of the Arcsine–Rayleigh Distribution with Entropy Measures, Statistical Inference, and Applications
by Asmaa S. Al-Moisheer, Khalaf S. Sultan, Moustafa N. Mousa and Mahmoud M. M. Mansour
Entropy 2026, 28(4), 464; https://doi.org/10.3390/e28040464 - 17 Apr 2026
Viewed by 293
Abstract
This paper presents a new distribution on the unit interval, named the Unit Arcsine–Rayleigh distribution (UASRD), which results from an exponential transformation of the Arcsine–Rayleigh distribution. The suggested model is versatile and can be used to model bounded reliability and proportion data. Entropy-based measures are also studied to quantify the uncertainty and information content of the proposed model, further explaining its probabilistic nature and its potential applicability in information-theoretic and reliability tasks. These findings demonstrate the utility of the suggested model for the study of bounded data in the context of information theory. Basic statistical characteristics are derived, such as the cumulative distribution and density functions, the quantile function, reliability and hazard functions, and ordinary moments. Parameters are estimated via maximum likelihood, maximum product spacing, and Bayesian approaches. The performance of the estimators is assessed by a Monte Carlo simulation study, and an application to real data shows the utility of the proposed model for the analysis of bounded data. Full article

26 pages, 1879 KB  
Article
NEF-DHR: A Non-Equivalent Functional Dynamic Heterogeneous Redundancy Architecture for Endogenous Safety and Security
by Bingbing Jiang, Yilin Kang and Hanzhi Cai
Entropy 2026, 28(4), 463; https://doi.org/10.3390/e28040463 - 17 Apr 2026
Viewed by 246
Abstract
Endogenous safety and security (ESS), which advocates for designing systems that are inherently safe and secure by nature, has emerged as a pivotal paradigm for addressing the inherent vulnerabilities of information systems. The Dynamic Heterogeneous Redundancy (DHR) architecture serves as its typical implementation by introducing dynamic, heterogeneous, redundant executors with equivalent function (EF) into the information system. However, the functional equivalence property explicitly connects the system’s output to that of the individual executors, thereby creating potential security risks that adversaries could exploit. In addition, EF-DHR faces an inherent contradiction between functional equivalence and heterogeneous implementations (HIS), leading to high engineering costs and limited applicability. To address these problems, this paper proposes the Non-Equivalent Functional DHR (NEF-DHR) architecture, leveraging function secret sharing (FSS) theory to replace EF executors with NEF components, which fundamentally eliminates the EF-HIS contradiction. Specifically, we propose the concept of ‘terminal executor output information entropy loss’ to formalize the risk of output information interception by adversaries and theoretically prove that NEF-DHR improves unpredictability and resistance to attacks. Experimental results further validate that NEF-DHR exhibits lower error rates under various attack levels, with enhanced robustness and superior ESS performance. Additionally, we generalize the DHR architecture based on three core properties (indistinguishability, output recoverability, verification) and classify ESS into three types with corresponding DHR variants. This work advances the application of entropy theory in ESS and provides a novel entropy-enhanced solution for the large-scale deployment of DHR security systems. Full article
(This article belongs to the Section Complexity)

20 pages, 4015 KB  
Article
Feature Selection Based on Information Entropy for Accurate Detection of Optical Fiber End-Face Defects
by Longbing Yang, Quan Xu, Min Liao, Kang Sun, Rujie Xiang and Haonan Xu
Entropy 2026, 28(4), 462; https://doi.org/10.3390/e28040462 - 17 Apr 2026
Viewed by 324
Abstract
Multimode fibers with core diameters of 50 μm and 62.5 μm are the core media for short-distance, low-cost, and high-bandwidth optical transmission scenarios. Currently, the detection of their end-face defects is still mainly based on manual microscopic inspection. Most of the existing machine vision detection schemes are aimed at polarization-maintaining fibers (POL), which are easily interfered with by impurities and have insufficient accuracy and efficiency. This study introduces the information entropy in information theory as a constraint for feature selection, proposes the WGMOS digital image detection method, and optimizes the entire process of image acquisition, correction, filtering, adaptive segmentation, and feature extraction. By minimizing the information entropy of background noise and maximizing the information content of defect features, interference is suppressed. Experiments show that compared with the POL detection method, this scheme can exclude more impurities, with the image equalization value increased by ≥38.20% and the signal-to-noise ratio increased by ≥6.0%. It can achieve efficient and accurate detection of multimode fiber end-face defects. Full article
(This article belongs to the Special Issue Failure Diagnosis of Complex Systems)
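As a minimal illustration of the entropy criterion involved (our toy example; the WGMOS pipeline itself is considerably more involved), the Shannon entropy of an intensity histogram separates a flat background from textured content:

```python
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy (bits/pixel) of a grayscale image's intensity histogram."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

flat_bg = np.full((64, 64), 128, dtype=np.uint8)                   # uniform background
noisy = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(image_entropy(flat_bg), image_entropy(noisy))                # ~0 vs ~8 bits
```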

23 pages, 1176 KB  
Article
Uncertainty Quantification in Inverse Scattering Problems
by Carolina Abugattas, Ana Carpio and Elena Cebrián
Entropy 2026, 28(4), 461; https://doi.org/10.3390/e28040461 - 17 Apr 2026
Viewed by 255
Abstract
Inverse scattering problems seek anomalies in a medium given data measured after the interaction with emitted waves. Due to noise, predictions about the nature of these inclusions should be complemented with uncertainty estimates. To this end, we propose a progressive framework for inverse scattering from low- to high-dimensional Bayesian formulations depending on the prior information and the problem complexity. We aim to reduce computational costs by exploiting educated prior information. When we look for a few well-separated inclusions in a known medium with information about their number, we resort to low-dimensional parameterizations in terms of a few random variables representing their shape and material constants. We test this approach by detecting anomalies in tissues and deposits in stratified subsoils. In more complex situations where the anomalies may overlap, we propose high-dimensional parameterizations obtained from Karhunen–Loève (KL) or Fourier expansions of the density and velocity fields. We employ these methods to characterize oil and gas reservoirs in a salt dome configuration, where the screening effect of the dome cap prevents obtaining adequate prior information. We characterize the posterior probability by means of affine-invariant ensemble and functional ensemble MCMC samplers, depending on dimensionality. This provides information on the configurations with the highest a posteriori probability and the uncertainty around them, identifying factors that could reduce the uncertainty. In high-dimensional setups, techniques based on KL expansions are more effective and stable. A recurring issue is the choice of the a priori covariance (which strongly affects the results) and of its hyperparameters; here, we use educated choices. Formulations that include them as additional parameters could be a next step, at a higher cost. Full article
(This article belongs to the Special Issue Uncertainty Quantification and Entropy Analysis)

25 pages, 2471 KB  
Article
Boosting the Diversity of a Similarity-Aware Genetic Algorithm Using a Siamese Network for Optimized S-Box Generation
by Ishfaq Ahmad Khaja, Musheer Ahmad and Louai A. Maghrabi
Entropy 2026, 28(4), 460; https://doi.org/10.3390/e28040460 - 17 Apr 2026
Viewed by 312
Abstract
Designing cryptographically robust substitution-boxes (S-boxes) is a difficult NP-hard optimization problem that necessitates a careful balancing act between several conflicting properties, such as differential uniformity and nonlinearity. Genetic Algorithms (GAs) have been widely used for this task; however, their performance is often limited by premature convergence and insufficient diversity during crossover operations. This occurs primarily because genetic algorithms commence with limited a priori knowledge; this "blindness" and failure to utilize local knowledge result in diminished performance. In a GA, crossover operations facilitate the dissemination of robust candidates within the population. Conventionally, a GA applies crossover to every pair of parents in pursuit of diversity and robust solutions, but this does not always succeed: to produce strong offspring, parental diversity is crucial. This paper proposes a similarity-aware crossover strategy, integrated with a Siamese learning framework, to guide the genetic algorithm toward improved S-box optimization with better diversity and faster convergence by utilizing parental local information. The proposed model is similarity-aware in order to guarantee that the GA improves parental diversity: when the parents exhibit excessive similarity, a "regressive" crossover is adopted, which ensures the propagation of parental couples with sufficient diversity to produce superior offspring. The proposed similarity-aware GA model is applied and evaluated to generate cryptographically robust and optimized S-boxes. To verify robustness in terms of diversity, the model has been tested using three different loss functions: contrastive loss, KL divergence loss, and the suggested hybrid loss combining both. The effectiveness of the proposed approach is demonstrated through the generation of high-quality S-boxes with strong cryptographic properties. Full article

12 pages, 2083 KB  
Article
Transient Catalytic Reaction Analysis Through Signal Defragmentation
by Stephen Kristy, Shengguang Wang and Jason P. Malizia
Entropy 2026, 28(4), 459; https://doi.org/10.3390/e28040459 - 17 Apr 2026
Viewed by 333
Abstract
The Temporal Analysis of Products (TAP) pulse response technique provides valuable insights into catalytic function and reaction kinetics. However, complex fragmentation patterns in the TAP mass spectrometry signals can complicate precise quantification, particularly when analyzing transient gas flux data typical of TAP experiments. This work demonstrates a standard defragmentation method that deconvolves transient TAP signals while maintaining the temporal resolution of the experiment. First, the integrals of calibration gas fluxes are used to determine the fingerprint fragmentation pattern and construct a fragmentation matrix. This matrix is then used to defragment experimental flux data at each recorded time point via a non-negative least squares regression. The effectiveness of this method is demonstrated using virtual data and control experiments with a TAP reactor system. The defragmentation is then applied to the more complex propane dehydrogenation reaction on a chromia/alumina catalyst, which can contain up to ten significant gas species in the reactor outlet. Initial propane pulsing reveals an induction period during which propane is fully oxidized to CO2, followed by partial reduction to CO. Afterwards, there is a transition in chemistries towards coking and propylene production. Our example illustrates a practical method for the accurate determination of the time-dependent reactant/product concentrations and rates for a thorough analysis of the propane dehydrogenation kinetics. This approach can be broadly applied to any transient mass spectrometry experiment for a better understanding of catalyst-reaction dynamics. Full article
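The per-time-point inversion described above amounts to a short non-negative least-squares loop; the fragmentation matrix and data below are toy values of our own, not calibration results from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# F[i, j] = calibrated intensity of candidate gas j at monitored channel i
F = np.array([[1.00, 0.05],
              [0.10, 1.00],
              [0.00, 0.60]])                          # 3 m/z channels, 2 gases (toy)
spectra = np.random.default_rng(2).random((500, 3))   # 500 recorded time points

# Defragment each time point: minimize ||F c - m||_2 subject to c >= 0
conc = np.array([nnls(F, m)[0] for m in spectra])
print(conc.shape)                                     # (500, 2): time-resolved amounts
```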

20 pages, 1031 KB  
Article
Rate-Splitting-Based RF-UWOC Relaying Systems with Hardware Impairments and Interference
by Xin Huang, Yeqing Su, Yuehao Qiu, Xusheng Tang and Sai Li
Entropy 2026, 28(4), 458; https://doi.org/10.3390/e28040458 - 16 Apr 2026
Viewed by 230
Abstract
To meet the future demands of high-rate transmission and full-coverage networks, radio frequency–underwater wireless optical communication (RF-UWOC) relaying systems are considered a promising heterogeneous communication architecture. The rate-splitting (RS) scheme, through its power allocation (PA) mechanism, provides a generalized framework for the performance evaluation of such systems. Based on this, this paper analyzes the performance of an RS-based RF-UWOC system under hardware impairments (HIs) and interference. Analytical expressions of the outage probability (OP) and ergodic capacity (EC) for the considered system are formulated within a generalized framework, which encompasses the conventional RF-UWOC system as a special case. The results indicate that the OP and EC are affected by HIs, interference transmit power, the PA coefficients, channel fading, pointing errors (PEs), and detection types of the UWOC link. Furthermore, the asymptotic results for the OP and the diversity gain (DG) are explicitly characterized. For a fixed interference transmit power, the DG is mainly dominated by the channel fading severity, PEs effect, and the detection scheme. When the interference transmit power is comparable to the desired signal power, the system operates in an interference-limited regime, and the DG decreases to zero. It is also revealed that HIs and PA coefficients affect the coding gain but not the DG. Moreover, the existence of an optimal PA scheme improves the reliability of the RS-based system. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

19 pages, 1142 KB  
Article
RIS-Aided Physical Layer Security with Imperfect CSI: A Robust Model-Driven Deep Learning Approach
by Ruikai Miao, Zhiqun Song, Yong Li, Xingjian Li, Lizhe Liu, Guoyuan Shao and Bin Wang
Entropy 2026, 28(4), 457; https://doi.org/10.3390/e28040457 - 16 Apr 2026
Cited by 1 | Viewed by 288
Abstract
Reconfigurable intelligent surfaces (RISs) have emerged as a promising paradigm and offer a new perspective on physical layer security. In practice, imperfect eavesdropper channel state information (CSI) represents a critical challenge for RIS-aided physical layer security design. To tackle this issue, this paper investigates RIS-aided physical layer security enhancement under imperfect eavesdropper CSI and formulates a robust weighted sum secrecy rate maximization problem. To solve this problem efficiently, a model-driven deep learning approach is proposed. We begin by introducing a gradient descent–ascent algorithm to solve the optimization problem. We then unfold this algorithm into a gated recurrent unit (GRU)-aided deep unfolding network with trainable parameters, in which the GRU adaptively generates the gradient descent–ascent step sizes. Unlike existing deep unfolding networks, which commonly use a fixed number of iterations, the proposed network integrates the sequential learning capability of the GRU and enables adaptive iteration adjustment. Simulation results demonstrate that, compared to an existing non-robust optimization algorithm and a traditional deep unfolding network with a fixed number of iterations, the proposed method is robust against imperfect CSI and achieves a higher weighted sum secrecy rate. Full article

5 pages, 166 KB  
Editorial
Complex Networks in Contemporary Science and Technology
by L. H. A. Monteiro
Entropy 2026, 28(4), 456; https://doi.org/10.3390/e28040456 - 16 Apr 2026
Viewed by 419
Abstract
Many systems in nature, society, and technology can be represented as networks of interacting components [...] Full article
(This article belongs to the Special Issue Dynamics in Biological and Social Networks)
7 pages, 207 KB  
Article
Reversible Evaporation and the Entropy of Black Holes
by Friedrich Herrmann and Michael Pohlig
Entropy 2026, 28(4), 455; https://doi.org/10.3390/e28040455 - 15 Apr 2026
Viewed by 475
Abstract
The entropy of a Schwarzschild black hole is commonly derived using thermodynamic relations whose physical interpretation is not always transparent, in particular with respect to the localization of temperature and entropy. In this paper, we present a derivation of the Bekenstein–Hawking entropy based exclusively on the principles of phenomenological thermodynamics, formulated entirely in regions where spacetime is effectively flat. The analysis considers a reversible evaporation process in which the black hole is surrounded by a tunable thermal radiation bath whose temperature is kept arbitrarily close to the Hawking temperature. In this limit, entropy production can be made negligible. By integrating the entropy flux through a distant reference surface over the evaporation process, the standard entropy formula is obtained without invoking assumptions about the localization of the black hole entropy or about microscopic degrees of freedom. The derivation is mathematically simple but conceptually instructive. The approach is intended to be accessible to readers familiar with classical thermodynamics and general relativity at an advanced undergraduate or graduate level. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
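The integration sketched in the abstract can be reproduced in outline using only standard formulas (our condensed notation): with the Hawking temperature $T_H = \hbar c^3/(8\pi G M k_B)$ and the reversible heat $\delta Q = c^2\,dM$ carried through the distant reference surface,

$$S = \int \frac{\delta Q}{T_H} = \int_0^{M} \frac{8\pi G k_B\,M'}{\hbar c}\,dM' = \frac{4\pi G k_B M^2}{\hbar c} = \frac{k_B c^3 A}{4 G \hbar},$$

with $A = 16\pi G^2 M^2 / c^4$ the horizon area, which is the Bekenstein–Hawking value.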
52 pages, 1369 KB  
Review
Dynamic Properties in a Collisional Model for Confined Granular Fluids: A Review
by Ricardo Brito, Rodrigo Soto and Vicente Garzó
Entropy 2026, 28(4), 454; https://doi.org/10.3390/e28040454 - 15 Apr 2026
Viewed by 287
Abstract
Granular systems confined in a shallow box and subjected to vertical vibration provide an attractive geometry for studying fluidized granular media. In this configuration, grains acquire kinetic energy in the vertical direction through collisions with the confining walls, and this energy is subsequently transferred to the horizontal degrees of freedom via interparticle collisions. In recent years, the so-called Δ-model has been introduced as a simplified yet effective description of the dynamics of granular systems in such geometries. This review presents the results obtained from kinetic theory for the granular Δ-model. To model the energy transfer mechanism, a fixed velocity increment Δ is added to the normal component of the relative velocity during collisions. In this way, the vertical motion is effectively integrated out while retaining the collisional energy injection characteristic of the confined setup. This mechanism compensates for the energy loss due to inelastic collisions and leads to stable homogeneous steady states that can be analyzed within the framework of kinetic theory. The Enskog kinetic equation is formulated for this model and first analyzed in homogeneous steady states, yielding the stationary temperature and the equation of state. The dynamics of inhomogeneous states is then investigated using the Chapman–Enskog method, from which the Navier–Stokes transport coefficients are derived. The theory is further extended to granular mixtures, in which particles may differ in mass, size, restitution coefficient, or in the value of Δ. In this case, the phenomenology becomes richer; for example, energy equipartition is violated even in homogeneous steady states. The mixture dynamics is studied through the corresponding Navier–Stokes equations, and the associated transport coefficients are obtained in the low-density regime. The analysis of the hydrodynamic equations shows that, in agreement with simulations, the homogeneous state is linearly stable. Moreover, the intrinsically nonequilibrium nature of the model leads to the violation of Onsager reciprocity relations in granular mixtures. The theoretical predictions exhibit in general good agreement with both molecular dynamics simulations and direct simulation Monte Carlo results. Full article
(This article belongs to the Special Issue Review Papers for Entropy, Second Edition)

33 pages, 30703 KB  
Article
Polynomial Perceptrons for Compact, Robust, and Interpretable Machine Learning Models
by Edwin Aldana-Bobadilla, Alejandro Molina-Villegas, Juan Cesar-Hernandez and Mario Garza-Fabre
Entropy 2026, 28(4), 453; https://doi.org/10.3390/e28040453 - 15 Apr 2026
Viewed by 519
Abstract
This paper introduces the Polynomial Perceptron (PP), a structured extension of the classical perceptron that incorporates explicit polynomial feature expansions to model nonlinear interactions while preserving analytical transparency. By expressing feature interactions in closed functional form, PP captures higher-order dependencies through a compact set of learned coefficients, establishing a principled trade-off between expressivity and parameter efficiency. The proposed architecture is evaluated across heterogeneous domains, including text, image, and structured data tasks, under controlled experimental settings with parameter-matched baselines. Performance is assessed using standard metrics such as classification accuracy and model complexity (parameter count). Empirical results demonstrate that low-degree PP models achieve competitive accuracy compared to multilayer perceptrons and convolutional neural networks, while requiring significantly fewer parameters. An ablation study further analyzes the impact of polynomial degree on predictive performance, revealing diminishing returns beyond moderate degrees and highlighting favorable efficiency–accuracy trade-offs. A key advantage of PP lies in its intrinsic interpretability. Unlike conventional deep learning models that rely on post hoc explanation methods, PP provides direct analytical insight through its explicit polynomial structure, enabling decomposition of predictions into feature-, token-, or patch-level contributions without surrogate approximations. Overall, the results indicate that PP offers a lightweight, interpretable, and computationally efficient alternative to standard neural architectures, particularly well-suited for resource-constrained environments and applications where transparency is critical. Full article
(This article belongs to the Special Issue Advances in Data Mining and Coding Theory for Data Compression)
Show Figures

Graphical abstract
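
As a concrete reading of the PP idea described above, the sketch below expands inputs into all monomials of degree ≤ 2 and trains a single logistic unit on them. It is a minimal illustration under that reading, not the authors' implementation; the expansion scheme, training loop, and hyperparameters are assumptions.

```python
# Minimal degree-2 "polynomial perceptron" sketch: explicit polynomial feature
# expansion followed by one logistic unit. Illustrative only; the expansion,
# training loop, and hyperparameters are assumptions, not the paper's design.
import numpy as np

def poly_expand(X):
    """Expand features to [x_i] + [x_i * x_j for i <= j] (all degree-<=2 monomials)."""
    n, d = X.shape
    quads = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.hstack([X, np.stack(quads, axis=1)])

def train_pp(X, y, lr=0.5, epochs=5000):
    """Fit weights and bias by gradient descent on the logistic loss."""
    Z = poly_expand(X)
    w, b = np.zeros(Z.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # sigmoid predictions
        g = p - y                               # dLoss/dLogit for log-loss
        w -= lr * (Z.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# XOR-like toy data: linearly inseparable, but the x1*x2 monomial separates it.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
w, b = train_pp(X, y)
preds = (poly_expand(X) @ w + b) > 0
print(preds.astype(int))  # expected: [0 1 1 0]
```

Here the learned coefficient on the x1*x2 monomial carries the interaction directly, which is the kind of per-term interpretability the abstract emphasizes.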

23 pages, 7162 KB  
Article
Causal Interpretation of DBSCAN Algorithm: A Dynamic Modeling for Epsilon Estimation
by K. Garcia-Sanchez, J.-L. Perez-Ramos, S. Ramirez-Rosales, A.-M. Herrera-Navarro, H. Jiménez-Hernández and D. Canton-Enriquez
Entropy 2026, 28(4), 452; https://doi.org/10.3390/e28040452 - 15 Apr 2026
Viewed by 420
Abstract
DBSCAN is widely used to identify structured regions in unlabeled data, but its performance depends critically on the selection of the neighborhood parameter ε. Traditional heuristics for estimating ε often become unreliable in high-dimensional or varying-density settings because they rely heavily on [...] Read more.
DBSCAN is widely used to identify structured regions in unlabeled data, but its performance depends critically on the selection of the neighborhood parameter ε. Traditional heuristics for estimating ε often become unreliable in high-dimensional or varying-density settings because they rely heavily on local geometric criteria and may fail under smooth transitions or topological ambiguity. This work presents a three-level perspective on DBSCAN hyperparameter selection. At the algorithmic level, ε controls neighborhood connectivity and structural transitions in clustering. At the modeling level, the ordered k-distance signal is approximated through a surrogate dynamical estimation framework inspired by a mass–spring–damper system. At the causal level, the resulting estimator is interpreted through interventions on its internal threshold-selection mechanism. The proposed method models the variation of ε using ordinary differential equations defined on the ordered k-distance signal, enabling analysis of structural transitions in density organization via a surrogate dynamical representation. System identification is performed using L-BFGS-B optimization on the smoothed k-distance curve, while the system dynamics are solved with the fourth-order Runge–Kutta method. The resulting estimator identifies transition regions that are structurally informative for ε selection in DBSCAN. To analyze the estimator at the intervention level, Pearl’s do-calculus is used to compute the Average Causal Effect (ACE). The method was evaluated on synthetic benchmarks and on the Covtype dataset, including scenarios with multi-density overlap and dimensionality up to R^10. The resulting ACE values, +0.9352, +0.5148, and +0.9246, indicate that the proposed estimator improves intervention-based ε selection relative to the geometric baseline across the evaluated datasets. Its practical computational cost is dominated by nearest-neighbor search, behaving approximately as O(N log N) under favorable indexing conditions and degrading toward O(N^2) in high-dimensional or weak-pruning regimes. Full article
(This article belongs to the Special Issue Causal Graphical Models and Their Applications, 2nd Edition)
Show Figures

Figure 1
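
The estimator above operates on the ordered k-distance signal familiar from DBSCAN practice. The sketch below builds that signal and picks ε at the point of largest discrete curvature; this knee heuristic is a simplified stand-in for the paper's mass–spring–damper ODE fit, which is not reproduced here, and k, the smoothing window, and the knee criterion are illustrative assumptions.

```python
# Build the ordered k-distance signal that the paper's dynamical estimator
# operates on, then choose epsilon at the point of largest discrete curvature.
# This knee heuristic is a simplified stand-in for the authors' ODE-based
# estimator; k, the smoothing window, and the criterion are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def k_distance_curve(X, k=4):
    """Sorted distance from each point to its k-th nearest neighbor."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1: each point is its own 0-NN
    dists, _ = nn.kneighbors(X)
    return np.sort(dists[:, k])

def knee_epsilon(curve, window=11):
    """Smooth the curve and return its value where the second difference peaks."""
    smooth = np.convolve(curve, np.ones(window) / window, mode="same")
    curvature = np.abs(np.diff(smooth, 2))           # discrete second derivative
    lo, hi = window, len(curvature) - window         # skip convolution edge ramps
    idx = lo + int(np.argmax(curvature[lo:hi]))
    return smooth[idx + 1]

rng = np.random.default_rng(0)
blobs = [rng.normal(c, 0.1, size=(200, 2)) for c in (0.0, 1.0, 2.5)]
noise = rng.uniform(-0.5, 3.0, size=(30, 2))
X = np.vstack(blobs + [noise])

eps = knee_epsilon(k_distance_curve(X, k=4))
labels = DBSCAN(eps=eps, min_samples=4).fit_predict(X)
print(f"eps ~ {eps:.3f}; clusters: {len(set(labels) - {-1})}")
```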

16 pages, 2014 KB  
Article
Dwell Time Outperforms Social and Chemical Predictors of Behavioural Transitions in Ants
by Michael Crosscombe, Ilya Horiguchi, Shigeto Dobata and Takashi Ikegami
Entropy 2026, 28(4), 451; https://doi.org/10.3390/e28040451 - 15 Apr 2026
Viewed by 340
Abstract
Agent-based models of collective behaviour can reproduce the macroscopic patterns observed in biological systems, yet reproducing observed behaviour does not guarantee the model captures the true underlying mechanisms. In ant colonies, for example, clustering may arise from local imitation, chemical marking of the [...] Read more.
Agent-based models of collective behaviour can reproduce the macroscopic patterns observed in biological systems, yet reproducing observed behaviour does not guarantee the model captures the true underlying mechanisms. In ant colonies, for example, clustering may arise from local imitation, chemical marking of the environment, or internal physiological states. Distinguishing between these requires predictive tests at the individual level. Here, we apply regularised hazard models to trajectory data from three colonies and systematically compare candidate mechanisms. We find that neighbour-based cues alone are weak predictors of when an ant will transition between moving and resting states. A reconstructed arrestant pheromone field is similarly weak as a predictor, and combining pheromone with neighbour cues yields inconsistent results across colonies. In contrast, a simple measure of internal state, i.e., how long an ant has occupied its current state, emerges as the dominant predictor. These results suggest that the timing of behavioural transitions is primarily governed by internal dynamics, while environmental and social cues act as modulators that shape where transitions occur rather than when. Full article
(This article belongs to the Section Complexity)
Show Figures

Figure 1
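
To illustrate the kind of comparison the abstract describes, the sketch below fits a discrete-time hazard model in which the per-step transition probability is logistic in candidate predictors. The simulated data, the two predictors, and the L2 regularizer are illustrative assumptions standing in for the paper's regularised hazard models and tracking data.

```python
# Discrete-time hazard sketch: each observation is one time step in which an
# ant either stays in its state or transitions; the transition probability is
# modeled as logistic in candidate predictors. Data and predictors here are
# simulated stand-ins, not the paper's trajectories or full feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
dwell = rng.exponential(scale=20.0, size=n)    # time already spent in current state
neighbors = rng.poisson(lam=3.0, size=n)       # nearby ants (a social cue)

# Simulate transitions whose hazard depends on dwell time only, mimicking the
# paper's finding that internal state dominates social cues.
hazard = 1.0 / (1.0 + np.exp(-(0.08 * dwell - 2.0)))
transition = rng.random(n) < hazard

X = np.column_stack([dwell, neighbors])
model = LogisticRegression(C=1.0, max_iter=1000)  # L2-regularised hazard fit
model.fit(X, transition)
print("coefficients [dwell, neighbors]:", model.coef_[0])
# Expect a clearly positive dwell coefficient and a near-zero neighbors one.
```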

17 pages, 445 KB  
Article
On GRAND-Assisted Vector Random Linear Network Coding in Wireless Broadcasts
by Rina Su, Chengji Zhao, Qifu Sun and Linqi Song
Entropy 2026, 28(4), 450; https://doi.org/10.3390/e28040450 - 15 Apr 2026
Viewed by 372
Abstract
Recent works have combined random linear network coding (RLNC) with guessing random additive noise decoding (GRAND) to leverage RLNC packets to partially correct bit errors prior to RLNC decoding, so as to reduce the packet erasure rates in wireless broadcast networks. However, existing [...] Read more.
Recent works have combined random linear network coding (RLNC) with guessing random additive noise decoding (GRAND) to leverage RLNC packets to partially correct bit errors prior to RLNC decoding, so as to reduce the packet erasure rates in wireless broadcast networks. However, existing schemes are restricted to scalar RLNC over the finite field GF(2^L). In this paper, we first formulate a general GRAND-assisted decoding framework for vector RLNC over the vector space GF(2)^L, and further propose a design rule for vector RLNC schemes such that estimated error vectors can be efficiently obtained without incurring any additional computational overhead. Necessary and sufficient conditions for the correctness of every efficiently obtained estimated error vector are characterized. Two explicit vector RLNC schemes satisfying the proposed design rule are constructed. The first scheme is designed based on the matrix representation of GF(2^L), and analytical results show that it achieves the same completion delay performance as the counterpart scalar RLNC scheme over GF(2^L), while achieving up to a 37.3% reduction in coding computational complexity compared with the scalar one. The second scheme is designed based on sparse coding coefficient matrices. It further reduces computational complexity by up to 33.6% compared with the first scheme, at the cost of a slight degradation in completion delay performance. Full article
Show Figures

Figure 1
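
For context on the substrate the paper builds on, the sketch below implements plain RLNC over GF(2): the source broadcasts random XOR combinations of K packets, and a receiver decodes by Gauss–Jordan elimination once its received combinations reach full rank. The paper's vector RLNC design rule and GRAND error-correction stage are not reproduced; this is a baseline illustration only.

```python
# Plain RLNC over GF(2): encode packets as random XOR combinations; decode by
# Gauss-Jordan elimination once K independent combinations arrive. A baseline
# illustration only -- the paper's vector RLNC design rule and GRAND stage are
# deliberately omitted.
import numpy as np

def encode(packets, rng):
    """One coded packet: random GF(2) coefficients plus the XOR of chosen packets."""
    coeffs = rng.integers(0, 2, size=len(packets), dtype=np.uint8)
    if coeffs.any():
        payload = np.bitwise_xor.reduce(packets[coeffs == 1], axis=0)
    else:
        payload = np.zeros_like(packets[0])
    return coeffs, payload

def decode(coeff_rows, payload_rows):
    """Gauss-Jordan over GF(2); returns the K original packets, or None if rank < K."""
    A = np.array(coeff_rows, dtype=np.uint8)
    B = np.array(payload_rows, dtype=np.uint8)
    K, row = A.shape[1], 0
    for col in range(K):
        pivots = np.nonzero(A[row:, col])[0]
        if len(pivots) == 0:
            return None                          # rank deficient: keep listening
        p = row + pivots[0]
        A[[row, p]], B[[row, p]] = A[[p, row]], B[[p, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]                   # XOR-eliminate this column
                B[r] ^= B[row]
        row += 1
    return B[:K]

rng = np.random.default_rng(7)
K, L = 4, 8                                      # 4 source packets of 8 bits each
packets = rng.integers(0, 2, size=(K, L), dtype=np.uint8)

received, out = [], None
while out is None:                               # listen until decodable
    received.append(encode(packets, rng))
    out = decode([c for c, _ in received], [p for _, p in received])
print("packets used:", len(received),
      "| decoded correctly:", np.array_equal(out, packets))
```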

Previous Issue
Next Issue