Entropy, Volume 27, Issue 5 (May 2025) – 32 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version. To view a paper in PDF format, click its "PDF Full-text" link and open the file with the free Adobe Reader.
20 pages, 1621 KiB  
Review
Entropy Production in Epithelial Monolayers Due to Collective Cell Migration
by Ivana Pajic-Lijakovic and Milan Milivojevic
Entropy 2025, 27(5), 483; https://doi.org/10.3390/e27050483 - 29 Apr 2025
Abstract
The intricate multi-scale phenomenon of entropy generation, resulting from the inhomogeneous and anisotropic rearrangement of cells during their collective migration, is examined across three distinct regimes: (i) convective, (ii) conductive (diffusion), and (iii) sub-diffusion. The collective movement of epithelial monolayers on substrate matrices induces the accumulation of mechanical stress within the cells, which subsequently influences cell packing density, velocity, and alignment. Variations in these physical parameters affect cell-cell interactions, which play a crucial role in the storage and dissipation of energy within multicellular systems. The internal dynamics of entropy generation, as a consequence of energy dissipation, are characterized in each regime using viscoelastic constitutive models and the surface properties at the cell-matrix biointerface. The focus of this theoretical review is to clarify how cells can modulate their rate of energy dissipation by altering cell-cell and cell-matrix adhesion interactions, undergoing changes in shape, and re-establishing polarity due to the contact inhibition of locomotion. We approach these questions by discussing the physical aspects of these complex phenomena. Full article
25 pages, 343 KiB  
Article
Quantum κ-Entropy: A Quantum Computational Approach
by Demosthenes Ellinas and Giorgio Kaniadakis
Entropy 2025, 27(5), 482; https://doi.org/10.3390/e27050482 - 29 Apr 2025
Abstract
A novel approach to the quantum version of κ-entropy that incorporates it into the conceptual, mathematical and operational framework of quantum computation is put forward. Various alternative expressions stemming from its definition, emphasizing computational and algorithmic aspects, are worked out. First, for the case of canonical Gibbs states, it is shown that the κ-entropy can be cast in the form of an expectation value of an observable that is explicitly determined. Also, an operational method named "the two-temperatures protocol" is introduced that provides a way to obtain the κ-entropy in terms of the partition functions of two auxiliary Gibbs states with temperatures κ-shifted above (the hot system) and κ-shifted below (the cold system) the original system temperature. That protocol provides physical procedures for evaluating the entropy for any κ. Second, two novel additional ways of expressing the κ-entropy are introduced: one determined by a non-negativity-definite quantum channel, with a Kraus-like operator-sum representation and its extension to a unitary dilation via a qubit ancilla; another given as a simulation of the κ-entropy via the quantum circuit of a generalized version of the Hadamard test. Third, a simple inter-relation between the von Neumann entropy and the quantum κ-entropy is worked out, and a bound on their difference is evaluated and interpreted. Also, the effect on the κ-entropy of quantum noise, implemented as a random unitary quantum channel acting on the system's density matrix, is addressed, and a bound on the entropy, depending on the spectral properties of the noisy channel and the system's density matrix, is evaluated. The results obtained amount to a quantum computational tool-box for the κ-entropy that enhances its applicability in practical problems. Full article
(This article belongs to the Section Statistical Physics)
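For readers unfamiliar with the κ-entropy, the classical (diagonal) case reduces to the Kaniadakis form S_κ = −Σᵢ pᵢ ln_κ(pᵢ), with ln_κ(x) = (x^κ − x^{−κ})/(2κ), which recovers the Shannon (and, on eigenvalues, von Neumann) entropy as κ → 0. A minimal numerical sketch of the classical-probability case only; the quantum version in the paper acts on density matrices:

```python
import numpy as np

def kappa_log(x, kappa):
    # Kaniadakis kappa-logarithm: (x^kappa - x^(-kappa)) / (2*kappa);
    # tends to ln(x) as kappa -> 0.
    x = np.asarray(x, dtype=float)
    if kappa == 0:
        return np.log(x)
    return (x**kappa - x**(-kappa)) / (2 * kappa)

def kappa_entropy(p, kappa):
    # Classical kappa-entropy S_kappa = -sum_i p_i * ln_kappa(p_i), in nats
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * kappa_log(p, kappa)))

p = [0.5, 0.25, 0.25]
shannon = kappa_entropy(p, 0)       # Shannon limit: 1.5 * ln 2
s_small = kappa_entropy(p, 1e-6)    # approaches the Shannon value as kappa -> 0
```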
28 pages, 15480 KiB  
Article
Analysis and Synchronous Study of a Five-Dimensional Multistable Memristive Chaotic System with Bidirectional Offset Increments
by Ding Lina and Xuan Mengtian
Entropy 2025, 27(5), 481; https://doi.org/10.3390/e27050481 - 29 Apr 2025
Abstract
In order to further explore the complex dynamical behavior involved in super-multistability, a new five-dimensional memristive chaotic system was obtained by using a magnetically controlled memristor to construct a four-dimensional equation on the basis of a three-dimensional chaotic system, adding a five-dimensional equation and selecting parameter y as the control term. Firstly, the multistability of the system was analyzed by using a Lyapunov exponential diagram, a bifurcation diagram and a phase portrait; the experimental results show that the system has parameter-related periodic chaotic alternating characteristics, symmetric attractors and transient chaotic characteristics, and it also has the characteristics of homogeneous multistability, heterogeneous multistability and super-multistability, which depend on the initial memristive values. Secondly, two offset constants g and h were added to the linear state variables, which were used as controllers of the attractors in the z and w directions, respectively, and the influences of the bidirectional offset increments on the system were analyzed. The complexity of the system was also analyzed: the higher the complexity of the system, the larger the complexity values and the darker the colors of the spectrogram. The five-dimensional memristive chaotic system was simulated using Multisim to verify the feasibility of the new system. Finally, an adaptive synchronization controller was designed using the method of adaptive synchronization; then, synchronization of the drive system and the response system was realized by changing the positive gain constant k, which achieved encryption and decryption of sinusoidal signals based on chaotic synchronization. Full article
(This article belongs to the Section Complexity)
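The Lyapunov-exponent analysis mentioned above is standard for chaotic systems. As a hedged illustration on the one-dimensional logistic map (a textbook stand-in — the paper's five-dimensional memristive system is not reproduced here), the largest exponent can be estimated by averaging log|f′(x)| along an orbit; a positive value signals chaos:

```python
import math

def logistic_lyapunov(r, x0=0.1, n_transient=1_000, n_iter=100_000):
    # Largest Lyapunov exponent of x -> r*x*(1-x): average log|f'(x)|
    # along the orbit after discarding a transient.
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))  # log|f'(x)| at the current point
        x = r * x * (1 - x)
    return total / n_iter

lam = logistic_lyapunov(3.9)  # positive exponent: chaotic regime
```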
40 pages, 794 KiB  
Article
An Automated Decision Support System for Portfolio Allocation Based on Mutual Information and Financial Criteria
by Massimiliano Kaucic, Renato Pelessoni and Filippo Piccotto
Entropy 2025, 27(5), 480; https://doi.org/10.3390/e27050480 - 29 Apr 2025
Abstract
This paper introduces a two-phase decision support system based on information theory and financial practices to assist investors in solving cardinality-constrained portfolio optimization problems. Firstly, the approach employs a stock-picking procedure based on an interactive multi-criteria decision-making method (the so-called TODIM method). More precisely, the best-performing assets from the investable universe are identified using three financial criteria. The first criterion is based on mutual information, and it is employed to capture the microstructure of the stock market. The second one is the momentum, and the third is the upside-to-downside beta ratio. To calculate the preference weights used in the chosen multi-criteria decision-making procedure, two methods are compared, namely equal and entropy weighting. In the second stage, this work considers a portfolio optimization model where the objective function is a modified version of the Sharpe ratio, consistent with the choices of a rational agent even when faced with negative risk premiums. Additionally, the portfolio design incorporates a set of bound, budget, and cardinality constraints, together with a set of risk budgeting restrictions. To solve the resulting non-smooth programming problem with non-convex constraints, this paper proposes a variant of the distance-based parameter adaptation for success-history-based differential evolution with double crossover (DISH-XX) algorithm equipped with a hybrid constraint-handling approach. Numerical experiments on the US and European stock markets over the past ten years are conducted, and the results show that the flexibility of the proposed portfolio model allows the better control of losses, particularly during market downturns, thereby providing superior or at least comparable ex post performance with respect to several benchmark investment strategies. Full article
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)
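As context for the mutual-information criterion, a common plug-in (histogram) estimator of I(X; Y) between two return series can be sketched as follows; the synthetic asset series and the bin count are illustrative assumptions, not the paper's data or estimator:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # Plug-in (histogram) estimate of I(X; Y) in nats
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(42)
a = rng.normal(size=20_000)                    # synthetic "returns" of asset A
b = 0.8 * a + 0.6 * rng.normal(size=20_000)    # correlated asset B
c = rng.normal(size=20_000)                    # independent asset C
```

Dependent pairs score well above independent ones, which is the basis for using mutual information to capture co-movement beyond linear correlation.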
25 pages, 36928 KiB  
Article
Exploring Entanglement Spectra and Phase Diagrams in Multi-Electron Quantum Dot Chains
by Guanjie He and Xin Wang
Entropy 2025, 27(5), 479; https://doi.org/10.3390/e27050479 - 29 Apr 2025
Abstract
We investigate the entanglement properties in semiconductor quantum dot systems modeled by the extended Hubbard model, focusing on the impacts of potential energy variations and electron interactions within a four-site quantum dot spin chain. Our study explores local and pairwise entanglement across configurations with electron counts N=4 and N=6, under different potential energy settings. By adjusting the potential energy in specific dots and examining the entanglement across various interaction regimes, we identify significant variations in the ground states of quantum dots. We extend this analysis to larger systems with L=6 and L=8, comparing electron counts N=L and N=L+2, revealing sharper entanglement transitions and reduced finite-size effects as the system size increases. Our results show that local potential shifts and the Coulomb interaction strength lead to notable redistributions of the electron configurations in the quantum dot spin chain, significantly affecting the entanglement properties. These changes are depicted in phase diagrams that highlight the entanglement's dependence on the interaction strengths and potential energy adjustments, illustrating complex shifts in entanglement dynamics triggered by interdot interactions. Full article
(This article belongs to the Section Quantum Information)
17 pages, 722 KiB  
Article
Thermal Fisher Information for a Rotating BTZ Black Hole
by Everett A. Patterson and Robert B. Mann
Entropy 2025, 27(5), 478; https://doi.org/10.3390/e27050478 - 28 Apr 2025
Abstract
Relativistic quantum metrology provides a framework within which we can quantify the quality of measurement and estimation procedures while accounting for both quantum and relativistic effects. The chief measure for describing such procedures is the Fisher information, which quantifies how sensitive a given estimate is to variations in some underlying parameter. Recently, the Fisher information has been used to quantify the spacetime information accessible to two-level quantum particle detectors. We have previously shown that such a system is capable of discerning black hole mass for static black holes in 2 + 1 dimensions. Here, we extend these results to the astrophysically interesting case of rotating black holes and show that the Fisher information is also sensitive to the rotation of a black hole. Full article
(This article belongs to the Special Issue Applications of Fisher Information in Sciences II)
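For orientation, the classical Fisher information F(θ) = E[(∂_θ ln p(x; θ))²] can be estimated by Monte Carlo. This toy sketch uses a Gaussian location family, where F = 1/σ² exactly — an illustrative assumption, not the detector-response model analyzed in the paper:

```python
import numpy as np

def fisher_information(logpdf, theta, samples, eps=1e-5):
    # Monte Carlo estimate of F(theta) = E[(d/dtheta log p(x; theta))^2],
    # with the score obtained by a central finite difference.
    score = (logpdf(samples, theta + eps) - logpdf(samples, theta - eps)) / (2 * eps)
    return float(np.mean(score**2))

sigma = 2.0
rng = np.random.default_rng(0)
xs = rng.normal(1.0, sigma, size=100_000)

def log_gauss(x, theta):
    # log-density of N(theta, sigma^2) up to a theta-independent constant
    return -0.5 * ((x - theta) / sigma) ** 2

F = fisher_information(log_gauss, 1.0, xs)  # expect ~ 1/sigma^2 = 0.25
```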
22 pages, 678 KiB  
Article
Classical Versus Bayesian Error-Controlled Sampling Under Lognormal Distributions with Type II Censoring
by Huasen Zhou and Wenhao Gui
Entropy 2025, 27(5), 477; https://doi.org/10.3390/e27050477 - 28 Apr 2025
Abstract
This paper presents a comparative study of classical and Bayesian risks in the design of optimal failure-censored sampling plans for lognormal lifetime models. The analysis focuses on how variations in prior distributions, specifically the beta distribution for defect rates, influence the producer’s and consumer’s risks, along with the optimal sample size. We explore the sensitivity of the sampling plan’s risks to changes in the prior mean and variance, offering insight into the impacts of uncertainty in prior knowledge on sampling efficiency. Classical and Bayesian approaches are evaluated, highlighting the trade-offs between minimizing sample size and controlling risks for both the producer and the consumer. The results demonstrate that Bayesian methods generally provide more robust designs under uncertain prior information, while classical methods exhibit greater sensitivity to parameter changes. A computational procedure for determining the optimal sampling plans is provided, and the outcomes are validated through simulations, showcasing the practical implications for quality control in reliability testing and industrial applications. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
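The producer's and consumer's risks rest on acceptance probabilities under Type II (failure) censoring: put n items on test, stop at the r-th failure, and accept or reject from the observed failures. A toy Monte Carlo sketch — the plan (n, r), the test statistic, and the threshold below are illustrative assumptions, not the paper's optimal design:

```python
import numpy as np

def acceptance_prob(n, r, mu, sigma, threshold, n_sim=20_000, seed=1):
    # Monte Carlo acceptance probability of a toy failure-censored plan:
    # test n lognormal lifetimes, observe only the first r failures
    # (Type II censoring), and accept the lot if the mean log-lifetime
    # of those r failures exceeds `threshold`.
    rng = np.random.default_rng(seed)
    lifetimes = rng.lognormal(mu, sigma, size=(n_sim, n))
    first_r = np.sort(lifetimes, axis=1)[:, :r]
    return float(np.mean(np.log(first_r).mean(axis=1) > threshold))

# Producer's risk ~ 1 - acceptance probability at a "good" quality level;
# consumer's risk ~ acceptance probability at a "bad" one.
p_good = acceptance_prob(30, 10, mu=1.0, sigma=1.0, threshold=-0.5)
p_bad = acceptance_prob(30, 10, mu=0.0, sigma=1.0, threshold=-0.5)
```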
16 pages, 304 KiB  
Article
LDPC Codes on Balanced Incomplete Block Designs: Construction, Girth, and Cycle Structure Analysis
by Hengzhou Xu, Xiaodong Zhang, Mengmeng Xu, Haipeng Yu and Hai Zhu
Entropy 2025, 27(5), 476; https://doi.org/10.3390/e27050476 - 27 Apr 2025
Abstract
In this paper, we investigate the cycle structure inherent in the Tanner graphs of low-density parity-check (LDPC) codes constructed from balanced incomplete block designs (BIBDs). We begin by delineating the incidence structure of BIBDs and propose a methodology for constructing LDPC codes based on these designs. By analyzing the incidence relations between points and blocks within a BIBD, we prove that the resulting LDPC codes possess a girth of 6. Subsequently, we provide a detailed analysis of the cycle structure of the constructed LDPC codes and introduce a systematic approach for enumerating their short cycles. Using this method, we determine the exact numbers of cycles of lengths 6 and 8. Simulation results demonstrate that the constructed LDPC codes exhibit excellent performance. Full article
(This article belongs to the Special Issue Advances in Modern Channel Coding)
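The girth-6 property can be checked directly on small designs. The sketch below computes the girth of a Tanner graph by breadth-first search and verifies it on the incidence matrix of the Fano plane, a 2-(7,3,1) design (an illustrative BIBD chosen here; λ = 1 rules out 4-cycles, so girth 6 is expected):

```python
from collections import deque

def tanner_girth(H):
    # Girth of the Tanner graph of a 0/1 parity-check matrix H (list of rows).
    # BFS from every node: a non-tree edge (u, w) closes a cycle of length at
    # most dist[u] + dist[w] + 1; the minimum over all BFS roots is exact.
    m, n = len(H), len(H[0])
    adj = {("c", i): [] for i in range(m)}
    adj.update({("v", j): [] for j in range(n)})
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj[("c", i)].append(("v", j))
                adj[("v", j)].append(("c", i))
    best = float("inf")
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w and parent[w] != u:  # skip tree edges
                    best = min(best, dist[u] + dist[w] + 1)
    return best

# Incidence matrix of the Fano plane (blocks = lines of the design)
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]
H_fano = [[1 if j in b else 0 for j in range(7)] for b in blocks]
```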
15 pages, 1473 KiB  
Article
HECM-Plus: Hyper-Entropy Enhanced Cloud Models for Uncertainty-Aware Design Evaluation in Multi-Expert Decision Systems
by Jiaozi Pu and Zongxin Liu
Entropy 2025, 27(5), 475; https://doi.org/10.3390/e27050475 - 27 Apr 2025
Abstract
Uncertainty quantification in cloud models requires simultaneous characterization of fuzziness (via Entropy, En) and randomness (via Hyper-entropy, He), yet existing similarity measures often neglect the stochastic dispersion governed by He. To address this gap, we propose HECM-Plus, an algorithm integrating Expectation (Ex), En, and He to holistically model geometric and probabilistic uncertainties in cloud models. By deriving He-adjusted standard deviations through reverse cloud transformations, HECM-Plus reformulates the Hellinger distance to resolve conflicts in multi-expert evaluations where subjective ambiguity and stochastic randomness coexist. Experimental validation demonstrates three key advances: (1) Fuzziness–randomness discrimination: HECM-Plus achieves balanced conceptual differentiation (δC1/C4 = 1.76, δC2 = 1.66, δC3 = 1.58) with linear complexity, outperforming PDCM and HCCM by 10.3% and 17.2% in differentiation scores while resolving He-induced biases in HECM/ECM (C1–C4 similarity: 0.94 vs. 0.99), critical for stochastic dispersion modeling. (2) Robustness in time-series classification: it reduces the mean error by 6.8% (0.190 vs. 0.204, p < 0.05) with a lower standard deviation (0.035 vs. 0.047) on UCI datasets, validating noise immunity. (3) Design evaluation application: by reclassifying controversial cases (e.g., a design reclassified from "good" (80.3/100 average) to "moderate" via the HECM-Plus cloud model), it resolves multi-expert disagreements in scoring systems. The main contribution of this work is the proposal of HECM-Plus, which resolves the limitation of HECM in neglecting He, thereby further enhancing the precision of normal cloud similarity measurements. The algorithm provides a practical tool for uncertainty-aware decision-making in multi-expert systems, particularly in multi-criteria design evaluation under conflicting standards. Future work will extend to dynamic expert weight adaptation and higher-order cloud interactions. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)
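Since HECM-Plus reformulates the Hellinger distance for normal cloud models, the closed-form Hellinger distance between two univariate normals may help fix ideas. In the paper's setting, the He-adjusted standard deviations would enter through s1 and s2; only the standard base formula is reproduced here, not the adjustment itself:

```python
import math

def hellinger_normal(mu1, s1, mu2, s2):
    # Closed-form Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2):
    # H^2 = 1 - sqrt(2*s1*s2 / (s1^2 + s2^2))
    #         * exp(-(mu1 - mu2)^2 / (4 * (s1^2 + s2^2)))
    v = s1 * s1 + s2 * s2
    bc = math.sqrt(2 * s1 * s2 / v) * math.exp(-((mu1 - mu2) ** 2) / (4 * v))
    return math.sqrt(max(0.0, 1.0 - bc))  # guard against rounding below 0

d_same = hellinger_normal(0.0, 1.0, 0.0, 1.0)  # identical clouds -> 0
d_far = hellinger_normal(0.0, 1.0, 3.0, 1.0)   # well-separated clouds
```

The distance is symmetric and bounded in [0, 1], which is what makes it convenient for scoring agreement between expert-derived cloud models.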
16 pages, 1467 KiB  
Article
Quantum Phase Transition in the Coupled-Top Model: From Z2 to U(1) Symmetry Breaking
by Wen-Jian Mao, Tian Ye, Liwei Duan and Yan-Zhi Wang
Entropy 2025, 27(5), 474; https://doi.org/10.3390/e27050474 - 27 Apr 2025
Abstract
We investigate the coupled-top model, which describes two large spins interacting along both x and y directions. By tuning coupling strengths along distinct directions, the system exhibits different symmetries, ranging from a discrete Z2 to a continuous U(1) symmetry. The anisotropic coupled-top model displays a discrete Z2 symmetry, and the symmetry breaking induced by strong coupling drives a quantum phase transition from a disordered paramagnetic phase to an ordered ferromagnetic or antiferromagnetic phase. In particular, the isotropic coupled-top model possesses a continuous U(1) symmetry, whose breaking gives rise to the Goldstone mode. The phase boundary can be well captured by the mean-field approach, characterized by the distinct behaviors of the order parameter. Higher-order quantum effects beyond the mean-field contribution can be achieved by mapping the large spins to bosonic operators via the Holstein–Primakoff transformation. For the anisotropic coupled-top model with Z2 symmetry, the energy gap closes, and both quantum fluctuations and entanglement entropy diverge near the critical point, signaling the onset of second-order quantum phase transitions. Strikingly, when U(1) symmetry is broken, the energy gap vanishes beyond the critical point, yielding a novel critical exponent of 1, rather than 1/2 for Z2 symmetry breaking. The rich symmetry structure of the coupled-top model underpins its role as a paradigmatic model for studying quantum phase transitions and exploring associated physical phenomena. Full article
(This article belongs to the Special Issue Entanglement Entropy and Quantum Phase Transition)
24 pages, 3048 KiB  
Article
Automatic Controversy Detection Based on Heterogeneous Signed Attributed Network and Deep Dual-Layer Self-Supervised Community Analysis
by Ying Li, Xiao Zhang, Yu Liang and Qianqian Li
Entropy 2025, 27(5), 473; https://doi.org/10.3390/e27050473 - 27 Apr 2025
Abstract
In this study, we propose a computational approach that applies text mining and deep learning to conduct controversy detection on social media platforms. Unlike previous research, our method integrates multidimensional and heterogeneous information from social media into a heterogeneous signed attributed network, encompassing various users' attributes, semantic information, and structural heterogeneity. We introduce a deep dual-layer self-supervised algorithm for community detection and analyze controversy within this network. A novel controversy metric is devised by considering three dimensions of controversy: community distinctions, betweenness centrality, and user representations. A comparison between our method and other classical controversy measures such as Random Walk, Biased Random Walk (BRW), BCC, EC, GMCK, MBLB, and community-based methods reveals that our model consistently produces more stable and accurate controversy scores. Additionally, we calculated the level of controversy and computed p-values for the detected communities on our crawled Weibo dataset, comprising #Microblog (3792), #Comment (45,741), #Retweet (36,126), and #User (61,327). Overall, our model provides a comprehensive and nuanced picture of controversy on social media platforms. To facilitate its use, we have developed a user-friendly web server. Full article
(This article belongs to the Section Complexity)
37 pages, 996 KiB  
Article
Kolmogorov Capacity with Overlap
by Anshuka Rangi and Massimo Franceschetti
Entropy 2025, 27(5), 472; https://doi.org/10.3390/e27050472 - 27 Apr 2025
Abstract
The notion of δ-mutual information between non-stochastic uncertain variables is introduced as a generalization of Nair’s non-stochastic information functional. Several properties of this new quantity are illustrated and used in a communication setting to show that the largest δ-mutual information between received and transmitted codewords over ϵ-noise channels equals the (ϵ,δ)-capacity. This notion of capacity generalizes the Kolmogorov ϵ-capacity to packing sets of overlap at most δ and is a variation of a previous definition proposed by one of the authors. Results are then extended to more general noise models, including non-stochastic, memoryless, and stationary channels. The presented theory admits the possibility of decoding errors, as in classical information theory, while retaining the worst-case, non-stochastic character of Kolmogorov’s approach. Full article
(This article belongs to the Collection Feature Papers in Information Theory)
26 pages, 2501 KiB  
Article
ECAE: An Efficient Certificateless Aggregate Signature Scheme Based on Elliptic Curves for NDN-IoT Environments
by Cong Wang, Haoyu Wu, Yulong Gan, Rui Zhang and Maode Ma
Entropy 2025, 27(5), 471; https://doi.org/10.3390/e27050471 - 26 Apr 2025
Abstract
As a data-centric next-generation network architecture, Named Data Networking (NDN) exhibits inherent compatibility with the distributed nature of the Internet of Things (IoT) through its name-based routing mechanism. However, existing signature schemes for NDN-IoT face dual challenges: resource-constrained IoT terminals struggle with certificate management and computationally intensive bilinear pairings under traditional Public Key Infrastructure (PKI), while NDN routers require low-latency batch verification for high-speed data forwarding. To address these issues, this study proposes ECAE, an efficient certificateless aggregate signature scheme based on elliptic curve cryptography (ECC). ECAE introduces a partial private key distribution mechanism in key generation, enabling identity authentication of terminal devices by a Key Generation Center (KGC). It leverages ECC and universal hash functions to construct an aggregate verification model that eliminates bilinear pairing operations and reduces communication overhead. Security analysis formally proves that ECAE resists forgery, replay, and man-in-the-middle attacks under the random oracle model. Experimental results demonstrate substantial efficiency gains: total computation overhead is reduced by up to 46.18%, and communication overhead by 55.56%, compared to state-of-the-art schemes. This lightweight yet robust framework offers a trusted and scalable verification solution for NDN-IoT environments. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
14 pages, 1307 KiB  
Article
Chandrasekhar’s Conditions for the Equilibrium and Stability of Stars in a Universal Three-Parameter Non-Maxwell Distribution
by Wei Hu and Jiulin Du
Entropy 2025, 27(5), 470; https://doi.org/10.3390/e27050470 - 26 Apr 2025
Abstract
The idea of Chandrasekhar’s conditions for the equilibrium and stability of stars is revisited with a new universal three-parameter non-Maxwell distribution. We derive the maximum radiation pressures in the non-Maxwell distribution for a gas star and a centrally condensed star, and thus, we generalize Chandrasekhar’s conditions in a Maxwellian sense. By numerical analyses, we find that the non-Maxwellian distribution usually reduces the maximum radiation pressures in both gas stars and centrally condensed stars as compared with cases where the gas is assumed to have a Maxwellian distribution. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
18 pages, 878 KiB  
Article
The Knudsen Layer in Modeling the Heat Transfer at Nanoscale: Bulk and Wall Contributions to the Local Heat Flux
by Carmelo Filippo Munafò, Martina Nunziata and Antonio Sellitto
Entropy 2025, 27(5), 469; https://doi.org/10.3390/e27050469 - 26 Apr 2025
Abstract
The influence of the heat carriers' boundary scattering on the heat flux is mainly felt in the zone near the system's boundary, the characteristic dimension of which is of the order of the mean free path of the heat carriers. Starting from this observation, in this paper we introduce the concept of the Knudsen layer in heat transport at the nanoscale and regard the local heat flux as the final resultant of two different contributions: the bulk heat flux and the wall heat flux. In the framework of phonon hydrodynamics, we therefore derive a theoretical model, in agreement with the second law of thermodynamics, that accounts for those two contributions. In steady states, we then predict both how the local heat flux behaves and how the thermal conductivity depends on the characteristic dimension of the system. This analysis is performed both in the case of a nanolayer and in the case of a nanowire. Full article
(This article belongs to the Section Thermodynamics)
27 pages, 1571 KiB  
Article
Gaussian Versus Mean-Field Models: Contradictory Predictions for the Casimir Force Under Dirichlet–Neumann Boundary Conditions
by Daniel Dantchev, Vassil Vassilev and Joseph Rudnick
Entropy 2025, 27(5), 468; https://doi.org/10.3390/e27050468 - 25 Apr 2025
Abstract
The mean-field model (MFM) is the workhorse of statistical mechanics: one normally accepts that it yields results which, despite differing numerically from the correct ones, are not "very wrong", in that they resemble the actual behavior of the system as eventually obtained by more advanced treatments. This, for example, turns out to be the case for the Casimir force under, say, Dirichlet–Dirichlet, (+,+) and (+,−) boundary conditions (BC), for which, in line with the general expectations, the MFM force is attractive for similar BC and repulsive for dissimilar BC, with the principally correct position of the maximum strength of the force below or above the critical point Tc. It turns out, however, that this is not the case with Dirichlet–Neumann (DN) BC. In this case, the mean-field approach leads to an attractive Casimir force. This contradiction with the "boundary condition rule" is cured in the case of the Gaussian model under DN BC. Our results, which are mathematically exact, demonstrate that the Casimir force within the MFM is attractive as a function of temperature T and external magnetic field h, while for the Gaussian model, it is repulsive for h=0 and can be, surprisingly, both repulsive and attractive for h≠0. The treatment of the MFM is based on the exact solution of one non-homogeneous, nonlinear differential equation of second order. The Gaussian model is analyzed in terms of both its continuum and lattice realization. The obtained outcome teaches us that mean-field results should be accepted with caution in the case of fluctuation-induced forces and ought to be checked against a more precise treatment of fluctuations within the envisaged system. Full article
(This article belongs to the Section Statistical Physics)

12 pages, 625 KiB  
Article
Multiscale Simplicial Complex Entropy Analysis of Heartbeat Dynamics
by Alvaro Zabaleta-Ortega, Carlos Carrizales-Velazquez, Bibiana Obregón-Quintana and Lev Guzmán-Vargas
Entropy 2025, 27(5), 467; https://doi.org/10.3390/e27050467 - 25 Apr 2025
Abstract
The present study proposes a multiscale analysis of the simplicial complex approximate entropy (MS-SCAE) applied to cardiac interbeat series. The MS-SCAE method is based on quantifying the changes in the simplicial complex associated with the time series when a coarse-grained procedure is performed. Our findings are consistent with those of previously reported studies, which indicate that the complexity of healthy interbeat dynamics remains relatively stable over different scales. However, these dynamics undergo changes in the presence of certain cardiac pathologies, such as congestive heart failure and atrial fibrillation. The method we present here allows for effective differentiation between different dynamics and is robust in its ability to characterize both real and simulated sequences. This makes it a suitable candidate for application to a variety of complex signals. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications: Fourth Edition)
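Multiscale entropy methods share the coarse-graining step the abstract refers to: at scale factor τ, the series is replaced by the averages of consecutive non-overlapping windows of length τ, and the entropy measure is then recomputed at each scale. The simplicial-complex entropy itself is specific to the paper, but the coarse-graining can be sketched as follows (the synthetic interbeat series is illustrative):

```python
import numpy as np

def coarse_grain(x, tau):
    """Replace the series by averages over consecutive non-overlapping windows of length tau."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

# Synthetic interbeat (RR) series, mean 0.8 s -- a stand-in for a real recording.
rr = np.random.default_rng(0).normal(0.8, 0.05, 1000)
scales = [coarse_grain(rr, tau) for tau in range(1, 6)]  # scale factors 1..5
```

Each element of `scales` would then be fed to the (paper-specific) simplicial complex entropy estimator.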

30 pages, 524 KiB  
Article
Two Types of Temporal Symmetry in the Laws of Nature
by A. Y. Klimenko
Entropy 2025, 27(5), 466; https://doi.org/10.3390/e27050466 - 25 Apr 2025
Abstract
This work explores the implications of assuming time symmetry and applying bridge-type, time-symmetric temporal boundary conditions to deterministic laws of nature with random components. The analysis, drawing on the works of Kolmogorov and Anderson, leads to two forms of governing equations, referred to here as symmetric and antisymmetric. These equations account for the emergence of characteristics associated with conventional thermodynamics, the arrow of time, and a form of antecedent causality. The directional properties of time arise from the mathematical structure of Markov bridges in proximity of the corresponding temporal boundary conditions, without requiring any postulates that impose a preferred direction of time. Full article
(This article belongs to the Special Issue Quantum Mechanics and the Challenge of Time)
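The bridge construction at the heart of the argument — a stochastic process conditioned on both its initial and final states — has a familiar elementary instance. As a minimal illustration (not the paper's model), a Brownian bridge on [0, T] can be obtained from an ordinary random walk by subtracting the linear interpolation of its endpoint, which pins the path at both temporal boundaries:

```python
import numpy as np

def brownian_bridge(n_steps, T=1.0, rng=None):
    """Sample a Brownian bridge pinned to 0 at both t = 0 and t = T."""
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    # Ordinary Brownian path starting at 0...
    w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
    # ...conditioned on W(T) = 0 by subtracting the linear trend to the endpoint.
    return t, w - (t / T) * w[-1]

t, b = brownian_bridge(1000)
```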

18 pages, 898 KiB  
Article
The Entropy Analysis Method for Assessing the Efficiency of Workload Distribution Among Medical Institution Personnel
by Oksana Mulesa and Ivanna Dronyuk
Entropy 2025, 27(5), 465; https://doi.org/10.3390/e27050465 - 25 Apr 2025
Abstract
The aim of this study is to develop a convenient and effective entropy analysis method for assessing the efficiency of workload distribution among medical institution personnel. This research is based on a model for evaluating employee workload in conditional time units—credits—taking into account time-and-motion studies and the volume of medical services provided or tasks performed over a given period. The model and method developed by the authors account for potential losses of working time and for coefficients that determine the percentage of effective working time. The method is based on calculating and analyzing the normative and actual workloads of employees. The study introduces indicators such as relative workload, workload distribution entropy, and the entropy of free and excessively worked time credits. Experimental verification of the method on the activities of a dental clinic demonstrated that it is both convenient and effective for analyzing the performance of individual employees as well as groups of employees. The results are presented in a convenient and intuitively understandable form. This method can therefore serve as an effective tool for identifying internal reserves within an institution and for making managerial decisions about its further operation. Full article
(This article belongs to the Section Multidisciplinary Applications)
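The indicators named in the abstract can be given a concrete form. The sketch below assumes relative workload is the ratio of actual to normative time credits, and workload distribution entropy is the normalized Shannon entropy of the employees' workload shares (1.0 meaning a perfectly even distribution); these exact definitions are our illustrative assumptions, not the paper's formulas:

```python
import math

def relative_workloads(actual, normative):
    """Ratio of actual to normative workload, in time credits, per employee."""
    return [a / n for a, n in zip(actual, normative)]

def distribution_entropy(workloads):
    """Normalized Shannon entropy of workload shares: 1.0 = perfectly even distribution."""
    total = sum(workloads)
    shares = [w / total for w in workloads]
    h = -sum(p * math.log(p) for p in shares if p > 0)
    return h / math.log(len(shares))

rel = relative_workloads([120, 95, 160, 80], [100, 100, 100, 100])
evenness = distribution_entropy([120, 95, 160, 80])
```

A value of `evenness` well below 1.0 would flag an uneven load, pointing at the "internal reserves" the abstract mentions.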

27 pages, 2723 KiB  
Review
Phase Stability and Transitions in High-Entropy Alloys: Insights from Lattice Gas Models, Computational Simulations, and Experimental Validation
by Łukasz Łach
Entropy 2025, 27(5), 464; https://doi.org/10.3390/e27050464 - 25 Apr 2025
Abstract
High-entropy alloys (HEAs) are a novel class of metallic materials composed of five or more principal elements in near-equimolar ratios. This unconventional composition leads to high configurational entropy, which promotes the formation of solid solution phases with enhanced mechanical properties, thermal stability, and corrosion resistance. Phase stability plays a critical role in determining their structural integrity and performance. This study provides a focused review of HEA phase transitions, emphasizing the role of lattice gas models in predicting phase behavior. By integrating statistical mechanics with thermodynamic principles, lattice gas models enable accurate modeling of atomic interactions, phase segregation, and order-disorder transformations. The combination of computational simulations (e.g., Monte Carlo, molecular dynamics) with experimental validation (e.g., XRD, TEM, APT) improves predictive accuracy. Furthermore, advances in data-driven methodologies facilitate high-throughput exploration of HEA compositions, accelerating the discovery of alloys with optimized phase stability and superior mechanical performance. Beyond structural applications, HEAs demonstrate potential in functional domains, such as catalysis, hydrogen storage, and energy technologies. This review brings together theoretical modeling—particularly lattice gas approaches—and experimental validation to form a unified understanding of phase behavior in high-entropy alloys. By highlighting the mechanisms behind phase transitions and their implications for material performance, this work aims to support the design and optimization of HEAs for real-world applications in aerospace, energy systems, and structural materials engineering. Full article
(This article belongs to the Special Issue Statistical Mechanics of Lattice Gases)
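The Monte Carlo side of the toolkit the review surveys can be illustrated with a toy two-species lattice gas: composition-conserving swap (Kawasaki) moves accepted with the Metropolis rule. This is a minimal sketch of the sampling technique, not an HEA simulation; the interaction model and parameters are illustrative:

```python
import numpy as np

def site_energy(lat, x, y, J):
    """Interaction energy of site (x, y) with its four periodic neighbors."""
    L = lat.shape[0]
    s = lat[x, y]
    nbrs = [lat[(x + 1) % L, y], lat[(x - 1) % L, y],
            lat[x, (y + 1) % L], lat[x, (y - 1) % L]]
    return sum(J if s == n else -J for n in nbrs)  # like pairs cost +J, so ordering is favored

def metropolis_sweep(lat, J, T, rng):
    """One sweep of composition-conserving atom swaps with Metropolis acceptance (k_B = 1)."""
    L = lat.shape[0]
    for _ in range(L * L):
        x1, y1, x2, y2 = rng.integers(0, L, 4)
        if (x1 - x2) % L in (0, 1, L - 1) and (y1 - y2) % L in (0, 1, L - 1):
            continue  # skip identical/adjacent pairs to keep the energy bookkeeping simple
        if lat[x1, y1] == lat[x2, y2]:
            continue
        e_old = site_energy(lat, x1, y1, J) + site_energy(lat, x2, y2, J)
        lat[x1, y1], lat[x2, y2] = lat[x2, y2], lat[x1, y1]
        e_new = site_energy(lat, x1, y1, J) + site_energy(lat, x2, y2, J)
        dE = e_new - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            lat[x1, y1], lat[x2, y2] = lat[x2, y2], lat[x1, y1]  # reject: undo the swap

rng = np.random.default_rng(1)
lat = rng.integers(0, 2, (16, 16))   # random 50/50 binary "alloy"
n_B = int(lat.sum())                 # composition is conserved by swap moves
for _ in range(20):
    metropolis_sweep(lat, J=1.0, T=0.5, rng=rng)
```

Running such sweeps at decreasing T and monitoring order parameters is the basic order-disorder experiment the review describes, scaled up to multicomponent interactions for HEAs.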

11 pages, 1449 KiB  
Article
A Learning Probabilistic Boolean Network Model of a Manufacturing Process with Applications in System Asset Maintenance
by Pedro Juan Rivera Torres, Chen Chen, Sara Rodríguez González and Orestes Llanes Santiago
Entropy 2025, 27(5), 463; https://doi.org/10.3390/e27050463 - 25 Apr 2025
Abstract
Probabilistic Boolean Networks (PBN) can model the dynamics of complex biological systems, as well as other non-biological systems like manufacturing systems and smart grids. In this proof-of-concept paper, we propose a PBN architecture with a learning process that significantly enhances fault and failure prediction in manufacturing systems. This concept was tested using a PBN model of an ultrasound welding process and its machines. Through various experiments, the model successfully learned to maintain a normal operating state. Leveraging the complex properties of PBNs, we utilize them as an adaptive learning tool with positive feedback, demonstrating that these networks may have broader applications than previously recognized. This multi-layered PBN architecture offers substantial improvements in fault detection performance within a positive feedback network structure that shows greater noise tolerance than other methods. Full article
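The core PBN mechanism is simple to state: each node carries several candidate Boolean predictor functions with selection probabilities, and at every step one predictor per node is drawn and all states update synchronously. A minimal sketch on a toy three-node network (the network and probabilities are illustrative, not the paper's welding-process model):

```python
import random

# Each node maps to a list of (predictor_function, selection_probability) pairs.
# Toy 3-node network; predictors take the full state tuple. Illustrative only.
network = {
    0: [(lambda s: s[1] and s[2], 0.7), (lambda s: s[1], 0.3)],
    1: [(lambda s: not s[0], 1.0)],
    2: [(lambda s: s[0] or s[1], 0.6), (lambda s: s[2], 0.4)],
}

def pbn_step(state, network, rng=random):
    """Synchronous PBN update: draw one predictor per node, apply all to the current state."""
    new_state = []
    for node in sorted(network):
        funcs, probs = zip(*network[node])
        f = rng.choices(funcs, weights=probs)[0]
        new_state.append(bool(f(state)))
    return tuple(new_state)

state = (True, False, True)
trajectory = [state]
for _ in range(10):
    state = pbn_step(state, network)
    trajectory.append(state)
```

Long trajectories of this kind are what one inspects for attractors — the "normal operating state" the paper's learning process reinforces.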

13 pages, 6262 KiB  
Article
Spatial, Temporal, and Dynamic Behavior of Different Entropies in Seismic Activity: The February 2023 Earthquakes in Türkiye and Syria
by Denisse Pastén, Eugenio E. Vogel, Gonzalo Saravia and Antonio Posadas
Entropy 2025, 27(5), 462; https://doi.org/10.3390/e27050462 - 25 Apr 2025
Abstract
Türkiye and Syria were hit by two powerful earthquakes on 6 February 2023. A magnitude 7.5 earthquake, soon followed by a second of magnitude 7.4, devastated the area. The present study compares three different entropies using data from 2017 to 2023 (55,823 events) in this region and is the first to apply Shannon entropy, Tsallis entropy, and mutability to the seismic activity there. A couple of years before these large earthquakes, both Shannon entropy and mutability show an overall decrease, potentially indicating upcoming large events; however, the detailed results on mutability offer an advantage, as discussed in this paper. A simultaneous overall increase in Tsallis entropy may likewise serve as a warning of the possible occurrence of large events in the area a couple of years later. All three measures are now slowly recovering toward their previous levels in the affected areas. Longer-term studies combining complementary entropies could help to determine regional seismic risk. Full article
(This article belongs to the Section Non-equilibrium Phenomena)
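The windowed-entropy idea can be sketched directly: bin the magnitudes in each sliding window of the catalog and compute the Shannon entropy of the bin frequencies. The window length, step, and bin width below are illustrative choices, not the paper's settings:

```python
import numpy as np

def shannon_entropy(mags, bin_width=0.1):
    """Shannon entropy (in nats) of a magnitude sample, binned at bin_width."""
    bins = np.floor(np.asarray(mags) / bin_width)
    _, counts = np.unique(bins, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def sliding_entropy(mags, window=500, step=100):
    """Entropy evaluated over overlapping windows along the catalog."""
    return [shannon_entropy(mags[i:i + window])
            for i in range(0, len(mags) - window + 1, step)]

# Synthetic Gutenberg-Richter-like catalog (exponential magnitudes above M 2.5).
rng = np.random.default_rng(0)
mags = 2.5 + rng.exponential(0.45, 5000)
series = sliding_entropy(mags)
```

A sustained drop in such a series is the kind of precursory signal the study reports for Shannon entropy and mutability.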

45 pages, 6953 KiB  
Review
A Semantic Generalization of Shannon’s Information Theory and Applications
by Chenguang Lu
Entropy 2025, 27(5), 461; https://doi.org/10.3390/e27050461 - 24 Apr 2025
Abstract
Does semantic communication require a semantic information theory parallel to Shannon’s information theory, or can Shannon’s work be generalized for semantic communication? This paper advocates for the latter and introduces a semantic generalization of Shannon’s information theory (G theory for short). The core idea is to replace the distortion constraint with the semantic constraint, achieved by utilizing a set of truth functions as a semantic channel. These truth functions enable the expressions of semantic distortion, semantic information measures, and semantic information loss. Notably, the maximum semantic information criterion is equivalent to the maximum likelihood criterion and similar to the Regularized Least Squares criterion. This paper shows G theory’s applications to daily and electronic semantic communication, machine learning, constraint control, Bayesian confirmation, portfolio theory, and information value. The improvements in machine learning methods involve multi-label learning and classification, maximum mutual information classification, mixture models, and solving latent variables. Furthermore, insights from statistical physics are discussed: Shannon information is similar to free energy; semantic information to free energy in local equilibrium systems; and information efficiency to the efficiency of free energy in performing work. The paper also proposes refining Friston’s minimum free energy principle into the maximum information efficiency principle. Lastly, it compares G theory with other semantic information theories and discusses its limitation in representing the semantics of complex data. Full article
(This article belongs to the Special Issue Semantic Information Theory)

16 pages, 553 KiB  
Article
Improving Phrase Segmentation in Symbolic Folk Music: A Hybrid Model with Local Context and Global Structure Awareness
by Xin Guan, Zhilin Dong, Hui Liu and Qiang Li
Entropy 2025, 27(5), 460; https://doi.org/10.3390/e27050460 - 24 Apr 2025
Abstract
The segmentation of symbolic music phrases is crucial for music information retrieval and structural analysis. However, existing BiLSTM-CRF methods mainly rely on local semantics, making it difficult to capture long-range dependencies, leading to inaccurate phrase boundary recognition across measures or themes. Traditional Transformer models use static embeddings, limiting their adaptability to different musical styles, structures, and melodic evolutions. Moreover, multi-head self-attention struggles with local context modeling, causing the loss of short-term information (e.g., pitch variation, melodic integrity, and rhythm stability), which may result in over-segmentation or merging errors. To address these issues, we propose a segmentation method integrating local context enhancement and global structure awareness. This method overcomes traditional models’ limitations in long-range dependency modeling, improves phrase boundary recognition, and adapts to diverse musical styles and melodies. Specifically, dynamic note embeddings enhance contextual awareness across segments, while an improved attention mechanism strengthens both global semantics and local context modeling. Combining these strategies ensures reasonable phrase boundaries and prevents unnecessary segmentation or merging. The experimental results show that our method outperforms the state-of-the-art methods for symbolic music phrase segmentation, with phrase boundaries better aligned to musical structures. Full article
(This article belongs to the Section Multidisciplinary Applications)

28 pages, 5922 KiB  
Article
Thoughtseeds: A Hierarchical and Agentic Framework for Investigating Thought Dynamics in Meditative States
by Prakash Chandra Kavi, Gorka Zamora-López, Daniel Ari Friedman and Gustavo Patow
Entropy 2025, 27(5), 459; https://doi.org/10.3390/e27050459 - 24 Apr 2025
Abstract
The Thoughtseeds Framework introduces a novel computational approach to modeling thought dynamics in meditative states, conceptualizing thoughtseeds as dynamic attentional agents that integrate information. This hierarchical model, structured as nested Markov blankets, comprises three interconnected levels: (i) knowledge domains as information repositories, (ii) the Thoughtseed Network where thoughtseeds compete, and (iii) meta-cognition regulating awareness. It simulates focused-attention Vipassana meditation via rule-based training informed by empirical neuroscience research on attentional stability and neural dynamics. Four states—breath_control, mind_wandering, meta_awareness, and redirect_breath—emerge organically from thoughtseed interactions, demonstrating self-organizing dynamics. Results indicate that experts sustain control dominance to reinforce focused attention, while novices exhibit frequent, prolonged mind_wandering episodes, reflecting beginner instability. Integrating Global Workspace Theory and the Intrinsic Ignition Framework, the model elucidates how thoughtseeds shape a unitary meditative experience through meta-awareness, balancing epistemic and pragmatic affordances via active inference. Synthesizing computational modeling with phenomenological insights, it provides an embodied perspective on cognitive state emergence and transitions, offering testable predictions about meditation skill development. The framework yields insights into attention regulation, meta-cognitive awareness, and meditation state emergence, establishing a versatile foundation for future research into diverse meditation practices (e.g., Open Monitoring, Non-Dual Awareness), cognitive development across the lifespan, and clinical applications in mindfulness-based interventions for attention disorders, advancing our understanding of the nature of mind and thought. Full article
(This article belongs to the Special Issue Integrated Information Theory and Consciousness II)

19 pages, 1050 KiB  
Article
Density Distribution of Strongly Quantum Degenerate Fermi Systems Simulated by Fictitious Identical Particle Thermodynamics
by Bo Yang, Hongsheng Yu, Shujuan Liu and Fengzheng Zhu
Entropy 2025, 27(5), 458; https://doi.org/10.3390/e27050458 - 24 Apr 2025
Abstract
The exchange antisymmetry of identical fermions leads to an exponential computational bottleneck in ab initio simulations, known as the fermion sign problem. The thermodynamic approach of fictitious identical particles (Y. Xiong and H. Xiong, J. Chem. Phys. 157, 094112 (2022)) provides an efficient and accurate means to simulate some fermionic systems by overcoming the fermion sign problem. This method has been significantly promoted and used by National Ignition Facilities for the ab initio simulations and is believed to have wide application prospects in warm dense quantum matter (T. Dornheim et al., arXiv: 2402.19113 (2023)). By utilizing the fictitious identical particles in the bosonic regime and constant energy extrapolation method (Y. Xiong and H. Xiong, Phys. Rev. E 107, 055308 (2023); T. Morresi and G. Garberoglio, Phys. Rev. B 111, 014521 (2025)), there are promising results in simulating the energy of strongly quantum degenerate fermionic systems. The previous works mainly concern the energy of Fermi systems or only consider situations of weak quantum degeneracy. In this study, we extend the concept of the constant energy extrapolation method and demonstrate the potential of the constant density extrapolation method to accurately simulate the density distribution of fermionic systems in strongly quantum degenerate conditions. Furthermore, based on the energy derived from the constant energy extrapolation method, we present simulation results for the entropy of fermions. Full article
(This article belongs to the Section Statistical Physics)

15 pages, 4569 KiB  
Article
Improved EEG-Based Emotion Classification via Stockwell Entropy and CSP Integration
by Yuan Lu and Jingying Chen
Entropy 2025, 27(5), 457; https://doi.org/10.3390/e27050457 - 24 Apr 2025
Abstract
Traditional entropy-based learning methods primarily extract the relevant entropy measures directly from EEG signals using sliding time windows. This study applies differential entropy to a time-frequency domain that is decomposed by Stockwell transform, proposing a novel EEG emotion recognition method combining Stockwell entropy and a common spatial pattern (CSP). The results demonstrate that Stockwell entropy effectively captures the entropy features of high-frequency signals, and CSP-transformed Stockwell entropy features show superior discriminative capability for different emotional states. The experimental results indicate that the proposed method achieves excellent classification performance in the Gamma band (30–46 Hz) for emotion recognition. The combined approach yields high classification accuracy for binary tasks (“positive vs. neutral”, “negative vs. neutral”, and “positive vs. negative”) and maintains satisfactory performance in the three-class task (“positive vs. negative vs. neutral”). Full article
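Differential entropy (DE) features of the kind used here have a convenient closed form when a band-limited segment is treated as Gaussian: h = ½ ln(2πeσ²). A sketch of the feature computation on a synthetic segment (the Stockwell decomposition and CSP stages are omitted; the signal is a stand-in for one band-filtered component):

```python
import numpy as np

def differential_entropy(x):
    """DE of a segment under a Gaussian assumption: 0.5 * ln(2*pi*e*var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(0)
segment = rng.normal(0.0, 2.0, 4096)   # stand-in for one Gamma-band component
de = float(differential_entropy(segment))
```

In the pipeline described by the abstract, one such DE value per time-frequency band would then be passed through the CSP projection before classification.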

13 pages, 540 KiB  
Article
Transmit Power Optimization for Simultaneous Wireless Information and Power Transfer-Assisted IoT Networks with Integrated Sensing and Communication and Nonlinear Energy Harvesting Model
by Chengrui Zhou, Xinru Wang, Yanfei Dou and Xiaomin Chen
Entropy 2025, 27(5), 456; https://doi.org/10.3390/e27050456 - 24 Apr 2025
Abstract
Integrated sensing and communication (ISAC) can improve the energy harvesting (EH) efficiency of simultaneous wireless information and power transfer (SWIPT)-assisted IoT networks by enabling precise energy harvesting. However, the transmit power of the hybrid system increases because sensing signals must be transmitted in addition to the communication data. This paper tackles this issue by formulating an optimization problem that minimizes the transmit power of the base station (BS) under a nonlinear EH model, considering the coexistence of power-splitting users (PSUs) and time-switching users (TSUs), as well as the beamforming vectors associated with PSUs and TSUs. A two-layer algorithm based on semi-definite relaxation is proposed to handle the complexity of the non-convex optimization problem. The global optimality is theoretically analyzed, and the impact of each parameter on system performance is discussed. Numerical results indicate that TSUs are more prone to saturation than PSUs under identical EH requirements. The minimal required transmit power under the nonlinear EH model is much lower than that under the linear EH model. Moreover, the number of TSUs is the primary limiting factor for transmit power minimization, which can be effectively mitigated by the proposed algorithm. Full article
(This article belongs to the Special Issue Integrated Sensing and Communication (ISAC) in 6G)

19 pages, 5288 KiB  
Article
Multi-Particle-Collision Simulation of Heat Transfer in Low-Dimensional Fluids
by Rongxiang Luo and Stefano Lepri
Entropy 2025, 27(5), 455; https://doi.org/10.3390/e27050455 - 24 Apr 2025
Abstract
The simulation of the transport properties of confined, low-dimensional fluids can be performed efficiently by means of multi-particle collision (MPC) dynamics with suitable thermal-wall boundary conditions. We illustrate the effectiveness of the method by studying the dimensionality effects and size-dependence of thermal conduction, since these properties are of crucial importance for understanding heat transfer at the micro–nanoscale. We provide sound numerical evidence that the simple MPC fluid displays the features previously predicted from the hydrodynamics of lattice systems: (1) in 1D, the thermal conductivity κ diverges with the system size L as κ ∝ L^(1/3), and the total heat current autocorrelation function C(t) decays with time t as C(t) ∝ t^(−2/3); (2) in 2D, κ diverges with L as κ ∝ ln(L), and C(t) decays as C(t) ∝ t^(−1); (3) in 3D, κ is independent of L, and C(t) decays as C(t) ∝ t^(−3/2). For weak interaction (the nearly integrable case) in 1D and 2D, there exists an intermediate regime of sizes where kinetic effects dominate and transport is diffusive before crossing over to the expected anomalous regime. The crossover can be studied by decomposing the heat current into two contributions, which allows for a very accurate test of the predictions. In addition, we show that, upon increasing the aspect ratio of the system, there is a dimensional crossover from 2D or 3D behavior to 1D behavior. Finally, we show that an applied magnetic field renders the transport normal, indicating that pseudomomentum conservation is not sufficient for anomalous heat conduction to occur. Full article
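The size-scaling claims are of the form κ(L) ∝ L^α, so a reported exponent reduces to the slope of log κ versus log L. A sketch of that fit on synthetic data obeying the 1D prediction α = 1/3 (the values are illustrative, not the paper's data):

```python
import numpy as np

def fit_power_law_exponent(L, kappa):
    """Least-squares slope of log(kappa) vs log(L): the exponent alpha in kappa ~ L**alpha."""
    slope, _intercept = np.polyfit(np.log(L), np.log(kappa), 1)
    return slope

L = np.array([256, 512, 1024, 2048, 4096])
kappa = 3.0 * L ** (1.0 / 3.0)    # synthetic conductivities following the 1D prediction
alpha = fit_power_law_exponent(L, kappa)
```

Applied to measured κ(L), a fitted α near 1/3 (1D) or a logarithmic trend (2D) distinguishes the anomalous regimes from normal, size-independent conduction.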

17 pages, 400 KiB  
Article
Efficient Circuit Implementations of Continuous-Time Quantum Walks for Quantum Search
by Renato Portugal and Jalil Khatibi Moqadam
Entropy 2025, 27(5), 454; https://doi.org/10.3390/e27050454 - 23 Apr 2025
Abstract
Quantum walks are a powerful framework for simulating complex quantum systems and designing quantum algorithms, particularly for spatial search on graphs, where the goal is to find a marked vertex efficiently. In this work, we present efficient quantum circuits that implement the evolution operator of continuous-time quantum-walk-based search algorithms for three graph families: complete graphs, complete bipartite graphs, and hypercubes. For complete and complete bipartite graphs, our circuits exactly implement the evolution operator. For hypercubes, we propose an approximate implementation that closely matches the exact evolution operator as the number of vertices increases. Our Qiskit simulations demonstrate that even for low-dimensional hypercubes, the algorithm effectively identifies the marked vertex. Furthermore, the approximate implementation developed for hypercubes can be extended to a broad class of graphs, enabling efficient quantum search in scenarios where exact implementations are impractical. Full article
(This article belongs to the Special Issue Quantum Walks for Quantum Technologies)
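For the complete graph, the search dynamics stays in the two-dimensional span of the uniform state and the marked vertex, and with hopping rate γ = 1/N the walker concentrates on the marked vertex at time t = (π/2)√N. A direct matrix-exponential check of this behavior (this verifies the evolution operator numerically; it is not the paper's circuit construction):

```python
import numpy as np
from scipy.linalg import expm

N, marked = 16, 3
A = np.ones((N, N)) - np.eye(N)            # adjacency matrix of the complete graph K_N
oracle = np.zeros((N, N))
oracle[marked, marked] = 1.0               # projector onto the marked vertex
H = -A / N - oracle                        # search Hamiltonian with gamma = 1/N
psi0 = np.ones(N) / np.sqrt(N)             # start in the uniform superposition

t_opt = 0.5 * np.pi * np.sqrt(N)           # optimal measurement time for K_N
psi = expm(-1j * H * t_opt) @ psi0
p_success = abs(psi[marked]) ** 2          # probability of finding the marked vertex
```

For K_N the two-level analysis predicts p_success = 1 at t_opt, which the numerics reproduce to machine precision; the paper's contribution is realizing `expm(-1j * H * t)` as an efficient quantum circuit.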
