Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

31 pages, 402 KB  
Article
Process and Time
by William Sulis
Entropy 2023, 25(5), 803; https://doi.org/10.3390/e25050803 - 15 May 2023
Cited by 4 | Viewed by 5628
Abstract
Regarding the nature of time, it has become commonplace to hear physicists state that time does not exist and that the perception of time passing and of events occurring in time is an illusion. In this paper, I argue that physics is actually agnostic on the question of the nature of time. The standard arguments against its existence all suffer from implicit biases and hidden assumptions, rendering many of them circular in nature. An alternative viewpoint to that of Newtonian materialism is the process view of Whitehead. I will show that the process perspective supports the reality of becoming, of happening, and of change. At the fundamental level, time is an expression of the action of process generating the elements of reality. Metrical space–time is an emergent aspect of relations between process-generated entities. Such a view is compatible with existing physics. The situation of time in physics is reminiscent of that of the continuum hypothesis in mathematical logic. It may be an independent assumption, not provable within physics proper (though it may someday be amenable to experimental exploration). Full article
(This article belongs to the Special Issue Quantum Information and Probability: From Foundations to Engineering)
12 pages, 914 KB  
Article
Dissipation during the Gating Cycle of the Bacterial Mechanosensitive Ion Channel Approaches the Landauer Limit
by Uğur Çetiner, Oren Raz, Madolyn Britt and Sergei Sukharev
Entropy 2023, 25(5), 779; https://doi.org/10.3390/e25050779 - 10 May 2023
Cited by 3 | Viewed by 2415
Abstract
The Landauer principle sets a thermodynamic bound of k_B T ln 2 on the energetic cost of erasing each bit of information. It holds for any memory device, regardless of its physical implementation. It was recently shown that carefully built artificial devices can attain this bound. In contrast, biological computation-like processes, e.g., DNA replication, transcription, and translation, use an order of magnitude more energy than their Landauer minimum. Here, we show that reaching the Landauer bound is nevertheless possible with biological devices. This is achieved using a mechanosensitive channel of small conductance (MscS) from E. coli as a memory bit. MscS is a fast-acting osmolyte release valve adjusting turgor pressure inside the cell. Our patch-clamp experiments and data analysis demonstrate that under a slow switching regime, the heat dissipation in the course of tension-driven gating transitions in MscS closely approaches its Landauer limit. We discuss the biological implications of this physical trait. Full article
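
As a quick check on the scale involved, the bound itself is straightforward to compute. A minimal sketch; the ~310 K physiological temperature used here is an illustrative assumption, not a value taken from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (CODATA value)

def landauer_bound(temperature_kelvin: float) -> float:
    """Minimum heat k_B * T * ln(2) dissipated per erased bit, in joules."""
    return K_B * temperature_kelvin * math.log(2)

# At a physiological ~310 K the limit is on the order of 3e-21 J per bit.
print(f"Landauer limit at 310 K: {landauer_bound(310.0):.3e} J")
```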

17 pages, 3278 KB  
Article
Identifying Influential Nodes in Complex Networks Based on Information Entropy and Relationship Strength
by Ying Xi and Xiaohui Cui
Entropy 2023, 25(5), 754; https://doi.org/10.3390/e25050754 - 5 May 2023
Cited by 15 | Viewed by 4401
Abstract
Identifying influential nodes is a key research topic in complex networks, and there have been many studies based on complex networks to explore the influence of nodes. Graph neural networks (GNNs) have emerged as a prominent deep learning architecture, capable of efficiently aggregating node information and discerning node influence. However, existing graph neural networks often ignore the strength of the relationships between nodes when aggregating information about neighboring nodes. In complex networks, neighboring nodes often do not have the same influence on the target node, so existing graph neural network methods are not effective. In addition, the diversity of complex networks also makes it difficult to adapt node features with a single attribute to different types of networks. To address these problems, this paper constructs node input features using information entropy combined with the node degree and the average degree of its neighbors, and proposes a simple and effective graph neural network model. The model obtains the strength of the relationships between nodes by considering the degree of neighborhood overlap, and uses this as the basis for message passing, thereby effectively aggregating information about nodes and their neighborhoods. Experiments are conducted on 12 real networks, using the SIR model to verify the effectiveness of the model against benchmark methods. The experimental results show that the model can identify the influence of nodes in complex networks more effectively. Full article
(This article belongs to the Topic Computational Complex Networks)
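
The "degree of neighborhood overlap" idea can be sketched with a Jaccard-style coefficient over neighbor sets. This is a common stand-in used here for illustration only, not necessarily the exact weighting the authors use:

```python
# Relationship strength from neighborhood overlap (illustrative sketch).

def neighborhood_overlap(adj: dict, u, v) -> float:
    """Jaccard overlap of the neighbor sets of nodes u and v."""
    nu, nv = adj[u], adj[v]
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# Toy undirected network as an adjacency mapping (node -> set of neighbors).
adj = {
    1: {2, 3},
    2: {1, 3},
    3: {1, 2, 4},
    4: {3},
}
print(neighborhood_overlap(adj, 1, 2))  # nodes 1 and 2 share neighbor 3
```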

17 pages, 464 KB  
Opinion
Senses along Which the Entropy Sq Is Unique
by Constantino Tsallis
Entropy 2023, 25(5), 743; https://doi.org/10.3390/e25050743 - 1 May 2023
Cited by 6 | Viewed by 2812
Abstract
The Boltzmann–Gibbs–von Neumann–Shannon additive entropy S_BG = −k Σ_i p_i ln p_i, as well as its continuous and quantum counterparts, constitutes the grounding concept on which BG statistical mechanics is constructed. This magnificent theory has produced, and will most probably continue to produce, successes in vast classes of classical and quantum systems. However, recent decades have seen a proliferation of natural, artificial and social complex systems which defy its bases and make it inapplicable. This paradigmatic theory was generalized in 1988 into nonextensive statistical mechanics—as it is currently referred to—grounded on the nonadditive entropy S_q = k (1 − Σ_i p_i^q)/(q − 1), as well as its corresponding continuous and quantum counterparts. Over fifty mathematically well-defined entropic functionals now exist in the literature. S_q plays a special role among them: it constitutes the pillar of a great variety of theoretical, experimental, observational and computational validations in the area of complexity—plectics, as Murray Gell-Mann used to call it. A question then emerges naturally: in what senses is the entropy S_q unique? The present effort is dedicated to a—surely non-exhaustive—mathematical answer to this basic question. Full article
(This article belongs to the Special Issue The Statistical Foundations of Entropy II)
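
A minimal numerical sketch of S_q (with k = 1) shows the two properties at stake: the q → 1 limit recovers the additive Shannon entropy, and for independent subsystems S_q satisfies the nonadditive composition rule S_q(A+B) = S_q(A) + S_q(B) + (1 − q) S_q(A) S_q(B). The probabilities below are illustrative, not from the article:

```python
import math

def tsallis_entropy(p, q: float) -> float:
    """Nonadditive entropy S_q with k = 1; Shannon entropy in the q -> 1 limit."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.3, 0.2]
print(tsallis_entropy(p, 1.0))     # Shannon value, ~1.0297
print(tsallis_entropy(p, 1.0001))  # S_q approaches it as q -> 1
```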

13 pages, 2488 KB  
Article
On Two Non-Ergodic Reversible Cellular Automata, One Classical, the Other Quantum
by Tomaž Prosen
Entropy 2023, 25(5), 739; https://doi.org/10.3390/e25050739 - 30 Apr 2023
Cited by 3 | Viewed by 2050
Abstract
We propose and discuss two variants of kinetic particle models—cellular automata in 1 + 1 dimensions—that have some appeal due to their simplicity and intriguing properties, which could warrant further research and applications. The first model is a deterministic and reversible automaton describing two species of quasiparticles: stable massless matter particles moving with velocity ±1 and unstable standing (zero velocity) field particles. We discuss two distinct continuity equations for three conserved charges of the model. While the first two charges and the corresponding currents have support of three lattice sites and represent a lattice analogue of the conserved energy–momentum tensor, we find an additional conserved charge and current with support of nine sites, implying non-ergodic behaviour and potentially signalling integrability of the model with a highly nested R-matrix structure. The second model represents a quantum (or stochastic) deformation of a recently introduced and studied charged hardpoint lattice gas, where particles of different binary charge (±1) and binary velocity (±1) can nontrivially mix upon elastic collisional scattering. We show that while the unitary evolution rule of this model does not satisfy the full Yang–Baxter equation, it still satisfies an intriguing related identity which gives birth to an infinite set of local conserved operators, the so-called glider operators. Full article

86 pages, 1383 KB  
Article
Information Rates for Channels with Fading, Side Information and Adaptive Codewords
by Gerhard Kramer
Entropy 2023, 25(5), 728; https://doi.org/10.3390/e25050728 - 27 Apr 2023
Cited by 9 | Viewed by 3611
Abstract
Generalized mutual information (GMI) is used to compute achievable rates for fading channels with various types of channel state information at the transmitter (CSIT) and receiver (CSIR). The GMI is based on variations of auxiliary channel models with additive white Gaussian noise (AWGN) and circularly-symmetric complex Gaussian inputs. One variation uses reverse channel models with minimum mean square error (MMSE) estimates that give the largest rates but are challenging to optimize. A second variation uses forward channel models with linear MMSE estimates that are easier to optimize. Both model classes are applied to channels where the receiver is unaware of the CSIT and for which adaptive codewords achieve capacity. The forward model inputs are chosen as linear functions of the adaptive codeword’s entries to simplify the analysis. For scalar channels, the maximum GMI is then achieved by a conventional codebook, where the amplitude and phase of each channel symbol are modified based on the CSIT. The GMI increases by partitioning the channel output alphabet and using a different auxiliary model for each partition subset. The partitioning also helps to determine the capacity scaling at high and low signal-to-noise ratios. A class of power control policies is described for partial CSIR, including an MMSE policy for full CSIT. Several examples of fading channels with AWGN illustrate the theory, focusing on on-off fading and Rayleigh fading. The capacity results generalize to block fading channels with in-block feedback, including capacity expressions in terms of mutual and directed information. Full article
(This article belongs to the Special Issue Wireless Networks: Information Theoretic Perspectives III)

11 pages, 647 KB  
Article
Outlier-Robust Surrogate Modeling of Ion–Solid Interaction Simulations
by Roland Preuss and Udo von Toussaint
Entropy 2023, 25(4), 685; https://doi.org/10.3390/e25040685 - 19 Apr 2023
Viewed by 1412
Abstract
Data for complex plasma–wall interactions require long-running and expensive computer simulations. Furthermore, the number of input parameters is large, which results in low coverage of the (physical) parameter space. The unpredictable occurrence of outliers creates a need to explore this multi-dimensional space using robust analysis tools. We restate the Gaussian process (GP) method as a Bayesian adaptive exploration method for establishing surrogate surfaces in the variables of interest. On this basis, we expand the analysis by the Student-t process (TP) method in order to improve the robustness of the result with respect to outliers. The most obvious difference between the two methods shows up in the marginal likelihood for the hyperparameters of the covariance function, where the TP method features a broader marginal probability distribution in the presence of outliers. Finally, we present first investigations using a mixture likelihood of two Gaussians within a Gaussian process ansatz to describe either outlier or non-outlier behavior. The parameters of the two Gaussians are set such that the mixture likelihood resembles the shape of a Student-t likelihood. Full article
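
The robustness mechanism behind the TP (and behind the Student-t-shaped mixture likelihood) is the heavier tail: an outlier is penalized logarithmically rather than quadratically. A minimal illustration of the two standardized log-densities, not the paper's regression code:

```python
import math

def gaussian_logpdf(x: float) -> float:
    """Log-density of the standard normal distribution."""
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def student_t_logpdf(x: float, nu: float) -> float:
    """Log-density of the Student-t distribution with nu degrees of freedom."""
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi)
            - (nu + 1) / 2 * math.log1p(x * x / nu))

outlier = 5.0  # five standard deviations out
print(gaussian_logpdf(outlier))        # ~ -13.4: harsh quadratic penalty
print(student_t_logpdf(outlier, 3.0))  # ~ -5.5: much milder penalty
```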

14 pages, 468 KB  
Article
A Method Based on Temporal Embedding for the Pairwise Alignment of Dynamic Networks
by Pietro Cinaglia and Mario Cannataro
Entropy 2023, 25(4), 665; https://doi.org/10.3390/e25040665 - 15 Apr 2023
Cited by 14 | Viewed by 2953
Abstract
In network analysis, real-world systems may be represented via graph models, where nodes and edges represent biological objects (e.g., genes, proteins, molecules) and their interactions, respectively. This knowledge-graph model may also capture the dynamics involved in the evolution of the network (i.e., dynamic networks), in addition to the classic static representation (i.e., static networks). Bioinformatics solutions for network analysis allow knowledge to be extracted from the features of a single network of interest or by comparing networks of different species. For instance, a network related to a well-known species may be aligned to a more complex one in order to find a match able to support new hypotheses or studies. Network alignment is therefore crucial for transferring knowledge between species, usually from the simpler (e.g., rat) to the more complex (e.g., human). Methods: In this paper, we present Dynamic Network Alignment based on Temporal Embedding (DANTE), a novel method for the pairwise alignment of dynamic networks that applies temporal embedding to investigate the topological similarities between the two input dynamic networks. The main idea of DANTE is to consider the evolution of interactions and the changes in network topology. Briefly, the proposed solution builds a similarity matrix by integrating the tensors computed via the embedding process and, subsequently, aligns the pairs of nodes by performing its own iterative maximization function. Results: The experiments reported promising results in terms of precision and accuracy, as well as good robustness as the number of nodes and time points increases. The proposed solution showed an optimal trade-off between sensitivity and specificity on the alignments produced on several noisy versions of the dynamic yeast network, improving the Area Under the Receiver Operating Characteristic (ROC) Curve (i.e., AUC or AUROC) by ∼18.8% (with a maximum of 20.6%) compared to two well-known methods: DYNAMAGNA++ and DYNAWAVE. In terms of quality, DANTE outperformed these by ∼91% as the number of nodes increases and by ∼75% as the number of time points increases. Furthermore, a ∼23.73% improvement in node correctness was reported with our solution on real dynamic networks. Full article
(This article belongs to the Special Issue Foundations of Network Analysis)
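
The final alignment step, building a similarity matrix and pairing nodes by iterative maximization, can be sketched generically as a greedy argmax over unmatched pairs. The similarity values below are made up for illustration; DANTE derives its matrix from temporal embeddings, which are not reproduced here:

```python
# Greedy pairwise alignment from a node-similarity matrix: repeatedly pick the
# highest-scoring still-unmatched (node_a, node_b) pair.

def greedy_align(sim):
    """Return (i, j) pairs aligning rows of sim to columns, greedily."""
    n_a, n_b = len(sim), len(sim[0])
    used_a, used_b, pairs = set(), set(), []
    for _ in range(min(n_a, n_b)):
        # Best remaining pair by similarity score.
        _, i, j = max((sim[i][j], i, j)
                      for i in range(n_a) if i not in used_a
                      for j in range(n_b) if j not in used_b)
        used_a.add(i)
        used_b.add(j)
        pairs.append((i, j))
    return pairs

sim = [[0.9, 0.1, 0.0],
       [0.2, 0.8, 0.1],
       [0.0, 0.3, 0.7]]
print(greedy_align(sim))  # [(0, 0), (1, 1), (2, 2)]
```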

20 pages, 1323 KB  
Article
Robustness of Network Controllability with Respect to Node Removals Based on In-Degree and Out-Degree
by Fenghua Wang and Robert E. Kooij
Entropy 2023, 25(4), 656; https://doi.org/10.3390/e25040656 - 14 Apr 2023
Viewed by 2163
Abstract
Network controllability and its robustness have been widely studied. However, analytical methods to calculate network controllability with respect to node in- and out-degree targeted removals are currently lacking. This paper develops methods, based on generating functions for the in- and out-degree distributions, to approximate the minimum number of driver nodes needed to control directed networks during node in- and out-degree targeted removals. By validating the proposed methods on synthetic and real-world networks, we show that our methods work reasonably well. Moreover, when the fraction of removed nodes is below 10%, the analytical results for random removals can also be used to predict the results of targeted node removals. Full article
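
For context, the quantity being approximated, the minimum number of driver nodes, follows from structural controllability as N_D = max(1, N − |M|), where |M| is the size of a maximum matching of the directed network (the classical Liu–Slotine–Barabási result). A small exact computation using Kuhn's augmenting-path matching; this is a baseline sketch, not the paper's generating-function method:

```python
# Minimum driver nodes of a directed network via maximum matching:
# each edge u -> v can match u's "out" copy to v's "in" copy.

def min_driver_nodes(n: int, edges):
    """N_D = max(1, n - maximum matching size) for nodes labeled 0..n-1."""
    succ = {u: [] for u in range(n)}
    for u, v in edges:
        succ[u].append(v)
    match_in = {}  # in-copy v -> out-copy u currently matched to it

    def try_match(u, seen):
        # Kuhn's algorithm: look for an augmenting path starting at u.
        for v in succ[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_in or try_match(match_in[v], seen):
                match_in[v] = u
                return True
        return False

    matching = sum(try_match(u, set()) for u in range(n))
    return max(1, n - matching)

print(min_driver_nodes(4, [(0, 1), (1, 2), (2, 3)]))  # directed path: 1
print(min_driver_nodes(4, [(0, 1), (0, 2), (0, 3)]))  # out-star: 3
```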

41 pages, 13660 KB  
Article
A Simple Approximation Method for the Fisher–Rao Distance between Multivariate Normal Distributions
by Frank Nielsen
Entropy 2023, 25(4), 654; https://doi.org/10.3390/e25040654 - 13 Apr 2023
Cited by 12 | Viewed by 6449
Abstract
We present a simple method to approximate the Fisher–Rao distance between multivariate normal distributions based on discretizing curves joining normal distributions and approximating the Fisher–Rao distances between successive nearby normal distributions on the curves by the square roots of their Jeffreys divergences. We consider experimentally the linear interpolation curves in the ordinary, natural, and expectation parameterizations of the normal distributions, and compare these curves with a curve derived from Calvo and Oller’s isometric embedding of the Fisher–Rao d-variate normal manifold into the cone of (d+1)×(d+1) symmetric positive-definite matrices. We report on our experiments and assess the quality of our approximation technique by comparing the numerical approximations with both lower and upper bounds. Finally, we present several information-geometric properties of Calvo and Oller’s isometric embedding. Full article
(This article belongs to the Special Issue Information Geometry and Its Applications)
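
In one dimension the method reduces to a few lines: discretize a curve between the two normals and sum the square roots of the Jeffreys divergences of successive nearby normals. A univariate sketch of this idea (the paper treats the multivariate case):

```python
import math

def kl_normal(m1, s1, m2, s2):
    """KL divergence KL(N(m1, s1^2) || N(m2, s2^2))."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def jeffreys(m1, s1, m2, s2):
    """Symmetrized KL (Jeffreys) divergence between two univariate normals."""
    return kl_normal(m1, s1, m2, s2) + kl_normal(m2, s2, m1, s1)

def rao_distance_approx(m0, s0, m1, s1, segments=100):
    """Sum sqrt(Jeffreys) along a linear interpolation in (mean, std-dev)."""
    total, prev = 0.0, (m0, s0)
    for k in range(1, segments + 1):
        t = k / segments
        cur = (m0 + t * (m1 - m0), s0 + t * (s1 - s0))
        total += math.sqrt(jeffreys(*prev, *cur))
        prev = cur
    return total

# Same variance, shifted mean: the length of this line is |m1 - m0| / sigma.
print(rao_distance_approx(0.0, 1.0, 1.0, 1.0))  # ~1.0
```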

16 pages, 321 KB  
Review
Collapse Models: A Theoretical, Experimental and Philosophical Review
by Angelo Bassi, Mauro Dorato and Hendrik Ulbricht
Entropy 2023, 25(4), 645; https://doi.org/10.3390/e25040645 - 12 Apr 2023
Cited by 17 | Viewed by 9294
Abstract
In this paper, we review and connect the three essential conditions needed by collapse models to achieve a complete and exact formulation, namely the theoretical, the experimental, and the ontological ones. These features correspond to the three parts of the paper. In any empirical science, the first two features are obviously connected but, as is well known, among the different formulations and interpretations of non-relativistic quantum mechanics, only collapse models, as the paper illustrates in detail, have experimental consequences. Finally, we show that a clarification of the ontological intimations of collapse models is needed for at least three reasons: (1) to respond to the indispensable task of answering the question ’what are collapse models (and in general any physical theory) about?’; (2) to achieve a deeper understanding of their different formulations; (3) to enlarge the panorama of possible readings of a theory, which historically has often played a fundamental heuristic role. Full article
30 pages, 6120 KB  
Article
How Much Is Enough? A Study on Diffusion Times in Score-Based Generative Models
by Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone and Pietro Michiardi
Entropy 2023, 25(4), 633; https://doi.org/10.3390/e25040633 - 7 Apr 2023
Cited by 22 | Viewed by 6086
Abstract
Score-based diffusion models are a class of generative models whose dynamics is described by stochastic differential equations that map noise into data. While recent works have started to lay down a theoretical foundation for these models, a detailed understanding of the role of the diffusion time T is still lacking. Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution; however, a smaller value of T should be preferred for a better approximation of the score-matching objective and higher computational efficiency. Starting from a variational interpretation of diffusion models, in this work we quantify this trade-off and suggest a new method to improve quality and efficiency of both training and sampling, by adopting smaller diffusion times. Indeed, we show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process. Empirical results support our analysis; for image data, our method is competitive with regard to the state of the art, according to standard sample quality metrics and log-likelihood. Full article
(This article belongs to the Special Issue Deep Generative Modeling: Theory and Applications)
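
The trade-off on T can be made concrete in the simplest possible setting: under an Ornstein–Uhlenbeck (variance-preserving) forward process, a Gaussian data distribution N(m0, s0²) evolves to N(m0·e^(−T), s0²·e^(−2T) + 1 − e^(−2T)), and its KL divergence to the N(0, 1) prior decays with T. Illustrative numbers, not taken from the paper:

```python
import math

def kl_to_standard_normal(mean: float, var: float) -> float:
    """KL(N(mean, var) || N(0, 1)) in closed form."""
    return 0.5 * (var + mean * mean - 1.0 - math.log(var))

def forward_marginal(m0: float, s0: float, t: float):
    """Marginal of x_t under an OU forward process started at N(m0, s0^2)."""
    decay = math.exp(-t)
    return m0 * decay, s0 * s0 * decay * decay + 1.0 - decay * decay

# The gap to the prior shrinks as the diffusion time grows.
for t in (0.5, 2.0, 8.0):
    m, v = forward_marginal(3.0, 0.5, t)
    print(t, kl_to_standard_normal(m, v))
```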

16 pages, 424 KB  
Article
On the Asymptotic Capacity of Information-Theoretic Privacy-Preserving Epidemiological Data Collection
by Jiale Cheng, Nan Liu and Wei Kang
Entropy 2023, 25(4), 625; https://doi.org/10.3390/e25040625 - 6 Apr 2023
Cited by 4 | Viewed by 2349
Abstract
The paradigm-shifting developments of cryptography and information theory have focused on the privacy of data-sharing systems, such as epidemiological studies, where agencies are collecting far more personal data than they need, causing intrusions on patients’ privacy. To study the capability of data collection while protecting privacy from an information theory perspective, we formulate a new distributed multiparty computation problem called privacy-preserving epidemiological data collection. In our setting, a data collector requires a linear combination of K users’ data through a storage system consisting of N servers. Privacy needs to be protected when the users, servers, and data collector do not trust each other. For the users, any data are required to be protected from up to E colluding servers; for the servers, no more information than the desired linear combination may be leaked to the data collector; and for the data collector, no single server can learn anything about the coefficients of the linear combination. Our goal is to find the optimal collection rate, which is defined as the ratio of the size of the user’s message to the total size of downloads from the N servers to the data collector. For achievability, we propose an asymptotic capacity-achieving scheme when E < N − 1, by applying the cross-subspace alignment method to our construction; for the converse, we prove an upper bound on the asymptotic rate for all achievable schemes when E < N − 1. Additionally, we show that a positive asymptotic capacity is not possible when E ≥ N − 1. The results of the achievability and converse meet as the number of users goes to infinity, yielding the asymptotic capacity. Our work broadens current research on data privacy in information theory and gives the best achievable asymptotic performance that any epidemiological data collector can obtain. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
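
The flavor of the setting can be conveyed with a toy additive secret-sharing scheme over a prime field: each user's datum is split into N shares so that any N − 1 servers see only uniform noise, yet the collector recovers the desired linear combination by summing the servers' replies. This illustrates only the basic privacy mechanism; the paper's cross-subspace alignment construction is far more refined (and additionally hides the coefficients from the servers):

```python
import random

P = 2_147_483_647  # a Mersenne prime; all arithmetic is modulo P

def share(x: int, n: int, rng) -> list:
    """Split x into n additive shares summing to x mod P."""
    parts = [rng.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

rng = random.Random(0)
data = [17, 42, 99]   # K = 3 users' private values
coeffs = [1, 2, 3]    # linear combination the collector wants
n_servers = 4

shares = [share(x, n_servers, rng) for x in data]   # shares[k][j]: user k, server j
# Each server j returns the coefficient-weighted sum of the shares it holds.
replies = [sum(c * shares[k][j] for k, c in enumerate(coeffs)) % P
           for j in range(n_servers)]
result = sum(replies) % P
print(result)  # 17*1 + 42*2 + 99*3 = 398
```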

19 pages, 337 KB  
Article
Relating a System’s Hamiltonian to Its Entropy Production Using a Complex Time Approach
by Michael C. Parker and Chris Jeynes
Entropy 2023, 25(4), 629; https://doi.org/10.3390/e25040629 - 6 Apr 2023
Cited by 11 | Viewed by 3461
Abstract
We exploit the properties of complex time to obtain an analytical relationship based on considerations of causality between the two Noether-conserved quantities of a system: its Hamiltonian and its entropy production. In natural units, when complexified, the one is simply the Wick-rotated complex conjugate of the other. A Hilbert transform relation is constructed in the formalism of quantitative geometrical thermodynamics, which enables system irreversibility to be handled analytically within a framework that unifies both the microscopic and macroscopic scales, and which also unifies the treatment of both reversibility and irreversibility as complementary parts of a single physical description. In particular, the thermodynamics of two unitary entities are considered: the alpha particle, which is absolutely stable (that is, trivially reversible with zero entropy production), and a black hole whose unconditional irreversibility is characterized by a non-zero entropy production, for which we show an alternate derivation, confirming our previous one. The thermodynamics of a canonical decaying harmonic oscillator are also considered. In this treatment, the complexification of time also enables a meaningful physical interpretation of both “imaginary time” and “imaginary energy”. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics III)

35 pages, 1146 KB  
Article
Dimensionless Groups by Entropic Similarity: I — Diffusion, Chemical Reaction and Dispersion Processes
by Robert K. Niven
Entropy 2023, 25(4), 617; https://doi.org/10.3390/e25040617 - 5 Apr 2023
Cited by 4 | Viewed by 2595
Abstract
Since the time of Buckingham in 1914, dimensional analysis and similarity arguments based on dimensionless groups have served as powerful tools for the analysis of systems in all branches of science and engineering. Dimensionless groups are generally classified into those arising from geometric similarity, based on ratios of length scales; kinematic similarity, based on ratios of velocities or accelerations; and dynamic similarity, based on ratios of forces. We propose an additional category of dimensionless groups based on entropic similarity, defined by ratios of (i) entropy production terms; (ii) entropy flow rates or fluxes; or (iii) information flow rates or fluxes. Since all processes involving work against friction, dissipation, diffusion, dispersion, mixing, separation, chemical reaction, gain of information or other irreversible changes are driven by (or must overcome) the second law of thermodynamics, it is appropriate to analyze them directly in terms of competing entropy-producing and transporting phenomena and the dominant entropic regime, rather than indirectly in terms of forces. In this study, entropic groups are derived for a wide variety of diffusion, chemical reaction and dispersion processes relevant to fluid mechanics, chemical engineering and environmental engineering. It is shown that many dimensionless groups traditionally derived by kinematic or dynamic similarity (including the Reynolds number) can also be recovered by entropic similarity—with a different entropic interpretation—while many new dimensionless groups can also be identified. The analyses significantly expand the scope of dimensional analysis and similarity arguments for the resolution of new and existing problems in science and engineering. Full article
(This article belongs to the Section Multidisciplinary Applications)

16 pages, 2958 KB  
Article
On the Different Abilities of Cross-Sample Entropy and K-Nearest-Neighbor Cross-Unpredictability in Assessing Dynamic Cardiorespiratory and Cerebrovascular Interactions
by Alberto Porta, Vlasta Bari, Francesca Gelpi, Beatrice Cairo, Beatrice De Maria, Davide Tonon, Gianluca Rossato and Luca Faes
Entropy 2023, 25(4), 599; https://doi.org/10.3390/e25040599 - 1 Apr 2023
Cited by 14 | Viewed by 2794
Abstract
Nonlinear markers of coupling strength are often utilized to typify cardiorespiratory and cerebrovascular regulations. The computation of these indices requires techniques describing nonlinear interactions between respiration (R) and heart period (HP) and between mean arterial pressure (MAP) and mean cerebral blood velocity (MCBv). We compared two model-free methods for the assessment of dynamic HP–R and MCBv–MAP interactions, namely the cross-sample entropy (CSampEn) and k-nearest-neighbor cross-unpredictability (KNNCUP). Comparison was carried out first over simulations generated by linear and nonlinear unidirectional causal, bidirectional linear causal, and lag-zero linear noncausal models, and then over experimental data acquired from 19 subjects at supine rest during spontaneous breathing and controlled respiration at 10, 15, and 20 breaths·minute⁻¹ as well as from 13 subjects at supine rest and during 60° head-up tilt. Linear markers were computed for comparison. We found that: (i) over simulations, CSampEn and KNNCUP exhibit different abilities in evaluating coupling strength; (ii) KNNCUP is more reliable than CSampEn when interactions occur according to a causal structure, while performances are similar in noncausal models; (iii) in healthy subjects, KNNCUP is more powerful in characterizing cardiorespiratory and cerebrovascular variability interactions than CSampEn and linear markers. We recommend KNNCUP for quantifying cardiorespiratory and cerebrovascular coupling. Full article
(This article belongs to the Special Issue Nonlinear Dynamics in Cardiovascular Signals)
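Neither estimator is defined in this listing; as a rough illustration, a simplified cross-sample entropy along the lines of the standard template-matching definition (the function name, embedding dimension, and tolerance below are our own choices, not the paper's implementation) could be sketched as:

```python
import numpy as np

def cross_sample_entropy(x, y, m=2, r=0.2):
    """Simplified cross-sample entropy between two equal-length series.

    Counts template matches of length m and m + 1 between x and y
    (Chebyshev distance <= r * pooled std) and returns -ln(A / B),
    using raw match counts rather than per-template probabilities.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    tol = r * np.std(np.concatenate([x, y]))

    def count_matches(m):
        xt = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        yt = np.array([y[i:i + m] for i in range(len(y) - m + 1)])
        d = np.max(np.abs(xt[:, None, :] - yt[None, :, :]), axis=2)
        return np.sum(d <= tol)

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Identical signals yield a small value, while an unrelated noise series drives the match ratio down and the entropy up.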
22 pages, 4492 KB  
Article
A Variational Quantum Linear Solver Application to Discrete Finite-Element Methods
by Corey Jason Trahan, Mark Loveland, Noah Davis and Elizabeth Ellison
Entropy 2023, 25(4), 580; https://doi.org/10.3390/e25040580 - 28 Mar 2023
Cited by 14 | Viewed by 5528
Abstract
Finite-element methods are industry standards for finding numerical solutions to partial differential equations. However, the application scale remains pivotal to the practical use of these methods, even for modern-day supercomputers. Large, multi-scale applications, for example, can be limited by their requirement of prohibitively large linear system solutions. It is therefore worthwhile to investigate whether near-term quantum algorithms have the potential for offering any kind of advantage over classical linear solvers. In this study, we investigate the recently proposed variational quantum linear solver (VQLS) for discrete solutions to partial differential equations. This method was found to scale polylogarithmically with the linear system size, and the method can be implemented using shallow quantum circuits on noisy intermediate-scale quantum (NISQ) computers. Herein, we utilize the hybrid VQLS to solve both the steady Poisson equation and the time-dependent heat and wave equations. Full article
(This article belongs to the Special Issue Advances in Quantum Computing)
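The hybrid VQLS itself is beyond a listing excerpt, but the kind of linear system it would be handed is easy to show: a second-order finite-difference discretization of the steady 1D Poisson equation −u″ = f (the grid size and source term below are illustrative, not the paper's test cases):

```python
import numpy as np

def poisson_1d_system(n, f):
    """Assemble A u = b for -u'' = f on (0, 1) with u(0) = u(1) = 0,
    using n interior second-order finite-difference nodes."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    x = np.linspace(h, 1.0 - h, n)
    return A, f(x), x

# Classical reference solution; a VQLS would approximate u variationally.
A, b, x = poisson_1d_system(63, lambda x: np.pi**2 * np.sin(np.pi * x))
u = np.linalg.solve(A, b)
```

With f = π² sin(πx) the discrete solution tracks the exact solution sin(πx) to O(h²) accuracy.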
15 pages, 500 KB  
Article
Time Series of Counts under Censoring: A Bayesian Approach
by Isabel Silva, Maria Eduarda Silva, Isabel Pereira and Brendan McCabe
Entropy 2023, 25(4), 549; https://doi.org/10.3390/e25040549 - 23 Mar 2023
Cited by 1 | Viewed by 2115
Abstract
Censored data are frequently found in diverse fields including environmental monitoring, medicine, economics and social sciences. Censoring occurs when observations are available only for a restricted range, e.g., due to a detection limit. Ignoring censoring produces biased estimates and unreliable statistical inference. The aim of this work is to contribute to the modelling of time series of counts under censoring using convolution closed infinitely divisible (CCID) models. The emphasis is on estimation and inference problems, using Bayesian approaches with Approximate Bayesian Computation (ABC) and Gibbs sampler with Data Augmentation (GDA) algorithms. Full article
(This article belongs to the Special Issue Discrete-Valued Time Series)
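The CCID models and ABC/GDA samplers are not reproduced here, but the motivating claim, that ignoring censoring biases estimates, is quick to verify on a right-censored Poisson sample (the rate and detection limit below are made-up values):

```python
import numpy as np

rng = np.random.default_rng(42)
lam, limit = 10.0, 12            # true Poisson rate and detection limit
y = rng.poisson(lam, size=20000)

# Values above the limit are only recorded as the limit itself
censored = np.minimum(y, limit)

naive = censored.mean()          # treats censored values as exact -> biased low
```

The naive mean lands well below the true rate of 10; a likelihood or Bayesian treatment that models the censoring mechanism avoids this bias.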
14 pages, 1668 KB  
Article
More Stages Decrease Dissipation in Irreversible Step Processes
by Peter Salamon, Bjarne Andresen, James Nulton, Ty N. F. Roach and Forest Rohwer
Entropy 2023, 25(3), 539; https://doi.org/10.3390/e25030539 - 21 Mar 2023
Cited by 2 | Viewed by 1949
Abstract
The dissipation in an irreversible step process is reduced when the number of steps is increased in any refinement of the steps in the process. This is a consequence of the ladder theorem, which states that, for any irreversible process proceeding by a sequence of relaxations, dividing any relaxation step into two will result in a new sequence that is more efficient than the original one. This results in a more-steps-the-better rule, even when the new sequence of steps is not reoptimized. This superiority of many steps is well established empirically in, e.g., insulation and separation applications. In particular, the fact that the division of any step into two steps improves the overall efficiency has interesting implications for biological evolution and emphasizes thermodynamic length as a central measure for dissipation. Full article
(This article belongs to the Section Thermodynamics)
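A textbook special case of the more-steps-the-better rule (not the paper's ladder-theorem argument) is heating a body of heat capacity C from T0 to Tf by successive contact with n equally spaced reservoirs; the total entropy production falls as n grows:

```python
import numpy as np

def entropy_production(n, T0=300.0, Tf=600.0, C=1.0):
    """Entropy produced when a body of heat capacity C equilibrates
    successively with n reservoirs stepped linearly from T0 to Tf."""
    temps = np.linspace(T0, Tf, n + 1)
    dS_body = C * np.log(Tf / T0)
    # the reservoir at temps[i+1] gives up heat C * (temps[i+1] - temps[i])
    dS_res = -C * np.sum(np.diff(temps) / temps[1:])
    return dS_body + dS_res
```

For these values, entropy_production(1) ≈ 0.193 C drops to ≈ 0.110 C with a single extra stage, illustrating that dividing any step into two reduces dissipation.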
39 pages, 8736 KB  
Article
NaRnEA: An Information Theoretic Framework for Gene Set Analysis
by Aaron T. Griffin, Lukas J. Vlahos, Codruta Chiuzan and Andrea Califano
Entropy 2023, 25(3), 542; https://doi.org/10.3390/e25030542 - 21 Mar 2023
Cited by 3 | Viewed by 5146
Abstract
Gene sets are being increasingly leveraged to make high-level biological inferences from transcriptomic data; however, existing gene set analysis methods rely on overly conservative, heuristic approaches for quantifying the statistical significance of gene set enrichment. We created Nonparametric analytical-Rank-based Enrichment Analysis (NaRnEA) to facilitate accurate and robust gene set analysis with an optimal null model derived using the information theoretic Principle of Maximum Entropy. By measuring the differential activity of ~2500 transcriptional regulatory proteins based on the differential expression of each protein’s transcriptional targets between primary tumors and normal tissue samples in three cohorts from The Cancer Genome Atlas (TCGA), we demonstrate that NaRnEA critically improves on two widely used gene set analysis methods: Gene Set Enrichment Analysis (GSEA) and analytical-Rank-based Enrichment Analysis (aREA). We show that the NaRnEA-inferred differential protein activity is significantly correlated with differential protein abundance inferred from independent, phenotype-matched mass spectrometry data in the Clinical Proteomic Tumor Analysis Consortium (CPTAC), confirming the statistical and biological accuracy of our approach. Additionally, our analysis crucially demonstrates that the sample-shuffling empirical null models leveraged by GSEA and aREA for gene set analysis are overly conservative, a shortcoming that is avoided by the newly developed Maximum Entropy analytical null model employed by NaRnEA. Full article
(This article belongs to the Special Issue Information Theory in Computational Biology)
13 pages, 459 KB  
Article
Entropic Dynamics in a Theoretical Framework for Biosystems
by Richard L. Summers
Entropy 2023, 25(3), 528; https://doi.org/10.3390/e25030528 - 18 Mar 2023
Cited by 3 | Viewed by 3206
Abstract
Central to an understanding of the physical nature of biosystems is an apprehension of their ability to control entropy dynamics in their environment. To achieve ongoing stability and survival, living systems must adaptively respond to incoming information signals concerning matter and energy perturbations in their biological continuum (biocontinuum). Entropy dynamics for the living system are then determined by the natural drive for reconciliation of these information divergences in the context of the constraints formed by the geometry of the biocontinuum information space. The configuration of this information geometry is determined by the inherent biological structure, processes and adaptive controls that are necessary for the stable functioning of the organism. The trajectory of this adaptive reconciliation process can be described by an information-theoretic formulation of the living system’s procedure for actionable knowledge acquisition that incorporates the axiomatic inference of the Kullback principle of minimum information discrimination (a derivative of Jaynes’ principle of maximal entropy). Utilizing relative information for entropic inference provides for the incorporation of a background of the adaptive constraints in biosystems within the operations of Fisher biologic replicator dynamics. This mathematical expression for entropic dynamics within the biocontinuum may then serve as a theoretical framework for the general analysis of biological phenomena. Full article
15 pages, 2994 KB  
Review
Quantum Chaos and Level Dynamics
by Jakub Zakrzewski
Entropy 2023, 25(3), 491; https://doi.org/10.3390/e25030491 - 13 Mar 2023
Cited by 6 | Viewed by 3243
Abstract
We review the application of level dynamics to spectra of quantally chaotic systems. We show that the statistical mechanics approach gives us predictions about level statistics intermediate between integrable and chaotic dynamics. Then we discuss in detail different statistical measures involving level dynamics, such as level avoided-crossing distributions, level slope distributions, or level curvature distributions. We show both the aspects of universality in these distributions and their limitations. We concentrate in some detail on measures imported from the quantum information approach such as the fidelity susceptibility, and more generally, geometric tensor matrix elements. Possible open problems are suggested. Full article
27 pages, 462 KB  
Article
Stochastic Expectation Maximization Algorithm for Linear Mixed-Effects Model with Interactions in the Presence of Incomplete Data
by Alandra Zakkour, Cyril Perret and Yousri Slaoui
Entropy 2023, 25(3), 473; https://doi.org/10.3390/e25030473 - 8 Mar 2023
Cited by 3 | Viewed by 3480
Abstract
The purpose of this paper is to propose a new algorithm based on stochastic expectation maximization (SEM) to deal with the problem of unobserved values when multiple interactions in a linear mixed-effects model (LMEM) are present. We test the effectiveness of the proposed algorithm against the stochastic approximation expectation maximization (SAEM) and Markov chain Monte Carlo (MCMC) algorithms. This comparison is implemented to highlight the importance of including the maximum effects that can affect the model. The applications are made on both simulated psychological and real data. The findings demonstrate that our proposed SEM algorithm is highly preferable to the other competitor algorithms. Full article
(This article belongs to the Special Issue Monte Carlo Simulation in Statistical Physics)
24 pages, 6744 KB  
Article
Rate Distortion Theory for Descriptive Statistics
by Peter Harremoës
Entropy 2023, 25(3), 456; https://doi.org/10.3390/e25030456 - 5 Mar 2023
Cited by 3 | Viewed by 3683
Abstract
Rate distortion theory was developed for optimizing lossy compression of data, but it also has applications in statistics. In this paper, we illustrate how rate distortion theory can be used to analyze various datasets. The analysis involves testing, identification of outliers, choice of compression rate, calculation of optimal reconstruction points, and assigning “descriptive confidence regions” to the reconstruction points. We study four models or datasets of increasing complexity: clustering, Gaussian models, linear regression, and a dataset describing orientations of early Islamic mosques. These examples illustrate how rate distortion analysis may serve as a common framework for handling different statistical problems. Full article
(This article belongs to the Collection Feature Papers in Information Theory)
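As a flavor of the "optimal reconstruction points" mentioned above (on a stand-in example, not the paper's datasets): a Lloyd-style alternation for a 1-bit quantizer of a standard normal source converges to reconstruction points at ±√(2/π) ≈ ±0.798, the conditional means of the two half-lines:

```python
import numpy as np
from statistics import NormalDist

# Dense deterministic stand-in for N(0, 1): equally likely quantile grid
grid = np.array([NormalDist().inv_cdf(q)
                 for q in np.linspace(0.0005, 0.9995, 2000)])

points = np.array([-1.0, 1.0])      # initial reconstruction points
for _ in range(50):
    # assign each sample to its nearest reconstruction point ...
    labels = np.abs(grid[:, None] - points[None, :]).argmin(axis=1)
    # ... then move each point to the conditional mean of its cell
    points = np.array([grid[labels == k].mean() for k in range(2)])
```

The alternation is the usual quantizer-design loop (nearest-point partition, then conditional means); the rate-distortion analysis in the paper dresses such reconstruction points with tests and descriptive confidence regions.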
10 pages, 4198 KB  
Article
Spectral Form Factor and Dynamical Localization
by Črt Lozej
Entropy 2023, 25(3), 451; https://doi.org/10.3390/e25030451 - 4 Mar 2023
Cited by 2 | Viewed by 2671
Abstract
Quantum dynamical localization occurs when quantum interference stops the diffusion of wave packets in momentum space. The expectation is that dynamical localization will occur when the typical transport time of the momentum diffusion is greater than the Heisenberg time. The transport time is typically computed from the corresponding classical dynamics. In this paper, we present an alternative approach based purely on the study of spectral fluctuations of the quantum system. The information about the transport times is encoded in the spectral form factor, which is the Fourier transform of the two-point spectral autocorrelation function. We compute large samples of the energy spectra (of the order of 10⁶ levels) and spectral form factors of 22 stadium billiards with parameter values across the transition between the localized and extended eigenstate regimes. The transport time is obtained from the point when the spectral form factor transitions from the non-universal to the universal regime predicted by random matrix theory. We study the dependence of the transport time on the parameter value and show that the level repulsion exponents, which are known to be a good measure of dynamical localization, depend linearly on the transport times obtained in this way. Full article
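The spectral form factor referred to here is, up to normalization conventions, the squared modulus of the Fourier-transformed spectrum; a minimal sketch on a toy random-matrix spectrum (not the billiard data) is:

```python
import numpy as np

def spectral_form_factor(levels, times):
    """K(t) = |sum_n exp(i E_n t)|^2 / N for a finite spectrum."""
    levels = np.asarray(levels, float)
    phases = np.exp(1j * np.outer(times, levels))
    return np.abs(phases.sum(axis=1))**2 / len(levels)

# Toy spectrum: eigenvalues of a random Hermitian (GUE-like) matrix
rng = np.random.default_rng(1)
H = rng.normal(size=(200, 200)) + 1j * rng.normal(size=(200, 200))
H = (H + H.conj().T) / 2
E = np.linalg.eigvalsh(H)
K = spectral_form_factor(E, np.linspace(0.1, 5.0, 50))
```

At t = 0 the form factor equals the number of levels; its departure from the universal random-matrix curve at short times is what encodes the transport time.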
12 pages, 484 KB  
Article
Two Features of the GINAR(1) Process and Their Impact on the Run-Length Performance of Geometric Control Charts
by Manuel Cabral Morais
Entropy 2023, 25(3), 444; https://doi.org/10.3390/e25030444 - 2 Mar 2023
Cited by 1 | Viewed by 1777
Abstract
The geometric first-order integer-valued autoregressive process (GINAR(1)) can be particularly useful to model relevant discrete-valued time series, namely in statistical process control. We resort to stochastic ordering to prove that the GINAR(1) process is a discrete-time Markov chain governed by a totally positive order 2 (TP2) transition matrix. Stochastic ordering is also used to compare transition matrices referring to pairs of GINAR(1) processes with different values of the marginal mean. We assess and illustrate the implications of these two stochastic ordering results, namely on the properties of the run length of geometric charts for monitoring GINAR(1) counts. Full article
(This article belongs to the Special Issue Discrete-Valued Time Series)
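The GINAR(1) process is built from a thinning operator; as a hedged illustration we sketch the simpler binomial-thinning INAR(1) with Poisson innovations (the negative-binomial thinning that produces geometric marginals is analogous but more involved):

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=0):
    """X_t = alpha o X_{t-1} + eps_t with binomial thinning and
    Poisson(lam) innovations; the stationary mean is lam / (1 - alpha)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))        # start near stationarity
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

x = simulate_inar1(alpha=0.5, lam=2.0, n=50000)
```

With α = 0.5 and λ = 2 the stationary mean is λ/(1 − α) = 4, which a long simulated path reproduces; run-length properties of a control chart follow from the transition matrix of this chain.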
12 pages, 1348 KB  
Article
The Effect of On-Site Potentials on Supratransmission in One-Dimensional Hamiltonian Lattices
by Tassos Bountis and Jorge E. Macías-Díaz
Entropy 2023, 25(3), 423; https://doi.org/10.3390/e25030423 - 26 Feb 2023
Cited by 3 | Viewed by 1807
Abstract
We investigated a class of one-dimensional (1D) Hamiltonian N-particle lattices whose binary interactions are quadratic and/or quartic in the potential. We also included on-site potential terms, frequently considered in connection with localization phenomena, in this class. Applying a sinusoidal perturbation at one end of the lattice and an absorbing boundary on the other, we studied the phenomenon of supratransmission and its dependence on two ranges of interactions, 0<α<∞ and 0<β<∞, as the effect of the on-site potential terms of the Hamiltonian varied. In previous works, we studied the critical amplitude As(α,Ω) at which supratransmission occurs, for one range parameter α, and showed that there was a sharp threshold above which energy was transmitted in the form of large-amplitude nonlinear modes, as long as the driving frequency Ω lay in the forbidden band-gap of the system. In the absence of on-site potentials, it is known that As(α,Ω) increases monotonically the longer the range of interactions is (i.e., as α→0). However, when on-site potential terms are taken into account, As(α,Ω) reaches a maximum at a low value of α that depends on Ω, below which supratransmission thresholds decrease sharply to lower values. In this work, we studied this phenomenon further, as the contribution of the on-site potential terms varied, and we explored in detail their effect on the supratransmission thresholds. Full article
30 pages, 2586 KB  
Review
Tsallis q-Statistics in Seismology
by Leonardo Di G. Sigalotti, Alejandro Ramírez-Rojas and Carlos A. Vargas
Entropy 2023, 25(3), 408; https://doi.org/10.3390/e25030408 - 23 Feb 2023
Cited by 17 | Viewed by 4208
Abstract
Non-extensive statistical mechanics (or q-statistics) is based on the so-called non-additive Tsallis entropy. Since its introduction by Tsallis, in 1988, as a generalization of the Boltzmann–Gibbs equilibrium statistical mechanics, it has steadily gained ground as a suitable theory for the description of the statistical properties of non-equilibrium complex systems. Therefore, it has been applied to numerous phenomena, including real seismicity. In particular, Tsallis entropy is expected to provide a guiding principle to reveal novel aspects of complex dynamical systems with catastrophes, such as seismic events. The exploration of the existing connections between Tsallis formalism and real seismicity has been the focus of extensive research activity in the last two decades. In particular, Tsallis q-statistics has provided a unified framework for the description of the collective properties of earthquakes and faults. Despite this progress, our present knowledge of the physical processes leading to the initiation of a rupture, and its subsequent growth through a fault system, remains quite limited. The aim of this paper was to provide an overview of the non-extensive interpretation of seismicity, along with the contributions of the Tsallis formalism to the statistical description of seismic events. Full article
(This article belongs to the Section Entropy Reviews)
15 pages, 418 KB  
Article
Design and Analysis of Joint Group Shuffled Scheduling Decoding Algorithm for Double LDPC Codes System
by Qiwang Chen, Yanzhao Ren, Lin Zhou, Chen Chen and Sanya Liu
Entropy 2023, 25(2), 357; https://doi.org/10.3390/e25020357 - 15 Feb 2023
Cited by 5 | Viewed by 3538
Abstract
In this paper, a joint group shuffled scheduling decoding (JGSSD) algorithm for a joint source-channel coding (JSCC) scheme based on double low-density parity-check (D-LDPC) codes is presented. The proposed algorithm considers the D-LDPC coding structure as a whole and applies shuffled scheduling to each group; the grouping relies on the types or the length of the variable nodes (VNs). By comparison, the conventional shuffled scheduling decoding algorithm can be regarded as a special case of this proposed algorithm. A novel joint extrinsic information transfer (JEXIT) algorithm for the D-LDPC codes system with the JGSSD algorithm is proposed, by which the source and channel decoding are calculated with different grouping strategies to analyze the effects of the grouping strategy. Simulation results and comparisons verify the superiority of the JGSSD algorithm, which can adaptively trade off the decoding performance, complexity and latency. Full article
(This article belongs to the Special Issue Coding and Entropy)
23 pages, 2911 KB  
Article
The Typical Set and Entropy in Stochastic Systems with Arbitrary Phase Space Growth
by Rudolf Hanel and Bernat Corominas-Murtra
Entropy 2023, 25(2), 350; https://doi.org/10.3390/e25020350 - 14 Feb 2023
Cited by 3 | Viewed by 2751
Abstract
The existence of the typical set is key for data compression strategies and for the emergence of robust statistical observables in macroscopic physical systems. Standard approaches derive its existence from a restricted set of dynamical constraints. However, given its central role underlying the emergence of stable, almost deterministic statistical patterns, a question arises whether typical sets exist in much more general scenarios. We demonstrate here that the typical set can be defined and characterized from general forms of entropy for a much wider class of stochastic processes than was previously thought. This includes processes showing arbitrary path dependence, long range correlations or dynamic sampling spaces, suggesting that typicality is a generic property of stochastic processes, regardless of their complexity. We argue that the potential emergence of robust properties in complex stochastic systems provided by the existence of typical sets has special relevance to biological systems. Full article
(This article belongs to the Special Issue The Statistical Foundations of Entropy II)
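For the classical i.i.d. setting that this paper generalizes, the typical set can be exhibited numerically: almost all long Bernoulli(p) sequences have per-symbol log-probability within ε of the entropy H(p) (the parameters below are illustrative):

```python
import numpy as np

p, n, trials = 0.3, 1000, 5000
H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # source entropy in bits

rng = np.random.default_rng(7)
seqs = rng.random((trials, n)) < p
ones = seqs.sum(axis=1)
# log2-probability of each sampled sequence under the source
log_probs = ones * np.log2(p) + (n - ones) * np.log2(1 - p)

eps = 0.05
in_typical = np.abs(-log_probs / n - H) < eps
frac = in_typical.mean()
```

For these parameters the empirical fraction inside the ε-typical set is close to one, as the asymptotic equipartition property predicts; the paper's point is that analogous concentration survives far beyond the i.i.d. case.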
15 pages, 2368 KB  
Article
System Integrated Information
by William Marshall, Matteo Grasso, William G. P. Mayner, Alireza Zaeemzadeh, Leonardo S. Barbosa, Erick Chastain, Graham Findlay, Shuntaro Sasai, Larissa Albantakis and Giulio Tononi
Entropy 2023, 25(2), 334; https://doi.org/10.3390/e25020334 - 11 Feb 2023
Cited by 10 | Viewed by 5286
Abstract
Integrated information theory (IIT) starts from consciousness itself and identifies a set of properties (axioms) that are true of every conceivable experience. The axioms are translated into a set of postulates about the substrate of consciousness (called a complex), which are then used to formulate a mathematical framework for assessing both the quality and quantity of experience. The explanatory identity proposed by IIT is that an experience is identical to the cause–effect structure unfolded from a maximally irreducible substrate (a Φ-structure). In this work we introduce a definition for the integrated information of a system (φs) that is based on the existence, intrinsicality, information, and integration postulates of IIT. We explore how notions of determinism, degeneracy, and fault lines in the connectivity impact system-integrated information. We then demonstrate how the proposed measure identifies complexes as systems, the φs of which is greater than the φs of any overlapping candidate systems. Full article
(This article belongs to the Collection Advances in Integrated Information Theory)
12 pages, 574 KB  
Article
Toward Prediction of Financial Crashes with a D-Wave Quantum Annealer
by Yongcheng Ding, Javier Gonzalez-Conde, Lucas Lamata, José D. Martín-Guerrero, Enrique Lizaso, Samuel Mugel, Xi Chen, Román Orús, Enrique Solano and Mikel Sanz
Entropy 2023, 25(2), 323; https://doi.org/10.3390/e25020323 - 10 Feb 2023
Cited by 22 | Viewed by 5344
Abstract
The prediction of financial crashes in a complex financial network is known to be an NP-hard problem, which means that no known algorithm can efficiently find optimal solutions. We experimentally explore a novel approach to this problem by using a D-Wave quantum annealer, benchmarking its performance for attaining a financial equilibrium. To be specific, the equilibrium condition of a nonlinear financial model is embedded into a higher-order unconstrained binary optimization (HUBO) problem, which is then transformed into a spin-1/2 Hamiltonian with at most two-qubit interactions. The problem is thus equivalent to finding the ground state of an interacting spin Hamiltonian, which can be approximated with a quantum annealer. The size of the simulation is mainly constrained by the necessity of a large number of physical qubits representing a logical qubit with the correct connectivity. Our experiment paves the way for the codification of this quantitative macroeconomics problem in quantum annealers. Full article
(This article belongs to the Special Issue Quantum Control and Quantum Computing)
22 pages, 2941 KB  
Article
Characterizing the Impact of Communication on Cellular and Collective Behavior Using a Three-Dimensional Multiscale Cellular Model
by Moriah Echlin, Boris Aguilar and Ilya Shmulevich
Entropy 2023, 25(2), 319; https://doi.org/10.3390/e25020319 - 9 Feb 2023
Viewed by 2385
Abstract
Communication between cells enables the coordination that drives structural and functional complexity in biological systems. Both single and multicellular organisms have evolved diverse communication systems for a range of purposes, including synchronization of behavior, division of labor, and spatial organization. Synthetic systems are also increasingly being engineered to utilize cell–cell communication. While research has elucidated the form and function of cell–cell communication in many biological systems, our knowledge is still limited by the confounding effects of other biological phenomena at play and the bias of the evolutionary background. In this work, our goal is to push forward the context-free understanding of what impact cell–cell communication can have on cellular and population behavior to more fully understand the extent to which cell–cell communication systems can be utilized, modified, and engineered. We use an in silico model of 3D multiscale cellular populations, with dynamic intracellular networks interacting via diffusible signals. We focus on two key communication parameters: the effective interaction distance at which cells are able to interact and the receptor activation threshold. We found that cell–cell communication can be divided into six different forms along the parameter axes, three asocial and three social. We also show that cellular behavior, tissue composition, and tissue diversity are all highly sensitive to both the general form and specific parameters of communication even when the cellular network has not been biased towards that behavior. Full article
(This article belongs to the Section Complexity)
19 pages, 339 KB  
Article
Uncertainty Relations in the Madelung Picture Including a Dissipative Environment
by Dieter Schuch and Moise Bonilla-Licea
Entropy 2023, 25(2), 312; https://doi.org/10.3390/e25020312 - 8 Feb 2023
Cited by 2 | Viewed by 1706
Abstract
In a recent paper, we have shown how in Madelung’s hydrodynamic formulation of quantum mechanics, the uncertainties are related to the phase and amplitude of the complex wave function. Now we include a dissipative environment via a nonlinear modified Schrödinger equation. The effect of the environment is described by a complex logarithmic nonlinearity that vanishes on average. Nevertheless, there are various changes in the dynamics of the uncertainties originating from the nonlinear term. Again, this is illustrated explicitly using generalized coherent states as examples. With particular focus on the quantum mechanical contribution to the energy and the uncertainty product, connections can be made with the thermodynamic properties of the environment. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations III)
9 pages, 492 KB  
Article
Carnot Cycles in a Harmonically Confined Ultracold Gas across Bose–Einstein Condensation
by Ignacio Reyes-Ayala, Marcos Miotti, Michal Hemmerling, Romain Dubessy, Hélène Perrin, Victor Romero-Rochin and Vanderlei Salvador Bagnato
Entropy 2023, 25(2), 311; https://doi.org/10.3390/e25020311 - 8 Feb 2023
Cited by 5 | Viewed by 3413
Abstract
Carnot cycles of samples of harmonically confined ultracold ⁸⁷Rb fluids, near and across Bose–Einstein condensation (BEC), are analyzed. This is achieved through the experimental determination of the corresponding equation of state in terms of the appropriate global thermodynamics for non-uniform confined fluids. We focus our attention on the efficiency of the Carnot engine when the cycle occurs for temperatures either above or below the critical temperature and when BEC is crossed during the cycle. The measurement of the cycle efficiency reveals a perfect agreement with the theoretical prediction (1 − TL/TH), with TH and TL serving as the temperatures of the hot and cold heat exchange reservoirs. Other cycles are also considered for comparison. Full article
(This article belongs to the Special Issue Ultracold Gases and Thermodynamics)
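As a minimal Python sketch (not the authors' code; the nanokelvin temperatures below are purely illustrative), the Carnot benchmark efficiency 1 − TL/TH quoted in the abstract is:

```python
# Ideal Carnot efficiency eta = 1 - T_L / T_H, the benchmark the measured
# cycle efficiencies are compared against in the abstract.
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Ideal Carnot efficiency for reservoirs at absolute temperatures t_hot > t_cold."""
    if not (t_hot > t_cold > 0):
        raise ValueError("require t_hot > t_cold > 0 (absolute temperatures)")
    return 1.0 - t_cold / t_hot

# Hypothetical reservoir temperatures (in kelvin) for an ultracold sample:
print(carnot_efficiency(200e-9, 100e-9))  # 0.5
```

Note the efficiency depends only on the temperature ratio, which is why it applies unchanged to the nanokelvin regime of a confined BEC.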
33 pages, 752 KB  
Article
Distributed Hypothesis Testing over a Noisy Channel: Error-Exponents Trade-Off
by Sreejith Sreekumar and Deniz Gündüz
Entropy 2023, 25(2), 304; https://doi.org/10.3390/e25020304 - 6 Feb 2023
Viewed by 3128
Abstract
A two-terminal distributed binary hypothesis testing problem over a noisy channel is studied. The two terminals, called the observer and the decision maker, each have access to n independent and identically distributed samples, denoted by U and V, respectively. The observer communicates to the decision maker over a discrete memoryless channel, and the decision maker performs a binary hypothesis test on the joint probability distribution of (U,V) based on V and the noisy information received from the observer. The trade-off between the exponents of the type I and type II error probabilities is investigated. Two inner bounds are obtained, one using a separation-based scheme that involves type-based compression and unequal error-protection channel coding, and the other using a joint scheme that incorporates type-based hybrid coding. The separation-based scheme is shown to recover the inner bound obtained by Han and Kobayashi for the special case of a rate-limited noiseless channel, and also the one obtained by the authors previously for a corner point of the trade-off. Finally, we show via an example that the joint scheme achieves a strictly tighter bound than the separation-based scheme for some points of the error-exponents trade-off. Full article
(This article belongs to the Special Issue Information Theory for Distributed Systems)
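For orientation (this is not the paper's distributed scheme), the classical centralized benchmark for the type II error exponent is given by Stein's lemma: with full observations and a fixed type I constraint, the best achievable exponent is the Kullback–Leibler divergence D(P0‖P1). A minimal sketch with illustrative distributions:

```python
import math

# Stein's lemma benchmark: the centralized type-II error exponent is
# D(P0 || P1); the distributed, noisy-channel exponents studied in the
# paper are bounded by quantities of this divergence form.
def kl_divergence(p, q):
    """D(p || q) in nats for finite distributions with q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p0 = [0.5, 0.5]  # hypothetical null distribution
p1 = [0.9, 0.1]  # hypothetical alternative
print(kl_divergence(p0, p1))
```

The divergence is asymmetric, which matches the asymmetry between the two error types in the testing problem.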
19 pages, 1155 KB  
Article
Random Walks on Networks with Centrality-Based Stochastic Resetting
by Kiril Zelenkovski, Trifce Sandev, Ralf Metzler, Ljupco Kocarev and Lasko Basnarkov
Entropy 2023, 25(2), 293; https://doi.org/10.3390/e25020293 - 4 Feb 2023
Cited by 17 | Viewed by 3460
Abstract
We introduce a refined way to diffusely explore complex networks with stochastic resetting, where the resetting site is derived from node centrality measures. This approach differs from previous ones: rather than letting the random walker jump with a certain probability from the current node to an arbitrarily chosen resetting node, it sends the walker to the node that can reach all other nodes fastest. Following this strategy, we take the resetting site to be the geometric center, the node that minimizes the average travel time to all other nodes. Using established Markov chain theory, we calculate the Global Mean First Passage Time (GMFPT) to determine the search performance of the random walk with resetting for each candidate resetting node individually, and we compare the GMFPT across nodes to identify the better resetting sites. We study this approach for different topologies of generic and real-life networks. We show that, for directed networks extracted from real-life relationships, this centrality-focused resetting can improve the search to a greater extent than for the generated undirected networks. Resetting to the center, as advocated here, can also minimize the average travel time to all other nodes in real networks. We further present a relationship between the longest shortest path (the diameter), the average node degree, and the GMFPT when the starting node is the center. We show that, for undirected scale-free networks, stochastic resetting is effective only for networks that are extremely sparse, with tree-like structures, larger diameters, and smaller average node degrees; for directed networks, resetting is beneficial even for networks that have loops. The numerical results are confirmed by analytic solutions. Our study demonstrates that the proposed random walk with resetting based on centrality measures reduces the memoryless search time for targets in the examined network topologies. Full article
(This article belongs to the Topic Complex Systems and Network Science)
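A hedged sketch of the underlying computation (not the paper's exact estimator): on a small path graph, a walker resets with probability r to a fixed "center" node and otherwise steps to a uniform neighbor. Mean first-passage times to a target follow from the linear system (I − Q)m = 1, where Q is the transition matrix with the target's row and column removed, and the GMFPT averages m over start nodes:

```python
import numpy as np

# Random walk with stochastic resetting to a chosen node; GMFPT to a target
# via the fundamental-matrix linear system (I - Q) m = 1.
def gmfpt(adj: np.ndarray, reset_node: int, r: float, target: int) -> float:
    n = adj.shape[0]
    walk = adj / adj.sum(axis=1, keepdims=True)   # simple random walk step
    P = (1 - r) * walk
    P[:, reset_node] += r                         # resetting jump
    keep = [i for i in range(n) if i != target]   # make the target absorbing
    Q = P[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return float(m.mean())                        # average over start nodes

# 5-node path graph, resetting to its geometric center (node 2):
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(gmfpt(A, reset_node=2, r=0.1, target=0))
```

Sweeping `reset_node` over all nodes and comparing the resulting GMFPTs reproduces, in miniature, the candidate comparison described in the abstract.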
17 pages, 3668 KB  
Article
Sample, Fuzzy and Distribution Entropies of Heart Rate Variability: What Do They Tell Us on Cardiovascular Complexity?
by Paolo Castiglioni, Giampiero Merati, Gianfranco Parati and Andrea Faini
Entropy 2023, 25(2), 281; https://doi.org/10.3390/e25020281 - 2 Feb 2023
Cited by 16 | Viewed by 5298
Abstract
Distribution Entropy (DistEn) has been introduced as an alternative to Sample Entropy (SampEn) to assess the heart rate variability (HRV) on much shorter series without the arbitrary definition of distance thresholds. However, DistEn, considered a measure of cardiovascular complexity, differs substantially from SampEn or Fuzzy Entropy (FuzzyEn), both measures of HRV randomness. This work aims to compare DistEn, SampEn, and FuzzyEn analyzing postural changes (expected to modify the HRV randomness through a sympatho/vagal shift without affecting the cardiovascular complexity) and low-level spinal cord injuries (SCI, whose impaired integrative regulation may alter the system complexity without affecting the HRV spectrum). We recorded RR intervals in able-bodied (AB) and SCI participants in supine and sitting postures, evaluating DistEn, SampEn, and FuzzyEn over 512 beats. The significance of “case” (AB vs. SCI) and “posture” (supine vs. sitting) was assessed by longitudinal analysis. Multiscale DistEn (mDE), SampEn (mSE), and FuzzyEn (mFE) compared postures and cases at each scale between 2 and 20 beats. Unlike SampEn and FuzzyEn, DistEn is affected by the spinal lesion but not by the postural sympatho/vagal shift. The multiscale approach shows differences between AB and SCI sitting participants at the largest mFE scales and between postures in AB participants at the shortest mSE scales. Thus, our results support the hypothesis that DistEn measures cardiovascular complexity while SampEn/FuzzyEn measure HRV randomness, highlighting that together these methods integrate the information each of them provides. Full article
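For reference, a simplified Python sketch of standard Sample Entropy (SampEn), one of the three HRV measures compared above (this is a common textbook variant, not the authors' implementation; `m` is the embedding dimension and `tol` the distance threshold that DistEn was designed to avoid):

```python
import math

# Simplified SampEn: count template pairs within Chebyshev distance tol at
# lengths m and m+1, then take the negative log of their ratio.
def sample_entropy(x, m=2, tol=0.2):
    def count_matches(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A perfectly regular series is highly predictable, so SampEn is near zero:
print(sample_entropy([1.0, 2.0] * 20, m=2, tol=0.5))
```

The sensitivity of the result to `tol` on short series is exactly the arbitrariness that motivates threshold-free alternatives such as DistEn.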
9 pages, 1205 KB  
Article
Narrow Pore Crossing of Active Particles under Stochastic Resetting
by Weitao Zhang, Yunyun Li, Fabio Marchesoni, Vyacheslav R. Misko and Pulak K. Ghosh
Entropy 2023, 25(2), 271; https://doi.org/10.3390/e25020271 - 1 Feb 2023
Cited by 10 | Viewed by 3463
Abstract
We propose a two-dimensional model of a biochemical activation process, whereby self-propelling particles with finite correlation times are injected at the center of a circular cavity at a constant rate equal to the inverse of their lifetime; activation is triggered when one such particle hits a receptor on the cavity boundary, modeled as a narrow pore. We numerically investigate this process by computing the particles' mean first-exit times through the cavity pore as a function of the correlation and injection time constants. Because positioning the receptor breaks the circular symmetry, the exit times may depend on the orientation of the self-propulsion velocity at injection. Stochastic resetting appears to favor activation for large particle correlation times, where most of the underlying diffusion process occurs at the cavity boundary. Full article
(This article belongs to the Section Statistical Physics)
19 pages, 764 KB  
Article
Temporal, Structural, and Functional Heterogeneities Extend Criticality and Antifragility in Random Boolean Networks
by Amahury Jafet López-Díaz, Fernanda Sánchez-Puig and Carlos Gershenson
Entropy 2023, 25(2), 254; https://doi.org/10.3390/e25020254 - 31 Jan 2023
Cited by 14 | Viewed by 4668
Abstract
Most models of complex systems have been homogeneous, i.e., all elements have the same properties (spatial, temporal, structural, functional). However, most natural systems are heterogeneous: a few elements are more relevant, larger, stronger, or faster than others. In homogeneous systems, criticality—a balance between change and stability, order and chaos—is usually found only in a very narrow region of the parameter space, close to a phase transition. Using random Boolean networks—a general model of discrete dynamical systems—we show that heterogeneity—in time, structure, and function—can additively broaden the parameter region where criticality is found. Parameter regions where antifragility is found also grow with heterogeneity; however, maximum antifragility occurs for particular parameters in homogeneous networks. Our work suggests that the “optimal” balance between homogeneity and heterogeneity is non-trivial, context-dependent, and in some cases dynamic. Full article
(This article belongs to the Special Issue The Principle of Dynamical Criticality)
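A hedged sketch of the homogeneous baseline the paper makes heterogeneous (textbook classical RBNs, not the authors' model): each node reads K random inputs through a random truth table, and one-step damage spreading estimates the sensitivity, whose annealed approximation 2p(1−p)K equals 1 at criticality (so K = 2 for p = 0.5):

```python
import random

def make_rbn(n, k, rng):
    """Classical RBN: each node reads k distinct random inputs via a random truth table."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    def table_index(node):
        bits = 0
        for src in inputs[node]:
            bits = (bits << 1) | state[src]
        return bits
    return [tables[node][table_index(node)] for node in range(len(state))]

def one_step_damage(n=200, k=2, trials=200, seed=1):
    """Average Hamming distance after one update of two states differing in one bit."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        inputs, tables = make_rbn(n, k, rng)
        s = [rng.randint(0, 1) for _ in range(n)]
        t = s.copy()
        t[0] ^= 1                                 # flip a single node
        total += sum(a != b for a, b in
                     zip(step(s, inputs, tables), step(t, inputs, tables)))
    return total / trials

print(one_step_damage(k=2))  # annealed approximation predicts ~1 at criticality
```

In this homogeneous setting only K = 2 is critical; the paper's point is that temporal, structural, and functional heterogeneity widen that critical region.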
25 pages, 21310 KB  
Article
Turn-Taking Mechanisms in Imitative Interaction: Robotic Social Interaction Based on the Free Energy Principle
by Nadine Wirkuttis, Wataru Ohata and Jun Tani
Entropy 2023, 25(2), 263; https://doi.org/10.3390/e25020263 - 31 Jan 2023
Cited by 8 | Viewed by 4534
Abstract
This study explains how a leader–follower relationship and turn-taking can develop in dyadic imitative interaction, using robotic simulation experiments based on the free energy principle. Our prior study showed that introducing a parameter during the model training phase can determine leader and follower roles in subsequent imitative interactions. The parameter, w, the so-called meta-prior, is a weighting factor that regulates the complexity term versus the accuracy term when minimizing free energy. It can be read as sensory attenuation, in which the robot's prior beliefs about action become less sensitive to sensory evidence. The current extended study examines the possibility that the leader–follower relationship shifts depending on changes in w during the interaction phase. Through comprehensive simulation experiments sweeping the w of both robots during the interaction, we identified a phase-space structure with three distinct types of behavioral coordination. When both ws were set to large values, the robots ignored each other and followed their own intentions. When one w was set larger and the other smaller, one robot led and the other followed. Spontaneous, random turn-taking between leader and follower was observed when both ws were set to smaller or intermediate values. Finally, we examined a case in which w slowly oscillated in anti-phase between the two agents during the interaction. This resulted in turn-taking in which the leader–follower relationship switched during determined sequences, accompanied by periodic shifts of the ws. An analysis using transfer entropy found that the direction of information flow between the two agents also shifted along with turn-taking. Herein, we discuss qualitative differences between random/spontaneous turn-taking and agreed-upon sequential turn-taking by reviewing both synthetic and empirical studies. Full article
(This article belongs to the Special Issue Brain Theory from Artificial Life)
13 pages, 1425 KB  
Article
Quantum Bounds on the Generalized Lyapunov Exponents
by Silvia Pappalardi and Jorge Kurchan
Entropy 2023, 25(2), 246; https://doi.org/10.3390/e25020246 - 30 Jan 2023
Cited by 14 | Viewed by 3915
Abstract
We discuss the generalized quantum Lyapunov exponents Lq, defined from the growth rate of the powers of the square commutator. They may be related to an appropriately defined thermodynamic limit of the spectrum of the commutator, which plays the role of a large deviation function, obtained from the exponents Lq via a Legendre transform. We show that such exponents obey a generalized bound to chaos due to the fluctuation–dissipation theorem, as already discussed in the literature. The bounds for larger q are actually stronger, placing a limit on the large deviations of chaotic properties. Our findings at infinite temperature are exemplified by a numerical study of the kicked top, a paradigmatic model of quantum chaos. Full article
25 pages, 434 KB  
Article
A Dynamic Programming Algorithm for Finding an Optimal Sequence of Informative Measurements
by Peter N. Loxley and Ka-Wai Cheung
Entropy 2023, 25(2), 251; https://doi.org/10.3390/e25020251 - 30 Jan 2023
Cited by 3 | Viewed by 3378
Abstract
An informative measurement is the most efficient way to gain information about an unknown state. We present a first-principles derivation of a general-purpose dynamic programming algorithm that returns an optimal sequence of informative measurements by sequentially maximizing the entropy of possible measurement outcomes. This algorithm can be used by an autonomous agent or robot to decide where best to measure next, planning a path corresponding to an optimal sequence of informative measurements. The algorithm is applicable to states and controls that are either continuous or discrete, and to agent dynamics that are either stochastic or deterministic, including Markov decision processes and Gaussian processes. Recent results from the fields of approximate dynamic programming and reinforcement learning, including on-line approximations such as rollout and Monte Carlo tree search, allow the measurement task to be solved in real time. The resulting solutions include non-myopic paths and measurement sequences that can generally outperform, sometimes substantially, commonly used greedy approaches. This is demonstrated for a global search task, where on-line planning for a sequence of local searches is found to reduce the number of measurements in the search by approximately half. A variant of the algorithm is derived for Gaussian processes for active sensing. Full article
(This article belongs to the Section Multidisciplinary Applications)
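To make the entropy-maximization idea concrete, here is a hedged sketch of the greedy (myopic) baseline in a simplified noiseless setting of our own construction, not the paper's algorithm: a target hides in one of n cells, each yes/no measurement asks whether it lies in a prefix of the current belief set, and the greedy rule picks the split whose outcome entropy is largest:

```python
import math

def outcome_entropy(p_yes):
    """Shannon entropy (bits) of a binary measurement outcome."""
    if p_yes in (0.0, 1.0):
        return 0.0
    return -(p_yes * math.log2(p_yes) + (1 - p_yes) * math.log2(1 - p_yes))

def best_split(cells):
    # greedy rule: the prefix split with maximal outcome entropy (the median)
    return max(range(1, len(cells)), key=lambda i: outcome_entropy(i / len(cells)))

def greedy_search(n_cells, target):
    """Count yes/no measurements a greedy entropy-maximizing searcher needs."""
    belief = list(range(n_cells))
    queries = 0
    while len(belief) > 1:
        cut = best_split(belief)
        queries += 1
        low = belief[:cut]
        belief = low if target in low else belief[cut:]
    return queries

print(greedy_search(64, target=37))  # 6 measurements, matching log2(64)
```

In this noiseless toy case greedy entropy maximization reduces to binary search and is already optimal; the paper's dynamic programming treatment matters precisely when myopic choices are suboptimal.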
24 pages, 5443 KB  
Article
Dissipation + Utilization = Self-Organization
by Harrison Crecraft
Entropy 2023, 25(2), 229; https://doi.org/10.3390/e25020229 - 26 Jan 2023
Cited by 2 | Viewed by 4420
Abstract
This article applies the thermocontextual interpretation (TCI) to open dissipative systems. TCI is a generalization of the conceptual frameworks underlying mechanics and thermodynamics. It defines exergy with respect to the positive-temperature surroundings as a property of state, and it defines the dissipation and utilization of exergy as functional properties of process. The Second Law of thermodynamics states that an isolated system maximizes its entropy (by dissipating and minimizing its exergy). TCI’s Postulate Four generalizes the Second Law for non-isolated systems. A non-isolated system minimizes its exergy, but it can do so either by dissipating exergy or utilizing it. A non-isolated dissipator can utilize exergy either by performing external work on the surroundings or by carrying out the internal work of sustaining other dissipators within a dissipative network. TCI defines a dissipative system’s efficiency by the ratio of exergy utilization to exergy input. TCI’s Postulate Five (MaxEff), introduced here, states that a system maximizes its efficiency to the extent allowed by the system’s kinetics and thermocontextual boundary constraints. Two paths of increasing efficiency lead to higher rates of growth and to higher functional complexity for dissipative networks. These are key features for the origin and evolution of life. Full article
(This article belongs to the Special Issue Dissipative Structuring in Life)
16 pages, 1935 KB  
Article
Measurement-Based Quantum Thermal Machines with Feedback Control
by Bibek Bhandari, Robert Czupryniak, Paolo Andrea Erdman and Andrew N. Jordan
Entropy 2023, 25(2), 204; https://doi.org/10.3390/e25020204 - 20 Jan 2023
Cited by 12 | Viewed by 5190
Abstract
We investigated coupled-qubit-based thermal machines powered by quantum measurements and feedback. We considered two different versions of the machine: (1) a quantum Maxwell’s demon, where the coupled-qubit system is connected to a detachable single shared bath, and (2) a measurement-assisted refrigerator, where the coupled-qubit system is in contact with hot and cold baths. In the quantum Maxwell’s demon case, we discuss both discrete and continuous measurements. We found that the power output from a single-qubit-based device can be improved by coupling it to a second qubit. We further found that the simultaneous measurement of both qubits can produce higher net heat extraction than two setups operated in parallel where only single-qubit measurements are performed. In the refrigerator case, we used continuous measurement and unitary operations to power the coupled-qubit-based refrigerator. We found that the cooling power of a refrigerator operated with swap operations can be enhanced by performing suitable measurements. Full article
(This article belongs to the Special Issue Thermodynamics in Quantum and Mesoscopic Systems)
12 pages, 784 KB  
Article
Generalized Survival Probability
by David A. Zarate-Herrada, Lea F. Santos and E. Jonathan Torres-Herrera
Entropy 2023, 25(2), 205; https://doi.org/10.3390/e25020205 - 20 Jan 2023
Cited by 3 | Viewed by 2486
Abstract
Survival probability measures the probability that a system taken out of equilibrium has not yet transitioned from its initial state. Inspired by the generalized entropies used to analyze nonergodic states, we introduce a generalized version of the survival probability and discuss how it can assist in studies of the structure of eigenstates and ergodicity. Full article
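As background, a hedged sketch of the ordinary survival probability SP(t) = |⟨ψ0|ψ(t)⟩|² that the paper generalizes (the generalized definition itself is not reproduced here; the 6×6 random symmetric Hamiltonian below is purely illustrative):

```python
import numpy as np

# Ordinary survival probability for an initial state evolved under a
# Hamiltonian H, computed in the eigenbasis of H.
def survival_probability(H, psi0, t):
    evals, evecs = np.linalg.eigh(H)
    c = evecs.conj().T @ psi0                       # overlaps with eigenstates
    amp = np.sum(np.abs(c) ** 2 * np.exp(-1j * evals * t))
    return float(np.abs(amp) ** 2)

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
H = (M + M.T) / 2                                   # random symmetric Hamiltonian
psi0 = np.zeros(6)
psi0[0] = 1.0                                       # start in a basis state
print(survival_probability(H, psi0, 0.0))           # ≈ 1.0 at t = 0
```

Generalized entropies deform the weights attached to the eigenstate overlaps |c|²; the paper's generalized survival probability applies a deformation in that same spirit.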
22 pages, 2693 KB  
Article
Scarring in Rough Rectangular Billiards
by Felix M. Izrailev, German A. Luna-Acosta and J. A. Mendez-Bermudez
Entropy 2023, 25(2), 189; https://doi.org/10.3390/e25020189 - 18 Jan 2023
Viewed by 2091
Abstract
We study the mechanism of scarring of eigenstates in rectangular billiards with slightly corrugated surfaces and show that it is very different from that known in Sinai and Bunimovich billiards. We demonstrate that there are two sets of scar states. One set is related to the bouncing-ball trajectories in the configuration space of the corresponding classical billiard. A second set of scar-like states emerges in momentum space and originates from the plane-wave states of the unperturbed flat billiard. In the case of billiards with one rough surface, the numerical data demonstrate the repulsion of eigenstates from this surface. When two horizontal rough surfaces are considered, the repulsion effect is either enhanced or canceled depending on whether the rough profiles are symmetric or antisymmetric. The effect of repulsion is quite strong and influences the structure of all eigenstates, indicating that the symmetry properties of the rough profiles are important for the problem of scattering of electromagnetic (or electron) waves through quasi-one-dimensional waveguides. Our approach is based on the reduction of the model of one particle in a billiard with corrugated surfaces to a model of two artificial particles in a billiard with flat surfaces, but with an effective interaction between these particles. As a result, the analysis is conducted in terms of a two-particle basis, and the roughness of the billiard boundaries is absorbed by a quite complicated potential. Full article
12 pages, 329 KB  
Article
Pauli’s Electron in Ehrenfest and Bohm Theories, a Comparative Study
by Asher Yahalom
Entropy 2023, 25(2), 190; https://doi.org/10.3390/e25020190 - 18 Jan 2023
Cited by 2 | Viewed by 2168
Abstract
Electrons moving at speeds much lower than the speed of light are described by a wave function that solves Pauli's equation, the low-velocity limit of the relativistic Dirac equation. Here we compare two approaches. The more conservative Copenhagen interpretation denies the electron a trajectory but, through the Ehrenfest theorem, allows a trajectory for the electron's expectation value; this expectation value is of course calculated using a solution of Pauli's equation. A less orthodox approach, championed by Bohm, attributes to the electron a velocity field also derived from the Pauli wave function. It is thus interesting to compare the trajectory followed by the electron according to Bohm with its expectation value according to Ehrenfest. Both similarities and differences will be considered. Full article
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics II)
32 pages, 2388 KB  
Article
Quantum Annealing in the NISQ Era: Railway Conflict Management
by Krzysztof Domino, Mátyás Koniorczyk, Krzysztof Krawiec, Konrad Jałowiecki, Sebastian Deffner and Bartłomiej Gardas
Entropy 2023, 25(2), 191; https://doi.org/10.3390/e25020191 - 18 Jan 2023
Cited by 16 | Viewed by 7958
Abstract
We are in the era of noisy intermediate-scale quantum (NISQ) devices, in which quantum hardware has become available for application to real-world problems. However, demonstrations of the usefulness of such NISQ devices are still rare. In this work, we consider a practical railway dispatching problem: delay and conflict management on single-track railway lines. We examine the train dispatching consequences of the arrival of an already delayed train to a given network segment. This problem is computationally hard and needs to be solved almost in real time. We introduce a quadratic unconstrained binary optimization (QUBO) model of this problem, which is compatible with the emerging quantum annealing technology, and whose instances can be executed on present-day quantum annealers. As a proof of concept, we solve selected real-life problems from the Polish railway network using D-Wave quantum annealers. As a reference, we also provide solutions calculated with classical methods, including the conventional solution of a linear integer version of the model as well as the solution of the QUBO model using a tensor-network-based algorithm. Our preliminary results illustrate the degree of difficulty of real-life railway instances for the current quantum annealing technology. Moreover, our analysis shows that the new generation of quantum annealers (the Advantage system) does not perform well on those instances, either. Full article
(This article belongs to the Special Issue Advances in Quantum Computing)
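To illustrate the QUBO formalism on a toy scale (this is not the railway model): a QUBO instance asks for the binary vector x minimizing xᵀQx. Annealers sample low-energy states, but tiny instances can be solved exactly by enumeration, which is also how reference solutions are checked. The one-hot penalty below, a standard building block of scheduling QUBOs, encodes the constraint x0 + x1 + x2 = 1:

```python
from itertools import product

# Exhaustive QUBO solver: minimize x^T Q x over binary vectors x.
def solve_qubo(Q):
    n = len(Q)
    def energy(x):
        return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return min(product((0, 1), repeat=n), key=energy)

# One-hot penalty (x0 + x1 + x2 - 1)^2, dropping the constant term, expands
# to a Q with -1 on the diagonal and +2 on the upper off-diagonal:
Q = [[-1, 2, 2],
     [0, -1, 2],
     [0, 0, -1]]
print(solve_qubo(Q))
```

Any minimizer sets exactly one variable to 1, i.e., it satisfies the encoded constraint; real dispatching instances combine many such penalties with cost terms, far beyond brute force.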
18 pages, 2322 KB  
Article
Reconstructing Sparse Multiplex Networks with Application to Covert Networks
by Jin-Zhu Yu, Mincheng Wu, Gisela Bichler, Felipe Aros-Vera and Jianxi Gao
Entropy 2023, 25(1), 142; https://doi.org/10.3390/e25010142 - 10 Jan 2023
Cited by 2 | Viewed by 2821
Abstract
Network structure provides critical information for understanding the dynamic behavior of complex systems. However, the complete structure of real-world networks is often unavailable, so it is crucial to develop approaches that infer a more complete structure of networks. In this paper, we integrate the configuration model for generating random networks into an Expectation–Maximization–Aggregation (EMA) framework to reconstruct the complete structure of multiplex networks. We validate the proposed EMA framework against the Expectation–Maximization (EM) framework and a random model on several real-world multiplex networks, including both covert and overt ones. It is found that the EMA framework generally achieves the best predictive accuracy compared to the EM framework and the random model. As the number of layers increases, the performance improvement of EMA over EM decreases. The inferred multiplex networks can be leveraged to inform decision-making on monitoring covert networks as well as on allocating limited resources for collecting additional information to improve reconstruction accuracy. For law enforcement agencies, the inferred complete network structure can be used to develop more effective strategies for covert network interdiction. Full article
(This article belongs to the Special Issue Dynamics of Complex Networks)