Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

8 pages, 1014 KiB  
Article
Quantum Speed Limit and Divisibility of the Dynamical Map
by Jose Teittinen and Sabrina Maniscalco
Entropy 2021, 23(3), 331; https://doi.org/10.3390/e23030331 - 11 Mar 2021
Cited by 12 | Viewed by 3223
Abstract
The quantum speed limit (QSL) is the theoretical lower limit of the time for a quantum system to evolve from a given state to another one. Interestingly, it has been shown that non-Markovianity can be used to speed up the dynamics and to lower the QSL time, although this behaviour is not universal. In this paper, we carry the investigation of the connection between the QSL and non-Markovianity further by examining the effects of P- and CP-divisibility of the dynamical map on the quantum speed limit. We show that the speed-up can also be observed under P- and CP-divisible dynamics, and that the speed-up is not necessarily tied to the transition from P-divisible to non-P-divisible dynamics.
(This article belongs to the Special Issue Quantum Information Concepts in Open Quantum Systems)
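
As a concrete illustration of the quantity under discussion (a generic textbook example, not taken from the paper), the sketch below evaluates the Mandelstam–Tamm quantum speed limit for a single qubit precessing under a σ_x Hamiltonian; all parameter values are arbitrary choices for the demo.

```python
import numpy as np

hbar, omega = 1.0, 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * hbar * omega * sx                      # toy qubit Hamiltonian
psi0 = np.array([1, 0], dtype=complex)           # initial state |0>

# Energy spread Delta E in the initial state
E1 = np.real(psi0.conj() @ H @ psi0)
E2 = np.real(psi0.conj() @ (H @ H) @ psi0)
dE = np.sqrt(E2 - E1**2)

# Exact unitary evolution for a time t
t = 2.0
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T
psit = U @ psi0

# Bures angle between initial and evolved states; the Mandelstam-Tamm
# bound says sweeping this angle needs at least hbar * L / dE time.
L = np.arccos(np.clip(np.abs(psi0.conj() @ psit), 0.0, 1.0))
tau_qsl = hbar * L / dE
print(f"elapsed time t = {t:.3f}, Mandelstam-Tamm QSL time = {tau_qsl:.3f}")
```

For this free precession the bound is saturated; the paper's question is how (non-)divisibility of the dynamical map changes such bounds in open systems.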

19 pages, 362 KiB  
Review
Information Theory for Agents in Artificial Intelligence, Psychology, and Economics
by Michael S. Harré
Entropy 2021, 23(3), 310; https://doi.org/10.3390/e23030310 - 6 Mar 2021
Cited by 17 | Viewed by 7909
Abstract
This review looks at some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, focusing specifically on formal models of decision theory. In doing so, we look at a particular approach that each field has adopted and how information theory has informed the development of each field's ideas. A key theme is expected utility theory, its connection to information theory, the Bayesian approach to decision-making, and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called ‘small worlds’ but cannot work in ‘large worlds’. This point, in various guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way to bridging this gap, but significant questions remain to be answered in all three fields in order to make progress in producing realistic models of human decision-making in the real world in which we live.

32 pages, 4858 KiB  
Article
Why Do Big Data and Machine Learning Entail the Fractional Dynamics?
by Haoyu Niu, YangQuan Chen and Bruce J. West
Entropy 2021, 23(3), 297; https://doi.org/10.3390/e23030297 - 28 Feb 2021
Cited by 23 | Viewed by 6803
Abstract
Fractional-order calculus concerns the differentiation and integration of non-integer orders. Fractional calculus (FC) is based on fractional-order thinking (FOT) and has been shown to help us understand complex systems better, improve the processing of complex signals, enhance the control of complex systems, increase the performance of optimization, and even extend the potential for creativity. In this article, the authors discuss fractional dynamics, FOT, and rich fractional stochastic models. First, the use of fractional dynamics in big data analytics for quantifying big data variability stemming from the generation of complex systems is justified. Second, we show why fractional dynamics is needed in machine learning and optimal randomness when asking: “is there a more optimal way to optimize?”. Third, an optimal randomness case study for a stochastic configuration network (SCN) machine-learning method with heavy-tailed distributions is discussed. Finally, views on big data and (physics-informed) machine learning with fractional dynamics for future research are presented with concluding remarks.
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)
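
For readers unfamiliar with non-integer-order derivatives, here is a minimal numerical sketch (not from the article) of the Grünwald–Letnikov construction, checked against the known half-derivative of f(t) = t.

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f_vals, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha on a uniform
    grid with step h, via the standard truncated-sum approximation."""
    n = len(f_vals)
    # Weights w_k = (-1)^k * binomial(alpha, k), built recursively
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.zeros(n)
    for i in range(n):
        # Sum over the history f(t_i), f(t_i - h), ..., scaled by h^alpha
        out[i] = np.dot(w[: i + 1], f_vals[i::-1]) / h**alpha
    return out

# Check against the exact result D^alpha t = t^(1-alpha) / Gamma(2-alpha)
alpha, h = 0.5, 1e-3
t = np.arange(1, 2001) * h
approx = gl_fractional_derivative(t, alpha, h)
exact = t ** (1 - alpha) / gamma(2 - alpha)
print("max abs error:", np.max(np.abs(approx - exact)))
```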

19 pages, 5707 KiB  
Article
Non-Extensive Statistical Analysis of Acoustic Emissions: The Variability of Entropic Index q during Loading of Brittle Materials Until Fracture
by Andronikos Loukidis, Dimos Triantis and Ilias Stavrakas
Entropy 2021, 23(3), 276; https://doi.org/10.3390/e23030276 - 25 Feb 2021
Cited by 6 | Viewed by 2247
Abstract
Non-extensive statistical mechanics (NESM), introduced by Tsallis based on the principle of non-additive entropy, is a generalisation of the Boltzmann–Gibbs statistics. NESM has been shown to provide the necessary theoretical and analytical implementation for studying complex systems such as the fracture mechanisms and crack evolution processes that occur in mechanically loaded specimens of brittle materials. In the current work, acoustic emission (AE) data, recorded while marble and cement mortar specimens were subjected to three distinct loading protocols until fracture, are discussed in the context of NESM. The NESM analysis showed that the cumulative distribution functions of the AE interevent times (i.e., the time intervals between successive AE hits) follow a q-exponential function. For each examined specimen, the corresponding Tsallis entropic q-indices and the parameters β_q and τ_q were calculated. The entropic index q shows a systematic behaviour strongly related to the various stages of the implemented loading protocols for all the examined specimens. The results seem to support the idea of using the entropic index q as a potential pre-failure indicator for the impending catastrophic fracture of mechanically loaded specimens.
(This article belongs to the Special Issue Complex Systems Time Series Analysis and Modeling for Geoscience)
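
The q-exponential form mentioned here is easy to experiment with. The sketch below (using synthetic heavy-tailed "interevent times", not real AE data) fits q and β_q by a crude grid search on the empirical complementary CDF; the functional form is the standard Tsallis q-exponential, not the authors' exact fitting procedure.

```python
import numpy as np

def q_exponential(x, q, beta):
    """Tsallis q-exponential e_q(-beta*x); reduces to exp(-beta*x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(-beta * x)
    base = 1.0 - (1.0 - q) * beta * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

# Toy "interevent time" sample drawn from a heavy-tailed distribution
rng = np.random.default_rng(0)
dt = rng.pareto(3.0, 5000) + 1e-3

# Empirical complementary CDF P(>dt), as used for AE interevent times
x = np.sort(dt)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)

# Crude grid search for the (q, beta_q) pair minimising squared CCDF error
qs, betas = np.linspace(1.05, 2.0, 40), np.linspace(0.1, 5.0, 50)
best = min(((np.sum((ccdf - q_exponential(x, q, b)) ** 2), q, b)
            for q in qs for b in betas))
print(f"fitted q = {best[1]:.3f}, beta_q = {best[2]:.3f}")
```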

33 pages, 477 KiB  
Article
Trade-offs between Error Exponents and Excess-Rate Exponents of Typical Slepian–Wolf Codes
by Ran Tamir (Averbuch) and Neri Merhav
Entropy 2021, 23(3), 265; https://doi.org/10.3390/e23030265 - 24 Feb 2021
Cited by 6 | Viewed by 2231
Abstract
Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, a certain variant of the ordinary random binning code ensemble, in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and excess-rate probabilities vanish exponentially as the blocklength tends to infinity.
(This article belongs to the Special Issue Multiuser Information Theory III)

30 pages, 628 KiB  
Article
Lapsing Quickly into Fatalism: Bell on Backward Causation
by Travis Norsen and Huw Price
Entropy 2021, 23(2), 251; https://doi.org/10.3390/e23020251 - 22 Feb 2021
Cited by 3 | Viewed by 4747
Abstract
This is a dialogue between Huw Price and Travis Norsen, loosely inspired by a letter that Price received from J. S. Bell in 1988. The main topic of discussion is Bell’s views about retrocausal approaches to quantum theory and their relevance to contemporary issues.
(This article belongs to the Special Issue Quantum Theory and Causation)

14 pages, 1815 KiB  
Article
Signature of Generalized Gibbs Ensemble Deviation from Equilibrium: Negative Absorption Induced by a Local Quench
by Lorenzo Rossi, Fabrizio Dolcini, Fabio Cavaliere, Niccolò Traverso Ziani, Maura Sassetti and Fausto Rossi
Entropy 2021, 23(2), 220; https://doi.org/10.3390/e23020220 - 11 Feb 2021
Cited by 7 | Viewed by 2522
Abstract
When a parameter quench is performed in an isolated quantum system with a complete set of constants of motion, its out-of-equilibrium dynamics is considered to be well captured by the Generalized Gibbs Ensemble (GGE), characterized by a set {λ_α} of coefficients related to the constants of motion. We determine the most elementary GGE deviation from the equilibrium distribution that leads to detectable effects. By quenching a suitable local attractive potential in a one-dimensional electron system, the resulting GGE differs from equilibrium by only a single λ_α, corresponding to the emergence of an only partially occupied bound state lying below a fully occupied continuum of states. The effect is shown to induce optical gain, i.e., a negative peak in the absorption spectrum, indicating the stimulated emission of radiation and enabling one to identify GGE signatures in fermionic systems through optical measurements. We discuss the implementation in realistic setups.
(This article belongs to the Section Non-equilibrium Phenomena)

13 pages, 363 KiB  
Article
Selection of Embedding Dimension and Delay Time in Phase Space Reconstruction via Symbolic Dynamics
by Mariano Matilla-García, Isidro Morales, Jose Miguel Rodríguez and Manuel Ruiz Marín
Entropy 2021, 23(2), 221; https://doi.org/10.3390/e23020221 - 11 Feb 2021
Cited by 41 | Viewed by 5581
Abstract
The modeling and prediction of chaotic time series require proper reconstruction of the state space from the available data in order to successfully estimate invariant properties of the embedded attractor. Thus, one must choose an appropriate time delay τ and embedding dimension p for phase space reconstruction. The value of τ can be estimated from the Mutual Information, but this method is computationally cumbersome. Additionally, some researchers have recommended that τ be chosen in dependence on the embedding dimension p by means of an appropriate value of the delay time window τ_w = (p − 1)τ, which is the optimal time delay for independence of the time series. The C-C method, based on the Correlation Integral, is simpler than Mutual Information and has been proposed to select τ_w and τ optimally. In this paper, we suggest a simple method for estimating τ and τ_w based on symbolic analysis and symbolic entropy. As in the C-C method, τ is estimated as the first local optimal time delay and τ_w as the time delay for independence of the time series. The method is applied to several chaotic time series that serve as the basis of comparison for several techniques. The numerical simulations for these systems verify that the proposed symbolic-based method is useful for practitioners and, according to the studied models, performs better than the C-C method for the choice of the time delay and embedding dimension. In addition, the method is applied to EEG data in order to study and compare some dynamic characteristics of brain activity under epileptic episodes.
(This article belongs to the Special Issue Information theory and Symbolic Analysis: Theory and Applications)
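
The flavour of a symbolic-entropy delay estimate can be sketched as follows, using ordinal (permutation) patterns as the symbols. This is an illustration in the spirit of the method, not the authors' exact estimator, and the test signal is a synthetic noisy oscillation rather than one of the paper's benchmark systems.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def symbolic_entropy(x, m=3, tau=1):
    """Normalised symbolic (ordinal-pattern) entropy of order m at delay tau."""
    n = len(x) - (m - 1) * tau
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(n):
        window = x[i : i + m * tau : tau]
        counts[tuple(np.argsort(window))] += 1
    probs = np.array([c for c in counts.values() if c > 0]) / n
    return float(-np.sum(probs * np.log(probs)) / log(factorial(m)))

# Synthetic stand-in for a flow-sampled series
rng = np.random.default_rng(1)
t = np.arange(4000)
x = np.sin(0.1 * t) + 0.4 * np.sin(0.23 * t) + 0.1 * rng.standard_normal(4000)

H = [symbolic_entropy(x, m=3, tau=tau) for tau in range(1, 41)]
# Candidate delay: first local optimum of the entropy curve (else its maximum)
cand = [i + 1 for i in range(1, 39) if H[i - 1] <= H[i] >= H[i + 1]]
tau_star = cand[0] if cand else int(np.argmax(H)) + 1
print("candidate time delay tau =", tau_star)
```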

35 pages, 1675 KiB  
Review
The Entropy Universe
by Maria Ribeiro, Teresa Henriques, Luísa Castro, André Souto, Luís Antunes, Cristina Costa-Santos and Andreia Teixeira
Entropy 2021, 23(2), 222; https://doi.org/10.3390/e23020222 - 11 Feb 2021
Cited by 71 | Viewed by 14697
Abstract
About 160 years ago, the concept of entropy was introduced in thermodynamics by Rudolf Clausius. Since then, it has been continually extended, interpreted, and applied by researchers in many scientific fields, such as general physics, information theory, chaos theory, data mining, and mathematical linguistics. This paper presents The Entropy Universe, which aims to review the many variants of entropy applied to time-series. The purpose is to answer research questions such as: How did each entropy emerge? What is the mathematical definition of each variant of entropy? How are entropies related to each other? What are the most applied scientific fields for each entropy? We describe in depth the relationship between the most applied entropies in time-series for different scientific fields, establishing bases for researchers to properly choose the variant of entropy most suitable for their data. The number of citations over the past sixteen years of each paper proposing a new entropy was also accessed. The Shannon/differential, the Tsallis, the sample, the permutation, and the approximate entropies were the most cited ones. Based on the ten research areas with the most significant number of records obtained in the Web of Science and Scopus, the areas in which the entropies are most applied are computer science, physics, mathematics, and engineering. The universe of entropies is growing each day, due both to the introduction of new variants and to novel applications. Knowing each entropy’s strengths and limitations is essential to ensure the proper development of this research field.
(This article belongs to the Special Issue Review Papers for Entropy)
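
Of the variants named above, sample entropy is among the most used for time-series. A compact reference implementation (a standard textbook construction, not code from the review) makes the definition concrete:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): -ln(A/B), where B counts template pairs that match for m
    points and A those that still match for m + 1 points (Chebyshev metric)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)          # common tolerance choice
    n_t = len(x) - m                  # same number of templates for both lengths

    def matches(mm):
        t = np.array([x[i : i + mm] for i in range(n_t)])
        count = 0
        for i in range(n_t - 1):
            dist = np.max(np.abs(t[i + 1 :] - t[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
print("white noise:", sample_entropy(rng.standard_normal(1000)))   # high
print("sine wave  :", sample_entropy(np.sin(0.1 * np.arange(1000))))  # low
```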

33 pages, 432 KiB  
Article
The Principle of Covariance and the Hamiltonian Formulation of General Relativity
by Massimo Tessarotto and Claudio Cremaschini
Entropy 2021, 23(2), 215; https://doi.org/10.3390/e23020215 - 10 Feb 2021
Cited by 13 | Viewed by 3636
Abstract
The implications of the general covariance principle for the establishment of a Hamiltonian variational formulation of classical General Relativity are addressed. The analysis is performed in the framework of the Einstein-Hilbert variational theory. Preliminarily, customary Lagrangian variational principles are reviewed, pointing out the existence of a novel variational formulation in which the class of variations remains unconstrained. As a second step, the conditions of validity of the non-manifestly covariant ADM variational theory are questioned. The main result concerns the proof of its intrinsic non-Hamiltonian character and the failure of this approach in providing a symplectic structure of space-time. In contrast, it is demonstrated that a solution reconciling the physical requirements of covariance and manifest covariance of variational theory with the existence of a classical Hamiltonian structure for the gravitational field can be reached in the framework of synchronous variational principles. Both path-integral and volume-integral realizations of the Hamilton variational principle are explicitly determined and the corresponding physical interpretations are pointed out.
(This article belongs to the Special Issue Quantum Regularization of Singular Black Hole Solutions)
17 pages, 1594 KiB  
Article
The Role of Entropy in Construct Specification Equations (CSE) to Improve the Validity of Memory Tests
by Jeanette Melin, Stefan Cano and Leslie Pendrill
Entropy 2021, 23(2), 212; https://doi.org/10.3390/e23020212 - 9 Feb 2021
Cited by 19 | Viewed by 3414
Abstract
Commonly used rating scales and tests have been found to lack reliability and validity, for example in studies of neurodegenerative diseases, owing to their not making recourse to the inherent ordinality of human responses, nor acknowledging the separability of person ability and item difficulty parameters according to the well-known Rasch model. Here, we adopt an information theory approach, particularly extending deployment of the classic Brillouin entropy expression when explaining the difficulty of recalling non-verbal sequences in memory tests (i.e., Corsi Block Test and Digit Span Test): a more ordered task, of less entropy, will generally be easier to perform. Construct specification equations (CSEs), as part of a methodological development in which entropy-based variables dominate, are found experimentally to explain (R² = 0.98) and predict the construct of task difficulty for short-term memory tests using data from the NeuroMET (n = 88) and Gothenburg MCI (n = 257) studies. We propose entropy-based equivalence criteria, whereby different tasks (in the form of items) from different tests can be combined, enabling new memory tests to be formed by choosing a bespoke selection of items, leading to more efficient testing, improved reliability (reduced uncertainties) and validity. This provides opportunities for more practical and accurate measurement in clinical practice, research and trials.
(This article belongs to the Special Issue Entropy in Brain Networks)
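
The classic Brillouin expression referenced here is simple to compute. This toy sketch (illustrative only, not the paper's full CSE machinery) shows how a recall sequence with repeated elements carries less entropy and would hence be predicted to be an easier task:

```python
from math import factorial, log

def brillouin_entropy(sequence):
    """Brillouin entropy H = (1/N) * ln( N! / (n_1! n_2! ... n_k!) ),
    where n_i are the counts of each distinct symbol in the sequence."""
    counts = {}
    for s in sequence:
        counts[s] = counts.get(s, 0) + 1
    N = len(sequence)
    log_multiplicity = log(factorial(N)) - sum(log(factorial(n))
                                               for n in counts.values())
    return log_multiplicity / N

# A more ordered recall sequence (repeats) vs. a fully distinct one:
print(brillouin_entropy([1, 1, 2, 2, 3, 3]))   # lower entropy, easier task
print(brillouin_entropy([1, 2, 3, 4, 5, 6]))   # higher entropy, harder task
```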

16 pages, 6760 KiB  
Article
The Effect of a Hidden Source on the Estimation of Connectivity Networks from Multivariate Time Series
by Christos Koutlis and Dimitris Kugiumtzis
Entropy 2021, 23(2), 208; https://doi.org/10.3390/e23020208 - 8 Feb 2021
Cited by 2 | Viewed by 2238
Abstract
Many methods of Granger causality, or more broadly connectivity, have been developed to assess the causal relationships between system variables based only on the information extracted from the time series. The power of these methods to capture the true underlying connectivity structure has been assessed using simulated dynamical systems where the ground truth is known. Here, we consider the presence of an unobserved variable that acts as a hidden source for the observed high-dimensional dynamical system and study the effect of the hidden source on the estimation of the connectivity structure. In particular, the focus is on estimating the direct causality effects in high-dimensional time series (not including the hidden source) of relatively short length. We examine the performance of a linear and a nonlinear connectivity measure using dimension reduction and compare them to a linear measure designed for latent variables. For the simulations, four systems are considered: the coupled Hénon maps system, the coupled Mackey–Glass system, the neural mass model and the vector autoregressive (VAR) process, each comprising 25 subsystems (variables for VAR) in a close chain coupling structure, with another subsystem (variable for VAR) driving all the others and acting as the hidden source. The results show that the direct causality measures estimate the existing connectivity correctly, in general terms, when the driving from the hidden source is zero or weak, yet fail to detect the actual relationships when the driving is strong, with the nonlinear measure using dimension reduction performing best. An example from finance, including and excluding the USA index in the global market indices, highlights the different performance of the connectivity measures in the presence of a hidden source.
(This article belongs to the Special Issue Information Theory and Economic Network)
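
A minimal simulation conveys the core problem: two observed series that are not directly coupled but share a strong hidden driver can yield spurious Granger-causality verdicts. This sketch uses statsmodels' standard bivariate test, not the dimension-reduction measures evaluated in the paper; all coefficients are arbitrary.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n, c = 2000, 0.8                 # length and hidden-driver strength
z = np.zeros(n)                  # hidden source, never observed
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    z[t] = 0.7 * z[t - 1] + rng.standard_normal()
    x[t] = 0.5 * x[t - 1] + c * z[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + c * z[t - 1] + rng.standard_normal()

# x and y are not directly coupled, yet the common hidden driver z can
# produce spurious "x Granger-causes y" evidence in the observed pair.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
print("p-value (x -> y), lag 2:", res[2][0]["ssr_ftest"][1])
```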

52 pages, 769 KiB  
Article
Error Exponents and α-Mutual Information
by Sergio Verdú
Entropy 2021, 23(2), 199; https://doi.org/10.3390/e23020199 - 5 Feb 2021
Cited by 13 | Viewed by 4836
Abstract
Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager’s E_0 functions (with and without cost constraints); (2) in large deviations form, in terms of conditional relative entropy and mutual information; and (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, gaps have remained in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize the Augustin–Csiszár mutual information of order α under cost constraints by means of the maximization of the α-mutual information subject to an exponential average constraint.
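
To make approach (1) concrete, the sketch below evaluates Gallager's E_0 and the random-coding exponent E_r(R) = max over ρ in [0, 1] of [E_0(ρ) − ρR] for a binary symmetric channel with uniform inputs — a standard special case for illustration, not an example worked in the paper.

```python
import numpy as np

def E0(rho, p):
    """Gallager's E0 for a binary symmetric channel (uniform input), in bits."""
    s = p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho))
    return rho - (1 + rho) * np.log2(s)

def random_coding_exponent(R, p, grid=np.linspace(0.0, 1.0, 1001)):
    """Er(R) = max over 0 <= rho <= 1 of [E0(rho) - rho * R]."""
    return max(E0(rho, p) - rho * R for rho in grid)

p = 0.05
C = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)   # BSC capacity, 1 - h(p)
for R in (0.2, 0.5, 0.7):
    print(f"R = {R:.1f} bits/use: Er(R) = {random_coding_exponent(R, p):.4f}"
          f"  (capacity C = {C:.3f})")
```

As expected, E_r(R) shrinks toward zero as the rate approaches capacity.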

25 pages, 1135 KiB  
Article
Spectral Properties of Effective Dynamics from Conditional Expectations
by Feliks Nüske, Péter Koltai, Lorenzo Boninsegna and Cecilia Clementi
Entropy 2021, 23(2), 134; https://doi.org/10.3390/e23020134 - 21 Jan 2021
Cited by 6 | Viewed by 3628
Abstract
The reduction of high-dimensional systems to effective models on a smaller set of variables is an essential task in many areas of science. For stochastic dynamics governed by diffusion processes, a general procedure to find effective equations is the conditioning approach. In this paper, we are interested in the spectrum of the generator of the resulting effective dynamics, and how it compares to the spectrum of the full generator. We prove a new relative error bound in terms of the eigenfunction approximation error for reversible systems. We also present numerical examples indicating that, if Kramers–Moyal (KM) type approximations are used to compute the spectrum of the reduced generator, it seems largely insensitive to the time window used for the KM estimators. We analyze the implications of these observations for systems driven by underdamped Langevin dynamics, and show how meaningful effective dynamics can be defined in this setting.
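
A KM-type estimator of the kind mentioned can be sketched in a few lines: simulate an Ornstein–Uhlenbeck process and recover its drift from binned conditional averages of the increments (illustrative parameters, not the paper's systems).

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 1e-3, 500_000

# Simulate an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
x = np.zeros(n)
noise = rng.standard_normal(n - 1) * np.sqrt(dt)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

# First Kramers-Moyal coefficient: a(x) ~ E[X(t+dt) - X(t) | X(t)=x] / dt
bins = np.linspace(-1, 1, 21)
idx = np.digitize(x[:-1], bins)
dx = np.diff(x)
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 100:
        center = 0.5 * (bins[b - 1] + bins[b])
        drift = dx[mask].mean() / dt
        print(f"x = {center:+.2f}: estimated drift {drift:+.3f},"
              f" true {-theta * center:+.3f}")
```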

12 pages, 4032 KiB  
Article
On-Road Detection of Driver Fatigue and Drowsiness during Medium-Distance Journeys
by Luca Salvati, Matteo d’Amore, Anita Fiorentino, Arcangelo Pellegrino, Pasquale Sena and Francesco Villecco
Entropy 2021, 23(2), 135; https://doi.org/10.3390/e23020135 - 21 Jan 2021
Cited by 39 | Viewed by 4797
Abstract
Background: The detection of driver fatigue as a cause of sleepiness is a key technology capable of preventing fatal accidents. This research uses a fatigue-related sleepiness detection algorithm based on the analysis of the pulse rate variability generated by the heartbeat and validates the proposed method by comparing it with an objective indicator of sleepiness (PERCLOS). Methods: Changes in alert conditions affect the autonomic nervous system (ANS) and therefore heart rate variability (HRV), which is modulated in the form of a wave and monitored to detect long-term changes in the driver’s condition using real-time control. Results: The performance of the algorithm was evaluated through an experiment carried out in a road vehicle. In this experiment, data were recorded from three participants during different driving sessions, and their conditions of fatigue and sleepiness were documented on both a subjective and objective basis. The validation of the results through PERCLOS showed a 63% adherence to the experimental findings. Conclusions: The present study confirms the possibility of continuously monitoring the driver’s status through the detection of the activation/deactivation states of the ANS based on HRV. The proposed method can help prevent accidents caused by drowsiness while driving.
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines II)
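
Pipelines of this kind rest on standard time-domain variability indices computed per time window. The sketch below shows these on synthetic RR intervals; the definitions are generic, not the authors' fatigue algorithm (which works on pulse rate variability rather than ECG-derived RR intervals).

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Basic time-domain HRV features from a series of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_hr_bpm": 60_000.0 / rr.mean(),        # average heart rate
        "sdnn_ms": rr.std(ddof=1),                  # overall variability
        "rmssd_ms": np.sqrt(np.mean(diff ** 2)),    # beat-to-beat variability
        "pnn50": np.mean(np.abs(diff) > 50.0),      # fraction of large changes
    }

# Toy RR series around 800 ms with mild drift and jitter; drowsiness
# studies track how such indices evolve across consecutive windows.
rng = np.random.default_rng(0)
rr = 800 + np.cumsum(rng.standard_normal(300)) + 20 * rng.standard_normal(300)
print(hrv_time_domain(rr))
```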

25 pages, 958 KiB  
Review
Dynamics of Ion Channels via Non-Hermitian Quantum Mechanics
by Tobias Gulden and Alex Kamenev
Entropy 2021, 23(1), 125; https://doi.org/10.3390/e23010125 - 19 Jan 2021
Cited by 2 | Viewed by 3658
Abstract
We study the dynamics and thermodynamics of ion transport in narrow, water-filled channels, considered as effective 1D Coulomb systems. The long-range nature of the inter-ion interactions comes about due to the dielectric-constant mismatch between the water and the surrounding medium, which confines the electric field to stay mostly within the water-filled channel. The statistical mechanics of such Coulomb systems is dominated by entropic effects, which may be accurately accounted for by mapping onto an effective quantum mechanics. In the presence of multivalent ions the corresponding quantum mechanics appears to be non-Hermitian. In this review we discuss a framework for semiclassical calculations for the effective non-Hermitian Hamiltonians. Non-Hermiticity elevates WKB action integrals from the real line to closed cycles on complex Riemann surfaces, where direct calculations are not attainable. We circumvent this issue by applying tools from algebraic topology, such as the Picard-Fuchs equation. We discuss how its solutions relate to the thermodynamics and correlation functions of multivalent solutions within narrow, water-filled channels.

29 pages, 693 KiB  
Article
Beyond Causal Explanation: Einstein’s Principle Not Reichenbach’s
by Michael Silberstein, William Mark Stuckey and Timothy McDevitt
Entropy 2021, 23(1), 114; https://doi.org/10.3390/e23010114 - 16 Jan 2021
Cited by 5 | Viewed by 5830
Abstract
Our account provides a local, realist and fully non-causal principle explanation for EPR correlations, contextuality, no-signalling, and the Tsirelson bound. Indeed, the account herein is fully consistent with the causal structure of Minkowski spacetime. We argue that retrocausal accounts of quantum mechanics are problematic precisely because they do not fully transcend the assumption that causal or constructive explanation must always be fundamental. Unlike retrocausal accounts, our principle explanation is a complete rejection of Reichenbach’s Principle. Furthermore, we will argue that the basis for our principle account of quantum mechanics is the physical principle sought by quantum information theorists for their reconstructions of quantum mechanics. Finally, we explain why our account is both fully realist and psi-epistemic.
(This article belongs to the Special Issue Quantum Theory and Causation)

42 pages, 1487 KiB  
Review
Applications of Distributed-Order Fractional Operators: A Review
by Wei Ding, Sansit Patnaik, Sai Sidhardh and Fabio Semperlotti
Entropy 2021, 23(1), 110; https://doi.org/10.3390/e23010110 - 15 Jan 2021
Cited by 74 | Viewed by 6810
Abstract
Distributed-order fractional calculus (DOFC) is a rapidly emerging branch of the broader area of fractional calculus that has important and far-reaching applications for the modeling of complex systems. DOFC generalizes the intrinsic multiscale nature of constant- and variable-order fractional operators, opening significant opportunities to model systems whose behavior stems from the complex interplay and superposition of nonlocal and memory effects occurring over a multitude of scales. In recent years, a significant number of studies focusing on mathematical aspects and real-world applications of DOFC have been produced. However, a systematic review of the available literature and of the state of the art of DOFC as it pertains, specifically, to real-world applications is still lacking. This review article is intended to provide the reader with a road map to understand the early development of DOFC and its progressive evolution and application to the modeling of complex real-world problems. The review starts by offering a brief introduction to the mathematics of DOFC, including analytical and numerical methods, and continues by providing an extensive overview of the applications of DOFC to fields like viscoelasticity, transport processes, and control theory, which have seen most of the research activity to date.
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)

18 pages, 36615 KiB  
Article
Coupling between Blood Pressure and Subarachnoid Space Width Oscillations during Slow Breathing
by Agnieszka Gruszecka, Magdalena K. Nuckowska, Monika Waskow, Jacek Kot, Pawel J. Winklewski, Wojciech Guminski, Andrzej F. Frydrychowski, Jerzy Wtorek, Adam Bujnowski, Piotr Lass, Tomislav Stankovski and Marcin Gruszecki
Entropy 2021, 23(1), 113; https://doi.org/10.3390/e23010113 - 15 Jan 2021
Cited by 7 | Viewed by 3929
Abstract
The precise mechanisms connecting the cardiovascular system and the cerebrospinal fluid (CSF) are not understood in detail. This paper investigates the couplings between the cardiac and respiratory components, as extracted from blood pressure (BP) signals and oscillations of the subarachnoid space width (SAS), collected during slow ventilation and ventilation against inspiration resistance. The experiment was performed on a group of 20 healthy volunteers (12 females and 8 males; BMI = 22.1 ± 3.2 kg/m²; age 25.3 ± 7.9 years). We analysed the recorded signals with a wavelet transform. For the first time, a method based on dynamical Bayesian inference was used to detect the effective phase connectivity and the underlying coupling functions between the SAS and BP signals. There are several new findings. Slow breathing with or without resistance increases the strength of the coupling between the respiratory and cardiac components of both measured signals. We also observed increases in the strength of the coupling between the respiratory component of the BP and the cardiac component of the SAS, and vice versa. Slow breathing synchronises the SAS oscillations between the brain hemispheres. It also diminishes the similarity of the coupling between all analysed pairs of oscillators, while inspiratory resistance partially reverses this phenomenon. BP–SAS and SAS–BP interactions may reflect changes in the overall biomechanical characteristics of the brain.

26 pages, 430 KiB  
Article
Distance-Based Estimation Methods for Models for Discrete and Mixed-Scale Data
by Elisavet M. Sofikitou, Ray Liu, Huipei Wang and Marianthi Markatou
Entropy 2021, 23(1), 107; https://doi.org/10.3390/e23010107 - 14 Jan 2021
Cited by 1 | Viewed by 3362
Abstract
Pearson residuals aid the task of identifying model misspecification because they compare the model estimated from the data with the model assumed under the null hypothesis. We present different formulations of the Pearson residual system that account for the measurement scale of the data and study their properties. We further concentrate on the case of mixed-scale data, that is, data measured on both categorical and interval scales. We study the asymptotic properties and the robustness of minimum disparity estimators obtained in the case of mixed-scale data and exemplify the performance of the methods via simulation.
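
In the minimum-disparity setting, the Pearson residual compares the empirical mass function d with the hypothesised model m via δ(x) = d(x)/m(x) − 1. A quick sketch under a deliberately misspecified Poisson model (toy data, not the paper's mixed-scale formulations):

```python
import numpy as np
from scipy.stats import poisson

# Pearson residuals in the disparity sense: delta(x) = d(x)/m(x) - 1;
# values far from 0 flag where the assumed model fits the data poorly.
rng = np.random.default_rng(0)
data = rng.poisson(2.0, 500)                      # data actually from Poisson(2)

xs = np.arange(data.max() + 1)
d = np.array([(data == x).mean() for x in xs])    # empirical p.m.f.
m = poisson.pmf(xs, mu=3.0)                       # misspecified null model
delta = d / m - 1.0                               # Pearson residuals
for x, dl in zip(xs, delta):
    print(f"x = {x}: delta = {dl:+.2f}")
```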

18 pages, 2691 KiB  
Article
Deep Task-Based Quantization
by Nir Shlezinger and Yonina C. Eldar
Entropy 2021, 23(1), 104; https://doi.org/10.3390/e23010104 - 13 Jan 2021
Cited by 45 | Viewed by 5922
Abstract
Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such hybrid quantizers is quite complex, and their implementation requires complete knowledge of the statistical model of the analog signal. In this work we design data-driven task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools. These mappings are designed to facilitate the task of recovering underlying information from the quantized signals. By using deep learning, we circumvent the need to explicitly recover the system model and to find the proper quantization rule for it. Our main target application is multiple-input multiple-output (MIMO) communication receivers, which simultaneously acquire a set of analog signals and are commonly subject to constraints on the number of bits. Our results indicate that, in a MIMO channel estimation setup, the proposed deep task-based quantizer is capable of approaching the optimal performance limits dictated by indirect rate-distortion theory, achievable using vector quantizers and requiring complete knowledge of the underlying statistical model. Furthermore, for a symbol detection scenario, it is demonstrated that the proposed approach can realize reliable bit-efficient hybrid MIMO receivers capable of setting their quantization rule in light of the task.
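
A toy numerical experiment illustrates why pre-quantization processing helps when only a low-dimensional task matters: under a fixed total bit budget, quantizing a task-oriented projection beats quantizing the raw observations. This is a hand-rolled sketch with a fixed pseudo-inverse projection; the paper learns the mapping with deep networks instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(v, bits, v_max):
    """Mid-rise uniform scalar quantizer on [-v_max, v_max]."""
    levels = 2 ** bits
    step = 2 * v_max / levels
    idx = np.clip(np.floor(v / step) + 0.5, -levels / 2 + 0.5, levels / 2 - 0.5)
    return idx * step

# Task: estimate s (k = 2) from x = A s + noise (n = 8 measurements)
k, n, trials, bit_budget = 2, 8, 20_000, 16
A = rng.standard_normal((n, k))
pinv = np.linalg.pinv(A)

err_raw, err_task = 0.0, 0.0
for _ in range(trials):
    s = rng.standard_normal(k)
    x = A @ s + 0.1 * rng.standard_normal(n)
    # (a) quantize all n observations at bit_budget/n bits each, then estimate
    xq = uniform_quantize(x, bit_budget // n, 4.0)
    err_raw += np.sum((pinv @ xq - s) ** 2)
    # (b) task-oriented: project to the k task variables first, then spend
    # the same budget on k scalars (bit_budget/k bits each)
    zq = uniform_quantize(pinv @ x, bit_budget // k, 4.0)
    err_task += np.sum((zq - s) ** 2)

print("task MSE, quantize-then-process :", err_raw / trials)
print("task MSE, process-then-quantize :", err_task / trials)
```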

28 pages, 3278 KiB  
Review
High-Entropy Alloys for Advanced Nuclear Applications
by Ed J. Pickering, Alexander W. Carruthers, Paul J. Barron, Simon C. Middleburgh, David E. J. Armstrong and Amy S. Gandy
Entropy 2021, 23(1), 98; https://doi.org/10.3390/e23010098 - 11 Jan 2021
Cited by 212 | Viewed by 16863
Abstract
The expanded compositional freedom afforded by high-entropy alloys (HEAs) represents a unique opportunity for the design of alloys for advanced nuclear applications, in particular for applications where current engineering alloys fall short. This review assesses the work done to date in the field of HEAs for nuclear applications, provides critical insight into the conclusions drawn, and highlights possibilities and challenges for future study. It is found that our understanding of the irradiation responses of HEAs remains in its infancy, and much work is needed in order for our knowledge of any single HEA system to match our understanding of conventional alloys such as austenitic steels. A number of studies have suggested that HEAs possess ‘special’ irradiation damage resistance, although some of the proposed mechanisms, such as those based on sluggish diffusion and lattice distortion, remain somewhat unconvincing (certainly in terms of being universally applicable to all HEAs). Nevertheless, there may be some mechanisms and effects that are uniquely different in HEAs when compared to more conventional alloys, such as the effect that their poor thermal conductivities have on the displacement cascade. Furthermore, the opportunity to tune the compositions of HEAs over a large range to optimise particular irradiation responses could be very powerful, even if the design process remains challenging.
(This article belongs to the Special Issue Future Directions of High Entropy Alloys)

12 pages, 687 KiB  
Article
Signal Fluctuations and the Information Transmission Rates in Binary Communication Channels
by Agnieszka Pregowska
Entropy 2021, 23(1), 92; https://doi.org/10.3390/e23010092 - 10 Jan 2021
Cited by 13 | Viewed by 3165
Abstract
In the nervous system, information is conveyed by sequences of action potentials called spike-trains. As MacKay and McCulloch suggested, spike-trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied relations between the Information Transmission Rates (ITR) of spike-trains and their correlations and frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a spike-train fluctuation measure, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between the ITR and signal fluctuations strongly depends on the parameter s, defined as a sum of transition probabilities from the no-spike state to the spike state. The estimate of the Information Transmission Rate was found by expressions depending on the values of the signal fluctuations and the parameter s. It turned out that for s < 1 the quotient ITR/σ has a maximum and can tend to zero depending on the transition probabilities, while for s > 1 the quotient ITR/σ is bounded away from 0. It was also shown that the ITR divided by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula are replaced by their polynomial approximations. My results suggest that in a noisier environment (s > 1), to achieve appropriate reliability and efficiency of transmission, an IS with a higher tendency of transition from the no-spike to the spike state should be applied. Such selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.
(This article belongs to the Section Information Theory, Probability and Statistics)
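
The two quantities in play — the entropy rate of a two-state Markov spike process (the ITR estimate) and the stationary spike fluctuation σ — can be computed directly. A small sketch with illustrative transition probabilities, not the paper's exact parameterisation:

```python
import numpy as np

def h2(p):
    """Binary Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def markov_itr_and_sigma(p01, p10):
    """Entropy rate (bits/step) and stationary spike fluctuation of a
    two-state (no-spike = 0, spike = 1) Markov chain; p01 is the 0->1
    transition probability and p10 the 1->0 one."""
    pi1 = p01 / (p01 + p10)                 # stationary spike probability
    pi0 = 1.0 - pi1
    itr = pi0 * h2(p01) + pi1 * h2(p10)     # Markov entropy rate
    sigma = np.sqrt(pi1 * (1.0 - pi1))      # std of the binary spike variable
    return itr, sigma

for p01, p10 in [(0.1, 0.8), (0.4, 0.4), (0.8, 0.1)]:
    itr, sigma = markov_itr_and_sigma(p01, p10)
    print(f"p01={p01}, p10={p10}: ITR={itr:.3f} bits/step, "
          f"sigma={sigma:.3f}, ITR/sigma={itr / sigma:.3f}")
```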

13 pages, 3000 KiB  
Article
Information Thermodynamics and Reducibility of Large Gene Networks
by Swarnavo Sarkar, Joseph B. Hubbard, Michael Halter and Anne L. Plant
Entropy 2021, 23(1), 63; https://doi.org/10.3390/e23010063 - 1 Jan 2021
Cited by 2 | Viewed by 3788
Abstract
Gene regulatory networks (GRNs) control biological processes like pluripotency, differentiation, and apoptosis. Omics methods can identify a large number of putative network components (on the order of hundreds or thousands), but it is possible that in many cases a small subset of genes controls the state of GRNs. Here, we explore how the topology of the interactions between network components may indicate whether the effective state of a GRN can be represented by a small subset of genes. We use methods from information theory to model the regulatory interactions in GRNs as cascading and superposing information channels. We propose an information loss function that enables identification of the conditions under which a small set of genes can represent the state of all the other genes in the network. This information-theoretic analysis extends to a measure of free energy change due to communication within the network, which provides a new perspective on the reducibility of GRNs. Both the information loss and the relative free energy depend on the density of interactions and the edge communication error in a network. Therefore, this work indicates that a loss in mutual information between genes in a GRN is directly coupled to a thermodynamic cost, i.e., a reduction of relative free energy, of the system.
(This article belongs to the Special Issue Thermodynamics of Life: Cells, Organisms and Evolution)
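
The data-processing flavour of this cascading-channel picture can be seen in a toy calculation: model each regulatory link as a binary symmetric channel with a fixed communication error and track how mutual information with the source gene decays with each hop. This is a schematic sketch, not the paper's loss function.

```python
import numpy as np

def h2(p):
    """Binary Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# k identical binary symmetric channels (error eps) in cascade act like a
# single BSC with error eps_k = (1 - (1 - 2*eps)**k) / 2.
eps = 0.1          # per-edge communication error
for k in range(1, 6):
    eps_k = 0.5 * (1.0 - (1.0 - 2.0 * eps) ** k)
    # For a uniform (maximum-entropy) source gene state, I = 1 - h2(eps_k)
    print(f"hops = {k}: eps_k = {eps_k:.3f}, "
          f"I(source; gene_k) = {1 - h2(eps_k):.4f} bits")
```

The monotone decay is exactly the information loss that the paper couples to a reduction of relative free energy.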

15 pages, 1188 KiB  
Review
Application of Biological Domain Knowledge Based Feature Selection on Gene Expression Data
by Malik Yousef, Abhishek Kumar and Burcu Bakir-Gungor
Entropy 2021, 23(1), 2; https://doi.org/10.3390/e23010002 - 22 Dec 2020
Cited by 57 | Viewed by 6549
Abstract
In the last two decades, there have been massive advancements in high-throughput technologies, which resulted in the exponential growth of public repositories of gene expression datasets for various phenotypes. It is possible to unravel biomarkers by comparing gene expression levels under different conditions, such as disease vs. control, treated vs. not treated, drug A vs. drug B, etc. This refers to a well-studied problem in the machine learning domain, i.e., the feature selection problem. In biological data analysis, most computational feature selection methodologies were taken from other fields without considering the nature of the biological data. Thus, integrative approaches that utilize biological knowledge while performing feature selection are necessary for this kind of data. The main idea behind the integrative gene selection process is to generate a ranked list of genes considering both the statistical metrics that are applied to the gene expression data and the biological background information provided as external datasets. One of the main goals of this review is to explore the existing methods that integrate different types of information in order to improve the identification of the biomolecular signatures of diseases and the discovery of new potential targets for treatment. These integrative approaches are expected to aid the prediction, diagnosis, and treatment of diseases, as well as to enlighten us on disease state dynamics and the mechanisms of their onset and progression. The integration of various types of biological information will necessitate the development of novel techniques for integration and data analysis. Another aim of this review is to encourage the bioinformatics community to develop new approaches for searching and determining significant groups/clusters of features based on one or more biological grouping functions.
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)

34 pages, 852 KiB  
Article
Generalised Geometric Brownian Motion: Theory and Applications to Option Pricing
by Viktor Stojkoski, Trifce Sandev, Lasko Basnarkov, Ljupco Kocarev and Ralf Metzler
Entropy 2020, 22(12), 1432; https://doi.org/10.3390/e22121432 - 18 Dec 2020
Cited by 50 | Viewed by 11277
Abstract
Classical option pricing schemes assume that the value of a financial asset follows a geometric Brownian motion (GBM). However, a growing body of studies suggests that a simple GBM trajectory is not an adequate representation of asset dynamics, owing to irregularities found when comparing its properties with empirical distributions. As a solution, we investigate a generalisation of GBM in which the introduction of a memory kernel critically determines the behaviour of the stochastic process. We find the general expressions for the moments, log-moments, and the expectation of the periodic log returns, and then obtain the corresponding probability density functions using the subordination approach. In particular, we consider subdiffusive GBM (sGBM), tempered sGBM, a mix of GBM and sGBM, and a mix of sGBMs. We utilise the resulting generalised GBM (gGBM) to examine the empirical performance of a selected group of kernels in the pricing of European call options. Our results indicate that the performance of a kernel ultimately depends on the maturity of the option and its moneyness.
(This article belongs to the Special Issue New Trends in Random Walks)
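
As a baseline for what the generalisation modifies, here is plain-GBM Monte Carlo pricing of a European call, checked against the Black–Scholes formula. Market parameters are arbitrary; the paper's gGBM kernels would enter through a subordinated time variable rather than the plain Gaussian draw used here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
S0, K, r, sigma, T, n_paths = 100.0, 105.0, 0.02, 0.25, 1.0, 500_000

# Terminal prices under plain GBM, then the discounted expected payoff
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
call_mc = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Closed-form Black-Scholes value for comparison
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call_bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
print(f"Monte Carlo: {call_mc:.4f}   Black-Scholes: {call_bs:.4f}")
```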

14 pages, 1523 KiB  
Article
Examining the Causal Structures of Deep Neural Networks Using Information Theory
by Scythia Marrow, Eric J. Michaud and Erik Hoel
Entropy 2020, 22(12), 1429; https://doi.org/10.3390/e22121429 - 18 Dec 2020
Cited by 7 | Viewed by 5986
Abstract
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring “what does what” within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN’s causal structure as it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training. Specifically, we introduce the effective information (EI) of a feedforward DNN, which is the mutual information between layer input and output following a maximum-entropy perturbation. The EI can be used to assess the degree of causal influence nodes and edges have over their downstream targets in each layer. We show that the EI can be further decomposed in order to examine the sensitivity of a layer (measured by how well edges transmit perturbations) and the degeneracy of a layer (measured by how edge overlap interferes with transmission), along with estimates of the amount of integrated information of a layer. Together, these properties define where each layer lies in the “causal plane”, which can be used to visualize how layer connectivity becomes more sensitive or degenerate over time, and how integration changes during training, revealing how the layer-by-layer causal structure differentiates. These results may help in understanding the generalization capabilities of DNNs and provide foundational tools for making DNNs both more generalizable and more explainable.
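
For a deterministic layer, the EI defined here reduces to the output entropy under maximum-entropy (uniform) input injection, since H(output | input) = 0. A toy binarized layer makes that exactly computable; this is a schematic illustration only, as the paper handles continuous activations via binning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny deterministic layer: 4 binary inputs -> 2 binary outputs
W = rng.standard_normal((2, 4))
b = rng.standard_normal(2)

def layer(x):
    return (W @ x + b > 0).astype(int)

# Inject all 2^4 inputs uniformly (the maximum-entropy perturbation)
inputs = np.array([[int(c) for c in np.binary_repr(i, width=4)]
                   for i in range(16)])
outputs = np.array([layer(x) for x in inputs])

# EI = I(X;Y) = H(Y) for a deterministic mapping with uniform X
_, counts = np.unique(outputs, axis=0, return_counts=True)
p = counts / counts.sum()
EI = float(-np.sum(p * np.log2(p)))
print(f"effective information of the layer: {EI:.3f} bits (max 2 bits)")
```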

26 pages, 11199 KiB  
Article
A Comprehensive Framework for Uncovering Non-Linearity and Chaos in Financial Markets: Empirical Evidence for Four Major Stock Market Indices
by Lucia Inglada-Perez
Entropy 2020, 22(12), 1435; https://doi.org/10.3390/e22121435 - 18 Dec 2020
Cited by 23 | Viewed by 4872
Abstract
The presence of chaos in financial markets has been the subject of a great number of studies, but the results have been contradictory and inconclusive. This research tests for the existence of nonlinear patterns and a chaotic nature in four major stock market indices: the Dow Jones Industrial Average, Ibex 35, Nasdaq-100 and Nikkei 225. To this end, a comprehensive framework has been adopted, encompassing a wide range of techniques and the most suitable methods for the analysis of noisy time series. Using daily closing values from January 1992 to July 2013, this study employs twelve techniques and tools, of which five are specific to detecting chaos. The findings show no clear evidence of chaos, suggesting that the behavior of financial markets is nonlinear and stochastic.
(This article belongs to the Special Issue Complexity in Economic and Social Systems)
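
Among chaos-specific tools, the largest Lyapunov exponent is the canonical one: λ > 0 signals sensitive dependence on initial conditions. On a clean deterministic system it can be computed directly, as in the sketch below for the logistic map; noisy market data require the more robust estimators of the kind surveyed in the paper.

```python
import numpy as np

def lyapunov_logistic(r, n_iter=100_000, x0=0.3):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    computed as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x, acc = x0, 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += np.log(abs(r * (1 - 2 * x)))
    return acc / n_iter

for r in (3.2, 3.5, 3.9):      # period-2, period-4, chaotic regimes
    lam = lyapunov_logistic(r)
    print(f"r = {r}: lambda = {lam:+.3f} ->",
          "chaotic" if lam > 0 else "not chaotic")
```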

20 pages, 357 KiB  
Article
Foundations of the Quaternion Quantum Mechanics
by Marek Danielewski and Lucjan Sapa
Entropy 2020, 22(12), 1424; https://doi.org/10.3390/e22121424 - 17 Dec 2020
Cited by 22 | Viewed by 7203
Abstract
We show that quaternion quantum mechanics has well-founded mathematical roots and can be derived from the model of the elastic continuum due to the French mathematician Augustin Cauchy, i.e., it can be regarded as representing the physical reality of an elastic continuum. Starting from the Cauchy theory (classical balance equations for isotropic Cauchy-elastic material) and using the Hamilton quaternion algebra, we present a rigorous derivation of the quaternion form of the non-relativistic and relativistic wave equations. The family of wave equations and the Poisson equation are a straightforward consequence of the quaternion representation of the Cauchy model of the elastic continuum. This is the most general kind of quantum mechanics possessing the same kind of calculus of assertions as conventional quantum mechanics. The problem of the Schrödinger equation, where the imaginary ‘i’ should emerge, is solved. This interpretation is a serious attempt to describe the ontology of quantum mechanics and demonstrates that, besides Bohmian mechanics, complete ontological interpretations of quantum theory exist. The model can be generalized and falsified; to allow such tests, we specify problems that could expose its falsity.
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)
14 pages, 2346 KiB  
Article
Artificial Intelligence for Modeling Real Estate Price Using Call Detail Records and Hybrid Machine Learning Approach
by Gergo Pinter, Amir Mosavi and Imre Felde
Entropy 2020, 22(12), 1421; https://doi.org/10.3390/e22121421 - 16 Dec 2020
Cited by 33 | Viewed by 6310
Abstract
The advancement of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to the significant uncertainties and dynamic variables involved, real estate has been modeled as a complex system. In this study, a novel [...] Read more.
The advancement of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to the significant uncertainties and dynamic variables involved, real estate has been modeled as a complex system. In this study, a novel machine learning method is proposed to tackle the complexity of real estate modeling. Call detail records (CDR) provide excellent opportunities for an in-depth investigation of mobility characteristics. This study explores the potential of CDR for predicting real estate prices with the aid of artificial intelligence (AI). Several essential mobility entropy factors, including dweller entropy, dweller gyration, workers’ entropy, worker gyration, dwellers’ work distance, and workers’ home distance, are used as input variables. The prediction model is a multi-layered perceptron (MLP) trained with the evolutionary algorithm of particle swarm optimization (PSO). Model performance is evaluated using the mean square error (MSE), sustainability index (SI), and Willmott’s index (WI). The proposed model showed promising results, revealing that workers’ entropy and dwellers’ work distance directly influence the real estate price, whereas dweller gyration, dweller entropy, workers’ gyration, and workers’ home distance had a minimal effect on the price. Furthermore, it is shown that the flow of activities and the entropy of mobility are often associated with regions with lower real estate prices. Full article
Show Figures

Figure 1
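
As a rough illustration of the hybrid MLP-PSO idea described above, the sketch below trains a tiny one-hidden-layer perceptron by particle swarm optimization on synthetic data. The feature count, target function, and PSO hyperparameters are all illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: an MLP whose weights are optimized by PSO instead of
# gradient descent. Data and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 6))   # six mobility-entropy features (toy)
y = np.sin(X.sum(axis=1))               # synthetic "price" target

n_in, n_hid = X.shape[1], 8
dim = n_in * n_hid + n_hid + n_hid + 1  # all MLP parameters, flattened

def mse(w):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

# Standard PSO loop: inertia + cognitive + social velocity terms.
n_particles, iters = 30, 200
pos = rng.normal(scale=0.5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"final MSE: {pbest_f.min():.4f}")
```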

25 pages, 7942 KiB  
Article
Statistical Features in High-Frequency Bands of Interictal iEEG Work Efficiently in Identifying the Seizure Onset Zone in Patients with Focal Epilepsy
by Most. Sheuli Akter, Md. Rabiul Islam, Toshihisa Tanaka, Yasushi Iimura, Takumi Mitsuhashi, Hidenori Sugano, Duo Wang and Md. Khademul Islam Molla
Entropy 2020, 22(12), 1415; https://doi.org/10.3390/e22121415 - 15 Dec 2020
Cited by 15 | Viewed by 4894
Abstract
The design of a computer-aided system for identifying the seizure onset zone (SOZ) from interictal and ictal electroencephalograms (EEGs) is desired by epileptologists. This study aims to introduce the statistical features of high-frequency components (HFCs) in interictal intracranial electroencephalograms (iEEGs) to identify the [...] Read more.
The design of a computer-aided system for identifying the seizure onset zone (SOZ) from interictal and ictal electroencephalograms (EEGs) is desired by epileptologists. This study aims to introduce the statistical features of high-frequency components (HFCs) in interictal intracranial electroencephalograms (iEEGs) to identify the possible SOZ channels. It is known that the activity of HFCs in interictal iEEGs, including the ripple and fast-ripple bands, is associated with epileptic seizures. This paper proposes decomposing multi-channel interictal iEEG signals into a number of subbands. For every 20 s segment, twelve features are computed from each subband. A mutual information (MI)-based method with grid search is applied to select the most prominent bands and features. A gradient-boosting decision tree-based algorithm called LightGBM is used to score each segment of each channel, and these scores are averaged to achieve a final score for each channel. The possible SOZ channels are localized as those with the highest scores. Experiments with eleven epilepsy patients demonstrate the efficiency of the proposed design compared to state-of-the-art methods. Full article
(This article belongs to the Section Signal and Data Analysis)
Show Figures

Figure 1
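
A compressed sketch of the scoring pipeline described above, with synthetic data standing in for the iEEG features: mutual-information-based feature selection, LightGBM scores per segment, and per-channel averaging. Shapes, feature counts, and the number of selected features are illustrative assumptions.

```python
# Sketch of the pipeline: MI-based feature selection, LightGBM segment
# scores, per-channel averaging. Synthetic placeholders, not iEEG data.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
n_segments, n_features, n_channels = 600, 12, 10
X = rng.normal(size=(n_segments, n_features))
y = rng.integers(0, 2, size=n_segments)           # 1 = SOZ-labelled segment
channel = rng.integers(0, n_channels, size=n_segments)

# Keep the features carrying the most mutual information with the label.
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[-6:]

model = LGBMClassifier(n_estimators=100).fit(X[:, top], y)
scores = model.predict_proba(X[:, top])[:, 1]

# Average segment scores per channel; high scores flag SOZ candidates.
channel_score = np.array([scores[channel == c].mean()
                          for c in range(n_channels)])
print("candidate SOZ channels:", np.argsort(channel_score)[::-1][:3])
```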

25 pages, 1361 KiB  
Article
Diffusion Limitations and Translocation Barriers in Atomically Thin Biomimetic Pores
by Subin Sahu and Michael Zwolak
Entropy 2020, 22(11), 1326; https://doi.org/10.3390/e22111326 - 20 Nov 2020
Cited by 5 | Viewed by 3593
Abstract
Ionic transport in nano- to sub-nano-scale pores is highly dependent on translocation barriers and potential wells. These features in the free-energy landscape are primarily the result of ion dehydration and electrostatic interactions. For pores in atomically thin membranes, such as graphene, other factors [...] Read more.
Ionic transport in nano- to sub-nano-scale pores is highly dependent on translocation barriers and potential wells. These features in the free-energy landscape are primarily the result of ion dehydration and electrostatic interactions. For pores in atomically thin membranes, such as graphene, other factors come into play. Ion dynamics both inside and outside the geometric volume of the pore can be critical in determining the transport properties of the channel due to several commensurate length scales, such as the effective membrane thickness, radii of the first and the second hydration layers, pore radius, and Debye length. In particular, for biomimetic pores, such as the graphene crown ether we examine here, there are regimes where transport is highly sensitive to the pore size due to the interplay of dehydration and interaction with pore charge. Picometer changes in the size, e.g., due to a minute strain, can lead to a large change in conductance. Outside of these regimes, the small pore size itself gives a large resistance, even when electrostatic factors and dehydration compensate each other to give a relatively flat—e.g., near barrierless—free energy landscape. The permeability, though, can still be large and ions will translocate rapidly after they arrive within the capture radius of the pore. This, in turn, leads to diffusion and drift effects dominating the conductance. The current thus plateaus and becomes effectively independent of pore-free energy characteristics. Measurement of this effect will give an estimate of the magnitude of kinetically limiting features, and experimentally constrain the local electromechanical conditions. Full article
Show Figures

Figure 1

15 pages, 1035 KiB  
Article
Entropy Ratio and Entropy Concentration Coefficient, with Application to the COVID-19 Pandemic
by Christoph Bandt
Entropy 2020, 22(11), 1315; https://doi.org/10.3390/e22111315 - 18 Nov 2020
Cited by 13 | Viewed by 4275
Abstract
In order to study the spread of an epidemic over a region as a function of time, we introduce an entropy ratio U describing the uniformity of infections over various states and their districts, and an entropy concentration coefficient [...] Read more.
In order to study the spread of an epidemic over a region as a function of time, we introduce an entropy ratio U describing the uniformity of infections over various states and their districts, and an entropy concentration coefficient C = 1 − U. The latter is a multiplicative version of the Kullback–Leibler distance, with values between 0 and 1. For product measures and self-similar phenomena, it does not depend on the measurement level. Hence, C is an alternative to Gini’s concentration coefficient for measures with variation on different levels. Simple examples concern population density and gross domestic product. Application to time series patterns is indicated with a Markov chain. For the COVID-19 pandemic, entropy ratios indicate a homogeneous distribution of infections and the potential of local action when compared to measures for a whole region. Full article
(This article belongs to the Special Issue Information Theory and Symbolic Analysis: Theory and Applications)
Show Figures

Figure 1
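
Under one plausible reading of the abstract, the entropy ratio U is the observed Shannon entropy divided by its maximum, giving C = 1 − U. The sketch below computes both for made-up district counts; the exact weighting across administrative levels follows the paper, not this toy.

```python
# Sketch of U and C = 1 - U under one plausible reading of the abstract:
# U = observed Shannon entropy over its maximum. District counts are made up.
import numpy as np

def entropy_ratio(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    H = -(p * np.log(p)).sum()
    return H / np.log(len(counts))       # U in [0, 1]; 1 = perfectly uniform

infections = [120, 95, 110, 400, 15]     # cases per district (illustrative)
U = entropy_ratio(infections)
C = 1.0 - U                              # entropy concentration coefficient
print(f"U = {U:.3f}, C = {C:.3f}")
```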

15 pages, 1363 KiB  
Article
Coherence and Entanglement Dynamics in Training Variational Quantum Perceptron
by Min Namkung and Younghun Kwon
Entropy 2020, 22(11), 1277; https://doi.org/10.3390/e22111277 - 11 Nov 2020
Cited by 3 | Viewed by 2937
Abstract
In quantum computation, what contributes to the supremacy of quantum computation? One candidate is quantum coherence, since it is a resource used in various quantum algorithms. We reveal that quantum coherence contributes to the training of the variational quantum [...] Read more.
In quantum computation, what contributes to the supremacy of quantum computation? One candidate is quantum coherence, since it is a resource used in various quantum algorithms. We reveal that quantum coherence contributes to the training of the variational quantum perceptron proposed by Y. Du et al., arXiv:1809.06056 (2018). In detail, we show that in the first part of the training of the variational quantum perceptron, the quantum coherence of the total system is concentrated in the index register, and that in the second part, the Grover algorithm consumes the quantum coherence in the index register. This implies that quantum coherence distribution and quantum coherence depletion are required in the training of the variational quantum perceptron. In addition, we investigate the behavior of entanglement during the training. We show that the bipartite concurrence between the feature and index registers decreases because the Grover operation is performed only on the index register. We also reveal that the concurrence between the two qubits of the index register increases as the variational quantum perceptron is trained. Full article
(This article belongs to the Special Issue Physical Information and the Physical Foundations of Computation)
Show Figures

Figure 1

34 pages, 2670 KiB  
Review
An Overview of Key Technologies in Physical Layer Security
by Abraham Sanenga, Galefang Allycan Mapunda, Tshepiso Merapelo Ludo Jacob, Leatile Marata, Bokamoso Basutli and Joseph Monamati Chuma
Entropy 2020, 22(11), 1261; https://doi.org/10.3390/e22111261 - 6 Nov 2020
Cited by 52 | Viewed by 9254
Abstract
The open nature of radio propagation enables ubiquitous wireless communication. This allows for seamless data transmission. However, unauthorized users may pose a threat to the security of the data being transmitted to authorized users. This gives rise to network vulnerabilities such as hacking, [...] Read more.
The open nature of radio propagation enables ubiquitous wireless communication and allows for seamless data transmission. However, unauthorized users may pose a threat to the security of the data being transmitted to authorized users. This gives rise to network vulnerabilities such as hacking, eavesdropping, and jamming of the transmitted information. Physical layer security (PLS) has been identified as one of the promising security approaches for safeguarding the transmission from eavesdroppers in a wireless network. It is an alternative to computationally demanding and complex cryptographic algorithms and techniques. PLS has continually received growing research interest owing to the possibility of exploiting the characteristics of the wireless channel, chief among them the random nature of the transmission channel. This randomness makes confidential and authentic signal transmission between the sender and the receiver possible at the physical layer. We start by introducing the basic theories of PLS, including the wiretap channel, information-theoretic security, and a brief discussion of cryptographic security techniques. Furthermore, an overview of multiple-input multiple-output (MIMO) communication is provided. The main focus of our review is on existing key-less PLS optimization techniques, their limitations, and challenges. The paper also looks into promising key research areas for addressing these shortfalls. Lastly, a comprehensive overview of some of the recent PLS research in the 5G and 6G technologies of wireless communication networks is provided. Full article
(This article belongs to the Section Entropy Reviews)
Show Figures

Figure 1
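
The information-theoretic core of key-less PLS is the secrecy capacity of the wiretap channel; for the degraded Gaussian case it is the non-negative gap between the main and eavesdropper channel capacities. A minimal computation follows, with arbitrary SNR values chosen purely for illustration.

```python
# Illustrative secrecy capacity of the degraded Gaussian wiretap channel:
# Cs = max(0, log2(1 + SNR_main) - log2(1 + SNR_eve)). SNRs are arbitrary.
import math

def secrecy_capacity(snr_main, snr_eve):
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

print(secrecy_capacity(snr_main=15.0, snr_eve=3.0))  # bits per channel use
```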

47 pages, 5007 KiB  
Article
Quantum Finite-Time Thermodynamics: Insight from a Single Qubit Engine
by Roie Dann, Ronnie Kosloff and Peter Salamon
Entropy 2020, 22(11), 1255; https://doi.org/10.3390/e22111255 - 4 Nov 2020
Cited by 36 | Viewed by 5132
Abstract
Incorporating time into thermodynamics allows for addressing the tradeoff between efficiency and power. A qubit engine serves as a toy model in order to study this tradeoff from first principles, based on the quantum theory of open systems. We study the quantum origin [...] Read more.
Incorporating time into thermodynamics allows for addressing the tradeoff between efficiency and power. A qubit engine serves as a toy model in order to study this tradeoff from first principles, based on the quantum theory of open systems. We study the quantum origin of irreversibility, originating from heat transport, quantum friction, and thermalization in the presence of external driving. We construct various finite-time engine cycles that are based on the Otto and Carnot templates. Our analysis highlights the role of coherence and the quantum origin of entropy production. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
Show Figures

Graphical abstract

33 pages, 586 KiB  
Article
Entropy Production in Exactly Solvable Systems
by Luca Cocconi, Rosalba Garcia-Millan, Zigan Zhen, Bianca Buturca and Gunnar Pruessner
Entropy 2020, 22(11), 1252; https://doi.org/10.3390/e22111252 - 3 Nov 2020
Cited by 36 | Viewed by 7512
Abstract
The rate of entropy production by a stochastic process quantifies how far it is from thermodynamic equilibrium. Equivalently, entropy production captures the degree to which global detailed balance and time-reversal symmetry are broken. Despite abundant references to entropy production in the literature and [...] Read more.
The rate of entropy production by a stochastic process quantifies how far it is from thermodynamic equilibrium. Equivalently, entropy production captures the degree to which global detailed balance and time-reversal symmetry are broken. Despite abundant references to entropy production in the literature and its many applications in the study of non-equilibrium stochastic particle systems, a comprehensive list of typical examples illustrating the fundamentals of entropy production is lacking. Here, we present a brief, self-contained review of entropy production and calculate it from first principles in a catalogue of exactly solvable setups, encompassing both discrete- and continuous-state Markov processes, as well as single- and multiple-particle systems. The examples covered in this work provide a stepping stone for further studies on entropy production of more complex systems, such as many-particle active matter, as well as a benchmark for the development of alternative mathematical formalisms. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)
Show Figures

Figure 1
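
For the discrete-state Markov processes in such a catalogue, the steady-state entropy production rate follows from the standard Schnakenberg formula. The sketch below evaluates it for a three-state continuous-time chain; the rate matrix is an illustrative choice, not one of the paper's worked examples.

```python
# Steady-state entropy production rate of a continuous-time three-state
# Markov process via the standard Schnakenberg formula. The rate matrix
# is an arbitrary illustrative example.
import numpy as np

# w[i, j] = transition rate i -> j (off-diagonal entries only).
w = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [4.0, 1.0, 0.0]])
Q = w - np.diag(w.sum(axis=1))          # generator: rows sum to zero

# Stationary distribution: left null vector of the generator Q.
evals, evecs = np.linalg.eig(Q.T)
p = np.real(evecs[:, np.argmin(np.abs(evals))])
p /= p.sum()

S_dot = 0.0
for i in range(3):
    for j in range(3):
        if i != j and w[i, j] > 0 and w[j, i] > 0:
            S_dot += 0.5 * (p[i] * w[i, j] - p[j] * w[j, i]) \
                     * np.log((p[i] * w[i, j]) / (p[j] * w[j, i]))
print(f"entropy production rate: {S_dot:.4f}")   # 0 iff detailed balance
```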

16 pages, 1849 KiB  
Article
Quantum Work Statistics with Initial Coherence
by María García Díaz, Giacomo Guarnieri and Mauro Paternostro
Entropy 2020, 22(11), 1223; https://doi.org/10.3390/e22111223 - 27 Oct 2020
Cited by 18 | Viewed by 3794
Abstract
The two-point measurement scheme for computing the thermodynamic work performed on a system requires it to be initially in equilibrium. The Margenau–Hill scheme, among others, extends the previous approach to allow for a non-equilibrium initial state. We establish a quantitative comparison between both [...] Read more.
The two-point measurement scheme for computing the thermodynamic work performed on a system requires it to be initially in equilibrium. The Margenau–Hill scheme, among others, extends the previous approach to allow for a non-equilibrium initial state. We establish a quantitative comparison between the two schemes in terms of the amount of coherence present in the initial state of the system, as quantified by the l1-coherence measure. We show that the differences between the first two moments of work, the variances of work, and the average entropy production obtained in the two schemes can be cast in terms of such initial coherence. Moreover, we prove that the average entropy production can take negative values in the Margenau–Hill framework. Full article
(This article belongs to the Special Issue Thermodynamics of Quantum Information)
Show Figures

Figure 1
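
The l1-coherence measure used above is simply the sum of the absolute values of a density matrix's off-diagonal elements. A minimal sketch follows, with an arbitrary maximally coherent qubit state as the test case.

```python
# Sketch: l1-coherence of a quantum state, i.e., the sum of the absolute
# values of the off-diagonal density-matrix elements. Test state is arbitrary.
import numpy as np

def l1_coherence(rho):
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(plus, plus.conj())    # |+><+|, maximally coherent qubit
print(l1_coherence(rho))             # -> 1.0
```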

10 pages, 503 KiB  
Article
Thermodynamic Curvature of the Binary van der Waals Fluid
by George Ruppeiner and Alex Seftas
Entropy 2020, 22(11), 1208; https://doi.org/10.3390/e22111208 - 26 Oct 2020
Cited by 15 | Viewed by 3822
Abstract
The thermodynamic Ricci curvature scalar R has been applied in a number of contexts, mostly for systems characterized by 2D thermodynamic geometries. Calculations of R in thermodynamic geometries of dimension three or greater have been very few, especially in the fluid regime. In [...] Read more.
The thermodynamic Ricci curvature scalar R has been applied in a number of contexts, mostly for systems characterized by 2D thermodynamic geometries. Calculations of R in thermodynamic geometries of dimension three or greater have been very few, especially in the fluid regime. In this paper, we calculate R for two examples involving binary fluid mixtures: a binary mixture of a van der Waals (vdW) fluid with only repulsive interactions, and a binary vdW mixture with attractive interactions added. In both of these examples, we evaluate R for full 3D thermodynamic geometries. Our finding is that basic physical patterns found for R in the pure fluid are reproduced to a large extent for the binary fluid. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
Show Figures

Figure 1

25 pages, 350 KiB  
Article
The Heisenberg Indeterminacy Principle in the Context of Covariant Quantum Gravity
by Massimo Tessarotto and Claudio Cremaschini
Entropy 2020, 22(11), 1209; https://doi.org/10.3390/e22111209 - 26 Oct 2020
Cited by 9 | Viewed by 2930
Abstract
This paper deals with the mathematical formulation of the Heisenberg Indeterminacy Principle in the framework of Quantum Gravity. The starting point is the establishment of the so-called time-conjugate momentum inequalities holding for non-relativistic and relativistic Quantum Mechanics. The validity of [...] Read more.
This paper deals with the mathematical formulation of the Heisenberg Indeterminacy Principle in the framework of Quantum Gravity. The starting point is the establishment of the so-called time-conjugate momentum inequalities holding for non-relativistic and relativistic Quantum Mechanics. The validity of analogous Heisenberg inequalities in quantum gravity, which must be based on strictly physically observable quantities (i.e., necessarily either 4-scalar or 4-vector in nature), is shown to require the adoption of a manifestly covariant and unitary quantum theory of the gravitational field. Based on the prescription of a suitable notion of Hilbert space scalar product, the relevant Heisenberg inequalities are established. Besides the coordinate-conjugate momentum inequalities, these include a novel proper-time-conjugate extended momentum inequality. Physical implications and the connection with the deterministic limit recovering General Relativity are investigated. Full article
(This article belongs to the Special Issue Axiomatic Approaches to Quantum Mechanics)
20 pages, 305 KiB  
Article
The World as a Neural Network
by Vitaly Vanchurin
Entropy 2020, 22(11), 1210; https://doi.org/10.3390/e22111210 - 26 Oct 2020
Cited by 39 | Viewed by 12213
Abstract
We discuss the possibility that the entire universe, at its most fundamental level, is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., the bias vector or weight matrix) and “hidden” variables (e.g., the state vector of neurons). [...] Read more.
We discuss the possibility that the entire universe, at its most fundamental level, is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., the bias vector or weight matrix) and “hidden” variables (e.g., the state vector of neurons). We first consider the stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by the Madelung equations (with the free energy representing the phase) and further away from equilibrium by the Hamilton–Jacobi equations (with the free energy representing Hamilton’s principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors, with the state vector of neurons representing the hidden variables. We then study the stochastic evolution of the hidden variables by considering D non-interacting subsystems with average state vectors x̄1, …, x̄D and an overall average state vector x̄0. In the limit when the weight matrix is a permutation matrix, the dynamics of x̄μ can be described in terms of relativistic strings in an emergent (D+1)-dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor, which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to entropy production described by the Einstein–Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors that are described by both quantum mechanics and general relativity. We also discuss the possibility that the two descriptions are holographic duals of each other. Full article
(This article belongs to the Section Statistical Physics)
21 pages, 7211 KiB  
Article
Numerical Investigation into the Development Performance of Gas Hydrate by Depressurization Based on Heat Transfer and Entropy Generation Analyses
by Bo Li, Wen-Na Wei, Qing-Cui Wan, Kang Peng and Ling-Ling Chen
Entropy 2020, 22(11), 1212; https://doi.org/10.3390/e22111212 - 26 Oct 2020
Cited by 21 | Viewed by 2859
Abstract
The purpose of this study is to analyze the dynamic properties of gas hydrate development from a large hydrate simulator through numerical simulation. A mathematical model of heat transfer and entropy production of methane hydrate dissociation by depressurization has been established, and the [...] Read more.
The purpose of this study is to analyze the dynamic properties of gas hydrate development from a large hydrate simulator through numerical simulation. A mathematical model of heat transfer and entropy production during methane hydrate dissociation by depressurization has been established, and the behaviors of the various heat flows and entropy generations have been evaluated. Simulation results show that most of the heat supplied from outside is assimilated by the methane hydrate. The energy loss caused by fluid production is insignificant in comparison to the heat assimilation of the hydrate reservoir. The entropy generation of the gas hydrate can be considered the entropy flow from the ambient environment to the hydrate particles, and it is favorable from the perspective of efficient hydrate exploitation. By contrast, the undesirable entropy generations of water, gas, and quartz sand are induced by irreversible heat conduction and thermal convection under a notable temperature gradient in the deposit. Although a lower production pressure leads to larger entropy production of the whole system, the irreversible energy loss is always extremely limited compared with the amount of thermal energy utilized by the methane hydrate. The production pressure should therefore be set as low as possible to enhance exploitation efficiency, as the entropy production rate is not sensitive to the energy recovery rate under depressurization. Full article
Show Figures

Figure 1

20 pages, 8795 KiB  
Article
Dynamic Topology Reconfiguration of Boltzmann Machines on Quantum Annealers
by Jeremy Liu, Ke-Thia Yao and Federico Spedalieri
Entropy 2020, 22(11), 1202; https://doi.org/10.3390/e22111202 - 24 Oct 2020
Cited by 8 | Viewed by 3207
Abstract
Boltzmann machines have useful roles in deep learning applications, such as generative data modeling, initializing weights for other types of networks, or extracting efficient representations from high-dimensional data. Most Boltzmann machines use restricted topologies that exclude looping connectivity, as such connectivity creates complex [...] Read more.
Boltzmann machines have useful roles in deep learning applications, such as generative data modeling, initializing weights for other types of networks, or extracting efficient representations from high-dimensional data. Most Boltzmann machines use restricted topologies that exclude looping connectivity, as such connectivity creates complex distributions that are difficult to sample. We have used an open-system quantum annealer to sample from complex distributions and implement Boltzmann machines with looping connectivity. Further, we have created policies mapping Boltzmann machine variables to the quantum bits of an annealer. These policies, based on correlation and entropy metrics, dynamically reconfigure the topology of Boltzmann machines during training and improve performance. Full article
(This article belongs to the Special Issue Noisy Intermediate-Scale Quantum Technologies (NISQ))
Show Figures

Figure 1

28 pages, 5650 KiB  
Article
Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences
by Ranjana Koshy and Ausif Mahmood
Entropy 2020, 22(10), 1186; https://doi.org/10.3390/e22101186 - 21 Oct 2020
Cited by 16 | Viewed by 6412
Abstract
Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has recently been done on improving the accuracy of face liveness detection, the best [...] Read more.
Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has recently been done on improving the accuracy of face liveness detection, the best current approaches use a two-step process: first applying non-linear anisotropic diffusion to the incoming image and then using a deep network for the final liveness decision. Such an approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions in which nonlinear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image; this enhances the edges and surface texture and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and the Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack and Replay-Mobile datasets. The entire architecture is created in such a manner that, once trained, face liveness detection can be accomplished in real time. We achieve promising face liveness detection accuracies of 96.03% and 96.21% with the SCNN, and 94.77% and 95.53% with Inception v4, on the Replay-Attack and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses the diffusion of images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to classify the video sequence as real or fake. Even though the use of a CNN followed by an LSTM is not new, combining it with diffusion (which has proven to be the best approach for single-image liveness detection) is novel. Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and a 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset gave 95.41% accuracy and a 5.28% HTER. Full article
Show Figures

Figure 1
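
The paper's preprocessing relies on an additive-operator-splitting solver; the explicit Perona-Malik-style step below only approximates that idea, and the step size, edge-stopping function, and iteration count are illustrative assumptions.

```python
# Minimal sketch of nonlinear anisotropic diffusion as a preprocessing step.
# The paper uses an additive-operator-splitting (AOS) solver; this explicit
# Perona-Malik-style scheme (periodic boundaries via np.roll) only
# approximates the idea. kappa, dt, and n_steps are illustrative assumptions.
import numpy as np

def diffuse(img, n_steps=10, kappa=0.1, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_steps):
        # Differences towards the four nearest neighbours.
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

img = np.random.default_rng(0).random((64, 64))
print(diffuse(img).shape)   # (64, 64): smoothed image with edges preserved
```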

26 pages, 456 KiB  
Article
The Smoluchowski Ensemble—Statistical Mechanics of Aggregation
by Themis Matsoukas
Entropy 2020, 22(10), 1181; https://doi.org/10.3390/e22101181 - 20 Oct 2020
Cited by 6 | Viewed by 4673
Abstract
We present a rigorous thermodynamic treatment of irreversible binary aggregation. We construct the Smoluchowski ensemble as the set of discrete finite distributions that are reached in a fixed number of merging events and define a probability measure on this ensemble, such that the mean [...] Read more.
We present a rigorous thermodynamic treatment of irreversible binary aggregation. We construct the Smoluchowski ensemble as the set of discrete finite distributions that are reached in a fixed number of merging events and define a probability measure on this ensemble, such that the mean distribution in the mean-field approximation is governed by the Smoluchowski equation. In the scaling limit, this ensemble gives rise to a set of relationships identical to those of familiar statistical thermodynamics. The central element of the thermodynamic treatment is the selection functional, a functional of feasible distributions that connects the probability of a distribution to the details of the aggregation model. We obtain scaling expressions for general kernels and closed-form results for the special cases of the constant, sum, and product kernels. We study the stability of the most probable distribution, provide criteria for the sol-gel transition, and obtain the distribution in the post-gel region by simple thermodynamic arguments. Full article
(This article belongs to the Special Issue Generalized Statistical Thermodynamics)
Show Figures

Figure 1

18 pages, 575 KiB  
Article
Two-Qubit Entanglement Generation through Non-Hermitian Hamiltonians Induced by Repeated Measurements on an Ancilla
by Roberto Grimaudo, Antonino Messina, Alessandro Sergi, Nikolay V. Vitanov and Sergey N. Filippov
Entropy 2020, 22(10), 1184; https://doi.org/10.3390/e22101184 - 20 Oct 2020
Cited by 20 | Viewed by 4854
Abstract
In contrast to classical systems, actual implementation of non-Hermitian Hamiltonian dynamics for quantum systems is a challenge because the processes of energy gain and dissipation are based on the underlying Hermitian system–environment dynamics, which are trace preserving. Recently, a scheme for engineering non-Hermitian [...] Read more.
In contrast to classical systems, actual implementation of non-Hermitian Hamiltonian dynamics for quantum systems is a challenge because the processes of energy gain and dissipation are based on the underlying Hermitian system–environment dynamics, which are trace preserving. Recently, a scheme for engineering non-Hermitian Hamiltonians as a result of repetitive measurements on an ancillary qubit has been proposed. The induced conditional dynamics of the main system is described by the effective non-Hermitian Hamiltonian arising from the procedure. In this paper, we demonstrate the effectiveness of such a protocol by applying it to physically relevant multi-spin models, showing that the effective non-Hermitian Hamiltonian drives the system to a maximally entangled stationary state. In addition, we report a new recipe to construct a physical scenario where the quantum dynamics of a physical system represented by a given non-Hermitian Hamiltonian model may be simulated. The physical implications and the broad scope potential applications of such a scheme are highlighted. Full article
(This article belongs to the Special Issue Quantum Dynamics with Non-Hermitian Hamiltonians)
Show Figures

Figure 1
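
The maximally entangled stationary state reported above is certified by concurrence; Wootters' formula for a two-qubit density matrix can be evaluated in a few lines. The Bell state below is an arbitrary test case, not the paper's multi-spin model.

```python
# Sketch: Wootters concurrence of a two-qubit density matrix, the quantity
# behind the "maximally entangled stationary state" claim above.
import numpy as np

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    # Eigenvalues of R = rho (sy x sy) rho* (sy x sy) give the concurrence.
    ev = np.sqrt(np.abs(np.linalg.eigvals(rho @ Y @ rho.conj() @ Y)))
    l1, l2, l3, l4 = np.sort(ev)[::-1]
    return max(0.0, l1 - l2 - l3 - l4)

bell = np.zeros(4)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())
print(concurrence(rho))                  # -> 1.0 for a Bell state
```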

20 pages, 334 KiB  
Article
Unifying Aspects of Generalized Calculus
by Marek Czachor
Entropy 2020, 22(10), 1180; https://doi.org/10.3390/e22101180 - 19 Oct 2020
Cited by 15 | Viewed by 4216
Abstract
Non-Newtonian calculus naturally unifies various ideas that have occurred over the years in the field of generalized thermostatistics, or in the borderland between classical and quantum information theory. The formalism, being very general, is as simple as the calculus we know from undergraduate [...] Read more.
Non-Newtonian calculus naturally unifies various ideas that have occurred over the years in the field of generalized thermostatistics, or in the borderland between classical and quantum information theory. The formalism, being very general, is as simple as the calculus we know from undergraduate courses of mathematics. Its theoretical potential is huge, and yet it remains unknown or unappreciated. Full article
(This article belongs to the Special Issue The Statistical Foundations of Entropy)
Show Figures

Figure 1

25 pages, 1715 KiB  
Article
Segmentation of High Dimensional Time-Series Data Using Mixture of Sparse Principal Component Regression Model with Information Complexity
by Yaojin Sun and Hamparsum Bozdogan
Entropy 2020, 22(10), 1170; https://doi.org/10.3390/e22101170 - 17 Oct 2020
Cited by 6 | Viewed by 3368
Abstract
This paper presents a novel hybrid modeling method for the segmentation of high dimensional time-series data using the mixture of sparse principal components regression (MIX-SPCR) model with the information complexity (ICOMP) criterion as the fitness function. Our [...] Read more.
This paper presents a novel hybrid modeling method for the segmentation of high dimensional time-series data using the mixture of sparse principal components regression (MIX-SPCR) model with the information complexity (ICOMP) criterion as the fitness function. Our approach encompasses dimension reduction in high dimensional time-series data and, at the same time, determines the number of component clusters (i.e., the number of segments across the time series) and selects the best subset of predictors. A large-scale Monte Carlo simulation is performed to show the capability of the MIX-SPCR model to identify the correct structure of the time-series data successfully. The MIX-SPCR model is also applied to high dimensional Standard & Poor’s 500 (S&P 500) index data to uncover the hidden structure of the time series and identify structural change points. The approach presented in this paper determines both the relationships among the predictor variables and how the various predictor variables contribute to the explanatory power of the response variable through cluster-wise sparsity settings. Full article
Show Figures

Figure 1

9 pages, 4211 KiB  
Article
Non-Equilibrium Living Polymers
by Davide Michieletto
Entropy 2020, 22(10), 1130; https://doi.org/10.3390/e22101130 - 6 Oct 2020
Cited by 8 | Viewed by 5735
Abstract
Systems of “living” polymers are ubiquitous in industry and are traditionally realised using surfactants. Here I first review the theoretical state of the art of living polymers and then discuss non-equilibrium extensions that may be realised with advanced synthetic chemistry or DNA functionalised by proteins. These [...] Read more.
Systems of “living” polymers are ubiquitous in industry and are traditionally realised using surfactants. Here I first review the theoretical state of the art of living polymers and then discuss non-equilibrium extensions that may be realised with advanced synthetic chemistry or DNA functionalised by proteins. These systems are interesting not only for realising novel “living” soft matter but also for the insight they can give into how genomes are (topologically) regulated in vivo. Full article
(This article belongs to the Special Issue Statistical Physics of Living Systems)
Show Figures

Figure 1

27 pages, 1869 KiB  
Article
Exploration of Outliers in If-Then Rule-Based Knowledge Bases
by Agnieszka Nowak-Brzezińska and Czesław Horyń
Entropy 2020, 22(10), 1096; https://doi.org/10.3390/e22101096 - 29 Sep 2020
Cited by 7 | Viewed by 4080
Abstract
The article presents methods for both clustering and outlier detection in complex data, such as rule-based knowledge bases. What distinguishes this work from others is, first, the application of clustering algorithms to rules in domain knowledge bases, and second, the use of outlier [...] Read more.
The article presents methods for both clustering and outlier detection in complex data, such as rule-based knowledge bases. What distinguishes this work from others is, first, the application of clustering algorithms to rules in domain knowledge bases, and second, the use of outlier detection algorithms to detect unusual rules in knowledge bases. The aim of the paper is to analyse four algorithms for outlier detection in rule-based knowledge bases: Local Outlier Factor (LOF), Connectivity-based Outlier Factor (COF), K-MEANS, and SMALLCLUSTERS. The subject of outlier mining is very important nowadays. Outliers among If-Then rules are unusual rules, rare in comparison to the others, and they should be examined by the domain expert as soon as possible. In the research, the authors use the outlier detection methods to find a given fraction of outliers among the rules (1%, 5%, 10%), while in small groups the outliers cover no more than 5% of the rule cluster. Subsequently, the authors analyse which of seven quality indices, computed for all rules and again after removing selected outliers, improve the quality of rule clusters. In the experimental stage, six different knowledge bases are used. The best results (cluster quality was improved most often) are achieved for the two outlier detection algorithms LOF and COF. Full article
Show Figures

Graphical abstract
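
Of the four detectors compared above, LOF is the most widely available; scikit-learn's implementation can be applied directly to a numeric encoding of If-Then rules. The rule vectors below are synthetic placeholders, and the 5% contamination mirrors the small-group bound mentioned in the abstract.

```python
# Sketch: Local Outlier Factor applied to a numeric encoding of If-Then
# rules. Rule vectors are synthetic placeholders, not a real knowledge base.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
rules = rng.normal(size=(200, 8))        # 200 encoded rules, 8 attributes
rules[:5] += 6.0                         # five deliberately unusual rules

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
labels = lof.fit_predict(rules)          # -1 marks detected outliers
print("outlier rule indices:", np.where(labels == -1)[0])
```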
