Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

12 pages, 8723 KiB  
Article
Exploring the Phase Space of Multi-Principal-Element Alloys and Predicting the Formation of Bulk Metallic Glasses
by Mirko Gabski, Martin Peterlechner and Gerhard Wilde
Entropy 2020, 22(3), 292; https://doi.org/10.3390/e22030292 - 2 Mar 2020
Cited by 4 | Viewed by 3899
Abstract
Multi-principal-element alloys share a set of thermodynamic and structural parameters that, in their range of adopted values, correlate to the tendency of the alloys to assume a solid solution, whether as a crystalline or an amorphous phase. Based on empirical correlations, this work presents a computational method for predicting possible glass-forming compositions for a chosen alloy system, as well as for calculating their critical cooling rates. The obtained results compare well to experimental data for Pd-Ni-P, micro-alloyed Pd-Ni-P, Cu-Mg-Ca, and Cu-Zr-Ti. Furthermore, a random-number-generator-based algorithm is employed to explore glass-forming candidate alloys with a minimum critical cooling rate, reducing the number of datapoints necessary to find suitable glass-forming compositions. A comparison with experimental results for the quaternary Ti-Zr-Cu-Ni system shows a promising overlap of calculation and experiment, implying that this is a reasonable method for finding candidate glass-forming alloys with a critical cooling rate sufficiently low to allow the formation of bulk metallic glasses.
(This article belongs to the Special Issue Crystallization Thermodynamics)

15 pages, 456 KiB  
Article
Deng Entropy Weighted Risk Priority Number Model for Failure Mode and Effects Analysis
by Haixia Zheng and Yongchuan Tang
Entropy 2020, 22(3), 280; https://doi.org/10.3390/e22030280 - 28 Feb 2020
Cited by 29 | Viewed by 5255
Abstract
Failure mode and effects analysis (FMEA), a commonly used risk management method, has been extensively applied in the engineering domain. A vital parameter in FMEA is the risk priority number (RPN), which is the product of the occurrence (O), severity (S), and detection (D) of a failure mode. To deal with the uncertainty in the assessments given by domain experts, a novel Deng entropy weighted risk priority number (DEWRPN) for FMEA is proposed in the framework of Dempster–Shafer evidence theory (DST). DEWRPN takes into consideration the relative importance of both risk factors and FMEA experts. The uncertainty of the objective assessments coming from experts is measured by the Deng entropy. An expert’s weight is composed of the three risk factors’ weights, obtained independently from the expert’s assessments. In DEWRPN, the strategy for assigning a weight to each expert is flexible and compatible with real decision-making situations. The entropy-based relative weight symbolizes relative importance: the higher the uncertainty of a risk factor from an expert, the lower the weight of the corresponding risk factor, and vice versa. We utilize Deng entropy to construct the exponential weight of each risk factor, as well as an expert’s relative importance on an FMEA item. A case study is adopted to verify the practicability and effectiveness of the proposed model.
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
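The weighting scheme lends itself to a compact numerical illustration. The sketch below computes the Deng entropy of basic probability assignments and derives entropy-based expert weights; the assessments, the exponential normalization, and all names are illustrative assumptions, not the paper's exact DEWRPN procedure.

```python
import numpy as np

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment.
    bpa maps frozenset focal elements to masses summing to 1."""
    return -sum(m * np.log2(m / (2**len(A) - 1))
                for A, m in bpa.items() if m > 0)

# Hypothetical assessments of one failure mode's occurrence (O) by two experts,
# expressed as BPAs over the rating set {1,...,10} (illustrative values only).
expert1 = {frozenset({7}): 0.8, frozenset({6, 7}): 0.2}        # fairly certain
expert2 = {frozenset({5}): 0.3, frozenset({5, 6, 7}): 0.7}     # more uncertain

entropies = np.array([deng_entropy(e) for e in (expert1, expert2)])
# DEWRPN idea: the more uncertain an expert's assessment, the lower its weight.
weights = np.exp(-entropies) / np.exp(-entropies).sum()
print(entropies, weights)
```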

19 pages, 509 KiB  
Review
Thermodynamic Limits and Optimality of Microbial Growth
by Nima P. Saadat, Tim Nies, Yvan Rousset and Oliver Ebenhöh
Entropy 2020, 22(3), 277; https://doi.org/10.3390/e22030277 - 28 Feb 2020
Cited by 21 | Viewed by 8060
Abstract
Understanding microbial growth with the use of mathematical models has a long history that dates back to the pioneering work of Jacques Monod in the 1940s. Monod’s famous growth law expressed microbial growth rate as a simple function of the limiting nutrient concentration. However, explaining growth laws from underlying principles is extremely challenging. In the second half of the 20th century, numerous experimental approaches aimed at precisely measuring heat production during microbial growth to determine the entropy balance in a growing cell and to quantify the exported entropy. This has led to the development of thermodynamic theories of microbial growth, which have generated fundamental understanding and identified the principal limitations of the growth process. Whereas these approaches ignored metabolic details and considered microbial metabolism as a black box, modern theories rely heavily on genomic resources to describe and model metabolism in great detail in order to explain microbial growth. Interestingly, however, thermodynamic constraints are often included in modern modeling approaches only in a rather superficial fashion, and recent modeling approaches and classical theories appear to be rather disconnected fields. To stimulate a closer interaction between these fields, we here review various theoretical approaches that aim at describing microbial growth based on thermodynamics and outline the resulting thermodynamic limits and optimality principles. We start with classical black box models of cellular growth, and continue with recent metabolic modeling approaches that include thermodynamics, before we place these models in the context of fundamental considerations based on non-equilibrium statistical mechanics. We conclude by identifying conceptual overlaps between the fields and suggest how the various types of theories and models can be integrated. We outline how concepts from one approach may help to inform or constrain another, and we demonstrate how genome-scale models can be used to infer key black box parameters, such as the energy of formation or the degree of reduction of biomass. Such integration will allow understanding of the extent to which microbes can be viewed as thermodynamic machines, and how close they operate to theoretical optima.
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
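Monod's growth law mentioned above has a simple saturating form, μ(s) = μ_max · s/(K_s + s). A minimal numeric illustration (the parameter values are arbitrary):

```python
def monod_growth_rate(s, mu_max=1.0, k_s=0.5):
    """Monod's law: specific growth rate as a function of the
    limiting nutrient concentration s (same units as k_s)."""
    return mu_max * s / (k_s + s)

# Growth rate saturates toward mu_max as the nutrient becomes abundant.
for s in (0.1, 0.5, 2.0, 10.0):
    print(f"s = {s:5.1f}  ->  mu = {monod_growth_rate(s):.3f}")
```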

14 pages, 477 KiB  
Article
Maxwell’s Demon in Quantum Mechanics
by Orly Shenker and Meir Hemmo
Entropy 2020, 22(3), 269; https://doi.org/10.3390/e22030269 - 27 Feb 2020
Cited by 4 | Viewed by 4979
Abstract
Maxwell’s Demon is a thought experiment devised by J. C. Maxwell in 1867 in order to show that the Second Law of thermodynamics is not universal, since it has a counter-example. Since the Second Law is taken by many to provide an arrow of time, the threat to its universality threatens the account of temporal directionality as well. Various attempts to “exorcise” the Demon, by proving that it is impossible for one reason or another, have been made throughout the years, but none of them were successful. We have shown (in a number of publications) by a general state-space argument that Maxwell’s Demon is compatible with classical mechanics, and that the most recent solutions, based on Landauer’s thesis, are not general. In this paper we demonstrate that Maxwell’s Demon is also compatible with quantum mechanics. We do so by analyzing a particular (but highly idealized) experimental setup and proving that it violates the Second Law. Our discussion is in the framework of standard quantum mechanics; we give two separate arguments in the framework of quantum mechanics with and without the projection postulate. We address in our analysis the connection between measurement and erasure interactions and we show how these notions are applicable in the microscopic quantum mechanical structure. We discuss what might be the quantum mechanical counterpart of the classical notion of “macrostates”, thus explaining why our Quantum Demon setup works not only at the micro level but also at the macro level, properly understood. One implication of our analysis is that the Second Law cannot provide a universal lawlike basis for an account of the arrow of time; this account has to be sought elsewhere.
(This article belongs to the Special Issue Time and Entropy)

11 pages, 1108 KiB  
Review
Some Notes on Counterfactuals in Quantum Mechanics
by Avshalom C. Elitzur and Eliahu Cohen
Entropy 2020, 22(3), 266; https://doi.org/10.3390/e22030266 - 26 Feb 2020
Cited by 1 | Viewed by 4236
Abstract
Counterfactuals, i.e., events that could have occurred but eventually did not, play a unique role in quantum mechanics in that they exert causal effects despite their non-occurrence. They are therefore vital for a better understanding of quantum mechanics (QM) and possibly of the universe as a whole. In earlier works, we have studied counterfactuals both conceptually and experimentally. A fruitful framework termed quantum oblivion has emerged, referring to situations where one particle seems to “forget” its interaction with other particles despite the latter being visibly affected. This framework proved to have significant explanatory power, which we now extend to tackle additional riddles. The time-symmetric causality employed by the Two-State-Vector Formalism (TSVF) reveals a subtle realm ruled by “weak values,” already demonstrated by numerous experiments. They offer a realistic, simple, and intuitively appealing explanation of the unique role of quantum non-events, as well as of the foundations of QM. In this spirit, we performed a weak-value analysis of quantum oblivion and suggest some new avenues for further research.
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)

8 pages, 1136 KiB  
Article
Entropy of Conduction Electrons from Transport Experiments
by Nicolás Pérez, Constantin Wolf, Alexander Kunzmann, Jens Freudenberger, Maria Krautz, Bruno Weise, Kornelius Nielsch and Gabi Schierning
Entropy 2020, 22(2), 244; https://doi.org/10.3390/e22020244 - 21 Feb 2020
Cited by 6 | Viewed by 4603
Abstract
The entropy of conduction electrons was evaluated utilizing the thermodynamic definition of the Seebeck coefficient as a tool. This analysis was applied to two different kinds of scientific questions that can—if at all—be only partially addressed by other methods. These are the field-dependence of meta-magnetic phase transitions and the electronic structure in strongly disordered materials, such as alloys. We showed that the electronic entropy change in meta-magnetic transitions is not constant with the applied magnetic field, as is usually assumed. Furthermore, we traced the evolution of the electronic entropy with respect to the chemical composition of an alloy series. Insights about the strength and kind of interactions appearing in the exemplary materials can be gained from the experiments.
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)

10 pages, 658 KiB  
Article
Thermodynamic and Transport Properties of Equilibrium Debye Plasmas
by Gianpiero Colonna and Annarita Laricchiuta
Entropy 2020, 22(2), 237; https://doi.org/10.3390/e22020237 - 20 Feb 2020
Cited by 7 | Viewed by 3792
Abstract
The thermodynamic and transport properties of weakly non-ideal, high-density, partially ionized hydrogen plasma are investigated, accounting for quantum effects due to the change in the energy spectrum of atomic hydrogen when the electron–proton interaction is considered embedded in the surrounding particles. The complexity of the rigorous approach led to the development of simplified models, able to include the neighbor effects on the isolated system while remaining consistent with the traditional thermodynamic approach. High-density conditions have been simulated assuming particle interactions described by a screened Coulomb potential.
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)
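The screened Coulomb potential referred to above is the standard Debye–Hückel form, V(r) = −e²/(4πε₀r)·exp(−r/λ_D). A short sketch computing the Debye length and the screened electron–proton interaction; the density and temperature values are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.constants import epsilon_0, k as k_B, e

def debye_length(n_e, T_e):
    """Electron Debye length (m) for density n_e (m^-3) and temperature T_e (K)."""
    return np.sqrt(epsilon_0 * k_B * T_e / (n_e * e**2))

def screened_coulomb(r, n_e, T_e):
    """Screened (Debye) electron-proton interaction energy (J) at separation r (m)."""
    lam = debye_length(n_e, T_e)
    return -e**2 / (4 * np.pi * epsilon_0 * r) * np.exp(-r / lam)

# Illustrative high-density hydrogen plasma conditions (assumed values).
n_e, T_e = 1e24, 2e4
print(f"lambda_D = {debye_length(n_e, T_e):.3e} m")
print(f"V(1 nm)  = {screened_coulomb(1e-9, n_e, T_e):.3e} J")
```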

12 pages, 290 KiB  
Article
Global Geometry of Bayesian Statistics
by Atsuhide Mori
Entropy 2020, 22(2), 240; https://doi.org/10.3390/e22020240 - 20 Feb 2020
Cited by 2 | Viewed by 3562
Abstract
In the previous work of the author, a non-trivial symmetry of the relative entropy in the information geometry of normal distributions was discovered. The same symmetry also appears in the symplectic/contact geometry of Hilbert modular cusps. Further, it was observed that a contact Hamiltonian flow presents a certain Bayesian inference on normal distributions. In this paper, we describe Bayesian statistics and information geometry in the language of current geometry in order to spread our interest in statistics among general geometers and topologists. Then, we foliate the space of multivariate normal distributions by symplectic leaves to generalize the above result of the author. This foliation arises from the Cholesky decomposition of the covariance matrices.
(This article belongs to the Special Issue Information Geometry III)
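For readers wanting to experiment, the relative entropy between univariate normal distributions, the basic quantity of the information geometry discussed here, has a well-known closed form. A minimal sketch (note that the symmetry studied in the paper concerns a more specific structure; the generic quantity below is asymmetric):

```python
import numpy as np

def kl_normal(mu1, s1, mu2, s2):
    """Relative entropy D(N(mu1, s1^2) || N(mu2, s2^2)) in nats."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# Relative entropy is asymmetric in general: D(P||Q) != D(Q||P).
print(kl_normal(0.0, 1.0, 1.0, 2.0))
print(kl_normal(1.0, 2.0, 0.0, 1.0))
```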

6 pages, 733 KiB  
Article
Entropy, Information, and Symmetry; Ordered Is Symmetrical, II: System of Spins in the Magnetic Field
by Edward Bormashenko
Entropy 2020, 22(2), 235; https://doi.org/10.3390/e22020235 - 19 Feb 2020
Cited by 11 | Viewed by 3753
Abstract
The second part of this paper develops an approach suggested in Entropy 2020, 22(1), 11, which relates ordering in physical systems to symmetrizing. Entropy is frequently interpreted as a quantitative measure of “chaos” or “disorder”. However, the notions of “chaos” and “disorder” are vague and subjective to a great extent. This leads to numerous misinterpretations of entropy. We propose that disorder be viewed as an absence of symmetry, and identify “ordering” with the symmetrizing of a physical system; in other words, with introducing elements of symmetry into an initially disordered physical system. We explore an initially disordered system of elementary magnets subjected to an external magnetic field H. Imposing symmetry restrictions diminishes the entropy of the system and decreases its temperature. The general case of a system of elementary magnets demonstrating j-fold symmetry is studied. The interrelation T_j = T/j takes place, where T and T_j are the temperatures of the non-symmetrized and j-fold-symmetrized systems of magnets, correspondingly.
(This article belongs to the Section Statistical Physics)

14 pages, 585 KiB  
Communication
The Brevity Law as a Scaling Law, and a Possible Origin of Zipf’s Law for Word Frequencies
by Álvaro Corral and Isabel Serra
Entropy 2020, 22(2), 224; https://doi.org/10.3390/e22020224 - 17 Feb 2020
Cited by 21 | Viewed by 5618
Abstract
An important body of quantitative linguistics is constituted by a series of statistical laws about language usage. Despite the importance of these linguistic laws, some of them are poorly formulated, and, more importantly, there is no unified framework that encompasses all of them. This paper presents a new perspective for establishing connections between different statistical linguistic laws. Characterizing each word type by two random variables—length (in number of characters) and absolute frequency—we show that the corresponding bivariate joint probability distribution shows a rich and precise phenomenology, with the type-length and the type-frequency distributions as its two marginals, and the conditional distribution of frequency at fixed length providing a clear formulation of the brevity-frequency phenomenon. The type-length distribution turns out to be well fitted by a gamma distribution (much better than by the previously proposed lognormal), and the conditional frequency distributions at fixed length display power-law-decay behavior with a fixed exponent α ≃ 1.4 and a characteristic-frequency crossover that scales as an inverse power δ ≃ 2.8 of length, which implies the fulfillment of a scaling law analogous to those found in the thermodynamics of critical phenomena. As a by-product, we find a possible model-free explanation for the origin of Zipf’s law, which should arise as a mixture of conditional frequency distributions governed by the crossover length-dependent frequency.
(This article belongs to the Special Issue Information Theory and Language)
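The gamma-versus-lognormal comparison for type lengths can be checked on data with a few lines of scipy. Below is a sketch on synthetic lengths (drawn from a gamma, so the outcome is predetermined here; real corpus lengths would replace them):

```python
import numpy as np
from scipy.stats import gamma, lognorm

rng = np.random.default_rng(5)
# Stand-in word-type lengths in characters; real values would come from a corpus.
lengths = rng.gamma(shape=3.0, scale=2.0, size=20000) + 1.0

fits = {
    "gamma": gamma(*gamma.fit(lengths, floc=0.0)),
    "lognormal": lognorm(*lognorm.fit(lengths, floc=0.0)),
}
for name, dist in fits.items():
    # Higher total log-likelihood indicates the better-fitting family.
    print(name, dist.logpdf(lengths).sum())
```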

34 pages, 514 KiB  
Article
Generalised Measures of Multivariate Information Content
by Conor Finn and Joseph T. Lizier
Entropy 2020, 22(2), 216; https://doi.org/10.3390/e22020216 - 14 Feb 2020
Cited by 21 | Viewed by 6660
Abstract
The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party correspond to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then independently derived by combining the algebraic structures of joint and shared information content.
(This article belongs to the Section Information Theory, Probability and Statistics)
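A standard three-variable example makes the negativity concrete: for Z = X XOR Y with independent fair bits X and Y, the central region of the naive Venn diagram (the interaction information) is −1 bit. A minimal check of that motivating fact (not the paper's new measures):

```python
import itertools
import numpy as np

# Joint distribution of (X, Y, Z) with X, Y fair independent bits and Z = X XOR Y.
p = {(x, y, x ^ y): 0.25 for x, y in itertools.product((0, 1), repeat=2)}

def H(*axes):
    """Entropy (bits) of the marginal over the given axis indices."""
    marg = {}
    for outcome, prob in p.items():
        key = tuple(outcome[a] for a in axes)
        marg[key] = marg.get(key, 0.0) + prob
    return -sum(q * np.log2(q) for q in marg.values() if q > 0)

# Interaction information I(X;Y;Z) via inclusion-exclusion of entropies.
I3 = H(0) + H(1) + H(2) - H(0, 1) - H(0, 2) - H(1, 2) + H(0, 1, 2)
print(I3)  # -1.0 bit: the central Venn region comes out negative
```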

17 pages, 2442 KiB  
Article
Finite-Time Thermodynamic Model for Evaluating Heat Engines in Ocean Thermal Energy Conversion
by Takeshi Yasunaga and Yasuyuki Ikegami
Entropy 2020, 22(2), 211; https://doi.org/10.3390/e22020211 - 13 Feb 2020
Cited by 30 | Viewed by 4886
Abstract
Ocean thermal energy conversion (OTEC) converts the thermal energy stored in the ocean temperature difference between warm surface seawater and cold deep seawater into electricity. The temperature difference available to drive OTEC heat engines is only 15–25 K, which theoretically yields a low thermal efficiency. Research has been conducted to propose unique systems that can increase this thermal efficiency. Thermal efficiency is generally used as the system performance metric, and researchers have focused on using the higher available temperature difference of heat engines to improve it, without considering the finite flow rate and sensible heat of seawater. In this study, our model presents a new thermodynamic concept for OTEC. The first step is to define the transferable thermal energy in OTEC by taking the equilibrium state, rather than the atmospheric condition, as the dead state. Second, the model gives the maximum available work, a new concept of exergy, by minimizing entropy generation while considering external heat loss. The maximum thermal energy and exergy allow the normalization of the first-law and second-law thermal efficiencies. These evaluation methods can be applied to optimized OTEC systems, and their effectiveness is confirmed.
(This article belongs to the Special Issue Entropy in Renewable Energy Systems)
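To see why a 15–25 K gradient is so constraining, compare the Carnot limit with the Curzon–Ahlborn efficiency at maximum power, the classic finite-time thermodynamics bound. This is background context rather than the paper's model, and the seawater temperatures are assumed typical values:

```python
def carnot(t_warm, t_cold):
    """Reversible (Carnot) efficiency limit."""
    return 1.0 - t_cold / t_warm

def curzon_ahlborn(t_warm, t_cold):
    """Efficiency at maximum power for an endoreversible engine."""
    return 1.0 - (t_cold / t_warm) ** 0.5

t_warm, t_cold = 273.15 + 28.0, 273.15 + 5.0   # assumed surface / deep seawater (K)
print(f"Carnot:         {carnot(t_warm, t_cold):.3%}")
print(f"Curzon-Ahlborn: {curzon_ahlborn(t_warm, t_cold):.3%}")
```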

12 pages, 1921 KiB  
Article
On the Irrationality of Being in Two Minds
by Shahram Dehdashti, Lauren Fell and Peter Bruza
Entropy 2020, 22(2), 174; https://doi.org/10.3390/e22020174 - 4 Feb 2020
Cited by 4 | Viewed by 4519
Abstract
This article presents a general framework that allows irrational decision making to be theoretically investigated and simulated. Rationality in human decision making under uncertainty is normatively prescribed by the axioms of probability theory in order to maximize utility. However, substantial literature from psychology and cognitive science shows that human decisions regularly deviate from these axioms. Bistable probabilities are proposed as a principled and straightforward means for modeling (ir)rational decision making, which occurs when a decision maker is in “two minds”. We show that bistable probabilities can be formalized by positive-operator-valued projections in quantum mechanics. We found that (1) irrational decision making necessarily involves a wider spectrum of causal relationships than rational decision making, (2) the accessible information turns out to be greater in irrational decision making than in rational decision making, and (3) irrational decision making is quantum-like because it violates the Bell–Wigner polytope.
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)

13 pages, 448 KiB  
Article
Nonlinear Fokker–Planck Equation Approach to Systems of Interacting Particles: Thermostatistical Features Related to the Range of the Interactions
by Angel R. Plastino and Roseli S. Wedemann
Entropy 2020, 22(2), 163; https://doi.org/10.3390/e22020163 - 31 Jan 2020
Cited by 8 | Viewed by 3587
Abstract
Nonlinear Fokker–Planck equations (NLFPEs) constitute useful effective descriptions of some interacting many-body systems. Important instances of these nonlinear evolution equations are closely related to the thermostatistics based on the S_q power-law entropic functionals. Most applications of the connection between the NLFPE and the S_q entropies have focused on systems interacting through short-range forces. In the present contribution we revisit the NLFPE approach to interacting systems in order to clarify the role played by the range of the interactions, and to explore the possibility of developing similar treatments for systems with long-range interactions, such as those corresponding to Newtonian gravitation. In particular, we consider a system of particles interacting via forces following the inverse square law and performing overdamped motion, described by a density obeying an integro-differential evolution equation that admits exact time-dependent solutions of the q-Gaussian form. These q-Gaussian solutions, which constitute a signature of S_q-thermostatistics, evolve in a similar but not identical way to the solutions of an appropriate nonlinear, power-law Fokker–Planck equation.
(This article belongs to the Special Issue Entropy and Gravitation)

22 pages, 3190 KiB  
Article
Evolution of Neuroaesthetic Variables in Portrait Paintings throughout the Renaissance
by Ivan Correa-Herran, Hassan Aleem and Norberto M. Grzywacz
Entropy 2020, 22(2), 146; https://doi.org/10.3390/e22020146 - 26 Jan 2020
Cited by 14 | Viewed by 5027
Abstract
To compose art, artists rely on a set of sensory evaluations performed fluently by the brain. The outcome of these evaluations, which we call neuroaesthetic variables, helps to compose art with high aesthetic value. In this study, we probed whether these variables varied across art periods despite relatively unvaried neural function. We measured several neuroaesthetic variables in portrait paintings from the Early and High Renaissance, and from Mannerism. The variables included symmetry, balance, and contrast (chiaroscuro), as well as intensity and spatial complexities measured by two forms of normalized entropy. The results showed that the degree of symmetry remained relatively constant during the Renaissance. However, the balance of portraits decayed abruptly at the end of the Early Renaissance, that is, at the close of the 15th century. The intensity and spatial complexities, and thus the entropies, of portraits also fell around the same time. Our data also showed that the decline of complexity and entropy could be attributed to the rise of chiaroscuro. With few exceptions, the values of aesthetic variables from the top artists of the Renaissance resembled those of their peers. We conclude that neuroaesthetic variables have the flexibility to change in the brains of artists (and observers).
(This article belongs to the Special Issue Entropy in Image Analysis II)
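A minimal sketch of one of the measured quantities, the normalized entropy of an image's intensity histogram. This is one plausible reading of the intensity-entropy measure; the paper's exact normalization and its spatial variant may differ:

```python
import numpy as np

def normalized_intensity_entropy(gray, bins=256):
    """Shannon entropy of the intensity histogram, normalized to [0, 1]
    by its maximum log2(bins); an assumed form of 'normalized entropy'."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128)                   # uniform gray: zero entropy
noisy = rng.integers(0, 256, size=(64, 64))     # uniform noise: near-maximal entropy
print(normalized_intensity_entropy(flat), normalized_intensity_entropy(noisy))
```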

20 pages, 855 KiB  
Article
Adapting Logic to Physics: The Quantum-Like Eigenlogic Program
by Zeno Toffano and François Dubois
Entropy 2020, 22(2), 139; https://doi.org/10.3390/e22020139 - 24 Jan 2020
Cited by 11 | Viewed by 4296
Abstract
Considering links between logic and physics is important because of the fast development of quantum information technologies in our everyday life. This paper discusses a new method in logic inspired by quantum theory using operators, named Eigenlogic. It expresses logical propositions using linear algebra. Logical functions are represented by operators, and logical truth tables correspond to the eigenvalue structure. It extends the possibilities of classical logic by changing the semantics from the Boolean binary alphabet {0, 1}, using projection operators, to the binary alphabet {+1, −1}, employing reversible involution operators. Also, many-valued logical operators are synthesized, for any alphabet, using operator methods based on Lagrange interpolation and on the Cayley–Hamilton theorem. Considering a superposition of logical input states, one gets a fuzzy logic representation where the fuzzy membership function is the quantum probability given by the Born rule. Historical parallels from Boole, Post, Poincaré and Combinatory Logic are presented in relation to probability theory, non-commutative quaternion algebra and Turing machines. An extension to first-order logic is proposed, inspired by Grover’s algorithm. Eigenlogic is essentially a logic of operators, and its truth-table logical semantics is provided by the eigenvalue structure, which is shown to be related to the universality of logical quantum gates, a fundamental role being played by non-commutativity and entanglement.
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
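In the {0, 1} projection-operator semantics described above, a two-input connective is a diagonal operator whose eigenvalues reproduce its truth table. A small numpy sketch (the operator names and the OR construction by inclusion-exclusion are our illustrative choices):

```python
import numpy as np

pi = np.diag([0, 1])        # projector onto the "True" state of one input
identity = np.eye(2)

# Eigenlogic-style operator for AND on two inputs: its eigenvalues on the
# canonical basis |x1 x2> are the truth table (0, 0, 0, 1).
F_and = np.kron(pi, pi)
# OR by inclusion-exclusion of projectors: A + B - A.B
F_or = np.kron(pi, identity) + np.kron(identity, pi) - np.kron(pi, pi)

print(np.diag(F_and))   # [0 0 0 1]
print(np.diag(F_or))    # [0 1 1 1]
```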

15 pages, 940 KiB  
Article
Electric Double Layers with Surface Charge Regulation Using Density Functional Theory
by Dirk Gillespie, Dimiter N. Petsev and Frank van Swol
Entropy 2020, 22(2), 132; https://doi.org/10.3390/e22020132 - 22 Jan 2020
Cited by 10 | Viewed by 3818
Abstract
Surprisingly, the local structure of electrolyte solutions in electric double layers is primarily determined by the solvent. This is initially unexpected, as the solvent is usually a neutral species and not subject to dominant Coulombic interactions. Part of the solvent dominance in determining the local structure is simply due to the much larger number of solvent molecules in a typical electrolyte solution. The dominant local packing of the solvent then creates the space left for the charged species. Our classical density functional theory work demonstrates that the solvent structural effect strongly couples to the surface chemistry, which governs the charge and potential. In this article we address some outstanding questions relating to double-layer modeling. Firstly, we address the role of ion–ion correlations that go beyond mean-field correlations. Secondly, we consider the effects of a density-dependent dielectric constant, which is crucial in the description of an electrolyte–vapor interface.

13 pages, 4382 KiB  
Article
Eigenvalues of Two-State Quantum Walks Induced by the Hadamard Walk
by Shimpei Endo, Takako Endo, Takashi Komatsu and Norio Konno
Entropy 2020, 22(1), 127; https://doi.org/10.3390/e22010127 - 20 Jan 2020
Cited by 7 | Viewed by 4453
Abstract
The existence of the eigenvalues of discrete-time quantum walks is deeply related to the localization of the walks. We revealed, for the first time, the distributions of the eigenvalues given by the splitted generating function method (the SGF method) for the space-inhomogeneous quantum walks in one dimension treated in our previous studies. In particular, we clarified the characteristic parameter dependence of the distributions of the eigenvalues with the aid of numerical simulation.
(This article belongs to the Special Issue Quantum Walks and Related Issues)
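For context, the homogeneous Hadamard walk that induces the models studied here can be simulated in a few lines. The sketch below computes the position distribution after a number of steps; the symmetric initial coin state is an arbitrary choice, and the inhomogeneous, localizing walks of the paper are not reproduced:

```python
import numpy as np

def hadamard_walk(steps):
    """Probability distribution of a 1D discrete-time Hadamard walk
    started at the origin in coin state (|0> + i|1>)/sqrt(2)."""
    n = 2 * steps + 1
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros((n, 2), dtype=complex)      # rows: positions, cols: coin
    psi[steps] = np.array([1, 1j]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                        # coin flip at every site
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]               # coin 0 moves right
        new[:-1, 1] = psi[1:, 1]               # coin 1 moves left
        psi = new
    return (abs(psi) ** 2).sum(axis=1)

p = hadamard_walk(100)
print(p.sum(), p.argmax() - 100)   # norm ~1; ballistic peaks near +/- steps/sqrt(2)
```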

19 pages, 4838 KiB  
Article
Energy and Exergy Evaluation of a Two-Stage Axial Vapour Compressor on the LNG Carrier
by Igor Poljak, Josip Orović, Vedran Mrzljak and Dean Bernečić
Entropy 2020, 22(1), 115; https://doi.org/10.3390/e22010115 - 17 Jan 2020
Cited by 8 | Viewed by 5161
Abstract
Data from a two-stage axial vapor cryogenic compressor on a dual-fuel diesel–electric (DFDE) liquefied natural gas (LNG) carrier were measured and analyzed to investigate compressor energy and exergy efficiency under real operating conditions. The running parameters of the two-stage compressor were collected while changing the main propeller shaft rpm. As the compressor supply of vaporized gas to the main engines increases, so do the load and rpm of the propulsion electric motors, and vice versa. The results show that, as the main propulsion shaft speed varied from 46 to 56 rpm, the increased mass flow rate of vaporized LNG through the two-stage compressor influenced compressor performance. The compressor’s average energy efficiency is around 50%, while its exergy efficiency is significantly lower over the whole measured range, averaging around 34%. A change in the ambient temperature from 0 to 50 °C also influences the compressor’s exergy efficiency: higher exergy efficiency is achieved at lower ambient temperatures. As the temperature increases, overall compressor exergy efficiency decreases by about 7% on average over the whole analyzed range. A new energy-saving concept for increasing compressor efficiency, based on pre-cooling of the compressor’s second stage, is also analyzed. The temperature at the second stage was varied in the range from 0 to −50 °C, which results in power savings of up to 26 kW for optimal running regimes.
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)

19 pages, 2322 KiB  
Article
Learning in Feedforward Neural Networks Accelerated by Transfer Entropy
by Adrian Moldovan, Angel Caţaron and Răzvan Andonie
Entropy 2020, 22(1), 102; https://doi.org/10.3390/e22010102 - 16 Jan 2020
Cited by 20 | Viewed by 5678
Abstract
Current neural network architectures are increasingly hard to train because of the growing size and complexity of the datasets used. Our objective is to design more efficient training algorithms utilizing causal relationships inferred from neural networks. Transfer entropy (TE) was initially introduced as an information-transfer measure used to quantify the statistical coherence between events (time series). Later, it was related to causality, even though the two are not the same. Only a few papers report applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance.
(This article belongs to the Section Information Theory, Probability and Statistics)
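Transfer entropy itself is straightforward to estimate for short histories. The sketch below uses a plug-in estimator with history length 1 on binary sequences; it recovers about 1 bit for a lagged copy and about 0 bits in the reverse direction. This illustrates the TE measure only, not the paper's feedback-training algorithm:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE from x to y (bits), history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]        # p(y1 | y0, x0)
        p_cond_hist = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)                    # y copies x with one step of lag
print(transfer_entropy(x, y))        # close to 1 bit
print(transfer_entropy(y, x))        # close to 0 bits
```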

14 pages, 3812 KiB  
Article
Determining the Bulk Parameters of Plasma Electrons from Pitch-Angle Distribution Measurements
by Georgios Nicolaou, Robert Wicks, George Livadiotis, Daniel Verscharen, Christopher Owen and Dhiren Kataria
Entropy 2020, 22(1), 103; https://doi.org/10.3390/e22010103 - 16 Jan 2020
Cited by 15 | Viewed by 5070
Abstract
Electrostatic analysers measure the flux of plasma particles in velocity space and determine their velocity distribution function. There are occasions when science objectives require high time-resolution measurements, and the instrument operates in short measurement cycles, sampling only a portion of the velocity distribution function. One such high-resolution measurement strategy consists of sampling the two-dimensional pitch-angle distributions of the plasma particles, which describe the velocities of the particles with respect to the local magnetic field direction. Here, we investigate the accuracy of plasma bulk parameters from such high-resolution measurements. We simulate electron observations from the Solar Wind Analyser’s (SWA) Electron Analyser System (EAS) on board Solar Orbiter. We show that fitting analysis of the synthetic datasets determines the plasma temperature and kappa index of the distribution within 10% of their actual values, even at large heliocentric distances where the expected solar wind flux is very low. Interestingly, we show that although measurement points with zero counts are not statistically significant, they provide information about the particle distribution function which becomes important when the particle flux is low. We also examine the convergence of the fitting algorithm for expected plasma conditions and discuss the sources of statistical and systematic uncertainties.
(This article belongs to the Special Issue Theoretical Aspects of Kappa Distributions)
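The core fitting step can be sketched with synthetic Poisson-distributed counts and a one-dimensional kappa distribution. The functional form, velocity grid, and parameter values below are illustrative assumptions, not the SWA-EAS instrument model:

```python
import numpy as np
from scipy.optimize import curve_fit

def kappa_dist(v, n0, theta, kappa):
    """1D kappa velocity distribution with amplitude n0 (unnormalized)."""
    return n0 * (1 + v**2 / (kappa * theta**2)) ** (-kappa - 1)

rng = np.random.default_rng(2)
v = np.linspace(-10, 10, 61)                      # velocity grid (arbitrary units)
truth = dict(n0=500.0, theta=1.5, kappa=4.0)
counts = rng.poisson(kappa_dist(v, **truth))      # synthetic counts, Poisson statistics

popt, _ = curve_fit(kappa_dist, v, counts, p0=(300, 1.0, 3.0))
print(dict(zip(("n0", "theta", "kappa"), popt)))  # recovered near the true values
```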

29 pages, 970 KiB  
Article
Quantifying Athermality and Quantum Induced Deviations from Classical Fluctuation Relations
by Zoë Holmes, Erick Hinds Mingo, Calvin Y.-R. Chen and Florian Mintert
Entropy 2020, 22(1), 111; https://doi.org/10.3390/e22010111 - 16 Jan 2020
Cited by 2 | Viewed by 4521
Abstract
In recent years, a quantum information theoretic framework has emerged for incorporating non-classical phenomena into fluctuation relations. Here, we elucidate this framework by exploring deviations from classical fluctuation relations resulting from the athermality of the initial thermal system and the quantum coherence of the system’s energy supply. In particular, we develop Crooks-like equalities for an oscillator system which is prepared either in photon-added or photon-subtracted thermal states, and derive a Jarzynski-like equality for average work extraction. We use these equalities to discuss the extent to which adding or subtracting a photon increases the informational content of a state, thereby amplifying the suppression of free-energy-increasing processes. We go on to derive a Crooks-like equality for an energy supply that is prepared in a pure binomial state, leading to a non-trivial contribution from energy and coherence to the resultant irreversibility. We show how the binomial-state equality relates to a previously derived coherent-state equality and offers a richer feature set.

27 pages, 1642 KiB  
Article
The Convex Information Bottleneck Lagrangian
by Borja Rodríguez Gálvez, Ragnar Thobaben and Mikael Skoglund
Entropy 2020, 22(1), 98; https://doi.org/10.3390/e22010098 - 14 Jan 2020
Cited by 23 | Viewed by 5468
Abstract
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T; Y), while ensuring that a certain level of compression r is achieved (i.e., I(X; T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian (i.e., L_IB(T; β) = I(T; Y) − βI(X; T)) for many values of β ∈ [0, 1]. Then, the curve of maximal I(T; Y) for a given I(X; T) is drawn and a representation with the desired predictability and compression is selected. It is known that, when Y is a deterministic function of X, the IB curve cannot be explored, and another Lagrangian has been proposed to tackle this problem: the squared IB Lagrangian, L_sq-IB(T; β_sq) = I(T; Y) − β_sq I(X; T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier. That is, we prove that we can solve the original constrained problem with a single optimization.
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)

13 pages, 5513 KiB  
Article
On Heat Transfer Performance of Cooling Systems Using Nanofluid for Electric Motor Applications
by Ali Deriszadeh and Filippo de Monte
Entropy 2020, 22(1), 99; https://doi.org/10.3390/e22010099 - 14 Jan 2020
Cited by 32 | Viewed by 8222
Abstract
This paper studies the fluid flow and heat transfer characteristics of nanofluids as advanced coolants for the cooling system of electric motors. Investigations are carried out using numerical analysis for a cooling system with spiral channels. To solve the governing equations, computational fluid dynamics and 3D fluid motion analysis are used. The base fluid is water with a laminar flow. The fluid Reynolds number and the turn-number of the spiral channels are the evaluation parameters. The effect of the nanoparticle volume fraction in the base fluid on the heat transfer performance of the cooling system is studied. Increasing the volume fraction of nanoparticles improves the heat transfer performance of the cooling system. On the other hand, a high volume fraction of the nanofluid increases the pressure drop of the coolant fluid and increases the required pumping power. This paper aims at finding a trade-off between the effective parameters by studying both the fluid flow and heat transfer characteristics of the nanofluid.

12 pages, 607 KiB  
Article
On Unitary t-Designs from Relaxed Seeds
by Rawad Mezher, Joe Ghalbouni, Joseph Dgheim and Damian Markham
Entropy 2020, 22(1), 92; https://doi.org/10.3390/e22010092 - 12 Jan 2020
Cited by 3 | Viewed by 5187
Abstract
The capacity to randomly pick a unitary across the whole unitary group is a powerful tool across physics and quantum information. A unitary t-design is designed to tackle this challenge in an efficient way, yet constructions to date rely on heavy constraints. In particular, they are composed of ensembles of unitaries which, for technical reasons, must contain inverses and whose entries are algebraic. In this work, we reduce the requirements for generating an ε-approximate unitary t-design. To do so, we first construct a specific n-qubit random quantum circuit composed of a sequence of randomly chosen 2-qubit gates, drawn from a set of unitaries which is approximately universal on U(4), yet need not contain unitaries and their inverses, nor need it be composed of unitaries whose entries are algebraic; we dub this a relaxed seed. We then show that this relaxed seed, when used as a basis for our construction, gives rise to an ε-approximate unitary t-design efficiently, where the depth of our random circuit scales as poly(n, t, log(1/ε)), thereby overcoming the two requirements which limited previous constructions. We suspect the result found here is not optimal and can be improved, particularly because the number of gates in the relaxed seeds introduced here grows with n and t. We conjecture that constant-sized seeds, such as those usually present in the literature, are sufficient.
(This article belongs to the Special Issue Quantum Information: Fragility and the Challenges of Fault Tolerance)

25 pages, 832 KiB  
Concept Paper
Introduction to Extreme Seeking Entropy
by Jan Vrba and Jan Mareš
Entropy 2020, 22(1), 93; https://doi.org/10.3390/e22010093 - 12 Jan 2020
Cited by 7 | Viewed by 4087
Abstract
Recently, the concept of evaluating an unusually large learning effort of an adaptive system to detect novelties in the observed data was introduced. The present paper introduces a new measure of the learning effort of an adaptive system. The proposed method also uses adaptable parameters. Instead of a multi-scale enhanced approach, the generalized Pareto distribution is employed to estimate the probability of unusual updates, as well as to detect novelties. This measure was successfully tested in various scenarios with (i) synthetic data and (ii) real time-series datasets, and with multiple adaptive filters and learning algorithms. The results of these experiments are presented.
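The tail-modeling step is straightforward to sketch with scipy: fit a generalized Pareto distribution to threshold exceedances of a learning-effort stream and flag updates whose tail probability is tiny. The data, threshold choice, and flagged value below are illustrative assumptions, not the paper's procedure:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
# Stand-in for a stream of adaptive-filter weight-update magnitudes.
updates = np.abs(rng.standard_t(df=3, size=5000))

threshold = np.quantile(updates, 0.95)
exceedances = updates[updates > threshold] - threshold

# Peaks-over-threshold: model tail exceedances with a generalized Pareto law.
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
new_update = 12.0   # a suspiciously large learning effort (hypothetical)
p_tail = genpareto.sf(new_update - threshold, shape, loc=loc, scale=scale)
print(f"tail prob of update {new_update}: {p_tail:.2e}")  # tiny -> flag as novelty
```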

17 pages, 384 KiB  
Article
Energy Disaggregation Using Elastic Matching Algorithms
by Pascal A. Schirmer, Iosif Mporas and Michael Paraskevas
Entropy 2020, 22(1), 71; https://doi.org/10.3390/e22010071 - 6 Jan 2020
Cited by 25 | Viewed by 4015
Abstract
In this article, an energy disaggregation architecture using elastic matching algorithms is presented. The architecture uses a database of reference energy consumption signatures and compares them with incoming energy consumption frames using template matching. In contrast to machine-learning-based approaches, which require a significant amount of data to train a model, elastic matching-based approaches have no model training process but perform recognition using template matching. Five different elastic matching algorithms were evaluated across different datasets, and the experimental results showed that the minimum variance matching algorithm outperforms all other evaluated matching algorithms. The best-performing minimum variance matching algorithm improved the energy disaggregation accuracy by 2.7% when compared to the baseline dynamic time warping algorithm.
(This article belongs to the Section Signal and Data Analysis)
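The baseline elastic matcher, dynamic time warping, fits in a dozen lines. The sketch below compares a reference appliance signature with a time-stretched observation; the toy signatures are assumptions, and the minimum variance matching variant that wins in the paper is not shown:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy power signatures (W): a reference appliance cycle and a stretched observation.
reference = np.array([0, 50, 200, 200, 80, 0], float)
observed = np.array([0, 40, 190, 210, 205, 90, 5], float)
print(dtw_distance(reference, observed))
```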

14 pages, 12760 KiB  
Article
Development of Novel Lightweight Dual-Phase Al-Ti-Cr-Mn-V Medium-Entropy Alloys with High Strength and Ductility
by Yu-Chin Liao, Po-Sung Chen, Chao-Hsiu Li, Pei-Hua Tsai, Jason S. C. Jang, Ker-Chang Hsieh, Chih-Yen Chen, Ping-Hung Lin, Jacob C. Huang, Hsin-Jay Wu, Yu-Chieh Lo, Chang-Wei Huang and I-Yu Tsao
Entropy 2020, 22(1), 74; https://doi.org/10.3390/e22010074 - 6 Jan 2020
Cited by 16 | Viewed by 5559
Abstract
A novel lightweight Al-Ti-Cr-Mn-V medium-entropy alloy (MEA) system was developed using a nonequiatomic approach, and the alloys were produced through arc melting and drop casting. These alloys comprise a body-centered cubic (BCC) and face-centered cubic (FCC) dual phase with a density of approximately 4.5 g/cm³. The fraction of the BCC phase and the morphology of the FCC phase can be controlled by incorporating other elements. The results of compression tests indicated that these Al-Ti-Cr-Mn-V alloys exhibit a prominent compression strength (~1940 MPa) and ductility (~30%). Moreover, homogenized samples maintained a high compression strength of 1900 MPa and similar ductility (30%). Due to the high specific compressive strength (0.433 GPa·cm³/g) and the excellent combination of strength and ductility, the cast lightweight Al-Ti-Cr-Mn-V MEAs are a promising alloy system for applications in the transportation and energy industries.
(This article belongs to the Special Issue High-Entropy Materials)
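The "medium-entropy" label refers to the ideal configurational entropy of mixing, ΔS_mix = −R Σᵢ cᵢ ln cᵢ. A quick check with a hypothetical nonequiatomic composition (the fractions below are illustrative, not the paper's alloys):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mixing_entropy(fractions):
    """Ideal configurational entropy of mixing: -R * sum(c_i * ln c_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return -R * sum(c * math.log(c) for c in fractions if c > 0)

# Hypothetical nonequiatomic Al-Ti-Cr-Mn-V composition (illustrative numbers only).
comp = {"Al": 0.30, "Ti": 0.30, "Cr": 0.15, "Mn": 0.15, "V": 0.10}
s = mixing_entropy(comp.values())
print(f"{s:.2f} J/(mol K) = {s / R:.2f} R")  # ~1.5R, the medium-entropy range
```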

13 pages, 2502 KiB  
Article
Bounds on Mixed State Entanglement
by Bruno Leggio, Anna Napoli, Hiromichi Nakazato and Antonino Messina
Entropy 2020, 22(1), 62; https://doi.org/10.3390/e22010062 - 1 Jan 2020
Cited by 6 | Viewed by 3920
Abstract
In the general framework of d₁ × d₂ mixed states, we derive an explicit bound for bipartite negative partial transpose (NPT) entanglement based on the mixedness characterization of the physical system. The derived result is very general, being based only on the assumption of finite dimensionality. In addition, it turns out to be of experimental interest, since some purity-measuring protocols are known. Exploiting the bound in the particular case of thermal entanglement, a way to connect thermodynamic features to the monogamy of quantum correlations is suggested, and some recent results on the subject are given a physically clear explanation.
(This article belongs to the Section Quantum Information)
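The NPT entanglement the bound targets is detected by the partial transpose. The sketch below computes the standard negativity for two-qubit Werner states; it illustrates the quantity being bounded, not the paper's bound itself:

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose of a two-qubit density matrix over the second qubit."""
    r = rho.reshape(2, 2, 2, 2)           # indices: i, j, i', j'
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def negativity(rho):
    """Sum of the absolute values of the negative eigenvalues of rho^{T_B}."""
    eig = np.linalg.eigvalsh(partial_transpose(rho))
    return float(-eig[eig < 0].sum())

psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)   # singlet state
singlet = np.outer(psi_minus, psi_minus)
for p in (0.2, 1/3, 0.6, 1.0):                     # Werner-state mixedness scan
    rho = p * singlet + (1 - p) * np.eye(4) / 4
    print(f"p = {p:.2f}:  N = {negativity(rho):.4f}")   # NPT iff p > 1/3
```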

15 pages, 2142 KiB  
Article
Nonlinear Information Bottleneck
by Artemy Kolchinsky, Brendan D. Tracey and David H. Wolpert
Entropy 2019, 21(12), 1181; https://doi.org/10.3390/e21121181 - 30 Nov 2019
Cited by 105 | Viewed by 10630
Abstract
Information bottleneck (IB) is a technique for extracting information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed “bottleneck” random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently has been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case the optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently proposed “variational IB” method on several real-world datasets.
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)

20 pages, 6051 KiB  
Article
OTEC Maximum Net Power Output Using Carnot Cycle and Application to Simplify Heat Exchanger Selection
by Kevin Fontaine, Takeshi Yasunaga and Yasuyuki Ikegami
Entropy 2019, 21(12), 1143; https://doi.org/10.3390/e21121143 - 22 Nov 2019
Cited by 30 | Viewed by 8067
Abstract
Ocean thermal energy conversion (OTEC) uses the natural thermal gradient in the sea. It has been investigated with the aim of making it competitive with conventional power plants, as it has huge potential and can produce energy steadily throughout the year. This has been done mostly by focusing on improving cycle performance or central elements of OTEC, such as heat exchangers. It is difficult to choose a suitable heat exchanger for OTEC from the separate evaluations of the heat transfer coefficient and the pressure drop that are usually found in the literature. Accordingly, this paper presents a method to evaluate heat exchangers for OTEC. On the basis of finite-time thermodynamics, the maximum net power output for different heat exchangers, using both heat transfer performance and pressure drop, was assessed and compared. This method was successfully applied to three heat exchangers. The most suitable heat exchanger was found to yield a maximum net power output 158% higher than that of the least suitable heat exchanger. For a difference of 3.7% in the net power output, a difference of 22% in the Reynolds numbers was found; those numbers therefore also play a significant role in the choice of heat exchangers, as they affect the pumping power required for seawater flow. A sensitivity analysis showed that seawater temperature does not affect the choice of heat exchangers, even though the net power output was found to decrease by up to 10% with every 1 °C drop in temperature difference.
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
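The finite-time trade-off at the heart of the method can be sketched numerically: gross power from a Carnot-like cycle grows with the heat drawn through the exchangers, while moving the seawater costs pumping power. The snippet below is a generic endoreversible illustration under assumed temperatures, conductances, and pumping power; it is not the paper's heat-exchanger model.

```python
# Generic endoreversible sketch of the net-power trade-off: more temperature
# difference across the evaporator draws more heat but lowers the cycle's
# Carnot efficiency, while pumping power is paid to move seawater.  All
# temperatures, conductances, and the pumping power are assumed toy values.
import numpy as np

T_warm, T_cold = 28.0 + 273.15, 5.0 + 273.15   # seawater temperatures [K]
UA_evap = 2.0e6                                # evaporator conductance [W/K]

def net_power(dT_evap, dT_cond, pump_power):
    """Carnot-cycle gross power between internal temperatures, minus pumping."""
    T_h = T_warm - dT_evap         # working fluid temperature at the evaporator
    T_c = T_cold + dT_cond         # working fluid temperature at the condenser
    q_in = UA_evap * dT_evap       # heat drawn from the warm seawater [W]
    return q_in * (1.0 - T_c / T_h) - pump_power

# Scan the evaporator pinch for a fixed condenser pinch and pumping power.
dTs = np.linspace(0.5, 15.0, 300)
powers = np.array([net_power(dT, 2.0, 5.0e4) for dT in dTs])
i = int(powers.argmax())
print(f"max net power ~ {powers[i] / 1e3:.0f} kW at dT_evap ~ {dTs[i]:.1f} K")
```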

38 pages, 4465 KiB  
Article
Topological Information Data Analysis
by Pierre Baudot, Monica Tapia, Daniel Bennequin and Jean-Marc Goaillard
Entropy 2019, 21(9), 869; https://doi.org/10.3390/e21090869 - 6 Sep 2019
Cited by 44 | Viewed by 10171
Abstract
This paper presents methods that quantify the structure of statistical interactions within a given data set, and were applied in a previous article. It establishes new results on the k-multivariate mutual information (I_k) inspired by the topological formulation of information introduced in a series of studies. In particular, we show that the vanishing of all I_k for 2 ≤ k ≤ n of n random variables is equivalent to their statistical independence. Pursuing the work of Hu Kuo Ting and Te Sun Han, we show that information functions provide coordinates for binary variables, and that they are analytically independent of the probability simplex for any set of finite variables. The maximal positive I_k identifies the variables that co-vary the most in the population, whereas the minimal negative I_k identifies synergistic clusters and the variables that differentiate and segregate the most in the population. Finite data size effects and estimation biases severely constrain the effective computation of the information topology on data, and we provide simple statistical tests for the undersampling bias and the k-dependences. We give an example of the application of these methods to genetic expression and unsupervised cell-type classification. The methods unravel biologically relevant subtypes, with a sample size of 41 genes and with few errors. This establishes generic basic methods for quantifying epigenetic information storage and a unified epigenetic unsupervised learning formalism. We propose that higher-order statistical interactions and non-identically distributed variables are constitutive characteristics of biological systems that should be estimated in order to unravel their significant statistical structure and diversity. The topological information data analysis presented here allows for precisely estimating this higher-order structure characteristic of biological systems. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
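The central quantity I_k can be computed directly from subset entropies by inclusion–exclusion, I_k = Σ over non-empty S of (−1)^(|S|+1) H(X_S). The toy sketch below (synthetic binary data, plug-in entropies, no bias correction) reproduces the hallmark negative I_3 of a synergistic XOR triple.

```python
# I_k computed by inclusion-exclusion over subset entropies,
#   I_k = sum over non-empty S of (-1)^(|S|+1) * H(X_S),
# on synthetic binary data with plug-in entropies and no bias correction.
from itertools import combinations
import numpy as np

def entropy(samples):
    """Shannon entropy (nats) of the empirical joint law of the columns."""
    _, counts = np.unique(samples, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def I_k(data):
    """k-variable mutual information of the columns of data (n_samples, k)."""
    k = data.shape[1]
    return sum((-1) ** (len(S) + 1) * entropy(data[:, list(S)])
               for size in range(1, k + 1)
               for S in combinations(range(k), size))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = rng.integers(0, 2, 5000)       # independent of x
z = x ^ y                          # XOR: pairwise independent, jointly dependent
data = np.column_stack([x, y, z])
# Expect I_2(x, y) ~ 0 and I_3(x, y, z) ~ -ln 2 (the synergistic signature).
print(f"I_2 = {I_k(data[:, :2]):.3f} nats, I_3 = {I_k(data):.3f} nats")
```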

17 pages, 6736 KiB  
Article
Image Encryption Scheme with Compressed Sensing Based on New Three-Dimensional Chaotic System
by Yaqin Xie, Jiayin Yu, Shiyu Guo, Qun Ding and Erfu Wang
Entropy 2019, 21(9), 819; https://doi.org/10.3390/e21090819 - 22 Aug 2019
Cited by 55 | Viewed by 4766
Abstract
In this paper, a new three-dimensional chaotic system is proposed for image encryption. The core of the encryption algorithm is the combination of a chaotic system and compressed sensing, which can complete image encryption and compression at the same time. The Lyapunov exponents, bifurcation diagram and complexity of the new three-dimensional chaotic system are analyzed. The performance analysis shows that the chaotic system has two positive Lyapunov exponents and high complexity. In the encryption scheme, the new chaotic system is used to build the measurement matrix for compressed sensing, and an Arnold transform is used to further scramble the image. Compared with other methods, the proposed method has better reconstruction ability within the compressible range of the algorithm. The experimental results show that the proposed encryption scheme has a good encryption effect and image compression capability. Full article
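The overall pipeline (a chaotic sequence filling the measurement matrix, followed by Arnold scrambling) can be sketched generically. The paper's specific three-dimensional system is not reproduced here; the classic Lorenz system serves as a stand-in chaos source, and all sizes are toy values.

```python
# Generic sketch of the pipeline: a chaotic trajectory fills the compressed-
# sensing measurement matrix, and an Arnold cat map scrambles the image.
# The paper's own 3-D system is not reproduced; the classic Lorenz system is
# used purely as a stand-in chaos source, and all sizes are toy values.
import numpy as np

def lorenz_stream(n, dt=0.01, state=(1.0, 1.0, 1.0)):
    """n values of the x-component of a Lorenz trajectory (Euler steps)."""
    x, y, z = state
    out = np.empty(n)
    for i in range(n):
        dx, dy, dz = 10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

def arnold(img, times=1):
    """Arnold cat map (i, j) -> (i + j, i + 2j) mod n on a square image."""
    n = img.shape[0]
    for _ in range(times):
        scrambled = np.empty_like(img)
        for i in range(n):
            for j in range(n):
                scrambled[(i + j) % n, (i + 2 * j) % n] = img[i, j]
        img = scrambled
    return img

N, M = 64, 32                              # image pixels and measurements (M < N)
phi = lorenz_stream(M * N).reshape(M, N)   # chaos-generated measurement matrix
phi /= np.sqrt(M)                          # rough normalization

img = np.zeros((8, 8)); img[2, 3] = img[5, 1] = 1.0   # sparse toy "image"
measurement = phi @ arnold(img, times=3).reshape(N)   # scramble, then compress
print(measurement.shape)                              # -> (32,)
```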

25 pages, 3053 KiB  
Article
Distinguishing between Clausius, Boltzmann and Pauling Entropies of Frozen Non-Equilibrium States
by Rainer Feistel
Entropy 2019, 21(8), 799; https://doi.org/10.3390/e21080799 - 15 Aug 2019
Cited by 11 | Viewed by 7752
Abstract
In conventional textbook thermodynamics, entropy is a quantity that may be calculated by different methods, for example, experimentally from heat capacities (following Clausius) or statistically from the number of microscopic quantum states (following Boltzmann and Planck). It has turned out that these methods do not necessarily provide mutually consistent results, and for equilibrium systems their difference was explained by introducing a residual zero-point entropy (following Pauling), apparently violating the Nernst theorem. At finite temperatures, the associated statistical entropies, which count microstates that do not contribute to a body’s heat capacity, differ systematically from the Clausius entropy, and are of particular relevance as measures for metastable, frozen-in non-equilibrium structures and for symbolic information processing (following Shannon). In this paper, it is suggested that the Clausius, Boltzmann, Pauling and Shannon entropies be considered as distinct, though related, physical quantities with different key properties, in order to avoid the confusion caused by loosely speaking about just “entropy” while actually referring to different kinds of it. For instance, zero-point entropy exclusively belongs to the Boltzmann rather than the Clausius entropy, while the Nernst theorem holds rigorously for the Clausius rather than the Boltzmann entropy. The discussion of these terms is underpinned by a brief historical review of the emergence of the corresponding fundamental thermodynamic concepts. Full article
(This article belongs to the Special Issue Crystallization Thermodynamics)
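A worked example of the zero-point entropy in question is Pauling's classic estimate for hexagonal ice: the ice rules permit roughly (3/2)^N proton configurations for N molecules, so a Boltzmann-type entropy survives at T → 0 even though the Clausius entropy, integrated from heat capacities, does not register it.

```python
# Pauling's residual (zero-point) entropy of hexagonal ice: the ice rules
# leave about (3/2)^N allowed proton configurations for N molecules, so the
# molar Boltzmann entropy that survives at T -> 0 is R * ln(3/2).
import math

R = 8.314462618                      # molar gas constant [J/(mol K)]
print(f"S_residual = R ln(3/2) = {R * math.log(1.5):.2f} J/(mol K)")  # ~3.37
```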

16 pages, 754 KiB  
Article
Comparing Information Metrics for a Coupled Ornstein–Uhlenbeck Process
by James Heseltine and Eun-jin Kim
Entropy 2019, 21(8), 775; https://doi.org/10.3390/e21080775 - 8 Aug 2019
Cited by 22 | Viewed by 4478
Abstract
It is often the case when studying complex dynamical systems that a statistical formulation provides the greatest insight into the underlying dynamics. When discussing the behavior of such a system as it evolves in time, it is useful to have the notion of a metric between two given states. A popular measure of information change in a system under perturbation has been the relative entropy of the states, as this notion allows us to quantify the difference between states of a system at different times. In this paper, we investigate the relaxation problem given by single and coupled Ornstein–Uhlenbeck (O-U) processes and compare the information length with entropy-based metrics (relative entropy, Jensen divergence) as well as others. By measuring the total information length in the long-time limit, we show that it is only the information length that preserves the linear geometry of the O-U process. In the coupled O-U process, the information length is shown to be capable of detecting changes in both components of the system, even when other metrics would detect almost nothing in one of the components. We show in detail that the information length is sensitive to the evolution of subsystems. Full article
(This article belongs to the Special Issue Statistical Mechanics and Mathematical Physics)
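For a single O-U relaxation the information length reduces to a time integral: a Gaussian p(x,t) with mean μ(t) and variance σ²(t) has squared statistical speed E(t) = μ̇²/σ² + 2σ̇²/σ², and L = ∫√E dt. The sketch below evaluates this numerically for assumed toy parameters, not the paper's settings.

```python
# Information length of a single O-U relaxation.  A Gaussian p(x, t) with
# mean mu(t) and variance var(t) has squared statistical speed
#   E(t) = mu'(t)^2 / var + 2 * sigma'(t)^2 / var,
# and L = integral of sqrt(E) dt.  Parameters are assumed toy values.
import numpy as np

gamma, D = 1.0, 0.5                  # dx = -gamma * x dt + sqrt(2D) dW
mu0, var0 = 2.0, 0.04                # initial Gaussian state
var_eq = D / gamma                   # equilibrium variance

t = np.linspace(0.0, 20.0, 20001)
mu = mu0 * np.exp(-gamma * t)
var = var_eq + (var0 - var_eq) * np.exp(-2.0 * gamma * t)
sigma = np.sqrt(var)

E = np.gradient(mu, t) ** 2 / var + 2.0 * np.gradient(sigma, t) ** 2 / var
rate = np.sqrt(E)
L = float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t)))   # trapezoid rule
print(f"information length L ~ {L:.3f}")
```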

18 pages, 1620 KiB  
Article
Power, Efficiency and Fluctuations in a Quantum Point Contact as Steady-State Thermoelectric Heat Engine
by Sara Kheradsoud, Nastaran Dashti, Maciej Misiorny, Patrick P. Potts, Janine Splettstoesser and Peter Samuelsson
Entropy 2019, 21(8), 777; https://doi.org/10.3390/e21080777 - 8 Aug 2019
Cited by 34 | Viewed by 5277
Abstract
The trade-off between large power output, high efficiency and small fluctuations in the operation of heat engines has recently received interest in the context of thermodynamic uncertainty relations (TURs). Here we provide a concrete illustration of this trade-off by theoretically investigating the operation of a quantum point contact (QPC) with an energy-dependent transmission function as a steady-state thermoelectric heat engine. As a starting point, we review and extend previous analysis of the power production and efficiency. Thereafter, the power fluctuations and the bound jointly imposed on the power, efficiency, and fluctuations by the TURs are analyzed as additional performance quantifiers. We allow for arbitrary smoothness of the transmission probability of the QPC, which exhibits a close-to-step-like dependence on energy, and consider both the linear and the non-linear regime of operation. It is found that, for a broad range of parameters, the power production reaches nearly its theoretical maximum value, with efficiencies of more than half of the Carnot efficiency and, at the same time, rather small fluctuations. Moreover, we show that by demanding a non-zero power production, in the linear regime a stronger TUR can be formulated in terms of the thermoelectric figure of merit. Interestingly, this bound also holds in a wide parameter regime beyond linear response for our QPC device. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)
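The engine's working principle can be sketched with a Landauer-type calculation: a smoothed step transmission between a hot and a cold reservoir carries a particle current against a bias, producing electrical power at an efficiency below Carnot. The sketch below uses units with e = h = k_B = 1, and all parameter values are illustrative assumptions, not the paper's.

```python
# Landauer sketch of the QPC heat engine in units e = h = k_B = 1: a smoothed
# step transmission between hot (left) and cold (right) reservoirs drives a
# particle current against the bias, delivering power P at efficiency P/J_hot.
# All parameter values are illustrative assumptions, not the paper's.
import numpy as np

def fermi(E, mu, T):
    return 1.0 / (1.0 + np.exp((E - mu) / T))

E = np.linspace(-20.0, 40.0, 60001)
dE = E[1] - E[0]
E0, smooth = 4.0, 0.5                  # step position and smoothness of the QPC
mu_hot, mu_cold = 0.0, 1.0             # current flows uphill against this bias
T_hot, T_cold = 4.0, 1.0

trans = 1.0 / (1.0 + np.exp(-(E - E0) / smooth))    # step-like transmission
df = fermi(E, mu_hot, T_hot) - fermi(E, mu_cold, T_cold)

I = np.sum(trans * df) * dE                         # particle current
J_hot = np.sum((E - mu_hot) * trans * df) * dE      # heat drawn from hot lead
P = (mu_cold - mu_hot) * I                          # electrical output power
print(f"P = {P:.3f}, eta = {P / J_hot:.3f}, eta_Carnot = {1 - T_cold / T_hot:.3f}")
```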

40 pages, 955 KiB  
Article
Two Measures of Dependence
by Amos Lapidoth and Christoph Pfister
Entropy 2019, 21(8), 778; https://doi.org/10.3390/e21080778 - 8 Aug 2019
Cited by 14 | Viewed by 4199
Abstract
Two families of dependence measures between random variables are introduced. They are based on the Rényi divergence of order α and the relative α-entropy, respectively, and both dependence measures reduce to Shannon’s mutual information when their order α is one. The first measure shares many properties with the mutual information, including the data-processing inequality, and can be related to the optimal error exponents in composite hypothesis testing. The second measure does not satisfy the data-processing inequality, but appears naturally in the context of distributed task encoding. Full article
(This article belongs to the Special Issue Information Measures with Applications)
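The α → 1 limit can be checked numerically on a toy joint distribution: the Rényi divergence between the joint and the product of its marginals approaches Shannon mutual information. (The paper's measures additionally minimize over product distributions; that step is omitted in this sketch.)

```python
# Numerical check that the Renyi divergence between the joint law and the
# product of its marginals tends to Shannon mutual information as alpha -> 1.
# (The paper's measures also minimize over product distributions; that step
# is omitted here.)
import numpy as np

def renyi_div(p, q, alpha):
    """D_alpha(p || q) in nats for strictly positive discrete distributions."""
    return float(np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0))

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_prod = np.outer(p_xy.sum(axis=1), p_xy.sum(axis=0))
mi = float(np.sum(p_xy * np.log(p_xy / p_prod)))

for alpha in (0.5, 0.9, 0.99, 1.01, 1.5):
    d = renyi_div(p_xy.ravel(), p_prod.ravel(), alpha)
    print(f"alpha = {alpha:4}: D_alpha = {d:.4f}")
print(f"I(X;Y) = {mi:.4f} nats")
```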

29 pages, 4850 KiB  
Review
Quantum Phonon Transport in Nanomaterials: Combining Atomistic with Non-Equilibrium Green’s Function Techniques
by Leonardo Medrano Sandonas, Rafael Gutierrez, Alessandro Pecchia, Alexander Croy and Gianaurelio Cuniberti
Entropy 2019, 21(8), 735; https://doi.org/10.3390/e21080735 - 27 Jul 2019
Cited by 16 | Viewed by 6634
Abstract
A crucial goal for increasing thermal energy harvesting will be to progress towards atomistic design strategies for smart nanodevices and nanomaterials. This requires the combination of computationally efficient atomistic methodologies with quantum-transport-based approaches. Here, we review our recent work on this problem by presenting selected applications of the PHONON tool to the description of phonon transport in nanostructured materials. The PHONON tool is a module developed as part of the Density-Functional Tight-Binding (DFTB) software platform. We discuss the anisotropic phonon band structure of selected puckered two-dimensional materials, helical and horizontal doping effects in the phonon thermal conductivity of boron nitride-carbon heteronanotubes, phonon filtering in molecular junctions, and a novel computational methodology to investigate time-dependent phonon transport at the atomistic level. These examples illustrate the versatility of our implementation of phonon transport in combination with density-functional-based methods to address specific nanoscale functionalities, thus potentially allowing for the design of novel thermal devices. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)
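The last step of such a calculation is generic: once an atomistic transmission function T(ω) is available, the phononic thermal conductance follows from the Landauer formula κ(T) = (1/2π)∫ ħω T(ω) (∂n_B/∂T) dω. The sketch below uses a toy single-channel transmission rather than a DFTB/NEGF result.

```python
# Landauer thermal conductance from a given phonon transmission T(omega):
#   kappa(T) = (1 / 2 pi) * int hbar * omega * T(omega) * dn_B/dT  domega.
# The flat transmission below is a toy stand-in for an atomistic
# DFTB/NEGF result.
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

def thermal_conductance(T, omega, trans):
    x = hbar * omega / (kB * T)
    dnB_dT = (hbar * omega / (kB * T ** 2)) * np.exp(x) / np.expm1(x) ** 2
    integrand = hbar * omega * trans * dnB_dT / (2.0 * np.pi)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega)))

omega = np.linspace(1e10, 1e14, 20001)     # angular frequencies [rad/s]
trans = np.ones_like(omega)                # toy single ballistic channel
print(f"kappa(300 K) ~ {thermal_conductance(300.0, omega, trans):.3e} W/K")
```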

13 pages, 1267 KiB  
Article
Electron Traversal Times in Disordered Graphene Nanoribbons
by Michael Ridley, Michael A. Sentef and Riku Tuovinen
Entropy 2019, 21(8), 737; https://doi.org/10.3390/e21080737 - 27 Jul 2019
Cited by 12 | Viewed by 4363
Abstract
Using the partition-free time-dependent Landauer–Büttiker formalism for transient current correlations, we study the traversal times taken for electrons to cross graphene nanoribbon (GNR) molecular junctions. We demonstrate electron traversal signatures that vary with disorder and orientation of the GNR. These findings can be related to operational frequencies of GNR-based devices and their consequent rational design. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)

16 pages, 308 KiB  
Article
Empirical Estimation of Information Measures: A Literature Guide
by Sergio Verdú
Entropy 2019, 21(8), 720; https://doi.org/10.3390/e21080720 - 24 Jul 2019
Cited by 48 | Viewed by 11392
Abstract
We give a brief survey of the literature on the empirical estimation of entropy, differential entropy, relative entropy, mutual information and related information measures. While those quantities are of central importance in information theory, universal algorithms for their estimation are increasingly important in data science, machine learning, biology, neuroscience, economics, language, and other experimental sciences. Full article
(This article belongs to the Special Issue Information Measures with Applications)
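Two of the most elementary estimators covered by such surveys can be stated in a few lines: the plug-in entropy estimator and its Miller–Madow bias correction, which adds (K̂ − 1)/(2N) for K̂ observed symbols and N samples. The toy comparison below assumes a uniform source and a deliberately small sample.

```python
# Plug-in entropy versus the Miller-Madow correction, which adds
# (K_observed - 1) / (2 N) to offset the plug-in estimator's downward bias.
# The uniform 20-symbol source and sample size are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(1)
K, N = 20, 100
samples = rng.integers(0, K, N)             # uniform source, undersampled

counts = np.bincount(samples, minlength=K)
p_hat = counts[counts > 0] / N

h_plugin = float(-np.sum(p_hat * np.log(p_hat)))
h_mm = h_plugin + (np.count_nonzero(counts) - 1) / (2.0 * N)
print(f"true H = {np.log(K):.3f}, plug-in = {h_plugin:.3f}, "
      f"Miller-Madow = {h_mm:.3f} (nats)")
```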

27 pages, 336 KiB  
Article
Dynamic Maximum Entropy Reduction
by Václav Klika, Michal Pavelka, Petr Vágner and Miroslav Grmela
Entropy 2019, 21(7), 715; https://doi.org/10.3390/e21070715 - 22 Jul 2019
Cited by 24 | Viewed by 5621
Abstract
Any physical system can be regarded on different levels of description, which vary in how detailed the description is. We propose a method called Dynamic MaxEnt (DynMaxEnt) that provides a passage from the more detailed evolution equations to equations for the less detailed state variables. The method is based on the explicit recognition of the state and conjugate variables, which can relax towards their respective quasi-equilibria in different ways. Detailed state variables are reduced using the usual principle of maximum entropy (MaxEnt), whereas the relaxation of conjugate variables guarantees that the reduced equations are closed. Moreover, an infinite chain of consecutive DynMaxEnt approximations can be constructed. The method is demonstrated on a particle with friction, complex fluids (equipped with conformation and Reynolds stress tensors), hyperbolic heat conduction and magnetohydrodynamics. Full article
(This article belongs to the Special Issue Entropy and Non-Equilibrium Statistical Mechanics)

22 pages, 1131 KiB  
Communication
Derivations of the Core Functions of the Maximum Entropy Theory of Ecology
by Alexander B. Brummer and Erica A. Newman
Entropy 2019, 21(7), 712; https://doi.org/10.3390/e21070712 - 21 Jul 2019
Cited by 24 | Viewed by 6533
Abstract
The Maximum Entropy Theory of Ecology (METE) is a theoretical framework of macroecology that makes a variety of realistic ecological predictions about how species richness, abundance of species, metabolic rate distributions, and spatial aggregation of species interrelate in a given region. In the METE framework, “ecological state variables” (representing total area, total species richness, total abundance, and total metabolic energy) describe the macroecological properties of an ecosystem. METE incorporates these state variables into constraints on underlying probability distributions. The method of Lagrange multipliers and the maximization of information entropy (MaxEnt) lead to predicted functional forms of the distributions of interest. We demonstrate how information entropy is maximized for the general case of a distribution for which empirical information provides constraints on the overall predictions. We then show how METE’s two core functions are derived. These functions, called the “Spatial Structure Function” and the “Ecosystem Structure Function”, are the core pieces of the theory, from which all the predictions of METE follow (including the Species Area Relationship, the Species Abundance Distribution, and various metabolic distributions). Primarily, we consider the discrete distributions predicted by METE. We also explore the parameter space defined by METE’s state variables and Lagrange multipliers. We aim to provide a comprehensive resource for ecologists who want to understand the derivations and assumptions of the basic mathematical structure of METE. Full article
(This article belongs to the Special Issue Information Theory Applications in Biology)
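One concrete output of this machinery is that METE's species-abundance distribution is a truncated log-series, Φ(n) ∝ e^(−βn)/n, whose Lagrange multiplier β is fixed by the state-variable constraint ⟨n⟩ = N/S. The sketch below solves this constraint numerically for assumed toy state variables.

```python
# Solving for METE's Lagrange multiplier: the species-abundance distribution
# is a truncated log-series, Phi(n) ~ exp(-beta * n) / n for n = 1..N, with
# beta fixed by the constraint <n> = N / S.  State variables are toy values.
import numpy as np
from scipy.optimize import brentq

S, N = 50, 5000                      # toy species richness and total abundance
n = np.arange(1, N + 1)

def mean_abundance(beta):
    w = np.exp(-beta * n) / n        # unnormalized log-series weights
    return np.sum(n * w) / np.sum(w)

beta = brentq(lambda b: mean_abundance(b) - N / S, 1e-6, 1.0)
phi = np.exp(-beta * n) / n
phi /= phi.sum()
print(f"beta = {beta:.5f}, check <n> = {np.sum(n * phi):.1f} (target {N / S:.1f})")
```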

18 pages, 404 KiB  
Article
Rateless Codes-Based Secure Communication Employing Transmit Antenna Selection and Harvest-To-Jam under Joint Effect of Interference and Hardware Impairments
by Phu Tran Tin, Tan N. Nguyen, Nguyen Q. Sang, Tran Trung Duy, Phuong T. Tran and Miroslav Voznak
Entropy 2019, 21(7), 700; https://doi.org/10.3390/e21070700 - 16 Jul 2019
Cited by 14 | Viewed by 4503
Abstract
In this paper, we propose a rateless-codes-based communication protocol to provide security for wireless systems. In the proposed protocol, a source uses the transmit antenna selection (TAS) technique to transmit Fountain-encoded packets to a destination in the presence of an eavesdropper. Moreover, a cooperative jammer node harvests energy from the radio frequency (RF) signals of the source and the interference sources to generate jamming noise at the eavesdropper. The data transmission terminates as soon as the destination has received a sufficient number of encoded packets to decode the original data of the source. To obtain secure communication, the destination must receive sufficient encoded packets before the eavesdropper does. The combination of the TAS and harvest-to-jam techniques provides security and energy efficiency by reducing the number of data transmissions, increasing the quality of the data channel, decreasing the quality of the eavesdropping channel, and supplying energy to the jammer. The main contribution of this paper is the derivation of exact closed-form expressions for the outage probability (OP), the probability of successful and secure communication (SS), the intercept probability (IP) and the average number of time slots used by the source over a Rayleigh fading channel under the joint impact of co-channel interference and hardware impairments. Monte Carlo simulations are then presented to verify the theoretical results. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
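The style of verification used in the paper can be illustrated on its simplest ingredient: over a Rayleigh fading channel the received SNR is exponentially distributed, so the single-link outage probability has the closed form 1 − exp(−γ_th/γ̄), which a Monte Carlo run reproduces. The sketch below uses toy parameters and omits interference, jamming, and hardware impairments.

```python
# Monte Carlo versus closed form for the simplest ingredient of the analysis:
# single-link outage over Rayleigh fading (exponentially distributed SNR),
# with no interference, jamming, or hardware impairments.  Toy parameters.
import numpy as np

rng = np.random.default_rng(7)
snr_avg, gamma_th = 10.0, 3.0                 # average SNR and outage threshold

snr = rng.exponential(snr_avg, 1_000_000)     # Rayleigh fading -> exponential SNR
p_out_mc = float(np.mean(snr < gamma_th))
p_out_exact = 1.0 - np.exp(-gamma_th / snr_avg)
print(f"Monte Carlo: {p_out_mc:.5f}, closed form: {p_out_exact:.5f}")
```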

24 pages, 1866 KiB  
Perspective
Entropy and Information within Intrinsically Disordered Protein Regions
by Iva Pritišanac, Robert M. Vernon, Alan M. Moses and Julie D. Forman-Kay
Entropy 2019, 21(7), 662; https://doi.org/10.3390/e21070662 - 6 Jul 2019
Cited by 34 | Viewed by 10378
Abstract
Bioinformatics and biophysical studies of intrinsically disordered proteins and regions (IDRs) note the high entropy at individual sequence positions and in conformations sampled in solution. This prevents application of the canonical sequence-structure-function paradigm to IDRs and motivates the development of new methods to extract information from IDR sequences. We argue that the information in IDR sequences cannot be fully revealed through positional conservation, which largely measures stable structural contacts and interaction motifs. Instead, considerations of evolutionary conservation of molecular features can reveal the full extent of information in IDRs. Experimental quantification of the large conformational entropy of IDRs is challenging but can be approximated through the extent of conformational sampling measured by a combination of NMR spectroscopy and lower-resolution structural biology techniques, which can be further interpreted with simulations. Conformational entropy and other biophysical features can be modulated by post-translational modifications that provide functional advantages to IDRs by tuning their energy landscapes and enabling a variety of functional interactions and modes of regulation. The diverse mosaic of functional states of IDRs and their conformational features within complexes demands novel metrics of information, which will reflect the complicated sequence-conformational ensemble-function relationship of IDRs. Full article

21 pages, 4720 KiB  
Article
Multiobjective Optimization of a Plate Heat Exchanger in a Waste Heat Recovery Organic Rankine Cycle System for Natural Gas Engines
by Guillermo Valencia, José Núñez and Jorge Duarte
Entropy 2019, 21(7), 655; https://doi.org/10.3390/e21070655 - 3 Jul 2019
Cited by 47 | Viewed by 5557
Abstract
A multiobjective optimization of an organic Rankine cycle (ORC) evaporator, operating with toluene as the working fluid, is presented in this paper for waste heat recovery (WHR) from the exhaust gases of a 2 MW Jenbacher JMS 612 GS-N.L. gas internal combustion engine. Indirect evaporation between the exhaust gas and the organic fluid in the parallel-plate heat exchanger (ITC2) implies irreversible heat transfer and high investment costs, which were considered as the objective functions to be minimized. Energy and exergy balances were applied to the system components, in addition to the phenomenological equations in the ITC2, to calculate global energy indicators, such as the thermal efficiency of the configuration, the heat recovery efficiency, the overall energy conversion efficiency, the absolute increase of engine thermal efficiency, and the reduction of the brake-specific fuel consumption of the system integrated with the gas engine. The results allowed calculation of the plate spacing, plate height, plate width, and chevron angle that minimized the investment cost and entropy generation of the equipment, reaching 22.04 m² in the heat transfer area, 693.87 kW in the energy transfer by heat recovery from the exhaust gas, and 41.6% in the overall thermal efficiency of the ORC as a bottoming cycle for the engine. This type of result contributes to the inclusion of this technology in the industrial sector as a consequence of the improvement in thermal efficiency and economic viability. Full article
(This article belongs to the Special Issue Thermodynamic Optimization)

15 pages, 15663 KiB  
Article
Energy and New Economic Approach for Nearly Zero Energy Hotels
by Francesco Nocera, Salvatore Giuffrida, Maria Rosa Trovato and Antonio Gagliano
Entropy 2019, 21(7), 639; https://doi.org/10.3390/e21070639 - 28 Jun 2019
Cited by 34 | Viewed by 4612
Abstract
The paper addresses an important long-standing question regarding the energy-efficiency renovation of existing buildings, in this case hotels, towards nearly zero-energy building (nZEB) status. The renovation of existing hotels to achieve nearly zero-energy performance is one of the forefront goals of the EU’s energy policy for 2050. The achievement of the nZEB target for hotels is necessary not only to comply with changing regulations and legislation, but also to foster the competitiveness needed to secure new funding. Indeed, nZEB hotel status allows for a reduction of operating costs and an increase of energy security, meeting the market’s and guests’ expectations. At present, there is no set national nZEB value for hotels to be attained, despite the fact that hotels are among the most energy-intensive buildings. This paper presents a case study of the energy retrofit of an existing historical hotel located in southern Italy (Syracuse) in order to achieve nZEB status. Starting from the energy audit, the paper proposes a step-by-step approach to nZEB performance, with a perspective on the costs, in order to identify the most effective energy solutions. Such an approach allows useful insights regarding energy and economic–financial strategies for achieving nZEB standards to be highlighted. Moreover, the results of this paper provide stakeholders with useful information for quantifying the technical convenience and economic profitability of reaching an nZEB target, in order to anticipate the expenses required by future energy retrofit programs. Full article

20 pages, 1251 KiB  
Article
Estimating the Mutual Information between Two Discrete, Asymmetric Variables with Limited Samples
by Damián G. Hernández and Inés Samengo
Entropy 2019, 21(6), 623; https://doi.org/10.3390/e21060623 - 25 Jun 2019
Cited by 14 | Viewed by 8437
Abstract
Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables—the one with minimal entropy—is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
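The regime targeted by the estimator is easy to reproduce: when the high-entropy variable is severely undersampled, the naive plug-in estimator reports substantial mutual information even for independent variables. The toy demonstration below illustrates the problem only; it is not the authors' Bayesian estimator.

```python
# The undersampled regime in miniature: for independent variables the true
# mutual information is zero, yet the naive plug-in estimate is strongly
# positive when the high-entropy variable's alphabet dwarfs the sample size.
import numpy as np

rng = np.random.default_rng(3)

def plugin_mi(x, y):
    """Naive plug-in mutual information (nats) from empirical frequencies."""
    pairs, c_xy = np.unique(np.column_stack([x, y]), axis=0, return_counts=True)
    p_xy = c_xy / len(x)
    p_x = {v: c / len(x) for v, c in zip(*np.unique(x, return_counts=True))}
    p_y = {v: c / len(y) for v, c in zip(*np.unique(y, return_counts=True))}
    return float(sum(p * np.log(p / (p_x[a] * p_y[b]))
                     for (a, b), p in zip(pairs, p_xy)))

n = 200
x = rng.integers(0, 2, n)        # low-entropy variable: well sampled
y = rng.integers(0, 1000, n)     # high-entropy variable: severely undersampled
print(f"plug-in MI = {plugin_mi(x, y):.3f} nats (true value: 0)")
```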

14 pages, 1441 KiB  
Article
Changed Temporal Structure of Neuromuscular Control, Rather Than Changed Intersegment Coordination, Explains Altered Stabilographic Regularity after a Moderate Perturbation of the Postural Control System
by Felix Wachholz, Tove Kockum, Thomas Haid and Peter Federolf
Entropy 2019, 21(6), 614; https://doi.org/10.3390/e21060614 - 21 Jun 2019
Cited by 11 | Viewed by 5014
Abstract
Sample entropy (SaEn) applied to center-of-pressure (COP) data provides a measure of the regularity of human postural control. Two mechanisms could contribute to altered COP regularity: first, an altered temporal structure (temporal regularity) of postural movements (H1); or second, altered coordination between segment movements (coordinative complexity; H2). The current study used rapid, voluntary head-shaking to perturb the postural control system, thus producing changes in COP regularity, in order to then assess the two hypotheses. Sixteen healthy participants (age 26.5 ± 3.5 years; seven females), whose postural movements were tracked via 39 reflective markers, performed trials in which they first stood quietly on a force plate for 30 s, then shook their head for 10 s, and finally stood quietly for another 90 s. A principal component analysis (PCA) performed on the kinematic data extracted the main postural movement components. Temporal regularity was determined by calculating SaEn on the time series of these movement components. Coordinative complexity was determined by assessing the relative explained variance of the first five components. H1 was supported, but H2 was not. These results suggest that moderate perturbations of the postural control system produce altered temporal structures of the main postural movement components, but do not necessarily change the coordinative structure of intersegment movements. Full article
(This article belongs to the Section Complexity)
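Sample entropy itself is compact enough to state in code: SaEn(m, r) = −ln(A/B), where B counts template pairs of length m matching within tolerance r and A counts those still matching at length m + 1. The sketch below is a textbook implementation with assumed parameters, not the study's processing pipeline.

```python
# Textbook sample entropy: SaEn(m, r) = -ln(A / B), where B counts pairs of
# length-m templates within tolerance r (Chebyshev distance, self-matches
# excluded) and A counts the pairs still matching at length m + 1.
# Signal lengths and parameters are assumptions for the demo.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - m)])
        dist = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return ((dist <= r).sum() - len(t)) / 2     # exclude self-matches

    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0.0, 20.0 * np.pi, 600))
noisy = regular + 0.5 * rng.standard_normal(600)
print(f"SaEn(regular) = {sample_entropy(regular):.3f}, "
      f"SaEn(noisy) = {sample_entropy(noisy):.3f}")
```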

40 pages, 1870 KiB  
Article
Structural Characteristics of Two-Sender Index Coding
by Chandra Thapa, Lawrence Ong, Sarah J. Johnson and Min Li
Entropy 2019, 21(6), 615; https://doi.org/10.3390/e21060615 - 21 Jun 2019
Cited by 7 | Viewed by 4199
Abstract
This paper studies index coding with two senders. In this setup, the source messages are distributed among the senders, possibly with some messages in common. In addition, there are multiple receivers, each having some messages a priori, known as side-information, and requesting one unique message, such that each message is requested by only one receiver. Index coding in this setup is called two-sender unicast index coding (TSUIC). The main goal is to find the shortest aggregate normalized codelength, which is expressed as the optimal broadcast rate. In this work, firstly, for a given TSUIC problem, we form three independent sub-problems, each consisting of only a subset of the messages, based on whether the messages are available at only one of the senders or at both. Then, we express the optimal broadcast rate of the TSUIC problem as a function of the optimal broadcast rates of those independent sub-problems. In this way, we uncover the structural characteristics of TSUIC. For the proofs of our results, we utilize confusion graphs and coding techniques used in single-sender index coding. To adapt the confusion-graph technique to TSUIC, we introduce a new graph-coloring approach that differs from normal graph coloring, which we call two-sender graph coloring, and propose a way of grouping the vertices to analyze the number of colors used. We further determine a class of TSUIC instances in which a certain type of side-information can be removed without affecting the optimal broadcast rate. Finally, we generalize the results for a class of TSUIC problems to multiple senders. Full article
(This article belongs to the Special Issue Multiuser Information Theory II)

24 pages, 6063 KiB  
Article
Machine Learning Techniques to Identify Antimicrobial Resistance in the Intensive Care Unit
by Sergio Martínez-Agüero, Inmaculada Mora-Jiménez, Jon Lérida-García, Joaquín Álvarez-Rodríguez and Cristina Soguero-Ruiz
Entropy 2019, 21(6), 603; https://doi.org/10.3390/e21060603 - 18 Jun 2019
Cited by 45 | Viewed by 9091
Abstract
The presence of bacteria resistant to specific antibiotics is one of the greatest threats to the global health system. According to the World Health Organization, antimicrobial resistance has already reached alarming levels in many parts of the world, involving a social and economic burden for the patient, for the health system, and for society in general. Because of the critical health status of patients in the intensive care unit (ICU), time is of the essence in identifying bacteria and their resistance to antibiotics. Since common antibiotic resistance tests require between 24 and 48 h after the culture is collected, we propose to apply machine learning (ML) techniques to determine whether a bacterium will be resistant to different families of antimicrobials. For this purpose, clinical and demographic features of the patient, as well as data from cultures and antibiograms, are considered. From a population point of view, we also graphically show the relationship between different bacteria and families of antimicrobials by performing correspondence analysis. The results of the ML techniques evidence non-linear relationships that help to identify antimicrobial resistance in the ICU, with performance depending on the family of antimicrobials. A change in the trend of antimicrobial resistance is also evidenced. Full article
