Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy
<![CDATA[Entropy, Vol. 16, Pages 4168-4184: Characterizing the Asymptotic Per-Symbol Redundancy of Memoryless Sources over Countable Alphabets in Terms of Single-Letter Marginals]]>
http://www.mdpi.com/1099-4300/16/7/4168
The minimum expected number of bits needed to describe a random variable is its entropy, assuming knowledge of the distribution of the random variable. Universal compression, by contrast, describes data on the supposition that the underlying distribution is unknown but belongs to a known set P of distributions. Since universal descriptions are not matched exactly to the underlying distribution, the number of bits they use on average is higher, and the excess over the entropy is the redundancy. In this paper, we study the redundancy incurred by the universal description of strings of positive integers (Z+), the strings being generated independently and identically distributed (i.i.d.) according to an unknown distribution over Z+ in a known collection P. We first show that if describing a single symbol incurs finite redundancy, then P is tight, but that the converse does not always hold. If a single symbol can be described with finite worst-case regret (a more stringent formulation than the redundancy above), then it is known that describing length-n i.i.d. strings incurs only vanishing (to zero) redundancy per symbol as n increases. In contrast, we show it is possible that the description of a single symbol from an unknown distribution in P incurs finite redundancy, yet the description of length-n i.i.d. strings incurs a constant (> 0) redundancy per symbol encoded. We then give a sufficient condition on single-letter marginals under which length-n i.i.d. samples incur vanishing redundancy per symbol encoded.
Entropy 2014-07-23, Vol. 16, Issue 7, Article, Pages 4168-4184, ISSN 1099-4300, doi: 10.3390/e16074168. Maryam Hosseini, Narayana Santhanam
<![CDATA[Entropy, Vol. 16, Pages 4132-4167: Network Decomposition and Complexity Measures: An Information Geometrical Approach]]>
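A standard numerical illustration of the redundancy notion in this abstract: for a fixed pair of distributions, the expected excess length of an idealized code matched to q when symbols are drawn from p equals the Kullback–Leibler divergence D(p||q). The distributions and function names below are toy assumptions, not from the paper.

```python
import math

def entropy(p):
    """Shannon entropy H(p) in bits."""
    return -sum(px * math.log2(px) for px in p if px > 0)

def expected_length(p, q):
    """Expected idealized code length (bits) when the code assigns
    -log2 q(x) bits but symbols are drawn from p."""
    return -sum(px * math.log2(qx) for px, qx in zip(p, q) if px > 0)

p = [0.5, 0.25, 0.25]   # true source distribution (toy example)
q = [1/3, 1/3, 1/3]     # distribution the code is matched to

# redundancy = expected length minus entropy = D(p || q)
redundancy = expected_length(p, q) - entropy(p)
kl = sum(px * math.log2(px / qx) for px, qx in zip(p, q) if px > 0)
assert abs(redundancy - kl) < 1e-12
```

The paper's subject is the harder worst-case question: the supremum of this quantity over an unknown p in a whole collection P, and its growth with string length n.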
http://www.mdpi.com/1099-4300/16/7/4132
We consider the graph representation of a stochastic model with n binary variables and develop an information-theoretical framework to measure the degree of statistical association between subsystems, as well as the associations represented by each edge of the graph. In addition, we introduce novel measures of complexity with respect to the decomposability of the system, by introducing the geometric product of Kullback–Leibler (KL) divergences. The novel complexity measures satisfy the boundary condition of vanishing in the limits of the completely random and completely ordered states, as well as in the presence of an independent subsystem of any size. Such complexity measures based on geometric means are relevant to the heterogeneity of dependencies between subsystems and to the amount of information propagation shared across the entire system.
Entropy 2014-07-23, Vol. 16, Issue 7, Article, Pages 4132-4167, ISSN 1099-4300, doi: 10.3390/e16074132. Masatoshi Funabashi
<![CDATA[Entropy, Vol. 16, Pages 4121-4131: Numerical Investigation on the Temperature Characteristics of the Voice Coil for a Woofer Using Thermal Equivalent Heat Conduction Models]]>
http://www.mdpi.com/1099-4300/16/7/4121
The objective of this study is to numerically investigate the temperature and heat transfer characteristics of the voice coil of a woofer, with and without a bobbin, using thermal equivalent heat conduction models. The temperature and heat transfer characteristics of the main components of the woofer were analyzed at input powers ranging from 5 W to 60 W. The numerical results for the voice coil showed good agreement, within ±1%, with the data of Odenbach (2003). The temperatures of the voice coil and its units for the woofer without the bobbin were, on average, 6.1% and 5.0% lower, respectively, than those of the woofer with the bobbin. However, at an input power of 30 W to the voice coil, the temperatures of the main components of the woofer without the bobbin were 40.0% higher on average than those of the woofer obtained by Lee et al. (2013).
Entropy 2014-07-21, Vol. 16, Issue 7, Article, Pages 4121-4131, ISSN 1099-4300, doi: 10.3390/e16074121. Moo-Yeon Lee, Hyung-Jin Kim
<![CDATA[Entropy, Vol. 16, Pages 4101-4120: Can the Hexagonal Ice-like Model Render the Spectroscopic Fingerprints of Structured Water? Feedback from Quantum-Chemical Computations]]>
http://www.mdpi.com/1099-4300/16/7/4101
The spectroscopic features of the multilayer honeycomb model of structured water are analyzed on theoretical grounds, using high-level ab initio quantum-chemical methodologies, through model systems built from two fused hexagons of water molecules: the monomeric system [H19O10], in different oxidation states (anionic and neutral species). The findings do not support anionic species as the origin of the spectroscopic fingerprints observed experimentally for structured water. In this context, hexameric anions can only be seen as a source of hydrated hydroxyl anions and cationic species. The results for the neutral dimer are, however, fully consistent with the experimental evidence related to both absorption and fluorescence spectra. The neutral π-stacked dimer [H38O20] can be assigned as the species mainly responsible for the recorded absorption and fluorescence spectra, with computed band maxima at 271 nm (4.58 eV) and 441 nm (2.81 eV), respectively. The important role of triplet excited states is finally discussed. The most intense vertical triplet → triplet transition is predicted to be at 318 nm (3.90 eV).
Entropy 2014-07-21, Vol. 16, Issue 7, Article, Pages 4101-4120, ISSN 1099-4300, doi: 10.3390/e16074101. Javier Segarra-Martí, Daniel Roca-Sanjuán, Manuela Merchán
<![CDATA[Entropy, Vol. 16, Pages 4088-4100: Using Geometry to Select One Dimensional Exponential Families That Are Monotone Likelihood Ratio in the Sample Space, Are Weakly Unimodal and Can Be Parametrized by a Measure of Central Tendency]]>
http://www.mdpi.com/1099-4300/16/7/4088
One-dimensional exponential families on finite sample spaces are studied using the geometry of the simplex Δ°n−1 and that of a transformation Vn−1 of its interior. This transformation is the natural parameter space associated with the family of multinomial distributions. The space Vn−1 is partitioned into cones that are used to find one-dimensional families with desirable properties for modeling and inference. These properties include the availability of uniformly most powerful tests and estimators that exhibit optimal properties in terms of variability and unbiasedness.
Entropy 2014-07-18, Vol. 16, Issue 7, Article, Pages 4088-4100, ISSN 1099-4300, doi: 10.3390/e16074088. Paul Vos, Karim Anaya-Izquierdo
<![CDATA[Entropy, Vol. 16, Pages 4060-4087: Biosemiotic Entropy: Concluding the Series]]>
http://www.mdpi.com/1099-4300/16/7/4060
This article concludes the special issue on Biosemiotic Entropy by looking toward the future on the basis of current and prior results. It highlights certain aspects of the series concerning factors that damage and degenerate biosignaling systems. As in ordinary linguistic discourse, well-formedness (coherence) in biological signaling systems depends on valid representations correctly construed: a series of proofs are presented and generalized to all meaningful sign systems. The proofs show why infants must (as empirical evidence shows they do) proceed through a strict sequence of formal steps in acquiring any language. Classical and contemporary conceptions of entropy and information are deployed to show why factors that interfere with coherence in biological signaling systems are necessary and sufficient causes of disorders, diseases, and mortality. Known sources of such formal degeneracy in living organisms (here termed biosemiotic entropy) include: (a) toxicants; (b) pathogens; (c) excessive exposures to radiant energy and/or sufficiently powerful electromagnetic fields; (d) traumatic injuries; and (e) interactions between the foregoing factors. Just as Jaynes proved that irreversible changes invariably increase entropy, the theory of true narrative representations (TNR theory) demonstrates that factors disrupting the well-formedness (coherence) of valid representations, all else being held equal, must increase biosemiotic entropy, the kind impacting biosignaling systems.
Entropy 2014-07-18, Vol. 16, Issue 7, Editorial, Pages 4060-4087, ISSN 1099-4300, doi: 10.3390/e16074060. John Oller
<![CDATA[Entropy, Vol. 16, Pages 4044-4059: Entropy and Its Discontents: A Note on Definitions]]>
http://www.mdpi.com/1099-4300/16/7/4044
The routine definitions of Shannon entropy for discrete and continuous probability laws show inconsistencies that make them mutually incoherent. We propose a few possible modifications of these quantities so that: (1) they no longer show incongruities; and (2) they go one into the other in a suitable limit as the result of a renormalization. The properties of the new quantities would differ slightly from those of the usual entropies in a few other respects.
Entropy 2014-07-17, Vol. 16, Issue 7, Article, Pages 4044-4059, ISSN 1099-4300, doi: 10.3390/e16074044. Nicola Petroni
<![CDATA[Entropy, Vol. 16, Pages 4032-4043: Application of a Modified Entropy Computational Method in Assessing the Complexity of Pulse Wave Velocity Signals in Healthy and Diabetic Subjects]]>
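The incongruity referred to in this abstract can be seen numerically: discretizing the uniform law on [0, a] with bin width δ gives a discrete entropy log2(a/δ) that diverges as δ → 0, while the differential entropy log2 a can even be negative. The sketch below shows only the standard mismatch, not the authors' proposed renormalization; the helper name is an assumption.

```python
import math

def discrete_entropy_uniform(a, delta):
    """Entropy in bits of the uniform law on [0, a] discretized
    into equiprobable bins of width delta (a/delta assumed integral)."""
    n = round(a / delta)  # number of equiprobable bins
    return math.log2(n)

a = 0.5
print(math.log2(a))  # differential entropy: -1 bit, a negative "entropy"
for delta in (0.1, 0.01, 0.001):
    # the discrete entropy diverges like log2(a/delta), but the
    # combination h + log2(delta) recovers the differential entropy
    h = discrete_entropy_uniform(a, delta)
    print(delta, h, h + math.log2(delta))
```

The renormalization studied in the paper is, in spirit, a principled way of subtracting such a diverging log2(1/δ) term so that the discrete and continuous definitions pass into each other in the limit.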
http://www.mdpi.com/1099-4300/16/7/4032
Using 1000 successive points of a pulse wave velocity (PWV) series, we previously distinguished healthy from diabetic subjects with multi-scale entropy (MSE) using a scale factor of 10. One major limitation is the long data-acquisition time (i.e., 20 min). This study aimed at validating the sensitivity of a novel method, short-time MSE (sMSE), which uses a substantially smaller sample size (i.e., 600 consecutive points), in differentiating the complexity of PWV signals both in simulation and in human subjects divided into four groups: healthy young (Group 1; n = 24) and middle-aged (Group 2; n = 30) subjects without known cardiovascular disease, and middle-aged individuals with well-controlled (Group 3; n = 18) and poorly controlled (Group 4; n = 22) type 2 diabetes mellitus. The results demonstrated that although conventional MSE could differentiate the subjects using 1000 consecutive PWV series points, sensitivity was lost using only 600 points. The simulation study revealed consistent results. By contrast, the novel sMSE method produced significant differences in entropy in both simulation and human subjects. In conclusion, this study demonstrated that with the novel sMSE approach to PWV analysis, the data-acquisition time can be substantially reduced to that required for 600 cardiac cycles (~10 min), with remarkable preservation of sensitivity in differentiating among healthy, aged, and diabetic populations.
Entropy 2014-07-17, Vol. 16, Issue 7, Article, Pages 4032-4043, ISSN 1099-4300, doi: 10.3390/e16074032. Yi-Chung Chang, Hsien-Tsai Wu, Hong-Ruei Chen, An-Bang Liu, Jung-Jen Yeh, Men-Tzung Lo, Jen-Ho Tsao, Chieh-Ju Tang, I-Ting Tsai, Cheuk-Kwan Sun
<![CDATA[Entropy, Vol. 16, Pages 4015-4031: New Riemannian Priors on the Univariate Normal Model]]>
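The two ingredients that MSE-type methods share, coarse-graining a series at a given scale factor and computing a sample entropy on the result, can be sketched as below. This is only the standard textbook construction with naive O(n²) counting, not the authors' sMSE modification, and the function names and default tolerance are assumptions.

```python
import math
import statistics

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (MSE coarse-graining)."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy -ln(A/B): B counts matching template pairs of length m,
    A of length m+1; tolerance is r times the series' standard deviation."""
    tol = r * statistics.pstdev(x)
    def count(mm):
        n_t = len(x) - mm  # number of templates (sketch convention)
        c = 0
        for i in range(n_t):
            for j in range(i + 1, n_t):
                if all(abs(x[i + k] - x[j + k]) <= tol for k in range(mm)):
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    if a == 0 or b == 0:
        return float("inf")
    return -math.log(a / b)
```

A strictly periodic series, for instance `[1, 2] * 30`, yields a sample entropy near zero, while an irregular series of the same length yields a larger value; MSE repeats this computation on `coarse_grain(x, s)` over a range of scale factors s.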
http://www.mdpi.com/1099-4300/16/7/4015
The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as “Riemannian priors”. Precisely, if {pθ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao’s Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that it gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines the Gaussian distributions G(θ̄, γ) rigorously and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.
Entropy 2014-07-17, Vol. 16, Issue 7, Article, Pages 4015-4031, ISSN 1099-4300, doi: 10.3390/e16074015. Salem Said, Lionel Bombrun, Yannick Berthoumieu
<![CDATA[Entropy, Vol. 16, Pages 4004-4014: A Note of Caution on Maximizing Entropy]]>
http://www.mdpi.com/1099-4300/16/7/4004
The Principle of Maximum Entropy is often used to update probabilities in light of evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often yields good results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases and how to identify the situations in which the principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting-frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy can sometimes stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian view that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that the examples demonstrate.
Entropy 2014-07-17, Vol. 16, Issue 7, Article, Pages 4004-4014, ISSN 1099-4300, doi: 10.3390/e16074004. Richard Neapolitan, Xia Jiang
<![CDATA[Entropy, Vol. 16, Pages 3939-4003: Human Brain Networks: Spiking Neuron Models, Multistability, Synchronization, Thermodynamics, Maximum Entropy Production, and Anesthetic Cascade Mechanisms]]>
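A toy version of the tension this abstract discusses: given only a mean constraint on a die, maximum entropy selects the Gibbs-form distribution p_i ∝ exp(λi), regardless of any prior belief, whereas a Bayesian holding a strong loaded-die prior would not move to this distribution upon learning only the mean. The sketch below implements only the maximum-entropy side, solving for λ by bisection; the function name and setup are assumptions, not the paper's examples.

```python
import math

def maxent_die(mean, faces=6):
    """Maximum-entropy distribution on {1..faces} with E[X] = mean.
    Gibbs form p_i proportional to exp(lam * i); lam found by bisection,
    using the fact that the constrained mean is increasing in lam."""
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean_for(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(3.5)  # mean 3.5 carries no information: recovers the uniform law
```

For a mean of 3.5 the result is uniform; for a mean of 4.5 it is tilted toward high faces. The paper's point is that when a reasonable prior already encodes more than the stated constraint, replacing Bayesian updating with this maximization can give unacceptable answers.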
http://www.mdpi.com/1099-4300/16/7/3939
Advances in neuroscience have been closely linked to mathematical modeling, beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable” and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established.
The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a theoretical foundation for general anesthesia using the network properties of the brain. Finally, we present some key emergent properties from the fields of thermodynamics and electromagnetic field theory to qualitatively explain the underlying neuronal mechanisms of action for anesthesia and consciousness.
Entropy 2014-07-17, Vol. 16, Issue 7, Article, Pages 3939-4003, ISSN 1099-4300, doi: 10.3390/e16073939. Wassim Haddad, Qing Hui, James Bailey
<![CDATA[Entropy, Vol. 16, Pages 3903-3938: Panel I: Connecting 2nd Law Analysis with Economics, Ecology and Energy Policy]]>
http://www.mdpi.com/1099-4300/16/7/3903
The present paper is a review of several papers from the Proceedings of the Joint European Thermodynamics Conference, held in Brescia, Italy, 1–5 July 2013, namely papers introduced by their authors at Panel I of the conference. Panel I was devoted to applications of the Second Law of Thermodynamics to social issues: economics, ecology, sustainability, and energy policy. The concept called Available Energy, which goes back to the mid-nineteenth-century work of Kelvin, Rankine, Maxwell and Gibbs, is relevant to all of the papers. Various names have been applied to the concept when interactions between the system of interest and an environment are involved; today, the name exergy is generally accepted. The scope of the papers being reviewed is wide, and they complement one another well.
Entropy 2014-07-16, Vol. 16, Issue 7, Review, Pages 3903-3938, ISSN 1099-4300, doi: 10.3390/e16073903. Richard Gaggioli, Mauro Reini
<![CDATA[Entropy, Vol. 16, Pages 3889-3902: Identifying Chaotic FitzHugh–Nagumo Neurons Using Compressive Sensing]]>
http://www.mdpi.com/1099-4300/16/7/3889
We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shift in their individual dynamical systems and may lead to abnormal functioning of the network. Accurately identifying the chaotic neurons may thus be necessary and important, for example, for applying appropriate controls to bring the network to a normal state. However, due to the couplings among the nodes, the measured time series, even from non-chaotic neurons, appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information only about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying the chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, the network connections and all coupling functions, as well as their weights. The workings and efficiency of the method are illustrated using networks of non-identical FitzHugh–Nagumo neurons with randomly distributed coupling weights.
Entropy 2014-07-15, Vol. 16, Issue 7, Article, Pages 3889-3902, ISSN 1099-4300, doi: 10.3390/e16073889. Ri-Qi Su, Ying-Cheng Lai, Xiao Wang
<![CDATA[Entropy, Vol. 16, Pages 3878-3888: The Entropy-Based Quantum Metric]]>
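Compressive sensing, as invoked in this abstract, recovers a sparse coefficient vector x from few linear measurements y = Ax. The authors' full pipeline expands the nodal dynamics in a basis and solves a much larger sparse-recovery problem; the sketch below is only a toy stand-in, a one-step orthogonal matching pursuit on a 1-sparse problem, with all names and numbers assumed for illustration.

```python
def dot(u, v):
    """Plain inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def omp_1sparse(columns, y):
    """Recover a 1-sparse coefficient vector from y = A x:
    pick the column of A most correlated with y, then solve
    least squares on that single column."""
    scores = [abs(dot(c, y)) for c in columns]
    k = scores.index(max(scores))
    coef = dot(columns[k], y) / dot(columns[k], columns[k])
    x = [0.0] * len(columns)
    x[k] = coef
    return x

# toy sensing matrix A, listed column by column, and a 1-sparse signal:
# only one "coupling coefficient" is nonzero, as in a sparse network
A_cols = [[1.0, 0.0, 1.0],
          [0.0, 1.0, 1.0],
          [1.0, 1.0, 0.0],
          [1.0, -1.0, 1.0]]
x_true = [0.0, 0.0, 2.5, 0.0]
y = [sum(c[i] * xi for c, xi in zip(A_cols, x_true)) for i in range(3)]
x_hat = omp_1sparse(A_cols, y)   # recovers x_true exactly here
```

Real compressive-sensing reconstructions of networks use many more measurements and an iterative solver (e.g. full OMP or l1 minimization), but the principle, exploiting sparsity of the coupling structure, is the same.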
http://www.mdpi.com/1099-4300/16/7/3878
The von Neumann entropy S(D̂) generates, in the space of quantum density matrices D̂, the Riemannian metric ds² = −d²S(D̂), which is physically founded and which characterises the amount of quantum information lost by mixing D̂ and D̂ + dD̂. A rich geometric structure is thereby implemented in quantum mechanics. It includes a canonical mapping between the spaces of states and of observables, which involves the Legendre transform of S(D̂). The Kubo scalar product is recovered within the space of observables. Applications are given to equilibrium and non-equilibrium quantum statistical mechanics. There the formalism is specialised to the relevant space of observables and to the associated reduced states issued from the maximum entropy criterion, which result from the exact states through an orthogonal projection. Von Neumann’s entropy specialises into a relevant entropy. Comparison is made with other metrics. The Riemannian properties of the metric ds² = −d²S(D̂) are derived. The curvature arises from the non-Abelian nature of quantum mechanics; its general expression and its explicit form for q-bits are given, as well as geodesics.
Entropy 2014-07-15, Vol. 16, Issue 7, Article, Pages 3878-3888, ISSN 1099-4300, doi: 10.3390/e16073878. Roger Balian
<![CDATA[Entropy, Vol. 16, Pages 3866-3877: Many Can Work Better than the Best: Diagnosing with Medical Images via Crowdsourcing]]>
http://www.mdpi.com/1099-4300/16/7/3866
We study a crowdsourcing-based diagnosis algorithm, motivated by the observation that what is currently scarce is not medical staff but high-level experts. Our approach is to make use of general practitioners’ efforts: every patient whose illness cannot be judged definitively is diagnosed multiple times by different doctors, and we collect all the diagnosis results to derive the final judgement. Our inference model is based on the statistical consistency of the diagnosis data. To evaluate the proposed model, we conduct experiments on both synthetic and real data; the results show that it outperforms the benchmarks.
Entropy 2014-07-14, Vol. 16, Issue 7, Article, Pages 3866-3877, ISSN 1099-4300, doi: 10.3390/e16073866. Xian-Hong Xiang, Xiao-Yu Huang, Xiao-Ling Zhang, Chun-Fang Cai, Jian-Yong Yang, Lei Li
<![CDATA[Entropy, Vol. 16, Pages 3848-3865: Hierarchical Geometry Verification via Maximum Entropy Saliency in Image Retrieval]]>
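The simplest baseline such consistency-based inference improves upon is a plain majority vote over the collected diagnoses for each patient. A minimal sketch (the labels and the tie-breaking rule are assumptions; the paper's actual model weights doctors by statistical consistency rather than equally):

```python
from collections import Counter

def majority_diagnosis(diagnoses):
    """Aggregate several doctors' diagnoses for one patient by majority
    vote; ties are broken by first occurrence (a naive baseline only)."""
    counts = Counter(diagnoses)
    return counts.most_common(1)[0][0]

votes = ["benign", "malignant", "benign", "benign", "malignant"]
print(majority_diagnosis(votes))  # -> benign
```

A consistency-weighted scheme would replace the equal counts with per-doctor reliability weights estimated from agreement across many patients, which is where methods of the kind described in the abstract gain over the majority baseline.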
http://www.mdpi.com/1099-4300/16/7/3848
We propose a new geometric verification method for image retrieval: Hierarchical Geometry Verification via Maximum Entropy Saliency (HGV). It aims to filter redundant matches while retaining information about the retrieval target that lies partly outside the salient regions, using hierarchical saliency, and to fully exploit the geometric context of all visual words in the images. First, we obtain hierarchical salient regions of a query image based on the maximum entropy principle and label visual features with saliency tags. The tags added to the feature descriptors are used to compute a saliency matching score, and these scores serve as weights in the geometry verification step. Second, we define a spatial pattern as a triangle composed of three matched features and evaluate the similarity between every two spatial patterns. Finally, we sum all spatial matching scores with their weights to generate the final ranking list. Experimental results show that HGV not only improves retrieval accuracy but also reduces the time consumed by full retrieval.
Entropy 2014-07-14, Vol. 16, Issue 7, Article, Pages 3848-3865, ISSN 1099-4300, doi: 10.3390/e16073848. Hongwei Zhao, Qingliang Li, Pingping Liu
<![CDATA[Entropy, Vol. 16, Pages 3832-3847: Variational Bayes for Regime-Switching Log-Normal Models]]>
http://www.mdpi.com/1099-4300/16/7/3832
The power of projection using divergence functions is a major theme in information geometry. One version of this is the variational Bayes (VB) method. This paper looks at VB in the context of other projection-based methods in information geometry. It also describes how to apply VB to the regime-switching log-normal model and how VB provides a computationally fast way to quantify the uncertainty in the model specification. The results show that the method can recover the model structure exactly, gives reasonable point estimates, and is very computationally efficient. Potential problems of the method in quantifying parameter uncertainty are discussed.
Entropy 2014-07-14, Vol. 16, Issue 7, Article, Pages 3832-3847, ISSN 1099-4300, doi: 10.3390/e16073832. Hui Zhao, Paul Marriott
<![CDATA[Entropy, Vol. 16, Pages 3813-3831: Phase Competitions behind the Giant Magnetic Entropy Variation: Gd5Si2Ge2 and Tb5Si2Ge2 Case Studies]]>
http://www.mdpi.com/1099-4300/16/7/3813
Magnetic materials with strong spin-lattice coupling are a powerful set of candidates for multifunctional applications because of their multiferroic, magnetocaloric (MCE), magnetostrictive and magnetoresistive effects. In these materials there is a strong competition between two states (where a state comprises an atomic structure and an associated magnetic structure) that leads to phase transitions under subtle variations of external parameters, such as temperature, magnetic field and hydrostatic pressure. In this review, a general method combining detailed magnetic measurements/analysis and first-principles calculations to estimate the phase transition temperature is presented with the help of two examples (Gd5Si2Ge2 and Tb5Si2Ge2). It is demonstrated that this method is an important tool for a deeper understanding of the (de)coupled nature of each phase transition in materials belonging to the R5(Si,Ge)4 family, and it can most likely be applied to other systems. The exotic Griffiths-like phase in the framework of the R5(SixGe1-x)4 compounds is reviewed, and its generalization as a requisite for systems with strong phase competition that present large magneto-responsive properties is proposed.
Entropy 2014-07-11, Vol. 16, Issue 7, Review, Pages 3813-3831, ISSN 1099-4300, doi: 10.3390/e16073813. Ana Pires, João Belo, Armandina Lopes, Isabel Gomes, Luis Morellón, Cesar Magen, Pedro Algarabel, Manuel Ibarra, André Pereira, João Araújo
<![CDATA[Entropy, Vol. 16, Pages 3808-3812: Entropy Generation in Steady Laminar Boundary Layers with Pressure Gradients]]>
http://www.mdpi.com/1099-4300/16/7/3808
In an earlier paper in Entropy [1], we hypothesized that the entropy generation rate is the driving force for boundary layer transition from laminar to turbulent flow. Subsequently, with our colleagues, we have examined the prediction of entropy generation during such transitions [2,3]. We found that reasonable predictions for engineering purposes could be obtained for flows with negligible streamwise pressure gradients by adapting the linear combination model of Emmons [4]. A question then arises: will the Emmons approach be useful for boundary layer transition with significant streamwise pressure gradients, such as those studied by Nolan and Zaki [5]? In our implementation, the intermittency is calculated by comparison with skin friction correlations for laminar and turbulent boundary layers and is then applied with comparable correlations for the energy dissipation coefficient (i.e., the non-dimensional integral entropy generation rate). In the case of negligible pressure gradients, the Blasius theory provides the necessary laminar correlations.
Entropy 2014-07-10, Vol. 16, Issue 7, Letter, Pages 3808-3812, ISSN 1099-4300, doi: 10.3390/e16073808. Donald McEligot, Edmond Walsh
<![CDATA[Entropy, Vol. 16, Pages 3793-3807: Entropy vs. Majorization: What Determines Complexity?]]>
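The Emmons-style blending described in this letter, weighting laminar and turbulent correlations by an intermittency factor γ, can be sketched with the classical Blasius laminar result and a common turbulent flat-plate power law. The specific correlations the authors applied to the dissipation coefficient may differ; the ones below are standard textbook forms used here only as an illustration.

```python
def cf_laminar(re_x):
    """Blasius local skin-friction coefficient for a laminar flat-plate layer."""
    return 0.664 / re_x ** 0.5

def cf_turbulent(re_x):
    """A common power-law correlation for the turbulent local skin-friction
    coefficient (one of several in use)."""
    return 0.0592 / re_x ** 0.2

def cf_transitional(re_x, gamma):
    """Emmons-style linear combination, weighted by intermittency gamma in [0, 1]."""
    return (1.0 - gamma) * cf_laminar(re_x) + gamma * cf_turbulent(re_x)

re_x = 5.0e5
print(cf_transitional(re_x, 0.0))  # fully laminar limit
print(cf_transitional(re_x, 1.0))  # fully turbulent limit
```

In the letter's approach the same γ, inferred by matching measured skin friction against the two limiting correlations, is then reused to blend laminar and turbulent correlations for the energy dissipation coefficient.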
http://www.mdpi.com/1099-4300/16/7/3793
The evolution of a microcanonical statistical ensemble of states of isolated systems from order to disorder, as determined by increasing entropy, is compared to an alternative evolution that is determined by mixing character. The fact that the partitions of an integer N are in one-to-one correspondence with macrostates for N distinguishable objects is noted. Orders for integer partitions are given, including the original order by Young and the Boltzmann order by entropy. Mixing character (represented by Young diagrams) is seen to be a partially ordered quality rather than a quantity (see Ruch, 1975). The majorization partial order is reviewed, as is its Hasse diagram representation, known as the Young Diagram Lattice (YDL). Two lattices that show allowed transitions between macrostates are obtained from the YDL: we term these the mixing lattice and the diversity lattice. We study the dynamics (time evolution) on the two lattices, namely the sequence of steps on the lattices (i.e., the path or trajectory) that leads from low-entropy, less mixed states to high-entropy, highly mixed states. These paths are sequences of macrostates with monotonically increasing entropy. The distributions of path lengths on the two lattices are obtained via Monte Carlo methods, and surprisingly both distributions appear Gaussian. However, the width of the path length distribution for diversity is the square root of that for the mixing case, suggesting a qualitative difference in their temporal evolution. Another surprising result is that some macrostates occur in many paths while others do not. The evolution at low entropy and at high entropy is quite simple, but at intermediate entropies the number of possible evolutionary paths is extremely large (due to the extensive branching of the lattices). A quantitative complexity measure associated with the incomparability of macrostates in the mixing partial order is proposed, complementing Kolmogorov complexity and Shannon entropy.
Entropy 2014-07-09, Vol. 16, Issue 7, Article, Pages 3793-3807, ISSN 1099-4300, doi: 10.3390/e16073793. William Seitz, A. Kirwan
<![CDATA[Entropy, Vol. 16, Pages 3769-3792: On Conservation Equation Combinations and Closure Relations]]>
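The majorization partial order on integer partitions that underlies the YDL is easy to check directly: p majorizes q when every prefix sum of p dominates the corresponding prefix sum of q. A minimal sketch (partitions written as non-increasing lists with equal sums; the function name is an assumption):

```python
def majorizes(p, q):
    """True if partition p majorizes partition q (both non-increasing,
    same total): every prefix sum of p is >= that of q."""
    n = max(len(p), len(q))
    pp = list(p) + [0] * (n - len(p))   # pad with zeros to equal length
    qq = list(q) + [0] * (n - len(q))
    sp = sq = 0
    for a, b in zip(pp, qq):
        sp += a
        sq += b
        if sp < sq:
            return False
    return True

# For N = 6: [4, 2] majorizes [2, 2, 2], but [3, 3] and [4, 1, 1]
# are incomparable: neither majorizes the other
print(majorizes([4, 2], [2, 2, 2]))   # True
print(majorizes([3, 3], [4, 1, 1]))   # False
print(majorizes([4, 1, 1], [3, 3]))   # False
```

Incomparable pairs such as [3, 3] and [4, 1, 1] are exactly what makes mixing character a partially ordered quality rather than a single number, and counting such incomparabilities is the idea behind the complexity measure the abstract proposes.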
http://www.mdpi.com/1099-4300/16/7/3769
Fundamental conservation equations for the mass, momentum and energy of chemical species can be combined with thermodynamic relations to obtain secondary forms, such as conservation equations for phases, an internal energy balance and a mechanical energy balance. In fact, the possible secondary forms are infinite in number and depend on the criteria used in determining which species-based equations to employ and how to combine them. If one uses these secondary forms in developing an entropy inequality to be used in formulating closure relations, care must be taken to ensure that the appropriate equations are used, or problematic results can develop for multispecies systems. We show here that the use of the fundamental forms minimizes the chance of an erroneous formulation in terms of secondary forms and also provides guidance as to which secondary forms should be used if one takes them as a starting point.
Entropy 2014-07-07, Vol. 16, Issue 7, Article, Pages 3769-3792, ISSN 1099-4300, doi: 10.3390/e16073769. William Gray, Amanda Dye
<![CDATA[Entropy, Vol. 16, Pages 3754-3768: Maximum Entropy in Drug Discovery]]>
http://www.mdpi.com/1099-4300/16/7/3754
Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds for treating various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Entropy 2014-07-07, Vol. 16, Issue 7, Review, Pages 3754-3768, ISSN 1099-4300, doi: 10.3390/e16073754. Chih-Yuan Tseng, Jack Tuszynski
<![CDATA[Entropy, Vol. 16, Pages 3732-3753: On the Connections of Generalized Entropies With Shannon and Kolmogorov–Sinai Entropies]]>
http://www.mdpi.com/1099-4300/16/7/3732
We consider the concept of generalized Kolmogorov–Sinai entropy, where an arbitrary concave function defined on the unit interval, vanishing at the origin, replaces the Shannon entropy function. Under mild assumptions on this function, we show that this isomorphism invariant is linearly dependent on the Kolmogorov–Sinai entropy.Entropy2014-07-03167Article10.3390/e16073732373237531099-43002014-07-03doi: 10.3390/e16073732Fryderyk Falniowski<![CDATA[Entropy, Vol. 16, Pages 3710-3731: The Role of Vegetation on the Ecosystem Radiative Entropy Budget and Trends Along Ecological Succession]]>
http://www.mdpi.com/1099-4300/16/7/3710
Ecosystem entropy production is predicted to increase along ecological succession and approach a state of maximum entropy production, but few studies have bridged the gap between theory and data. Here, we explore radiative entropy production in terrestrial ecosystems using measurements from 64 Free/Fair-Use sites in the FLUXNET database, including a successional chronosequence in the Duke Forest in the southeastern United States. Ecosystem radiative entropy production increased then decreased as succession progressed in the Duke Forest ecosystems, and did not exceed 95% of the calculated empirical maximum entropy production in the FLUXNET study sites. Forest vegetation, especially evergreen needleleaf forests characterized by low shortwave albedo and close coupling to the atmosphere, had a significantly higher ratio of radiative entropy production to the empirical maximum entropy production than did croplands and grasslands. Our results demonstrate that ecosystems approach, but do not reach, maximum entropy production and that the relationship between succession and entropy production depends on vegetation characteristics. Future studies should investigate how natural disturbances and anthropogenic management—especially the tendency to shift vegetation to an earlier successional state—alter energy flux and entropy production at the surface-atmosphere interface.Entropy2014-07-03167Article10.3390/e16073710371037311099-43002014-07-03doi: 10.3390/e16073710Paul StoyHua LinKimberly NovickMario SiqueiraJehn-Yih Juang<![CDATA[Entropy, Vol. 16, Pages 3689-3709: Searching for Conservation Laws in Brain Dynamics—BOLD Flux and Source Imaging]]>
http://www.mdpi.com/1099-4300/16/7/3689
Blood-oxygen-level-dependent (BOLD) imaging is the most important noninvasive tool to map human brain function. It relies on local blood-flow changes controlled by neurovascular coupling effects, usually in response to some cognitive or perceptual task. In this contribution we ask whether the spatiotemporal dynamics of the BOLD signal can be modeled by a conservation law. In analogy to the description of physical laws, which often can be derived from some underlying conservation law, identification of conservation laws in the brain could lead to new models for the functional organization of the brain. Our model is independent of the nature of the conservation law, but we discuss possible hints and motivations for conservation laws. For example, globally limited blood supply and local competition between brain regions for blood might restrict the large scale BOLD signal in certain ways that could be observable. One proposed selective pressure for the evolution of such conservation laws is the closed volume of the skull limiting the expansion of brain tissue by increases in blood volume. These ideas are demonstrated on a mental motor imagery fMRI experiment, in which functional brain activation was mapped in a group of volunteers imagining themselves swimming. In order to search for local conservation laws during this complex cognitive process, we derived maps of quantities resulting from spatial interaction of the BOLD amplitudes. Specifically, we mapped fluxes and sources of the BOLD signal, terms that would appear in a description by a continuity equation. Although this analysis of a single experiment cannot provide final answers, some results seem non-trivial. For example, we found that during the task the group BOLD flux covered more widespread regions than those identified by conventional BOLD mapping and was always increasing.
It is our hope that these results motivate more work towards the search for conservation laws in neuroimaging experiments or at least towards imaging procedures based on spatial interactions of signals. The payoff could be new models for the dynamics of the healthy brain or more sensitive clinical imaging approaches, respectively.Entropy2014-07-03167Article10.3390/e16073689368937091099-43002014-07-03doi: 10.3390/e16073689Henning VossNicholas Schiff<![CDATA[Entropy, Vol. 16, Pages 3670-3688: Extending the Extreme Physical Information to Universal Cognitive Models via a Confident Information First Principle]]>
http://www.mdpi.com/1099-4300/16/7/3670
The principle of extreme physical information (EPI) can be used to derive many known laws and distributions in theoretical physics by extremizing the physical information loss K, i.e., the difference between the observed Fisher information I and the intrinsic information bound J of the physical phenomenon being measured. However, for complex cognitive systems of high dimensionality (e.g., human language processing and image recognition), the information bound J can be far larger than I (J ≫ I) due to insufficient observation, which leads to serious over-fitting problems in the derivation of cognitive models. Moreover, no established exact invariance principle gives rise to the information bound in universal cognitive systems. This limits the direct application of EPI. To narrow the gap between I and J, in this paper, we propose a confident-information-first (CIF) principle to lower the information bound J by preserving confident parameters and ruling out unreliable or noisy parameters in the probability density function being measured. The confidence of each parameter can be assessed by its contribution to the expected Fisher information distance between the physical phenomenon and its observations. In addition, given a specific parametric representation, this contribution can often be directly assessed by the Fisher information, which establishes a connection with the inverse variance of any unbiased estimate for the parameter via the Cramér–Rao bound. We then consider the dimensionality reduction in the parameter spaces of binary multivariate distributions. We show that the single-layer Boltzmann machine without hidden units (SBM) can be derived using the CIF principle.
An illustrative experiment is conducted to show how the CIF principle improves the density estimation performance.Entropy2014-07-01167Article10.3390/e16073670367036881099-43002014-07-01doi: 10.3390/e16073670Xiaozhao ZhaoYuexian HouDawei SongWenjie Li<![CDATA[Entropy, Vol. 16, Pages 3655-3669: An Estimation of the Entropy for a Rayleigh Distribution Based on Doubly-Generalized Type-II Hybrid Censored Samples]]>
http://www.mdpi.com/1099-4300/16/7/3655
In this paper, based on a doubly generalized Type II hybrid censored sample, the maximum likelihood estimators (MLEs), the approximate MLE and the Bayes estimator for the entropy of the Rayleigh distribution are derived. We compare the entropy estimators’ root mean squared error (RMSE), bias and Kullback–Leibler divergence values. The simulation procedure is repeated 10,000 times for sample sizes n = 10, 20, 40 and 100 and various doubly generalized Type II hybrid censoring schemes. Finally, a real data set is analyzed for illustrative purposes.Entropy2014-07-01167Article10.3390/e16073655365536691099-43002014-07-01doi: 10.3390/e16073655Youngseuk ChoHokeun SunKyeongjun Lee<![CDATA[Entropy, Vol. 16, Pages 3635-3654: A Maximum Entropy Fixed-Point Route Choice Model for Route Correlation]]>
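For a complete (uncensored) sample, the entropy of a Rayleigh(σ) distribution has the closed form h = 1 + ln(σ/√2) + γ/2, with γ the Euler–Mascheroni constant, so a simple plug-in estimator evaluates this formula at the MLE of σ. A minimal Python sketch of that uncensored baseline (the paper's censored-sample MLEs and Bayes estimators are more involved; function names here are illustrative):

```python
import math
import random

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def rayleigh_entropy(sigma):
    """Closed-form differential entropy of Rayleigh(sigma):
    h = 1 + ln(sigma / sqrt(2)) + gamma / 2."""
    return 1.0 + math.log(sigma / math.sqrt(2.0)) + EULER_GAMMA / 2.0

def rayleigh_mle_sigma(sample):
    """MLE of sigma from a complete sample: sigma_hat^2 = sum(x^2) / (2n)."""
    return math.sqrt(sum(x * x for x in sample) / (2.0 * len(sample)))

def plug_in_entropy(sample):
    """Plug-in entropy estimate: the closed form evaluated at the MLE."""
    return rayleigh_entropy(rayleigh_mle_sigma(sample))

# Demo via inverse-transform sampling: X = sigma * sqrt(-2 ln U) ~ Rayleigh(sigma).
random.seed(1)
sigma = 2.0
sample = [sigma * math.sqrt(-2.0 * math.log(random.random()))
          for _ in range(5000)]
true_h = rayleigh_entropy(sigma)
est_h = plug_in_entropy(sample)
```

With 5000 observations, the plug-in estimate typically lands within a few hundredths of the true entropy; under censoring, the estimators compared in the paper replace this simple MLE.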
http://www.mdpi.com/1099-4300/16/7/3635
In this paper we present a stochastic route choice model for transit networks that explicitly addresses route correlation due to overlapping alternatives. The model is based on a multi-objective mathematical programming problem, the optimality conditions of which generate an extension to the Multinomial Logit models. The proposed model considers a fixed point problem for treating correlations between routes, which can be solved iteratively. We estimated the new model on the Santiago (Chile) Metro network and compared the results with other route choice models that can be found in the literature. The new model has better explanatory and predictive power than many other alternative models, correctly capturing the correlation factor. Our methodology can be extended to private transport networks.Entropy2014-06-30167Article10.3390/e16073635363536541099-43002014-06-30doi: 10.3390/e16073635Louis de GrangeSebastián RaveauFelipe González<![CDATA[Entropy, Vol. 16, Pages 3605-3634: Entropy Evolution and Uncertainty Estimation with Dynamical Systems]]>
http://www.mdpi.com/1099-4300/16/7/3605
This paper presents a comprehensive introduction and systematic derivation of the evolutionary equations for absolute entropy H and relative entropy D, some of which exist sporadically in the literature in different forms under different subjects, within the framework of dynamical systems. In general, both H and D are dissipated, and the dissipation bears a form reminiscent of the Fisher information; in the absence of stochasticity, dH/dt is connected to the rate of phase space expansion, and D stays invariant, i.e., the separation of two probability density functions is always conserved. These formulas are validated with linear systems, and put to application with the Lorenz system and a large-dimensional stochastic quasi-geostrophic flow problem. In the Lorenz case, H falls at a constant rate with time, implying that H will eventually become negative, a situation beyond the capability of commonly used computational techniques such as coarse-graining and bin counting. For the stochastic flow problem, it is first reduced to a computationally tractable low-dimensional system, using a reduced model approach, and then handled through ensemble prediction. Both the Lorenz system and the stochastic flow system are examples of self-organization in the light of uncertainty reduction. The latter particularly shows that stochasticity may sometimes actually enhance the self-organization process.Entropy2014-06-30167Article10.3390/e16073605360536341099-43002014-06-30doi: 10.3390/e16073605X. Liang<![CDATA[Entropy, Vol. 16, Pages 3590-3604: Normalized Expected Utility-Entropy Measure of Risk]]>
http://www.mdpi.com/1099-4300/16/7/3590
Yang and Qiu proposed an expected utility-entropy (EU-E) measure of risk, which reflects an individual’s intuitive attitude toward risk. Luce et al. have derived the numerical representations under behavioral axioms about preference orderings among gambles and their joint receipt, which further demonstrates the reasonableness of the EU-E decision model as a normative one. In this paper, combining normalized expected utility and entropy, we improve the EU-E measure of risk and decision model, and then propose the normalized EU-E measure of risk and decision model. The normalized EU-E measure of risk has some normative properties under certain conditions. Moreover, the normalized EU-E decision model can be a proper descriptive model to some extent. Using this model, two cases of common ratio effect and common consequence effect, which are examples of certainty effects, can be explained in an intuitive way.Entropy2014-06-30167Article10.3390/e16073590359036041099-43002014-06-30doi: 10.3390/e16073590Jiping YangWanhua Qiu<![CDATA[Entropy, Vol. 16, Pages 3573-3589: Simulation of Entropy Generation under Stall Conditions in a Centrifugal Fan]]>
http://www.mdpi.com/1099-4300/16/7/3573
Rotating stalls are generally the first instability met in turbomachinery, before surges. This 3D phenomenon is characterized by one or more stalled flow cells which rotate at a fraction of the impeller speed. The goal of the present work is to shed some light on the entropy generation in a centrifugal fan under rotating stall conditions. A numerical simulation of entropy generation is carried out with the ANSYS Fluent software, which solves the Navier-Stokes equations, supplemented by user-defined functions (UDFs). The entropy generation characteristics in the centrifugal fan for five typical conditions are presented and discussed: the design condition, the occurrence and development of stall inception, and rotating stall at two throttle coefficients. The results show that the entropy generation increases after the occurrence of stall inception. The high entropy generation areas move along the circumferential and axial directions, and finally merge into one stall cell. The entropy generation rate during circumferential propagation of the stall cell is also discussed, showing that the entropy generation history is similar to sine curves in the impeller and volute, and the volute tongue has a great influence on entropy generation in the centrifugal fan.Entropy2014-06-30167Article10.3390/e16073573357335891099-43002014-06-30doi: 10.3390/e16073573Lei ZhangJinhua LangKuan JiangSongling Wang<![CDATA[Entropy, Vol. 16, Pages 3552-3572: Duality of Maximum Entropy and Minimum Divergence]]>
http://www.mdpi.com/1099-4300/16/7/3552
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.Entropy2014-06-26167Article10.3390/e16073552355235721099-43002014-06-26doi: 10.3390/e16073552Shinto EguchiOsamu KomoriAtsumi Ohara<![CDATA[Entropy, Vol. 16, Pages 3537-3551: Hybrid Quantum-Classical Protocol for Storage and Retrieval of Discrete-Valued Information]]>
http://www.mdpi.com/1099-4300/16/6/3537
In this paper we present a hybrid (i.e., quantum-classical) adaptive protocol for the storage and retrieval of discrete-valued information. The purpose of this paper is to introduce a procedure that exhibits how to store and retrieve unanticipated information values by using a quantum property, that of using different vector space bases for preparation and measurement of quantum states. This simple idea leads to an interesting old wish in Artificial Intelligence: the development of computer systems that can incorporate new knowledge on a real-time basis just by hardware manipulation.Entropy2014-06-24166Article10.3390/e16063537353735511099-43002014-06-24doi: 10.3390/e16063537Abdullah IliyasuSalvador Venegas-AndracaFei YanAhmed Sayed<![CDATA[Entropy, Vol. 16, Pages 3482-3536: Entropy in the Critical Zone: A Comprehensive Review]]>
http://www.mdpi.com/1099-4300/16/6/3482
Thermodynamic entropy was initially proposed by Clausius in 1865. Since then it has been implemented in the analysis of different systems, and is seen as a promising concept to understand the evolution of open systems in non-equilibrium conditions. Information entropy was proposed by Shannon in 1948, and has become an important concept to measure information in different systems. Both thermodynamic entropy and information entropy have been extensively applied in different fields related to the Critical Zone, such as hydrology, ecology, pedology, and geomorphology. In this study, we review the most important applications of these concepts in those fields, including how they are calculated, and how they have been utilized to analyze different processes. We then synthesize the link between thermodynamic and information entropies in the light of energy dissipation and organizational patterns, and discuss how this link may be used to enhance the understanding of the Critical Zone.Entropy2014-06-24166Review10.3390/e16063482348235361099-43002014-06-24doi: 10.3390/e16063482Juan QuijanoHenry Lin<![CDATA[Entropy, Vol. 16, Pages 3471-3481: Application of the Generalized Work Relation for an N-level Quantum System]]>
http://www.mdpi.com/1099-4300/16/6/3471
An efficient periodic operation to obtain the maximum work from a nonequilibrium initial state in an N–level quantum system is shown. Each cycle consists of a stabilization process followed by an isentropic restoration process. The instantaneous time limit can be taken in the stabilization process from the nonequilibrium initial state to a stable passive state. In the restoration process that preserves the passive state a minimum period is needed to satisfy the uncertainty relation between energy and time. An efficient quantum feedback control in a symmetric two–level quantum system connected to an energy source is proposed.Entropy2014-06-23166Letter10.3390/e16063471347134811099-43002014-06-23doi: 10.3390/e16063471Junichi IshikawaKazuma TakaraHiroshi-H. HasegawaDean Driebe<![CDATA[Entropy, Vol. 16, Pages 3434-3470: Some Trends in Quantum Thermodynamics]]>
http://www.mdpi.com/1099-4300/16/6/3434
Traditional answers to what the 2nd Law is are well known. Some are based on the microstate of a system wandering rapidly through all accessible phase space, while others are based on the idea of a system occupying an initial multitude of states due to the inevitable imperfections of measurements that then effectively, in a coarse grained manner, grow in time (mixing). What has emerged are two somewhat less traditional approaches from which it is said that the 2nd Law emerges, namely, that of the theory of quantum open systems and that of the theory of typicality. These are the two principal approaches, which form the basis of what today has come to be called quantum thermodynamics. However, their dynamics remains strictly linear and unitary, and, as a number of recent publications have emphasized, “testing the unitary propagation of pure states alone cannot rule out a nonlinear propagation of mixtures”. Thus, a non-traditional approach to capturing such a propagation would be one which complements the postulates of QM by the 2nd Law of thermodynamics, resulting in a possibly meaningful, nonlinear dynamics. An unorthodox approach, which does just that, is intrinsic quantum thermodynamics and its mathematical framework, steepest-entropy-ascent quantum thermodynamics. The latter has evolved into an effective tool for modeling the dynamics of reactive and non-reactive systems at atomistic scales. It is the usefulness of this framework in the context of quantum thermodynamics as well as the theory of typicality which are discussed here in some detail. A brief discussion of some other trends such as those related to work, work extraction, and fluctuation theorems is also presented. Entropy2014-06-23166Article10.3390/e16063434343434701099-43002014-06-23doi: 10.3390/e16063434Michael von SpakovskyJochen Gemmer<![CDATA[Entropy, Vol. 16, Pages 3416-3433: Identifying the Coupling Structure in Complex Systems through the Optimal Causation Entropy Principle]]>
http://www.mdpi.com/1099-4300/16/6/3416
Inferring the coupling structure of complex systems from time series data in general by means of statistical and information-theoretic techniques is a challenging problem in applied science. The reliability of statistical inferences requires the construction of suitable information-theoretic measures that take into account both direct and indirect influences, manifest in the form of information flows, between the components within the system. In this work, we present an application of the optimal causation entropy (oCSE) principle to identify the coupling structure of a synthetic biological system, the repressilator. Specifically, when the system reaches an equilibrium state, we use a stochastic perturbation approach to extract time series data that approximate a linear stochastic process. Then, we present and jointly apply the aggregative discovery and progressive removal algorithms based on the oCSE principle to infer the coupling structure of the system from the measured data. Finally, we show that the success rate of our coupling inferences not only improves with the amount of available data, but it also increases with a higher frequency of sampling and is especially immune to false positives.Entropy2014-06-20166Article10.3390/e16063416341634331099-43002014-06-20doi: 10.3390/e16063416Jie SunCarlo CafaroErik Bollt<![CDATA[Entropy, Vol. 16, Pages 3401-3415: A Maximum Entropy Method for a Robust Portfolio Problem]]>
http://www.mdpi.com/1099-4300/16/6/3401
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.Entropy2014-06-20166Article10.3390/e16063401340134151099-43002014-06-20doi: 10.3390/e16063401Yingying XuZhuwu WuLong JiangXuefeng Song<![CDATA[Entropy, Vol. 16, Pages 3379-3400: Coarse Dynamics for Coarse Modeling: An Example From Population Biology]]>
http://www.mdpi.com/1099-4300/16/6/3379
Networks have become a popular way to concisely represent complex nonlinear systems where the interactions and parameters are imprecisely known. One challenge is how best to describe the associated dynamics, which can exhibit complicated behavior sensitive to small changes in parameters. A recently developed computational approach that we refer to as a database for dynamics provides a robust and mathematically rigorous description of global dynamics over large ranges of parameter space. To demonstrate the potential of this approach we consider two classical age-structured population models that share the same network diagram and have a similar nonlinear overcompensatory term, but nevertheless yield different patterns of qualitative behavior as a function of parameters. Using a generalization of these models we relate the different structure of the dynamics that are observed in the context of biologically relevant questions such as stable oscillations in populations, bistability, and permanence.Entropy2014-06-19166Article10.3390/e16063379337934001099-43002014-06-19doi: 10.3390/e16063379Justin BushKonstantin Mischaikow<![CDATA[Entropy, Vol. 16, Pages 3357-3378: Effects of Anticipation in Individually Motivated Behaviour on Survival and Control in a Multi-Agent Scenario with Resource Constraints]]>
http://www.mdpi.com/1099-4300/16/6/3357
Self-organization and survival are inextricably bound to an agent’s ability to control and anticipate its environment. Here we assess both skills when multiple agents compete for a scarce resource. Drawing on insights from psychology, microsociology and control theory, we examine how different assumptions about the behaviour of an agent’s peers in the anticipation process affect subjective control and survival strategies. To quantify control and drive behaviour, we use the recently developed information-theoretic quantity of empowerment with the principle of empowerment maximization. In two experiments involving extensive simulations, we show that agents develop risk-seeking, risk-averse and mixed strategies, which correspond to greedy, parsimonious and mixed behaviour. Although the principle of empowerment maximization is highly generic, the emerging strategies are consistent with what one would expect from rational individuals with dedicated utility models. Our results support empowerment maximization as a universal drive for guided self-organization in collective agent systems.Entropy2014-06-19166Article10.3390/e16063357335733781099-43002014-06-19doi: 10.3390/e16063357Christian GuckelsbergerDaniel Polani<![CDATA[Entropy, Vol. 16, Pages 3329-3356: Speeding up Derivative Configuration from Product Platforms]]>
http://www.mdpi.com/1099-4300/16/6/3329
To compete in the global marketplace, manufacturers try to differentiate their products by focusing on individual customer needs. Fulfilling this goal requires that companies shift from mass production to mass customization. Under this approach, a generic architecture, named product platform, is designed to support the derivation of customized products through a configuration process that determines which components the product comprises. When a customer configures a derivative, typically not every combination of available components is valid. To guarantee that all dependencies and incompatibilities among the derivative constituent components are satisfied, automated configurators are used. Flexible product platforms provide a large number of interrelated components, and so the configuration of all but trivial derivatives involves considerable effort to select which components the derivative should include. Our approach alleviates that effort by speeding up the derivative configuration using a heuristic based on the information theory concept of entropy.Entropy2014-06-18166Article10.3390/e16063329332933561099-43002014-06-18doi: 10.3390/e16063329Ruben HeradioDavid Fernandez-AmorosHector Perez-MoragoAntonio Adan<![CDATA[Entropy, Vol. 16, Pages 3315-3328: A Novel Block-Based Scheme for Arithmetic Coding]]>
http://www.mdpi.com/1099-4300/16/6/3315
It is well-known that for a given sequence, its optimal codeword length is fixed. Many coding schemes have been proposed to make the codeword length as close to the optimal value as possible. In this paper, a new block-based coding scheme operating on the subsequences of a source sequence is proposed. It is proved that the optimal codeword lengths of the subsequences are not larger than that of the given sequence. Experimental results using arithmetic coding are presented.Entropy2014-06-18166Article10.3390/e16063315331533281099-43002014-06-18doi: 10.3390/e16063315Qi-Bin HouChong Fu<![CDATA[Entropy, Vol. 16, Pages 3302-3314: A Bayesian Probabilistic Framework for Rain Detection]]>
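The stated property of subsequence code lengths can be illustrated with ideal (zeroth-order empirical entropy) code lengths: by concavity of entropy, cutting a sequence into blocks and coding each block against its own empirical distribution never increases the total ideal length. A sketch of that comparison, ignoring the side information needed to describe each block's distribution, which a practical scheme such as the paper's must account for:

```python
import math
from collections import Counter

def empirical_code_length(seq):
    """Ideal (zeroth-order) code length in bits: n * H(empirical distribution)."""
    n = len(seq)
    return sum(-c * math.log2(c / n) for c in Counter(seq).values())

def split_code_length(seq, k):
    """Total ideal length when seq is cut into k contiguous blocks,
    each coded against its own empirical distribution."""
    size = math.ceil(len(seq) / k)
    blocks = [seq[i:i + size] for i in range(0, len(seq), size)]
    return sum(empirical_code_length(b) for b in blocks)

# A source whose statistics drift: block-wise coding exploits the drift.
seq = "abracadabra" * 10 + "zzzz" * 25
whole = empirical_code_length(seq)
split = split_code_length(seq, 4)
# By concavity of entropy, split <= whole for any partition.
```

The inequality is strict whenever the blocks have different empirical distributions, which is exactly when a block-based scheme can gain.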
http://www.mdpi.com/1099-4300/16/6/3302
Heavy rain deteriorates the video quality of outdoor imaging equipment. To improve video clarity, image-based and sensor-based methods are adopted for rain detection. In earlier literature, image-based detection methods fall into spatial-based and temporal-based categories. In this paper, we propose a new image-based method by exploring united spatio-temporal constraints in a Bayesian framework. In our framework, rain temporal motion is assumed to be Pathological Motion (PM), which is better suited to the time-varying character of rain streaks. Temporal displaced-frame discontinuity and a spatial Gaussian mixture model are utilized throughout the framework. An iterated expectation-maximization method is used for Gaussian parameter estimation. Pixel states are estimated by an iterated optimization method in the Bayesian probability formulation. The experimental results highlight the advantage of our method in rain detection.Entropy2014-06-17166Article10.3390/e16063302330233141099-43002014-06-17doi: 10.3390/e16063302Chen YaoCi WangLijuan HongYunfei Cheng<![CDATA[Entropy, Vol. 16, Pages 3273-3301: On Clustering Histograms with k-Means by Using Mixed α-Divergences]]>
http://www.mdpi.com/1099-4300/16/6/3273
Clustering sets of histograms has become popular thanks to the success of the generic method of bag-of-X used in text categorization and in visual categorization applications. In this paper, we investigate the use of a parametric family of distortion measures, called the α-divergences, for clustering histograms. Since it usually makes sense to deal with symmetric divergences in information retrieval systems, we symmetrize the α-divergences using the concept of mixed divergences. First, we present a novel extension of k-means clustering to mixed divergences. Second, we extend the k-means++ seeding to mixed α-divergences and report a guaranteed probabilistic bound. Finally, we describe a soft clustering technique for mixed α-divergences.Entropy2014-06-17166Article10.3390/e16063273327333011099-43002014-06-17doi: 10.3390/e16063273Frank NielsenRichard NockShun-ichi Amari<![CDATA[Entropy, Vol. 16, Pages 3257-3272: Density Reconstructions with Errors in the Data]]>
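As a concrete sketch, Amari's α-divergence between positive arrays p and q (α ≠ ±1) is D_α(p:q) = 4/(1−α²) Σ_i [ (1−α)/2 p_i + (1+α)/2 q_i − p_i^{(1−α)/2} q_i^{(1+α)/2} ], and a simple λ-weighted sum of the two orientations gives one symmetrized form. The k-means assignment step under such a divergence might look as follows (the weighting convention and function names are assumptions, and the centroid update is omitted; the paper's mixed divergences and seeding are more general):

```python
def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between positive arrays (alpha != +/-1).
    Nonnegative by the weighted AM-GM inequality, zero iff p == q."""
    a, b = (1.0 - alpha) / 2.0, (1.0 + alpha) / 2.0
    s = sum(a * pi + b * qi - (pi ** a) * (qi ** b) for pi, qi in zip(p, q))
    return 4.0 / (1.0 - alpha * alpha) * s

def mixed_divergence(p, q, alpha, lam=0.5):
    """A lambda-weighted symmetrization of the two sided alpha-divergences
    (one simple convention; see the paper for the general definition)."""
    return (lam * alpha_divergence(p, q, alpha)
            + (1.0 - lam) * alpha_divergence(q, p, alpha))

def assign(histograms, centroids, alpha, lam=0.5):
    """k-means assignment step: each histogram goes to the centroid
    that is closest under the mixed alpha-divergence."""
    return [min(range(len(centroids)),
                key=lambda j: mixed_divergence(h, centroids[j], alpha, lam))
            for h in histograms]

hists = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]
cents = [[0.75, 0.15, 0.1], [0.1, 0.2, 0.7]]
labels = assign(hists, cents, alpha=0.5)
```

For λ = 1 or λ = 0 the mixed divergence reduces to one of the asymmetric sided divergences, so the same assignment code covers both the symmetrized and the sided variants.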
http://www.mdpi.com/1099-4300/16/6/3257
The maximum entropy method was originally proposed as a variational technique to determine probability densities from the knowledge of a few expected values. The applications of the method beyond its original role in statistical physics are manifold. An interesting feature of the method is its potential to incorporate errors in the data. Here, we examine two possible ways of doing that. The two approaches have different intuitive interpretations, and one of them allows for error estimation. Our motivating example comes from the field of risk analysis, but the statement of the problem might as well come from any branch of applied sciences. We apply the methodology to a problem consisting of the determination of a probability density from a few values of its numerically-determined Laplace transform. This problem can be mapped onto a problem consisting of the determination of a probability density on [0, 1] from the knowledge of a few of its fractional moments up to some measurement errors stemming from insufficient data.Entropy2014-06-12166Article10.3390/e16063257325732721099-43002014-06-12doi: 10.3390/e16063257Erika Gomes-GonçalvesHenryk GzylSilvia Mayoral<![CDATA[Entropy, Vol. 16, Pages 3234-3256: On Spatial Covariance, Second Law of Thermodynamics and Configurational Forces in Continua]]>
http://www.mdpi.com/1099-4300/16/6/3234
This paper studies the transformation properties of the spatial balance of energy equation for a dissipative material, under the superposition of arbitrary spatial diffeomorphisms. The study reveals that for a dissipative material the transformed energy balance equation has some non-standard terms in it. These terms are related to a system of microforces with its own balance equation. These microforces act during the superposition of the spatial diffeomorphism, because of the dissipative properties of the material. Moreover, it is shown that for the case in question the stress tensor is additively decomposed into a conventional part given by the standard Doyle-Ericksen formula and a non-conventional one which is related to changes in the material internal structure in the course of deformation. On the basis of the second law of thermodynamics and the integrability condition of a Pfaffian form it is shown that the non-conventional part of the stress tensor can be related not only to dissipative but also to conservative response. Further insight into this conservative response is provided by exploiting the invariance properties of the balance of energy equation within the context of the material intrinsic “physical” metric concept. In this case, it is shown that the assumption of spatial covariance yields the standard conservation and balance laws of classical mechanics but it does not yield the standard Doyle-Ericksen formula. In fact, the Doyle-Ericksen formula has an additional term in it, which is related directly to the evolution of the material internal structure, as it is determined by the (time) evolution of the material metric in the spatial configuration. A formal connection between this term and the Eshelby energy-momentum tensor is derived as well.Entropy2014-06-10166Article10.3390/e16063234323432561099-43002014-06-10doi: 10.3390/e16063234Vassilis PanoskaltsisDimitris Soldatos<![CDATA[Entropy, Vol. 16, Pages 3207-3233: On the Fisher Metric of Conditional Probability Polytopes]]>
http://www.mdpi.com/1099-4300/16/6/3207
We consider three different approaches to define natural Riemannian metrics on polytopes of stochastic matrices. First, we define a natural class of stochastic maps between these polytopes and give a metric characterization of Chentsov type in terms of invariance with respect to these maps. Second, we consider the Fisher metric defined on arbitrary polytopes through their embeddings as exponential families in the probability simplex. We show that these metrics can also be characterized by an invariance principle with respect to morphisms of exponential families. Third, we consider the Fisher metric resulting from embedding the polytope of stochastic matrices in a simplex of joint distributions by specifying a marginal distribution. All three approaches result in slight variations of products of Fisher metrics. This is consistent with the nature of polytopes of stochastic matrices, which are Cartesian products of probability simplices. The first approach yields a scaled product of Fisher metrics; the second, a product of Fisher metrics; and the third, a product of Fisher metrics scaled by the marginal distribution. Entropy 2014, 16(6), 3207–3233; doi: 10.3390/e16063207; published 2014-06-06. Authors: Guido Montúfar, Johannes Rauh, Nihat Ay.
<![CDATA[Entropy, Vol. 16, Pages 3173-3206: Relative Entropy, Interaction Energy and the Nature of Dissipation]]>
http://www.mdpi.com/1099-4300/16/6/3173
Many thermodynamic relations involve inequalities, with equality if a process does not involve dissipation. In this article we provide equalities in which the dissipative contribution is shown to involve the relative entropy (a.k.a. Kullback-Leibler divergence). The processes considered are general time evolutions both in classical and quantum mechanics, and the initial state is sometimes thermal, sometimes partially so. By calculating a transport coefficient we show that indeed, at least in this case, the source of dissipation in that coefficient is the relative entropy. Entropy 2014, 16(6), 3173–3206; doi: 10.3390/e16063173; published 2014-06-06. Authors: Bernard Gaveau, Léo Granger, Michel Moreau, Lawrence Schulman.
<![CDATA[Entropy, Vol. 16, Pages 3149-3172: A Derivation of a Microscopic Entropy and Time Irreversibility From the Discreteness of Time]]>
http://www.mdpi.com/1099-4300/16/6/3149
The basic microscopic physical laws are time reversible. In contrast, the second law of thermodynamics, which is a macroscopic physical representation of the world, is able to describe irreversible processes in an isolated system through the change of entropy ΔS &gt; 0. The present manuscript attempts to bridge the microscopic physical world with its macroscopic one using an alternative approach to the statistical mechanics theory of Gibbs and Boltzmann. It is proposed that time is discrete, with a constant step size. Its consequence is the presence of time irreversibility at the microscopic level if the present force is of complex nature (F(r) ≠ const). In order to compare this discrete-time irreversible mechanics (for simplicity, a “classical”, single particle in a one-dimensional space is selected) with its classical Newton analog, time reversibility is reintroduced by scaling the time steps for any given time step n by the variable sn, leading to the Nosé-Hoover Lagrangian. The corresponding Nosé-Hoover Hamiltonian comprises a term Ndf kB T ln sn (kB the Boltzmann constant, T the temperature, and Ndf the number of degrees of freedom) which is defined as the microscopic entropy Sn at time point n multiplied by T. Upon ensemble averaging, this microscopic entropy Sn in equilibrium, for a system which does not have fast-changing forces, approximates its macroscopic counterpart known from thermodynamics. The presented derivation, with the resulting analogy between the ensemble-averaged microscopic entropy and its thermodynamic analog, suggests that the original description of the entropy by Boltzmann and Gibbs is just an ensemble averaging of the time-scaling variable sn, which in equilibrium is close to 1, but that the entropy […] Entropy 2014, 16(6), 3149–3172; doi: 10.3390/e16063149; published 2014-06-06. Author: Roland Riek.
<![CDATA[Entropy, Vol. 16, Pages 3136-3148: Minimum Entropy-Based Cascade Control for Governing Hydroelectric Turbines]]>
http://www.mdpi.com/1099-4300/16/6/3136
In this paper, an improved cascade control strategy is presented for hydro-turbine speed governors. Different from traditional proportional-integral-derivative (PID) control and model predictive control (MPC) strategies, the performance index of the outer controller is constructed by integrating the entropy and mean value of the tracking error with the constraints on control energy. The inner controller is implemented by a proportional controller. Compared with the conventional PID-P and MPC-P cascade control methods, the proposed cascade control strategy can effectively decrease fluctuations of hydro-turbine speed under non-Gaussian disturbance conditions in practical hydropower plants. Simulation results show the advantages of the proposed cascade control method. Entropy 2014, 16(6), 3136–3148; doi: 10.3390/e16063136; published 2014-06-05. Authors: Mifeng Ren, Di Wu, Jianhua Zhang, Man Jiang.
<![CDATA[Entropy, Vol. 16, Pages 3121-3135: Quantum Flows for Secret Key Distribution in the Presence of the Photon Number Splitting Attack]]>
http://www.mdpi.com/1099-4300/16/6/3121
Physical implementations of quantum key distribution (QKD) protocols, like the Bennett-Brassard (BB84), are forced to use attenuated coherent quantum states, because sources of single-photon states are not yet functional for QKD applications. However, when using attenuated coherent states, the relatively high rate of multi-photonic pulses introduces vulnerabilities that can be exploited by the photon number splitting (PNS) attack to break the quantum key. Some QKD protocols have been developed to be resistant to the PNS attack, like the decoy method, but those define a single photonic gain in the quantum channel. To overcome this limitation, we have developed a new QKD protocol, called ack-QKD, which is resistant to the PNS attack. Moreover, it uses attenuated quantum states, but defines two interleaved photonic quantum flows to detect the eavesdropper activity by means of the quantum photonic error gain (QPEG) or the quantum bit error rate (QBER). The physical implementation of the ack-QKD is similar to the well-known BB84 protocol. Entropy 2014, 16(6), 3121–3135; doi: 10.3390/e16063121; published 2014-06-05. Authors: Luis Lizama-Pérez, J. López, Eduardo De Carlos-López, Salvador Venegas-Andraca.
<![CDATA[Entropy, Vol. 16, Pages 3103-3120: Analysis and Optimization of a Compressed Air Energy Storage—Combined Cycle System]]>
http://www.mdpi.com/1099-4300/16/6/3103
Compressed air energy storage (CAES) is a commercial, utility-scale technology that provides long-duration energy storage with fast ramp rates and good part-load operation. It is a promising storage technology for balancing the large-scale penetration of renewable energies, such as wind and solar power, into electric grids. This study proposes a CAES-CC system, which is based on a conventional CAES combined with a steam turbine cycle by a waste heat boiler. Simulation and thermodynamic analysis are carried out on the proposed CAES-CC system. The electricity and heating rates of the proposed CAES-CC system are lower than those of the conventional CAES by 0.127 kWh/kWh and 0.338 kWh/kWh, respectively, because the CAES-CC system recycles high-temperature turbine-exhausting air. The overall efficiency of the CAES-CC system is improved by approximately 10% compared with that of the conventional CAES. In the CAES-CC system, compressing intercooler heat can keep the steam turbine on hot standby, thus improving the flexibility of CAES-CC. This study brings forward a new method for improving the efficiency of CAES and provides new ideas for integrating CAES with other electricity-generating modes. Entropy 2014, 16(6), 3103–3120; doi: 10.3390/e16063103; published 2014-06-04. Authors: Wenyi Liu, Linzhi Liu, Luyao Zhou, Jian Huang, Yuwen Zhang, Gang Xu, Yongping Yang.
<![CDATA[Entropy, Vol. 16, Pages 3074-3102: Information-Geometric Markov Chain Monte Carlo Methods Using Diffusions]]>
http://www.mdpi.com/1099-4300/16/6/3074
Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed. Entropy 2014, 16(6), 3074–3102; doi: 10.3390/e16063074; published 2014-06-03. Authors: Samuel Livingstone, Mark Girolami.
<![CDATA[Entropy, Vol. 16, Pages 3062-3073: Entropy Content During Nanometric Stick-Slip Motion]]>
http://www.mdpi.com/1099-4300/16/6/3062
To explore the existence of self-organization during friction, this paper considers the motion of all atoms in a system consisting of an Atomic Force Microscope metal tip sliding on a metal slab. The tip and the slab are set in relative motion with constant velocity. The vibrations of individual atoms with respect to that relative motion are obtained explicitly using Molecular Dynamics with Embedded Atom Method potentials. First, we obtain signatures of Self-Organized Criticality, in that the stick-slip jump force probability densities are power laws with exponents in the range (0.5, 1.5) for aluminum and copper. Second, we characterize the dynamical attractor by the entropy content of the overall atomic jittering. We find that in all cases friction minimizes the entropy, which makes a strong case for self-organization. Entropy 2014, 16(6), 3062–3073; doi: 10.3390/e16063062; published 2014-06-03. Authors: Paul Creeger, Fredy Zypman.
<![CDATA[Entropy, Vol. 16, Pages 3049-3061: Using Permutation Entropy to Measure the Changes in EEG Signals During Absence Seizures]]>
http://www.mdpi.com/1099-4300/16/6/3049
In this paper, we propose to use permutation entropy (PE) to explore whether the changes in electroencephalogram (EEG) data can effectively distinguish different phases in human absence epilepsy, i.e., the seizure-free, pre-seizure and seizure phases. Permutation entropy is applied to analyze the EEG data from these three phases, each containing 100 19-channel EEG epochs of 2 s duration. The experimental results show that the mean value of PE gradually decreases from the seizure-free to the seizure phase, and provide evidence that these three seizure phases in absence epilepsy can be effectively distinguished. Furthermore, our results strengthen the view that most frontal electrodes carry useful information and patterns that can help discriminate among different absence seizure phases. Entropy 2014, 16(6), 3049–3061; doi: 10.3390/e16063049; published 2014-05-30. Authors: Jing Li, Jiaqing Yan, Xianzeng Liu, Gaoxiang Ouyang.
<![CDATA[Entropy, Vol. 16, Pages 3026-3048: Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different]]>
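The permutation entropy referred to in the abstract above is the Bandt-Pompe ordinal measure; it can be sketched in a few lines. The helper below is an illustrative sketch, not the authors' code, and the embedding `order` and `delay` defaults are common choices rather than values taken from the article:

```python
from math import lgamma, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D sequence.

    Illustrative sketch; `order` and `delay` defaults are common
    choices, not parameters taken from the article.
    """
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = tuple(x[i + j * delay] for j in range(order))
        # Ordinal pattern: the ranking of the values inside the window.
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    # Shannon entropy of the pattern distribution, normalized by log(order!)
    # so that the result lies in [0, 1].
    h = -sum((c / n) * log(c / n) for c in counts.values())
    return h / lgamma(order + 1)

# A monotone ramp contains only one ordinal pattern, so its entropy is zero;
# noisier signals approach 1.
print(permutation_entropy(list(range(100))))
```

A lower PE thus reflects a more regular ordinal structure, which is the direction of change the abstract reports from the seizure-free phase to the seizure phase.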
http://www.mdpi.com/1099-4300/16/6/3026
We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and the prediction based on the binary regression model. Entropy 2014, 16(6), 3026–3048; doi: 10.3390/e16063026; published 2014-05-28. Authors: Keisuke Yano, Fumiyasu Komaki.
<![CDATA[Entropy, Vol. 16, Pages 3009-3025: Tsallis Wavelet Entropy and Its Application in Power Signal Analysis]]>
http://www.mdpi.com/1099-4300/16/6/3009
As a novel data mining approach, a wavelet entropy algorithm is used to perform entropy statistics on wavelet coefficients (or reconstructed signals) at various wavelet scales, on the basis of wavelet decomposition and entropy statistic theory. Shannon wavelet energy entropy, one kind of wavelet entropy algorithm, has been widely used in many areas since its introduction. However, since there is wavelet aliasing after the wavelet decomposition, and the information set of different-scale wavelet decomposition coefficients (or reconstructed signals) is non-additive to a certain extent, Shannon entropy, which is better suited to extensive systems, cannot perform accurate uncertainty statistics on the wavelet decomposition results. Therefore, the transient signal features are extracted incorrectly by using Shannon wavelet energy entropy. From these two aspects, the theoretical limitations and the negative effects of wavelet aliasing on extraction accuracy, the problems that exist in the feature extraction of transient signals by Shannon wavelet energy entropy are discussed in depth. Considering the defects of Shannon wavelet energy entropy, a novel wavelet entropy named Tsallis wavelet energy entropy is proposed by using Tsallis entropy instead of Shannon entropy, and it is applied to the feature extraction of transient signals in power systems. Theoretical derivation and experimental results show that, compared with Shannon wavelet energy entropy, Tsallis wavelet energy entropy can reduce the negative effects of wavelet aliasing on the accuracy of feature extraction and extract the transient signal features of power systems accurately. Entropy 2014, 16(6), 3009–3025; doi: 10.3390/e16063009; published 2014-05-27. Authors: Jikai Chen, Guoqing Li.
<![CDATA[Entropy, Vol. 16, Pages 2990-3008: Constraints of Compound Systems: Prerequisites for Thermodynamic Modeling Based on Shannon Entropy]]>
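The substitution described above amounts to replacing Shannon's functional with Tsallis' non-extensive one when scoring the sub-band energy distribution. A minimal sketch follows; the sub-band energies and the choice q = 2 are made-up illustrative values, not taken from the article:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1).

    The limit q -> 1 recovers the Shannon entropy.
    """
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Relative wavelet sub-band energies (illustrative values, not from the article).
energies = [4.0, 1.0, 0.5, 0.5]
total = sum(energies)
p = [e / total for e in energies]

print(shannon_entropy(p))         # Shannon wavelet energy entropy
print(tsallis_entropy(p, q=2.0))  # non-extensive (Tsallis) counterpart
```

The entropic index q controls how strongly the measure penalizes non-additivity across sub-bands, which is the lever the authors use against the aliasing effect.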
http://www.mdpi.com/1099-4300/16/6/2990
Thermodynamic modeling of extensive systems usually implicitly assumes the additivity of entropy. Furthermore, if this modeling is based on the concept of Shannon entropy, additivity of the latter function must also be guaranteed. In this case, the constituents of a thermodynamic system are treated as subsystems of a compound system, and the Shannon entropy of the compound system must be subjected to constrained maximization. The scope of this paper is to clarify the prerequisites for applying the concept of Shannon entropy and the maximum entropy principle to thermodynamic modeling of extensive systems. This is accomplished by investigating how the constraints of the compound system have to depend on mean values of the subsystems in order to ensure additivity. Two examples illustrate the basic ideas behind this approach, comprising the ideal gas model and condensed phase lattice systems as limiting cases of fluid phases. The paper is the first step towards developing a new approach for modeling interacting systems using the concept of Shannon entropy. Entropy 2014, 16(6), 2990–3008; doi: 10.3390/e16062990; published 2014-05-26. Authors: Martin Pfleger, Thomas Wallek, Andreas Pfennig.
<![CDATA[Entropy, Vol. 16, Pages 2959-2989: How to Determine Losses in a Flow Field: A Paradigm Shift towards the Second Law Analysis]]>
http://www.mdpi.com/1099-4300/16/6/2959
Assuming that CFD solutions will increasingly be used to characterize losses in terms of drag for external flows and head loss for internal flows, we suggest replacing single-valued data, like the drag force or a pressure drop, by field information about the losses. This information is gained when the entropy generation in the flow field is analyzed, an approach often called second law analysis (SLA), referring to the second law of thermodynamics. We show that this SLA approach is straightforward, systematic and helpful when it comes to the physical interpretation of the losses in a flow field. Various examples are given, including external and internal flows, two-phase flow, compressible flow and unsteady flow. Finally, we show that an energy transfer within a certain process can be put into a broader perspective by introducing the entropic potential of an energy. Entropy 2014, 16(6), 2959–2989; doi: 10.3390/e16062959; published 2014-05-26. Authors: Heinz Herwig, Bastian Schmandt.
<![CDATA[Entropy, Vol. 16, Pages 2944-2958: Information Geometric Complexity of a Trivariate Gaussian Statistical Model]]>
http://www.mdpi.com/1099-4300/16/6/2944
We evaluate the information geometric complexity of entropic motion on low-dimensional Gaussian statistical manifolds in order to quantify how difficult it is to make macroscopic predictions about systems in the presence of limited information. Specifically, we observe that the complexity of such entropic inferences depends not only on the amount of available information but also on the manner in which the pieces of information are correlated. Finally, we uncover that, for certain correlational structures, the impossibility of reaching the most favorable configuration from an entropic inference viewpoint seems to lead to an information geometric analog of the well-known frustration effect that occurs in statistical physics. Entropy 2014, 16(6), 2944–2958; doi: 10.3390/e16062944; published 2014-05-26. Authors: Domenico Felice, Carlo Cafaro, Stefano Mancini.
<![CDATA[Entropy, Vol. 16, Pages 2904-2943: Reaction Kinetics Path Based on Entropy Production Rate and Its Relevance to Low-Dimensional Manifolds]]>
http://www.mdpi.com/1099-4300/16/6/2904
The equation that approximately traces the trajectory in the concentration phase space of chemical kinetics is derived based on the rate of entropy production. The equation coincides with the true chemical kinetics equation to first order in a variable that characterizes the degree of quasi-equilibrium for each reaction, and the equation approximates the trajectory along at least the final part of the one-dimensional (1-D) manifold of true chemical kinetics that reaches equilibrium in concentration phase space. Besides the 1-D manifold, each higher-dimensional manifold of the trajectories given by the equation is an approximation to that of true chemical kinetics when the contour of the entropy production rate in the concentration phase space is not highly distorted, because the Jacobian and its eigenvectors for the equation are exactly the same as those of true chemical kinetics at equilibrium; however, the path or trajectory itself is not necessarily an approximation to that of true chemical kinetics in manifolds higher than 1-D. The equation is for the path of steepest descent that sufficiently accounts for the constraints inherent in chemical kinetics, such as element conservation, whereas the simple steepest-descent-path formulation, whose Jacobian is the Hessian of the entropy production rate, cannot even approximately reproduce any part of the 1-D manifold of true chemical kinetics, except for the special case where the eigenvector of the Hessian is nearly identical to that of the Jacobian of chemical kinetics. Entropy 2014, 16(6), 2904–2943; doi: 10.3390/e16062904; published 2014-05-26. Author: Shinji Kojima.
<![CDATA[Entropy, Vol. 16, Pages 2890-2903: Maximum Power of Thermally and Electrically Coupled Thermoelectric Generators]]>
http://www.mdpi.com/1099-4300/16/5/2890
In a recent work, we have reported a study on the figure of merit of a thermoelectric system composed of thermoelectric generators connected electrically and thermally in different configurations. In this work, we are interested in analyzing the output power delivered by a thermoelectric system for different arrays of thermoelectric materials in each configuration. Our study shows the impact of the array of thermoelectric materials on the output power of the composite system. We evaluate numerically the corresponding maximum output power for each configuration and determine the optimum array and configuration for maximum power. We compare our results with other recently reported studies. Entropy 2014, 16(5), 2890–2903; doi: 10.3390/e16052890; published 2014-05-23. Authors: Pablo Camacho-Medina, Miguel Olivares-Robles, Alexander Vargas-Almeida, Francisco Solorio-Ordaz.
<![CDATA[Entropy, Vol. 16, Pages 2869-2889: A Maximum Entropy Approach to Assess Debonding in Honeycomb aluminum Plates]]>
http://www.mdpi.com/1099-4300/16/5/2869
Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided, and data is processed in a period of time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data of an aluminum honeycomb panel under different damage scenarios. Entropy 2014, 16(5), 2869–2889; doi: 10.3390/e16052869; published 2014-05-23. Authors: Viviana Meruane, Valentina Fierro, Alejandro Ortiz-Bernardin.
<![CDATA[Entropy, Vol. 16, Pages 2850-2868: A Probabilistic Description of the Configurational Entropy of Mixing]]>
http://www.mdpi.com/1099-4300/16/5/2850
This work presents a formalism to calculate the configurational entropy of mixing based on the identification of non-interacting atomic complexes in the mixture and the calculation of their respective probabilities, instead of computing the number of atomic configurations in a lattice. The methodology is applied in order to develop a general analytical expression for the configurational entropy of mixing of interstitial solutions. The expression is valid for any interstitial concentration, is suitable for the treatment of interstitial short-range order (SRO) and can be applied to tetrahedral or octahedral interstitial solutions in any crystal lattice. The effect of the SRO of H on the structural properties of the Nb-H and bcc Zr-H solid solutions is studied using an accurate description of the configurational entropy. The methodology can also be applied to systems with no translational symmetry, such as liquids and amorphous materials. An expression for the configurational entropy of a granular system composed of equal-sized hard spheres is deduced. Entropy 2014, 16(5), 2850–2868; doi: 10.3390/e16052850; published 2014-05-23. Author: Jorge Garcés.
<![CDATA[Entropy, Vol. 16, Pages 2839-2849: Exact Test of Independence Using Mutual Information]]>
http://www.mdpi.com/1099-4300/16/5/2839
Using a recently discovered method for producing random symbol sequences with prescribed transition counts, we present an exact null hypothesis significance test (NHST) for mutual information between two random variables, the null hypothesis being that the mutual information is zero (i.e., independence). The exact tests reported in the literature assume that data samples for each variable are sequentially independent and identically distributed (iid). In general, time series data have dependencies (Markov structure) that violate this condition. The algorithm given in this paper is the first exact significance test of mutual information that takes into account the Markov structure. When the Markov order is not known or indefinite, an exact test is used to determine an effective Markov order. Entropy 2014, 16(5), 2839–2849; doi: 10.3390/e16052839; published 2014-05-23. Authors: Shawn Pethel, Daniel Hahs.
<![CDATA[Entropy, Vol. 16, Pages 2820-2838: Randomized Binary Consensus with Faulty Agents]]>
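To see what such a test improves on, consider the naive iid baseline: a plug-in mutual information estimate whose null distribution is obtained by independently shuffling one sequence. The sketch below implements only that baseline; unlike the article's exact test, the shuffle destroys any Markov structure, which is precisely the limitation the authors address:

```python
import random
from math import log

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in nats from a list of (x, y) samples."""
    n = len(pairs)
    joint, px, py = {}, {}, {}
    for x, y in pairs:
        joint[(x, y)] = joint.get((x, y), 0) + 1
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
    return sum((c / n) * log(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

def shuffle_test(pairs, trials=1000, seed=0):
    """Naive permutation NHST: p-value of the observed MI under iid shuffling.

    Illustrative baseline only; it assumes iid samples and ignores the
    Markov structure that the article's exact test preserves.
    """
    rng = random.Random(seed)
    xs, ys = [x for x, _ in pairs], [y for _, y in pairs]
    observed = mutual_information(pairs)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if mutual_information(list(zip(xs, ys))) >= observed:
            hits += 1
    return observed, (hits + 1) / (trials + 1)

data = [(i % 2, i % 2) for i in range(40)]  # perfectly dependent bits
mi, p = shuffle_test(data, trials=200)
print(mi, p)  # MI = log 2 nats; small p rejects independence
```

For Markov data, the shuffled surrogates here would no longer resemble the original process, inflating the false-positive rate; the prescribed-transition-count sequences of the article fix exactly that.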
http://www.mdpi.com/1099-4300/16/5/2820
This paper investigates self-organizing binary majority consensus disturbed by faulty nodes with random and persistent failure. We study consensus in ordered and random networks with noise, message loss and delays. Using computer simulations, we show that: (1) explicit randomization by noise, message loss and topology can increase robustness towards faulty nodes; (2) commonly-used faulty nodes with random failure inhibit consensus less than faulty nodes with persistent failure; and (3) in some cases, such randomly failing faulty nodes can even promote agreement. Entropy 2014, 16(5), 2820–2838; doi: 10.3390/e16052820; published 2014-05-21. Authors: Alexander Gogolev, Lucio Marcenaro.
<![CDATA[Entropy, Vol. 16, Pages 2789-2819: Changing the Environment Based on Empowerment as Intrinsic Motivation]]>
http://www.mdpi.com/1099-4300/16/5/2789
One aspect of intelligence is the ability to restructure your own environment so that the world you live in becomes more beneficial to you. In this paper we investigate how the information-theoretic measure of agent empowerment can provide a task-independent, intrinsic motivation to restructure the world. We show how changes in embodiment and in the environment change the resulting behaviour of the agent and the artefacts left in the world. For this purpose, we introduce an approximation of the established empowerment formalism based on sparse sampling, which is simpler and significantly faster to compute for deterministic dynamics. Sparse sampling also introduces a degree of randomness into the decision-making process, which turns out to be beneficial in some cases. We then utilize the measure to generate agent behaviour for different agent embodiments in a Minecraft-inspired three-dimensional block world. The paradigmatic results demonstrate that empowerment can be used as a suitable generic intrinsic motivation not only to generate actions in given static environments, as shown in the past, but also to modify existing environmental conditions. In doing so, the emerging strategies to modify an agent’s environment turn out to be meaningful to the specific agent capabilities, i.e., de facto to its embodiment. Entropy 2014, 16(5), 2789–2819; doi: 10.3390/e16052789; published 2014-05-21. Authors: Christoph Salge, Cornelius Glackin, Daniel Polani.
<![CDATA[Entropy, Vol. 16, Pages 2768-2788: Market Efficiency, Roughness and Long Memory in PSI20 Index Returns: Wavelet and Entropy Analysis]]>
http://www.mdpi.com/1099-4300/16/5/2768
In this study, features of the financial returns of the PSI20 index, related to market efficiency, are captured using wavelet- and entropy-based techniques. This characterization includes the following points. First, the detection of long memory, associated with low frequencies, and a global measure of the time series: the Hurst exponent, estimated by several methods, including wavelets. Second, the degree of roughness, or regularity variation, associated with the Hölder exponent, fractal dimension and estimation based on the multifractal spectrum. Finally, the degree of the unpredictability of the series, estimated by approximate entropy. These aspects may also be studied through the concepts of non-extensive entropy and distribution using, for instance, the Tsallis q-triplet. They allow one to study the existence of efficiency in the financial market. On the other hand, the study of local roughness is performed by considering wavelet leader-based entropy. In fact, the wavelet coefficients are computed from a multiresolution analysis, and the wavelet leaders are defined by the local suprema of these coefficients, near the point that we are considering. The resulting entropy is more accurate in that detection than the Hölder exponent. These procedures enhance the capacity to identify the occurrence of financial crashes. Entropy 2014, 16(5), 2768–2788; doi: 10.3390/e16052768; published 2014-05-19. Authors: Rui Pascoal, Ana Monteiro.
<![CDATA[Entropy, Vol. 16, Pages 2756-2767: Long-Range Atomic Order and Entropy Change at the Martensitic Transformation in a Ni-Mn-In-Co Metamagnetic Shape Memory Alloy]]>
http://www.mdpi.com/1099-4300/16/5/2756
The influence of the atomic order on the martensitic transformation entropy change has been studied in a Ni-Mn-In-Co metamagnetic shape memory alloy through the evolution of the transformation temperatures under high-temperature quenching and post-quench annealing thermal treatments. It is confirmed that the entropy change evolves as a consequence of the variations in the degree of L21 atomic order brought about by thermal treatments, though, contrary to what occurs in ternary Ni-Mn-In, post-quench aging appears to be the most effective way to modify the transformation entropy in Ni-Mn-In-Co. It is also shown that any entropy change value between around 40 and 5 J/kgK can be achieved in a controllable way for a single alloy under the appropriate aging treatment, thus bringing out the possibility of properly tuning the magnetocaloric effect. Entropy 2014, 16(5), 2756–2767; doi: 10.3390/e16052756; published 2014-05-19. Authors: Vicente Sánchez-Alarcos, Vicente Recarte, José Pérez-Landazábal, Eduard Cesari, José Rodríguez-Velamazán.
<![CDATA[Entropy, Vol. 16, Pages 2729-2755: Transitional Intermittency Exponents Through Deterministic Boundary-Layer Structures and Empirical Entropic Indices]]>
http://www.mdpi.com/1099-4300/16/5/2729
A computational procedure is developed to determine initial instabilities within a three-dimensional laminar boundary layer and to follow these instabilities in the streamwise direction through to the resulting intermittency exponents within a fully developed turbulent flow. The fluctuating velocity wave vector component equations are arranged into a Lorenz-type system of equations. The nonlinear time series solution of these equations at the fifth station downstream of the initial instabilities indicates a sequential outward burst process, while the results for the eleventh station predict a strong sequential inward sweep process. The results for the thirteenth station indicate a return to the original instability autogeneration process. The nonlinear time series solutions indicate regions of order and disorder within the solutions. Empirical entropies are defined from decomposition modes obtained from singular value decomposition techniques applied to the nonlinear time series solutions. Empirical entropic indices are obtained from the empirical entropies for two streamwise stations. The intermittency exponents are then obtained from the entropic indices for these streamwise stations that indicate the burst and autogeneration processes. Entropy 2014, 16(5), 2729–2755; doi: 10.3390/e16052729; published 2014-05-16. Author: LaVar Isaacson.
<![CDATA[Entropy, Vol. 16, Pages 2713-2728: Non-Extensive Entropy Econometrics: New Statistical Features of Constant Elasticity of Substitution-Related Models]]>
http://www.mdpi.com/1099-4300/16/5/2713
Power-law (PL) formalism is known to provide an appropriate framework for canonical modeling of nonlinear systems. We estimated three stochastically distinct models of constant elasticity of substitution (CES) class functions as a non-linear inverse problem and showed that these PL-related functions should have a closed form. The first model is related to an aggregator production function, the second to an aggregator utility function (the Armington) and the third to an aggregator technical transformation function. A q-generalization of the K-L information divergence criterion function with a priori consistency constraints is proposed. Related inferential statistical indices are computed. The approach leads to robust estimation and to new findings about the true stochastic nature of this class of nonlinear, until now analytically intractable, functions. Outputs from traditional econometric techniques (Shannon entropy, NLLS, GMM, ML) are also presented. Entropy 2014, 16(5), 2713–2728; doi: 10.3390/e16052713; published 2014-05-16. Author: Second Bwanakare.
<![CDATA[Entropy, Vol. 16, Pages 2699-2712: Action-Amplitude Approach to Controlled Entropic Self-Organization]]>
http://www.mdpi.com/1099-4300/16/5/2699
Motivated by the notion of perceptual error, as a core concept of the perceptual control theory, we propose an action-amplitude model for controlled entropic self-organization (CESO). We present several aspects of this development that illustrate its explanatory power: (i) a physical view of partition functions and path integrals, as well as entropy and phase transitions; (ii) a global view of functional compositions and commutative diagrams; (iii) a local geometric view of the Kähler–Ricci flow and time-evolution of entropic action; and (iv) a computational view using various path-integral approximations. Entropy 2014, 16(5), 2699-2712; Article; doi: 10.3390/e16052699; published 2014-05-14. Authors: Vladimir Ivancevic, Darryn Reid, Jason Scholz.<![CDATA[Entropy, Vol. 16, Pages 2686-2698: Model Selection Criteria Using Divergences]]>
http://www.mdpi.com/1099-4300/16/5/2686
In this note we introduce some divergence-based model selection criteria. These criteria are defined by estimators of the expected overall discrepancy between the true unknown model and the candidate model, using dual representations of divergences and associated minimum divergence estimators. It is shown that the proposed criteria are asymptotically unbiased. The influence functions of these criteria are also derived and some comments on robustness are provided. Entropy 2014, 16(5), 2686-2698; Article; doi: 10.3390/e16052686; published 2014-05-14. Author: Aida Toma.<![CDATA[Entropy, Vol. 16, Pages 2669-2685: Equivalent Temperature-Enthalpy Diagram for the Study of Ejector Refrigeration Systems]]>
http://www.mdpi.com/1099-4300/16/5/2669
The Carnot factor versus enthalpy variation (heat) diagram has been used extensively for the second law analysis of heat transfer processes. With enthalpy variation (heat) as the abscissa and the Carnot factor as the ordinate, the area between the curves representing the heat exchanging media on this diagram illustrates the exergy losses due to the transfer. It is also possible to draw the paths of working fluids in steady-state, steady-flow thermodynamic cycles on this diagram using the definition of “the equivalent temperature” as the ratio between the variations of enthalpy and entropy in an analyzed process. Despite the usefulness of this approach, two important shortcomings should be emphasized. First, the approach is not applicable to the processes of expansion and compression, particularly the isenthalpic processes taking place in expansion valves. Second, from the point of view of rigorous thermodynamics, the proposed ratio has the dimension of temperature for isobaric processes only. The present paper proposes to overcome these shortcomings by replacing the actual processes of expansion and compression by combinations of two thermodynamic paths: isentropic and isobaric. As a result, actual (not ideal) refrigeration and power cycles can be presented on equivalent temperature versus enthalpy variation diagrams. All the exergy losses taking place in different equipment such as pumps, turbines, compressors, expansion valves, condensers and evaporators are then clearly visualized. Moreover, the exergies consumed and produced in each component of these cycles are also presented. The latter give the opportunity to also analyze the exergy efficiencies of the components. The proposed diagram is finally applied to the second law analysis of an ejector-based refrigeration system. Entropy 2014, 16(5), 2669-2685; Article; doi: 10.3390/e16052669; published 2014-05-14. Authors: Mohammed Khennich, Mikhail Sorin, Nicolas Galanis.<![CDATA[Entropy, Vol. 16, Pages 2642-2668: The Impact of the Prior Density on a Minimum Relative Entropy Density: A Case Study with SPX Option Data]]>
http://www.mdpi.com/1099-4300/16/5/2642
We study the problem of finding probability densities that match given European call option prices. To allow prior information about such a density to be taken into account, we generalise the algorithm presented in Neri and Schneider (Appl. Math. Finance 2013) to find the maximum entropy density of an asset price to the relative entropy case. This is applied to study the impact of the choice of prior density in two market scenarios. In the first scenario, call option prices are prescribed at only a small number of strikes, and we see that the choice of prior, or indeed its omission, yields notably different densities. The second scenario is given by CBOE option price data for S&amp;P500 index options at a large number of strikes. Prior information is now considered to be given by calibrated Heston, Schöbel–Zhu or Variance Gamma models. We find that the resulting digital option prices are essentially the same as those given by the (non-relative) Buchen–Kelly density itself. In other words, in a sufficiently liquid market, the influence of the prior density seems to vanish almost completely. Finally, we study variance swaps and derive a simple formula relating the fair variance swap rate to entropy. Then we show, again, that the prior loses its influence on the fair variance swap rate as the number of strikes increases. Entropy 2014, 16(5), 2642-2668; Article; doi: 10.3390/e16052642; published 2014-05-14. Authors: Cassio Neri, Lorenz Schneider.<![CDATA[Entropy, Vol. 16, Pages 2629-2641: Using Neighbor Diversity to Detect Fraudsters in Online Auctions]]>
http://www.mdpi.com/1099-4300/16/5/2629
Online auctions attract not only legitimate businesses trying to sell their products but also fraudsters wishing to commit fraudulent transactions. Consequently, fraudster detection is crucial to ensure the continued success of online auctions. This paper proposes an approach to detect fraudsters based on the concept of neighbor diversity. The neighbor diversity of an auction account quantifies the diversity of all traders that have transactions with this account. Based on four different features of each trader (i.e., the number of received ratings, the number of cancelled transactions, k-core, and the joined date), four measurements of neighbor diversity are proposed to discern fraudsters from legitimate traders. An experiment is conducted using data gathered from a real world auction website. The results show that, although the use of neighbor diversity on k-core or on the joined date shows little or no improvement in detecting fraudsters, both the neighbor diversity on the number of received ratings and the neighbor diversity on the number of cancelled transactions improve classification accuracy, compared to the state-of-the-art methods that use k-core and center weight. Entropy 2014, 16(5), 2629-2641; Article; doi: 10.3390/e16052629; published 2014-05-14. Authors: Jun-Lin Lin, Laksamee Khomnotai.<![CDATA[Entropy, Vol. 16, Pages 2611-2628: Scale-Invariant Divergences for Density Functions]]>
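As an illustration of the idea (not the paper's exact measurement), one natural way to quantify the neighbor diversity of an account is to bin a neighbor feature, such as the number of received ratings, and take the Shannon entropy of the bin occupancy; the bin edges below are hypothetical.

```python
import math
from collections import Counter

def neighbor_diversity(feature_values, bins=(0, 10, 100, 1000)):
    """Shannon entropy (bits) of binned neighbor feature values.

    feature_values: one value per neighboring trader, e.g. each
    neighbor's number of received ratings. Bin edges are illustrative.
    """
    def bin_of(v):
        # index of the bin the value falls into
        return sum(v >= b for b in bins)

    counts = Counter(bin_of(v) for v in feature_values)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Accounts whose neighbors all look alike (e.g., many throwaway accounts with few ratings) get low diversity, while accounts trading with a broad mix of partners get high diversity.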
http://www.mdpi.com/1099-4300/16/5/2611
Divergence is a discrepancy measure between two objects, such as functions, vectors, matrices, and so forth. In particular, divergences defined on probability distributions are widely employed in probabilistic forecasting. As a dissimilarity measure, a divergence should satisfy some conditions. In this paper, we consider two conditions: the first is the scale-invariance property and the second is that the divergence is approximated by the sample mean of a loss function. The first requirement is an important feature for dissimilarity measures, since in general a divergence may depend on which system of measurements is used to measure the objects; a scale-invariant divergence is transformed in a consistent way when the system of measurements is changed to another one. The second requirement is formalized such that the divergence is expressed by using the so-called composite score. We study the relation between composite scores and scale-invariant divergences, and we propose a new class of divergences, called Hölder divergence, that satisfies the two conditions above. We present some theoretical properties of Hölder divergence and show that it unifies existing divergences from the viewpoint of scale-invariance. Entropy 2014, 16(5), 2611-2628; Article; doi: 10.3390/e16052611; published 2014-05-13. Author: Takafumi Kanamori.<![CDATA[Entropy, Vol. 16, Pages 2592-2610: Guided Self-Organization in a Dynamic Embodied System Based on Attractor Selection Mechanism]]>
http://www.mdpi.com/1099-4300/16/5/2592
Guided self-organization can be regarded as a paradigm proposed to understand how to guide a self-organizing system towards desirable behaviors, while maintaining its non-deterministic dynamics with emergent features. It is, however, not a trivial problem to guide the self-organizing behavior of physically embodied systems like robots, as the behavioral dynamics are results of interactions among their controller, mechanical dynamics of the body, and the environment. This paper presents a guided self-organization approach for dynamic robots based on a coupling between the system mechanical dynamics with an internal control structure known as the attractor selection mechanism. The mechanism enables the robot to gracefully shift between random and deterministic behaviors, represented by a number of attractors, depending on internally generated stochastic perturbation and sensory input. The robot used in this paper is a simulated curved beam hopping robot: a system with a variety of mechanical dynamics which depends on its actuation frequencies. Despite the simplicity of the approach, it will be shown how the approach regulates the probability of the robot to reach a goal through the interplay among the sensory input, the level of inherent stochastic perturbation, i.e., noise, and the mechanical dynamics. Entropy 2014, 16(5), 2592-2610; Article; doi: 10.3390/e16052592; published 2014-05-13. Authors: Surya Nurzaman, Xiaoxiang Yu, Yongjae Kim, Fumiya Iida.<![CDATA[Entropy, Vol. 16, Pages 2568-2591: A Relevancy, Hierarchical and Contextual Maximum Entropy Framework for a Data-Driven 3D Scene Generation]]>
http://www.mdpi.com/1099-4300/16/5/2568
We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects’ relevancy, hierarchical and contextual constraints in a unified model. This model is formulated by a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects’ existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations) and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated. Entropy 2014, 16(5), 2568-2591; Article; doi: 10.3390/e16052568; published 2014-05-09. Authors: Mesfin Dema, Hamed Sari-Sarraf.<![CDATA[Entropy, Vol. 16, Pages 2549-2567: Exergy Analysis of Flat Plate Solar Collectors]]>
http://www.mdpi.com/1099-4300/16/5/2549
This study proposes the concept of the local heat loss coefficient and examines the calculation method for the average heat loss coefficient and the average absorber plate temperature. It also presents an exergy analysis model of flat plate collectors, considering non-uniformity in temperature distribution along the absorber plate. The computation results agree well with experimental data. The effects of ambient temperature, solar irradiance, fluid inlet temperature, and fluid mass flow rate on useful heat rate, useful exergy rate, and exergy loss rate are examined. An optimal fluid inlet temperature exists for obtaining the maximum useful exergy rate. The calculated optimal fluid inlet temperature is 69 °C, and the maximum useful exergy rate is 101.6 W. Exergy rate distribution is analyzed when ambient temperature, solar irradiance, fluid mass flow rate, and fluid inlet temperature are set to 20 °C, 800 W/m2, 0.05 kg/s, and 50 °C, respectively. The exergy efficiency is 5.96%, and the largest exergy loss is caused by the temperature difference between the absorber plate surface and the sun, accounting for 72.86% of the total exergy rate. Entropy 2014, 16(5), 2549-2567; Article; doi: 10.3390/e16052549; published 2014-05-09. Authors: Zhong Ge, Huitao Wang, Hua Wang, Songyuan Zhang, Xin Guan.<![CDATA[Entropy, Vol. 16, Pages 2530-2548: Measuring Instantaneous and Spectral Information Entropies by Shannon Entropy of Choi-Williams Distribution in the Context of Electroencephalography]]>
http://www.mdpi.com/1099-4300/16/5/2530
The theory of Shannon entropy was applied to the Choi-Williams time-frequency distribution (CWD) of time series in order to extract entropy information in both time and frequency domains. In this way, four novel indexes were defined: (1) partial instantaneous entropy, calculated as the entropy of the CWD with respect to time by using the probability mass function at each time instant taken independently; (2) partial spectral information entropy, calculated as the entropy of the CWD with respect to frequency by using the probability mass function of each frequency value taken independently; (3) complete instantaneous entropy, calculated as the entropy of the CWD with respect to time by using the probability mass function of the entire CWD; (4) complete spectral information entropy, calculated as the entropy of the CWD with respect to frequency by using the probability mass function of the entire CWD. These indexes were tested on synthetic time series with different behavior (periodic, chaotic and random) and on a dataset of electroencephalographic (EEG) signals recorded in different states (eyes-open, eyes-closed, ictal and non-ictal activity). The results have shown that the values of these indexes tend to decrease, in different proportions, when the behavior of the synthetic signals evolved from chaos or randomness to periodicity. Statistical differences (p-value &lt; 0.0005) were found between values of these measures comparing eyes-open and eyes-closed states and between ictal and non-ictal states in the traditional EEG frequency bands. Finally, this paper has demonstrated that the proposed measures can be useful tools to quantify the different periodic, chaotic and random components in EEG signals. Entropy 2014, 16(5), 2530-2548; Article; doi: 10.3390/e16052530; published 2014-05-09. Authors: Umberto Melia, Francesc Claria, Montserrat Vallverdu, Pere Caminal.<![CDATA[Entropy, Vol. 16, Pages 2512-2529: Three Methods for Estimating the Entropy Parameter M Based on a Decreasing Number of Velocity Measurements in a River Cross-Section]]>
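A minimal sketch of the spectral-information-entropy idea described above: given any non-negative time-frequency distribution (a plain spectrogram can stand in for the Choi-Williams distribution), normalize it to a probability mass function and take the Shannon entropy of its frequency marginal. This is only one plausible reading of the indexes defined in the abstract, not the authors' exact implementation.

```python
import numpy as np

def spectral_information_entropy(tfd):
    # tfd: non-negative array of shape (n_freqs, n_times), e.g. a
    # spectrogram standing in for the Choi-Williams distribution.
    p = np.abs(np.asarray(tfd, float))
    p = p / p.sum()            # pmf over the whole time-frequency plane
    p_f = p.sum(axis=1)        # frequency marginal
    p_f = p_f[p_f > 0]         # drop empty bins before taking logs
    return float(-(p_f * np.log2(p_f)).sum())
```

A pure tone concentrates all mass at one frequency and yields zero entropy, while broadband noise spreads mass over all frequencies and yields the maximum, consistent with the reported decrease of the indexes as signals become periodic.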
http://www.mdpi.com/1099-4300/16/5/2512
This paper illustrates the theoretical development and practical application of three new methods for estimating the entropy parameter M used within the framework of the entropy method proposed by Chiu in the 1980s as a valid alternative to the velocity-area method for measuring river discharge. The first method is based on reproducing the cumulative velocity distribution function associated with a flood event and requires measurements over the entire cross-section, whereas, in the second and third methods, the estimate of M is based on reproducing the cross-sectional mean velocity by following two different procedures. Both of them rely on the entropy parameter M alone and look for the value of M that brings two different estimates of the cross-sectional mean velocity, obtained by using two different M-dependent approaches, as close as possible. From an operational viewpoint, the acquisition of velocity data becomes increasingly simple going from the first to the third approach, which uses only one surface velocity measurement. The proposed procedures are applied in a case study based on the Ponte Nuovo hydrometric station on the Tiber River in central Italy. Entropy 2014, 16(5), 2512-2529; Article; doi: 10.3390/e16052512; published 2014-05-09. Authors: Giulia Farina, Stefano Alvisi, Marco Franchini, Tommaso Moramarco.<![CDATA[Entropy, Vol. 16, Pages 2488-2511: Recent Theoretical Approaches to Minimal Artificial Cells]]>
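A key ingredient of Chiu's entropy method is the one-to-one relation between the ratio of cross-sectional mean to maximum velocity and M: u_mean/u_max = exp(M)/(exp(M) - 1) - 1/M. The sketch below inverts this relation numerically; the bracketing interval and tolerance are assumptions, and this is not the paper's estimation procedure itself.

```python
import math

def phi(M):
    # Chiu's relation between cross-sectional mean and maximum velocity:
    # u_mean / u_max = exp(M) / (exp(M) - 1) - 1 / M
    return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M

def estimate_M(velocity_ratio, lo=1e-6, hi=50.0, tol=1e-10):
    # phi increases monotonically from 1/2 (M -> 0) towards 1 (M -> inf),
    # so a simple bisection recovers M from an observed mean/max ratio.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < velocity_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, a measured mean/max velocity ratio around 0.66 corresponds to M close to 2.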
http://www.mdpi.com/1099-4300/16/5/2488
Minimal artificial cells (MACs) are self-assembled chemical systems able to mimic the behavior of living cells at a minimal level, i.e., to exhibit self-maintenance, self-reproduction and the capability of evolution. The bottom-up approach to the construction of MACs is mainly based on the encapsulation of chemical reacting systems inside lipid vesicles, i.e., chemical systems enclosed (compartmentalized) by a double-layered lipid membrane. Several researchers are currently interested in synthesizing such simple cellular models for biotechnological purposes or for investigating origin of life scenarios. Within this context, the properties of lipid vesicles (e.g., their stability, permeability, growth dynamics, potential to host reactions or undergo division processes…) play a central role, in combination with the dynamics of the encapsulated chemical or biochemical networks. Thus, from a theoretical standpoint, it is very important to develop kinetic equations in order to explore first—and specify later—the conditions that allow the robust implementation of these complex chemically reacting systems, as well as their controlled reproduction. Because they are compartmentalized in small volumes, the populations of reacting molecules can be very low in number, and their behavior therefore becomes highly affected by stochastic effects, both in the time course of reactions and in occupancy distribution among the vesicle population. In this short review, we report our mathematical approaches to modeling artificial cell systems in this complex scenario by summarizing three recent simulation studies on the topic of primitive cell (protocell) systems. Entropy 2014, 16(5), 2488-2511; Review; doi: 10.3390/e16052488; published 2014-05-08. Authors: Fabio Mavelli, Emiliano Altamura, Luigi Cassidei, Pasquale Stano.<![CDATA[Entropy, Vol. 16, Pages 2472-2487: F-Geometry and Amari’s α-Geometry on a Statistical Manifold]]>
http://www.mdpi.com/1099-4300/16/5/2472
In this paper, we introduce a geometry called F-geometry on a statistical manifold S using an embedding F of S into the space RX of random variables. Amari’s α-geometry is a special case of F-geometry. Then, using the embedding F and a positive smooth function G, we introduce the (F,G)-metric and (F,G)-connections that enable one to consider weighted Fisher information metrics and weighted connections. The necessary and sufficient condition for two (F,G)-connections to be dual with respect to the (F,G)-metric is obtained. Then we show that Amari’s 0-connection is the only self-dual F-connection with respect to the Fisher information metric. Invariance properties of the geometric structures are discussed, and it is proved that Amari’s α-connections are the only F-connections that are invariant under smooth one-to-one transformations of the random variables. Entropy 2014, 16(5), 2472-2487; Article; doi: 10.3390/e16052472; published 2014-05-06. Authors: Harsha V., Subrahamanian S.<![CDATA[Entropy, Vol. 16, Pages 2454-2471: Computational Information Geometry in Statistics: Theory and Practice]]>
http://www.mdpi.com/1099-4300/16/5/2454
A broad view of the nature and potential of computational information geometry in statistics is offered. This new area suitably extends the manifold-based approach of classical information geometry to a simplicial setting, in order to obtain an operational universal model space. Additional underlying theory and illustrative real examples are presented. In the infinite-dimensional case, challenges inherent in this ambitious overall agenda are highlighted and promising new methodologies indicated. Entropy 2014, 16(5), 2454-2471; Article; doi: 10.3390/e16052454; published 2014-05-02. Authors: Frank Critchley, Paul Marriott.<![CDATA[Entropy, Vol. 16, Pages 2433-2453: Optimization of Biomass-Fuelled Combined Cooling, Heating and Power (CCHP) Systems Integrated with Subcritical or Transcritical Organic Rankine Cycles (ORCs)]]>
http://www.mdpi.com/1099-4300/16/5/2433
This work is focused on the thermodynamic optimization of Organic Rankine Cycles (ORCs), coupled with absorption or adsorption cooling units, for combined cooling heating and power (CCHP) generation from biomass combustion. Results were obtained by modelling with the main aim of providing optimization guidelines for the operating conditions of these types of systems, specifically the subcritical or transcritical ORC, when integrated in a CCHP system to supply typical heating and cooling demands in the tertiary sector. The thermodynamic approach was complemented, to avoid its possible limitations, by the technological constraints of the expander, the heat exchangers and the pump of the ORC. The working fluids considered are: n-pentane, n-heptane, octamethyltrisiloxane, toluene and dodecamethylcyclohexasiloxane. In addition, the energy and environmental performance of the different optimal CCHP plants was investigated. The optimal plant from the energy and environmental point of view is the one integrated by a toluene recuperative ORC, although it is limited to a development with a turbine type expander. Also, the trigeneration plant could be developed in an energy and environmental efficient way with an n-pentane recuperative ORC and a volumetric type expander. Entropy 2014, 16(5), 2433-2453; Article; doi: 10.3390/e16052433; published 2014-04-30. Authors: Daniel Maraver, Sylvain Quoilin, Javier Royo.<![CDATA[Entropy, Vol. 16, Pages 2408-2432: General H-theorem and Entropies that Violate the Second Law]]>
http://www.mdpi.com/1099-4300/16/5/2408
The H-theorem states that the entropy production is nonnegative and, therefore, that the entropy of a closed system should change monotonically in time. In information processing, the entropy production is positive for random transformation of signals (the information processing lemma). Originally, the H-theorem and the information processing lemma were proved for the classical Boltzmann-Gibbs-Shannon entropy and for the corresponding divergence (the relative entropy). Many new entropies and divergences have been proposed during the last decades, and the H-theorem is needed for all of them. This note proposes a simple and general criterion to check whether the H-theorem is valid for a convex divergence H and demonstrates that some of the popular divergences obey no H-theorem. We consider systems with n states Ai that obey first-order kinetics (master equation). A convex function H is a Lyapunov function for all master equations with given equilibrium if and only if its conditional minima properly describe the equilibria of pair transitions Ai ⇌ Aj. This theorem does not depend on the principle of detailed balance and is valid for general Markov kinetics. Elementary analysis of pair equilibria demonstrates that popular Bregman divergences like the Euclidean distance or the Itakura-Saito distance in the space of distributions cannot be universal Lyapunov functions for first-order kinetics and can increase in Markov processes. Therefore, they violate the second law and the information processing lemma. In particular, for these measures of information (divergences), random manipulation of data may add information to the data. The main results are extended to nonlinear generalized mass action law kinetic equations. Entropy 2014, 16(5), 2408-2432; Article; doi: 10.3390/e16052408; published 2014-04-29. Author: Alexander Gorban.<![CDATA[Entropy, Vol. 16, Pages 2384-2407: Measuring the Complexity of Self-Organizing Traffic Lights]]>
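The classical statement for the relative entropy is easy to check numerically in its discrete-time analogue: for any stochastic transition matrix, with or without detailed balance, the K-L divergence from the current distribution to the stationary one never increases. The chain below is an arbitrary illustrative example, not one from the paper.

```python
import numpy as np

def kl(p, q):
    # relative entropy (K-L divergence) of p from q
    return float(np.sum(p * np.log(p / q)))

# Column-stochastic transition matrix P[i, j] = Pr(j -> i);
# this example does NOT satisfy detailed balance.
P = np.array([[0.8, 0.3, 0.1],
              [0.1, 0.6, 0.3],
              [0.1, 0.1, 0.6]])

# Stationary distribution: the eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Iterate the chain and record the divergence to equilibrium.
p = np.array([0.98, 0.01, 0.01])
divergences = []
for _ in range(50):
    divergences.append(kl(p, pi))
    p = P @ p
```

The recorded sequence is non-increasing, which is exactly the Lyapunov-function property the paper shows can fail for some other divergences.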
http://www.mdpi.com/1099-4300/16/5/2384
We apply measures of complexity, emergence, and self-organization to an urban traffic model for comparing a traditional traffic-light coordination method with a self-organizing method in two scenarios: cyclic boundaries and non-orientable boundaries. We show that the measures are useful to identify and characterize different dynamical phases. It becomes clear that different operation regimes are required for different traffic demands. Thus, not only is traffic a non-stationary problem, requiring controllers to adapt constantly; controllers must also drastically change the complexity of their behavior depending on the demand. Based on our measures and extending Ashby’s law of requisite variety, we can say that the self-organizing method achieves an adaptability level comparable to that of a living system. Entropy 2014, 16(5), 2384-2407; Article; doi: 10.3390/e16052384; published 2014-04-25. Authors: Darío Zubillaga, Geovany Cruz, Luis Aguilar, Jorge Zapotécatl, Nelson Fernández, José Aguilar, David Rosenblueth, Carlos Gershenson.<![CDATA[Entropy, Vol. 16, Pages 2362-2383: Gyarmati’s Variational Principle of Dissipative Processes]]>
http://www.mdpi.com/1099-4300/16/4/2362
As in mechanics and electrodynamics, the fundamental laws of the thermodynamics of dissipative processes can be compressed into Gyarmati’s variational principle. This variational principle, in both its differential (local) and integral (global) forms, was formulated by Gyarmati in 1965. The consistent application of both the local and the global forms of Gyarmati’s principle provides, in explicating the theory of irreversible thermodynamics, all the advantages that the corresponding classical variational principles, e.g., Gauss’ differential principle of least constraint or Hamilton’s integral principle, provide in the study of mechanics and electrodynamics. Entropy 2014, 16(4), 2362-2383; Article; doi: 10.3390/e16042362; published 2014-04-24. Author: József Verhás.<![CDATA[Entropy, Vol. 16, Pages 2350-2361: Fractional Order Generalized Information]]>
http://www.mdpi.com/1099-4300/16/4/2350
This paper formulates a novel expression for entropy inspired by the properties of fractional calculus. The characteristics of the generalized fractional entropy are tested on both standard probability distributions and real-world data series. The results reveal that tuning the fractional order allows a high sensitivity to the signal evolution, which is useful in describing the dynamics of complex systems. The concepts are also extended to relative distances and tested with several sets of data, confirming the goodness of the generalization. Entropy 2014, 16(4), 2350-2361; Article; doi: 10.3390/e16042350; published 2014-04-24. Author: José Machado.<![CDATA[Entropy, Vol. 16, Pages 2309-2349: Measures of Causality in Complex Datasets with Application to Financial Data]]>
http://www.mdpi.com/1099-4300/16/4/2309
This article investigates the causality structure of financial time series. We concentrate on three main approaches to measuring causality: linear Granger causality, kernel generalisations of Granger causality (based on ridge regression and the Hilbert–Schmidt norm of the cross-covariance operator) and transfer entropy, examining each method and comparing their theoretical properties, with special attention given to the ability to capture nonlinear causality. We also present the theoretical benefits of applying non-symmetrical measures rather than symmetrical measures of dependence. We apply the measures to a range of simulated and real data. The simulated data sets were generated with linear and several types of nonlinear dependence, using bivariate, as well as multivariate settings. An application to real-world financial data highlights the practical difficulties, as well as the potential of the methods. We use two real data sets: (1) U.S. inflation and one-month Libor; (2) S&amp;P data and exchange rates for the following currencies: AUDJPY, CADJPY, NZDJPY, AUDCHF, CADCHF, NZDCHF. Overall, we reach the conclusion that no single method can be recognised as the best in all circumstances, and each of the methods has its domain of best applicability. We also highlight areas for improvement and future research. Entropy 2014, 16(4), 2309-2349; Article; doi: 10.3390/e16042309; published 2014-04-24. Authors: Anna Zaremba, Tomaso Aste.<![CDATA[Entropy, Vol. 16, Pages 2291-2308: On the Clausius-Duhem Inequality and Maximum Entropy Production in a Simple Radiating System]]>
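A minimal sketch of the first of these measures, linear Granger causality, restricted to a single lag: compare the residual variance of an autoregression of y with and without the past of x. The lag order, the simulated driving process and the use of a raw log variance ratio (rather than a formal F-test) are simplifications relative to a full implementation.

```python
import numpy as np

def granger_lag1(x, y):
    """Log-ratio of residual variances of y_t ~ y_{t-1} (restricted)
    versus y_t ~ y_{t-1} + x_{t-1} (unrestricted); values well above
    zero suggest the past of x helps predict y."""
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    A = np.column_stack([np.ones_like(y1), y1])       # restricted design
    B = np.column_stack([np.ones_like(y1), y1, x1])   # unrestricted design
    res_r = yt - A @ np.linalg.lstsq(A, yt, rcond=None)[0]
    res_u = yt - B @ np.linalg.lstsq(B, yt, rcond=None)[0]
    return float(np.log(np.var(res_r) / np.var(res_u)))

# Synthetic example in which x drives y but not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
```

On this data the x-to-y statistic is large while the y-to-x statistic stays near zero, illustrating the non-symmetry of the measure that the article emphasizes.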
http://www.mdpi.com/1099-4300/16/4/2291
A black planet irradiated by a sun serves as the archetype for a simple radiating two-layer system admitting of a continuum of steady states under steadfast insolation. Steady entropy production rates may be calculated for different opacities of one of the layers, explicitly so for the radiative interactions, and indirectly for all the material irreversibilities involved in maintaining thermal uniformity in each layer. The second law of thermodynamics is laid down in two versions, one of which is the well-known Clausius-Duhem inequality, the other being a modern version known as the entropy inequality. By maximizing the material entropy production rate, a state may be selected that always fulfills the Clausius-Duhem inequality. Some formally possible steady states, while violating the latter, still obey the entropy inequality. In terms of Earth’s climate, global entropy production rates exhibit extrema for any “greenhouse effect”. However, and only insofar as the model be accepted as representative of Earth’s climate, the extrema will not be found to agree with observed (effective) temperatures assignable to both the atmosphere and surface. This notwithstanding, the overall entropy production for the present greenhouse effect on Earth is very close to the maximum entropy production rate of a uniformly warm steady state at the planet’s effective temperature. For an Earth with a weak(er) greenhouse effect the statement is no longer true. Entropy 2014, 16(4), 2291-2308; Article; doi: 10.3390/e16042291; published 2014-04-22. Author: Joachim Pelkowski.<![CDATA[Entropy, Vol. 16, Pages 2278-2290: Going Round in Circles: Landauer vs. Norton on the Thermodynamics of Computation]]>
http://www.mdpi.com/1099-4300/16/4/2278
There seems to be a consensus among physicists that there is a connection between information processing and thermodynamics. In particular, Landauer’s Principle (LP) is widely assumed as part of the foundation of information theoretic/computational reasoning in diverse areas of physics, including cosmology. It is also often appealed to in discussions about Maxwell’s demon and the status of the Second Law of Thermodynamics. However, LP has been challenged. In 2005, Norton argued that LP has not been proved. LPSG offered a new proof of LP; Norton argued that the LPSG proof is unsound, and Ladyman and Robertson defended it. However, Norton’s latest work also generalizes his critique to argue for a no-go result that he purports to be the end of the thermodynamics of computation. Here we review the dialectic as it currently stands and consider Norton’s no-go result. Entropy 2014, 16(4), 2278-2290; Article; doi: 10.3390/e16042278; published 2014-04-22. Authors: James Ladyman, Katie Robertson.<![CDATA[Entropy, Vol. 16, Pages 2244-2277: Parameter Estimation for Spatio-Temporal Maximum Entropy Distributions: Application to Neural Spike Trains]]>
http://www.mdpi.com/1099-4300/16/4/2244
We propose a numerical method to learn maximum entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows one to properly handle memory effects in spike statistics, for large-sized neural networks. Entropy 2014, 16(4), 2244-2277; Article; doi: 10.3390/e16042244; published 2014-04-22. Authors: Hassan Nasser, Bruno Cessac.<![CDATA[Entropy, Vol. 16, Pages 2234-2243: Finite-Time Chaos Suppression of Permanent Magnet Synchronous Motor Systems]]>
http://www.mdpi.com/1099-4300/16/4/2234
This paper considers the problem of chaos suppression for the Permanent Magnet Synchronous Motor (PMSM) system via finite-time control. Based on Lyapunov stability theory, a finite-time controller is developed such that the chaotic behavior of the PMSM system can be suppressed. The effectiveness and accuracy of the proposed method are demonstrated in numerical simulations.
Entropy 2014, 16(4), 2234-2243; ISSN 1099-4300; doi: 10.3390/e16042234. Article, published 2014-04-21. Author: Yi-You Hou.
<![CDATA[Entropy, Vol. 16, Pages 2223-2233: An Extended Result on the Optimal Estimation Under the Minimum Error Entropy Criterion]]>
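As context, the dimensionless smooth-air-gap PMSM model is widely reported to be chaotic around σ = 5.46, γ = 20. The sketch below (an assumed illustration, not the paper's controller) simulates that model and then applies a simple feedback u_q = −γω that cancels the destabilizing term; the Lyapunov function V = (i_d² + i_q² + ω²/σ)/2 then gives V̇ ≤ −V, so the origin becomes globally asymptotically stable. The paper's finite-time controller is a stronger design (convergence in finite time, not merely asymptotic).

```python
# Dimensionless PMSM model with RK4 integration; parameters sigma = 5.46,
# gamma = 20 are commonly cited chaotic values (assumed, not from the paper).
SIGMA, GAMMA = 5.46, 20.0

def pmsm(state, u_q=0.0):
    i_d, i_q, w = state
    return (-i_d + w * i_q,
            -i_q - w * i_d + GAMMA * w + u_q,
            SIGMA * (i_q - w))

def rk4_step(state, dt, control):
    def f(s):
        return pmsm(s, u_q=control(s))
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def simulate(x0, t_end=20.0, dt=1e-3, control=lambda s: 0.0):
    state = x0
    for _ in range(int(t_end / dt)):
        state = rk4_step(state, dt, control)
    return state

free = simulate((1.0, 1.0, 1.0))                  # uncontrolled: wanders on the attractor
held = simulate((1.0, 1.0, 1.0),
                control=lambda s: -GAMMA * s[2])  # feedback suppresses the motion
```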
http://www.mdpi.com/1099-4300/16/4/2223
The minimum error entropy (MEE) criterion has been successfully used in fields such as parameter estimation, system identification and supervised machine learning. In general, there is no explicit expression for the optimal MEE estimate unless certain constraints are imposed on the conditional distribution. A recent paper proved that if the conditional density is conditionally symmetric and unimodal (CSUM), then the optimal MEE estimate (with Shannon entropy) equals the conditional median. In this study, we extend this result to generalized MEE estimation, in which the optimality criterion is the Rényi entropy or, equivalently, the α-order information potential (IP).
Entropy 2014, 16(4), 2223-2233; ISSN 1099-4300; doi: 10.3390/e16042223. Article, published 2014-04-17. Authors: Badong Chen, Guangmin Wang, Nanning Zheng, Jose Principe.
<![CDATA[Entropy, Vol. 16, Pages 2204-2222: Co and In Doped Ni-Mn-Ga Magnetic Shape Memory Alloys: A Thorough Structural, Magnetic and Magnetocaloric Study]]>
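A tiny discrete check of the CSUM result for the Shannon case (an assumed illustration, not the paper's proof): with a conditionally symmetric and unimodal p(x | y), the estimator that picks the conditional median attains the minimum Shannon entropy of the error X − g(Y) over all estimators g. Note that H(X − g(Y)) is invariant to adding a constant to g, so the minimizer is only unique up to a shift.

```python
# Brute-force search over all estimators g: Y -> candidate values, comparing
# the Shannon entropy of the error against the conditional-median estimator.
import itertools, math

# p(y) and CSUM conditionals p(x | y), each symmetric about its median.
P_Y = {0: 0.5, 1: 0.5}
P_X_GIVEN_Y = {
    0: {-1: 0.25, 0: 0.50, 1: 0.25},   # conditional median 0
    1: {1: 0.2, 2: 0.6, 3: 0.2},       # conditional median 2
}
MEDIAN = {0: 0, 1: 2}
VALUES = [-1, 0, 1, 2, 3]              # candidate estimates g(y)

def error_entropy(g):
    """Shannon entropy (bits) of the error X - g(Y)."""
    err = {}
    for y, px in P_X_GIVEN_Y.items():
        for x, p in px.items():
            e = x - g[y]
            err[e] = err.get(e, 0.0) + P_Y[y] * p
    return -sum(p * math.log2(p) for p in err.values() if p > 0)

entropies = {g: error_entropy(dict(zip(P_Y, g)))
             for g in itertools.product(VALUES, repeat=2)}
best = min(entropies.values())
h_median = error_entropy(MEDIAN)   # matches the minimum
```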
http://www.mdpi.com/1099-4300/16/4/2204
In Ni-Mn-Ga ferromagnetic shape memory alloys, Co-doping plays a major role in determining a peculiar phase diagram: besides shifting the critical temperatures, it can change the number, order and nature of the phase transitions (e.g., from ferromagnetic to paramagnetic or from paramagnetic to ferromagnetic, on heating) and turn the giant magnetocaloric effect from direct to inverse. Here we present a thorough study of the intrinsic magnetic and structural properties, including their dependence on hydrostatic pressure, that underlie the multifunctional behavior of Co- and In-doped alloys. We study their magnetocaloric properties in depth, taking advantage of complementary calorimetric and magnetic techniques, and show that, provided a proper measurement protocol is adopted, they all converge to the same values, even in the case of first-order transitions. A simplified model for estimating the adiabatic temperature change that relies only on indirect measurements is proposed, allowing for a quick and reliable evaluation of the magnetocaloric potential of new materials starting from readily available magnetic measurements.
Entropy 2014, 16(4), 2204-2222; ISSN 1099-4300; doi: 10.3390/e16042204. Article, published 2014-04-16. Authors: Simone Fabbrici, Giacomo Porcari, Francesco Cugini, Massimo Solzi, Jiri Kamarad, Zdenek Arnold, Riccardo Cabassi, Franca Albertini.
<![CDATA[Entropy, Vol. 16, Pages 2184-2203: A Two-stage Method for Solving Multi-resident Activity Recognition in Smart Environments]]>
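The generic indirect route the abstract alludes to can be sketched as follows: from magnetization isotherms M(T, H), the Maxwell relation gives the field-induced entropy change ΔS_M(T, 0→H) = ∫₀ᴴ (∂M/∂T) dH′, and the adiabatic temperature change is then approximated by ΔT_ad ≈ −T ΔS_M / c_p. This is the standard textbook procedure, not the authors' specific model or protocol; the sigmoidal M(T, H) below is synthetic stand-in data (all parameters assumed).

```python
# Indirect magnetocaloric estimate from synthetic magnetization isotherms.
import math

M0, TC0, W, K_SHIFT = 100.0, 300.0, 5.0, 4.0   # arbitrary toy parameters

def magnetization(t, h):
    """Synthetic magnetization: sigmoid drop at a field-shifted Tc(H)."""
    tc = TC0 + K_SHIFT * h
    return M0 / (1.0 + math.exp((t - tc) / W))

def delta_s(t, h_max, n=200, dt=0.1):
    """Entropy change via the Maxwell relation, trapezoid rule over H."""
    total, dh = 0.0, h_max / n
    for i in range(n + 1):
        h = i * dh
        dmdt = (magnetization(t + dt, h) - magnetization(t - dt, h)) / (2 * dt)
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * dmdt * dh
    return total                                # toy units

def delta_t_ad(t, h_max, c_p=450.0):
    """Indirect estimate of the adiabatic temperature change."""
    return -t * delta_s(t, h_max) / c_p

ds = delta_s(300.0, 2.0)        # negative near Tc: direct magnetocaloric effect
dt_ad = delta_t_ad(300.0, 2.0)  # positive temperature rise on magnetization
```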
http://www.mdpi.com/1099-4300/16/4/2184
To recognize individual activities in multi-resident environments equipped with pervasive sensors, researchers have pointed out that finding data associations can contribute to activity recognition; previous methods either require or infer the data association when recognizing new multi-resident activities from sensor observations. However, data associations are often difficult to determine, and available approaches to multi-resident activity recognition degrade when the data association is not given or is inferred with low accuracy. This paper exploits simple knowledge of multi-resident activities by defining combined labels and their state set, and proposes a two-stage method for multi-resident activity recognition. Combined-label states are defined at the model-building phase with the help of data association, and learned at the new-activity recognition phase without it. The two stages appear in the recognition phase: the first stage learns combined-label states, and the second stage infers the individual residents' activities from them. Experiments on the multi-resident CASAS data demonstrate that our method can increase recognition accuracy by approximately 10%.
Entropy 2014, 16(4), 2184-2203; ISSN 1099-4300; doi: 10.3390/e16042184. Article, published 2014-04-15. Authors: Rong Chen, Yu Tong.
<![CDATA[Entropy, Vol. 16, Pages 2161-2183: Quantifying Unique Information]]>
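The two-stage idea can be caricatured in a few lines (a hypothetical minimal classifier, not the paper's model): stage 1 maps a sensor feature vector to a "combined label" encoding the joint activities of both residents, and stage 2 splits that combined label into per-resident activities. All labels, centroids and features below are invented for illustration.

```python
# Toy two-stage decoder: nearest-centroid combined-label prediction,
# then a lookup that splits the combined label per resident.

# Combined label -> (resident A activity, resident B activity)
SPLIT = {
    "cook|watch_tv": ("cook", "watch_tv"),
    "sleep|read": ("sleep", "read"),
    "cook|read": ("cook", "read"),
}

# Stage-1 "model": one centroid feature vector per combined label, as if
# learned with the help of data association at model-building time.
CENTROIDS = {
    "cook|watch_tv": (1.0, 0.0, 1.0),
    "sleep|read": (0.0, 1.0, 0.0),
    "cook|read": (1.0, 1.0, 0.0),
}

def stage1(features):
    """Nearest-centroid prediction of the combined label."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(CENTROIDS, key=lambda lbl: dist(CENTROIDS[lbl]))

def recognize(features):
    """Stage 2: decode the combined label into per-resident activities."""
    return SPLIT[stage1(features)]

pair = recognize((0.9, 0.1, 0.8))  # near the "cook|watch_tv" centroid
```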
http://www.mdpi.com/1099-4300/16/4/2161
We propose new measures of shared information, unique information and synergistic information that can be used to decompose the mutual information of a pair of random variables (Y, Z) with a third random variable X. Our measures are motivated by an operational idea of unique information, which suggests that shared information and unique information should depend only on the marginal distributions of the pairs (X, Y) and (X, Z). Although this invariance property has not been studied before, it is satisfied by other proposed measures of shared information. The invariance property does not uniquely determine our new measures, but it implies that the functions that we define are bounds to any other measures satisfying the same invariance property. We study properties of our measures and compare them to other candidate measures.
Entropy 2014, 16(4), 2161-2183; ISSN 1099-4300; doi: 10.3390/e16042161. Article, published 2014-04-15. Authors: Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, Jürgen Jost, Nihat Ay.
<![CDATA[Entropy, Vol. 16, Pages 2146-2160: Possible Further Evidence for the Thixotropic Phenomenon of Water]]>
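A brute-force sketch of this style of decomposition for the AND gate X = Y AND Z with Y, Z i.i.d. uniform bits: in keeping with the invariance idea above, unique information is obtained by minimizing I_Q(X;Y|Z) over all joints Q that preserve the (X,Y) and (X,Z) marginals. For the AND gate that family has a single free parameter, so a grid search suffices; by symmetry UI_Y = UI_Z here. This is an assumed illustration of the optimization, not the authors' general algorithm.

```python
# Partial-information-style decomposition for the AND gate by grid search
# over the one-parameter family of marginal-preserving joint distributions.
import math

def h(ps):
    """Shannon entropy (bits) of a probability list."""
    return -sum(p * math.log2(p) for p in ps if p > 1e-12)

def cond_mi_x_y_given_z(q):
    """I(X;Y|Z) for a joint dict q[(x, y, z)] -> prob, binary variables."""
    total = 0.0
    for z in (0, 1):
        pz = sum(q[x, y, z] for x in (0, 1) for y in (0, 1))
        if pz < 1e-12:
            continue
        joint = [[q[x, y, z] / pz for y in (0, 1)] for x in (0, 1)]
        px = [sum(row) for row in joint]
        py = [sum(joint[x][y] for x in (0, 1)) for y in (0, 1)]
        total += pz * (h(px) + h(py)
                       - h([joint[x][y] for x in (0, 1) for y in (0, 1)]))
    return total

def q_of(a):
    """Joints with the AND gate's (X,Y) and (X,Z) marginals; a in [1/4, 1/2]."""
    return {(0, 0, 0): a, (0, 0, 1): 0.5 - a,
            (0, 1, 0): 0.5 - a, (0, 1, 1): a - 0.25,
            (1, 0, 0): 0.0, (1, 0, 1): 0.0,
            (1, 1, 0): 0.0, (1, 1, 1): 0.25}

ui = min(cond_mi_x_y_given_z(q_of(0.25 + 0.25 * k / 400)) for k in range(401))
i_xy = h([0.75, 0.25]) - 0.5 * h([0.5, 0.5])  # I(X;Y) = H(X) - H(X|Y)
si = i_xy - ui                                 # shared information
ci = h([0.75, 0.25]) - si - 2 * ui             # synergy: I(X;YZ) - SI - 2*UI
```

For the AND gate this yields zero unique information: everything Y tells us about X is either shared with Z or only available synergistically with Z.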
http://www.mdpi.com/1099-4300/16/4/2146
In this work we review the literature for possible confirmation of a phenomenon that has been proposed to develop when water is left to stand undisturbed in closed vessels for some time. The phenomenon has been termed the thixotropy of water, owing to the weak gel-like behaviour that may develop spontaneously over time, in which ions and contact with hydrophilic surfaces seem to play important roles. Thixotropy is a property of certain gels and liquids that are highly viscous under normal conditions but whose viscosity diminishes during mechanical processing. We found experiments indicating water’s self-organizing properties, long-lived inhomogeneities and time-dependent changes in the spectral parameters of aqueous systems. The large-scale inhomogeneities in aqueous solutions seem to occur in a vast number of systems. Long-term spectral changes of aqueous systems were observed even after the radiation source was switched off or removed, and water has been considered an active excitable medium in which appropriate conditions for self-organization can be established. In short, the thixotropic phenomenon of water is further indicated by different experimental techniques and may be triggered by large-scale ordering of water in the vicinity of nucleating solutes and hydrophilic surfaces.
Entropy 2014, 16(4), 2146-2160; ISSN 1099-4300; doi: 10.3390/e16042146. Review, published 2014-04-14. Authors: Nada Verdel, Peter Bukovec.