Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy

Entropy, Vol. 16, Pages 2291-2308: On the Clausius-Duhem Inequality and Maximum Entropy Production in a Simple Radiating System
http://www.mdpi.com/1099-4300/16/4/2291
A black planet irradiated by a sun serves as the archetype for a simple radiating two-layer system admitting of a continuum of steady states under steadfast insolation. Steady entropy production rates may be calculated for different opacities of one of the layers, explicitly so for the radiative interactions, and indirectly for all the material irreversibilities involved in maintaining thermal uniformity in each layer. The second law of thermodynamics is laid down in two versions, one of which is the well-known Clausius-Duhem inequality, the other being a modern version known as the entropy inequality. By maximizing the material entropy production rate, a state may be selected that always fulfills the Clausius-Duhem inequality. Some formally possible steady states, while violating the latter, still obey the entropy inequality. In terms of Earth’s climate, global entropy production rates exhibit extrema for any “greenhouse effect”. However, insofar as the model is accepted as representative of Earth’s climate, the extrema do not agree with the observed (effective) temperatures assignable to the atmosphere and surface. This notwithstanding, the overall entropy production for the present greenhouse effect on Earth is very close to the maximum entropy production rate of a uniformly warm steady state at the planet’s effective temperature. For an Earth with a weak(er) greenhouse effect the statement is no longer true.

Entropy (ISSN 1099-4300) 2014, 16(4), 2291-2308; Article; doi: 10.3390/e16042291; published 2014-04-22. Author: Joachim Pelkowski.

Entropy, Vol. 16, Pages 2278-2290: Going Round in Circles: Landauer vs. Norton on the Thermodynamics of Computation
http://www.mdpi.com/1099-4300/16/4/2278
There seems to be a consensus among physicists that there is a connection between information processing and thermodynamics. In particular, Landauer’s Principle (LP) is widely assumed as part of the foundation of information-theoretic/computational reasoning in diverse areas of physics, including cosmology. It is also often appealed to in discussions about Maxwell’s demon and the status of the Second Law of Thermodynamics. However, LP has been challenged. In a 2005 paper, Norton argued that LP has not been proved. LPSG offered a new proof of LP. Norton argued that the LPSG proof is unsound, and Ladyman and Robertson defended it. However, Norton’s latest work generalizes his critique to argue for a no-go result that he purports to be the end of the thermodynamics of computation. Here we review the dialectic as it currently stands and consider Norton’s no-go result.

Entropy 2014, 16(4), 2278-2290; Article; doi: 10.3390/e16042278; published 2014-04-22. Authors: James Ladyman, Katie Robertson.

Entropy, Vol. 16, Pages 2244-2277: Parameter Estimation for Spatio-Temporal Maximum Entropy Distributions: Application to Neural Spike Trains
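Landauer’s Principle, at issue in the Ladyman and Robertson review above, asserts a concrete lower bound of k_B·T·ln 2 on the heat dissipated when one bit of information is erased at temperature T. A minimal numerical sketch of that bound, using only standard physical constants (nothing here is taken from the paper itself):

```python
import math

# Landauer bound: minimum heat dissipated to erase one bit at temperature T,
# E = k_B * T * ln(2).
K_B = 1.380649e-23  # Boltzmann constant in J/K (exact 2019 SI value)

def landauer_bound(temperature_kelvin: float) -> float:
    """Minimum erasure cost of one bit, in joules."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is about 2.87e-21 J per bit,
# far below the dissipation of any practical computing device.
print(landauer_bound(300.0))
```

The minuteness of this number is exactly why the principle is hard to test and why its foundational status is debated.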
http://www.mdpi.com/1099-4300/16/4/2244
We propose a numerical method to learn maximum entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows one to properly handle memory effects in spike statistics, for large-sized neural networks.

Entropy 2014, 16(4), 2244-2277; Article; doi: 10.3390/e16042244; published 2014-04-22. Authors: Hassan Nasser, Bruno Cessac.

Entropy, Vol. 16, Pages 2234-2243: Finite-Time Chaos Suppression of Permanent Magnet Synchronous Motor Systems
http://www.mdpi.com/1099-4300/16/4/2234
This paper considers the problem of chaos suppression for the Permanent Magnet Synchronous Motor (PMSM) system via finite-time control. Based on Lyapunov stability theory, a finite-time controller is developed such that the chaotic behavior of the PMSM system can be suppressed. The effectiveness and accuracy of the proposed method are shown in numerical simulations.

Entropy 2014, 16(4), 2234-2243; Article; doi: 10.3390/e16042234; published 2014-04-21. Author: Yi-You Hou.

Entropy, Vol. 16, Pages 2223-2233: An Extended Result on the Optimal Estimation Under the Minimum Error Entropy Criterion
http://www.mdpi.com/1099-4300/16/4/2223
The minimum error entropy (MEE) criterion has been successfully used in fields such as parameter estimation, system identification and supervised machine learning. There is in general no explicit expression for the optimal MEE estimate unless some constraints on the conditional distribution are imposed. A recent paper has proved that if the conditional density is conditionally symmetric and unimodal (CSUM), then the optimal MEE estimate (with Shannon entropy) equals the conditional median. In this study, we extend this result to generalized MEE estimation, where the optimality criterion is the Rényi entropy or, equivalently, the α-order information potential (IP).

Entropy 2014, 16(4), 2223-2233; Article; doi: 10.3390/e16042223; published 2014-04-17. Authors: Badong Chen, Guangmin Wang, Nanning Zheng, Jose Principe.

Entropy, Vol. 16, Pages 2204-2222: Co and In Doped Ni-Mn-Ga Magnetic Shape Memory Alloys: A Thorough Structural, Magnetic and Magnetocaloric Study
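The α-order information potential invoked in the MEE abstract above has a simple closed form for a discrete distribution, IP_α = Σᵢ pᵢ^α, and the Rényi entropy is H_α = log(IP_α)/(1−α). A small sketch of these generic definitions (not code from the paper):

```python
import math

def renyi_entropy(probs, alpha: float) -> float:
    """Rényi entropy of order alpha (alpha > 0, alpha != 1) of a discrete pmf, in nats."""
    # alpha-order information potential: sum of p_i^alpha over the support
    ip = sum(p ** alpha for p in probs if p > 0)
    return math.log(ip) / (1.0 - alpha)

# For a uniform pmf over n outcomes, every Rényi order gives log(n),
# matching the Shannon entropy recovered in the limit alpha -> 1.
uniform = [0.25] * 4
print(renyi_entropy(uniform, 2.0))  # ≈ log 4 ≈ 1.386
```

Maximizing or minimizing H_α is equivalent to extremizing IP_α, which is why the two criteria are interchangeable in the abstract.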
http://www.mdpi.com/1099-4300/16/4/2204
In Ni-Mn-Ga ferromagnetic shape memory alloys, Co-doping plays a major role in determining a peculiar phase diagram where, besides a change in the critical temperatures, a change of number, order and nature of phase transitions (e.g., from ferromagnetic to paramagnetic or from paramagnetic to ferromagnetic, on heating) can be obtained, together with a change in the giant magnetocaloric effect from direct to inverse. Here we present a thorough study of the intrinsic magnetic and structural properties, including their dependence on hydrostatic pressure, that are at the basis of the multifunctional behavior of Co and In-doped alloys. We study in depth their magnetocaloric properties, taking advantage of complementary calorimetric and magnetic techniques, and show that if a proper measurement protocol is adopted they all merge to the same values, even in the case of first-order transitions. A simplified model for the estimation of the adiabatic temperature change that relies only on indirect measurements is proposed, allowing for the quick and reliable evaluation of the magnetocaloric potential of new materials starting from readily available magnetic measurements.

Entropy 2014, 16(4), 2204-2222; Article; doi: 10.3390/e16042204; published 2014-04-16. Authors: Simone Fabbrici, Giacomo Porcari, Francesco Cugini, Massimo Solzi, Jiri Kamarad, Zdenek Arnold, Riccardo Cabassi, Franca Albertini.

Entropy, Vol. 16, Pages 2184-2203: A Two-stage Method for Solving Multi-resident Activity Recognition in Smart Environments
http://www.mdpi.com/1099-4300/16/4/2184
To recognize individual activities in multi-resident environments with pervasive sensors, researchers have pointed out that finding data associations can contribute to activity recognition; previous methods either require or infer the data association when recognizing new multi-resident activities from sensor observations. However, data associations are often difficult to find, and available approaches to multi-resident activity recognition degrade when the data association is not given or is inferred with low accuracy. This paper exploits simple knowledge of multi-resident activities by defining combined labels and their state set, and proposes a two-stage method for multi-resident activity recognition. Combined-label states are defined in the model-building phase with the help of data association, and are learned in the new-activity recognition phase without it. The two stages appear in the recognition phase: combined-label states are learned in the first stage, and multi-resident activities are worked out in the second. Experiments using the multi-resident CASAS data demonstrate that our method can increase the recognition accuracy by approximately 10%.

Entropy 2014, 16(4), 2184-2203; Article; doi: 10.3390/e16042184; published 2014-04-15. Authors: Rong Chen, Yu Tong.

Entropy, Vol. 16, Pages 2161-2183: Quantifying Unique Information
http://www.mdpi.com/1099-4300/16/4/2161
We propose new measures of shared information, unique information and synergistic information that can be used to decompose the mutual information of a pair of random variables (Y, Z) with a third random variable X. Our measures are motivated by an operational idea of unique information, which suggests that shared information and unique information should depend only on the marginal distributions of the pairs (X, Y) and (X, Z). This invariance property has not been studied before, but it is satisfied by other proposed measures of shared information. The invariance property does not uniquely determine our new measures, but it implies that the functions that we define are bounds to any other measures satisfying the same invariance property. We study properties of our measures and compare them to other candidate measures.

Entropy 2014, 16(4), 2161-2183; Article; doi: 10.3390/e16042161; published 2014-04-15. Authors: Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, Jürgen Jost, Nihat Ay.

Entropy, Vol. 16, Pages 2146-2160: Possible Further Evidence for the Thixotropic Phenomenon of Water
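The quantity being decomposed in the Bertschinger et al. abstract above, the mutual information I(X; (Y, Z)), can be computed directly from a joint distribution. The classic XOR example below illustrates why a decomposition is wanted: the pair (Y, Z) carries one full bit about X even though each variable alone carries none, i.e., the information is purely synergistic. This is a hedged sketch with an illustrative variable layout, not code or notation from the paper:

```python
import math
from itertools import product
from collections import defaultdict

def mutual_information(joint):
    """I(A;B) in bits from a joint pmf given as a dict {(a, b): p}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# XOR example: Y, Z are independent uniform bits, X = Y xor Z.
joint_x_yz = {((y ^ z), (y, z)): 0.25 for y, z in product([0, 1], repeat=2)}

# Marginal joint of (X, Y) alone, obtained by summing out Z.
joint_x_y = defaultdict(float)
for y, z in product([0, 1], repeat=2):
    joint_x_y[(y ^ z, y)] += 0.25

print(mutual_information(joint_x_yz))       # 1.0 bit: the pair determines X
print(mutual_information(dict(joint_x_y)))  # 0.0 bits: Y alone says nothing
```

Any sensible decomposition must therefore assign this bit entirely to synergy, with zero shared and zero unique information.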
http://www.mdpi.com/1099-4300/16/4/2146
In this work we review the literature for possible confirmation of a phenomenon that was proposed to develop when water is left to stand for some time undisturbed in closed vessels. The phenomenon has been termed thixotropy of water due to the weak gel-like behaviour which may develop spontaneously over time, where ions and contact with hydrophilic surfaces seem to play important roles. Thixotropy is a property of certain gels and liquids that under normal conditions are highly viscous, whereas during mechanical processing their viscosity diminishes. We found experiments indicating water’s self-organizing properties, long-lived inhomogeneities and time-dependent changes in the spectral parameters of aqueous systems. The large-scale inhomogeneities in aqueous solutions seem to occur in a vast number of systems. Long-term spectral changes of aqueous systems were observed even after the source of radiation was switched off or removed, and water has been considered an active excitable medium in which appropriate conditions for self-organization can be established. In short, the thixotropic phenomenon of water is further indicated by different experimental techniques and may be triggered by large-scale ordering of water in the vicinity of nucleating solutes and hydrophilic surfaces.

Entropy 2014, 16(4), 2146-2160; Review; doi: 10.3390/e16042146; published 2014-04-14. Authors: Nada Verdel, Peter Bukovec.

Entropy, Vol. 16, Pages 2131-2145: Information Geometry of Positive Measures and Positive-Definite Matrices: Decomposable Dually Flat Structure
http://www.mdpi.com/1099-4300/16/4/2131
Information geometry studies the dually flat structure of a manifold, highlighted by the generalized Pythagorean theorem. The present paper studies a class of Bregman divergences called the (ρ,τ)-divergence. A (ρ,τ)-divergence generates a dually flat structure in the manifold of positive measures, as well as in the manifold of positive-definite matrices. The class is composed of decomposable divergences, which are written as a sum of componentwise divergences. Conversely, a decomposable dually flat divergence is shown to be a (ρ,τ)-divergence. A (ρ,τ)-divergence is determined from two monotone scalar functions, ρ and τ. The class includes the KL-divergence, α-, β- and (α, β)-divergences as special cases. The transformation between an affine parameter and its dual is easily calculated in the case of a decomposable divergence. Therefore, such a divergence is useful for obtaining the center for a cluster of points, which will be applied to classification and information retrieval in vision. For the manifold of positive-definite matrices, in addition to dual flatness and decomposability, we require invariance under linear transformations, in particular under orthogonal transformations. This opens a way to define a new class of divergences, called the (ρ,τ)-structure in the manifold of positive-definite matrices.

Entropy 2014, 16(4), 2131-2145; Article; doi: 10.3390/e16042131; published 2014-04-14. Author: Shun-ichi Amari.

Entropy, Vol. 16, Pages 2105-2130: Wiretap Channel with Information Embedding on Actions
http://www.mdpi.com/1099-4300/16/4/2105
Information embedding on actions is a new channel model in which a specific decoder is used to observe the actions taken by the encoder and retrieve part of the message intended for the receiver. We revisit this model and consider a different scenario where a secrecy constraint is imposed. By adding a wiretapper to the model, we aim to send the confidential message to the receiver and keep it secret from the wiretapper as much as possible. We characterize the inner and outer bounds on the capacity-equivocation region of such a channel with noncausal (and causal) channel state information. Furthermore, the lower and upper bounds on the sum secrecy capacity are also obtained. Moreover, by eliminating the specific decoder, we get a new outer bound on the capacity-equivocation region of the wiretap channel with action-dependent states and prove that it is tighter than the existing outer bound. A binary example is presented to illustrate the tradeoff between the sum secrecy rate and the information embedding rate under the secrecy constraint. We find that the secrecy constraint and the communication requirements of information embedding have a negative impact on improving the secrecy transmission rate of the given communication link.

Entropy 2014, 16(4), 2105-2130; Article; doi: 10.3390/e16042105; published 2014-04-14. Authors: Xinxing Yin, Zhi Xue.

Entropy, Vol. 16, Pages 2085-2104: Cross Layer Interference Management in Wireless Biomedical Networks
http://www.mdpi.com/1099-4300/16/4/2085
Interference in wireless networks is a central phenomenon when multiple uncoordinated links share a common communication medium. The study of the interference channel was initiated by Shannon in 1961; since then the problem has been thoroughly elaborated at the information-theoretic level, but its characterization still remains an open issue. When multiple uncoordinated links share a common medium, the effect of interference is a crucial limiting factor for network performance. In this work, using cross-layer cooperative communication techniques, we study how to compensate for interference in the context of wireless biomedical networks, where many links transferring biomedical or other health-related data may be formed and suffer from all other interfering transmissions, in order to allow successful receptions and improve the overall network performance. We define the interference-limited communication range as the critical communication region around a receiver, with a number of surrounding interfering nodes, within which a successful communication link can be formed. Our results indicate that we can achieve more successful transmissions by adapting the transmission rate and power to the path loss exponent and to the selected mode of the underlying communication technique, allowing interference mitigation and, when possible, lower power consumption and increased achievable transmission rates.

Entropy 2014, 16(4), 2085-2104; Article; doi: 10.3390/e16042085; published 2014-04-14. Authors: Emmanouil Spanakis, Vangelis Sakkalis, Kostas Marias, Apostolos Traganitis.

Entropy, Vol. 16, Pages 2071-2084: The Entropic Potential Concept: a New Way to Look at Energy Transfer Operations
http://www.mdpi.com/1099-4300/16/4/2071
Energy transfer operations or processes are systematically analyzed with respect to the way they can be assessed. It turns out that an energy transfer should not only be characterized by the operation or process itself, but should be seen in a wider context. This context is introduced as the entropic potential of the energy that is transferred. It takes into account the overall transfer of the energy between its initial and final states, i.e., starting as pure exergy when it is a primary energy, for example, and ending as pure anergy when it has become part of the internal energy of the ambient. With this concept an energy devaluation number can be defined which has several properties with a reasonable physical background. Two examples of different complexity are given and discussed with respect to the physical meaning of the new energy devaluation number.

Entropy 2014, 16(4), 2071-2084; Article; doi: 10.3390/e16042071; published 2014-04-14. Authors: Tammo Wenterodt, Heinz Herwig.

Entropy, Vol. 16, Pages 2056-2070: Entropy and Exergy Analysis of a Heat Recovery Vapor Generator for Ammonia-Water Mixtures
http://www.mdpi.com/1099-4300/16/4/2056
Recently, power generation systems using ammonia-water binary mixtures as a working fluid have been attracting much attention for their efficient conversion of low-grade heat sources into useful energy forms. This paper presents the First and Second Law thermodynamic analysis for a heat recovery vapor generator (HRVG) of ammonia-water mixtures when the heat source is low-temperature energy in the form of sensible heat. In the analysis, key parameters such as ammonia mass concentration and pressure of the binary mixture are studied to investigate their effects on the system performance, including the effectiveness of heat transfer, entropy generation, and exergy efficiency. The results show that the ammonia concentration and the pressure of the mixture have significant effects on the system performance of the HRVG.

Entropy 2014, 16(4), 2056-2070; Article; doi: 10.3390/e16042056; published 2014-04-11. Authors: Kyoung Kim, Kyoungjin Kim, Hyung Ko.

Entropy, Vol. 16, Pages 2023-2055: Matrix Algebraic Properties of the Fisher Information Matrix of Stationary Processes
http://www.mdpi.com/1099-4300/16/4/2023
This survey presents a summary of results which are to be found in a series of papers. The subject of interest is matrix algebraic properties of the Fisher information matrix (FIM) of stationary processes. The FIM is an ingredient of the Cramér-Rao inequality, and belongs to the basics of asymptotic estimation theory in mathematical statistics. The FIM is interconnected with the Sylvester, Bezout and tensor Sylvester matrices. Through these interconnections it is shown that the FIM of scalar and multiple stationary processes fulfills the resultant matrix property. A statistical distance measure involving entries of the FIM is presented. In quantum information, a different statistical distance measure is set forth; it is related to the Fisher information, but the information about one parameter in a particular measurement procedure is considered. The FIM of scalar stationary processes is also interconnected to the solutions of appropriate Stein equations; conditions for the FIM to satisfy certain Stein equations are formulated. The presence of Vandermonde matrices is also emphasized.

Entropy 2014, 16(4), 2023-2055; Article; doi: 10.3390/e16042023; published 2014-04-08. Author: André Klein.

Entropy, Vol. 16, Pages 2001-2022: A Fuzzy Parallel Processing Scheme for Enhancing the Effectiveness of a Dynamic Just-in-time Location-aware Service System
http://www.mdpi.com/1099-4300/16/4/2001
Location-aware service systems are a hot topic in diverse research fields including mobile commerce, ambient intelligence, remote sensing and ubiquitous computing. However, the timeliness and efficiency of such systems are two issues that have rarely been emphasized. For this reason, this study tries to establish a location-aware service system in which both the timeliness and efficiency of service provision are addressed. To this end, some innovative treatments have been used in the proposed methodology. First, the uncertainty of detecting a user’s location using the global positioning system is considered by modeling the location and speed of the user with fuzzy numbers. Subsequently, a fuzzy integer-nonlinear programming model is formulated to address the problem of finding the dynamic just-in-time service location and path for the user. To help solve the problem, the maximum entropy weighting function and the basic defuzzification distribution (BADD) method are applied to defuzzify the fuzzy variables. In addition, to enhance the efficiency of solving the problem, a fuzzy parallel processing scheme is proposed for decomposing the problem into smaller pieces that can be handled by separate processing modules. A worked example illustrates the proposed methodology, and its effectiveness has been confirmed with an experiment. According to the results, using the proposed methodology the waiting time could be reduced by 60%.

Entropy 2014, 16(4), 2001-2022; Article; doi: 10.3390/e16042001; published 2014-04-08. Author: Toly Chen.

Entropy, Vol. 16, Pages 1985-2000: Intersection Information Based on Common Randomness
http://www.mdpi.com/1099-4300/16/4/1985
The introduction of the partial information decomposition generated a flurry of proposals for defining an intersection information that quantifies how much of “the same information” two or more random variables specify about a target random variable. As yet, none is wholly satisfactory. A palatable measure of intersection information would provide a principled way to quantify slippery concepts, such as synergy. Here, we introduce an intersection information measure based on the Gács-Körner common random variable that is the first to satisfy the coveted target monotonicity property. Our measure is imperfect, too, and we suggest directions for improvement.

Entropy 2014, 16(4), 1985-2000; Article; doi: 10.3390/e16041985; published 2014-04-04. Authors: Virgil Griffith, Edwin Chong, Ryan James, Christopher Ellison, James Crutchfield.

Entropy, Vol. 16, Pages 1983-1984: Retraction: Zheng, T. et al. Effect of Heat Leak and Finite Thermal Capacity on the Optimal Configuration of a Two-Heat-Reservoir Heat Engine for Another Linear Heat Transfer Law. Entropy 2003, 5, 519–530
http://www.mdpi.com/1099-4300/16/4/1983
The editors were recently made aware that a paper published in Entropy in 2003 [1] exhibited characteristics of duplication and self-plagiarism. After the matter was investigated and discussed with the authors, the authors offered to retract the paper (http://www.mdpi.com/1099-4300/5/5/519).

Entropy 2014, 16(4), 1983-1984; Retraction; doi: 10.3390/e16041983; published 2014-04-04. Author: Kevin Knuth.

Entropy, Vol. 16, Pages 1969-1982: Stochastic Dynamics of Proteins and the Action of Biological Molecular Machines
http://www.mdpi.com/1099-4300/16/4/1969
It is now well established that most if not all enzymatic proteins display a slow stochastic dynamics of transitions between a variety of conformational substates composing their native state. A hypothesis is stated that the protein conformational transition networks, just as the higher-level biological networks (the protein interaction network and the metabolic network), have evolved in a process of self-organized criticality. Here, criticality means that all three classes of networks are scale-free and, moreover, display a transition from fractal organization on a small length-scale to small-world organization on the large length-scale. Good mathematical models of such networks are stochastic critical branching trees extended by long-range shortcuts. Biological molecular machines are proteins that operate under isothermal conditions and hence are referred to as free energy transducers. They can be formally considered as enzymes that simultaneously catalyze two chemical reactions: the free energy-donating (input) reaction and the free energy-accepting (output) one. The far-from-equilibrium degree of coupling between the output and input reaction fluxes has been studied both theoretically and by means of Monte Carlo simulations on model networks. For single input and output gates the degree of coupling cannot exceed unity. Simulations of random walks on model networks involving more extended gates indicate that a degree of coupling higher than unity is realized on the above-mentioned critical branching trees extended by long-range shortcuts.

Entropy 2014, 16(4), 1969-1982; Article; doi: 10.3390/e16041969; published 2014-04-01. Authors: Michal Kurzynski, Przemyslaw Chelminiak.

Entropy, Vol. 16, Pages 1949-1968: Topological Classification of Limit Cycles of Piecewise Smooth Dynamical Systems and Its Associated Non-Standard Bifurcations
http://www.mdpi.com/1099-4300/16/4/1949
In this paper, we propose a novel strategy for the synthesis and classification of nonsmooth limit cycles and their bifurcations (named Non-Standard Bifurcations or Discontinuity Induced Bifurcations, DIBs) in n-dimensional piecewise-smooth dynamical systems, particularly continuous PWS and discontinuous PWS (or Filippov-type PWS) systems. The proposed qualitative approach explicitly includes two main aspects: multiple discontinuity boundaries (DBs) in the phase space and multiple intersections between DBs (or corner manifolds, CMs). Previous classifications of DIBs of limit cycles have been restricted to generic cases with a single DB or a single CM. We use the definition of piecewise topological equivalence in order to synthesize all possibilities of nonsmooth limit cycles. Families, groups and subgroups of cycles are defined depending on the smoothness zones and discontinuity boundaries (DBs) involved. The synthesized cycles are used to define bifurcation patterns when the system is perturbed with parametric changes. Four families of DIBs of limit cycles are defined depending on the properties of the cycles involved. Well-known and novel bifurcations can be classified using this approach.

Entropy 2014, 16(4), 1949-1968; Article; doi: 10.3390/e16041949; published 2014-04-01. Authors: John Taborda, Ivan Arango.

Entropy, Vol. 16, Pages 1931-1948: Information in Biological Systems and the Fluctuation Theorem
http://www.mdpi.com/1099-4300/16/4/1931
Some critical trends in information theory, its role in living systems and its utilization in fluctuation theory are discussed. The mutual information of thermodynamic coupling is incorporated into the generalized fluctuation theorem by using information theory and nonequilibrium thermodynamics. Thermodynamically coupled dissipative structures in living systems are capable of degrading more energy, and of processing complex information through developmental and environmental constraints. The generalized fluctuation theorem can quantify the hysteresis observed in the amount of the irreversible work in nonequilibrium regimes in the presence of information and thermodynamic coupling.

Entropy 2014, 16(4), 1931-1948; Article; doi: 10.3390/e16041931; published 2014-04-01. Author: Yaşar Demirel.

Entropy, Vol. 16, Pages 1917-1930: The Entropy Production Distribution in Non-Markovian Thermal Baths
http://www.mdpi.com/1099-4300/16/4/1917
In this work we study the distribution function for the total entropy production of a Brownian particle embedded in a non-Markovian thermal bath. The problem is studied in the overdamped approximation of the generalized Langevin equation, which accounts for a friction memory kernel characteristic of a Gaussian colored noise. The problem is studied in two physical situations: (i) when the particle in the harmonic trap is subjected to an arbitrary time-dependent driving force; and (ii) when the minimum of the harmonic trap is arbitrarily dragged out of equilibrium by an external force. By assuming a natural non-Markovian canonical distribution for the initial conditions, the distribution function for the total entropy production becomes non-Gaussian. Its characterization is then given through the first three cumulants.

Entropy 2014, 16(4), 1917-1930; Article; doi: 10.3390/e16041917; published 2014-03-28. Authors: José Jiménez-Aquino, Rosa Velasco.

Entropy, Vol. 16, Pages 1902-1916: A Dynamic Dark Information Energy Consistent with Planck Data
http://www.mdpi.com/1099-4300/16/4/1902
The 2013 cosmology results from the European Space Agency Planck spacecraft provide new limits to the dark energy equation of state parameter. Here we show that Holographic Dark Information Energy (HDIE), a dynamic dark energy model, achieves an optimal fit to the published datasets where Planck data is combined with other astrophysical measurements. HDIE uses Landauer’s principle to account for dark energy by the energy equivalent of information, or entropy, of stellar heated gas and dust. Combining Landauer’s principle with the Holographic principle yields an equation of state parameter determined solely by star formation history, effectively solving the “cosmic coincidence problem”. While HDIE mimics a cosmological constant at low red-shifts, z < 1, the small difference from a cosmological constant expected at higher red-shifts will only be resolved by the next generation of dark energy instrumentation. The HDIE model is shown to provide a viable alternative to the main cosmological constant/vacuum energy and scalar field/quintessence explanations.

Entropy 2014, 16(4), 1902-1916; Article; doi: 10.3390/e16041902; published 2014-03-27. Author: Michael Gough.

Entropy, Vol. 16, Pages 1883-1901: District Heating Mode Analysis Based on an Air-cooled Combined Heat and Power Station
http://www.mdpi.com/1099-4300/16/4/1883
As an important research subject, district heating with combined heat and power (CHP) has significant potential for energy conservation. This paper utilised a 200 MW air-cooled unit as an actual case and presented a design scheme and energy consumption analysis of three typical CHP modes: the low vacuum mode (LVM), the extraction condensing mode (ECM), and the absorbing heat pump mode (AHPM). The advantages and disadvantages of each mode (including their practical problems) were analysed, and suggestions for the best mode were proposed. The energy consumption of the three heating modes changed with the heating load: when the heating load increased, the net power of the entire system decreased to different degrees. The LVM showed the best energy-conservation performance, followed by the ECM and the AHPM. Moreover, the LVM and AHPM were able to supply larger heat loads than the ECM, which was limited by the minimum cooling flow of the low-pressure cylinder. Furthermore, to reach a more general conclusion, a similar case with an air-cooled 300 MW unit was studied, showing that the fuel consumption levels of the ECM and the AHPM changed.

Entropy 2014, 16(4), 1883-1901; Article; doi: 10.3390/e16041883; published 2014-03-26. Authors: Pei Li, Zhihua Ge, Zhiping Yang, Yuyong Chen, Yongping Yang.

Entropy, Vol. 16, Pages 1860-1882: A Study of Fractality and Long-Range Order in the Distribution of Transposable Elements in Eukaryotic Genomes Using the Scaling Properties of Block Entropy and Box-Counting
http://www.mdpi.com/1099-4300/16/4/1860
Repeats or Transposable Elements (TEs) are highly repeated sequence stretches, present in virtually all eukaryotic genomes. We explore the distribution of representative TEs from all major classes in entire chromosomes across various organisms. We employ two complementary approaches, the scaling of block entropy and box-counting. Both converge to the conclusion that well-developed fractality is typical of small genomes, while in large genomes it appears sporadically and in some cases is rudimentary. The human genome is particularly prone to develop this pattern, as TE chromosomal distributions therein are often highly clustered and inhomogeneous. Comparing with previous works, where the occurrence of power-law-like size distributions in inter-repeat distances is studied, we conclude that fractality in entire chromosomes is a more stringent (thus less often encountered) condition. We have formulated a simple evolutionary scenario for the genomic dynamics of TEs, which may account for their fractal distribution in real genomes. The observed fractality and long-range properties of TE genomic distributions have probably contributed to the formation of the “fractal globule”, a model for the confined chromatin organization of the eukaryotic nucleus proposed on the basis of experimental evidence.

Entropy 2014, 16(4), 1860-1882; Article; doi: 10.3390/e16041860; published 2014-03-26. Authors: Labrini Athanasopoulou, Diamantis Sellis, Yannis Almirantis.

Entropy, Vol. 16, Pages 1842-1859: An Entropy-Based Contagion Index and Its Sampling Properties for Landscape Analysis
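Block entropy, one of the two approaches named in the transposable-elements abstract above, is simply the Shannon entropy of the empirical distribution of length-n windows of a symbolic sequence; its scaling with n distinguishes random-like, periodic, and long-range-ordered sequences. A minimal sketch on a toy sequence (illustrative only, not the authors' analysis pipeline):

```python
import math
from collections import Counter

def block_entropy(seq: str, n: int) -> float:
    """Shannon entropy (bits) of the empirical distribution of length-n blocks
    taken from overlapping windows of seq."""
    blocks = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(blocks.values())
    return -sum((c / total) * math.log2(c / total) for c in blocks.values())

# A periodic sequence saturates immediately: only two distinct blocks exist
# at every n, so H(n) stays near 1 bit instead of growing with n.
periodic = "AB" * 500
print(block_entropy(periodic, 1))
print(block_entropy(periodic, 4))
```

For a genuinely random binary sequence H(n) would grow roughly linearly in n, and deviations from these two extremes are what signal structure such as fractal clustering.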
http://www.mdpi.com/1099-4300/16/4/1842
Studies of the spatial patterns of landscapes are useful for quantifying human impact, predicting wildlife effects, and describing various landscape features. A robust landscape index should quantify two components of landscape diversity: composition and configuration. One category of landscape index is the contagion index. Some landscape ecologists promote the use of relative contagion indices. It is demonstrated, using simulated landscapes, that relativized contagion indices are mathematically untenable. A new entropy contagion index (&#x0393;) is developed. Distributional properties of &#x0393;&#x005E; are derived. It is shown to be asymptotically unbiased, consistent, and asymptotically normally distributed. A variance formula for &#x0393;&#x005E; is derived using the delta method. As an application, the pattern of and changes in forest types across four soil-geologic landform strata were analyzed on the 80,000 ha Savannah River Site in South Carolina, USA. One-way analysis of variance was used for hypothesis testing of contagion among strata. The differences in contagion across the strata provide insight for managers seeking to meet structural objectives.Entropy2014-03-26164Article10.3390/e16041842184218591099-43002014-03-26doi: 10.3390/e16041842Bernard ParresolLloyd Edwards<![CDATA[Entropy, Vol. 16, Pages 1819-1841: A Hybrid Cooperative Coding Scheme for the Relay-Eavesdropper Channel]]>
http://www.mdpi.com/1099-4300/16/3/1819
This paper considers the four-node relay-eavesdropper channel, where a relay node helps the source to send secret messages to the destination in the presence of a passive eavesdropper. For the discrete memoryless case, we propose a hybrid cooperative coding scheme, which is based on the combination of the partial decode-forward scheme, the noise-forward scheme and the random binning scheme. The key feature of the proposed hybrid cooperative scheme is that the relay integrates the explicit cooperation strategy and the implicit cooperation strategy by forwarding source messages and additional interference at the same time. The derived achievable secrecy rate shows that some existing works can be viewed as special cases of the proposed scheme. Then, the achievable secrecy rate is extended to the Gaussian channel based on Gaussian codebooks, and the optimal power policy is also identified in the high power region. Both the analysis and numerical results are provided to demonstrate that the proposed hybrid cooperative coding scheme outperforms the comparable ones, especially in the high power region.Entropy2014-03-24163Article10.3390/e16031819181918411099-43002014-03-24doi: 10.3390/e16031819Peng XuZhiguo DingXuchu Dai<![CDATA[Entropy, Vol. 16, Pages 1808-1818: Entropy Change during Martensitic Transformation in Ni50−xCoxMn50−yAly Metamagnetic Shape Memory Alloys]]>
http://www.mdpi.com/1099-4300/16/3/1808
Specific heat was systematically measured by the heat flow method in Ni50-xCoxMn50-yAly metamagnetic shape memory alloys near the martensitic transformation temperatures. Martensitic transformation and ferromagnetic–paramagnetic transition for the parent phase were directly observed via the specific heat measurements. On the basis of the experimental results, the entropy change was estimated and it was found to show an abrupt decrease below the Curie temperature. The results were found to be consistent with those of earlier studies on Ni-Co-Mn-Al alloys.Entropy2014-03-24163Article10.3390/e16031808180818181099-43002014-03-24doi: 10.3390/e16031808Xiao XuWataru ItoTakeshi KanomataRyosuke Kainuma<![CDATA[Entropy, Vol. 16, Pages 1756-1807: Entropy Principle and Recent Results in Non-Equilibrium Theories]]>
http://www.mdpi.com/1099-4300/16/3/1756
We present the state of the art on modern mathematical methods of exploiting the entropy principle in the thermomechanics of continuous media. Recent results and conceptual discussions of this topic in some well-known non-equilibrium theories (classical irreversible thermodynamics, CIT; rational thermodynamics, RT; thermodynamics of irreversible processes, TIP; extended irreversible thermodynamics, EIT; rational extended thermodynamics, RET) are also surveyed.Entropy2014-03-24163Article10.3390/e16031756175618071099-43002014-03-24doi: 10.3390/e16031756Vito CimmelliDavid JouTommaso RuggeriPéter Ván<![CDATA[Entropy, Vol. 16, Pages 1743-1755: Transfer Entropy Expressions for a Class of Non-Gaussian Distributions]]>
http://www.mdpi.com/1099-4300/16/3/1743
Transfer entropy is a frequently employed measure of conditional co-dependence in non-parametric analysis of Granger causality. In this paper, we derive analytical expressions for transfer entropy for the multivariate exponential, logistic, Pareto (types I–IV) and Burr distributions. The latter two fall into the class of fat-tailed distributions with power-law properties, used frequently in the biological, physical and actuarial sciences. We discover that the transfer entropy expressions for all four distributions are identical and depend merely on the multivariate distribution parameter and the number of distribution dimensions. Moreover, we find that in all four cases the transfer entropies are given by the same decreasing function of distribution dimensionality.Entropy2014-03-24163Article10.3390/e16031743174317551099-43002014-03-24doi: 10.3390/e16031743Mehrdad Jafari-MamaghaniJoanna Tyrcha<![CDATA[Entropy, Vol. 16, Pages 1728-1742: Discharge Estimation in a Lined Canal Using Information Entropy]]>
http://www.mdpi.com/1099-4300/16/3/1728
This study applies a new method and technology to measure the discharge in a lined canal in Taiwan. An Acoustic Digital Current Meter mounted on a measurement platform is used to measure the velocities over the full cross-section in order to establish the measurement method. The proposed method primarily employs Chiu’s Equation, which is based on entropy, to establish a constant ratio between the maximum and mean velocities in an irrigation canal, and computes the maximum velocity from the observed velocity profile. In consequence, the mean velocity of the lined canal can be rapidly determined from the maximum velocity and the constant ratio. The cross-sectional area of the artificial irrigation canal can be calculated from the water stage. Finally, the discharge in the lined canal can be efficiently determined from the estimated mean velocity and the cross-sectional area. Using the discharge and stage data collected in the Wan-Dan Canal, the correlation between stage and discharge is also developed for remote real-time monitoring and for estimating discharge from the pumping station. Overall, Chiu’s Equation is demonstrated to measure discharge in a lined canal reliably and accurately, and can serve as a reference for the future calibration of a stage-discharge rating curve.Entropy2014-03-24163Technical Note10.3390/e16031728172817421099-43002014-03-24doi: 10.3390/e16031728Yen-Chang ChenJune-Jein KuoSheng-Reng YuYi-Jiun LiaoHan-Chung Yang<![CDATA[Entropy, Vol. 16, Pages 1687-1727: How Do Life, Economy and Other Complex Systems Escape the Heat Death?]]>
http://www.mdpi.com/1099-4300/16/3/1687
The primordial confrontation underlying the existence of our Universe can be conceived as the battle between entropy and complexity. The law of ever-increasing entropy (the Boltzmann H-theorem) evokes an irreversible, one-directional evolution (or rather involution) going uniformly and monotonically from birth to death. Since the 19th century, this concept has been one of the cornerstones, and at the same time one of the puzzles, of statistical mechanics. On the other hand, there is the empirical experience in which one witnesses the emergence, growth and diversification of new self-organized objects with ever-increasing complexity. When modeling them in terms of simple discrete elements, one finds that the emergence of collective complex adaptive objects is a rather generic phenomenon governed by a new type of laws. These “emergence” laws, not connected directly with the fundamental laws of physical reality, nor acting “in addition” to them, but acting through them, were called “More is Different” by Phil Anderson and “das Maass” by Hegel. Even though the “emergence laws” act through the intermediary of the fundamental laws that govern the individual elementary agents, it turns out that different systems apparently governed by very different fundamental laws (gravity, chemistry, biology, economics, social psychology) often end up with similar emergence laws and outcomes. In particular, the emergence of adaptive collective objects endows the system with a granular structure, which in turn causes specific macroscopic cycles of intermittent fluctuations.Entropy2014-03-21163Article10.3390/e16031687168717271099-43002014-03-21doi: 10.3390/e16031687Sorin SolomonNatasa Golo<![CDATA[Entropy, Vol. 16, Pages 1652-1686: Contact Geometry of Mesoscopic Thermodynamics and Dynamics]]>
http://www.mdpi.com/1099-4300/16/3/1652
The time evolution during which macroscopic systems reach thermodynamic equilibrium states proceeds as a continuous sequence of contact structure preserving transformations maximizing the entropy. This viewpoint of mesoscopic thermodynamics and dynamics provides a unified setting for the classical equilibrium and nonequilibrium thermodynamics, kinetic theory, and statistical mechanics. One of the illustrations presented in the paper is a new version of extended nonequilibrium thermodynamics with fluxes as extra state variables.Entropy2014-03-21163Article10.3390/e16031652165216861099-43002014-03-21doi: 10.3390/e16031652Miroslav Grmela<![CDATA[Entropy, Vol. 16, Pages 1632-1651: The Superiority of Tsallis Entropy over Traditional Cost Functions for Brain MRI and SPECT Registration]]>
http://www.mdpi.com/1099-4300/16/3/1632
Neuroimage registration has an important role in clinical (for both diagnostic and therapeutic purposes) and research applications. In this article we describe the applicability of Tsallis Entropy as a new cost function for neuroimage registration through a comparative analysis based on the performance of the traditional approaches (correlation based: Entropy Correlation Coefficient (ECC) and Normalized Cross Correlation (NCC); and Mutual Information (MI) based: Mutual Information using Shannon Entropy (MIS) and Normalized Mutual Information (NMI)) and the proposed one based on MI using Tsallis entropy (MIT). We created phantoms with known geometric transformations using Single Photon Emission Computed Tomography (SPECT) and Magnetic Resonance Imaging from 3 morphologically normal subjects. The simulated volumes were registered to the original ones using both the proposed and traditional approaches. The comparative analysis of the Relative Error (RE) showed that MIT was more accurate in the intra-modality registration, whereas for inter-modality registration, MIT presented the lowest RE for rotational transformations, and the ECC the lowest RE for translational transformations. In conclusion, we have shown that, with certain limitations, Tsallis Entropy has application as a better cost function for reliable neuroimage registration.Entropy2014-03-21163Article10.3390/e16031632163216511099-43002014-03-21doi: 10.3390/e16031632Henrique Amaral-SilvaLauro Wichert-AnaLuiz MurtaLarissa Romualdo-SuzukiEmerson ItikawaGeraldo BussatoPaulo Azevedo-Marques<![CDATA[Entropy, Vol. 16, Pages 1586-1631: Optimal Forgery and Suppression of Ratings for Privacy Enhancement in Recommendation Systems]]>
http://www.mdpi.com/1099-4300/16/3/1586
Recommendation systems are information-filtering systems that tailor information to users on the basis of knowledge about their preferences. The ability of these systems to profile users is what enables such intelligent functionality, but at the same time, it is the source of serious privacy concerns. In this paper we investigate a privacy-enhancing technology that aims at hindering an attacker in its efforts to accurately profile users based on the items they rate. Our approach capitalizes on the combination of two perturbative mechanisms: the forgery and the suppression of ratings. While this technique enhances user privacy to a certain extent, it inevitably comes at the cost of a loss in data utility, namely a degradation of the recommendation’s accuracy. In short, it poses a trade-off between privacy and utility. The theoretical analysis of such a trade-off is the object of this work. We measure privacy as the Kullback-Leibler divergence between the user’s and the population’s item distributions, and quantify utility as the proportion of ratings users consent to forge and eliminate. Equipped with these quantitative measures, we find a closed-form solution to the problem of optimal forgery and suppression of ratings, an optimization problem that includes, as a particular case, the maximization of the entropy of the perturbed profile. We characterize the optimal trade-off surface among privacy, forgery rate and suppression rate, and experimentally evaluate how our approach could contribute to privacy protection in a real-world recommendation system.Entropy2014-03-21163Article10.3390/e16031586158616311099-43002014-03-21doi: 10.3390/e16031586Javier Parra-ArnauDavid Rebollo-MonederoJordi Forné<![CDATA[Entropy, Vol. 16, Pages 1571-1585: Increasing the Discriminatory Power of DEA Using Shannon’s Entropy]]>
http://www.mdpi.com/1099-4300/16/3/1571
In many data envelopment analysis (DEA) applications, the analyst confronts the difficulty that the selected data set is not suitable for traditional DEA models because of their poor discrimination. This paper presents an approach using Shannon’s entropy to improve the discrimination of traditional DEA models. In this approach, DEA efficiencies are first calculated for all possible variable subsets and analyzed using Shannon’s entropy theory to calculate the degree of importance of each subset in the performance measurement; the obtained efficiencies and degrees of importance are then combined to generate a comprehensive efficiency score (CES), which can noticeably improve the discrimination of traditional DEA models. Finally, the proposed approach is applied to some data sets from the prior DEA literature.Entropy2014-03-20163Article10.3390/e16031571157115851099-43002014-03-20doi: 10.3390/e16031571Qiwei XieQianzhi DaiYongjun LiAn Jiang<![CDATA[Entropy, Vol. 16, Pages 1547-1570: Recent Progress in the Definition of Thermodynamic Entropy]]>
http://www.mdpi.com/1099-4300/16/3/1547
The principal methods for the definition of thermodynamic entropy are discussed with special reference to those developed by Carathéodory, the Keenan School, Lieb and Yngvason, and the present authors. An improvement of the latter method is then presented. Seven basic axioms are employed: three Postulates, which are considered as having a quite general validity, and four Assumptions, which identify the domains of validity of the definitions of energy (Assumption 1) and entropy (Assumptions 2, 3, 4). The domain of validity of the present definition of entropy is not restricted to stable equilibrium states. For collections of simple systems, it coincides with that of the proof of existence and uniqueness of an entropy function which characterizes the relation of adiabatic accessibility proposed by Lieb and Yngvason. However, our treatment does not require the formation of scaled copies so that it applies not only to collections of simple systems, but also to systems contained in electric or magnetic fields and to small and few-particle systems.Entropy2014-03-19163Article10.3390/e16031547154715701099-43002014-03-19doi: 10.3390/e16031547Enzo ZanchiniGian Beretta<![CDATA[Entropy, Vol. 16, Pages 1515-1546: Heat and Gravitation: The Action Principle]]>
http://www.mdpi.com/1099-4300/16/3/1515
Some features of hydro- and thermo-dynamics, as applied to atmospheres and to stellar structures, are puzzling: (1) the suggestion, first made by Laplace, that our atmosphere has an adiabatic temperature distribution, is confirmed for the lower layers, but the explanation for this is very controversial; (2) the standard treatment of relativistic thermodynamics does not favor a systematic treatment of mixtures, such as the mixture of a perfect gas with radiation; (3) the concept of mass density in applications of general relativity to stellar structures is less than completely satisfactory; and (4) arguments in which a concept of energy and entropy play a role, in the context of hydro-thermodynamical systems and gravitation, are not always convincing. It is proposed that a formulation of thermodynamics as an action principle may be a suitable approach to adopt for a new investigation of these matters. This paper formulates the thermodynamics of ideal gases in a constant gravitational field in terms of the Gibbsean action principle. This approach, in the simplest cases, does not deviate from standard practice, but it lays the foundations for a more systematic approach to the various extensions, such as the incorporation of radiation, the consideration of mixtures and the integration with general relativity. We study the interaction between an ideal gas and the photon gas and the propagation of sound in a vertical, isothermal column. We determine the entropy that allows for the popular isothermal equilibrium and introduce the study of the associated adiabatic dynamics. This leads to the suggestion that the equilibrium of an ideal gas must be isentropic, in which case, the role of solar radiation would be merely to compensate for the loss of energy by radiation into the cosmos. 
An experiment with a centrifuge is proposed, to determine the influence of gravitation on the equilibrium distribution with a very high degree of precision.Entropy2014-03-18163Article10.3390/e16031515151515461099-43002014-03-18doi: 10.3390/e16031515Christian Frønsdal<![CDATA[Entropy, Vol. 16, Pages 1501-1514: Localization of Discrete Time Quantum Walks on the Glued Trees]]>
http://www.mdpi.com/1099-4300/16/3/1501
In this paper, we consider the time averaged distribution of discrete time quantum walks on the glued trees. In order to analyze the walks on the glued trees, we consider a reduction to the walks on path graphs. Using a spectral analysis of the Jacobi matrices defined by the corresponding random walks on the path graphs, we have a spectral decomposition of the time evolution operator of the quantum walks. We find significant contributions of the eigenvalues, ±1, of the Jacobi matrices to the time averaged limit distribution of the quantum walks. As a consequence, we obtain the lower bounds of the time averaged distribution.Entropy2014-03-18163Article10.3390/e16031501150115141099-43002014-03-18doi: 10.3390/e16031501Yusuke IdeNorio KonnoEtsuo SegawaXin-Ping Xu<![CDATA[Entropy, Vol. 16, Pages 1493-1500: An Entropy Measure of Non-Stationary Processes]]>
http://www.mdpi.com/1099-4300/16/3/1493
Shannon’s source entropy formula is not appropriate for measuring the uncertainty of non-stationary processes. In this paper, we propose a new entropy measure for non-stationary processes, which is greater than or equal to Shannon’s source entropy. We also consider the maximum entropy of the non-stationary process, which can be used as a design guideline in cryptography.Entropy2014-03-17163Article10.3390/e16031493149315001099-43002014-03-17doi: 10.3390/e16031493Ling LiuHan HuYa DengNai Ding<![CDATA[Entropy, Vol. 16, Pages 1484-1492: Symmetric Informationally-Complete Quantum States as Analogues to Orthonormal Bases and Minimum-Uncertainty States]]>
http://www.mdpi.com/1099-4300/16/3/1484
Recently there has been much effort in the quantum information community to prove (or disprove) the existence of symmetric informationally complete (SIC) sets of quantum states in arbitrary finite dimension. This paper strengthens the urgency of this question by showing that if SIC-sets exist: (1) by a natural measure of orthonormality, they are as close to being an orthonormal basis for the space of density operators as possible; and (2) in prime dimensions, the standard construction for complete sets of mutually unbiased bases and Weyl-Heisenberg covariant SIC-sets are intimately related: The latter represent minimum uncertainty states for the former in the sense of Wootters and Sussman. Finally, we contribute to the question of existence by conjecturing a quadratic redundancy in the equations for Weyl-Heisenberg SIC-sets.Entropy2014-03-14163Article10.3390/e16031484148414921099-43002014-03-14doi: 10.3390/e16031484D. ApplebyHoan DangChristopher Fuchs<![CDATA[Entropy, Vol. 16, Pages 1462-1483: Applied Thermodynamics: Grain Boundary Segregation]]>
http://www.mdpi.com/1099-4300/16/3/1462
Chemical composition of interfaces—free surfaces and grain boundaries—is generally described by the Langmuir–McLean segregation isotherm controlled by Gibbs energy of segregation. Various components of the Gibbs energy of segregation, the standard and the excess ones as well as other thermodynamic state functions—enthalpy, entropy and volume—of interfacial segregation are derived and their physical meaning is elucidated. The importance of the thermodynamic state functions of grain boundary segregation, their dependence on volume solid solubility, mutual solute–solute interaction and pressure effect in ferrous alloys is demonstrated.Entropy2014-03-12163Article10.3390/e16031462146214831099-43002014-03-12doi: 10.3390/e16031462Pavel LejčekLei ZhengSiegfried HofmannMojmír Šob<![CDATA[Entropy, Vol. 16, Pages 1426-1461: Non-Equilibrium Liouville and Wigner Equations: Moment Methods and Long-Time Approximations]]>
http://www.mdpi.com/1099-4300/16/3/1426
We treat the non-equilibrium evolution of an open one-particle statistical system, subject to a potential and to an external “heat bath” (hb) with negligible dissipation. For the classical equilibrium Boltzmann distribution, Wc,eq, a non-equilibrium three-term hierarchy for moments fulfills Hermiticity, which allows one to justify an approximate long-time thermalization. That gives partial dynamical support to Boltzmann’s Wc,eq, out of the set of classical stationary distributions, Wc;st, also investigated here, for which neither Hermiticity nor that thermalization hold, in general. For closed classical many-particle systems without hb (by using Wc,eq), the long-time approximate thermalization for three-term hierarchies is justified and yields an approximate Lyapunov function and an arrow of time. The largest part of the work treats an open quantum one-particle system through the non-equilibrium Wigner function, W. Weq for a repulsive finite square well is reported. W’s (&lt; 0 in various cases) are assumed to be quasi-definite functionals regarding their dependences on momentum (q). That yields orthogonal polynomials, HQ,n(q), for Weq (and for stationary Wst), non-equilibrium moments, Wn, of W and hierarchies. For the first excited state of the harmonic oscillator, its stationary Wst is a quasi-definite functional, and the orthogonal polynomials and three-term hierarchy are studied. In general, the non-equilibrium quantum hierarchies (associated with Weq) for the Wn’s are not three-term ones. As an illustration, we outline a non-equilibrium four-term hierarchy and its solution in terms of generalized operator continued fractions. Such structures also allow one to formulate long-time approximations, but make it more difficult to justify thermalization. 
For large thermal and de Broglie wavelengths, the dominant Weq and a non-equilibrium equation for W are reported: the non-equilibrium hierarchy could plausibly be a three-term one and possibly not far from Gaussian, and thermalization could possibly be justified.Entropy2014-03-11163Article10.3390/e16031426142614611099-43002014-03-11doi: 10.3390/e16031426Ramon Álvarez-Estrada<![CDATA[Entropy, Vol. 16, Pages 1414-1425: Analysis of Solar Neutrino Data from Super-Kamiokande I and II]]>
http://www.mdpi.com/1099-4300/16/3/1414
We are going back to the roots of the original solar neutrino problem: the analysis of data from solar neutrino experiments. The application of standard deviation analysis (SDA) and diffusion entropy analysis (DEA) to the Super-Kamiokande I and II data reveals that they represent a non-Gaussian signal. The Hurst exponent is different from the scaling exponent of the probability density function, and both the Hurst exponent and scaling exponent of the probability density function of the Super-Kamiokande data deviate considerably from the value of 0.5, which indicates that the statistics of the underlying phenomenon is anomalous. To develop a road to the possible interpretation of this finding, we utilize Mathai’s pathway model and consider fractional reaction and fractional diffusion as possible explanations of the non-Gaussian content of the Super-Kamiokande data.Entropy2014-03-10163Article10.3390/e16031414141414251099-43002014-03-10doi: 10.3390/e16031414Hans HauboldArak MathaiRam Saxena<![CDATA[Entropy, Vol. 16, Pages 1396-1413: Infinite Excess Entropy Processes with Countable-State Generators]]>
http://www.mdpi.com/1099-4300/16/3/1396
We present two examples of finite-alphabet, infinite excess entropy processes generated by stationary hidden Markov models (HMMs) with countable state sets. The first, simpler example is not ergodic, but the second is. These are the first explicit constructions of processes of this type.Entropy2014-03-10163Article10.3390/e16031396139614131099-43002014-03-10doi: 10.3390/e16031396Nicholas TraversJames Crutchfield<![CDATA[Entropy, Vol. 16, Pages 1376-1395: Bayesian Test of Significance for Conditional Independence: The Multinomial Model]]>
http://www.mdpi.com/1099-4300/16/3/1376
Conditional independence tests have lately received special attention in the machine learning and computational intelligence literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for discrete datasets. The full Bayesian significance test is a powerful Bayesian test for precise hypotheses, as an alternative to frequentist significance tests (characterized by the calculation of the p-value).Entropy2014-03-07163Article10.3390/e16031376137613951099-43002014-03-07doi: 10.3390/e16031376Pablo de Morais AndradeJulio SternCarlos de Bragança Pereira<![CDATA[Entropy, Vol. 16, Pages 1365-1375: Ensemble Entropy for Monitoring Network Design]]>
http://www.mdpi.com/1099-4300/16/3/1365
Information theory provides, among other things, conceptual methods to quantify the amount of information contained in a single random variable and the amount of information contained in, and shared among, two or more variables. Although these concepts have been successfully applied in hydrology and other fields, the evaluation of these quantities is sensitive to the assumptions made in the estimation of probabilities. An example is the histogram bin size used to estimate probabilities when calculating information-theoretic quantities via frequency methods. The present research introduces a method to take into account the uncertainty arising from these parameters in the evaluation of the North Sea’s water level monitoring network. The main idea is that the entropy of a random variable can be represented as a probability distribution of possible values, instead of a deterministic value. The method consists of solving multiple scenarios of a multi-objective optimization problem in which information content is maximized and redundancy is minimized. Results include a probabilistic analysis of the effect of the chosen parameters on the resulting family of Pareto fronts, providing additional criteria for the selection of the final set of monitoring points.Entropy2014-03-04163Article10.3390/e16031365136513751099-43002014-03-04doi: 10.3390/e16031365Leonardo AlfonsoElena RidolfiSandra Gaytan-AguilarFrancesco NapolitanoFabio Russo<![CDATA[Entropy, Vol. 16, Pages 1349-1364: Entropy Estimation of Disaggregate Production Functions: An Application to Northern Mexico]]>
http://www.mdpi.com/1099-4300/16/3/1349
This paper demonstrates a robust maximum entropy approach to estimating flexible-form farm-level multi-input/multi-output production functions using minimally specified disaggregated data. Since our goal is to address policy questions, we emphasize the model’s ability to reproduce characteristics of the existing production system and predict outcomes of policy changes at a disaggregate level. Measurement of distributional impacts of policy changes requires use of farm-level models estimated across a wide spectrum of sizes and types, which is often difficult with traditional econometric methods due to data limitations. We use a two-stage approach to generate observation-specific shadow values for incompletely priced inputs. We then use the shadow values and nominal input prices to estimate crop-specific production functions using generalized maximum entropy (GME) to capture individual heterogeneity of the production environment while replicating observed inputs and outputs to production. The two-stage GME approach can be implemented with small data sets. We demonstrate this methodology in an empirical application to a small cross-section data set for Northern Rio Bravo, Mexico and estimate production functions for small family farms and moderate commercial farms. The estimates show considerable distributional differences resulting from policies that change water subsidies in the region or shift price supports to direct payments.Entropy2014-03-03163Concept Paper10.3390/e16031349134913641099-43002014-03-03doi: 10.3390/e16031349Richard HowittSiwa Msangi<![CDATA[Entropy, Vol. 16, Pages 1331-1348: Describing the Structural Diversity within an RNA’s Ensemble]]>
http://www.mdpi.com/1099-4300/16/3/1331
RNA is usually classified as either structured or unstructured; however, neither category is adequate in describing the diversity of secondary structures expected in biological systems. We describe this diversity within the ensemble of structures by using two different metrics: the average Shannon entropy and the ensemble defect. The average Shannon entropy is a measure of the structural diversity calculated from the base pair probability matrix. The ensemble defect, a tool in identifying optimal sequences for a given structure, is a measure of the average number of structural differences between a target structure and all the structures that make up the ensemble, scaled to the length of the sequence. In this paper, we show examples and discuss various uses of these metrics in both structured and unstructured RNA. Exploring how these two metrics describe RNA as an ensemble of different structures, as would be found in biological systems, will push the field beyond the standard “structured” and “unstructured” categorization.Entropy2014-03-03163Article10.3390/e16031331133113481099-43002014-03-03doi: 10.3390/e16031331Joshua Martin<![CDATA[Entropy, Vol. 16, Pages 1315-1330: Information Flow in Animal-Robot Interactions]]>
http://www.mdpi.com/1099-4300/16/3/1315
The nonverbal transmission of information between social animals is a primary driving force behind their actions and, therefore, an important quantity to measure in animal behavior studies. Despite its key role in social behavior, the flow of information has only been inferred by correlating the actions of individuals with a simplifying assumption of linearity. In this paper, we leverage information-theoretic tools to relax this assumption. To demonstrate the feasibility of our approach, we focus on a robotics-based experimental paradigm, which affords consistent and controllable delivery of visual stimuli to zebrafish. Specifically, we use a robotic arm to maneuver a life-sized replica of a zebrafish in a predetermined trajectory as it interacts with a focal subject in a test tank. We track the fish and the replica through time and use the resulting trajectory data to measure the transfer entropy between the replica and the focal subject, which, in turn, is used to quantify one-directional information flow from the robot to the fish. In agreement with our expectations, we find that the information flow from the replica to the zebrafish is significantly more than the other way around. Notably, such information is specifically related to the response of the fish to the replica, whereby we observe that the information flow is reduced significantly if the motion of the replica is randomly delayed in a surrogate dataset. In addition, comparison with a control experiment, where the replica is replaced by a conspecific, shows that the information flow toward the focal fish is significantly more for a robotic than a live stimulus. 
These findings support the reliability of using transfer entropy as a measure of information flow, while providing indirect evidence for the efficacy of a robotics-based platform in animal behavioral studies.Entropy2014-02-28163Article10.3390/e16031315131513301099-43002014-02-28doi: 10.3390/e16031315Sachit ButailFabrizio LaduDavide SpinelloMaurizio Porfiri<![CDATA[Entropy, Vol. 16, Pages 1287-1314: Entropy: From Thermodynamics to Hydrology]]>
http://www.mdpi.com/1099-4300/16/3/1287
Some known results from statistical thermophysics as well as from hydrology are revisited from a different perspective, trying (a) to unify the notion of entropy in thermodynamic and statistical/stochastic approaches to complex hydrological systems and (b) to show the power of entropy and the principle of maximum entropy in inference, both deductive and inductive. The capability for deductive reasoning is illustrated by deriving the law of phase transition of water (Clausius-Clapeyron) from scratch by maximizing entropy in a formal probabilistic frame. However, such deductive reasoning cannot work in more complex hydrological systems with diverse elements, yet the entropy maximization framework can help in inductive inference, necessarily based on data. Several examples of this type are provided in an attempt to link statistical thermophysics with hydrology through a unifying view of entropy.Entropy2014-02-27163Article10.3390/e16031287128713141099-43002014-02-27doi: 10.3390/e16031287Demetris Koutsoyiannis<![CDATA[Entropy, Vol. 16, Pages 1272-1286: Information Theory Analysis of Cascading Process in a Synthetic Model of Fluid Turbulence]]>
http://www.mdpi.com/1099-4300/16/3/1272
The use of transfer entropy has proven helpful in detecting the direction of dynamical driving in the interaction of two processes, X and Y. In this paper, we present a different normalization for the transfer entropy, which is capable of better detecting the direction of information transfer. This new normalized transfer entropy is applied to the detection of the direction of energy flux transfer in a synthetic model of fluid turbulence, namely the Gledzer–Ohkitana–Yamada shell model. This well-known model reproduces fully developed turbulence in Fourier space, which is characterized by an energy cascade towards the small scales (large wavenumbers k), so that the application of the information-theory analysis to its outcome tests the reliability of the analysis tool rather than exploring the model physics. As a result, the presence of a direct cascade along the scales in the shell model and the locality of the interactions in the space of wavenumbers come out as expected, indicating the validity of this data analysis tool. In this context, the normalized version of transfer entropy, which accounts for the difference in the intrinsic randomness of the interacting processes, appears to perform better, avoiding the wrong conclusions to which the “traditional” transfer entropy would lead.Entropy2014-02-27163Article10.3390/e16031272127212861099-43002014-02-27doi: 10.3390/e16031272Massimo MaterassiGiuseppe ConsoliniNathan SmithRossana De Marco<![CDATA[Entropy, Vol. 16, Pages 1243-1271: A Bayesian Approach to the Balancing of Statistical Economic Data]]>
http://www.mdpi.com/1099-4300/16/3/1243
This paper addresses the problem of balancing statistical economic data when the data structure is arbitrary and both uncertainty estimates and a ranking of data quality are available. Using a Bayesian approach, the prior configuration is described as a multivariate random vector and the balanced posterior is obtained by application of relative entropy minimization. The paper shows that conventional data balancing methods, such as generalized least squares, weighted least squares and biproportional methods, are particular cases of the general method described here. As a consequence, it is possible to determine the underlying assumptions and range of application of each traditional method. In particular, the popular biproportional method is found to assume that all source data have the same relative uncertainty. Finally, this paper proposes a simple linear iterative method that generalizes the biproportional method to the data balancing problem with arbitrary data structure, uncertainty estimates and multiple data quality levels.Entropy2014-02-26163Article10.3390/e16031243124312711099-43002014-02-26doi: 10.3390/e16031243João Rodrigues<![CDATA[Entropy, Vol. 16, Pages 1211-1242: The Maximum Error Probability Criterion, Random Encoder, and Feedback, in Multiple Input Channels]]>
http://www.mdpi.com/1099-4300/16/3/1211
For a multiple input channel, one may define different capacity regions, according to the criteria of error, types of codes, and presence of feedback. In this paper, we aim to draw a complete picture of the relations among these different capacity regions. To this end, we first prove that the average-error-probability capacity region of a multiple input channel can be achieved by a random code under the criterion of maximum error probability. Moreover, we show that for a non-deterministic multiple input channel with feedback, the capacity regions are the same under the two different error criteria. In addition, we discuss two special classes of channels to shed light on the relation of different capacity regions. In particular, to illustrate the role of feedback, we provide a class of MACs for which feedback may enlarge the maximum-error-probability capacity regions, but not the average-error-probability capacity regions. We also present a class of MACs as an example for which the maximum-error-probability capacity regions are strictly smaller than the average-error-probability capacity regions (the first example showing this was due to G. Dueck). Unlike G. Dueck’s enlightening example, in which a deterministic MAC was considered, ours includes and further generalizes his example by taking both deterministic and non-deterministic MACs into account. Finally, we extend our results for a discrete memoryless two-input channel to compound MACs, arbitrarily varying MACs, and MACs with more than two inputs.Entropy2014-02-25163Article10.3390/e16031211121112421099-43002014-02-25doi: 10.3390/e16031211Ning Cai<![CDATA[Entropy, Vol. 16, Pages 1191-1210: One Antimatter—Two Possible Thermodynamics]]>
http://www.mdpi.com/1099-4300/16/3/1191
Conventional thermodynamics, which is formulated for our world populated by radiation and matter, can be extended to describe physical properties of antimatter in two mutually exclusive ways: CP-invariant or CPT-invariant. Here we refer to invariance of physical laws under charge (C), parity (P) and time reversal (T) transformations. While in quantum field theory CPT invariance is a theorem confirmed by experiments, the symmetry principles applied to macroscopic phenomena or to the whole of the Universe represent only hypotheses. Since the two versions of thermodynamics differ only in their treatment of antimatter, but are the same in describing our world dominated by matter, making a clear experimentally justified choice between CP invariance and CPT invariance in the context of thermodynamics is not possible at present. This work investigates the comparative properties of the CP- and CPT-invariant extensions of thermodynamics (focusing on the latter, which is less conventional than the former) and examines conditions under which these extensions can be experimentally tested.Entropy2014-02-25163Article10.3390/e16031191119112101099-43002014-02-25doi: 10.3390/e16031191Alexander KlimenkoUlrich Maas<![CDATA[Entropy, Vol. 16, Pages 1178-1190: Fluctuations, Entropic Quantifiers and Classical-Quantum Transition]]>
http://www.mdpi.com/1099-4300/16/3/1178
We show that a special entropic quantifier, called the statistical complexity, becomes maximal at the transition between super-Poisson and sub-Poisson regimes. This acquires important connotations given the fact that these regimes are usually associated with, respectively, classical and quantum processes.Entropy2014-02-25163Article10.3390/e16031178117811901099-43002014-02-25doi: 10.3390/e16031178Flavia PenniniAngelo Plastino<![CDATA[Entropy, Vol. 16, Pages 1169-1177: Acknowledgement to Reviewers of Entropy in 2013]]>
http://www.mdpi.com/1099-4300/16/2/1169
The editors of Entropy would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2013.Entropy2014-02-24162Editorial10.3390/e16021169116911771099-43002014-02-24doi: 10.3390/e16021169 Entropy Editorial Office<![CDATA[Entropy, Vol. 16, Pages 1134-1168: Signal Processing for the Measurement of the Deuterium/Hydrogen Ratio in the Local Interstellar Medium]]>
http://www.mdpi.com/1099-4300/16/2/1134
We report on a comprehensive signal processing procedure for very low signal levels for the measurement of neutral deuterium in the local interstellar medium from a spacecraft in Earth orbit. The deuterium measurements were performed with the IBEX-Lo camera on NASA’s Interstellar Boundary Explorer (IBEX) satellite. Our analysis technique for these data consists of creating a mass relation in three-dimensional time-of-flight space to accurately determine the position of the predicted D events, to precisely model the tail of the H events in the region where the H tail events are near the expected D events, and then to separate the H tail from the observations to extract the very faint D signal. This interstellar D signal, which is expected to be a few counts per year, is extracted from a strong terrestrial background signal, consisting of sputter products from the sensor’s conversion surface. As a reference, we accurately measure the terrestrial D/H ratio in these sputtered products and then discriminate against this terrestrial background source. During the three years of the mission when the deuterium signal was visible to IBEX, the observation geometry and orbit allowed for a total observation time of 115.3 days. Because of the spinning of the spacecraft and the stepping through eight energy channels, the actual observing time of the interstellar wind was only 1.44 days. With the optimised data analysis we found three counts that could be attributed to interstellar deuterium. These results update our earlier work.Entropy2014-02-24162Article10.3390/e16021134113411681099-43002014-02-24doi: 10.3390/e16021134Diego Rodríguez MorenoPeter WurzLukas SaulMaciej BzowskiMarzena KubiakJustyna SokółPriscilla FrischStephen FuselierDavid McComasEberhard MöbiusNathan Schwadron<![CDATA[Entropy, Vol. 16, Pages 1123-1133: A Relationship between the Ordinary Maximum Entropy Method and the Method of Maximum Entropy in the Mean]]>
http://www.mdpi.com/1099-4300/16/2/1123
There are two entropy-based methods to deal with linear inverse problems, which we shall call the ordinary method of maximum entropy (OME) and the method of maximum entropy in the mean (MEM). Not only does MEM use OME as a stepping stone, it also allows for greater generality: first, because it allows one to include convex constraints in a natural way, and second, because it allows one to incorporate and estimate (additive) measurement errors from the data. Here we shall see both methods in action in a specific example. We shall solve the discretized version of the problem by two variants of MEM and directly with OME. We shall see that OME is actually a particular instance of MEM, when the reference measure is a Poisson measure.Entropy2014-02-24162Article10.3390/e16021123112311331099-43002014-02-24doi: 10.3390/e16021123Henryk GzylEnrique ter Horst<![CDATA[Entropy, Vol. 16, Pages 1122: Retraction: Aydin, B. Statistical Convergent Topological Sequence Entropy Maps of the Circle. Entropy 2004, 6, 257–261]]>
http://www.mdpi.com/1099-4300/16/2/1122
The editors were made aware that a paper published in Entropy in 2004 [1] may have plagiarized an earlier paper by Roman Hric published in 2000 [2]. After checking with specialized plagiarism software, we found that this claim is indeed correct and almost the entire paper is a verbatim copy of the earlier one. After confirmation of this fact, the editors of Entropy have decided to retract the paper immediately. [...]Entropy2014-02-21162Retraction10.3390/e16021122112211221099-43002014-02-21doi: 10.3390/e16021122Kevin Knuth<![CDATA[Entropy, Vol. 16, Pages 1101-1121: Realization of Thermal Inertia in Frequency Domain]]>
http://www.mdpi.com/1099-4300/16/2/1101
To explain the lagging behavior in heat conduction observed over the past two decades, this paper first theoretically excludes the possibility that the underlying thermal inertia is a result of the time delay in heat diffusion. Instead, we verify experimentally the electro-thermal analogy, wherein thermal inertia is parameterized by a thermal inductance that formulates hyperbolic heat conduction. The thermal hyperbolicity exhibits a special frequency response in the Bode plot, wherein the amplitude ratio stays flat after crossing a certain frequency, as opposed to Fourier heat conduction. We apply this property to design an instrument that reliably identifies the thermal inductances of some materials in the frequency domain. The instrument is embedded with a DSP-based frequency synthesizer capable of modulating frequencies at very high resolution. Thermal inertia implies a new possibility for energy storage, in analogy to inductive energy storage in electricity or mechanics.Entropy2014-02-20162Article10.3390/e16021101110111211099-43002014-02-20doi: 10.3390/e16021101Boe-Shong HongChia-Yu Chou<![CDATA[Entropy, Vol. 16, Pages 1089-1100: Entropy and Its Correlations with Other Related Quantities]]>
http://www.mdpi.com/1099-4300/16/2/1089
In order to find more correlations between entropy and other related quantities, an analogical analysis is conducted between thermal science and other branches of physics. Potential energy in various forms is the product of a conserved extensive quantity (for example, mass or electric charge) and an intensive quantity which is its potential (for example, gravitational potential or electrical voltage), while energy in a specific form is a dissipative quantity during an irreversible transfer process (for example, mechanical or electrical energy will be dissipated as thermal energy). However, it has been shown that heat or thermal energy, like mass or electric charge, is conserved during heat transfer processes. When a heat transfer process is for object heating or cooling, the potential of the internal energy U is the temperature T and its potential “energy” is UT/2 (called entransy, the simplified expression of the thermomass potential energy); when a heat transfer process is for heat-work conversion, the potential of the internal energy U is (1 − T0/T), and the available potential energy of a system in reversible heat interaction with the environment is U − U0 − T0(S − S0); then T0/T and T0(S − S0) are the unavailable potential and the unavailable potential energy of a system, respectively. Hence, entropy is related to the unavailable potential energy per unit environmental temperature for heat-work conversion during reversible heat interaction between the system and its environment. Entropy transfer, like other forms of potential energy transfer, is the product of the heat and its potential, the reciprocal of temperature, although it is in the form of the quotient of the heat and the temperature. Thus, the physical essence of entropy transfer is the unavailable potential energy transfer per unit environmental temperature. 
Entropy is a non-conserved, extensive, state quantity of a system, and entropy generation in an irreversible heat transfer process is proportional to the destruction of available potential energy.Entropy2014-02-19162Article10.3390/e16021089108911001099-43002014-02-19doi: 10.3390/e16021089Jing WuZengyuan Guo<![CDATA[Entropy, Vol. 16, Pages 1070-1088: The Application of Weighted Entropy Theory in Vulnerability Assessment and On-Line Reconfiguration Implementation of Microgrids]]>
http://www.mdpi.com/1099-4300/16/2/1070
Taking into account the randomness of distributed generations and loads, this paper proposes a method for the vulnerability assessment of microgrids based on complex network theory and entropy theory, which can explain the influence of the inherent structural characteristics and the internal energy distribution of the system on the microgrid. A vulnerability assessment index is built, and an on-line reconfiguration model considering the vulnerability assessment of the microgrid is also established. An improved cellular bat algorithm is tested on the CERTS system to implement real-time reconfiguration quickly and accurately, providing a basis in both theory and practice.Entropy2014-02-19162Article10.3390/e16021070107010881099-43002014-02-19doi: 10.3390/e16021070Xin ZhanTieyuan XiangHongkun Chen<![CDATA[Entropy, Vol. 16, Pages 1047-1069: The Kalman Filter Revisited Using Maximum Relative Entropy]]>
http://www.mdpi.com/1099-4300/16/2/1047
In 1960, Rudolf E. Kalman created what is known as the Kalman filter, which is a way to estimate unknown variables from noisy measurements. The algorithm follows the logic that if the previous state of the system is known, it can be used as the best guess for the current state. This information is first applied a priori to any measurement by using it in the underlying dynamics of the system. Second, measurements of the unknown variables are taken. These two pieces of information are taken into account to determine the current state of the system. Bayesian inference is specifically designed to accommodate the problem of updating what we think of the world based on partial or uncertain information. In this paper, we present a derivation of the general Bayesian filter, then adapt it for Markov systems. A simple example is shown for pedagogical purposes. We also show that by using the Kalman assumptions or “constraints”, we can arrive at the Kalman filter using the method of maximum (relative) entropy (MrE), which goes beyond Bayesian methods. Finally, we derive a generalized, nonlinear filter using MrE, of which the original Kalman filter is a special case. We further show that the variable relationship can be any function, and thus approximations, such as the extended Kalman filter, the unscented Kalman filter and other Kalman variants, are special cases as well.Entropy2014-02-19162Article10.3390/e16021047104710691099-43002014-02-19doi: 10.3390/e16021047Adom GiffinRenaldas Urniezius<![CDATA[Entropy, Vol. 16, Pages 1037-1046: Maximum Entropy Production vs. Kolmogorov-Sinai Entropy in a Constrained ASEP Model]]>
http://www.mdpi.com/1099-4300/16/2/1037
The asymmetric simple exclusion process (ASEP) has become a paradigmatic toy-model of a non-equilibrium system, and much effort has been made in the past decades to compute exactly its statistics for given dynamical rules. Here, a different approach is developed; analogously to the equilibrium situation, we consider that the dynamical rules are not exactly known. Allowing the transition rate to vary, we show that the dynamical rules that maximize the entropy production and those that maximize the rate of variation of the dynamical entropy, known as the Kolmogorov-Sinai entropy, coincide with good accuracy. We study the dependence of this agreement on the size of the system and the couplings with the reservoirs, for the original ASEP and a variant with Langmuir kinetics.Entropy2014-02-19162Article10.3390/e16021037103710461099-43002014-02-19doi: 10.3390/e16021037Martin MihelichBérengère DubrulleDidier PaillardCorentin Herbert<![CDATA[Entropy, Vol. 16, Pages 1002-1036: Learning from Complex Systems: On the Roles of Entropy and Fisher Information in Pairwise Isotropic Gaussian Markov Random Fields]]>
http://www.mdpi.com/1099-4300/16/2/1002
Markov random field models are powerful tools for the study of complex systems. However, little is known about how the interactions between the elements of such systems are encoded, especially from an information-theoretic perspective. In this paper, our goal is to elucidate the connection between Fisher information, Shannon entropy, information geometry and the behavior of complex systems modeled by isotropic pairwise Gaussian Markov random fields. We propose analytical expressions to compute local and global versions of these measures using Besag’s pseudo-likelihood function, characterizing the system’s behavior through its Fisher curve, a parametric trajectory across the information space that provides a geometric representation for the study of complex systems in which temperature deviates from infinity. Computational experiments show how the proposed tools can be useful in extracting relevant information from complex patterns. The obtained results quantify and support our main conclusion, which is: in terms of information, moving towards higher entropy states (A → B) is different from moving towards lower entropy states (B → A), since the Fisher curves are not the same, given a natural orientation (the direction of time).Entropy2014-02-18162Article10.3390/e16021002100210361099-43002014-02-18doi: 10.3390/e16021002Alexandre Levada<![CDATA[Entropy, Vol. 16, Pages 990-1001: Prediction Method for Image Coding Quality Based on Differential Information Entropy]]>
http://www.mdpi.com/1099-4300/16/2/990
To meet the requirement of quality-based image coding, an approach to predict image coding quality based on differential information entropy is proposed. First, some typical prediction approaches are introduced, and then the differential information entropy is reviewed. Taking JPEG2000 as an example, the relationship between differential information entropy and the objective assessment indicator PSNR at a fixed compression ratio is established via data fitting, with the constraint of minimizing the average error. Next, the relationship among differential information entropy, compression ratio and PSNR at various compression ratios is constructed and used as an indicator to predict image coding quality. Finally, the proposed approach is compared with some traditional approaches. The experiments show that differential information entropy has a better linear relationship with image coding quality than the image activity does. Therefore, it can be concluded that the proposed approach is capable of predicting image coding quality at low compression ratios with small errors, and can be widely applied in a variety of real-time space image coding systems owing to its simplicity.Entropy2014-02-17162Article10.3390/e1602099099010011099-43002014-02-17doi: 10.3390/e16020990Xin TianTao LiJin-Wen TianSong Li<![CDATA[Entropy, Vol. 16, Pages 968-989: Information Bottleneck Approach to Predictive Inference]]>
http://www.mdpi.com/1099-4300/16/2/968
This paper synthesizes a recent line of work on automated predictive model making inspired by Rate-Distortion theory, in particular by the Information Bottleneck method. Predictive inference is interpreted as a strategy for efficient communication. The relationship to thermodynamic efficiency is discussed. The overall aim of this paper is to explain how this information theoretic approach provides an intuitive, overarching framework for predictive inference.Entropy2014-02-17162Article10.3390/e160209689689891099-43002014-02-17doi: 10.3390/e16020968Susanne Still<![CDATA[Entropy, Vol. 16, Pages 953-967: The Elusive Nature of Entropy and Its Physical Meaning]]>
http://www.mdpi.com/1099-4300/16/2/953
Entropy is the most used and often abused concept in science, but also in philosophy and society. Further confusion is produced by attempts to generalize entropy with similar, but not the same, concepts in other disciplines. The physical meaning of phenomenological, thermodynamic entropy is reasoned and elaborated by generalizing the Clausius definition with the inclusion of generated heat, since it is irrelevant whether entropy is changed due to reversible heat transfer or irreversible heat generation. Irreversible, caloric heat transfer is introduced as complementing reversible heat transfer. It is also reasoned, and thus proven, why entropy cannot be destroyed but is always generated (and thus overall increased), locally and globally, on all space and time scales, without exception. It is concluded that entropy is a thermal displacement (dynamic thermal-volume) of thermal energy due to absolute temperature as a thermal potential (dQ = TdS), and is thus associated with thermal heat and absolute temperature, i.e., the distribution of thermal energy within thermal micro-particles in space. Entropy is an integral measure of (random) thermal energy redistribution (due to heat transfer and/or irreversible heat generation) within a material system structure in space, per absolute temperature level: dS = dQSys/T = mCSysdT/T, thus a logarithmic integral function, with unit J/K. It may also be expressed as a measure of “thermal disorder”, being related to the logarithm of the number of all thermal, dynamic microstates W (their positions and momenta), S = kBlnW, or to the sum of their logarithmic probabilities, S = −kB∑pilnpi, that correspond to, or are consistent with, the given thermodynamic macro-state. The number of thermal microstates W is correlated with the macro-properties temperature T and volume V for ideal gases. A system’s form and/or functional order or disorder is not (thermal) energy order/disorder, and the former is not related to thermodynamic entropy. 
Expanding entropy to any type of disorder or information is a source of many misconceptions. Granted, there are certain benefits of simplified statistical descriptions for better comprehending the randomness of thermal motion and related physical quantities, but the limitations should be stated so that the generalizations are not overstretched and the real physics is not overlooked or, worse, discredited.Entropy2014-02-17162Article10.3390/e160209539539671099-43002014-02-17doi: 10.3390/e16020953Milivoje Kostic<![CDATA[Entropy, Vol. 16, Pages 943-952: Fused Entropy Algorithm in Optical Computed Tomography]]>
http://www.mdpi.com/1099-4300/16/2/943
In most applications of optical computed tomography (OpCT), limited-view problems are often encountered, which can be solved to a certain extent with typical OpCT reconstructive algorithms. The concept of entropy, which first emerged in information theory, has been introduced into OpCT algorithms, such as maximum entropy (ME) algorithms and cross entropy (CE) algorithms, which have demonstrated their superiority over traditional OpCT algorithms yet have their own limitations. A fused entropy (FE) algorithm, which follows an optimized criterion combining ME self-adaptively with CE, is proposed and investigated by comparison with ME, CE and some traditional OpCT algorithms. Reconstructed results of several physical models show that this FE algorithm has good convergence and can achieve better precision than the other algorithms, which verifies the feasibility of FE as an approach to optimizing computation, not only for OpCT, but also for other image processing applications.Entropy2014-02-17162Article10.3390/e160209439439521099-43002014-02-17doi: 10.3390/e16020943Xiong WanPeng WangZhimin ZhangHuaming Zhang<![CDATA[Entropy, Vol. 16, Pages 921-942: Statistical Analysis of Distance Estimators with Density Differences and Density Ratios]]>
http://www.mdpi.com/1099-4300/16/2/921
Estimating a discrepancy between two probability distributions from samples is an important task in statistics and machine learning. There are mainly two classes of discrepancy measures: distance measures based on the density difference, such as the Lp-distances, and divergence measures based on the density ratio, such as the Φ-divergences. The intersection of these two classes is the L1-distance measure, and thus, it can be estimated either based on the density difference or the density ratio. In this paper, we first show that the Bregman scores, which are widely employed for the estimation of probability densities in statistical data analysis, allow us to estimate the density difference and the density ratio directly, without separately estimating each probability distribution. We then theoretically elucidate the robustness of these estimators and present numerical experiments.Entropy2014-02-17162Article10.3390/e160209219219421099-43002014-02-17doi: 10.3390/e16020921Takafumi KanamoriMasashi Sugiyama<![CDATA[Entropy, Vol. 16, Pages 912-920: Thermodynamic Analysis of the V-Shaped Area of High Pressure and High Temperature in Cubic Boron Nitride Synthesis with Li3N as a Catalyst]]>
http://www.mdpi.com/1099-4300/16/2/912
The possibilities of different phase transitions to cBN with Li3N as a catalyst at high temperature and high pressure (1600–2200 K, 4.8–6.0 GPa) are analyzed in the framework of the second law of thermodynamics. The Gibbs free energy (∆G) of three reactions that may occur in the Li3N-BN system: hBN + Li3N→Li3BN2, hBN→cBN, and Li3BN2→cBN + Li3N, is calculated, with the influence of high temperature and high pressure on volume included. We show that ∆G of hBN + Li3N→Li3BN2 and hBN→cBN are between −35~−10 kJ·mol−1 and −25~−19 kJ·mol−1, respectively. However, ∆G of Li3BN2→cBN + Li3N can be positive or negative. The area formed by the positive data is a V-shaped area, which covers most of the cBN growing V-shaped area. This confirms that Li3BN2 is stable in the P-T area of cBN synthesis, and that cBN is probably transformed directly from hBN. The analysis suggests that Li3BN2 promotes the transition from hBN to cBN.Entropy2014-02-14162Article10.3390/e160209129129201099-43002014-02-14doi: 10.3390/e16020912Bin XuMei-Zhe LvHong-Mei YangZhen-Xing Wen<![CDATA[Entropy, Vol. 16, Pages 895-911: Alloying and Processing Effects on the Aqueous Corrosion Behavior of High-Entropy Alloys]]>
http://www.mdpi.com/1099-4300/16/2/895
The effects of metallurgical factors on the aqueous corrosion behavior of high-entropy alloys (HEAs) are reviewed in this article. Alloying (e.g., Al and Cu) and processing (e.g., heat treatments) effects on the aqueous corrosion behavior of HEAs, including passive film formation, galvanic corrosion, and pitting corrosion, are discussed in detail. Corrosion rates of HEAs are calculated using electrochemical measurements and the weight-loss method. Available experimental corrosion data of HEAs in two common solutions [sulfuric acid (0.5 M H2SO4) and salt water (3.5 weight percent, wt.%, NaCl)], such as the corrosion potential (Ecorr), corrosion current density (icorr), pitting potential (Epit), and passive region (ΔE), are summarized and compared with conventional corrosion-resistant alloys. Possible directions of future work on the corrosion behavior of HEAs are suggested.Entropy2014-02-14162Review10.3390/e160208958959111099-43002014-02-14doi: 10.3390/e16020895Zhi TangLu HuangWei HePeter Liaw<![CDATA[Entropy, Vol. 16, Pages 885-894: From Random Motion of Hamiltonian Systems to Boltzmann’s H Theorem and Second Law of Thermodynamics: a Pathway by Path Probability]]>
http://www.mdpi.com/1099-4300/16/2/885
A numerical experiment of ideal stochastic motion of a particle subject to conservative forces and Gaussian noise reveals that the path probability depends exponentially on action. This distribution implies a fundamental principle generalizing the least action principle of Hamiltonian/Lagrangian mechanics and yields an extended formalism of mechanics for random dynamics. Within this theory, Liouville’s theorem of conservation of the phase density distribution must be modified to allow time evolution of the phase density and, consequently, of the Boltzmann H theorem. We argue that the gap between regular Newtonian dynamics and random dynamics was not considered in the criticisms of the H theorem.Entropy2014-02-13162Article10.3390/e160208858858941099-43002014-02-13doi: 10.3390/e16020885Qiuping WangAziz El Kaabouchiu<![CDATA[Entropy, Vol. 16, Pages 870-884: Microstructures and Crackling Noise of AlxNbTiMoV High Entropy Alloys]]>
http://www.mdpi.com/1099-4300/16/2/870
A series of high entropy alloys (HEAs), AlxNbTiMoV, was produced by a vacuum arc-melting method. Their microstructures and compressive mechanical behavior at room temperature were investigated. It has been found that a single solid-solution phase with a body-centered cubic (BCC) crystal structure forms in these alloys. Among these alloys, Al0.5NbTiMoV reaches the highest yield strength (1,625 MPa), which should be attributed to its considerable solid-solution strengthening. Furthermore, serration and crackling noises near the yielding point were observed in the NbTiMoV alloy, the first such phenomenon reported at room temperature in HEAs.Entropy2014-02-13162Article10.3390/e160208708708841099-43002014-02-13doi: 10.3390/e16020870Shu ChenXiao YangKarin DahmenPeter LiawYong Zhang<![CDATA[Entropy, Vol. 16, Pages 854-869: Fast Feature Selection in a GPU Cluster Using the Delta Test]]>
http://www.mdpi.com/1099-4300/16/2/854
Feature or variable selection remains an unsolved problem, because evaluating the entire solution space is infeasible. Several heuristic algorithms have been proposed so far with successful results. However, these algorithms were not designed for very large datasets, whose memory and time demands make their execution impossible. This paper presents an implementation of a genetic algorithm (GA) parallelized using the classical island approach, which also employs graphics processing units to speed up the computation of the fitness function. Special attention has been paid to the population evaluation, as well as to the migration operator of the parallel GA, which is not usually considered significant; as the experiments show, however, it is crucial for obtaining robust results.Entropy2014-02-13162Article10.3390/e160208548548691099-43002014-02-13doi: 10.3390/e16020854Alberto GuillénM. García ArenasMark van HeeswijkDusan SoviljAmaury LendasseLuis HerreraHéctor PomaresIgnacio Rojas<![CDATA[Entropy, Vol. 16, Pages 825-853: Generalized Maximum Entropy Analysis of the Linear Simultaneous Equations Model]]>
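The island-model GA with ring-topology migration described in the abstract above can be sketched in a few lines. This toy version replaces the paper's GPU-accelerated Delta Test fitness with a simple OneMax objective; the function name `evolve_islands` and all parameter values are illustrative, not taken from the paper:

```python
import random

def evolve_islands(fitness, n_islands=4, pop_size=20, n_genes=10,
                   generations=50, migrate_every=10, seed=0):
    """Minimal island-model GA sketch: each island evolves its own
    population; every `migrate_every` generations the best individual
    of each island replaces the worst of the next island (ring topology)."""
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(n_genes)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for gen in range(generations):
        for pop in islands:
            # truncation selection: keep the best half, refill with children
            pop.sort(key=fitness, reverse=True)
            elite = pop[:pop_size // 2]
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, n_genes)      # one-point crossover
                child = a[:cut] + b[cut:]
                if rng.random() < 0.1:               # bit-flip mutation
                    i = rng.randrange(n_genes)
                    child[i] ^= 1
                children.append(child)
            pop[:] = elite + children
        if (gen + 1) % migrate_every == 0:           # ring migration
            bests = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop.sort(key=fitness)                # ascending: pop[0] is worst
                pop[0] = list(bests[(i - 1) % n_islands])
    return max((ind for pop in islands for ind in pop), key=fitness)

best = evolve_islands(sum)   # maximize the number of ones (OneMax)
print(sum(best))             # with these settings the optimum is typically found
```

The migration step is the part the paper singles out as crucial: without it, the islands converge independently and lose diversity.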
http://www.mdpi.com/1099-4300/16/2/825
A generalized maximum entropy estimator is developed for the linear simultaneous equations model. Monte Carlo sampling experiments are used to evaluate the estimator’s performance in small and medium sized samples, suggesting contexts in which the current generalized maximum entropy estimator is superior in mean square error to two and three stage least squares. Analytical results are provided relating to asymptotic properties of the estimator and associated hypothesis testing statistics. Monte Carlo experiments are also used to provide evidence on the power and size of test statistics. An empirical application is included to demonstrate the practical implementation of the estimator.Entropy2014-02-12162Article10.3390/e160208258258531099-43002014-02-12doi: 10.3390/e16020825Thomas MarshRon MittelhammerNicholas Cardell<![CDATA[Entropy, Vol. 16, Pages 814-824: A Note on the W-S Lower Bound of the MEE Estimation]]>
http://www.mdpi.com/1099-4300/16/2/814
The minimum error entropy (MEE) estimation is concerned with the estimation of a certain random variable (the unknown variable) based on another random variable (the observation), so that the entropy of the estimation error is minimized. This estimation method may outperform the well-known minimum mean square error (MMSE) estimation, especially in non-Gaussian situations. There is an important performance bound on the MEE estimation, namely the W-S lower bound, which is computed as the conditional entropy of the unknown variable given the observation. Though this bound has been known in the literature for a considerable time, it has received little study up to now. In this paper, we reexamine the W-S lower bound. Some basic properties of the W-S lower bound are presented, and the characterization of the Gaussian distribution using the W-S lower bound is investigated.Entropy2014-02-10162Article10.3390/e160208148148241099-43002014-02-10doi: 10.3390/e16020814Badong ChenGuangmin WangNanning ZhengJose Principe<![CDATA[Entropy, Vol. 16, Pages 789-813: Autonomous Search for a Diffusive Source in an Unknown Structured Environment]]>
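The W-S lower bound mentioned in the abstract above is simply the conditional entropy H(X|Y) of the unknown variable X given the observation Y; for discrete samples it can be estimated directly from counts via H(X|Y) = H(X, Y) - H(Y). A minimal sketch (the `conditional_entropy` helper and the toy samples are illustrative, not from the paper):

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """H(X|Y) in bits for a list of (x, y) samples,
    computed as H(X, Y) - H(Y) from empirical frequencies."""
    n = len(pairs)
    joint = Counter(pairs)
    marg_y = Counter(y for _, y in pairs)
    h_joint = -sum(c / n * math.log2(c / n) for c in joint.values())
    h_y = -sum(c / n * math.log2(c / n) for c in marg_y.values())
    return h_joint - h_y

# If X is fully determined by Y, the bound is 0 bits;
# if X is independent of Y, it equals H(X).
dependent = [(0, 0), (1, 1), (0, 0), (1, 1)]
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(conditional_entropy(dependent))    # 0.0
print(conditional_entropy(independent))  # 1.0
```

No estimator of X from Y can achieve error entropy below this quantity, which is what makes it a performance bound for MEE estimation.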
http://www.mdpi.com/1099-4300/16/2/789
The paper presents a framework for autonomous search for a diffusive emitting source of a tracer (e.g., aerosol, gas) in an environment with an unknown map of randomly placed and shaped obstacles. The measurements of the tracer concentration are sporadic, noisy and without directional information. The search domain is discretised and modelled by a finite two-dimensional lattice. The links in the lattice represent the traversable paths for emitted particles and for the searcher. A missing link in the lattice indicates a blocked path due to an obstacle. The searcher must simultaneously estimate the source parameters, the map of the search domain and its own location within the map. The solution is formulated in the sequential Bayesian framework and implemented as a Rao-Blackwellised particle filter with entropy-reduction motion control. The numerical results demonstrate the concept and its performance.Entropy2014-02-10162Article10.3390/e160207897898131099-43002014-02-10doi: 10.3390/e16020789Branko RisticAlex SkvortsovAndrew Walker<![CDATA[Entropy, Vol. 16, Pages 770-788: A Symmetric Chaos-Based Image Cipher with an Improved Bit-Level Permutation Strategy]]>
http://www.mdpi.com/1099-4300/16/2/770
Very recently, several chaos-based image ciphers using a bit-level permutation have been suggested and have shown promising results. Due to the diffusion effect introduced in the permutation stage, the workload of the time-consuming diffusion stage is reduced, and hence the performance of the cryptosystem is improved. In this paper, a symmetric chaos-based image cipher with a 3D cat map-based spatial bit-level permutation strategy is proposed. Compared with recently proposed bit-level permutation methods, the diffusion effect of the new method is superior, as the bits are shuffled among different bit-planes rather than within the same bit-plane. Moreover, the diffusion key stream extracted from a hyperchaotic system is related to both the secret key and the plain image, which enhances the security against known/chosen plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including the most important tests, such as key space analysis, key sensitivity analysis, plaintext sensitivity analysis and various statistical analyses, which demonstrates the satisfactory security of the proposed scheme.Entropy2014-02-10162Article10.3390/e160207707707881099-43002014-02-10doi: 10.3390/e16020770Chong FuJun-Bin HuangNing-Ning WangQi-Bin HouWei-Min Lei<![CDATA[Entropy, Vol. 16, Pages 747-769: Modelling and Simulation of Seasonal Rainfall Using the Principle of Maximum Entropy]]>
http://www.mdpi.com/1099-4300/16/2/747
We use the principle of maximum entropy to propose a parsimonious model for the generation of simulated rainfall during the wettest three-month season at a typical location on the east coast of Australia. The model uses a checkerboard copula of maximum entropy to model the joint probability distribution for total seasonal rainfall and a set of two-parameter gamma distributions to model each of the marginal monthly rainfall totals. The model allows us to match the grade correlation coefficients for the checkerboard copula to the observed Spearman rank correlation coefficients for the monthly rainfalls and, hence, provides a model that correctly describes the mean and variance for each of the monthly totals and also for the overall seasonal total. Thus, we avoid the need for a posteriori adjustment of simulated monthly totals in order to correctly simulate the observed seasonal statistics. Detailed results are presented for the modelling and simulation of seasonal rainfall in the town of Kempsey on the mid-north coast of New South Wales. Empirical evidence from extensive simulations is used to validate this application of the model. A similar analysis for Sydney is also described.Entropy2014-02-10162Article10.3390/e160207477477691099-43002014-02-10doi: 10.3390/e16020747Jonathan BorweinPhil HowlettJulia Piantadosi<![CDATA[Entropy, Vol. 16, Pages 729-746: Robust Control of a Class of Uncertain Fractional-Order Chaotic Systems with Input Nonlinearity via an Adaptive Sliding Mode Technique]]>
http://www.mdpi.com/1099-4300/16/2/729
In this paper, the problem of stabilizing a class of fractional-order chaotic systems with sector and dead-zone nonlinear inputs is investigated. The effects of model uncertainties and external disturbances are fully taken into account. Moreover, the bounds of both model uncertainties and external disturbances are assumed to be unknown in advance. To deal with the system’s nonlinear items and unknown bounded uncertainties, an adaptive fractional-order sliding mode (AFSM) controller is designed. Then, Lyapunov’s stability theory is used to prove the stability of the designed control scheme. Finally, two simulation examples are given to verify the effectiveness and robustness of the proposed control approach. Entropy2014-02-07162Article10.3390/e160207297297461099-43002014-02-07doi: 10.3390/e16020729Xiaomin TianShumin Fei<![CDATA[Entropy, Vol. 16, Pages 726-728: Entropy Best Paper Award 2014]]>
http://www.mdpi.com/1099-4300/16/2/726
In 2013, Entropy instituted the “Best Paper” award to recognize outstanding papers in the area of entropy and information studies published in Entropy [1]. We are pleased to announce the “Entropy Best Paper Award” for 2014. Nominations were selected by the Editor-in-Chief and designated Editorial Board Members from all the papers published in 2010.Entropy2014-01-27162Editorial10.3390/e160207267267281099-43002014-01-27doi: 10.3390/e16020726Kevin Knuth<![CDATA[Entropy, Vol. 16, Pages 699-725: Thermodynamics as Control Theory]]>
http://www.mdpi.com/1099-4300/16/2/699
I explore the reduction of thermodynamics to statistical mechanics by treating the former as a control theory: A theory of which transitions between states can be induced on a system (assumed to obey some known underlying dynamics) by means of operations from a fixed list. I recover the results of standard thermodynamics in this framework on the assumption that the available operations do not include measurements which affect subsequent choices of operations. I then relax this assumption and use the framework to consider the vexed questions of Maxwell’s demon and Landauer’s principle. Throughout, I assume rather than prove the basic irreversibility features of statistical mechanics, taking care to distinguish them from the conceptually distinct assumptions of thermodynamics proper.Entropy2014-01-24162Article10.3390/e160206996997251099-43002014-01-24doi: 10.3390/e16020699David Wallace<![CDATA[Entropy, Vol. 16, Pages 675-698: New Methods of Entropy-Robust Estimation for Randomized Models under Limited Data]]>
http://www.mdpi.com/1099-4300/16/2/675
The paper presents a new approach to restoring the characteristics of randomized models under small amounts of input and output data. The approach involves randomized static and dynamic models and estimates the probabilistic characteristics of their parameters. We consider static and dynamic models described by Volterra polynomials. The procedures of robust parametric and non-parametric estimation are constructed by exploiting the entropy concept, based on the generalized informational Boltzmann and Fermi entropies.Entropy2014-01-23162Article10.3390/e160206756756981099-43002014-01-23doi: 10.3390/e16020675Yuri PopkovAlexey Popkov<![CDATA[Entropy, Vol. 16, Pages 645-674: Dynamical Stability and Predictability of Football Players: The Study of One Match]]>
http://www.mdpi.com/1099-4300/16/2/645
The game of football demands new computational approaches to measure individual and collective performance. Understanding the phenomena involved in the game may foster the identification of strengths and weaknesses, not only of each player, but also of the whole team. The development of assertive quantitative methodologies constitutes a key element in sports training. In football, the predictability and stability inherent in the motion of a given player may be seen as one of the most important concepts to fully characterise the variability of the whole team. This paper characterises the predictability and stability levels of players during an official football match. A Fractional Calculus (FC) approach is used to define a player’s trajectory. By applying FC, one can benefit from newly considered modeling perspectives, such as the fractional coefficient, to estimate a player’s predictability and stability. This paper also formulates the concept of attraction domain, related to the tactical region of each player, inspired by stability theory principles. To compare the variability inherent in the player’s process variables (e.g., distance covered) and to assess his predictability and stability, entropy measures are considered. Experimental results suggest that the most predictable player is the goalkeeper while, conversely, the most unpredictable players are the midfielders. We also conclude that, despite his predictability, the goalkeeper is the most unstable player, while lateral defenders are the most stable during the match.Entropy2014-01-23162Article10.3390/e160206456456741099-43002014-01-23doi: 10.3390/e16020645Micael CouceiroFilipe ClementeFernando MartinsJosé Machado<![CDATA[Entropy, Vol. 16, Pages 627-644: A Simplified Algorithm for the Topological Entropy of Multimodal Maps]]>
http://www.mdpi.com/1099-4300/16/2/627
A numerical algorithm to compute the topological entropy of multimodal maps is proposed. This algorithm results from a closed formula containing the so-called min-max symbols, which are closely related to the kneading symbols. Furthermore, it simplifies a previous algorithm, also based on min-max symbols, which was originally proposed for twice differentiable multimodal maps. The new algorithm has been benchmarked against the old one with a number of multimodal maps, the results being reported in the paper. The comparison is favorable to the new algorithm, except in the unimodal case.Entropy2014-01-23162Article10.3390/e160206276276441099-43002014-01-23doi: 10.3390/e16020627José AmigóÁngel Giménez<![CDATA[Entropy, Vol. 16, Pages 607-626: Defect Detection for Wheel-Bearings with Time-Spectral Kurtosis and Entropy]]>
http://www.mdpi.com/1099-4300/16/1/607
Wheel-bearings easily acquire defects due to their high-speed operating conditions and constant metal-metal contact, so defect detection is of great importance for railroad safety. The conventional spectral kurtosis (SK) technique provides an optimal bandwidth for envelope demodulation. However, this technique may cause false detections when processing real vibration signals for wheel-bearings, because of sparse interference impulses. In this paper, a novel defect detection method with entropy, time-spectral kurtosis (TSK) and support vector machine (SVM) is proposed. In this method, the possible outliers in the short time Fourier transform (STFT) amplitude series are first estimated and preprocessed with information entropy. Then the method extends the SK technique to the time-domain, and extracts defective frequencies from reconstructed vibration signals by TSK filtering. Finally, the multi-class SVM was applied to classify bearing defects. The effectiveness of the proposed method is illustrated using real wheel-bearing vibration signals. Experimental results show that the proposed method provides a better performance in defect frequency detection and classification than the conventional SK-based envelope demodulation.Entropy2014-01-17161Article10.3390/e160106076076261099-43002014-01-17doi: 10.3390/e16010607Bin ChenZhaoli YanWei Chen<![CDATA[Entropy, Vol. 16, Pages 582-606: Two-Atom Collisions and the Loading of Atoms in Microtraps]]>
http://www.mdpi.com/1099-4300/16/1/582
We review light assisted collisions in a high-density far-off resonant optical trap (FORT). By tuning the parameters of the light that induces the collisions, the effects of the collisions can be controlled. Trap loss can be suppressed even at high atomic densities, allowing us to count the atoms using fluorescence detection. When only two atoms are trapped, individual loss events reveal new information about the process, and the simplicity of the system allows for a numerical simulation of the dynamics. By optimizing the experimental parameters, we implement an efficient method to prepare single atoms in the FORT. Our methods can be extended to load quantum registers for quantum information processing.Entropy2014-01-16161Review10.3390/e160105825826061099-43002014-01-16doi: 10.3390/e16010582Yin FungAlicia CarpentierPimonpan SompetMikkel Andersen<![CDATA[Entropy, Vol. 16, Pages 567-581: Multiscale Model Selection for High-Frequency Financial Data of a Large Tick Stock by Means of the Jensen–Shannon Metric]]>
http://www.mdpi.com/1099-4300/16/1/567
Modeling financial time series at different time scales is still an open challenge. The choice of a suitable indicator quantifying the distance between the model and the data is therefore of fundamental importance for selecting models. In this paper, we propose a multiscale model selection method based on the Jensen–Shannon distance in order to select the model that is able to better reproduce the distribution of price changes at different time scales. Specifically, we consider the problem of modeling the ultra high frequency dynamics of an asset with a large tick-to-price ratio. We study the price process at different time scales and compute the Jensen–Shannon distance between the original dataset and different models, showing that the coupling between spread and returns is important to model return distribution at different time scales of observation, ranging from the scale of single transactions to the daily time scale.Entropy2014-01-16161Article10.3390/e160105675675811099-43002014-01-16doi: 10.3390/e16010567Gianbiagio CuratoFabrizio Lillo<![CDATA[Entropy, Vol. 16, Pages 557-566: Properties of Branch Length Similarity Entropy on the Network in Rk]]>
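The Jensen–Shannon distance used for model selection in the abstract above is the square root of the Jensen–Shannon divergence: a symmetric, bounded metric between probability distributions. A minimal sketch for discrete distributions over a common support (the function names and test vectors are illustrative):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence (bits) for discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_distance(p, q):
    """Jensen-Shannon distance: the square root of the JS divergence,
    i.e. the average KL divergence of p and q to their midpoint mixture."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return math.sqrt(jsd)

# Identical distributions give 0; disjoint supports give 1 (in bits).
print(js_distance([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(js_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Because the distance is bounded, values computed at different time scales (single transactions up to daily returns) can be compared on a common footing, which is what makes it suitable for multiscale model selection.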
http://www.mdpi.com/1099-4300/16/1/557
Branching networks are among the most universal phenomena in living and non-living systems, such as river systems and the bronchial trees of mammals. To topologically characterize branching networks, the Branch Length Similarity (BLS) entropy was suggested, and statistical methods based on this entropy have been applied to shape identification and pattern recognition. However, the mathematical properties of the BLS entropy have still not been explored in depth, because applications requiring an advanced mathematical understanding have been lacking. Regarding the mathematical study, it was reported, as a theorem, that all BLS entropy values obtained for simple networks created by connecting pixels along the boundary of a shape are exactly unity when the shape has infinite resolution. In the present study, we extended the theorem to the network created by linking infinitely many nodes distributed on a bounded or unbounded domain in Rk for k ≥ 1. We proved that all BLS entropies of the nodes in the network go to one as the number of nodes, n, goes to infinity, with convergence rate 1 - O(1/ln n), which was confirmed by numerical tests.Entropy2014-01-16161Article10.3390/e160105575575661099-43002014-01-16doi: 10.3390/e16010557Oh KwonSang-Hee Lee<![CDATA[Entropy, Vol. 16, Pages 543-556: Entropy and the Predictability of Online Life]]>
http://www.mdpi.com/1099-4300/16/1/543
Using mobile phone records and information theory measures, our daily lives have been recently shown to follow strict statistical regularities, and our movement patterns are, to a large extent, predictable. Here, we apply entropy and predictability measures to two datasets of the behavioral actions and the mobility of a large number of players in the virtual universe of a massive multiplayer online game. We find that movements in virtual human lives follow the same high levels of predictability as offline mobility, where future movements can, to some extent, be predicted well if the temporal correlations of visited places are accounted for. Time series of behavioral actions show similar high levels of predictability, even when temporal correlations are neglected. Entropy conditional on specific behavioral actions reveals that in terms of predictability, negative behavior has a wider variety than positive actions. The actions that contain the information to best predict an individual’s subsequent action are negative, such as attacks or enemy markings, while the positive actions of friendship marking, trade and communication contain the least amount of predictive information. These observations show that predicting behavioral actions requires less information than predicting the mobility patterns of humans for which the additional knowledge of past visited locations is crucial and that the type and sign of a social relation has an essential impact on the ability to determine future behavior.Entropy2014-01-16161Article10.3390/e160105435435561099-43002014-01-16doi: 10.3390/e16010543Roberta SinatraMichael Szell<![CDATA[Entropy, Vol. 16, Pages 526-542: Complexity in Animal Communication: Estimating the Size of N-Gram Structures]]>
http://www.mdpi.com/1099-4300/16/1/526
In this paper, new techniques that allow conditional entropy to estimate the combinatorics of symbols are applied to animal communication studies to estimate the communication’s repertoire size. By using the conditional entropy estimates at multiple orders, the paper estimates the total repertoire sizes for animal communication across bottlenose dolphins, humpback whales and several species of birds for an N-gram length of one to three. In addition to discussing the impact of this method on studies of animal communication complexity, the reliability of these estimates is compared to other methods through simulation. While entropy does undercount the total repertoire size due to rare N-grams, it gives a more accurate picture of the most frequently used repertoire than just repertoire size alone.Entropy2014-01-16161Article10.3390/e160105265265421099-43002014-01-16doi: 10.3390/e16010526Reginald Smith<![CDATA[Entropy, Vol. 16, Pages 494-525: Exploration and Development of High Entropy Alloys for Structural Applications]]>
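The order-N conditional entropies underlying the repertoire-size method in the abstract above can be estimated from N-gram counts: the entropy of the next symbol given the preceding context. A minimal sketch (the `conditional_entropy_rate` helper and the toy sequence are illustrative, not the authors' estimator, which additionally corrects for rare N-grams):

```python
import math
from collections import Counter

def conditional_entropy_rate(seq, order):
    """H(X_n | X_{n-order}..X_{n-1}) in bits, estimated from counts of
    (order+1)-grams and their order-length contexts in a symbol sequence."""
    grams = Counter(tuple(seq[i:i + order + 1]) for i in range(len(seq) - order))
    contexts = Counter(tuple(seq[i:i + order]) for i in range(len(seq) - order))
    n = sum(grams.values())
    h = 0.0
    for g, c in grams.items():
        p = c / n                          # probability of the full N-gram
        p_cond = c / contexts[g[:order]]   # next symbol given its context
        h -= p * math.log2(p_cond)
    return h

seq = list("abababab")
print(conditional_entropy_rate(seq, 0))  # 1.0: two symbols, equally frequent
print(conditional_entropy_rate(seq, 1))  # 0.0: next symbol fully determined
```

Falling conditional entropy at higher orders signals longer-range structure in the signal; combined with symbol counts, this is what lets the method bound the effective repertoire of N-grams.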
http://www.mdpi.com/1099-4300/16/1/494
We develop a strategy to design and evaluate high-entropy alloys (HEAs) for structural use in the transportation and energy industries. We give HEA goal properties for low (≤150 °C), medium (≤450 °C) and high (≥1,100 °C) use temperatures. A systematic design approach uses palettes of elements chosen to meet target properties of each HEA family and gives methods to build HEAs from these palettes. We show that intermetallic phases are consistent with HEA definitions, and the strategy developed here includes both single-phase, solid solution HEAs and HEAs with intentional addition of a 2nd phase for particulate hardening. A thermodynamic estimate of the effectiveness of configurational entropy to suppress or delay compound formation is given. A 3-stage approach is given to systematically screen and evaluate a vast number of HEAs by integrating high-throughput computations and experiments. CALPHAD methods are used to predict phase equilibria, and high-throughput experiments on materials libraries with controlled composition and microstructure gradients are suggested. Much of this evaluation can be done now, but key components (materials libraries with microstructure gradients and high-throughput tensile testing) are currently missing. Suggestions for future HEA efforts are given.Entropy2014-01-10161Article10.3390/e160104944945251099-43002014-01-10doi: 10.3390/e16010494Daniel MiracleJonathan MillerOleg SenkovChristopher WoodwardMichael UchicJaimie Tiley<![CDATA[Entropy, Vol. 16, Pages 471-493: Multiple Solutions of Nonlinear Boundary Value Problems of Fractional Order: A New Analytic Iterative Technique]]>
http://www.mdpi.com/1099-4300/16/1/471
The purpose of this paper is to present a new kind of analytical method, the so-called residual power series, to predict and represent the multiplicity of solutions to nonlinear boundary value problems of fractional order. The present method is capable of calculating all branches of solutions simultaneously, even if these multiple solutions are very close and thus rather difficult to distinguish, even by numerical techniques. To verify the computational efficiency of the proposed technique, two nonlinear models are examined, one arising in mixed convection flows and the other in heat transfer, both of which admit multiple solutions. The results reveal that the method is very effective, straightforward, and powerful for formulating these multiple solutions.Entropy2014-01-09161Article10.3390/e160104714714931099-43002014-01-09doi: 10.3390/e16010471Omar ArqubAhmad El-AjouZeyad Al ZhourShaher Momani<![CDATA[Entropy, Vol. 16, Pages 455-470: Dynamics of Correlation Structure in Stock Market]]>
http://www.mdpi.com/1099-4300/16/1/455
In this paper, a correction factor for Jennrich’s statistic is introduced so that one can not only test the stability of the correlation structure, but also identify the time windows where instability occurs. Whereas Jennrich’s statistic only tests the stability of the correlation structure along predetermined non-overlapping time windows, the corrected statistic provides the history of correlation-structure dynamics from time window to time window. A graphical representation is provided to visualize that history. This information is necessary for further analyses of, for example, changes in the topological properties of the minimal spanning tree. An example using NYSE data illustrates its advantages.Entropy2014-01-06161Article10.3390/e160104554554701099-43002014-01-06doi: 10.3390/e16010455Maman DjauhariSiew Gan<![CDATA[Entropy, Vol. 16, Pages 443-454: Entropy Estimation of Generalized Half-Logistic Distribution (GHLD) Based on Type-II Censored Samples]]>
http://www.mdpi.com/1099-4300/16/1/443
This paper derives the entropy of a generalized half-logistic distribution based on Type-II censored samples, obtains some entropy estimators by using Bayes estimators of an unknown parameter in the generalized half-logistic distribution based on Type-II censored samples and compares these estimators in terms of the mean squared error and the bias through Monte Carlo simulations.Entropy2014-01-02161Article10.3390/e160104434434541099-43002014-01-02doi: 10.3390/e16010443Jung-In SeoSuk-Bok Kang<![CDATA[Entropy, Vol. 16, Pages 418-442: Quantifying Compressibility and Slip in Multiparticle Collision (MPC) Flow Through a Local Constriction]]>
http://www.mdpi.com/1099-4300/16/1/418
The flow of a compressible fluid with slip through a cylinder with an asymmetric local constriction has been considered both numerically, as well as analytically. For the numerical work, a particle-based method whose dynamics is governed by the multiparticle collision (MPC) rule has been used together with a generalized boundary condition that allows for slip at the wall. Since it is well known that an MPC system corresponds to an ideal gas and behaves like a compressible, viscous flow on average, an approximate analytical solution has been derived from the compressible Navier–Stokes equations of motion coupled to an ideal gas equation of state using the Karman–Pohlhausen method. The constriction is assumed to have a polynomial form, and the location of maximum constriction is varied throughout the constricted portion of the cylinder. Results for centerline densities and centerline velocities have been compared for various Reynolds numbers, Mach numbers, wall slip values and flow geometries.Entropy2014-01-02161Article10.3390/e160104184184421099-43002014-01-02doi: 10.3390/e16010418Tahmina AkhterKatrin Rohlf<![CDATA[Entropy, Vol. 16, Pages 405-417: Nanomechanical Properties and Deformation Behaviors of Multi-Component (AlCrTaTiZr)NxSiy High-Entropy Coatings]]>
http://www.mdpi.com/1099-4300/16/1/405
In this study multi-component (AlCrTaTiZr)NxSiy high-entropy coatings were developed by co-sputtering of AlCrTaTiZr alloy and Si in an Ar/N2 mixed atmosphere with the application of different substrate biases and Si-target powers. Their nanomechanical properties and deformation behaviors were characterized by nanoindentation tests. Because of the effect of high mixing entropies, all the deposited multi-component (AlCrTaTiZr)NxSiy high-entropy coatings exhibited a simple face-centered cubic solid-solution structure. With an increased substrate bias and Si-target power, their microstructures changed from large columns with a [111] preferred orientation to a nanocomposite form with ultrafine grains. The hardness, H/E ratio and H3/E2 ratio of (AlCrTaTiZr)N1.07Si0.15 coating reached 30.2 GPa, 0.12 and 0.41 GPa, respectively, suggesting markedly suppressed dislocation activities and a very high resistance to wear and plastic deformation, attributable to grain refinements and film densification by the application of substrate bias, a nanocomposite structure by the introduction of silicon nitrides, and a strengthening effect induced by severe lattice distortions. In the deformed regions under indents, stacking faults or partial dislocations were formed, while in the stress-released regions, near-perfect lattices recovered.Entropy2013-12-31161Article10.3390/e160104054054171099-43002013-12-31doi: 10.3390/e16010405Shao-Yi LinShou-Yi ChangChia-Jung ChangYi-Chung Huang<![CDATA[Entropy, Vol. 16, Pages 389-404: A Novel Approach to Extracting Casing Status Features Using Data Mining]]>
http://www.mdpi.com/1099-4300/16/1/389
Casing coupling location signals provided by the magnetic localizer in retractors are typically used to ascertain the position of casing couplings in horizontal wells. However, the casing coupling location signal is usually submerged in noise, which will result in the failure of casing coupling detection under harsh logging environments. The limitation of Shannon wavelet time entropy in extracting casing status features is revealed by analyzing its application mechanism, and a corresponding improved algorithm is proposed. On the basis of the wavelet transform, two derivative algorithms, singular value decomposition and Tsallis entropy theory, are introduced and their physical meanings are investigated. Meanwhile, a novel data mining approach to extract casing status features with Tsallis wavelet singularity entropy is put forward in this paper. The theoretical analysis and experimental results indicate that the proposed approach can not only extract the casing coupling features accurately, but also identify the characteristics of perforation and local corrosion in casings. The innovation of the paper is in the use of simple wavelet entropy algorithms to extract the complex nonlinear logging signal features of a horizontal well tractor.Entropy2013-12-31161Article10.3390/e160103893894041099-43002013-12-31doi: 10.3390/e16010389Jikai ChenHaoyu LiYanjun WangRonghua XieXingbin Liu