Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy

Entropy, Vol. 16, Pages 5601-5617: On One-Sided, D-Chaotic CA Without Fixed Points, Having Continuum of Periodic Points With Period 2 and Topological Entropy log(p) for Any Prime p
http://www.mdpi.com/1099-4300/16/11/5601
A method is known by which, for any integer \(n\geq2\), one can construct on the metric Cantor space of right-infinite words \(\tilde{A}_{n}^{\,\mathbb N}\) a non-injective cellular automaton \((\tilde{A}_{n}^{\,\mathbb N},\,\tilde{F}_{n})\), which is chaotic in the sense of Devaney, has radius \(r=1\), a continuum of fixed points and topological entropy \(\log(n)\). As a generalization of this method, we present, for any integer \(n\geq2\), a construction of a cellular automaton \((A_{n}^{\,\mathbb{N}},\,F_{n})\) which has the listed properties of \((\tilde{A}_{n}^{\,\mathbb N},\,\tilde{F}_{n})\) but has no fixed points and has a continuum of periodic points with period 2. The construction is based on the properties of a cellular automaton \((B^{\,\mathbb N},\,F)\) with radius 1, introduced here for any prime number \(p\). We prove that \((B^{\,\mathbb N},\,F)\) is non-injective, chaotic in the sense of Devaney, has no fixed points, has a continuum of periodic points with period 2 and topological entropy \(\log(p)\).

Entropy 2014, 16(11), 5601–5617; doi:10.3390/e16115601. Article by Wit Forys and Janusz Matyja. Published 2014-10-24.

Entropy, Vol. 16, Pages 5575-5600: Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps
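As a concrete illustration of the objects in the abstract above ("On One-Sided, D-Chaotic CA ..."): a one-sided cellular automaton of radius 1 acts on a right-infinite word by \(F(x)_i = f(x_i, x_{i+1})\) for some local rule \(f\). The sketch below applies one such step to a finite prefix; the XOR rule is an arbitrary illustrative choice, not the automaton \((B^{\,\mathbb N},\,F)\) constructed in the paper.

```python
# One step of a one-sided cellular automaton of radius 1 on a finite
# prefix of a right-infinite word: F(x)_i = f(x_i, x_{i+1}).
def ca_step(prefix, rule):
    """Apply the local rule to a finite prefix; the result is one
    symbol shorter, since position i needs the neighbour x_{i+1}."""
    return [rule(prefix[i], prefix[i + 1]) for i in range(len(prefix) - 1)]

# Illustrative local rule over the alphabet {0, 1}:
# XOR of each cell with its right neighbour.
xor_rule = lambda a, b: a ^ b

x = [0, 1, 1, 0, 1, 0, 0, 1]
print(ca_step(x, xor_rule))  # [1, 0, 1, 1, 1, 0, 1]
```

Iterating `ca_step` on ever-longer prefixes approximates the orbit of the CA on the full right-infinite word.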
http://www.mdpi.com/1099-4300/16/10/5575
Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable length coded (VLC) multimedia bit streams, such as H.264. It is based on code word error diffusion and variable size segment shuffling. The codeword diffusion and shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is ubiquitous to the end users and can be deployed at any node in the network. It provides different levels of security, with the encrypted data volume fluctuating between 5.5% and 17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute force and known/chosen plain text attacks.

Entropy 2014, 16(10), 5575–5600; doi:10.3390/e16105575. Article by Fadi Almasalha, Rogelio Hasimoto-Beltran and Ashfaq Khokhar. Published 2014-10-23.

Entropy, Vol. 16, Pages 5560-5574: Contributions to the Transformation Entropy Change and Influencing Factors in Metamagnetic Ni-Co-Mn-Ga Shape Memory Alloys
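The segment-shuffling component described in the partial-encryption abstract above can be sketched with a Fisher–Yates shuffle driven by a chaotic keystream. The logistic map below merely stands in for the paper's (unspecified) secure chaos-based generator, and the key and segments are illustrative:

```python
def logistic_stream(x0, r=3.99):
    """Chaotic keystream from the logistic map x -> r*x*(1-x).
    For r = 3.99 the iterates stay in (0, r/4], i.e. strictly below 1."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def chaotic_shuffle(segments, key):
    """Fisher-Yates shuffle of a list of bit-stream segments, with the
    random index drawn from the chaotic stream instead of a PRNG."""
    out = list(segments)
    stream = logistic_stream(key)
    for i in range(len(out) - 1, 0, -1):
        j = int(next(stream) * (i + 1))  # pseudo-random index in [0, i]
        out[i], out[j] = out[j], out[i]
    return out

# Shuffle three variable-size segments of a compressed bit stream.
segs = [b"\x01\x02", b"\x03\x04\x05", b"\x06"]
print(chaotic_shuffle(segs, key=0.4321))
```

Because the permutation is a deterministic function of the key, a receiver holding the same key can regenerate the index sequence and invert the shuffle without decoding the underlying VLC stream.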
http://www.mdpi.com/1099-4300/16/10/5560
Ni-Co-Mn-Ga ferromagnetic shape memory alloys show a metamagnetic transition from ferromagnetic austenite to paramagnetic (or weak-magnetic) martensite for a limited range of Co contents. The temperatures of the structural and magnetic transitions depend strongly on composition and atomic order degree, in such a way that combined composition and thermal treatment allows obtaining a martensitic transformation (MT) between any magnetic state of austenite and martensite. The entropy change ΔS measured in the magnetostructural transition comprises a magnetic contribution which depends on the type and degree of magnetic order of the related phases. Consequently, both the magnetization jump across the MT (ΔM) and ΔS are composition and atomic order dependent. Both ΔS and ΔM determine the effect of applied magnetic fields on the MT; hence, knowledge and understanding of their behavior can help to approach the best conditions for magnetic field induced MT and related effects. In previous papers, we have reported findings regarding the behavior of the transformation entropy in relation to composition and atomic order in Ni50−xCoxMn25+yGa25−y (x = 3–8, y = 5–7) alloys. In the present paper we review our recent results, summarizing the key findings and drawing general conclusions regarding the magnetic contribution to ΔS and the effect of different factors on the magnetic and structural properties of these metamagnetic alloys.

Entropy 2014, 16(10), 5560–5574; doi:10.3390/e16105560. Article by Concepció Seguí and Eduard Cesari. Published 2014-10-22.

Entropy, Vol. 16, Pages 5546-5559: Conventional Point-Velocity Records and Surface Velocity Observations for Estimating High Flow Discharge
http://www.mdpi.com/1099-4300/16/10/5546
Flow velocity measurements using point-velocity meters are normally obtained by sampling one, two or three velocity points per vertical profile. During high floods their use is inhibited by the difficulty of sampling in the lower portions of the flow area. Nevertheless, the application of standard methods allows estimation of a parameter, α, which depends on the energy slope and the Manning roughness coefficient. During high floods, monitoring of velocity can be accomplished by sampling the maximum velocity, umax, only, which can be used to estimate the mean flow velocity, um, by applying the linear entropy relationship depending on the parameter, M, estimated on the basis of historically observed pairs (um, umax). In this context, this work analyzes whether a correlation between α and M holds, so that monitoring for high flows can be addressed by exploiting information from standard methods. A methodology is proposed to estimate M from α, by coupling the "historical" information derived by standard methods and "new" information from the measurement of umax surmised at later times. Results from four gauged river sites of different hydraulic and geometric characteristics have shown the robust estimation of M based on α.

Entropy 2014, 16(10), 5546–5559; doi:10.3390/e16105546. Article by Giovanni Corato, Abdelhadi Ammari and Tommaso Moramarco. Published 2014-10-21.

Entropy, Vol. 16, Pages 5537-5545: The Q-Exponential Decay of Subjective Probability for Future Reward: A Psychophysical Time Approach
http://www.mdpi.com/1099-4300/16/10/5537
This study experimentally examined why subjective probability for delayed reward decays non-exponentially ("hyperbolically", i.e., q < 1 in the q-exponential discount function) in humans. Our results indicate that nonlinear psychophysical time causes the hyperbolic time-decay of subjective probability for delayed reward. Implications for econophysics and neuroeconomics are discussed.

Entropy 2014, 16(10), 5537–5545; doi:10.3390/e16105537. Article by Taiki Takahashi, Shinsuke Tokuda, Masato Nishimura and Ryo Kimura. Published 2014-10-21.

Entropy, Vol. 16, Pages 5523-5536: Extreme Value Laws for Superstatistics
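The q-exponential discount function referred to in the abstract above ("The Q-Exponential Decay of Subjective Probability ...") is commonly written \(V(D) = V(0)/e_q^{k_q D}\) with \(e_q^{x} = [1+(1-q)x]^{1/(1-q)}\); for \(q \to 1\) it reduces to exponential discounting, while \(q < 1\) gives the slower, hyperbola-like decay. A minimal sketch with illustrative parameter values:

```python
import math

def exp_q(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_discount(v0, delay, k, q):
    """Subjective value of a reward v0 delivered after `delay`."""
    return v0 / exp_q(k * delay, q)

# q = 1 gives exponential decay; q = 0 gives pure hyperbolic decay
# V(D) = V(0) / (1 + k*D), the slower fall-off reported in humans.
for q in (1.0, 0.0):
    print(q, [round(q_discount(100, d, 0.1, q), 1) for d in (0, 10, 50)])
```

Fitting \(q\) and \(k_q\) to observed choices then quantifies how far a subject deviates from exponential discounting.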
http://www.mdpi.com/1099-4300/16/10/5523
We study the extreme value distribution of stochastic processes modeled by superstatistics. Classical extreme value theory asserts that (under mild asymptotic independence assumptions) only three limit distributions are possible, namely the Gumbel, Fréchet and Weibull distributions. On the other hand, superstatistics contains three important universality classes, namely \(\chi^2\), inverse \(\chi^2\) and lognormal superstatistics, all maximizing different effective entropy measures. We investigate how the three classes of extreme value theory are related to the three classes of superstatistics. We show that for any superstatistical process whose local equilibrium distribution does not live on a finite support, the Weibull distribution cannot occur. Under the above mild asymptotic independence assumptions, we also show that \(\chi^2\) superstatistics generally leads to an extreme value statistics described by a Fréchet distribution, whereas inverse \(\chi^2\), as well as lognormal superstatistics, lead to an extreme value statistics associated with the Gumbel distribution.

Entropy 2014, 16(10), 5523–5536; doi:10.3390/e16105523. Article by Pau Rabassa and Christian Beck. Published 2014-10-20.

Entropy, Vol. 16, Pages 5428-5522: Directionality Theory and the Entropic Principle of Natural Selection
http://www.mdpi.com/1099-4300/16/10/5428
Darwinian fitness describes the capacity of an organism to appropriate resources from the environment and to convert these resources into net-offspring production. Studies of competition between related types indicate that fitness is analytically described by entropy, a statistical measure which is positively correlated with population stability and describes the number of accessible pathways of energy flow between the individuals in the population. Directionality theory is a mathematical model of the evolutionary process based on the concept of evolutionary entropy as the measure of fitness. The theory predicts that the changes which occur as a population evolves from one non-equilibrium steady state to another are described by the following directionality principle, the fundamental theorem of evolution: (a) an increase in evolutionary entropy when resource composition is diverse and resource abundance constant; (b) a decrease in evolutionary entropy when resource composition is singular and resource abundance variable. Evolutionary entropy characterizes the dynamics of energy flow between the individual elements in various classes of biological networks: (a) where the units are individuals parameterized by age, and their age-specific fecundity and mortality; (b) where the units are metabolites, and the transitions are the biochemical reactions that convert substrates to products; (c) where the units are social groups, and the forces are the cooperative and competitive interactions between the individual groups. This article reviews the analytical basis of the evolutionary entropic principle and describes applications of directionality theory to the study of evolutionary dynamics in two biological systems: (i) social networks – the evolution of cooperation; (ii) metabolic networks – the evolution of body size.
Statistical thermodynamics is a mathematical model of macroscopic behavior in inanimate matter based on entropy, a statistical measure which describes the number of ways the molecules that compose a material aggregate can be arranged to attain the same total energy. This theory predicts an increase in thermodynamic entropy as the system evolves towards its equilibrium state. We delineate the relation between directionality theory and statistical thermodynamics, and review the claim that the entropic principle for thermodynamic systems is the limit, as the resource production rate tends to zero and population size tends to infinity, of the entropic principle for evolutionary systems.

Entropy 2014, 16(10), 5428–5522; doi:10.3390/e16105428. Article by Lloyd Demetrius and Volker Gundlach. Published 2014-10-20.

Entropy, Vol. 16, Pages 5416-5427: A Note on Distance-based Graph Entropies
http://www.mdpi.com/1099-4300/16/10/5416
A variety of problems in, e.g., discrete mathematics, computer science, information theory, statistics, chemistry and biology deal with inferring and characterizing relational structures by using graph measures. In this sense, it has been proven that information-theoretic quantities representing graph entropies possess useful properties such as a meaningful structural interpretation and uniqueness. As classical work, many distance-based graph entropies, e.g., the ones due to Bonchev et al., and related quantities have been proposed and studied. Our contribution is to explore graph entropies that are based on a novel information functional, which is the number of vertices with distance \(k\) to a given vertex. In particular, we investigate some properties thereof, leading to a better understanding of this new information-theoretic quantity.

Entropy 2014, 16(10), 5416–5427; doi:10.3390/e16105416. Article by Zengqiang Chen, Matthias Dehmer and Yongtang Shi. Published 2014-10-20.

Entropy, Vol. 16, Pages 5400-5415: Performance Degradation Assessment of Rolling Element Bearings Based on an Index Combining SVD and Information Exergy
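The information functional in the graph-entropy abstract above counts, for each vertex, the vertices at distance \(k\) from it. One standard way (assumed here; the paper's exact weighting may differ) to obtain an entropy from such a functional is \(p(v) = f(v)/\sum_u f(u)\), \(I(G) = -\sum_v p(v)\log p(v)\), with \(f(v) = \sum_k c_k\,|S_k(v)|\) where \(|S_k(v)|\) is the number of vertices at distance exactly \(k\) and the weights \(c_k = 1/k\) are an illustrative choice:

```python
import math
from collections import deque

def distance_counts(adj, v):
    """BFS from v: {k: number of vertices at distance exactly k}, k >= 1."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    counts = {}
    for d in dist.values():
        if d > 0:
            counts[d] = counts.get(d, 0) + 1
    return counts

def graph_entropy(adj, weight=lambda k: 1.0 / k):
    """Entropy of p(v) = f(v) / sum_u f(u), with the information
    functional f(v) = sum_k c_k * |S_k(v)| and c_k = 1/k."""
    f = {v: sum(weight(k) * n for k, n in distance_counts(adj, v).items())
         for v in adj}
    total = sum(f.values())
    return -sum((x / total) * math.log(x / total) for x in f.values())

# Complete graph K3: f is the same at every vertex, so entropy = log(3).
k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(graph_entropy(k3))  # 1.0986... = log(3)
```

Note that without the decaying weights, \(f(v)\) would equal \(n-1\) at every vertex of a connected graph and the entropy would degenerate to \(\log n\); the weights are what make the measure sensitive to distance structure.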
http://www.mdpi.com/1099-4300/16/10/5400
Performance degradation assessment of rolling element bearings is vital for the reliable and cost-efficient operation and maintenance of rotating machines, especially for the implementation of condition-based maintenance (CBM). For robust degradation assessment of rolling element bearings, uncertainties such as those induced from usage variations or sensor errors must be taken into account. This paper presents an information exergy index for bearing performance degradation assessment that combines singular value decomposition (SVD) and the information exergy method. Information exergy integrates condition monitoring information of multiple instants and multiple sensors, and thus performance degradation assessment uncertainties are reduced and robust degradation assessment results can be obtained using the proposed index. The effectiveness and robustness of the proposed information exergy index are validated through experimental case studies.

Entropy 2014, 16(10), 5400–5415; doi:10.3390/e16105400. Article by Bin Zhang, Lijun Zhang, Jinwu Xu and Pingfeng Wang. Published 2014-10-16.

Entropy, Vol. 16, Pages 5377-5399: On Some Properties of Tsallis Hypoentropies and Hypodivergences
http://www.mdpi.com/1099-4300/16/10/5377
Both the Kullback–Leibler and the Tsallis divergence have a strong limitation: if the value zero appears in the probability distributions \((p_1, \dots, p_n)\) and \((q_1, \dots, q_n)\), it must appear in the same positions for the sake of significance. In order to avoid this limitation in the framework of Shannon statistics, Ferreri introduced hypoentropy in 1980, observing that "such conditions rarely occur in practice". The aim of the present paper is to extend Ferreri's hypoentropy to Tsallis statistics. We introduce the Tsallis hypoentropy and the Tsallis hypodivergence and describe their mathematical behavior. Fundamental properties, like nonnegativity, monotonicity, the chain rule and subadditivity, are established.

Entropy 2014, 16(10), 5377–5399; doi:10.3390/e16105377. Article by Shigeru Furuichi, Flavia-Corina Mitroi-Symeonidis and Eleutherius Symeonidis. Published 2014-10-15.

Entropy, Vol. 16, Pages 5358-5376: Research and Development of a Chaotic Signal Synchronization Error Dynamics-Based Ball Bearing Fault Diagnostor
http://www.mdpi.com/1099-4300/16/10/5358
This paper describes fault diagnosis in the operation of industrial ball bearings. In order to cluster the very small differential signals of the four classic fault types of the ball bearing system, the chaos synchronization (CS) concept is used in this study, since a chaotic system is very sensitive to variations in its initial conditions or system parameters. In this study, the Chen-Lee chaotic system was used to load the normal and fault signals of the bearings into the chaos synchronization error dynamics system. Fractal theory was applied to determine the fractal dimension and lacunarity from the CS error dynamics. Extenics theory was then applied to distinguish the state of the bearing faults. This study also compared the proposed method with discrete Fourier transform and wavelet packet analysis. According to the results, the proposed chaos synchronization method combined with extenics theory separates the characteristics (fractal dimension vs. lacunarity) completely. Therefore, it has a better fault diagnosis rate than the two traditional signal processing methods, i.e., Fourier transform and wavelet packet analysis combined with extenics theory.

Entropy 2014, 16(10), 5358–5376; doi:10.3390/e16105358. Article by Ying-Che Kuo, Chin-Tsung Hsieh, Her-Terng Yau and Yu-Chung Li. Published 2014-10-15.

Entropy, Vol. 16, Pages 5339-5357: Redundancy of Exchangeable Estimators
http://www.mdpi.com/1099-4300/16/10/5339
Exchangeable random partition processes are the basis for Bayesian approaches to statistical inference in large alphabet settings. On the other hand, the notion of the pattern of a sequence provides an information-theoretic framework for data compression in large alphabet scenarios. Because data compression and parameter estimation are intimately related, we study the redundancy of Bayes estimators coming from Poisson–Dirichlet priors (or "Chinese restaurant processes") and the Pitman–Yor prior. This provides an understanding of these estimators in the setting of unknown discrete alphabets from the perspective of universal compression. In particular, we identify relations between alphabet sizes and sample sizes where the redundancy is small, thereby characterizing useful regimes for these estimators.

Entropy 2014, 16(10), 5339–5357; doi:10.3390/e16105339. Article by Narayana Santhanam, Anand Sarwate and Jae Woo. Published 2014-10-13.

Entropy, Vol. 16, Pages 5290-5338: Quantum Computation-Based Image Representation, Processing Operations and Their Applications
http://www.mdpi.com/1099-4300/16/10/5290
A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum) image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the images. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI, which are focused on the color information of the images. In addition, extensions and applications of the FRQI representation have also been suggested, such as a multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers and a blueprint for quantum video encryption and decryption. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of some secure and efficient image and video processing applications on quantum computers.

Entropy 2014, 16(10), 5290–5338; doi:10.3390/e16105290. Review by Fei Yan, Abdullah Iliyasu and Zhengang Jiang. Published 2014-10-10.

Entropy, Vol. 16, Pages 5263-5289: Cross-Scale Interactions and Information Transfer
http://www.mdpi.com/1099-4300/16/10/5263
An information-theoretic approach for detecting interactions and information transfer between two systems is extended to interactions between dynamical phenomena evolving on different time scales of a complex, multiscale process. The approach is demonstrated in the detection of an information transfer from larger to smaller time scales in a model multifractal process and applied in a study of cross-scale interactions in atmospheric dynamics. Applying a form of the conditional mutual information and a statistical test based on the Fourier transform and multifractal surrogate data to roughly century-long records of daily mean surface air temperature from various European locations, an information transfer from larger to smaller time scales has been observed as the influence of the phase of slow oscillatory phenomena, with periods around 6–11 years, on the amplitudes of the variability characterized by smaller temporal scales, from a few months to 4–5 years. These directed cross-scale interactions have a non-negligible effect on interannual air temperature variability in a large area of Europe.

Entropy 2014, 16(10), 5263–5289; doi:10.3390/e16105263. Article by Milan Paluš. Published 2014-10-10.

Entropy, Vol. 16, Pages 5242-5262: How to Mine Information from Each Instance to Extract an Abbreviated and Credible Logical Rule
http://www.mdpi.com/1099-4300/16/10/5242
Decision trees are particularly promising in symbolic representation and reasoning due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, their drawbacks, caused by the single-tree structure, cannot be ignored. A rigid decision path may cause the majority class to overwhelm other classes when dealing with imbalanced data sets, and pruning removes not only superfluous nodes, but also subtrees. The proposed learning algorithm, flexible hybrid decision forest (FHDF), mines information implicated in each instance to form logical rules on the basis of a chain rule of local mutual information, then forms different decision tree structures and, later, decision forests. The most credible decision path from the decision forest can be selected to make a prediction. Furthermore, functional dependencies (FDs), which are extracted from the whole data set based on association rule analysis, perform embedded attribute selection to remove nodes rather than subtrees, thus helping to achieve different levels of knowledge representation and improve model comprehension in the framework of semi-supervised learning. Naive Bayes replaces the leaf nodes at the bottom of the tree hierarchy, where the conditional independence assumption may hold. This technique reduces the potential for overfitting and overtraining and improves the prediction quality and generalization. Experimental results on UCI data sets demonstrate the efficacy of the proposed approach.

Entropy 2014, 16(10), 5242–5262; doi:10.3390/e16105242. Article by Limin Wang, Minghui Sun and Chunhong Cao. Published 2014-10-09.

Entropy, Vol. 16, Pages 5232-5241: Entropy Methods in Guided Self-Organisation
http://www.mdpi.com/1099-4300/16/10/5232
Self-organisation occurs in natural phenomena when a spontaneous increase in order is produced by the interactions of elements of a complex system. Thermodynamically, this increase must be offset by production of entropy which, broadly speaking, can be understood as a decrease in order. Ideally, self-organisation can be used to guide the system towards a desired regime or state, while "exporting" the entropy to the system's exterior. Thus, Guided Self-Organisation (GSO) attempts to harness the order-inducing potential of self-organisation for specific purposes. Not surprisingly, general methods developed to study entropy can also be applied to guided self-organisation. This special issue covers a broad diversity of GSO approaches which can be classified into three categories: information theory, intelligent agents, and collective behavior. The proposals make another step towards a unifying theory of GSO which promises to impact numerous research fields.

Entropy 2014, 16(10), 5232–5241; doi:10.3390/e16105232. Editorial by Mikhail Prokopenko and Carlos Gershenson. Published 2014-10-09.

Entropy, Vol. 16, Pages 5223-5231: Elimination of a Second-Law-Attack, and All Cable-Resistance-Based Attacks, in the Kirchhoff-Law-Johnson-Noise (KLJN) Secure Key Exchange System
http://www.mdpi.com/1099-4300/16/10/5223
We introduce the most efficient attack to date against the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange system. This attack utilizes the lack of exact thermal equilibrium in practical applications and is based on cable resistance losses and the fact that the Second Law of Thermodynamics cannot provide full security when such losses are present. The new attack does not challenge the unconditional security of the KLJN scheme, but it puts more stringent demands on the security/privacy enhancing protocol than any earlier attack. In this paper we present a simple defense protocol that fully eliminates this new attack by increasing the noise-temperature at the side of the smaller resistance value above the noise-temperature at the side with the greater resistance. It is shown that this simple protocol totally removes Eve's information not only for the new attack but also for the old Bergou-Scheuer-Yariv attack. The presently most efficient attacks against the KLJN scheme are thereby completely nullified.

Entropy 2014, 16(10), 5223–5231; doi:10.3390/e16105223. Article by Laszlo Kish and Claes-Göran Granqvist. Published 2014-10-07.

Entropy, Vol. 16, Pages 5211-5222: A Further Indication of the Self-Ordering Capacity of Water Via the Droplet Evaporation Method
http://www.mdpi.com/1099-4300/16/10/5211
The droplet evaporation method (DEM) is increasingly used for assessing various characteristics of water. In our research we tried to use DEM to detect a possible self-ordering capability of (spring) water that would be similar to the already found and described autothixotropic phenomenon, namely the increasing order of non-distilled water subject to aging. The output of DEM is a droplet remnant pattern (DRP). For the analysis of DRP images we used a specially developed computer program that performs frequency distribution analysis of certain parameters of the images. The results of the experiments demonstrated statistically significant differences both in the aging of water and in the glass-exposed surface/volume ratio of the aged water. The most important result supporting the self-ordering character of water was an increasing dependence between two analyzed parameters, distance and frequency, at the peak frequency. As this result mostly concerns aging and shows increasing order, it further corroborates other findings concerning increasing order by aging. Such further confirmation of the self-ordering capacity of water is important not only for physical chemistry, but also for biology.

Entropy 2014, 16(10), 5211–5222; doi:10.3390/e16105211. Article by Igor Jerman and Petra Ratajc. Published 2014-10-07.

Entropy, Vol. 16, Pages 5198-5210: Exact Probability Distribution versus Entropy
http://www.mdpi.com/1099-4300/16/10/5198
The problem addressed concerns the determination of the average number of successive attempts needed to guess a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.

Entropy 2014, 16(10), 5198–5210; doi:10.3390/e16105198. Article by Kerstin Andersson. Published 2014-10-07.

Entropy, Vol. 16, Pages 5178-5197: Entropy Generation in Flow of Highly Concentrated Non-Newtonian Emulsions in Smooth Tubes
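The brute-force version of the computation in the abstract above ("Exact Probability Distribution versus Entropy"), for a first-order approximation with i.i.d. letters, fits in a few lines; it is feasible only for tiny alphabets and word lengths, which is precisely why the paper develops approximations:

```python
import itertools

def average_guesses(letter_probs, word_len):
    """Average number of guesses to find a word of length `word_len`
    whose letters are drawn i.i.d. with the given probabilities,
    guessing words in decreasing order of probability."""
    word_probs = []
    for w in itertools.product(range(len(letter_probs)), repeat=word_len):
        p = 1.0
        for c in w:
            p *= letter_probs[c]
        word_probs.append(p)
    word_probs.sort(reverse=True)
    # The (i+1)-th guess is the correct one with probability word_probs[i].
    return sum((i + 1) * p for i, p in enumerate(word_probs))

# Two letters with probabilities (0.9, 0.1), words of length 3:
# probabilities 0.729, 3 x 0.081, 3 x 0.009, 0.001 in guessing order.
print(average_guesses([0.9, 0.1], 3))  # ≈ 1.628
```

The exponential blow-up is immediate: for an alphabet of size 100 and word length 100 the list would have \(100^{100}\) entries, hence the paper's interest in normal approximations to the distribution of log-probabilities.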
http://www.mdpi.com/1099-4300/16/10/5178
Entropy generation in adiabatic flow of highly concentrated non-Newtonian emulsions in smooth tubes of five different diameters (7.15–26.54 mm) was investigated experimentally. The emulsions were of the oil-in-water type with dispersed-phase concentration (Φ) ranging from 59.61% to 72.21% vol. The emulsions exhibited shear-thinning behavior in that the viscosity decreased with the increase in shear rate. The shear stress (\(\tau\)) versus shear rate (\(\dot{\gamma}\)) data of the emulsions could be described well by the power-law model \(\tau = K\dot{\gamma}^{\,n}\). The flow behavior index \(n\) was less than 1 and decreased sharply with the increase in Φ, whereas the consistency index \(K\) increased rapidly with the increase in Φ. For a given emulsion and tube diameter, the entropy generation rate per unit tube length increased linearly with the increase in the generalized Reynolds number (\(Re_n\)) on a log-log scale. For emulsions with Φ ≤ 65.15% vol., the entropy generation rate decreased with the increase in tube diameter. A reverse trend in diameter-dependence was observed for the emulsion with Φ of 72.21% vol. New models are developed for the prediction of the entropy generation rate in flow of power-law emulsions in smooth tubes. The experimental data show good agreement with the proposed models.

Entropy 2014, 16(10), 5178–5197; doi:10.3390/e16105178. Article by Rajinder Pal. Published 2014-10-07.

Entropy, Vol. 16, Pages 5159-5177: Progress in the Prediction of Entropy Generation in Turbulent Reacting Flows Using Large Eddy Simulation
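The power-law model \(\tau = K\dot{\gamma}^{\,n}\) and a generalized Reynolds number from the emulsion-flow abstract above can be sketched as follows; the Metzner–Reed form of \(Re_n\) is assumed here (the paper may use a variant), and the fluid properties are illustrative:

```python
def shear_stress(K, n, shear_rate):
    """Power-law (Ostwald-de Waele) model: tau = K * gamma_dot**n."""
    return K * shear_rate ** n

def reynolds_metzner_reed(rho, v, D, K, n):
    """Generalized Reynolds number for power-law fluids in tubes,
    Metzner-Reed form (an assumption, not taken from the paper):
    Re_n = rho * v**(2-n) * D**n / (8**(n-1) * K * ((3n+1)/(4n))**n).
    Reduces to the Newtonian rho*v*D/mu when n = 1 and K = mu."""
    return (rho * v ** (2 - n) * D ** n) / (
        8 ** (n - 1) * K * ((3 * n + 1) / (4 * n)) ** n)

# Shear-thinning emulsion: n < 1, so apparent viscosity tau/gamma_dot
# falls as the shear rate rises.
K, n = 2.0, 0.6  # consistency index (Pa.s^n), flow behavior index
for rate in (1.0, 10.0, 100.0):
    print(rate, shear_stress(K, n, rate) / rate)  # apparent viscosity
print(reynolds_metzner_reed(rho=1050.0, v=1.2, D=0.02654, K=K, n=n))
```

With \(Re_n\) in hand, the friction factor and hence the per-unit-length entropy generation rate of the adiabatic tube flow can be correlated, which is the quantity the paper models.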
http://www.mdpi.com/1099-4300/16/10/5159
An overview is presented of the recent developments in the application of large eddy simulation (LES) for prediction and analysis of local entropy generation in turbulent reacting flows. A challenging issue in such LES is subgrid-scale (SGS) modeling of filtered entropy generation terms. An effective closure strategy, recently developed, is based on the filtered density function (FDF) methodology with inclusion of entropy variations. This methodology, titled entropy FDF (En-FDF), is the main focus of this article. The En-FDF has been introduced as the joint velocity-scalar-turbulent frequency-entropy FDF and the marginal scalar-entropy FDF. Both formulations contain the chemical reaction and its entropy generation effects in closed forms. The former constitutes the most comprehensive form of the En-FDF and provides closure for all of the unclosed terms in LES transport equations. The latter is the marginal En-FDF and accounts for entropy generation effects, as well as scalar-entropy statistics. The En-FDF methodologies are described, and some of their recent predictions of entropy statistics and entropy generation in turbulent shear flows are presented.

Entropy 2014, 16(10), 5159–5177; doi:10.3390/e16105159. Article by Mehdi Safari, Fatemeh Hadi and M. Sheikhi. Published 2014-09-26.

Entropy, Vol. 16, Pages 5144-5158: Nonlinearities in Elliptic Curve Authentication
http://www.mdpi.com/1099-4300/16/9/5144
In order to construct the border solutions for nonsupersingular elliptic curve equations, some commonly used models need to be adapted from linearly treated cases for use in particular nonlinear cases. There are some approaches that conclude with these solutions. Optimization in this area means finding the majority of points on the elliptic curve and minimizing the time to compute the solution, in contrast with the time necessary to compute the inverse solution. We can compute the positive solution of a PDE (partial differential equation), like oscillations of \(f(s)/s\) around the principal eigenvalue \(\lambda_1\) of \(-\Delta\) in \(H_0^1(\Omega)\). Translating mathematics into cryptographic applications is relevant in everyday life, where there are situations in which two communicating parties need a third party to confirm the process. For example, if two persons want to agree on something, they need an impartial person, like a notary, to confirm this agreement. This third party does not influence the communication process in any way; it is just a witness to the agreement. We present a system where the communicating parties do not authenticate one another. Each party authenticates itself to a third party, who also sends the keys for the encryption/decryption process. Another advantage of such a system is that if someone (a sender) wants to transmit messages to more than one person (receivers), he needs only one authentication, unlike the classic systems where he would need to authenticate himself to each receiver. We propose an authentication method based on zero-knowledge and elliptic curves.

Entropy 2014, 16(9), 5144–5158; doi:10.3390/e16095144. Article by Ramzi Alsaedi, Nicolae Constantinescu and Vicenţiu Rādulescu. Published 2014-09-25.

Entropy, Vol. 16, Pages 5122-5143: Entropy of Closure Operators and Network Coding Solvability
http://www.mdpi.com/1099-4300/16/9/5122
The entropy of a closure operator has been recently proposed for the study of network coding and secret sharing. In this paper, we study closure operators in relation to their entropy. We first introduce four different kinds of rank functions for a given closure operator, which determine bounds on the entropy of that operator. This yields new axioms for matroids based on their closure operators. We also determine necessary conditions for a large class of closure operators to be solvable. We then define the Shannon entropy of a closure operator and use it to prove that the set of closure entropies is dense. Finally, we justify why we focus on the solvability of closure operators only.Entropy2014-09-25169Article10.3390/e16095122512251431099-43002014-09-25doi: 10.3390/e16095122Maximilien Gadouleau<![CDATA[Entropy, Vol. 16, Pages 5102-5121: Strategic Islands in Economic Games: Isolating Economies From Better Outcomes]]>
http://www.mdpi.com/1099-4300/16/9/5102
Many of the issues we face as a society are made more problematic by the rapidly changing context in which important decisions are made. For example, buying a petrol-powered car is most advantageous when there are many petrol pumps providing cheap petrol, whereas buying an electric car is most advantageous when there are many electrical recharge points or high-capacity batteries available. Such collective decision-making is often studied using economic game theory, where the focus is on how individuals might reach an agreement regarding the supply and demand for the different energy types. But even if the two parties find a mutually agreeable strategy, as technology and costs change over time, for example through cheaper and more efficient batteries and a more accurate pricing of the total cost of oil consumption, so too do the incentives for the choices buyers and sellers make, the result of which can be the stranding of an industry or even a whole economy on an island of inefficient outcomes. In this article we consider the issue of how changes in the underlying incentives can move us from an optimal economy to a sub-optimal economy while at the same time making it impossible to collectively navigate our way to a better strategy without forcing us to pass through a socially undesirable “tipping point”. We show that different perturbations to underlying incentives result in the creation or destruction of “strategic islands” isolated by disruptive transitions between strategies. The significant result in this work is the illustration that an economy that remains strategically stationary can over time become stranded in a suboptimal outcome from which there is no easy way to put the economy on a path to better outcomes without going through an economic tipping point.Entropy2014-09-24169Article10.3390/e16095102510251211099-43002014-09-24doi: 10.3390/e16095102Michael HarréTerry Bossomaier<![CDATA[Entropy, Vol. 
16, Pages 5078-5101: Hierarchical Sensor Placement Using Joint Entropy and the Effect of Modeling Error]]>
http://www.mdpi.com/1099-4300/16/9/5078
Good prediction of the behavior of wind around buildings improves designs for natural ventilation in warm climates. However, wind modeling is complex, and predictions are often inaccurate due to the large uncertainties in parameter values. The goal of this work is to enhance wind prediction around buildings using measurements by implementing a multiple-model system-identification approach. The success of system-identification approaches depends directly upon the location and number of sensors. Therefore, this research proposes a methodology for optimal sensor configuration based on hierarchical sensor placement involving calculations of prediction-value joint entropy. Computational Fluid Dynamics (CFD) models are generated to create a discrete population of possible wind-flow predictions, which are then used to identify optimal sensor locations. Optimal sensor configurations are revealed using the proposed methodology and considering the effect of systematic and spatially distributed modeling errors, as well as the common information between sensor locations. The methodology is applied to a full-scale case study and optimum configurations are evaluated for their ability to falsify models and improve predictions at locations where no measurements have been taken. It is concluded that a sensor placement strategy using joint entropy is able to lead to predictions of wind characteristics around buildings and capture short-term wind variability more effectively than sequential strategies, which maximize entropy.Entropy2014-09-23169Article10.3390/e16095078507851011099-43002014-09-23doi: 10.3390/e16095078Maria PapadopoulouBenny RaphaelIan SmithChandra Sekhar<![CDATA[Entropy, Vol. 16, Pages 5068-5077: Editorial Comment on the Special Issue of “Information in Dynamical Systems and Complex Systems”]]>
http://www.mdpi.com/1099-4300/16/9/5068
This special issue collects contributions from the participants of the “Information in Dynamical Systems and Complex Systems” workshop, which cover a wide range of important problems and new approaches that lie in the intersection of information theory and dynamical systems. The contributions include theoretical characterization and understanding of the different types of information flow and causality in general stochastic processes, inference and identification of coupling structure and parameters of system dynamics, rigorous coarse-grain modeling of network dynamical systems, and exact statistical testing of fundamental information-theoretic quantities such as the mutual information. The collective efforts reported herein reflect a modern perspective of the intimate connection between dynamical systems and information flow, leading to the promise of better understanding and modeling of natural complex systems and better/optimal design of engineering systems.Entropy2014-09-23169Editorial10.3390/e16095068506850771099-43002014-09-23doi: 10.3390/e16095068Erik BolltJie Sun<![CDATA[Entropy, Vol. 16, Pages 5032-5067: An Entropy-Based Upper Bound Methodology for Robust Predictive Multi-Mode RCPSP Schedules]]>
http://www.mdpi.com/1099-4300/16/9/5032
Projects are an important part of our activities and regardless of their magnitude, scheduling is at the very core of every project. In an ideal world, makespan minimization, which is the most commonly sought objective, would give us an advantage. However, every time we execute a project we have to deal with uncertainty; part of it coming from known sources and part remaining unknown until it affects us. For this reason, it is much more practical to focus on making our schedules robust, capable of handling uncertainty, and even to determine a range in which the project could be completed. In this paper we focus on an approach to determine such a range for the Multi-mode Resource Constrained Project Scheduling Problem (MRCPSP), a widely researched, NP-complete problem, but without adding any subjective considerations to its estimation. We do this by using entropy, a concept well known in the domain of thermodynamics, and a three-stage approach. First, we use Artificial Bee Colony (ABC), an effective and powerful meta-heuristic, to determine a schedule with minimized makespan which serves as a lower bound. The second stage defines buffer times and creates an upper bound makespan using an entropy function, with the advantage over other methods that it only considers elements which are inherent to the schedule itself and does not introduce any subjectivity to the buffer time generation. In the last stage, we use the ABC algorithm with an objective function that seeks to maximize robustness while staying within the makespan boundaries defined previously and in some cases even below the lower boundary. We evaluate our approach with two different benchmark sets: when using the PSPLIB for the MRCPSP benchmark set, the computational results indicate that it is possible to generate robust schedules which generally result in an increase of less than 10% over the best known solutions while increasing the robustness by at least 20% for practically every benchmark set. 
In an attempt to solve larger instances with 50 or 100 activities, we also used the MRCPSP/max benchmark sets, where the increase of the makespan is approximately 35% with respect to the best known solutions, alongside a 20% increase in robustness.Entropy2014-09-22169Article10.3390/e16095032503250671099-43002014-09-22doi: 10.3390/e16095032Angela ChenYun-Chia LiangJose Padilla<![CDATA[Entropy, Vol. 16, Pages 5020-5031: Adaptive Leader-Following Consensus of Multi-Agent Systems with Unknown Nonlinear Dynamics]]>
http://www.mdpi.com/1099-4300/16/9/5020
This paper deals with the leader-following consensus of multi-agent systems with matched nonlinear dynamics. Compared with previous works, the major difficulty here is caused by the simultaneous existence of nonidentical agent dynamics and unknown system parameters, which are more practical in real-world applications. To tackle this difficulty, a distributed adaptive control law for each follower is proposed based on algebraic graph theory and algebraic Riccati equation. By a Lyapunov function method, we show that the designed control law guarantees that each follower asymptotically converges to the leader under connected communication graphs. A simulation example demonstrates the effectiveness of the proposed scheme.Entropy2014-09-22169Article10.3390/e16095020502050311099-43002014-09-22doi: 10.3390/e16095020Junwei WangKairui ChenQinghua Ma<![CDATA[Entropy, Vol. 16, Pages 4992-5019: Ab Initio and Monte Carlo Approaches For the Magnetocaloric Effect in Co- and In-Doped Ni-Mn-Ga Heusler Alloys]]>
http://www.mdpi.com/1099-4300/16/9/4992
The complex magnetic and structural properties of Co-doped Ni-Mn-Ga Heusler alloys have been investigated by using a combination of first-principles calculations and classical Monte Carlo simulations. We have restricted the investigations to systems with 0, 5 and 9 at% Co. Ab initio calculations show the presence of the ferrimagnetic order of austenite and martensite depending on the composition, where the excess Mn atoms on Ga sites show reversed spin configurations. Stable ferrimagnetic martensite is found for systems with 0 (5) at% Co and a c/a ratio of 1.31 (1.28), respectively, leading to a strong competition of ferro- and antiferro-magnetic exchange interactions between nearest neighbor Mn atoms. The Monte Carlo simulations with ab initio exchange coupling constants as input parameters allow one to discuss the behavior at finite temperatures and to determine magnetic transition temperatures. The Curie temperature of austenite is found to increase with Co, while the Curie temperature of martensite decreases with increasing Co content. This behavior can be attributed to the stronger Co-Mn, Mn-Mn and Mn-Ni exchange coupling constants in austenite compared to the corresponding ones in martensite. The crossover from a direct to an inverse magnetocaloric effect in Ni-Mn-Ga due to the substitution of Ni by Co leads to the appearance of a “paramagnetic gap” in the martensitic phase. Doping with In increases the magnetic jump at the martensitic transition temperature. The simulated magnetic and magnetocaloric properties of Co- and In-doped Ni-Mn-Ga alloys are in good qualitative agreement with the available experimental data.Entropy2014-09-19169Article10.3390/e16094992499250191099-43002014-09-19doi: 10.3390/e16094992Vladimir SokolovskiyAnna GrünebohmVasiliy BuchelnikovPeter Entel<![CDATA[Entropy, Vol. 16, Pages 4974-4991: Simultaneous State and Parameter Estimation Using Maximum Relative Entropy with Nonhomogenous Differential Equation Constraints]]>
http://www.mdpi.com/1099-4300/16/9/4974
In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed by using filters, such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters and has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed form solutions, but we can also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want not only to filter the noise from our measurements, but also to simultaneously infer the parameters of the dynamics of a nonlinear and non-equilibrium system. Although there were many assumptions made throughout the paper to illustrate that EKF and exponential smoothing are special cases of MrE, we are not “constrained” by these assumptions. In other words, MrE is completely general and can be used in broader ways.Entropy2014-09-17169Article10.3390/e16094974497449911099-43002014-09-17doi: 10.3390/e16094974Adom GiffinRenaldas Urniezius<![CDATA[Entropy, Vol. 16, Pages 4960-4973: The Character of Entropy Production in Rayleigh–Bénard Convection]]>
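The exponential smoothing (EWMA) baseline that the MrE abstract above treats as a special case follows a one-line recursion, s_t = αx_t + (1 − α)s_{t−1}. A minimal sketch (the function name and default α are illustrative, not taken from the paper):

```python
def ewma(samples, alpha=0.3):
    """Exponentially weighted moving average of a sequence.

    alpha in (0, 1]: larger values weight recent samples more heavily.
    The sequence is seeded with its first sample; returns the smoothed series.
    """
    smoothed = []
    s = None
    for x in samples:
        # EWMA recursion: s_t = alpha * x_t + (1 - alpha) * s_{t-1}
        s = x if s is None else alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed
```

Note the recursion carries no memory beyond the last smoothed value, which is exactly the limitation the abstract points out: it cannot simultaneously estimate both the state and the parameters of the underlying dynamics.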
http://www.mdpi.com/1099-4300/16/9/4960
In this study, the Rayleigh–Bénard convection model was established, and a great number of Bénard cells with different numbers of vortexes were acquired by numerical simulation. Additionally, the Bénard cell with two vortexes, which appeared in the steady Bénard fluid over a range of Rayleigh numbers (abbreviated Ra), was found to display the primary characteristics of the system’s entropy production. It was found that the two entropy productions, calculated using either linear theory or classical thermodynamic theory, are basically consistent when the system can form a steady Bénard flow within the proper range of Rayleigh numbers. Furthermore, in a steady Bénard flow, the entropy production of the system increases with Ra. It was also found that the difference between the two entropy productions is the driving force that moves the system toward a steady state. Moreover, the distribution of the local entropy production of the Bénard cell shows that the two vortexes are clearly located where local entropy production is minimal, while the borders around the cell are areas of larger local entropy production.Entropy2014-09-17169Article10.3390/e16094960496049731099-43002014-09-17doi: 10.3390/e16094960Chenxia JiaChengjun JingJian Liu<![CDATA[Entropy, Vol. 16, Pages 4937-4959: Distributed Control of Heat Conduction in Thermal Inductive Materials with 2D Geometrical Isomorphism]]>
http://www.mdpi.com/1099-4300/16/9/4937
In a previous study we provided analytical and experimental evidence that some materials are able to store entropy-flow, in which heat conduction behaves as standing waves in a bounded region small enough in practice. In this paper we continue to develop distributed control of heat conduction in these thermal-inductive materials. The control objective is to achieve a subtle temperature distribution in space and simultaneously to suppress its transient overshoots in time. This technology concerns safe and accurate heating/cooling treatments in medical operations, polymer processing, and other prevailing modern-day practices. Serving distributed feedback, spatiotemporal H∞/μ control is developed by extending the conventional 1D H∞/μ control to a 2D version. Therein, 2D geometrical isomorphism is constructed with the Laplace–Galerkin transform, which extends the small-gain theorem into the mode-frequency domain, wherein 2D transfer-function controllers are synthesized with graphical methods. Finally, 2D digital-signal processing is programmed to implement 2D transfer-function controllers, possibly of spatial fractional orders, into DSP-engine-embedded microcontrollers.Entropy2014-09-15169Article10.3390/e16094937493749591099-43002014-09-15doi: 10.3390/e16094937Chia-Yu ChouBoe-Shong HongPei-Ju ChiangWen-Teng WangLiang-Kuang ChenChia-Yen Lee<![CDATA[Entropy, Vol. 16, Pages 4923-4936: Effect of Conformational Entropy on the Nanomechanics of Microcantilever-Based Single-Stranded DNA Sensors]]>
http://www.mdpi.com/1099-4300/16/9/4923
An entropy-controlled bending mechanism is presented to study the nanomechanics of microcantilever-based single-stranded DNA (ssDNA) sensors. First, the conformational free energy of the ssDNA layer is given with an improved scaling theory of thermal blobs considering the curvature effect, and the mechanical energy of the non-biological layer is described by Zhang’s two-variable method for laminated beams. Then, an analytical model for static deflections of ssDNA microcantilevers is formulated by the principle of minimum energy. Comparisons of the deflections predicted by the proposed model, Utz–Begley’s model and Hagan’s model are also examined. Numerical results show that the conformational entropy effect on microcantilever deflections cannot be ignored, especially under conditions of high packing density or long chain systems, and that the variation of deflection predicted by the proposed analytical model not only accords qualitatively with that observed in the related experiments, but is also quantitatively closer to the experimental values than that given by the preexisting models. To improve the sensitivity of static-mode biosensors, the substrate stiffness should be made as small as possible.Entropy2014-09-15169Article10.3390/e16094923492349361099-43002014-09-15doi: 10.3390/e16094923Zou-Qing TanNeng-Hui Zhang<![CDATA[Entropy, Vol. 16, Pages 4911-4922: Existence of Entropy Solutions for Nonsymmetric Fractional Systems]]>
http://www.mdpi.com/1099-4300/16/9/4911
The present work focuses on entropy solutions for the fractional Cauchy problem of nonsymmetric systems. We impose sufficient conditions on the parameters to obtain bounded solutions in L∞. The solutions attained are unique. Performance is established by utilizing the maximum principle for certain generalized time- and space-fractional diffusion equations. The fractional differential operator is inspected based on the interpretation of the Riemann–Liouville differential operator. Fractional entropy inequalities are imposed.Entropy2014-09-12169Article10.3390/e16094911491149221099-43002014-09-12doi: 10.3390/e16094911Rabha IbrahimHamid Jalab<![CDATA[Entropy, Vol. 16, Pages 4892-4910: On Shannon’s Formula and Hartley’s Rule: Beyond the Mathematical Coincidence]]>
http://www.mdpi.com/1099-4300/16/9/4892
In the information theory community, the following “historical” statements are generally well accepted: (1) Hartley did put forth his rule twenty years before Shannon; (2) Shannon’s formula, as a fundamental tradeoff between transmission rate, bandwidth, and signal-to-noise ratio, came as a surprise in 1948; (3) Hartley’s rule is inexact, while Shannon’s formula is characteristic of the additive white Gaussian noise channel; (4) Hartley’s rule is an imprecise relation that is not an appropriate formula for the capacity of a communication channel. We show that all four of these statements are somewhat wrong. Indeed, a careful calculation shows that “Hartley’s rule” coincides with Shannon’s formula. We explain this mathematical coincidence by deriving the necessary and sufficient conditions on an additive noise channel such that its capacity is given by Shannon’s formula and construct a sequence of such channels that makes the link between the uniform (Hartley) and Gaussian (Shannon) channels.Entropy2014-09-10169Article10.3390/e16094892489249101099-43002014-09-10doi: 10.3390/e16094892Olivier José Magossi<![CDATA[Entropy, Vol. 16, Pages 4874-4891: Illuminating Water and Life]]>
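The two expressions compared in the Shannon/Hartley abstract above can be written side by side; a sketch in conventional notation (A: signal amplitude limit, Δ: noise amplitude, W: bandwidth, P/N: signal-to-noise power ratio; the symbols are standard textbook choices, not taken from the paper):

```latex
% Hartley's rule: with amplitudes confined to [-A, A] and a noise
% amplitude of \Delta, roughly 1 + A/\Delta levels are distinguishable:
C_{\mathrm{Hartley}} = \log_2\!\left(1 + \frac{A}{\Delta}\right)
\quad \text{bits per channel use}

% Shannon's formula for the band-limited AWGN channel:
C_{\mathrm{Shannon}} = W \log_2\!\left(1 + \frac{P}{N}\right)
\quad \text{bits per second}
```

The abstract's claim is that the first expression is not merely a heuristic: it is the exact capacity of a suitably constructed additive (uniform-noise) channel, which is what turns the formal resemblance into a genuine coincidence.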
http://www.mdpi.com/1099-4300/16/9/4874
This paper reviews the quantum electrodynamics theory of water put forward by Del Giudice and colleagues and how it may provide a useful foundation for a new science of water for life. The interaction of light with liquid water generates quantum coherent domains in which the water molecules oscillate between the ground state and an excited state close to the ionizing potential of water. This produces a plasma of almost free electrons favouring redox reactions, the basis of energy metabolism in living organisms. Coherent domains stabilized by surfaces, such as membranes and macromolecules, provide the excited interfacial water that enables photosynthesis to take place, on which most of life on Earth depends. Excited water is the source of superconducting protons for rapid intercommunication within the body that may be associated with the acupuncture meridians. Coherent domains can also trap electromagnetic frequencies from the environment to orchestrate and activate specific biochemical reactions through resonance, a mechanism for the most precise regulation of gene function.Entropy2014-09-10169Review10.3390/e16094874487448911099-43002014-09-10doi: 10.3390/e16094874Mae-Wan Ho<![CDATA[Entropy, Vol. 16, Pages 4855-4873: Entropy Evaluation Based on Value Validity]]>
http://www.mdpi.com/1099-4300/16/9/4855
Besides its importance in statistical physics and information theory, the Boltzmann-Shannon entropy S has become one of the most widely used and misused summary measures of various attributes (characteristics) in diverse fields of study. It has also been the subject of extensive and perhaps excessive generalizations. This paper introduces the concept and criteria for value validity as a means of determining if an entropy takes on values that reasonably reflect the attribute being measured and that permit different types of comparisons to be made for different probability distributions. While neither S nor its relative entropy equivalent S* meet the value-validity conditions, certain power functions of S and S* do to a considerable extent. No parametric generalization offers any advantage over S in this regard. A measure based on Euclidean distances between probability distributions is introduced as a potential entropy that does comply fully with the value-validity requirements and its statistical inference procedure is discussed.Entropy2014-09-05169Article10.3390/e16094855485548731099-43002014-09-05doi: 10.3390/e16094855Tarald Kvålseth<![CDATA[Entropy, Vol. 16, Pages 4839-4854: Low-Pass Filtering Approach via Empirical Mode Decomposition Improves Short-Scale Entropy-Based Complexity Estimation of QT Interval Variability in Long QT Syndrome Type 1 Patients]]>
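The quantities S and its relative entropy equivalent S* discussed in the value-validity abstract above can be made concrete with a minimal sketch (base-2 logarithms; the normalization of S* by log n follows the usual convention, and the function names are ours, not the author's):

```python
import math

def shannon_entropy(p):
    """Boltzmann-Shannon entropy S (in bits) of a probability distribution p."""
    # Zero-probability outcomes contribute nothing (0 * log 0 := 0).
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def relative_entropy_equivalent(p):
    """S* = S / log2(n): S scaled by its maximum over n outcomes,
    so that 0 <= S* <= 1 for any distribution on n outcomes."""
    return shannon_entropy(p) / math.log2(len(p))
```

S* reaches 1 only for the uniform distribution and 0 only for a degenerate one, which is the kind of boundary behavior the value-validity criteria examine.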
http://www.mdpi.com/1099-4300/16/9/4839
Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on the noise and/or action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on the computation of the fastest intrinsic mode function via empirical mode decomposition (EMD) and its subtraction from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs) subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family developing long QT syndrome type 1 (LQT1) via KCNQ1-A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of the cardiovascular control that otherwise would have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.Entropy2014-09-05169Article10.3390/e16094839483948541099-43002014-09-05doi: 10.3390/e16094839Vlasta BariAndrea MarchiBeatrice De MariaGiulia GirardengoAlfred GeorgePaul BrinkSergio CeruttiLia CrottiPeter SchwartzAlberto Porta<![CDATA[Entropy, Vol. 16, Pages 4818-4838: A Trustworthiness Evaluation Method for Software Architectures Based on the Principle of Maximum Entropy (POME) and the Grey Decision-Making Method (GDMM)]]>
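Sample entropy, the complexity estimator used in the QT-variability study above, can be sketched with NumPy (the usual parameters m = 2 and r = 0.2·SD; template-counting conventions vary between implementations, and this is an illustrative sketch rather than the authors' code):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series.

    r is a tolerance expressed as a fraction of the series' standard
    deviation. Returns -ln(A/B), where B counts template pairs of
    length m within tolerance and A does the same for length m + 1.
    """
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to every template.
            d = np.max(np.abs(templates[i] - templates), axis=1)
            count += int(np.sum(d <= tol)) - 1  # exclude the self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")
```

A regular signal yields many long template matches and hence low SampEn, while noise yields few, which is why the EMD-based removal of the fastest intrinsic mode function changes the short-scale complexity estimates so strongly.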
http://www.mdpi.com/1099-4300/16/9/4818
As the early design decision-making structure, a software architecture plays a key role in the final software product quality and in the whole project. In the software design and development process, an effective evaluation of the trustworthiness of a software architecture can help in making sound and well-reasoned decisions on the architecture, which are necessary for the construction of highly trustworthy software. Given the lack of trustworthiness evaluation and measurement studies for software architectures, this paper provides a trustworthy attribute model of software architecture. Based on this model, the paper proposes using the Principle of Maximum Entropy (POME) and the Grey Decision-making Method (GDMM) as the trustworthiness evaluation method for a software architecture, proves the scientific soundness and rationality of this method, and verifies its feasibility through case analysis.Entropy2014-09-03169Article10.3390/e16094818481848381099-43002014-09-03doi: 10.3390/e16094818Rong Jiang<![CDATA[Entropy, Vol. 16, Pages 4801-4817: Pneumatic Performance Study of a High Pressure Ejection Device Based on Real Specific Energy and Specific Enthalpy]]>
http://www.mdpi.com/1099-4300/16/9/4801
In high-pressure dynamic thermodynamic processes, the pressure is much higher than the air critical pressure, and the temperature can deviate significantly from the Boyle temperature. In such situations, the thermophysical properties and pneumatic performance cannot be described accurately by the ideal gas law. This paper proposes an approach to evaluate the pneumatic performance of a high-pressure air catapult launch system, in which residual functions are used to compensate for the thermophysical property uncertainties caused by real gas effects. Compared with the Nelson-Obert generalized compressibility charts, the precision of the improved virial equation of state is better than that of the Soave-Redlich-Kwong (S-R-K) and Peng-Robinson (P-R) equations for high-pressure air. In this paper, the improved virial equation of state is further used to establish a compressibility factor database, which is applied to evaluate real gas effects. The specific residual thermodynamic energy and specific residual enthalpy of the high-pressure air are also derived using the modified corresponding state equation and the improved virial equation of state, which are truncated to the third virial coefficient. The pneumatic equations are established on the basis of the derived residual functions. The comparison of the numerical results shows that the real gas effects are strong, and the pneumatic performance analysis indicates that the real dynamic thermodynamic process differs markedly from the ideal one.Entropy2014-09-03169Article10.3390/e16094801480148171099-43002014-09-03doi: 10.3390/e16094801Jie RenFengbo YangDawei MaGuigao LeJianlin Zhong<![CDATA[Entropy, Vol. 16, Pages 4788-4800: Application of Entropy-Based Attribute Reduction and an Artificial Neural Network in Medicine: A Case Study of Estimating Medical Care Costs Associated with Myocardial Infarction]]>
http://www.mdpi.com/1099-4300/16/9/4788
In medicine, artificial neural networks (ANN) have been extensively applied in many fields to model the nonlinear relationships in multivariate data. Due to the difficulty of selecting input variables, attribute reduction techniques have been widely used to reduce data to obtain a smaller set of attributes. However, to compute reductions from heterogeneous data, a discretizing algorithm is often introduced in dimensionality reduction methods, which may cause information loss. In this study, we developed an integrated method for estimating the medical care costs, obtained from 798 cases, associated with myocardial infarction disease. The subset of attributes was selected as the input variables of the ANN by using an entropy-based information measure, fuzzy information entropy, which can deal with both categorical and numerical attributes without discretization. Then, we applied the corrected Akaike information criterion (AICc) to compare the networks. The results revealed that fuzzy information entropy is capable of selecting input variables from heterogeneous data for an ANN, and that the proposed procedure provides a reasonable estimation of medical care costs, which can be adopted in other fields of medical science.Entropy2014-08-29169Article10.3390/e16094788478848001099-43002014-08-29doi: 10.3390/e16094788Qingyun DuKe NieZhensheng Wang<![CDATA[Entropy, Vol. 16, Pages 4769-4787: Study on Mixed Working Fluids with Different Compositions in Organic Rankine Cycle (ORC) Systems for Vehicle Diesel Engines]]>
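The corrected Akaike information criterion (AICc) used for network comparison in the abstract above has a standard closed form; a sketch for least-squares-fitted models (the RSS-based AIC form assumes Gaussian errors, and the parameter names are ours):

```python
import math

def aicc(rss, n_samples, n_params):
    """Corrected AIC for a least-squares model with Gaussian errors.

    rss: residual sum of squares; n_samples: number of observations;
    n_params: number of fitted parameters. The correction term
    2k(k+1)/(n-k-1) penalizes small samples and vanishes as n grows.
    """
    k, n = n_params, n_samples
    aic = n * math.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)
```

With small samples relative to the parameter count (as is common when comparing ANN architectures), the correction term dominates, which is why AICc rather than plain AIC is the appropriate criterion here.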
http://www.mdpi.com/1099-4300/16/9/4769
One way to increase the thermal efficiency of vehicle diesel engines is to recover waste heat by using an organic Rankine cycle (ORC) system. Tests were conducted to study the running performances of diesel engines over the whole operating range. The law of variation of the exhaust energy rate under various engine operating conditions was also analyzed. A diesel engine-ORC combined system was designed, and relevant evaluation indexes were proposed. The variation of the running performances of the combined system under various engine operating conditions was investigated. R245fa and R152a were selected as the components of the mixed working fluid. Thereafter, six kinds of mixed working fluids with different compositions were presented. The effects of mixed working fluids with different compositions on the running performances of the combined system were revealed. Results show that the running performances of the combined system can be improved effectively when the mass fraction of R152a in the mixed working fluid is high and the engine operates at high power. For the mixed working fluid M1 (R245fa/R152a, 0.1/0.9, by mass fraction), the net power output of the combined system reaches the maximum of 34.61 kW. Output energy density of working fluid (OEDWF), waste heat recovery efficiency (WHRE), and engine thermal efficiency increasing ratio (ETEIR) all reach their maximum values at 42.7 kJ/kg, 10.90%, and 11.29%, respectively.Entropy2014-08-27169Article10.3390/e16094769476947871099-43002014-08-27doi: 10.3390/e16094769Kai YangHongguang ZhangEnhua WangSongsong SongChen BeiYing ChangHongjin WangBaofeng Yao<![CDATA[Entropy, Vol. 16, Pages 4749-4768: Multicomponent and High Entropy Alloys]]>
http://www.mdpi.com/1099-4300/16/9/4749
This paper describes some underlying principles of multicomponent and high entropy alloys, and gives some examples of these materials. Different types of multicomponent alloy and different methods of accessing multicomponent phase space are discussed. The alloys were manufactured by conventional and high speed solidification techniques, and their macroscopic, microscopic and nanoscale structures were studied by optical, X-ray and electron microscope methods. They exhibit a variety of amorphous, quasicrystalline, dendritic and eutectic structures.Entropy2014-08-26169Review10.3390/e16094749474947681099-43002014-08-26doi: 10.3390/e16094749Brian Cantor<![CDATA[Entropy, Vol. 16, Pages 4713-4748: Information Anatomy of Stochastic Equilibria]]>
http://www.mdpi.com/1099-4300/16/9/4713
A stochastic nonlinear dynamical system generates information, as measured by its entropy rate. Some—the ephemeral information—is dissipated and some—the bound information—is actively stored and so affects future behavior. We derive analytic expressions for the ephemeral and bound information in the limit of infinitesimal time discretization for two classical systems that exhibit dynamical equilibria: first-order Langevin equations (i) where the drift is the gradient of an analytic potential function and the diffusion matrix is invertible and (ii) with a linear drift term (Ornstein–Uhlenbeck), but a noninvertible diffusion matrix. In both cases, the bound information is sensitive to the drift and diffusion, while the ephemeral information is sensitive only to the diffusion matrix and not to the drift. Notably, this information anatomy changes discontinuously as any of the diffusion coefficients vanishes, indicating that it is very sensitive to the noise structure. We then calculate the information anatomy of the stochastic cusp catastrophe and of particles diffusing in a heat bath in the overdamped limit, both examples of stochastic gradient descent on a potential landscape. Finally, we use our methods to calculate and compare approximations for the time-local predictive information for adaptive agents.Entropy2014-08-25169Article10.3390/e16094713471347481099-43002014-08-25doi: 10.3390/e16094713Sarah MarzenJames Crutchfield<![CDATA[Entropy, Vol. 16, Pages 4693-4712: Some New Results on the Multiple-AccessWiretap Channel]]>
http://www.mdpi.com/1099-4300/16/8/4693
In this paper, some new results on the multiple-access wiretap channel (MAC-WT) are provided. Specifically, first, we investigate the degraded MAC-WT, where two users transmit their corresponding confidential messages (no common message) to a legitimate receiver via a multiple-access channel (MAC), while a wiretapper wishes to obtain the messages via a physically degraded wiretap channel. The secrecy capacity region of this model is determined for both the discrete memoryless and Gaussian cases. For the Gaussian case, we find that this secrecy capacity region is exactly the same as the achievable secrecy rate region provided by Tekin and Yener, i.e., Tekin–Yener’s achievable region is exactly the secrecy capacity region of the degraded Gaussian MAC-WT. Second, we study a special Gaussian MAC-WT, and find the power control for two kinds of optimal points (max-min point and single user point) on the secrecy rate region of this special Gaussian model.Entropy2014-08-21168Article10.3390/e16084693469347121099-43002014-08-21doi: 10.3390/e16084693Bin DaiZheng Ma<![CDATA[Entropy, Vol. 16, Pages 4677-4692: Entropy-Complexity Characterization of Brain Development in Chickens]]>
http://www.mdpi.com/1099-4300/16/8/4677
Electroencephalography (EEG) reflects the electrical activity of the brain, which can be considered chaotic and governed by nonlinear dynamics. Chickens exhibit a protracted period of maturation, and this temporal separation of the synapse formation and maturation phases is analogous to human neural development, though the changes in chickens occur in weeks compared to years in humans. The development of synaptic networks in the chicken brain can be regarded as occurring in two broadly defined phases. We specifically describe the chicken brain development phases in the causality entropy-complexity plane H × C, showing that the complexity of the electrical activity can be characterized by estimating the intrinsic correlational structure of the EEG signal. This allows us to identify the dynamics of the developing chicken brain within the zone of chaotic dissipative behavior in the plane H × C.Entropy2014-08-21168Article10.3390/e16084677467746921099-43002014-08-21doi: 10.3390/e16084677Fernando MontaniOsvaldo Rosso<![CDATA[Entropy, Vol. 16, Pages 4662-4676: Information-Theoretic Bounded Rationality and ε-Optimality]]>
http://www.mdpi.com/1099-4300/16/8/4662
Bounded rationality concerns the study of decision makers with limited information processing resources. Previously, the free energy difference functional has been suggested to model bounded rational decision making, as it provides a natural trade-off between an energy or utility function that is to be optimized and information processing costs that are measured by entropic search costs. The main question of this article is how the information-theoretic free energy model relates to simple ε-optimality models of bounded rational decision making, where the decision maker is satisfied with any action in an ε-neighborhood of the optimal utility. We find that the stochastic policies that optimize the free energy trade-off comply with the notion of ε-optimality. Moreover, this optimality criterion even holds when the environment is adversarial. We conclude that the study of bounded rationality based on ε-optimality criteria that abstract away from the particulars of the information processing constraints is compatible with the information-theoretic free energy model of bounded rationality.Entropy2014-08-21168Article10.3390/e16084662466246761099-43002014-08-21doi: 10.3390/e16084662Daniel BraunPedro Ortega<![CDATA[Entropy, Vol. 16, Pages 4648-4661: Entropy Production in Pipeline Flow of Dispersions of Water in Oil]]>
http://www.mdpi.com/1099-4300/16/8/4648
Entropy production in pipeline adiabatic flow of water-in-oil emulsions is investigated experimentally in three different diameter pipes. The dispersed-phase (water droplets) concentration of emulsion is varied from 0 to 41% vol. The entropy production rates in emulsion flow are compared with the values expected in single-phase flow of Newtonian fluids with the same properties (viscosity and density). While in the laminar regime the entropy production rates in emulsion flow can be described adequately by the single-phase Newtonian equations, a significant deviation from single-phase flow behavior is observed in the turbulent regime. In the turbulent regime, the entropy production rates in emulsion flow are found to be substantially smaller than those expected on the basis of single-phase equations. For example, the entropy production rate in water-in-oil emulsion flow at a dispersed-phase volume fraction of 0.41 is only 38.4% of that observed in flow of a single-phase Newtonian fluid with the same viscosity and density, when comparison is made at a Reynolds number of 4000. Thus emulsion flow in pipelines is more efficient thermodynamically than single-phase Newtonian flow.Entropy2014-08-19168Article10.3390/e16084648464846611099-43002014-08-19doi: 10.3390/e16084648Rajinder Pal<![CDATA[Entropy, Vol. 16, Pages 4626-4647: Spatiotemporal Scaling Effect on Rainfall Network Design Using Entropy]]>
http://www.mdpi.com/1099-4300/16/8/4626
Because of high variation in mountainous areas, rainfall data at different spatiotemporal scales may yield potential uncertainty for network design. However, few studies focus on the scaling effect on both the spatial and the temporal scale. By calculating the maximum joint entropy of hourly typhoon events, monthly, six dry and wet months and annual rainfall between 1992 and 2012 for 1-, 3-, and 5-km grids, the relocated candidate rain gauges in the National Taiwan University Experimental Forest of Central Taiwan are prioritized. The results show: (1) the network exhibits different locations for first prioritized candidate rain gauges for different spatiotemporal scales; (2) the effect of spatial scales is insignificant compared to temporal scales; and (3) a smaller number and a lower percentage of required stations (PRS) reach stable joint entropy for a long duration at finer spatial scale. Prioritized candidate rain gauges provide key reference points for adjusting the network to capture more accurate information and minimize redundancy.Entropy2014-08-18168Article10.3390/e16084626462646471099-43002014-08-18doi: 10.3390/e16084626Chiang WeiHui-Chung YehYen-Chang Chen<![CDATA[Entropy, Vol. 16, Pages 4612-4625: Exergetic and Thermoeconomic Analyses of Solar Air Heating Processes Using a Parabolic Trough Collector]]>
http://www.mdpi.com/1099-4300/16/8/4612
This paper presents a theoretical and practical analysis of the application of the thermoeconomic method. A furnace for heating air is evaluated using the methodology. The furnace works with solar energy, received from a parabolic trough collector, and with electricity supplied by an electric power utility. The methodology first evaluates the process by the first and second laws of thermodynamics; cost analysis is then applied to obtain the thermoeconomic cost. For this study, the climatic conditions of the city of Queretaro (Mexico) are considered. Two periods were taken into account: from July 2006 to June 2007 and on 6 January 2011. The prototype, located at CICATA-IPN, Qro, was analyzed in two different scenarios, i.e., with 100% electricity and with 100% solar energy. The results showed that the thermoeconomic costs of the heating process with electricity, inside the chamber, are less than those using solar heating. This may be ascribed to the high cost of the materials, fittings, and manufacturing of the solar equipment. In addition, the mass flow, aperture area, and length and diameter of the receiver of the solar prototype are parameters for increasing the efficiency of the prototype, along with the price of manufacturing. The optimum design parameters are: a length of 3 to 5 m, a mass flow rate of 0.03 kg/s, a receiver diameter of around 10 to 30 mm and an aperture area of 3 m2.Entropy2014-08-18168Article10.3390/e16084612461246251099-43002014-08-18doi: 10.3390/e16084612Miguel Hernández-RománAlejandro Manzano-RamírezJorge Pineda-PiñónJorge Ortega-Moody<![CDATA[Entropy, Vol. 16, Pages 4603-4611: A Maximum Entropy Approach for Predicting Epileptic Tonic-Clonic Seizure]]>
http://www.mdpi.com/1099-4300/16/8/4603
The development of methods for time series analysis and prediction has always been and continues to be an active area of research. In this work, we develop a technique for modelling chaotic time series in parametric fashion. In the case of tonic-clonic epileptic electroencephalographic (EEG) analysis, we show that appropriate information theory tools provide valuable insights into the dynamics of neural activity. Our purpose is to demonstrate the feasibility of the maximum entropy principle to anticipate tonic-clonic seizure in patients with epilepsy.Entropy2014-08-18168Article10.3390/e16084603460346111099-43002014-08-18doi: 10.3390/e16084603Maria MartínAngelo PlastinoVictoria Vampa<![CDATA[Entropy, Vol. 16, Pages 4583-4602: Information Entropy-Based Metrics for Measuring Emergences in Artificial Societies]]>
http://www.mdpi.com/1099-4300/16/8/4583
Emergence is a common phenomenon, and it is also a general and important concept in complex dynamic systems like artificial societies. Usually, artificial societies are used for assisting in resolving several complex social issues (e.g., emergency management, intelligent transportation systems) with the aid of computer science. The levels of an emergence may have an effect on decision making, and the occurrence and degree of an emergence are generally perceived by human observers. However, due to the ambiguity and inaccuracy of human observers, proposing a quantitative method to measure emergences in artificial societies is a meaningful and challenging task. This article mainly concentrates upon three kinds of emergences in artificial societies: emergence of attribution, emergence of behavior, and emergence of structure. Based on information entropy, three metrics have been proposed to measure emergences in a quantitative way. Meanwhile, the correctness of these metrics has been verified through three case studies (the spread of an infectious influenza, a dynamic microblog network, and a flock of birds) with several experimental simulations on the NetLogo platform. These experimental results confirm that these metrics increase with the rising degree of emergences. In addition, this article also discusses the limitations and extended applications of these metrics.Entropy2014-08-15168Article10.3390/e16084583458346021099-43002014-08-15doi: 10.3390/e16084583Mingsheng TangXinjun Mao<![CDATA[Entropy, Vol. 16, Pages 4566-4582: Chaos Synchronization Error Technique-Based Defect Pattern Recognition for GIS through Partial Discharge Signal Analysis]]>
http://www.mdpi.com/1099-4300/16/8/4566
This work is aimed at using the chaos synchronization error dynamics (CSED) technique for defect pattern recognition in gas insulated switchgear (GIS). The radiated electromagnetic waves generated by internal defects were measured with a self-made ultrahigh frequency (UHF) micro-strip antenna, so as to determine whether partial discharge will occur. Firstly, a data pretreatment is performed on the measured raw data for the purpose of reducing the computational burden. A characteristic matrix is then constructed according to the dynamic error trajectories in a chaos synchronization system, after which characteristics are extracted. A comparison with the existing Hilbert-Huang Transform (HHT) method reveals that the two characteristics extracted from the CSED results presented herein using fractal theory achieve a higher pattern recognition rate.Entropy2014-08-13168Article10.3390/e16084566456645821099-43002014-08-13doi: 10.3390/e16084566Hung-Cheng ChenHer-Terng YauPo-Yan Chen<![CDATA[Entropy, Vol. 16, Pages 4521-4565: Koszul Information Geometry and Souriau Geometric Temperature/Capacity of Lie Group Thermodynamics]]>
http://www.mdpi.com/1099-4300/16/8/4521
François Massieu's 1869 idea of deriving some mechanical and thermal properties of physical systems from “Characteristic Functions” was developed by Gibbs and Duhem in thermodynamics with the concept of potentials, and introduced by Poincaré in probability. This paper deals with the generalization of this Characteristic Function concept by Jean-Louis Koszul in mathematics and by Jean-Marie Souriau in statistical physics. The Koszul-Vinberg Characteristic Function (KVCF) on convex cones will be presented as the cornerstone of “Information Geometry” theory, defining Koszul Entropy as the Legendre transform of minus the logarithm of the KVCF, and Fisher Information Metrics as the Hessian of these dual functions, invariant under their automorphisms. In parallel, Souriau extended the Characteristic Function in Statistical Physics, looking for other kinds of invariances through the co-adjoint action of a group on its momentum space, defining physical observables like energy, heat and momentum as purely geometrical objects. In the covariant Souriau model, Gibbs equilibrium states are indexed by a geometric parameter, the Geometric (Planck) Temperature, with values in the Lie algebra of the dynamical Galileo/Poincaré groups, interpreted as a space-time vector, giving the metric tensor a null Lie derivative. The Fisher Information metric appears as the opposite of the derivative of the mean “Moment map” with respect to the geometric temperature, equivalent to a Geometric Capacity or Specific Heat. We synthesize the analogies between the Koszul and Souriau models, and reduce their definitions to the exclusive Cartan “Inner Product”. Interpreting the Legendre transform as a Fourier transform in the (Min,+) algebra, we conclude with a definition of Entropy given by a relation mixing Fourier/Laplace transforms: Entropy = (minus) Fourier(Min,+) o Log o Laplace(+,X).Entropy2014-08-12168Article10.3390/e16084521452145651099-43002014-08-12doi: 10.3390/e16084521Frédéric Barbaresco<![CDATA[Entropy, Vol. 16, Pages 4497-4520: Fractal Structure and Entropy Production within the Central Nervous System]]>
http://www.mdpi.com/1099-4300/16/8/4497
Our goal is to explore the relationship between two traditionally unrelated concepts, fractal structure and entropy production, evaluating both within the central nervous system (CNS). Fractals are temporal or spatial structures with self-similarity across scales of measurement; whereas entropy production represents the necessary exportation of entropy to our environment that comes with metabolism and life. Fractals may be measured by their fractal dimension; and human entropy production may be estimated by oxygen and glucose metabolism. In this paper, we observe fractal structures ubiquitously present in the CNS, and explore a hypothetical and unexplored link between fractal structure and entropy production, as measured by oxygen and glucose metabolism. Rapid increase in both fractal structures and metabolism occur with childhood and adolescent growth, followed by slow decrease during aging. Concomitant increases and decreases in fractal structure and metabolism occur with cancer vs. Alzheimer’s and multiple sclerosis, respectively. In addition to fractals being related to entropy production, we hypothesize that the emergence of fractal structures spontaneously occurs because a fractal is more efficient at dissipating energy gradients, thus maximizing entropy production. Experimental evaluation and further understanding of limitations and necessary conditions are indicated to address broad scientific and clinical implications of this work.Entropy2014-08-12168Article10.3390/e16084497449745201099-43002014-08-12doi: 10.3390/e16084497Andrew SeelyKimberley NewmanChristophe Herry<![CDATA[Entropy, Vol. 16, Pages 4489-4496: Complexity and the Emergence of Physical Properties]]>
http://www.mdpi.com/1099-4300/16/8/4489
Using the effective complexity measure, proposed by M. Gell-Mann and S. Lloyd, we give a quantitative definition of an emergent property. We use several previous results and properties of this particular information measure closely related to the random features of the entity and its regularities.Entropy2014-08-11168Article10.3390/e16084489448944961099-43002014-08-11doi: 10.3390/e16084489Miguel Fuentes<![CDATA[Entropy, Vol. 16, Pages 4483-4488: Is Gravity Entropic Force?]]>
http://www.mdpi.com/1099-4300/16/8/4483
If we assume that the sources of a thermodynamic system, ρ and p, are also the source of gravity, then either thermal quantities, such as entropy, temperature, and chemical potential, can induce gravitational effects, or gravity can induce thermal effects. We find that gravity can be seen as an entropic force only for systems with constant temperature and zero chemical potential. The case of the Newtonian approximation is discussed.Entropy2014-08-11168Article10.3390/e16084483448344881099-43002014-08-11doi: 10.3390/e16084483Rongjia Yang<![CDATA[Entropy, Vol. 16, Pages 4443-4482: Structure of a Global Network of Financial Companies Based on Transfer Entropy]]>
http://www.mdpi.com/1099-4300/16/8/4443
This work uses the stocks of the 197 largest companies in the world, in terms of market capitalization, in the financial area, from 2003 to 2012. We study the causal relationships between them using Transfer Entropy, which is calculated using the stocks of those companies and their counterparts lagged by one day. With this, we can assess which companies influence others according to sub-areas of the financial sector, which are banks, diversified financial services, savings and loans, insurance, private equity funds, real estate investment companies, and real estate trust funds. We also analyze the exchange of information between those stocks as seen by Transfer Entropy and the network formed by them based on this measure, verifying that they cluster mainly according to countries of origin, and then by industry and sub-industry. Then we use data on the stocks of companies in the financial sector of some countries that are suffering the most with the current credit crisis, namely Greece, Cyprus, Ireland, Spain, Portugal, and Italy, and assess, also using Transfer Entropy, which companies from the largest 197 are most affected by the stocks of these countries in crisis. The aim is to map a network of influences that may be used in the study of possible contagions originating in those countries in financial crisis.Entropy2014-08-07168Article10.3390/e16084443444344821099-43002014-08-07doi: 10.3390/e16084443Leonidas Sandoval<![CDATA[Entropy, Vol. 16, Pages 4420-4442: Historical and Physical Account on Entropy and Perspectives on the Second Law of Thermodynamics for Astrophysical and Cosmological Systems]]>
http://www.mdpi.com/1099-4300/16/8/4420
We performed an in-depth analysis of the subjects of entropy and the second law of thermodynamics and how they are treated in astrophysical systems. These subjects are retraced historically from the early works on thermodynamics to the modern statistical mechanical approach, and analyzed in view of specific practices within the field of astrophysics. As often happens in discussions regarding cosmology, the implications of this analysis range from physics to philosophy of science. We argue that the difficult question regarding entropy and the second law in the scope of cosmology is a consequence of the dominating paradigm. We further demonstrate this point by assuming an alternative paradigm, not related to thermodynamics of horizons, and successfully describing the entropic behavior of astrophysical systems.Entropy2014-08-05168Article10.3390/e16084420442044421099-43002014-08-05doi: 10.3390/e16084420Jeroen Schoenmaker<![CDATA[Entropy, Vol. 16, Pages 4408-4419: Information Entropy Evolution for Groundwater Flow System: A Case Study of Artificial Recharge in Shijiazhuang City, China]]>
http://www.mdpi.com/1099-4300/16/8/4408
The groundwater flow system is a typical dissipative structure system, and its evolution can be described with system information entropies. The information entropies of groundwater in Shijiazhuang City were calculated for the period between 1960 and 2005, and the results show that the entropies have a decreasing trend throughout the research period. The entropy variation of the groundwater flow system can be divided into four stages: an entropy steady period (1960–1965), an entropy decreasing period (1965–1980), an entropy increasing period (1980–1995) and a secondary entropy decreasing period (1995–2005). Understanding the major driving forces behind the changing pattern of groundwater levels is essential to groundwater management. A new method of grey correlation analysis is presented, and the results show that the grey correlation grade between the groundwater flow system information entropies and the precipitation series is γ01 = 0.749, while that between the information entropies and the groundwater withdrawal series is γ02 = 0.814, indicating that groundwater withdrawal is the main driving force of the entropy variation. Based on the numerical simulation results, information entropy increased with artificial recharge: a smaller recharge water volume enhanced the information entropy drastically, but doubling the recharge volume did not increase the information entropy correspondingly, which could be useful for assessing the health state of groundwater flow systems.Entropy2014-08-05168Article10.3390/e16084408440844191099-43002014-08-05doi: 10.3390/e16084408Wei XuShanghai Du<![CDATA[Entropy, Vol. 16, Pages 4392-4407: Exergy Analysis of a Subcritical Refrigeration Cycle with an Improved Impulse Turbo Expander]]>
http://www.mdpi.com/1099-4300/16/8/4392
The impulse turbo expander (ITE) is employed to replace the throttling valve in the vapor compression refrigeration cycle to improve the system performance. An improved ITE and the corresponding cycle are presented. In the new cycle, the ITE not only acts as an expansion device with work extraction, but also serves as an economizer with vapor injection. An increase of 20% in the isentropic efficiency can be attained for the improved ITE compared with the conventional ITE owing to the reduction of the friction losses of the rotor. The performance of the novel cycle is investigated based on energy and exergy analysis. A correlation of the optimum intermediate pressure in terms of ITE efficiency is developed. The improved ITE cycle increases the exergy efficiency by 1.4%–6.1% over the conventional ITE cycle, 4.6%–8.3% over the economizer cycle and 7.2%–21.6% over the base cycle. Furthermore, the improved ITE cycle is also preferred due to its lower exergy loss.Entropy2014-08-04168Article10.3390/e16084392439244071099-43002014-08-04doi: 10.3390/e16084392Zhenying ZhangLili Tian<![CDATA[Entropy, Vol. 16, Pages 4375-4391: An Energetic Analysis of the Phase Separation in Non-Ionic Surfactant Mixtures: The Role of the Headgroup Structure]]>
http://www.mdpi.com/1099-4300/16/8/4375
The main goal of this paper was to examine the effect of the hydrophilic surfactant headgroup on the phase behavior of non-ionic surfactant mixtures. Four mixed systems, each composed of an ethoxylated and a sugar-based surfactant with the same hydrophobic tail, were investigated. We found that the hydrophilicity of the surfactant inhibits the tendency of the system to phase separate, which is sensitive to the presence of NaCl. Applying a classical phase separation thermodynamic model, the corresponding energy parameters were evaluated. In all cases, the parameters were found to depend on the type of nonionic surfactant, its concentration in the micellar solution and the presence of NaCl in the medium. The experimental results can be explained by assuming that the phase separation process takes place as a result of reduced hydration of the surfactant headgroup caused by a temperature increase. The enthalpy-entropy compensation plot exhibits excellent linearity. We found that all the mixed surfactant systems coincided on the same straight line, the compensation temperature being lower in the presence of NaCl.Entropy2014-08-04168Article10.3390/e16084375437543911099-43002014-08-04doi: 10.3390/e16084375José HierrezueloJosé Molina-BolívarCristóbal Ruiz<![CDATA[Entropy, Vol. 16, Pages 4353-4374: Learning Functions and Approximate Bayesian Computation Design: ABCD]]>
http://www.mdpi.com/1099-4300/16/8/4353
A general approach to Bayesian learning revisits some classical results, which study which functionals on a prior distribution are expected to increase, in a preposterior sense. The results are applied to information functionals of the Shannon type and to a class of functionals based on expected distance. A close connection is made between the latter and a metric embedding theory due to Schoenberg and others. For the Shannon type, there is a connection to majorization theory for distributions. A computational method is described to solve generalized optimal experimental design problems arising from the learning framework based on a version of the well-known approximate Bayesian computation (ABC) method for carrying out the Bayesian analysis based on Monte Carlo simulation. Some simple examples are given.Entropy2014-08-04168Article10.3390/e16084353435343741099-43002014-08-04doi: 10.3390/e16084353Markus HainyWerner MüllerHenry P. Wynn<![CDATA[Entropy, Vol. 16, Pages 4338-4352: A Natural Gradient Algorithm for Stochastic Distribution Systems]]>
http://www.mdpi.com/1099-4300/16/8/4338
In this paper, we propose a steepest descent algorithm based on the natural gradient to design the controller of an open-loop stochastic distribution control system (SDCS) of multi-input and single output with a stochastic noise. Since the control input vector decides the shape of the output probability density function (PDF), the purpose of the controller design is to select a proper control input vector, so that the output PDF of the SDCS can be as close as possible to the target PDF. By virtue of the statistical characterizations of the SDCS, a new framework based on a statistical manifold is proposed to formulate the control design of the input and output SDCSs. Here, the Kullback–Leibler divergence is presented as a cost function to measure the distance between the output PDF and the target PDF. Therefore, an iterative descent algorithm is provided, and the convergence of the algorithm is discussed, followed by an illustrative example of its effectiveness.Entropy2014-08-04168Article10.3390/e16084338433843521099-43002014-08-04doi: 10.3390/e16084338Zhenning ZhangHuafei SunLinyu PengLin Jiu<![CDATA[Entropy, Vol. 16, Pages 4322-4337: Exclusion-Zone Dynamics Explored with Microfluidics and Optical Tweezers]]>
http://www.mdpi.com/1099-4300/16/8/4322
The exclusion zone (EZ) is a boundary region devoid of macromolecules and microscopic particles formed spontaneously in the vicinity of hydrophilic surfaces. The exact mechanisms behind this remarkable phenomenon are still not fully understood and are debated. We measured the short- and long-time-scale kinetics of EZ formation around a Nafion gel embedded in specially designed microfluidic devices. The time-dependent kinetics of EZ formation follow a power law with an exponent of 0.6 that is strikingly close to the value of 0.5 expected for a diffusion-driven process. By using optical tweezers we show that exclusion forces, which are estimated to fall in the sub-pN regime, persist within the fully-developed EZ, suggesting that EZ formation is not a quasi-static but rather an irreversible process. Accordingly, the EZ-forming capacity of the Nafion gel could be exhausted with time, on a scale of hours in the presence of 1 mM Na2HPO4. EZ formation may thus be a non-equilibrium thermodynamic cross-effect coupled to a diffusion-driven transport process. Such phenomena might be particularly important in the living cell by providing mechanical cues within the complex cytoplasmic environment.Entropy2014-08-04168Article10.3390/e16084322432243371099-43002014-08-04doi: 10.3390/e16084322István HuszárZsolt MártonfalviAndrás LakiKristóf IvánMiklós Kellermayer<![CDATA[Entropy, Vol. 16, Pages 4309-4321: Effect of Suction Nozzle Pressure Drop on the Performance of an Ejector-Expansion Transcritical CO2 Refrigeration Cycle]]>
http://www.mdpi.com/1099-4300/16/8/4309
The basic transcritical CO2 systems exhibit low energy efficiency due to their large throttling loss. Replacing the throttle valve with an ejector is an effective measure for recovering some of the energy lost in the expansion process. In this paper, a thermodynamic model of the ejector-expansion transcritical CO2 refrigeration cycle is developed. The effect of the suction nozzle pressure drop (SNPD) on the cycle performance is discussed. The results indicate that the SNPD has little impact on entrainment ratio. There exists an optimum SNPD which gives a maximum recovered pressure and COP under a specified condition. The value of the optimum SNPD mainly depends on the efficiencies of the motive nozzle and the suction nozzle, but it is essentially independent of evaporating temperature and gas cooler outlet temperature. Through optimizing the value of SNPD, the maximum COP of the ejector-expansion cycle can be up to 45.1% higher than that of the basic cycle. The exergy loss of the ejector-expansion cycle is reduced about 43.0% compared with the basic cycle.Entropy2014-08-04168Article10.3390/e16084309430943211099-43002014-08-04doi: 10.3390/e16084309Zhenying ZhangLili Tian<![CDATA[Entropy, Vol. 16, Pages 4290-4308: “Lagrangian Temperature”: Derivation and Physical Meaning for Systems Described by Kappa Distributions]]>
http://www.mdpi.com/1099-4300/16/8/4290
The paper studies the “Lagrangian temperature” defined through the entropy maximization in the canonical ensemble, which is the negative inverse Lagrangian multiplier corresponding to the constraint of internal energy. The Lagrangian temperature is derived for systems out of thermal equilibrium described by kappa distributions such as space plasmas. The physical meaning of temperature is manifested by the equivalency of two different definitions, that is, through Maxwell’s kinetic theory and Clausius’ thermodynamics. The equivalency of the two definitions is true either for systems at thermal equilibrium described by Maxwell distributions or for systems out of thermal equilibrium described by kappa distributions, and gives the meaning of the actual temperature, that is, the real or measured temperature. However, the third definition, that of the Lagrangian temperature, coincides with the primary two definitions only at thermal equilibrium, and thus, in the general case of systems out of thermal equilibrium, it does not represent the actual temperature, but it is rather a function of this. The paper derives and examines the exact expression and physical meaning of the Lagrangian temperature, showing that it has essentially different content to what is commonly thought. This is achieved by: (i) maximizing the entropy in the continuous description of energy within the general framework of non-extensive statistical mechanics, (ii) using the concept of the “N-particle” kappa distribution, which is governed by a special kappa index that is invariant of the degrees of freedom and the number of particles, and (iii) determining the appropriate scales of length and speed involved in the phase-space microstates. Finally, the paper demonstrates the behavior of the Lagrangian against the actual temperature in various datasets of space plasmas.Entropy2014-07-30168Article10.3390/e16084290429043081099-43002014-07-30doi: 10.3390/e16084290George Livadiotis<![CDATA[Entropy, Vol. 
16, Pages 4260-4289: Combinatorial Optimization with Information Geometry: The Newton Method]]>
http://www.mdpi.com/1099-4300/16/8/4260
We discuss the use of the Newton method in the computation of max_p E_p[f], where p belongs to a statistical exponential family on a finite state space. In a number of papers, the authors have applied first-order search methods based on information geometry. Second-order methods have been widely used in optimization on manifolds, e.g., matrix manifolds, but appear to be new in statistical manifolds. These methods require the computation of the Riemannian Hessian in a statistical manifold. We use a non-parametric formulation of information geometry in view of further applications in the continuous state space cases, where the construction of a proper Riemannian structure is still an open problem.Entropy2014-07-28168Article10.3390/e16084260426042891099-43002014-07-28doi: 10.3390/e16084260Luigi MalagòGiovanni Pistone<![CDATA[Entropy, Vol. 16, Pages 4246-4259: Thermoeconomic Evaluation of Integrated Solar Combined Cycle Systems (ISCCS)]]>
http://www.mdpi.com/1099-4300/16/8/4246
Three alternatives for integrating a solar field with the bottoming cycle of a combined cycle plant are modeled: parabolic troughs with oil at intermediate and low cycle pressures and Fresnel linear collectors at low cycle pressure. It is assumed that the plant will always operate at nominal conditions, using post-combustion during the hours of no solar resource. A thermoeconomic study of the operation of the plant throughout a year has been carried out. The energy and exergy efficiencies of the plant working in fuel-only and hybrid modes are compared. The energy efficiencies obtained are very similar; slightly better for the fuel-only mode. The exergy efficiencies are slightly better for hybrid operation than for fuel-only mode, due to the high exergy destruction associated with post-combustion. The values for solar electric efficiency are in line with those of similar studies. The economic study shows that the Fresnel hybridization alternative offers similar performance to the others at a significantly lower cost.Entropy2014-07-28168Article10.3390/e16084246424642591099-43002014-07-28doi: 10.3390/e16084246Javier MartínEva RodríguezIgnacio PaniaguaCelina Fernández<![CDATA[Entropy, Vol. 16, Pages 4199-4245: Computer Simulations of Soft Matter: Linking the Scales]]>
http://www.mdpi.com/1099-4300/16/8/4199
In the last few decades, computer simulations have become a fundamental tool in the field of soft matter science, allowing researchers to investigate the properties of a large variety of systems. Nonetheless, even the most powerful computational resources presently available are, in general, sufficient to simulate complex biomolecules only for a few nanoseconds. This limitation is often circumvented by using coarse-grained models, in which only a subset of the system’s degrees of freedom is retained. For an effective and insightful use of these simplified models, however, an appropriate parametrization of the interactions is of fundamental importance. Additionally, in many cases the removal of fine-grained details in a specific, small region of the system would destroy relevant features; such cases can be treated using dual-resolution simulation methods, where a subregion of the system is described with high resolution, and a coarse-grained representation is employed in the rest of the simulation domain. In this review we discuss the basic notions of coarse-graining theory, presenting the most common methodologies employed to build low-resolution descriptions of a system and putting particular emphasis on their similarities and differences. The AdResS and H-AdResS adaptive resolution simulation schemes are reported as examples of dual-resolution approaches, focusing in particular on their theoretical background.Entropy2014-07-28168Review10.3390/e16084199419942451099-43002014-07-28doi: 10.3390/e16084199Raffaello PotestioChristine PeterKurt Kremer<![CDATA[Entropy, Vol. 16, Pages 4185-4198: Block Access Token Renewal Scheme Based on Secret Sharing in Apache Hadoop]]>
http://www.mdpi.com/1099-4300/16/8/4185
In a cloud computing environment, user data is encrypted and stored using a large number of distributed servers. Global Internet service companies such as Google and Yahoo have recognized the importance of Internet service platforms and conducted their own research and development to utilize large cluster-based cloud computing platform technologies based on low-cost commercial off-the-shelf nodes. Accordingly, as various data services are now allowed over a distributed computing environment, distributed management of big data has become a major issue. On the other hand, security vulnerability and privacy infringement due to malicious attackers or internal users can occur by means of various usage types of big data. In particular, various security vulnerabilities can occur in the block access token, which is used for the permission control of data blocks in Hadoop. To solve this problem, we have previously proposed a weight-applied XOR-based efficient distribution storage and recovery scheme. In this paper, a secret sharing-based block access token management scheme is proposed to overcome such security vulnerabilities.Entropy2014-07-24168Article10.3390/e16084185418541981099-43002014-07-24doi: 10.3390/e16084185Su-Hyun KimIm-Yeong Lee<![CDATA[Entropy, Vol. 16, Pages 4168-4184: Characterizing the Asymptotic Per-Symbol Redundancy of Memoryless Sources over Countable Alphabets in Terms of Single-Letter Marginals]]>
http://www.mdpi.com/1099-4300/16/7/4168
The minimum expected number of bits needed to describe a random variable is its entropy, assuming knowledge of the distribution of the random variable. On the other hand, universal compression describes data supposing that the underlying distribution is unknown, but that it belongs to a known set P of distributions. However, since universal descriptions are not matched exactly to the underlying distribution, the number of bits they use on average is higher, and the excess over the entropy used is the redundancy. In this paper, we study the redundancy incurred by the universal description of strings of positive integers (Z+), the strings being generated independently and identically distributed (i.i.d.) according to an unknown distribution over Z+ in a known collection P. We first show that if describing a single symbol incurs finite redundancy, then P is tight, but that the converse does not always hold. If a single symbol can be described with finite worst-case regret (a more stringent formulation than redundancy above), then it is known that describing length n i.i.d. strings only incurs vanishing (to zero) redundancy per symbol as n increases. On the contrary, we show it is possible that the description of a single symbol from an unknown distribution of P incurs finite redundancy, yet the description of length n i.i.d. strings incurs a constant (&gt; 0) redundancy per symbol encoded. We then show a sufficient condition on single-letter marginals, such that length n i.i.d. samples will incur vanishing redundancy per symbol encoded.Entropy2014-07-23167Article10.3390/e16074168416841841099-43002014-07-23doi: 10.3390/e16074168Maryam HosseiniNarayana Santhanam<![CDATA[Entropy, Vol. 16, Pages 4132-4167: Network Decomposition and Complexity Measures: An Information Geometrical Approach]]>
http://www.mdpi.com/1099-4300/16/7/4132
We consider the graph representation of a stochastic model with n binary variables and develop an information-theoretical framework to measure the degree of statistical association existing between subsystems, as well as the associations represented by each edge of the graph. In addition, we consider novel measures of complexity with respect to the decomposability of the system, by introducing the geometric product of Kullback–Leibler (KL-) divergences. The novel complexity measures satisfy the boundary condition of vanishing in the limits of the completely random and the completely ordered states, and also in the presence of an independent subsystem of any size. Such complexity measures, based on geometric means, are relevant to the heterogeneity of dependencies between subsystems and to the amount of information propagation shared throughout the entire system.Entropy2014-07-23167Article10.3390/e16074132413241671099-43002014-07-23doi: 10.3390/e16074132Masatoshi Funabashi<![CDATA[Entropy, Vol. 16, Pages 4121-4131: Numerical Investigation on the Temperature Characteristics of the Voice Coil for a Woofer Using Thermal Equivalent Heat Conduction Models]]>
http://www.mdpi.com/1099-4300/16/7/4121
The objective of this study is to numerically investigate the temperature and heat transfer characteristics of the voice coil for a woofer with and without bobbins using the thermal equivalent heat conduction models. The temperature and heat transfer characteristics of the main components of the woofer were analyzed with input powers ranging from 5 W to 60 W. The numerical results for the voice coil showed good agreement, within ±1%, with the data of Odenbach (2003). The temperatures of the voice coil and its units for the woofer without the bobbin were, on average, 6.1% and 5.0% lower, respectively, than those of the woofer with the bobbin. However, at an input power of 30 W for the voice coil, the temperatures of the main components of the woofer without the bobbin were 40.0% higher on average than those of the woofer obtained by Lee et al. (2013).Entropy2014-07-21167Article10.3390/e16074121412141311099-43002014-07-21doi: 10.3390/e16074121Moo-Yeon LeeHyung-Jin Kim<![CDATA[Entropy, Vol. 16, Pages 4101-4120: Can the Hexagonal Ice-like Model Render the Spectroscopic Fingerprints of Structured Water? Feedback from Quantum-Chemical Computations]]>
http://www.mdpi.com/1099-4300/16/7/4101
The spectroscopic features of the multilayer honeycomb model of structured water are analyzed on theoretical grounds, by using high-level ab initio quantum-chemical methodologies, through model systems built by two fused hexagons of water molecules: the monomeric system [H19O10], in different oxidation states (anionic and neutral species). The findings do not support anionic species as the origin of the spectroscopic fingerprints observed experimentally for structured water. In this context, hexameric anions can just be seen as a source of hydrated hydroxyl anions and cationic species. The results for the neutral dimer are, however, fully consistent with the experimental evidence related to both absorption and fluorescence spectra. The neutral π-stacked dimer [H38O20] can be assigned as the species mainly responsible for the recorded absorption and fluorescence spectra, with computed band maxima at 271 nm (4.58 eV) and 441 nm (2.81 eV), respectively. The important role of triplet excited states is finally discussed. The most intense vertical triplet → triplet transition is predicted to be at 318 nm (3.90 eV).Entropy2014-07-21167Article10.3390/e16074101410141201099-43002014-07-21doi: 10.3390/e16074101Javier Segarra-MartíDaniel Roca-SanjuánManuela Merchán<![CDATA[Entropy, Vol. 16, Pages 4088-4100: Using Geometry to Select One Dimensional Exponential Families That Are Monotone Likelihood Ratio in the Sample Space, Are Weakly Unimodal and Can Be Parametrized by a Measure of Central Tendency]]>
http://www.mdpi.com/1099-4300/16/7/4088
One dimensional exponential families on finite sample spaces are studied using the geometry of the simplex Δ°n-1 and that of a transformation Vn-1 of its interior. This transformation is the natural parameter space associated with the family of multinomial distributions. The space Vn-1 is partitioned into cones that are used to find one dimensional families with desirable properties for modeling and inference. These properties include the availability of uniformly most powerful tests and estimators that exhibit optimal properties in terms of variability and unbiasedness.Entropy2014-07-18167Article10.3390/e16074088408841001099-43002014-07-18doi: 10.3390/e16074088Paul VosKarim Anaya-Izquierdo<![CDATA[Entropy, Vol. 16, Pages 4060-4087: Biosemiotic Entropy: Concluding the Series]]>
http://www.mdpi.com/1099-4300/16/7/4060
This article concludes the special issue on Biosemiotic Entropy by looking toward the future on the basis of current and prior results. It highlights certain aspects of the series, concerning factors that damage and degenerate biosignaling systems. As in ordinary linguistic discourse, well-formedness (coherence) in biological signaling systems depends on valid representations correctly construed: a series of proofs are presented and generalized to all meaningful sign systems. The proofs show why infants must (as empirical evidence shows they do) proceed through a strict sequence of formal steps in acquiring any language. Classical and contemporary conceptions of entropy and information are deployed showing why factors that interfere with coherence in biological signaling systems are necessary and sufficient causes of disorders, diseases, and mortality. Known sources of such formal degeneracy in living organisms (here termed biosemiotic entropy) include: (a) toxicants; (b) pathogens; (c) excessive exposures to radiant energy and/or sufficiently powerful electromagnetic fields; (d) traumatic injuries; and (e) interactions between the foregoing factors. Just as Jaynes proved that irreversible changes invariably increase entropy, the theory of true narrative representations (TNR theory) demonstrates that factors disrupting the well-formedness (coherence) of valid representations, all else being held equal, must increase biosemiotic entropy—the kind impacting biosignaling systems.Entropy2014-07-18167Editorial10.3390/e16074060406040871099-43002014-07-18doi: 10.3390/e16074060John Oller<![CDATA[Entropy, Vol. 16, Pages 4044-4059: Entropy and Its Discontents: A Note on Definitions]]>
http://www.mdpi.com/1099-4300/16/7/4044
The routine definitions of Shannon entropy for both discrete and continuous probability laws show inconsistencies that make them not reciprocally coherent. We propose a few possible modifications of these quantities so that: (1) they no longer show incongruities; and (2) they go one into the other in a suitable limit as the result of a renormalization. The properties of the new quantities would slightly differ from those of the usual entropies in a few other respects.Entropy2014-07-17167Article10.3390/e16074044404440591099-43002014-07-17doi: 10.3390/e16074044Nicola Petroni<![CDATA[Entropy, Vol. 16, Pages 4032-4043: Application of a Modified Entropy Computational Method in Assessing the Complexity of Pulse Wave Velocity Signals in Healthy and Diabetic Subjects]]>
http://www.mdpi.com/1099-4300/16/7/4032
Using 1000 successive points of a pulse wave velocity (PWV) series, we previously distinguished healthy from diabetic subjects with multi-scale entropy (MSE) using a scale factor of 10. One major limitation is the long time for data acquisition (i.e., 20 min). This study aimed at validating the sensitivity of a novel method, short time MSE (sMSE), that utilized a substantially smaller sample size (i.e., 600 consecutive points), in differentiating the complexity of PWV signals both in simulation and in human subjects that were divided into four groups: healthy young (Group 1; n = 24) and middle-aged (Group 2; n = 30) subjects without known cardiovascular disease and middle-aged individuals with well-controlled (Group 3; n = 18) and poorly-controlled (Group 4; n = 22) diabetes mellitus type 2. The results demonstrated that although conventional MSE could differentiate the subjects using 1000 consecutive PWV series points, sensitivity was lost using only 600 points. The simulation study revealed consistent results. By contrast, the novel sMSE method produced significant differences in entropy in both simulation and testing subjects. In conclusion, this study demonstrated that using a novel sMSE approach for PWV analysis, the time for data acquisition can be substantially reduced to that required for 600 cardiac cycles (~10 min) with remarkable preservation of sensitivity in differentiating among healthy, aged, and diabetic populations.Entropy2014-07-17167Article10.3390/e16074032403240431099-43002014-07-17doi: 10.3390/e16074032Yi-Chung ChangHsien-Tsai WuHong-Ruei ChenAn-Bang LiuJung-Jen YehMen-Tzung LoJen-Ho TsaoChieh-Ju TangI-Ting TsaiCheuk-Kwan Sun<![CDATA[Entropy, Vol. 16, Pages 4015-4031: New Riemannian Priors on the Univariate Normal Model]]>
http://www.mdpi.com/1099-4300/16/7/4015
The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as “Riemannian priors”. Precisely, if {pθ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ &gt; 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao’s Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.Entropy2014-07-17167Article10.3390/e16074015401540311099-43002014-07-17doi: 10.3390/e16074015Salem SaidLionel BombrunYannick Berthoumieu<![CDATA[Entropy, Vol. 
16, Pages 4004-4014: A Note of Caution on Maximizing Entropy]]>
http://www.mdpi.com/1099-4300/16/7/4004
The Principle of Maximum Entropy is often used to update probabilities due to evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often yields efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases, and discusses how to identify some of the situations in which this principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy sometimes can stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.Entropy2014-07-17167Article10.3390/e16074004400440141099-43002014-07-17doi: 10.3390/e16074004Richard NeapolitanXia Jiang<![CDATA[Entropy, Vol. 16, Pages 3939-4003: Human Brain Networks: Spiking Neuron Models, Multistability, Synchronization, Thermodynamics, Maximum Entropy Production, and Anesthetic Cascade Mechanisms]]>
http://www.mdpi.com/1099-4300/16/7/3939
Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. 
The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a theoretical foundation for general anesthesia using the network properties of the brain. Finally, we present some key emergent properties from the fields of thermodynamics and electromagnetic field theory to qualitatively explain the underlying neuronal mechanisms of action for anesthesia and consciousness.Entropy2014-07-17167Article10.3390/e16073939393940031099-43002014-07-17doi: 10.3390/e16073939Wassim HaddadQing HuiJames Bailey<![CDATA[Entropy, Vol. 16, Pages 3903-3938: Panel I: Connecting 2nd Law Analysis with Economics, Ecology and Energy Policy]]>
http://www.mdpi.com/1099-4300/16/7/3903
The present paper is a review of several papers from the Proceedings of the Joint European Thermodynamics Conference, held in Brescia, Italy, 1–5 July 2013, namely papers introduced by their authors at Panel I of the conference. Panel I was devoted to applications of the Second Law of Thermodynamics to social issues—economics, ecology, sustainability, and energy policy. The concept called Available Energy which goes back to mid-nineteenth century work of Kelvin, Rankine, Maxwell and Gibbs, is relevant to all of the papers. Various names have been applied to the concept when interactions between the system of interest and an environment are involved. Today, the name exergy is generally accepted. The scope of the papers being reviewed is wide and they complement one another well.Entropy2014-07-16167Review10.3390/e16073903390339381099-43002014-07-16doi: 10.3390/e16073903Richard GaggioliMauro Reini<![CDATA[Entropy, Vol. 16, Pages 3889-3902: Identifying Chaotic FitzHugh–Nagumo Neurons Using Compressive Sensing]]>
http://www.mdpi.com/1099-4300/16/7/3889
We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shift in their individual dynamical systems and may lead to abnormal functions of the network. Accurately identifying the chaotic neurons may thus be necessary and important, for example, for applying appropriate controls to bring the network back to a normal state. However, due to couplings among the nodes, the measured time series, even from non-chaotic neurons, would appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, network connections and all coupling functions, as well as their weights. The operation and efficiency of the method are illustrated by using networks of non-identical FitzHugh–Nagumo neurons with randomly-distributed coupling weights.Entropy2014-07-15167Article10.3390/e16073889388939021099-43002014-07-15doi: 10.3390/e16073889Ri-Qi SuYing-Cheng LaiXiao Wang<![CDATA[Entropy, Vol. 16, Pages 3878-3888: The Entropy-Based Quantum Metric]]>
http://www.mdpi.com/1099-4300/16/7/3878
The von Neumann entropy S(D̂) generates in the space of quantum density matrices D̂ the Riemannian metric ds² = −d²S(D̂), which is physically founded and which characterises the amount of quantum information lost by mixing D̂ and D̂ + dD̂. A rich geometric structure is thereby implemented in quantum mechanics. It includes a canonical mapping between the spaces of states and of observables, which involves the Legendre transform of S(D̂). The Kubo scalar product is recovered within the space of observables. Applications are given to equilibrium and non-equilibrium quantum statistical mechanics. There the formalism is specialised to the relevant space of observables and to the associated reduced states issued from the maximum entropy criterion, which result from the exact states through an orthogonal projection. Von Neumann’s entropy specialises into a relevant entropy. Comparison is made with other metrics. The Riemannian properties of the metric ds² = −d²S(D̂) are derived. The curvature arises from the non-Abelian nature of quantum mechanics; its general expression and its explicit form for q-bits are given, as well as geodesics.Entropy2014-07-15167Article10.3390/e16073878387838881099-43002014-07-15doi: 10.3390/e16073878Roger Balian<![CDATA[Entropy, Vol. 16, Pages 3866-3877: Many Can Work Better than the Best: Diagnosing with Medical Images via Crowdsourcing]]>
http://www.mdpi.com/1099-4300/16/7/3866
We study a crowdsourcing-based diagnosis algorithm, motivated by the fact that what is currently scarce is not medical staff in general, but high-level experts. Our approach is to make use of the general practitioners’ efforts: for every patient whose illness cannot be judged definitely, we arrange for them to be diagnosed multiple times by different doctors, and we collect all the diagnosis results to derive the final judgement. Our inference model is based on the statistical consistency of the diagnosis data. To evaluate the proposed model, we conduct experiments on both synthetic and real data; the results show that it outperforms the benchmarks.Entropy2014-07-14167Article10.3390/e16073866386638771099-43002014-07-14doi: 10.3390/e16073866Xian-Hong XiangXiao-Yu HuangXiao-Ling ZhangChun-Fang CaiJian-Yong YangLei Li<![CDATA[Entropy, Vol. 16, Pages 3848-3865: Hierarchical Geometry Verification via Maximum Entropy Saliency in Image Retrieval]]>
http://www.mdpi.com/1099-4300/16/7/3848
We propose a new geometric verification method for image retrieval, Hierarchical Geometry Verification via Maximum Entropy Saliency (HGV), which aims to filter redundant matches while retaining information about the retrieval target that lies partly outside the salient regions, using hierarchical saliency, and to fully explore the geometric context of all visual words in images. First of all, we obtain hierarchical salient regions of a query image based on the maximum entropy principle and label visual features with salient tags. The tags added to the feature descriptors are used to compute the saliency matching score, and the scores are regarded as the weight information in the geometry verification step. Second, we define a spatial pattern as a triangle composed of three matched features and evaluate the similarity between every two spatial patterns. Finally, we sum all spatial matching scores with weights to generate the final ranking list. Experimental results show that Hierarchical Geometry Verification based on Maximum Entropy Saliency can not only improve retrieval accuracy, but also reduce the time consumption of the full retrieval.Entropy2014-07-14167Article10.3390/e16073848384838651099-43002014-07-14doi: 10.3390/e16073848Hongwei ZhaoQingliang LiPingping Liu<![CDATA[Entropy, Vol. 16, Pages 3832-3847: Variational Bayes for Regime-Switching Log-Normal Models]]>
http://www.mdpi.com/1099-4300/16/7/3832
The power of projection using divergence functions is a major theme in information geometry. One version of this is the variational Bayes (VB) method. This paper looks at VB in the context of other projection-based methods in information geometry. It also describes how to apply VB to the regime-switching log-normal model and how it provides a computationally fast solution to quantify the uncertainty in the model specification. The results show that the method can recover the model structure exactly, gives reasonable point estimates and is computationally very efficient. The potential problems of the method in quantifying the parameter uncertainty are discussed.Entropy2014-07-14167Article10.3390/e16073832383238471099-43002014-07-14doi: 10.3390/e16073832Hui ZhaoPaul Marriott<![CDATA[Entropy, Vol. 16, Pages 3813-3831: Phase Competitions behind the Giant Magnetic Entropy Variation: Gd5Si2Ge2 and Tb5Si2Ge2 Case Studies]]>
http://www.mdpi.com/1099-4300/16/7/3813
Magnetic materials with strong spin-lattice coupling are a powerful set of candidates for multifunctional applications because of their multiferroic, magnetocaloric (MCE), magnetostrictive and magnetoresistive effects. In these materials there is a strong competition between two states (where a state comprises an atomic and an associated magnetic structure) that leads to the occurrence of phase transitions under subtle variations of external parameters, such as temperature, magnetic field and hydrostatic pressure. In this review a general method combining detailed magnetic measurements/analysis and first principles calculations with the purpose of estimating the phase transition temperature is presented with the help of two examples (Gd5Si2Ge2 and Tb5Si2Ge2). It is demonstrated that such a method is an important tool for a deeper understanding of the (de)coupled nature of each phase transition in the materials belonging to the R5(Si,Ge)4 family and can most likely be applied to other systems. The exotic Griffiths-like phase in the framework of the R5(SixGe1-x)4 compounds is reviewed and its generalization as a requisite for systems with strong phase competition that present large magneto-responsive properties is proposed.Entropy2014-07-11167Review10.3390/e16073813381338311099-43002014-07-11doi: 10.3390/e16073813Ana PiresJoão BeloArmandina LopesIsabel GomesLuis MorellónCesar MagenPedro AlgarabelManuel IbarraAndré PereiraJoão Araújo<![CDATA[Entropy, Vol. 16, Pages 3808-3812: Entropy Generation in Steady Laminar Boundary Layers with Pressure Gradients]]>
http://www.mdpi.com/1099-4300/16/7/3808
In an earlier paper in Entropy [1] we hypothesized that the entropy generation rate is the driving force for boundary layer transition from laminar to turbulent flow. Subsequently, with our colleagues we have examined the prediction of entropy generation during such transitions [2,3]. We found that reasonable predictions for engineering purposes could be obtained for flows with negligible streamwise pressure gradients by adapting the linear combination model of Emmons [4]. A question then arises: will the Emmons approach be useful for boundary layer transition with significant streamwise pressure gradients, as studied by Nolan and Zaki [5]? In our implementation the intermittency is calculated by comparison to skin friction correlations for laminar and turbulent boundary layers and is then applied with comparable correlations for the energy dissipation coefficient (i.e., the non-dimensional integral entropy generation rate). In the case of negligible pressure gradients the Blasius theory provides the necessary laminar correlations.Entropy2014-07-10167Letter10.3390/e16073808380838121099-43002014-07-10doi: 10.3390/e16073808Donald McEligotEdmond Walsh<![CDATA[Entropy, Vol. 16, Pages 3793-3807: Entropy vs. Majorization: What Determines Complexity?]]>
http://www.mdpi.com/1099-4300/16/7/3793
The evolution of a microcanonical statistical ensemble of states of isolated systems from order to disorder, as determined by increasing entropy, is compared to an alternative evolution that is determined by mixing character. The fact that the partitions of an integer N are in one-to-one correspondence with macrostates for N distinguishable objects is noted. Orders for integer partitions are given, including the original order by Young and the Boltzmann order by entropy. Mixing character (represented by Young diagrams) is seen to be a partially ordered quality rather than a quantity (see Ruch, 1975). The majorization partial order is reviewed, as is its Hasse diagram representation, known as the Young Diagram Lattice (YDL). Two lattices that show allowed transitions between macrostates are obtained from the YDL: we term these the mixing lattice and the diversity lattice. We study the dynamics (time evolution) on the two lattices, namely the sequence of steps on the lattices (i.e., the path or trajectory) that leads from low-entropy, less mixed states to high-entropy, highly mixed states. These paths are sequences of macrostates with monotonically increasing entropy. The distributions of path lengths on the two lattices are obtained via Monte Carlo methods, and surprisingly both distributions appear Gaussian. However, the width of the path-length distribution for diversity is the square root of that for mixing, suggesting a qualitative difference in their temporal evolution. Another surprising result is that some macrostates occur in many paths while others do not. The evolution at low entropy and at high entropy is quite simple, but at intermediate entropies, the number of possible evolutionary paths is extremely large (due to the extensive branching of the lattices). 
A quantitative complexity measure associated with incomparability of macrostates in the mixing partial order is proposed, complementing Kolmogorov complexity and Shannon entropy.Entropy2014-07-09167Article10.3390/e16073793379338071099-43002014-07-09doi: 10.3390/e16073793William SeitzA. Kirwan<![CDATA[Entropy, Vol. 16, Pages 3769-3792: On Conservation Equation Combinations and Closure Relations]]>
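The majorization (dominance) partial order described in the abstract above has a simple computable form: a partition p of N majorizes a partition q of N exactly when every partial sum of p (written in non-increasing order) is at least the corresponding partial sum of q. A minimal sketch, not code from the paper; the function name and examples are illustrative:

```python
from itertools import zip_longest

def majorizes(p, q):
    """Return True if partition p majorizes partition q in the dominance
    order: both partition the same integer N, are written in
    non-increasing order, and every partial sum of p is >= that of q."""
    if sum(p) != sum(q):
        raise ValueError("p and q must partition the same integer")
    sp = sq = 0
    for a, b in zip_longest(p, q, fillvalue=0):
        sp += a
        sq += b
        if sp < sq:
            return False
    return True

# The fully ordered macrostate (4,) majorizes every partition of 4,
# while (2,1,1) and (2,2)-like pairs can be incomparable or ordered.
print(majorizes((4,), (2, 2)))       # True
print(majorizes((3, 1), (2, 2)))     # True: 3 >= 2 and 3+1 >= 2+2
print(majorizes((2, 1, 1), (2, 2)))  # False: 2+1 < 2+2
```

Incomparable pairs (where neither partition majorizes the other) are what make mixing a partially ordered quality rather than a single number, as the abstract emphasizes.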
http://www.mdpi.com/1099-4300/16/7/3769
Fundamental conservation equations for mass, momentum and energy of chemical species can be combined with thermodynamic relations to obtain secondary forms, such as conservation equations for phases, an internal energy balance and a mechanical energy balance. In fact, the forms of secondary equations are infinite and depend on the criteria used in determining which species-based equations to employ and how to combine them. If one uses these secondary forms in developing an entropy inequality to be used in formulating closure relations, care must be employed to ensure that the appropriate equations are used, or problematic results can develop for multispecies systems. We show here that the use of the fundamental forms minimizes the chance of an erroneous formulation in terms of secondary forms and also provides guidance as to which secondary forms should be used if one uses them as a starting point.Entropy2014-07-07167Article10.3390/e16073769376937921099-43002014-07-07doi: 10.3390/e16073769William GrayAmanda Dye<![CDATA[Entropy, Vol. 16, Pages 3754-3768: Maximum Entropy in Drug Discovery]]>
http://www.mdpi.com/1099-4300/16/7/3754
Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes’ pioneering work in the 1950s it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.Entropy2014-07-07167Review10.3390/e16073754375437681099-43002014-07-07doi: 10.3390/e16073754Chih-Yuan TsengJack Tuszynski<![CDATA[Entropy, Vol. 16, Pages 3732-3753: On the Connections of Generalized Entropies With Shannon and Kolmogorov–Sinai Entropies]]>
http://www.mdpi.com/1099-4300/16/7/3732
We consider the concept of generalized Kolmogorov–Sinai entropy, where instead of the Shannon entropy function we consider an arbitrary concave function defined on the unit interval and vanishing at the origin. Under mild assumptions on this function, we show that this isomorphism invariant depends linearly on the Kolmogorov–Sinai entropy.Entropy2014-07-03167Article10.3390/e16073732373237531099-43002014-07-03doi: 10.3390/e16073732Fryderyk Falniowski<![CDATA[Entropy, Vol. 16, Pages 3710-3731: The Role of Vegetation on the Ecosystem Radiative Entropy Budget and Trends Along Ecological Succession]]>
http://www.mdpi.com/1099-4300/16/7/3710
Ecosystem entropy production is predicted to increase along ecological succession and approach a state of maximum entropy production, but few studies have bridged the gap between theory and data. Here, we explore radiative entropy production in terrestrial ecosystems using measurements from 64 Free/Fair-Use sites in the FLUXNET database, including a successional chronosequence in the Duke Forest in the southeastern United States. Ecosystem radiative entropy production increased then decreased as succession progressed in the Duke Forest ecosystems, and did not exceed 95% of the calculated empirical maximum entropy production in the FLUXNET study sites. Forest vegetation, especially evergreen needleleaf forests characterized by low shortwave albedo and close coupling to the atmosphere, had a significantly higher ratio of radiative entropy production to the empirical maximum entropy production than did croplands and grasslands. Our results demonstrate that ecosystems approach, but do not reach, maximum entropy production and that the relationship between succession and entropy production depends on vegetation characteristics. Future studies should investigate how natural disturbances and anthropogenic management—especially the tendency to shift vegetation to an earlier successional state—alter energy flux and entropy production at the surface-atmosphere interface.Entropy2014-07-03167Article10.3390/e16073710371037311099-43002014-07-03doi: 10.3390/e16073710Paul StoyHua LinKimberly NovickMario SiqueiraJehn-Yih Juang<![CDATA[Entropy, Vol. 16, Pages 3689-3709: Searching for Conservation Laws in Brain Dynamics—BOLD Flux and Source Imaging]]>
http://www.mdpi.com/1099-4300/16/7/3689
Blood-oxygen-level-dependent (BOLD) imaging is the most important noninvasive tool to map human brain function. It relies on local blood-flow changes controlled by neurovascular coupling effects, usually in response to some cognitive or perceptual task. In this contribution we ask whether the spatiotemporal dynamics of the BOLD signal can be modeled by a conservation law. In analogy to the description of physical laws, which often can be derived from some underlying conservation law, identification of conservation laws in the brain could lead to new models for the functional organization of the brain. Our model is independent of the nature of the conservation law, but we discuss possible hints and motivations for conservation laws. For example, globally limited blood supply and local competition between brain regions for blood might restrict the large-scale BOLD signal in certain ways that could be observable. One proposed selective pressure for the evolution of such conservation laws is the closed volume of the skull, which limits the expansion of brain tissue by increases in blood volume. These ideas are demonstrated on a mental motor imagery fMRI experiment, in which functional brain activation was mapped in a group of volunteers imagining themselves swimming. In order to search for local conservation laws during this complex cognitive process, we derived maps of quantities resulting from spatial interaction of the BOLD amplitudes. Specifically, we mapped fluxes and sources of the BOLD signal, terms that would appear in a description by a continuity equation. While we cannot present final answers from this particular analysis and experiment, some results seem non-trivial. For example, we found that during the task the group BOLD flux covered more widespread regions than those identified by conventional BOLD mapping and was always increasing. 
It is our hope that these results motivate more work towards the search for conservation laws in neuroimaging experiments or at least towards imaging procedures based on spatial interactions of signals. The payoff could be new models for the dynamics of the healthy brain or more sensitive clinical imaging approaches, respectively.Entropy2014-07-03167Article10.3390/e16073689368937091099-43002014-07-03doi: 10.3390/e16073689Henning VossNicholas Schiff<![CDATA[Entropy, Vol. 16, Pages 3670-3688: Extending the Extreme Physical Information to Universal Cognitive Models via a Confident Information First Principle]]>
http://www.mdpi.com/1099-4300/16/7/3670
The principle of extreme physical information (EPI) can be used to derive many known laws and distributions in theoretical physics by extremizing the physical information loss K, i.e., the difference between the observed Fisher information I and the intrinsic information bound J of the physical phenomenon being measured. However, for complex cognitive systems of high dimensionality (e.g., human language processing and image recognition), the information bound J can be substantially larger than I (J ≫ I), due to insufficient observation, which would lead to serious over-fitting problems in the derivation of cognitive models. Moreover, there is no established exact invariance principle that gives rise to the information bound in universal cognitive systems. This limits the direct application of EPI. To narrow the gap between I and J, in this paper we propose a confident-information-first (CIF) principle to lower the information bound J by preserving confident parameters and ruling out unreliable or noisy parameters in the probability density function being measured. The confidence of each parameter can be assessed by its contribution to the expected Fisher information distance between the physical phenomenon and its observations. In addition, given a specific parametric representation, this contribution can often be directly assessed by the Fisher information, which establishes a connection with the inverse variance of any unbiased estimate for the parameter via the Cramér–Rao bound. We then consider the dimensionality reduction in the parameter spaces of binary multivariate distributions. We show that the single-layer Boltzmann machine without hidden units (SBM) can be derived using the CIF principle. 
An illustrative experiment is conducted to show how the CIF principle improves the density estimation performance.Entropy2014-07-01167Article10.3390/e16073670367036881099-43002014-07-01doi: 10.3390/e16073670Xiaozhao ZhaoYuexian HouDawei SongWenjie Li<![CDATA[Entropy, Vol. 16, Pages 3655-3669: An Estimation of the Entropy for a Rayleigh Distribution Based on Doubly-Generalized Type-II Hybrid Censored Samples]]>
http://www.mdpi.com/1099-4300/16/7/3655
In this paper, based on a doubly generalized Type-II hybrid censored sample, the maximum likelihood estimators (MLEs), the approximate MLE and the Bayes estimator for the entropy of the Rayleigh distribution are derived. We compare the entropy estimators’ root mean squared error (RMSE), bias and Kullback–Leibler divergence values. The simulation procedure is repeated 10,000 times for sample sizes n = 10, 20, 40 and 100 and various doubly generalized Type-II hybrid censoring schemes. Finally, a real data set is analyzed for illustrative purposes.Entropy2014-07-01167Article10.3390/e16073655365536691099-43002014-07-01doi: 10.3390/e16073655Youngseuk ChoHokeun SunKyeongjun Lee<![CDATA[Entropy, Vol. 16, Pages 3635-3654: A Maximum Entropy Fixed-Point Route Choice Model for Route Correlation]]>
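For context on the quantity being estimated above: the Rayleigh distribution with scale σ has closed-form differential entropy H = 1 + ln(σ/√2) + γ/2 (γ is the Euler–Mascheroni constant), and for a complete (uncensored) sample the MLE of σ is σ̂ = √(Σxᵢ²/2n). A minimal plug-in sketch for the uncensored case only; the paper's censored-sample estimators require the likelihood derived there, and all names here are ours:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def rayleigh_entropy(sigma):
    """Differential entropy of Rayleigh(sigma): 1 + ln(sigma/sqrt(2)) + gamma/2."""
    return 1.0 + math.log(sigma / math.sqrt(2.0)) + EULER_GAMMA / 2.0

def sigma_mle(sample):
    """MLE of the Rayleigh scale from a complete sample: sqrt(sum(x^2) / (2n))."""
    n = len(sample)
    return math.sqrt(sum(x * x for x in sample) / (2.0 * n))

# Rayleigh(sigma) is a Weibull with shape 2 and scale sigma*sqrt(2),
# so we can simulate it with the standard library.
random.seed(0)
sigma = 2.0
sample = [random.weibullvariate(sigma * math.sqrt(2.0), 2.0) for _ in range(100)]

# Plug-in entropy estimate vs. the true entropy.
print(rayleigh_entropy(sigma_mle(sample)), rayleigh_entropy(sigma))
```

With n = 100 the plug-in estimate is typically within a few hundredths of a nat of the true value; the paper's point is how this degrades, and which estimator is preferable, under censoring.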
http://www.mdpi.com/1099-4300/16/7/3635
In this paper we present a stochastic route choice model for transit networks that explicitly addresses route correlation due to overlapping alternatives. The model is based on a multi-objective mathematical programming problem, the optimality conditions of which generate an extension to the Multinomial Logit models. The proposed model considers a fixed-point problem for treating correlations between routes, which can be solved iteratively. We estimated the new model on the Santiago (Chile) Metro network and compared the results with other route choice models found in the literature. The new model has better explanatory and predictive power than many of these alternatives, correctly capturing the correlation factor. Our methodology can be extended to private transport networks.Entropy2014-06-30167Article10.3390/e16073635363536541099-43002014-06-30doi: 10.3390/e16073635Louis de GrangeSebastián RaveauFelipe González<![CDATA[Entropy, Vol. 16, Pages 3605-3634: Entropy Evolution and Uncertainty Estimation with Dynamical Systems]]>
http://www.mdpi.com/1099-4300/16/7/3605
This paper presents a comprehensive introduction and systematic derivation of the evolutionary equations for absolute entropy H and relative entropy D, some of which exist sporadically in the literature in different forms under different subjects, within the framework of dynamical systems. In general, both H and D are dissipated, and the dissipation bears a form reminiscent of the Fisher information; in the absence of stochasticity, dH/dt is connected to the rate of phase space expansion, and D stays invariant, i.e., the separation of two probability density functions is always conserved. These formulas are validated with linear systems, and put to application with the Lorenz system and a large-dimensional stochastic quasi-geostrophic flow problem. In the Lorenz case, H falls at a constant rate with time, implying that H will eventually become negative, a situation beyond the capability of commonly used computational techniques such as coarse-graining and bin counting. For the stochastic flow problem, it is first reduced to a computationally tractable low-dimensional system, using a reduced-model approach, and then handled through ensemble prediction. Both the Lorenz system and the stochastic flow system are examples of self-organization in the light of uncertainty reduction. The latter in particular shows that stochasticity may sometimes actually enhance the self-organization process.Entropy2014-06-30167Article10.3390/e16073605360536341099-43002014-06-30doi: 10.3390/e16073605X. Liang
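The constant-rate fall of H in the Lorenz case follows from the standard identity that, for a deterministic flow, dH/dt equals the expected divergence of the vector field; for the classic Lorenz system the divergence is the constant -(s + 1 + b) at every point of phase space. A minimal numerical check of this standard fact (parameter names are ours, not notation from the paper):

```python
def lorenz(state, s=10.0, r=28.0, b=8.0 / 3.0):
    """Lorenz vector field: dx/dt = s(y-x), dy/dt = x(r-z) - y, dz/dt = xy - bz."""
    x, y, z = state
    return (s * (y - x), x * (r - z) - y, x * y - b * z)

def divergence(f, state, h=1e-6):
    """Numerical divergence of a 3-D vector field via central differences."""
    div = 0.0
    for i in range(3):
        up = list(state); up[i] += h
        dn = list(state); dn[i] -= h
        div += (f(tuple(up))[i] - f(tuple(dn))[i]) / (2.0 * h)
    return div

# The divergence is the same constant -(s + 1 + b) = -41/3 everywhere,
# so absolute entropy H falls linearly in time for the deterministic flow.
print(divergence(lorenz, (1.0, 2.0, 3.0)))    # ~ -13.667
print(divergence(lorenz, (-5.0, 0.5, 20.0)))  # ~ -13.667
```

Since the divergence is state-independent, no ensemble averaging is needed to see the constant dH/dt in this case, which is exactly why the Lorenz system makes a clean test of the entropy evolution formulas.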