Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy

Entropy, Vol. 18, Pages 281: Acoustic Detection of Coronary Occlusions before and after Stent Placement Using an Electronic Stethoscope
http://www.mdpi.com/1099-4300/18/8/281
More than 370,000 Americans die every year from coronary artery disease (CAD). Early detection and treatment are crucial to reducing this number. Current diagnostic and disease-monitoring methods are invasive, costly, and time-consuming. Using an electronic stethoscope and spectral and nonlinear dynamics analysis of the recorded heart sound, we investigated the acoustic signature of CAD in subjects with only a single coronary occlusion before and after stent placement, as well as in subjects with clinically normal coronary arteries. The CAD signature was evaluated by estimating the ratio of the total power above 150 Hz to the total power below 150 Hz in the FFT of the acoustic signal. Additionally, approximate entropy values were estimated to assess the differences that the stent placement procedure induces in the time-domain acoustic signature of the signals. The groups were identified with 82% sensitivity and 64% specificity (using the power ratio method) and 82% sensitivity and 55% specificity (using the approximate entropy). Power ratios and approximate entropy values after stent placement are not statistically different from those estimated from subjects with no coronary occlusions. Our approach demonstrates that the effect of stent placement on coronary occlusions can be monitored using an electronic stethoscope.

Entropy 2016, 18(8), 281; doi: 10.3390/e18080281. Published 2016-07-29. Authors: Andrei Dragomir, Allison Post, Yasemin Akay, Hani Jneid, David Paniagua, Ali Denktas, Biykem Bozkurt, Metin Akay.

Entropy, Vol. 18, Pages 280: Acoustic Entropy of the Materials in the Course of Degradation
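The high/low power-ratio feature from the stethoscope study above can be sketched in a few lines. The 150 Hz cutoff comes from the abstract; the sampling rate, windowing, and segmentation choices below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def power_ratio(signal, fs, cutoff_hz=150.0):
    """Total spectral power above cutoff_hz divided by total power below it.

    The 150 Hz cutoff is the one stated in the abstract; windowing and
    segmentation of the heart-sound recording are left to the caller.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    high = spectrum[freqs > cutoff_hz].sum()
    low = spectrum[(freqs > 0) & (freqs <= cutoff_hz)].sum()  # skip the DC bin
    return float(high / low)

# A 100 Hz tone has essentially no power above 150 Hz; adding a 300 Hz tone
# of equal amplitude pushes the ratio toward 1.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
r_low = power_ratio(np.sin(2 * np.pi * 100 * t), fs)
r_mixed = power_ratio(np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 300 * t), fs)
```

In practice such a feature would be computed per heart-sound segment (e.g., per diastolic window) rather than over a whole recording.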
http://www.mdpi.com/1099-4300/18/8/280
We report experimental observations on the evolution of acoustic entropy in the course of cyclic loading as degradation occurs due to fatigue. The measured entropy reflects the microstructural changes that accumulate in the material as it degrades under cyclic mechanical loading. Experimental results demonstrate that the maximum acoustic entropy emanating from a material over the course of degradation remains similar across tests. Experiments are shown for two different types of materials: Aluminum 6061 (a metallic alloy) and glass/epoxy (a composite laminate). The evolution of the acoustic entropy demonstrates a persistent trend over the course of degradation.

Entropy 2016, 18(8), 280; doi: 10.3390/e18080280. Published 2016-07-28. Authors: Ali Kahirdeh, M. Khonsari.

Entropy, Vol. 18, Pages 279: Entropy Generation through Non-Equilibrium Ordered Structures in Corner Flows with Sidewall Mass Injection
http://www.mdpi.com/1099-4300/18/8/279
Additional entropy generation rates through non-equilibrium ordered structures are predicted for corner flows with sidewall mass injection. Well-defined non-equilibrium ordered structures are predicted at a normalized vertical station of approximately eighteen percent of the boundary-layer thickness. These structures are in addition to the ordered structures previously reported at approximately thirty-eight percent of the boundary-layer thickness. The computational procedure is used to determine the entropy generation rate for each spectral velocity component at each of several streamwise stations and for each of several injection velocity values. Application of the procedure to possible thermal system processes is discussed. These results indicate that cooling sidewall mass injection into a horizontal laminar boundary layer may actually increase the heat transfer to the horizontal surface.

Entropy 2016, 18(8), 279; doi: 10.3390/e18080279. Published 2016-07-28. Author: LaVar Isaacson.

Entropy, Vol. 18, Pages 278: Expected Logarithm of Central Quadratic Form and Its Use in KL-Divergence of Some Distributions
http://www.mdpi.com/1099-4300/18/8/278
In this paper, we develop three different methods for computing the expected logarithm of central quadratic forms: a series method, an integral method and a fast (but inexact) set of methods. The approach used for deriving the integral method is novel and can be used for computing the expected logarithm of other random variables. Furthermore, we derive expressions for the Kullback–Leibler (KL) divergence of elliptical gamma distributions and angular central Gaussian distributions, which turn out to be functions dependent on the expected logarithm of a central quadratic form. Through several experimental studies, we compare the performance of these methods.

Entropy 2016, 18(8), 278; doi: 10.3390/e18080278. Published 2016-07-28. Authors: Pourya Habib Zadeh, Reshad Hosseini.

Entropy, Vol. 18, Pages 277: A Proximal Point Algorithm for Minimum Divergence Estimators with Application to Mixture Models
http://www.mdpi.com/1099-4300/18/8/277
Estimators derived from a divergence criterion, such as φ-divergences, are generally more robust than maximum likelihood estimators. We are interested in particular in the so-called minimum dual φ-divergence estimator (MDφDE), an estimator built using a dual representation of φ-divergences. We present in this paper an iterative proximal point algorithm that permits the calculation of such an estimator. By construction, the algorithm contains the well-known Expectation Maximization (EM) algorithm as a special case. Our work is based on Tseng's paper on the likelihood function, and we provide some convergence properties by adapting his ideas. We improve Tseng's results by relaxing the identifiability condition on the proximal term, a condition which is not verified for most mixture models and is hard to verify for "non-mixture" ones. Convergence of the EM algorithm for a two-component Gaussian mixture is discussed in the spirit of our approach. Several experimental results on mixture models are provided to confirm the validity of the approach.

Entropy 2016, 18(8), 277; doi: 10.3390/e18080277. Published 2016-07-27. Authors: Diaa Al Mohamad, Michel Broniatowski.

Entropy, Vol. 18, Pages 275: Symmetric Fractional Diffusion and Entropy Production
http://www.mdpi.com/1099-4300/18/7/275
The discovery of the entropy production paradox (Hoffmann et al., 1998) raised basic questions about the nature of irreversibility in the regime between diffusion and waves. First studied in the form of spatial movements of moments of H functions, pseudo propagation is the pre-limit, propagation-like movement of skewed probability density functions (PDFs) in the domain between the wave and diffusion equations, which goes over to classical partial differential equation propagation of characteristics in the wave limit. Many of the strange properties that occur in this extraordinary regime were thought to be connected in some manner to this form of proto-movement. This paper eliminates pseudo propagation by employing a similar evolution equation that imposes spatial unimodal symmetry on evolving PDFs. Contrary to initial expectations, familiar peculiarities emerge despite the imposed symmetry, but they have a distinct character.

Entropy 2016, 18(7), 275; doi: 10.3390/e18070275. Published 2016-07-23. Authors: Janett Prehl, Frank Boldt, Karl Hoffmann, Christopher Essex.

Entropy, Vol. 18, Pages 273: Efficiency Bound of Local Z-Estimators on Discrete Sample Spaces
http://www.mdpi.com/1099-4300/18/7/273
Many statistical models over a discrete sample space face the computational difficulty of evaluating the normalization constant, which renders the maximum likelihood estimator impractical. To circumvent this difficulty, alternative estimators, such as pseudo-likelihood and composite likelihood, that require only a local computation over the sample space have been proposed. In this paper, we present a theoretical analysis of such localized estimators. The asymptotic variance of localized estimators depends on the neighborhood system on the sample space. We investigate the relation between the neighborhood system and the estimation accuracy of localized estimators. Moreover, we derive the efficiency bound. The theoretical results are applied to investigate the statistical properties of existing estimators and some extended ones.

Entropy 2016, 18(7), 273; doi: 10.3390/e18070273. Published 2016-07-23. Author: Takafumi Kanamori.

Entropy, Vol. 18, Pages 271: Thermal Characteristic Analysis and Experimental Study of a Spindle-Bearing System
http://www.mdpi.com/1099-4300/18/7/271
In this paper, a thermo-mechanical coupling analysis model of the spindle-bearing system, based on Hertz's contact theory and a point-contact non-Newtonian thermal elastohydrodynamic lubrication (EHL) theory, is developed. The model considers the effects of preload, centrifugal force, the gyroscopic moment, and the lubrication state of the spindle-bearing system. According to heat transfer theory, a mathematical model for the temperature field of the spindle system is developed, and the effect of the spindle cooling system on the spindle temperature distribution is analyzed. The theoretical simulations and the experimental results indicate that the bearing preload has a great effect on the frictional heat generation, and that the cooling fluid has a great effect on the heat balance of the spindle system. If a steady-state heat balance between the frictional heat generation and the cooling system cannot be reached, a thermally-induced preload will further increase the frictional heat generation and then cause the thermal failure of the spindle.

Entropy 2016, 18(7), 271; doi: 10.3390/e18070271. Published 2016-07-22. Authors: Li Wu, Qingchang Tan.

Entropy, Vol. 18, Pages 274: An Entropy-Based Kernel Learning Scheme toward Efficient Data Prediction in Cloud-Assisted Network Environments
http://www.mdpi.com/1099-4300/18/7/274
With the recent emergence of wireless sensor networks (WSNs) in the cloud computing environment, it is now possible to monitor and gather physical information via many sensor nodes to meet the requirements of cloud services. Generally, these sensor nodes collect data and send them to a sink node, where end-users can query the information and run cloud applications. Currently, one of the main disadvantages of sensor nodes is their limited physical resources, notably little storage memory and a constrained power source. Therefore, to work around this limitation, it is necessary to develop an efficient data prediction method for WSNs. To serve this purpose, by reducing the redundant data transmission between sensor nodes and the sink node while keeping errors within an acceptable bound, this article proposes an entropy-based learning scheme for data prediction built on the kernel least mean square (KLMS) algorithm. The proposed scheme, called E-KLMS, develops a mechanism to keep the predicted data synchronized on both sides. Specifically, the kernel-based method adjusts the coefficients adaptively with every input, which achieves better performance with smaller prediction errors, while information entropy is employed to remove the data which may cause relatively large errors. E-KLMS can effectively solve the trade-off between prediction accuracy and computational effort while greatly simplifying the training structure compared with some other data prediction approaches. Moreover, the kernel-based method and the entropy technique together ensure the prediction effect by both improving the accuracy and reducing errors. Experiments with real data sets have been carried out to validate the efficiency and effectiveness of the E-KLMS learning scheme, and the results show the advantages of our method in prediction accuracy and computational time.

Entropy 2016, 18(7), 274; doi: 10.3390/e18070274. Published 2016-07-22. Authors: Xiong Luo, Ji Liu, Dandan Zhang, Weiping Wang, Yueqin Zhu.

Entropy, Vol. 18, Pages 270: Toward Improved Understanding of the Physical Meaning of Entropy in Classical Thermodynamics
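The kernel least mean square (KLMS) core of the E-KLMS scheme described above can be sketched as a one-pass online learner. This is generic textbook KLMS with a Gaussian kernel; the paper's entropy-based data removal and sink-node synchronization are omitted, and the step size and kernel width below are illustrative assumptions:

```python
import numpy as np

def klms_predict(xs, ys, step=0.2, sigma=1.0):
    """One-pass kernel least mean squares with a Gaussian kernel.

    The prediction is a kernel expansion over all past inputs; each step
    appends the scaled prediction error as a new expansion coefficient.
    Generic KLMS, not the paper's E-KLMS (no entropy-based pruning).
    """
    centers, coeffs, preds = [], [], []
    for x, y in zip(xs, ys):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        if centers:
            k = np.exp(-np.sum((np.array(centers) - x) ** 2, axis=1)
                       / (2.0 * sigma ** 2))
            y_hat = float(np.dot(coeffs, k))
        else:
            y_hat = 0.0                       # no history yet
        preds.append(y_hat)
        coeffs.append(step * (y - y_hat))     # LMS update in feature space
        centers.append(x)
    return preds

# Usage: one-step-ahead prediction of a sine from its two previous samples;
# the prediction error shrinks as centers accumulate.
s = np.sin(np.arange(600) * 0.1)
xs = [[s[i], s[i + 1]] for i in range(len(s) - 2)]
ys = [s[i + 2] for i in range(len(s) - 2)]
errs = np.abs(np.array(klms_predict(xs, ys)) - np.array(ys))
```

A growing dictionary of centers is exactly the memory cost the paper's entropy criterion is meant to control on resource-limited sensor nodes.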
http://www.mdpi.com/1099-4300/18/7/270
The year 2015 marked the 150th anniversary of "entropy" as a concept in classical thermodynamics. Despite its central role in the mathematical formulation of the Second Law and most of classical thermodynamics, its physical meaning continues to be elusive and confusing. This is especially true when we seek a reconstruction of the classical thermodynamics of a system from the statistical behavior of its constituent microscopic particles, or vice versa. This paper sketches the classical definition by Clausius and offers a modified mathematical definition that is intended to improve its conceptual meaning. In the modified version, the differential of specific entropy appears as a non-dimensional energy term that captures the invigoration or reduction of microscopic motion upon addition or withdrawal of heat from the system. It is also argued that heat transfer is a better model process to illustrate entropy; the canonical heat engines and refrigerators often used to illustrate this concept are not very relevant to new areas of thermodynamics (e.g., thermodynamics of biological systems). It is emphasized that entropy changes, as invoked in the Second Law, are necessarily related to the non-equilibrium interactions of two or more systems that might initially have been in thermal equilibrium but at different temperatures. The overall direction of entropy increase indicates the direction of naturally occurring heat transfer processes in an isolated system that consists of internally interacting (non-isolated) subsystems. We discuss the implications of the proposed modification for statements of the Second Law, the interpretation of entropy in statistical thermodynamics, and the Third Law.

Entropy 2016, 18(7), 270; doi: 10.3390/e18070270. Published 2016-07-22. Author: Ben Akih-Kumgeh.

Entropy, Vol. 18, Pages 268: Mechanothermodynamic Entropy and Analysis of Damage State of Complex Systems
http://www.mdpi.com/1099-4300/18/7/268
Mechanics and thermodynamics each consider, from their own perspectives, the evolution of complex systems, including the Universe. The classical thermodynamic theory of evolution has one important drawback: it predicts an inevitable heat death of the Universe, which is unlikely to take place according to modern perceptions. Attempts to create a generalized theory of evolution within mechanics were unsuccessful, since mechanical equations do not discriminate between future and past. It is natural that a union of mechanics and thermodynamics was difficult to realize, since the two are based on different methodologies. We make an attempt to propose a generalized theory of evolution based on the concept of tribo-fatigue entropy. The essence of the proposed approach is that tribo-fatigue entropy is determined by the processes of damageability conditioned by thermodynamic and mechanical effects that change the states of any system. A law of entropy increase is formulated analytically in general form. A mechanothermodynamical function is constructed for the specific case of fatigue damage of materials, for temperatures ranging from 3 K to 0.8 of the melting temperature, based on the analysis of 136 experimental results.

Entropy 2016, 18(7), 268; doi: 10.3390/e18070268. Published 2016-07-20. Authors: Leonid Sosnovskiy, Sergei Sherbakov.

Entropy, Vol. 18, Pages 266: Complex Dynamics of a Continuous Bertrand Duopoly Game Model with Two-Stage Delay
http://www.mdpi.com/1099-4300/18/7/266
This paper studies a continuous Bertrand duopoly game model with two-stage delay. Our aim is to investigate the influence of delay and weight on the complex dynamic characteristics of the system. We calculate the bifurcation point of the system with respect to the delay parameter. In addition, the dynamic properties of the system are simulated by power spectrum, attractor, bifurcation diagram, the largest Lyapunov exponent, 3D surface chart, 4D cubic chart, 2D parameter bifurcation diagram, and 3D parameter bifurcation diagram. The results show that the stability of the system depends on the delay and the weight: to maintain price stability and ensure firm profit, the firms must keep the parameters within a reasonable region. Otherwise, the system loses stability and may even fall into chaos, causing price fluctuations under which the firms cannot be profitable. Finally, chaos control of the system is carried out by a control strategy of state-variable feedback and parameter variation, which effectively avoids the damage that chaos inflicts on the economic system. The results of this study therefore have important practical significance for decision-making with multi-stage delay in oligopoly firms.

Entropy 2016, 18(7), 266; doi: 10.3390/e18070266. Published 2016-07-20. Authors: Junhai Ma, Fengshan Si.

Entropy, Vol. 18, Pages 267: Novel Criteria for Deterministic Remote State Preparation via the Entangled Six-Qubit State
http://www.mdpi.com/1099-4300/18/7/267
In this paper, our concern is to design criteria for deterministic remote state preparation of an arbitrary three-particle state via a genuinely entangled six-qubit state. First, we put forward two schemes, in real and complex Hilbert space, respectively. Using an appropriate set of eight-qubit measurement bases, the remote three-qubit preparation is completed with unit success probability. Departing from previous research, our protocol has the salient feature that the serviceable measurement basis contains only the initial coefficients and their conjugate values. By utilizing the permutation group, it is convenient to express the permutation relationship between coefficients. Second, our ideas and methods can be generalized to preparing an arbitrary N-particle state in the complex case by taking advantage of Bell states as quantum resources; in particular, the conditions that the criteria must satisfy for preparation with 100% success probability in complex Hilbert space are summarized. Third, the classical communication costs of our scheme are calculated to determine the classical resources required. It is also worth mentioning that our protocol has higher efficiency and lower resource costs compared with other schemes.

Entropy 2016, 18(7), 267; doi: 10.3390/e18070267. Published 2016-07-20. Authors: Gang Xu, Xiu-Bo Chen, Zhao Dou, Jing Li, Xin Liu, Zongpeng Li.

Entropy, Vol. 18, Pages 264: The Structure of the Class of Maximum Tsallis–Havrda–Chavát Entropy Copulas
http://www.mdpi.com/1099-4300/18/7/264
A maximum entropy copula is the copula associated with the joint distribution, with prescribed marginal distributions on [0, 1], which maximizes the Tsallis–Havrda–Chavát entropy with q = 2. We find necessary and sufficient conditions for each maximum entropy copula to be a copula in the class introduced in Rodríguez-Lallena and Úbeda-Flores (2004), and we also show that each copula in that class is a maximum entropy copula.

Entropy 2016, 18(7), 264; doi: 10.3390/e18070264. Published 2016-07-19. Authors: Jesús García, Verónica González-López, Roger Nelsen.

Entropy, Vol. 18, Pages 265: Noise Suppression in 94 GHz Radar-Detected Speech Based on Perceptual Wavelet Packet
http://www.mdpi.com/1099-4300/18/7/265
A millimeter wave (MMW) radar sensor is employed in our laboratory to detect human speech because it provides a new non-contact speech acquisition method that is suitable for various applications. However, the speech detected by the radar sensor is often degraded by combined noise. This paper proposes a new perceptual wavelet packet method that is able to enhance the speech acquired using a 94 GHz MMW radar system by suppressing the noise. The process is as follows. First, the radar speech signal is decomposed using a perceptual wavelet packet. Then, an adaptive wavelet threshold and new modified thresholding function are employed to remove the noise from the detected speech. The results obtained from the speech spectrograms, listening tests and objective evaluation show that the new method significantly improves the performance of the detected speech.

Entropy 2016, 18(7), 265; doi: 10.3390/e18070265. Published 2016-07-19. Authors: Fuming Chen, Chuantao Li, Qiang An, Fulai Liang, Fugui Qi, Sheng Li, Jianqi Wang.

Entropy, Vol. 18, Pages 263: Positive Sofic Entropy Implies Finite Stabilizer
http://www.mdpi.com/1099-4300/18/7/263
We prove that, for a measure-preserving action of a sofic group with positive sofic entropy, the stabilizer is finite on a set of positive measure. This extends results of Weiss and Seward for amenable groups and free groups, respectively. It follows that the action of a sofic group on its subgroups by inner automorphisms has zero topological sofic entropy, and that a faithful action that has completely positive sofic entropy must be free.

Entropy 2016, 18(7), 263; doi: 10.3390/e18070263. Published 2016-07-18. Author: Tom Meyerovitch.

Entropy, Vol. 18, Pages 262: Greedy Algorithms for Optimal Distribution Approximation
http://www.mdpi.com/1099-4300/18/7/262
The approximation of a discrete probability distribution t by an M-type distribution p is considered. The approximation error is measured by the informational divergence D(t∥p), which is an appropriate measure, e.g., in the context of data compression. Properties of the optimal approximation are derived, and bounds on the approximation error are presented which are asymptotically tight. A greedy algorithm is proposed that solves this M-type approximation problem optimally. Finally, it is shown that different instantiations of this algorithm minimize the informational divergence D(p∥t) or the variational distance ∥p − t∥₁.

Entropy 2016, 18(7), 262; doi: 10.3390/e18070262. Published 2016-07-18. Authors: Bernhard Geiger, Georg Böcherer.

Entropy, Vol. 18, Pages 252: Three Strategies for the Design of Advanced High-Entropy Alloys
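A greedy allocation of the kind described in the distribution-approximation abstract above can be sketched as follows. Writing p_i = c_i/M with integer counts c_i, minimizing D(t∥p) is equivalent to maximizing the separable concave objective Σ_i t_i log c_i subject to Σ_i c_i = M, so repeatedly giving the next unit to the index with the largest marginal gain is optimal. This is a sketch built on that observation, not the paper's exact pseudocode:

```python
import heapq
import math

def greedy_mtype(t, M):
    """Greedy M-type approximation p_i = c_i / M of a distribution t.

    Since D(t || p) = const - sum_i t_i * log(c_i), the problem is a
    separable concave maximization over integer counts, for which the
    greedy increment (largest marginal gain first) is optimal.
    """
    n = len(t)
    assert M >= n, "need at least one unit per symbol"
    c = [1] * n
    # Max-heap (negated gains) of the marginal gain t_i * log(1 + 1/c_i).
    heap = [(-t[i] * math.log(2.0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(M - n):
        _, i = heapq.heappop(heap)
        c[i] += 1
        heapq.heappush(heap, (-t[i] * math.log(1.0 + 1.0 / c[i]), i))
    return [ci / M for ci in c]

# With M = 10 and t exactly representable in tenths, the greedy recovers t.
p = greedy_mtype([0.5, 0.3, 0.2], M=10)
```

Each of the M − n increments costs O(log n) via the heap, so the whole allocation runs in O(M log n).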
http://www.mdpi.com/1099-4300/18/7/252
High-entropy alloys (HEAs) have recently become a vibrant field of study in the metallic materials area. In the early years, the design of HEAs was more of an exploratory nature. The selection of compositions was somewhat arbitrary, and there was typically no specific goal to be achieved in the design. Very recently, however, the development of HEAs has gradually entered a different stage. Unlike the early alloys, HEAs developed nowadays are usually designed to meet clear goals, and have carefully chosen components, deliberately introduced multiple phases, and tailored microstructures. These alloys are referred to as advanced HEAs. In this paper, the progress in advanced HEAs is briefly reviewed. The design strategies for these materials are examined and are classified into three categories. Representative works in each category are presented. Finally, important issues and future directions in the development of advanced HEAs are pointed out and discussed.

Entropy 2016, 18(7), 252 (Review); doi: 10.3390/e18070252. Published 2016-07-15. Author: Ming-Hung Tsai.

Entropy, Vol. 18, Pages 255: Coupled Thermoelectric Devices: Theory and Experiment
http://www.mdpi.com/1099-4300/18/7/255
In this paper, we address theoretically and experimentally the optimization of the heat transfer occurring in two coupled thermoelectric devices, using a simple experimental setup. The optimization parameters are the applied electric currents. When one thermoelectric is analysed, the temperature difference ΔT between the thermoelectric boundaries shows a parabolic profile with respect to the applied electric current. This behaviour agrees qualitatively with the corresponding experimental measurement. The global entropy generation increases monotonically with the electric current. In the case of two coupled thermoelectric devices, elliptic isocontours for ΔT are obtained when applying an electric current through each of the thermoelectrics. The isocontours also fit well with measurements. An optimal figure of merit is found for a specific set of values of the applied electric currents. The relationship between entropy generation and the thermal figure of merit is studied. It is shown that, given a value of the thermal figure of merit, the device can be operated in a state of minimum entropy production.

Entropy 2016, 18(7), 255; doi: 10.3390/e18070255. Published 2016-07-14. Authors: Jaziel Rojas, Iván Rivera, Aldo Figueroa, Federico Vázquez.

Entropy, Vol. 18, Pages 260: Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices
http://www.mdpi.com/1099-4300/18/7/260
Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments, but its accuracy and range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium, for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time-periodic solutions corresponding to recycling and motion of electric field pulses, the differences between the numerical solutions produced by the two types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research avenues are discussed.

Entropy 2016, 18(7), 260; doi: 10.3390/e18070260. Published 2016-07-14. Authors: Luis Bonilla, Manuel Carretero.

Entropy, Vol. 18, Pages 257: Using Wearable Accelerometers in a Community Service Context to Categorize Falling Behavior
http://www.mdpi.com/1099-4300/18/7/257
In this paper, Multiscale Entropy (MSE) analysis of acceleration data collected from a wearable inertial sensor was compared with other features reported in the literature for observing falling behavior from acceleration data, and with traditional clinical scales for evaluating falling behavior. We use a fall risk assessment over a four-month period to examine >65-year-old participants in a community service context using simple clinical tests, including the Short Form Berg Balance Scale (SFBBS), the Timed Up and Go test (TUG), and the Short Portable Mental Status Questionnaire (SPMSQ), with wearable accelerometers worn during the TUG test. We classified participants into fallers and non-fallers to (1) compare the features extracted from the accelerometers and (2) categorize fall risk using statistics from TUG test results. For the combined TUG and SFBBS results, the defining features were test time, slope(A) and slope(B) in sit(A)-to-stand(B), and range(A) and slope(B) in stand(B)-to-sit(A). For (1) SPMSQ, (2) TUG and SPMSQ, and (3) BBS and SPMSQ results, only range(A) in stand(B)-to-sit(A) was a defining feature. From the MSE indicators, we found that, whether in the X, Y or Z direction, TUG, BBS, and the combined TUG and SFBBS are all distinguishable, showing that MSE can effectively classify participants in these clinical tests using behavioral actions. This study highlights the advantages of body-worn sensors as ordinary, low-cost tools available outside the laboratory. The results indicate that MSE analysis of acceleration data can be used as an effective metric to categorize the falling behavior of community-dwelling elderly. In addition to clinical application, (1) our approach requires no expert physical therapist, nurse, or doctor for evaluations, and (2) fallers can be categorized irrespective of the critical value from clinical tests.

Entropy 2016, 18(7), 257; doi: 10.3390/e18070257. Published 2016-07-13. Authors: Chia-Hsuan Lee, Tien-Lung Sun, Bernard Jiang, Victor Choi.

Entropy, Vol. 18, Pages 258: Structures in Sound: Analysis of Classical Music Using the Information Length
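The Multiscale Entropy used in the accelerometer study above combines coarse-graining with sample entropy. A compact sketch follows, using the common defaults m = 2 and tolerance r = 0.2 times the series' standard deviation; these parameter choices are assumptions here, not the paper's stated settings:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D series; tolerance is r times the series' SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def match_pairs(length):
        tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.abs(tpl[:, None, :] - tpl[None, :, :]).max(axis=2)  # Chebyshev
        return (np.count_nonzero(d <= tol) - len(tpl)) / 2  # drop self-pairs
    b, a = match_pairs(m), match_pairs(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale."""
    x = np.asarray(x, dtype=float)
    vals = []
    for s in scales:
        n = len(x) // s
        vals.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1), m, r))
    return vals

# White noise is less regular (higher SampEn) than a pure tone.
rng = np.random.default_rng(0)
se_noise = sample_entropy(rng.standard_normal(300))
se_sine = sample_entropy(np.sin(2 * np.pi * 5 * np.arange(300) / 100.0))
mse_vals = multiscale_entropy(rng.standard_normal(600))
```

The pairwise template comparison is O(N²) in memory, which is fine for TUG-length recordings but would need a windowed implementation for long signals.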
http://www.mdpi.com/1099-4300/18/7/258
We show that music is represented by fluctuations away from the minimum path through statistical space. Our key idea is to envision music as the evolution of a non-equilibrium system and to construct probability distribution functions (PDFs) from musical instrument digital interface (MIDI) files of classical compositions. Classical music is then viewed through the lens of generalized position and velocity, based on the Fisher metric. Through these statistical tools we discuss a way to quantitatively discriminate between music and noise.

Entropy 2016, 18(7), 258; doi: 10.3390/e18070258. Published 2016-07-13. Authors: Schuyler Nicholson, Eun-jin Kim.

Entropy, Vol. 18, Pages 256: The Logical Consistency of Simultaneous Agnostic Hypothesis Tests
http://www.mdpi.com/1099-4300/18/7/256
Simultaneous hypothesis tests can fail to provide results that meet logical requirements. For example, if A and B are two statements such that A implies B, there exist tests that, based on the same data, reject B but not A. Such outcomes are generally inconvenient to statisticians (who want to communicate the results to practitioners in a simple fashion) and non-statisticians (confused by conflicting pieces of information). Based on this inconvenience, one might want to use tests that satisfy logical requirements. However, Izbicki and Esteves show that the only tests that are in accordance with three logical requirements (monotonicity, invertibility and consonance) are trivial tests based on point estimation, which generally lack statistical optimality. As a possible solution to this dilemma, this paper adapts the above logical requirements to agnostic tests, in which one can accept, reject or remain agnostic with respect to a given hypothesis. Each of the logical requirements is characterized from a Bayesian decision-theoretic perspective. Contrary to the results obtained for regular hypothesis tests, there exist agnostic tests that satisfy all logical requirements and also perform well statistically. In particular, agnostic tests that fulfill all logical requirements are characterized as region-estimator-based tests. Examples of such tests are provided.

Entropy 2016, 18(7), 256; doi: 10.3390/e18070256. Published 2016-07-13. Authors: Luís Esteves, Rafael Izbicki, Julio Stern, Rafael Stern.

Entropy, Vol. 18, Pages 259: Ensemble Equivalence for Distinguishable Particles
http://www.mdpi.com/1099-4300/18/7/259
Statistics of distinguishable particles has become relevant in systems of colloidal particles and in the context of applications of statistical mechanics to complex networks. In this paper, we present evidence that a commonly used expression for the partition function of a system of distinguishable particles leads to huge fluctuations of the number of particles in the grand canonical ensemble and, consequently, to nonequivalence of statistical ensembles. We will show that the alternative definition of the partition function including, naturally, Boltzmann's correct counting factor for distinguishable particles solves the problem and restores ensemble equivalence. Finally, we also show that this choice for the partition function does not produce any inconsistency for a system of distinguishable localized particles, where the monoparticular partition function is not extensive.

Entropy 2016, 18(7), 259; doi: 10.3390/e18070259. Published 2016-07-13. Authors: Antonio Fernández-Peralta, Raúl Toral.

Entropy, Vol. 18, Pages 254: Link between Lie Group Statistical Mechanics and Thermodynamics of Continua
http://www.mdpi.com/1099-4300/18/7/254
In this work, we consider the value of the momentum map of symplectic mechanics as an affine tensor called the momentum tensor. From this point of view, we analyze the underlying geometric structure of the theories of Lie group statistical mechanics and relativistic thermodynamics of continua, formulated by Souriau independently of each other. We bridge the gap between them in the classical Galilean context. These geometric structures of thermodynamics are rich, and we think they might be a source of inspiration for the geometric theory of information based on the concept of entropy.Entropy2016-07-12187Article10.3390/e180702542541099-43002016-07-12doi: 10.3390/e18070254Géry de Saxcé<![CDATA[Entropy, Vol. 18, Pages 253: The Use of Denoising and Analysis of the Acoustic Signal Entropy in Diagnosing Engine Valve Clearance]]>
http://www.mdpi.com/1099-4300/18/7/253
The paper presents a method for processing acoustic signals which allows the extraction, from a very noisy signal, of components which contain diagnostically useful information on the increased valve clearance of a combustion engine. The method uses two-stage denoising of the acoustic signal performed by means of a discrete wavelet transform. Afterwards, based on the signal cleaned up in this manner, its entropy was calculated as a quantitative measure of the qualitative changes caused by the excessive clearance. The testing and processing of the actual acoustic signal of a combustion engine enabled clear extraction of components which contain information on the valve clearance being diagnosed.Entropy2016-07-12187Article10.3390/e180702532531099-43002016-07-12doi: 10.3390/e18070253Tomasz FiglusJozef GnapTomáš SkrúcanýBranislav ŠarkanJozef Stoklosa<![CDATA[Entropy, Vol. 18, Pages 246: Effect of a Percutaneous Coronary Intervention Procedure on Heart Rate Variability and Pulse Transit Time Variability: A Comparison Study Based on Fuzzy Measure Entropy]]>
http://www.mdpi.com/1099-4300/18/7/246
Percutaneous coronary intervention (PCI) is a common treatment method for patients with coronary artery disease (CAD), but its effect on synchronously measured heart rate variability (HRV) and pulse transit time variability (PTTV) has not been well established. This study aimed to verify whether PCI for CAD patients affects both HRV and PTTV parameters. Sixteen CAD patients were enrolled. Two five-minute ECG and finger photoplethysmography (PPG) signals were recorded, one within 24 h before PCI and another within 24 h after PCI. The changes in RR and pulse transit time (PTT) intervals due to the PCI procedure were compared first. Then, HRV and PTTV were evaluated by a standard short-term time-domain variability index of standard deviation of time series (SDTS) and our previously developed entropy-based index of fuzzy measure entropy (FuzzyMEn). To test the effect of different time series lengths on HRV and PTTV results, we segmented the RR and PTT time series using four time windows of 200, 100, 50 and 25 beats, respectively. The PCI-induced changes in HRV and PTTV, as well as in RR and PTT intervals, are different. The PCI procedure significantly decreased RR intervals (before PCI 973 ± 85 vs. after PCI 907 ± 100 ms, p &lt; 0.05) while significantly increasing PTT intervals (207 ± 18 vs. 214 ± 19 ms, p &lt; 0.01). For HRV, SDTS output significantly lower values after PCI only for the 100- and 25-beat time windows, with no significant decreases for the other two time windows. By contrast, FuzzyMEn gave significantly lower values after PCI for all four time windows (all p &lt; 0.05). For PTTV, SDTS hardly changed after PCI at any time window (all p &gt; 0.90) whereas FuzzyMEn still reported significantly lower values (p &lt; 0.05 for the 25-beat time window and p &lt; 0.01 for the other three time windows). For both HRV and PTTV, as the time window lengthened, SDTS decreased while FuzzyMEn increased. 
This pilot study demonstrated that the RR interval decreased whereas the PTT interval increased after the PCI procedure and that there were significant reductions in both HRV and PTTV immediately after PCI using the FuzzyMEn method, indicating changes in the underlying mechanisms of the cardiovascular system.Entropy2016-07-09187Article10.3390/e180702462461099-43002016-07-09doi: 10.3390/e18070246Guang ZhangChengyu LiuLizhen JiJing YangChangchun Liu<![CDATA[Entropy, Vol. 18, Pages 251: Maximum Entropy Learning with Deep Belief Networks]]>
http://www.mdpi.com/1099-4300/18/7/251
Conventionally, the maximum likelihood (ML) criterion is applied to train a deep belief network (DBN). We present a maximum entropy (ME) learning algorithm for DBNs, designed specifically to handle limited training data. Maximizing only the entropy of parameters in the DBN allows more effective generalization, less bias towards data distributions, and greater robustness to over-fitting compared to ML learning. Results of text classification and object recognition tasks demonstrate that the ME-trained DBN outperforms the ML-trained DBN when training data are limited.Entropy2016-07-08187Article10.3390/e180702512511099-43002016-07-08doi: 10.3390/e18070251Payton LinSzu-Wei FuSyu-Siang WangYing-Hui LaiYu Tsao<![CDATA[Entropy, Vol. 18, Pages 249: Modeling Fluid’s Dynamics with Master Equations in Ultrametric Spaces Representing the Treelike Structure of Capillary Networks]]>
http://www.mdpi.com/1099-4300/18/7/249
We present a new conceptual approach for modeling fluid flows in random porous media based on explicit exploration of the treelike geometry of complex capillary networks. Such patterns can be represented mathematically as ultrametric spaces and the dynamics of fluids by ultrametric diffusion. The images of p-adic fields, extracted from the real multiscale rock samples and from some reference images, are depicted. In this model the porous background is treated as the environment contributing to the coefficients of evolutionary equations. For the simplest trees, these equations are essentially less complicated than those with fractional differential operators, which are commonly applied in geological studies looking for some fractional analogs to conventional Euclidean space but with anomalous scaling and diffusion properties. It is possible to solve the former equation analytically and, in particular, to find stationary solutions. The main aim of this paper is to attract the attention of researchers working on modeling of geological processes to the novel ultrametric approach and to show some examples from static and dynamic petroleum reservoir characterization, able to integrate the p-adic approach with multifractals, thermodynamics and scaling. We also present a non-mathematician-friendly review of trees and ultrametric spaces and pseudo-differential operators on such spaces.Entropy2016-07-07187Article10.3390/e180702492491099-43002016-07-07doi: 10.3390/e18070249Andrei KhrennikovKlaudia OleschkoMaría Correa López<![CDATA[Entropy, Vol. 18, Pages 250: Thermoeconomic Coherence: A Methodology for the Analysis and Optimisation of Thermal Systems]]>
http://www.mdpi.com/1099-4300/18/7/250
In the field of thermal systems, different approaches and methodologies have been proposed to merge thermodynamics and economics. They are usually referred to as thermoeconomic methodologies, and their objective is to find the optimum design of the thermal system given a specific objective function. Some thermoeconomic analyses go beyond that objective and attempt to determine whether every component of the system is correctly designed or to quantify the inefficiencies of the components in economic terms. This paper takes another step in that direction and presents a new methodology to measure the thermoeconomic coherence of thermal systems, as well as the contribution of each parameter of the system to that coherence. It is based on the equality of marginal costs at the optimum. The methodology establishes a criterion for designing the system coherently. Additionally, it may be used to evaluate how far a specific design is from the optimum, to identify which components are undersized or oversized, and to measure the strength of the restrictions of the system. Finally, it may be extended to the analysis of uncertainties in the design process, providing a coherent design and sizing of the components with high uncertainties.Entropy2016-07-05187Article10.3390/e180702502501099-43002016-07-05doi: 10.3390/e18070250Antonio RoviraJosé Martínez-ValManuel Valdés<![CDATA[Entropy, Vol. 18, Pages 248: Cumulative Paired φ-Entropy]]>
http://www.mdpi.com/1099-4300/18/7/248
A new kind of entropy will be introduced which generalizes both the differential entropy and the cumulative (residual) entropy. The generalization is twofold. First, we simultaneously define the entropy for cumulative distribution functions (cdfs) and survivor functions (sfs), instead of defining it separately for densities, cdfs, or sfs. Secondly, we consider a general “entropy generating function” φ, the same way Burbea et al. (IEEE Trans. Inf. Theory 1982, 28, 489–495) and Liese et al. (Convex Statistical Distances; Teubner-Verlag, 1987) did in the context of φ-divergences. Combining the ideas of φ-entropy and cumulative entropy leads to the new “cumulative paired φ-entropy” ( C P E φ ). This new entropy has already been discussed in at least four scientific disciplines, be it with certain modifications or simplifications. In fuzzy set theory, for example, cumulative paired φ-entropies were defined for membership functions, whereas in uncertainty and reliability theories some variations of C P E φ were recently considered as measures of information. With a single exception, the discussions in the scientific disciplines appear to be held independently of each other. We consider C P E φ for continuous cdfs and show that C P E φ is a measure of dispersion rather than a measure of information. In the first place, this will be demonstrated by deriving an upper bound which is determined by the standard deviation and by solving the maximum entropy problem under the restriction of a fixed variance. Next, this paper specifically shows that C P E φ satisfies the axioms of a dispersion measure. The corresponding dispersion functional can easily be estimated by an L-estimator, with all of its known asymptotic properties. C P E φ is the basis for several related concepts like mutual φ-information, φ-correlation, and φ-regression, which generalize Gini correlation and Gini regression. 
In addition, linear rank tests for scale that are based on the new entropy have been developed. We show that almost all known linear rank tests are special cases, and we introduce certain new tests. Moreover, formulas for different distributions and entropy calculations are presented for C P E φ if the cdf is available in a closed form.Entropy2016-07-01187Article10.3390/e180702482481099-43002016-07-01doi: 10.3390/e18070248Ingo KleinBenedikt MangoldMonika Doll<![CDATA[Entropy, Vol. 18, Pages 247: Entropy? Honest!]]>
http://www.mdpi.com/1099-4300/18/7/247
Here we deconstruct, and then in a reasoned way reconstruct, the concept of “entropy of a system”, paying particular attention to where the randomness may be coming from. We start with the core concept of entropy as a count associated with a description; this count (traditionally expressed in logarithmic form for a number of good reasons) is in essence the number of possibilities—specific instances or “scenarios”—that match that description. Very natural (and virtually inescapable) generalizations of the idea of description are the probability distribution and its quantum mechanical counterpart, the density operator. We track the process of dynamically updating entropy as a system evolves. Three factors may cause entropy to change: (1) the system’s internal dynamics; (2) unsolicited external influences on it; and (3) the approximations one has to make when one tries to predict the system’s future state. The latter task is usually hampered by hard-to-quantify aspects of the original description, limited data storage and processing resources, and possibly algorithmic inadequacy. Factors 2 and 3 introduce randomness—often huge amounts of it—into one’s predictions and accordingly degrade them. When forecasting, as long as the entropy bookkeeping is conducted in an honest fashion, this degradation will always lead to an entropy increase. To clarify the above point we introduce the notion of honest entropy, which coalesces much of what is of course already done, often tacitly, in responsible entropy-bookkeeping practice. This notion—we believe—will help to fill an expressivity gap in scientific discourse. With its help, we shall prove that any dynamical system—not just our physical universe—strictly obeys Clausius’s original formulation of the second law of thermodynamics if and only if it is invertible. 
Thus this law is a tautological property of invertible systems!Entropy2016-06-30187Review10.3390/e180702472471099-43002016-06-30doi: 10.3390/e18070247Tommaso Toffoli<![CDATA[Entropy, Vol. 18, Pages 244: Multiatom Quantum Coherences in Micromasers as Fuel for Thermal and Nonthermal Machines]]>
http://www.mdpi.com/1099-4300/18/7/244
In this paper, we address the question: To what extent is the quantum state preparation of multiatom clusters (before they are injected into the microwave cavity) instrumental for determining not only the kind of machine we may operate, but also the quantitative bounds of its performance? Figuratively speaking, if the multiatom cluster is the “crude oil”, the question is: Which preparation of the cluster is the refining process that can deliver a “gasoline” with a “specific octane”? We classify coherences or quantum correlations among the atoms according to their ability to serve as: (i) fuel for nonthermal machines corresponding to atomic states whose coherences displace or squeeze the cavity field, as well as cause its heating; and (ii) fuel that is purely “combustible”, i.e., corresponds to atomic states that only allow for heat and entropy exchange with the field and can energize a proper heat engine. We identify highly promising multiatom states for each kind of fuel and propose viable experimental schemes for their implementation.Entropy2016-06-29187Article10.3390/e180702442441099-43002016-06-29doi: 10.3390/e18070244Ceren DağWolfgang NiedenzuÖzgür MüstecaplıoğluGershon Kurizki<![CDATA[Entropy, Vol. 18, Pages 245: Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map]]>
http://www.mdpi.com/1099-4300/18/7/245
Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and objects in the scene. Using the depth map combined with the existing multiview image, images of any virtual viewpoint position can be synthesized efficiently, which can display more realistic 3D scenes. Unlike the conventional 2D texture image, the depth map contains a lot of spatially redundant information, which is not necessary for view synthesis but may result in the waste of compressed bits, especially when using MD coding for robust transmission. In this paper, we focus on the redundancy removal of MD coding based on the DCT (discrete cosine transform) domain. In view of the characteristics of DCT coefficients, at the encoder, a Lagrange optimization approach is designed to determine the amount of high-frequency coefficients in the DCT domain to be removed. Considering its low computational complexity, the entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results have shown that, compared to the corresponding scheme, the proposed method demonstrates better central and side rate-distortion performance.Entropy2016-06-29187Article10.3390/e180702452451099-43002016-06-29doi: 10.3390/e18070245Sen HanHuihui BaiMengmeng Zhang<![CDATA[Entropy, Vol. 18, Pages 243: Nonlinear Thermodynamic Analysis and Optimization of a Carnot Engine Cycle]]>
http://www.mdpi.com/1099-4300/18/7/243
As part of the efforts to unify the various branches of Irreversible Thermodynamics, the proposed work reconsiders the Carnot engine approach, taking into account the finite physical dimensions (heat transfer conductances) and the finite speed of the piston. The models introduce the irreversibility of the engine by two methods involving different constraints. The first method introduces the irreversibility by a so-called irreversibility ratio in the entropy balance applied to the cycle, while in the second method it is captured by the entropy generation rate. Various forms of heat transfer laws are analyzed, but most of the results are given for the case of the linear law. Also, individual cases are studied and reported in order to provide a simple analytical form of the results. The engine model developed allowed a formal optimization using the calculus of variations.Entropy2016-06-28187Article10.3390/e180702432431099-43002016-06-28doi: 10.3390/e18070243Michel FeidtMonica CosteaStoian PetrescuCamelia Stanciu<![CDATA[Entropy, Vol. 18, Pages 242: Fast EEMD Based AM-Correntropy Matrix and Its Application on Roller Bearing Fault Diagnosis]]>
http://www.mdpi.com/1099-4300/18/7/242
Roller bearings play a significant role in industrial sectors. To improve the ability of roller bearing fault diagnosis under multi-rotating situations, this paper proposes a novel roller bearing fault characteristic: the Amplitude Modulation (AM) based correntropy extracted from the Intrinsic Mode Functions (IMFs), which are decomposed by Fast Ensemble Empirical Mode Decomposition (FEEMD), and employs a Least Squares Support Vector Machine (LSSVM) to implement intelligent fault identification. Firstly, the roller bearing vibration acceleration signal is decomposed by FEEMD to extract IMFs. Secondly, the IMF correntropy matrix (IMFCM), as the fault feature matrix, is calculated from the AM-correntropy model of the primary vibration signal and IMFs. Furthermore, depending on LSSVM, the fault identification results of the roller bearing are obtained. Through bearing identification experiments in stationary rotating conditions, it was verified that IMFCM yields more stable and higher diagnostic accuracy than conventional fault features such as energy moment, fuzzy entropy, and spectral kurtosis. Additionally, IMFCM proves more robust for diagnosis than conventional fault features under cross-mixed roller bearing operating conditions. The diagnosis accuracy was more than 84% for the cross-mixed operating condition, which is much higher than that of the traditional features. In conclusion, it was proven that FEEMD-IMFCM-LSSVM is a reliable technology for roller bearing fault diagnosis under constant or multi-positioned operating conditions, and as such, it holds promise for broad application.Entropy2016-06-28187Article10.3390/e180702422421099-43002016-06-28doi: 10.3390/e18070242Yunxiao FuLimin JiaYong QinJie YangDing Fu<![CDATA[Entropy, Vol. 18, Pages 241: A Simulation-Based Study on Bayesian Estimators for the Skew Brownian Motion]]>
http://www.mdpi.com/1099-4300/18/7/241
In analyzing a temporal data set from a continuous variable, diffusion processes can be suitable under certain conditions, depending on the distribution of increments. We are interested in processes where a semi-permeable barrier splits the state space, producing a skewed diffusion that can have different rates on each side. In this work, the asymptotic behavior of some Bayesian inferences for this class of processes is discussed and validated through simulations. As an application, we model the location of South American sea lions (Otaria flavescens) on the coast of Calbuco, southern Chile, which can be used to understand how the foraging behavior of apex predators varies temporally and spatially.Entropy2016-06-28187Article10.3390/e180702412411099-43002016-06-28doi: 10.3390/e18070241Manuel BarahonaLaura RifoMaritza SepúlvedaSoledad Torres<![CDATA[Entropy, Vol. 18, Pages 238: Strong Secrecy Capacity of a Class of Wiretap Networks]]>
http://www.mdpi.com/1099-4300/18/7/238
This paper considers a special class of wiretap networks with a single source node and K sink nodes. The source message is encoded into a binary digital sequence of length N, divided into K subsequences, and sent to the K sink nodes respectively through noiseless channels. The legitimate receivers are able to obtain subsequences from arbitrary μ1 = Kα1 sink nodes. Meanwhile, there exist eavesdroppers who are able to observe subsequences from arbitrary μ2 = Kα2 sink nodes, where 0 ≤ α2 &lt; α1 ≤ 1. The goal is to enable the receivers to recover the source message with a vanishing decoding error probability, while keeping the eavesdroppers ignorant of the source message. It is clear that the communication model is an extension of wiretap channel II. The secrecy capacity with respect to the strong secrecy criterion is established. In the proof of the direct part, a codebook is generated by a randomized scheme and partitioned by Csiszár’s almost independent coloring scheme. Unlike the linear network coding schemes, our coding scheme works over the binary field and is hence independent of the scale of the network.Entropy2016-06-24187Article10.3390/e180702382381099-43002016-06-24doi: 10.3390/e18070238Dan HeWangmei Guo<![CDATA[Entropy, Vol. 18, Pages 239: Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation]]>
http://www.mdpi.com/1099-4300/18/7/239
The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on the analysis of the behavior of the optimum weight and the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized by the power of the input entropy, which is estimated recursively to reduce computational complexity. In equalization simulations, the proposed algorithm simultaneously yields a lower minimum MSE (mean squared error) and faster convergence than the original MEE algorithm. At the same convergence speed, its steady-state MSE improvement exceeds 3 dB.Entropy2016-06-24187Article10.3390/e180702392391099-43002016-06-24doi: 10.3390/e18070239Namyong KimKihyeon Kwon<![CDATA[Entropy, Vol. 18, Pages 240: When Is an Area Law Not an Area Law?]]>
http://www.mdpi.com/1099-4300/18/7/240
Entanglement entropy is typically proportional to area, but sometimes it acquires an additional logarithmic pre-factor. We offer some intuitive explanations for these facts.Entropy2016-06-24187Article10.3390/e180702402401099-43002016-06-24doi: 10.3390/e18070240Anushya ChandranChris LaumannRafael Sorkin<![CDATA[Entropy, Vol. 18, Pages 237: Thermodynamic Analysis of Resources Used in Thermal Spray Processes: Energy and Exergy Methods]]>
http://www.mdpi.com/1099-4300/18/7/237
In manufacturing, thermal spray technology encompasses a group of coating processes that provide functional surfaces to improve the performance of components and protect them from corrosion, wear, heat and other failures. Many types and forms of feedstock can be thermal sprayed, and each requires different process conditions and life cycle preparations. The required thermal energy is generated by a chemical (combustion) or electrical (plasma or arc) energy source. Due to the high inefficiencies associated with energy and material consumption in these processes, a comprehensive analysis of the resources used is promising for sustainable improvement. This study aims to identify and compare the influence of using different forms of feedstock (powder, suspension) as well as energy sources (combustion, plasma) on the efficiency and effectiveness of energy conversion and resource consumption for different thermal spray processes, based on energy and exergy analysis. The exergy destruction ratio and effectiveness efficiency are used to evaluate the energy conversion efficiency. The degree of perfection and degree of energy ratio are applied to account for the intensity of resource consumption (energy or material) in thermal spray processes. It is indicated that high-velocity suspension flame spray has the lowest effectiveness efficiency and the highest exergy destruction compared to other thermal spray processes. For resource accounting purposes, in general, suspension thermal spray showed a lower degree of perfection and accordingly a higher inefficiency of resource use compared to powder thermal spray.Entropy2016-06-24187Article10.3390/e180702372371099-43002016-06-24doi: 10.3390/e18070237Kamran TaheriMohamed ElhorinyMartin PlachettaRainer Gadow<![CDATA[Entropy, Vol. 18, Pages 236: Generalisations of Fisher Matrices]]>
http://www.mdpi.com/1099-4300/18/6/236
Fisher matrices play an important role in experimental design and in data analysis. Their primary role is to make predictions for the inference of model parameters—both their errors and covariances. In this short review, I outline a number of extensions to the simple Fisher matrix formalism, covering a number of recent developments in the field. These are: (a) situations where the data (in the form of ( x , y ) pairs) have errors in both x and y; (b) modifications to parameter inference in the presence of systematic errors, or through fixing the values of some model parameters; (c) Derivative Approximation for LIkelihoods (DALI) - higher-order expansions of the likelihood surface, going beyond the Gaussian shape approximation; (d) extensions of the Fisher-like formalism, to treat model selection problems with Bayesian evidence.Entropy2016-06-22186Review10.3390/e180602362361099-43002016-06-22doi: 10.3390/e18060236Alan Heavens<![CDATA[Entropy, Vol. 18, Pages 235: A PUT-Based Approach to Automatically Extracting Quantities and Generating Final Answers for Numerical Attributes]]>
http://www.mdpi.com/1099-4300/18/6/235
Automatically extracting quantities and generating final answers for numerical attributes is very useful on many occasions, including question answering, image processing, human-computer interaction, etc. A common approach is to learn linguistic templates or wrappers and employ some algorithm or model to generate a final answer. However, building linguistic templates or wrappers is a tough task for builders. In addition, linguistic templates or wrappers are domain-dependent. To free the builder from building linguistic templates or wrappers, we propose a new approach to final answer generation based on the Predicates-Units Table (PUT), a mini domain-independent knowledge base. It is worth pointing out that quantities may not be represented well in the following cases: quantities may lack units, or may be wrong for a given question; even when all quantities are represented well, their units may be inconsistent. These cases have a strong impact on final answer solving. One thousand nine hundred twenty-six real queries are employed to test the proposed method, and the experimental results show that the average correctness ratio of our approach is 87.1%.Entropy2016-06-22186Article10.3390/e180602352351099-43002016-06-22doi: 10.3390/e18060235Yaqing LiuLidong WangRong ChenYingjie SongYalin Cai<![CDATA[Entropy, Vol. 18, Pages 234: Constant Slope Maps and the Vere-Jones Classification]]>
http://www.mdpi.com/1099-4300/18/6/234
We study continuous countably-piecewise monotone interval maps and formulate conditions under which these are conjugate to maps of constant slope, particularly when this slope is given by the topological entropy of the map. We confine our investigation to the Markov case and phrase our conditions in the terminology of the Vere-Jones classification of infinite matrices.Entropy2016-06-22186Article10.3390/e180602342341099-43002016-06-22doi: 10.3390/e18060234Jozef BobokHenk Bruin<![CDATA[Entropy, Vol. 18, Pages 232: 3D Buoyancy-Induced Flow and Entropy Generation of Nanofluid-Filled Open Cavities Having Adiabatic Diamond Shaped Obstacles]]>
http://www.mdpi.com/1099-4300/18/6/232
A three-dimensional computational solution has been obtained to investigate the natural convection and entropy generation of nanofluid-filled open cavities with an adiabatic diamond-shaped obstacle. In the model, the finite volume technique was used to solve the governing equations. In this configuration, the cavity is heated from the left vertical wall and the diamond shape is adiabatic. Effects of nanoparticle volume fraction, Rayleigh number (10³ ≤ Ra ≤ 10⁶) and width of the diamond shape were studied as governing parameters. It was found that the geometry of the partition is a control parameter for heat and fluid flow inside the open enclosure.Entropy2016-06-21186Article10.3390/e180602322321099-43002016-06-21doi: 10.3390/e18060232Lioua KolsiOmid MahianHakan ÖztopWalid AichMohamed BorjiniNidal Abu-HamdehHabib Aissia<![CDATA[Entropy, Vol. 18, Pages 231: Product Design Time Forecasting by Kernel-Based Regression with Gaussian Distribution Weights]]>
http://www.mdpi.com/1099-4300/18/6/231
Design time forecasting suffers from the problems of small samples and heteroscedastic noise. To solve them, a kernel-based regression with Gaussian distribution weights (GDW-KR) is proposed here. GDW-KR maintains a Gaussian distribution over weight vectors for the regression. It is applied to seek the least informative distribution among those that keep the target value within the confidence interval of the forecast value. GDW-KR inherits the benefits of Gaussian margin machines. By assuming a Gaussian distribution over weight vectors, it can simultaneously offer a point forecast and its confidence interval, thus providing more information about product design time. Our experiments with real examples verify the effectiveness and flexibility of GDW-KR.Entropy2016-06-21186Article10.3390/e180602312311099-43002016-06-21doi: 10.3390/e18060231Zhi-Gen ShangHong-Sen Yan<![CDATA[Entropy, Vol. 18, Pages 233: Entropic Measure of Time, and Gas Expansion in Vacuum]]>
http://www.mdpi.com/1099-4300/18/6/233
This study considers the advantages of a measure of time based on the entropy change under irreversible processes (entropy production). Using the example of the non-equilibrium expansion of an ideal gas in vacuum, such a measure is introduced. It is shown that, in the general case, this measure of time proves to be nonlinearly related to the reference measure assumed uniform by convention. The connection between this result and the results of other authors investigating the measure of time in some biological and cosmological problems is noted.Entropy2016-06-21186Article10.3390/e180602332331099-43002016-06-21doi: 10.3390/e18060233Leonid MartyushevEvgenii Shaiapin<![CDATA[Entropy, Vol. 18, Pages 229: Investigating Aging-Related Changes in the Coordination of Agonist and Antagonist Muscles Using Fuzzy Entropy and Mutual Information]]>
http://www.mdpi.com/1099-4300/18/6/229
Aging alters muscular coordination patterns. This study aimed to investigate aging-related changes in the coordination of agonist and antagonist muscles from two aspects, the activities of individual muscles and the inter-muscular coupling. Eighteen young subjects and 10 elderly subjects were recruited to modulate the agonist muscle activity to track a target during voluntary isometric elbow flexion and extension. Normalized muscle activation and fuzzy entropy (FuzzyEn) were applied to depict the activities of biceps and triceps. Mutual information (MI) was utilized to measure the inter-muscular coupling between biceps and triceps. The agonist activation decreased and the antagonist activation increased significantly during elbow flexion and extension with aging. FuzzyEn values of agonist electromyogram (EMG) were similar between the two age groups. FuzzyEn values of antagonist EMG increased significantly with aging during elbow extension. MI decreased significantly with aging during elbow extension. These results indicated increased antagonist co-activation and decreased inter-muscular coupling with aging during elbow extension, which might result from the reduced reciprocal inhibition and the recruitment of additional cortical-spinal pathways connected to biceps. Based on FuzzyEn and MI, this study provided a comprehensive understanding of the mechanisms underlying the aging-related changes in the coordination of agonist and antagonist muscles.Entropy2016-06-20186Article10.3390/e180602292291099-43002016-06-20doi: 10.3390/e18060229Wenbo SunJingtao LiangYuan YangYuanyu WuTiebin YanRong Song<![CDATA[Entropy, Vol. 18, Pages 213: Optimal Noise Enhanced Signal Detection in a Unified Framework]]>
http://www.mdpi.com/1099-4300/18/6/213
In this paper, a new framework for variable detectors is formulated to solve different optimal noise-enhanced signal detection problems, where six disjoint sets of detector and discrete vector pairs are defined according to two inequality constraints on the detection and false-alarm probabilities. Theorems and algorithms built on this framework are then presented to search for the optimal noise-enhanced solutions that maximize the relative improvements of the detection and false-alarm probabilities, respectively. Further, the optimal noise-enhanced solution for the maximum overall improvement is obtained from the framework, and the relationship among the three maxima is presented. In addition, sufficient conditions for improvability or non-improvability under the two constraints are given. Finally, numerous examples illustrate the theoretical results, and proofs of the main theorems are given in the Appendix.Entropy2016-06-17186Article10.3390/e180602132131099-43002016-06-17doi: 10.3390/e18060213Ting YangShujun LiuMingchun TangKui ZhangXinzheng Zhang<![CDATA[Entropy, Vol. 18, Pages 230: On Extensions over Semigroups and Applications]]>
http://www.mdpi.com/1099-4300/18/6/230
Applying a theorem due to Rhemtulla and Formanek, we partially solve an open problem raised by Hochman with an affirmative answer. Namely, we show that if G is a countable torsion-free locally nilpotent group that acts by homeomorphisms on X, and S ⊂ G is a subsemigroup not containing the unit of G such that f ∈ 〈1, sf : s ∈ S〉 for every f ∈ C(X), then (X, G) has zero topological entropy.Entropy2016-06-15186Article10.3390/e180602302301099-43002016-06-15doi: 10.3390/e18060230Wen HuangLei JinXiangdong Ye<![CDATA[Entropy, Vol. 18, Pages 197: Information and Selforganization: A Unifying Approach and Applications]]>
http://www.mdpi.com/1099-4300/18/6/197
Selforganization is a process by which the interaction between the parts of a complex system gives rise to the spontaneous emergence of patterns, structures or functions. In this interaction the system elements exchange matter, energy and information. We focus our attention on the relations between selforganization and information in general and the way they are linked to cognitive processes in particular. We do so from the analytical and mathematical perspective of the “second foundation of synergetics” and its “synergetic computer” and with reference to several forms of information: Shannon’s information that deals with the quantity of a message irrespective of its meaning, semantic and pragmatic forms of information that deal with the meaning conveyed by messages and information adaptation that refers to the interplay between Shannon’s information and semantic or pragmatic information. We first elucidate the relations between selforganization and information theoretically and mathematically and then by means of specific case studies.Entropy2016-06-14186Article10.3390/e180601971971099-43002016-06-14doi: 10.3390/e18060197Hermann HakenJuval Portugali<![CDATA[Entropy, Vol. 18, Pages 228: Discrete Time Dirac Quantum Walk in 3+1 Dimensions]]>
http://www.mdpi.com/1099-4300/18/6/228
In this paper we consider quantum walks whose evolution converges to the Dirac equation in the limit of small wave-vectors. We give an exact Fast Fourier transform implementation of the Dirac quantum walks in one, two, and three space dimensions. The behaviour of particle states—defined as states smoothly peaked in some wave-vector eigenstate of the walk—is described by an approximate dispersive differential equation that for small wave-vectors gives the usual Dirac particle and antiparticle kinematics. The accuracy of the approximation is provided in terms of a lower bound on the fidelity between the exactly evolved state and the approximated one. The jittering of the position operator expectation value for states having both a particle and an antiparticle component is analytically derived and observed in the numerical implementations.Entropy2016-06-14186Article10.3390/e180602282281099-43002016-06-14doi: 10.3390/e18060228Giacomo D’ArianoNicola MoscoPaolo PerinottiAlessandro Tosini<![CDATA[Entropy, Vol. 18, Pages 227: Fractional-Order Grey Prediction Method for Non-Equidistant Sequences]]>
http://www.mdpi.com/1099-4300/18/6/227
Non-equidistant sequences arise in many practical applications due to random sampling, imperfect sensors, event-triggered phenomena, and so on. A new grey prediction method for non-equidistant sequences (r-NGM(1,1)) is proposed, based on the basic grey model and a fractional-order non-equidistant accumulated generating operation (r-NAGO), with the accumulated order extended from positive to negative. The r-NAGO removes the randomness of the original sequences through weighted accumulation and improves the exponential law of the accumulated sequences. Furthermore, the Levenberg–Marquardt algorithm is used to optimize the fractional order. The optimized r-NGM(1,1) can enhance the predictive performance for non-equidistant sequences. Results for three practical engineering cases demonstrate that the proposed r-NGM(1,1) provides significantly better predictive performance than the traditional grey model.Entropy2016-06-14186Article10.3390/e180602272271099-43002016-06-14doi: 10.3390/e18060227Yue ShenBo HePing Qin<![CDATA[Entropy, Vol. 18, Pages 171: Information-Theoretic-Entropy Based Weight Aggregation Method in Multiple-Attribute Group Decision-Making]]>
http://www.mdpi.com/1099-4300/18/6/171
Weight aggregation is the key step in solving a multiple-attribute group decision-making (MAGDM) problem. This paper proposes an approach to objectivize subjective information and to aggregate information from both the attribute values themselves and decision-makers' judgments. An MAGDM problem without information about decision-makers' and attributes' weights is considered. To capture decision-makers' subjective preferences, their utility functions are introduced. The attribute value matrix is converted into a subjective attribute value matrix based on their subjective judgments of the attribute values. Using the entropy weighting technique, each decision-maker's subjective weight on attributes and the objective weight on attributes are determined from the subjective attribute value matrix and the attribute value matrix, respectively. Based on the principle of minimum cross-entropy, all decision-makers' subjective weights are integrated into a single weight vector that is closest to all decision-makers' judgments without adding any extra information. Then, by applying the principle of minimum cross-entropy again, a weight aggregation method is proposed to combine the subjective and objective attribute weights. Finally, an MAGDM example of project selection illustrates the procedure of the proposed method.Entropy2016-06-14186Article10.3390/e180601711711099-43002016-06-14doi: 10.3390/e18060171Dayi HeJiaqiang XuXiaoling Chen<![CDATA[Entropy, Vol. 18, Pages 226: Nano-Crystallization of High-Entropy Amorphous NbTiAlSiWxNy Films Prepared by Magnetron Sputtering]]>
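The entropy weighting technique invoked in the MAGDM abstract above admits a compact implementation. The sketch below follows the standard formulation (column-wise proportions, Shannon entropy normalized by ln n, weights proportional to the divergence 1 − e); the paper's exact normalization may differ, and the example matrix is made up:

```python
import numpy as np

def entropy_weights(X):
    """Objective attribute weights via the entropy weighting technique.
    X: (alternatives x attributes) matrix of positive attribute values."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    p = X / X.sum(axis=0)            # column-wise proportions
    # Shannon entropy of each attribute, normalized to [0, 1] by ln(n).
    plogp = np.zeros_like(p)
    mask = p > 0
    plogp[mask] = p[mask] * np.log(p[mask])
    e = -plogp.sum(axis=0) / np.log(n)
    d = 1.0 - e                      # degree of divergence
    return d / d.sum()

# Illustrative data: attribute 0 barely varies, attribute 1 varies a lot.
X = np.array([[0.9, 10.0],
              [1.0, 2.0],
              [1.1, 7.0]])
w = entropy_weights(X)
```

An attribute whose values barely vary has near-maximal entropy and therefore receives a small weight, which is the intuition behind the technique.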
http://www.mdpi.com/1099-4300/18/6/226
High-entropy amorphous NbTiAlSiWxNy films (x = 0 or 1, i.e., NbTiAlSiNy and NbTiAlSiWNy) were prepared by the magnetron sputtering method in a mixed N2 + Ar atmosphere (N2 + Ar = 24 standard cubic centimeters per minute (sccm), where N2 = 0, 4, and 8 sccm). All the as-deposited films present amorphous structures, which remain stable at 700 °C for over 24 h. After heat treatment at 1000 °C the films began to crystallize; while the NbTiAlSiNy films (N2 = 4, 8 sccm) exhibit a face-centered cubic (FCC) structure, the NbTiAlSiW metallic films show a body-centered cubic (BCC) structure and then transition into an FCC structure composed of nanoscale particles with increasing nitrogen flow rate. The hardness and modulus of the as-deposited NbTiAlSiNy films reach maximum values of 20.5 GPa and 206.8 GPa, respectively. For the as-deposited NbTiAlSiWNy films, the hardness and modulus first increase to maximum values of 13.6 GPa and 154.4 GPa, respectively, and then decrease as the N2 flow rate is increased. Both films could be potential candidates for protective coatings at high temperature.Entropy2016-06-13186Article10.3390/e180602262261099-43002016-06-13doi: 10.3390/e18060226Wenjie ShengXiao YangCong WangYong Zhang<![CDATA[Entropy, Vol. 18, Pages 224: Entropy Generation on MHD Eyring–Powell Nanofluid through a Permeable Stretching Surface]]>
http://www.mdpi.com/1099-4300/18/6/224
In this article, entropy generation of an Eyring–Powell nanofluid through a permeable stretching surface has been investigated. The impact of magnetohydrodynamics (MHD) and nonlinear thermal radiation is also taken into account. The governing flow problem is modeled with the help of similarity transformation variables. The resulting nonlinear ordinary differential equations are solved numerically with a combination of the successive linearization method and the Chebyshev spectral collocation method. The impact of all the emerging parameters, such as the Hartmann number, Prandtl number, radiation parameter, Lewis number, thermophoresis parameter, Brownian motion parameter, Reynolds number, fluid parameter, and Brinkmann number, is discussed with the help of graphs and tables. It is observed that the magnetic field opposes the flow. Moreover, the entropy generation profile behaves as an increasing function of all the physical parameters.Entropy2016-06-08186Article10.3390/e180602242241099-43002016-06-08doi: 10.3390/e18060224Muhammad BhattiTehseen AbbasMohammad RashidiMohamed AliZhigang Yang<![CDATA[Entropy, Vol. 18, Pages 225: Extreme Learning Machine for Multi-Label Classification]]>
http://www.mdpi.com/1099-4300/18/6/225
Extreme learning machine (ELM) techniques have received considerable attention in the computational intelligence and machine learning communities because of the very low computational time required for training new classifiers. ELM provides solutions for regression, clustering, binary classification, multiclass classification, and so on, but not for multi-label learning. Multi-label learning deals with objects having multiple labels simultaneously, which widely exist in real-world applications. Therefore, a thresholding-method-based ELM is proposed in this paper to adapt ELM to multi-label classification, called extreme learning machine for multi-label classification (ELM-ML). ELM-ML outperforms other multi-label classification methods on several standard data sets in most cases, especially for applications which only have a small labeled data set.Entropy2016-06-08186Article10.3390/e180602252251099-43002016-06-08doi: 10.3390/e18060225Xia SunJingting XuChangmeng JiangJun FengSu-Shing ChenFeijuan He<![CDATA[Entropy, Vol. 18, Pages 223: Entropy Generation on Nanofluid Flow through a Horizontal Riga Plate]]>
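To make the ELM training step described above concrete, here is a minimal sketch: the hidden layer is random and fixed, and only the output weights are learned, by least squares via the pseudoinverse. The thresholding stage that ELM-ML adds for multi-label decisions is not shown, and the toy two-label task below is purely illustrative:

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, seed=0):
    """Basic ELM training: random fixed hidden layer, output weights by
    least squares. Y holds one target column per label (here +/-1 coding)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # random nonlinear feature map
    beta = np.linalg.pinv(H) @ Y      # the only trained parameters
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy multi-label task: two labels, one per input coordinate's sign.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
Y = np.where(np.column_stack([X[:, 0] > 0, X[:, 1] > 0]), 1.0, -1.0)
W, b, beta = elm_train(X, Y)
accuracy = (np.sign(elm_predict(X, W, b, beta)) == Y).mean()
```

Because the hidden layer is never trained, fitting reduces to a single linear solve, which is the source of ELM's speed advantage.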
http://www.mdpi.com/1099-4300/18/6/223
In this article, entropy generation on a viscous nanofluid through a horizontal Riga plate has been examined. The present flow problem consists of the continuity, linear momentum, thermal energy, and nanoparticle concentration equations, which are simplified with the help of the Oberbeck–Boussinesq approximation. The resulting highly nonlinear coupled partial differential equations are solved numerically by means of the shooting method (SM). The expressions for the local Nusselt number and local Sherwood number are also taken into account and discussed with the help of tables. The physical influence of all the emerging parameters, such as the Brownian motion parameter, thermophoresis parameter, Brinkmann number, Richardson number, nanoparticle flux parameter, Lewis number, and suction parameter, is demonstrated graphically. In particular, we discuss their influence on the velocity profile, temperature profile, nanoparticle concentration profile, and entropy profile.Entropy2016-06-08186Article10.3390/e180602232231099-43002016-06-08doi: 10.3390/e18060223Tehseen AbbasMuhammad AyubMuhammad BhattiMohammad RashidiMohamed Ali<![CDATA[Entropy, Vol. 18, Pages 222: Stimuli-Magnitude-Adaptive Sample Selection for Data-Driven Haptic Modeling]]>
http://www.mdpi.com/1099-4300/18/6/222
Data-driven haptic modeling is an emerging technique where contact dynamics are simulated and interpolated based on a generic input-output matching model identified by data sensed from interaction with target physical objects. In data-driven modeling, selecting representative samples from a large set of data so that they can efficiently and accurately describe the whole dataset has been a long-standing problem. This paper presents a new algorithm for the sample selection where the variances of output are observed for selecting representative input-output samples in order to ensure the quality of output prediction. The main idea is that representative pairs of input-output are chosen so that the ratio of the standard deviation to the mean of the corresponding output group does not exceed an application-dependent threshold. This output- and standard deviation-based sample selection is very effective in applications where the variance or relative error of the output should be kept within a certain threshold. This threshold is used for partitioning the input space using Binary Space Partitioning-tree (BSP-tree) and k-means algorithms. We apply the new approach to a data-driven haptic modeling scenario where the relative error of the output prediction result should be less than a perceptual threshold. For evaluation, the proposed algorithm is compared to two state-of-the-art sample selection algorithms for regression tasks. Four kinds of haptic related behavior–force datasets are tested. The results showed that the proposed algorithm outperformed the others in terms of output-approximation quality and computational complexity.Entropy2016-06-07186Article10.3390/e180602222221099-43002016-06-07doi: 10.3390/e18060222Arsen AbdulaliWaseem HassanSeokhee Jeon<![CDATA[Entropy, Vol. 18, Pages 220: Zero Entropy Is Generic]]>
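The std/mean-ratio selection idea above can be sketched as a recursive partition: split the input space until each group's coefficient of variation of the outputs falls below the threshold, then keep one representative per group. This simplification uses only BSP-style median splits (the paper additionally employs k-means), and the threshold value and data are illustrative:

```python
import numpy as np

def select_representatives(X, y, threshold=0.1):
    """Return indices of representative samples: each one stands for a
    group whose outputs satisfy std/|mean| <= threshold (a simplified
    sketch of the paper's BSP-tree partitioning)."""
    reps = []

    def split(ids):
        out = y[ids]
        mean = out.mean()
        if len(ids) == 1 or (mean != 0 and out.std() / abs(mean) <= threshold):
            # Group is homogeneous enough: keep the sample closest to the mean.
            reps.append(ids[np.argmin(np.abs(out - mean))])
            return
        # Split along the input dimension with the largest spread.
        dim = int(np.argmax(X[ids].max(axis=0) - X[ids].min(axis=0)))
        cut = np.median(X[ids, dim])
        left, right = ids[X[ids, dim] <= cut], ids[X[ids, dim] > cut]
        if len(left) == 0 or len(right) == 0:   # degenerate split: stop here
            reps.append(ids[np.argmin(np.abs(out - mean))])
            return
        split(left)
        split(right)

    split(np.arange(len(y)))
    return np.sort(np.array(reps))
```

A tighter threshold yields more, smaller groups and hence more representatives, trading model size against output accuracy.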
http://www.mdpi.com/1099-4300/18/6/220
Dan Rudolph showed that for an amenable group, Γ, the generic measure-preserving action of Γ on a Lebesgue space has zero entropy. Here, this is extended to nonamenable groups. In fact, the proof shows that every action is a factor of a zero entropy action! This uses the strange phenomenon that, in the presence of nonamenability, entropy can increase under a factor map. The proof uses Seward’s recent generalization of Sinai’s Factor Theorem, the Gaboriau–Lyons result, and my theorem that for every nonabelian free group, all Bernoulli shifts factor onto each other.Entropy2016-06-04186Article10.3390/e180602202201099-43002016-06-04doi: 10.3390/e18060220Lewis Bowen<![CDATA[Entropy, Vol. 18, Pages 221: Application of Entropy-Based Metrics to Identify Emotional Distress from Electroencephalographic Recordings]]>
http://www.mdpi.com/1099-4300/18/6/221
Recognition of emotions is still an unresolved challenge, which could be helpful to improve current human-machine interfaces. Recently, nonlinear analysis of some physiological signals has been shown to play a more relevant role in this context than their traditional linear exploration. Thus, the present work introduces for the first time the application of three recent entropy-based metrics: sample entropy (SE), quadratic SE (QSE) and distribution entropy (DE) to discern between emotional states of calm and negative stress (also called distress). In the last few years, distress has received growing attention because it is a common negative factor in the modern lifestyle of people from developed countries and, moreover, it may lead to serious mental and physical health problems. Precisely, 279 segments of 32-channel electroencephalographic (EEG) recordings from 32 subjects elicited to be calm or negatively stressed have been analyzed. The results show that QSE is the first single metric presented to date with the ability to identify negative stress. Indeed, this metric reported a discriminant ability of around 70%, which is only slightly lower than that obtained in some previous works. Nonetheless, discriminant models built from dozens or even hundreds of features have previously been obtained by using advanced classifiers to yield diagnostic accuracies of about 80%. Moreover, in agreement with previous neuroanatomical findings, QSE also revealed notable differences across all the brain regions in the neural activation triggered by the two considered emotions. Consequently, given these results, as well as the easy interpretation of QSE, this work opens a new avenue for the detection of emotional distress, which may offer new insights into the brain’s behavior under this negative emotion.Entropy2016-06-03186Article10.3390/e180602212211099-43002016-06-03doi: 10.3390/e18060221Beatriz García-MartínezArturo Martínez-RodrigoRoberto Zangróniz CantabranaJose Pastor GarcíaRaúl Alcaraz<![CDATA[Entropy, Vol. 18, Pages 218: Single Neuron Stochastic Predictive PID Control Algorithm for Nonlinear and Non-Gaussian Systems Using the Survival Information Potential Criterion]]>
http://www.mdpi.com/1099-4300/18/6/218
This paper presents a novel stochastic predictive tracking control strategy for nonlinear and non-Gaussian stochastic systems based on the single neuron controller structure in the framework of information theory. Firstly, in order to characterize the randomness of the control system, survival information potential (SIP), instead of entropy, is adopted to formulate the performance index, which is not shift-invariant, i.e., its value varies with the change of the distribution location. Then, the optimal weights of the single neuron controller can be obtained by minimizing the presented SIP based predictive control criterion. Furthermore, mean-square convergence of the proposed control algorithm is also analyzed from the energy conservation perspective. Finally, a numerical example is given to show the effectiveness of the proposed method.Entropy2016-06-03186Article10.3390/e180602182181099-43002016-06-03doi: 10.3390/e18060218Mifeng RenTing ChengJunghui ChenXinying XuLan Cheng<![CDATA[Entropy, Vol. 18, Pages 217: Empirical Laws and Foreseeing the Future of Technological Progress]]>
http://www.mdpi.com/1099-4300/18/6/217
Moore’s law (ML) is one of many empirical expressions used to characterize natural and artificial phenomena. ML addresses technological progress and is expected to predict future trends. Yet, the “art” of predicting is often confused with the accurate fitting of trendlines to past events. Presently, data series from multiple sources are available for scientific and computational processing. The data can be described by means of mathematical expressions that, in some cases, follow simple empirical laws. However, extrapolation toward the future is considered with skepticism by the scientific community, particularly in the case of phenomena involving complex behavior. This paper addresses these issues in the light of entropy and pseudo-state space. The statistical and dynamical techniques lead to a more assertive perspective on the adoption of a given candidate law.Entropy2016-06-02186Article10.3390/e180602172171099-43002016-06-02doi: 10.3390/e18060217António LopesJosé Tenreiro MachadoAlexandra Galhano<![CDATA[Entropy, Vol. 18, Pages 219: Correction: Wolpert, D.H. The Free Energy Requirements of Biological Organisms; Implications for Evolution. Entropy 2016, 18, 138]]>
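Fitting a Moore-like trendline, the step the abstract above warns not to confuse with genuine prediction, reduces to linear least squares on the logarithm of the data. The series below is synthetic (exact doubling every two years), used only to show the mechanics:

```python
import numpy as np

# Synthetic "transistor count" series that doubles every 2 years.
years = np.arange(1990, 2001)
counts = 1e6 * 2.0 ** ((years - 1990) / 2.0)

# Log-linear least squares: log2(counts) is linear in the year,
# and the slope is doublings per year.
slope, intercept = np.polyfit(years, np.log2(counts), 1)
doubling_time = 1.0 / slope   # years per doubling; 2.0 for this data
```

A perfect fit to such past data says nothing, by itself, about whether the trend extrapolates, which is precisely the paper's point.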
http://www.mdpi.com/1099-4300/18/6/219
The following corrections should be made to the published paper [1]: [...]Entropy2016-06-02186Correction10.3390/e180602192191099-43002016-06-02doi: 10.3390/e18060219David Wolpert<![CDATA[Entropy, Vol. 18, Pages 128: Experimental Study of Single Phase Flow in a Closed-Loop Cooling System with Integrated Mini-Channel Heat Sink]]>
http://www.mdpi.com/1099-4300/18/6/128
The flow and heat transfer characteristics of a closed-loop cooling system with a mini-channel heat sink for the thermal management of electronics are studied experimentally. The heat sink is designed with corrugated fins to improve its heat dissipation capability. The experiments are performed using variable coolant volumetric flow rates and input heating powers. The experimental results show a high and reliable thermal performance of the heat sink with corrugated fins. The heat transfer capability is improved up to 30 W/cm2 while the base temperature is kept at a stable and acceptable level. Besides the heat transfer enhancement, the capability of the system to transfer heat over a long distance is also studied, and a fast thermal response time to reach steady state is observed once the input heating power or the volumetric flow rate is varied. Under different input heat source powers and volumetric flow rates, our results suggest potential applications of the designed mini-channel heat sink in cooling microelectronics.Entropy2016-06-02186Article10.3390/e180601281281099-43002016-06-02doi: 10.3390/e18060128Lei MaXuxin ZhaoHongyuan SunQixing WuWei Liu<![CDATA[Entropy, Vol. 18, Pages 216: On Two-Distillable Werner States]]>
http://www.mdpi.com/1099-4300/18/6/216
We consider bipartite mixed states ρ in a d ⊗ d quantum system. We say that ρ is PPT if its partial transpose (1 ⊗ T)(ρ) is positive semidefinite, and otherwise ρ is NPT. The well-known Werner states are divided into three types: (a) the separable states (the same as the PPT states); (b) the one-distillable states (necessarily NPT); and (c) the NPT states which are not one-distillable. We give several different formulations and provide further evidence for the validity of the conjecture that Werner states of type (c) are not two-distillable.Entropy2016-06-02186Article10.3390/e180602162161099-43002016-06-02doi: 10.3390/e18060216Dragomir Đoković<![CDATA[Entropy, Vol. 18, Pages 215: General Bulk-Viscous Solutions and Estimates of Bulk Viscosity in the Cosmic Fluid]]>
http://www.mdpi.com/1099-4300/18/6/215
We derive a general formalism for bulk viscous solutions of the energy-conservation equation for ρ(a, ζ), both for a single-component and a multicomponent fluid in the Friedmann universe. For our purposes, these general solutions become valuable in estimating the order of magnitude of the phenomenological viscosity in the cosmic fluid at present. H(z) observations are found to put an upper limit on the modulus of the present-day bulk viscosity. It is found to be ζ0 ∼ 10^6 Pa·s, in agreement with previous works. We point out that this magnitude is acceptable from a hydrodynamic point of view. Finally, we bring new insight by using our estimates of ζ to analyze the fate of the future universe. Of special interest is the case ζ ∝ ρ, for which the fluid, originally situated in the quintessence region, may slide through the phantom barrier and inevitably be driven into a big rip. Typical rip times are found to be a few hundred gigayears.Entropy2016-06-02186Article10.3390/e180602152151099-43002016-06-02doi: 10.3390/e18060215Ben NormannIver Brevik<![CDATA[Entropy, Vol. 18, Pages 214: Harmonic Source Localization Approach Based on Fast Kernel Entropy Optimization ICA and Minimum Conditional Entropy]]>
http://www.mdpi.com/1099-4300/18/6/214
Based on the fast kernel entropy optimization independent component analysis and the minimum conditional entropy, this paper proposes a harmonic source localization method which aims at accurately estimating harmonic currents and identifying harmonic sources. The injected harmonic currents are estimated by the fast kernel entropy optimization independent component analysis (FKEO-ICA) in the absence of prior knowledge of harmonic impedances. Then, the minimum conditional entropy is applied to locate the harmonic sources based on the estimated harmonic currents. The proposed harmonic source localization method is validated on the IEEE 34-bus system. By applying the correlation coefficient and three error evaluation indicators, comparison has been made among the performances of the FKEO-ICA and three other ICA algorithms. The results show that the FKEO-ICA algorithm could achieve a significantly better accuracy of harmonic current estimation, while the minimum conditional entropy could determine the locations of harmonic sources precisely.Entropy2016-06-01186Article10.3390/e180602142141099-43002016-06-01doi: 10.3390/e18060214Tianlei ZangZhengyou HeLing FuJing ChenQingquan Qian<![CDATA[Entropy, Vol. 18, Pages 211: A Confidence Set Analysis for Observed Samples: A Fuzzy Set Approach]]>
http://www.mdpi.com/1099-4300/18/6/211
Confidence sets are generally interpreted in terms of replications of an experiment. However, this interpretation is only valid before observing the sample. After observing the sample, any confidence set has probability zero or one of containing the parameter value. In this paper, we provide a confidence set analysis for an observed sample based on fuzzy set theory, using the concept of membership functions. We show that the traditional ad hoc thresholds (the confidence and significance levels) can be attained from a general membership function. The applicability of the newly proposed theory is demonstrated by using well-known examples from the statistical literature and an application in the context of contingency tables.Entropy2016-05-30186Review10.3390/e180602112111099-43002016-05-30doi: 10.3390/e18060211José GonzálezLuis CastroVíctor LachosAlexandre Patriota<![CDATA[Entropy, Vol. 18, Pages 212: Extended First Law for Entanglement Entropy in Lovelock Gravity]]>
http://www.mdpi.com/1099-4300/18/6/212
The first law for the holographic entanglement entropy of spheres in a boundary CFT (Conformal Field Theory) with a bulk Lovelock dual is extended to include variations of the bulk Lovelock coupling constants. Such variations in the bulk correspond to perturbations within a family of boundary CFTs. The new contribution to the first law is found to be the product of the variation δa of the “A”-type trace anomaly coefficient for even-dimensional CFTs, or more generally its extension δa* to include odd-dimensional boundaries, times the ratio S/a*. Since a* is a measure of the number of degrees of freedom N per unit volume of the boundary CFT, this new term has the form μ δN, where the chemical potential μ is given by the entanglement entropy per degree of freedom.Entropy2016-05-30186Article10.3390/e180602122121099-43002016-05-30doi: 10.3390/e18060212David KastorSourya RayJennie Traschen<![CDATA[Entropy, Vol. 18, Pages 210: Unified Quantum Model of Work Generation in Thermoelectric Generators, Solar and Fuel Cells]]>
http://www.mdpi.com/1099-4300/18/6/210
In previous papers, the idea of “hidden oscillations” has been applied to explain work generation in semiconductor photovoltaic cells and thermoelectric generators. The aim of this paper is firstly to extend this approach to fuel cells and, secondly, to create a unified quantum model for all types of such devices. They are treated as electron pumps powered by heat or chemical engines. The working fluid is the electron gas, and the necessary oscillating element (“piston”) is provided by plasma oscillations. Those oscillations are localized around the junction, which also serves as a diode rectifying the fast electric charge oscillations and yielding a final direct current (DC) output. The dynamics of the devices are governed by Markovian master equations that can be derived in a rigorous way from the underlying Hamiltonian models and are consistent with the laws of thermodynamics. The new ingredient is the derivation of master equations for systems driven by chemical reactions.Entropy2016-05-28186Article10.3390/e180602102101099-43002016-05-28doi: 10.3390/e18060210Robert Alicki<![CDATA[Entropy, Vol. 18, Pages 207: Sparse Estimation Based on a New Random Regularized Matching Pursuit Generalized Approximate Message Passing Algorithm]]>
http://www.mdpi.com/1099-4300/18/6/207
Approximate Message Passing (AMP) and Generalized AMP (GAMP) algorithms usually suffer from serious convergence issues when the elements of the sensing matrix do not exactly match the zero-mean Gaussian assumption. To stabilize AMP/GAMP in these contexts, we propose a new sparse reconstruction algorithm, termed the Random regularized Matching pursuit GAMP (RrMpGAMP). It utilizes a random splitting support operation and some dropout/replacement support operations to regularize the matching pursuit steps, and uses a new GAMP-like algorithm to estimate the non-zero elements of a sparse vector. Moreover, the proposed algorithm saves considerable memory, has a computational complexity comparable to that of GAMP, and supports parallel computing in some steps. We have analyzed the convergence of this GAMP-like algorithm by the replica method and provided its convergence conditions. The analysis also explains the broader admissible variance range of the sensing matrix elements for this GAMP-like algorithm. Experiments using simulated data and real-world synthetic aperture radar tomography (TomoSAR) data show that our method provides the expected performance in scenarios where AMP/GAMP diverges.Entropy2016-05-28186Article10.3390/e180602072071099-43002016-05-28doi: 10.3390/e18060207Yongjie LuoGuan GuiXunchao CongQun Wan<![CDATA[Entropy, Vol. 18, Pages 179: Multiphoton Controllable Transport between Remote Resonators]]>
http://www.mdpi.com/1099-4300/18/6/179
We develop a novel method for multiphoton controllable transport between remote resonators. Specifically, an auxiliary resonator is used to control the coherent long-range coupling of two spatially separated resonators, mediated by a coupled-resonator chain of arbitrary length. In this manner, an arbitrary multiphoton quantum state can be either transmitted through or reflected off the intermediate chain on demand, with very high fidelity. We find, on using a time-independent perturbative treatment, that quantum information leakage of an arbitrary Fock state is limited by two upper bounds, one for the transmitted case and the other for the reflected case. In principle, the two upper bounds can be made arbitrarily small, which is confirmed by numerical simulations.Entropy2016-05-27186Article10.3390/e180601791791099-43002016-05-27doi: 10.3390/e18060179Wei QinGuilu Long<![CDATA[Entropy, Vol. 18, Pages 208: Quantum Coherent Three-Terminal Thermoelectrics: Maximum Efficiency at Given Power Output]]>
http://www.mdpi.com/1099-4300/18/6/208
This work considers the nonlinear scattering theory for three-terminal thermoelectric devices used for power generation or refrigeration. Such systems are quantum phase-coherent versions of a thermocouple, and the theory applies to systems in which interactions can be treated at a mean-field level. It considers an arbitrary three-terminal system in any external magnetic field, including systems with broken time-reversal symmetry, such as chiral thermoelectrics, as well as systems in which the magnetic field plays no role. It is shown that the upper bound on efficiency at given power output is of quantum origin and is stricter than Carnot’s bound. The bound is exactly the same as previously found for two-terminal devices and can be achieved by three-terminal systems with or without broken time-reversal symmetry, i.e., chiral and non-chiral thermoelectrics.Entropy2016-05-27186Article10.3390/e180602082081099-43002016-05-27doi: 10.3390/e18060208Robert Whitney<![CDATA[Entropy, Vol. 18, Pages 209: Shannon Entropy in a European Seabass (Dicentrarchus labrax) System during the Initial Recovery Period after a Short-Term Exposure to Methylmercury]]>
http://www.mdpi.com/1099-4300/18/6/209
Methylmercury (MeHg) is an environmental contaminant of increasing relevance as a seafood safety hazard that affects the health and welfare of fish. Non-invasive, on-line methodologies to monitor and evaluate the behavior of a fish system in aquaculture may make the identification of altered systems feasible—for example, due to the presence of agents that compromise their welfare and wholesomeness—and find a place in the implementation of Hazard Analysis and Critical Control Points and Fish Welfare Assurance Systems. The Shannon entropy (SE) of a European seabass (Dicentrarchus labrax) system has been shown to differentiate MeHg-treated from non-treated fish, the former displaying a lower SE value than the latter. However, little is known about the initial evolution of the system after removal of the toxicant. To help to cover this gap, the present work aims at providing information about the evolution of the SE of a European seabass system during a recuperation period of 11 days following a two-week treatment with 4 µg·MeHg/L. The results indicate that the SE of the system did not show a recovery trend during the examined period, displaying erratic responses with daily fluctuations and lacking a tendency to reach the initial SE values.Entropy2016-05-27186Article10.3390/e180602092091099-43002016-05-27doi: 10.3390/e18060209Harkaitz EguiraunKarmele López-de-IpiñaIciar Martinez<![CDATA[Entropy, Vol. 18, Pages 206: A Reliable Algorithm for a Local Fractional Tricomi Equation Arising in Fractal Transonic Flow]]>
http://www.mdpi.com/1099-4300/18/6/206
This work presents a reliable algorithm based on the local fractional homotopy perturbation Sumudu transform technique for solving a local fractional Tricomi equation occurring in fractal transonic flow. The proposed technique provides results without transforming the equation into discrete counterparts or imposing restrictive assumptions, and is completely free of round-off errors. The results show that the approach is straightforward to apply, computationally efficient, and accurate.Entropy2016-05-25186Article10.3390/e180602062061099-43002016-05-25doi: 10.3390/e18060206Jagdev SinghDevendra KumarJuan Nieto<![CDATA[Entropy, Vol. 18, Pages 205: Maximum Power Output of Quantum Heat Engine with Energy Bath]]>
http://www.mdpi.com/1099-4300/18/6/205
The difference between the quantum isoenergetic process and the quantum isothermal process arises from the violation of the law of equipartition of energy in the quantum regime. To reveal the physical meaning of this fact, we study a special type of quantum heat engine consisting of three processes: isoenergetic, isothermal and adiabatic. This engine therefore works between an energy bath and a heat bath. By combining two engines of this kind, it is possible to realize the quantum Carnot engine. Furthermore, considering a finite rate of change of the potential shape (here, an infinite square well with moving walls), the power output of the engine is discussed. It is found that the efficiency and power output both depend closely on the initial and final states of the quantum isothermal process. The performance of the engine cycle can be optimized by controlling the occupation probability of the ground state, which is determined by the temperature and the potential width. The relation between efficiency and power output is also discussed.Entropy2016-05-25186Article10.3390/e180602052051099-43002016-05-25doi: 10.3390/e18060205Shengnan LiuCongjie Ou<![CDATA[Entropy, Vol. 18, Pages 200: Numerical Simulation of Entropy Generation with Thermal Radiation on MHD Carreau Nanofluid towards a Shrinking Sheet]]>
http://www.mdpi.com/1099-4300/18/6/200
In this article, entropy generation with radiation in the flow of a non-Newtonian Carreau nanofluid towards a shrinking sheet is investigated numerically. The effects of magnetohydrodynamics (MHD) are also taken into account. First, similarity variables reduce the governing partial differential equations to ordinary differential equations. The resulting nonlinear differential equations are solved numerically with the successive linearization method and the Chebyshev spectral collocation method. The influence of all the emerging parameters is discussed with the help of graphs and tables. It is observed that the magnetic field and the fluid parameters oppose the flow. Thermal radiation effects and the Prandtl number show opposite effects on the temperature profile. Furthermore, the entropy profile increases with all the physical parameters.Entropy2016-05-24186Article10.3390/e180602002001099-43002016-05-24doi: 10.3390/e18060200Muhammad BhattiTehseen AbbasMohammad RashidiMohamed Ali<![CDATA[Entropy, Vol. 18, Pages 202: Hydrodynamic Theories for Flows of Active Liquid Crystals and the Generalized Onsager Principle]]>
http://www.mdpi.com/1099-4300/18/6/202
We articulate and apply the generalized Onsager principle to derive transport equations for active liquid crystals in a fixed domain as well as in a free surface domain adjacent to a passive fluid matrix. The Onsager principle ensures fundamental variational structure of the models as well as dissipative properties of the passive component in the models, irrespective of the choice of scale (kinetic to continuum) and of the physical potentials. Many popular models for passive and active liquid crystals in a fixed domain subject to consistent boundary conditions at solid walls, as well as active liquid crystals in a free surface domain with consistent transport equations along the free boundaries, can be systematically derived from the generalized Onsager principle. The dynamical boundary conditions are shown to reduce to the static boundary conditions for passive liquid crystals used previously.Entropy2016-05-24186Article10.3390/e180602022021099-43002016-05-24doi: 10.3390/e18060202Xiaogang YangJun LiM. ForestQi Wang<![CDATA[Entropy, Vol. 18, Pages 204: Distant Supervision for Relation Extraction with Ranking-Based Methods]]>
http://www.mdpi.com/1099-4300/18/6/204
Relation extraction has benefited from distant supervision in recent years with the development of natural language processing techniques and the explosion of available data. However, distant supervision is still greatly limited by the quality of its training data, a consequence of its motivation to greatly reduce the heavy cost of data annotation. In this paper, we construct an architecture called MIML-sort (Multi-instance Multi-label Learning with Sorting Strategies), built on the well-known MIML framework. Based on MIML-sort, we propose three ranking-based methods for sample selection, with which we train relation extractors on a subset of the training data. Experiments are set up on the KBP (Knowledge Base Population) corpus, one of the benchmark datasets for distant supervision, which is large and noisy. Compared with previous work, the proposed methods produce considerably better results. Furthermore, the three methods together achieve the best F1 on the official testing set, with an optimal enhancement of F1 from 27.3% to 29.98%.Entropy2016-05-24186Article10.3390/e180602042041099-43002016-05-24doi: 10.3390/e18060204Yang XiangQingcai ChenXiaolong WangYang Qin<![CDATA[Entropy, Vol. 18, Pages 203: Numerical Investigation of Thermal Radiation and Viscous Effects on Entropy Generation in Forced Convection Blood Flow over an Axisymmetric Stretching Sheet]]>
http://www.mdpi.com/1099-4300/18/6/203
A numerical and analytical investigation of the effects of thermal radiation and viscous heating on the convective flow of a non-Newtonian, incompressible fluid over an axisymmetric stretching sheet with a constant-temperature wall is performed. The power-law model of blood is used as the non-Newtonian model of the fluid, while the Rosseland model for radiative heat transfer in an absorbing medium and viscous heating are considered as the heat sources. The non-dimensional governing equations are transformed to similarity form and solved numerically. A parameter study of entropy generation in the medium is presented, based on the Second Law of Thermodynamics, considering parameters such as the thermal radiation parameter, the Brinkman number, the Prandtl number, and the Eckert number.Entropy2016-05-24186Article10.3390/e180602032031099-43002016-05-24doi: 10.3390/e18060203Mohammad Abdollahzadeh JamalabadiPayam HooshmandAshkan HesabiMoon KwakIsma’il PirzadehAhmad KeikhaMohammadreza Negahdari<![CDATA[Entropy, Vol. 18, Pages 201: An Intelligent and Fast Chaotic Encryption Using Digital Logic Circuits for Ad-Hoc and Ubiquitous Computing]]>
http://www.mdpi.com/1099-4300/18/5/201
Delays added by the encryption process represent an overhead for smart computing devices in ad-hoc and ubiquitous computing intelligent systems. Digital logic circuits are faster than other computing techniques, so they can be used for fast encryption to minimize processing delays. Chaotic encryption is more attack-resilient than other encryption techniques. One of the most attractive properties of cryptography is the avalanche effect, in which two different keys produce distinct cipher text for the same information. Important properties of chaotic systems are sensitivity to initial conditions and nonlinearity, so two similar keys generate different cipher texts, which is a source of confusion for an attacker. In this paper, a novel fast and secure Chaotic Map-based encryption technique using 2’s Complement (CET-2C) is proposed. It uses a logistic map, in which a negligible difference in the parameters of the map generates a different cipher text. Cryptanalysis of the proposed algorithm shows the strength and security of the algorithm and its keys. Performance of the proposed algorithm has been analyzed in terms of running time, throughput and power consumption. Comparison graphs show that the proposed algorithm gives better results than other algorithms such as AES.Entropy2016-05-23185Article10.3390/e180502012011099-43002016-05-23doi: 10.3390/e18050201Ankur KharePiyush ShuklaMurtaza RizviShalini Stalin<![CDATA[Entropy, Vol. 18, Pages 199: Beyond Hypothesis Testing]]>
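The key sensitivity of logistic-map encryption described in the chaotic-encryption abstract above can be illustrated with a generic keystream sketch. This is not the authors' CET-2C algorithm: the XOR stream cipher, the parameter r = 3.99, the transient-skip count, and the function names are my own illustrative choices:

```python
def logistic_keystream(x0, r, n, skip=100):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(skip):              # discard transient iterations
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize the chaotic state to a byte
    return bytes(out)

def xor_cipher(data, key_x0, r=3.99):
    """Encrypt/decrypt by XOR with the logistic-map keystream (symmetric)."""
    ks = logistic_keystream(key_x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"ubiquitous computing"
c1 = xor_cipher(msg, 0.123456789)
c2 = xor_cipher(msg, 0.123456790)           # negligibly different key
print(c1 != c2)                             # True: trajectories diverge
print(xor_cipher(c1, 0.123456789) == msg)   # True: same key recovers plaintext
```

The two nearly identical keys yield different cipher texts because the map's positive Lyapunov exponent amplifies the 10⁻⁹ difference exponentially over the skipped transient.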
http://www.mdpi.com/1099-4300/18/5/199
The extraordinary success of physicists to find simple laws that explain many phenomena is beguiling. With the exception of quantum mechanics, it suggests a deterministic world in which theories are right or wrong, and the world is simple. However, attempts to apply such thinking to other phenomena have not been so successful. Individually and collectively we face many situations dominated by uncertainty, about weather and climate, about how wisely to raise children, and how the economy should be managed. The controversy about hypothesis testing is dominated by the tension between simple explanations and the complexity of the world we live in.Entropy2016-05-20185Review10.3390/e180501991991099-43002016-05-20doi: 10.3390/e18050199Joseph Kadane<![CDATA[Entropy, Vol. 18, Pages 198: What Exactly is the Nusselt Number in Convective Heat Transfer Problems and are There Alternatives?]]>
http://www.mdpi.com/1099-4300/18/5/198
The often used Nusselt number is critically questioned with respect to its physical meaning. Based on a rigorous dimensional analysis, alternative assessment numbers are found that in a systematic way separately account for the quantitative and qualitative aspect of a heat transfer process. The qualitative aspect is related to the entropy generated in the temperature field of a real, irreversible heat transfer. The irreversibility can be quantified by referring it to the so-called entropic potential of the energy involved in the transfer process.Entropy2016-05-20185Article10.3390/e180501981981099-43002016-05-20doi: 10.3390/e18050198Heinz Herwig<![CDATA[Entropy, Vol. 18, Pages 188: A Two-Stage Maximum Entropy Prior of Location Parameter with a Stochastic Multivariate Interval Constraint and Its Properties]]>
http://www.mdpi.com/1099-4300/18/5/188
This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon’s entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN), which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.Entropy2016-05-20185Article10.3390/e180501881881099-43002016-05-20doi: 10.3390/e18050188Hea-Jung Kim<![CDATA[Entropy, Vol. 18, Pages 196: Insights into Entropy as a Measure of Multivariate Variability]]>
http://www.mdpi.com/1099-4300/18/5/196
Entropy has been widely employed as a measure of variability for problems, such as machine learning and signal processing. In this paper, we provide some new insights into the behaviors of entropy as a measure of multivariate variability. The relationships between multivariate entropy (joint or total marginal) and traditional measures of multivariate variability, such as total dispersion and generalized variance, are investigated. It is shown that for the jointly Gaussian case, the joint entropy (or entropy power) is equivalent to the generalized variance, while total marginal entropy is equivalent to the geometric mean of the marginal variances and total marginal entropy power is equivalent to the total dispersion. The smoothed multivariate entropy (joint or total marginal) and the kernel density estimation (KDE)-based entropy estimator (with finite samples) are also studied, which, under certain conditions, will be approximately equivalent to the total dispersion (or a total dispersion estimator), regardless of the data distribution.Entropy2016-05-20185Article10.3390/e180501961961099-43002016-05-20doi: 10.3390/e18050196Badong ChenJianji WangHaiquan ZhaoJose Principe<![CDATA[Entropy, Vol. 18, Pages 194: Detection of Left-Sided and Right-Sided Hearing Loss via Fractional Fourier Transform]]>
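The jointly Gaussian equivalence stated in the multivariate-variability abstract above can be checked directly: for an N-dimensional Gaussian with covariance Σ, the joint entropy is h = ½ ln((2πe)^N |Σ|), so the entropy power e^(2h/N)/(2πe) equals |Σ|^(1/N), a monotone function of the generalized variance |Σ|. A numerical sketch with a hypothetical 2×2 covariance:

```python
import math

# Hypothetical 2-D Gaussian covariance matrix entries
s11, s12, s22 = 2.0, 0.6, 1.5
det_sigma = s11 * s22 - s12 * s12       # generalized variance |Sigma|

N = 2
joint_entropy = 0.5 * math.log((2 * math.pi * math.e) ** N * det_sigma)
entropy_power = math.exp(2 * joint_entropy / N) / (2 * math.pi * math.e)

print(abs(entropy_power - det_sigma ** (1 / N)) < 1e-12)  # True
```

The identity holds for any positive-definite Σ; the specific numbers above are illustrative only.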
http://www.mdpi.com/1099-4300/18/5/194
In order to detect hearing loss more efficiently and accurately, this study proposed a new method based on fractional Fourier transform (FRFT). Three-dimensional volumetric magnetic resonance images were obtained from 15 patients with left-sided hearing loss (LHL), 20 healthy controls (HC), and 14 patients with right-sided hearing loss (RHL). Twenty-five FRFT spectrums were reduced by principal component analysis with thresholds of 90%, 95%, and 98%, respectively. The classifier is the single-hidden-layer feed-forward neural network (SFN) trained by the Levenberg–Marquardt algorithm. The results showed that the accuracies of all three classes are higher than 95%. In all, our method is promising and may raise interest from other researchers.Entropy2016-05-19185Article10.3390/e180501941941099-43002016-05-19doi: 10.3390/e18050194Shuihua WangMing YangYin ZhangJianwu LiLing ZouSiyuan LuBin LiuJiquan YangYudong Zhang<![CDATA[Entropy, Vol. 18, Pages 193: Entropy and the Self-Organization of Information and Value]]>
http://www.mdpi.com/1099-4300/18/5/193
Adam Smith, Charles Darwin, Rudolf Clausius, and Léon Brillouin considered certain “values” as key quantities in their descriptions of market competition, natural selection, thermodynamic processes, and information exchange, respectively. None of those values can be computed from elementary properties of the particular object they are attributed to, but rather values represent emergent, irreducible properties. In this paper, such values are jointly understood as information values in certain contexts. For this aim, structural information is distinguished from symbolic information. While the first can be associated with arbitrary physical processes or structures, the latter requires conventions which govern encoding and decoding of the symbols which form a message. As a value of energy, Clausius’ entropy is a universal measure of the structural information contained in a thermodynamic system. The structural information of a message, in contrast to its meaning, can be evaluated by Shannon’s entropy of communication. Symbolic information is found only in the realm of life, such as in animal behavior, human sociology, science, or technology, and is often cooperatively valuated by competition. Ritualization is described here as a universal scenario for the self-organization of symbols by which symbolic information emerges from structural information in the course of evolution processes. Emergent symbolic information exhibits the novel fundamental code symmetry which prevents the meaning of a message from being reducible to the physical structure of its carrier. While symbols turn arbitrary during the ritualization transition, their structures preserve information about their evolution history.Entropy2016-05-19185Article10.3390/e180501931931099-43002016-05-19doi: 10.3390/e18050193Rainer FeistelWerner Ebeling<![CDATA[Entropy, Vol. 18, Pages 195: Predicting China’s SME Credit Risk in Supply Chain Finance Based on Machine Learning Methods]]>
http://www.mdpi.com/1099-4300/18/5/195
We propose a new integrated ensemble machine learning (ML) method, RS-RAB (Random Subspace-Real AdaBoost), for predicting the credit risk of China’s small and medium-sized enterprises (SMEs) in supply chain finance (SCF). The sample for the empirical analysis comprises two quarterly data sets covering the period 2012–2013: one includes 48 listed SMEs obtained from the SME Board of the Shenzhen Stock Exchange; the other consists of three listed core enterprises (CEs) and six listed CEs collected from the Main Board of the Shenzhen Stock Exchange and the Shanghai Stock Exchange, respectively. The experimental results show that RS-RAB possesses outstanding prediction performance and is very suitable for forecasting the credit risk of China’s SMEs in SCF, in comparison with three other ML methods.Entropy2016-05-19185Article10.3390/e180501951951099-43002016-05-19doi: 10.3390/e18050195You ZhuChi XieGang-Jin WangXin-Guo Yan<![CDATA[Entropy, Vol. 18, Pages 191: Rotation of Galaxies within Gravity of the Universe]]>
http://www.mdpi.com/1099-4300/18/5/191
Rotation of galaxies is examined by the general principle of least action. This law of nature describes a system in its surroundings, here specifically a galaxy in the surrounding Universe. According to this holistic theory the gravitational potential due to all matter in the expanding Universe relates to the universal curvature which, in turn, manifests itself as the universal acceleration. Then the orbital velocities from the central bulge to distant perimeters are understood to balance both the galactic and universal acceleration. Since the galactic acceleration decreases with distance from the galaxy’s center to its luminous edge, the orbital velocities of ever more distant stars and gas clouds tend toward a value that tallies the universal acceleration. This tiny term has been acknowledged earlier by including it as a parameter in the modified gravitational law, but here the tiny acceleration is understood to result from the gravitational potential that spans across the expanding Universe. This resolution of the galaxy rotation problem is compared with observations and contrasted with models of dark matter. Also, other astronomical observations that have been interpreted as evidence for dark matter are discussed in light of the least-action principle.Entropy2016-05-19185Article10.3390/e180501911911099-43002016-05-19doi: 10.3390/e18050191Arto Annila<![CDATA[Entropy, Vol. 18, Pages 189: MoNbTaV Medium-Entropy Alloy]]>
http://www.mdpi.com/1099-4300/18/5/189
Guided by CALPHAD (Calculation of Phase Diagrams) modeling, the refractory medium-entropy alloy MoNbTaV was synthesized by vacuum arc melting under a high-purity argon atmosphere. A body-centered cubic solid solution phase was experimentally confirmed in the as-cast ingot using X-ray diffraction and scanning electron microscopy. The measured lattice parameter of the alloy (3.208 Å) obeys the rule of mixtures (ROM), but the Vickers microhardness (4.95 GPa) and the yield strength (1.5 GPa) are about 4.5 and 4.6 times those estimated from the ROM, respectively. A simple model of solid-solution strengthening predicts a yield strength of approximately 1.5 GPa, in agreement with the measurement. Thermodynamic analysis shows that the total entropy of the alloy is more than three times the configurational entropy at room temperature, and the entropy of mixing exhibits a small negative departure from ideal mixing.Entropy2016-05-19185Article10.3390/e180501891891099-43002016-05-19doi: 10.3390/e18050189Hongwei YaoJun-Wei QiaoMichael GaoJeffrey HawkSheng-Guo MaHefeng Zhou<![CDATA[Entropy, Vol. 18, Pages 192: Common Probability Patterns Arise from Simple Invariances]]>
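The rule-of-mixtures (ROM) estimate mentioned in the MoNbTaV abstract above is simply the atomic-fraction-weighted average over the constituents. A quick arithmetic sketch: the elemental bcc lattice constants below are approximate textbook values I am assuming for illustration, not numbers taken from the article:

```python
# Approximate room-temperature bcc lattice constants (angstroms);
# literature-style values assumed for illustration only.
a = {"Mo": 3.147, "Nb": 3.300, "Ta": 3.301, "V": 3.024}

# Equiatomic MoNbTaV: ROM lattice parameter = simple average
a_rom = sum(a.values()) / len(a)
print(round(a_rom, 3))  # 3.193
```

The ROM estimate of roughly 3.19 Å is indeed close to the measured 3.208 Å reported above, consistent with the claim that the lattice parameter obeys the ROM.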
http://www.mdpi.com/1099-4300/18/5/192
Shift and stretch invariance lead to the exponential-Boltzmann probability distribution. Rotational invariance generates the Gaussian distribution. Particular scaling relations transform the canonical exponential and Gaussian patterns into the variety of commonly observed patterns. The scaling relations themselves arise from the fundamental invariances of shift, stretch and rotation, plus a few additional invariances. Prior work described the three fundamental invariances as a consequence of the equilibrium canonical ensemble of statistical mechanics or the Jaynesian maximization of information entropy. By contrast, I emphasize the primacy and sufficiency of invariance alone to explain the commonly observed patterns. Primary invariance naturally creates the array of commonly observed scaling relations and associated probability patterns, whereas the classical approaches derived from statistical mechanics or information theory require special assumptions to derive commonly observed scales.Entropy2016-05-19185Article10.3390/e180501921921099-43002016-05-19doi: 10.3390/e18050192Steven Frank<![CDATA[Entropy, Vol. 18, Pages 190: Specific Differential Entropy Rate Estimation for Continuous-Valued Time Series]]>
http://www.mdpi.com/1099-4300/18/5/190
We introduce a method for quantifying the inherent unpredictability of a continuous-valued time series via an extension of the differential Shannon entropy rate. Our extension, the specific entropy rate, quantifies the amount of predictive uncertainty associated with a specific state, rather than averaged over all states. We provide a data-driven approach for estimating the specific entropy rate of an observed time series. Finally, we consider three case studies of estimating the specific entropy rate from synthetic and physiological data relevant to the analysis of heart rate variability.Entropy2016-05-19185Article10.3390/e180501901901099-43002016-05-19doi: 10.3390/e18050190David Darmon<![CDATA[Entropy, Vol. 18, Pages 187: Charged, Rotating Black Objects in Einstein–Maxwell-Dilaton Theory in D ≥ 5]]>
http://www.mdpi.com/1099-4300/18/5/187
We show that the general framework proposed by Kleihaus et al. (2015) for the study of asymptotically flat vacuum black objects with k + 1 equal-magnitude angular momenta in D ≥ 5 spacetime dimensions (with 0 ≤ k ≤ (D − 5)/2) can be extended to the case of Einstein–Maxwell-dilaton (EMd) theory. This framework can describe black holes with spherical horizon topology, the simplest solutions corresponding to a class of electrically charged (dilatonic) Myers–Perry black holes. Balanced charged black objects with S^(n+1) × S^(2k+1) horizon topology can also be studied (with D = 2k + n + 4). Black rings correspond to the case k = 0, while the solutions with k > 0 are black ringoids. The basic properties of EMd solutions are discussed for the special case of a Kaluza–Klein value of the dilaton coupling constant. We argue that all features of these solutions can be derived from those of the vacuum seed configurations.Entropy2016-05-16185Article10.3390/e180501871871099-43002016-05-16doi: 10.3390/e18050187Burkhard KleihausJutta KunzEugen Radu<![CDATA[Entropy, Vol. 18, Pages 184: Selection of Entropy Based Features for Automatic Analysis of Essential Tremor]]>
http://www.mdpi.com/1099-4300/18/5/184
Biomedical systems produce biosignals that arise from interaction mechanisms. In general, those mechanisms occur across multiple scales, both spatial and temporal, and contain linear and non-linear information. In this framework, entropy measures are good candidates for providing useful evidence about disorder in the system, lack of information in time series and/or irregularity of the signals. The most common movement disorder is essential tremor (ET), which occurs 20 times more frequently than Parkinson’s disease. Interestingly, about 50%–70% of the cases of ET have a genetic origin. One of the most widely used standard tests for clinical diagnosis of ET is Archimedes’ spiral drawing. This work focuses on the selection of non-linear biomarkers from such drawings and handwriting, as part of a wider cross study on the diagnosis of essential tremor; our contribution is the selection of entropy features for early ET diagnosis. Classic entropy features are compared with features based on permutation entropy. An automatic analysis system built on several machine learning paradigms is evaluated, while automatic feature selection is implemented by means of the ANOVA (analysis of variance) test. The obtained results for early detection are promising and appear applicable to real environments.Entropy2016-05-16185Article10.3390/e180501841841099-43002016-05-16doi: 10.3390/e18050184Karmele López-de-IpiñaJordi Solé-CasalsMarcos Faundez-ZanuyPilar CalvoEnric SesaUnai Martinez de LizarduyPatricia De La RivaJose Marti-MassoBlanca BeitiaAlberto Bergareche<![CDATA[Entropy, Vol. 18, Pages 186: Quantum Thermodynamics in Strong Coupling: Heat Transport and Refrigeration]]>
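Permutation entropy, one of the feature families compared in the essential-tremor abstract above, can be sketched with the standard Bandt–Pompe ordinal-pattern estimator. The embedding parameters and the toy "regular vs irregular" signals below are my own illustrative choices, not the study's data:

```python
import math
import random
from collections import Counter

def permutation_entropy(x, m=3, tau=1):
    """Normalized Bandt-Pompe permutation entropy of series x
    (embedding dimension m, time delay tau)."""
    patterns = Counter()
    for i in range(len(x) - (m - 1) * tau):
        window = x[i : i + m * tau : tau]
        # ordinal pattern: argsort of the window values
        patterns[tuple(sorted(range(m), key=window.__getitem__))] += 1
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(m))  # normalize to [0, 1]

random.seed(0)
regular = [i % 4 for i in range(200)]              # periodic, smooth trace
irregular = [random.random() for _ in range(200)]  # tremor-like irregularity
print(permutation_entropy(regular) < permutation_entropy(irregular))  # True
```

Higher values indicate more irregular ordinal structure, which is why such features are candidate tremor biomarkers.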
http://www.mdpi.com/1099-4300/18/5/186
The performance characteristics of a heat rectifier and a heat pump are studied in a non-Markovian framework. The device is constructed from a molecule connected to a hot and a cold reservoir. The heat baths are modelled using the stochastic surrogate Hamiltonian method. The molecule is modelled by an asymmetric double-well potential. Each well is semi-locally connected to a heat bath composed of spins. The dynamics are driven by a combined system–bath Hamiltonian. The temperature of the baths is regulated by a secondary spin bath composed of identical spins in thermal equilibrium. A random swap operation exchanges spins between the primary and secondary baths. The combined system is studied at various system–bath coupling strengths. In all cases, the average heat current always flows from the hot towards the cold bath, in accordance with the second law of thermodynamics. The asymmetry of the double well generates a rectifying effect, meaning that the heat current retains the hot-to-cold direction when the left and right baths are exchanged. The heat current is larger when the higher-frequency well is coupled to the hot bath. Adding an external driving field can reverse the transport direction. Such a refrigeration effect is modelled by a periodic driving field in resonance with the frequency difference of the two potential wells. A minimal driving amplitude is required to overcome the heat leak effect. In the strong driving regime, the cooling power is non-monotonic in the system–bath coupling.Entropy2016-05-16185Article10.3390/e180501861861099-43002016-05-16doi: 10.3390/e18050186Gil KatzRonnie Kosloff<![CDATA[Entropy, Vol. 18, Pages 185: An Information Entropy-Based Animal Migration Optimization Algorithm for Data Clustering]]>
http://www.mdpi.com/1099-4300/18/5/185
Data clustering is useful in a wide range of application areas. The Animal Migration Optimization (AMO) algorithm is one of the recently introduced swarm-based algorithms, which has demonstrated good performances for solving numeric optimization problems. In this paper, we presented a modified AMO algorithm with an entropy-based heuristic strategy for data clustering. The main contribution is that we calculate the information entropy of each attribute for a given data set and propose an adaptive strategy that can automatically balance convergence speed and global search efforts according to its entropy in both migration and updating steps. A series of well-known benchmark clustering problems are employed to evaluate the performance of our approach. We compare experimental results with k-means, Artificial Bee Colony (ABC), AMO, and the state-of-the-art algorithms for clustering and show that the proposed AMO algorithm generally performs better than the compared algorithms on the considered clustering problems.Entropy2016-05-16185Article10.3390/e180501851851099-43002016-05-16doi: 10.3390/e18050185Lei HouJian GaoRong Chen<![CDATA[Entropy, Vol. 18, Pages 181: Geometric Model of Black Hole Quantum N-portrait, Extradimensions and Thermodynamics]]>
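The per-attribute information entropy driving the adaptive strategy in the AMO clustering abstract above can be estimated from a histogram of each attribute's values. This is a sketch only: the fixed-width binning, bin count, and toy data set are my own assumptions, not details of the proposed algorithm:

```python
import math

def attribute_entropy(values, bins=5):
    """Entropy (bits) of one attribute, estimated from a fixed-width histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0       # guard against zero-range attributes
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

# Hypothetical data set: one informative attribute, one nearly constant
informative = [0.1, 0.9, 0.3, 0.7, 0.5, 0.2, 0.8, 0.4]
constant    = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.6]
print(attribute_entropy(informative) > attribute_entropy(constant))  # True
```

An adaptive scheme of the kind described could then allocate more global search effort to high-entropy attributes and faster convergence to low-entropy ones.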
http://www.mdpi.com/1099-4300/18/5/181
Recently a short scale modified black hole metric, known as holographic metric, has been proposed in order to capture the self-complete character of gravity. In this paper we show that such a metric can reproduce some geometric features expected from the quantum N-portrait beyond the semi-classical limit. We show that for a generic N this corresponds to having an effective energy momentum tensor in Einstein equations or, equivalently, non-local terms in the gravity action. We also consider the higher dimensional extension of the metric and the case of an AdS cosmological term. We provide a detailed thermodynamic analysis of both cases, with particular reference to the repercussions on the Hawking-Page phase transition.Entropy2016-05-14185Article10.3390/e180501811811099-43002016-05-14doi: 10.3390/e18050181Antonia FrassinoSven KöppelPiero Nicolini<![CDATA[Entropy, Vol. 18, Pages 180: Relationship between Population Dynamics and the Self-Energy in Driven Non-Equilibrium Systems]]>
http://www.mdpi.com/1099-4300/18/5/180
We compare the decay rates of excited populations directly calculated within a Keldysh formalism to the equation of motion of the population itself for a Hubbard-Holstein model in two dimensions. While it is true that these two approaches must give the same answer, it is common to make a number of simplifying assumptions, within the differential equation for the populations, that allows one to interpret the decay in terms of hot electrons interacting with a phonon bath. Here, we show how care must be taken to ensure an accurate treatment of the equation of motion for the populations due to the fact that there are identities that require cancellations of terms that naively look like they contribute to the decay rates. In particular, the average time dependence of the Green’s functions and self-energies plays a pivotal role in determining these decay rates.Entropy2016-05-13185Article10.3390/e180501801801099-43002016-05-13doi: 10.3390/e18050180Alexander KemperJames Freericks<![CDATA[Entropy, Vol. 18, Pages 183: A Conjecture Regarding the Extremal Values of Graph Entropy Based on Degree Powers]]>
http://www.mdpi.com/1099-4300/18/5/183
Many graph invariants have been used for the construction of entropy-based measures to characterize the structure of complex networks. The starting point has always been to assign a probability distribution to the network before applying Shannon’s entropy. In particular, Cao et al. (2014 and 2015) defined special graph entropy measures which are based on degree powers. In this paper, we obtain some lower and upper bounds for these measures and characterize the extremal graphs. Moreover, we resolve one part of a conjecture stated by Cao et al.Entropy2016-05-13185Article10.3390/e180501831831099-43002016-05-13doi: 10.3390/e18050183Kinkar DasMatthias Dehmer
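The degree-power entropies studied in the last abstract can be computed directly from a degree sequence: assign each vertex the probability p_i = d_i^k / Σ_j d_j^k and take the Shannon entropy of that distribution. A sketch of this construction; the star-versus-cycle comparison is my own illustrative example, not one of the paper's extremal results:

```python
import math

def degree_power_entropy(degrees, k=1):
    """Shannon entropy (bits) of the degree-power distribution
    p_i = d_i**k / sum_j d_j**k over the vertices of a graph."""
    total = sum(d ** k for d in degrees)
    return -sum((d ** k / total) * math.log2(d ** k / total) for d in degrees)

star_degrees = [4, 1, 1, 1, 1]     # star on 5 vertices: one hub, four leaves
cycle_degrees = [2, 2, 2, 2, 2]    # cycle C5: regular, uniform distribution
print(degree_power_entropy(star_degrees) < degree_power_entropy(cycle_degrees))  # True
```

Regular graphs give the uniform distribution and hence the maximal value log2(n), which is why extremal characterizations of such measures naturally involve regularity.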