Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy

Entropy, Vol. 18, Pages 434: Entropy and Stability Analysis of Delayed Energy Supply–Demand Model
http://www.mdpi.com/1099-4300/18/12/434
In this paper, a four-dimensional energy supply–demand model with two delays is built. In this model, the interactions among the energy demand of east China, the energy supply of west China, and the utilization of renewable energy in east China are delayed. We discuss the stability of the system as affected by its parameters and the existence of a Hopf bifurcation at the equilibrium point from two aspects: a single delay and two delays. The stability and complexity of the system are demonstrated in numerical simulation through bifurcation diagrams, Poincaré section plots, entropy diagrams, etc. The simulation results show that parameters beyond the stable region destabilize the system and increase its complexity. At that point, fluctuations of the energy supply–demand system make it difficult to formulate energy policies. Finally, bifurcation control is realized successfully by the method of delayed feedback control. The bifurcation control simulations indicate that the system can return to a stable state by adjusting the control parameter; moreover, the bigger the value of the control parameter, the better the effect of the bifurcation control. The results of this paper can help maintain the stability of the system, which will be conducive to energy scheduling.

Entropy 2016, 18(12), 434; Article; doi: 10.3390/e18120434; ISSN 1099-4300; published 2016-12-03. Authors: Jing Wang, Fengshan Si, Yuling Wang, Shide Duan.

Entropy, Vol. 18, Pages 430: Multivariable Fuzzy Measure Entropy Analysis for Heart Rate Variability and Heart Sound Amplitude Variability
http://www.mdpi.com/1099-4300/18/12/430
Simultaneously analyzing multivariate time series provides insight into the underlying interaction mechanisms of the cardiovascular system and has recently become an increasing focus of interest. In this study, we proposed a new multivariate entropy measure, named multivariate fuzzy measure entropy (mvFME), for the analysis of multivariate cardiovascular time series. The performances of mvFME and its two sub-components, the local multivariate fuzzy entropy (mvFEL) and the global multivariate fuzzy entropy (mvFEG), as well as the commonly used multivariate sample entropy (mvSE), were tested on both simulated and cardiovascular multivariate time series. Simulation results on multivariate coupled Gaussian signals showed that the statistical stability of mvFME is better than that of mvSE, but its computation time is higher. Then, mvSE and mvFME were applied to multivariate cardiovascular signal analysis of the R wave peak (RR) interval and the first (S1) and second (S2) heart sound amplitude series, collected from three heart sound positions, under two physiological states: at rest and after stair climbing. The results showed that, compared with the rest state, univariate analysis after stair climbing gives significantly lower mvSE and mvFME values for both the RR interval and S1 amplitude series, but not for the S2 amplitude series. In bivariate analysis, both mvSE and mvFME report significantly lower values after stair climbing. In trivariate analysis, only mvFME can discriminate the two physiological states, whereas mvSE cannot. In summary, the newly proposed mvFME method shows better statistical stability and better discrimination ability for multivariate time series analysis than the traditional mvSE method.

Entropy 2016, 18(12), 430; Article; doi: 10.3390/e18120430; published 2016-12-03. Authors: Lina Zhao, Shoushui Wei, Hong Tang, Chengyu Liu.

Entropy, Vol. 18, Pages 432: EEG-Based Person Authentication Using a Fuzzy Entropy-Related Approach with Two Electrodes
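The multivariate measures compared in the Zhao et al. abstract (mvSE, mvFME) extend univariate sample entropy. A minimal pure-Python sketch of that univariate building block follows; this is not the authors' implementation, and the defaults m = 2 and r = 0.2 standard deviations are conventional illustrative choices:

```python
import math
import random

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B), where B counts template pairs matching
    within tolerance r at length m and A those still matching at m+1."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    def count(mm):
        # same template index set for lengths m and m+1, as is standard
        templates = [x[i:i + mm] for i in range(n - m)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count(m), count(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)

random.seed(0)
regular = [0.0, 1.0] * 50                      # perfectly repetitive series
noisy = [random.gauss(0, 1) for _ in range(100)]
assert sample_entropy(regular) == 0.0          # every m-match extends to m+1
assert sample_entropy(noisy) > sample_entropy(regular)
```

A regular series scores zero because every length-m match also matches at length m+1, while irregular data scores higher; the multivariate variants in the paper apply the same matching logic to composite delay vectors built across channels.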
http://www.mdpi.com/1099-4300/18/12/432
Person authentication based on electroencephalography (EEG) signals is one possible direction in the study of EEG signals. In this paper, a method for selecting EEG electrodes and features in a discriminative manner is proposed. Given that EEG signals are unstable and non-linear, a non-linear analysis method, i.e., fuzzy entropy, is more appropriate. Unlike other methods using different signal sources and patterns, such as the rest state and motor imagery, a novel paradigm using stimuli of self-photos and non-self-photos is introduced. Ten subjects took part in the experiment, and fuzzy entropy is used as a feature to select the minimum number of electrodes that identifies individuals. The experimental results show that the proposed method can make use of only two electrodes (FP1 and FP2) in the frontal area while keeping the classification accuracy above 87.3%. The proposed biometric system based on EEG signals can provide each subject with a unique key and is capable of human recognition.

Entropy 2016, 18(12), 432; Communication; doi: 10.3390/e18120432; published 2016-12-02. Authors: Zhendong Mu, Jianfeng Hu, Jianliang Min.

Entropy, Vol. 18, Pages 433: Foliations-Webs-Hessian Geometry-Information Geometry-Entropy and Cohomology
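The fuzzy entropy feature used in the EEG study above differs from sample entropy in replacing the hard tolerance threshold with a smooth exponential membership function. A minimal sketch, with illustrative parameters rather than the paper's settings:

```python
import math
import random

def fuzzy_entropy(x, m=2, r=0.2, n_fuzzy=2):
    """FuzzyEn: like sample entropy, but each template pair contributes a
    graded similarity exp(-(d/r)^n) instead of a hard 0/1 match.
    Templates are mean-subtracted, as is conventional for FuzzyEn."""
    n = len(x)
    def phi(mm):
        templ = []
        for i in range(n - m):          # same template set for m and m+1
            seg = x[i:i + mm]
            mu = sum(seg) / mm
            templ.append([v - mu for v in seg])
        total, pairs = 0.0, 0
        for i in range(len(templ)):
            for j in range(i + 1, len(templ)):
                d = max(abs(a - b) for a, b in zip(templ[i], templ[j]))
                total += math.exp(-(d / r) ** n_fuzzy)
                pairs += 1
        return total / pairs
    return -math.log(phi(m + 1) / phi(m))

random.seed(1)
regular = [0.0, 1.0] * 50
noise = [random.gauss(0, 1) for _ in range(100)]
assert abs(fuzzy_entropy(regular)) < 1e-6      # repetitive signal: near zero
assert fuzzy_entropy(noise) > fuzzy_entropy(regular)
```

The graded membership makes the estimate vary smoothly with r, which is one reason fuzzy entropy is preferred over hard-threshold entropies for short, noisy EEG segments.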
http://www.mdpi.com/1099-4300/18/12/433
IN MEMORIAM OF ALEXANDER GROTHENDIECK. THE MAN.

Entropy 2016, 18(12), 433; Article; doi: 10.3390/e18120433; published 2016-12-02. Author: Michel Boyom.

Entropy, Vol. 18, Pages 431: Maximum Entropy Production Is Not a Steady State Attractor for 2D Fluid Convection
http://www.mdpi.com/1099-4300/18/12/431
Multiple authors have claimed that the natural convection of a fluid is a process that exhibits maximum entropy production (MEP). However, almost all such investigations were limited to fixed temperature boundary conditions (BCs). It was found that under those conditions the system tends to maximize its heat flux, and hence it was concluded that the MEP state is a dynamical attractor. However, since entropy production varies with heat flux and the difference of inverse temperatures, any complete investigation of entropy production must allow for variations in both heat flux and temperature difference. Only then can we legitimately assess whether the MEP state is the most attractive. Our previous work made use of negative feedback BCs to explore this possibility, and we found that the steady state of the system was far from the MEP state. For any system, entropy production can only be maximized subject to a finite set of physical and material constraints. In our previous work, it was possible that the adopted set of fluid parameters was constraining the system in such a way that it was entirely prevented from reaching the MEP state. Hence, in the present work, we used a different set of boundary parameters, such that the steady states of the system were in the local vicinity of the MEP state. If MEP were indeed an attractor, relaxing the constraints of our previous work should have caused a discrete perturbation to the surface of steady-state heat flux values near the value corresponding to MEP. We found no such perturbation, and hence no discernible attraction to the MEP state. Furthermore, systems with fixed flux BCs actually minimize their entropy production (relative to the alternative stable state, that of pure diffusive heat transport). This leads us to conclude that the principle of MEP is not an accurate indicator of which stable steady state a convective system will adopt. However, for all BCs considered, the quotient of heat flux and temperature difference, F/ΔT, which is proportional to the dimensionless Nusselt number, does appear to be maximized.

Entropy 2016, 18(12), 431; Article; doi: 10.3390/e18120431; published 2016-12-01. Authors: Stuart Bartlett, Nathaniel Virgo.

Entropy, Vol. 18, Pages 429: CoFea: A Novel Approach to Spam Review Identification Based on Entropy and Co-Training
http://www.mdpi.com/1099-4300/18/12/429
With the rapid development of electronic commerce, spam reviews are rapidly growing on the Internet to manipulate online customers' opinions of goods being sold. This paper proposes a novel approach, called CoFea (Co-training by Features), to identify spam reviews, based on entropy and the co-training algorithm. After sorting all lexical terms of reviews by entropy, we produce two views of the reviews by dividing the lexical terms into two subsets: one subset contains the odd-numbered terms and the other the even-numbered terms. Using an SVM (support vector machine) as the base classifier, we further propose two strategies embedded in the CoFea approach, CoFea-T and CoFea-S. The CoFea-T strategy uses all terms in the subsets for spam review identification by SVM. The CoFea-S strategy uses a predefined number of terms with small entropy. The experimental results show that the CoFea-T strategy produces better accuracy than the CoFea-S strategy, while the CoFea-S strategy saves computing time relative to CoFea-T with acceptable accuracy in spam review identification.

Entropy 2016, 18(12), 429; Article; doi: 10.3390/e18120429; published 2016-11-30. Authors: Wen Zhang, Chaoqi Bu, Taketoshi Yoshida, Siguang Zhang.

Entropy, Vol. 18, Pages 428: Fiber-Mixing Codes between Shifts of Finite Type and Factors of Gibbs Measures
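The view construction described in the CoFea abstract (sort terms by entropy, then split by position parity) can be sketched as follows; the toy vocabulary, labels, and function names are illustrative assumptions, not the authors' code:

```python
import math
from collections import Counter

def term_entropy(labels_of_docs_containing_term):
    """Shannon entropy (bits) of a term's class distribution over the
    labeled reviews containing it; lower entropy = more discriminative."""
    counts = Counter(labels_of_docs_containing_term)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def split_views(vocab_entropy):
    """Sort terms by entropy, then form two views from the odd- and
    even-numbered positions, mirroring the CoFea setup."""
    ranked = [t for t, _ in sorted(vocab_entropy.items(), key=lambda kv: kv[1])]
    return ranked[0::2], ranked[1::2]   # odd-numbered view, even-numbered view

# toy vocabulary: entropy of each term's spam/ham label distribution
vocab = {"cheap": term_entropy(["spam", "spam", "spam"]),   # 0.0 bits
         "great": term_entropy(["spam", "ham"]),            # 1.0 bits
         "refund": term_entropy(["ham", "ham", "spam"])}    # ~0.918 bits
view_a, view_b = split_views(vocab)
assert set(view_a) | set(view_b) == set(vocab)   # the two views partition
assert not set(view_a) & set(view_b)             # the vocabulary
assert view_a[0] == "cheap"                      # lowest-entropy term first
```

Each view then trains its own SVM, and the two classifiers label unlabeled reviews for each other in the usual co-training loop.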
http://www.mdpi.com/1099-4300/18/12/428
A sliding block code π: X → Y between shift spaces is called fiber-mixing if, for every x and x′ in X with y = π(x) = π(x′), there is z ∈ π⁻¹(y) which is left asymptotic to x and right asymptotic to x′. A fiber-mixing factor code from a shift of finite type is a code of class degree 1 for which each point of Y has exactly one transition class. Given an infinite-to-one factor code between mixing shifts of finite type (of unequal entropies), we show that there is also a fiber-mixing factor code between them. This result may be regarded as an infinite-to-one (unequal entropies) analogue of Ashley's Replacement Theorem, which states that the existence of an equal entropy factor code between mixing shifts of finite type guarantees the existence of a degree 1 factor code between them. Properties of fiber-mixing codes and applications to factors of Gibbs measures are presented.

Entropy 2016, 18(12), 428; Article; doi: 10.3390/e18120428; published 2016-11-30. Author: Uijin Jung.

Entropy, Vol. 18, Pages 426: On Macrostates in Complex Multi-Scale Systems
http://www.mdpi.com/1099-4300/18/12/426
A characteristic feature of complex systems is their deep structure, meaning that the definition of their states and observables depends on the level, or the scale, at which the system is considered. This scale dependence is reflected in the distinction of micro- and macro-states, referring to lower and higher levels of description. There are several conceptual and formal frameworks to address the relation between them. Here, we focus on an approach in which macrostates are contextually emergent from (rather than fully reducible to) microstates and can be constructed by contextual partitions of the space of microstates. We discuss criteria for the stability of such partitions, in particular under the microstate dynamics, and outline some examples. Finally, we address the question of how macrostates arising from stable partitions can be identified as relevant or meaningful.

Entropy 2016, 18(12), 426; Article; doi: 10.3390/e18120426; published 2016-11-29. Author: Harald Atmanspacher.

Entropy, Vol. 18, Pages 424: Measurement on the Complexity Entropy of Dynamic Game Models for Innovative Enterprises under Two Kinds of Government Subsidies
http://www.mdpi.com/1099-4300/18/12/424
In this paper, taking the high-tech industry as the background, we build a dynamic duopoly game model in two cases with different government subsidies, based on innovation inputs and innovation outputs, respectively. We analyze the equilibrium solution and stability conditions of the system, and study its dynamic evolution under different system parameters by numerical simulation. The simulation results show that both innovation subsidy policies have positive effects on firms' innovation activities, and that improving the level of innovation can encourage firms to innovate. They also show that an excessive adjustment speed of innovation outputs may cause complicated dynamic phenomena such as bifurcation and chaos, in which the system has relatively higher entropy than in a stable state. The degree of the government innovation subsidies is also shown to affect the stability and entropy of the system.

Entropy 2016, 18(12), 424; Article; doi: 10.3390/e18120424; published 2016-11-29. Authors: Junhai Ma, Xinyan Sui, Lei Li.

Entropy, Vol. 18, Pages 427: Healthcare Teams Neurodynamically Reorganize When Resolving Uncertainty
http://www.mdpi.com/1099-4300/18/12/427
Research on the microscale neural dynamics of social interactions has yet to be translated into improvements in the assembly, training, and evaluation of teams. This is partially due to the scale of neural involvement in team activities, spanning the millisecond oscillations in individual brains to the minutes-to-hours performance behaviors of the team. We have used intermediate neurodynamic representations to show that healthcare teams enter persistent (50–100 s) neurodynamic states when they encounter and resolve uncertainty while managing simulated patients. Symbols were generated each second by situating the electroencephalogram (EEG) power of each team member in the context of those of the other team members and the task. These representations were acquired from EEG headsets with 19 recording electrodes for each of the 1–40 Hz frequencies. Estimates of the information in each symbol stream were calculated from a 60 s moving window of Shannon entropy that was updated each second, providing a quantitative neurodynamic history of the team's performance. Neurodynamic organization fluctuated with the task demands, with increased organization (i.e., lower entropy) occurring when the team needed to resolve uncertainty. These results show that intermediate neurodynamic representations can provide a quantitative bridge between the micro and macro scales of teamwork.

Entropy 2016, 18(12), 427; Article; doi: 10.3390/e18120427; published 2016-11-29. Authors: Ronald Stevens, Trysha Galloway, Donald Halpin, Ann Willemsen-Dunlap.

Entropy, Vol. 18, Pages 425: Anisotropically Weighted and Nonholonomically Constrained Evolutions on Manifolds
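The moving-window Shannon entropy described in the Stevens et al. abstract can be sketched in a few lines; the window length and symbol alphabet below are illustrative, and the update simply slides one step at a time as in their 60 s window updated each second:

```python
import math
from collections import Counter

def shannon_entropy(window):
    """Shannon entropy (bits) of the symbol frequencies in one window."""
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def moving_entropy(symbols, window=60):
    """Entropy profile over a sliding window, updated at every step."""
    return [shannon_entropy(symbols[i:i + window])
            for i in range(len(symbols) - window + 1)]

# a stream that collapses from 4 equiprobable symbols to a single one,
# mimicking a team becoming more organized (lower entropy) over time
stream = ["A", "B", "C", "D"] * 30 + ["A"] * 120
profile = moving_entropy(stream)
assert abs(profile[0] - 2.0) < 1e-12   # uniform over 4 symbols: 2 bits
assert profile[-1] == 0.0              # single repeated symbol: 0 bits
```

A drop in the profile corresponds to the increased neurodynamic organization the authors report during uncertainty resolution.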
http://www.mdpi.com/1099-4300/18/12/425
We present evolution equations for a family of paths that results from anisotropically weighting curve energies in non-linear statistics of manifold-valued data. This situation arises when performing inference on data that have non-trivial covariance and are anisotropically distributed. The family can be interpreted as most probable paths for a driving semi-martingale that, through stochastic development, is mapped to the manifold. We discuss how the paths are projections of geodesics for a sub-Riemannian metric on the frame bundle of the manifold, and how the curvature of the underlying connection appears in the sub-Riemannian Hamilton–Jacobi equations. Evolution equations for both metric and cometric formulations of the sub-Riemannian metric are derived. We furthermore show how rank-deficient metrics can be mixed with an underlying Riemannian metric, and we relate the paths to geodesics and polynomials in Riemannian geometry. Examples from the family of paths are visualized on embedded surfaces, and we explore computational representations on finite-dimensional landmark manifolds with geometry induced from right-invariant metrics on diffeomorphism groups.

Entropy 2016, 18(12), 425; Article; doi: 10.3390/e18120425; published 2016-11-26. Author: Stefan Sommer.

Entropy, Vol. 18, Pages 423: Consensus of Second Order Multi-Agent Systems with Exogenous Disturbance Generated by Unknown Exosystems
http://www.mdpi.com/1099-4300/18/12/423
This paper is concerned with the consensus problem of a class of second-order multi-agent systems subject to external disturbances generated by unknown exosystems. In comparison with the case where the disturbance is generated by known exosystems, we need to combine adaptive control with internal model design to deal with external disturbances generated by unknown exosystems. With the help of the internal model, an adaptive protocol is proposed for the consensus problem of the multi-agent systems. Finally, a numerical example is provided to demonstrate the effectiveness of the control design.

Entropy 2016, 18(12), 423; Article; doi: 10.3390/e18120423; published 2016-11-25. Authors: Xuxi Zhang, Qidan Zhu, Xianping Liu.

Entropy, Vol. 18, Pages 417: Condensation: Passenger Not Driver in Atmospheric Thermodynamics
http://www.mdpi.com/1099-4300/18/12/417
The second law of thermodynamics states that processes yielding work, or at least capable of yielding work, are thermodynamically spontaneous, and that those costing work are thermodynamically nonspontaneous; whether a process yields or costs heat is irrelevant. Condensation of water vapor yields work, and hence is thermodynamically spontaneous, only in a supersaturated atmosphere; in an unsaturated atmosphere it costs work and hence is thermodynamically nonspontaneous. Far more of Earth's atmosphere is unsaturated than supersaturated; on this basis alone, evaporation is far more often work-yielding, and hence thermodynamically spontaneous, than condensation in Earth's atmosphere, despite condensation always yielding heat and evaporation always costing heat. Furthermore, establishing the unstable, or at best metastable, condition of supersaturation, and maintaining it in the face of condensation that would wipe it out, is always work-costing and hence thermodynamically nonspontaneous, in Earth's atmosphere or anywhere else. The work required to enable supersaturation is usually provided at the expense of temperature differences that enable cooling to below the dew point. In the case of most interest to us, convective weather systems and storms, it is provided at the expense of vertical temperature gradients exceeding the moist adiabatic. Thus, ultimately, condensation is a work-costing, and hence thermodynamically nonspontaneous, process even in supersaturated regions of Earth's or any other atmosphere. While heat engines in general can in principle extract all of the work represented by any temperature difference, until it is totally neutralized to isothermality, convective weather systems and storms in particular cannot: they can extract only the work represented by partial neutralization of super-moist-adiabatic lapse rates to moist-adiabaticity. Super-moist-adiabatic lapse rates are required to enable convection of saturated air. Condensation cannot occur fast enough to maintain relative humidity in a cloud exactly at saturation, thereby trapping some water vapor in metastable supersaturation; only then can the water vapor condense. Thus, ultimately, condensation is a thermodynamically nonspontaneous process forced by super-moist-adiabatic lapse rates. Yet water vapor plays vital roles in atmospheric thermodynamics and kinetics. Convective weather systems and storms in a dry atmosphere (e.g., dust devils) can extract only the work represented by partial neutralization of super-dry-adiabatic lapse rates to dry-adiabaticity. At typical atmospheric temperatures in the tropics, where convective weather systems and storms are most frequent and active, the moist-adiabatic lapse rate is much smaller (thus much closer to isothermality), and hence represents much more extractable work, than the dry: the thermodynamic advantage of water vapor. Moreover, the large heat of condensation (and, to a lesser extent, of fusion) of water facilitates much faster heat transfer from Earth's surface to the tropopause than is possible in a dry atmosphere, thereby facilitating much faster extraction of work, i.e., much greater power: the kinetic advantage of water vapor.

Entropy 2016, 18(12), 417; Article; doi: 10.3390/e18120417; published 2016-11-25. Author: Jack Denur.

Entropy, Vol. 18, Pages 421: The Information Geometry of Sparse Goodness-of-Fit Testing
http://www.mdpi.com/1099-4300/18/12/421
This paper takes an information-geometric approach to the challenging issue of goodness-of-fit testing in the high-dimensional, low-sample-size context where, potentially, boundary effects dominate. The main contributions of this paper are threefold: first, we present and prove two new theorems on the behaviour of commonly used test statistics in this context; second, we investigate, in the novel environment of the extended multinomial model, the links between information-geometry-based divergences and standard goodness-of-fit statistics, allowing us to formalise relationships which have been missing in the literature; finally, we use simulation studies to validate and illustrate our theoretical results and to explore currently open research questions about the way that discretisation effects can dominate sampling distributions near the boundary. Our novel accommodation of these discretisation effects contrasts sharply with the essentially continuous approach of skewness and other corrections flowing from standard higher-order asymptotic analysis.

Entropy 2016, 18(12), 421; Article; doi: 10.3390/e18120421; published 2016-11-24. Authors: Paul Marriott, Radka Sabolová, Germain Van Bever, Frank Critchley.

Entropy, Vol. 18, Pages 422: Energy Efficiency Improvement in a Modified Ethanol Process from Acetic Acid
http://www.mdpi.com/1099-4300/18/12/422
For the high utilization of abundant lignocellulose, which is difficult to convert directly into ethanol, an energy-efficient ethanol production process using acetic acid was examined, and its energy-saving performance, economics, and thermodynamic efficiency were compared with those of the conventional process. The raw ethanol synthesized from acetic acid and hydrogen was fed to the proposed ethanol concentration process, which utilizes an extended divided wall column (DWC) whose performance was investigated with a HYSYS simulation. The performance improvement of the proposed process includes a 27% saving in heating duty and a 41% reduction in cooling duty. The economics show a 16% saving in investment cost and a 24% decrease in utility cost over the conventional ethanol concentration process. Exergy analysis shows a 9.6% improvement in thermodynamic efficiency for the proposed process.

Entropy 2016, 18(12), 422; Article; doi: 10.3390/e18120422; published 2016-11-24. Author: Young Kim.

Entropy, Vol. 18, Pages 420: On the Existence and Uniqueness of Solutions for Local Fractional Differential Equations
http://www.mdpi.com/1099-4300/18/11/420
In this manuscript, we prove the existence and uniqueness of solutions for local fractional differential equations (LFDEs) with local fractional derivative operators (LFDOs). Using the contraction mapping theorem (CMT) and the increasing and decreasing theorem (IDT), existence and uniqueness results are obtained. Some examples are presented to illustrate the validity of our results.

Entropy 2016, 18(11), 420; Article; doi: 10.3390/e18110420; published 2016-11-23. Authors: Hossein Jafari, Hassan Jassim, Maysaa Al Qurashi, Dumitru Baleanu.

Entropy, Vol. 18, Pages 419: Periodic Energy Transport and Entropy Production in Quantum Electronics
http://www.mdpi.com/1099-4300/18/11/419
The problem of time-dependent particle transport in quantum conductors is nowadays a well-established topic. In contrast, the way in which energy and heat flow in mesoscopic systems subjected to dynamical drivings is a relatively new subject that cross-fertilizes both fundamental developments in quantum thermodynamics and practical applications in nanoelectronics and quantum information. In this short review, we discuss from a thermodynamical perspective recent investigations of nonstationary heat and work generated in quantum systems, emphasizing open questions and unsolved issues.

Entropy 2016, 18(11), 419; Review; doi: 10.3390/e18110419; published 2016-11-23. Authors: María Ludovico, Liliana Arrachea, Michael Moskalets, David Sánchez.

Entropy, Vol. 18, Pages 415: Simple Harmonic Oscillator Canonical Ensemble Model for Tunneling Radiation of Black Hole
http://www.mdpi.com/1099-4300/18/11/415
A simple harmonic oscillator canonical ensemble model for Schwarzschild black hole quantum tunneling radiation is proposed in this paper. Firstly, the equivalence between the canonical ensemble model and Parikh–Wilczek's tunneling method is introduced. Then, the radiated massless particles are treated as a collection of simple harmonic oscillators. Based on this model, we treat the black hole as a heat bath to derive the energy flux of the radiation. Finally, we apply the result to estimate the lifespan of a black hole.

Entropy 2016, 18(11), 415; Article; doi: 10.3390/e18110415; published 2016-11-23. Authors: Jinbo Yang, Tangmei He, Jingyi Zhang.

Entropy, Vol. 18, Pages 418: Prediction of Bearing Fault Using Fractional Brownian Motion and Minimum Entropy Deconvolution
http://www.mdpi.com/1099-4300/18/11/418
In this paper, we propose a novel framework for diagnosing incipient bearing faults and predicting the trend of weak faults that gradually aggravate, with the bearing vibration intensity as the characteristic parameter. For weak fault diagnosis, the proposed framework adopts improved minimum entropy deconvolution (MED) theory to identify the weak fault characteristics of mechanical equipment. Analysis of a large amount of field data shows that, once a bearing exhibits a weak fault, its vibration intensity has not only random non-stationary but also long-range dependent (LRD) characteristics. Therefore, a stochastic model with LRD, fractional Brownian motion (FBM), is proposed to evaluate and predict the condition of slowly varying bearing faults, a gradual process from weak fault occurrence to severity. For the FBM stochastic model, we mainly implement the derivation and parameter identification of the model. This is the first study to apply the FBM stochastic model to the prediction of slowly varying faults. Experimental results show that the proposed methods achieve the best performance in incipient fault diagnosis and bearing condition trend prediction.

Entropy 2016, 18(11), 418; Article; doi: 10.3390/e18110418; published 2016-11-23. Authors: Wanqing Song, Ming Li, Jian-Kai Liang.

Entropy, Vol. 18, Pages 416: Analysis of the Complexity Entropy and Chaos Control of the Bullwhip Effect Considering Price of Evolutionary Game between Two Retailers
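The long-range dependence that motivates the FBM model in the abstract above can be seen directly from the exact autocovariance of fractional Gaussian noise (the increment process of FBM); this is the standard textbook formula, not the paper's parameter-identification procedure:

```python
def fgn_autocovariance(k, hurst):
    """Exact autocovariance of fractional Gaussian noise at lag k:
    0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})."""
    h2 = 2.0 * hurst
    return 0.5 * (abs(k + 1) ** h2 - 2 * abs(k) ** h2 + abs(k - 1) ** h2)

# H = 0.5 reduces FBM to ordinary Brownian motion: increments uncorrelated
assert all(abs(fgn_autocovariance(k, 0.5)) < 1e-12 for k in range(1, 20))

# H > 0.5 gives positive, slowly decaying correlations: the LRD behavior
# that makes FBM suitable for slowly varying bearing degradation
assert all(fgn_autocovariance(k, 0.8) > 0 for k in range(1, 50))
```

For H > 0.5 the covariance decays like k^(2H-2), so correlations persist over long horizons, which is what lets a fitted FBM model extrapolate a slowly aggravating vibration-intensity trend.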
http://www.mdpi.com/1099-4300/18/11/416
In this research, a model is established to represent a supply chain consisting of one manufacturer and two retailers. A price-sensitive demand model is considered, and the price game system is built according to the rule of bounded rationality and entropy theory. As the price adjustment speed increases, the game system may pass from the stable and periodic states into chaos. The bullwhip effect and inventory variance ratio in the different regimes the system falls into are compared in real time. We also employ the delayed feedback control method to control the system and succeed in mitigating its bullwhip effect. On the whole, the bullwhip effect and inventory variance ratio in the stable state are smaller than those under period-doubling and chaos. In the stable state, there is an optimal price adjustment speed that yields both the lowest bullwhip effect and the lowest inventory variance ratio.

Entropy 2016, 18(11), 416; Article; doi: 10.3390/e18110416; published 2016-11-19. Authors: Junhai Ma, Xiaogang Ma, Wandong Lou.

Entropy, Vol. 18, Pages 413: Existence of Solutions to a Nonlinear Parabolic Equation of Fourth-Order in Variable Exponent Spaces
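The bounded-rationality adjustment rule underlying games like the one above (each player moves its price in the direction of marginal profit, scaled by an adjustment speed v) can be illustrated on a one-dimensional toy map; the profit function and parameter values here are illustrative stand-ins, not the paper's duopoly model:

```python
def adjust(x, v):
    """Bounded-rationality update: move the decision variable x along
    the marginal profit. Toy profit x*(1-x), so marginal profit is
    1 - 2x and the equilibrium is x* = 0.5."""
    return x + v * x * (1.0 - 2.0 * x)

def orbit_tail(v, x0=0.3, transient=600, keep=8):
    """Iterate past the transient and return the last few iterates."""
    x = x0
    for _ in range(transient):
        x = adjust(x, v)
    tail = []
    for _ in range(keep):
        x = adjust(x, v)
        tail.append(x)
    return tail

# slow adjustment: the orbit settles on the equilibrium (stable state)
assert all(abs(x - 0.5) < 1e-9 for x in orbit_tail(v=1.0))

# fast adjustment: the equilibrium has lost stability (a flip
# bifurcation) and the orbit keeps oscillating away from x* = 0.5
assert all(abs(x - 0.5) > 0.05 for x in orbit_tail(v=2.5))
```

The fixed point loses stability when the map's derivative at x*, here 1 - v, leaves (-1, 1); sweeping v and plotting the tail reproduces the familiar period-doubling route to chaos described in the abstract.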
http://www.mdpi.com/1099-4300/18/11/413
This paper is devoted to studying the existence and uniqueness of weak solutions for an initial boundary value problem of a nonlinear fourth-order parabolic equation with variable exponent, v_t + div(|∇Δv|^(p(x)−2) ∇Δv) − |Δv|^(q(x)−2) Δv = g(x, v). By applying Leray–Schauder's fixed point theorem, the existence of weak solutions of the elliptic problem is established. The semi-discrete method then yields the existence of weak solutions of the corresponding parabolic problem by constructing two approximate solutions.

Entropy 2016, 18(11), 413; Article; doi: 10.3390/e18110413; published 2016-11-18. Authors: Bo Liang, Xiting Peng, Chengyuan Qu.

Entropy, Vol. 18, Pages 414: Application of Sample Entropy Based LMD-TFPF De-Noising Algorithm for the Gear Transmission System
http://www.mdpi.com/1099-4300/18/11/414
This paper investigates an improved noise reduction method and its application to gearbox vibration signal de-noising. A hybrid de-noising algorithm based on local mean decomposition (LMD), sample entropy (SE), and time-frequency peak filtering (TFPF) is proposed. TFPF is a classical filtering method in the time-frequency domain. However, TFPF involves a trade-off: a short window length preserves the signal amplitude well but gives poor random noise reduction, whereas a long window length reduces random noise effectively but seriously attenuates the signal amplitude. To strike a good balance between preserving the valid signal amplitude and reducing random noise, LMD and SE are adopted to improve TFPF. Firstly, the original signal is decomposed into product functions (PFs) by LMD, and the SE value of each PF is calculated in order to classify the numerous PFs into a useful component, a mixed component, and a noise component. Then, short-window TFPF is employed for the useful component, long-window TFPF is employed for the mixed component, and the noise component is removed. Finally, the de-noised signal is obtained after reconstruction. Gearbox vibration signals are employed to verify the proposed algorithm, and the comparison results show that the proposed SE-LMD-TFPF achieves the best de-noising results compared to the traditional wavelet and TFPF methods.

Entropy 2016, 18(11), 414; Article; doi: 10.3390/e18110414; published 2016-11-18. Authors: Shaohui Ning, Zhennan Han, Zhijian Wang, Xuefeng Wu.

Entropy, Vol. 18, Pages 410: Information-Theoretic Analysis of Memoryless Deterministic Systems
http://www.mdpi.com/1099-4300/18/11/410
The information loss in deterministic, memoryless systems is investigated by evaluating the conditional entropy of the input random variable given the output random variable. It is shown that for a large class of systems the information loss is finite, even if the input has a continuous distribution. For systems with infinite information loss, a relative measure is defined and shown to be related to Rényi information dimension. As deterministic signal processing can only destroy information, it is important to know how this information loss affects the solution of inverse problems. Hence, we connect the probability of perfectly reconstructing the input to the information lost in the system via Fano-type bounds. The theoretical results are illustrated by example systems commonly used in discrete-time, nonlinear signal processing and communications.

Entropy 2016, 18(11), 410; Article; doi: 10.3390/e18110410; published 2016-11-17. Authors: Bernhard Geiger, Gernot Kubin.

Entropy, Vol. 18, Pages 412: Symplectic Entropy as a Novel Measure for Complex Systems
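The information loss studied in the Geiger and Kubin abstract is the conditional entropy H(X|Y) of the input given the output. For a discrete input it can be computed directly; the toy distribution and map below are illustrative, not the paper's example systems:

```python
import math
from collections import Counter, defaultdict

def information_loss(inputs, probs, f):
    """H(X|Y) for a deterministic, memoryless system y = f(x) over a
    discrete input distribution: the information destroyed by the map."""
    p_y = Counter()
    p_xy = defaultdict(float)
    for x, p in zip(inputs, probs):
        y = f(x)
        p_y[y] += p
        p_xy[(x, y)] += p
    return -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)

# X uniform on {-2, -1, 1, 2}, Y = X^2: the sign (exactly 1 bit) is lost
loss = information_loss([-2, -1, 1, 2], [0.25] * 4, lambda x: x * x)
assert abs(loss - 1.0) < 1e-12

# an injective map destroys no information
assert information_loss([-2, -1, 1, 2], [0.25] * 4, lambda x: 2 * x) == 0.0
```

The squaring example also illustrates the paper's link to inverse problems: any reconstructor of X from Y = X^2 must guess the sign, and the 1 bit of loss bounds its error probability via Fano-type inequalities.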
http://www.mdpi.com/1099-4300/18/11/412
Real systems are often complex, nonlinear, and noisy in various fields, including mathematics, natural science, and social science. We present the symplectic entropy (SymEn) measure as well as an analysis method based on SymEn to estimate the nonlinearity of a complex system by analyzing the given time series. The SymEn estimation is a kind of entropy based on symplectic principal component analysis (SPCA), which represents organized but unpredictable behaviors of systems. The key to SPCA is to preserve the global submanifold geometrical properties of the systems through a symplectic transform in the phase space, which is a kind of measure-preserving transform. The ability to preserve the global geometrical characteristics makes SymEn a test statistic for the detection of the nonlinear characteristics in several typical chaotic time series, and the stochastic characteristic in Gaussian white noise. The results are in agreement with findings in the approximate entropy (ApEn), the sample entropy (SampEn), and the fuzzy entropy (FuzzyEn). Moreover, the SymEn method is also used to analyze the nonlinearities of real signals (including the electroencephalogram (EEG) signals for Autism Spectrum Disorder (ASD) and healthy subjects, and the sound and vibration signals for mechanical systems). The results indicate that the SymEn estimation can be taken as a measure for the description of the nonlinear characteristics in the data collected from natural complex systems.Entropy2016-11-171811Article10.3390/e181104124121099-43002016-11-17doi: 10.3390/e18110412Min LeiGuang MengWenming ZhangJoshua WadeNilanjan Sarkar<![CDATA[Entropy, Vol. 18, Pages 407: Geometry Induced by a Generalization of Rényi Divergence]]>
http://www.mdpi.com/1099-4300/18/11/407
In this paper, we propose a generalization of Rényi divergence, and then we investigate its induced geometry. This generalization is given in terms of a φ-function, the same function that is used in the definition of non-parametric φ-families. The properties of φ-functions proved to be crucial in the generalization of Rényi divergence. Assuming appropriate conditions, we verify that the generalized Rényi divergence reduces, in a limiting case, to the φ-divergence. In a generalized statistical manifold, the φ-divergence induces a pair of dual connections D^(−1) and D^(1). We show that the family of connections D^(α) induced by the generalization of Rényi divergence satisfies the relation D^(α) = ((1 − α)/2) D^(−1) + ((1 + α)/2) D^(1), with α ∈ [−1, 1].Entropy2016-11-171811Article10.3390/e181104074071099-43002016-11-17doi: 10.3390/e18110407David de SouzaRui VigelisCharles Cavalcante<![CDATA[Entropy, Vol. 18, Pages 411: Multivariate Generalized Multiscale Entropy Analysis]]>
http://www.mdpi.com/1099-4300/18/11/411
Multiscale entropy (MSE) was introduced in the 2000s to quantify systems’ complexity. MSE relies on (i) a coarse-graining procedure to derive a set of time series representing the system dynamics on different time scales; (ii) the computation of the sample entropy for each coarse-grained time series. A refined composite MSE (rcMSE)—based on the same steps as MSE—also exists. Compared to MSE, rcMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy for short time series. The multivariate versions of MSE (MMSE) and rcMSE (MrcMSE) have also been introduced. In the coarse-graining step used in MSE, rcMSE, MMSE, and MrcMSE, the mean value is used to derive representations of the original data at different resolutions. A generalization of MSE was recently published, using the computation of different moments in the coarse-graining procedure. However, so far, this generalization only exists for univariate signals. We therefore herein propose an extension of this generalized MSE to multivariate data. The multivariate generalized algorithms of MMSE and MrcMSE presented herein (MGMSE and MGrcMSE, respectively) are first analyzed through the processing of synthetic signals. We reveal that MGrcMSE shows better performance than MGMSE for short multivariate data. We then study the performance of MGrcMSE on two sets of short multivariate electroencephalograms (EEG) available in the public domain. We report that MGrcMSE may show better performance than MrcMSE in distinguishing different types of multivariate EEG data. MGrcMSE could therefore supplement MMSE or MrcMSE in the processing of multivariate datasets.Entropy2016-11-171811Article10.3390/e181104114111099-43002016-11-17doi: 10.3390/e18110411Anne Humeau-Heurtier<![CDATA[Entropy, Vol. 18, Pages 409: Entropy-Based Experimental Design for Optimal Model Discrimination in the Geosciences]]>
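The coarse-graining step shared by MSE, rcMSE, MMSE, and MrcMSE, and the moment-based generalization described in the abstract above, can be sketched as follows (a pure-Python illustration, not the paper's code; the `variance` option stands in for the generalized higher moments, and the function name is ours):

```python
def coarse_grain(x, scale, moment="mean"):
    """Coarse-grain a series at a given scale over non-overlapping
    windows. Classical MSE uses the window mean; the generalized
    variant replaces it with another moment such as the variance."""
    windows = [x[i:i + scale] for i in range(0, len(x) - scale + 1, scale)]
    if moment == "mean":
        return [sum(w) / scale for w in windows]
    if moment == "variance":
        return [sum((v - sum(w) / scale) ** 2 for v in w) / scale for w in windows]
    raise ValueError("unknown moment: %s" % moment)

print(coarse_grain([1, 2, 3, 4, 5, 6], 2))              # → [1.5, 3.5, 5.5]
print(coarse_grain([1, 2, 3, 4, 5, 6], 2, "variance"))  # → [0.25, 0.25, 0.25]
```

MSE would then compute the sample entropy of each coarse-grained series; the generalized and multivariate variants apply the same idea to the moment-based series and across channels.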
http://www.mdpi.com/1099-4300/18/11/409
Choosing between competing models lies at the heart of scientific work, and is a frequent motivation for experimentation. Optimal experimental design (OD) methods maximize the benefit of experiments towards a specified goal. We advance and demonstrate an OD approach to maximize the information gained towards model selection. We make use of so-called model choice indicators, which are random variables with an expected value equal to Bayesian model weights. Their uncertainty can be measured with Shannon entropy. Since the experimental data are still random variables in the planning phase of an experiment, we use mutual information (the expected reduction in Shannon entropy) to quantify the information gained from a proposed experimental design. For implementation, we use the Preposterior Data Impact Assessor framework (PreDIA), because it is free of the lower-order approximations of mutual information often found in the geosciences. In comparison to other studies in statistics, our framework is not restricted to sequential design or to discrete-valued data, and it can handle measurement errors. As an application example, we optimize an experiment about the transport of contaminants in clay, featuring the problem of choosing between competing isotherms to describe sorption. We compare the results of optimizing towards maximum model discrimination with an alternative OD approach that minimizes the overall predictive uncertainty under model choice uncertainty.Entropy2016-11-171811Article10.3390/e181104094091099-43002016-11-17doi: 10.3390/e18110409Wolfgang NowakAnneli Guthke<![CDATA[Entropy, Vol. 18, Pages 408: Global Atmospheric Dynamics Investigated by Using Hilbert Frequency Analysis]]>
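The Shannon entropy of the model choice indicators mentioned above is simply the entropy of the Bayesian model weights. A minimal sketch (the full PreDIA framework additionally averages the expected entropy reduction, i.e., mutual information, over possible data sets, which is not shown here):

```python
import math

def model_choice_entropy(weights):
    """Shannon entropy (bits) of Bayesian model weights: 0 when one
    model holds all the weight, log2(K) when all K competing models
    are equally plausible."""
    return sum(w * math.log2(1.0 / w) for w in weights if w > 0)

print(model_choice_entropy([1.0, 0.0, 0.0]))  # → 0.0 (no model choice uncertainty)
print(model_choice_entropy([0.25] * 4))       # → 2.0 (maximal for four models)
```

An experimental design is then scored by how much it is expected to shrink this entropy once the data are collected.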
http://www.mdpi.com/1099-4300/18/11/408
The Hilbert transform is a well-known tool of time series analysis that has been widely used to investigate oscillatory signals that resemble a noisy periodic oscillation, because it allows instantaneous phase and frequency to be estimated, which in turn uncovers interesting properties of the underlying process that generates the signal. Here we use this tool to analyze atmospheric data: we consider daily-averaged Surface Air Temperature (SAT) time series recorded over a regular grid of locations covering the Earth’s surface. From each SAT time series, we calculate the instantaneous frequency time series by considering the Hilbert analytic signal. The properties of the obtained frequency data set are investigated by plotting the map of the average frequency and the map of the standard deviation of the frequency fluctuations. The average frequency map reveals well-defined large-scale structures: in the extra-tropics, the average frequency in general corresponds to the expected one-year period of solar forcing, while in the tropics, a different behaviour is found, with particular regions having a faster average frequency. In the standard deviation map, large-scale structures are also found, which tend to be located over regions of strong annual precipitation. Our results demonstrate that Hilbert analysis of SAT time-series uncovers meaningful information, and is therefore a promising tool for the study of other climatological variables.Entropy2016-11-161811Article10.3390/e181104084081099-43002016-11-16doi: 10.3390/e18110408Dario ZappalàMarcelo BarreiroCristina Masoller<![CDATA[Entropy, Vol. 18, Pages 406: Thermodynamics of Noncommutative Quantum Kerr Black Holes]]>
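The instantaneous frequency extraction described above can be sketched with NumPy: build the analytic signal by zeroing the negative FFT frequencies (the same construction `scipy.signal.hilbert` uses), then differentiate the unwrapped phase. The sampling rate and test signal below are illustrative, not the SAT data of the paper:

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the Hilbert analytic signal,
    built by zeroing the negative FFT frequencies and differentiating
    the unwrapped phase."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(np.fft.fft(x) * h)  # analytic signal
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 100.0                       # illustrative sampling rate
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 2.0 * t)  # a pure 2 Hz oscillation
print(round(float(np.median(instantaneous_frequency(x, fs))), 2))  # → 2.0
```

For the SAT series, the mean and standard deviation of this frequency series over the grid yield the two maps discussed in the abstract.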
http://www.mdpi.com/1099-4300/18/11/406
The thermodynamic formalism for rotating black holes, characterized by noncommutative and quantum corrections, is constructed. From a fundamental thermodynamic relation, the equations of state and thermodynamic response functions are explicitly given, and the effect of noncommutativity and quantum correction is discussed. It is shown that the well-known divergence exhibited in specific heat is not removed by any of these corrections. However, regions of thermodynamic stability are affected by noncommutativity, increasing the available states for which some thermodynamic stability conditions are satisfied.Entropy2016-11-161811Article10.3390/e181104064061099-43002016-11-16doi: 10.3390/e18110406Lenin Escamilla-HerreraEri Mena-BarbozaJosé Torres-Arenas<![CDATA[Entropy, Vol. 18, Pages 405: Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection]]>
http://www.mdpi.com/1099-4300/18/11/405
Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradation in performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.Entropy2016-11-151811Article10.3390/e181104054051099-43002016-11-15doi: 10.3390/e18110405Jaesung LeeDae-Won Kim<![CDATA[Entropy, Vol. 18, Pages 404: Decision-Making Model under Risk Assessment Based on Entropy]]>
http://www.mdpi.com/1099-4300/18/11/404
Decision-making under risk assessment involves dealing with uncertainty, especially in projects such as tunnel construction. Risk control should include not only measures to reduce the possible consequences of an incident, but also exploration measures (information-collecting measures) to reduce the uncertainty of the incident. The classical risk assessment model in engineering is R = P × C, which only accounts for the assessment of, and decision-making about, possible consequences; it cannot provide theoretical guidance for taking exploration measures. This paper presents an advanced methodology to assess the effectiveness of exploration measures in decision-making. The methodology classifies risk into two attributes: hazard (expected value) and uncertainty (entropy). On this basis, a generalized model of decision-making under risk assessment is proposed. This model extends the use of the classical assessment model to a more general case. The rationale for taking exploration measures, and the assessment of such measures' effectiveness, are explained well by this developed model. The model can also serve as a descriptive model for many risk problems and provide a decision-making basis for a variety of risk types. Moreover, the assessment process and calculation method are illustrated with several case studies.Entropy2016-11-151811Article10.3390/e181104044041099-43002016-11-15doi: 10.3390/e18110404Xin DongHao LuYuanpu XiaZiming Xiong<![CDATA[Entropy, Vol. 18, Pages 399: A Concept Lattice for Semantic Integration of Geo-Ontologies Based on Weight of Inclusion Degree Importance and Information Entropy]]>
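The hazard/uncertainty split described in the abstract above can be illustrated with a toy discrete outcome distribution. All numbers and the function name are hypothetical, chosen only to show why the classical R = P × C score cannot distinguish the two situations:

```python
import math

def risk_profile(probs, consequences):
    """Two-attribute risk: hazard = expected consequence, and
    uncertainty = Shannon entropy (bits) of the incident outcome
    distribution."""
    hazard = sum(p * c for p, c in zip(probs, consequences))
    uncertainty = sum(p * math.log2(1.0 / p) for p in probs if p > 0)
    return hazard, uncertainty

# Hypothetical numbers: an exploration measure sharpens the outcome
# distribution (lower entropy) while leaving the expected consequence
# unchanged -- an expected-value-only score cannot see the difference.
before = risk_profile([0.5, 0.5], [0.0, 10.0])
after = risk_profile([0.9, 0.1], [0.0, 50.0])
print(before)                          # → (5.0, 1.0)
print(after[0], after[1] < before[1])  # → 5.0 True
```

Both situations have the same hazard (5.0), but the second has markedly lower entropy, which is exactly the benefit an information-collecting measure is meant to buy.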
http://www.mdpi.com/1099-4300/18/11/399
Constructing a merged concept lattice with formal concept analysis (FCA) is an important research direction in the field of integrating multi-source geo-ontologies. Extracting essential geographical properties and reducing the concept lattice are two key points of previous research. A formal integration method is proposed to address the challenges in these two areas. We first extract essential properties from multi-source geo-ontologies and use FCA to build a merged formal context. Second, the combined importance weight of each single attribute of the formal context is calculated by introducing the inclusion degree importance from rough set theory and information entropy; then a weighted formal context is built from the merged formal context. Third, a combined weighted concept lattice is established from the weighted formal context with FCA, and the importance weight of every concept is defined as the sum of the weights of the attributes belonging to the concept’s intent. Finally, the semantic granularity of a concept is defined by its importance weight; we then gradually reduce the weighted concept lattice by setting a diminishing threshold of semantic granularity. Additionally, all of the reduced lattices are organized into a regular hierarchical structure based on the threshold of semantic granularity. A workflow is designed to demonstrate this procedure, and a case study is conducted to show the feasibility and validity of the method and the procedure for integrating multi-source geo-ontologies.Entropy2016-11-151811Article10.3390/e181103993991099-43002016-11-15doi: 10.3390/e18110399Jia XiaoZongyi He<![CDATA[Entropy, Vol. 18, Pages 397: Increase in Complexity and Information through Molecular Evolution]]>
http://www.mdpi.com/1099-4300/18/11/397
Biological evolution progresses by essentially three different mechanisms: (I) optimization of properties through natural selection in a population of competitors; (II) development of new capabilities through cooperation of competitors caused by catalyzed reproduction; and (III) variation of genetic information through mutation or recombination. Simplified evolutionary processes combine two of the three mechanisms: Darwinian evolution combines competition (I) and variation (III) and is represented by the quasispecies model; major transitions involve cooperation (II) of competitors (I); and the third combination, cooperation (II) and variation (III), provides new insights into the role of mutations in evolution. A minimal kinetic model based on simple molecular mechanisms for reproduction, catalyzed reproduction, and mutation is introduced, cast into ordinary differential equations (ODEs), and analyzed mathematically in the form of its implementation in a flow reactor. Stochastic aspects are investigated through computer simulation of trajectories of the corresponding chemical master equations. The competition-cooperation model, mechanisms (I) and (II), gives rise to selection at low levels of resources and leads to symbiotic cooperation when the required material is abundant. Accordingly, it provides a kind of minimal system that can undergo a (major) transition. Stochastic effects leading to extinction of the population through self-enhancing oscillations destabilize symbioses of four or more partners. Mutations (III) are not only the basis of change in phenotypic properties but can also prevent extinction, provided the mutation rates are sufficiently large. 
Threshold phenomena are observed for all three combinations: The quasispecies model leads to an error threshold, the competition-cooperation model allows for an identification of a resource-triggered bifurcation with the transition, and for the cooperation-mutation model a kind of stochastic threshold for survival through sufficiently high mutation rates is observed. The evolutionary processes in the model are accompanied by gains in information on the environment of the evolving populations. In order to provide a useful basis for comparison, two forms of information, syntactic or Shannon information and semantic information are introduced here. Both forms of information are defined for simple evolving systems at the molecular level. Selection leads primarily to an increase in semantic information in the sense that higher fitness allows for more efficient exploitation of the environment and provides the basis for more progeny whereas understanding transitions involves characteristic contributions from both Shannon information and semantic information.Entropy2016-11-141811Review10.3390/e181103973971099-43002016-11-14doi: 10.3390/e18110397Peter Schuster<![CDATA[Entropy, Vol. 18, Pages 398: Fractional-Order Identification and Control of Heating Processes with Non-Continuous Materials]]>
http://www.mdpi.com/1099-4300/18/11/398
The paper presents a fractional order model of a heating process and a comparison of fractional and standard PI controllers in its closed loop system. Preliminarily, an enhanced fractional order model for the heating process in non-continuous materials has been identified through a fitting algorithm on experimental data. Experimentation has been carried out on a finite-length beam filled with three non-continuous materials (air, styrofoam, metal buckshots) in order to identify a model in the frequency domain and to obtain a relationship between the fractional order of the heating process and the different materials’ properties. A comparison between the experimental model and the theoretical one has been performed, proving a significant enhancement of the fitting performance. Moreover, the obtained modelling results confirm the fractional nature of heating processes when diffusion occurs in non-continuous composite materials, and they show how the model’s fractional order can be used as a characteristic parameter for non-continuous materials with different composition and structure. Finally, three different kinds of controllers have been applied and compared in order to keep the beam temperature constant at a fixed length.Entropy2016-11-121811Article10.3390/e181103983981099-43002016-11-12doi: 10.3390/e18110398Riccardo CaponettoFrancesca SapuppoVincenzo TomaselloGuido MaionePaolo Lino<![CDATA[Entropy, Vol. 18, Pages 396: Kernel Density Estimation on the Siegel Space with an Application to Radar Processing]]>
http://www.mdpi.com/1099-4300/18/11/396
This paper studies probability density estimation on the Siegel space. The Siegel space is a generalization of the hyperbolic space. Its Riemannian metric provides an interesting structure to the Toeplitz block Toeplitz matrices that appear in the covariance estimation of radar signals. The main techniques of probability density estimation on Riemannian manifolds are reviewed. For computational reasons, we chose to focus on the kernel density estimation. The main result of the paper is the expression of Pelletier’s kernel density estimator. The computation of the kernels is made possible by the symmetric structure of the Siegel space. The method is applied to density estimation of reflection coefficients from radar observations.Entropy2016-11-111811Article10.3390/e181103963961099-43002016-11-11doi: 10.3390/e18110396Emmanuel ChevallierThibault ForgetFrédéric BarbarescoJesus Angulo<![CDATA[Entropy, Vol. 18, Pages 394: Rectification and Non-Gaussian Diffusion in Heterogeneous Media]]>
http://www.mdpi.com/1099-4300/18/11/394
We show that when Brownian motion takes place in a heterogeneous medium, the presence of local forces and transport coefficients leads to deviations from a Gaussian probability distribution that make the ratio between forward and backward probabilities depend on the nature of the host medium, on local forces, and also on time. We have applied our results to two situations: diffusion in a disordered medium, and diffusion in a confined system. For such scenarios, we have shown that our theoretical predictions are in very good agreement with numerical results. Moreover, we have shown that the deviations from the Gaussian solution lead to the onset of rectification. Our predictions could be used to detect the presence of local forces and to characterize the intrinsic short-scale properties of the host medium—a problem of current interest in the study of micro- and nano-systems.Entropy2016-11-111811Article10.3390/e181103943941099-43002016-11-11doi: 10.3390/e18110394Paolo MalgarettiIgnacio PagonabarragaJ. Rubi<![CDATA[Entropy, Vol. 18, Pages 395: Unextendible Mutually Unbiased Bases (after Mandayam, Bandyopadhyay, Grassl and Wootters)]]>
http://www.mdpi.com/1099-4300/18/11/395
We consider questions posed in a recent paper of Mandayam et al. (2014) on the nature of “unextendible mutually unbiased bases.” We describe a conceptual framework to study these questions, using a connection proved by the author in Thas (2009) between the set of nonidentity generalized Pauli operators on the Hilbert space of N d-level quantum systems, d a prime, and the geometry of non-degenerate alternating bilinear forms of rank N over finite fields F d . We then supply alternative and short proofs of results obtained in Mandayam et al. (2014), as well as new general bounds for the problems considered in loc. cit. In this setting, we also solve Conjecture 1 of Mandayam et al. (2014) and speculate on variations of this conjecture.Entropy2016-11-111811Article10.3390/e181103953951099-43002016-11-11doi: 10.3390/e18110395Koen Thas<![CDATA[Entropy, Vol. 18, Pages 393: Feature Extraction of Ship-Radiated Noise Based on Permutation Entropy of the Intrinsic Mode Function with the Highest Energy]]>
http://www.mdpi.com/1099-4300/18/11/393
In order to solve the problem of feature extraction of underwater acoustic signals in complex ocean environment, a new method for feature extraction from ship-radiated noise is presented based on empirical mode decomposition theory and permutation entropy. It analyzes the separability for permutation entropies of the intrinsic mode functions of three types of ship-radiated noise signals, and discusses the permutation entropy of the intrinsic mode function with the highest energy. In this study, ship-radiated noise signals measured from three types of ships are decomposed into a set of intrinsic mode functions with empirical mode decomposition method. Then, the permutation entropies of all intrinsic mode functions are calculated with appropriate parameters. The permutation entropies are obviously different in the intrinsic mode functions with the highest energy, thus, the permutation entropy of the intrinsic mode function with the highest energy is regarded as a new characteristic parameter to extract the feature of ship-radiated noise. After that, the characteristic parameters—namely, the energy difference between high and low frequency, permutation entropy, and multi-scale permutation entropy—are compared with the permutation entropy of the intrinsic mode function with the highest energy. It is discovered that the four characteristic parameters are at the same level for similar ships, however, there are differences in the parameters for different types of ships. The results demonstrate that the permutation entropy of the intrinsic mode function with the highest energy is better in separability as the characteristic parameter than the other three parameters by comparing their fluctuation ranges and the average values of the four characteristic parameters. 
Hence, the feature of ship-radiated noise can be extracted efficiently with the method.Entropy2016-11-111811Article10.3390/e181103933931099-43002016-11-11doi: 10.3390/e18110393Yu-Xing LiYa-An LiZhe ChenXiao Chen<![CDATA[Entropy, Vol. 18, Pages 392: Angular Spectral Density and Information Entropy for Eddy Current Distribution]]>
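The permutation entropy computed in the abstract above for each intrinsic mode function follows the standard Bandt-Pompe recipe. A minimal pure-Python sketch (the embedding order and delay shown are common defaults standing in for the paper's "appropriate parameters"; selecting the highest-energy IMF requires an EMD step not shown here):

```python
import math

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy: Shannon entropy of the ordinal
    patterns of length `order`, normalized by log(order!) to [0, 1]."""
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = tuple(x[i + j * delay] for j in range(order))
        # Ordinal pattern: the argsort of the window, ties broken by index.
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    pe = sum((c / n) * math.log(n / c) for c in counts.values())
    return pe / math.log(math.factorial(order)) if normalize else pe

print(permutation_entropy(list(range(100))))  # → 0.0 (a monotonic series has one pattern)
```

Irregular series spread their mass over many ordinal patterns and so score closer to 1, which is what makes the statistic usable as a discriminating feature between ship types.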
http://www.mdpi.com/1099-4300/18/11/392
Here, a new method is proposed to quantitatively evaluate the eddy current distribution induced by different exciting coils of an eddy current probe. Probability of energy allocation of a vector field is modeled via conservation of energy and imitating the wave function in quantum mechanics. The idea of quantization and the principle of circuit sampling is utilized to discretize the space of the vector field. Then, a method to calculate angular spectral density and Shannon information entropy is proposed. Eddy current induced by three different exciting coils is evaluated with this method, and the specific nature of eddy current testing is discussed.Entropy2016-11-101811Article10.3390/e181103923921099-43002016-11-10doi: 10.3390/e18110392Guolong ChenWeimin Zhang<![CDATA[Entropy, Vol. 18, Pages 391: On Thermodynamics Problems in the Single-Phase-Lagging Heat Conduction Model]]>
http://www.mdpi.com/1099-4300/18/11/391
Thermodynamics problems for the single-phase-lagging (SPL) model have not been studied much. In this paper, the violation of the second law of thermodynamics by the SPL model is studied from two perspectives: the negative entropy production rate and the spontaneous breaking of equilibrium. Methods for the SPL model to avoid a negative entropy production rate are proposed, namely extended irreversible thermodynamics and the thermal relaxation time. Making the entropy production rate positive or zero is not enough to avoid the violation of the second law of thermodynamics by the SPL model, because the SPL model can break equilibrium spontaneously in some special circumstances. By comparison, analysis of the mathematical energy integral shows that Fourier’s law and the CV model cannot break equilibrium spontaneously.Entropy2016-11-091811Article10.3390/e181103913911099-43002016-11-09doi: 10.3390/e18110391Shu-Nan LiBing-Yang Cao<![CDATA[Entropy, Vol. 18, Pages 390: Possibility of Using Entropy Method to Evaluate the Distracting Effect of Mobile Phones on Pedestrians]]>
http://www.mdpi.com/1099-4300/18/11/390
The number of mobile phone users keeps increasing every year, and mobile phones have become a primary need for most people. Ordinarily, people are not aware of the risk posed by common dual tasks, such as using a mobile phone while walking or simply standing. This study reviewed the methodology for evaluating the distracting effect of mobile phones on pedestrians. A comprehensive review of the literature revealed that the most common method for quantifying pedestrian performance is to evaluate postural task performance. Since using a mobile phone while crossing the road is a type of dual task, it would give more clarity to investigate it using entropy methods, which have been proven more sensitive than the traditional center of pressure (COP) measures in discriminating changes in human balance. Descriptions of commonly used entropy methods are also given in order to provide a broad overview of the possibility of applying these methods to further clarify the distracting effect of mobile phones.Entropy2016-11-041811Review10.3390/e181103903901099-43002016-11-04doi: 10.3390/e18110390Nurul NurwulanBernard Jiang<![CDATA[Entropy, Vol. 18, Pages 386: Geometric Theory of Heat from Souriau Lie Groups Thermodynamics and Koszul Hessian Geometry: Applications in Information Geometry for Exponential Families]]>
http://www.mdpi.com/1099-4300/18/11/386
We introduce the symplectic structure of information geometry based on Souriau’s Lie group thermodynamics model, with a covariant definition of Gibbs equilibrium via invariances through the co-adjoint action of a group on its moment space, defining physical observables like energy, heat, and moment as pure geometrical objects. Using the geometric Planck temperature of the Souriau model and the symplectic cocycle notion, the Fisher metric is identified as a Souriau geometric heat capacity. The Souriau model is based on the affine representation of a Lie group and Lie algebra, which we compare with Koszul’s work on the G/K homogeneous space and the bijective correspondence between the set of G-invariant flat connections on G/K and the set of affine representations of the Lie algebra of G. In the framework of Lie group thermodynamics, an Euler-Poincaré equation is elaborated with respect to thermodynamic variables, and a new variational principle for thermodynamics is built through an invariant Poincaré-Cartan-Souriau integral. The Souriau-Fisher metric is linked to the KKS (Kostant–Kirillov–Souriau) 2-form that associates a canonical homogeneous symplectic manifold to the co-adjoint orbits. We apply this model in the framework of information geometry for the action of an affine group for exponential families, and provide some illustrations of use cases for multivariate Gaussian densities. Information geometry is presented in the context of the seminal work of Fréchet and his Clairaut-Legendre equation. The Souriau model of statistical physics is validated as compatible with the Balian gauge model of thermodynamics. We recall the precursor work of Casalis on affine group invariance for natural exponential families.Entropy2016-11-041811Article10.3390/e181103863861099-43002016-11-04doi: 10.3390/e18110386Frédéric Barbaresco<![CDATA[Entropy, Vol. 18, Pages 384: Texture Segmentation Using Laplace Distribution-Based Wavelet-Domain Hidden Markov Tree Models]]>
http://www.mdpi.com/1099-4300/18/11/384
Multiresolution models such as the wavelet-domain hidden Markov tree (HMT) model provide a powerful approach for image modeling and processing because they capture the key features of the wavelet coefficients of real-world data. It is observed that the Laplace distribution is peakier in the center and has heavier tails compared with the Gaussian distribution. Thus, we propose a new HMT model based on the two-state, zero-mean Laplace mixture model (LMM), the LMM-HMT, which provides significant potential for characterizing real-world textures. By using the HMT segmentation framework, we develop LMM-HMT-based segmentation methods for image textures and dynamic textures. The experimental results demonstrate the effectiveness of the introduced model and segmentation methods.Entropy2016-11-041811Article10.3390/e181103843841099-43002016-11-04doi: 10.3390/e18110384Yulong QiaoGanchao Zhao<![CDATA[Entropy, Vol. 18, Pages 389: A Possible Ethical Imperative Based on the Entropy Law]]>
http://www.mdpi.com/1099-4300/18/11/389
Lindsay in an article titled, “Entropy consumption and values in physical science,” (Am. Sci. 1959, 47, 678–696) proposed a Thermodynamic Imperative similar to Kant’s Ethical Categorical Imperative. In this paper, after describing the concept of ethical imperative as elaborated by Kant, we provide a brief discussion of the role of science and its relationship to the classical thermodynamics and the physical implications of the first and the second laws of thermodynamics. We finally attempt to extend and supplement Lindsay’s Thermodynamic Imperative (TI), by another Imperative suggesting simplicity, conservation, and harmony.Entropy2016-11-031811Discussion10.3390/e181103893891099-43002016-11-03doi: 10.3390/e18110389Mehrdad Massoudi<![CDATA[Entropy, Vol. 18, Pages 387: Energy and Exergy Analyses of a Diesel Engine Fuelled with Biodiesel-Diesel Blends Containing 5% Bioethanol]]>
http://www.mdpi.com/1099-4300/18/11/387
In this study, energy and exergy analyses were performed for a single-cylinder, water-cooled diesel engine using biodiesel, diesel, and bioethanol blends. Each experiment was performed at twelve different engine speeds between 1000 and 3000 rev/min, at intervals of 200 rev/min, for four different fuel blends. The fuel blends, prepared by mixing biodiesel and diesel in different proportions with 5% bioethanol, are identified as D92B3E5 (92% diesel, 3% biodiesel and 5% bioethanol), D85B10E5 (85% diesel, 10% biodiesel and 5% bioethanol), D80B15E5 (80% diesel, 15% biodiesel and 5% bioethanol) and D75B20E5 (75% diesel, 20% biodiesel and 5% bioethanol). The effect of the blends on the energy and exergy analyses was investigated for the different engine speeds, and all results were compared with those of the D100 reference fuel. The maximum thermal efficiencies obtained were 31.42% at 1500 rev/min for D100 and 31.42%, 28.68%, 28.1%, 28% and 27.18% at 1400 rev/min, respectively, for D92B3E5, D85B10E5, D80B15E5 and D75B20E5. Maximum exergetic efficiencies of 29.38%, 26.8%, 26.33%, 26.15% and 25.38%, respectively, were also obtained for the abovementioned fuels. As a result of our analyses, it was determined that D100 fuel has a slightly higher thermal and exergetic efficiency than the other fuel blends, and all the results are quite close to each other.Entropy2016-10-311811Article10.3390/e181103873871099-43002016-10-31doi: 10.3390/e18110387Bahar Sayin KulAli Kahraman<![CDATA[Entropy, Vol. 18, Pages 388: Entropy Analysis of a Railway Network’s Complexity]]>
http://www.mdpi.com/1099-4300/18/11/388
Railway networks are among the many physical systems that reveal a fractal structure. This paper studies the Portuguese railway system and analyzes how it evolved over time, namely the structure of its different levels and its distribution over the territory. Different mathematical tools are adopted, such as fractal dimension, entropy and the state space portrait. The results are consistent with the historical evolution of the network.Entropy2016-10-311811Article10.3390/e181103883881099-43002016-10-31doi: 10.3390/e18110388Duarte ValérioAntónio LopesJosé Tenreiro Machado<![CDATA[Entropy, Vol. 18, Pages 377: Secrecy Capacity of the Extended Wiretap Channel II with Noise]]>
http://www.mdpi.com/1099-4300/18/11/377
The secrecy capacity of an extended communication model of the wiretap channel II is determined. In this channel model, the source message is encoded into a digital sequence of length N and transmitted to the legitimate receiver through a discrete memoryless channel (DMC). There exists an eavesdropper who is able to observe arbitrary μ = Nα digital symbols from the transmitter through a second DMC, where 0 ≤ α ≤ 1 is a constant real number. A pair of an encoder and a decoder is designed to enable the receiver to recover the source message with a vanishing decoding error probability while keeping the eavesdropper ignorant of the message. This communication model includes a variety of wiretap channels as special cases. The coding scheme is based on the one designed by Ozarow and Wyner for the classic wiretap channel II.Entropy2016-10-311811Article10.3390/e181103773771099-43002016-10-31doi: 10.3390/e18110377Dan HeWangmei GuoYuan Luo<![CDATA[Entropy, Vol. 18, Pages 385: Application of the Self-Organization Phenomenon in the Development of Wear Resistant Materials—A Review]]>
http://www.mdpi.com/1099-4300/18/11/385
The application of the phenomenon of self-organization to the development of wear-resistant materials is reviewed. For this purpose, the concepts of self-organization and dissipative structures, as applied to tribology, are discussed. Applications of this phenomenon in the development of new wear-resistant and antifriction materials are presented. Specific examples are given of the application of the self-organization phenomenon and the generation of dissipative structures in the formation of tribotechnical materials with enhanced wear resistance, including current-collecting materials and antifriction bearing materials.Entropy2016-10-271811Review10.3390/e181103853851099-43002016-10-27doi: 10.3390/e18110385Iosif GershmanEugeniy GershmanAlexander MironovGerman Fox-RabinovichStephen Veldhuis<![CDATA[Entropy, Vol. 18, Pages 383: Explicit Formula of Koszul–Vinberg Characteristic Functions for a Wide Class of Regular Convex Cones]]>
http://www.mdpi.com/1099-4300/18/11/383
The Koszul–Vinberg characteristic function plays a fundamental role in the theory of convex cones. We give an explicit description of the function and related integral formulas for a new class of convex cones, including homogeneous cones and cones associated with chordal (decomposable) graphs appearing in statistics. Furthermore, we discuss an application to maximum likelihood estimation for a certain exponential family over a cone of this class.Entropy2016-10-261811Article10.3390/e181103833831099-43002016-10-26doi: 10.3390/e18110383Hideyuki Ishi<![CDATA[Entropy, Vol. 18, Pages 382: Bounds on Rényi and Shannon Entropies for Finite Mixtures of Multivariate Skew-Normal Distributions: Application to Swordfish (Xiphias gladius Linnaeus)]]>
http://www.mdpi.com/1099-4300/18/11/382
Mixture models are in high demand for machine-learning analysis due to their computational tractability, and because they serve as a good approximation for continuous densities. Predominantly, entropy applications have been developed in the context of a mixture of normal densities. In this paper, we consider a novel class of skew-normal mixture models, whose components capture skewness due to their flexibility. We find upper and lower bounds for Shannon and Rényi entropies for this model. Using such a pair of bounds, a confidence interval for the approximate entropy value can be calculated. In addition, an asymptotic expression for Rényi entropy by Stirling’s approximation is given, and upper and lower bounds are reported using multinomial coefficients and some properties and inequalities of L p metric spaces. Simulation studies are then applied to a swordfish (Xiphias gladius Linnaeus) length dataset.Entropy2016-10-261811Article10.3390/e181103823821099-43002016-10-26doi: 10.3390/e18110382Javier Contreras-ReyesDaniel Cortés<![CDATA[Entropy, Vol. 18, Pages 381: The Geometry of Signal Detection with Applications to Radar Signal Processing]]>
http://www.mdpi.com/1099-4300/18/11/381
The problem of hypothesis testing in the Neyman–Pearson formulation is considered from a geometric viewpoint. In particular, a concise geometric interpretation of deterministic and random signal detection in the philosophy of information geometry is presented. In such a framework, both hypotheses and detectors can be treated as geometrical objects on the statistical manifold of a parameterized family of probability distributions. Both the detector and the detection performance are geometrically elucidated in terms of the Kullback–Leibler divergence. Compared to the likelihood ratio test, the geometric interpretation provides a consistent but more comprehensive means to understand and deal with signal detection problems in a rather convenient manner. An example of the geometry-based detector in radar constant false alarm rate (CFAR) detection is presented, which shows its advantage over the classical processing method.Entropy2016-10-251811Article10.3390/e181103813811099-43002016-10-25doi: 10.3390/e18110381Yongqiang ChengXiaoqiang HuaHongqiang WangYuliang QinXiang Li<![CDATA[Entropy, Vol. 18, Pages 380: A Robust Sparse Adaptive Filtering Algorithm with a Correntropy Induced Metric Constraint for Broadband Multi-Path Channel Estimation]]>
http://www.mdpi.com/1099-4300/18/10/380
A robust sparse least-mean mixture-norm (LMMN) algorithm is proposed, and its performance is appraised in the context of estimating a broadband multi-path wireless channel. The proposed algorithm is implemented by integrating a correntropy-induced metric (CIM) penalty into the conventional LMMN algorithm to modify the basic cost function, and is denoted as the CIM-based LMMN (CIM-LMMN) algorithm. The proposed CIM-LMMN algorithm is derived in detail within the kernel framework. The updating equation of CIM-LMMN provides a zero attractor to draw the non-dominant channel coefficients toward zero, and it also gives a tradeoff between sparsity and estimation misalignment. Moreover, the channel estimation behavior is investigated over a broadband sparse multi-path wireless channel, and the simulation results are compared with the least mean square/fourth (LMS/F), least mean square (LMS), least mean fourth (LMF) and recently developed sparse channel estimation algorithms. The channel estimation performance obtained from the designated sparse channel estimation demonstrates that the CIM-LMMN algorithm outperforms the recently developed sparse LMMN algorithms and the relevant sparse channel estimation algorithms. From the results, we can see that our CIM-LMMN algorithm is robust and superior to these algorithms in terms of both the convergence speed and the channel estimation misalignment for estimating a sparse channel.Entropy2016-10-241810Article10.3390/e181003803801099-43002016-10-24doi: 10.3390/e18100380Yingsong LiZhan JinYanyan WangRui Yang<![CDATA[Entropy, Vol. 18, Pages 379: A Novel Sequence-Based Feature for the Identification of DNA-Binding Sites in Proteins Using Jensen–Shannon Divergence]]>
http://www.mdpi.com/1099-4300/18/10/379
The knowledge of protein-DNA interactions is essential to fully understand the molecular activities of life. Many research groups have developed various tools, using either structure- or sequence-based approaches, to predict the DNA-binding residues in proteins. The structure-based methods usually achieve good results, but require knowledge of the 3D structure of the protein, while sequence-based methods can be applied to proteins in high throughput, but require good features. In this study, we present a new information theoretic feature derived from the Jensen–Shannon divergence (JSD) between the amino acid distribution of a site and the background distribution of non-binding sites. Our new feature indicates how much a given site differs from a non-binding site, and is thus informative for detecting binding sites in proteins. We conduct the study with a five-fold cross validation of 263 proteins utilizing the Random Forest classifier. We evaluate the utility of our new features by combining them with other popular existing features such as the position-specific scoring matrix (PSSM), orthogonal binary vector (OBV), and secondary structure (SS). We find that adding our features significantly boosts the performance of the Random Forest classifier, with a clear increase in sensitivity and Matthews correlation coefficient (MCC).Entropy2016-10-241810Article10.3390/e181003793791099-43002016-10-24doi: 10.3390/e18100379Truong DangCornelia MeckbachRebecca TackeStephan WaackMehmet Gültas<![CDATA[Entropy, Vol. 18, Pages 378: Second Law Analysis of Nanofluid Flow within a Circular Minichannel Considering Nanoparticle Migration]]>
http://www.mdpi.com/1099-4300/18/10/378
In the current research, entropy generation for water–alumina nanofluid flow is studied in a circular minichannel in the laminar regime under constant wall heat flux, in order to evaluate irreversibilities arising from friction and heat transfer. To this end, simulations are carried out considering particle migration effects. Due to particle migration, the nanoparticles assume a non-uniform distribution over the pipe cross-section, such that the concentration is larger in the central areas. The concentration non-uniformity increases with increasing mean concentration, particle size, and Reynolds number. The rates of entropy generation are evaluated both locally and globally (integrated). The obtained results show that particle migration changes the thermal and frictional entropy generation rates significantly, particularly at high Reynolds numbers, large concentrations, and coarser particles. Hence, this phenomenon should be considered in energy-related studies of nanofluids.Entropy2016-10-211810Article10.3390/e181003783781099-43002016-10-21doi: 10.3390/e18100378Mehdi BahiraeiNavid Cheraghi Kazerooni<![CDATA[Entropy, Vol. 18, Pages 376: Isothermal Oxidation of Aluminized Coatings on High-Entropy Alloys]]>
http://www.mdpi.com/1099-4300/18/10/376
The isothermal oxidation resistance of the Al0.2Co1.5CrFeNi1.5Ti0.3 high-entropy alloy is analyzed and the microstructural evolution of the oxide layer is studied. The limited aluminum content, about 3.6 at %, leads to a non-continuous alumina layer. The present alloy is insufficient for severe circumstances, being protected only by a chromium oxide layer that reaches 10 μm after 360 h at 1173 K. Thus, aluminized high-entropy alloys (HEAs) are further prepared by the industrial pack cementation process at 1273 K and 1323 K. The aluminized coating is 50 μm thick at 1273 K after 5 h. The coating growth is controlled by the diffusion of aluminum. The interdiffusion zone reveals two regions: a Ti-, Co-, Ni-rich area and an Fe-, Cr-rich area. The oxidation resistance of the aluminized HEA improves markedly, sustaining exposure at 1173 K and 1273 K for 441 h without any spallation. The alumina at the surface and the stable interface contribute to the performance of this Al0.2Co1.5CrFeNi1.5Ti0.3 alloy.Entropy2016-10-201810Article10.3390/e181003763761099-43002016-10-20doi: 10.3390/e18100376Che-Wei TsaiKuen-Cheng SungKzauki KasaiHideyuki Murakami<![CDATA[Entropy, Vol. 18, Pages 375: Non-Asymptotic Confidence Sets for Circular Means]]>
http://www.mdpi.com/1099-4300/18/10/375
The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.Entropy2016-10-201810Article10.3390/e181003753751099-43002016-10-20doi: 10.3390/e18100375Thomas HotzFlorian KelmaJohannes Wieditz<![CDATA[Entropy, Vol. 18, Pages 374: On the Virtual Cell Transmission in Ultra Dense Networks]]>
http://www.mdpi.com/1099-4300/18/10/374
Ultra dense networks (UDN) are identified as one of the key enablers for 5G, since they can provide an ultra high spectral reuse factor exploiting proximal transmissions. By densifying the network infrastructure equipment, it is highly possible that each user will have one or more dedicated serving base station antennas, introducing the user-centric virtual cell paradigm. However, due to irregular deployment of a large amount of base station antennas, the interference environment becomes rather complex, thus introducing severe interferences among different virtual cells. This paper focuses on the downlink transmission scheme in UDN where a large number of users and base station antennas is uniformly spread over a certain area. An interference graph is first created based on the large-scale fadings to give a potential description of the interference relationship among the virtual cells. Then, base station antennas and users in the virtual cells within the same maximally-connected component are grouped together and merged into one new virtual cell cluster, where users are jointly served via zero-forcing (ZF) beamforming. A multi-virtual-cell minimum mean square error precoding scheme is further proposed to mitigate the inter-cluster interference. Additionally, the interference alignment framework is proposed based on the low complexity virtual cell merging to eliminate the strong interference between different virtual cells. Simulation results show that the proposed interference graph-based virtual cell merging approach can attain the average user spectral efficiency performance of the grouping scheme based on virtual cell overlapping with a smaller virtual cell size and reduced signal processing complexity. Besides, the proposed user-centric transmission scheme greatly outperforms the BS-centric transmission scheme (maximum ratio transmission (MRT)) in terms of both the average user spectral efficiency and edge user spectral efficiency. 
What is more, interference alignment based on the low complexity virtual cell merging can achieve much better performance than ZF and MRT precoding in terms of average user spectral efficiency.Entropy2016-10-201810Article10.3390/e181003743741099-43002016-10-20doi: 10.3390/e18100374Xiaopeng ZhuJie ZengXin SuChiyang XiaoJing WangLianfen Huang<![CDATA[Entropy, Vol. 18, Pages 373: Correction: Jacobsen, C.S., et al. Continuous Variable Quantum Key Distribution with a Noisy Laser. Entropy 2015, 17, 4654–4663]]>
http://www.mdpi.com/1099-4300/18/10/373
n/aEntropy2016-10-201810Correction10.3390/e181003733731099-43002016-10-20doi: 10.3390/e18100373Christian JacobsenTobias GehringUlrik Andersen<![CDATA[Entropy, Vol. 18, Pages 371: Study on the Stability and Entropy Complexity of an Energy-Saving and Emission-Reduction Model with Two Delays]]>
http://www.mdpi.com/1099-4300/18/10/371
In this paper, we build a model of energy-saving and emission-reduction with two delays. In this model, it is assumed that the interaction between energy-saving and emission-reduction, and that between carbon emissions and economic growth, are delayed. We examine the local stability and the existence of a Hopf bifurcation at the equilibrium point of the system. By employing system complexity theory, we also analyze the impact of the delays and the feedback control on the stability and entropy of the system from two aspects: single delay and double delays. In the numerical simulation section, we test the theoretical analysis by means of bifurcation diagrams, the largest Lyapunov exponent diagrams, attractors, time-domain plots, Poincare section plots, power spectra, entropy diagrams, 3-D surface charts and 4-D graphs; the simulation results demonstrate that inappropriate changes of the delays and the feedback control will result in instability and fluctuation of carbon emissions. Finally, bifurcation control is achieved by using the method of variable feedback control. Hence, we conclude that the greater the value of the control parameter, the better the effect of the bifurcation control. The results will inform the development of energy-saving and emission-reduction policies.Entropy2016-10-191810Article10.3390/e181003713711099-43002016-10-19doi: 10.3390/e18100371Jing WangYuling Wang<![CDATA[Entropy, Vol. 18, Pages 372: Point Information Gain and Multidimensional Data Analysis]]>
http://www.mdpi.com/1099-4300/18/10/372
We generalize the point information gain (PIG) and derived quantities, i.e., point information gain entropy (PIE) and point information gain entropy density (PIED), for the case of the Rényi entropy and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra for the real data with the examples of several images and discuss further possible utilizations in other fields of data processing.Entropy2016-10-191810Article10.3390/e181003723721099-43002016-10-19doi: 10.3390/e18100372Renata RychtárikováJan KorbelPetr MacháčekPetr CísařJan UrbanDalibor Štys<![CDATA[Entropy, Vol. 18, Pages 369: Chemical Reactions Using a Non-Equilibrium Wigner Function Approach]]>
http://www.mdpi.com/1099-4300/18/10/369
A three-dimensional model of binary chemical reactions is studied. We consider an ab initio quantum two-particle system subjected to an attractive interaction potential and to a heat bath at thermal equilibrium at absolute temperature T &gt; 0 . Under the sole action of the attraction potential, the two particles can either be bound or unbound to each other. While at T = 0 , there is no transition between both states, such a transition is possible when T &gt; 0 (due to the heat bath) and plays a key role as k B T approaches the magnitude of the attractive potential. We focus on a quantum regime, typical of chemical reactions, such that: (a) the thermal wavelength is shorter than the range of the attractive potential (lower limit on T) and (b) ( 3 / 2 ) k B T does not exceed the magnitude of the attractive potential (upper limit on T). In this regime, we extend several methods previously applied to analyze the time duration of DNA thermal denaturation. The two-particle system is then described by a non-equilibrium Wigner function. Under Assumptions (a) and (b), and for sufficiently long times, defined by a characteristic time scale D that is subsequently estimated, the general dissipationless non-equilibrium equation for the Wigner function is approximated by a Smoluchowski-like equation displaying dissipation and quantum effects. A comparison with the standard chemical kinetic equations is made. The time τ required for the two particles to transition from the bound state to unbound configurations is studied by means of the mean first passage time formalism. An approximate formula for τ, in terms of D and exhibiting the Arrhenius exponential factor, is obtained. Recombination processes are also briefly studied within our framework and compared with previous well-known methods.Entropy2016-10-191810Article10.3390/e181003693691099-43002016-10-19doi: 10.3390/e18100369Ramón Álvarez-EstradaGabriel Calvo<![CDATA[Entropy, Vol. 
18, Pages 368: A Hydrodynamic Model for Silicon Nanowires Based on the Maximum Entropy Principle]]>
http://www.mdpi.com/1099-4300/18/10/368
Silicon nanowires (SiNW) are quasi-one-dimensional structures in which the electrons are spatially confined in two directions, and they are free to move along the axis of the wire. The spatial confinement is governed by the Schrödinger–Poisson system, which must be coupled to the transport in the free motion direction. For devices with the characteristic length of a few tens of nanometers, the transport of the electrons along the axis of the wire can be considered semiclassical, and it can be dealt with by the multi-sub-band Boltzmann transport equations (MBTE). By taking the moments of the MBTE, a hydrodynamic model has been formulated, where explicit closure relations for the fluxes and production terms (i.e., the moments on the collisional operator) are obtained by means of the maximum entropy principle of extended thermodynamics, including the scattering of electrons with phonons, impurities and surface roughness scattering. Numerical results are shown for a SiNW transistor.Entropy2016-10-191810Article10.3390/e181003683681099-43002016-10-19doi: 10.3390/e18100368Orazio MuscatoTina Castiglione<![CDATA[Entropy, Vol. 18, Pages 370: From Tools in Symplectic and Poisson Geometry to J.-M. Souriau’s Theories of Statistical Mechanics and Thermodynamics]]>
http://www.mdpi.com/1099-4300/18/10/370
I present in this paper some tools in symplectic and Poisson geometry in view of their applications in geometric mechanics and mathematical physics. After a short discussion of the Lagrangian and Hamiltonian formalisms, including the use of symmetry groups, and a presentation of Tulczyjew’s isomorphisms (which explain some aspects of the relations between these formalisms), I explain the concept of the manifold of motions of a mechanical system and its use, due to J.-M. Souriau, in statistical mechanics and thermodynamics. The generalization of the notion of thermodynamic equilibrium, in which the one-dimensional group of time translations is replaced by a multi-dimensional, possibly non-commutative Lie group, is fully discussed, and examples of applications in physics are given.Entropy2016-10-191810Article10.3390/e181003703701099-43002016-10-19doi: 10.3390/e18100370Charles-Michel Marle<![CDATA[Entropy, Vol. 18, Pages 367: Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory]]>
http://www.mdpi.com/1099-4300/18/10/367
Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Fully understanding both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology that removes many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse-engineer the network structure, while local transfer entropy can be used to analyze the network structure’s dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN’s self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system’s structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.Entropy2016-10-181810Article10.3390/e181003673671099-43002016-10-18doi: 10.3390/e18100367Joshua RodewaldJohn ColombiKyle OyamaAlan Johnson<![CDATA[Entropy, Vol. 18, Pages 366: Intelligent Security IT System for Detecting Intruders Based on Received Signal Strength Indicators]]>
http://www.mdpi.com/1099-4300/18/10/366
Given that entropy-based IT technology has been applied in homes, office buildings and elsewhere for IT security systems, diverse kinds of intelligent services are currently provided. In particular, IT security systems have become more robust and varied. However, access control systems still depend on tags held by building entrants. Since tags can be obtained by intruders, an approach that counters the disadvantages of tags is required. For example, it is possible to track the movement of tags in intelligent buildings in order to detect intruders, so that each tag owner can be judged by analyzing the movements of their tag. This paper proposes a security approach based on the received signal strength indicators (RSSIs) of beacon-based tags to detect intruders. The normal RSSI patterns of moving entrants are obtained and analyzed. Intruders can be detected when abnormal RSSIs are measured in comparison to the normal RSSI patterns. In the experiments, one normal and one abnormal scenario are defined for collecting the RSSIs of a Bluetooth-based beacon in order to validate the proposed method. When the RSSIs of both scenarios are compared to the pre-collected RSSIs, those of the abnormal scenario differ by about 61% more than those of the normal scenario. Therefore, intruders in buildings can be detected by considering RSSI differences.Entropy2016-10-161810Article10.3390/e181003663661099-43002016-10-16doi: 10.3390/e18100366Yunsick Sung<![CDATA[Entropy, Vol. 18, Pages 365: Boltzmann Sampling by Degenerate Optical Parametric Oscillator Network for Structure-Based Virtual Screening]]>
http://www.mdpi.com/1099-4300/18/10/365
A structure-based lead optimization procedure is an essential step in finding appropriate ligand molecules that bind to a target protein structure in order to identify drug candidates. This procedure takes a known structure of a protein-ligand complex as input, and compounds structurally similar to the query ligand are designed, considering all possible combinations of atomic species. This task is, however, computationally hard, since such combinatorial optimization problems belong to the non-deterministic polynomial-time hard (NP-hard) class. In this paper, we propose structure-based lead generation and optimization procedures using a degenerate optical parametric oscillator (DOPO) network. Results of numerical simulation demonstrate that the DOPO network efficiently identifies a set of appropriate ligand molecules according to the Boltzmann sampling law.Entropy2016-10-131810Article10.3390/e181003653651099-43002016-10-13doi: 10.3390/e18100365Hiromasa SakaguchiKoji OgataTetsu IsomuraShoko UtsunomiyaYoshihisa YamamotoKazuyuki Aihara<![CDATA[Entropy, Vol. 18, Pages 364: Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora]]>
http://www.mdpi.com/1099-4300/18/10/364
One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubt regarding a correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the script kind, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg’s hypothesis.Entropy2016-10-121810Article10.3390/e181003643641099-43002016-10-12doi: 10.3390/e18100364Ryosuke TakahiraKumiko Tanaka-IshiiŁukasz Dębowski<![CDATA[Entropy, Vol. 18, Pages 363: The Shell Collapsar—A Possible Alternative to Black Holes]]>
http://www.mdpi.com/1099-4300/18/10/363
This article argues that a consistent description is possible for gravitationally collapsed bodies, in which collapse stops before the object reaches its gravitational radius, the density reaching a maximum close to the surface and then decreasing towards the centre. The way towards such a description was indicated in the classic Oppenheimer-Snyder (OS) 1939 analysis of a dust star. The title of that article implied support for a black-hole solution, but the present article shows that the final OS density distribution accords with gravastar and other shell models. The parallel Oppenheimer-Volkoff (OV) study of 1939 used the equation of state for a neutron gas, but could consider only stationary solutions of the field equations. Recently we found that the OV equation of state permits solutions with minimal rather than maximal central density, and here we find a similar topology for the OS dust collapsar; a uniform dust-ball which starts with large radius, and correspondingly small density, and collapses to a shell at the gravitational radius with density decreasing monotonically towards the centre. Though no longer considered central in black-hole theory, the OS dust model gave the first exact, time-dependent solution of the field equations. Regarded as a limiting case of OV, it indicates the possibility of neutron stars of unlimited mass with a similar shell topology. Progress in observational astronomy will distinguish this class of collapsars from black holes.Entropy2016-10-121810Article10.3390/e181003633631099-43002016-10-12doi: 10.3390/e18100363Trevor Marshall<![CDATA[Entropy, Vol. 18, Pages 360: Metric for Estimating Congruity between Quantum Images]]>
http://www.mdpi.com/1099-4300/18/10/360
An enhanced quantum-based image fidelity metric, the QIFM metric, is proposed as a tool to assess the “congruity” between two or more quantum images. The often confounding contrariety that distinguishes between classical and quantum information processing makes the widely accepted peak-signal-to-noise-ratio (PSNR) ill-suited for use in the quantum computing framework, whereas the prohibitive cost of the probability-based similarity score makes it imprudent for use as an effective image quality metric. Unlike the aforementioned image quality measures, the proposed QIFM metric is calibrated as a pixel difference-based image quality measure that is sensitive to the intricacies inherent to quantum image processing (QIP). As proposed, the QIFM is configured with in-built non-destructive measurement units that preserve the coherence necessary for quantum computation. This design moderates the cost of executing the QIFM in order to estimate congruity between two or more quantum images. A statistical analysis also shows that our proposed QIFM metric has a better correlation with digital expectation of likeness between images than other available quantum image quality measures. Therefore, the QIFM offers a competent substitute for the PSNR as an image quality measure in the quantum computing framework thereby providing a tool to effectively assess fidelity between images in quantum watermarking, quantum movie aggregation and other applications in QIP.Entropy2016-10-091810Article10.3390/e181003603601099-43002016-10-09doi: 10.3390/e18100360Abdullah IliyasuFei YanKaoru Hirota<![CDATA[Entropy, Vol. 18, Pages 348: Tolerance Redistributing of the Reassembly Dimensional Chain on Measure of Uncertainty]]>
http://www.mdpi.com/1099-4300/18/10/348
How to use the limited precision of remanufactured parts to assemble higher-quality remanufactured products is a challenge for remanufacturing engineering under uncertainty. On the basis of analyzing the uncertainty of remanufactured parts, this paper takes tolerance redistribution of the reassembly (remanufactured assembly) dimensional chain as the research object. An entropy model to measure the uncertainty of the assembly dimensional chain is built, and we quantify the degree of the uncertainty gap between reassembly and assembly. Then, in order to make sure the uncertainty of reassembly is not lower than that of assembly, a tolerance redistribution optimization model of the reassembly dimensional chain is proposed, based on the tolerance grading allocation method. Finally, this paper takes the remanufactured gearbox assembly dimension chain as an example. The redistribution optimization model saves 19.11% of the cost while preserving the assembly precision of remanufactured products. It provides new technical and theoretical support to increase the utilization rate of remanufactured parts and improve reassembly precision.Entropy2016-10-091810Article10.3390/e181003483481099-43002016-10-09doi: 10.3390/e18100348Conghu Liu<![CDATA[Entropy, Vol. 18, Pages 361: Measures of Difference and Significance in the Era of Computer Simulations, Meta-Analysis, and Big Data]]>
http://www.mdpi.com/1099-4300/18/10/361
In traditional research, repeated measurements lead to a sample of results, and inferential statistics can be used to not only estimate parameters, but also to test statistical hypotheses concerning these parameters. In many cases, the standard error of the estimates decreases (asymptotically) with the square root of the sample size, which provides a stimulus to probe large samples. In simulation models, the situation is entirely different. When probability distribution functions for model features are specified, the probability distribution function of the model output can be approached using numerical techniques, such as bootstrapping or Monte Carlo sampling. Given the computational power of most PCs today, the sample size can be increased almost without bounds. The result is that standard errors of parameters are vanishingly small, and that almost all significance tests will lead to a rejected null hypothesis. Clearly, another approach to statistical significance is needed. This paper analyzes the situation and connects the discussion to other domains in which the null hypothesis significance test (NHST) paradigm is challenged. In particular, the notions of effect size and Cohen’s d provide promising alternatives for the establishment of a new indicator of statistical significance. This indicator attempts to cover significance (precision) and effect size (relevance) in one measure. Although in the end more fundamental changes are called for, our approach has the attractiveness of requiring only a minimal change to the practice of statistics. The analysis is not only relevant for artificial samples, but also for present-day huge samples, associated with the availability of big data.Entropy2016-10-091810Article10.3390/e181003613611099-43002016-10-09doi: 10.3390/e18100361Reinout HeijungsPatrik HenrikssonJeroen Guinée<![CDATA[Entropy, Vol. 18, Pages 359: Realistic Many-Body Quantum Systems vs. Full Random Matrices: Static and Dynamical Properties]]>
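The effect-size side of the proposed indicator rests on Cohen's d, the standardized difference between two sample means. A minimal sketch of its computation, using the standard pooled-variance form (the function name and toy samples are illustrative, not taken from the paper):

```python
import math

def cohens_d(sample_a, sample_b):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Unbiased sample variances (Bessel's correction)
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

Unlike a p-value, this quantity does not shrink toward significance as the sample size grows, which is the property the authors exploit.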
http://www.mdpi.com/1099-4300/18/10/359
We study the static and dynamical properties of isolated many-body quantum systems and compare them with the results for full random matrices. In doing so, we link concepts from quantum information theory with those from quantum chaos. In particular, we relate the von Neumann entanglement entropy with the Shannon information entropy and discuss their relevance for the analysis of the degree of complexity of the eigenstates, the behavior of the system at different time scales and the conditions for thermalization. A main advantage of full random matrices is that they enable the derivation of analytical expressions that agree extremely well with the numerics and provide bounds for realistic many-body quantum systems.Entropy2016-10-081810Article10.3390/e181003593591099-43002016-10-08doi: 10.3390/e18100359Eduardo Torres-HerreraJonathan KarpMarco TávoraLea Santos<![CDATA[Entropy, Vol. 18, Pages 357: Ordering Quantiles through Confidence Statements]]>
http://www.mdpi.com/1099-4300/18/10/357
Ranking variables according to their relevance for predicting an outcome is an important task in biomedicine. For instance, such a ranking can be used to select a smaller number of genes so that more sophisticated experiments are applied only to the genes identified as important. A nonparametric method called Quor is designed to provide a confidence value for the order of arbitrary quantiles of different populations using independent samples. This confidence may provide insights about possible differences among groups and yields a ranking of importance for the variables. Computations are efficient and use exact distributions with no need for asymptotic considerations. Experiments with simulated data and with multiple real -omics data sets are performed, and they show advantages and disadvantages of the method. Quor makes no assumptions other than the independence of samples, so it may be a better option when the assumptions of other methods cannot be verified. The software is publicly available on CRAN.Entropy2016-10-081810Article10.3390/e181003573571099-43002016-10-08doi: 10.3390/e18100357Cassio de CamposCarlos de B. PereiraPaola RancoitaAdriano Polpo<![CDATA[Entropy, Vol. 18, Pages 358: Entropy Cross-Efficiency Model for Decision Making Units with Interval Data]]>
http://www.mdpi.com/1099-4300/18/10/358
The cross-efficiency method, as a Data Envelopment Analysis (DEA) extension, calculates the cross efficiency of each decision making unit (DMU) using the weights of all decision making units (DMUs). The major advantage of the cross-efficiency method is that it can provide a complete ranking for all DMUs. In addition, the cross-efficiency method could eliminate unrealistic weight results. However, the existing cross-efficiency methods only evaluate the relative efficiencies of a set of DMUs with exact values of inputs and outputs. If the input or output data of DMUs are imprecise, such as the interval data, the existing methods fail to assess the efficiencies of these DMUs. To address this issue, we propose the introduction of Shannon entropy into the cross-efficiency method. In the proposed model, intervals of all cross-efficiency values are firstly obtained by the interval cross-efficiency method. Then, a distance entropy model is proposed to obtain the weights of interval efficiency. Finally, all alternatives are ranked by their relative Euclidean distance from the positive solution.Entropy2016-10-011810Article10.3390/e181003583581099-43002016-10-01doi: 10.3390/e18100358Lupei WangLei LiNingxi Hong<![CDATA[Entropy, Vol. 18, Pages 350: Entropy-Based Application Layer DDoS Attack Detection Using Artificial Neural Networks]]>
http://www.mdpi.com/1099-4300/18/10/350
Distributed denial-of-service (DDoS) attacks are among the major threats to web servers. The rapid increase of DDoS attacks on the Internet has clearly pointed out the limitations in current intrusion detection systems or intrusion prevention systems (IDS/IPS), mostly caused by application-layer DDoS attacks. Within this context, the objective of the paper is to detect a DDoS attack using a multilayer perceptron (MLP) classification algorithm with a genetic algorithm (GA) as the learning algorithm. In this work, we analyzed the standard EPA-HTTP (environmental protection agency-hypertext transfer protocol) dataset and selected the parameters that will be used as input to the classifier model for differentiating the attack from the normal profile. The parameters selected are the HTTP GET request count, entropy, and variance for every connection. The proposed model can provide a better accuracy of 98.31%, sensitivity of 0.9962, and specificity of 0.0561 when compared to other traditional classification models.Entropy2016-10-011810Article10.3390/e181003503501099-43002016-10-01doi: 10.3390/e18100350Khundrakpam Johnson SinghKhelchandra ThongamTanmay De<![CDATA[Entropy, Vol. 18, Pages 355: Analysis of Entropy Generation in Mixed Convective Peristaltic Flow of Nanofluid]]>
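The three per-window inputs named above (HTTP GET request count, entropy, and variance) can be sketched as follows. The windowing of connections and the exact entropy definition used with the EPA-HTTP dataset are assumptions here, not the authors' published code:

```python
import math

def request_features(get_counts):
    """Features from HTTP GET request counts, one count per connection in a window:
    total request count, Shannon entropy of the count distribution, and variance."""
    total = sum(get_counts)
    # Shannon entropy of the empirical distribution of requests over connections;
    # a flood concentrated on few connections lowers this value.
    probs = [c / total for c in get_counts if c > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    mean = total / len(get_counts)
    variance = sum((c - mean) ** 2 for c in get_counts) / len(get_counts)
    return total, entropy, variance
```

Feature vectors of this form would then be fed to the MLP classifier for attack/normal discrimination.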
http://www.mdpi.com/1099-4300/18/10/355
This article examines entropy generation in the peristaltic transport of nanofluid in a channel with flexible walls. Single-walled carbon nanotubes (SWCNT) and multi-walled carbon nanotubes (MWCNT) with water as the base fluid are utilized in this study. Mixed convection is also considered in the present analysis. The viscous dissipation effect is present. Moreover, slip conditions are imposed for both velocity and temperature at the boundaries. The analysis is carried out under the long-wavelength and small-Reynolds-number assumptions. A two-phase model for nanofluids is employed. The nonlinear system of equations for small Grashof number is solved. Velocity and temperature are examined for different parameters via graphs. Streamlines are also constructed to analyze the trapping. Results show that the axial velocity and temperature of the nanofluid decrease when we enhance the nanoparticle volume fraction. Moreover, the wall elastance parameter shows an increase in axial velocity and temperature, whereas a decrease in both quantities is noticed for the damping coefficient. A decrease in entropy generation and the Bejan number is noticed for increasing values of the nanoparticle volume fraction.Entropy2016-09-301810Article10.3390/e181003553551099-43002016-09-30doi: 10.3390/e18100355Tasawar HayatSadaf NawazAhmed AlsaediMaimona Rafiq<![CDATA[Entropy, Vol. 18, Pages 356: Exergetic Analysis of a Novel Solar Cooling System for Combined Cycle Power Plants]]>
http://www.mdpi.com/1099-4300/18/10/356
This paper presents a detailed exergetic analysis of a novel high-temperature Solar Assisted Combined Cycle (SACC) power plant. The system includes a solar field consisting of innovative high-temperature flat plate evacuated solar thermal collectors, a double stage LiBr-H2O absorption chiller, pumps, heat exchangers, storage tanks, mixers, diverters, controllers and a simple single-pressure Combined Cycle (CC) power plant. Here, a high-temperature solar cooling system is coupled with a conventional combined cycle to pre-cool gas turbine inlet air, enhancing system efficiency and electrical capacity. In this paper, the system is analyzed from an exergetic point of view, on the basis of an energy-economic model presented in a recent work, whose main results show that the SACC exhibits higher electrical production and efficiency with respect to the conventional CC. The system performance is evaluated by a dynamic simulation, where detailed simulation models are implemented for all the components included in the system. In addition, for all the components and for the system as a whole, energy and exergy balances are implemented in order to calculate the magnitude of the irreversibilities within the system. In fact, exergy analysis is used to assess exergy destructions and exergetic efficiencies. Such parameters are used to evaluate the magnitude of the irreversibilities in the system and to identify the sources of such irreversibilities. Exergetic efficiencies and exergy destructions are dynamically calculated for the 1-year operation of the system. Similarly, exergetic results are also integrated on weekly and yearly bases in order to evaluate the corresponding irreversibilities. The results showed that the components of the Joule cycle (combustor, turbine and compressor) are the major sources of irreversibilities. The overall system exergetic efficiency was around 48%. 
Average weekly solar collector exergetic efficiency ranged from 6.5% to 14.5%, significantly increasing during the summer season. Conversely, absorption chiller exergy efficiency varies from 7.7% to 20.2%, being higher during the winter season. Combustor exergy efficiency is stably close to 68%, whereas the exergy efficiencies of the remaining components are higher than 80%.Entropy2016-09-291810Article10.3390/e181003563561099-43002016-09-29doi: 10.3390/e18100356Francesco CaliseLuigi LibertiniMaria Vicidomini<![CDATA[Entropy, Vol. 18, Pages 353: Generalized Thermodynamic Optimization for Iron and Steel Production Processes: Theoretical Exploration and Application Cases]]>
http://www.mdpi.com/1099-4300/18/10/353
Combining modern thermodynamics theory branches, including finite time thermodynamics or entropy generation minimization, constructal theory and entransy theory, with metallurgical process engineering, this paper provides a new exploration of generalized thermodynamic optimization theory for iron and steel production processes. The theoretical core is to thermodynamically optimize the performance of elemental packages, working procedure modules, functional subsystems, and the whole process of iron and steel production under real finite-resource and/or finite-size constraints with various irreversibilities, toward saving energy, decreasing consumption, reducing emissions and increasing yield, and to achieve comprehensive coordination among the material flow, energy flow and environment of the hierarchical process systems. A series of application cases of the theory are reviewed. The theory provides a new thermodynamic perspective on iron and steel production processes, and can also provide guidelines for other process industries.Entropy2016-09-291810Review10.3390/e181003533531099-43002016-09-29doi: 10.3390/e18100353Lingen ChenHuijun FengZhihui Xie<![CDATA[Entropy, Vol. 18, Pages 354: A Langevin Canonical Approach to the Study of Quantum Stochastic Resonance in Chiral Molecules]]>
http://www.mdpi.com/1099-4300/18/10/354
A Langevin canonical framework for a chiral two-level system coupled to a bath of harmonic oscillators is used within a coupling scheme different from the well-known spin-boson model to study the quantum stochastic resonance for chiral molecules. This process refers to the amplification of the response to an external periodic signal at a certain value of the noise strength, being a cooperative effect of friction, noise, and periodic driving occurring in a bistable system. Furthermore, from this stochastic dynamics within the Markovian regime and Ohmic friction, the competing process between tunneling and the parity violating energy difference present in this type of chiral systems plays a fundamental role. This mechanism is finally proposed to observe the so-far elusive parity-violating energy difference in chiral molecules.Entropy2016-09-291810Article10.3390/e181003543541099-43002016-09-29doi: 10.3390/e18100354Germán Rojas-LorenzoHelen Peñate-RodríguezAnais Dorta-UrraPedro BargueñoSalvador Miret-Artés<![CDATA[Entropy, Vol. 18, Pages 351: Influence of the Aqueous Environment on Protein Structure—A Plausible Hypothesis Concerning the Mechanism of Amyloidogenesis]]>
http://www.mdpi.com/1099-4300/18/10/351
The aqueous environment is a pervasive factor which, in many ways, determines the protein folding process and consequently the activity of proteins. Proteins are unable to perform their function unless immersed in water (membrane proteins are excluded from this statement). Tertiary conformational stabilization is dependent on the presence of internal force fields (nonbonding interactions between atoms), as well as an external force field generated by water. The hitherto unknown structuring of water as the aqueous environment may be elucidated by analyzing its effects on protein structure and function. Our study is based on the fuzzy oil drop model—a mechanism which describes the formation of a hydrophobic core and attempts to explain the emergence of amyloid-like fibrils. A set of proteins which vary with respect to their fuzzy oil drop status (including titin, transthyretin and a prion protein) have been selected for in-depth analysis to suggest the plausible mechanism of amyloidogenesis.Entropy2016-09-281810Article10.3390/e181003513511099-43002016-09-28doi: 10.3390/e18100351Irena RotermanMateusz BanachBarbara KalinowskaLeszek Konieczny<![CDATA[Entropy, Vol. 18, Pages 352: Propositions for Confidence Interval in Systematic Sampling on Real Line]]>
http://www.mdpi.com/1099-4300/18/10/352
Systematic sampling is used as a method to obtain quantitative results from tissues and radiological images. Systematic sampling on the real line ( R ) is an attractive method that practitioners rely on in biomedical imaging. For systematic sampling on R , the measurement function ( M F ) is obtained by slicing the three-dimensional object systematically at equal distances. The currently used covariogram model for variance approximation is tested on different measurement functions in a class to assess its performance in estimating the variance of systematic sampling on R . An exact calculation method is proposed to compute the constant λ ( q , N ) of the confidence interval in systematic sampling. The exact value of the constant λ ( q , N ) is examined for the different measurement functions as well. As a result, it is observed from the simulation that the proposed M F should be used to check the performance of the variance approximation and the constant λ ( q , N ) . Synthetic data can support the results of real data.Entropy2016-09-281810Article10.3390/e181003523521099-43002016-09-28doi: 10.3390/e18100352Mehmet Çankaya<![CDATA[Entropy, Vol. 18, Pages 343: Fuzzy Shannon Entropy: A Hybrid GIS-Based Landslide Susceptibility Mapping Method]]>
http://www.mdpi.com/1099-4300/18/10/343
Assessing Landslide Susceptibility Mapping (LSM) contributes to reducing the risk of living with landslides. Handling the vagueness associated with LSM is a challenging task. Here we show the application of hybrid GIS-based LSM. The hybrid approach embraces fuzzy membership functions (FMFs) in combination with Shannon entropy, a well-known information theory-based method. Nine landslide-related criteria, along with an inventory of landslides containing 108 recent and historic landslide points, are used to prepare a susceptibility map. A random split into training (≈70%) and testing (≈30%) samples is used for training and validation of the LSM model. The study area—Izeh—is located in the Khuzestan province of Iran, a highly susceptible landslide zone. The performance of the hybrid method is evaluated using receiver operating characteristics (ROC) curves in combination with the area under the curve (AUC). The proposed hybrid method, with an AUC of 0.934, outperforms the subjective multi-criteria evaluation scheme applied to the same dataset in a previous study, which achieved an AUC of 0.894 using extended fuzzy multi-criteria evaluation built on decision makers’ evaluations in the same study area.Entropy2016-09-271810Article10.3390/e181003433431099-43002016-09-27doi: 10.3390/e18100343Majid Shadman RoodposhtiJagannath AryalHiman ShahabiTaher Safarrad<![CDATA[Entropy, Vol. 18, Pages 349: Recognition of Abnormal Uptake through 123I-mIBG Scintigraphy Entropy for Paediatric Neuroblastoma Identification]]>
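The Shannon entropy step that hybrid schemes of this kind build on can be sketched as follows. This is the generic entropy weight method for criterion weighting, not the authors' exact pipeline, and the toy decision matrix is purely illustrative:

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy criterion weights: criteria whose values vary more across
    alternatives (rows) carry more information and receive larger weights."""
    m = len(matrix)          # number of alternatives (rows)
    n = len(matrix[0])       # number of criteria (columns)
    k = 1.0 / math.log(m)    # normalizes entropy into [0, 1]
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        e_j = -k * sum(p * math.log(p) for p in probs if p > 0)
        raw.append(1.0 - e_j)  # degree of diversification of criterion j
    s = sum(raw)
    return [w / s for w in raw]
```

A criterion that takes the same value for every alternative gets weight zero, since it cannot help discriminate susceptible from non-susceptible cells.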
http://www.mdpi.com/1099-4300/18/10/349
Whole-body 123I-Metaiodobenzylguanidine (mIBG) scintigraphy is used as primary image modality to visualize neuroblastoma tumours and metastases because it is the most sensitive and specific radioactive tracer in staging the disease and evaluating the response to treatment. However, especially in paediatric neuroblastoma, information from mIBG scans is difficult to extract because of acquisition difficulties that produce low definition images, with poor contours, resolution and contrast. These problems limit physician assessment. Current oncological guidelines are based on qualitative observer-dependant analysis. This makes comparing results taken at different moments of therapy, or in different institutions, difficult. In this paper, we present a computerized method that processes an image and calculates a quantitative measurement considered as its entropy, suitable for the identification of abnormal uptake regions, for which there is enough suspicion that they may be a tumour or metastatic site. This measurement can also be compared with future scintigraphies of the same patient. In total, 46 scintigraphies of 22 anonymous patients were tested; the procedure identified 96.7% of regions of abnormal uptake and it showed a low overall false negative rate of 3.3%. This method provides assistance to physicians in diagnosing tumours and also allows the monitoring of patients’ evolution.Entropy2016-09-271810Article10.3390/e181003493491099-43002016-09-27doi: 10.3390/e18100349Milagros Martínez-DíazRafael Martínez-DíazLuis Sánchez-RuizGuillermo Peris-Fajarnés<![CDATA[Entropy, Vol. 18, Pages 345: A Novel Operational Matrix of Caputo Fractional Derivatives of Fibonacci Polynomials: Spectral Solutions of Fractional Differential Equations]]>
http://www.mdpi.com/1099-4300/18/10/345
Herein, two numerical algorithms for solving some linear and nonlinear fractional-order differential equations are presented and analyzed. For this purpose, a novel operational matrix of fractional-order derivatives of Fibonacci polynomials was constructed and employed along with the application of the tau and collocation spectral methods. The convergence and error analysis of the suggested Fibonacci expansion were carefully investigated. Some numerical examples with comparisons are presented to ensure the efficiency, applicability and high accuracy of the proposed algorithms. Two accurate semi-analytic polynomial solutions for linear and nonlinear fractional differential equations are the result.Entropy2016-09-231810Article10.3390/e181003453451099-43002016-09-23doi: 10.3390/e18100345Waleed Abd-ElhameedYoussri Youssri<![CDATA[Entropy, Vol. 18, Pages 344: The Analytical Solution of Parabolic Volterra Integro-Differential Equations in the Infinite Domain]]>
http://www.mdpi.com/1099-4300/18/10/344
This article focuses on obtaining analytical solutions for d-dimensional, parabolic Volterra integro-differential equations with different types of frictional memory kernel. Based on Laplace transform and Fourier transform theories, the properties of the Fox-H function and convolution theorem, analytical solutions for the equations in the infinite domain are derived under three frictional memory kernel functions. The analytical solutions are expressed by infinite series, the generalized multi-parameter Mittag-Leffler function, the Fox-H function and the convolution form of the Fourier transform. In addition, graphical representations of the analytical solution under different parameters are given for one-dimensional parabolic Volterra integro-differential equations with a power-law memory kernel. It can be seen that the solution curves are subject to Gaussian decay at any given moment.Entropy2016-09-231810Article10.3390/e181003443441099-43002016-09-23doi: 10.3390/e18100344Yun ZhaoFengqun Zhao<![CDATA[Entropy, Vol. 18, Pages 347: Boltzmann Complexity: An Emergent Property of the Majorization Partial Order]]>
http://www.mdpi.com/1099-4300/18/10/347
Boltzmann macrostates, which are in 1:1 correspondence with the partitions of integers, are investigated. Integer partitions, unlike entropy, uniquely characterize Boltzmann states, but their use has been limited. Integer partitions are well known to be partially ordered by majorization. It is less well known that this partial order is fundamentally equivalent to the “mixedness” of the set of microstates that comprise each macrostate. Thus, integer partitions represent the fundamental property of the mixing character of Boltzmann states. The standard definition of incomparability in partial orders is applied to each Boltzmann macrostate (i.e., each partition) to calculate the number C of other macrostates with which that macrostate is incomparable. We show that the value of C complements the value of the Boltzmann entropy, S, obtained in the usual way. Results for C and S are obtained for Boltzmann states comprised of up to N = 50 microstates where there are 204,226 Boltzmann macrostates. We note that, unlike mixedness, neither C nor S uniquely characterizes macrostates. Plots of C vs. S are shown. The results are surprising and support the authors’ earlier suggestion that C be regarded as the complexity of the Boltzmann states. From this we propose that complexity may generally arise from incomparability in other systems as well.Entropy2016-09-231810Article10.3390/e181003473471099-43002016-09-23doi: 10.3390/e18100347William SeitzA. Kirwan<![CDATA[Entropy, Vol. 18, Pages 346: Entropy for the Quantized Field in the Atom-Field Interaction: Initial Thermal Distribution]]>
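The majorization order and the incomparability count C described above can be sketched directly on integer partitions. This is a brute-force illustration feasible only for small N, not the authors' code:

```python
from itertools import accumulate

def partitions(n, max_part=None):
    """Generate the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def majorizes(p, q):
    """True if partition p majorizes q: every partial sum of p dominates that of q."""
    pa = list(accumulate(p)) + [sum(p)] * len(q)  # pad with the total
    qa = list(accumulate(q)) + [sum(q)] * len(p)
    return all(a >= b for a, b in zip(pa, qa))

def incomparability(n):
    """For each partition of n, count the other partitions incomparable with it."""
    parts = list(partitions(n))
    return {p: sum(1 for q in parts
                   if q != p and not majorizes(p, q) and not majorizes(q, p))
            for p in parts}
```

For N = 4 the majorization order is a chain, so every partition has C = 0; the first incomparable pair, (3,1,1,1) and (2,2,2), appears at N = 6.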
http://www.mdpi.com/1099-4300/18/10/346
We study the entropy of a quantized field in interaction with a two-level atom (in a pure state) when the field is initially in a mixture of two number states. We then generalise the result for a thermal state; i.e., an (infinite) statistical mixture of number states. We show that for some specific interaction times, the atom passes its purity to the field and therefore the field entropy decreases from its initial value.Entropy2016-09-231810Article10.3390/e181003463461099-43002016-09-23doi: 10.3390/e18100346Luis Andrade-MoralesBraulio Villegas-MartínezHector Moya-Cessa<![CDATA[Entropy, Vol. 18, Pages 340: Application of Information Theory for an Entropic Gradient of Ecological Sites]]>
http://www.mdpi.com/1099-4300/18/10/340
The present study was carried out to compute straightforward formulations of information entropy for ecological sites and to arrange their locations along the ordination axes using the values of those entropic measures. The data of plant communities taken from six sites found in the Dedegül Mountain sub-district and the Sultan Mountain sub-district located in the Beyşehir Watershed were examined in the present study. First, the entropic measures (i.e., marginal entropy, joint entropy, conditional entropy and mutual entropy) were computed for each of the sites. Next, principal component analysis (PCA) was applied to the data composed of the values of those entropic measures. The arrangement of the sites along the first PCA axis was found to be ecologically meaningful, because the positions of the sites along the first component axis illustrated the climatic differences between the sub-districts.Entropy2016-09-221810Article10.3390/e181003403401099-43002016-09-22doi: 10.3390/e18100340Kürşad Özkan<![CDATA[Entropy, Vol. 18, Pages 342: The Differential Entropy of the Joint Distribution of Eigenvalues of Random Density Matrices]]>
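The four entropic measures listed above can be sketched for a sites-by-species abundance table. Base-2 entropies and the table layout are assumptions here, not the study's data format:

```python
import math

def site_entropies(counts):
    """Entropic measures for a sites-x-species abundance table:
    marginal entropy of sites and of species, joint entropy,
    conditional entropy H(species|site), and mutual information."""
    total = sum(sum(row) for row in counts)
    p = [[c / total for c in row] for row in counts]
    p_site = [sum(row) for row in p]
    p_species = [sum(row[j] for row in p) for j in range(len(p[0]))]

    def h(dist):
        return -sum(x * math.log2(x) for x in dist if x > 0)

    h_site = h(p_site)
    h_species = h(p_species)
    h_joint = h([x for row in p for x in row])
    h_cond = h_joint - h_site            # chain rule: H(species|site)
    mutual = h_site + h_species - h_joint
    return h_site, h_species, h_joint, h_cond, mutual
```

Each site's vector of such measures would then form one row of the matrix fed to the PCA.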
http://www.mdpi.com/1099-4300/18/9/342
We derive exactly the differential entropy of the joint distribution of eigenvalues of Wishart matrices. Based on this result, we calculate the differential entropy of the joint distribution of eigenvalues of random mixed quantum states, which is induced by taking the partial trace over the environment of Haar-distributed bipartite pure states. Then, we investigate the differential entropy of the joint distribution of diagonal entries of random mixed quantum states. Finally, we investigate the relative entropy between these two kinds of distributions.Entropy2016-09-21189Article10.3390/e180903423421099-43002016-09-21doi: 10.3390/e18090342Laizhen LuoJiamei WangLin ZhangShifang Zhang<![CDATA[Entropy, Vol. 18, Pages 341: Effects of Fatty Infiltration of the Liver on the Shannon Entropy of Ultrasound Backscattered Signals]]>
http://www.mdpi.com/1099-4300/18/9/341
This study explored the effects of fatty infiltration on the signal uncertainty of ultrasound backscattered echoes from the liver. Standard ultrasound examinations were performed on 107 volunteers. For each participant, raw ultrasound image data of the right lobe of liver were acquired using a clinical scanner equipped with a 3.5-MHz convex transducer. An algorithmic scheme was proposed for ultrasound B-mode and entropy imaging. Fatty liver stage was evaluated using a sonographic scoring system. Entropy values constructed using the ultrasound radiofrequency (RF) and uncompressed envelope signals (denoted by HR and HE, respectively) as a function of fatty liver stage were analyzed using the Pearson correlation coefficient. Data were expressed as the median and interquartile range (IQR). Receiver operating characteristic (ROC) curve analysis with 95% confidence intervals (CIs) was performed to obtain the area under the ROC curve (AUC). The brightness of the entropy image typically increased as the fatty stage varied from mild to severe. The median value of HR monotonically increased from 4.69 (IQR: 4.60–4.79) to 4.90 (IQR: 4.87–4.92) as the severity of fatty liver increased (r = 0.63, p &lt; 0.0001). Concurrently, the median value of HE increased from 4.80 (IQR: 4.69–4.89) to 5.05 (IQR: 5.02–5.07) (r = 0.69, p &lt; 0.0001). In particular, the AUCs obtained using HE (95% CI) were 0.93 (0.87–0.99), 0.88 (0.82–0.94), and 0.76 (0.65–0.87) for fatty stages ≥mild, ≥moderate, and ≥severe, respectively. The sensitivity, specificity, and accuracy were 93.33%, 83.11%, and 86.00%, respectively (≥mild). Fatty infiltration increases the uncertainty of backscattered signals from livers. Ultrasound entropy imaging has potential for the routine examination of fatty liver disease.Entropy2016-09-21189Article10.3390/e180903413411099-43002016-09-21doi: 10.3390/e18090341Po-Hsiang TsuiYung-Liang Wan<![CDATA[Entropy, Vol. 
18, Pages 339: Quantum Computation and Information: Multi-Particle Aspects]]>
http://www.mdpi.com/1099-4300/18/9/339
This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers.Entropy2016-09-20189Editorial10.3390/e180903393391099-43002016-09-20doi: 10.3390/e18090339Demosthenes EllinasGiorgio KaniadakisJiannis PachosAntonio Scarfone<![CDATA[Entropy, Vol. 18, Pages 338: The Constant Information Radar]]>
http://www.mdpi.com/1099-4300/18/9/338
The constant information radar, or CIR, is a tracking radar that modulates target revisit time by maintaining a fixed mutual information measure. For highly dynamic targets that deviate significantly from the path predicted by the tracking motion model, the CIR adjusts by illuminating the target more frequently than it would for well-modeled targets. If SNR is low, the radar delays revisit to the target until the state entropy overcomes noise uncertainty. As a result, we show that the information measure is highly dependent on target entropy and target measurement covariance. A constant information measure maintains a fixed spectral efficiency to support the RF convergence of radar and communications. The result is a radar implementing a novel target scheduling algorithm based on information instead of heuristic or ad hoc methods. The CIR mathematically ensures that spectral use is justified.Entropy2016-09-19189Article10.3390/e180903383381099-43002016-09-19doi: 10.3390/e18090338Bryan PaulDaniel Bliss<![CDATA[Entropy, Vol. 18, Pages 337: Entropy Minimizing Curves with Application to Flight Path Design and Clustering]]>
http://www.mdpi.com/1099-4300/18/9/337
Air traffic management (ATM) aims at providing companies with safe and ideally optimal aircraft trajectory planning. Air traffic controllers act on flight paths in such a way that no pair of aircraft come closer than the regulatory separation norms. With the increase of traffic, it is expected that the system will reach its limits in the near future: a paradigm change in ATM is planned with the introduction of trajectory-based operations. In this context, sets of well-separated flight paths are computed in advance, tremendously reducing the number of unsafe situations that must be dealt with by controllers. Unfortunately, automated tools used to generate such plans generally issue trajectories not complying with operational practices or even flight dynamics. In this paper, a means of producing realistic air routes from the output of an automated trajectory design tool is investigated. For that purpose, the entropy of a system of curves is first defined, and a means of iteratively minimizing it is presented. The resulting curves form a route network that is suitable for use in a semi-automated ATM system with a human in the loop. The tool introduced in this work is quite versatile and may also be applied to unsupervised classification of curves: an example is given for French traffic.Entropy2016-09-15189Article10.3390/e180903373371099-43002016-09-15doi: 10.3390/e18090337Stéphane PuechmorelFlorence Nicol<![CDATA[Entropy, Vol. 18, Pages 336: Combined Forecasting of Streamflow Based on Cross Entropy]]>
http://www.mdpi.com/1099-4300/18/9/336
In this study, we developed a model of combined streamflow forecasting based on cross entropy to solve the problems of streamflow complexity and random hydrological processes. First, we analyzed the streamflow data obtained from Wudaogou station on the Huifa River, which is the second tributary of the Songhua River, and found that the streamflow was characterized by fluctuations and periodicity, and it was closely related to rainfall. The proposed method involves selecting similar years based on the gray correlation degree. The forecasting results obtained by the time series model (autoregressive integrated moving average), improved grey forecasting model, and artificial neural network model (a radial basis function) were used as a single forecasting model, and from the viewpoint of the probability density, the method for determining weights was improved by using the cross entropy model. The numerical results showed that compared with the single forecasting model, the combined forecasting model improved the stability of the forecasting model, and the prediction accuracy was better than that of conventional combined forecasting models.Entropy2016-09-15189Article10.3390/e180903363361099-43002016-09-15doi: 10.3390/e18090336Baohui MenRishang LongJianhua Zhang<![CDATA[Entropy, Vol. 18, Pages 335: Enhanced Energy Distribution for Quantum Information Heat Engines]]>
http://www.mdpi.com/1099-4300/18/9/335
A new scenario for energy distribution, security and shareability is presented that assumes the availability of quantum information heat engines and a thermal bath. It is based on the convertibility between entropy and work in the presence of a thermal reservoir. Our approach to the informational content of physical systems that are distributed between users is complementary to the conventional perspective of quantum communication. The latter places the value on the unpredictable content of the transmitted quantum states, while our interest focuses on their certainty. Some well-known results in quantum communication are reused in this context. In particular, we describe a way to securely distribute quantum states to be used for unlocking energy from thermal sources. We also consider some multi-partite entangled and classically correlated states for a collaborative multi-user sharing of work extraction possibilities. In addition, the relation between the communication and work extraction capabilities is analyzed and written as an equation.Entropy2016-09-14189Article10.3390/e180903353351099-43002016-09-14doi: 10.3390/e18090335Jose Diaz de la CruzMiguel Martin-Delgado<![CDATA[Entropy, Vol. 18, Pages 332: Study on the Inherent Complex Features and Chaos Control of IS–LM Fractional-Order Systems]]>
http://www.mdpi.com/1099-4300/18/9/332
Based on traditional IS–LM theory, which describes the relationship between interest rates and output in the goods-and-services market and the money market in macroeconomics, we establish a four-dimensional IS–LM model involving four variables. Using Caputo fractional calculus theory, we extend it to a fractional-order nonlinear model and analyze the complexity and stability of the fractional-order system. The existence conditions of attractors under different orders are compared, and the orders at which the system reaches a stable state are obtained. Dynamic phenomena, such as strange attractors and sensitivity to initial values, are analyzed in detail through phase diagrams and the power spectrum. The order is varied in two ways: all orders change synchronously, or a single order changes. The results show that, regardless of the order scenario, the economic system passes through multiple states, such as strong divergence, strange attractors, and convergence, and finally reaches a stable state at a certain order; parameter changes have similar effects on the economic system. Therefore, selecting an appropriate order is significant for an economic system, as it guarantees steady development. Furthermore, this paper applies chaos control to the fractional-order IS–LM macroeconomic model by means of the linear feedback control method; by calculating and adjusting the feedback coefficient, the system is returned to the convergent state.Entropy2016-09-14189Article10.3390/e180903323321099-43002016-09-14doi: 10.3390/e18090332Junhai MaWenbo RenXueli Zhan<![CDATA[Entropy, Vol. 18, Pages 261: Fuzzy Adaptive Repetitive Control for Periodic Disturbance with Its Application to High Performance Permanent Magnet Synchronous Motor Speed Servo Systems]]>
http://www.mdpi.com/1099-4300/18/9/261
Steady-state precision is increasingly important in real servo systems, especially in high-performance speed servo applications, which makes reducing the steady-state speed ripple essential. This paper investigates the periodic-disturbance problem underlying steady-state speed ripple in a permanent magnet synchronous motor (PMSM) servo system; a fuzzy adaptive repetitive controller is designed in the speed loop, based on repetitive control and fuzzy information theory, to reject the periodic disturbance. Firstly, the various sources of the PMSM speed ripple are described and analyzed. Then, the mathematical model of the PMSM is given. Subsequently, a fuzzy adaptive repetitive controller based on repetitive control and fuzzy logic control is designed for the PMSM speed servo system, and the system stability analysis is also derived. Finally, the simulation and the experiments are implemented on MATLAB/Simulink and on a Texas Instruments TMS320F2808 DSP (digital signal processor) hardware platform, respectively. Compared with a proportional-integral (PI) controller, simulation and experimental results show that the proposed fuzzy adaptive repetitive controller has better periodic-disturbance rejection and higher steady-state precision.Entropy2016-09-14189Article10.3390/e180902612611099-43002016-09-14doi: 10.3390/e18090261Junxiao Wang<![CDATA[Entropy, Vol. 18, Pages 327: Sparse Trajectory Prediction Based on Multiple Entropy Measures]]>
http://www.mdpi.com/1099-4300/18/9/327
Trajectory prediction is an important problem with a large number of applications. A common approach to trajectory prediction is based on historical trajectories. However, existing techniques suffer from the “data sparsity problem”: the available historical trajectories are far from enough to cover all possible query trajectories. We propose the sparse trajectory prediction algorithm based on multiple entropy measures (STP-ME) to address the data sparsity problem. Firstly, the moving region is iteratively divided into a two-dimensional plane grid graph, and each trajectory is represented as a grid sequence with temporal information. Secondly, trajectory entropy is used to evaluate a trajectory’s regularity; the L-Z entropy estimator is implemented to calculate trajectory entropy, and a new trajectory space is generated through trajectory synthesis. We define location entropy and time entropy to measure the popularity of locations and timeslots, respectively. Finally, a second-order Markov model that contains a temporal dimension is adopted to perform sparse trajectory prediction. The experiments show that as the trip-completed percentage increases towards 90%, the coverage of the baseline algorithm decreases to almost 25%, while the STP-ME algorithm copes with this as expected, with only an unnoticeable drop in coverage, and can consistently answer almost 100% of query trajectories. The STP-ME algorithm improves the prediction accuracy by as much as 8%, 3%, and 4% compared to the baseline algorithm, the second-order Markov model (2-MM), and the sub-trajectory synthesis (SubSyn) algorithm, respectively. At the same time, the prediction time of the STP-ME algorithm is negligible (10 μs), greatly outperforming the baseline algorithm (100 ms).Entropy2016-09-14189Article10.3390/e180903273271099-43002016-09-14doi: 10.3390/e18090327Lei ZhangLeijun LiuZhanguo XiaWen LiQingfu Fan<![CDATA[Entropy, Vol.
18, Pages 334: Special Issue on Entropy-Based Applied Cryptography and Enhanced Security for Ubiquitous Computing]]>
http://www.mdpi.com/1099-4300/18/9/334
Entropy is a basic and important concept in information theory, and it is also often used as a measure of the unpredictability of a cryptographic key in cryptography research. Ubiquitous computing (Ubi-comp) has emerged rapidly as an exciting new paradigm. For this special issue, we mainly selected and discuss papers on core theories based on graph theory for solving computational problems in cryptography and security, as well as practical technologies, applications, and services for Ubi-comp, including: secure encryption techniques; identity and authentication; credential cloning attacks and countermeasures; a switching generator resistant to algebraic and side-channel attacks; entropy-based network anomaly detection; applied cryptography using chaos functions; information hiding and watermarking; secret sharing; message authentication; the detection and modeling of cyber attacks with Petri Nets; and quantum flows for secret key distribution.Entropy2016-09-13189Editorial10.3390/e180903343341099-43002016-09-13doi: 10.3390/e18090334James ParkWanlei Zhou<![CDATA[Entropy, Vol. 18, Pages 333: Design of Light-Weight High-Entropy Alloys]]>
http://www.mdpi.com/1099-4300/18/9/333
High-entropy alloys (HEAs) are a new class of solid-solution alloys that have attracted worldwide attention for their outstanding properties. Owing to the demand from the transportation and defense industries, light-weight HEAs have also garnered widespread interest from scientists for use as potential structural materials. Great efforts have been made to study the phase-formation rules of HEAs to accelerate and refine the discovery process. In this paper, many proposed solid-solution phase-formation rules are assessed, based on a series of known and newly-designed light-weight HEAs. The results indicate that these empirical rules work for most compositions but also fail for several alloys. Light-weight HEAs often involve additions of Al and/or Ti in great amounts, resulting in large negative enthalpies for forming solid-solution phases and/or intermetallic compounds. Accordingly, these empirical rules need to be modified with the new experimental data. In contrast, the CALPHAD (calculation of phase diagrams) method is demonstrated to be an effective approach to predict the phase formation in HEAs as a function of composition and temperature. Future perspectives on the design of light-weight HEAs are discussed in light of CALPHAD modeling and physical metallurgy principles.Entropy2016-09-13189Article10.3390/e180903333331099-43002016-09-13doi: 10.3390/e18090333Rui FengMichael GaoChanho LeeMichael MathesTingting ZuoShuying ChenJeffrey HawkYong ZhangPeter Liaw
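As background for the entropy-based phase-formation rules mentioned in the last abstract, the quantity underlying the "high-entropy" designation is the ideal configurational entropy of mixing. The sketch below is illustrative only and is not taken from the paper; the 1.5 R cutoff is one commonly cited convention for calling an alloy "high-entropy", not a claim of this article.

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def mixing_entropy(concentrations):
    """Ideal configurational entropy of mixing: ΔS_mix = -R Σ c_i ln c_i,
    where c_i are molar fractions (the input is normalized first)."""
    total = sum(concentrations)
    fractions = [c / total for c in concentrations]
    return -R * sum(c * math.log(c) for c in fractions if c > 0)

# For an equimolar N-component alloy this reduces to R ln N, so adding
# principal elements raises ΔS_mix; a common convention classifies alloys
# with ΔS_mix ≥ 1.5 R as high-entropy.
for n in (2, 3, 5):
    s = mixing_entropy([1.0] * n)
    label = "HEA range" if s >= 1.5 * R else "below HEA threshold"
    print(f"{n} equimolar components: ΔS_mix = {s:.2f} J/(mol·K)  ({label})")
```

Note that this entropy term depends only on composition; the abstract's point is precisely that such composition-only empirical rules fail for some Al- and Ti-rich alloys, where large negative mixing enthalpies favor intermetallic compounds instead.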