Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy
<![CDATA[Entropy, Vol. 19, Pages 40: Nonequilibrium Thermodynamics of Ion Flux through Membrane Channels]]>
http://www.mdpi.com/1099-4300/19/1/40
Ion flux through membrane channels is passively driven by the electrochemical potential differences across the cell membrane. Nonequilibrium thermodynamics has been successful in explaining transport mechanisms, including the ion transport phenomenon. However, physiologists may not be familiar with biophysical concepts based on the view of entropy production. In this paper, I review the physical meanings of, and the connections between, nonequilibrium thermodynamics and the expressions commonly used to describe ion fluxes in membrane physiology. The fluctuation theorem can be applied to interpret the flux ratio in small molecular systems. The multi-ion single-file feature of the ion channel facilitates the utilization of the natural tendency of the electrochemical driving force to couple specific biophysical processes and biochemical reactions on the membrane. Entropy 2017, 19(1), 40; Review; ISSN 1099-4300; doi: 10.3390/e19010040; published 2017-01-19. Author: Chi-Pan Hsieh.
<![CDATA[Entropy, Vol. 19, Pages 41: Transfer Learning for SSVEP Electroencephalography Based Brain–Computer Interfaces Using Learn++.NSE and Mutual Information]]>
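The flux-ratio relation mentioned in the abstract above has a classic form. For independent ion movement, Ussing's flux ratio relates the unidirectional fluxes to the electrochemical driving force; for a single-file multi-ion pore, the exponent acquires a flux-ratio exponent n′ > 1 (Hodgkin–Keynes). A standard statement, not taken verbatim from the paper, is:

```latex
\frac{J_{\text{out}}}{J_{\text{in}}}
  = \exp\!\left(\frac{\Delta\tilde{\mu}}{RT}\right)
  = \exp\!\left(\frac{zF\,(V_m - E_X)}{RT}\right)
  \quad\text{(independent movement)},
\qquad
\frac{J_{\text{out}}}{J_{\text{in}}}
  = \exp\!\left(\frac{n'\,zF\,(V_m - E_X)}{RT}\right)
  \quad\text{(single-file, multi-ion)},
```

where V_m is the membrane potential and E_X the Nernst potential of ion X. The fluctuation-theorem reading is that the log of the flux ratio is the entropy production per transported ion in units of k_B.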
http://www.mdpi.com/1099-4300/19/1/41
Brain–Computer Interfaces (BCI) using Steady-State Visual Evoked Potentials (SSVEP) are sometimes used by injured patients seeking to use a computer. Canonical Correlation Analysis (CCA) is seen as state-of-the-art for SSVEP BCI systems. However, this assumes that the user has full control over their covert attention, which may not be the case. This introduces high calibration requirements when using other machine learning techniques. These may be circumvented by using transfer learning to utilize data from other participants. This paper proposes a combination of ensemble learning via Learn++ for Nonstationary Environments (Learn++.NSE) and similarity measures such as mutual information to identify ensembles of pre-existing data that result in higher classification accuracy. Results show that this approach performed worse than CCA in participants with typical SSVEP responses, but outperformed CCA in participants whose SSVEP responses violated CCA assumptions. This indicates that similarity measures and Learn++.NSE can introduce a transfer learning mechanism to bring SSVEP system accessibility to users unable to control their covert attention. Entropy 2017, 19(1), 41; Article; doi: 10.3390/e19010041; published 2017-01-19. Authors: Matthew Sybeldon, Lukas Schmit, Murat Akcakaya.
<![CDATA[Entropy, Vol. 19, Pages 38: Distributed Rateless Codes with Unequal Error Protection Property for Space Information Networks]]>
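The similarity measure underlying the transfer-learning step above can be illustrated with a minimal histogram-based mutual-information estimator between two feature sequences. This is a sketch only: the bin count and the discrete estimator are our assumptions, not the authors' exact pipeline.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in bits for two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X (column vector)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y (row vector)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())
```

Candidate source-participant data sets with high mutual information to the target participant's calibration data would be the ones admitted into the ensemble.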
http://www.mdpi.com/1099-4300/19/1/38
In this paper, we propose a novel distributed unequal error protection (UEP) rateless coding scheme (DURC) for space information networks (SIN). We consider multimedia data transmissions in a dual-hop SIN communication scenario, where multiple disjoint source nodes need to transmit their UEP rateless coded data to a destination via a dynamic relay. We formulate the optimization problems to provide optimal degree distributions on the direct links and the dynamic relay links to satisfy the required error protection levels. The optimization methods are based on the And–Or tree analysis and can be solved by multi-objective programming. In addition, we evaluate the performance of the optimal DURC scheme, and simulation results show that the proposed DURC scheme can effectively provide the UEP property under a variety of error requirements. Entropy 2017, 19(1), 38; Article; doi: 10.3390/e19010038; published 2017-01-18. Authors: Jian Jiao, Yi Yang, Bowen Feng, Shaohua Wu, Yonghui Li, Qinyu Zhang.
<![CDATA[Entropy, Vol. 19, Pages 36: Exploitation of the Maximum Entropy Principle in Mathematical Modeling of Charge Transport in Semiconductors]]>
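As background for the degree-distribution design above, the classic (non-UEP) robust soliton distribution used by LT rateless codes can be generated as follows. This is the standard textbook construction; the UEP distributions optimized in the paper via And–Or tree analysis are variants of this idea, not this exact distribution.

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution mu(1..k) for an LT code over k symbols."""
    R = c * math.log(k / delta) * math.sqrt(k)
    # ideal soliton component rho(d); rho[0] is a placeholder for 1-based indexing
    rho = [0.0] + [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # spike-and-tail correction tau(d)
    tau = [0.0] * (k + 1)
    pivot = int(round(k / R))
    for d in range(1, k + 1):
        if d < pivot:
            tau[d] = R / (d * k)
        elif d == pivot:
            tau[d] = R * math.log(R / delta) / k
    beta = sum(rho[d] + tau[d] for d in range(1, k + 1))
    return [(rho[d] + tau[d]) / beta for d in range(1, k + 1)]
```

Encoders sample an output-symbol degree from this distribution and XOR that many randomly chosen source symbols; UEP schemes bias the symbol selection toward the better-protected class.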
http://www.mdpi.com/1099-4300/19/1/36
In the last two decades, the Maximum Entropy Principle (MEP) has been successfully employed to construct macroscopic models able to describe the charge and heat transport in semiconductor devices. These models are obtained, starting from the Boltzmann transport equations for the charge and the phonon distribution functions, by taking—as macroscopic variables—suitable moments of the distributions and exploiting MEP in order to close the evolution equations for the chosen moments. Important results have also been obtained for the description of charge transport in devices made both of elemental and compound semiconductors, in cases where charge confinement is present and the carrier flow is two- or one-dimensional. Entropy 2017, 19(1), 36; Article; doi: 10.3390/e19010036; published 2017-01-18. Authors: Giovanni Mascali, Vittorio Romano.
<![CDATA[Entropy, Vol. 19, Pages 37: Evaluation Model of Aluminum Alloy Welded Joint Low-Cycle Fatigue Data Based on Information Entropy]]>
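The MEP closure described above has a standard schematic form: among all distributions sharing the chosen moments, one selects the entropy-maximizing one, which is exponential in the Lagrange multipliers. Schematically (in the semiconductor setting the entropy functional also accounts for Fermi statistics, which this sketch omits):

```latex
f_{\mathrm{MEP}}(\mathbf{k}) \;=\; \exp\!\Big(-\sum_{A} \lambda_A\, \psi_A(\mathbf{k})\Big),
\qquad
M_A \;=\; \int f_{\mathrm{MEP}}(\mathbf{k})\, \psi_A(\mathbf{k})\, d\mathbf{k},
```

where the ψ_A are the weight functions defining the moments M_A, and the multipliers λ_A are fixed by inverting the moment constraints. Substituting f_MEP into the unclosed flux and production terms of the moment equations yields the closed macroscopic model.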
http://www.mdpi.com/1099-4300/19/1/37
An evaluation model of aluminum alloy welded joint low-cycle fatigue data based on information entropy is proposed. By calculating and analyzing the information entropy of decision attributes, the quantitative contributions of stress concentration, plate thickness, and loading mode to fatigue failure are investigated. The results reveal that the total information entropies of the fatigue data based on nominal stress, structural stress, and equivalent structural stress are, respectively, 0.9702, 0.8881, and 0.8294. The decreasing trend of the weight-based information entropy is consistent with the progressively smaller standard deviations of the S-N curves. In the structural-stress-based S-N curve, the total stress concentration factor is crucial for the distribution of the fatigue data, and the weight-based information entropy of the membrane stress concentration factor is 0.6754, which illustrates that stress concentration is a key issue for welded structures and deserves close attention. In the equivalent-structural-stress-based S-N curve, the weight-based information entropy of the stress ratio is 0.5759, so the stress ratio plays an important role in the distribution of the fatigue data. With the importance levels of the attributes on the S-N curves established, the correction for R in the equivalent-structural-stress-based master S-N curve method should be carried out to make welding fatigue prediction more accurate. Entropy 2017, 19(1), 37; Article; doi: 10.3390/e19010037; published 2017-01-18. Authors: Yaliang Liu, Li Zou, Yibo Sun, Xinhua Yang.
<![CDATA[Entropy, Vol. 19, Pages 32: Impact of Ambient Conditions of Arab Gulf Countries on the Performance of Gas Turbines Using Energy and Exergy Analysis]]>
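The information-entropy weighting of attributes used above follows the standard entropy-weight method; a minimal sketch, where the column normalization and the example data are our illustrative assumptions:

```python
import numpy as np

def entropy_weights(data):
    """Entropy-weight method: data is (samples x attributes), nonnegative values.

    Returns per-attribute entropies in [0, 1] and normalized importance weights:
    an attribute whose values barely vary has entropy near 1 and weight near 0.
    """
    p = data / data.sum(axis=0)            # normalize each attribute column
    n = data.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)     # entropy per attribute, scaled to [0, 1]
    d = 1.0 - e                            # degree of divergence
    return e, d / d.sum()                  # entropies and normalized weights
```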
http://www.mdpi.com/1099-4300/19/1/32
In this paper, energy and exergy analyses of typical gas turbines are performed using average hourly temperature and relative humidity for selected Gulf cities located in Saudi Arabia, Kuwait, the United Arab Emirates, Oman, Bahrain, and Qatar. A typical gas turbine unit of 42 MW is considered in this study. Differences in electricity production, thermal efficiency, and fuel consumption between ISO conditions and actual conditions are determined for each city. The exergy efficiency and exergy destruction rates for the gas turbine unit and its components are also evaluated, taking ISO conditions as the reference. The results indicate that electricity production losses occur in all cities throughout the year, except in Dammam and Kuwait between November and March. During a typical day, the variation of the power production can reach 4 MW. The rate of exergy destruction under the combined effect of temperature and humidity is significant in hot months, reaching a maximum of 12 MW in July. The results also show that adding inlet cooling systems to the existing gas turbine units could be justified in hot periods. Other aspects, such as economic and environmental ones, should also be investigated. Entropy 2017, 19(1), 32; Article; doi: 10.3390/e19010032; published 2017-01-17. Authors: Saleh Baakeem, Jamel Orfi, Shaker Alaqel, Hany Al-Ansary.
<![CDATA[Entropy, Vol. 19, Pages 35: Spacetime Topology and the Laws of Black Hole-Soliton Mechanics]]>
http://www.mdpi.com/1099-4300/19/1/35
The domain of outer communication of an asymptotically flat spacetime must be simply connected. In five dimensions, this still allows for the possibility of an arbitrary number of 2-cycles supported by magnetic flux carried by Maxwell fields. As a result, stationary, asymptotically flat, horizonless solutions—“gravitational solitons”—may exist with non-vanishing mass, charge, and angular momenta. These gravitational solitons satisfy a Smarr-like relation, as well as a first law of mechanics. Furthermore, the presence of solitons leads to new terms in the well-known first law of black hole mechanics for spacetimes containing black hole horizons and non-trivial topology in the exterior region. I outline the derivation of these results and consider an explicit example in five-dimensional supergravity. Entropy 2017, 19(1), 35; Article; doi: 10.3390/e19010035; published 2017-01-17. Author: Hari Kunduri.
<![CDATA[Entropy, Vol. 19, Pages 27: A Probabilistic Damage Identification Method for Shear Structure Components Based on Cross-Entropy Optimizations]]>
http://www.mdpi.com/1099-4300/19/1/27
A probabilistic damage identification method for shear structure components is presented. The method uses the extracted modal frequencies from the measured dynamical responses in conjunction with a representative finite element model. The damage of each component is modeled using a stiffness multiplier in the finite element model. By coupling the extracted features and the probabilistic structural model, the damage identification problem is recast to an equivalent optimization problem, which is iteratively solved using the cross-entropy optimization technique. An application example is used to demonstrate the proposed method and validate its effectiveness. Influencing factors such as the location of damaged components, measurement location, measurement noise level, and damage severity are studied. The detection reliability under different measurement noise levels is also discussed in detail. Entropy 2017, 19(1), 27; Article; doi: 10.3390/e19010027; published 2017-01-17. Authors: Xuefei Guan, Yongxiang Wang, Jingjing He.
<![CDATA[Entropy, Vol. 19, Pages 25: Similarity Theory Based Radial Turbine Performance and Loss Mechanism Comparison between R245fa and Air for Heavy-Duty Diesel Engine Organic Rankine Cycles]]>
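The cross-entropy optimization loop at the core of the method above is generic: sample candidates from a parametric density, keep an elite fraction, and refit the density to the elite. A minimal single-parameter sketch; the quadratic objective and all hyper-parameters are illustrative, not the paper's model-updating problem:

```python
import numpy as np

def cross_entropy_minimize(f, mu=0.0, sigma=5.0, n=100, elite_frac=0.1, iters=50):
    """Minimize f(x) over scalars via the cross-entropy method."""
    rng = np.random.default_rng(0)
    n_elite = max(2, int(n * elite_frac))
    for _ in range(iters):
        xs = rng.normal(mu, sigma, size=n)              # sample candidates
        elite = xs[np.argsort([f(x) for x in xs])][:n_elite]
        mu, sigma = elite.mean(), elite.std() + 1e-12   # refit sampling density
    return mu

# e.g. find a "stiffness multiplier" that best reproduces a measured frequency:
# in the paper, f would penalize mismatch between model and extracted modal data.
```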
http://www.mdpi.com/1099-4300/19/1/25
Organic Rankine Cycles using radial turbines as expanders are considered one of the most efficient technologies for converting heavy-duty diesel engine waste heat into useful work. Turbine similarity design based on existing air turbine profiles is time-saving, but because the thermodynamic properties of organic fluids differ greatly from those of air, their influence on turbine performance and loss mechanisms needs to be analyzed. This paper numerically simulates a radial turbine under similar conditions for R245fa and air and compares the differences in turbine performance and loss mechanisms. The larger specific heat ratio of air leads the air turbine to operate at higher pressure ratios. As the gas constant of R245fa is only about one-fifth that of air, the reduced rotating speeds of the R245fa turbine are only 0.4 times those of the air turbine, and its reduced mass flow rates are about twice those of the air turbine. When R245fa is used as the working fluid, nozzle shock wave losses decrease but rotor suction surface separation vortex losses increase; as a result, the isentropic efficiencies of the R245fa turbine over the commonly used velocity-ratio range of 0.5 to 0.9 are 3–4% lower than those of the air turbine. Entropy 2017, 19(1), 25; Article; doi: 10.3390/e19010025; published 2017-01-14. Authors: Lei Zhang, Weilin Zhuge, Yangjun Zhang, Tao Chen.
<![CDATA[Entropy, Vol. 19, Pages 34: Implementing Demons and Ratchets]]>
http://www.mdpi.com/1099-4300/19/1/34
Experimental results show that ratchets may be implemented in semiconductor and chemical systems, bypassing the second law and opening up huge gains in energy production. This paper summarizes or describes experiments and results on systems that effect demons and ratchets operating in chemical or electrical domains. One creates temperature differences that can be harvested by a heat engine. A second produces light with only heat input. A third produces harvestable electrical potential directly. These systems share creating particles in one location, destroying them in another and moving them between locations by diffusion (Brownian motion). All absorb ambient heat as they produce other energy forms. None requires an external hot and cold side. The economic and social impacts of these conversions of ambient heat to work are, of course, well-understood and huge. The experimental results beg for serious work on the chance that they are valid. Entropy 2017, 19(1), 34; Article; doi: 10.3390/e19010034; published 2017-01-14. Authors: Peter Orem, Frank Orem.
<![CDATA[Entropy, Vol. 19, Pages 33: Heuristic Approach to Understanding the Accumulation Process in Hydrothermal Pores]]>
http://www.mdpi.com/1099-4300/19/1/33
One of the central questions of humankind is: which chemical and physical conditions are necessary to make life possible? In this “origin-of-life” context, formamide plays an important role, because it has been demonstrated that prebiotic molecules can be synthesized from concentrated formamide solutions. Recently, it was shown, using finite-element calculations combining thermophoresis and convection processes in hydrothermal pores, that sufficiently high formamide concentrations can accumulate to form prebiotic molecules (Niether et al. (2016)). Depending on the initial formamide concentration, the aspect ratio of the pores, and the ambient temperature, formamide concentrations up to 85 wt % could be reached. The stationary calculations show an effective accumulation only if the aspect ratio is above a certain threshold, and the corresponding transient studies display a sudden increase of the accumulation after a certain time. Neither observation was explained. In this work, we derive a simple heuristic model that explains both phenomena. The physical idea of the approach is a comparison of the time to reach the top of the pore with the time to cross from the convective upstream towards the convective downstream. If the time to reach the top of the pore is shorter than the crossing time, the formamide molecules are flushed out of the pore. If the time is long enough, the formamide molecules can reach the downstream and accumulate at the bottom of the pore. Analysing the optimal aspect ratio as a function of concentration, we find that, at a weight fraction of w = 0.5, a minimal pore height is required for effective accumulation. At the same concentration, the transient calculations show a maximum of the accumulation rate. Entropy 2017, 19(1), 33; Article; doi: 10.3390/e19010033; published 2017-01-13. Authors: Doreen Niether, Simone Wiegand.
<![CDATA[Entropy, Vol. 19, Pages 31: Univariate and Multivariate Generalized Multiscale Entropy to Characterise EEG Signals in Alzheimer’s Disease]]>
http://www.mdpi.com/1099-4300/19/1/31
Alzheimer’s disease (AD) is a degenerative brain disorder leading to memory loss and changes in other cognitive abilities. The complexity of electroencephalogram (EEG) signals may help to characterise AD. To this end, we propose an extension of multiscale entropy based on variance (MSEσ²) to multichannel signals, termed multivariate MSEσ² (mvMSEσ²), to take into account both the spatial and time domains of time series. We then investigate the mvMSEσ² of EEGs in different frequency bands, including the broadband signals filtered between 1 and 40 Hz and the θ, α, and β bands, and compare it with the previously proposed multiscale entropy based on the mean (MSEμ), multivariate MSEμ (mvMSEμ), and MSEσ², to distinguish different kinds of dynamical properties of the spread and the mean of the signals. Results from 11 AD patients and 11 age-matched controls suggest that the presence of broadband activity of EEGs is required for a proper evaluation of complexity. MSEσ² and mvMSEσ² results, showing a loss of complexity in AD signals, led to smaller p-values than MSEμ and mvMSEμ, suggesting that the variance-based MSE and mvMSE can characterise changes in EEGs as a result of AD in more detail. The p-values for the slope values of the mvMSE curves were smaller than those for MSE at large scale factors, also showing the potential usefulness of multivariate techniques. Entropy 2017, 19(1), 31; Article; doi: 10.3390/e19010031; published 2017-01-12. Authors: Hamed Azami, Daniel Abásolo, Samantha Simons, Javier Escudero.
<![CDATA[Entropy, Vol. 19, Pages 29: Local Entropy Generation in Compressible Flow through a High Pressure Turbine with Delayed Detached Eddy Simulation]]>
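The variance-based coarse-graining that distinguishes MSEσ² from the usual mean-based MSE is simple to state: at scale factor τ the series is split into non-overlapping windows, and each window is replaced by its variance rather than its mean. An entropy measure (e.g. sample entropy) is then applied to the coarse-grained series. A sketch of the two coarse-graining steps only, with the entropy stage omitted:

```python
import numpy as np

def coarse_grain_mean(x, tau):
    """Mean-based coarse-graining used in conventional MSE (MSE-mu)."""
    x = np.asarray(x, float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def coarse_grain_var(x, tau):
    """Variance-based coarse-graining used in MSE-sigma^2."""
    x = np.asarray(x, float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).var(axis=1)
```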
http://www.mdpi.com/1099-4300/19/1/29
Gas turbines are important energy-converting equipment in many industries. The flow inside gas turbines is very complicated, and knowledge of the flow loss mechanisms is critical to advanced design. The current design system relies heavily on empirical formulas or Reynolds-Averaged Navier–Stokes (RANS) methods, which face major challenges in dealing with highly unsteady complex flow and accurately predicting flow losses. Further improving efficiency requires deeper insight into loss generation in gas turbines. Conventional Unsteady RANS (URANS) methods have shortcomings in modeling multi-frequency, multi-length-scale, highly unsteady flow, especially when mixing or separation occurs, while Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) are too costly for high-Reynolds-number flow. In this work, the Delayed Detached Eddy Simulation (DDES) method is used with a low-dissipation numerical scheme to capture the detailed flow structures of the complicated flow in a high-pressure turbine guide vane. DDES accurately predicts the wake vortex behavior and produces far more detail than RANS and URANS. The experimental findings on the wake vortex length characteristics, which RANS and URANS fail to predict, are successfully captured by DDES. Accurate flow simulation builds a solid foundation for accurate loss prediction. Based on the detailed DDES results, loss analysis in terms of the entropy generation rate is conducted from two aspects. The first is to apportion losses by their physical sources: viscous irreversibility and heat transfer irreversibility; viscous irreversibility is found to be much stronger than heat transfer irreversibility in this flow. The second is to weigh the contributions of steady and unsteady effects; losses due to unsteady effects account for a large part of the total losses. The effects of unsteadiness should not be neglected in flow physics studies and the design process. Entropy 2017, 19(1), 29; Article; doi: 10.3390/e19010029; published 2017-01-11. Authors: Dun Lin, Xin Yuan, Xinrong Su.
<![CDATA[Entropy, Vol. 19, Pages 30: Comparing Relational and Ontological Triple Stores in Healthcare Domain]]>
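The two entropy-generation channels named above have standard local forms (for a Newtonian fluid with Fourier heat conduction; this is the textbook decomposition, not an equation quoted from the paper):

```latex
\dot{S}'''_{\mathrm{gen}}
  \;=\;
  \underbrace{\frac{\Phi_{\mu}}{T}}_{\text{viscous irreversibility}}
  \;+\;
  \underbrace{\frac{k\,(\nabla T)^{2}}{T^{2}}}_{\text{heat-transfer irreversibility}},
\qquad
\Phi_{\mu} \;=\; \tau_{ij}\,\frac{\partial u_i}{\partial x_j},
```

where Φ_μ is the viscous dissipation function built from the deviatoric stress tensor τ_ij, k the thermal conductivity, and T the local temperature. Integrating each term over the flow field apportions the total loss between the two mechanisms.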
http://www.mdpi.com/1099-4300/19/1/30
Today’s technological advances have made healthcare systems ubiquitous, converging into smart healthcare applications that solve patients’ problems, communicate effectively with patients, and improve healthcare service quality. The first step of building a smart healthcare information system is representing the healthcare data as connected, reachable, and sharable. To achieve this representation, ontologies are used to describe the healthcare data. Ontological healthcare data can be combined with the data used and obtained in operation by storing the entire health-domain data in big data stores that support both relational and graph-based ontological data. There are several big data stores and different types of big data sets in the healthcare domain. The goal of this paper is to determine the most applicable ontology data store for storing big healthcare data. For this purpose, the AllegroGraph and Oracle 12c data stores are compared based on their infrastructural capacity, loading time, and query response times. Healthcare ontologies (GENE Ontology, Gene Expression Ontology (GEXO), Regulation of Transcription Ontology (RETO), Regulation of Gene Expression Ontology (REXO)) are used to measure the ontology loading time. Thereafter, various queries are constructed and executed for the GENE ontology in order to measure the capacity and query response times for the performance comparison between the AllegroGraph and Oracle 12c triple stores. Entropy 2017, 19(1), 30; Technical Note; doi: 10.3390/e19010030; published 2017-01-11. Authors: Ozgu Can, Emine Sezer, Okan Bursa, Murat Unalir.
<![CDATA[Entropy, Vol. 19, Pages 26: Face Detection Based on Skin Color Segmentation Using Fuzzy Entropy]]>
http://www.mdpi.com/1099-4300/19/1/26
Face detection is the first step of any automated face recognition system. One of the most popular approaches to detect faces in color images is using a skin color segmentation scheme, which in many cases needs a proper representation of color spaces to interpret image information. In this paper, we propose a fuzzy system for detecting skin in color images, so that each color tone is assumed to be a fuzzy set. The Red, Green, and Blue (RGB), the Hue, Saturation and Value (HSV), and the YCbCr (where Y is the luminance and Cb, Cr are the chroma components) color systems are used for the development of our fuzzy design. Thus, a fuzzy three-partition entropy approach is used to calculate all of the parameters needed for the fuzzy systems, and then, a face detection method is also developed to validate the segmentation results. The results of the experiments show a correct skin detection rate between 94% and 96% for our fuzzy segmentation methods, with a false positive rate of about 0.5% in all cases. Furthermore, the average correct face detection rate is above 93%, and even when working with heterogeneous backgrounds and different light conditions, it achieves almost 88% correct detections. Thus, our method leads to accurate face detection results with low false positive and false negative rates. Entropy 2017, 19(1), 26; Article; doi: 10.3390/e19010026; published 2017-01-11. Authors: Francisco Pujol, Mar Pujol, Antonio Jimeno-Morenilla, María Pujol.
<![CDATA[Entropy, Vol. 19, Pages 28: Acknowledgement to Reviewers of Entropy in 2016]]>
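The fuzzy-set view of skin tones above can be illustrated with a single trapezoidal membership function on the Cb–Cr plane. The break-points below are rough values often quoted for skin chroma and are purely illustrative assumptions; they are not the parameters learned by the paper's fuzzy three-partition entropy step.

```python
def trapezoid(v, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps up to 1 on [b, c], 0 above d."""
    if v <= a or v >= d:
        return 0.0
    if b <= v <= c:
        return 1.0
    return (v - a) / (b - a) if v < b else (d - v) / (d - c)

def skin_membership(cb, cr):
    """Fuzzy degree to which a (Cb, Cr) chroma pair looks like skin (illustrative)."""
    return min(trapezoid(cb, 77, 90, 120, 127),
               trapezoid(cr, 133, 140, 165, 173))
```

Thresholding such a membership map, rather than applying a hard color-box rule, is what lets each color tone belong to the "skin" set by degree.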
http://www.mdpi.com/1099-4300/19/1/28
The editors of Entropy would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016. [...] Entropy 2017, 19(1), 28; Editorial; doi: 10.3390/e19010028; published 2017-01-11. Entropy Editorial Office.
<![CDATA[Entropy, Vol. 19, Pages 22: Research Entropy Complexity about the Nonlinear Dynamic Delay Game Model]]>
http://www.mdpi.com/1099-4300/19/1/22
Based on the research of domestic and foreign scholars, this paper improves and establishes a double-oligopoly market model of renewable energy and analyzes the complex dynamic characteristics of the system based on entropy theory and chaos theory, such as the equilibrium point, stability, and Hopf bifurcation conditions. The paper also studies and simulates the effects of the natural growth rate of energy and of a single delayed decision on the renewable energy system, minimizing the entropy of the system and reducing system instability to a minimum, so that the degree of disorder within the system is reduced. The results show that, as the natural growth rate of energy increases, the stability of the system is not affected, but the market demand of oligopoly 1 gradually decreases while that of oligopoly 2 gradually increases. At the same time, a single oligopoly making a time-delayed decision affects the stability of both oligopolies. As the delay increases, the time required to reach the stable state grows, and the system eventually enters a Hopf bifurcation; its entropy thus increases and it falls into an unstable state. Therefore, in the actual renewable energy market, oligopolies should pay attention to the natural growth rate of energy and to time delays, ensuring the stability of the game process and the orderliness of the system. Entropy 2017, 19(1), 22; Article; doi: 10.3390/e19010022; published 2017-01-09. Authors: Xueli Zhan, Junhai Ma, Wenbo Ren.
<![CDATA[Entropy, Vol. 19, Pages 23: Use of Information Measures and Their Approximations to Detect Predictive Gene-Gene Interaction]]>
http://www.mdpi.com/1099-4300/19/1/23
We reconsider the properties and relationships of the interaction information and its modified versions in the context of detecting the interaction of two SNPs for the prediction of a binary outcome when the interaction information is positive. This property is called predictive interaction, and we state some new sufficient conditions for it to hold true. We also study chi-square approximations to these measures. It is argued that interaction information is a different, and sometimes more natural, measure of interaction than the logistic interaction parameter, especially when the SNPs are dependent. We introduce a novel measure of predictive interaction based on interaction information and its modified version. In numerical experiments, which use copulas to model dependence, we study examples in which the logistic interaction parameter is zero or close to zero and predictive interaction is detected by the new measure while remaining undetected by the likelihood ratio test. Entropy 2017, 19(1), 23; Article; doi: 10.3390/e19010023; published 2017-01-07. Authors: Jan Mielniczuk, Marcin Rdzanowski.
<![CDATA[Entropy, Vol. 19, Pages 24: Constructing a Measurement Method of Differences in Group Preferences Based on Relative Entropy]]>
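The sign convention matters here: positive interaction information in the sense used above corresponds to II(X;Y;Z) = I(X,Y;Z) − I(X;Z) − I(Y;Z) being positive (synergy). A minimal sketch of our own, computing this quantity from a joint distribution, with Z = X XOR Y as the textbook synergy example:

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) pmf array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def interaction_information(pxyz):
    """II = I(X,Y;Z) - I(X;Z) - I(Y;Z), computed via joint entropies.

    Expanding each mutual information in entropies gives
    II = H(XY) + H(XZ) + H(YZ) - H(X) - H(Y) - H(Z) - H(XYZ).
    Positive values indicate synergy (predictive interaction).
    """
    px, py, pz = pxyz.sum((1, 2)), pxyz.sum((0, 2)), pxyz.sum((0, 1))
    pxy, pxz, pyz = pxyz.sum(2), pxyz.sum(1), pxyz.sum(0)
    return H(pxy) + H(pxz) + H(pyz) - H(px) - H(py) - H(pz) - H(pxyz)
```

For Z = X XOR Y with independent fair bits, neither SNP alone carries information about the outcome (I(X;Z) = I(Y;Z) = 0) yet the pair determines it, giving II = 1 bit.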
http://www.mdpi.com/1099-4300/19/1/24
In research and data analysis concerning differences in group preferences, conventional statistical methods cannot reflect the integrity and preferences of human minds; in particular, it is difficult to exclude humans’ irrational factors. This paper introduces a preference amount model based on relative entropy theory. A related expansion is made based on the characteristics of the questionnaire data, and we construct a parameter to measure overall differences in the data distributions of different groups. In this paper, this parameter is called the center distance, and it effectively reflects the preferences of human minds. Using survey data from securities market participants as an example, this paper analyzes differences in market participants’ attitudes toward the effectiveness of securities regulation. Based on this method, differences between groups that were overlooked by analysis of variance are found, as are certain aspects obscured by general data characteristics. Entropy 2017, 19(1), 24; Article; doi: 10.3390/e19010024; published 2017-01-06. Authors: Shiyu Zhang, Wenzhi Liu, Qin He, Xuguang Hao.
<![CDATA[Entropy, Vol. 19, Pages 6: Misalignment Fault Diagnosis of DFWT Based on IEMD Energy Entropy and PSO-SVM]]>
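The relative-entropy core of the method above is the Kullback–Leibler divergence between two groups' answer distributions; a sketch (the authors' "center distance" statistic is built on top of such divergences and is not reproduced here):

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) in bits for discrete distributions over the same answer set.

    Assumes q > 0 wherever p > 0 (e.g. after add-one smoothing of counts).
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / q[mask])).sum())
```

Because D(p || q) is asymmetric and zero only when the distributions coincide, it can flag group differences in the shape of the answer distribution that a comparison of means (as in analysis of variance) misses.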
http://www.mdpi.com/1099-4300/19/1/6
Misalignment is an important cause of the early failure of large doubly-fed wind turbines (DFWT). Because the signals in the transmission system of a DFWT are non-stationary and it is difficult to obtain a large number of fault samples, Solidworks and Adams are used to simulate the different operating conditions of the transmission system and obtain the corresponding characteristic signals. Improved empirical mode decomposition (IEMD), which mitigates the end effects of empirical mode decomposition (EMD), is used to decompose the signals into intrinsic mode functions (IMFs), and the IEMD energy entropies reflecting the working state are extracted as the inputs of a support vector machine (SVM). Particle swarm optimization (PSO) is used to optimize the parameters of the SVM to improve classification performance. The results show that the proposed method can effectively and accurately identify the types of misalignment of the DFWT. Entropy 2017, 19(1), 6; Article; doi: 10.3390/e19010006; published 2017-01-01. Authors: Yancai Xiao, Na Kang, Yi Hong, Guangjian Zhang.
<![CDATA[Entropy, Vol. 19, Pages 21: Perturbative Treatment of the Non-Linear q-Schrödinger and q-Klein–Gordon Equations]]>
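Given the IMFs, the energy entropy used above is the Shannon entropy of the normalized per-IMF energies; a sketch assuming the decomposition has already been performed (EMD/IEMD itself is not shown):

```python
import numpy as np

def emd_energy_entropy(imfs):
    """Energy entropy in bits of a list of IMFs (each a 1-D array)."""
    energies = np.array([float((imf ** 2).sum()) for imf in imfs])
    p = energies / energies.sum()          # each IMF's share of the total energy
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A fault that concentrates vibration energy in few IMFs lowers this entropy, which is why the value serves as a discriminative feature for the SVM classifier.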
http://www.mdpi.com/1099-4300/19/1/21
Interesting non-linear generalizations of both the Schrödinger and Klein–Gordon equations have recently been advanced by Nobre, Rego-Monteiro and Tsallis (NRT) in Nobre et al. (Phys. Rev. Lett. 2011, 106, 140601), and there is much current activity in this area. The non-linearity is governed by a real parameter q. Empirical hints suggest that the ensuing non-linear q-Schrödinger and q-Klein–Gordon equations are natural manifestations of very high energy phenomena, as verified by LHC experiments. This happens for q-values close to unity (Plastino et al. (Nucl. Phys. A 2016, 955, 16–26; Nucl. Phys. A 2016, 948, 19–27)). It might thus be difficult, for q-values close to unity, to ascertain whether one is dealing with solutions to the ordinary Schrödinger equation (whose free-particle solutions are exponentials and for which q = 1) or with its NRT non-linear q-generalizations, whose free-particle solutions are q-exponentials. In this work, we provide a careful analysis of the q ∼ 1 instance via a perturbative analysis of the NRT equations. Entropy 2017, 19(1), 21; Article; doi: 10.3390/e19010021; published 2016-12-31. Authors: Javier Zamora, Mario Rocca, Angelo Plastino, Gustavo Ferri.
<![CDATA[Entropy, Vol. 19, Pages 20: Nonlinear Relaxation Phenomena in Metastable Condensed Matter Systems]]>
http://www.mdpi.com/1099-4300/19/1/20
Nonlinear relaxation phenomena in three different systems of condensed matter are investigated. (i) First, the phase dynamics in Josephson junctions is analyzed. Specifically, a superconductor-graphene-superconductor (SGS) system exhibits quantum metastable states, and the average escape time from these metastable states in the presence of Gaussian and correlated fluctuations is calculated, accounting for variations in the noise source intensity and the bias frequency. Moreover, the transient dynamics of a long-overlap Josephson junction (JJ) subject to thermal fluctuations and non-Gaussian noise sources is investigated. Noise-induced phenomena are observed, such as noise-enhanced stability and stochastic resonant activation. (ii) Second, the electron spin relaxation process in an n-type GaAs bulk driven by a fluctuating electric field is investigated. In particular, using a Monte Carlo approach, we study the influence of a random telegraph noise on spin-polarized transport. Our findings show the possibility of raising the spin relaxation length by increasing the amplitude of the external fluctuations. Moreover, we find that, depending on the value of the external field strength, the electron spin depolarization length versus the noise correlation time increases up to a plateau. (iii) Finally, the stabilization of quantum metastable states by dissipation is presented. Normally, quantum fluctuations enhance the escape from metastable states in the presence of dissipation. We show that dissipation can enhance the stability of a quantum metastable system, consisting of a particle moving in a strongly asymmetric double-well potential and interacting with a thermal bath. We find that the escape time from the metastable region has a nonmonotonic behavior versus the system–bath coupling and the temperature, producing a stabilizing effect. Entropy 2017, 19(1), 20; Article; doi: 10.3390/e19010020; published 2016-12-31. Authors: Bernardo Spagnolo, Claudio Guarcello, Luca Magazzù, Angelo Carollo, Dominique Persano Adorno, Davide Valenti.
<![CDATA[Entropy, Vol. 19, Pages 19: Thermal Conductivity of Suspension of Aggregating Nanometric Rods]]>
http://www.mdpi.com/1099-4300/19/1/19
Enhancing the thermal conductivity of simple fluids is of major interest in numerous applicative systems. One possibility for enhancing thermal properties consists of dispersing small conductive particles inside the fluid. However, aggregation effects generally occur, and one must then address both systems composed of clusters of particles dispersed in a simple matrix and those related to percolated networks. This paper analyzes the conductivity enhancement of different microstructures, scaling from clusters dispersed in a simple matrix to percolated networks exhibiting a fractal morphology.Entropy2016-12-31191Article10.3390/e19010019191099-43002016-12-31doi: 10.3390/e19010019Amine AmmarFrancisco ChinestaRodolphe Heyd<![CDATA[Entropy, Vol. 19, Pages 18: Information and Self-Organization]]>
http://www.mdpi.com/1099-4300/19/1/18
The process of “self-organization” takes place in open and complex systems that acquire spatio-temporal or functional structures without specific ordering instructions from the outside. [...]Entropy2016-12-31191Editorial10.3390/e19010018181099-43002016-12-31doi: 10.3390/e19010018Hermann HakenJuval Portugali<![CDATA[Entropy, Vol. 19, Pages 14: A New Feature Extraction Method Based on EEMD and Multi-Scale Fuzzy Entropy for Motor Bearing]]>
http://www.mdpi.com/1099-4300/19/1/14
Feature extraction is one of the most important and difficult problems in mechanical fault diagnosis; it directly affects the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called the EDOMFE method, which integrates ensemble empirical mode decomposition (EEMD), mode selection, and multi-scale fuzzy entropy, is proposed in this paper to accurately diagnose faults. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs) with different physical significance. Correlation coefficient analysis is used to select the three IMFs that are closest to the original signal. The multi-scale fuzzy entropy, which can effectively distinguish the complexity of different signals, is used to calculate the entropy values of the three selected IMFs in order to form a feature vector with the complexity measure, which is regarded as the input of a support vector machine (SVM) model for training and constructing an SVM classifier (EOMSMFD, based on EDOMFE and SVM) for fault pattern recognition. Finally, the effectiveness of the proposed method is validated with real bearing vibration signals from a motor with different loads and fault severities. The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and that the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner race fault, the outer race fault, and the rolling element fault of the motor bearing. Therefore, the proposed method provides a new fault diagnosis technology for rotating machinery.Entropy2016-12-31191Article10.3390/e19010014141099-43002016-12-31doi: 10.3390/e19010014Huimin ZhaoMeng SunWu DengXinhua Yang<![CDATA[Entropy, Vol. 19, Pages 17: The Information Recovery Problem]]>
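The fuzzy entropy at the core of the EDOMFE feature vector can be sketched in a few lines. The following is a minimal illustration of the standard fuzzy entropy computation (embedding dimension m, tolerance r, fuzzy power n), not the authors' implementation; the parameter defaults are common choices from the literature, not values from the paper.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=None, n=2):
    """Fuzzy entropy of a 1-D signal (sketch).

    m: embedding dimension, r: tolerance (default 0.2 * std of x),
    n: power of the fuzzy membership function."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    N = len(x)

    def phi(mm):
        # embedding vectors with their own means removed (fuzzy-entropy convention)
        emb = np.array([x[i:i + mm] for i in range(N - mm)])
        emb = emb - emb.mean(axis=1, keepdims=True)
        # Chebyshev distance between every pair of vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r)     # fuzzy similarity degree
        np.fill_diagonal(sim, 0.0)      # exclude self-matches
        return sim.sum() / (len(emb) * (len(emb) - 1))

    return -np.log(phi(m + 1) / phi(m))
```

A regular signal (e.g., a sinusoid) yields a lower fuzzy entropy than white noise, which is what makes the measure useful for separating IMFs of different complexity.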
http://www.mdpi.com/1099-4300/19/1/17
The issue of unitary evolution during creation and evaporation of a black hole remains controversial. We argue that some prominent cures are more troubling than the disease, demonstrate that their central element—forming of the event horizon before the evaporation begins—is not necessarily true, and describe a fully coupled matter-gravity system which is manifestly unitary.Entropy2016-12-30191Article10.3390/e19010017171099-43002016-12-30doi: 10.3390/e19010017Valentina BaccettiViqar HusainDaniel Terno<![CDATA[Entropy, Vol. 19, Pages 16: One-Parameter Fisher–Rényi Complexity: Notion and Hydrogenic Applications]]>
http://www.mdpi.com/1099-4300/19/1/16
In this work, the one-parameter Fisher–Rényi measure of complexity for general d-dimensional probability distributions is introduced and its main analytic properties are discussed. Then, this quantity is determined for the hydrogenic systems in terms of the quantum numbers of the quantum states and the nuclear charge.Entropy2016-12-30191Article10.3390/e19010016161099-43002016-12-30doi: 10.3390/e19010016Irene ToranzoPablo Sánchez-MorenoŁukasz RudnickiJesús Dehesa<![CDATA[Entropy, Vol. 19, Pages 15: Humans Outperform Machines at the Bilingual Shannon Game]]>
http://www.mdpi.com/1099-4300/19/1/15
We provide an upper bound for the amount of information a human translator adds to an original text, i.e., how many bits of information we need to store a translation, given the original. We do this by creating a Bilingual Shannon Game that elicits character guesses from human subjects, then developing models to estimate the entropy of those guess sequences.Entropy2016-12-30191Article10.3390/e19010015151099-43002016-12-30doi: 10.3390/e19010015Marjan GhazvininejadKevin Knight<![CDATA[Entropy, Vol. 19, Pages 13: A Comparative Study of Empirical Mode Decomposition-Based Filtering for Impact Signal]]>
http://www.mdpi.com/1099-4300/19/1/13
The Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) has been used to propose a new method for filtering time series originating from nonlinear systems. The filtering method is based on fuzzy entropy and a new waveform. The new waveform is defined as follows: the Intrinsic Mode Functions (IMFs) obtained by the CEEMDAN algorithm are first sorted in ascending order (the sorted IMF sequence is symmetric about its center point, because at any point the mean value of the envelope defined by the local maxima and the local minima is zero), and the energies of the sorted IMFs are calculated, respectively. Finally, the new waveform with axial symmetry is obtained. The complexity of the new waveform can be quantified by fuzzy entropy. The relevant modes (noisy signal modes and useful signal modes) can be identified from the difference between the fuzzy entropy of one new waveform and that of the next adjacent new waveform. To evaluate the filter performance, CEEMDAN with sample entropy, Ensemble Empirical Mode Decomposition (EEMD) with fuzzy entropy, and EEMD with sample entropy were used to filter synthesized signals with various levels of input signal-to-noise ratio (SNRin). In particular, this approach is successful in filtering impact signals. The filtering results are evaluated by a de-trended fluctuation analysis (DFA) algorithm, the revised mean square error (RMSE), and the revised signal-to-noise ratio (RSNR), respectively. The filtering results for simulated and impact signals show that the filtering method based on CEEMDAN and fuzzy entropy outperforms the other signal filtering methods.Entropy2016-12-29191Article10.3390/e19010013131099-43002016-12-29doi: 10.3390/e19010013Liwei ZhanChengwei Li<![CDATA[Entropy, Vol. 19, Pages 12: An Urban Cellular Automata Model for Simulating Dynamic States on a Local Scale]]>
http://www.mdpi.com/1099-4300/19/1/12
In complex systems, flexibility and adaptability to changes are crucial to the systems’ dynamic stability and evolution. Such resilience requires that the system be able to respond to disturbances by self-organizing, which implies a certain level of entropy within the system. Dynamic states (static, cyclical/periodic, complex, and chaotic) reflect this generative capacity and correlate with the level of entropy. For planning complex cities, we need to develop methods to guide such autonomous progress in an optimal manner. A classical apparatus, the cellular automaton (CA), provides such a tool. Applications of CA help us to study temporal dynamics in self-organizing urban systems. By exploring the dynamic states resulting from different border conditions, it is possible to discover the set(s) of rules conducive to self-organizing dynamics and to enable the system’s recovery in times of crisis. The level of entropy is a relevant measure for evaluating these dynamic states. The 2-D urban cellular automaton model studied here is based on the microeconomic principle that similar urban activities are attracted to each other, especially in certain self-organizing areas, and that the local dynamics of these enclaves affect the dynamics of the urban region by channeling flows of information, goods, and people. The results of the modeling experiment indicate that the border conditions have a major impact on the model’s dynamics, generating various dynamic states of the system. Most importantly, the model could simulate a favorable, complex dynamic state with a medium entropy level, which may correspond to continuous self-organization of the system.
The model provides a tool for exploring and understanding the effects of boundary conditions in the planning process as various scenarios are tested: resulting dynamics of the system can be explored with such “planning rules” prior to decisions, helping to identify planning guidelines that will support the future evolution of these areas.Entropy2016-12-28191Article10.3390/e19010012121099-43002016-12-28doi: 10.3390/e19010012Jenni Partanen<![CDATA[Entropy, Vol. 19, Pages 11: A Cloud Theory-Based Trust Computing Model in Social Networks]]>
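The entropy level that the urban CA study uses to classify dynamic states can be illustrated with a minimal generic sketch: the Shannon entropy of the distribution of cell states in a grid. This is a stand-in for illustration only, not the paper's specific measure.

```python
import numpy as np

def state_entropy(grid, n_states=2):
    """Shannon entropy (bits) of the cell-state distribution of a CA grid.

    A homogeneous grid gives 0 bits; a uniform mix of n_states gives
    log2(n_states) bits (medium values would indicate the 'complex' regime)."""
    counts = np.bincount(np.asarray(grid).ravel(), minlength=n_states)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty states (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

For a two-state CA, a frozen (static) configuration scores 0 bits and a maximally mixed one scores 1 bit, so the measure orders configurations along the static-to-chaotic axis described in the abstract.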
http://www.mdpi.com/1099-4300/19/1/11
How to develop a trust management model and then efficiently control and manage nodes is an important issue in social network security. In this paper, a trust management model based on the cloud model is proposed. The cloud model uses a specific computation operator to transform qualitative concepts into quantitative computations; it can also effectively express the fuzziness and randomness of subjective trust, as well as the relationship between them. Node trust is divided into reputation trust and transaction trust, and evaluation methods are designed for each. First, a two-dimensional trust cloud evaluation model is designed, based on a node’s comprehensive and trading experience, to determine the reputation trust. The expected value reflects the average trust status of nodes, while entropy and hyper-entropy describe the uncertainty of trust. Second, the proposed calculation methods for direct transaction trust and recommendation transaction trust comprehensively compute the transaction trust of each node. Choosing strategies are then designed for nodes to trade based on the trust cloud. Finally, the results of a simulation experiment on P2P network file sharing directly reflect the objectivity, accuracy, and robustness of the proposed model, which can also effectively identify malicious or unreliable service nodes in the system. In addition, the model can be used to promote the service reliability of nodes with high credibility, thereby improving the stability of the whole network.Entropy2016-12-28191Article10.3390/e19010011111099-43002016-12-28doi: 10.3390/e19010011Fengming LiuXiaoqian ZhuYuxi HuLehua RenHenric Johnson<![CDATA[Entropy, Vol. 19, Pages 10: Entropy Generation in Magnetohydrodynamic Mixed Convection Flow over an Inclined Stretching Sheet]]>
http://www.mdpi.com/1099-4300/19/1/10
This research focuses on entropy generation rate per unit volume in magneto-hydrodynamic (MHD) mixed convection boundary layer flow of a viscous fluid over an inclined stretching sheet. Analysis has been performed in the presence of viscous dissipation and non-isothermal boundary conditions. The governing boundary layer equations are transformed into ordinary differential equations by an appropriate similarity transformation. The transformed coupled nonlinear ordinary differential equations are then solved numerically by a shooting technique along with the Runge-Kutta method. Expressions for entropy generation (Ns) and Bejan number (Be) in the form of dimensionless variables are also obtained. Impact of various physical parameters on the quantities of interest is seen.Entropy2016-12-28191Article10.3390/e19010010101099-43002016-12-28doi: 10.3390/e19010010Muhammad AfridiMuhammad QasimIlyas KhanSharidan ShafieAli Alshomrani<![CDATA[Entropy, Vol. 19, Pages 9: A Dissipation of Relative Entropy by Diffusion Flows]]>
http://www.mdpi.com/1099-4300/19/1/9
Given a probability measure, we consider the diffusion flows of probability measures associated with the partial differential equation (PDE) of Fokker–Planck. Our flows of the probability measures are defined as the solution of the Fokker–Planck equation for the same strictly convex potential, which means that the flows have the same equilibrium. Then, we shall investigate the time derivative for the relative entropy in the case where the object and the reference measures are moving according to the above diffusion flows, from which we can obtain a certain dissipation formula and also an integral representation of the relative entropy.Entropy2016-12-27191Article10.3390/e1901000991099-43002016-12-27doi: 10.3390/e19010009Hiroaki Yoshida<![CDATA[Entropy, Vol. 19, Pages 8: Active and Purely Dissipative Nambu Systems in General Thermostatistical Settings Described by Nonlinear Partial Differential Equations Involving Generalized Entropy Measures]]>
http://www.mdpi.com/1099-4300/19/1/8
Several attempts have been made to apply the concepts and tools of physics to the life sciences. In this context, a thermostatistic framework for active Nambu systems is proposed. The so-called free energy Fokker–Planck equation approach is used to describe stochastic aspects of active Nambu systems. Different thermostatistic settings are considered that are characterized by appropriately-defined entropy measures, such as the Boltzmann–Gibbs–Shannon entropy and the Tsallis entropy. In general, the free energy Fokker–Planck equations associated with these generalized entropy measures correspond to nonlinear partial differential equations. Irrespective of the entropy-related nonlinearities occurring in these nonlinear partial differential equations, it is shown that semi-analytical solutions for the stationary probability densities of the active Nambu systems can be obtained provided that the pumping mechanisms of the active systems assume the so-called canonical-dissipative form and depend explicitly only on Nambu invariants. Applications are presented both for purely-dissipative and for active systems illustrating that the proposed framework includes as a special case stochastic equilibrium systems.Entropy2016-12-27191Article10.3390/e1901000881099-43002016-12-27doi: 10.3390/e19010008T. Frank<![CDATA[Entropy, Vol. 19, Pages 7: A Sequence of Escort Distributions and Generalizations of Expectations on q-Exponential Family]]>
http://www.mdpi.com/1099-4300/19/1/7
In the theory of complex systems, long tailed probability distributions are often discussed. For such a probability distribution, a deformed expectation with respect to an escort distribution is more useful than the standard expectation. In this paper, by generalizing such escort distributions, a sequence of escort distributions is introduced. As a consequence, it is shown that deformed expectations with respect to sequential escort distributions effectively work for anomalous statistics. In particular, it is shown that a Fisher metric on a q-exponential family can be obtained from the escort expectation with respect to the second escort distribution, and a cubic form (or an Amari–Chentsov tensor field, equivalently) is obtained from the escort expectation with respect to the third escort distribution.Entropy2016-12-25191Article10.3390/e1901000771099-43002016-12-25doi: 10.3390/e19010007Hiroshi Matsuzoe<![CDATA[Entropy, Vol. 19, Pages 5: Information Decomposition in Multivariate Systems: Definitions, Implementation and Application to Cardiovascular Networks]]>
http://www.mdpi.com/1099-4300/19/1/5
The continuously growing framework of information dynamics encompasses a set of tools, rooted in information theory and statistical physics, which make it possible to quantify different aspects of the statistical structure of multivariate processes reflecting the temporal dynamics of complex networks. Building on the most recent developments in this field, this work designs a complete approach to dissect the information carried by the target of a network of multiple interacting systems into the new information produced by the system, the information stored in the system, and the information transferred to it from the other systems; information storage and transfer are then further decomposed into amounts eliciting the specific contribution of assigned source systems to the target dynamics, and amounts reflecting information modification through the balance between redundant and synergetic interaction between systems. These decompositions are formulated quantifying information either as the variance or as the entropy of the investigated processes, and their exact computation for the case of linear Gaussian processes is presented. The theoretical properties of the resulting measures are first investigated in simulations of vector autoregressive processes. Then, the measures are applied to assess information dynamics in cardiovascular networks from the variability series of heart period, systolic arterial pressure and respiratory activity measured in healthy subjects during supine rest, orthostatic stress, and mental stress.
Our results document the importance of combining the assessment of information storage, transfer and modification to investigate common and complementary aspects of network dynamics; suggest the higher specificity to alterations in the network properties of the measures derived from the decompositions; and indicate that measures of information transfer and information modification are better assessed, respectively, through entropy-based and variance-based implementations of the framework.Entropy2016-12-24191Article10.3390/e1901000551099-43002016-12-24doi: 10.3390/e19010005Luca FaesAlberto PortaGiandomenico NolloMichal Javorka<![CDATA[Entropy, Vol. 19, Pages 3: Echo State Condition at the Critical Point]]>
http://www.mdpi.com/1099-4300/19/1/3
Recurrent networks with transfer functions that satisfy Lipschitz continuity with constant K = 1 may be echo state networks if certain limitations on the recurrent connectivity are applied. It has been shown that it is sufficient if the largest singular value of the recurrent connectivity is smaller than 1. The main achievement of this paper is a proof of the conditions under which the network is an echo state network even if the largest singular value is one. It turns out that in this critical case the exact shape of the transfer function plays a decisive role in determining whether the network still fulfills the echo state condition. In addition, several examples with one-neuron networks are outlined to illustrate the effects of critical connectivity. Moreover, a mathematical definition of a critical echo state network is suggested.Entropy2016-12-23191Article10.3390/e1901000331099-43002016-12-23doi: 10.3390/e19010003Norbert Mayer<![CDATA[Entropy, Vol. 19, Pages 4: Quantum Key Distribution in the Presence of the Intercept-Resend with Faked States Attack]]>
http://www.mdpi.com/1099-4300/19/1/4
Despite the unconditional security of Quantum Key Distribution (QKD) in theory, several attacks have been successfully implemented against commercial QKD systems. Those systems have exhibited flaws whereby the secret key rate of the corresponding protocols remains unaltered while the eavesdropper obtains the entire secret key. We propose the negative acknowledgment state quantum key distribution protocol as a novel protocol capable of detecting the eavesdropping activity of the Intercept-Resend with Faked States (IRFS) attack without requiring optical components beyond those of the BB84 protocol, because the system can be implemented as a software module. In this approach, the transmitter interleaves pairs of quantum states, referred to here as parallel and orthogonal states, while the receiver uses active basis selection.Entropy2016-12-23191Article10.3390/e1901000441099-43002016-12-23doi: 10.3390/e19010004Luis Lizama-PérezJosé LópezEduardo De Carlos López<![CDATA[Entropy, Vol. 19, Pages 2: A Multivariate Multiscale Fuzzy Entropy Algorithm with Application to Uterine EMG Complexity Analysis]]>
http://www.mdpi.com/1099-4300/19/1/2
The recently introduced multivariate multiscale entropy (MMSE) has been successfully used to quantify structural complexity in terms of nonlinear within- and cross-channel correlations, as well as to reveal complex dynamical couplings and various degrees of synchronization over multiple scales in real-world multichannel data. However, the applicability of MMSE is limited by the coarse-graining process which defines scales, as it successively reduces the data length for each scale and thus yields inaccurate and undefined entropy estimates at higher scales and for short length data. To address this, we propose the multivariate multiscale fuzzy entropy (MMFE) algorithm and demonstrate its superiority over MMSE on both synthetic and real-world uterine electromyography (EMG) short duration signals. Based on MMFE features, an improvement in the classification accuracy of term-preterm deliveries was achieved, with a maximum area under the curve (AUC) value of 0.99.Entropy2016-12-22191Article10.3390/e1901000221099-43002016-12-22doi: 10.3390/e19010002Mosabber AhmedTheerasak ChanwimalueangSudhin ThayyilDanilo Mandic<![CDATA[Entropy, Vol. 19, Pages 1: Maximum Entropy Models for Quantum Systems]]>
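The coarse-graining step whose data-shortening effect motivates MMFE can be sketched as follows. This is the standard multiscale coarse-graining procedure, not the authors' code; the channels-by-samples array layout is an assumption for illustration.

```python
import numpy as np

def coarse_grain(X, scale):
    """Coarse-grain each channel of multichannel data X (channels x samples)
    by non-overlapping window averaging, as in multiscale entropy analyses.

    At scale s the output has floor(N/s) samples per channel, which is why
    entropy estimates degrade at higher scales for short records."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    n = X.shape[1] // scale
    return X[:, :n * scale].reshape(X.shape[0], n, scale).mean(axis=2)
```

For example, a 2-channel record of 6 samples coarse-grained at scale 2 keeps only 3 samples per channel, illustrating the length reduction the abstract refers to.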
http://www.mdpi.com/1099-4300/19/1/1
We show that for a finite von Neumann algebra, the states that maximise Segal’s entropy with a given energy level are Gibbs states. This is a counterpart of the classical result for the algebra of all bounded linear operators on a Hilbert space and von Neumann entropy.Entropy2016-12-22191Article10.3390/e1901000111099-43002016-12-22doi: 10.3390/e19010001Andrzej ŁuczakHanna PodsędkowskaMichał Seweryn<![CDATA[Entropy, Vol. 18, Pages 457: Monitoring Test for Stability of Dependence Structure in Multivariate Data Based on Copula]]>
http://www.mdpi.com/1099-4300/18/12/457
In this paper, we consider a sequential monitoring procedure for detecting changes in copula function. We propose a cusum type of monitoring test based on the empirical copula function and apply it to the detection of the distributional changes in copula function. We investigate the asymptotic properties of the stopping time and show that under regularity conditions, its limiting null distribution is the same as the sup of Kiefer process. Moreover, we utilize the bootstrap method in order to obtain the limiting distribution. A simulation study and a real data analysis are conducted to evaluate our test.Entropy2016-12-211812Article10.3390/e181204574571099-43002016-12-21doi: 10.3390/e18120457Jiyeon LeeByungsoo Kim<![CDATA[Entropy, Vol. 18, Pages 456: A Possible Application of the Contribution of Aromaticity to Entropy: Thermal Switch]]>
http://www.mdpi.com/1099-4300/18/12/456
It has been known for a long time that the loss of aromaticity of gaseous molecules leads to a large increase of the enthalpy and a tiny increase of the entropy. Generally, the calculated transition temperature from an aromatic structure towards a non-aromatic structure, at which these two contributions cancel, is very high. The entropy associated with the loss of aromaticity of adsorbed molecules, such as pyridine on Si(100) and on Ge(100), is roughly the same, while the associated enthalpy is much smaller, a consequence of which is a low transition temperature. This allows us to imagine monomolecular devices, such as thermal switches, based on the difference in electrical conductivity between aromatic and non-aromatic species adsorbed on Si(100) or on Ge(100).Entropy2016-12-201812Article10.3390/e181204564561099-43002016-12-20doi: 10.3390/e18120456Romain CoustelStéphane CarniatoGérard Boureau<![CDATA[Entropy, Vol. 18, Pages 454: Grey Coupled Prediction Model for Traffic Flow with Panel Data Characteristics]]>
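The transition temperature discussed in this abstract is the point where the enthalpy and entropy contributions to the free energy change cancel, ΔG = ΔH − TΔS = 0, i.e. T = ΔH/ΔS. A minimal sketch with purely illustrative numbers (not values from the paper):

```python
def transition_temperature(delta_H, delta_S):
    """Temperature (K) at which enthalpy and entropy contributions cancel
    (ΔG = ΔH - TΔS = 0), i.e. T = ΔH / ΔS.

    delta_H in J/mol, delta_S in J/(mol K). With a large ΔH and a tiny ΔS
    (gas phase) T is very high; with a much smaller ΔH at similar ΔS
    (adsorbed molecules) T drops, enabling a thermal switch."""
    return delta_H / delta_S
```

With hypothetical values, ΔH = 105 kJ/mol and ΔS = 50 J/(mol·K) give T = 2100 K, whereas shrinking ΔH to 10 kJ/mol at the same ΔS gives T = 200 K, reproducing the qualitative argument of the abstract.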
http://www.mdpi.com/1099-4300/18/12/454
This paper studies the grey coupled prediction problem of traffic data with panel data characteristics. Traffic flow data collected continuously at the same site typically has panel data characteristics. The longitudinal data (daily flow) is time-series data, which show an obvious intra-day trend and can be predicted using the autoregressive integrated moving average (ARIMA) model. The cross-sectional data is composed of observations at the same time intervals on different days and shows weekly seasonality and limited data characteristics; this data can be predicted using the rolling seasonal grey model (RSDGM(1,1)). The length of the rolling sequence is determined using matrix perturbation analysis. Then, a coupled model is established based on the ARIMA and RSDGM(1,1) models; the coupled prediction is achieved at the intersection of the time-series data and cross-sectional data, and the weights are determined using grey relational analysis. Finally, numerical experiments on 16 groups of cross-sectional data show that the RSDGM(1,1) model has good adaptability and stability and can effectively predict changes in traffic flow. The performance of the coupled model is also better than that of the benchmark model, the coupled model with equal weights and the Bayesian combination model.Entropy2016-12-201812Article10.3390/e181204544541099-43002016-12-20doi: 10.3390/e18120454Jinwei YangXinping XiaoShuhua MaoCongjun RaoJianghui Wen<![CDATA[Entropy, Vol. 18, Pages 455: Rényi Divergences, Bures Geometry and Quantum Statistical Thermodynamics]]>
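The grey-model building block of the RSDGM(1,1) component can be sketched with the classic GM(1,1) model; the rolling-window and seasonal handling of the paper's RSDGM(1,1), and the ARIMA coupling, are omitted here. This is a generic textbook sketch, not the authors' implementation.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey model forecast (sketch).

    x0: 1-D positive series with a trend; returns `steps` forecasts
    beyond the end of the data."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])              # background (mean) values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response
    x0_hat = np.empty(n + steps)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)              # restore by inverse accumulation
    return x0_hat[n:]
```

On a smoothly growing series (e.g., 10% geometric growth) GM(1,1) recovers the next value to within a fraction of a percent, which is why it suits the limited cross-sectional samples described in the abstract.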
http://www.mdpi.com/1099-4300/18/12/455
The Bures geometry of quantum statistical thermodynamics at thermal equilibrium is investigated by introducing the connections between the Bures angle and the Rényi 1/2-divergence. Fundamental relations concerning free energy, moments of work, and distance are established.Entropy2016-12-191812Article10.3390/e181204554551099-43002016-12-19doi: 10.3390/e18120455Ali HardalÖzgür Müstecaplıoğlu<![CDATA[Entropy, Vol. 18, Pages 453: Entropic Citizenship Behavior and Sustainability in Urban Organizations: Towards a Theoretical Model]]>
http://www.mdpi.com/1099-4300/18/12/453
Entropy is a concept derived from physics that has been used to describe the structure and behavior of natural and social systems. Applications of the concept in the social sciences have so far been largely limited to the disciplines of economics and sociology. In the current paper, the concept of entropy is applied to organizational citizenship behavior, with implications for urban organizational sustainability. A heuristic is presented for analysing personal and organizational citizenship configurations and distributions within a given workforce that can lead to corporate entropy, and for allowing prescriptive remedial steps to be taken to manage the process, should entropy from this source threaten the organization’s sustainability and survival.Entropy2016-12-191812Article10.3390/e181204534531099-43002016-12-19doi: 10.3390/e18120453David Coldwell<![CDATA[Entropy, Vol. 18, Pages 452: Ranking DMUs by Comparing DEA Cross-Efficiency Intervals Using Entropy Measures]]>
http://www.mdpi.com/1099-4300/18/12/452
Cross-efficiency evaluation, an extension of data envelopment analysis (DEA), can eliminate unrealistic weighting schemes and provide a ranking for decision-making units (DMUs). In the literature, the problem of uniquely determining input and output weights has received much attention. However, the problem of choosing between the aggressive (minimal) and benevolent (maximal) formulations for decision-making may still remain. In this paper, we develop a procedure to perform cross-efficiency evaluation without the need to make any specific choice of DEA weights. The proposed procedure takes the aggressive and benevolent formulations into account at the same time, so the choice of DEA weights can be avoided. Consequently, a number of cross-efficiency intervals are obtained for each DMU. Entropy, which is based on information theory, is an effective tool to measure uncertainty. We utilize entropy to construct a numerical index for DMUs with cross-efficiency intervals. A mathematical program is proposed to find the optimal entropy values of DMUs for comparison. With the derived entropy values, we can rank DMUs accordingly. Two examples illustrate the effectiveness of the approach proposed in this paper.Entropy2016-12-171812Article10.3390/e181204524521099-43002016-12-17doi: 10.3390/e18120452Tim LuShiang-Tai Liu<![CDATA[Entropy, Vol. 18, Pages 451: Linear Quantum Entropy and Non-Hermitian Hamiltonians]]>
http://www.mdpi.com/1099-4300/18/12/451
We consider the description of open quantum systems with probability sinks (or sources) in terms of general non-Hermitian Hamiltonians. Within such a framework, we study novel possible definitions of the quantum linear entropy as an indicator of the flow of information during the dynamics. Such linear entropy functionals are necessary in the case of a partially Wigner-transformed non-Hermitian Hamiltonian (which is typically useful within a mixed quantum-classical representation). Both the case of a system represented by a pure non-Hermitian Hamiltonian and that of non-Hermitian dynamics in a classical bath are explicitly considered.Entropy2016-12-161812Article10.3390/e181204514511099-43002016-12-16doi: 10.3390/e18120451Alessandro SergiPaolo Giaquinta<![CDATA[Entropy, Vol. 18, Pages 449: Entropy-Constrained Scalar Quantization with a Lossy-Compressed Bit]]>
http://www.mdpi.com/1099-4300/18/12/449
We consider the compression of a continuous real-valued source X using scalar quantizers and average squared error distortion D. Using lossless compression of the quantizer’s output, Gish and Pierce showed that uniform quantizing yields the smallest output entropy in the limit D → 0 , resulting in a rate penalty of 0.255 bits/sample above the Shannon Lower Bound (SLB). We present a scalar quantization scheme named lossy-bit entropy-constrained scalar quantization (Lb-ECSQ) that is able to reduce the D → 0 gap to SLB to 0.251 bits/sample by combining both lossless and binary lossy compression of the quantizer’s output. We also study the low-resolution regime and show that Lb-ECSQ significantly outperforms ECSQ in the case of 1-bit quantization.Entropy2016-12-161812Article10.3390/e181204494491099-43002016-12-16doi: 10.3390/e18120449Melanie PradierPablo OlmosFernando Perez-Cruz<![CDATA[Entropy, Vol. 18, Pages 450: Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle]]>
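The Gish–Pierce 0.255-bit figure cited in this abstract can be checked numerically: at high resolution, the output entropy of a uniform scalar quantizer on a unit Gaussian exceeds the Shannon Lower Bound by about 0.5·log2(2πe/12) ≈ 0.2546 bits/sample. A sketch of that check, using the high-resolution approximation D = Δ²/12 (not the paper's Lb-ECSQ scheme):

```python
import math

def uniform_quantizer_gap(delta=0.05, k_max=400):
    """Gap (bits/sample) between a uniform scalar quantizer's output entropy
    and the Shannon Lower Bound, for a unit Gaussian source.

    Uses exact cell probabilities via erf and the high-resolution
    distortion approximation D = delta^2 / 12."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    H = 0.0
    for k in range(-k_max, k_max + 1):
        # probability of cell [(k - 0.5) delta, (k + 0.5) delta)
        p = Phi((k + 0.5) * delta) - Phi((k - 0.5) * delta)
        if p > 0.0:
            H -= p * math.log2(p)
    D = delta ** 2 / 12.0
    h_X = 0.5 * math.log2(2 * math.pi * math.e)   # differential entropy of N(0,1)
    slb = h_X - 0.5 * math.log2(2 * math.pi * math.e * D)
    return H - slb
```

At Δ = 0.05 the computed gap is already within a few thousandths of a bit of the asymptotic 0.2546, the value Lb-ECSQ reduces to 0.251 by lossy-compressing one output bit.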
http://www.mdpi.com/1099-4300/18/12/450
Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only afford a maximum prediction accuracy of 70.6% in the experiments that we performed.Entropy2016-12-161812Article10.3390/e181204504501099-43002016-12-16doi: 10.3390/e18120450Ge ChengZhenyu ZhangMoses KyebambeNasser Kimbugwe<![CDATA[Entropy, Vol. 18, Pages 448: The Kullback–Leibler Information Function for Infinite Measures]]>
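The Maximum Entropy principle underlying the NBAME model can be illustrated on Jaynes' classic dice problem: among all distributions with a prescribed mean, the entropy maximizer is exponential in the constrained feature, p_i ∝ exp(λ·i). A minimal sketch of that principle (not the paper's model, which fits many discrete game statistics):

```python
import math

def maxent_die(target_mean, faces=(1, 2, 3, 4, 5, 6), tol=1e-12):
    """Maximum-entropy distribution over die faces with a fixed mean.

    The solution has the exponential-family form p_i ∝ exp(lam * i);
    the Lagrange multiplier lam is found by bisection, since the mean
    is monotonically increasing in lam."""
    def mean_for(lam):
        w = [math.exp(lam * f) for f in faces]
        s = sum(w)
        return sum(wi * f for wi, f in zip(w, faces)) / s

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * f) for f in faces]
    s = sum(w)
    return [wi / s for wi in w]
```

Constraining the mean to 4.5 shifts probability mass toward the high faces while keeping the distribution as uniform (maximum-entropy) as the constraint allows; a maxent classifier applies the same idea with game statistics as the constrained features.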
http://www.mdpi.com/1099-4300/18/12/448
In this paper, we introduce the Kullback–Leibler information function ρ ( ν , μ ) and prove the local large deviation principle for σ-finite measures μ and finitely additive probability measures ν. In particular, the entropy of a continuous probability distribution ν on the real axis is interpreted as the exponential rate of asymptotics for the Lebesgue measure of the set of those samples that generate empirical measures close to ν in a suitable fine topology.Entropy2016-12-151812Article10.3390/e181204484481099-43002016-12-15doi: 10.3390/e18120448Victor BakhtinEdvard Sokal<![CDATA[Entropy, Vol. 18, Pages 447: Quantum Thermodynamics with Degenerate Eigenstate Coherences]]>
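For readers unfamiliar with the notation, the classical Kullback–Leibler information of ν relative to μ, which the paper above generalizes to σ-finite μ and finitely additive ν, takes the familiar form (a standard textbook definition, not quoted from the paper):

```latex
\rho(\nu,\mu) \;=\;
\begin{cases}
\displaystyle \int \log\frac{d\nu}{d\mu}\,d\nu, & \nu \ll \mu,\\[6pt]
+\infty, & \text{otherwise,}
\end{cases}
```

with the local large deviation principle identifying ρ as the exponential rate at which the measure of samples whose empirical measures lie near ν decays.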
http://www.mdpi.com/1099-4300/18/12/447
We establish quantum thermodynamics for open quantum systems weakly coupled to their reservoirs when the system exhibits degeneracies. The first and second laws of thermodynamics are derived, as well as a finite-time fluctuation theorem for mechanical work and energy and matter currents. Using a double quantum dot junction model, local eigenbasis coherences are shown to play a crucial role in thermodynamics and in the electron counting statistics. Entropy 2016, 18(12), 447; doi: 10.3390/e18120447. Published 2016-12-15. Authors: Gregory Bulnes Cuetara, Massimiliano Esposito, Gernot Schaller.<![CDATA[Entropy, Vol. 18, Pages 446: Effects Induced by the Initial Condition in the Quantum Kibble–Zurek Scaling for Changing the Symmetry-Breaking Field]]>
http://www.mdpi.com/1099-4300/18/12/446
The Kibble–Zurek scaling describes the driven critical dynamics starting with an equilibrium state far away from the critical point. Recently, it has been shown that scaling behaviors also exist when the fluctuation term changes starting near the critical point. In this case, the relevant initial conditions should be included in the scaling theory as additional scaling variables. Here, we study the driven quantum critical dynamics in which a symmetry-breaking field is linearly changed starting from the vicinity of the critical point. We find that, similar to the case of changing the fluctuation term, scaling behaviors in the driven dynamics can be described by the Kibble–Zurek scaling with the initial symmetry-breaking field being included as its additional scaling variable. Both the cases of zero and finite temperatures are considered, and the scaling forms of the order parameter and the entanglement entropy are obtained. We numerically verify the scaling theory by taking the quantum Ising model as an example.Entropy2016-12-141812Article10.3390/e181204464461099-43002016-12-14doi: 10.3390/e18120446Liang-Jun ZhaiShuai Yin<![CDATA[Entropy, Vol. 18, Pages 445: Multivariate Surprisal Analysis of Gene Expression Levels]]>
http://www.mdpi.com/1099-4300/18/12/445
We consider here multivariate data, by which we mean data where each point i is measured for two or more distinct variables. In a typical situation there are many data points i while the range of the different variables is more limited. If there is only one variable, then the data can be arranged as a rectangular matrix where i is the index of the rows while the values of the variable label the columns. We begin here with this case, but then proceed to the more general case, with special emphasis on two variables, when the data can be organized as a tensor. An analysis of such multivariate data by a maximal entropy approach is discussed and illustrated for gene expressions in four different cell types of six different patients. The different genes are indexed by i, and there are 24 (4 by 6) entries for each i. We used an unbiased thermodynamic maximal-entropy based approach (surprisal analysis) to analyze the multivariate transcriptional profiles. The measured microarray experimental data are organized as a tensor array where the two minor orthogonal directions are the different patients and the different cell types. The entries are the transcription levels on a logarithmic scale. We identify a disease signature of prostate cancer and determine the degree of variability between individual patients. Surprisal analysis determined a baseline expression level common to all cells and patients. We identify the transcripts in the baseline as the “housekeeping” genes that ensure cell stability. The baseline and two surprisal patterns satisfactorily recover (99.8%) the multivariate data. The two patterns characterize the individuality of the patients and, to a lesser extent, the commonality of the disease. The immune response was identified as the most significant pathway contributing to the cancer disease pattern. Delineating patient variability is a central issue in personalized diagnostics, and it remains to be seen whether additional data will confirm the power of multivariate analysis to address this key point. The collapsed limits, where the data are compacted into two-dimensional arrays, are contained within the proposed formalism. Entropy 2016, 18(12), 445; doi: 10.3390/e18120445. Published 2016-12-11. Authors: Francoise Remacle, Andrew Goldstein, Raphael Levine.<![CDATA[Entropy, Vol. 18, Pages 444: Determining the Optimum Inner Diameter of Condenser Tubes Based on Thermodynamic Objective Functions and an Economic Analysis]]>
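Surprisal analysis, as used in the gene expression abstract above, extracts a dominant baseline and a small number of deviation patterns from log-scale expression data; in the matrix (single-variable) case this can be sketched with an SVD. The data below are simulated, not the paper's microarrays:

```python
import numpy as np

# Minimal sketch of surprisal analysis on synthetic data: factor the
# log expression matrix by SVD; the leading term plays the role of the
# baseline pattern shared by all cells and patients.
rng = np.random.default_rng(1)

genes, samples = 500, 24                      # 24 = 4 cell types x 6 patients
base = rng.lognormal(mean=5.0, sigma=0.3, size=(genes, 1))
noise = rng.lognormal(mean=0.0, sigma=0.05, size=(genes, samples))
expr = base * noise                           # shared baseline + small variation

Y = np.log(expr)                              # surprisal analysis works in log space
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# Fraction of the squared signal recovered by baseline + two patterns
recovered = (s[:3] ** 2).sum() / (s ** 2).sum()
```

On the paper's real data the baseline plus two patterns recover 99.8% of the signal; this synthetic example merely shows why a dominant baseline makes such low-rank recovery possible.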
http://www.mdpi.com/1099-4300/18/12/444
The diameter and configuration of tubes are important design parameters of power condensers. If a proper tube diameter is applied during the design of a power unit, a high energy efficiency of the condenser itself can be achieved and the performance of the whole power generation unit can be improved. If a tube assembly is to be replaced, one should verify whether the chosen condenser tube diameter is correct. Using a diameter that is too large increases the heat transfer area, leading to over-dimensioning and higher costs of building the condenser. On the other hand, if the diameter is too small, water flows faster through the tubes, which results in larger flow resistance and larger pumping power of the cooling-water pump. Both simple and complex methods can be applied to determine the condenser tube diameter. The paper proposes a method of technical and economic optimisation taking into account the performance of a condenser, the low-pressure (LP) part of a turbine, and a cooling-water pump as well as the profit from electric power generation and costs of building the condenser and pumping cooling water. The results obtained by this method were compared with those provided by the following simpler methods: minimization of the entropy generation rate per unit length of a condenser tube (considering entropy generation due to heat transfer and resistance of cooling-water flow), minimization of the total entropy generation rate (considering entropy generation for the system comprising the LP part of the turbine, the condenser, and the cooling-water pump), and maximization of the power unit’s output. The proposed methods were used to verify the diameters of tubes in power condensers in 200-MW and 500-MW power units. Entropy 2016, 18(12), 444; doi: 10.3390/e18120444. Published 2016-12-10. Authors: Rafał Laskowski, Adam Smyk, Artur Rusowicz, Andrzej Grzebielec.<![CDATA[Entropy, Vol. 18, Pages 443: The Evaluation of Noise Spectroscopy Tests]]>
http://www.mdpi.com/1099-4300/18/12/443
The paper discusses mathematical tools to evaluate novel noise-spectroscopy-based analyses and describes, via physical similarity, the mathematical models expressing the quantitative character of the modeled task. Using the Stefan–Boltzmann law, the authors show how to find the spectral density of the radiated power of a hemisphere, and, for the selected frequency interval and temperature, they compare the simplified models with the expression of noise spectral density according to the Johnson–Nyquist formula or Nyquist’s expression of the spectral density function based on a derivation of Planck’s law. The related measurements and evaluations, together with analyses of the noise spectroscopy of periodic resonant structures, are also outlined in the given context. Entropy 2016, 18(12), 443; doi: 10.3390/e18120443. Published 2016-12-10. Authors: Pavel Fiala, Petr Drexler, Dusan Nespor, Zoltan Szabo, Jan Mikulka, Jiri Polivka.<![CDATA[Entropy, Vol. 18, Pages 442: Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities]]>
http://www.mdpi.com/1099-4300/18/12/442
Information-theoretic measures, such as the entropy, the cross-entropy and the Kullback–Leibler divergence between two mixture models, are core primitives in many signal processing tasks. Since the Kullback–Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte Carlo stochastic integration, approximated or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds on the entropy, the cross-entropy, the Kullback–Leibler and the α-divergences of mixtures. We illustrate the versatile method by reporting our experiments for approximating the Kullback–Leibler and the α-divergences between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures and Gamma mixtures. Entropy 2016, 18(12), 442; doi: 10.3390/e18120442. Published 2016-12-09. Authors: Frank Nielsen, Ke Sun.<![CDATA[Entropy, Vol. 18, Pages 440: Supply Chain Strategies for Quality Inspection under a Customer Return Policy: A Game Theoretical Approach]]>
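The mixture-bounds abstract above contrasts its closed-form bounds with costly Monte Carlo stochastic integration. The baseline it improves on can be sketched as follows for two hypothetical univariate Gaussian mixtures (the parameters are illustrative, not taken from the paper):

```python
import numpy as np

# Monte Carlo estimate of KL(m1 || m2) between two Gaussian mixtures,
# for which no closed form exists: sample from m1 and average the
# log density ratio.
rng = np.random.default_rng(2)

def mix_pdf(x, weights, means, stds):
    x = np.asarray(x)[:, None]
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return comp @ weights

w1, mu1, s1 = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5])
w2, mu2, s2 = np.array([0.3, 0.7]), np.array([-0.5, 1.5]), np.array([0.7, 0.4])

k = rng.choice(2, size=200000, p=w1)          # pick a component of m1
x = rng.normal(mu1[k], s1[k])                 # then sample from it
kl_mc = np.mean(np.log(mix_pdf(x, w1, mu1, s1) / mix_pdf(x, w2, mu2, s2)))
```

The paper's contribution replaces such sampling with deterministic piecewise log-sum-exp bounds; this sketch only illustrates the estimator being avoided.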
http://www.mdpi.com/1099-4300/18/12/440
This paper outlines the quality inspection strategies in a supplier–buyer supply chain under a customer return policy. It primarily focuses on product quality and quality inspection techniques to maximize the actors’ and supply chain’s profits using a game-theoretic approach. The supplier–buyer setup is described in terms of a textile manufacturer–retailer supply chain where quality inspection is an important aspect and product returns from the customer are generally accepted. The textile manufacturer produces the product, whereas the retailer acts as a reseller who buys the products from the textile manufacturer and sells them to the customers. In this context, the former invests in the product quality whereas the latter invests in the random quality inspection and traceability. The relationships between the textile manufacturer and the retailer are recognized as horizontal and vertical alliances and modeled using non-cooperative and cooperative games. The non-cooperative games are based on the Stackelberg and Nash equilibrium models. Further, bargaining and game change scenarios are discussed to maximize the profit under different games. To understand the appropriateness of a strategic alliance, a computational study demonstrates the textile manufacturer–retailer relation under different game scenarios. Entropy 2016, 18(12), 440; doi: 10.3390/e18120440. Published 2016-12-08. Authors: Vijay Kumar, Daniel Ekwall, Lichuan Wang.<![CDATA[Entropy, Vol. 18, Pages 439: The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors]]>
http://www.mdpi.com/1099-4300/18/12/439
With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers’ demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by the price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (Radial Basis Function) neural network methods, forecast the intervals of the soybean meal and non-GMO (Genetically Modified Organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system. Entropy 2016, 18(12), 439; doi: 10.3390/e18120439. Published 2016-12-08. Authors: Yi Wang, Xin Su, Shubing Guo.<![CDATA[Entropy, Vol. 
18, Pages 438: Static Einstein–Maxwell Magnetic Solitons and Black Holes in an Odd Dimensional AdS Spacetime]]>
http://www.mdpi.com/1099-4300/18/12/438
We construct a new class of Einstein–Maxwell static solutions with a magnetic field in D-dimensions (with D ≥ 5 an odd number), approaching at infinity a globally Anti-de Sitter (AdS) spacetime. In addition to the mass, the new solutions possess an extra-parameter associated with a non-zero magnitude of the magnetic potential at infinity. Some of the black holes possess a non-trivial zero-horizon size limit, which corresponds to a solitonic deformation of the AdS background.Entropy2016-12-081812Article10.3390/e181204384381099-43002016-12-08doi: 10.3390/e18120438Jose Blázquez-SalcedoJutta KunzFrancisco Navarro-LéridaEugen Radu<![CDATA[Entropy, Vol. 18, Pages 441: Construction of Fractional Repetition Codes with Variable Parameters for Distributed Storage Systems]]>
http://www.mdpi.com/1099-4300/18/12/441
In this paper, we propose a new class of regular fractional repetition (FR) codes constructed from perfect difference families and quasi-perfect difference families to store big data in distributed storage systems. The main advantage of the proposed construction method is that it supports a wide range of code parameter values compared to existing ones, which is an important feature to be adopted in practical systems. When using one instance of the proposed codes for a given parameter set, we show that the amount of stored data is very close to that of an existing state-of-the-art optimal FR code.Entropy2016-12-081812Article10.3390/e181204414411099-43002016-12-08doi: 10.3390/e18120441Hosung ParkYoung-Sik Kim<![CDATA[Entropy, Vol. 18, Pages 437: Application of Shannon Wavelet Entropy and Shannon Wavelet Packet Entropy in Analysis of Power System Transient Signals]]>
http://www.mdpi.com/1099-4300/18/12/437
In a power system, the analysis of transient signals is the theoretical basis of fault diagnosis and transient protection theory. Shannon wavelet entropy (SWE) and Shannon wavelet packet entropy (SWPE) are powerful mathematical tools for transient signal analysis. Combined with the recent achievements regarding SWE and SWPE, their applications are summarized in feature extraction of transient signals and transient fault recognition. The impact of wavelet aliasing at adjacent scales of the wavelet decomposition on the feature extraction accuracy of SWE and SWPE is analyzed, and their differences are compared. Meanwhile, the analyses mentioned are verified by partial discharge (PD) feature extraction of a power cable. Finally, further research directions are proposed concerning the wavelet entropy mechanism, operation speed and how to overcome wavelet aliasing. Entropy 2016, 18(12), 437 (Review); doi: 10.3390/e18120437. Published 2016-12-07. Authors: Jikai Chen, Yanhui Dou, Yang Li, Jiang Li.<![CDATA[Entropy, Vol. 18, Pages 436: Numerical Study of Entropy Generation in Mixed MHD Convection in a Square Lid-Driven Cavity Filled with Darcy–Brinkman–Forchheimer Porous Medium]]>
http://www.mdpi.com/1099-4300/18/12/436
This investigation deals with the numerical simulation of entropy generation at mixed convection flow in a lid-driven saturated porous cavity submitted to a magnetic field. The magnetic field is applied in the direction that is normal to the cavity cross section. The governing equations, written in the Darcy–Brinkman–Forchheimer formulation, are solved using a numerical code based on the Control Volume Finite Element Method. The flow structure and heat transfer are presented in the form of streamlines, isotherms and average Nusselt number. The entropy generation was studied for various values of Darcy number (10−3 ≤ Da ≤ 1) and for a range of Hartmann number (0 ≤ Ha ≤ 102). It was found that entropy generation is affected by the variations of the considered dimensionless physical parameters. Moreover, the form drag related to the Forchheimer effect remains significant until a critical Hartmann number value.Entropy2016-12-061812Article10.3390/e181204364361099-43002016-12-06doi: 10.3390/e18120436Rahma BouabdaMounir BouabidAmmar Ben BrahimMourad Magherbi<![CDATA[Entropy, Vol. 18, Pages 435: Intra-Day Trading System Design Based on the Integrated Model of Wavelet De-Noise and Genetic Programming]]>
http://www.mdpi.com/1099-4300/18/12/435
Technical analysis has been proved to be capable of exploiting short-term fluctuations in financial markets. Recent results indicate that the market timing approach beats many traditional buy-and-hold approaches in most of the short-term trading periods. Genetic programming (GP) was used to generate short-term trade rules on the stock markets during the last few decades. However, few of the related studies on the analysis of financial time series with genetic programming considered the non-stationary and noisy characteristics of the time series. In this paper, to de-noise the original financial time series and to search profitable trading rules, an integrated method is proposed based on the Wavelet Threshold (WT) method and GP. Since relevant information that affects the movement of the time series is assumed to be fully digested during the market closed periods, to avoid the jumping points of the daily or monthly data, in this paper, intra-day high-frequency time series are used to fully exploit the short-term forecasting advantage of technical analysis. To validate the proposed integrated approach, an empirical study is conducted based on the China Securities Index (CSI) 300 futures in the emerging China Financial Futures Exchange (CFFEX) market. The analysis outcomes show that the wavelet de-noise approach outperforms many comparative models.Entropy2016-12-061812Article10.3390/e181204354351099-43002016-12-06doi: 10.3390/e18120435Hongguang LiuPing JiJian Jin<![CDATA[Entropy, Vol. 18, Pages 434: Entropy and Stability Analysis of Delayed Energy Supply–Demand Model]]>
http://www.mdpi.com/1099-4300/18/12/434
In this paper, a four-dimensional model of energy supply–demand with two delays is built. The interactions among the energy demand of east China, the energy supply of west China and the utilization of renewable energy in east China are delayed in this model. We discuss the stability of the system as affected by parameters and the existence of a Hopf bifurcation at the equilibrium point from two aspects: a single delay and two delays. The stability and complexity of the system are demonstrated through bifurcation diagrams, Poincaré section plots, entropy diagrams, etc. in numerical simulation. The simulation results show that parameters beyond the stable region will cause the system to be unstable and increase the complexity of the system. At this point, because of energy supply–demand system fluctuations, it is difficult to formulate energy policies. Finally, bifurcation control is realized successfully by the method of delayed feedback control. The results of the bifurcation control simulation indicate that the system can return to a stable state by adjusting the control parameter. In addition, we find that the bigger the value of the control parameter, the better the effect of the bifurcation control. The results of this paper can provide help for maintaining the stability of the system, which will be conducive to energy scheduling. Entropy 2016, 18(12), 434; doi: 10.3390/e18120434. Published 2016-12-03. Authors: Jing Wang, Fengshan Si, Yuling Wang, Shide Duan.<![CDATA[Entropy, Vol. 18, Pages 430: Multivariable Fuzzy Measure Entropy Analysis for Heart Rate Variability and Heart Sound Amplitude Variability]]>
http://www.mdpi.com/1099-4300/18/12/430
Simultaneously analyzing multivariate time series provides an insight into the underlying interaction mechanisms of the cardiovascular system and has recently become an increasing focus of interest. In this study, we proposed a new multivariate entropy measure, named multivariate fuzzy measure entropy (mvFME), for the analysis of multivariate cardiovascular time series. The performances of mvFME, and its two sub-components, the local multivariate fuzzy entropy (mvFEL) and global multivariate fuzzy entropy (mvFEG), as well as the commonly used multivariate sample entropy (mvSE), were tested on both simulated and cardiovascular multivariate time series. Simulation results on multivariate coupled Gaussian signals showed that the statistical stability of mvFME is better than that of mvSE, but its computation time is longer. Then, mvSE and mvFME were applied to the multivariate cardiovascular signal analysis of the R wave peak (RR) interval, and the first (S1) and second (S2) heart sound amplitude series from three positions of heart sound signal collection, under two different physiological states: the rest state and the after stair climbing state. The results showed that, compared with the rest state, for univariate time series analysis, the after stair climbing state has significantly lower mvSE and mvFME values for both the RR interval and S1 amplitude series, but not for the S2 amplitude series. For bivariate time series analysis, both mvSE and mvFME report significantly lower values after stair climbing. For trivariate time series analysis, only mvFME can discriminate the two physiological states, whereas mvSE cannot. In summary, the newly proposed mvFME method shows better statistical stability and better discrimination ability for multivariate time series analysis than the traditional mvSE method. Entropy 2016, 18(12), 430; doi: 10.3390/e18120430. Published 2016-12-03. Authors: Lina Zhao, Shoushui Wei, Hong Tang, Chengyu Liu.<![CDATA[Entropy, Vol. 
18, Pages 432: EEG-Based Person Authentication Using a Fuzzy Entropy-Related Approach with Two Electrodes]]>
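The mvFME/mvSE comparison above builds on sample entropy. A minimal univariate sketch (standard SampEn, not the paper's multivariate or fuzzy extensions) shows the quantity that mvSE generalizes:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # Standard univariate SampEn: negative log of the conditional
    # probability that templates matching at length m also match at m+1.
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(length):
        templ = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        return (d <= tol).sum() - len(templ)   # exclude self-matches
    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(3)
se_regular = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 500)))
se_noise = sample_entropy(rng.standard_normal(500))
# a predictable signal scores lower than white noise
```

The multivariate measures in the paper extend this idea to composite delay vectors built across channels (e.g., RR, S1 and S2 series together), with fuzzy membership functions replacing the hard tolerance threshold in mvFME.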
http://www.mdpi.com/1099-4300/18/12/432
Person authentication, based on electroencephalography (EEG) signals, is one of the directions possible in the study of EEG signals. In this paper, a method for the selection of EEG electrodes and features in a discriminative manner is proposed. Given that EEG signals are unstable and non-linear, a non-linear analysis method, i.e., fuzzy entropy, is more appropriate. In this paper, unlike other methods using different signal sources and patterns, such as rest state and motor imagery, a novel paradigm using the stimuli of self-photos and non-self-photos is introduced. Ten subjects are selected to take part in this experiment, and fuzzy entropy is used as a feature to select the minimum number of electrodes that identifies individuals. The experimental results show that the proposed method can make use of two electrodes (FP1 and FP2) in the frontal area, while the classification accuracy is greater than 87.3%. The proposed biometric system, based on EEG signals, can provide each subject with a unique key and is capable of human recognition.Entropy2016-12-021812Communication10.3390/e181204324321099-43002016-12-02doi: 10.3390/e18120432Zhendong MuJianfeng HuJianliang Min<![CDATA[Entropy, Vol. 18, Pages 433: Foliations-Webs-Hessian Geometry-Information Geometry-Entropy and Cohomology]]>
http://www.mdpi.com/1099-4300/18/12/433
IN MEMORIAM OF ALEXANDER GROTHENDIECK. THE MAN.Entropy2016-12-021812Article10.3390/e181204334331099-43002016-12-02doi: 10.3390/e18120433Michel Boyom<![CDATA[Entropy, Vol. 18, Pages 431: Maximum Entropy Production Is Not a Steady State Attractor for 2D Fluid Convection]]>
http://www.mdpi.com/1099-4300/18/12/431
Multiple authors have claimed that the natural convection of a fluid is a process that exhibits maximum entropy production (MEP). However, almost all such investigations were limited to fixed temperature boundary conditions (BCs). It was found that under those conditions, the system tends to maximize its heat flux, and hence it was concluded that the MEP state is a dynamical attractor. However, since entropy production varies with heat flux and difference of inverse temperature, it is essential that any complete investigation of entropy production allows for variations in heat flux and temperature difference. Only then can we legitimately assess whether the MEP state is the most attractive. Our previous work made use of negative feedback BCs to explore this possibility. We found that the steady state of the system was far from the MEP state. For any system, entropy production can only be maximized subject to a finite set of physical and material constraints. In the case of our previous work, it was possible that the adopted set of fluid parameters were constraining the system in such a way that it was entirely prevented from reaching the MEP state. Hence, in the present work, we used a different set of boundary parameters, such that the steady states of the system were in the local vicinity of the MEP state. If MEP was indeed an attractor, relaxing those constraints of our previous work should have caused a discrete perturbation to the surface of steady state heat flux values near the value corresponding to MEP. We found no such perturbation, and hence no discernible attraction to the MEP state. Furthermore, systems with fixed flux BCs actually minimize their entropy production (relative to the alternative stable state, that of pure diffusive heat transport). This leads us to conclude that the principle of MEP is not an accurate indicator of which stable steady state a convective system will adopt. 
However, for all BCs considered, the quotient of heat flux and temperature difference, F/ΔT, which is proportional to the dimensionless Nusselt number, does appear to be maximized. Entropy 2016, 18(12), 431; doi: 10.3390/e18120431. Published 2016-12-01. Authors: Stuart Bartlett, Nathaniel Virgo.<![CDATA[Entropy, Vol. 18, Pages 429: CoFea: A Novel Approach to Spam Review Identification Based on Entropy and Co-Training]]>
http://www.mdpi.com/1099-4300/18/12/429
With the rapid development of electronic commerce, spam reviews are rapidly growing on the Internet to manipulate online customers’ opinions on goods being sold. This paper proposes a novel approach, called CoFea (Co-training by Features), to identify spam reviews, based on entropy and the co-training algorithm. After sorting all lexical terms of reviews by entropy, we produce two views on the reviews by dividing the lexical terms into two subsets. One subset contains odd-numbered terms and the other contains even-numbered terms. Using SVM (support vector machine) as the base classifier, we further propose two strategies, CoFea-T and CoFea-S, embedded with the CoFea approach. The CoFea-T strategy uses all terms in the subsets for spam review identification by SVM. The CoFea-S strategy uses a predefined number of terms with small entropy for spam review identification by SVM. The experiment results show that the CoFea-T strategy produces better accuracy than the CoFea-S strategy, while the CoFea-S strategy saves more computing time than the CoFea-T strategy with acceptable accuracy in spam review identification. Entropy 2016, 18(12), 429; doi: 10.3390/e18120429. Published 2016-11-30. Authors: Wen Zhang, Chaoqi Bu, Taketoshi Yoshida, Siguang Zhang.<![CDATA[Entropy, Vol. 18, Pages 428: Fiber-Mixing Codes between Shifts of Finite Type and Factors of Gibbs Measures]]>
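CoFea's view construction, as described above, can be sketched in a few lines. The toy reviews and the entropy definition below (class entropy of the reviews containing each term) are illustrative assumptions, since the abstract does not spell out the exact entropy formula used:

```python
import math
from collections import Counter

# Sketch of CoFea's view construction: rank lexical terms by entropy,
# then split the ranked list into odd- and even-numbered positions to
# form two co-training feature views. Reviews are toy data.
reviews = [("fake cheap deal buy now", "spam"),
           ("great phone battery lasts", "ham"),
           ("buy now cheap deal", "spam"),
           ("battery life is great and cheap", "ham")]

def term_entropy(term):
    # entropy of the class distribution among reviews containing the term
    labels = [lab for text, lab in reviews if term in text.split()]
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

vocab = sorted({t for text, _ in reviews for t in text.split()})
ranked = sorted(vocab, key=term_entropy)
view_a, view_b = ranked[0::2], ranked[1::2]   # the two co-training views
```

In the full method, an SVM trained on each view labels unlabeled reviews for the other view, following the standard co-training loop.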
http://www.mdpi.com/1099-4300/18/12/428
A sliding block code π: X → Y between shift spaces is called fiber-mixing if, for every x and x′ in X with y = π(x) = π(x′), there is z ∈ π⁻¹(y) which is left asymptotic to x and right asymptotic to x′. A fiber-mixing factor code from a shift of finite type is a code of class degree 1 for which each point of Y has exactly one transition class. Given an infinite-to-one factor code between mixing shifts of finite type (of unequal entropies), we show that there is also a fiber-mixing factor code between them. This result may be regarded as an infinite-to-one (unequal entropies) analogue of Ashley’s Replacement Theorem, which states that the existence of an equal entropy factor code between mixing shifts of finite type guarantees the existence of a degree 1 factor code between them. Properties of fiber-mixing codes and applications to factors of Gibbs measures are presented. Entropy 2016, 18(12), 428; doi: 10.3390/e18120428. Published 2016-11-30. Author: Uijin Jung.<![CDATA[Entropy, Vol. 18, Pages 426: On Macrostates in Complex Multi-Scale Systems]]>
http://www.mdpi.com/1099-4300/18/12/426
A characteristic feature of complex systems is their deep structure, meaning that the definition of their states and observables depends on the level, or the scale, at which the system is considered. This scale dependence is reflected in the distinction of micro- and macro-states, referring to lower and higher levels of description. There are several conceptual and formal frameworks to address the relation between them. Here, we focus on an approach in which macrostates are contextually emergent from (rather than fully reducible to) microstates and can be constructed by contextual partitions of the space of microstates. We discuss criteria for the stability of such partitions, in particular under the microstate dynamics, and outline some examples. Finally, we address the question of how macrostates arising from stable partitions can be identified as relevant or meaningful.Entropy2016-11-291812Article10.3390/e181204264261099-43002016-11-29doi: 10.3390/e18120426Harald Atmanspacher<![CDATA[Entropy, Vol. 18, Pages 424: Measurement on the Complexity Entropy of Dynamic Game Models for Innovative Enterprises under Two Kinds of Government Subsidies]]>
http://www.mdpi.com/1099-4300/18/12/424
In this paper, setting the high-tech industry as the background, we build a dynamic duopoly game model in two cases with different government subsidies based on the innovation inputs and outputs, respectively. We analyze the equilibrium solution and stability conditions of the system, and study the dynamic evolution of the system under the conditions of different system parameters by the numerical simulation method. The simulation results show that both innovation subsidy policies have positive effects on firms’ innovation activities. Besides, improving the level of innovation can encourage firms to innovate. It also shows that an exaggerated adjusting speed of innovation outputs may cause complicated dynamic phenomena such as bifurcation and chaos, which means that the system has relatively higher entropy than that in a stable state. The degree of the government innovation subsidies is also shown to impact the stability and entropy of the system.Entropy2016-11-291812Article10.3390/e181204244241099-43002016-11-29doi: 10.3390/e18120424Junhai MaXinyan SuiLei Li<![CDATA[Entropy, Vol. 18, Pages 427: Healthcare Teams Neurodynamically Reorganize When Resolving Uncertainty]]>
http://www.mdpi.com/1099-4300/18/12/427
Research on the microscale neural dynamics of social interactions has yet to be translated into improvements in the assembly, training and evaluation of teams. This is partially due to the range of neural scales involved in team activities, spanning from millisecond oscillations in individual brains to the minutes-to-hours performance behaviors of the team. We have used intermediate neurodynamic representations to show that healthcare teams enter persistent (50–100 s) neurodynamic states when they encounter and resolve uncertainty while managing simulated patients. Symbols were generated each second by situating the electroencephalogram (EEG) power of each team member in the context of that of the other team members and of the task. These representations were acquired from EEG headsets with 19 recording electrodes for each of the 1–40 Hz frequencies. Estimates of the information in each symbol stream were calculated from a 60 s moving window of Shannon entropy that was updated each second, providing a quantitative neurodynamic history of the team’s performance. Neurodynamic organization fluctuated with the task demands, with increased organization (i.e., lower entropy) occurring when the team needed to resolve uncertainty. These results show that intermediate neurodynamic representations can provide a quantitative bridge between the micro and macro scales of teamwork. Entropy 2016, 18(12), Article 427; published 2016-11-29; doi: 10.3390/e18120427. Ronald Stevens, Trysha Galloway, Donald Halpin, Ann Willemsen-Dunlap.<![CDATA[Entropy, Vol. 18, Pages 425: Anisotropically Weighted and Nonholonomically Constrained Evolutions on Manifolds]]>
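The moving-window entropy measure described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the symbol stream, the window length of 60 symbols (mimicking a 60 s window updated each second), and the one-step update are assumptions based only on the description above.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a sequence of discrete symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def moving_window_entropy(stream, window=60):
    """Entropy of each window-length slice, advanced one symbol at a time,
    yielding a quantitative entropy history of the stream."""
    return [shannon_entropy(stream[i:i + window])
            for i in range(len(stream) - window + 1)]
```

Lower values in the resulting entropy history would correspond to the more organized neurodynamic states described above.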
http://www.mdpi.com/1099-4300/18/12/425
We present evolution equations for a family of paths that results from anisotropically weighting curve energies in non-linear statistics of manifold-valued data. This situation arises when performing inference on data that have non-trivial covariance and are anisotropically distributed. The family can be interpreted as most probable paths for a driving semi-martingale that is mapped to the manifold through stochastic development. We discuss how the paths are projections of geodesics for a sub-Riemannian metric on the frame bundle of the manifold, and how the curvature of the underlying connection appears in the sub-Riemannian Hamilton–Jacobi equations. Evolution equations for both metric and cometric formulations of the sub-Riemannian metric are derived. We furthermore show how rank-deficient metrics can be mixed with an underlying Riemannian metric, and we relate the paths to geodesics and polynomials in Riemannian geometry. Examples from the family of paths are visualized on embedded surfaces, and we explore computational representations on finite-dimensional landmark manifolds with geometry induced from right-invariant metrics on diffeomorphism groups. Entropy 2016, 18(12), Article 425; published 2016-11-26; doi: 10.3390/e18120425. Stefan Sommer.<![CDATA[Entropy, Vol. 18, Pages 423: Consensus of Second Order Multi-Agent Systems with Exogenous Disturbance Generated by Unknown Exosystems]]>
http://www.mdpi.com/1099-4300/18/12/423
This paper is concerned with the consensus problem of a class of second-order multi-agent systems subject to external disturbances generated by unknown exosystems. In comparison with the case where the disturbance is generated by a known exosystem, we need to combine adaptive control and internal model design to deal with the external disturbance generated by the unknown exosystems. With the help of the internal model, an adaptive protocol is proposed for the consensus problem of the multi-agent systems. Finally, a numerical example is provided to demonstrate the effectiveness of the control design. Entropy 2016, 18(12), Article 423; published 2016-11-25; doi: 10.3390/e18120423. Xuxi Zhang, Qidan Zhu, Xianping Liu.<![CDATA[Entropy, Vol. 18, Pages 417: Condensation: Passenger Not Driver in Atmospheric Thermodynamics]]>
http://www.mdpi.com/1099-4300/18/12/417
The second law of thermodynamics states that processes yielding work or at least capable of yielding work are thermodynamically spontaneous, and that those costing work are thermodynamically nonspontaneous. Whether a process yields or costs heat is irrelevant. Condensation of water vapor yields work and hence is thermodynamically spontaneous only in a supersaturated atmosphere; in an unsaturated atmosphere it costs work and hence is thermodynamically nonspontaneous. Far more of Earth’s atmosphere is unsaturated than supersaturated; based on this alone evaporation is far more often work-yielding and hence thermodynamically spontaneous than condensation in Earth’s atmosphere—despite condensation always yielding heat and evaporation always costing heat. Furthermore, establishment of the unstable or at best metastable condition of supersaturation, and its maintenance in the face of condensation that would wipe it out, is always work-costing and hence thermodynamically nonspontaneous in Earth’s atmosphere or anywhere else. The work required to enable supersaturation is most usually provided at the expense of temperature differences that enable cooling to below the dew point. In the case of most interest to us, convective weather systems and storms, it is provided at the expense of vertical temperature gradients exceeding the moist adiabatic. Thus, ultimately, condensation is a work-costing and hence thermodynamically nonspontaneous process even in supersaturated regions of Earth’s or any other atmosphere. While heat engines in general can in principle extract all of the work represented by any temperature difference until it is totally neutralized to isothermality, convective weather systems and storms in particular cannot. They can extract only the work represented by partial neutralization of super-moist-adiabatic lapse rates to moist-adiabaticity. Super-moist-adiabatic lapse rates are required to enable convection of saturated air. 
Condensation cannot occur fast enough to maintain relative humidity in a cloud exactly at saturation, thereby trapping some water vapor in metastable supersaturation. Only then can the water vapor condense. Thus ultimately condensation is a thermodynamically nonspontaneous process forced by super-moist-adiabatic lapse rates. Yet water vapor plays vital roles in atmospheric thermodynamics and kinetics. Convective weather systems and storms in a dry atmosphere (e.g., dust devils) can extract only the work represented by partial neutralization of super-dry-adiabatic lapse rates to dry-adiabaticity. At typical atmospheric temperatures in the tropics, where convective weather systems and storms are most frequent and active, the moist-adiabatic lapse rate is much smaller (thus much closer to isothermality), and hence represents much more extractable work, than the dry—the thermodynamic advantage of water vapor. Moreover, the large heat of condensation (and to a lesser extent fusion) of water facilitates much faster heat transfer from Earth’s surface to the tropopause than is possible in a dry atmosphere, thereby facilitating much faster extraction of work, i.e., much greater power, than is possible in a dry atmosphere—the kinetic advantage of water vapor.Entropy2016-11-251812Article10.3390/e181204174171099-43002016-11-25doi: 10.3390/e18120417Jack Denur<![CDATA[Entropy, Vol. 18, Pages 421: The Information Geometry of Sparse Goodness-of-Fit Testing]]>
http://www.mdpi.com/1099-4300/18/12/421
This paper takes an information-geometric approach to the challenging issue of goodness-of-fit testing in the high dimensional, low sample size context where—potentially—boundary effects dominate. The main contributions of this paper are threefold: first, we present and prove two new theorems on the behaviour of commonly used test statistics in this context; second, we investigate—in the novel environment of the extended multinomial model—the links between information geometry-based divergences and standard goodness-of-fit statistics, allowing us to formalise relationships which have been missing in the literature; finally, we use simulation studies to validate and illustrate our theoretical results and to explore currently open research questions about the way that discretisation effects can dominate sampling distributions near the boundary. Novelly accommodating these discretisation effects contrasts sharply with the essentially continuous approach of skewness and other corrections flowing from standard higher-order asymptotic analysis.Entropy2016-11-241812Article10.3390/e181204214211099-43002016-11-24doi: 10.3390/e18120421Paul MarriottRadka SabolováGermain Van BeverFrank Critchley<![CDATA[Entropy, Vol. 18, Pages 422: Energy Efficiency Improvement in a Modified Ethanol Process from Acetic Acid]]>
http://www.mdpi.com/1099-4300/18/12/422
For the high utilization of abundant lignocellulose, which is difficult to convert directly into ethanol, an energy-efficient ethanol production process using acetic acid was examined, and its energy-saving performance, economics, and thermodynamic efficiency were compared with those of the conventional process. The raw ethanol synthesized from acetic acid and hydrogen was fed to the proposed ethanol concentration process. The proposed process utilizes an extended divided wall column (DWC), whose performance was investigated with a HYSYS simulation. The performance improvement of the proposed process includes a 27% saving in heating duty and a 41% reduction in cooling duty. The economic analysis shows a 16% saving in investment cost and a 24% decrease in utility cost over the conventional ethanol concentration process. The exergy analysis shows a 9.6% improvement in thermodynamic efficiency for the proposed process. Entropy 2016, 18(12), Article 422; published 2016-11-24; doi: 10.3390/e18120422. Young Kim.<![CDATA[Entropy, Vol. 18, Pages 420: On the Existence and Uniqueness of Solutions for Local Fractional Differential Equations]]>
http://www.mdpi.com/1099-4300/18/11/420
In this manuscript, we prove the existence and uniqueness of solutions for local fractional differential equations (LFDEs) with local fractional derivative operators (LFDOs). By using the contraction mapping theorem (CMT) and the increasing and decreasing theorem (IDT), existence and uniqueness results are obtained. Some examples are presented to illustrate the validity of our results. Entropy 2016, 18(11), Article 420; published 2016-11-23; doi: 10.3390/e18110420. Hossein Jafari, Hassan Jassim, Maysaa Al Qurashi, Dumitru Baleanu.<![CDATA[Entropy, Vol. 18, Pages 419: Periodic Energy Transport and Entropy Production in Quantum Electronics]]>
http://www.mdpi.com/1099-4300/18/11/419
The problem of time-dependent particle transport in quantum conductors is nowadays a well-established topic. In contrast, the way in which energy and heat flow in mesoscopic systems subjected to dynamical drivings is a relatively new subject that cross-fertilizes both fundamental developments in quantum thermodynamics and practical applications in nanoelectronics and quantum information. In this short review, we discuss from a thermodynamical perspective recent investigations of the nonstationary heat and work generated in quantum systems, emphasizing open questions and unsolved issues. Entropy 2016, 18(11), Review 419; published 2016-11-23; doi: 10.3390/e18110419. María Ludovico, Liliana Arrachea, Michael Moskalets, David Sánchez.<![CDATA[Entropy, Vol. 18, Pages 415: Simple Harmonic Oscillator Canonical Ensemble Model for Tunneling Radiation of Black Hole]]>
http://www.mdpi.com/1099-4300/18/11/415
A simple harmonic oscillator canonical ensemble model for Schwarzschild black hole quantum tunneling radiation is proposed in this paper. Firstly, the equivalence between the canonical ensemble model and Parikh–Wilczek’s tunneling method is introduced. Then, the radiated massless particles are considered as a collection of simple harmonic oscillators. Based on this model, we treat the black hole as a heat bath to derive the energy flux of the radiation. Finally, we apply the result to estimate the lifespan of a black hole. Entropy 2016, 18(11), Article 415; published 2016-11-23; doi: 10.3390/e18110415. Jinbo Yang, Tangmei He, Jingyi Zhang.<![CDATA[Entropy, Vol. 18, Pages 418: Prediction of Bearing Fault Using Fractional Brownian Motion and Minimum Entropy Deconvolution]]>
http://www.mdpi.com/1099-4300/18/11/418
In this paper, we propose a novel framework for the diagnosis of incipient bearing faults and for the trend prediction of weak faults that gradually aggravate, with the bearing vibration intensity as the characteristic parameter. For weak fault diagnosis, the proposed framework adopts improved minimum entropy deconvolution (MED) theory to identify the weak fault characteristics of mechanical equipment. Analysis of a large amount of field data shows that once a bearing develops a weak fault, the bearing vibration intensity exhibits not only random, non-stationary behavior but also long-range dependence (LRD). Therefore, a stochastic model with LRD, fractional Brownian motion (FBM), is proposed to evaluate and predict the condition of slowly varying bearing faults, a gradual process from the occurrence of a weak fault to its severe stage. For the FBM stochastic model, we mainly implement the derivation and the parameter identification of the model. This is the first study to apply the FBM stochastic model to the prediction of slowly varying faults. Experimental results show that the proposed methods achieve the best performance in incipient fault diagnosis and bearing condition trend prediction. Entropy 2016, 18(11), Article 418; published 2016-11-23; doi: 10.3390/e18110418. Wanqing Song, Ming Li, Jian-Kai Liang.<![CDATA[Entropy, Vol. 18, Pages 416: Analysis of the Complexity Entropy and Chaos Control of the Bullwhip Effect Considering Price of Evolutionary Game between Two Retailers]]>
http://www.mdpi.com/1099-4300/18/11/416
In this research, a model is established to represent a supply chain, which consists of one manufacturer and two retailers. The price-sensitive demand model is considered and the price game system is built according to the rule of bounded rationality as well as the entropy theory. With the increase of the price adjustment speed, the game system may go into chaos from the stable and periodic state. The bullwhip effect and inventory variance ratio of different stages that the system falls in are compared in real time. We also employ the delayed feedback control method to control the system and succeed in mitigating the bullwhip effect of the system. On the whole, the bullwhip effect and inventory variance ratio in the stable state are smaller than those in period-doubling and chaos. In the stable state, there is an optimal price adjustment speed to obtain both the lowest bullwhip effect and inventory variance ratio.Entropy2016-11-191811Article10.3390/e181104164161099-43002016-11-19doi: 10.3390/e18110416Junhai MaXiaogang MaWandong Lou<![CDATA[Entropy, Vol. 18, Pages 413: Existence of Solutions to a Nonlinear Parabolic Equation of Fourth-Order in Variable Exponent Spaces]]>
http://www.mdpi.com/1099-4300/18/11/413
This paper is devoted to studying the existence and uniqueness of weak solutions for an initial boundary problem of a nonlinear fourth-order parabolic equation with variable exponent, $v_t + \operatorname{div}\!\left(|\nabla \Delta v|^{p(x)-2} \nabla \Delta v\right) - |\Delta v|^{q(x)-2} \Delta v = g(x, v)$. By applying the Leray–Schauder fixed point theorem, the existence of weak solutions of the elliptic problem is given. Furthermore, the semi-discrete method yields the existence of weak solutions of the corresponding parabolic problem by constructing two approximate solutions. Entropy 2016, 18(11), Article 413; published 2016-11-18; doi: 10.3390/e18110413. Bo Liang, Xiting Peng, Chengyuan Qu.<![CDATA[Entropy, Vol. 18, Pages 414: Application of Sample Entropy Based LMD-TFPF De-Noising Algorithm for the Gear Transmission System]]>
http://www.mdpi.com/1099-4300/18/11/414
This paper investigates an improved noise reduction method and its application to gearbox vibration signal de-noising. A hybrid de-noising algorithm based on local mean decomposition (LMD), sample entropy (SE), and time-frequency peak filtering (TFPF) is proposed. TFPF is a classical filtering method in the time-frequency domain. However, it involves a trade-off: a short window length preserves signal amplitude well but reduces random noise poorly, whereas a long window length reduces random noise effectively but seriously attenuates signal amplitude. In order to achieve a good trade-off between preserving the valid signal amplitude and reducing random noise, LMD and SE are adopted to improve TFPF. Firstly, the original signal is decomposed into product functions (PFs) by LMD, and the SE value of each PF is calculated in order to classify the numerous PFs into a useful component, a mixed component, and a noise component; then, short-window TFPF is applied to the useful component, long-window TFPF is applied to the mixed component, and the noise component is removed; finally, the de-noised signal is obtained by reconstruction. Gearbox vibration signals are employed to verify the proposed algorithm, and the comparison results show that the proposed SE-LMD-TFPF achieves the best de-noising results compared to the traditional wavelet and TFPF methods. Entropy 2016, 18(11), Article 414; published 2016-11-18; doi: 10.3390/e18110414. Shaohui Ning, Zhennan Han, Zhijian Wang, Xuefeng Wu.<![CDATA[Entropy, Vol. 18, Pages 410: Information-Theoretic Analysis of Memoryless Deterministic Systems]]>
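The sample entropy statistic used here to rank the PFs can be sketched as follows. This is a generic SampEn implementation, not the paper's code; the parameter choices (embedding dimension m = 2, tolerance r = 0.2 times the standard deviation) are conventional defaults, not values stated in the abstract.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series; r is a fraction of std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        # All length-mm templates; count pairs (i < j) within tolerance
        # under the Chebyshev (max-coordinate) distance, excluding self-matches.
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        total = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += int(np.sum(dist <= tol))
        return total

    b = count_matches(m)       # matches at length m
    a = count_matches(m + 1)   # matches at length m + 1
    return float(np.inf) if a == 0 or b == 0 else -np.log(a / b)
```

A regular (e.g., periodic) component yields a low SampEn value and a noise-dominated component a high one, which is the property the classification step above relies on.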
http://www.mdpi.com/1099-4300/18/11/410
The information loss in deterministic, memoryless systems is investigated by evaluating the conditional entropy of the input random variable given the output random variable. It is shown that for a large class of systems the information loss is finite, even if the input has a continuous distribution. For systems with infinite information loss, a relative measure is defined and shown to be related to Rényi information dimension. As deterministic signal processing can only destroy information, it is important to know how this information loss affects the solution of inverse problems. Hence, we connect the probability of perfectly reconstructing the input to the information lost in the system via Fano-type bounds. The theoretical results are illustrated by example systems commonly used in discrete-time, nonlinear signal processing and communications.Entropy2016-11-171811Article10.3390/e181104104101099-43002016-11-17doi: 10.3390/e18110410Bernhard GeigerGernot Kubin<![CDATA[Entropy, Vol. 18, Pages 412: Symplectic Entropy as a Novel Measure for Complex Systems]]>
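For intuition on the central quantity of this abstract: for a discrete input X and a deterministic system Y = g(X), the information loss H(X|Y) reduces to H(X) − H(Y). A small sketch follows; the example quantizer g is a hypothetical illustration, not taken from the paper.

```python
import math
from collections import Counter

def entropy(pmf):
    """Shannon entropy (bits) of a probability mass function given as a dict."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def information_loss(pmf_x, g):
    """H(X|Y) for Y = g(X): since Y is a deterministic function of X,
    H(X|Y) = H(X) - H(Y)."""
    pmf_y = Counter()
    for value, p in pmf_x.items():
        pmf_y[g(value)] += p
    return entropy(pmf_x) - entropy(dict(pmf_y))
```

An invertible g loses nothing, while a two-to-one quantizer on a uniform input loses exactly one bit, matching the finite-loss regime discussed above.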
http://www.mdpi.com/1099-4300/18/11/412
Real systems are often complex, nonlinear, and noisy in various fields, including mathematics, natural science, and social science. We present the symplectic entropy (SymEn) measure as well as an analysis method based on SymEn to estimate the nonlinearity of a complex system by analyzing the given time series. The SymEn estimation is a kind of entropy based on symplectic principal component analysis (SPCA), which represents organized but unpredictable behaviors of systems. The key to SPCA is to preserve the global submanifold geometrical properties of the systems through a symplectic transform in the phase space, which is a kind of measure-preserving transform. The ability to preserve the global geometrical characteristics makes SymEn a test statistic for the detection of the nonlinear characteristics in several typical chaotic time series, and the stochastic characteristic in Gaussian white noise. The results are in agreement with findings in the approximate entropy (ApEn), the sample entropy (SampEn), and the fuzzy entropy (FuzzyEn). Moreover, the SymEn method is also used to analyze the nonlinearities of real signals (including the electroencephalogram (EEG) signals for Autism Spectrum Disorder (ASD) and healthy subjects, and the sound and vibration signals for mechanical systems). The results indicate that the SymEn estimation can be taken as a measure for the description of the nonlinear characteristics in the data collected from natural complex systems.Entropy2016-11-171811Article10.3390/e181104124121099-43002016-11-17doi: 10.3390/e18110412Min LeiGuang MengWenming ZhangJoshua WadeNilanjan Sarkar<![CDATA[Entropy, Vol. 18, Pages 407: Geometry Induced by a Generalization of Rényi Divergence]]>
http://www.mdpi.com/1099-4300/18/11/407
In this paper, we propose a generalization of Rényi divergence, and then we investigate its induced geometry. This generalization is given in terms of a φ-function, the same function that is used in the definition of non-parametric φ-families. The properties of φ-functions proved to be crucial in the generalization of Rényi divergence. Assuming appropriate conditions, we verify that the generalized Rényi divergence reduces, in a limiting case, to the φ-divergence. In the generalized statistical manifold, the φ-divergence induces a pair of dual connections $D^{(-1)}$ and $D^{(1)}$. We show that the family of connections $D^{(\alpha)}$ induced by the generalization of Rényi divergence satisfies the relation $D^{(\alpha)} = \frac{1-\alpha}{2} D^{(-1)} + \frac{1+\alpha}{2} D^{(1)}$, with $\alpha \in [-1, 1]$. Entropy 2016, 18(11), Article 407; published 2016-11-17; doi: 10.3390/e18110407. David de Souza, Rui Vigelis, Charles Cavalcante.<![CDATA[Entropy, Vol. 18, Pages 411: Multivariate Generalized Multiscale Entropy Analysis]]>
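For reference, the classical Rényi divergence that this paper generalizes can be computed as below for discrete distributions; the φ-function generalization itself is not reproduced here, and the example distributions are illustrative.

```python
import math

def renyi_divergence(p, q, alpha):
    """Classical Rényi divergence D_alpha(p||q) in bits,
    for discrete distributions p, q and alpha > 0, alpha != 1."""
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q))
    return math.log2(s) / (alpha - 1)
```

As expected for a divergence, it vanishes when p = q and is positive otherwise; the α → 1 limit (not implemented above) recovers the Kullback–Leibler divergence.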
http://www.mdpi.com/1099-4300/18/11/411
Multiscale entropy (MSE) was introduced in the 2000s to quantify systems’ complexity. MSE relies on (i) a coarse-graining procedure to derive a set of time series representing the system dynamics on different time scales; (ii) the computation of the sample entropy for each coarse-grained time series. A refined composite MSE (rcMSE)—based on the same steps as MSE—also exists. Compared to MSE, rcMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy for short time series. The multivariate versions of MSE (MMSE) and rcMSE (MrcMSE) have also been introduced. In the coarse-graining step used in MSE, rcMSE, MMSE, and MrcMSE, the mean value is used to derive representations of the original data at different resolutions. A generalization of MSE was recently published, using the computation of different moments in the coarse-graining procedure. However, so far, this generalization only exists for univariate signals. We therefore herein propose an extension of this generalized MSE to multivariate data. The multivariate generalized algorithms of MMSE and MrcMSE presented herein (MGMSE and MGrcMSE, respectively) are first analyzed through the processing of synthetic signals. We reveal that MGrcMSE shows better performance than MGMSE for short multivariate data. We then study the performance of MGrcMSE on two sets of short multivariate electroencephalograms (EEG) available in the public domain. We report that MGrcMSE may show better performance than MrcMSE in distinguishing different types of multivariate EEG data. MGrcMSE could therefore supplement MMSE or MrcMSE in the processing of multivariate datasets.Entropy2016-11-171811Article10.3390/e181104114111099-43002016-11-17doi: 10.3390/e18110411Anne Humeau-Heurtier<![CDATA[Entropy, Vol. 18, Pages 409: Entropy-Based Experimental Design for Optimal Model Discrimination in the Geosciences]]>
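The coarse-graining step common to MSE and the generalized variants described above can be sketched as follows. Using the block mean is the classical procedure; using the block variance is shown here as one example of "different moments", an illustrative assumption rather than the specific moments defined in the paper.

```python
import numpy as np

def coarse_grain(x, scale, moment="mean"):
    """Coarse-grain a 1-D series at the given scale by splitting it into
    non-overlapping blocks and summarizing each block with one moment.
    moment='mean' is the classical MSE graining; moment='var' illustrates
    a second-moment (generalized) variant."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale       # drop the incomplete trailing block
    blocks = x[:n].reshape(-1, scale)
    if moment == "mean":
        return blocks.mean(axis=1)
    if moment == "var":
        return blocks.var(axis=1)
    raise ValueError("moment must be 'mean' or 'var'")
```

The sample entropy of each coarse-grained series, computed over a range of scales, then yields the multiscale entropy profile.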
http://www.mdpi.com/1099-4300/18/11/409
Choosing between competing models lies at the heart of scientific work, and is a frequent motivation for experimentation. Optimal experimental design (OD) methods maximize the benefit of experiments towards a specified goal. We advance and demonstrate an OD approach to maximize the information gained towards model selection. We make use of so-called model choice indicators, which are random variables with an expected value equal to Bayesian model weights. Their uncertainty can be measured with Shannon entropy. Since the experimental data are still random variables in the planning phase of an experiment, we use mutual information (the expected reduction in Shannon entropy) to quantify the information gained from a proposed experimental design. For implementation, we use the Preposterior Data Impact Assessor framework (PreDIA), because it is free of the lower-order approximations of mutual information often found in the geosciences. In comparison to other studies in statistics, our framework is not restricted to sequential design or to discrete-valued data, and it can handle measurement errors. As an application example, we optimize an experiment about the transport of contaminants in clay, featuring the problem of choosing between competing isotherms to describe sorption. We compare the results of optimizing towards maximum model discrimination with an alternative OD approach that minimizes the overall predictive uncertainty under model choice uncertainty.Entropy2016-11-171811Article10.3390/e181104094091099-43002016-11-17doi: 10.3390/e18110409Wolfgang NowakAnneli Guthke<![CDATA[Entropy, Vol. 18, Pages 408: Global Atmospheric Dynamics Investigated by Using Hilbert Frequency Analysis]]>
http://www.mdpi.com/1099-4300/18/11/408
The Hilbert transform is a well-known tool of time series analysis that has been widely used to investigate oscillatory signals that resemble a noisy periodic oscillation, because it allows instantaneous phase and frequency to be estimated, which in turn uncovers interesting properties of the underlying process that generates the signal. Here we use this tool to analyze atmospheric data: we consider daily-averaged Surface Air Temperature (SAT) time series recorded over a regular grid of locations covering the Earth’s surface. From each SAT time series, we calculate the instantaneous frequency time series by considering the Hilbert analytic signal. The properties of the obtained frequency data set are investigated by plotting the map of the average frequency and the map of the standard deviation of the frequency fluctuations. The average frequency map reveals well-defined large-scale structures: in the extra-tropics, the average frequency in general corresponds to the expected one-year period of solar forcing, while in the tropics, a different behaviour is found, with particular regions having a faster average frequency. In the standard deviation map, large-scale structures are also found, which tend to be located over regions of strong annual precipitation. Our results demonstrate that Hilbert analysis of SAT time-series uncovers meaningful information, and is therefore a promising tool for the study of other climatological variables.Entropy2016-11-161811Article10.3390/e181104084081099-43002016-11-16doi: 10.3390/e18110408Dario ZappalàMarcelo BarreiroCristina Masoller<![CDATA[Entropy, Vol. 18, Pages 406: Thermodynamics of Noncommutative Quantum Kerr Black Holes]]>
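The instantaneous-frequency estimate described above can be sketched with a numpy-only analytic signal. The FFT construction below is the standard one; the sampling rate and the clean one-cycle-per-year test signal are illustrative assumptions, not the SAT data.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT-based Hilbert transform:
    zero the negative frequencies, double the positive ones."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (cycles per unit time) from the unwrapped phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 365.0                        # e.g., daily sampling, in samples per year
t = np.arange(0, 10, 1 / fs)      # ten "years" of data
sat_like = np.sin(2 * np.pi * t)  # clean annual oscillation (assumed test signal)
freq = instantaneous_frequency(sat_like, fs)
```

For this clean annual cycle the estimated frequency stays at one cycle per year, the expected extra-tropical behaviour noted above; average-frequency maps summarize exactly this quantity per grid location.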
http://www.mdpi.com/1099-4300/18/11/406
The thermodynamic formalism for rotating black holes, characterized by noncommutative and quantum corrections, is constructed. From a fundamental thermodynamic relation, the equations of state and thermodynamic response functions are explicitly given, and the effect of noncommutativity and quantum correction is discussed. It is shown that the well-known divergence exhibited in specific heat is not removed by any of these corrections. However, regions of thermodynamic stability are affected by noncommutativity, increasing the available states for which some thermodynamic stability conditions are satisfied.Entropy2016-11-161811Article10.3390/e181104064061099-43002016-11-16doi: 10.3390/e18110406Lenin Escamilla-HerreraEri Mena-BarbozaJosé Torres-Arenas<![CDATA[Entropy, Vol. 18, Pages 405: Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection]]>
http://www.mdpi.com/1099-4300/18/11/405
Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradation in performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.Entropy2016-11-151811Article10.3390/e181104054051099-43002016-11-15doi: 10.3390/e18110405Jaesung LeeDae-Won Kim<![CDATA[Entropy, Vol. 18, Pages 404: Decision-Making Model under Risk Assessment Based on Entropy]]>
http://www.mdpi.com/1099-4300/18/11/404
Decision-making under risk assessment involves dealing with uncertainty, especially in projects such as tunnel construction. Risk control should include not only measures to reduce the possible consequences of an incident, but also exploration measures (information-collecting measures) to reduce the uncertainty of the incident. The classical risk assessment model in engineering is R = P × C, which only accounts for the assessment of, and decision-making about, possible consequences; it cannot provide theoretical guidance for taking exploration measures. This paper presents an advanced methodology for assessing the effectiveness of exploration measures in decision-making. The methodology classifies risk into two attributes: hazard (expected value) and uncertainty (entropy). On this basis, a generalized model of decision-making under risk assessment is proposed. This model extends the classical assessment model to a more general case, and it explains both the rationale for taking exploration measures and how the effectiveness of such measures can be assessed. The model can also serve as a descriptive model for many risk problems and provide a decision-making basis for a variety of risk types. Moreover, the assessment process and calculation method are illustrated with case studies. Entropy 2016, 18(11), Article 404; published 2016-11-15; doi: 10.3390/e18110404. Xin Dong, Hao Lu, Yuanpu Xia, Ziming Xiong.<![CDATA[Entropy, Vol. 18, Pages 399: A Concept Lattice for Semantic Integration of Geo-Ontologies Based on Weight of Inclusion Degree Importance and Information Entropy]]>
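The two risk attributes described in this abstract can be sketched numerically as follows; the outcome distribution used is a made-up example, not a case study from the paper.

```python
import math

def hazard(probs, consequences):
    """Hazard attribute: the expected consequence, generalizing R = P x C."""
    return sum(p * c for p, c in zip(probs, consequences))

def uncertainty(probs):
    """Uncertainty attribute: Shannon entropy (bits) of the outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before an exploration measure: two equally likely outcomes (hypothetical).
before = [0.5, 0.5]
# After collecting information: same outcomes, better resolved (hypothetical).
after = [0.9, 0.1]
```

An exploration measure leaves the consequences unchanged but sharpens the probabilities, so it lowers the uncertainty attribute while the hazard attribute may move either way; this is exactly the effect the classical R = P × C model cannot represent.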
http://www.mdpi.com/1099-4300/18/11/399
Constructing a merged concept lattice with formal concept analysis (FCA) is an important research direction in the field of integrating multi-source geo-ontologies. Extracting essential geographical properties and reducing the concept lattice are two key points of previous research. A formal integration method is proposed to address the challenges in these two areas. We first extract essential properties from multi-source geo-ontologies and use FCA to build a merged formal context. Second, the combined importance weight of each single attribute of the formal context is calculated by introducing the inclusion degree importance from rough set theory and information entropy; a weighted formal context is then built from the merged formal context. Third, a combined weighted concept lattice is established from the weighted formal context with FCA, and the importance weight value of every concept is defined as the sum of the weights of the attributes belonging to the concept’s intent. Finally, the semantic granularity of a concept is defined by its importance weight; we then gradually reduce the weighted concept lattice by setting a diminishing threshold of semantic granularity. Additionally, all of the reduced lattices are organized into a regular hierarchical structure based on the threshold of semantic granularity. A workflow is designed to demonstrate this procedure, and a case study is conducted to show the feasibility and validity of the method and of the procedure for integrating multi-source geo-ontologies. Entropy 2016, 18(11), Article 399; published 2016-11-15; doi: 10.3390/e18110399. Jia Xiao, Zongyi He.<![CDATA[Entropy, Vol. 18, Pages 397: Increase in Complexity and Information through Molecular Evolution]]>
http://www.mdpi.com/1099-4300/18/11/397
Biological evolution progresses by essentially three different mechanisms: (I) optimization of properties through natural selection in a population of competitors; (II) development of new capabilities through cooperation of competitors caused by catalyzed reproduction; and (III) variation of genetic information through mutation or recombination. Simplified evolutionary processes combine two out of the three mechanisms: Darwinian evolution combines competition (I) and variation (III) and is represented by the quasispecies model; major transitions involve cooperation (II) of competitors (I); and the third combination, cooperation (II) and variation (III), provides new insights into the role of mutations in evolution. A minimal kinetic model based on simple molecular mechanisms for reproduction, catalyzed reproduction, and mutation is introduced, cast into ordinary differential equations (ODEs), and analyzed mathematically in the form of its implementation in a flow reactor. Stochastic aspects are investigated through computer simulation of trajectories of the corresponding chemical master equations. The competition-cooperation model, mechanisms (I) and (II), gives rise to selection at low levels of resources and leads to symbiotic cooperation when the required material is abundant. Accordingly, it provides a kind of minimal system that can undergo a (major) transition. Stochastic effects leading to extinction of the population through self-enhancing oscillations destabilize symbioses of four or more partners. Mutations (III) are not only the basis of change in phenotypic properties but can also prevent extinction, provided the mutation rates are sufficiently large.
Threshold phenomena are observed for all three combinations: the quasispecies model leads to an error threshold, the competition-cooperation model allows for an identification of a resource-triggered bifurcation with the transition, and for the cooperation-mutation model a kind of stochastic threshold for survival through sufficiently high mutation rates is observed. The evolutionary processes in the model are accompanied by gains in information on the environment of the evolving populations. In order to provide a useful basis for comparison, two forms of information, syntactic (Shannon) information and semantic information, are introduced here. Both forms of information are defined for simple evolving systems at the molecular level. Selection leads primarily to an increase in semantic information, in the sense that higher fitness allows for more efficient exploitation of the environment and provides the basis for more progeny, whereas understanding transitions involves characteristic contributions from both Shannon information and semantic information.Entropy2016-11-141811Review10.3390/e181103973971099-43002016-11-14doi: 10.3390/e18110397Peter Schuster<![CDATA[Entropy, Vol. 18, Pages 398: Fractional-Order Identification and Control of Heating Processes with Non-Continuous Materials]]>
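The error threshold mentioned in the abstract above can be illustrated with the standard single-peak quasispecies landscape. The formulas below are the textbook approximation, not taken from the paper: sigma is the superiority (relative fitness) of the master sequence and nu the sequence length, and the master is lost once the per-digit error rate p exceeds roughly ln(sigma)/nu.

```python
import math

def master_frequency(sigma, nu, p):
    """Stationary relative frequency of the master sequence in the
    single-peak quasispecies model: x = (sigma*Q - 1)/(sigma - 1),
    with replication accuracy Q = (1-p)**nu. A non-positive value
    signals loss of the master sequence (the error catastrophe)."""
    Q = (1.0 - p) ** nu
    return (sigma * Q - 1.0) / (sigma - 1.0)

def error_threshold(sigma, nu):
    """Approximate critical per-digit error rate p* ~ ln(sigma)/nu."""
    return math.log(sigma) / nu

p_star = error_threshold(10.0, 100)        # roughly 0.023
below = master_frequency(10.0, 100, 0.01)  # below threshold: master survives
above = master_frequency(10.0, 100, 0.05)  # above threshold: master is lost
```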
http://www.mdpi.com/1099-4300/18/11/398
The paper presents a fractional-order model of a heating process and a comparison of fractional and standard PI controllers in its closed-loop system. First, an enhanced fractional-order model for the heating process on non-continuous materials has been identified through a fitting algorithm on experimental data. Experiments were carried out on a finite-length beam filled with three non-continuous materials (air, styrofoam, metal buckshot) in order to identify a model in the frequency domain and to obtain a relationship between the fractional order of the heating process and the properties of the different materials. A comparison between the experimental model and the theoretical one has been performed, proving a significant enhancement of the fitting performance. Moreover, the modelling results confirm the fractional nature of heating processes when diffusion occurs in non-continuous composite materials, and they show how the model’s fractional order can be used as a characteristic parameter for non-continuous materials with different composition and structure. Finally, three different kinds of controllers have been applied and compared in order to keep the beam temperature constant at a fixed length.Entropy2016-11-121811Article10.3390/e181103983981099-43002016-11-12doi: 10.3390/e18110398Riccardo CaponettoFrancesca SapuppoVincenzo TomaselloGuido MaionePaolo Lino<![CDATA[Entropy, Vol. 18, Pages 396: Kernel Density Estimation on the Siegel Space with an Application to Radar Processing]]>
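A common numerical handle on fractional-order dynamics like those identified in the abstract above is the Grünwald-Letnikov discretization of the fractional derivative. This is a generic sketch, not the fitting algorithm or controller used in the paper; for integer alpha it reduces to the familiar finite differences.

```python
def gl_fractional_derivative(f, alpha, h, n_terms):
    """Grunwald-Letnikov approximation of the order-alpha derivative of
    the samples f taken with step h:
        D^a f(t_i) ~ h**(-a) * sum_k w_k * f[i - k],
    where the binomial weights satisfy w_0 = 1 and
    w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n_terms):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    out = []
    for i in range(len(f)):
        acc = 0.0
        for k in range(min(i + 1, n_terms)):
            acc += w[k] * f[i - k]
        out.append(acc / h ** alpha)
    return out

# Sanity checks: alpha = 1 gives the backward difference, alpha = 0 the identity.
samples = [0.0, 1.0, 2.0, 3.0, 4.0]
d1 = gl_fractional_derivative(samples, 1.0, 1.0, 5)
d0 = gl_fractional_derivative(samples, 0.0, 1.0, 5)
```

A half-derivative (alpha = 0.5) of the same samples interpolates between these two limits, which is the sense in which heat diffusion in composite media can show non-integer order.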
http://www.mdpi.com/1099-4300/18/11/396
This paper studies probability density estimation on the Siegel space. The Siegel space is a generalization of the hyperbolic space. Its Riemannian metric provides an interesting structure for the Toeplitz-block Toeplitz matrices that appear in the covariance estimation of radar signals. The main techniques of probability density estimation on Riemannian manifolds are reviewed. For computational reasons, we chose to focus on kernel density estimation. The main result of the paper is the expression of Pelletier’s kernel density estimator. The computation of the kernels is made possible by the symmetric structure of the Siegel space. The method is applied to density estimation of reflection coefficients from radar observations.Entropy2016-11-111811Article10.3390/e181103963961099-43002016-11-11doi: 10.3390/e18110396Emmanuel ChevallierThibault ForgetFrédéric BarbarescoJesus Angulo<![CDATA[Entropy, Vol. 18, Pages 394: Rectification and Non-Gaussian Diffusion in Heterogeneous Media]]>
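As a toy version of kernel density estimation on such a manifold, one can work on the Poincaré upper half-plane, the simplest space that the Siegel space generalizes. The sketch below averages a Gaussian kernel of the geodesic distance; the normalization and volume-density correction of Pelletier's actual estimator are omitted, so this is illustrative only and not the paper's construction.

```python
import math

def hyperbolic_distance(z1, z2):
    """Geodesic distance on the Poincare upper half-plane, with points
    given as (x, y), y > 0: d = acosh(1 + |z1 - z2|^2 / (2 y1 y2))."""
    (x1, y1), (x2, y2) = z1, z2
    sq = (x1 - x2) ** 2 + (y1 - y2) ** 2
    return math.acosh(1.0 + sq / (2.0 * y1 * y2))

def kde(samples, z, h):
    """Unnormalized kernel density estimate at z: average a Gaussian
    kernel of the geodesic distance over the samples (bandwidth h)."""
    return sum(math.exp(-(hyperbolic_distance(z, s) / h) ** 2)
               for s in samples) / len(samples)

pts = [(0.0, 1.0), (0.1, 1.2), (-0.1, 0.9)]
near = kde(pts, (0.0, 1.0), 0.5)  # query close to the samples
far = kde(pts, (5.0, 1.0), 0.5)   # query far from the samples
```

On the true Siegel space the same pattern applies, with the geodesic distance between symmetric matrices with positive-definite imaginary part replacing the half-plane formula.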
http://www.mdpi.com/1099-4300/18/11/394
We show that when Brownian motion takes place in a heterogeneous medium, the presence of local forces and transport coefficients leads to deviations from a Gaussian probability distribution that make the ratio between forward and backward probabilities depend on the nature of the host medium, on the local forces, and also on time. We have applied our results to two situations: diffusion in a disordered medium and diffusion in a confined system. For both scenarios, we have shown that our theoretical predictions are in very good agreement with numerical results. Moreover, we have shown that the deviations from the Gaussian solution lead to the onset of rectification. Our predictions could be used to detect the presence of local forces and to characterize the intrinsic short-scale properties of the host medium, a problem of current interest in the study of micro- and nano-systems.Entropy2016-11-111811Article10.3390/e181103943941099-43002016-11-11doi: 10.3390/e18110394Paolo MalgarettiIgnacio PagonabarragaJ. Rubi<![CDATA[Entropy, Vol. 18, Pages 395: Unextendible Mutually Unbiased Bases (after Mandayam, Bandyopadhyay, Grassl and Wootters)]]>
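The effect of a space-dependent transport coefficient can be illustrated with a minimal Langevin simulation. The diffusivity profile and all parameters below are our own illustrative choices, not the systems studied in the paper: in the Itô convention the heterogeneity contributes a drift D'(x), which biases displacements toward the high-diffusivity region, a simple form of the rectification discussed above.

```python
import math
import random

def simulate(n_paths, n_steps, dt, seed=0):
    """Overdamped Brownian motion with space-dependent diffusivity
    D(x) = 1 + 0.5*tanh(x) (an arbitrary illustrative profile).
    Ito convention: dx = D'(x) dt + sqrt(2 D(x) dt) * N(0, 1)."""
    rng = random.Random(seed)
    D = lambda x: 1.0 + 0.5 * math.tanh(x)
    dD = lambda x: 0.5 / math.cosh(x) ** 2  # D'(x), always positive here
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += dD(x) * dt + math.sqrt(2.0 * D(x) * dt) * rng.gauss(0.0, 1.0)
        finals.append(x)
    return finals

xs = simulate(2000, 200, 0.01)
mean_disp = sum(xs) / len(xs)
# With D'(x) > 0 everywhere, the ensemble mean drifts toward positive x
# even though no external force acts: forward and backward displacement
# probabilities are unequal, in line with the effect described above.
```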
http://www.mdpi.com/1099-4300/18/11/395
We consider questions posed in a recent paper of Mandayam et al. (2014) on the nature of “unextendible mutually unbiased bases.” We describe a conceptual framework for studying these questions, using a connection, proved by the author in Thas (2009), between the set of nonidentity generalized Pauli operators on the Hilbert space of N d-level quantum systems, d a prime, and the geometry of non-degenerate alternating bilinear forms of rank N over the finite field F d . We then supply short alternative proofs of results obtained in Mandayam et al. (2014), as well as new general bounds for the problems considered in loc. cit. In this setting, we also settle Conjecture 1 of Mandayam et al. (2014) and speculate on variations of this conjecture.Entropy2016-11-111811Article10.3390/e181103953951099-43002016-11-11doi: 10.3390/e18110395Koen Thas
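The generalized Pauli operators referred to in the abstract above are generated by the shift and clock matrices on a single d-level system. A small self-contained sketch for d = 3 verifying the Weyl commutation relation ZX = wXZ, with w a primitive d-th root of unity:

```python
import cmath

def generalized_paulis(d):
    """Shift (X) and clock (Z) matrices generating the generalized Pauli
    group for one d-level system: X|j> = |j+1 mod d>, Z|j> = w^j |j>,
    where w = exp(2*pi*i/d)."""
    w = cmath.exp(2j * cmath.pi / d)
    X = [[1.0 if i == (j + 1) % d else 0.0 for j in range(d)] for i in range(d)]
    Z = [[w ** i if i == j else 0.0 for j in range(d)] for i in range(d)]
    return X, Z, w

def matmul(A, B):
    d = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

d = 3
X, Z, w = generalized_paulis(d)
ZX, XZ = matmul(Z, X), matmul(X, Z)
# Weyl commutation relation: Z X = w * X Z
weyl_ok = all(abs(ZX[i][j] - w * XZ[i][j]) < 1e-12
              for i in range(d) for j in range(d))
```

The nonidentity products X^a Z^b (up to phase) are the operators whose commutation structure the paper translates into the geometry of alternating bilinear forms.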