Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy

Entropy, Vol. 19, Pages 91: On the Entropy of Deformed Phase Space Black Hole and the Cosmological Constant
http://www.mdpi.com/1099-4300/19/3/91
In this paper, we study the effects of noncommutative phase space deformations on the Schwarzschild black hole. This idea has previously been studied in Friedmann–Robertson–Walker (FRW) cosmology, where this “noncommutativity” provides a simple mechanism that can explain the origin of the cosmological constant. Here, we obtain the same relationship between the cosmological constant and the deformation parameter that appears in deformed phase space cosmology, but in the context of deformed phase space black holes. This is achieved by comparing the entropy of the deformed Schwarzschild black hole with the entropy of the Schwarzschild–de Sitter black hole.

Entropy 2017, 19(3), 91; Article; doi: 10.3390/e19030091; published 28 February 2017. Authors: Andrés Crespo-Hernández, Eri Mena-Barboza, Miguel Sabido.

Entropy, Vol. 19, Pages 92: Use of Accumulated Entropies for Automated Detection of Congestive Heart Failure in Flexible Analytic Wavelet Transform Framework Based on Short-Term HRV Signals
http://www.mdpi.com/1099-4300/19/3/92
In the present work, an automated method to diagnose Congestive Heart Failure (CHF) using Heart Rate Variability (HRV) signals is proposed. The method is based on the Flexible Analytic Wavelet Transform (FAWT), which decomposes the HRV signals into different sub-band signals. Accumulated Fuzzy Entropy (AFEnt) and Accumulated Permutation Entropy (APEnt) are then computed over cumulative sums of these sub-band signals. This provides complexity analysis using fuzzy and permutation entropies at different frequency scales. We extracted 20 features from the signals obtained at different frequency scales of the HRV signals. The Bhattacharyya ranking method is used to rank the extracted features for HRV signals of three different lengths (500, 1000 and 2000 samples). These ranked features are fed to a Least Squares Support Vector Machine (LS-SVM) classifier. Our proposed system obtained a sensitivity of 98.07%, a specificity of 98.33% and an accuracy of 98.21% for HRV signals of 500-sample length. It yielded a sensitivity of 97.95%, a specificity of 98.07% and an accuracy of 98.01% for signals of 1000-sample length, and a sensitivity of 97.76%, a specificity of 97.67% and an accuracy of 97.71% for signals of 2000-sample length. Our automated system can aid clinicians in the accurate detection of CHF using HRV signals. It can be installed in hospitals, polyclinics and remote villages where there is no access to cardiologists.

Entropy 2017, 19(3), 92; Article; doi: 10.3390/e19030092; published 27 February 2017. Authors: Mohit Kumar, Ram Pachori, U. Acharya.

Entropy, Vol. 19, Pages 93: An Entropy-Assisted Shielding Function in DDES Formulation for the SST Turbulence Model
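The entropy features in the CHF-detection abstract above build on standard ordinal measures computed per (sub-band) signal. A minimal stdlib-only sketch of normalized permutation entropy (the function name and parameters are illustrative, not the authors' code; the fuzzy-entropy and accumulation steps would be layered on top of a per-signal measure like this one):

```python
import math

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe) of a 1-D signal in [0, 1]."""
    counts = {}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = tuple(signal[i + j * delay] for j in range(order))
        # Ordinal pattern: the ranking of values within the window
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(math.factorial(order))  # normalize by max entropy
```

A strictly monotone signal produces a single ordinal pattern and therefore zero entropy, while an irregular signal approaches 1.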
http://www.mdpi.com/1099-4300/19/3/93
The intent of shielding functions in delayed detached-eddy simulation (DDES) methods is to preserve the wall boundary layers in Reynolds-averaged Navier–Stokes (RANS) mode, avoiding possible modeled stress depletion (MSD) or even unphysical separation due to grid refinement. An entropy function fs is introduced to construct a DDES formulation for the k-ω shear stress transport (SST) model, whose performance is extensively examined on a range of attached and separated flows (flat-plate flow, circular cylinder flow, and supersonic cavity-ramp flow). Two further forms of shielding function are included for comparison: one uses the blending function F2 of SST, the other adopts the recalibrated shielding function fd_cor of the DDES version based on the Spalart–Allmaras (SA) model. In general, none of the shielding functions impairs the vortices in fully separated flows. However, for flows including an attached boundary layer, both F2 and the recalibrated fd_cor are found to be too conservative to resolve the unsteady flow content. In contrast, fs is built on the theory of energy dissipation and is independent of any particular turbulence model; it shows a generic advantage by properly balancing the need to preserve the RANS-modeled regions in wall boundary layers against the need to generate unsteady turbulent structures in detached areas.

Entropy 2017, 19(3), 93; Article; doi: 10.3390/e19030093; published 27 February 2017. Authors: Ling Zhou, Rui Zhao, Xiao-Pan Shi.

Entropy, Vol. 19, Pages 90: A LiBr-H2O Absorption Refrigerator Incorporating a Thermally Activated Solution Pumping Mechanism
http://www.mdpi.com/1099-4300/19/3/90
This paper provides an illustrated description of a proposed LiBr-H2O vapour absorption refrigerator which uses a thermally activated solution pumping mechanism, combining controlled variations in generator vapour pressure with the changes these produce in the static-head pressure difference to circulate the absorbent solution between the generator and absorber vessels. The proposed system is different from, and potentially more efficient than, a previously proposed bubble-pump system, and it avoids the need for the electrically powered circulation pump found in most conventional LiBr absorption refrigerators. The paper goes on to provide a sample set of calculations showing that the coefficient of performance values of the proposed cycle are similar to those found for conventional cycles. The theoretical results compare favourably with some preliminary experimental results, which are also presented for the first time in this paper. The paper ends by proposing an outline design for an innovative steam valve, a key component needed to control the solution pumping mechanism.

Entropy 2017, 19(3), 90; Article; doi: 10.3390/e19030090; published 26 February 2017. Author: Ian Eames.

Entropy, Vol. 19, Pages 89: Optimization of Alpha-Beta Log-Det Divergences and their Application in the Spatial Filtering of Two Class Motor Imagery Movements
http://www.mdpi.com/1099-4300/19/3/89
The Alpha-Beta Log-Det divergences for positive definite matrices are flexible divergences that are parameterized by two real constants and specialize to several relevant classical cases, such as the squared Riemannian metric, Stein's loss, and the S-divergence. A novel classification criterion based on these divergences is optimized to address the problem of classifying motor imagery movements. The paper is divided into three main sections addressing this problem: (1) first, it is proven that a suitable scaling of the class-conditional covariance matrices can link the Common Spatial Pattern (CSP) solution, with a predefined number of spatial filters per class, to its representation as a divergence optimization problem, by making their different filter selection policies compatible; (2) a closed-form formula for the gradient of the Alpha-Beta Log-Det divergences is derived, which allows optimization to be performed and eases their use in many practical applications; (3) finally, following the work of Samek et al. (2014), which proposed robust spatial filtering of motor imagery movements based on the beta-divergence, the optimization of the Alpha-Beta Log-Det divergences is applied to this problem. The resulting subspace algorithm provides a unified framework for testing the performance and robustness of several divergences in different scenarios.

Entropy 2017, 19(3), 89; Article; doi: 10.3390/e19030089; published 25 February 2017. Authors: Deepa Thiyam, Sergio Cruces, Javier Olias, Andrzej Cichocki.

Entropy, Vol. 19, Pages 88: Systematic Analysis of the Non-Extensive Statistical Approach in High Energy Particle Collisions—Experiment vs. Theory
http://www.mdpi.com/1099-4300/19/3/88
The analysis of high-energy particle collisions is an excellent testbed for the non-extensive statistical approach. In these reactions we are far from the thermodynamical limit. In small colliding systems, such as electron-positron or nuclear collisions, the number of particles is several orders of magnitude smaller than the Avogadro number; therefore, finite-size and fluctuation effects strongly influence the final-state one-particle energy distributions. Because of this, the description of the identified hadron spectra with the simple Boltzmann–Gibbs thermodynamical approach is insufficient. These spectra can instead be described very well with Tsallis–Pareto distributions, derived from non-extensive thermodynamics. Using the q-entropy formula, we interpret the microscopic physics in terms of the Tsallis q and T parameters. In this paper we review these parameters, analyzing identified hadron spectra from recent years in a wide center-of-mass energy range. We demonstrate that the fitted Tsallis parameters depend on the center-of-mass energy and on the particle species (mass). Our findings are described well by a QCD (Quantum Chromodynamics) inspired parton evolution ansatz. Based on this comprehensive study, both the mesonic and baryonic components are found to be non-extensive (q > 1), and a mass-ordered hierarchy is observed in the parameter T. We also study and compare in detail the theoretically obtained parameters for the PYTHIA8 Monte Carlo generator, perturbative QCD, and quark coalescence models.

Entropy 2017, 19(3), 88; Article; doi: 10.3390/e19030088; published 24 February 2017. Authors: Gábor Bíró, Gergely Barnaföldi, Tamás Biró, Károly Ürmössy, Ádám Takács.

Entropy, Vol. 19, Pages 86: Motion Sequence Decomposition-Based Hybrid Entropy Feature and Its Application to Fault Diagnosis of a High-Speed Automatic Mechanism
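The Tsallis–Pareto (q-exponential) spectral shape referenced in the abstract above reduces to the Boltzmann–Gibbs exponential in the q → 1 limit. A small numerical sketch of that limit (function names are illustrative; T and E are in the same energy units):

```python
import math

def tsallis_pareto(E, q, T):
    """Tsallis-Pareto (q-exponential) spectral shape: (1 + (q-1) E/T)^(-1/(q-1))."""
    return (1.0 + (q - 1.0) * E / T) ** (-1.0 / (q - 1.0))

def boltzmann_gibbs(E, T):
    """Boltzmann-Gibbs exponential, the q -> 1 limit of the shape above."""
    return math.exp(-E / T)
```

For q slightly above 1 the two shapes nearly coincide, while for q clearly above 1 the Tsallis–Pareto form develops the power-law tail seen in hadron spectra.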
http://www.mdpi.com/1099-4300/19/3/86
High-speed automatic weapons play an important role in the field of national defense. However, current research on the reliability analysis of automatic mechanisms principally relies on simulations, because experimental data are difficult to collect in real life. Unlike rotating machinery, a high-speed automaton needs to accomplish a complex motion consisting of a series of impacts. In addition to strong noise, the impacts generated by different components of the automaton interfere with each other, and there has been no effective approach to cope with this in the fault diagnosis of automatic mechanisms. This paper proposes a motion sequence decomposition approach, combined with modern signal processing techniques, for effective fault detection in high-speed automatons. We first investigate the entire working procedure of the automatic mechanism and calculate the corresponding action times of the travel involved. The vibration signal collected from the shooting experiment is then divided into a number of impacts corresponding to the action order. Only the segment generated by a faulty component is isolated from the original impacts, according to the action time of that component. Wavelet packet decomposition (WPD) is first applied to the resulting signals to investigate their energy distribution, and the components with higher energy are selected for feature extraction. Three information entropy features are utilized to distinguish the various states of the automaton using empirical mode decomposition (EMD). A gray-wolf optimization (GWO) algorithm is introduced to improve the performance of the support vector machine (SVM) classifier. We carried out shooting experiments to collect vibration data for demonstration of the proposed work. Experimental results show that the proposed approach is effective for fault diagnosis of a high-speed automaton and can be applied in real applications. Moreover, the GWO provides a competitive diagnosis result compared with the genetic algorithm (GA) and the particle swarm optimization (PSO) algorithm.

Entropy 2017, 19(3), 86; Article; doi: 10.3390/e19030086; published 24 February 2017. Authors: Baoxiang Wang, Hongxia Pan, Heng Du.

Entropy, Vol. 19, Pages 87: Entropy, Topological Theories and Emergent Quantum Mechanics
http://www.mdpi.com/1099-4300/19/3/87
The classical thermostatics of equilibrium processes is shown to possess a quantum mechanical dual theory with a finite-dimensional Hilbert space of quantum states. Specifically, the kernel of a certain Hamiltonian operator becomes the Hilbert space of quasistatic quantum mechanics. The relation of thermostatics to topological field theory is also discussed in the context of the emergent approach to quantum theory, where the concept of entropy plays a key role.

Entropy 2017, 19(3), 87; Article; doi: 10.3390/e19030087; published 23 February 2017. Authors: D. Cabrera, P. de Córdoba, J. Isidro, J. Molina.

Entropy, Vol. 19, Pages 79: Using k-Mix-Neighborhood Subdigraphs to Compute Canonical Labelings of Digraphs
http://www.mdpi.com/1099-4300/19/2/79
This paper presents a novel theory and method for calculating the canonical labelings of digraphs, whose definition is entirely different from the traditional definition of Nauty. It indicates the mutual relationships that exist between the canonical labeling of a digraph and the canonical labeling of its complement graph. It systematically examines the link between computing the canonical labeling of a digraph and the k-neighborhood and k-mix-neighborhood subdigraphs. To facilitate the presentation, it introduces several concepts, including the mix diffusion outdegree sequence and the entire mix diffusion outdegree sequence. For each node of a digraph G, it assigns an attribute m_NearestNode to enhance the accuracy of calculating the canonical labeling. Four theorems proved here demonstrate how to determine the first nodes added into MaxQ(G), and two further theorems deal with identifying the second nodes added into MaxQ(G). When computing Cmax(G), if MaxQ(G) already contains the first i vertices u1, u2, ⋯, ui, the Diffusion Theorem provides a guideline on how to choose the subsequent node of MaxQ(G). Furthermore, the Mix Diffusion Theorem shows that the (i+1)th vertex of MaxQ(G) for computing Cmax(G) is selected from the open mix-neighborhood subdigraph N++(Q) of the node set Q = {u1, u2, ⋯, ui}. Two further theorems calculate the Cmax(G) of disconnected digraphs. Four algorithms implemented here illustrate how to calculate MaxQ(G) of a digraph. Through software testing, the correctness of our algorithms is preliminarily verified. Our method can be utilized to mine frequent subdigraphs. We also conjecture that if there exists a vertex v ∈ S+(G) satisfying Cmax(G − v) ⩽ Cmax(G − w) for each w ∈ S+(G) with w ≠ v, then u1 = v for MaxQ(G).

Entropy 2017, 19(2), 79; Article; doi: 10.3390/e19020079; published 22 February 2017. Authors: Jianqiang Hao, Yunzhan Gong, Yawen Wang, Li Tan, Jianzhi Sun.

Entropy, Vol. 19, Pages 85: Quantifying Synergistic Information Using Intermediate Stochastic Variables †
http://www.mdpi.com/1099-4300/19/2/85
Quantifying synergy among stochastic variables is an important open problem in information theory. Information synergy occurs when multiple sources together predict an outcome variable better than the sum of single-source predictions. It is an essential phenomenon in biology, for example in neuronal networks and cellular regulatory processes, where different information flows integrate to produce a single response, but also in social cooperation processes and in statistical inference tasks in machine learning. Here we propose measures of synergistic entropy and synergistic information derived from first principles. The proposed measure relies on so-called synergistic random variables (SRVs), which are constructed to have zero mutual information about the individual source variables but non-zero mutual information about the complete set of source variables. We prove several basic and desired properties of our measure, including bounds and additivity properties. In addition, we prove several important consequences of our measure, including the fact that different types of synergistic information may co-exist between the same sets of variables. A numerical implementation is provided, which we use to demonstrate that synergy is associated with resilience to noise. Our measure may be a marked step forward in the study of multivariate information theory and its numerous applications.

Entropy 2017, 19(2), 85; Article; doi: 10.3390/e19020085; published 22 February 2017. Authors: Rick Quax, Omri Har-Shemesh, Peter Sloot.

Entropy, Vol. 19, Pages 82: The More You Know, the More You Can Grow: An Information Theoretic Approach to Growth in the Information Age
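The synergy described in the abstract above is the textbook XOR situation: each source alone carries zero mutual information about the outcome, while the pair determines it completely. A minimal empirical check (stdlib only; this illustrates the phenomenon, not the authors' SRV construction):

```python
import math
from collections import Counter

def mutual_info(pairs):
    """I(A;B) in bits from a list of (a, b) samples, using empirical probabilities."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# XOR: each source alone says nothing about Y; together they determine it.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
i_single = mutual_info([(x1, y) for x1, _, y in samples])          # 0 bits
i_joint = mutual_info([((x1, x2), y) for x1, x2, y in samples])    # 1 bit
```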
http://www.mdpi.com/1099-4300/19/2/82
In our information age, information alone has become a driver of social growth. Information is the fuel of “big data” companies and the decision-making compass of policy makers. Can we quantify how much information leads to how much social growth potential? Information theory is used to show that information (in bits) is effectively a quantifiable ingredient of growth. The article presents a single equation that can both describe the hands-off natural selection of evolving populations and optimize population fitness in uncertain environments through intervention. The setup analyzes the communication channel between the growing population and its uncertain environment. The role of information in population growth can be thought of as the optimization of information flow over this (more or less) noisy channel. Optimized growth implies that the population absorbs all communicated environmental structure during evolutionary updating (measured by their mutual information). This is achieved by endogenously adjusting the population structure to the exogenous environmental pattern (through bet-hedging/portfolio management). The setup can be applied to decompose the growth of any discrete population in stationary, stochastic environments (economic, cultural, or biological). Two empirical examples from the information economy reveal inherent trade-offs among the involved information quantities during growth optimization.

Entropy 2017, 19(2), 82; Article; doi: 10.3390/e19020082; published 22 February 2017. Author: Martin Hilbert.

Entropy, Vol. 19, Pages 83: Breakdown Point of Robust Support Vector Machines
http://www.mdpi.com/1099-4300/19/2/83
The support vector machine (SVM) is one of the most successful learning methods for solving classification problems. Despite its popularity, SVM has the serious drawback that it is sensitive to outliers in training samples. The penalty on misclassification is defined by a convex loss called the hinge loss, and the unboundedness of the convex loss causes the sensitivity to outliers. To deal with outliers, robust SVMs have been proposed by replacing the convex loss with a non-convex bounded loss called the ramp loss. In this paper, we study the breakdown point of robust SVMs. The breakdown point is a robustness measure that is the largest amount of contamination such that the estimated classifier still gives information about the non-contaminated data. The main contribution of this paper is to show an exact evaluation of the breakdown point of robust SVMs. For learning parameters such as the regularization parameter, we derive a simple formula that guarantees the robustness of the classifier. When the learning parameters are determined with a grid search using cross-validation, our formula works to reduce the number of candidate search points. Furthermore, the theoretical findings are confirmed in numerical experiments. We show that the statistical properties of robust SVMs are well explained by a theoretical analysis of the breakdown point.

Entropy 2017, 19(2), 83; Article; doi: 10.3390/e19020083; published 21 February 2017. Authors: Takafumi Kanamori, Shuhei Fujiwara, Akiko Takeda.

Entropy, Vol. 19, Pages 84: Sequential Batch Design for Gaussian Processes Employing Marginalization †
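The hinge-versus-ramp distinction in the robust-SVM abstract above is easy to see numerically: the hinge loss grows without bound as a point is pushed to the wrong side of the margin, while the ramp loss clips it. A sketch (the clipping parameter s is an illustrative convention, not taken from the paper):

```python
def hinge_loss(z):
    """Convex, unbounded: penalty grows linearly as the margin z decreases."""
    return max(0.0, 1.0 - z)

def ramp_loss(z, s=-1.0):
    """Non-convex, bounded: the hinge loss clipped at 1 - s, so a single
    gross outlier cannot dominate the empirical risk."""
    return min(hinge_loss(z), 1.0 - s)
```

An outlier with margin z = -10 contributes 11 to the hinge risk but only 2 to the ramp risk, which is the mechanism behind the improved breakdown point.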
http://www.mdpi.com/1099-4300/19/2/84
Within the Bayesian framework, we utilize Gaussian processes for parametric studies of long-running computer codes. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. Employing the sum over variances (an indicator of the quality of the fit) as the utility function, we establish an optimized and automated sequential parameter selection procedure. However, it is often also desirable to utilize the parallel running capabilities of present computer technology and abandon the sequential parameter selection for a faster overall turn-around time (wall-clock time). This paper proposes to achieve this by marginalizing over the expected outcomes at optimized test points in order to set up a pool of starting values for batch execution. For a one-dimensional test case, the numerical results are validated with the analytical solution. Finally, a systematic convergence study demonstrates the advantage of the optimized approach over randomly chosen parameter settings.

Entropy 2017, 19(2), 84; Article; doi: 10.3390/e19020084; published 21 February 2017. Authors: Roland Preuss, Udo von Toussaint.

Entropy, Vol. 19, Pages 70: User-Centric Key Entropy: Study of Biometric Key Derivation Subject to Spoofing Attacks
http://www.mdpi.com/1099-4300/19/2/70
Biometric data can be used as input for PKI key pair generation. The concept of not saving the private key is very appealing, but the implementation of such a system should not be rushed, because it might prove less secure than the current PKI infrastructure. A single biometric characteristic can be easily spoofed, so it was believed that multi-modal biometrics would offer more security, because spoofing two or more biometrics would be very hard. This notion of increased security of multi-modal biometric systems has been disproved for authentication and matching: studies show that multi-modal biometric systems are not only no more secure, but also introduce additional vulnerabilities. This paper is a study of the implications of spoofing biometric data for retrieving the derived key. We demonstrate that spoofed biometrics can yield the same key, which in turn will lead an attacker to obtain the private key. A practical implementation is proposed using fingerprint and iris as biometrics and a fuzzy extractor for biometric key extraction. Our experiments show what happens when the biometric data are spoofed, for both uni-modal and multi-modal systems. For the multi-modal system, tests were performed when spoofing one biometric or both. We provide a detailed analysis of every scenario with regard to successful tests and overall key entropy. Our paper defines a biometric PKI scenario and an in-depth security analysis for it. The analysis can be viewed as a blueprint for implementations of future similar systems, because it highlights the main security vulnerabilities of bioPKI. The analysis is not constrained to the biometric part of the system, but covers CA security, sensor security, communication interception, RSA encryption vulnerabilities regarding key entropy, and much more.

Entropy 2017, 19(2), 70; Article; doi: 10.3390/e19020070; published 21 February 2017. Authors: Lavinia Dinca, Gerhard Hancke.

Entropy, Vol. 19, Pages 80: A Risk-Free Protection Index Model for Portfolio Selection with Entropy Constraint under an Uncertainty Framework
http://www.mdpi.com/1099-4300/19/2/80
This paper aims to develop a risk-free protection index model for portfolio selection based on uncertainty theory. First, the returns of risk assets are assumed to be uncertain variables subject to reputable experts' evaluations. Second, under this assumption and combining with the risk-free interest rate, we define a risk-free protection index (RFPI), which can measure the degree of protection when a loss of risk assets happens. Third, noting that the proportion entropy serves as a complementary means to reduce risk through a preset diversification requirement, we put forward a risk-free protection index model with an entropy constraint under an uncertainty framework by applying the RFPI, Huang's risk index model (RIM), and the mean-variance-entropy model (MVEM). Furthermore, to solve our portfolio model, an algorithm is given to estimate the uncertain expected return and standard deviation of different risk assets by applying the Delphi method. Finally, an example is provided to show that the risk-free protection index model performs better than the traditional MVEM and RIM.

Entropy 2017, 19(2), 80; Article; doi: 10.3390/e19020080; published 21 February 2017. Authors: Jianwei Gao, Huicheng Liu.

Entropy, Vol. 19, Pages 81: Towards Operational Definition of Postictal Stage: Spectral Entropy as a Marker of Seizure Ending
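The proportion entropy used as a diversification constraint in the portfolio abstract above is, in common formulations, the Shannon entropy of the investment proportions: it is maximal for an equal-weight portfolio and zero when everything is concentrated in one asset. A minimal sketch (the function name is illustrative; the paper's exact constraint may differ):

```python
import math

def proportion_entropy(weights):
    """Diversification entropy -sum(w * ln w) of portfolio proportions
    (weights assumed non-negative and summing to 1)."""
    return -sum(w * math.log(w) for w in weights if w > 0)
```

A constraint of the form proportion_entropy(w) >= h_min then forces a minimum level of diversification.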
http://www.mdpi.com/1099-4300/19/2/81
The postictal period is characterized by several neurological alterations, but its exact limits are hard to determine, clinically or even electroencephalographically, in most cases. We aim to provide quantitative functions or conditions with a clearly distinguishable behavior during the ictal-postictal transition. Spectral methods were used to analyze foramen ovale electrode (FOE) recordings during the ictal/postictal transition in 31 seizures of 15 patients with strictly unilateral drug-resistant temporal lobe epilepsy. In particular, the density of links, spectral entropy, and relative spectral power were analyzed. Simple partial seizures are accompanied by an ipsilateral increase in the relative Delta power and a decrease in synchronization in 66% and 91% of the cases, respectively, after seizure offset. Complex partial seizures showed a decrease in the spectral entropy in 94% of cases, on both the ipsilateral and contralateral sides (100% and 73%, respectively), mainly due to an increase in relative Delta activity. Seizure offset is defined as the moment at which the “seizure termination mechanisms” actually end, which is quantified by the spectral entropy value. We propose as a definition for the start of the postictal period the time at which the ipsilateral spectral entropy reaches its first global minimum.

Entropy 2017, 19(2), 81; Article; doi: 10.3390/e19020081; published 21 February 2017. Authors: Ancor Sanz-García, Lorena Vega-Zelaya, Jesús Pastor, Rafael Sola, Guillermo Ortega.

Entropy, Vol. 19, Pages 78: Admitting Spontaneous Violations of the Second Law in Continuum Thermomechanics
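Spectral entropy, as used in the seizure-marker abstract above, is the Shannon entropy of the normalized power spectrum: low values mean power concentrated in few frequencies (as when Delta activity dominates), high values mean a flat spectrum. A stdlib-only sketch using a naive DFT (windowing and the band limits used on real EEG are omitted for brevity):

```python
import cmath
import math

def spectral_entropy(x):
    """Normalized Shannon entropy of the one-sided power spectrum of x."""
    n = len(x)
    power = []
    for k in range(1, n // 2 + 1):  # skip the DC bin
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(s) ** 2)
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(power))  # normalize to [0, 1]
```

A pure sinusoid concentrates power in one bin (entropy near 0), while broadband noise spreads it (entropy near 1).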
http://www.mdpi.com/1099-4300/19/2/78
We survey new extensions of continuum mechanics incorporating spontaneous violations of the Second Law (SL), which involve viscous flow and heat conduction. First, following an account of the Fluctuation Theorem (FT) of statistical mechanics that generalizes the SL, the irreversible entropy is shown to evolve as a submartingale. Next, a stochastic thermomechanics is formulated consistent with the FT, which, according to a revision of the classical axioms of continuum mechanics, must be set up on random fields. This development leads to a reformulation of thermoviscous fluids and inelastic solids. These two unconventional constitutive behaviors may jointly occur in nano-poromechanics.

Entropy 2017, 19(2), 78; Article; doi: 10.3390/e19020078; published 21 February 2017. Author: Martin Ostoja-Starzewski.

Entropy, Vol. 19, Pages 77: Energy Transfer between Colloids via Critical Interactions
http://www.mdpi.com/1099-4300/19/2/77
We report the observation of a temperature-controlled synchronization of two Brownian particles in a binary mixture close to the critical point of the demixing transition. The two beads are trapped by two optical tweezers whose distance is periodically modulated. We notice that the motion synchronization of the two beads appears when the critical temperature is approached. In contrast, when the fluid is far from its critical temperature, the displacements of the two beads are uncorrelated. Small changes in temperature can radically change the global dynamics of the system. We show that the synchronization is induced by the critical Casimir forces. Finally, we present measurements of the energy transfer inside the system produced by the critical interaction.

Entropy 2017, 19(2), 77; Article; doi: 10.3390/e19020077; published 17 February 2017. Authors: Ignacio Martínez, Clemence Devailly, Artyom Petrosyan, Sergio Ciliberto.

Entropy, Vol. 19, Pages 75: Information Loss in Binomial Data Due to Data Compression
http://www.mdpi.com/1099-4300/19/2/75
This paper explores the idea of information loss through data compression, as occurs in the course of any data analysis, illustrated via detailed consideration of the binomial distribution. We examine situations where the full sequence of binomial outcomes is retained, situations where only the total number of successes is retained, and in-between situations. We show that a familiar decomposition of the Shannon entropy H can be rewritten as a decomposition into H_total, H_lost, and H_comp: the total, lost, and compressed (remaining) components, respectively. We relate this new decomposition to Landauer's principle, and we discuss some implications for the “information-dynamic” theory being developed in connection with our broader program to develop a measure of statistical evidence on a properly calibrated scale.

Entropy 2017, 19(2), 75; Article; doi: 10.3390/e19020075; published 16 February 2017. Authors: Susan Hodge, Veronica Vieland.

Entropy, Vol. 19, Pages 76: A Comparison of Postural Stability during Upright Standing between Normal and Flatfooted Individuals, Based on COP-Based Measures
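For the binomial setting in the abstract above, a standard identity makes the decomposition concrete: retaining the full sequence of n Bernoulli(p) outcomes gives H_total = n·h(p); retaining only the success count K gives H_comp = H(K); and H_lost = E[log2 C(n, K)] is the entropy of which particular sequence occurred given the count. A numerical check of H_total = H_comp + H_lost (the symbols follow the abstract; the code is an illustrative reconstruction, not the authors' implementation):

```python
import math

def h_bernoulli(p):
    """Entropy (bits) of a single Bernoulli(p) trial."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def binom_pmf(n, k, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def entropy_decomposition(n, p):
    h_total = n * h_bernoulli(p)                       # full outcome sequence
    h_comp = -sum(binom_pmf(n, k, p) * math.log2(binom_pmf(n, k, p))
                  for k in range(n + 1))               # success count only
    h_lost = sum(binom_pmf(n, k, p) * math.log2(math.comb(n, k))
                 for k in range(n + 1))                # which sequence, given the count
    return h_total, h_comp, h_lost
```

The identity holds because, given K = k, all C(n, k) sequences are equally likely.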
http://www.mdpi.com/1099-4300/19/2/76
Aging causes foot arches to collapse, possibly leading to foot deformities and falls. This paper proposes a set of measures involving an entropy-based method, applied to two groups of young adults with dissimilar foot arches, to explore and quantify postural stability on a force plate in an upright position. Fifty-four healthy young adults aged 18–30 years participated in this study. These were categorized into two groups: normal (37 participants) and flatfooted (17 participants). We collected the center of pressure (COP) displacement trajectories of participants during upright standing on a force plate, in a static position, with eyes open (EO) or eyes closed (EC). These nonstationary time-series signals were quantified using entropy-based measures as well as the traditional measures used to assess postural stability, and the results obtained from these measures were compared. Appropriate combinations of entropy-based measures revealed that, with respect to postural stability, the two groups differed significantly (p < 0.05) under both EO and EC conditions. The traditional, commonly used COP-based measures only revealed differences under EO conditions. Entropy-based measures are thus suitable for examining differences in postural stability in flatfooted people, and may be used by clinicians after further refinement.

Entropy 2017, 19(2), 76; Article; doi: 10.3390/e19020076; published 16 February 2017. Authors: Tsui-Chiao Chao, Bernard Jiang.

Entropy, Vol. 19, Pages 74: An Approach to Data Analysis in 5G Networks
http://www.mdpi.com/1099-4300/19/2/74
5G networks are expected to provide significant advances in network management compared to traditional mobile infrastructures by leveraging intelligence capabilities such as data analysis, prediction, pattern recognition and artificial intelligence. The key idea behind these actions is to facilitate the decision-making process in order to solve or mitigate common network problems in a dynamic and proactive way. In this context, this paper presents the design of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) Analyzer Module, whose main objective is to identify suspicious or unexpected situations based on metrics provided by different network components and sensors. The SELFNET Analyzer Module provides a modular architecture driven by use cases where analytic functions can be easily extended. This paper also proposes the data specification that defines the data inputs to be taken into account in the diagnosis process. This data specification has been implemented with different use cases within the SELFNET Project, proving its effectiveness.

Entropy 2017, 19(2), 74; Article; doi: 10.3390/e19020074; published 16 February 2017. Authors: Lorena Barona López, Jorge Maestre Vidal, Luis García Villalba.

Entropy, Vol. 19, Pages 71: Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss
http://www.mdpi.com/1099-4300/19/2/71
Williams and Beer (2010) proposed a nonnegative mutual information decomposition, based on the construction of information gain lattices, which allows separating the information that a set of variables contains about another variable into components, interpretable as the unique information of one variable, or as redundancy and synergy components. In this work, we extend this framework, focusing on the lattices that underpin the decomposition. We generalize the type of constructible lattices and examine the relations between different lattices, for example, relating bivariate and trivariate decompositions. We point out that, in information gain lattices, redundancy components are invariant across decompositions, but unique and synergy components are decomposition-dependent. Exploiting the connection between different lattices, we propose a procedure to construct, in the general multivariate case, information gain decompositions from measures of synergy or unique information. We then introduce an alternative type of lattices, information loss lattices, with the role and invariance properties of redundancy and synergy components reversed with respect to gain lattices, and which provide an alternative procedure to build multivariate decompositions. We finally show how information gain and information loss dual lattices lead to a self-consistent unique decomposition, which allows a deeper understanding of the origin and meaning of synergy and redundancy.Entropy2017-02-16192Article10.3390/e19020071711099-43002017-02-16doi: 10.3390/e19020071Daniel ChicharroStefano Panzeri<![CDATA[Entropy, Vol. 19, Pages 73: Identifying Critical States through the Relevance Index]]>
http://www.mdpi.com/1099-4300/19/2/73
The identification of critical states is a major task in complex systems, and the availability of measures to detect such conditions is of utmost importance. In general, criticality refers to the existence of two qualitatively different behaviors that the same system can exhibit, depending on the values of some parameters. In this paper, we show that the relevance index may be effectively used to identify critical states in complex systems. The relevance index was originally developed to identify relevant sets of variables in dynamical systems, but in this paper, we show that it is also able to capture features of criticality. The index is applied to two prominent examples showing slightly different meanings of criticality, namely the Ising model and random Boolean networks. Results show that this index is maximized at critical states and is robust with respect to system size and sampling effort. It can therefore be used to detect criticality.Entropy2017-02-16192Article10.3390/e19020073731099-43002017-02-16doi: 10.3390/e19020073Andrea RoliMarco VillaniRiccardo CaprariRoberto Serra<![CDATA[Entropy, Vol. 19, Pages 72: Classification of Normal and Pre-Ictal EEG Signals Using Permutation Entropies and a Generalized Linear Model as a Classifier]]>
http://www.mdpi.com/1099-4300/19/2/72
In this contribution, a comparison between different permutation entropies as classifiers of electroencephalogram (EEG) records corresponding to normal and pre-ictal states is made. A discrete probability distribution function derived from symbolization techniques applied to the EEG signal is used to calculate the Tsallis entropy, Shannon entropy, Renyi entropy, and min entropy, and each is used separately as the only independent variable in a logistic regression model in order to evaluate its capacity as a classification variable in an inferential manner. The area under the Receiver Operating Characteristic (ROC) curve, along with the accuracy, sensitivity, and specificity, are used to compare the models. All the permutation entropies are excellent classifiers, with an accuracy greater than 94.5% in every case, and a sensitivity greater than 97%. Accounting for the amplitude in the symbolization technique retains more information of the signal than its counterparts, and it could be a good candidate for automatic classification of EEG signals.Entropy2017-02-16192Article10.3390/e19020072721099-43002017-02-16doi: 10.3390/e19020072Francisco RedelicoFrancisco TraversaroMaría GarcíaWalter SilvaOsvaldo RossoMarcelo Risk<![CDATA[Entropy, Vol. 19, Pages 69: Two Thermoeconomic Diagnosis Methods Applied to Representative Operating Data of a Commercial Transcritical Refrigeration Plant]]>
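The symbolization step behind these permutation entropies can be illustrated with a minimal sketch (not the authors' code; a plain Bandt–Pompe ordinal symbolization, ignoring the amplitude-aware variant the abstract favors): each window of `order` samples is mapped to the permutation that sorts it, and the entropies are then computed over the resulting pattern distribution.

```python
from collections import Counter
import math

def ordinal_distribution(signal, order=3):
    """Map each length-`order` window to the permutation that sorts it
    (Bandt-Pompe symbolization) and return the pattern probabilities."""
    windows = (signal[j:j + order] for j in range(len(signal) - order + 1))
    patterns = [tuple(sorted(range(order), key=lambda i: w[i])) for w in windows]
    counts = Counter(patterns)
    total = len(patterns)
    return [c / total for c in counts.values()]

def shannon(p):
    """Shannon entropy of a pattern distribution (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def min_entropy(p):
    """Min entropy: -ln of the most probable pattern."""
    return -math.log(max(p))
```

The Tsallis and Renyi variants are computed over the same ordinal distribution, so the symbolization is the shared ingredient of all four classifiers.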
http://www.mdpi.com/1099-4300/19/2/69
In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the quality of malfunction identification. Both methods were able to locate and categorise the malfunctions when using steady-state data without measurement uncertainties. With the introduction of measurement uncertainty, the categorisation of malfunctions became increasingly difficult, depending on the magnitude of the uncertainties. Two different uncertainty scenarios were evaluated, as the use of repeated measurements yields a lower magnitude of uncertainty. The two methods show similar performance in the presented study for both of the considered measurement uncertainty scenarios. However, only in the low measurement uncertainty scenario were both methods able to locate the causes of the malfunctions. For both scenarios, an outlier limit was found that determines whether a high relative indicator can be rejected based on measurement uncertainty. For high uncertainties, the threshold value of the relative indicator was 35, whereas for low uncertainties one of the methods resulted in a threshold of 8. Additionally, the contribution of different measuring instruments to the relative indicator in two central components was analysed, showing that the contribution was component-dependent.Entropy2017-02-15192Article10.3390/e19020069691099-43002017-02-15doi: 10.3390/e19020069Torben OmmenOskar SigthorssonBrian Elmegaard<![CDATA[Entropy, Vol. 19, Pages 68: Kinetic Theory of a Confined Quasi-Two-Dimensional Gas of Hard Spheres]]>
http://www.mdpi.com/1099-4300/19/2/68
The dynamics of a system of hard spheres enclosed between two parallel plates separated by a distance smaller than two particle diameters is described at the level of kinetic theory. The interest focuses on the behavior of the quasi-two-dimensional fluid seen when looking at the system from above or below. In the first part, a collisional model for the effective two-dimensional dynamics is analyzed. Although it describes quite well the homogeneous evolution observed in the experiments, it is shown to fail to predict the existence of non-equilibrium phase transitions, and in particular, the bimodal regime exhibited by the real system. A critical analysis of the model is presented, and as a starting point toward a more accurate description, the Boltzmann equation for the quasi-two-dimensional gas is derived. In the elastic case, the solutions of the equation satisfy an H-theorem, implying a monotonic tendency to a non-uniform steady state. As an example of application of the kinetic equation, the evolution equations for the vertical and horizontal temperatures of the system are derived in the homogeneous approximation, and the results are compared with molecular dynamics simulation results.Entropy2017-02-14192Article10.3390/e19020068681099-43002017-02-14doi: 10.3390/e19020068J. BreyVicente BuzónMaria García de SoriaPablo Maynar<![CDATA[Entropy, Vol. 19, Pages 65: An Android Malicious Code Detection Method Based on Improved DCA Algorithm]]>
http://www.mdpi.com/1099-4300/19/2/65
Recently, Android malicious code has increased dramatically, and code-reinforcement technology has become increasingly powerful. Owing to the development of code obfuscation and polymorphic deformation technology, current Android static detection methods that select features from the semantics of the application source code cannot completely extract a malware's code features. Android malware static detection methods whose features are obtained only from the AndroidManifest.xml file are easily affected by useless permissions. Current Android malware static detection methods therefore have some limitations, while most current dynamic detection algorithms require either a customized system or system root permissions. Based on the Dendritic Cell Algorithm (DCA), this paper proposes an Android malware detection algorithm that achieves a higher detection rate, does not require system modification, and reduces the impact of code obfuscation to a certain degree. This algorithm is applied to an Android malware detection method based on the Dalvik disassembly sequence and the application programming interface (API) calling sequence. The designed experiments verify the effectiveness of this method for the detection of Android malware.Entropy2017-02-11192Article10.3390/e19020065651099-43002017-02-11doi: 10.3390/e19020065Chundong WangZhiyuan LiLiangyi GongXiuliang MoHong YangYi Zhao<![CDATA[Entropy, Vol. 19, Pages 67: Investigation into Multi-Temporal Scale Complexity of Streamflows and Water Levels in the Poyang Lake Basin, China]]>
http://www.mdpi.com/1099-4300/19/2/67
The streamflow and water level complexity of the Poyang Lake basin has been investigated over multiple time-scales using daily observations of the water level and streamflow spanning from 1954 through 2013. The composite multiscale sample entropy was applied to measure the complexity, and the Mann-Kendall algorithm was applied to detect temporal changes in the complexity. The results show that the streamflow and water level complexity increases as the time-scale increases: the sample entropy of the streamflow increases when the time-scale increases from a daily to a seasonal scale, and the sample entropy of the water level increases when the time-scale increases from a daily to a monthly scale. The water outflow of Poyang Lake, which is impacted mainly by the inflow processes, lake regulation, and the streamflow processes of the Yangtze River, is more complex than the water inflow. The streamflow and water level complexity over most of the time-scales, between the daily and monthly scales, is dominated by an increasing trend. This indicates the enhanced randomness, disorderliness, and irregularity of the streamflows and water levels. This investigation can help provide a better understanding of the hydrological features of large freshwater lakes. Ongoing research will analyze the mechanisms of the streamflow and water level complexity changes within the context of climate change and anthropogenic activities.Entropy2017-02-10192Article10.3390/e19020067671099-43002017-02-10doi: 10.3390/e19020067Feng HuangXunzhou ChunyuYuankun WangYao WuBao QianLidan GuoDayong ZhaoZiqiang Xia<![CDATA[Entropy, Vol. 19, Pages 64: Bullwhip Entropy Analysis and Chaos Control in the Supply Chain with Sales Game and Consumer Returns]]>
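The composite multiscale sample entropy used above can be sketched in a few lines (a minimal illustration, not the authors' implementation; `m = 2` and `r = 0.2` are conventional defaults, and in practice `r` is scaled by the series' standard deviation):

```python
import math

def sampen(x, m=2, r=0.2):
    """Sample entropy -ln(A/B): B (resp. A) counts template pairs of
    length m (resp. m+1) within Chebyshev tolerance r, no self-matches."""
    n = len(x)
    def pairs(mm):
        t = n - mm + 1  # number of templates of length mm
        return sum(1 for i in range(t) for j in range(i + 1, t)
                   if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r)
    b, a = pairs(m), pairs(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def cmse(x, scale, m=2, r=0.2):
    """Composite multiscale sample entropy: average SampEn over the
    `scale` coarse-grained series obtained from different start offsets."""
    ents = []
    for offset in range(scale):
        cg = [sum(x[i:i + scale]) / scale
              for i in range(offset, len(x) - scale + 1, scale)]
        ents.append(sampen(cg, m, r))
    return sum(ents) / scale
```

Averaging over all coarse-graining offsets (the "composite" part) reduces the variance of the plain multiscale estimate on short records such as hydrological series.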
http://www.mdpi.com/1099-4300/19/2/64
In this paper, we study a supply chain system which consists of one manufacturer and two retailers, a traditional retailer and an online retailer. In order to gain a larger market share, the retailers often take sales as a decision-making variable in the competition game. We analyze the bullwhip effect in the supply chain with a sales game and consumer returns via the theory of entropy and complexity, and apply the delayed feedback control method to control the system's chaotic state. The impact of a statutory 7-day no-reason return policy for online retailers is also investigated. A bounded rational expectation is adopted to forecast the future demand in the sales game system with weak noise. Our results show that high return rates hurt the profits of both retailers and that the adjustment speed of the bounded rational sales expectation has an important impact on the bullwhip effect. There is a stable area for retailers where the bullwhip effect does not appear. The supply chain system suffers a great bullwhip effect in the quasi-periodic and quasi-chaotic states. The purpose of chaos control on the sales game can be achieved, and the bullwhip effect can be effectively mitigated by using the delayed feedback control method.Entropy2017-02-10192Article10.3390/e19020064641099-43002017-02-10doi: 10.3390/e19020064Wandong LouJunhai MaXueli Zhan<![CDATA[Entropy, Vol. 19, Pages 66: Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies]]>
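The delayed feedback control idea can be sketched on a toy map (a logistic map stands in for the paper's sales-game dynamics; the growth rate r = 3.9 and gain k = -0.6 are hypothetical illustration values, not taken from the paper):

```python
def iterate(r=3.9, k=0.0, steps=2000, x0=0.74, x1=0.745):
    """Logistic map with a delayed feedback control term:
    x_{t+1} = r x_t (1 - x_t) + k (x_{t-1} - x_t).
    With k = 0 the map is chaotic at r = 3.9; a suitable negative gain
    stabilizes the fixed point x* = 1 - 1/r (illustrative values only)."""
    x = [x0, x1]
    for _ in range(steps):
        x.append(r * x[-1] * (1 - x[-1]) + k * (x[-2] - x[-1]))
    return x
```

The feedback term vanishes on the target orbit, so the controlled system keeps the original fixed point; linearizing around x* gives the error recursion e_{t+1} = (2 - r - k) e_t + k e_{t-1}, which is stable for this (r, k) pair.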
http://www.mdpi.com/1099-4300/19/2/66
The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning, and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered a potential safeguard for the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies incorporating, respectively, rarity factors associated with the Gini-Simpson and Shannon measures. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions relative to the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using Shannon weighted entropy and a square root utility function, corresponding to a risk-averse behavior associated with the precautionary principle of safeguarding landscape diversity, an anchor for ecosystem service provision and other externalities. Further developments are suggested, mainly those relative to the hypothesis that the decision models outlined here could be used to revisit the stability-complexity debate in the field of ecological studies.Entropy2017-02-10192Concept Paper10.3390/e19020066661099-43002017-02-10doi: 10.3390/e19020066José CasquilhoFrancisco Rego<![CDATA[Entropy, Vol. 19, Pages 63: Response Surface Methodology Control Rod Position Optimization of a Pressurized Water Reactor Core Considering Both High Safety and Low Energy Dissipation]]>
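The weighted-entropy idea can be made concrete with a small sketch (hypothetical shares and rarity weights, not the paper's data or exact definitions): a Guiasu-style weighted entropy scales each class's -p_i ln p_i term by a weight, and a rarity factor such as w_i = 1 - p_i (the Gini-Simpson term) up-weights scarce land-cover classes.

```python
import math

def weighted_entropy(p, w):
    """Guiasu-style weighted entropy: -sum_i w_i p_i ln p_i."""
    return -sum(wi * pi * math.log(pi) for wi, pi in zip(w, p) if pi > 0)

# Hypothetical landscape shares; rarity weight w_i = 1 - p_i mirrors the
# Gini-Simpson rarity factor mentioned in the abstract.
shares = [0.7, 0.2, 0.1]
rarity = [1 - s for s in shares]
```

With uniform weights the measure falls back to ordinary Shannon entropy, so the rarity factor is what steers the optimum toward diverse compositions.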
http://www.mdpi.com/1099-4300/19/2/63
Response Surface Methodology (RSM) is introduced to optimize the control rod positions in a pressurized water reactor (PWR) core. The widely used 3D-IAEA benchmark problem is selected as the typical PWR core and the neutron flux field is solved. In addition, thermal parameters are assumed in order to obtain the temperature distribution. The total and local entropy production are then calculated to evaluate the energy dissipation. Using RSM, three directions of optimization are taken, which aim to determine the minimum of the power peak factor Pmax, the peak temperature Tmax and the total entropy production Stot. These parameters reflect the safety and energy dissipation in the core. Finally, an optimization scheme was obtained, which reduced Pmax, Tmax and Stot by 23%, 8.7% and 16%, respectively. The optimization results are satisfactory.Entropy2017-02-10192Article10.3390/e19020063631099-43002017-02-10doi: 10.3390/e19020063Yi-Ning ZhangHao-Chun ZhangHai-Yan YuChao Ma<![CDATA[Entropy, Vol. 19, Pages 62: Complex and Fractional Dynamics]]>
http://www.mdpi.com/1099-4300/19/2/62
Complex systems (CS) are pervasive in many areas, namely financial markets; highway transportation; telecommunication networks; world and country economies; social networks; immunological systems; living organisms; computational systems; and electrical and mechanical structures. CS are often composed of a large number of interconnected and interacting entities exhibiting much richer global scale dynamics than could be inferred from the properties and behavior of individual elements. [...]Entropy2017-02-08192Editorial10.3390/e19020062621099-43002017-02-08doi: 10.3390/e19020062J. Tenreiro MachadoAntónio Lopes<![CDATA[Entropy, Vol. 19, Pages 60: Nonlinear Wave Equations Related to Nonextensive Thermostatistics]]>
http://www.mdpi.com/1099-4300/19/2/60
We advance two nonlinear wave equations related to the nonextensive thermostatistical formalism based upon the power-law nonadditive Sq entropies. Our present contribution is in line with recent developments, where nonlinear extensions inspired by the q-thermostatistical formalism have been proposed for the Schroedinger, Klein–Gordon, and Dirac wave equations. These previously introduced equations share the interesting feature of admitting q-plane wave solutions. In contrast with these recent developments, one of the nonlinear wave equations that we propose exhibits real q-Gaussian solutions, and the other one admits exponential plane wave solutions modulated by a q-Gaussian. These q-Gaussians are q-exponentials whose arguments are quadratic functions of the space and time variables. The q-Gaussians are at the heart of nonextensive thermostatistics. The wave equations that we analyze in this work illustrate new possible dynamical scenarios leading to time-dependent q-Gaussians. One of the nonlinear wave equations considered here is a wave equation endowed with a nonlinear potential term, and can be regarded as a nonlinear Klein–Gordon equation. The other equation we study is a nonlinear Schroedinger-like equation.Entropy2017-02-07192Article10.3390/e19020060601099-43002017-02-07doi: 10.3390/e19020060Angel PlastinoRoseli Wedemann<![CDATA[Entropy, Vol. 19, Pages 61: Computational Complexity]]>
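For readers unfamiliar with the q-deformed functions involved, a minimal numerical sketch of the standard Tsallis definitions (not code from the paper):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential [1 + (1-q)x]^(1/(1-q)); -> exp(x) as q -> 1.
    The usual cutoff convention returns 0 where the bracket is non-positive."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_gaussian(x, q, beta=1.0):
    """q-Gaussian: a q-exponential with a quadratic argument, the form
    of the solutions discussed in the abstract above."""
    return q_exp(-beta * x * x, q)
```

For q &gt; 1 the q-Gaussian has power-law tails rather than the exponential decay of the ordinary Gaussian, which is what makes it central to nonextensive thermostatistics.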
http://www.mdpi.com/1099-4300/19/2/61
Complex systems (CS) involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...]Entropy2017-02-07192Editorial10.3390/e19020061611099-43002017-02-07doi: 10.3390/e19020061J. Tenreiro MachadoAntónio Lopes<![CDATA[Entropy, Vol. 19, Pages 59: On the Binary Input Gaussian Wiretap Channel with/without Output Quantization]]>
http://www.mdpi.com/1099-4300/19/2/59
In this paper, we investigate the effect of output quantization on the secrecy capacity of the binary-input Gaussian wiretap channel. First, a closed-form expression with infinite summation terms for the secrecy capacity of the binary-input Gaussian wiretap channel is derived for the case when both the legitimate receiver and the eavesdropper have unquantized outputs. In particular, computable tight upper and lower bounds on the secrecy capacity are obtained. Theoretically, we prove that when the legitimate receiver has unquantized outputs while the eavesdropper has binary quantized outputs, the secrecy capacity is larger than that when both the legitimate receiver and the eavesdropper have unquantized outputs or both have binary quantized outputs. Further, numerical results show that in the low signal-to-noise ratio (SNR) (of the main channel) region, the secrecy capacity of the binary-input Gaussian wiretap channel when both the legitimate receiver and the eavesdropper have unquantized outputs is larger than the capacity when both the legitimate receiver and the eavesdropper have binary quantized outputs; as the SNR increases, the secrecy capacity when both the legitimate receiver and the eavesdropper have binary quantized outputs tends to overtake the former.Entropy2017-02-04192Article10.3390/e19020059591099-43002017-02-04doi: 10.3390/e19020059Chao QiYanling ChenA. Vinck<![CDATA[Entropy, Vol. 19, Pages 58: Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†]]>
http://www.mdpi.com/1099-4300/19/2/58
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations.
The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.Entropy2017-02-02192Article10.3390/e19020058581099-43002017-02-02doi: 10.3390/e19020058Steven WaldripRobert Niven<![CDATA[Entropy, Vol. 19, Pages 57: The Second Law: From Carnot to Thomson-Clausius, to the Theory of Exergy, and to the Entropy-Growth Potential Principle]]>
http://www.mdpi.com/1099-4300/19/2/57
At its origins, thermodynamics was the study of heat and engines. Carnot transformed it into a scientific discipline by explaining engine power in terms of the transfer of “caloric”. That idea became the second law of thermodynamics when Thomson and Clausius reconciled Carnot’s theory with Joule’s conflicting thesis that power was derived from the consumption of heat, which was determined to be a form of energy. Eventually, Clausius formulated the second law as the universal entropy growth principle: the synthesis of transfer vs. consumption led to what became known as the mechanical theory of heat (MTH). However, by making universal-interconvertibility the cornerstone of MTH, their synthesis project was a defective one, which precluded MTH from developing the full expression of the second law. This paper reiterates that universal-interconvertibility is demonstrably false—as the case has been made by many others—by clarifying the true meaning of the mechanical equivalent of heat. It then presents a two-part formulation of the second law: the universal entropy growth principle, as well as a new principle that no change in Nature happens without entropy growth potential. With the new principle as its cornerstone, replacing universal-interconvertibility, thermodynamics transcends the defective MTH and becomes a coherent conceptual system.Entropy2017-01-28192Article10.3390/e19020057571099-43002017-01-28doi: 10.3390/e19020057Lin-Shu Wang<![CDATA[Entropy, Vol. 19, Pages 55: Bateman–Feshbach Tikochinsky and Caldirola–Kanai Oscillators with New Fractional Differentiation]]>
http://www.mdpi.com/1099-4300/19/2/55
In this work, the study of the fractional behavior of the Bateman–Feshbach–Tikochinsky and Caldirola–Kanai oscillators by using different fractional derivatives is presented. We obtained the Euler–Lagrange and the Hamiltonian formalisms in order to represent the dynamic models based on the Liouville–Caputo, Caputo–Fabrizio–Caputo and the new fractional derivative based on the Mittag–Leffler kernel with arbitrary order α. Simulation results are presented in order to show the fractional behavior of the oscillators, and the classical behavior is recovered when α is equal to 1.Entropy2017-01-28192Article10.3390/e19020055551099-43002017-01-28doi: 10.3390/e19020055Antonio Coronel-EscamillaJosé Gómez-AguilarDumitru BaleanuTeodoro Córdova-FragaRicardo Escobar-JiménezVictor Olivares-PeregrinoMaysaa Qurashi<![CDATA[Entropy, Vol. 19, Pages 56: Scaling Relations of Lognormal Type Growth Process with an Extremal Principle of Entropy]]>
http://www.mdpi.com/1099-4300/19/2/56
The scale, inflexion point and maximum point are important scaling parameters for studying growth phenomena with a size following the lognormal function. The width of the size function and its entropy depend on the scale parameter (or the standard deviation) and measure the relative importance of production and dissipation involved in the growth process. The Shannon entropy increases monotonically with the scale parameter, but the slope has a minimum at √6/6. This value has been used previously to study the spreading of sprays and epidemical cases. In this paper, this approach of minimizing the entropy slope is discussed in a broader sense and applied to obtain the relationship between the inflexion point and the maximum point. It is shown that this relationship is determined by the base of the natural logarithm, e ≈ 2.718, and exhibits some geometrical similarity to the minimal surface energy principle. Known data from a number of problems, including the swirling rate of the bathtub vortex, droplet splashing, population growth, the distribution of strokes in Chinese characters and the velocity profile of a turbulent jet, are used to assess to what extent the approach of minimizing the entropy slope can be regarded as useful.Entropy2017-01-27192Article10.3390/e19020056561099-43002017-01-27doi: 10.3390/e19020056Zi-Niu WuJuan LiChen-Yuan Bai<![CDATA[Entropy, Vol. 19, Pages 47: On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests]]>
http://www.mdpi.com/1099-4300/19/2/47
Nonparametric two-sample or homogeneity testing is a decision-theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests including multivariate methods involving energy statistics and kernel-based maximum mean discrepancy, and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem to have not been noticed thus far. Given nonparametric two-sample testing’s classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others.Entropy2017-01-26192Article10.3390/e19020047471099-43002017-01-26doi: 10.3390/e19020047Aaditya RamdasNicolás TrillosMarco Cuturi<![CDATA[Entropy, Vol. 19, Pages 54: Information Geometric Approach to Recursive Update in Nonlinear Filtering]]>
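In the univariate case, the Wasserstein distance between empirical distributions reduces to matching order statistics, which a short sketch makes explicit (the generic textbook formula, not the authors' code; equal sample sizes are assumed for simplicity):

```python
def wasserstein_1d(xs, ys, p=1):
    """p-Wasserstein distance between two equal-size empirical samples on
    the line: W_p^p = (1/n) sum_i |x_(i) - y_(i)|^p over sorted samples."""
    assert len(xs) == len(ys), "equal sample sizes assumed for simplicity"
    xs, ys = sorted(xs), sorted(ys)
    return (sum(abs(a - b) ** p for a, b in zip(xs, ys)) / len(xs)) ** (1.0 / p)
```

This quantile-coupling view is exactly what links the Wasserstein statistic to PP/QQ plots and ROC/ODC curves in the survey's univariate discussion.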
http://www.mdpi.com/1099-4300/19/2/54
The measurement update stage in nonlinear filtering is considered from the viewpoint of information geometry: the filtered state is treated as an optimal estimate in parameter space, corresponding to an iteration on the statistical manifold, and a recursive method is proposed in this paper. The method is derived from natural gradient descent on the statistical manifold constructed by the posterior probability density function (PDF) of the state conditional on the measurement. The derivation proceeds from a geometric viewpoint and gives a geometric interpretation of the iterative update. Moreover, the proposed method can be seen as an extension of the Kalman filter and its variants: a single step of the method is identical to the Extended Kalman Filter (EKF) in the nonlinear case and to the traditional Kalman filter in the linear case. Benefiting from the natural gradient descent used in the update stage, the proposed method performs better than the existing methods, as shown in the numerical experiments.Entropy2017-01-26192Article10.3390/e19020054541099-43002017-01-26doi: 10.3390/e19020054Yubo LiYongqiang ChengXiang LiXiaoqiang HuaYuliang Qin<![CDATA[Entropy, Vol. 19, Pages 53: A Mixed Geographically and Temporally Weighted Regression: Exploring Spatial-Temporal Variations from Global and Local Perspectives]]>
http://www.mdpi.com/1099-4300/19/2/53
To capture both global stationarity and spatiotemporal non-stationarity, a novel mixed geographically and temporally weighted regression (MGTWR) model accounting for global and local effects in both space and time is presented. Since the constant and spatial-temporal varying coefficients could not be estimated in one step, a two-stage least squares estimation is introduced to calibrate the model. Both simulations and real-world datasets are used to test and verify the performance of the proposed MGTWR model. Additionally, the Akaike Information Criterion (AIC) is adopted as a key model-fitting diagnostic. The experiments demonstrate that the MGTWR model yields more accurate results than do traditional spatially weighted regression models. For instance, the MGTWR model decreased the AIC value by 2.7066, 36.368 and 112.812 with respect to those of the mixed geographically weighted regression (MGWR) model and by 45.5628, −38.774 and 35.656 with respect to those of the geographical and temporal weighted regression (GTWR) model for the three simulation datasets. Moreover, compared to the MGWR and GTWR models, the MGTWR model obtained the lowest AIC value and mean square error (MSE) and the highest coefficient of determination (R2) and adjusted coefficient of determination (R2adj). In addition, our experiments demonstrated the existence of both global stationarity and spatiotemporal non-stationarity, as well as the practical ability of the proposed method.Entropy2017-01-26192Article10.3390/e19020053531099-43002017-01-26doi: 10.3390/e19020053Jiping LiuYangyang ZhaoYi YangShenghua XuFuhao ZhangXiaolu ZhangLihong ShiAgen Qiu<![CDATA[Entropy, Vol. 19, Pages 51: Entropies of the Chinese Land Use/Cover Change from 1990 to 2010 at a County Level]]>
http://www.mdpi.com/1099-4300/19/2/51
Land Use/Cover Change (LUCC) has gradually become an important direction in the research of global change. LUCC is a complex system, and entropy is a measure of the degree of disorder of a system. Based on land use information entropy, this paper analyzes changes in land use from the perspective of the system. Research on the entropy of LUCC structures has a certain “guiding role” for the optimization and adjustment of regional land use structure. Based on five periods of LUCC data from 1990 to 2010, this paper focuses on analyzing three types of LUCC entropies among counties in China—namely, the Shannon, Renyi, and Tsallis entropies. The findings suggest that: (1) Shannon entropy can reflect the volatility of the LUCC, Renyi and Tsallis entropies also have this function when their parameter is positive, and Renyi and Tsallis entropies can reflect the extreme cases of the LUCC when their parameter is negative; (2) the entropy of China’s LUCC is uneven in its temporal and spatial distributions, there is a clear trend during 1990–2010, and the central region generally has high entropy in space.Entropy2017-01-25192Article10.3390/e19020051511099-43002017-01-25doi: 10.3390/e19020051Yong FanGuangming YuZongyi HeHailong YuRui BaiLinru YangDi Wu<![CDATA[Entropy, Vol. 19, Pages 52: Research and Application of a Novel Hybrid Model Based on Data Selection and Artificial Intelligence Algorithm for Short Term Load Forecasting]]>
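The claim that a negative parameter makes the Renyi and Tsallis entropies sensitive to extreme (rare) land-use classes can be checked with a toy computation (illustrative probabilities, not the paper's county data):

```python
import math

def renyi(p, q):
    """Renyi entropy of order q (q != 1).  For q < 0 the inner sum is
    dominated by the smallest probabilities, so rare classes drive the value."""
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1.0 - q)

def tsallis(p, q):
    """Tsallis entropy of order q (q != 1); shows the same rare-class
    dominance for q < 0."""
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)
```

For a skewed composition such as [0.98, 0.02], the order q = -1 entropy is far larger than the order q = 2 entropy, because the term 0.02^q explodes for negative q; a positive order instead tracks the dominant classes, matching finding (1) above.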
http://www.mdpi.com/1099-4300/19/2/52
Machine learning plays a vital role in several modern economic and industrial fields, and selecting an optimized machine learning method to improve time series’ forecasting accuracy is challenging. Advanced machine learning methods, e.g., the support vector regression (SVR) model, are widely employed in forecasting fields, but the individual SVR pays no attention to the significance of data selection, signal processing and optimization, which cannot always satisfy the requirements of time series forecasting. By preprocessing and analyzing the original time series, in this paper, a hybrid SVR model is developed, considering periodicity, trend and randomness, and combined with data selection, signal processing and an optimization algorithm for short-term load forecasting. Case studies of electricity power data from New South Wales and Singapore are regarded as exemplifications to estimate the performance of the developed novel model. The experimental results demonstrate that the proposed hybrid method is not only robust but also capable of achieving significant improvement compared with the traditional single models and can be an effective and efficient tool for power load forecasting.Entropy2017-01-25192Article10.3390/e19020052521099-43002017-01-25doi: 10.3390/e19020052Wendong YangJianzhou WangRui Wang<![CDATA[Entropy, Vol. 19, Pages 50: Multiplicity of Homoclinic Solutions for Fractional Hamiltonian Systems with Subquadratic Potential]]>
http://www.mdpi.com/1099-4300/19/2/50
In this paper, we study the existence of homoclinic solutions for fractional Hamiltonian systems with left and right Liouville–Weyl derivatives. We establish some new results concerning the existence and multiplicity of homoclinic solutions for the given system by using Clark’s theorem and the fountain theorem from critical point theory.Entropy2017-01-24192Article10.3390/e19020050501099-43002017-01-24doi: 10.3390/e19020050Neamat NyamoradiAhmed AlsaediBashir AhmadYong Zhou<![CDATA[Entropy, Vol. 19, Pages 49: Entropy-Based Method for Evaluating Contact Strain-Energy Distribution for Assembly Accuracy Prediction]]>
http://www.mdpi.com/1099-4300/19/2/49
Assembly accuracy significantly affects the performance of precision mechanical systems. In this study, an entropy-based evaluation method for contact strain-energy distribution is proposed to predict the assembly accuracy. Strain energy is utilized to characterize the effects of the combination of form errors and contact deformations on the formation of assembly errors. To obtain the strain energy, the contact state is analyzed by applying the finite element method (FEM) on 3D, solid models of real parts containing form errors. Entropy is employed for evaluating the uniformity of the contact strain-energy distribution. An evaluation model, in which the uniformity of the contact strain-energy distribution is evaluated in three levels based on entropy, is developed to predict the assembly accuracy, and a comprehensive index is proposed. The assembly experiments for five sets of two rotating parts are conducted. Moreover, the coaxiality between the surfaces of two parts with assembly accuracy requirements is selected as the verification index to verify the effectiveness of the evaluation method. The results are in good agreement with the verification index, indicating that the method presented in this study is reliable and effective in predicting the assembly accuracy.Entropy2017-01-24192Article10.3390/e19020049491099-43002017-01-24doi: 10.3390/e19020049Yan FangXin JinChencan HuangZhijing Zhang<![CDATA[Entropy, Vol. 19, Pages 48: Entropy, Shannon’s Measure of Information and Boltzmann’s H-Theorem]]>
http://www.mdpi.com/1099-4300/19/2/48
We start with a clear distinction between Shannon’s Measure of Information (SMI) and the thermodynamic entropy. The first is defined for any probability distribution and is therefore a very general concept, whereas entropy is defined only on a very special set of distributions. Next we show that the SMI provides a solid and quantitative basis for the interpretation of the thermodynamic entropy. The entropy measures the uncertainty in the distribution of the locations and momenta of all the particles, together with two corrections due to the uncertainty principle and the indistinguishability of the particles. Finally we show that the H-function as defined by Boltzmann is an SMI but not an entropy. Therefore, much of what has been written on the H-theorem is irrelevant to entropy and the Second Law of Thermodynamics.Entropy2017-01-24192Article10.3390/e19020048481099-43002017-01-24doi: 10.3390/e19020048Arieh Ben-Naim<![CDATA[Entropy, Vol. 19, Pages 46: Topological Entropy Dimension and Directional Entropy Dimension for ℤ2-Subshifts]]>
http://www.mdpi.com/1099-4300/19/2/46
The notion of topological entropy dimension for a ℤ-action has been introduced to measure the subexponential complexity of zero entropy systems. Given a ℤ²-action, along with a ℤ²-entropy dimension, we also consider a finer notion of directional entropy dimension arising from its subactions. The entropy dimension of a ℤ²-action and the directional entropy dimensions of its subactions satisfy certain inequalities. We present several constructions of strictly ergodic ℤ²-subshifts of positive entropy dimension with diverse properties of their subgroup actions. In particular, we show that there is a ℤ²-subshift of full dimension in which every direction has entropy 0.Entropy2017-01-24192Article10.3390/e19020046461099-43002017-01-24doi: 10.3390/e19020046Uijin JungJungseob LeeKyewon Koh Park<![CDATA[Entropy, Vol. 19, Pages 45: A Soft Parameter Function Penalized Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification]]>
http://www.mdpi.com/1099-4300/19/1/45
A soft parameter function penalized normalized maximum correntropy criterion (SPF-NMCC) algorithm is proposed for sparse system identification. The proposed SPF-NMCC algorithm is derived on the basis of the normalized adaptive filter theory, the maximum correntropy criterion (MCC) algorithm and zero-attracting techniques. A soft parameter function is incorporated into the cost function of the traditional normalized MCC (NMCC) algorithm to exploit the sparsity properties of the sparse signals. The proposed SPF-NMCC algorithm is mathematically derived in detail. As a result, the proposed SPF-NMCC algorithm can provide an efficient zero attractor term to effectively attract the zero taps and near-zero coefficients to zero, and, hence, it can speed up the convergence. Furthermore, the estimation behaviors are obtained by estimating a sparse system and a sparse acoustic echo channel. Computer simulation results indicate that the proposed SPF-NMCC algorithm can achieve a better performance in comparison with the MCC, NMCC, LMS (least mean square) algorithms and their zero attraction forms in terms of both convergence speed and steady-state performance.Entropy2017-01-23191Article10.3390/e19010045451099-43002017-01-23doi: 10.3390/e19010045Yingsong LiYanyan WangRui YangFelix Albu<![CDATA[Entropy, Vol. 19, Pages 44: Crane Safety Assessment Method Based on Entropy and Cumulative Prospect Theory]]>
http://www.mdpi.com/1099-4300/19/1/44
Assessing the safety status of cranes is an important problem. To overcome the inaccuracies and misjudgments in such assessments, this work describes a safety assessment method for cranes that combines entropy and cumulative prospect theory. Firstly, the proposed method transforms the set of evaluation indices into an evaluation vector. Secondly, a decision matrix is then constructed from the evaluation vectors and evaluation standards, and an entropy-based technique is applied to calculate the index weights. Thirdly, positive and negative prospect value matrices are established from reference points based on the positive and negative ideal solutions. Thus, this enables the crane safety grade to be determined according to the ranked comprehensive prospect values. Finally, the safety status of four general overhead traveling crane samples is evaluated to verify the rationality and feasibility of the proposed method. The results demonstrate that the method described in this paper can precisely and reasonably reflect the safety status of a crane.Entropy2017-01-21191Article10.3390/e19010044441099-43002017-01-21doi: 10.3390/e19010044Aihua LiZhangyan Zhao<![CDATA[Entropy, Vol. 19, Pages 43: Radiative Entropy Production along the Paludification Gradient in the Southern Taiga]]>
http://www.mdpi.com/1099-4300/19/1/43
Entropy production (σ) is a measure of ecosystem and landscape stability in a changing environment. We calculated the σ in the radiation balance for a well-drained spruce forest, a paludified spruce forest, and a bog in the southern taiga of the European part of Russia using long-term meteorological data. Though radiative σ depends both on surface temperature and absorbed radiation, the radiation effect in boreal ecosystems is much more important than the temperature effect. The dynamic of the incoming solar radiation was the main driver of the diurnal, seasonal, and intra-annual courses of σ for all ecosystems; the difference in ecosystem albedo was the second most important factor, responsible for seven-eighths of the difference in σ between the bog and forest in a warm period. Despite the higher productivity and the complex structure of the well-drained forest, the dynamics and sums of σ in two forests were very similar. Summer droughts had no influence on the albedo and σ efficiency of forests, demonstrating high self-regulation of the taiga forest ecosystems. On the contrary, a decreasing water supply significantly elevated the albedo and lowered the σ in bog. Bogs, being non-steady ecosystems, demonstrate unique thermodynamic behavior, which is fluctuant and strongly dependent on the moisture supply. Paludification of territories may result in increasing instability of the energy balance and entropy production in the landscape of the southern taiga.Entropy2017-01-21191Article10.3390/e19010043431099-43002017-01-21doi: 10.3390/e19010043Olga KurichevaVadim MamkinRobert SandlerskyJuriy PuzachenkoAndrej VarlaginJuliya Kurbatova<![CDATA[Entropy, Vol. 19, Pages 39: Nonlinear q-Generalizations of Quantum Equations: Homogeneous and Nonhomogeneous Cases—An Overview]]>
http://www.mdpi.com/1099-4300/19/1/39
Recent developments on the generalizations of two important equations of quantum physics, namely the Schroedinger and Klein–Gordon equations, are reviewed. These generalizations present nonlinear terms, characterized by exponents depending on an index q, in such a way that the standard linear equations are recovered in the limit q → 1. Interestingly, these equations present a common, soliton-like, traveling solution, which is written in terms of the q-exponential function that naturally emerges within nonextensive statistical mechanics. In both cases, the corresponding well-known Einstein energy-momentum relations, as well as the Planck and the de Broglie ones, are preserved for arbitrary values of q. In order to deal appropriately with the continuity equation, a classical field theory has been developed, where besides the usual Ψ(x⃗, t), a new field Φ(x⃗, t) must be introduced; this latter field becomes Ψ*(x⃗, t) only when q → 1. A class of linear nonhomogeneous Schroedinger equations, characterized by position-dependent masses, for which the extra field Φ(x⃗, t) becomes necessary, is also investigated. In this case, an appropriate transformation connecting Ψ(x⃗, t) and Φ(x⃗, t) is proposed, opening the possibility for finding a connection between these fields in the nonlinear cases. The solutions presented herein are potential candidates for applications to nonlinear excitations in plasma physics, nonlinear optics, in structures, such as those of graphene, as well as in shallow and deep water waves.Entropy2017-01-21191Review10.3390/e19010039391099-43002017-01-21doi: 10.3390/e19010039Fernando NobreMarco Rego-MonteiroConstantino Tsallis<![CDATA[Entropy, Vol. 19, Pages 42: Intermittent Motion, Nonlinear Diffusion Equation and Tsallis Formalism]]>
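As a rough numerical illustration of the q-exponential that appears in these traveling solutions, here is a minimal sketch; the function name and the zero-cutoff convention are assumptions for illustration, not taken from the review:

```python
import math

def q_exp(u, q):
    # Tsallis q-exponential: exp_q(u) = [1 + (1 - q) * u]^(1 / (1 - q))
    # when the bracket is positive, and 0 otherwise (the usual cutoff).
    # It reduces to the ordinary exponential in the limit q -> 1.
    if abs(q - 1.0) < 1e-12:
        return math.exp(u)
    base = 1.0 + (1.0 - q) * u
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

print(q_exp(1.0, 1.0), q_exp(1.0, 0.5))
```

For q slightly different from 1 the value stays close to the ordinary exponential, consistent with the standard linear equations being recovered in that limit.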
http://www.mdpi.com/1099-4300/19/1/42
We investigate an intermittent process obtained from the combination of a nonlinear diffusion equation and pauses. We consider the porous media equation with reaction terms related to the rate of switching the particles from the diffusive mode to the resting mode or switching them from rest back to movement. The results show that in the asymptotic limits of small and long times, the spreading of the system is essentially governed by the diffusive term. The behavior exhibited for intermediate times depends on the rates present in the reaction terms. In this scenario, we show that, in the asymptotic limits, the distributions for this process are given in terms of power laws, which may be related to the q-exponential present in the Tsallis statistics. Furthermore, we also analyze a situation characterized by different diffusive regimes, which emerges when the diffusive term is a mixture of linear and nonlinear terms.Entropy2017-01-21191Article10.3390/e19010042421099-43002017-01-21doi: 10.3390/e19010042Ervin LenziLuciano da SilvaMarcelo LenziMaike dos SantosHaroldo RibeiroLuiz Evangelista<![CDATA[Entropy, Vol. 19, Pages 40: Nonequilibrium Thermodynamics of Ion Flux through Membrane Channels]]>
http://www.mdpi.com/1099-4300/19/1/40
Ion flux through membrane channels is passively driven by the electrochemical potential differences across the cell membrane. Nonequilibrium thermodynamics has been successful in explaining transport mechanisms, including the ion transport phenomenon. However, physiologists may not be familiar with biophysical concepts based on the view of entropy production. In this paper, I have reviewed the physical meanings and connections between nonequilibrium thermodynamics and the expressions commonly used in describing ion fluxes in membrane physiology. The fluctuation theorem can be applied to interpret the flux ratio in the small molecular systems. The multi-ion single-file feature of the ion channel facilitates the utilization of the natural tendency of electrochemical driving force to couple specific biophysical processes and biochemical reactions on the membrane.Entropy2017-01-19191Review10.3390/e19010040401099-43002017-01-19doi: 10.3390/e19010040Chi-Pan Hsieh<![CDATA[Entropy, Vol. 19, Pages 41: Transfer Learning for SSVEP Electroencephalography Based Brain–Computer Interfaces Using Learn++.NSE and Mutual Information]]>
http://www.mdpi.com/1099-4300/19/1/41
Brain–Computer Interfaces (BCI) using Steady-State Visual Evoked Potentials (SSVEP) are sometimes used by injured patients seeking to use a computer. Canonical Correlation Analysis (CCA) is seen as the state of the art for SSVEP BCI systems. However, this assumes that the user has full control over their covert attention, which may not be the case. This introduces high calibration requirements when using other machine learning techniques. These may be circumvented by using transfer learning to utilize data from other participants. This paper proposes a combination of ensemble learning via Learn++ for Nonstationary Environments (Learn++.NSE) and similarity measures such as mutual information to identify ensembles of pre-existing data that result in higher classification accuracy. Results show that this approach performed worse than CCA in participants with typical SSVEP responses, but outperformed CCA in participants whose SSVEP responses violated CCA assumptions. This indicates that similarity measures and Learn++.NSE can introduce a transfer learning mechanism to bring SSVEP system accessibility to users unable to control their covert attention.Entropy2017-01-19191Article10.3390/e19010041411099-43002017-01-19doi: 10.3390/e19010041Matthew SybeldonLukas SchmitMurat Akcakaya<![CDATA[Entropy, Vol. 19, Pages 38: Distributed Rateless Codes with Unequal Error Protection Property for Space Information Networks]]>
http://www.mdpi.com/1099-4300/19/1/38
In this paper, we propose a novel distributed unequal error protection (UEP) rateless coding scheme (DURC) for space information networks (SIN). We consider the multimedia data transmissions in a dual-hop SIN communication scenario, where multiple disjoint source nodes need to transmit their UEP rateless coded data to a destination via a dynamic relay. We formulate the optimization problems to provide optimal degree distributions on the direct links and the dynamic relay links to satisfy the required error protection levels. The optimization methods are based on the And–Or tree analysis and can be solved by multi-objective programming. In addition, we evaluate the performance of the optimal DURC scheme, and simulation results show that the proposed DURC scheme can effectively provide UEP property under a variety of error requirements.Entropy2017-01-18191Article10.3390/e19010038381099-43002017-01-18doi: 10.3390/e19010038Jian JiaoYi YangBowen FengShaohua WuYonghui LiQinyu Zhang<![CDATA[Entropy, Vol. 19, Pages 36: Exploitation of the Maximum Entropy Principle in Mathematical Modeling of Charge Transport in Semiconductors]]>
http://www.mdpi.com/1099-4300/19/1/36
In the last two decades, the Maximum Entropy Principle (MEP) has been successfully employed to construct macroscopic models able to describe the charge and heat transport in semiconductor devices. These models are obtained, starting from the Boltzmann transport equations, for the charge and the phonon distribution functions, by taking—as macroscopic variables—suitable moments of the distributions and exploiting MEP in order to close the evolution equations for the chosen moments. Important results have also been obtained for the description of charge transport in devices made both of elemental and compound semiconductors, in cases where charge confinement is present and the carrier flow is two- or one-dimensional.Entropy2017-01-18191Article10.3390/e19010036361099-43002017-01-18doi: 10.3390/e19010036Giovanni MascaliVittorio Romano<![CDATA[Entropy, Vol. 19, Pages 37: Evaluation Model of Aluminum Alloy Welded Joint Low-Cycle Fatigue Data Based on Information Entropy]]>
http://www.mdpi.com/1099-4300/19/1/37
An evaluation model of aluminum alloy welded joint low-cycle fatigue data based on information entropy is proposed. By calculating and analyzing the information entropy of decision attributes, the quantitative contributions of stress concentration, plate thickness, and loading mode to fatigue failure are investigated. Results reveal that the total information entropies of the fatigue data based on nominal stress, structural stress, and equivalent structural stress are, respectively, 0.9702, 0.8881, and 0.8294. There is consistency between the decreasing trend of the weight-based information entropy and the progressively smaller standard deviations of the S-N curves. In the structural stress based S-N curve, the total stress concentration factor is crucial for the distribution of the fatigue data, and the weight-based information entropy of the membrane stress concentration factor is 0.6754, which illustrates that stress concentration is a key issue for welded structures and deserves great attention. Subsequently, in the equivalent structural stress based S-N curve, the weight-based information entropy of the stress ratio is 0.5759, so it plays an important role in the distribution of the fatigue data. With the importance levels of the attributes on the S-N curves investigated, a correction for R in the equivalent structural stress based master S-N curve method should be carried out to make the welding fatigue prediction more accurate.Entropy2017-01-18191Article10.3390/e19010037371099-43002017-01-18doi: 10.3390/e19010037Yaliang LiuLi ZouYibo SunXinhua Yang<![CDATA[Entropy, Vol. 19, Pages 32: Impact of Ambient Conditions of Arab Gulf Countries on the Performance of Gas Turbines Using Energy and Exergy Analysis]]>
http://www.mdpi.com/1099-4300/19/1/32
In this paper, energy and exergy analysis of typical gas turbines is performed using average hourly temperature and relative humidity for selected Gulf cities located in Saudi Arabia, Kuwait, United Arab Emirates, Oman, Bahrain and Qatar. A typical gas turbine unit of 42 MW is considered in this study. The electricity production, thermal efficiency, fuel consumption differences between the ISO conditions and actual conditions are determined for each city. The exergy efficiency and exergy destruction rates for the gas turbine unit and its components are also evaluated taking ISO conditions as reference conditions. The results indicate that the electricity production losses occur in all cities during the year, except in Dammam and Kuwait for the period between November and March. During a typical day, the variation of the power production can reach 4 MW. The rate of exergy destruction under the combined effect of temperature and humidity is significant in hot months reaching a maximum of 12 MW in July. The presented results show also that adding inlet cooling systems to the existing gas turbine units could be justified in hot periods. Other aspects, such as the economic and environmental ones, should also be investigated.Entropy2017-01-17191Article10.3390/e19010032321099-43002017-01-17doi: 10.3390/e19010032Saleh BaakeemJamel OrfiShaker AlaqelHany Al-Ansary<![CDATA[Entropy, Vol. 19, Pages 35: Spacetime Topology and the Laws of Black Hole-Soliton Mechanics]]>
http://www.mdpi.com/1099-4300/19/1/35
The domain of outer communication of an asymptotically flat spacetime must be simply connected. In five dimensions, this still allows for the possibility of an arbitrary number of 2-cycles supported by magnetic flux carried by Maxwell fields. As a result, stationary, asymptotically flat, horizonless solutions—“gravitational solitons”—may exist with non-vanishing mass, charge, and angular momenta. These gravitational solitons satisfy a Smarr-like relation, as well as a first law of mechanics. Furthermore, the presence of solitons leads to new terms in the well-known first law of black hole mechanics for spacetimes containing black hole horizons and non-trivial topology in the exterior region. I outline the derivation of these results and consider an explicit example in five-dimensional supergravity.Entropy2017-01-17191Article10.3390/e19010035351099-43002017-01-17doi: 10.3390/e19010035Hari Kunduri<![CDATA[Entropy, Vol. 19, Pages 27: A Probabilistic Damage Identification Method for Shear Structure Components Based on Cross-Entropy Optimizations]]>
http://www.mdpi.com/1099-4300/19/1/27
A probabilistic damage identification method for shear structure components is presented. The method uses the extracted modal frequencies from the measured dynamical responses in conjunction with a representative finite element model. The damage of each component is modeled using a stiffness multiplier in the finite element model. By coupling the extracted features and the probabilistic structural model, the damage identification problem is recast to an equivalent optimization problem, which is iteratively solved using the cross-entropy optimization technique. An application example is used to demonstrate the proposed method and validate its effectiveness. Influencing factors such as the location of damaged components, measurement location, measurement noise level, and damage severity are studied. The detection reliability under different measurement noise levels is also discussed in detail.Entropy2017-01-17191Article10.3390/e19010027271099-43002017-01-17doi: 10.3390/e19010027Xuefei GuanYongxiang WangJingjing He<![CDATA[Entropy, Vol. 19, Pages 25: Similarity Theory Based Radial Turbine Performance and Loss Mechanism Comparison between R245fa and Air for Heavy-Duty Diesel Engine Organic Rankine Cycles]]>
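The cross-entropy optimization step used to solve the recast identification problem can be sketched generically; the toy one-parameter "damage" objective below is purely illustrative and is not the paper's finite element model:

```python
import random
import statistics

def cross_entropy_minimize(f, mu, sigma, n=200, n_elite=20, iters=60, seed=1):
    # Cross-entropy method: sample candidates from a Gaussian, keep the
    # elite (lowest-cost) samples, refit the Gaussian to them, and repeat
    # until the sampling distribution concentrates near the minimizer.
    rng = random.Random(seed)
    for _ in range(iters):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        samples.sort(key=f)
        elite = samples[:n_elite]
        mu = sum(elite) / n_elite
        sigma = statistics.pstdev(elite) + 1e-12  # tiny floor avoids collapse
    return mu

# Toy objective: recover a stiffness multiplier of 0.7 from a noiseless
# "measured" modal feature (hypothetical values, for illustration only)
k_hat = cross_entropy_minimize(lambda k: (k - 0.7) ** 2, mu=1.0, sigma=0.5)
print(k_hat)
```

In the paper's setting, the objective would instead measure the mismatch between extracted modal frequencies and those predicted by the finite element model with per-component stiffness multipliers.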
http://www.mdpi.com/1099-4300/19/1/25
Organic Rankine Cycles using radial turbines as expanders are considered one of the most efficient technologies to convert heavy-duty diesel engine waste heat into useful work. Turbine similarity design based on existing air turbine profiles is time-saving. Because organic fluids and air have totally different thermodynamic properties, their influence on turbine performance and loss mechanisms needs to be analyzed. This paper numerically simulates a radial turbine under similar conditions for R245fa and air, and compares the differences in turbine performance and loss mechanisms. The larger specific heat ratio of air leads to the air turbine operating at higher pressure ratios. As the R245fa gas constant is only about one-fifth of the air gas constant, the reduced rotating speeds of the R245fa turbine are only 0.4-fold those of the air turbine, and the reduced mass flow rates are about twice those of the air turbine. When using R245fa as the working fluid, the nozzle shock wave losses decrease but the rotor suction surface separation vortex losses increase, which eventually leads to isentropic efficiencies of the R245fa turbine that are 3%–4% lower than those of the air turbine in the commonly used velocity ratio range from 0.5 to 0.9.Entropy2017-01-14191Article10.3390/e19010025251099-43002017-01-14doi: 10.3390/e19010025Lei ZhangWeilin ZhugeYangjun ZhangTao Chen<![CDATA[Entropy, Vol. 19, Pages 34: Implementing Demons and Ratchets]]>
http://www.mdpi.com/1099-4300/19/1/34
Experimental results show that ratchets may be implemented in semiconductor and chemical systems, bypassing the second law and opening up huge gains in energy production. This paper summarizes or describes experiments and results on systems that effect demons and ratchets operating in chemical or electrical domains. One creates temperature differences that can be harvested by a heat engine. A second produces light with only heat input. A third produces harvestable electrical potential directly. These systems share creating particles in one location, destroying them in another and moving them between locations by diffusion (Brownian motion). All absorb ambient heat as they produce other energy forms. None requires an external hot and cold side. The economic and social impacts of these conversions of ambient heat to work are, of course, well-understood and huge. The experimental results beg for serious work on the chance that they are valid.Entropy2017-01-14191Article10.3390/e19010034341099-43002017-01-14doi: 10.3390/e19010034Peter OremFrank Orem<![CDATA[Entropy, Vol. 19, Pages 33: Heuristic Approach to Understanding the Accumulation Process in Hydrothermal Pores]]>
http://www.mdpi.com/1099-4300/19/1/33
One of the central questions of humankind is: which chemical and physical conditions are necessary to make life possible? In this “origin-of-life” context, formamide plays an important role, because it has been demonstrated that prebiotic molecules can be synthesized from concentrated formamide solutions. Recently, it could be shown, using finite-element calculations combining thermophoresis and convection processes in hydrothermal pores, that sufficiently high formamide concentrations could be accumulated to form prebiotic molecules (Niether et al., 2016). Depending on the initial formamide concentration, the aspect ratio of the pores, and the ambient temperature, formamide concentrations up to 85 wt % could be reached. The stationary calculations show effective accumulation only if the aspect ratio is above a certain threshold, and the corresponding transient studies display a sudden increase of the accumulation after a certain time. Neither observation was explained. In this work, we derive a simple heuristic model, which explains both phenomena. The physical idea of the approach is a comparison of the time to reach the top of the pore with the time to cross from the convective upstream towards the convective downstream. If the time to reach the top of the pore is shorter than the crossing time, the formamide molecules are flushed out of the pore. If the time is long enough, the formamide molecules can reach the downstream and accumulate at the bottom of the pore. Analysing the optimal aspect ratio as a function of concentration, we find that, at a weight fraction of w = 0.5, a minimal pore height is required for effective accumulation. At the same concentration, the transient calculations show a maximum of the accumulation rate.Entropy2017-01-13191Article10.3390/e19010033331099-43002017-01-13doi: 10.3390/e19010033Doreen NietherSimone Wiegand<![CDATA[Entropy, Vol.
19, Pages 31: Univariate and Multivariate Generalized Multiscale Entropy to Characterise EEG Signals in Alzheimer’s Disease]]>
http://www.mdpi.com/1099-4300/19/1/31
Alzheimer’s disease (AD) is a degenerative brain disorder leading to memory loss and changes in other cognitive abilities. The complexity of electroencephalogram (EEG) signals may help to characterise AD. To this end, we propose an extension of multiscale entropy based on variance (MSEσ²) to multichannel signals, termed multivariate MSEσ² (mvMSEσ²), to take into account both the spatial and time domains of time series. Then, we investigate the mvMSEσ² of EEGs at different frequency bands, including the broadband signals filtered between 1 and 40 Hz, θ, α, and β bands, and compare it with the previously-proposed multiscale entropy based on mean (MSEµ), multivariate MSEµ (mvMSEµ), and MSEσ², to distinguish different kinds of dynamical properties of the spread and the mean in the signals. Results from 11 AD patients and 11 age-matched controls suggest that the presence of broadband activity of EEGs is required for a proper evaluation of complexity. MSEσ² and mvMSEσ² results, showing a loss of complexity in AD signals, led to smaller p-values in comparison with MSEµ and mvMSEµ ones, suggesting that the variance-based MSE and mvMSE can characterise changes in EEGs as a result of AD in a more detailed way. The p-values for the slope values of the mvMSE curves were smaller than for MSE at large scale factors, also showing the possible usefulness of multivariate techniques.Entropy2017-01-12191Article10.3390/e19010031311099-43002017-01-12doi: 10.3390/e19010031Hamed AzamiDaniel AbásoloSamantha SimonsJavier Escudero<![CDATA[Entropy, Vol. 19, Pages 29: Local Entropy Generation in Compressible Flow through a High Pressure Turbine with Delayed Detached Eddy Simulation]]>
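The variance-based coarse-graining that distinguishes MSEσ² from the mean-based MSEµ can be sketched for a single channel (a simplified illustration; the full method then computes an entropy measure, such as sample entropy, on the coarse-grained series, and the multivariate extension operates across channels):

```python
def coarse_grain_variance(x, scale):
    # Variance-based coarse-graining: each non-overlapping window of length
    # `scale` is replaced by its population variance, so the multiscale
    # analysis tracks the dynamics of the spread rather than the mean.
    out = []
    for i in range(0, len(x) - scale + 1, scale):
        w = x[i:i + scale]
        m = sum(w) / scale
        out.append(sum((v - m) ** 2 for v in w) / scale)
    return out

print(coarse_grain_variance([1, 3, 1, 3, 1, 3], 2))
```

Replacing the window variance with the window mean recovers the conventional coarse-graining of MSEµ.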
http://www.mdpi.com/1099-4300/19/1/29
Gas turbines are important energy-converting equipment in many industries. The flow inside gas turbines is very complicated and the knowledge about the flow loss mechanism is critical to the advanced design. The current design system heavily relies on empirical formulas or Reynolds Averaged Navier–Stokes (RANS), which faces big challenges in dealing with highly unsteady complex flow and accurately predicting flow losses. Further improving the efficiency needs more insights into the loss generation in gas turbines. Conventional Unsteady Reynolds Averaged Simulation (URANS) methods have defects in modeling multi-frequency, multi-length, highly unsteady flow, especially when mixing or separation occurs, while Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) are too costly for the high-Reynolds number flow. In this work, the Delayed Detached Eddy Simulation (DDES) method is used with a low-dissipation numerical scheme to capture the detailed flow structures of the complicated flow in a high pressure turbine guide vane. DDES accurately predicts the wake vortex behavior and produces much more details than RANS and URANS. The experimental findings of the wake vortex length characteristics, which RANS and URANS fail to predict, are successfully captured by DDES. Accurate flow simulation builds up a solid foundation for accurate losses prediction. Based on the detailed DDES results, loss analysis in terms of entropy generation rate is conducted from two aspects. The first aspect is to apportion losses by its physical resources: viscous irreversibility and heat transfer irreversibility. The viscous irreversibility is found to be much stronger than the heat transfer irreversibility in the flow. The second aspect is weighing the contributions of steady effects and unsteady effects. Losses due to unsteady effects account for a large part of total losses. 
Effects of unsteadiness should not be neglected in the flow physics study and design process.Entropy2017-01-11191Article10.3390/e19010029291099-43002017-01-11doi: 10.3390/e19010029Dun LinXin YuanXinrong Su<![CDATA[Entropy, Vol. 19, Pages 30: Comparing Relational and Ontological Triple Stores in Healthcare Domain]]>
http://www.mdpi.com/1099-4300/19/1/30
Today’s technological improvements have enabled ubiquitous healthcare systems that converge into smart healthcare applications in order to solve patients’ problems, to communicate effectively with patients, and to improve healthcare service quality. The first step in building a smart healthcare information system is representing the healthcare data as connected, reachable, and sharable. In order to achieve this representation, ontologies are used to describe the healthcare data. Ontological healthcare data, combined with the data used and obtained, can be maintained by storing the entire health domain data inside big data stores that support both relational and graph-based ontological data. There are several big data stores and different types of big data sets in the healthcare domain. The goal of this paper is to determine the most applicable ontology data store for storing big healthcare data. For this purpose, the AllegroGraph and Oracle 12c data stores are compared based on their infrastructural capacity, loading time, and query response times. Hence, healthcare ontologies (GENE Ontology, Gene Expression Ontology (GEXO), Regulation of Transcription Ontology (RETO), Regulation of Gene Expression Ontology (REXO)) are used to measure the ontology loading time. Thereafter, various queries are constructed and executed for the GENE Ontology in order to measure the capacity and query response times for the performance comparison between the AllegroGraph and Oracle 12c triple stores.Entropy2017-01-11191Technical Note10.3390/e19010030301099-43002017-01-11doi: 10.3390/e19010030Ozgu CanEmine SezerOkan BursaMurat Unalir<![CDATA[Entropy, Vol. 19, Pages 26: Face Detection Based on Skin Color Segmentation Using Fuzzy Entropy]]>
http://www.mdpi.com/1099-4300/19/1/26
Face detection is the first step of any automated face recognition system. One of the most popular approaches to detect faces in color images is using a skin color segmentation scheme, which in many cases needs a proper representation of color spaces to interpret image information. In this paper, we propose a fuzzy system for detecting skin in color images, so that each color tone is assumed to be a fuzzy set. The Red, Green, and Blue (RGB), the Hue, Saturation and Value (HSV), and the YCbCr (where Y is the luminance and Cb,Cr are the chroma components) color systems are used for the development of our fuzzy design. Thus, a fuzzy three-partition entropy approach is used to calculate all of the parameters needed for the fuzzy systems, and then, a face detection method is also developed to validate the segmentation results. The results of the experiments show a correct skin detection rate between 94% and 96% for our fuzzy segmentation methods, with a false positive rate of about 0.5% in all cases. Furthermore, the average correct face detection rate is above 93%, and even when working with heterogeneous backgrounds and different light conditions, it achieves almost 88% correct detections. Thus, our method leads to accurate face detection results with low false positive and false negative rates.Entropy2017-01-11191Article10.3390/e19010026261099-43002017-01-11doi: 10.3390/e19010026Francisco PujolMar PujolAntonio Jimeno-MorenillaMaría Pujol<![CDATA[Entropy, Vol. 19, Pages 28: Acknowledgement to Reviewers of Entropy in 2016]]>
http://www.mdpi.com/1099-4300/19/1/28
The editors of Entropy would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016.[...]Entropy2017-01-11191Editorial10.3390/e19010028281099-43002017-01-11doi: 10.3390/e19010028 Entropy Editorial Office<![CDATA[Entropy, Vol. 19, Pages 22: Research Entropy Complexity about the Nonlinear Dynamic Delay Game Model]]>
http://www.mdpi.com/1099-4300/19/1/22
Based on the research of domestic and foreign scholars, this paper improves and establishes a duopoly market model of renewable energy, and analyzes the complex dynamic characteristics of the system based on entropy theory and chaos theory, such as the equilibrium point, stability, Hopf bifurcation conditions, etc. This paper also studies and simulates the effects of the natural growth rate of energy and of a single delay decision on the renewable energy system by minimizing the entropy of the system and reducing system instability to a minimum, so that the degree of disorder within the system is reduced. The results show that with the increase of the natural growth rate of energy, the stability of the system is not affected, but the market demand of oligopoly 1 gradually decreases and the market demand of oligopoly 2 gradually increases. At the same time, a single oligopoly making a time delay decision will affect the stability of both oligopolies. With the increase of the delay, the time required to reach the stable state grows, and the system eventually enters a Hopf bifurcation; thus the entropy of the system increases and it falls into an unstable state. Therefore, in the actual renewable energy market, oligopolies should pay attention to the natural growth rate of energy and to time delays, ensuring the stability of the game process and the orderliness of the system.Entropy2017-01-09191Article10.3390/e19010022221099-43002017-01-09doi: 10.3390/e19010022Xueli ZhanJunhai MaWenbo Ren<![CDATA[Entropy, Vol. 19, Pages 23: Use of Information Measures and Their Approximations to Detect Predictive Gene-Gene Interaction]]>
http://www.mdpi.com/1099-4300/19/1/23
We reconsider the properties and relationships of the interaction information and its modified versions in the context of detecting the interaction of two SNPs for the prediction of a binary outcome when interaction information is positive. This property is called predictive interaction, and we state some new sufficient conditions for it to hold true. We also study chi square approximations to these measures. It is argued that interaction information is a different and sometimes more natural measure of interaction than the logistic interaction parameter especially when SNPs are dependent. We introduce a novel measure of predictive interaction based on interaction information and its modified version. In numerical experiments, which use copulas to model dependence, we study examples when the logistic interaction parameter is zero or close to zero for which predictive interaction is detected by the new measure, while it remains undetected by the likelihood ratio test.Entropy2017-01-07191Article10.3390/e19010023231099-43002017-01-07doi: 10.3390/e19010023Jan MielniczukMarcin Rdzanowski<![CDATA[Entropy, Vol. 19, Pages 24: Constructing a Measurement Method of Differences in Group Preferences Based on Relative Entropy]]>
http://www.mdpi.com/1099-4300/19/1/24
In research and data analysis of differences in group preferences, conventional statistical methods cannot reflect the integrity and preferences of human minds; in particular, it is difficult to exclude humans’ irrational factors. This paper introduces a preference amount model based on relative entropy theory. A related expansion is made based on the characteristics of the questionnaire data, and we construct a parameter to measure overall differences in the data distribution of different groups. In this paper, this parameter is called the center distance, and it effectively reflects the preferences of human minds. Using survey data from securities market participants as an example, this paper analyzes differences in market participants’ attitudes toward the effectiveness of securities regulation. Based on this method, differences between groups that were overlooked by analysis of variance are found, and certain aspects obscured by general data characteristics are also found.Entropy2017-01-06191Article10.3390/e19010024241099-43002017-01-06doi: 10.3390/e19010024Shiyu ZhangWenzhi LiuQin HeXuguang Hao<![CDATA[Entropy, Vol. 19, Pages 6: Misalignment Fault Diagnosis of DFWT Based on IEMD Energy Entropy and PSO-SVM]]>
http://www.mdpi.com/1099-4300/19/1/6
Misalignment is an important cause of early failure in large doubly-fed wind turbines (DFWT). Given the non-stationary characteristics of the signals in the transmission system of the DFWT and the difficulty of obtaining a large number of fault samples, Solidworks and Adams are used to simulate the different operating conditions of the transmission system of the DFWT and to obtain the corresponding characteristic signals. Improved empirical mode decomposition (IEMD), which mitigates the end effects of empirical mode decomposition (EMD), is used to decompose the signals into intrinsic mode functions (IMFs), and the IEMD energy entropy, which reflects the working state, is extracted as the input of a support vector machine (SVM). Particle swarm optimization (PSO) is used to optimize the parameters of the SVM to improve the classification performance. The results show that the proposed method can effectively and accurately identify the types of misalignment of the DFWT.Entropy2017-01-01191Article10.3390/e1901000661099-43002017-01-01doi: 10.3390/e19010006Yancai XiaoNa KangYi HongGuangjian Zhang<![CDATA[Entropy, Vol. 19, Pages 21: Perturbative Treatment of the Non-Linear q-Schrödinger and q-Klein–Gordon Equations]]>
http://www.mdpi.com/1099-4300/19/1/21
Interesting non-linear generalizations of both Schrödinger’s and Klein–Gordon’s equations have recently been advanced by Nobre, Rego-Monteiro and Tsallis (NRT) in Nobre et al. (Phys. Rev. Lett. 2011, 106, 140601). There is much current activity going on in this area. The non-linearity is governed by a real parameter q. Empirical hints suggest that the ensuing non-linear q-Schrödinger and q-Klein–Gordon equations are natural manifestations of very high energy phenomena, as verified in LHC experiments. This happens for q-values close to unity (Plastino et al. (Nucl. Phys. A 2016, 955, 16–26; Nucl. Phys. A 2016, 948, 19–27)). It might thus be difficult, for q-values close to unity, to ascertain whether one is dealing with solutions to the ordinary Schrödinger equation (whose free particle solutions are exponentials and for which q = 1) or with its NRT non-linear q-generalizations, whose free particle solutions are q-exponentials. In this work, we provide a careful analysis of the q ∼ 1 instance via a perturbative analysis of the NRT equations.Entropy2016-12-31191Article10.3390/e19010021211099-43002016-12-31doi: 10.3390/e19010021Javier ZamoraMario RoccaAngelo PlastinoGustavo Ferri<![CDATA[Entropy, Vol. 19, Pages 20: Nonlinear Relaxation Phenomena in Metastable Condensed Matter Systems]]>
http://www.mdpi.com/1099-4300/19/1/20
Nonlinear relaxation phenomena in three different systems of condensed matter are investigated. (i) First, the phase dynamics in Josephson junctions is analyzed. Specifically, a superconductor-graphene-superconductor (SGS) system exhibits quantum metastable states, and the average escape time from these metastable states in the presence of Gaussian and correlated fluctuations is calculated, accounting for variations in the noise source intensity and the bias frequency. Moreover, the transient dynamics of a long-overlap Josephson junction (JJ) subject to thermal fluctuations and non-Gaussian noise sources is investigated. Noise-induced phenomena are observed, such as noise-enhanced stability and stochastic resonant activation. (ii) Second, the electron spin relaxation process in an n-type GaAs bulk driven by a fluctuating electric field is investigated. In particular, by using a Monte Carlo approach, we study the influence of a random telegraph noise on the spin polarized transport. Our findings show the possibility to raise the spin relaxation length by increasing the amplitude of the external fluctuations. Moreover, we find that, crucially, depending on the value of the external field strength, the electron spin depolarization length versus the noise correlation time increases up to a plateau. (iii) Finally, the stabilization of quantum metastable states by dissipation is presented. Normally, quantum fluctuations enhance the escape from metastable states in the presence of dissipation. We show that dissipation can enhance the stability of a quantum metastable system, consisting of a particle moving in a strongly asymmetric double well potential, interacting with a thermal bath. 
We find that the escape time from the metastable region has a nonmonotonic behavior versus the system–bath coupling and the temperature, producing a stabilizing effect.Entropy2016-12-31191Article10.3390/e19010020201099-43002016-12-31doi: 10.3390/e19010020Bernardo SpagnoloClaudio GuarcelloLuca MagazzùAngelo CarolloDominique Persano AdornoDavide Valenti<![CDATA[Entropy, Vol. 19, Pages 19: Thermal Conductivity of Suspension of Aggregating Nanometric Rods]]>
http://www.mdpi.com/1099-4300/19/1/19
Enhancing the thermal conductivity of simple fluids is of major interest in numerous applicative systems. One possibility for enhancing thermal properties consists of dispersing small conductive particles in the fluid. In general, however, aggregation effects occur, and one must then address systems ranging from dispersed clusters of particles to percolated networks. This paper analyzes the conductivity enhancement of different microstructures, scaling from clusters dispersed in a simple matrix to percolated networks exhibiting a fractal morphology.Entropy2016-12-31191Article10.3390/e19010019191099-43002016-12-31doi: 10.3390/e19010019Amine AmmarFrancisco ChinestaRodolphe Heyd<![CDATA[Entropy, Vol. 19, Pages 18: Information and Self-Organization]]>
http://www.mdpi.com/1099-4300/19/1/18
The process of “self-organization” takes place in open and complex systems that acquire spatio-temporal or functional structures without specific ordering instructions from the outside. [...]Entropy2016-12-31191Editorial10.3390/e19010018181099-43002016-12-31doi: 10.3390/e19010018Hermann HakenJuval Portugali<![CDATA[Entropy, Vol. 19, Pages 14: A New Feature Extraction Method Based on EEMD and Multi-Scale Fuzzy Entropy for Motor Bearing]]>
http://www.mdpi.com/1099-4300/19/1/14
Feature extraction is one of the most important, pivotal, and difficult problems in mechanical fault diagnosis, directly affecting the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called EDOMFE, which integrates ensemble empirical mode decomposition (EEMD), mode selection, and multi-scale fuzzy entropy, is proposed in this paper to accurately diagnose faults. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs) with different physical significance. Correlation coefficient analysis is used to determine the three improved IMFs that are closest to the original signal. Multi-scale fuzzy entropy, with its ability to effectively distinguish the complexity of different signals, is used to calculate the entropy values of the three selected IMFs in order to form a feature vector with the complexity measure, which serves as the input of a support vector machine (SVM) model for training and constructing an SVM classifier (EOMSMFD, based on EDOMFE and SVM) for fault pattern recognition. Finally, the effectiveness of the proposed method is validated with real bearing vibration signals of a motor under different loads and fault severities. The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and that the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner race fault, the outer race fault, and the rolling element fault of the motor bearing. Therefore, the proposed method provides a new fault diagnosis technology for rotating machinery.Entropy2016-12-31191Article10.3390/e19010014141099-43002016-12-31doi: 10.3390/e19010014Huimin ZhaoMeng SunWu DengXinhua Yang<![CDATA[Entropy, Vol. 19, Pages 17: The Information Recovery Problem]]>
http://www.mdpi.com/1099-4300/19/1/17
The issue of unitary evolution during creation and evaporation of a black hole remains controversial. We argue that some prominent cures are more troubling than the disease, demonstrate that their central element—forming of the event horizon before the evaporation begins—is not necessarily true, and describe a fully coupled matter-gravity system which is manifestly unitary.Entropy2016-12-30191Article10.3390/e19010017171099-43002016-12-30doi: 10.3390/e19010017Valentina BaccettiViqar HusainDaniel Terno<![CDATA[Entropy, Vol. 19, Pages 16: One-Parameter Fisher–Rényi Complexity: Notion and Hydrogenic Applications]]>
http://www.mdpi.com/1099-4300/19/1/16
In this work, the one-parameter Fisher–Rényi measure of complexity for general d-dimensional probability distributions is introduced and its main analytic properties are discussed. Then, this quantity is determined for the hydrogenic systems in terms of the quantum numbers of the quantum states and the nuclear charge.Entropy2016-12-30191Article10.3390/e19010016161099-43002016-12-30doi: 10.3390/e19010016Irene ToranzoPablo Sánchez-MorenoŁukasz RudnickiJesús Dehesa<![CDATA[Entropy, Vol. 19, Pages 15: Humans Outperform Machines at the Bilingual Shannon Game]]>
http://www.mdpi.com/1099-4300/19/1/15
We provide an upper bound for the amount of information a human translator adds to an original text, i.e., how many bits of information we need to store a translation, given the original. We do this by creating a Bilingual Shannon Game that elicits character guesses from human subjects, then developing models to estimate the entropy of those guess sequences.Entropy2016-12-30191Article10.3390/e19010015151099-43002016-12-30doi: 10.3390/e19010015Marjan GhazvininejadKevin Knight<![CDATA[Entropy, Vol. 19, Pages 13: A Comparative Study of Empirical Mode Decomposition-Based Filtering for Impact Signal]]>
http://www.mdpi.com/1099-4300/19/1/13
The Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) has been used to propose a new method for filtering time series originating from nonlinear systems. The filtering method is based on fuzzy entropy and a new waveform, defined as follows: the Intrinsic Mode Functions (IMFs) obtained by the CEEMDAN algorithm are first sorted in ascending order (the sorted sequence is symmetric about its center point, because at any point the mean value of the envelope defined by the local maxima and the local minima is zero), and the energies of the sorted IMFs are calculated. Finally, the new waveform with axial symmetry is obtained. The complexity of the new waveform can be quantified by fuzzy entropy. The relevant modes (noisy signal modes and useful signal modes) can be identified by the difference between the fuzzy entropy of one new waveform and that of the next adjacent new waveform. To evaluate the filter performance, CEEMDAN with sample entropy, Ensemble Empirical Mode Decomposition (EEMD) with fuzzy entropy, and EEMD with sample entropy were used to filter synthesized signals with various levels of input signal-to-noise ratio (SNRin). In particular, this approach is successful in filtering impact signals. The filtering results are evaluated by a de-trended fluctuation analysis (DFA) algorithm, the revised mean square error (RMSE), and the revised signal-to-noise ratio (RSNR), respectively. The filtering results for simulated and impact signals show that the filtering method based on CEEMDAN and fuzzy entropy outperforms the other signal filtering methods.Entropy2016-12-29191Article10.3390/e19010013131099-43002016-12-29doi: 10.3390/e19010013Liwei ZhanChengwei Li<![CDATA[Entropy, Vol. 19, Pages 12: An Urban Cellular Automata Model for Simulating Dynamic States on a Local Scale]]>
http://www.mdpi.com/1099-4300/19/1/12
In complex systems, flexibility and adaptability to changes are crucial to the systems’ dynamic stability and evolution. Such resilience requires that the system is able to respond to disturbances by self-organizing, which implies a certain level of entropy within the system. Dynamic states (static, cyclical/periodic, complex, and chaotic) reflect this generative capacity, and correlate with the level of entropy. For planning complex cities, we need to develop methods to guide such autonomous progress in an optimal manner. A classical apparatus, the cellular automaton (CA), provides such a tool. Applications of CA help us to study temporal dynamics in self-organizing urban systems. By exploring the dynamic states resulting from different border conditions, it is possible to discover favorable set(s) of rules conducive to self-organizing dynamics and to enable the system’s recovery in times of crisis. The level of entropy is a relevant measure for evaluating these dynamic states. The 2-D urban cellular automaton model studied here is based on the microeconomic principle that similar urban activities are attracted to each other, especially in certain self-organizing areas, and that the local dynamics of these enclaves affect the dynamics of the urban region by channeling flows of information, goods and people. The results of the modeling experiment indicate that the border conditions have a major impact on the model’s dynamics, generating various dynamic states of the system. Most importantly, the model could simulate a favorable, complex dynamic state with a medium entropy level, which may refer to the continuous self-organization of the system. 
The model provides a tool for exploring and understanding the effects of boundary conditions in the planning process as various scenarios are tested: resulting dynamics of the system can be explored with such “planning rules” prior to decisions, helping to identify planning guidelines that will support the future evolution of these areas.Entropy2016-12-28191Article10.3390/e19010012121099-43002016-12-28doi: 10.3390/e19010012Jenni Partanen<![CDATA[Entropy, Vol. 19, Pages 11: A Cloud Theory-Based Trust Computing Model in Social Networks]]>
http://www.mdpi.com/1099-4300/19/1/11
How to develop a trust management model and then efficiently control and manage nodes is an important issue in social network security. In this paper, a trust management model based on a cloud model is proposed. The cloud model uses a specific computation operator to achieve the transformation from qualitative concepts to quantitative computation, and can also effectively express the fuzziness and randomness of subjective trust, as well as the relationship between them. Node trust is divided into reputation trust and transaction trust, and evaluation methods are designed for each. First, a two-dimensional trust cloud evaluation model is designed, based on a node’s comprehensive and trading experience, to determine the reputation trust. The expected value reflects the average trust status of nodes, while entropy and hyper-entropy describe the uncertainty of trust. Second, the proposed calculation methods for direct transaction trust and recommendation transaction trust comprehensively compute the transaction trust of each node. Then, trading strategies are designed for nodes based on the trust cloud. Finally, the results of a simulation experiment on P2P network file sharing on an experimental platform directly reflect the objectivity, accuracy and robustness of the proposed model, which can also effectively identify malicious or unreliable service nodes in the system and promote the service reliability of nodes with high credibility, thereby improving the stability of the whole network.Entropy2016-12-28191Article10.3390/e19010011111099-43002016-12-28doi: 10.3390/e19010011Fengming LiuXiaoqian ZhuYuxi HuLehua RenHenric Johnson<![CDATA[Entropy, Vol. 19, Pages 10: Entropy Generation in Magnetohydrodynamic Mixed Convection Flow over an Inclined Stretching Sheet]]>
http://www.mdpi.com/1099-4300/19/1/10
This research focuses on the entropy generation rate per unit volume in magneto-hydrodynamic (MHD) mixed convection boundary layer flow of a viscous fluid over an inclined stretching sheet. The analysis has been performed in the presence of viscous dissipation and non-isothermal boundary conditions. The governing boundary layer equations are transformed into ordinary differential equations by an appropriate similarity transformation. The transformed coupled nonlinear ordinary differential equations are then solved numerically by a shooting technique along with the Runge–Kutta method. Expressions for the entropy generation number (Ns) and the Bejan number (Be) in terms of dimensionless variables are also obtained. The impact of various physical parameters on the quantities of interest is examined.Entropy2016-12-28191Article10.3390/e19010010101099-43002016-12-28doi: 10.3390/e19010010Muhammad AfridiMuhammad QasimIlyas KhanSharidan ShafieAli Alshomrani<![CDATA[Entropy, Vol. 19, Pages 9: A Dissipation of Relative Entropy by Diffusion Flows]]>
http://www.mdpi.com/1099-4300/19/1/9
Given a probability measure, we consider the diffusion flows of probability measures associated with the partial differential equation (PDE) of Fokker–Planck. Our flows of the probability measures are defined as the solution of the Fokker–Planck equation for the same strictly convex potential, which means that the flows have the same equilibrium. Then, we shall investigate the time derivative for the relative entropy in the case where the object and the reference measures are moving according to the above diffusion flows, from which we can obtain a certain dissipation formula and also an integral representation of the relative entropy.Entropy2016-12-27191Article10.3390/e1901000991099-43002016-12-27doi: 10.3390/e19010009Hiroaki Yoshida<![CDATA[Entropy, Vol. 19, Pages 8: Active and Purely Dissipative Nambu Systems in General Thermostatistical Settings Described by Nonlinear Partial Differential Equations Involving Generalized Entropy Measures]]>
http://www.mdpi.com/1099-4300/19/1/8
Several attempts have been made to apply the concepts and tools of physics to the life sciences. In this context, a thermostatistic framework for active Nambu systems is proposed. The so-called free energy Fokker–Planck equation approach is used to describe stochastic aspects of active Nambu systems. Different thermostatistic settings are considered that are characterized by appropriately-defined entropy measures, such as the Boltzmann–Gibbs–Shannon entropy and the Tsallis entropy. In general, the free energy Fokker–Planck equations associated with these generalized entropy measures correspond to nonlinear partial differential equations. Irrespective of the entropy-related nonlinearities occurring in these nonlinear partial differential equations, it is shown that semi-analytical solutions for the stationary probability densities of the active Nambu systems can be obtained provided that the pumping mechanisms of the active systems assume the so-called canonical-dissipative form and depend explicitly only on Nambu invariants. Applications are presented both for purely-dissipative and for active systems, illustrating that the proposed framework includes stochastic equilibrium systems as a special case.Entropy2016-12-27191Article10.3390/e1901000881099-43002016-12-27doi: 10.3390/e19010008T. Frank<![CDATA[Entropy, Vol. 19, Pages 7: A Sequence of Escort Distributions and Generalizations of Expectations on q-Exponential Family]]>
http://www.mdpi.com/1099-4300/19/1/7
In the theory of complex systems, long tailed probability distributions are often discussed. For such a probability distribution, a deformed expectation with respect to an escort distribution is more useful than the standard expectation. In this paper, by generalizing such escort distributions, a sequence of escort distributions is introduced. As a consequence, it is shown that deformed expectations with respect to sequential escort distributions effectively work for anomalous statistics. In particular, it is shown that a Fisher metric on a q-exponential family can be obtained from the escort expectation with respect to the second escort distribution, and a cubic form (or an Amari–Chentsov tensor field, equivalently) is obtained from the escort expectation with respect to the third escort distribution.Entropy2016-12-25191Article10.3390/e1901000771099-43002016-12-25doi: 10.3390/e19010007Hiroshi Matsuzoe<![CDATA[Entropy, Vol. 19, Pages 5: Information Decomposition in Multivariate Systems: Definitions, Implementation and Application to Cardiovascular Networks]]>
http://www.mdpi.com/1099-4300/19/1/5
The continuously growing framework of information dynamics encompasses a set of tools, rooted in information theory and statistical physics, which allow one to quantify different aspects of the statistical structure of multivariate processes reflecting the temporal dynamics of complex networks. Building on the most recent developments in this field, this work designs a complete approach to dissect the information carried by the target of a network of multiple interacting systems into the new information produced by the system, the information stored in the system, and the information transferred to it from the other systems; information storage and transfer are then further decomposed into amounts eliciting the specific contribution of assigned source systems to the target dynamics, and amounts reflecting information modification through the balance between redundant and synergetic interaction between systems. These decompositions are formulated quantifying information either as the variance or as the entropy of the investigated processes, and their exact computation for the case of linear Gaussian processes is presented. The theoretical properties of the resulting measures are first investigated in simulations of vector autoregressive processes. Then, the measures are applied to assess information dynamics in cardiovascular networks from the variability series of heart period, systolic arterial pressure and respiratory activity measured in healthy subjects during supine rest, orthostatic stress, and mental stress. 
Our results document the importance of combining the assessment of information storage, transfer and modification to investigate common and complementary aspects of network dynamics; suggest the higher specificity to alterations in the network properties of the measures derived from the decompositions; and indicate that measures of information transfer and information modification are better assessed, respectively, through entropy-based and variance-based implementations of the framework.Entropy2016-12-24191Article10.3390/e1901000551099-43002016-12-24doi: 10.3390/e19010005Luca FaesAlberto PortaGiandomenico NolloMichal Javorka<![CDATA[Entropy, Vol. 19, Pages 3: Echo State Condition at the Critical Point]]>
http://www.mdpi.com/1099-4300/19/1/3
Recurrent networks with transfer functions that fulfil Lipschitz continuity with constant K = 1 may be echo state networks if certain limitations on the recurrent connectivity are applied. It has been shown that it is sufficient if the largest singular value of the recurrent connectivity is smaller than 1. The main achievement of this paper is a proof of the conditions under which the network is an echo state network even if the largest singular value is one. It turns out that in this critical case the exact shape of the transfer function plays a decisive role in determining whether the network still fulfills the echo state condition. In addition, several examples with one-neuron networks are outlined to illustrate the effects of critical connectivity. Moreover, a mathematical definition of a critical echo state network is suggested within the manuscript.Entropy2016-12-23191Article10.3390/e1901000331099-43002016-12-23doi: 10.3390/e19010003Norbert Mayer<![CDATA[Entropy, Vol. 19, Pages 4: Quantum Key Distribution in the Presence of the Intercept-Resend with Faked States Attack]]>
http://www.mdpi.com/1099-4300/19/1/4
Despite the unconditional security of Quantum Key Distribution (QKD) in theory, several attacks have been successfully implemented against commercial QKD systems. Those systems have exhibited flaws in which the secret key rate of the corresponding protocols remains unaltered while the eavesdropper obtains the entire secret key. We propose the negative acknowledgment state quantum key distribution protocol as a novel protocol capable of detecting the eavesdropping activity of the Intercept-Resend with Faked States (IRFS) attack without requiring optical components beyond those of the BB84 protocol, because the system can be implemented as a software module. In this approach, the transmitter interleaves pairs of quantum states, referred to here as parallel and orthogonal states, while the receiver uses active basis selection.Entropy2016-12-23191Article10.3390/e1901000441099-43002016-12-23doi: 10.3390/e19010004Luis Lizama-PérezJosé LópezEduardo De Carlos López<![CDATA[Entropy, Vol. 19, Pages 2: A Multivariate Multiscale Fuzzy Entropy Algorithm with Application to Uterine EMG Complexity Analysis]]>
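For context, the BB84 baseline the proposal builds on works by basis sifting: the receiver's active basis selection only yields usable key bits at positions where the two randomly chosen bases coincide. A classical simulation of that sifting step (all names and sizes are illustrative, not part of the proposed protocol):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 32
# Transmitter: random key bits, each encoded in a randomly chosen basis
# (0 = rectilinear, 1 = diagonal).
bits = rng.integers(0, 2, n)
bases_tx = rng.integers(0, 2, n)
# Receiver: active (random) basis selection, as in the abstract.
bases_rx = rng.integers(0, 2, n)

# Sifting: keep only the positions where the bases match; absent
# eavesdropping these bits agree and form the raw key.
match = bases_tx == bases_rx
sifted_key = bits[match]
print(len(sifted_key), "of", n, "bits survive sifting")
```

On average half of the positions survive; an intercept-resend attacker who guesses bases introduces detectable errors into this sifted key, which is the statistic the faked-states attack is designed to hide.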
http://www.mdpi.com/1099-4300/19/1/2
The recently introduced multivariate multiscale entropy (MMSE) has been successfully used to quantify structural complexity in terms of nonlinear within- and cross-channel correlations as well as to reveal complex dynamical couplings and various degrees of synchronization over multiple scales in real-world multichannel data. However, the applicability of MMSE is limited by the coarse-graining process that defines the scales, as it successively reduces the data length at each scale and thus yields inaccurate or undefined entropy estimates at higher scales and for short data records. To address this limitation, we propose the multivariate multiscale fuzzy entropy (MMFE) algorithm and demonstrate its superiority over MMSE on both synthetic and real-world short-duration uterine electromyography (EMG) signals. Based on MMFE features, an improvement in the classification accuracy of term-preterm deliveries was achieved, with a maximum area under the curve (AUC) value of 0.99.Entropy2016-12-22191Article10.3390/e1901000221099-43002016-12-22doi: 10.3390/e19010002Mosabber AhmedTheerasak ChanwimalueangSudhin ThayyilDanilo Mandic<![CDATA[Entropy, Vol. 19, Pages 1: Maximum Entropy Models for Quantum Systems]]>
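The coarse-graining step blamed for the data-length problem is simply non-overlapping block averaging, so a series of length N shrinks to length N/scale. A minimal sketch of that step (function name is illustrative):

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averaging that defines the scales in multiscale
    entropy. The output has len(x) // scale samples, which is exactly
    why entropy estimates degrade at large scales on short records."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

x = np.arange(12, dtype=float)
print(coarse_grain(x, 3))  # means of consecutive non-overlapping triples
```

At scale 20, a 2000-sample recording leaves only 100 points for the entropy estimator, which motivates replacing the hard similarity criterion with the fuzzy membership function used in MMFE.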
http://www.mdpi.com/1099-4300/19/1/1
We show that for a finite von Neumann algebra, the states that maximise Segal’s entropy with a given energy level are Gibbs states. This is a counterpart of the classical result for the algebra of all bounded linear operators on a Hilbert space and von Neumann entropy.Entropy2016-12-22191Article10.3390/e1901000111099-43002016-12-22doi: 10.3390/e19010001Andrzej ŁuczakHanna PodsędkowskaMichał Seweryn<![CDATA[Entropy, Vol. 18, Pages 457: Monitoring Test for Stability of Dependence Structure in Multivariate Data Based on Copula]]>
http://www.mdpi.com/1099-4300/18/12/457
In this paper, we consider a sequential monitoring procedure for detecting changes in a copula function. We propose a CUSUM-type monitoring test based on the empirical copula function and apply it to the detection of distributional changes in the copula function. We investigate the asymptotic properties of the stopping time and show that, under regularity conditions, its limiting null distribution is that of the supremum of a Kiefer process. Moreover, we utilize the bootstrap method to obtain the limiting distribution. A simulation study and a real data analysis are conducted to evaluate our test.Entropy2016-12-211812Article10.3390/e181204574571099-43002016-12-21doi: 10.3390/e18120457Jiyeon LeeByungsoo Kim<![CDATA[Entropy, Vol. 18, Pages 456: A Possible Application of the Contribution of Aromaticity to Entropy: Thermal Switch]]>
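The stopping-time logic of a CUSUM-type monitor can be illustrated on a simpler statistic than the empirical copula. The sketch below detects a mean shift instead; the target value, threshold, and change point are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def cusum_monitor(stream, target, threshold):
    """One-sided CUSUM: raise an alarm when the cumulative positive
    drift away from `target` exceeds `threshold`. The paper's test
    replaces this drift statistic with one built from the empirical
    copula, but the sequential stopping rule is analogous."""
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + (x - target))
        if s > threshold:
            return t  # stopping time: change declared
    return None  # monitoring period ends with no alarm

rng = np.random.default_rng(1)
# Mean shifts from 0 to 2 at observation 200.
stream = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
alarm_at = cusum_monitor(stream, target=0.5, threshold=10.0)
print(alarm_at)
```

Before the change the drift term is negative on average, so the statistic hovers near zero; after the change it climbs steadily, and the alarm fires shortly after observation 200.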
http://www.mdpi.com/1099-4300/18/12/456
It has long been known that the loss of aromaticity of gaseous molecules leads to a large increase of the enthalpy and to a tiny increase of the entropy. Generally, the calculated transition temperature from an aromatic structure towards a non-aromatic structure, at which these two contributions cancel, is very high. The entropy associated with the loss of aromaticity of adsorbed molecules, such as pyridine on Si(100) and on Ge(100), is roughly the same, while the associated enthalpy is much smaller, a consequence of which is a low transition temperature. This allows us to imagine monomolecular devices, such as thermal switches, based on the difference in electrical conductivity between aromatic and non-aromatic species adsorbed on Si(100) or on Ge(100).Entropy2016-12-201812Article10.3390/e181204564561099-43002016-12-20doi: 10.3390/e18120456Romain CoustelStéphane CarniatoGérard Boureau<![CDATA[Entropy, Vol. 18, Pages 454: Grey Coupled Prediction Model for Traffic Flow with Panel Data Characteristics]]>
http://www.mdpi.com/1099-4300/18/12/454
This paper studies the grey coupled prediction problem of traffic data with panel data characteristics. Traffic flow data collected continuously at the same site typically has panel data characteristics. The longitudinal data (daily flow) is time-series data, which show an obvious intra-day trend and can be predicted using the autoregressive integrated moving average (ARIMA) model. The cross-sectional data is composed of observations at the same time intervals on different days and shows weekly seasonality and limited data characteristics; this data can be predicted using the rolling seasonal grey model (RSDGM(1,1)). The length of the rolling sequence is determined using matrix perturbation analysis. Then, a coupled model is established based on the ARIMA and RSDGM(1,1) models; the coupled prediction is achieved at the intersection of the time-series data and cross-sectional data, and the weights are determined using grey relational analysis. Finally, numerical experiments on 16 groups of cross-sectional data show that the RSDGM(1,1) model has good adaptability and stability and can effectively predict changes in traffic flow. The performance of the coupled model is also better than that of the benchmark model, the coupled model with equal weights and the Bayesian combination model.Entropy2016-12-201812Article10.3390/e181204544541099-43002016-12-20doi: 10.3390/e18120454Jinwei YangXinping XiaoShuhua MaoCongjun RaoJianghui Wen<![CDATA[Entropy, Vol. 18, Pages 455: Rényi Divergences, Bures Geometry and Quantum Statistical Thermodynamics]]>
http://www.mdpi.com/1099-4300/18/12/455
The Bures geometry of quantum statistical thermodynamics at thermal equilibrium is investigated by introducing the connections between the Bures angle and the Rényi 1/2-divergence. Fundamental relations concerning free energy, moments of work, and distance are established.Entropy2016-12-191812Article10.3390/e181204554551099-43002016-12-19doi: 10.3390/e18120455Ali HardalÖzgür Müstecaplıoğlu<![CDATA[Entropy, Vol. 18, Pages 453: Entropic Citizenship Behavior and Sustainability in Urban Organizations: Towards a Theoretical Model]]>
http://www.mdpi.com/1099-4300/18/12/453
Entropy is a concept derived from physics that has been used to describe the structure and behavior of natural and social systems. Applications of the concept in the social sciences have so far been largely limited to the disciplines of economics and sociology. In the current paper, the concept of entropy is applied to organizational citizenship behavior, with implications for urban organizational sustainability. A heuristic is presented for analysing the personal and organizational citizenship configurations and distributions within a given workforce that can lead to corporate entropy, and for prescribing remedial steps to manage the process should entropy from this source threaten the organization's sustainability and survival.Entropy2016-12-191812Article10.3390/e181204534531099-43002016-12-19doi: 10.3390/e18120453David Coldwell<![CDATA[Entropy, Vol. 18, Pages 452: Ranking DMUs by Comparing DEA Cross-Efficiency Intervals Using Entropy Measures]]>
http://www.mdpi.com/1099-4300/18/12/452
Cross-efficiency evaluation, an extension of data envelopment analysis (DEA), can eliminate unrealistic weighting schemes and provide a ranking of decision-making units (DMUs). In the literature, the problem of uniquely determining input and output weights has received considerable attention. However, the problem of choosing between the aggressive (minimal) and benevolent (maximal) formulations for decision-making may still remain. In this paper, we develop a procedure to perform cross-efficiency evaluation without the need to make any specific choice of DEA weights. The proposed procedure takes the aggressive and benevolent formulations into account at the same time, so the choice of DEA weights can be avoided. Consequently, a cross-efficiency interval is obtained for each DMU. Entropy, which is rooted in information theory, is an effective tool for measuring uncertainty. We utilize entropy to construct a numerical index for DMUs with cross-efficiency intervals, and propose a mathematical program to find the optimal entropy values of DMUs for comparison. With the derived entropy values, we can rank the DMUs accordingly. Two examples illustrate the effectiveness of the idea proposed in this paper.Entropy2016-12-171812Article10.3390/e181204524521099-43002016-12-17doi: 10.3390/e18120452Tim LuShiang-Tai Liu<![CDATA[Entropy, Vol. 18, Pages 451: Linear Quantum Entropy and Non-Hermitian Hamiltonians]]>
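The basic intuition behind an entropy index over cross-efficiency scores can be sketched without the full mathematical program: normalize a DMU's scores under different weighting schemes into a distribution and take its Shannon entropy, so an evenly spread score profile (maximal uncertainty) yields higher entropy than a concentrated one. This toy sketch uses hypothetical scores and is not the paper's optimization model.

```python
import numpy as np

def entropy_index(scores):
    """Shannon entropy of normalised cross-efficiency scores: the more
    evenly the scores spread across evaluation schemes, the higher the
    entropy, i.e. the more uncertain the DMU's standing."""
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p)))

# Hypothetical cross-efficiency scores of two DMUs under 4 weight schemes.
uniform = entropy_index([0.8, 0.8, 0.8, 0.8])  # identical scores
skewed = entropy_index([0.9, 0.3, 0.2, 0.1])   # concentrated scores
print(uniform, skewed)
```

With four schemes the identical-score profile attains the maximum possible value ln(4), while the concentrated profile scores strictly lower, which is the ordering an uncertainty-based ranking index exploits.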
http://www.mdpi.com/1099-4300/18/12/451
We consider the description of open quantum systems with probability sinks (or sources) in terms of general non-Hermitian Hamiltonians. Within such a framework, we study novel possible definitions of the quantum linear entropy as an indicator of the flow of information during the dynamics. Such linear entropy functionals are necessary in the case of a partially Wigner-transformed non-Hermitian Hamiltonian (which is typically useful within a mixed quantum-classical representation). Both the case of a system represented by a pure non-Hermitian Hamiltonian and that of non-Hermitian dynamics in a classical bath are explicitly considered.Entropy2016-12-161812Article10.3390/e181204514511099-43002016-12-16doi: 10.3390/e18120451Alessandro SergiPaolo Giaquinta