Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy

Entropy, Vol. 19, Pages 301: Measurement Uncertainty Relations for Position and Momentum: Relative Entropy Formulation
http://www.mdpi.com/1099-4300/19/7/301
Heisenberg’s uncertainty principle has recently led to general measurement uncertainty relations for quantum systems: incompatible observables can be measured jointly or in sequence only with some unavoidable approximation, which can be quantified in various ways. The relative entropy is the natural theoretical quantifier of the information loss when a ‘true’ probability distribution is replaced by an approximating one. In this paper, we provide a lower bound for the amount of information that is lost by replacing the distributions of the sharp position and momentum observables, as they could be obtained with two separate experiments, by the marginals of any smeared joint measurement. The bound is obtained by introducing an entropic error function, and optimizing it over a suitable class of covariant approximate joint measurements. We work out two cases of target observables in full: (1) n-dimensional position and momentum vectors; (2) two components of position and momentum along different directions. In case (1), we connect the quantum bound to the dimension n; in case (2), going from parallel to orthogonal directions, we show the transition from highly incompatible observables to compatible ones. For simplicity, we develop the theory only for Gaussian states and measurements.
Entropy 2017, 19(7), 301; Article; doi: 10.3390/e19070301; published 2017-06-24. Authors: Alberto Barchielli, Matteo Gregoratti and Alessandro Toigo.

Entropy, Vol. 19, Pages 300: Test of the Pauli Exclusion Principle in the VIP-2 Underground Experiment
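As a concrete reminder of the central quantity in the relative-entropy abstract by Barchielli, Gregoratti and Toigo above: for two Gaussians, the Kullback–Leibler divergence (relative entropy) has a closed form. A minimal sketch assuming only NumPy; the function name is illustrative and not taken from the paper:

```python
import numpy as np

def kl_gaussian(mu0, s0, mu1, s1):
    """D( N(mu0, s0^2) || N(mu1, s1^2) ) in nats: the information lost when
    the 'true' Gaussian is replaced by the approximating one."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

# Replacing a sharp distribution by a smeared (broader) one always loses information:
print(kl_gaussian(0.0, 1.0, 0.0, 2.0))   # > 0
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))   # 0 for identical distributions
```

The divergence is zero only for identical distributions, which is why it serves as an error function for approximate joint measurements.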
http://www.mdpi.com/1099-4300/19/7/300
The validity of the Pauli exclusion principle—a building block of quantum mechanics—is tested for electrons. The VIP (violation of the Pauli exclusion principle) experiment and its follow-up VIP-2 at the Laboratori Nazionali del Gran Sasso search for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. The candidate events—if they exist—originate from the transition of a 2p electron to a ground state that is already occupied by two electrons. The present limit on the probability of Pauli exclusion principle violation for electrons, set by the VIP experiment, is 4.7 × 10⁻²⁹. We report a first result from the VIP-2 experiment that improves on the VIP limit and supports the final goal of a two-orders-of-magnitude improvement in the long run.
Entropy 2017, 19(7), 300; Brief Report; doi: 10.3390/e19070300; published 2017-06-24. Authors: Catalina Curceanu, Hexi Shi, Sergio Bartalucci, Sergio Bertolucci, Massimiliano Bazzi, Carolina Berucci, Mario Bragadireanu, Michael Cargnelli, Alberto Clozza, Luca De Paolis, Sergio Di Matteo, Jean-Pierre Egger, Carlo Guaraldo, Mihail Iliescu, Johann Marton, Matthias Laubenstein, Edoardo Milotti, Marco Miliucci, Andreas Pichler, Dorel Pietreanu, Kristian Piscicchia, Alessandro Scordo, Diana Sirghi, Florin Sirghi, Laura Sperandio, Oton Vazquez Doce, Eberhard Widmann and Johann Zmeskal.

Entropy, Vol. 19, Pages 299: Critical Behavior in Physics and Probabilistic Formal Languages
http://www.mdpi.com/1099-4300/19/7/299
We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics: that there are no phase transitions in fewer than two dimensions. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks such as training artificial recurrent neural networks. Along the way, we introduce a useful quantity, which we dub the rational mutual information, and discuss generalizations of our claims involving more complicated Bayesian networks.
Entropy 2017, 19(7), 299; Article; doi: 10.3390/e19070299; published 2017-06-23. Authors: Henry Lin and Max Tegmark.

Entropy, Vol. 19, Pages 298: The Legendre Transform in Non-Additive Thermodynamics and Complexity
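The exponential decay claimed in the Lin–Tegmark abstract above for probabilistic regular grammars (which generate Markov processes) can be checked numerically on a toy two-state chain. A minimal sketch assuming only NumPy; the transition matrix is made up for illustration:

```python
import numpy as np

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # two-state Markov chain (a probabilistic regular grammar)
pi = np.array([2/3, 1/3])            # its stationary distribution: pi @ T == pi

def mutual_information(d):
    """I(X_0; X_d) in bits between symbols separated by d steps."""
    joint = pi[:, None] * np.linalg.matrix_power(T, d)   # P(x0, xd)
    indep = pi[:, None] * pi[None, :]                    # P(x0) P(xd)
    return float(np.sum(joint * np.log2(joint / indep)))

mi = [mutual_information(d) for d in (1, 5, 10, 20)]
print(mi)   # decays exponentially, governed by the second eigenvalue (0.7) of T
```

For a context-free grammar, by contrast, the recursive generative process can sustain power-law decay, which is the paper's point of contact with critical phenomena.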
http://www.mdpi.com/1099-4300/19/7/298
We present an argument which purports to show that the use of the standard Legendre transform in non-additive statistical mechanics is not appropriate. For concreteness, we use as a paradigm the case of systems which are conjecturally described by the (non-additive) Tsallis entropy. We point out the form of the modified Legendre transform that should be used instead in the non-additive thermodynamics induced by the Tsallis entropy. We comment on more general implications of this proposal for the thermodynamics of “complex systems”.
Entropy 2017, 19(7), 298; Article; doi: 10.3390/e19070298; published 2017-06-23. Author: Nikolaos Kalogeropoulos.

Entropy, Vol. 19, Pages 297: Two Approaches to Obtaining the Space-Time Fractional Advection-Diffusion Equation
http://www.mdpi.com/1099-4300/19/7/297
Two approaches resulting in two different generalizations of the space-time-fractional advection-diffusion equation are discussed. The Caputo time-fractional derivative and the Riesz fractional Laplacian are used. The fundamental solutions to the corresponding Cauchy and source problems in the case of one spatial variable are studied using the Laplace transform with respect to time and the Fourier transform with respect to the spatial coordinate. The numerical results are illustrated graphically.
Entropy 2017, 19(7), 297; Article; doi: 10.3390/e19070297; published 2017-06-23. Authors: Yuriy Povstenko and Tamara Kyrylych.

Entropy, Vol. 19, Pages 295: Bodily Processing: The Role of Morphological Computation
http://www.mdpi.com/1099-4300/19/7/295
The integration of embodied and computational approaches to cognition requires that non-neural body parts be described as parts of a computing system which realizes cognitive processing. In this paper, based on research about morphological computation and the ecology of vision, I argue that non-neural body parts could be described as parts of a computational system, but that they do not realize computation autonomously; they do so only in connection with some kind of central control system, even in its simplest form. Finally, I integrate the proposal defended in the paper with the contemporary mechanistic approach to wide computation.
Entropy 2017, 19(7), 295; Article; doi: 10.3390/e19070295; published 2017-06-22. Author: Przemysław Nowakowski.

Entropy, Vol. 19, Pages 294: An Exploration Algorithm for Stochastic Simulators Driven by Energy Gradients
http://www.mdpi.com/1099-4300/19/7/294
In recent work, we have illustrated the construction of an exploration geometry on free energy surfaces: the adaptive computer-assisted discovery of an approximate low-dimensional manifold on which the effective dynamics of the system evolves. Constructing such an exploration geometry involves geometry-biased sampling (through both appropriately initialized unbiased molecular dynamics and restraining potentials) and machine learning techniques to organize the intrinsic geometry of the data resulting from the sampling (in particular, diffusion maps, possibly enhanced through an appropriate Mahalanobis-type metric). In this contribution, we detail a method for exploring the conformational space of a stochastic gradient system whose effective free energy surface depends on a smaller number of degrees of freedom than the dimension of the phase space. Our approach comprises two steps. First, we study the local geometry of the free energy landscape using diffusion maps on samples computed through stochastic dynamics. This allows us to automatically identify the relevant coarse variables. Next, we use the information garnered in the previous step to construct a new set of initial conditions for subsequent trajectories. These initial conditions are computed so as to explore the accessible conformational space more efficiently than by continuing the previous, unbiased simulations. We showcase this method on a representative test system.
Entropy 2017, 19(7), 294; Article; doi: 10.3390/e19070294; published 2017-06-22. Authors: Anastasia Georgiou, Juan Bello-Rivas, Charles Gear, Hau-Tieng Wu, Eliodoro Chiavazzo and Ioannis Kevrekidis.

Entropy, Vol. 19, Pages 296: Analytical Approximate Solutions of (n + 1)-Dimensional Fractal Heat-Like and Wave-Like Equations
http://www.mdpi.com/1099-4300/19/7/296
In this paper, we propose a new type of (n + 1)-dimensional reduced differential transform method (RDTM) based on a local fractional derivative (LFD) to solve (n + 1)-dimensional local fractional partial differential equations (PDEs) on Cantor sets. The presented method is named the (n + 1)-dimensional local fractional reduced differential transform method (LFRDTM). First, the theorems, their proofs and some basic properties of this procedure are given. To illustrate the introduced method clearly, we apply it to the (n + 1)-dimensional fractal heat-like equations (HLEs) and wave-like equations (WLEs). The applications show that this new technique is efficient, simple to apply and powerful for (n + 1)-dimensional local fractional problems.
Entropy 2017, 19(7), 296; Article; doi: 10.3390/e19070296; published 2017-06-22. Authors: Omer Acan, Dumitru Baleanu, Maysaa Mohamed Al Qurashi and Mehmet Giyas Sakar.

Entropy, Vol. 19, Pages 290: Enthalpy of Mixing in Al–Tb Liquid
http://www.mdpi.com/1099-4300/19/6/290
The liquid-phase enthalpy of mixing for Al–Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. The mixing enthalpy is determined relative to the unmixed pure components (Al and Tb). The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Based on our measurements, three different semi-empirical solution models are offered for the excess free energy of the liquid, including regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of the mixing enthalpy.
Entropy 2017, 19(6), 290; Article; doi: 10.3390/e19060290; published 2017-06-21. Authors: Shihuai Zhou, Carl Tackes and Ralph Napolitano.

Entropy, Vol. 19, Pages 292: The Entropic Linkage between Equity and Bond Market Dynamics
http://www.mdpi.com/1099-4300/19/6/292
An alternative derivation of the yield curve, based on entropy or the loss of information as it is communicated through time, is introduced. Given this focus on entropy growth in communication, the Shannon entropy is utilized. Additionally, Shannon entropy’s close relationship to the Kullback–Leibler divergence is used to provide a more precise understanding of this new yield curve. The derivation of the entropic yield curve is completed with the use of the Burnashev reliability function, which serves as a weighting between the true and error distributions. The deep connections between the entropic yield curve and the popular Nelson–Siegel specification are also examined. Finally, this entropically derived yield curve is used to provide an estimate of the economy’s implied information processing ratio. This information-theoretic ratio offers a new causal link between bond and equity markets, and is a valuable new tool for the modeling and prediction of stock market behavior.
Entropy 2017, 19(6), 292; Article; doi: 10.3390/e19060292; published 2017-06-21. Author: Edgar Parker.

Entropy, Vol. 19, Pages 288: Inconsistency of Template Estimation by Minimizing of the Variance/Pre-Variance in the Quotient Space
http://www.mdpi.com/1099-4300/19/6/288
We tackle the problem of template estimation when data have been randomly deformed under a group action in the presence of noise. In order to estimate the template, one often minimizes the variance once the influence of the transformations has been removed (computation of the Fréchet mean in the quotient space). The consistency bias is defined as the distance (possibly zero) between the orbit of the template and the orbit of an element which minimizes the variance. In the first part, we restrict ourselves to isometric group actions, in which case the Hilbertian distance is invariant under the group action. We establish an asymptotic behavior of the consistency bias which is linear with respect to the noise level. As a result, the inconsistency is unavoidable as soon as the noise level is high enough. In practice, template estimation with a finite sample is often done with an algorithm called “max-max”. In the second part, still for isometric actions of a finite group, we show the convergence of this algorithm to an empirical Karcher mean. Our numerical experiments show that the bias observed in practice cannot be attributed to the small sample size or to a convergence problem, but is indeed due to the previously studied inconsistency. In the third part, we also present some insights into the case of a distance that is not invariant with respect to the group action. We will see that the inconsistency still holds as soon as the noise level is large enough. Moreover, we prove the inconsistency even when a regularization term is added.
Entropy 2017, 19(6), 288; Article; doi: 10.3390/e19060288; published 2017-06-20. Authors: Loïc Devilliers, Stéphanie Allassonnière, Alain Trouvé and Xavier Pennec.

Entropy, Vol. 19, Pages 289: The Mehler-Fock Transform in Signal Processing
http://www.mdpi.com/1099-4300/19/6/289
Many signals can be described as functions on the unit disk (ball). In the framework of group representations, it is well known how to construct Hilbert spaces containing these functions that have the groups SU(1,N) as their symmetry groups. One illustration of this construction is three-dimensional color spaces, in which chroma properties are described by points on the unit disk. A combination of principal component analysis and the Perron–Frobenius theorem can be used to show that perspective projections map positive signals (i.e., functions with positive values) to a product of the positive half-axis and the unit ball. The representation theory (harmonic analysis) of the group SU(1,1) leads to an integral transform, the Mehler–Fock transform (MFT), that decomposes functions depending on the radial coordinate only into combinations of associated Legendre functions. This transformation is applied to kernel density estimators of probability distributions on the unit disk. It is shown that the transform separates the influence of the estimation kernel and of the measured data. The application of the transform is illustrated by studying the statistical distribution of RGB vectors obtained from a common set of object points under different illuminants.
Entropy 2017, 19(6), 289; Article; doi: 10.3390/e19060289; published 2017-06-20. Author: Reiner Lenz.

Entropy, Vol. 19, Pages 286: Assessing Probabilistic Inference by Comparing the Generalized Mean of the Model and Source Probabilities
http://www.mdpi.com/1099-4300/19/6/286
An approach to the assessment of probabilistic inference is described which quantifies performance on the probability scale. From both information theory and Bayesian theory, the central tendency of an inference is proven to be the geometric mean of the probabilities reported for the actual outcome, and is referred to as the “Accuracy”. Upper and lower error bars on the accuracy are provided by the arithmetic mean and the −2/3 mean. The arithmetic mean is called the “Decisiveness”, due to its similarity to the cost of a decision, and the −2/3 mean is called the “Robustness”, due to its sensitivity to outlier errors. Visualization of inference performance is facilitated by plotting the reported model probabilities versus the histogram-calculated source probabilities. The visualization of the calibration between model and source is summarized on both axes by the arithmetic, geometric, and −2/3 means. From information theory, the performance of the inference is related to the cross-entropy between the model and source distributions. Just as the cross-entropy is the sum of the entropy and the divergence, the accuracy of a model can be decomposed into a component due to the source uncertainty and the divergence between the source and the model. Translated to the probability domain, these quantities are plotted as the average model probability versus the average source probability. The divergence probability is the average model probability divided by the average source probability. When an inference is over/under-confident, the arithmetic mean of the model increases/decreases, while the −2/3 mean decreases/increases, respectively.
Entropy 2017, 19(6), 286; Article; doi: 10.3390/e19060286; published 2017-06-19. Author: Kenric Nelson.

Entropy, Vol. 19, Pages 265: Modeling Multi-Event Non-Point Source Pollution in a Data-Scarce Catchment Using ANN and Entropy Analysis
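The three summary statistics named in Nelson's abstract above (Accuracy, Decisiveness, Robustness) are power means of the probabilities a model assigned to the outcomes that actually occurred. A hedged sketch with made-up example probabilities; the function name is illustrative:

```python
import numpy as np

def generalized_mean(p, r):
    """r-th power mean of reported probabilities p (r -> 0 gives the geometric mean)."""
    p = np.asarray(p, dtype=float)
    if r == 0:
        return float(np.exp(np.mean(np.log(p))))
    return float(np.mean(p**r) ** (1.0 / r))

probs = [0.9, 0.8, 0.5, 0.3]        # probabilities a model gave to the actual outcomes
accuracy     = generalized_mean(probs, 0)       # geometric mean (central tendency)
decisiveness = generalized_mean(probs, 1)       # arithmetic mean (upper error bar)
robustness   = generalized_mean(probs, -2/3)    # -2/3 mean (lower error bar)
assert robustness <= accuracy <= decisiveness   # power means are ordered in r
```

The ordering in the final assertion is the power-mean inequality: lowering the exponent r weights small (outlier) probabilities more heavily, which is why the −2/3 mean is the outlier-sensitive lower bar.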
http://www.mdpi.com/1099-4300/19/6/265
Event-based runoff–pollutant relationships have been the key to water quality management, but the scarcity of measured data results in poor model performance, especially for multiple rainfall events. In this study, a new framework was proposed for event-based non-point source (NPS) prediction and evaluation. An artificial neural network (ANN) was used to extend the runoff–pollutant relationship from complete data events to other data-scarce events. The interpolation method was then used to solve the problem of tail deviation in the simulated pollutographs. In addition, the entropy method was utilized to train the ANN for comprehensive evaluations. A case study was performed in the Three Gorges Reservoir Region, China. Results showed that the ANN performed well in the NPS simulation, especially for light rainfall events, and the phosphorus predictions were always more accurate than the nitrogen predictions under scarce data conditions. In addition, scarcity of peak pollutant data had a significant impact on the model performance. Furthermore, traditional indicators would lead to certain information loss during model evaluation, whereas the entropy weighting method could provide a more accurate model evaluation. These results are valuable for monitoring schemes and the quantification of event-based NPS pollution, especially in data-poor catchments.
Entropy 2017, 19(6), 265; Article; doi: 10.3390/e19060265; published 2017-06-19. Authors: Lei Chen, Cheng Sun, Guobo Wang, Hui Xie and Zhenyao Shen.

Entropy, Vol. 19, Pages 287: Information Technology Project Portfolio Implementation Process Optimization Based on Complex Network Theory and Entropy
http://www.mdpi.com/1099-4300/19/6/287
In traditional information technology project portfolio management (ITPPM), managers often pay more attention to the optimization of portfolio selection in the initial stage. In fact, during the portfolio implementation process there are still issues to be optimized. Organizing cooperation will enhance efficiency, although it brings more immediate risk due to the complex variety of links between projects. In order to balance efficiency and risk, an optimization method is presented based on complex network theory and entropy, which will assist portfolio managers in recognizing the structure of the portfolio and determining the cooperation range. Firstly, a complex network model for an IT project portfolio is constructed, in which each project is simulated as an artificial life agent. At the same time, the portfolio is viewed as a small-scale society. Following this, social network analysis is used to detect and divide communities in order to estimate the roles of projects between different portfolios. Based on these, efficiency and risk are measured using entropy and are balanced by searching for adequate hierarchical community divisions. Thus, the activities of cooperation in organizations, risk management, and so on—which are usually viewed as an important art—can be discussed and conducted on the basis of quantitative calculations.
Entropy 2017, 19(6), 287; Article; doi: 10.3390/e19060287; published 2017-06-19. Authors: Qin Wang, Guangping Zeng and Xuyan Tu.

Entropy, Vol. 19, Pages 285: On the Simplification of Statistical Mechanics for Space Plasmas
http://www.mdpi.com/1099-4300/19/6/285
Space plasmas are frequently described by kappa distributions. Non-extensive statistical mechanics involves the maximization of the Tsallis entropic form under the constraints of the canonical ensemble, considering also a dyadic formalism between the ordinary and escort probability distributions. This paper addresses the statistical origin of kappa distributions, and shows that they can be connected with non-extensive statistical mechanics without considering the dyadic formalism of ordinary/escort distributions. While this concept does significantly simplify the usage of the theory, it comes at the cost of defining a dyadic entropic formulation in order to preserve the consistency between statistical mechanics and thermodynamics. Therefore, the simplification of the theory by means of avoiding the dyadic formalism is impossible within the framework of non-extensive statistical mechanics.
Entropy 2017, 19(6), 285; Article; doi: 10.3390/e19060285; published 2017-06-18. Author: George Livadiotis.

Entropy, Vol. 19, Pages 284: Multiscale Entropy Analysis of Unattended Oximetric Recordings to Assist in the Screening of Paediatric Sleep Apnoea at Home
http://www.mdpi.com/1099-4300/19/6/284
Untreated paediatric obstructive sleep apnoea syndrome (OSAS) can severely affect the development and quality of life of children. In-hospital polysomnography (PSG) is the gold standard for a definitive diagnosis, though it is relatively unavailable and particularly intrusive. Nocturnal portable oximetry has emerged as a reliable technique for OSAS screening; nevertheless, additional evidence is still needed. Our study is aimed at assessing the usefulness of multiscale entropy (MSE) for characterising oximetric recordings. We hypothesise that MSE could provide relevant information on blood oxygen saturation (SpO2) dynamics for the detection of childhood OSAS. In order to achieve this goal, a dataset composed of unattended SpO2 recordings from 50 children showing clinical suspicion of OSAS was analysed. SpO2 was parameterised by means of MSE and conventional oximetric indices. An optimum feature subset composed of five MSE-derived features and four conventional clinical indices was obtained using automated bidirectional stepwise feature selection. Logistic regression (LR) was used for classification. Our optimum LR model reached 83.5% accuracy (84.5% sensitivity and 83.0% specificity). Our results suggest that MSE provides relevant information from oximetry that is complementary to conventional approaches. Therefore, MSE may be useful for improving the diagnostic ability of unattended oximetry as a simplified screening test for childhood OSAS.
Entropy 2017, 19(6), 284; Article; doi: 10.3390/e19060284; published 2017-06-17. Authors: Andrea Crespo, Daniel Álvarez, Gonzalo C. Gutiérrez-Tobal, Fernando Vaquerizo-Villar, Verónica Barroso-García, María L. Alonso-Álvarez, Joaquín Terán-Santos, Roberto Hornero and Félix del Campo.

Entropy, Vol. 19, Pages 283: LSTM-CRF for Drug-Named Entity Recognition
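Multiscale entropy, as used in the oximetry abstract above, is conventionally computed by coarse-graining the series and taking the sample entropy at each scale. A minimal generic sketch under common default choices (m = 2, tolerance 0.2 times the original series' SD), not the authors' implementation:

```python
import numpy as np

def sample_entropy(x, m, tol):
    """SampEn: -ln of the ratio of (m+1)-length to m-length template matches
    within tolerance tol (Chebyshev distance, self-matches excluded)."""
    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - m)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.sum(d <= tol) - len(t)      # subtract diagonal self-matches
    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4), m=2, r=0.2):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale.
    The tolerance stays fixed at r times the ORIGINAL series' SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1), m, tol))
    return out

rng = np.random.default_rng(0)
mse = multiscale_entropy(rng.normal(size=600))
print(mse)   # for white noise, entropy drops as the scale grows
```

Keeping the tolerance fixed from the original series is what makes white noise lose entropy with scale while correlated physiological signals retain it, which is the property the screening features exploit.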
http://www.mdpi.com/1099-4300/19/6/283
Drug-named entity recognition (DNER) for biomedical literature is a fundamental facilitator of information extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenges introduced a task aimed at the recognition of drug names. State-of-the-art DNER approaches heavily rely on hand-engineered features and domain-specific knowledge, which are difficult to collect and define. Therefore, we offer an approach that automatically explores word- and character-level features: a recurrent neural network using bidirectional long short-term memory (LSTM) with conditional random field decoding (LSTM-CRF). Two kinds of word representations are used in this work: word embeddings, which are trained from a large amount of text, and character-based representations, which can capture orthographic features of words. Experimental results on the DDI2011 and DDI2013 datasets show the effectiveness of the proposed LSTM-CRF method. Our method outperforms the best system in the DDI2013 challenge.
Entropy 2017, 19(6), 283; Article; doi: 10.3390/e19060283; published 2017-06-17. Authors: Donghuo Zeng, Chengjie Sun, Lei Lin and Bingquan Liu.

Entropy, Vol. 19, Pages 282: Correntropy-Based Pulse Rate Variability Analysis in Children with Sleep Disordered Breathing
http://www.mdpi.com/1099-4300/19/6/282
Pulse rate variability (PRV), an alternative measure of heart rate variability (HRV), is altered during obstructive sleep apnea. Correntropy spectral density (CSD) is a novel spectral analysis that includes nonlinear information. We recruited 160 children and recorded SpO2 and photoplethysmography (PPG) alongside standard polysomnography. The PPG signals were divided into 1-min epochs and apnea/hypopnea (A/H) epochs were labeled. CSD was applied to the pulse-to-pulse interval time series (PPIs) and five features were extracted: the total spectral power (TP: 0.01–0.6 Hz), the power in the very low frequency band (VLF: 0.01–0.04 Hz), the normalized power in the low and high frequency bands (LFn: 0.04–0.15 Hz; HFn: 0.15–0.6 Hz), and the LF/HF ratio. Nonlinearity was assessed with the surrogate data technique. Multivariate logistic regression models were developed for CSD and power spectral density (PSD) analysis to detect epochs with A/H events. The CSD-based features and model identified epochs with and without A/H events more accurately than PSD-based analysis (area under the curve (AUC) 0.72 vs. 0.67) due to the nonlinearity of the data. In conclusion, CSD-based PRV analysis provided enhanced performance in detecting A/H epochs; however, a combination with overnight SpO2 analysis is suggested for optimal results.
Entropy 2017, 19(6), 282; Article; doi: 10.3390/e19060282; published 2017-06-16. Authors: Ainara Garde, Parastoo Dehkordi, John Ansermino and Guy Dumont.

Entropy, Vol. 19, Pages 278: Entropy Generation Rates through the Dissipation of Ordered Regions in Helium Boundary-Layer Flows
http://www.mdpi.com/1099-4300/19/6/278
The results of the computation of entropy generation rates through the dissipation of ordered regions within selected helium boundary-layer flows are presented. Entropy generation rates in helium boundary-layer flows for five cases of increasing temperature and pressure are considered. The basic format of a turbulent spot is used as the flow model. Statistical processing of the time-dependent series solutions of the nonlinear, coupled Lorenz-type differential equations for the spectral velocity wave components in the three-dimensional boundary-layer configuration yields the local volumetric entropy generation rates. Extension of the computational method to the transition from laminar to fully turbulent flow is discussed.
Entropy 2017, 19(6), 278; Article; doi: 10.3390/e19060278; published 2017-06-15. Author: LaVar Isaacson.

Entropy, Vol. 19, Pages 281: An Enhanced Set-Membership PNLMS Algorithm with a Correntropy Induced Metric Constraint for Acoustic Channel Estimation
http://www.mdpi.com/1099-4300/19/6/281
In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used to construct a new cost function within the kernel framework. The proposed CIM-penalized SM-PNLMS (CIMSM-PNLMS) algorithm is derived and analyzed in detail. A desired zero-attraction term is put forward in the updating equation of the proposed CIMSM-PNLMS algorithm to force the inactive coefficients to zero. The performance of the proposed CIMSM-PNLMS algorithm is investigated for estimating an underwater communication channel and an echo channel. The obtained results demonstrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms.
Entropy 2017, 19(6), 281; Article; doi: 10.3390/e19060281; published 2017-06-15. Authors: Zhan Jin, Yingsong Li and Yanyan Wang.

Entropy, Vol. 19, Pages 280: Approximation of Stochastic Quasi-Periodic Responses of Limit Cycles in Non-Equilibrium Systems under Periodic Excitations and Weak Fluctuations
http://www.mdpi.com/1099-4300/19/6/280
A semi-analytical method is proposed to approximately calculate the stochastic quasi-periodic responses of limit cycles in non-equilibrium dynamical systems excited by periodic forces and weak random fluctuations. First, a kind of 1/N-stroboscopic map is introduced to discretize the quasi-periodic torus into closed curves, which are then approximated by periodic points. Using a stochastic sensitivity function of discrete-time systems, the transverse dispersion of these curves can be quantified. Furthermore, combined with the longitudinal distribution of the curves, the probability density function of these closed curves in stroboscopic sections can be determined. The validity of this approach is shown for a van der Pol oscillator and a Brusselator.
Entropy 2017, 19(6), 280; Article; doi: 10.3390/e19060280; published 2017-06-15. Authors: Kongming Guo, Jun Jiang and Yalan Xu.

Entropy, Vol. 19, Pages 277: Weak Fault Diagnosis of Wind Turbine Gearboxes Based on MED-LMD
http://www.mdpi.com/1099-4300/19/6/277
In view of the problem that the fault signal of a rolling bearing is weak and the fault features are difficult to extract in a strong-noise environment, a method based on minimum entropy deconvolution (MED) and local mean decomposition (LMD) is proposed to extract the weak fault features of rolling bearings. Through the analysis of a simulated signal, we find that LMD alone has many limitations for the feature extraction of weak signals under strong background noise. In order to eliminate the noise interference and extract the characteristics of the weak fault, MED is employed as a pre-filter to remove noise. The method is applied to the weak fault feature extraction of rolling bearings as follows: MED is used to denoise the signal from the wind turbine gearbox test bench under strong background noise; the LMD method then decomposes the denoised signals into several product functions (PFs); and finally the PF components with strong correlation are analyzed by a cyclic autocorrelation function. The finding is that the failure of the wind power gearbox is generated from the micro-bending of the high-speed shaft and the pitting of the #10 bearing outer race at the output end of the high-speed shaft. The method is compared with LMD alone, which shows its effectiveness. This paper provides a new method for the extraction of multiple faults and weak features under strong background noise.
Entropy 2017, 19(6), 277; Article; doi: 10.3390/e19060277; published 2017-06-15. Authors: Zhijian Wang, Junyuan Wang, Yanfei Kou, Jiping Zhang, Shaohui Ning and Zhifang Zhao.

Entropy, Vol. 19, Pages 275: The Entropy of Words—Learnability and Expressivity across More than 1000 Languages
http://www.mdpi.com/1099-4300/19/6/275
The choice associated with words is a fundamental property of natural languages. It lies at the heart of quantitative linguistics, computational linguistics and the language sciences more generally. Information theory gives us tools to measure precisely the average amount of choice associated with words: the word entropy. Here, we use three parallel corpora, encompassing ca. 450 million words in 1916 texts and 1259 languages, to tackle some of the major conceptual and practical problems of word entropy estimation: dependence on text size, register, style and estimation method, as well as the non-independence of words in co-text. We present two main findings. Firstly, word entropies display relatively narrow, unimodal distributions. There is no language in our sample with a unigram entropy of less than six bits/word. We argue that this is in line with information-theoretic models of communication: languages are held in a narrow range by two fundamental pressures, word learnability and word expressivity, with a potential bias towards expressivity. Secondly, there is a strong linear relationship between unigram entropies and entropy rates. The entropy difference between words with and without co-textual information is narrowly distributed around ca. three bits/word. In other words, knowing the preceding text reduces the uncertainty of words by roughly the same amount across the languages of the world.
Entropy 2017, 19(6), 275; Article; doi: 10.3390/e19060275; published 2017-06-14. Authors: Christian Bentz, Dimitrios Alikaniotis, Michael Cysouw and Ramon Ferrer-i-Cancho.

Entropy, Vol. 19, Pages 276: Noise Enhancement for Weighted Sum of Type I and II Error Probabilities with Constraints
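The unigram (word) entropy discussed in the Bentz et al. abstract above has a simple plug-in estimate: the Shannon entropy of the empirical word-frequency distribution. A minimal stdlib-only sketch; note that the plug-in estimator underestimates entropy on small samples, which is exactly why the paper worries about text size and estimation method (the toy sentence is illustrative):

```python
import math
from collections import Counter

def unigram_entropy(tokens):
    """Plug-in (maximum likelihood) estimate of word entropy in bits/word."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = "the cat sat on the mat and the dog sat on the rug".split()
print(round(unigram_entropy(text), 3))   # prints 2.777
```

Real texts in the corpora cited above come out above six bits/word; the entropy rate would additionally condition on the preceding words, lowering this by roughly three bits/word.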
http://www.mdpi.com/1099-4300/19/6/276
In this paper, the noise-enhanced detection problem is investigated for binary hypothesis testing. The optimal additive noise is determined according to a criterion proposed by DeGroot and Schervish (2011), which minimizes the weighted sum of type I and II error probabilities under constraints on the type I and II error probabilities. Based on a generic composite hypothesis-testing formulation, the optimal additive noise is obtained. Sufficient conditions are also deduced to verify whether the addition of noise can or cannot improve the detectability of a given detector. In addition, some further results are obtained by exploiting the specific structure of binary hypothesis testing, and an algorithm is developed for finding the corresponding optimal noise. Finally, numerical examples are given to verify the theoretical results, and proofs of the main theorems are presented in the Appendix. Entropy 2017, Vol. 19, Issue 6, Pages 276; Article; ISSN 1099-4300; doi: 10.3390/e19060276; published 2017-06-14. Authors: Shujun Liu, Ting Yang, Kui Zhang.<![CDATA[Entropy, Vol. 19, Pages 270: An Information-Spectrum Approach to the Capacity Region of the Interference Channel]]>
http://www.mdpi.com/1099-4300/19/6/270
In this paper, a general formula for the capacity region of a general interference channel with two pairs of users is derived, which reveals that the capacity region is the union of a family of rectangles. In the region, each rectangle is determined by a pair of spectral inf-mutual information rates. Despite the difficulty of computing it, the presented formula provides useful insights into interference channels. Specifically, when the inputs are discrete, ergodic Markov processes and the channel is stationary memoryless, the formula can be evaluated by the BCJR (Bahl-Cocke-Jelinek-Raviv) algorithm. The formula also suggests that considering the structure of the interference processes contributes to obtaining tighter inner bounds than the simplest one (obtained by treating the interference as noise). This is verified numerically by calculating the mutual information rates for Gaussian interference channels with embedded convolutional codes. Moreover, we present a coding scheme to approach the theoretical achievable rate pairs. Numerical results show that decoding gains can be achieved by considering the structure of the interference. Entropy 2017, Vol. 19, Issue 6, Pages 270; Article; ISSN 1099-4300; doi: 10.3390/e19060270; published 2017-06-13. Authors: Lei Lin, Xiao Ma, Chulong Liang, Xiujie Huang, Baoming Bai.<![CDATA[Entropy, Vol. 19, Pages 274: Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint]]>
http://www.mdpi.com/1099-4300/19/6/274
This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with a stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust to heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using this prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works. Entropy 2017, Vol. 19, Issue 6, Pages 274; Article; ISSN 1099-4300; doi: 10.3390/e19060274; published 2017-06-13. Authors: Hea-Jung Kim.<![CDATA[Entropy, Vol. 19, Pages 272: Game-Theoretic Optimization of Bilateral Contract Transaction for Generation Companies and Large Consumers with Incomplete Information]]>
http://www.mdpi.com/1099-4300/19/6/272
Bilateral contract transaction between generation companies and large consumers is attracting much attention in the electricity market. A large consumer can purchase energy directly from generation companies under a bilateral contract, which can guarantee the economic interests of both sides. However, in pursuit of more profit, competition in the transaction exists not only between the company side and the consumer side, but also among generation companies. In order to maximize its profit, each company needs to optimize its bidding price to attract large consumers. In this paper, a master–slave game is proposed to describe the competition between generation companies and large consumers. Furthermore, a Bayesian game approach is formulated to describe the competition among generation companies under incomplete information. In the model, each company determines its optimal bidding price via the Bayesian game; based on the bidding prices provided by the companies and the predicted spot price, large consumers then decide their purchase strategies to minimize their costs. Simulation results show that each participant in the transaction can benefit from the proposed game. Entropy 2017, Vol. 19, Issue 6, Pages 272; Article; ISSN 1099-4300; doi: 10.3390/e19060272; published 2017-06-13. Authors: Yi Tang, Jing Ling, Cheng Wu, Ning Chen, Xiaofeng Liu, Bingtuan Gao.<![CDATA[Entropy, Vol. 19, Pages 273: Multiscale Information Theory and the Marginal Utility of Information]]>
http://www.mdpi.com/1099-4300/19/6/273
Complex systems display behavior at a range of scales. Large-scale behaviors can emerge from the correlated or dependent behavior of individual small-scale components. To capture this observation in a rigorous and general way, we introduce a formalism for multiscale information theory. Dependent behavior among system components results in overlapping or shared information. A system’s structure is revealed in the sharing of information across the system’s dependencies, each of which has an associated scale. Counting information according to its scale yields the quantity of scale-weighted information, which is conserved when a system is reorganized. In the interest of flexibility we allow information to be quantified using any function that satisfies two basic axioms. Shannon information and vector space dimension are examples. We discuss two quantitative indices that summarize system structure: an existing index, the complexity profile, and a new index, the marginal utility of information. Using simple examples, we show how these indices capture the multiscale structure of complex systems in a quantitative way. Entropy 2017, Vol. 19, Issue 6, Pages 273; Article; ISSN 1099-4300; doi: 10.3390/e19060273; published 2017-06-13. Authors: Benjamin Allen, Blake Stacey, Yaneer Bar-Yam.<![CDATA[Entropy, Vol. 19, Pages 269: A Novel Distance Metric: Generalized Relative Entropy]]>
http://www.mdpi.com/1099-4300/19/6/269
Information entropy and its extensions, which are important generalizations of the entropy concept, are currently applied in many research domains. In this paper, a novel generalized relative entropy is constructed to avoid some defects of traditional relative entropy. We present the structure of the generalized relative entropy after a discussion of the defects of relative entropy. Moreover, some properties of the proposed generalized relative entropy are presented and proved. The generalized relative entropy is proved to have a finite range and to be a finite distance metric. Finally, we predict nucleosome positioning in fly and yeast based on generalized relative entropy and relative entropy, respectively. The experimental results show that the properties of generalized relative entropy are better than those of relative entropy. Entropy 2017, Vol. 19, Issue 6, Pages 269; Article; ISSN 1099-4300; doi: 10.3390/e19060269; published 2017-06-13. Authors: Shuai Liu, Mengye Lu, Gaocheng Liu, Zheng Pan.<![CDATA[Entropy, Vol. 19, Pages 271: Peierls–Bogolyubov’s Inequality for Deformed Exponentials]]>
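As background for the defects mentioned in the abstract, a minimal sketch of standard relative entropy (Kullback–Leibler divergence) follows; the asymmetry and unboundedness shown here are the metric defects that a generalized form must repair. The paper's generalized construction itself is not reproduced here.

```python
from math import log2

def relative_entropy(p, q):
    """Standard relative entropy D(p||q) = sum_i p_i * log2(p_i / q_i),
    defined only when q_i > 0 wherever p_i > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p, q = [0.5, 0.5], [0.9, 0.1]
# Asymmetry (D(p||q) != D(q||p)) means plain relative entropy is a
# divergence, not a distance metric; it is also unbounded as any
# q_i -> 0 while the matching p_i stays positive.
print(relative_entropy(p, q), relative_entropy(q, p))
```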
http://www.mdpi.com/1099-4300/19/6/271
We study the convexity or concavity of certain trace functions for the deformed logarithmic and exponential functions, and in this way obtain new trace inequalities for deformed exponentials that may be considered as generalizations of Peierls–Bogolyubov’s inequality. We use these results to improve previously-known lower bounds for the Tsallis relative entropy. Entropy 2017, Vol. 19, Issue 6, Pages 271; Article; ISSN 1099-4300; doi: 10.3390/e19060271; published 2017-06-12. Authors: Frank Hansen, Jin Liang, Guanghua Shi.<![CDATA[Entropy, Vol. 19, Pages 254: Generalized Beta Distribution of the Second Kind for Flood Frequency Analysis]]>
http://www.mdpi.com/1099-4300/19/6/254
Estimation of flood magnitude for a given recurrence interval T (the T-year flood) at a specific location is needed for the design of hydraulic and civil infrastructure facilities. A key step in the estimation, or flood frequency analysis (FFA), is the selection of a suitable distribution. More than one distribution is often found to be adequate for FFA on a given watershed, and choosing the best one is often less than objective. In this study, the generalized beta distribution of the second kind (GB2) was introduced for FFA. The principle of maximum entropy (POME) method was proposed to estimate the GB2 parameters. The performance of the GB2 distribution was evaluated using flood data from gauging stations on the Colorado River, USA. Frequency estimates from the GB2 distribution were also compared with those of commonly used distributions. In addition, the evolution of the frequency distribution along the stream from upstream to downstream was investigated. It is concluded that the GB2 is appealing for FFA, since it has four parameters and includes some well-known distributions as special cases. Results of the case study demonstrate that the parameters estimated by the POME method are reasonable. According to the RMSD and AIC values, the performance of the GB2 distribution is better than that of the distributions widely used in hydrology. When different distributions are used for FFA, significantly different design flood values are obtained. For a given return period, the design flood values of the downstream gauging stations are larger than that of the upstream gauging station. There is also an evolution of the distribution: along the Yampa River, the distribution for FFA changes from the four-parameter GB2 distribution to the three-parameter Burr XII distribution. Entropy 2017, Vol. 19, Issue 6, Pages 254; Article; ISSN 1099-4300; doi: 10.3390/e19060254; published 2017-06-12. Authors: Lu Chen, Vijay Singh.<![CDATA[Entropy, Vol. 19, Pages 268: Information Geometry of Non-Equilibrium Processes in a Bistable System with a Cubic Damping]]>
http://www.mdpi.com/1099-4300/19/6/268
A probabilistic description is essential for understanding the dynamics of stochastic systems far from equilibrium, given uncertainty inherent in the systems. To compare different Probability Density Functions (PDFs), it is extremely useful to quantify the difference among different PDFs by assigning an appropriate metric to probability such that the distance increases with the difference between the two PDFs. This metric structure then provides a key link between stochastic systems and information geometry. For a non-equilibrium process, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory of the system quantifies the total number of different states that the system undergoes in time and is called the information length. By using this concept, we investigate the information geometry of non-equilibrium processes involved in disorder-order transitions between the critical and subcritical states in a bistable system. Specifically, we compute time-dependent PDFs, information length, the rate of change in information length, entropy change and Fisher information in disorder-to-order and order-to-disorder transitions and discuss similarities and disparities between the two transitions. In particular, we show that the total information length in order-to-disorder transition is much larger than that in disorder-to-order transition and elucidate the link to the drastically different evolution of entropy in both transitions. We also provide the comparison of the results with those in the case of the transition between the subcritical and supercritical states and discuss implications for fitness. Entropy 2017, Vol. 19, Issue 6, Pages 268; Article; ISSN 1099-4300; doi: 10.3390/e19060268; published 2017-06-11. Authors: Rainer Hollerbach, Eun-jin Kim.<![CDATA[Entropy, Vol. 19, Pages 267: Kullback–Leibler Divergence and Mutual Information of Partitions in Product MV Algebras]]>
http://www.mdpi.com/1099-4300/19/6/267
The purpose of the paper is to introduce, using the known results concerning the entropy in product MV algebras, the concepts of mutual information and Kullback–Leibler divergence for the case of product MV algebras and examine algebraic properties of the proposed measures. In particular, a convexity of Kullback–Leibler divergence with respect to states in product MV algebras is proved, and chain rules for mutual information and Kullback–Leibler divergence are established. In addition, the data processing inequality for conditionally independent partitions in product MV algebras is proved. Entropy 2017, Vol. 19, Issue 6, Pages 267; Article; ISSN 1099-4300; doi: 10.3390/e19060267; published 2017-06-10. Authors: Dagmar Markechová, Beloslav Riečan.<![CDATA[Entropy, Vol. 19, Pages 266: Model-Based Approaches to Active Perception and Control]]>
http://www.mdpi.com/1099-4300/19/6/266
There is an on-going debate in cognitive (neuro) science and philosophy between classical cognitive theory and embodied, embedded, extended, and enactive (“4-Es”) views of cognition—a family of theories that emphasize the role of the body in cognition and the importance of brain-body-environment interaction over and above internal representation. This debate touches foundational issues, such as whether the brain internally represents the external environment, and “infers” or “computes” something. Here we focus on two (4-Es-based) criticisms to traditional cognitive theories—to the notions of passive perception and of serial information processing—and discuss alternative ways to address them, by appealing to frameworks that use, or do not use, notions of internal modelling and inference. Our analysis illustrates that: an explicitly inferential framework can capture some key aspects of embodied and enactive theories of cognition; some claims of computational and dynamical theories can be reconciled rather than seen as alternative explanations of cognitive phenomena; and some aspects of cognitive processing (e.g., detached cognitive operations, such as planning and imagination) that are sometimes puzzling to explain from enactive and non-representational perspectives can, instead, be captured nicely from the perspective that internal generative models and predictive processing mediate adaptive control loops. Entropy 2017, Vol. 19, Issue 6, Pages 266; Article; ISSN 1099-4300; doi: 10.3390/e19060266; published 2017-06-09. Authors: Giovanni Pezzulo, Francesco Donnarumma, Pierpaolo Iodice, Domenico Maisto, Ivilin Stoianov.<![CDATA[Entropy, Vol. 19, Pages 264: Joint Characteristic Timescales and Entropy Production Analyses for Model Reduction of Combustion Systems]]>
http://www.mdpi.com/1099-4300/19/6/264
The reduction of chemical kinetics describing combustion processes remains one of the major topics in combustion theory and its applications. Problems concerning the estimation of the real dimension of reaction mechanisms remain unsolved, which is a critical point in the development of reduction models. In this study, we suggest a combination of local timescale and entropy production analyses to cope with this problem. In particular, the framework of skeletal mechanisms is the focus of the study, as the most practical and straightforward implementation strategy for reduced mechanisms. Hydrogen and methane/dimethyl ether reaction mechanisms are considered for illustration and validation purposes. Two skeletal mechanism versions were obtained for the methane/dimethyl ether combustion system by varying the tolerance used to identify important reactions in the characteristic timescale analysis of the system. Comparisons of ignition delay times and species profiles calculated with the detailed and the reduced models are presented. The results transparently show the potential of the suggested approach to be implemented automatically for the reduction of large chemical kinetic models. Entropy 2017, Vol. 19, Issue 6, Pages 264; Article; ISSN 1099-4300; doi: 10.3390/e19060264; published 2017-06-09. Authors: Sylvia Porras, Viatcheslav Bykov, Vladimir Gol’dshtein, Ulrich Maas.<![CDATA[Entropy, Vol. 19, Pages 263: Exergy Dynamics of Systems in Thermal or Concentration Non-Equilibrium]]>
http://www.mdpi.com/1099-4300/19/6/263
The paper addresses the problem of the existence and quantification of the exergy of non-equilibrium systems. Assuming that both energy and exergy are a priori concepts, the Gibbs “available energy” A is calculated for arbitrary temperature or concentration distributions across the body, with an accuracy that depends only on the information one has of the initial distribution. It is shown that A relaxes exponentially to its equilibrium value, and it is then demonstrated that its value differs from the non-equilibrium exergy, the difference depending on the boundary conditions imposed on the system; the two quantities are thus shown to be incommensurable. It is finally argued that all iso-energetic non-equilibrium states can be ranked in terms of their non-equilibrium exergy content, and that each point of the Gibbs plane therefore corresponds to a set of possible initial distributions, each one with its own exergy-decay history. The non-equilibrium exergy is always larger than its equilibrium counterpart and constitutes the “real” total exergy content of the system, i.e., the real maximum work extractable from the initial system. A systematic application of this paradigm may be beneficial for meaningful future applications in the fields of engineering and natural science. Entropy 2017, Vol. 19, Issue 6, Pages 263; Article; ISSN 1099-4300; doi: 10.3390/e19060263; published 2017-06-08. Authors: Enrico Sciubba, Federico Zullo.<![CDATA[Entropy, Vol. 19, Pages 262: Projection to Mixture Families and Rate-Distortion Bounds with Power Distortion Measures ]]>
http://www.mdpi.com/1099-4300/19/6/262
The explicit form of the rate-distortion function has rarely been obtained, except for a few cases where the Shannon lower bound coincides with the rate-distortion function for the entire range of the positive rate. From an information geometrical point of view, the evaluation of the rate-distortion function is achieved by a projection to the mixture family defined by the distortion measure. In this paper, we consider the β-th power distortion measure, and prove that the β-generalized Gaussian distribution is the only source that can make the Shannon lower bound tight at the minimum distortion level at zero rate. We demonstrate that the tightness of the Shannon lower bound for β = 1 (Laplacian source) and β = 2 (Gaussian source) yields upper bounds to the rate-distortion function of power distortion measures with a different power. These bounds evaluate from above the projection of the source distribution to the mixture family of the generalized Gaussian models. Applying similar arguments to ϵ-insensitive distortion measures, we consider the tightness of the Shannon lower bound and derive an upper bound to the distortion-rate function which is accurate at low rates. Entropy 2017, Vol. 19, Issue 6, Pages 262; Article; ISSN 1099-4300; doi: 10.3390/e19060262; published 2017-06-07. Authors: Kazuho Watanabe.<![CDATA[Entropy, Vol. 19, Pages 261: Spurious Results of Fluctuation Analysis Techniques in Magnitude and Sign Correlations]]>
http://www.mdpi.com/1099-4300/19/6/261
Fluctuation Analysis (FA) and especially Detrended Fluctuation Analysis (DFA) are techniques commonly used to quantify correlations and scaling properties of complex time series, such as the observable outputs of a great variety of dynamical systems, from Economics to Physiology. Often, such correlated time series are analyzed using the magnitude and sign decomposition, i.e., by using FA or DFA to study separately the sign and the magnitude series obtained from the original signal. This approach allows for distinguishing between systems with the same linear correlations but different dynamical properties. However, here we present analytical and numerical evidence showing that FA and DFA can lead to spurious results when applied to sign and magnitude series obtained from power-law correlated time series of fractional Gaussian noise (fGn) type. Specifically, we show that: (i) the autocorrelation functions of the sign and magnitude series obtained from fGns are always power laws; however, (ii) when the sign series presents power-law anticorrelations, FA and DFA wrongly interpret the sign series as purely uncorrelated; similarly, (iii) when analyzing power-law correlated magnitude (or volatility) series, FA and DFA fail to retrieve the real scaling properties and identify the magnitude series as purely uncorrelated noise; finally, (iv) using the relationship between FA and DFA and the autocorrelation function of the time series, we explain analytically the reason for the spurious FA and DFA results, which turn out to be an intrinsic property of both techniques when applied to sign and magnitude series. Entropy 2017, Vol. 19, Issue 6, Pages 261; Article; ISSN 1099-4300; doi: 10.3390/e19060261; published 2017-06-07. Authors: Pedro Carpena, Manuel Gómez-Extremera, Concepción Carretero-Campos, Pedro Bernaola-Galván, Ana Coronado.<![CDATA[Entropy, Vol. 19, Pages 258: Self-Organized Patterns Induced by Neimark-Sacker, Flip and Turing Bifurcations in a Discrete Predator-Prey Model with Lesie-Gower Functional Response]]>
http://www.mdpi.com/1099-4300/19/6/258
The formation of self-organized patterns in predator-prey models has been a very hot topic recently. The dynamics of these models, their bifurcations and pattern formations are so complex that further studies are urgently needed. In this research, we transformed a continuous predator-prey model with Leslie-Gower functional response into a discrete model. Fixed points and their stability were analyzed. Around the stable fixed point, bifurcation analyses, including flip, Neimark-Sacker and Turing bifurcations, were performed and bifurcation conditions were obtained. Based on these bifurcation conditions, parameter values were selected to carry out numerical simulations of pattern formation. The simulation results showed that Neimark-Sacker bifurcation induced spots, spirals and transitional patterns from spots to spirals, Turing bifurcation induced labyrinth patterns and spirals coupled with mosaic patterns, while flip bifurcation induced many irregular complex patterns. Compared with former studies on the continuous predator-prey model with Leslie-Gower functional response, our research on the discrete model demonstrated more complex dynamics and a greater variety of self-organized patterns. Entropy 2017, Vol. 19, Issue 6, Pages 258; Article; ISSN 1099-4300; doi: 10.3390/e19060258; published 2017-06-07. Authors: Feifan Zhang, Huayong Zhang, Shengnan Ma, Tianxiang Meng, Tousheng Huang, Hongju Yang.<![CDATA[Entropy, Vol. 19, Pages 260: Information Distances versus Entropy Metric]]>
http://www.mdpi.com/1099-4300/19/6/260
Information distance has become an important tool in a wide variety of applications. Various types of information distance have been proposed over the years. These information distance measures differ from the entropy metric, as the former are based on Kolmogorov complexity and the latter on Shannon entropy. However, for any computable probability distribution, the expected value of Kolmogorov complexity equals the Shannon entropy up to a constant. We study the analogous relationship between entropy and information distance. We also study the relationship between entropy and the normalized versions of information distances. Entropy 2017, Vol. 19, Issue 6, Pages 260; Article; ISSN 1099-4300; doi: 10.3390/e19060260; published 2017-06-07. Authors: Bo Hu, Lvqing Bi, Songsong Dai.<![CDATA[Entropy, Vol. 19, Pages 259: Structural Correlations in the Italian Overnight Money Market: An Analysis Based on Network Configuration Models]]>
http://www.mdpi.com/1099-4300/19/6/259
We study the structural correlations in the Italian overnight money market over the period 1999–2010. We show that the structural correlations vary across different versions of the network. Moreover, we employ different configuration models and examine whether higher-level characteristics of the observed network can be statistically reconstructed by maximizing the entropy of a randomized ensemble of networks restricted only by the lower-order features of the observed network. We find that often many of the high order correlations in the observed network can be considered emergent from the information embedded in the degree sequence in the binary version and in both the degree and strength sequences in the weighted version. However, this information is not enough to allow the models to account for all the patterns in the observed higher order structural correlations. In particular, one of the main features of the observed network that remains unexplained is the abnormally high level of weighted clustering in the years preceding the crisis, i.e., the huge increase in various indirect exposures generated via more intensive interbank credit links. Entropy 2017, Vol. 19, Issue 6, Pages 259; Article; ISSN 1099-4300; doi: 10.3390/e19060259; published 2017-06-06. Authors: Duc Luu, Thomas Lux, Boyan Yanovski.<![CDATA[Entropy, Vol. 19, Pages 257: Time-Shift Multiscale Entropy Analysis of Physiological Signals]]>
http://www.mdpi.com/1099-4300/19/6/257
Measures of predictability of physiological signals based on entropy have been widely applied in many areas of research. Multiscale entropy expresses different levels of either approximate entropy or sample entropy by means of multiple scale factors that generate multiple time series, enabling the capture of more useful information than a scalar value produced by the two entropy methods. This paper presents the use of different time shifts on various intervals of a time series to discover different entropy patterns of the time series. Examples and experimental results using white noise, 1/f noise, photoplethysmography, and electromyography signals suggest the validity of the proposed time-shift multiscale entropy analysis of physiological signals and its better performance compared with multiscale entropy. Entropy 2017, Vol. 19, Issue 6, Pages 257; Article; ISSN 1099-4300; doi: 10.3390/e19060257; published 2017-06-05. Authors: Tuan D. Pham.<![CDATA[Entropy, Vol. 19, Pages 256: The Exergy Loss Distribution and the Heat Transfer Capability in Subcritical Organic Rankine Cycle]]>
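The two subsampling schemes involved can be sketched as follows: standard multiscale coarse-graining (non-overlapping window means) and a time-shifted subseries. The time-shift construction shown here is an assumption (a Higuchi-style subsampling), not the paper's exact definition; either approximate or sample entropy would then be computed on each derived series.

```python
import numpy as np

def coarse_grain(x, scale):
    """Standard multiscale-entropy coarse-graining:
    means over non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def time_shift(x, beta, k):
    """Time-shifted subseries: every k-th sample starting at offset
    beta (a Higuchi-style construction, assumed for illustration)."""
    return np.asarray(x)[beta::k]

x = np.arange(1, 7)        # [1, 2, 3, 4, 5, 6]
print(coarse_grain(x, 2))  # → [1.5 3.5 5.5]
print(time_shift(x, 1, 3)) # → [2 5]
```

Coarse-graining averages away fast fluctuations, whereas time-shifted subsampling preserves them at a lower rate, which is why the two schemes can reveal different entropy patterns.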
http://www.mdpi.com/1099-4300/19/6/256
Taking net power output as the optimization objective, the exergy loss distribution of the subcritical Organic Rankine Cycle (ORC) system using R245fa as the working fluid was calculated under the optimal conditions. The influences of the heat source temperature, the evaporator pinch point temperature difference, the expander isentropic efficiency and the cooling water temperature rise on the exergy loss distribution of the subcritical ORC system are comprehensively discussed. It is found that, under certain conditions, critical values exist for both the expander isentropic efficiency and the cooling water temperature rise. The magnitudes of these critical values affect the relative distribution of exergy loss in the expander, the evaporator and the condenser. The research results will help to better understand the characteristics of the exergy loss distribution in an ORC system. Entropy 2017, Vol. 19, Issue 6, Pages 256; Article; ISSN 1099-4300; doi: 10.3390/e19060256; published 2017-06-03. Authors: Chao He, Youzhou Jiao, Chaochao Tian, Zhenfeng Wang, Zhiping Zhang.<![CDATA[Entropy, Vol. 19, Pages 255: Application of Entropy Generation to Improve Heat Transfer of Heat Sinks in Electric Machines]]>
http://www.mdpi.com/1099-4300/19/6/255
To intensify heat transfer within the complex three-dimensional flow field found in technical devices, all relevant transport phenomena have to be taken into account. In this work, a generic procedure based on a detailed analysis of entropy generation is developed to improve heat sinks found in electric machines. It enables a simultaneous consideration of temperature and velocity distributions, lumped into a single, scalar value, which can be used to directly identify regions with a high potential for heat transfer improvement. By analyzing the resulting entropy fields, it is demonstrated that the improved design obtained by this procedure is noticeably better, compared to those obtained with a classical analysis considering separately temperature and velocity distributions. This opens the door for an efficient, computer-based optimization of heat transfer in real applications. Entropy 2017, Vol. 19, Issue 6, Pages 255; Article; ISSN 1099-4300; doi: 10.3390/e19060255; published 2017-06-02. Authors: Toni Eger, Thomas Bol, Ayothi Thanu, László Daróczy, Gábor Janiga, Rüdiger Schroth, Dominique Thévenin.<![CDATA[Entropy, Vol. 19, Pages 239: An Entropy-Based Generalized Gamma Distribution for Flood Frequency Analysis]]>
http://www.mdpi.com/1099-4300/19/6/239
Flood frequency analysis (FFA) is needed for the design of water engineering and hydraulic structures. The choice of an appropriate frequency distribution is one of the most important issues in FFA. A key problem in FFA is that no single distribution has been accepted as a global standard. The common practice is to try some candidate distributions and select the one best fitting the data, based on a goodness-of-fit criterion. However, this practice entails much calculation. Sometimes generalized distributions, which can specialize into several simpler distributions, are fitted, for they may provide a better fit to the data. Therefore, the generalized gamma (GG) distribution was employed for FFA in this study. The principle of maximum entropy (POME) was used to estimate the GG parameters. Monte Carlo simulation was carried out to evaluate the performance of the GG distribution and to compare it with widely used distributions. Finally, the T-year design flood was calculated using the GG distribution and compared with those obtained from other distributions. Results show that the GG distribution is either superior or comparable to other distributions. Entropy 2017, Vol. 19, Issue 6, Pages 239; Article; ISSN 1099-4300; doi: 10.3390/e19060239; published 2017-06-02. Authors: Lu Chen, Vijay Singh, Feng Xiong.<![CDATA[Entropy, Vol. 19, Pages 253: Ruling out Higher-Order Interference from Purity Principles]]>
http://www.mdpi.com/1099-4300/19/6/253
As first noted by Rafael Sorkin, there is a limit to quantum interference. The interference pattern formed in a multi-slit experiment is a function of the interference patterns formed between pairs of slits; there are no genuinely new features resulting from considering three slits instead of two. Sorkin has introduced a hierarchy of mathematically conceivable higher-order interference behaviours, where classical theory lies at the first level of this hierarchy and quantum theory at the second. Informally, the order in this hierarchy corresponds to the number of slits on which the interference pattern has an irreducible dependence. Many authors have wondered why quantum interference is limited to the second level of this hierarchy. Does the existence of higher-order interference violate some natural physical principle that we believe should be fundamental? In the current work we show that such principles can be found which limit interference behaviour to second-order, or “quantum-like”, interference, but that do not restrict us to the entire quantum formalism. We work within the operational framework of generalised probabilistic theories, and prove that any theory satisfying Causality, Purity Preservation, Pure Sharpness, and Purification—four principles that formalise the fundamental character of purity in nature—exhibits at most second-order interference. Hence these theories are, at least conceptually, very “close” to quantum theory. Along the way we show that systems in such theories correspond to Euclidean Jordan algebras. Hence, they are self-dual and, moreover, multi-slit experiments in such theories are described by pure projectors. Entropy 2017, Vol. 19, Issue 6, Pages 253; Article; ISSN 1099-4300; doi: 10.3390/e19060253; published 2017-06-01. Authors: Howard Barnum, Ciarán Lee, Carlo Scandolo, John Selby.<![CDATA[Entropy, Vol. 19, Pages 251: Multiscale Entropy Analysis of the Differential RR Interval Time Series Signal and Its Application in Detecting Congestive Heart Failure]]>
http://www.mdpi.com/1099-4300/19/6/251
Cardiovascular systems essentially have multiscale control mechanisms. Multiscale entropy (MSE) analysis permits the dynamic characterization of the cardiovascular time series for both short-term and long-term processes, and thus can be more illuminating. The traditional MSE analysis for heart rate variability (HRV) is performed on the original RR interval time series (named as MSE_RR). In this study, we proposed an MSE analysis for the differential RR interval time series signal, named as MSE_dRR. The motivation of using the differential RR interval time series signal is that this signal has a direct link with the inherent non-linear property of electrical rhythm of the heart. The effectiveness of MSE_RR and MSE_dRR was tested and compared on the long-term MIT-Boston’s Beth Israel Hospital (MIT-BIH) 54 normal sinus rhythm (NSR) and 29 congestive heart failure (CHF) RR interval recordings, aiming to explore which one is better for distinguishing the CHF patients from the NSR subjects. Four RR interval lengths were used for analysis ( N = 500 , N = 1000 , N = 2000 and N = 5000 ). The results showed that MSE_RR did not report significant differences between the NSR and CHF groups at several scales for each RR segment length type (Scales 7, 8 and 10 for N = 500 , Scales 3 and 10 for N = 1000 , Scales 2 and 3 for both N = 2000 and N = 5000 ). However, the new MSE_dRR gave significant separation for the two groups for all RR segment length types except N = 500 at Scales 9 and 10. The area under curve (AUC) values from the receiver operating characteristic (ROC) curve were used to further quantify the performances. The mean AUC of the new MSE_dRR from Scales 1–10 are 79.5%, 83.1%, 83.5% and 83.1% for N = 500 , N = 1000 , N = 2000 and N = 5000 , respectively, whereas the mean AUC of MSE_RR are only 68.6%, 69.8%, 69.6% and 67.1%, respectively.
The five-fold cross validation support vector machine (SVM) classifier reports the classification Accuracy ( A c c ) of MSE_RR as 73.5%, 75.9% and 74.6% for N = 1000 , N = 2000 and N = 5000 , respectively, while for the new MSE_dRR analysis accuracy was 85.5%, 85.6% and 85.6%. Different biosignal editing methods (direct deletion and interpolation) did not change the analytical results. In summary, this study demonstrated that compared with MSE_RR, MSE_dRR reports better statistical stability and better discrimination ability for the NSR and CHF groups.Entropy2017-05-31196Article10.3390/e190602512511099-43002017-05-31doi: 10.3390/e19060251Chengyu LiuRui Gao<![CDATA[Entropy, Vol. 19, Pages 252: Off-Line Handwritten Signature Recognition by Wavelet Entropy and Neural Network]]>
http://www.mdpi.com/1099-4300/19/6/252
Handwritten signatures are widely utilized as a form of personal recognition. However, they have the unfortunate shortcoming of being easily abused by those who would fake the identification or intent of an individual, which can be very harmful. Therefore, an automatic signature recognition system is crucial. In this paper, a signature recognition approach based on a probabilistic neural network (PNN) and wavelet transform average framing entropy (AFE) is proposed. The system was tested with a wavelet packet (WP) entropy denoted as a WP entropy neural network system (WPENN) and with a discrete wavelet transform (DWT) entropy denoted as a DWT entropy neural network system (DWENN). Our investigation was conducted over several wavelet families and different entropy types. Identification tasks, as well as verification tasks, were investigated for a comprehensive signature system study. Several other methods used in the literature were considered for comparison. Two databases were used for algorithm testing. The best recognition rate was achieved by WPENN, whereby the threshold entropy reached 92%.Entropy2017-05-31196Article10.3390/e190602522521099-43002017-05-31doi: 10.3390/e19060252Khaled DaqrouqHusam SweidanAhmad BalameshMohammed Ajour<![CDATA[Entropy, Vol. 19, Pages 249: Horton Ratios Link Self-Similarity with Maximum Entropy of Eco-Geomorphological Properties in Stream Networks]]>
http://www.mdpi.com/1099-4300/19/6/249
Stream networks are branched structures wherein water and energy move between land and atmosphere, modulated by evapotranspiration and its interaction with the gravitational dissipation of potential energy as runoff. These actions vary among climates characterized by Budyko theory, yet have not been integrated with Horton scaling, the ubiquitous pattern of eco-hydrological variation among Strahler streams that populate river basins. From Budyko theory, we reveal optimum entropy coincident with high biodiversity. Basins on either side of optimum respond in opposite ways to precipitation, which we evaluated for the classic Hubbard Brook experiment in New Hampshire and for the Whitewater River basin in Kansas. We demonstrate that Horton ratios are equivalent to Lagrange multipliers used in the extremum function leading to Shannon information entropy being maximal, subject to constraints. Properties of stream networks vary with constraints and inter-annual variation in water balance that challenge vegetation to match expected resource supply throughout the network. The entropy-Horton framework informs questions of biodiversity, resilience to perturbations in water supply, changes in potential evapotranspiration, and land use changes that move ecosystems away from optimal entropy with concomitant loss of productivity and biodiversity.Entropy2017-05-30196Article10.3390/e190602492491099-43002017-05-30doi: 10.3390/e19060249Bruce MilneVijay Gupta<![CDATA[Entropy, Vol. 19, Pages 250: Deriving Proper Uniform Priors for Regression Coefficients, Parts I, II, and III ]]>
http://www.mdpi.com/1099-4300/19/6/250
It is a relatively well-known fact that in problems of Bayesian model selection, improper priors should, in general, be avoided. In this paper we will derive and discuss a collection of four proper uniform priors which lie on an ascending scale of informativeness. It will turn out that these priors lead us to evidences that are closely associated with the implied evidence of the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC). All the discussed evidences are then used in two small Monte Carlo studies, wherein for different sample sizes and noise levels the evidences are used to select between competing C-spline regression models. For illustrative purposes, an outline is also given on how to construct simple trivariate C-spline regression models. As regards the length of this paper, one half consists of theory and derivations, while the other half consists of graphs and outputs of the two Monte Carlo studies.Entropy2017-05-30196Article10.3390/e190602502501099-43002017-05-30doi: 10.3390/e19060250H.R. ErpRonald. LingerPieter Gelder<![CDATA[Entropy, Vol. 19, Pages 246: Glassy States of Aging Social Networks]]>
http://www.mdpi.com/1099-4300/19/6/246
Individuals often develop reluctance to change their social relations, called “secondary homebody”, even though their interactions with their environment evolve with time. Some memory effect is loosely present, discouraging changes. In other words, in the presence of memory, relations do not change easily. In order to investigate some history or memory effect on social networks, we introduce a temporal kernel function into the Heider conventional balance theory, allowing for the “quality” of past relations to contribute to the evolution of the system. This memory effect is shown to lead to the emergence of aged networks, thereby perfectly describing—and what is more, measuring—the aging process of links (“social relations”). It is shown that such a memory does not change the dynamical attractors of the system, but does prolong the time necessary to reach the “balanced states”. The general trend goes toward obtaining either global (“paradise” or “bipolar”) or local (“jammed”) balanced states, but is profoundly affected by aged relations. The resistance of older links against changes decelerates the evolution of the system and traps it into so-called glassy states. In contrast to balance configurations, which live on stable states, such long-lived glassy states can survive in unstable states.Entropy2017-05-30196Article10.3390/e190602462461099-43002017-05-30doi: 10.3390/e19060246Foroogh HassanibesheliLeila HedayatifarHadise SafdariMarcel AusloosG. Jafari<![CDATA[Entropy, Vol. 19, Pages 222: Automatic Epileptic Seizure Detection in EEG Signals Using Multi-Domain Feature Extraction and Nonlinear Analysis]]>
http://www.mdpi.com/1099-4300/19/6/222
Epileptic seizure detection is commonly implemented by expert clinicians through visual observation of electroencephalography (EEG) signals, which tends to be time-consuming and sensitive to bias. Seizure detection in most previous research suffers from low power and is unsuitable for processing large datasets. Therefore, a computerized epileptic seizure detection method is needed to overcome the aforementioned problems, expedite epilepsy research and aid medical professionals. In this work, we propose an automatic epilepsy diagnosis framework based on the combination of multi-domain feature extraction and nonlinear analysis of EEG signals. Firstly, EEG signals are pre-processed by using the wavelet threshold method to remove the artifacts. We then extract representative features in the time domain, frequency domain, time-frequency domain and nonlinear analysis features based on the information theory. These features are further extracted in five frequency sub-bands based on the clinical interest, and the dimension of the original feature space is then reduced by using both a principal component analysis and an analysis of variance. Furthermore, the optimal combination of the extracted features is identified and evaluated via different classifiers for the epileptic seizure detection of EEG signals. Finally, the performance of the proposed method is investigated by using a public EEG database at the University Hospital Bonn, Germany. Experimental results demonstrate that the proposed epileptic seizure detection method can achieve a high average accuracy of 99.25%, indicating a powerful method in the detection and classification of epileptic seizures. 
The proposed seizure detection scheme is thus hoped to eliminate the burden of expert clinicians when they are processing a large number of data by visual observation and to speed-up the epilepsy diagnosis.Entropy2017-05-27196Article10.3390/e190602222221099-43002017-05-27doi: 10.3390/e19060222Lina WangWeining XueYang LiMeilin LuoJie HuangWeigang CuiChao Huang<![CDATA[Entropy, Vol. 19, Pages 248: On the Configurational Entropy of Nanoscale Solutions for More Accurate Surface and Bulk Nano-Thermodynamic Calculations]]>
http://www.mdpi.com/1099-4300/19/6/248
The configurational entropy of nanoscale solutions is discussed in this paper. As follows from the comparison of the exact equation of Boltzmann and its Stirling approximation (widely used for both macroscale and nanoscale solutions today), the latter significantly over-estimates the former for nano-phases and surface regions. On the other hand, the exact Boltzmann equation cannot be used for practical calculations, as it requires the calculation of the factorial of the number of atoms in a phase, and those factorials are such large numbers that they cannot be handled by commonly used computer codes. Herewith, a correction term is introduced in this paper to replace the Stirling approximation by the so-called “de Moivre approximation”. This new approximation is a continuous function of the number of atoms/molecules and the composition of the nano-solution. This correction becomes negligible for phases larger than 15 nm in diameter. However, the correction term does not cause mathematical difficulties, even if it is used for macro-phases. Using this correction, future nano-thermodynamic calculations will become more precise. Equations are worked out for both integral and partial configurational entropies of multi-component nano-solutions. The equations are correct only for nano-solutions that contain at least a single atom of each component (below this concentration, it makes no sense to perform any calculations).Entropy2017-05-27196Article10.3390/e190602482481099-43002017-05-27doi: 10.3390/e19060248Andras DezsoGeorge Kaptay<![CDATA[Entropy, Vol. 19, Pages 247: Improving the Naive Bayes Classifier via a Quick Variable Selection Method Using Maximum of Entropy]]>
http://www.mdpi.com/1099-4300/19/6/247
Variable selection methods play an important role in the field of attribute mining. The Naive Bayes (NB) classifier is a very simple and popular classification method that yields good results in a short processing time. Hence, it is a very appropriate classifier for very large datasets. The method has a high dependence on the relationships between the variables. The Info-Gain (IG) measure, which is based on general entropy, can be used as a quick variable selection method. This measure ranks the importance of the attribute variables on a variable under study via the information obtained from a dataset. The main drawback is that it is always non-negative and it requires setting the information threshold to select the set of most important variables for each dataset. We introduce here a new quick variable selection method that generalizes the method based on the Info-Gain measure. It uses imprecise probabilities and the maximum entropy measure to select the most informative variables without setting a threshold. This new variable selection method, combined with the Naive Bayes classifier, improves the original method and provides a valuable tool for handling datasets with a very large number of features and a huge amount of data, where more complex methods are not computationally feasible.Entropy2017-05-25196Article10.3390/e190602472471099-43002017-05-25doi: 10.3390/e19060247Joaquín AbellánJavier Castellano<![CDATA[Entropy, Vol. 19, Pages 245: Entropy Analysis of Monetary Unions]]>
http://www.mdpi.com/1099-4300/19/6/245
This paper is an exercise of quantitative dynamic analysis applied to bilateral relationships among partners within two monetary union case studies: the Portuguese escudo zone monetary union (EZMU) and the European euro zone monetary union (EMU). Real world data are tackled and measures that are usual in complex system analysis, such as entropy, mutual information, Canberra distance, and Jensen–Shannon divergence, are adopted. The emerging relationships are visualized by means of the multidimensional scaling and hierarchical clustering computational techniques. Results bring evidence on long-run stochastic dynamics that lead to asymmetric indebtedness mechanisms among the partners of a monetary union and sustainability difficulties. The consequences of unsustainability and disruption of monetary unions have high importance for the discussion on optimal currency areas from a geopolitical perspective.Entropy2017-05-24196Article10.3390/e190602452451099-43002017-05-24doi: 10.3390/e19060245Maria MataJose Machado<![CDATA[Entropy, Vol. 19, Pages 244: The Tale of Two Financial Crises: An Entropic Perspective]]>
http://www.mdpi.com/1099-4300/19/6/244
This paper provides a comparative analysis of stock market dynamics of the 1987 and 2008 financial crises and discusses the extent to which risk management measures based on entropy can be successful in predicting aggregate market expectations. We find that the Tsallis entropy is more appropriate for the short and sudden market crash of 1987, while the approximate entropy is the dominant predictor of the prolonged, fundamental crisis of 2008. We conclude by suggesting the use of entropy as a market sentiment indicator in technical analysis.Entropy2017-05-24196Article10.3390/e190602442441099-43002017-05-24doi: 10.3390/e19060244Ramazan GençayNikola Gradojevic<![CDATA[Entropy, Vol. 19, Pages 243: On the Energy-Distortion Tradeoff of Gaussian Broadcast Channels with Feedback]]>
http://www.mdpi.com/1099-4300/19/6/243
This work studies the relationship between the energy allocated for transmitting a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel with noiseless channel output feedback (GBCF) and the resulting distortion at the receivers. Our goal is to characterize the minimum transmission energy required for broadcasting a pair of source samples, such that each source can be reconstructed at its respective receiver to within a target distortion, when the source-channel bandwidth ratio is not restricted. This minimum transmission energy is defined as the energy-distortion tradeoff (EDT). We derive a lower bound and three upper bounds on the optimal EDT. For the upper bounds, we analyze the EDT of three transmission schemes: two schemes are based on separate source-channel coding and apply encoding over multiple samples of source pairs, and the third scheme is a joint source-channel coding scheme that applies uncoded linear transmission on a single source-sample pair and is obtained by extending the Ozarow–Leung (OL) scheme. Numerical simulations show that the EDT of the OL-based scheme is close to that of the better of the two separation-based schemes, which makes the OL scheme attractive for energy-efficient, low-latency and low-complexity source transmission over GBCFs.Entropy2017-05-24196Article10.3390/e190602432431099-43002017-05-24doi: 10.3390/e19060243Yonathan MurinYonatan KaspiRon DaboraDeniz Gündüz<![CDATA[Entropy, Vol. 19, Pages 242: A Framework for Designing the Architectures of Deep Convolutional Neural Networks]]>
http://www.mdpi.com/1099-4300/19/6/242
Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture to fit a given problem. Hand-crafting an architecture is a challenging, time-consuming process that requires expert knowledge and effort, due to a large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances the performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework has the ability to explore a CNN architecture’s numerous design choices in an efficient way and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably against the state-of-the-art results on CIFAR-10 and CIFAR-100 datasets. 
Our approach has a significant role in increasing the depth, reducing the size of strides, and constraining some convolutional layers not followed by pooling layers in order to find a CNN architecture that produces a high recognition performance.Entropy2017-05-24196Article10.3390/e190602422421099-43002017-05-24doi: 10.3390/e19060242Saleh AlbelwiAusif Mahmood<![CDATA[Entropy, Vol. 19, Pages 241: Axiomatic Characterization of the Quantum Relative Entropy and Free Energy]]>
http://www.mdpi.com/1099-4300/19/6/241
Building upon work by Matsumoto, we show that the quantum relative entropy with full-rank second argument is determined by four simple axioms: (i) Continuity in the first argument; (ii) the validity of the data-processing inequality; (iii) additivity under tensor products; and (iv) super-additivity. This observation has immediate implications for quantum thermodynamics, which we discuss. Specifically, we demonstrate that, under reasonable restrictions, the free energy is singled out as a measure of athermality. In particular, we consider an extended class of Gibbs-preserving maps as free operations in a resource-theoretic framework, in which a catalyst is allowed to build up correlations with the system at hand. The free energy is the only extensive and continuous function that is monotonic under such free operations.Entropy2017-05-23196Article10.3390/e190602412411099-43002017-05-23doi: 10.3390/e19060241Henrik WilmingRodrigo GallegoJens Eisert<![CDATA[Entropy, Vol. 19, Pages 240: Maxwell’s Demon—A Historical Review]]>
http://www.mdpi.com/1099-4300/19/6/240
For more than 140 years Maxwell’s demon has intrigued, enlightened, mystified, frustrated, and challenged physicists in unique and interesting ways. Maxwell’s original conception was brilliant and insightful, but over the years numerous different versions of Maxwell’s demon have been presented. Most versions have been answered with reasonable physical arguments, with each of these answers (apparently) keeping the second law of thermodynamics intact. Though the laws of physics did not change in this process of questioning and answering, we have learned a lot along the way about statistical mechanics and thermodynamics. This paper will review a selected history and discuss some of the interesting historical characters who have participated.Entropy2017-05-23196Review10.3390/e190602402401099-43002017-05-23doi: 10.3390/e19060240Andrew Rex<![CDATA[Entropy, Vol. 19, Pages 233: On Linear Coding over Finite Rings and Applications to Computing]]>
http://www.mdpi.com/1099-4300/19/5/233
This paper presents a coding theorem for linear coding over finite rings, in the setting of the Slepian–Wolf source coding problem. This theorem covers corresponding achievability theorems of Elias (IRE Conv. Rec. 1955, 3, 37–46) and Csiszár (IEEE Trans. Inf. Theory 1982, 28, 585–592) for linear coding over finite fields as special cases. In addition, it is shown that, for any set of finite correlated discrete memoryless sources, there always exists a sequence of linear encoders over some finite non-field rings which achieves the data compression limit, the Slepian–Wolf region. Hence, the optimality problem regarding linear coding over finite non-field rings for data compression is closed with positive confirmation with respect to existence. For application, we address the problem of source coding for computing, where the decoder is interested in recovering a discrete function of the data generated and independently encoded by several correlated i.i.d. random sources. We propose linear coding over finite rings as an alternative solution to this problem. Results in Körner–Marton (IEEE Trans. Inf. Theory 1979, 25, 219–221) and Ahlswede–Han (IEEE Trans. Inf. Theory 1983, 29, 396–411, Theorem 10) are generalized to cases for encoding (pseudo) nomographic functions (over rings). Since a discrete function with a finite domain always admits a nomographic presentation, we conclude that both generalizations universally apply for encoding all discrete functions of finite domains. Based on these, we demonstrate that linear coding over finite rings strictly outperforms its field counterpart in terms of achieving better coding rates and reducing the required alphabet sizes of the encoders for encoding infinitely many discrete functions.Entropy2017-05-20195Article10.3390/e190502332331099-43002017-05-20doi: 10.3390/e19050233Sheng HuangMikael Skoglund<![CDATA[Entropy, Vol. 19, Pages 238: Lyapunov Spectra of Coulombic and Gravitational Periodic Systems]]>
http://www.mdpi.com/1099-4300/19/5/238
An open question in nonlinear dynamics is the relation between the Kolmogorov entropy and the largest Lyapunov exponent of a given orbit. Both have been shown to have diagnostic capability for phase transitions in thermodynamic systems. For systems with long-range interactions, the choice of boundary plays a critical role and appropriate boundary conditions must be invoked. In this work, we compute Lyapunov spectra for Coulombic and gravitational versions of the one-dimensional systems of parallel sheets with periodic boundary conditions. Exact expressions for time evolution of the tangent-space vectors are derived and are utilized toward computing Lyapunov characteristic exponents using an event-driven algorithm. The results indicate that the energy dependence of the largest Lyapunov exponent emulates that of Kolmogorov entropy for each system for a given system size. Our approach forms an effective and approximation-free instrument for studying the dynamical properties exhibited by the Coulombic and gravitational systems and finds applications in investigating indications of thermodynamic transitions in small as well as large versions of the spatially periodic systems. When a phase transition exists, we find that the largest Lyapunov exponent serves as a precursor of the transition that becomes more pronounced as the system size increases.Entropy2017-05-20195Article10.3390/e190502382381099-43002017-05-20doi: 10.3390/e19050238Pankaj KumarBruce Miller<![CDATA[Entropy, Vol. 19, Pages 237: Can a Robot Have Free Will?]]>
http://www.mdpi.com/1099-4300/19/5/237
Using insights from cybernetics and an information-based understanding of biological systems, a precise, scientifically inspired definition of free-will is offered and the essential requirements for an agent to possess it in principle are set out. These are: (a) there must be a self to self-determine; (b) there must be a non-zero probability of more than one option being enacted; (c) there must be an internal means of choosing among options (which is not merely random, since randomness is not a choice). For (a) to be fulfilled, the agent of self-determination must be organisationally closed (a “Kantian whole”). For (c) to be fulfilled: (d) options must be generated from an internal model of the self which can calculate future states contingent on possible responses; (e) choosing among these options requires their evaluation using an internally generated goal defined on an objective function representing the overall “master function” of the agent and (f) for “deep free-will”, at least two nested levels of choice and goal (d–e) must be enacted by the agent. The agent must also be able to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms: not just humans, but a wide range of organisms. The main impediment to free-will in present-day artificial robots is that they are not Kantian wholes. Consciousness does not seem to be a requirement and the minimum complexity for a free-will system may be quite low and include relatively simple life-forms that are at least able to learn.Entropy2017-05-20195Article10.3390/e190502372371099-43002017-05-20doi: 10.3390/e19050237Keith Farnsworth<![CDATA[Entropy, Vol. 19, Pages 236: Entropy in Investigation of Vasovagal Syndrome in Passive Head Up Tilt Test]]>
http://www.mdpi.com/1099-4300/19/5/236
This paper presents an application of Approximate Entropy (ApEn) and Sample Entropy (SampEn) in the analysis of heart rhythm, blood pressure and stroke volume for the diagnosis of vasovagal syndrome. The analyzed biosignals were recorded during positive passive tilt tests—HUTT(+). Signal changes and their entropy were compared in three main phases of the test: supine position, tilt, and pre-syncope, with special focus on the latter, which was analyzed in a sliding window of each signal. In some cases, ApEn and SampEn were equally useful for the assessment of signal complexity (p &lt; 0.05 in corresponding calculations). The complexity of the signals was found to decrease in the pre-syncope phase (SampEn (RRI): 1.20–0.34, SampEn (sBP): 1.29–0.57, SampEn (dBP): 1.19–0.48, SampEn (SV): 1.62–0.91). The pattern of the SampEn (SV) decrease differs from the pattern of the SampEn (sBP), SampEn (dBP) and SampEn (RRI) decrease. For all signals, the lowest entropy values in the pre-syncope phase were observed at the moment when loss of consciousness occurred.Entropy2017-05-20195Article10.3390/e190502362361099-43002017-05-20doi: 10.3390/e19050236Katarzyna BuszkoAgnieszka PiątkowskaEdward KoźlukGrzegorz Opolski<![CDATA[Entropy, Vol. 19, Pages 234: The Particle as a Statistical Ensemble of Events in Stueckelberg–Horwitz–Piron Electrodynamics ]]>
http://www.mdpi.com/1099-4300/19/5/234
In classical Maxwell electrodynamics, charged particles following deterministic trajectories are described by currents that induce fields, mediating interactions with other particles. Statistical methods are used when needed to treat complex particle and/or field configurations. In Stueckelberg–Horwitz–Piron (SHP) electrodynamics, the classical trajectories are traced out dynamically, through the evolution of a 4D spacetime event x μ ( τ ) as τ grows monotonically. Stueckelberg proposed to formalize the distinction between coordinate time x 0 = c t (measured by laboratory clocks) and chronology τ (the temporal ordering of event occurrence) in order to describe antiparticles and resolve problems of irreversibility such as grandfather paradoxes. Consequently, in SHP theory, the elementary object is not a particle (a 4D curve in spacetime) but rather an event (a single point along the dynamically evolving curve). Following standard deterministic methods in classical relativistic field theory, one is led to Maxwell-like field equations that are τ -dependent and sourced by a current that represents a statistical ensemble of instantaneous events distributed along the trajectory. The width λ of this distribution defines a correlation time for the interactions and a mass spectrum for the photons emitted by particles. As λ becomes very large, the photon mass goes to zero and the field equations become τ -independent Maxwell’s equations. Maxwell theory thus emerges as an equilibrium limit of SHP, in which λ is larger than any other relevant time scale. Thus, statistical mechanics is a fundamental ingredient in SHP electrodynamics, and its insights are required to give meaning to the concept of a particle.Entropy2017-05-19195Article10.3390/e190502342341099-43002017-05-19doi: 10.3390/e19050234Martin Land<![CDATA[Entropy, Vol. 19, Pages 232: A Kullback–Leibler View of Maximum Entropy and Maximum Log-Probability Methods]]>
http://www.mdpi.com/1099-4300/19/5/232
Entropy methods enable a convenient general approach to providing a probability distribution with partial information. The minimum cross-entropy principle selects the distribution that minimizes the Kullback–Leibler divergence subject to the given constraints. This general principle encompasses a wide variety of distributions, and generalizes other methods that have been proposed independently. There remains, however, some confusion about the breadth of entropy methods in the literature. In particular, the asymmetry of the Kullback–Leibler divergence provides two important special cases when the target distribution is uniform: the maximum entropy method and the maximum log-probability method. This paper compares the performance of both methods under a variety of conditions. We also examine a generalized maximum log-probability method as a further demonstration of the generality of the entropy approach.Entropy2017-05-19195Article10.3390/e190502322321099-43002017-05-19doi: 10.3390/e19050232Ali AbbasAndrea H. CadenbachEhsan Salimi<![CDATA[Entropy, Vol. 19, Pages 231: A Novel Faults Diagnosis Method for Rolling Element Bearings Based on EWT and Ambiguity Correlation Classifiers]]>
http://www.mdpi.com/1099-4300/19/5/231
Owing to the non-stationary characteristics of the acoustic emission signals of rolling element bearings, a novel fault diagnosis method based on the empirical wavelet transform (EWT) and ambiguity correlation classification (ACC) is proposed. In the proposed method, the acoustic emission signal acquired from a single-channel sensor is first decomposed with the EWT, and the mutual information between each decomposed component and the original signal is then computed and used to retain the noise-free components and obtain the reconstructed signal. Afterwards, the ambiguity correlation classifier is applied; it combines the advantages of ambiguity functions for processing non-stationary signals with those of correlation coefficients. Finally, multiple datasets of reconstructed signals for different operating conditions are fed to the ambiguity correlation classifier for training and testing. The proposed method was verified experimentally, and the results show that it can effectively diagnose three different operating conditions of rolling element bearings with higher detection rates than support vector machine and back-propagation (BP) neural network algorithms.Entropy2017-05-18195Article10.3390/e190502312311099-43002017-05-18doi: 10.3390/e19050231Xingmeng JiangLi WuMingtao Ge<![CDATA[Entropy, Vol. 19, Pages 230: Specific and Complete Local Integration of Patterns in Bayesian Networks]]>
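The mutual-information screening step can be sketched with a plain histogram estimator; this is an illustration only, not the authors' EWT pipeline, and the test signals are invented for the demonstration.

```python
import math
from collections import Counter

def mutual_information(x, y, bins=8):
    """Histogram estimate of mutual information (nats) between two
    equal-length sequences x and y."""
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(int((s - lo) / w), bins - 1) for s in v]
    bx, by = binned(x), binned(y)
    n = len(x)
    pxy = Counter(zip(bx, by)); px = Counter(bx); py = Counter(by)
    return sum((c / n) * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())

# a component identical to the signal shares far more information with it
# than an index-scrambled copy of the same values does
sig = [math.sin(0.1 * k) for k in range(500)]
jumbled = [math.sin(0.1 * ((k * 197) % 500)) for k in range(500)]
print(mutual_information(sig, sig), mutual_information(sig, jumbled))
```

In the paper's setting, components whose mutual information with the original signal is high would be kept for reconstruction; low-information components would be discarded as noise.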
http://www.mdpi.com/1099-4300/19/5/230
We present a first formal analysis of specific and complete local integration. Complete local integration was previously proposed as a criterion for detecting entities or wholes in distributed dynamical systems. Such entities in turn were conceived to form the basis of a theory of emergence of agents within dynamical systems. Here, we give a more thorough account of the underlying formal measures. The main contribution is the disintegration theorem which reveals a special role of completely locally integrated patterns (what we call ι-entities) within the trajectories they occur in. Apart from proving this theorem we introduce the disintegration hierarchy and its refinement-free version as a way to structure the patterns in a trajectory. Furthermore, we construct the least upper bound and provide a candidate for the greatest lower bound of specific local integration. Finally, we calculate the ι -entities in small example systems as a first sanity check and find that ι -entities largely fulfil simple expectations.Entropy2017-05-18195Article10.3390/e190502302301099-43002017-05-18doi: 10.3390/e19050230Martin BiehlTakashi IkegamiDaniel Polani<![CDATA[Entropy, Vol. 19, Pages 227: Ion Hopping and Constrained Li Diffusion Pathways in the Superionic State of Antifluorite Li2O]]>
http://www.mdpi.com/1099-4300/19/5/227
Li2O belongs to the family of antifluorites that show superionic behavior at high temperatures. While some of the superionic characteristics of Li2O are well-known, the mechanistic details of its ionic conduction processes remain somewhat nebulous. In this work, using reported neutron diffraction data and atomistic simulations, we first establish an onset of superionic conduction, emblematic of a gradual disordering process among the Li ions, at a characteristic temperature Tα (~1000 K). In the superionic state, the Li ions are observed to exhibit dynamic disorder by hopping between the tetrahedral lattice sites. We then show that string-like ionic diffusion pathways are established among the Li ions in the superionic state. The diffusivity of these dynamical string-like structures, which have a finite lifetime, shows a remarkable correlation with the bulk diffusivity of the system.Entropy2017-05-18195Article10.3390/e190502272271099-43002017-05-18doi: 10.3390/e19050227Ajay AnnamareddyJacob Eapen<![CDATA[Entropy, Vol. 19, Pages 228: Face Verification with Multi-Task and Multi-Scale Feature Fusion]]>
http://www.mdpi.com/1099-4300/19/5/228
Face verification for unrestricted faces in the wild is a challenging task. This paper proposes a method based on two deep convolutional neural networks (CNNs) for face verification. In this work, we explore using identification signals to supervise one CNN and a combination of semi-verification and identification signals to train the other. In order to estimate the semi-verification loss at a low computational cost, a circle composed of all faces is used to select face pairs from the pairwise samples. In the face normalization step, we propose using different facial landmarks to mitigate the problems caused by pose. In addition, the final face representation is formed by concatenating the features of the two deep CNNs after principal component analysis (PCA) reduction. Furthermore, each feature is a combination of multi-scale representations obtained through auxiliary classifiers. For the final verification, we adopt the face representation of only one region and one resolution of a face, combined with a Joint Bayesian classifier. Experiments show that our method can extract effective face representations from a small training dataset, and our algorithm achieves 99.71% verification accuracy on the Labeled Faces in the Wild (LFW) dataset.Entropy2017-05-17195Article10.3390/e190502282281099-43002017-05-17doi: 10.3390/e19050228Xiaojun LuYue YangWeilin ZhangQi WangYang Wang<![CDATA[Entropy, Vol. 19, Pages 229: Investigation of the Intra- and Inter-Limb Muscle Coordination of Hands-and-Knees Crawling in Human Adults by Means of Muscle Synergy Analysis]]>
http://www.mdpi.com/1099-4300/19/5/229
To investigate the intra- and inter-limb muscle coordination mechanism of human hands-and-knees crawling by means of muscle synergy analysis, surface electromyographic (sEMG) signals of 20 human adults were collected bilaterally from 32 limb related muscles during crawling with hands and knees at different speeds. The nonnegative matrix factorization (NMF) algorithm was applied to each limb to extract muscle synergies. The results showed that intra-limb coordination was relatively stable during human hands-and-knees crawling. Two synergies, one relating to the stance phase and the other relating to the swing phase, could be extracted from each limb during a crawling cycle. Synergy structures at different speeds remained highly consistent, but the recruitment levels, durations, and phases of muscle synergies were adjusted to accommodate changes in crawling speed. Furthermore, the ipsilateral phase lag (IPL) value, used to characterize inter-limb coordination, changed with crawling speed for most subjects, and subjects using the no-limb-pairing mode at low speed tended to adopt the trot-like mode or pace-like mode at high speed. The research results could be well explained by the two-level central pattern generator (CPG) model consisting of a half-center rhythm generator (RG) and a pattern formation (PF) circuit. This study sheds light on the underlying control mechanism of human crawling.Entropy2017-05-17195Article10.3390/e190502292291099-43002017-05-17doi: 10.3390/e19050229Xiang ChenXiaocong NiuDe WuYi YuXu Zhang<![CDATA[Entropy, Vol. 19, Pages 226: Information Entropy and Measures of Market Risk]]>
http://www.mdpi.com/1099-4300/19/5/226
In this paper we investigate the relationship between the information entropy of the distribution of intraday returns and intraday and daily measures of market risk. Using data on the EUR/JPY exchange rate, we find a negative relationship between entropy and intraday Value-at-Risk, and also between entropy and intraday Expected Shortfall. This relationship is then used to forecast daily Value-at-Risk, using the entropy of the distribution of intraday returns as a predictor.Entropy2017-05-16195Article10.3390/e190502262261099-43002017-05-16doi: 10.3390/e19050226Daniel PeleEmese LazarAlfonso Dufour<![CDATA[Entropy, Vol. 19, Pages 225: Entropy Information of Cardiorespiratory Dynamics in Neonates during Sleep]]>
http://www.mdpi.com/1099-4300/19/5/225
Sleep is a central activity in human adults and occupies most of a newborn infant’s life. During sleep, autonomic control acts to modulate heart rate variability (HRV) and respiration. Mechanisms underlying cardiorespiratory interactions in different sleep states have been studied but are not yet fully understood. Signal processing approaches have focused on cardiorespiratory analysis to elucidate this co-regulation. This manuscript proposes to analyze heart rate (HR), respiratory variability and their interrelationship in newborn infants to characterize cardiorespiratory interactions in different sleep states (active vs. quiet). We search for indices that could detect altered or malfunctioning regulation, which can potentially lead to infant distress. We have analyzed inter-beat (RR) interval series and respiration in a population of 151 newborns, 33 of whom were followed up at 1 month of age. RR interval series were obtained by detecting the peaks of the QRS complex in the electrocardiogram (ECG), which correspond to ventricular depolarization. Univariate time domain, frequency domain and entropy measures were applied. In addition, Transfer Entropy was considered as a bivariate approach capable of quantifying the bidirectional information flow from one signal (respiration) to the other (RR series). Results confirm the validity of the proposed approach. Overall, HRV is higher in active sleep, while high frequency (HF) power characterizes quiet sleep more. Entropy analysis provides higher SampEn and Quadratic Sample Entropy (QSE) indices in quiet sleep. Transfer Entropy values were also higher in quiet sleep and point to a predominant influence of respiration on the RR series. At 1 month of age, time domain parameters show an increase in HR and a decrease in variability. No entropy differences were found across ages. The parameters employed in this study help to quantify the potential for infants to adapt their cardiorespiratory responses as they mature. Thus, they could be useful as early markers of risk for infant cardiorespiratory vulnerabilities.Entropy2017-05-15195Article10.3390/e190502252251099-43002017-05-15doi: 10.3390/e19050225Maristella LucchiniNicolò PiniWilliam FiferNina BurtchenMaria Signorini<![CDATA[Entropy, Vol. 19, Pages 224: Classification of Fractal Signals Using Two-Parameter Non-Extensive Wavelet Entropy]]>
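Of the entropy indices mentioned above, sample entropy (SampEn) admits a compact implementation. The sketch below follows the standard definition (a simplified variant using all available templates; the tolerance r is in the units of the data, whereas in practice it is commonly set to 0.2 times the standard deviation); the example series are invented.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B), where B counts pairs of matching templates of
    length m and A those of length m+1, a match meaning Chebyshev
    distance <= r. Lower values indicate a more regular series."""
    def count(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
                   for i in range(len(t)) for j in range(i + 1, len(t)))
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

regular = [1.0, 2.0] * 50                               # perfectly repetitive
erratic = [math.sin(k * k * 0.7) for k in range(100)]   # irregular
print(sample_entropy(regular), sample_entropy(erratic))
```

A periodic series yields a SampEn near zero, while the irregular one scores much higher, which is the direction of the quiet- versus active-sleep contrast reported above.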
http://www.mdpi.com/1099-4300/19/5/224
This article proposes a methodology for the classification of fractal signals as stationary or nonstationary. The methodology is based on the theoretical behavior of two-parameter wavelet entropy of fractal signals. The wavelet ( q , q ′ ) -entropy is a wavelet-based extension of the ( q , q ′ ) -entropy of Borges and is based on the entropy planes for various q and q ′ ; it is theoretically shown that it constitutes an efficient and effective technique for fractal signal classification. Moreover, the second parameter q ′ provides further analysis flexibility and robustness in the sense that different ( q , q ′ ) pairs can analyze the same phenomena and increase the range of dispersion of entropies. A comparison study against the standard signal summation conversion technique shows that the proposed methodology is not only comparable in accuracy but also more computationally efficient. The application of the proposed methodology to physiological and financial time series is also presented along with the classification of these as stationary or nonstationary.Entropy2017-05-15195Article10.3390/e190502242241099-43002017-05-15doi: 10.3390/e19050224Julio Ramírez-PachecoJoel Trejo-SánchezJoaquin Cortez-GonzálezRamón Palacio<![CDATA[Entropy, Vol. 19, Pages 223: Prediction and Evaluation of Zero Order Entropy Changes in Grammar-Based Codes]]>
http://www.mdpi.com/1099-4300/19/5/223
The change of zero order entropy is studied over different strategies of grammar production rule selection. Two major classes of rules are distinguished: transformations that leave the message size intact and substitution functions that change the message size. Relations for the change of zero order entropy are derived for both cases, and conditions under which the entropy decreases are described. In this article, several different greedy strategies that reduce zero order entropy, as well as message size, are summarized, and a new strategy, MinEnt, is proposed. The resulting evolution of the zero order entropy is compared with the strategy of selecting the most frequent digram, as used in the Re-Pair algorithm.Entropy2017-05-13195Article10.3390/e190502232231099-43002017-05-13doi: 10.3390/e19050223Michal VasinekJan Platos<![CDATA[Entropy, Vol. 19, Pages 218: Minimum Entropy Active Fault Tolerant Control of the Non-Gaussian Stochastic Distribution System Subjected to Mean Constraint]]>
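A single step of the most-frequent-digram strategy used as the comparison baseline can be sketched as follows. This is a simplified illustration with an invented message: `str.replace` ignores the overlap bookkeeping that a real Re-Pair implementation needs, and the zero-order entropy is computed directly from symbol frequencies.

```python
import math
from collections import Counter

def h0(s):
    """Zero-order entropy per symbol, in bits."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

def replace_top_digram(s, new_symbol):
    """One Re-Pair-style step: substitute the most frequent digram
    by a fresh nonterminal symbol."""
    pairs = Counter(s[i:i + 2] for i in range(len(s) - 1))
    digram, _ = pairs.most_common(1)[0]
    return s.replace(digram, new_symbol), digram

msg = "abababcabab"
out, picked = replace_top_digram(msg, "X")
print(picked, out)  # → ab XXXcXX
print(len(msg) * h0(msg), len(out) * h0(out))  # total zero-order bits before/after
```

Comparing `len * h0` before and after shows how one substitution changes both the message size and its zero-order entropy, which is exactly the trade-off the strategies above optimize.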
http://www.mdpi.com/1099-4300/19/5/218
Stochastic distribution control (SDC) systems are a class of systems in which the output considered is the measured probability density function (PDF) of the system output, while the input is a conventional crisp signal. The purpose of active fault tolerant control of such systems is to use the fault estimate and other measured information to make the output PDF track the given distribution when the objective PDF is known. However, if the target PDF is unavailable, PDF tracking becomes impossible, and minimum entropy control of the system output can be considered as an alternative strategy. Since the mean represents the central location of a stochastic variable, it is reasonable to design the minimum entropy fault tolerant controller subject to a mean constraint. In this paper, using the rational square-root B-spline model for the shape control of the output PDF, a nonlinear adaptive observer based fault diagnosis algorithm is proposed to diagnose the fault. Through controller reconfiguration, the system entropy subject to the mean constraint can still be minimized when a fault occurs. An illustrative example demonstrates the use of the minimum entropy fault tolerant control algorithms.Entropy2017-05-11195Article10.3390/e190502182181099-43002017-05-11doi: 10.3390/e19050218Haokun JinYacun GuanLina Yao<![CDATA[Entropy, Vol. 19, Pages 221: Muscle Fatigue Analysis of the Deltoid during Three Head-Related Static Isometric Contraction Tasks]]>
http://www.mdpi.com/1099-4300/19/5/221
This study aimed to investigate the fatiguing characteristics of muscle-tendon units (MTUs) within skeletal muscles during static isometric contraction tasks. The deltoid was selected as the target muscle, and three head-related static isometric contraction tasks were designed to activate its three heads in different modes. Nine male subjects participated in this study. Surface electromyography (SEMG) signals were collected synchronously from the three heads of the deltoid. The performance of five SEMG parameters in quantifying fatigue, namely root mean square (RMS), mean power frequency (MPF), the first coefficient of the autoregressive model (ARC1), sample entropy (SE) and Higuchi’s fractal dimension (HFD), was first evaluated in terms of the sensitivity-to-variability ratio (SVR) and consistency. The HFD parameter was then selected as the fatigue index for further muscle fatigue analysis. The experimental results demonstrated that the three deltoid heads presented different activation modes during the three head-related fatiguing contractions. The fatiguing characteristics of the three heads were found to be task-dependent, and the heads kept at a relatively high activation level were more prone to fatigue. In addition, the differences in fatiguing rate between heads increased with load. The findings of this study can be helpful in better understanding the underlying neuromuscular control strategies of the central nervous system (CNS). Based on the results, the CNS is thought to control deltoid contraction by taking the three heads as functional units, although a certain synergy among heads may also exist to accomplish a contraction task.Entropy2017-05-11195Article10.3390/e190502212211099-43002017-05-11doi: 10.3390/e19050221Wenxiang CuiXiang ChenShuai CaoXu Zhang<![CDATA[Entropy, Vol. 19, Pages 220: A Functorial Construction of Quantum Subtheories]]>
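Higuchi’s fractal dimension, the fatigue index adopted above, follows a standard construction; the sketch below is a plain-Python version (the choice of kmax and the test series are arbitrary, not the paper’s settings). A smooth curve should yield a dimension close to 1.

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension: curve lengths L(k) are computed at
    coarser and coarser samplings k, and the slope of log L(k) versus
    log(1/k) estimates the dimension."""
    n = len(x)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            steps = sum(abs(x[i] - x[i - k]) for i in range(m + k, n, k))
            count = (n - 1 - m) // k          # number of steps in this strand
            if count:
                lengths.append(steps * (n - 1) / (count * k) / k)
        logk.append(math.log(1.0 / k))
        logl.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) against log(1/k)
    mk = sum(logk) / len(logk); ml = sum(logl) / len(logl)
    return (sum((a - mk) * (b - ml) for a, b in zip(logk, logl))
            / sum((a - mk) ** 2 for a in logk))

line = [0.01 * i for i in range(1000)]  # smooth curve: dimension ~ 1
print(round(higuchi_fd(line), 2))  # → 1.0
```

For SEMG, the dimension rises toward 2 as the signal becomes more complex, which is why HFD can track the spectral and complexity changes accompanying fatigue.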
http://www.mdpi.com/1099-4300/19/5/220
We apply the geometric quantization procedure via symplectic groupoids to the setting of epistemically-restricted toy theories formalized by Spekkens (Spekkens, 2016). In the continuous degrees of freedom, this produces the algebraic structure of quadrature quantum subtheories. In the odd-prime finite degrees of freedom, we obtain a functor from the Frobenius algebra of the toy theories to the Frobenius algebra of stabilizer quantum mechanics.Entropy2017-05-11195Article10.3390/e190502202201099-43002017-05-11doi: 10.3390/e19050220Ivan ContrerasAli Duman<![CDATA[Entropy, Vol. 19, Pages 219: Calculating Iso-Committor Surfaces as Optimal Reaction Coordinates with Milestoning]]>
http://www.mdpi.com/1099-4300/19/5/219
Reaction coordinates are vital tools for qualitative and quantitative analysis of molecular processes. They provide a simple picture of reaction progress and essential input for calculations of free energies and rates. Iso-committor surfaces are considered the optimal reaction coordinate, and we present an algorithm to compute a sequence of such surfaces efficiently. The algorithm analyzes Milestoning results to determine the committor function; it requires only the transition probabilities between the milestones, not the transition times. We discuss the following numerical examples: (i) a transition in the Mueller potential; (ii) a conformational change of a solvated peptide; and (iii) cholesterol aggregation in membranes.Entropy2017-05-11195Article10.3390/e190502192191099-43002017-05-11doi: 10.3390/e19050219Ron ElberJuan Bello-RivasPiao MaAlfredo CardenasArman Fathizadeh<![CDATA[Entropy, Vol. 19, Pages 217: On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means]]>
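The fact that only transition probabilities are needed can be illustrated on a toy milestone chain. The sketch below is not the authors’ algorithm; it just solves the defining linear system q = Pq with absorbing boundary conditions by fixed-point iteration, for an invented five-milestone network.

```python
def committor(P, reactant, product, iters=20000):
    """Committor q(i): probability of reaching `product` before `reactant`
    starting from milestone i, given the row-stochastic milestone-to-milestone
    transition matrix P. Solves q = Pq with q(reactant)=0, q(product)=1."""
    n = len(P)
    q = [0.0] * n
    q[product] = 1.0
    for _ in range(iters):
        q = [1.0 if i == product else 0.0 if i == reactant else
             sum(P[i][j] * q[j] for j in range(n)) for i in range(n)]
    return q

# symmetric nearest-neighbour hopping between 5 milestones
P = [[0.0] * 5 for _ in range(5)]
for i in range(5):
    for j in (i - 1, i + 1):
        if 0 <= j < 5:
            P[i][j] = 0.5
P[0][1] = P[4][3] = 1.0  # reflecting ends made absorbing by the solver
q = committor(P, reactant=0, product=4)
print([round(v, 3) for v in q])  # ~ [0.0, 0.25, 0.5, 0.75, 1.0]
```

For this unbiased chain the committor grows linearly from reactant to product, matching the analytic result for a symmetric random walk; note that no transition times entered the calculation.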
http://www.mdpi.com/1099-4300/19/5/217
This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L p -means, known to be true for the Euclidean L 2 -means: Let the L p -mean estimator, which constitutes the specific functional that estimates the L p -mean of N independent and identically distributed random variables; then, (i) the expectation value of the L p -mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the L p -mean estimator also equals the mean of the distributions.Entropy2017-05-11195Article10.3390/e190502172171099-43002017-05-11doi: 10.3390/e19050217George Livadiotis<![CDATA[Entropy, Vol. 19, Pages 215: Cauchy Principal Value Contour Integral with Applications]]>
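For p ≥ 1 the Lp-mean estimator can be computed numerically as the minimizer of Σ|x_i − m|^p. The sketch below is an illustration, not the paper’s formalism: it uses ternary search on the convex objective and recovers the familiar arithmetic mean at p = 2.

```python
def lp_mean(xs, p, tol=1e-10):
    """L_p-mean: the value m minimizing sum(|x - m|**p), which is a convex
    objective for p >= 1; located by ternary search on [min(xs), max(xs)]."""
    f = lambda m: sum(abs(x - m) ** p for x in xs)
    lo, hi = min(xs), max(xs)
    while hi - lo > tol:
        a, b = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(a) < f(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

data = [1.0, 2.0, 3.0, 10.0]
print(round(lp_mean(data, 2), 6))  # p = 2 recovers the arithmetic mean: 4.0
```

Varying p moves the estimate between median-like (p → 1) and midrange-like (large p) behavior, which is the family whose law of large numbers the paper establishes.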
http://www.mdpi.com/1099-4300/19/5/215
The Cauchy principal value is a standard method, widely applied in mathematical applications, by which an improper, possibly divergent, integral is assigned a value in a balanced way around singularities or at infinity. On the other hand, predicting the entropic behavior of systems from a thermodynamic perspective commonly involves contour integrals. With the aim of facilitating the calculus of such integrals in this entropic scenario, we revisit the generalization of the Cauchy principal value to complex contour integrals, formalize its definition and, by using residue theory techniques, provide a useful way to evaluate them.Entropy2017-05-10195Article10.3390/e190502152151099-43002017-05-10doi: 10.3390/e19050215Matilde LeguaLuis Sánchez-Ruiz<![CDATA[Entropy, Vol. 19, Pages 211: Meromorphic Non-Integrability of Several 3D Dynamical Systems]]>
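The “balanced way around singularities” can be seen numerically by excluding a symmetric ε-neighbourhood of the singular point. In the sketch below (illustrative only; the integrand is an arbitrary choice, not one from the paper) the odd singularity of 1/(x−1) on [0, 2] cancels, giving principal value zero.

```python
def principal_value(f, a, b, c, eps=1e-4, n=50000):
    """Cauchy principal value of the integral of f over [a, b] with a lone
    singularity at c: integrate [a, c-eps] and [c+eps, b] by the midpoint
    rule with matched grids, so divergent contributions cancel symmetrically."""
    def midpoint(lo, hi):
        h = (hi - lo) / n
        return h * sum(f(lo + (k + 0.5) * h) for k in range(n))
    return midpoint(a, c - eps) + midpoint(c + eps, b)

pv = principal_value(lambda x: 1.0 / (x - 1.0), 0.0, 2.0, c=1.0)
print(pv)  # ≈ 0: the odd singularity cancels
```

Analytically, PV ∫₀² dx/(x−1) = 0, since the divergent pieces on either side of x = 1 are equal and opposite; the symmetric grids reproduce that cancellation to machine precision.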
http://www.mdpi.com/1099-4300/19/5/211
In this paper, we apply the differential Galoisian approach to investigate the meromorphic non-integrability of a class of 3D equations in mathematical physics, including Nosé–Hoover equations, the Lü system, the Rikitake-like system and Rucklidge equations, which are well known in the fields of molecular dynamics, chaotic theory and fluid mechanics, respectively. Our main results show that all these considered systems are, in fact, non-integrable in nearly all parameters.Entropy2017-05-10195Article10.3390/e190502112111099-43002017-05-10doi: 10.3390/e19050211Kaiyin HuangShaoyun ShiWenlei Li<![CDATA[Entropy, Vol. 19, Pages 216: Designing Labeled Graph Classifiers by Exploiting the Rényi Entropy of the Dissimilarity Representation]]>
http://www.mdpi.com/1099-4300/19/5/216
Representing patterns as labeled graphs is becoming increasingly common in the broad field of computational intelligence. Accordingly, a wide repertoire of pattern recognition tools, such as classifiers and knowledge discovery procedures, are nowadays available and tested for various datasets of labeled graphs. However, the design of effective learning procedures operating in the space of labeled graphs is still a challenging problem, especially from the computational complexity viewpoint. In this paper, we present a major improvement of a general-purpose classifier for graphs, which is conceived on an interplay between dissimilarity representation, clustering, information-theoretic techniques, and evolutionary optimization algorithms. The improvement focuses on a specific key subroutine devised to compress the input data. We prove different theorems which are fundamental to the setting of the parameters controlling such a compression operation. We demonstrate the effectiveness of the resulting classifier by benchmarking the developed variants on well-known datasets of labeled graphs, considering as distinct performance indicators the classification accuracy, computing time, and parsimony in terms of structural complexity of the synthesized classification models. The results show state-of-the-art standards in terms of test set accuracy and a considerable speed-up for what concerns the computing time.Entropy2017-05-09195Article10.3390/e190502162161099-43002017-05-09doi: 10.3390/e19050216Lorenzo Livi<![CDATA[Entropy, Vol. 19, Pages 214: Anatomy of a Spin: The Information-Theoretic Structure of Classical Spin Systems]]>
http://www.mdpi.com/1099-4300/19/5/214
Collective organization in matter plays a significant role in its expressed physical properties. Typically, it is detected via an order parameter, appropriately defined for each given system’s observed emergent patterns. Recent developments in information theory, however, suggest quantifying collective organization in a system- and phenomenon-agnostic way: decomposing the system’s thermodynamic entropy density into a localized entropy, that is solely contained in the dynamics at a single location, and a bound entropy, that is stored in space as domains, clusters, excitations, or other emergent structures. As a concrete demonstration, we compute this decomposition and related quantities explicitly for the nearest-neighbor Ising model on the 1D chain, on the Bethe lattice with coordination number k = 3 , and on the 2D square lattice, illustrating its generality and the functional insights it gives near and away from phase transitions. In particular, we consider the roles that different spin motifs play (in cluster bulk, cluster edges, and the like) and how these affect the dependencies between spins.Entropy2017-05-08195Article10.3390/e190502142141099-43002017-05-08doi: 10.3390/e19050214Vikram VijayaraghavanRyan JamesJames Crutchfield<![CDATA[Entropy, Vol. 19, Pages 213: Cockroach Swarm Optimization Algorithm for Travel Planning]]>
http://www.mdpi.com/1099-4300/19/5/213
In transport planning, passengers should be able to travel through a complicated transportation network while making efficient use of different modes of transport. In this paper, we propose the use of a cockroach swarm optimization algorithm for determining paths with the shortest travel time. In our approach, this algorithm has been modified to work with the time-expanded model. We therefore present how the algorithm has to be adapted to this model, including correctly creating solutions and defining steps and movement in the search space. With the proposed modifications, we are able to solve the journey planning problem. The results show that the performance of our approach, in terms of converging to the best solutions, is satisfactory. Moreover, we have compared our results with Dijkstra’s algorithm and a particle swarm optimization algorithm.Entropy2017-05-06195Article10.3390/e190502132131099-43002017-05-06doi: 10.3390/e19050213Joanna KwiecieńMarek Pasieka<![CDATA[Entropy, Vol. 19, Pages 210: Information Content Based Optimal Radar Waveform Design: LPI’s Purpose]]>
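Dijkstra’s algorithm on a time-expanded graph, the exact baseline mentioned above, can be sketched as follows; the tiny network, the (stop, time) node naming, and the edge weights are all invented for illustration.

```python
import heapq

def dijkstra(adj, src):
    """Shortest arrival costs from src in a weighted digraph given as
    {node: [(neighbour, cost), ...]}; nodes here are (stop, time) events
    of a time-expanded model, and edges are rides or waits."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {
    ("A", 0): [(("B", 10), 10), (("A", 5), 5)],   # ride to B, or wait at A
    ("A", 5): [(("C", 12), 7)],
    ("B", 10): [(("C", 20), 10)],
    ("C", 12): [],
    ("C", 20): [],
}
dist = dijkstra(adj, ("A", 0))
print(dist[("C", 12)], dist[("C", 20)])  # → 12 20
```

The time-expanded trick is that waiting is just another edge, so a plain shortest-path search over event nodes yields earliest arrivals; the swarm algorithm of the paper explores the same search space heuristically.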
http://www.mdpi.com/1099-4300/19/5/210
This paper presents a low probability of interception (LPI) radar waveform design method with a fixed average power constraint based on information theory. The Kullback–Leibler divergence (KLD) between the intercept signal and background noise is presented as a practical metric to evaluate the performance of the adversary intercept receiver in this paper. Through combining it with the radar performance metric, that is, the mutual information (MI), a multi-objective optimization model of LPI waveform design is developed. It is a trade-off between the performance of radar and enemy intercept receiver. After being transformed into a single-objective optimization problem, it can be solved by using an interior point method and a sequential quadratic programming (SQP) method. Simulation results verify the correctness and effectiveness of the proposed LPI radar waveform design method.Entropy2017-05-06195Article10.3390/e190502102101099-43002017-05-06doi: 10.3390/e19050210Jun ChenFei WangJianjiang Zhou<![CDATA[Entropy, Vol. 19, Pages 212: Boltzmann Entropy of a Newtonian Universe]]>
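For Gaussian statistics the KLD metric has a closed form. The sketch below is an illustration under that assumption (the paper’s signal model is more elaborate): it shows the divergence between a signal-plus-noise distribution and a noise-only distribution growing with signal power, i.e. with detectability by the intercept receiver.

```python
import math

def kld_gaussian(mu1, s1, mu2, s2):
    """KL divergence (nats) between univariate Gaussians
    N(mu1, s1^2) and N(mu2, s2^2), closed form."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

# divergence of intercepted signal-plus-noise from background noise
noise_power = 1.0
for signal_power in (0.1, 0.5, 2.0):
    d = kld_gaussian(0.0, math.sqrt(noise_power + signal_power),
                     0.0, math.sqrt(noise_power))
    print(signal_power, round(d, 4))
```

An LPI waveform keeps this divergence small (hard to detect) while keeping the radar’s own mutual information large, which is the trade-off the optimization model formalizes.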
http://www.mdpi.com/1099-4300/19/5/212
A dynamical estimate is given for the Boltzmann entropy of the Universe, under the simplifying assumptions provided by Newtonian cosmology. We first model the cosmological fluid as the probability fluid of a quantum-mechanical system. Next, following current ideas about the emergence of spacetime, we regard gravitational equipotentials as isoentropic surfaces. Therefore, gravitational entropy is proportional to the vacuum expectation value of the gravitational potential in a certain quantum state describing the matter contents of the Universe. The entropy of the matter sector can also be computed. While providing values of the entropy that turn out to be somewhat higher than existing estimates, our results are in perfect compliance with the upper bound set by the holographic principle.Entropy2017-05-06195Article10.3390/e190502122121099-43002017-05-06doi: 10.3390/e19050212D. CabreraPedro Fernández de CórdobaJ.M. Isidro<![CDATA[Entropy, Vol. 19, Pages 209: Ensemble Averages, Soliton Dynamics and Influence of Haptotaxis in a Model of Tumor-Induced Angiogenesis]]>
http://www.mdpi.com/1099-4300/19/5/209
In this work, we present a numerical study of the influence of matrix degrading enzyme (MDE) dynamics and haptotaxis on the development of vessel networks in tumor-induced angiogenesis. Avascular tumors produce growth factors that induce nearby blood vessels to emit sprouts formed by endothelial cells. These capillary sprouts advance toward the tumor by chemotaxis (gradients of growth factor) and haptotaxis (adhesion to the tissue matrix outside blood vessels). The motion of the capillaries in this constrained space is modelled by stochastic processes (Langevin equations, branching and merging of sprouts) coupled to continuum equations for concentrations of involved substances. There is a complementary deterministic description in terms of the density of actively moving tips of vessel sprouts. The latter forms a stable soliton-like wave whose motion is influenced by the different taxis mechanisms. We show the delaying effect of haptotaxis on the advance of the angiogenic vessel network by direct numerical simulations of the stochastic process and by a study of the soliton motion.Entropy2017-05-04195Article10.3390/e190502092091099-43002017-05-04doi: 10.3390/e19050209Luis BonillaManuel CarreteroFilippo Terragni<![CDATA[Entropy, Vol. 19, Pages 204: Measures of Qualitative Variation in the Case of Maximum Entropy]]>
http://www.mdpi.com/1099-4300/19/5/204
The asymptotic behavior of qualitative variation statistics, including entropy measures, can be modeled well by normal distributions. In this study, we test the normality of various qualitative variation measures in general. We find that almost all indices tend to normality as the sample size increases, and they are highly correlated. However, for all of these qualitative variation statistics, maximum uncertainty is a serious factor that prevents normality. We therefore study the properties of two qualitative variation statistics, VarNC and StDev, in the case of maximum uncertainty, since these two statistics show lower sampling variability and utilize all sample information. We derive the probability distribution functions of these statistics and prove that they are consistent. We also discuss the relationship between VarNC and the normalized form of Tsallis (α = 2) entropy in the case of maximum uncertainty.Entropy2017-05-04195Article10.3390/e190502042041099-43002017-05-04doi: 10.3390/e19050204Atif EvrenErhan Ustaoğlu<![CDATA[Entropy, Vol. 19, Pages 208: Objective Bayesian Entropy Inference for Two-Parameter Logistic Distribution Using Upper Record Values]]>
http://www.mdpi.com/1099-4300/19/5/208
In this paper, we provide an entropy inference method that is based on an objective Bayesian approach for upper record values having a two-parameter logistic distribution. We derive the entropy that is based on the i-th upper record value and the joint entropy that is based on the upper record values. Moreover, we examine their properties. For objective Bayesian analysis, we obtain objective priors, namely, the Jeffreys and reference priors, for the unknown parameters of the logistic distribution. The priors are based on upper record values. Then, we develop an entropy inference method that is based on these objective priors. In real data analysis, we assess the quality of the proposed models under the objective priors and compare them with the model under the informative prior.Entropy2017-05-03195Article10.3390/e190502082081099-43002017-05-03doi: 10.3390/e19050208Jung SeoYongku Kim<![CDATA[Entropy, Vol. 19, Pages 206: Divergence and Sufficiency for Convex Optimization]]>
http://www.mdpi.com/1099-4300/19/5/206
Logarithmic score and information divergence appear in information theory, statistics, statistical mechanics, and portfolio theory. We demonstrate that all these topics involve some kind of optimization that leads directly to regret functions and such regret functions are often given by Bregman divergences. If a regret function also fulfills a sufficiency condition it must be proportional to information divergence. We will demonstrate that sufficiency is equivalent to the apparently weaker notion of locality and it is also equivalent to the apparently stronger notion of monotonicity. These sufficiency conditions have quite different relevance in the different areas of application, and often they are not fulfilled. Therefore sufficiency conditions can be used to explain when results from one area can be transferred directly to another and when one will experience differences.Entropy2017-05-03195Article10.3390/e190502062061099-43002017-05-03doi: 10.3390/e19050206Peter Harremoës<![CDATA[Entropy, Vol. 19, Pages 205: About the Concept of Quantum Chaos]]>
http://www.mdpi.com/1099-4300/19/5/205
The research on quantum chaos finds its roots in the study of the spectrum of complex nuclei in the 1950s and the pioneering experiments in microwave billiards in the 1970s. Since then, a large number of new results was produced. Nevertheless, the work on the subject is, even at present, a superposition of several approaches expressed in different mathematical formalisms and weakly linked to each other. The purpose of this paper is to supply a unified framework for describing quantum chaos using the quantum ergodic hierarchy. Using the factorization property of this framework, we characterize the dynamical aspects of quantum chaos by obtaining the Ehrenfest time. We also outline a generalization of the quantum mixing level of the kicked rotator in the context of the impulsive differential equations.Entropy2017-05-03195Concept Paper10.3390/e190502052051099-43002017-05-03doi: 10.3390/e19050205Ignacio GomezMarcelo LosadaOlimpia Lombardi<![CDATA[Entropy, Vol. 19, Pages 207: Coarse Graining Shannon and von Neumann Entropies]]>
http://www.mdpi.com/1099-4300/19/5/207
The nature of coarse graining is intuitively “obvious”, but it is rather difficult to find explicit and calculable models of the coarse graining process (and the resulting entropy flow) discussed in the literature. What we would like to have at hand is some explicit and calculable process that takes an arbitrary system, with specified initial entropy S, and monotonically and controllably drives the entropy to its maximum value. This does not have to be a physical process; in fact, for some purposes it is better to deal with a gedanken-process, since then it is more obvious how the “hidden information” is hiding in the fine-grained correlations that one is simply agreeing not to look at. We present several simple, mathematically well-defined, and easy-to-work-with conceptual models for coarse graining. We consider both the classical Shannon and quantum von Neumann entropies, including models based on quantum decoherence, and analyse the entropy flow in some detail. When coarse graining the quantum von Neumann entropy, we find it extremely useful to introduce an adaptation of Hawking’s super-scattering matrix. The explicit models that we construct allow us to quantify and keep clear track of the entropy that appears when coarse graining the system and the information that can be hidden in unobserved correlations. (While not the focus of the current article, in the long run these considerations are of interest when addressing the black hole information puzzle.)
Entropy 2017, 19(5), 207; Article; doi: 10.3390/e19050207; published 2017-05-03. Authors: Ana Alonso-Serrano, Matt Visser.
<![CDATA[Entropy, Vol. 19, Pages 203: Fractional Diffusion in a Solid with Mass Absorption]]>
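The Alonso-Serrano–Visser abstract above (doi: 10.3390/e19050207) asks for an explicit, calculable process that monotonically drives entropy to its maximum. One elementary classical example, a sketch only and not one of the paper's specific models, is repeated blending with the uniform distribution. This is a doubly stochastic map, so by concavity of Shannon entropy each step is entropy non-decreasing, and the entropy converges to its maximum log n.

```python
import numpy as np

def shannon(p):
    """Shannon entropy (natural log) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mix(p, lam=0.3):
    """Doubly stochastic 'partial mixing': blend p with the uniform
    distribution.  Such maps never decrease Shannon entropy."""
    n = len(p)
    return (1 - lam) * p + lam * np.full(n, 1.0 / n)

p = np.array([0.7, 0.2, 0.05, 0.05])
entropies = [shannon(p)]
for _ in range(50):
    p = mix(p)
    entropies.append(shannon(p))
# The sequence of entropies climbs monotonically toward log(4).
```

The "hidden information" here is exactly the knowledge of the original p that the mixing map agrees to forget at each step.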
http://www.mdpi.com/1099-4300/19/5/203
The space-time-fractional diffusion equation with the Caputo time-fractional derivative and the Riesz fractional Laplacian is considered in the case of axial symmetry. Mass absorption (mass release) is described by a source term proportional to the concentration. The integral transform technique is used. Different particular cases of the solution are studied, and the numerical results are illustrated graphically.
Entropy 2017, 19(5), 203; Article; doi: 10.3390/e19050203; published 2017-05-02. Authors: Yuriy Povstenko, Tamara Kyrylych, Grażyna Rygał.
<![CDATA[Entropy, Vol. 19, Pages 202: Kinetics of Interactions of Matter, Antimatter and Radiation Consistent with Antisymmetric (CPT-Invariant) Thermodynamics]]>
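For the Povstenko et al. abstract above (doi: 10.3390/e19050203), the Caputo time-fractional derivative can be approximated numerically with the standard L1 finite-difference scheme. The sketch below is illustrative and not from the paper; it checks the scheme against the known closed form D^α t² = 2 t^(2−α) / Γ(3 − α) for 0 < α < 1.

```python
import math

def caputo_l1(f, t, alpha, n=2000):
    """L1 finite-difference approximation of the Caputo derivative of
    order 0 < alpha < 1 at time t, using n uniform time steps.  The
    weights b_k come from integrating the (t - s)^(-alpha) kernel
    against a piecewise-constant approximation of f'."""
    dt = t / n
    c = dt ** (-alpha) / math.gamma(2 - alpha)
    s = 0.0
    for k in range(n):
        b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        s += b * (f((n - k) * dt) - f((n - k - 1) * dt))
    return c * s

alpha, t = 0.5, 1.0
approx = caputo_l1(lambda x: x ** 2, t, alpha)
# Known analytic value for f(t) = t^2.
exact = 2 * t ** (2 - alpha) / math.gamma(3 - alpha)
```

The L1 scheme converges at rate O(Δt^(2−α)), so with n = 2000 steps the two values agree to well under 10⁻³.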
http://www.mdpi.com/1099-4300/19/5/202
This work investigates the influence of directional properties of decoherence on kinetic rate equations. Physical reality is understood as a chain of unitary and decoherence events. The former are quantum-deterministic, while the latter introduce uncertainty and increase entropy. For interactions of matter and antimatter, two approaches are considered: symmetric decoherence, which corresponds to conventional symmetric (CP-invariant) thermodynamics, and antisymmetric decoherence, which corresponds to antisymmetric (CPT-invariant) thermodynamics. Radiation, in its interactions with matter and antimatter, is shown to be decoherence-neutral. The symmetric and antisymmetric assumptions result in different interactions of radiation with matter and antimatter. The theoretical predictions for these differences are testable by comparing absorption (emission) of light by thermodynamic systems made of matter and of antimatter. Canonical typicality for quantum mixtures is briefly discussed in Appendix A.
Entropy 2017, 19(5), 202; Article; doi: 10.3390/e19050202; published 2017-05-02. Author: A.Y. Klimenko.
<![CDATA[Entropy, Vol. 19, Pages 198: Discovery of Kolmogorov Scaling in the Natural Language]]>
http://www.mdpi.com/1099-4300/19/5/198
We consider the rate R and variance σ² of Shannon information in snippets of text, based on word frequencies in the natural language. We empirically identify Kolmogorov’s scaling law in σ² ∝ k^(−1.66 ± 0.12) (95% c.l.) as a function of k = 1/N, where N is the word count of a snippet. This result suggests an analogy between information flow in snippets and the energy cascade across turbulent eddies in fluids at high Reynolds numbers. We propose R and σ² as robust utility functions for the objective ranking of concordances, for efficient search for maximal information seamlessly across different languages, and as a starting point for artificial attention.
Entropy 2017, 19(5), 198; Letter; doi: 10.3390/e19050198; published 2017-05-02. Author: Maurice van Putten.
<![CDATA[Entropy, Vol. 19, Pages 114: The Solution of Modified Fractional Bergman’s Minimal Blood Glucose-Insulin Model]]>
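The rate R and variance σ² in the van Putten abstract above (doi: 10.3390/e19050198) can be estimated with simple bookkeeping. The sketch below uses a hypothetical Zipf-distributed synthetic corpus, so it illustrates only the machinery: because its words are drawn independently, the variance of the snippet mean follows the central-limit scaling σ² ∝ 1/N rather than the steeper Kolmogorov-like exponent, which presumably reflects correlations absent from this toy corpus.

```python
import math, random
from collections import Counter

random.seed(0)
# Hypothetical corpus: 50,000 words drawn from a Zipf-like law.
vocab = [f"w{i}" for i in range(1000)]
weights = [1.0 / (i + 1) for i in range(1000)]
corpus = random.choices(vocab, weights=weights, k=50_000)

freq = Counter(corpus)
total = sum(freq.values())
# Per-word Shannon information: -log2 of the empirical frequency.
info = {w: -math.log2(n / total) for w, n in freq.items()}

def snippet_stats(N):
    """Mean rate R and variance sigma^2 of information per word over
    consecutive non-overlapping snippets of N words."""
    rates = []
    for i in range(0, len(corpus) - N, N):
        rates.append(sum(info[w] for w in corpus[i:i + N]) / N)
    m = sum(rates) / len(rates)
    var = sum((r - m) ** 2 for r in rates) / len(rates)
    return m, var

R10, v10 = snippet_stats(10)
R100, v100 = snippet_stats(100)
R1000, v1000 = snippet_stats(1000)
```

Fitting the slope of log σ² against log k = −log N by least squares over many snippet sizes would recover the scaling exponent for a real corpus.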
http://www.mdpi.com/1099-4300/19/5/114
In the present paper, we use analytical techniques to solve the systems of fractional nonlinear differential equations that arise in Bergman’s minimal model, used to describe blood glucose and insulin metabolism after intravenous tolerance testing. We also discuss the stability and uniqueness of the solution.
Entropy 2017, 19(5), 114; Article; doi: 10.3390/e19050114; published 2017-05-02. Authors: Badr Alkahtani, Obaid Algahtani, Ravi Dubey, Pranay Goswami.
<![CDATA[Entropy, Vol. 19, Pages 200: Effects of Movable-Baffle on Heat Transfer and Entropy Generation in a Cavity Saturated by CNT Suspensions: Three-Dimensional Modeling]]>
http://www.mdpi.com/1099-4300/19/5/200
Convective heat transfer and entropy generation in a 3D closed cavity, equipped with an adiabatic driven baffle and filled with CNT (carbon nanotube)–water nanofluid, are numerically investigated for Rayleigh numbers from 10³ to 10⁵. The study covers three configurations: a fixed baffle (V = 0), a baffle rotating clockwise (V+), and a baffle rotating counterclockwise (V−), with CNT concentrations ranging from 0 to 15%. The governing equations are formulated using the vector potential–vorticity formulation in its three-dimensional form and solved by the finite volume method. The effects of the motion direction of the inserted driven baffle and of the CNT concentration on heat transfer and entropy generation are studied. It is observed that, for low Rayleigh numbers, the motion of the driven baffle enhances heat transfer regardless of its direction, and the effect of CNT concentration is negligible. However, at higher Rayleigh numbers, adding a driven baffle increases the heat transfer only when it moves in the direction of the decreasing temperature gradient; otherwise, convective heat transfer cannot be enhanced, due to flow blockage at the corners of the baffle.
Entropy 2017, 19(5), 200; Article; doi: 10.3390/e19050200; published 2017-04-29. Authors: Abdullah Al-Rashed, Walid Aich, Lioua Kolsi, Omid Mahian, Ahmed Hussein, Mohamed Borjini.
<![CDATA[Entropy, Vol. 19, Pages 201: Utility, Revealed Preferences Theory, and Strategic Ambiguity in Iterated Games]]>
http://www.mdpi.com/1099-4300/19/5/201
Iterated games, in which the same economic interaction is repeatedly played between the same agents, are an important framework for understanding the effectiveness of strategic choices over time. To date, very little work has applied information theory to the information sets that agents use to decide what action to take next in such strategic situations. This article looks at the mutual information between previous game states and an agent’s next action by introducing two new classes of games: “invertible games” and “cyclical games”. By explicitly expanding the mutual information between past states and the next action, we show under what circumstances the explicit values of the utility are irrelevant for iterated games, and we relate this to the revealed preferences theory of classical economics. These information measures are then applied to the Traveler’s Dilemma and the Prisoner’s Dilemma (the latter being invertible) to illustrate their use. In the Prisoner’s Dilemma, a novel connection is made between the computational principles of logic gates and both the structure of games and the agents’ decision strategies. The approach is also applied to the cyclical game Matching Pennies to analyse the foundations of a behavioural ambiguity between two well-studied strategies: “Tit-for-Tat” and “Win-Stay, Lose-Switch”.
Entropy 2017, 19(5), 201; Article; doi: 10.3390/e19050201; published 2017-04-29. Author: Michael Harré.
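The central quantity in the Harré abstract above, the mutual information between the previous game state and an agent’s next action, can be estimated empirically from play histories. The sketch below is an illustration, not the paper’s code: it plays Tit-for-Tat against a uniformly random opponent in the Prisoner’s Dilemma. Because the strategy is a deterministic function of the opponent’s last move, the estimated mutual information approaches H(A) ≈ 1 bit.

```python
import math, random
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (bits) between the two coordinates
    of a list of (state, action) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(s for s, _ in pairs)
    py = Counter(a for _, a in pairs)
    mi = 0.0
    for (s, a), c in pxy.items():
        # p(s,a) * log2( p(s,a) / (p(s) p(a)) ), with counts c, px, py.
        mi += (c / n) * math.log2(c * n / (px[s] * py[a]))
    return mi

random.seed(1)
history = []
my_prev, opp_prev = "C", "C"
for _ in range(20_000):
    action = opp_prev                      # Tit-for-Tat rule
    history.append(((my_prev, opp_prev), action))
    my_prev, opp_prev = action, random.choice("CD")

mi = mutual_information(history)
```

Swapping in Win-Stay, Lose-Switch (whose next move depends on the full previous joint state) gives the same mutual information against a random opponent here, which is exactly the kind of behavioural ambiguity the article analyses.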