Entropy doi: 10.3390/e19110624

Authors: Xiaofei Zhu Xu Zhang Xiao Tang Xiaoping Gao Xiang Chen

The objective of this study is to re-evaluate the relation between the surface electromyogram (EMG) and muscle contraction torque in the biceps brachii (BB) muscles of healthy subjects using two different complexity measures. Ten healthy subjects were recruited and asked to complete a series of elbow flexion tasks at isometric muscle contraction levels ranging from 10% to 80% of maximum voluntary contraction (MVC) in increments of 10%. Meanwhile, both the elbow flexion torque and the surface EMG data from the muscle were recorded. The root mean square (RMS), sample entropy (SampEn) and fuzzy entropy (FuzzyEn) of the corresponding EMG data were analyzed for each contraction level, and the relation between EMG and muscle torque was accordingly quantified. The experimental results showed a nonlinear relation between the traditional RMS amplitude of EMG and the muscle torque. By contrast, the FuzzyEn of EMG exhibited a stronger linear correlation with the muscle torque than the RMS amplitude did, which indicates its great value in estimating BB muscle strength in a simple and straightforward manner. In addition, the SampEn of EMG was found to be insensitive to the varying muscle torques, presenting an almost flat trend with increasing muscle force. This characteristic of SampEn implies its potential application as a promising surface EMG biomarker for examining neuromuscular changes while overcoming interference from muscle strength.
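
Neither complexity measure is defined in the abstract itself. For reference, here is a minimal sketch of SampEn and FuzzyEn as they are commonly defined in the literature (the embedding dimension m and tolerance r below are generic defaults, not necessarily this study's settings):

```python
import numpy as np

def _templates(x, m):
    # All overlapping length-m template vectors of the series.
    return np.array([x[i:i + m] for i in range(len(x) - m)])

def _chebyshev_pairs(tpl):
    # Max-coordinate distance between every pair of templates,
    # returned for the upper triangle (i < j) only.
    d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
    return d[np.triu_indices(len(tpl), k=1)]

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B): A and B count template pairs within tolerance
    # at lengths m+1 and m, respectively.
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    count = lambda length: (_chebyshev_pairs(_templates(x, length)) <= tol).sum()
    return -np.log(count(m + 1) / count(m))

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    # FuzzyEn replaces the hard tolerance test with a fuzzy membership
    # exp(-(d/r)^n), computed on baseline-removed templates.
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def similarity(length):
        tpl = _templates(x, length)
        tpl -= tpl.mean(axis=1, keepdims=True)  # remove each local baseline
        return np.exp(-(_chebyshev_pairs(tpl) / tol) ** n).mean()
    return -np.log(similarity(m + 1) / similarity(m))
```

The design difference is visible in the code: FuzzyEn removes each template's local baseline and replaces the hard tolerance test with a smooth exponential membership, which is usually credited with its more graceful behavior on short, noisy segments such as surface EMG.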

Entropy doi: 10.3390/e19110623

Authors: Duo Hao Qiuming Li Chengwei Li

Cameras mounted on vehicles frequently suffer from image shake due to the vehicles’ motions. To remove jitter motions while preserving intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference between the probability distributions of two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas subtracting the resulting sum from the GMV yields the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
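
The RE step is an ordinary Kullback–Leibler divergence between the empirical distributions of two decomposed modes. A hedged sketch (the histogram binning and smoothing constant are illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def relative_entropy(mode_p, mode_q, bins=32):
    # Histogram-based KL divergence D(P || Q) between two VMD modes,
    # estimated over a common range so the bins are comparable.
    lo = min(mode_p.min(), mode_q.min())
    hi = max(mode_p.max(), mode_q.max())
    p, _ = np.histogram(mode_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(mode_q, bins=bins, range=(lo, hi))
    eps = 1e-12  # smoothing to avoid log(0) in empty bins
    p = p.astype(float) + eps
    q = q.astype(float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

Modes whose RE against a reference mode (e.g., the lowest-frequency, presumably intentional, one) exceeds a threshold would then be summed into the jitter estimate and subtracted from the GMV.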

Entropy doi: 10.3390/e19110622

Authors: H. van Erp Ronald Linger Pieter van Gelder

In this paper, we will give the derivation of an inquiry calculus, or, equivalently, a Bayesian information theory. From simple ordering follow lattices, or, equivalently, algebras. Lattices admit a quantification, or, equivalently, algebras may be extended to calculi. The general rules of quantification are the sum and chain rules. Probability theory follows from a quantification on the specific lattice of statements that has an upper context. Inquiry calculus follows from a quantification on the specific lattice of questions that has a lower context. A relevance measure and a product rule for relevances will be given here, which, taken together with the sum rule of relevances, will allow us to perform inquiry analyses in an algorithmic manner.

Entropy doi: 10.3390/e19110612

Authors: Amanda Oliveira Adrião Dória Neto Allan Martins

Information Theory is a branch of mathematics, more specifically of probability theory, that studies the quantification of information. Recently, several studies have successfully used Information Theoretic Learning (ITL) as a new technique for unsupervised learning, in which information measures serve as criteria of optimality. In this article, we will analyze a still unexplored aspect of these information measures: their dynamic behavior. Autoregressive models (linear and non-linear) will be used to represent the dynamics in information measures. As a source of dynamic information, videos with different characteristics, such as fading and monotonous sequences, will be used.

Entropy doi: 10.3390/e19110607

Authors: Min Lei Guang Meng Guangming Dong

Bearing vibration response studies are crucial for the condition monitoring of bearings and the quality inspection of rotating machinery systems. However, it is still very difficult to diagnose bearing faults, especially rolling element faults, due to the complex, high-dimensional and nonlinear characteristics of vibration signals as well as the strong background noise. A novel nonlinear analysis method—the symplectic entropy (SymEn) measure—is proposed to analyze the measured signals for fault monitoring of rolling bearings. The core technique of the SymEn approach is entropy analysis based on the symplectic principal components. The dynamical characteristics of the rolling bearing data are analyzed using the SymEn method. Unlike other techniques that rely on high-dimensional features in the time domain, frequency domain and empirical mode decomposition (EMD)/wavelet domain, the SymEn approach constructs low-dimensional (i.e., two-dimensional) features based on the SymEn estimate. The vibration signals from our experiments and from the Case Western Reserve University Bearing Data Center are used to verify the effectiveness of the proposed method. Meanwhile, it is found that faulty bearings have a great influence on the other, normal bearings. To sum up, the results indicate that the proposed method can be used to detect rolling bearing faults.

Entropy doi: 10.3390/e19110620

Authors: Tiefeng Peng Qibin Li Longhua Xu Chao He Liqun Luo

Foam systems have been attracting extensive attention due to their importance in a variety of applications, e.g., in the cleaning industry and in bubble flotation. In the context of flotation chemistry, flotation performance is strongly affected by bubble coalescence, which in turn depends significantly on the surface forces acting on the liquid film between bubbles. The unusually strong short-range repulsive surface interactions in Newton black films (NBF) between two interfaces less than 5 nm apart cannot be accounted for by the classical Derjaguin, Landau, Verwey, and Overbeek (DLVO) theory. This non-DLVO interaction increases exponentially with decreasing film thickness and plays a crucial role in determining liquid film stability. However, its mechanism and origin are still unclear. In the present work, we investigate the surface interaction of free-standing sodium dodecyl-sulfate (SDS) nanoscale black films in terms of the disjoining pressure using the molecular simulation method. The aqueous nanoscale film consists of a water core coated with SDS surfactants; the disjoining pressure and film tension of the SDS-NBF were quantitatively determined as functions of film thickness by a post-processing technique derived from film thermodynamics.

Entropy doi: 10.3390/e19110619

Authors: Viviana Meruane Matias Lasen Enrique López Droguett Alejandro Ortiz-Bernardin

Sandwich structures are very attractive due to their high strength at a minimum weight, and, therefore, there has been a rapid increase in their applications. Nevertheless, these structures may present imperfect bonding or debonding between the skins and core as a result of manufacturing defects or impact loads, degrading their mechanical properties. To improve both the safety and functionality of these systems, structural damage assessment methodologies can be implemented. This article presents a damage assessment algorithm to localize and quantify debonds in sandwich panels. The proposed algorithm uses damage indices derived from the modal strain energy method and a linear approximation with a maximum entropy algorithm. Full-field vibration measurements of the panels were acquired using a high-speed 3D digital image correlation (DIC) system. Since the number of damage indices per panel is too large to be used directly in a regression algorithm, reprocessing of the data using principal component analysis (PCA) and kernel PCA has been performed. The results demonstrate that the proposed methodology accurately identifies debonding in composite panels.

Entropy doi: 10.3390/e19110621

Authors: Wassim Haddad

Thermodynamics is a physical branch of science that governs the thermal behavior of dynamical systems from those as simple as refrigerators to those as complex as our expanding universe. The laws of thermodynamics involving conservation of energy and nonconservation of entropy are, without a doubt, two of the most useful and general laws in all sciences. The first law of thermodynamics, according to which energy cannot be created or destroyed, merely transformed from one form to another, and the second law of thermodynamics, according to which the usable energy in an adiabatically isolated dynamical system is always diminishing in spite of the fact that energy is conserved, have had an impact far beyond science and engineering. In this paper, we trace the history of thermodynamics from its classical to its postmodern forms, and present a tutorial and didactic exposition of thermodynamics as it pertains to some of the deepest secrets of the universe.

Entropy doi: 10.3390/e19110606

Authors: Yunna Wu Xiaokun Sun Hu Xu Chuanbo Xu Ruhang Xu

Traditional stochastic dominance rules are such strict and qualitative conditions that a stochastic dominance relation between two alternatives generally does not exist. To solve this problem, we first supplement the definitions of almost stochastic dominance (ASD). Then, we propose a new definition of stochastic dominance degree (SDD) that is based on the idea of ASD. The new definition takes both the objective mean and stakeholders’ subjective preferences into account, and can measure both standard and almost stochastic dominance degrees. The new definition contains four kinds of SDD corresponding to different stakeholders (rational investors, risk averters, risk seekers, and prospect investors). The operator in the definition can also be changed to fit different circumstances. On the basis of the new SDD definition, we present a method for solving stochastic multiple-criteria decision-making problems. A numerical experiment shows that the new method produces a more accurate result according to the utility situations of stakeholders. Moreover, even when it is difficult to elicit the group utility distribution of the stakeholders, or when that distribution is ambiguous, the method can still rank alternatives.

Entropy doi: 10.3390/e19110570

Authors: Mengmeng Li Xiaoyan Zhang

As we move into the information age, the amount of data in various fields has increased dramatically, and data sources have become increasingly widely distributed. The corresponding phenomenon of missing data is increasingly common, and it leads to incomplete multi-source information systems. In this context, this paper aims to address the limitations of rough set theory by studying the method of multi-source fusion in incomplete multi-source systems. We present a method for fusing incomplete multi-source systems based on information entropy and validate it by comparison with another fusion method. Furthermore, extensive experiments are conducted on six UCI data sets to verify the performance of the proposed method. The experimental results indicate that the multi-source information fusion approach significantly outperforms other fusion approaches.

Entropy doi: 10.3390/e19110618

Authors: Takayuki Koyama Takeru Matsuda Fumiyasu Komaki

We develop priors for Bayes estimation of quantum states that provide minimax state estimation. The relative entropy from the true density operator to a predictive density operator is adopted as a loss function. The proposed prior maximizes the conditional Holevo mutual information, and it is a quantum version of the latent information prior in classical statistics. For a one-qubit system, we provide a class of measurements that is optimal from the viewpoint of minimax state estimation.

Entropy doi: 10.3390/e19110609

Authors: Aijun Guo Jianxia Chang Yimin Wang Qiang Huang Zhihui Guo

Copula functions have been extensively used to describe the joint behaviors of extreme hydrological events and to analyze hydrological risk. Advanced marginal distribution inference, for example, the maximum entropy theory, is particularly beneficial for improving the performance of the copulas. The goal of this paper, therefore, is twofold: first, to develop a coupled maximum entropy-copula method for hydrological risk analysis through deriving the bivariate return periods, risk, reliability and bivariate design events; and second, to reveal the impact of marginal distribution selection uncertainty and sampling uncertainty on bivariate design event identification. In particular, the uncertainties involved in the second goal have not yet received significant consideration. The designed framework for hydrological risk analysis related to flood and extreme precipitation events is applied, as an example, in two catchments of the Loess Plateau, China. Results show that (1) the distribution derived by the maximum entropy principle outperforms the conventional distributions for the probabilistic modeling of flood and extreme precipitation events; (2) the bivariate return periods, risk, reliability and bivariate design events can be derived using the coupled entropy-copula method; (3) uncertainty analysis highlights the fact that the performance of the marginal distributions is closely tied to bivariate design event identification. Most importantly, sampling uncertainty causes the confidence regions of bivariate design events with return periods of 30 years to be very large, overlapping with the values of flood and extreme precipitation events that have return periods of 10 and 50 years, respectively. The large confidence regions of bivariate design events greatly challenge their application in practical engineering design.

Entropy doi: 10.3390/e19110617

Authors: Po-Ling Loh

In recent years, tools from information theory have played an increasingly prevalent role in statistical machine learning. In addition to developing efficient, computationally feasible algorithms for analyzing complex datasets, it is of theoretical importance to determine whether such algorithms are “optimal” in the sense that no other algorithm can lead to smaller statistical error. This paper provides a survey of various techniques used to derive information-theoretic lower bounds for estimation and learning. We focus on the settings of parameter and function estimation, community recovery, and online learning for multi-armed bandits. A common theme is that lower bounds are established by relating the statistical learning problem to a channel decoding problem, for which lower bounds may be derived involving information-theoretic quantities such as the mutual information, total variation distance, and Kullback–Leibler divergence. We close by discussing the use of information-theoretic quantities to measure independence in machine learning applications ranging from causality to medical imaging, and mention techniques for estimating these quantities efficiently in a data-driven manner.

Entropy doi: 10.3390/e19110615

Authors: Daryl DeFord Katherine Moore

Permutation entropy has become a standard tool for time series analysis that exploits the temporal and ordinal relationships within data. Motivated by a Kullback–Leibler divergence interpretation of permutation entropy as divergence from white noise, we extend pattern-based methods to the setting of random walk data. We analyze random walk null models for correlated time series and describe a method for determining the corresponding ordinal pattern distributions. These null models more accurately reflect the observed pattern distributions in some economic data. This leads us to define a measure of complexity using the deviation of a time series from an associated random walk null model. We demonstrate the applicability of our methods using empirical data drawn from a variety of fields, including stock market closing prices.
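
For context, the ordinal-pattern machinery the divergence interpretation rests on is compact; a minimal permutation-entropy sketch (order d = 3 is a common default, assumed here rather than taken from the paper):

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_distribution(x, d=3):
    # Relative frequency of each length-d ordinal pattern in the series.
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        pattern = tuple(int(v) for v in np.argsort(x[i:i + d]))
        counts[pattern] += 1
    total = len(x) - d + 1
    return {p: c / total for p, c in counts.items()}

def permutation_entropy(x, d=3):
    # Shannon entropy of the pattern distribution, normalized so that
    # white noise (uniform patterns) gives 1.
    h = -sum(q * log(q) for q in ordinal_distribution(x, d).values() if q > 0)
    return h / log(factorial(d))
```

The divergence-from-white-noise reading is then log(d!) minus the unnormalized entropy; replacing the uniform baseline with the pattern distribution implied by a fitted random-walk null model is, in essence, the extension the paper proposes.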

Entropy doi: 10.3390/e19110613

Authors: Jongho Keum Kurt Kornelsen James Leach Paulin Coulibaly

Having reliable water monitoring networks is an essential component of water resources and environmental management. A standardized process for the design of water monitoring networks does not exist, with the exception of the World Meteorological Organization (WMO) general guidelines on minimum network density. While one of the major challenges in the design of optimal hydrometric networks has been establishing design objectives, information theory has been successfully applied to network design problems by providing measures of the information content that can be delivered by a station or a network. This review first summarizes the common entropy terms that have been used in water monitoring network designs. Then, it covers recent applications of the entropy concept to water monitoring network designs, which are categorized into (1) precipitation; (2) streamflow and water level; (3) water quality; and (4) soil moisture and groundwater networks. The integrated design method for multivariate monitoring networks is also covered. Despite several remaining issues, entropy theory has proven well suited to water monitoring network design. However, further work is still required to provide design standards and guidelines for operational use.

Entropy doi: 10.3390/e19110614

Authors: Tong Qiao Wei Shan Chang Zhou

Centrality is one of the most studied concepts in network analysis. Although an abundance of methods for measuring centrality in social networks has been proposed, each approach exclusively characterizes limited parts of what it implies for an actor to be “vital” to the network. In this paper, a novel mechanism is proposed to quantitatively measure centrality using a re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis of the entropy of neighbor nodes. By design, the re-defined entropy centrality, which describes associations among node pairs and captures the process of influence propagation, can be interpreted as a measure of actor potential for communication activity. We evaluate the efficiency of the proposed model using four real-world datasets with varied sizes and densities and three artificial networks constructed by the Barabasi-Albert, Erdos-Renyi and Watts-Strogatz models. The four datasets are Zachary’s karate club, USAir97, the Collaboration network and the Email network URV, respectively. Extensive experimental results prove the effectiveness of the proposed method.

Entropy doi: 10.3390/e19110616

Authors: Zhiliang Pan Ping Li Jinxing Li Yanping Li

Endwall fillet and bulb structures are proposed in this research to improve the temperature uniformity of pin-finned microchannels. The periodic laminar flow and heat transfer performances are investigated under different Reynolds numbers and radii of the fillet and bulb. The results show that at a low Reynolds number, both the fillet and the bulb structures strengthen the span-wise and the normal secondary flow in the channel, eliminate the high-temperature area in the pin-fin, improve the heat transfer performance at the rear of the cylinder, and enhance the thermal uniformity of the pin-fin surface and the outside wall. Compared to traditional pin-finned microchannels, the flow resistance coefficient f of the pin-finned microchannels with fillet, as well as with a bulb of 2 μm or 5 μm radius, does not increase significantly, while f of the pin-finned microchannels with a 10 μm or 15 μm bulb increases notably. Moreover, Nu shows a maximum increase of 16.93% for those with fillet and 20.65% for those with bulb, and the synthetic thermal performance coefficient TP increases by at most 16.22% for those with fillet and 15.67% for those with bulb. Finally, as the Reynolds number increases, the heat transfer improvement of the fillet and bulb decreases.

Entropy doi: 10.3390/e19110611

Authors: Ahmed Abotabl Aria Nosratinia

Decode-Compress-Forward (DCF) is a generalization of Decode-Forward (DF) and Compress-Forward (CF). This paper investigates conditions under which DCF offers gains over DF and CF, addresses the problem of coded modulation for DCF, and evaluates the performance of DCF coded modulation implemented via low-density parity-check (LDPC) codes and polar codes. We begin by revisiting the achievable rate of DCF in discrete memoryless channels under backward decoding. We then study coded modulation for decode-compress-forward via multi-level coding. We show that the proposed multi-level coding approaches the known achievable rates of DCF. The proposed multi-level coding is implemented (and its performance verified) via a combination of standard DVB-S2 LDPC codes and polar codes whose design follows the method of Blasco-Serrano.

Entropy doi: 10.3390/e19110610

Authors: Shirin Saeedi Bidokhti Gerhard Kramer Shlomo Shamai

The downlink of symmetric Cloud Radio Access Networks (C-RANs) with multiple relays and a single receiver is studied. Lower and upper bounds are derived on the capacity. The lower bound is achieved by Marton’s coding, which facilitates dependence among the multiple-access channel inputs. The upper bound uses Ozarow’s technique to augment the system with an auxiliary random variable. The bounds are studied over scalar Gaussian C-RANs and are shown to meet and characterize the capacity for interesting regimes of operation.

Entropy doi: 10.3390/e19110608

Authors: Takayuki Kawashima Hironori Fujisawa

For high-dimensional data, many sparse regression methods have been proposed. However, they may not be robust against outliers. Recently, the use of the density power weight has been studied for robust parameter estimation, and the corresponding divergences have been discussed. One such divergence is the γ-divergence, and the robust estimator based on it is known to be strongly robust. In this paper, we extend the γ-divergence to the regression problem, consider robust and sparse regression based on the γ-divergence, and show that it has strong robustness under heavy contamination even when outliers are heterogeneous. The loss function is constructed from an empirical estimate of the γ-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of the loss function. To obtain the robust and sparse estimate, we propose an efficient update algorithm with a monotone decreasing property of the loss function. In particular, we discuss a linear regression problem with L1 regularization in detail. In numerical experiments and real data analyses, we see that the proposed method outperforms past robust and sparse methods.

Entropy doi: 10.3390/e19110605

Authors: Petr Jizba Jan Korbel

The aim of this paper is to show that the Tsallis-type (q-additive) entropic chain rule allows for a wider class of entropic functionals than previously thought. In particular, we point out that the ensuing entropy solutions (e.g., Tsallis entropy) can be determined uniquely only when one fixes the prescription for handling conditional entropies. By using the concept of Kolmogorov–Nagumo quasi-linear means, we prove this with the help of Daróczy’s mapping theorem. Our point is further illustrated with a number of explicit examples. Other salient issues, such as the connections of conditional entropies with the de Finetti–Kolmogorov theorem for escort distributions and with Landsberg’s classification of non-extensive thermodynamic systems, are also briefly discussed.

Entropy doi: 10.3390/e19110604

Authors: Jerry Gibson

Although Shannon introduced the concept of a rate distortion function in 1948, only in the last decade has the methodology for developing rate distortion function lower bounds for real-world sources been established. However, these recent results have not been fully exploited due to some confusion about how these new rate distortion bounds, once they are obtained, should be interpreted and should be used in source codec performance analysis and design. We present the relevant rate distortion theory and show how this theory can be used for practical codec design and performance prediction and evaluation. Examples for speech and video indicate exactly how the new rate distortion functions can be calculated, interpreted, and extended. These examples illustrate the interplay between source models for rate distortion theoretic studies and the source models underlying video and speech codec design. Key concepts include the development of composite source models per source realization and the application of conditional rate distortion theory.

Entropy doi: 10.3390/e19110603

Authors: Robert Swendsen

The proper definition of thermodynamics and the thermodynamic entropy is discussed in the light of recent developments. The postulates for thermodynamics are examined critically, and some modifications are suggested to allow for the inclusion of long-range forces (within a system), inhomogeneous systems with non-extensive entropy, and systems that can have negative temperatures. Only the thermodynamics of finite systems is considered, with the condition that the system is large enough for the fluctuations to be smaller than the experimental resolution. The statistical basis for thermodynamics is discussed, along with four different forms of the (classical and quantum) entropy. The strengths and weaknesses of each are evaluated in relation to the requirements of thermodynamics. Effects of order 1/N, where N is the number of particles, are included in the discussion because they have played a significant role in the literature, even if they are too small to have a measurable effect in an experiment. The discussion includes the role of discreteness, the non-zero width of the energy and particle number distributions, the extensivity of models with non-interacting particles, and the concavity of the entropy with respect to energy. The results demonstrate the validity of negative temperatures.

Entropy doi: 10.3390/e19110602

Authors: Wei-Ting Lee Che-Ming Li

A new measure based on the tripartite information diagram is proposed for identifying quantum discord in tripartite systems. The proposed measure generalizes the mutual information underlying discord from bipartite to tripartite systems, and utilizes both one-particle and two-particle projective measurements to reveal the characteristics of the tripartite quantum discord. The feasibility of the proposed measure is demonstrated by evaluating the tripartite quantum discord for systems with states close to Greenberger–Horne–Zeilinger, W, and biseparable states. In addition, the connections between tripartite quantum discord and two other quantum correlations—namely genuine tripartite entanglement and genuine tripartite Einstein–Podolsky–Rosen steering—are briefly discussed. The present study considers the case of quantum discord in tripartite systems. However, the proposed framework can be readily extended to general N-partite systems.

Entropy doi: 10.3390/e19110601

Authors: Johannes Rauh

Secret sharing is a cryptographic discipline in which the goal is to distribute information about a secret over a set of participants in such a way that only specific authorized combinations of participants together can reconstruct the secret. Thus, secret sharing schemes are systems of variables in which it is very clearly specified which subsets have information about the secret. As such, they provide perfect model systems for information decompositions. However, following this intuition too far leads to an information decomposition with negative partial information terms, which are difficult to interpret. One possible explanation is that the partial information lattice proposed by Williams and Beer is incomplete and has to be extended to incorporate terms corresponding to higher-order redundancy. These results put bounds on information decompositions that follow the partial information framework, and they hint at where the partial information lattice needs to be improved.

Entropy doi: 10.3390/e19110600

Authors: Yanguang Chen Jiejing Wang Jian Feng

The spatial patterns and processes of cities can be described with various entropy functions. However, spatial entropy always depends on the scale of measurement, and it is difficult to find a characteristic value for it. In contrast, fractal parameters can be employed to characterize scale-free phenomena and reflect the local features of random multi-scaling structure. This paper is devoted to exploring the similarities and differences between spatial entropy and fractal dimension in urban description. Drawing an analogy between cities and growing fractals, we illustrate the definitions of fractal dimension based on different entropy concepts. Three representative fractal dimensions in the multifractal dimension set, the capacity dimension, information dimension, and correlation dimension, are utilized to make empirical analyses of the urban form of two Chinese cities, Beijing and Hangzhou. The results show that the entropy values vary with the measurement scale, but the fractal dimension value is stable if the method and study area are fixed; if the linear size of the boxes is small enough (e.g., <1/25), the linear correlation between entropy and fractal dimension is significant (at the 99% confidence level). Further empirical analysis indicates that fractal dimension is close to the characteristic values of spatial entropy. This suggests that the physical meaning of fractal dimension can be interpreted through the ideas of entropy and scaling, and the conclusion is revealing for future spatial analyses of cities.
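
Of the three dimensions used, the information dimension makes the entropy-dimension link most explicit: it is the slope of the box-level Shannon entropy against the logarithm of the measurement scale. A minimal box-counting sketch (assuming a square binary raster of urban land use; the grid sizes and log-log regression are the usual estimator, not necessarily the paper's exact implementation):

```python
import numpy as np

def information_dimension(img, sizes=(2, 4, 8, 16, 32)):
    # img: square 2-D binary array (1 = occupied cell). For each box
    # size s, compute the Shannon entropy of the mass distribution over
    # the boxes; the information dimension is the slope of that entropy
    # against log(n/s), the log of the number of boxes per side.
    img = np.asarray(img, dtype=float)
    n = img.shape[0]
    entropies, scales = [], []
    for s in sizes:
        k = n // s
        boxes = img[:k * s, :k * s].reshape(k, s, k, s).sum(axis=(1, 3))
        p = boxes[boxes > 0].ravel() / boxes.sum()
        entropies.append(-(p * np.log(p)).sum())
        scales.append(np.log(n / s))
    slope, _ = np.polyfit(scales, entropies, 1)
    return slope
```

The scale dependence reported in the paper is exactly the fact that the entropies vary with the box size s, while the fitted slope stays put once the method and study area are fixed.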

Entropy doi: 10.3390/e19110598

Authors: Yun Lu Mingjiang Wang Rongchao Peng Qiquan Zhang

In the diagnosis of neurological diseases and assessment of brain function, entropy measures for quantifying electroencephalogram (EEG) signals are attracting ever-increasing attention worldwide. However, some entropy measures, such as approximate entropy (ApEn), sample entropy (SpEn), multiscale entropy and so on, imply high computational costs because their computations are based on hundreds of data points. In this paper, we propose an effective and practical method to accelerate the computation of these entropy measures by exploiting vectors with dissimilarity (VDS). By means of the VDS decision, distance calculations for most of the dissimilar vectors can be avoided during computation. The experimental results show that, compared with the conventional method, the proposed VDS method enables a reduction of the average computation time of SpEn in random signals and EEG signals by 78.5% and 78.9%, respectively. The computation times are consistently reduced by about 80.1–82.8% for five kinds of EEG signals of different lengths. The experiments further demonstrate the use of the VDS method not only to accelerate the computation of SpEn in electromyography and electrocardiogram signals but also to accelerate the computations of time-shift multiscale entropy and ApEn in EEG signals. All results indicate that the VDS method is a powerful strategy for accelerating the computation of entropy measures and has promising application potential in the field of biomedical informatics.

Entropy doi: 10.3390/e19110599

Authors: Yingxin Zhao Zhiyang Liu Yuanyuan Wang Hong Wu Shuxue Ding

Compressive sensing theory has attracted widespread attention in recent years, and sparse signal reconstruction has been widely used in signal processing and communication. This paper addresses the problem of sparse signal recovery, especially with non-Gaussian noise. The main contribution of this paper is an algorithm in which negentropy and reweighting schemes form the core of the approach. The signal reconstruction problem is formalized as a constrained minimization problem, where the objective function is the sum of a term measuring the statistical characteristics of the error, the negentropy, and a sparse regularization term, the ℓp-norm, for 0 < p < 1. The ℓp-norm, however, leads to a non-convex optimization problem which is difficult to solve efficiently. Herein we treat the ℓp-norm as a series of weighted ℓ1-norms so that the sub-problems become convex. We propose an optimized algorithm that combines forward-backward splitting. The algorithm is fast and succeeds in exactly recovering sparse signals with Gaussian and non-Gaussian noise. Several numerical experiments and comparisons demonstrate the superiority of the proposed algorithm.
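
The reweighting device of approximating the nonconvex ℓp penalty by a sequence of convex weighted ℓ1 problems is independent of the negentropy data term; below is a sketch with a plain least-squares data term standing in for it (the weight formula and the inner proximal-gradient loop are standard; the pairing is an illustrative simplification, not the paper's algorithm):

```python
import numpy as np

def reweighted_l1(A, y, p=0.5, lam=0.1, outer=15, inner=50, eps=1e-3):
    # Approximates min ||Ax - y||^2 + lam * ||x||_p^p by iteratively
    # reweighted l1: each outer pass fixes weights from the current x,
    # then a forward-backward (proximal gradient) loop solves the
    # resulting convex weighted-l1 problem.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    for _ in range(outer):
        w = (np.abs(x) + eps) ** (p - 1)  # small entries -> large weights
        for _ in range(inner):
            z = x - step * (A.T @ (A @ x - y))  # forward (gradient) step
            # backward step: soft threshold with per-coordinate weights
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x
```

Replacing the squared error in the gradient step with the negentropy-based error statistic would recover the structure of the proposed method.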

Entropy doi: 10.3390/e19110595

Authors: Erik Aurell

This paper revisits the classical problem of representing a thermal bath interacting with a system as a large collection of harmonic oscillators initially in thermal equilibrium. As is well known, the system then obeys an equation, which in the bulk and in the suitable limit tends to the Kramers–Langevin equation of physical kinetics. I consider time-dependent system-bath coupling and show that this leads to an additional harmonic force acting on the system. When the coupling is switched on and switched off rapidly, the force has delta-function support at the initial and final time. I further show that the work and heat functionals as recently defined in stochastic thermodynamics at strong coupling contain additional terms depending on the time derivative of the system-bath coupling. I discuss these terms and show that while they can be very large if the system-bath coupling changes quickly, they only give a finite contribution to the work that enters in Jarzynski’s equality. I also discuss that these corrections to standard work and heat functionals provide an explanation for non-standard terms in the change of the von Neumann entropy of a quantum bath interacting with a quantum system found in an earlier contribution (Aurell and Eichhorn, 2015).

Entropy doi: 10.3390/e19110596

Authors: Shanli Xiao Yujia Wang Hui Yu Shankun Nie

In order to improve product disassembly efficiency, the disassembly line balancing problem (DLBP) is transformed into a problem of searching for the optimum path in a directed and weighted graph by constructing the disassembly hierarchy information graph (DHIG). Then, combining the characteristics of the disassembly sequence, an entropy-based adaptive hybrid particle swarm optimization algorithm (AHPSO) is presented. In this algorithm, entropy is introduced to measure the changing tendency of population diversity, and dimension learning, crossover and mutation operators are used to increase the probability of producing feasible disassembly solutions (FDS). The performance of the proposed methodology is tested on the primary problem instances available in the literature, and the results are compared with other evolutionary algorithms. The results show that the proposed algorithm is efficient in solving the complex DLBP.

Entropy doi: 10.3390/e19110594

Authors: Enrico Sciubba Federico Zullo

The paper discusses how the two thermodynamic properties, energy (U) and exergy (E), can be used to solve the problem of quantifying the entropy of non-equilibrium systems. Both energy and exergy are a priori concepts, and their formal dependence on thermodynamic state variables at equilibrium is known. Exploiting the results of a previous study, we first calculate the non-equilibrium exergy En-eq for an arbitrary temperature distribution across a macroscopic body with an accuracy that depends only on the available information about the initial distribution: the analytical results confirm that En-eq exponentially relaxes to its equilibrium value. Using the Gyftopoulos-Beretta formalism, a non-equilibrium entropy Sn-eq(x,t) is then derived from En-eq(x,t) and U(x,t). It is finally shown that the non-equilibrium entropy generation between two states is always larger than its equilibrium (herein referred to as “classical”) counterpart. We conclude that every iso-energetic non-equilibrium state corresponds to an infinite set of non-equivalent states that can be ranked in terms of increasing entropy. Each point of the Gibbs plane therefore corresponds to a set of possible initial distributions: the non-equilibrium entropy is a multi-valued function that depends on the initial mass and energy distribution within the body. Though the concept cannot be directly extended to microscopic systems, it is argued that the present formulation is compatible with a possible reinterpretation of the existing non-equilibrium formulations, namely those of Tsallis and Grmela, and answers at least in part one of the objections set forth by Lieb and Yngvason. A systematic application of this paradigm is very convenient from a theoretical point of view and may be beneficial for meaningful future applications in the fields of nano-engineering and biological sciences.

Entropy doi: 10.3390/e19110597

Authors: Zhenghong Zhou Juanli Ju Xiaoling Su Vijay Singh Gengxi Zhang

Monthly streamflow has elements of stochasticity, seasonality, and periodicity. Spectral analysis and time series analysis can, respectively, be employed to characterize the periodical pattern and the stochastic pattern. Both Burg entropy spectral analysis (BESA) and configurational entropy spectral analysis (CESA) combine spectral analysis and time series analysis. This study compared the predictive performances of BESA and CESA for monthly streamflow forecasting in six basins in Northwest China. Four criteria were selected to evaluate the performances of these two entropy spectral analyses: relative error (RE), root mean square error (RMSE), coefficient of determination (R2), and Nash–Sutcliffe efficiency coefficient (NSE). It was found that in Northwest China, both BESA and CESA forecast monthly streamflow well for series with strong correlation, with the forecast accuracy of BESA being higher than that of CESA. For streamflow with weak correlation, the conclusion is the opposite.

Entropy doi: 10.3390/e19110593

Authors: Mohammad Abdollahzadeh Jamalabadi Payam Hooshmand Navid Bagheri HamidReza KhakRah Majid Dousti

The authors wish to make the following correction to this paper [...]

Entropy doi: 10.3390/e19110592

Authors: Lina Hao Xiaoling Su Vijay Singh Olusola Ayantobo

An integrated optimization model was developed for the spatial distribution of agricultural crops in order to efficiently utilize agricultural water and land resources simultaneously. The model is based on the spatial distribution of crop suitability, the spatial distribution of population density, and agricultural land use data. Multi-source remote sensing data are combined with constraints on optimal crop areas, which are obtained from an agricultural cropping pattern optimization model. Using the middle reaches of the Heihe River basin as an example, the spatial distributions of maize and wheat were optimized by minimizing the cross-entropy between crop distribution probabilities and desired but unknown distribution probabilities. Results showed that the area of maize should increase and the area of wheat should decrease in the study area compared with the situation in 2013. The comprehensive suitable area distribution of maize is approximately in accordance with the present distribution; however, the comprehensive suitable area distribution of wheat is not consistent with the present distribution. Through optimization, areas with a high proportion of maize and wheat became more concentrated than before. The maize areas with more than 80% allocation concentrate in the south of the study area, and the wheat areas with more than 30% allocation concentrate in the central part of the study area. The outcome of this study provides a scientific basis for farmers to select crops that are suitable for a particular area.

Entropy doi: 10.3390/e19110591

Authors: Zhengwei Pan Juliang Jin Chunhui Li Shaowei Ning Rongxing Zhou

This paper establishes a water resources vulnerability framework based on sensitivity, natural resilience and artificial adaptation, through analyses of the four states of the water system and its accompanying transformation processes. Furthermore, it proposes an analysis method for water resources vulnerability based on connection entropy, which extends the concept of contact entropy. An example is given of the water resources vulnerability in Anhui Province of China; the analysis illustrates that, overall, vulnerability levels fluctuated and showed apparent improvement trends from 2001 to 2015. Some suggestions are also provided for improving the level of water resources vulnerability in Anhui Province from the viewpoint of the vulnerability index.

Entropy doi: 10.3390/e19110590

Authors: Paolo Castiglioni Paolo Coruzzi Matteo Bini Gianfranco Parati Andrea Faini

Multiscale entropy (MSE) quantifies cardiovascular complexity by evaluating Sample Entropy (SampEn) on coarse-grained series at increasing scales τ. Two approaches exist: one uses a fixed tolerance r at all scales (MSEFT); the other uses a varying tolerance r(τ) adjusted to follow the standard-deviation changes after coarse graining (MSEVT). The aim of this study is to clarify how the choice between MSEFT and MSEVT influences the quantification and interpretation of cardiovascular MSE, and whether it affects some signals more than others. To achieve this aim, we considered 2-h long beat-by-beat recordings of inter-beat intervals and of systolic and diastolic blood pressures in male (N = 42) and female (N = 42) healthy volunteers. We compared MSE estimated with fixed and varying tolerances, and evaluated whether the choice between the MSEFT and MSEVT estimators influences the quantification and interpretation of sex-related differences. We found substantial discrepancies between MSEFT and MSEVT results, related to the degree of correlation among samples and more important for heart rate than for blood pressure; moreover, the choice between MSEFT and MSEVT may influence the interpretation of sex differences in the MSE of heart rate. We conclude that studies on cardiovascular complexity should carefully choose between fixed- and varying-tolerance estimators, particularly when evaluating the MSE of heart rate.
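
Operationally, the two estimators differ in a single line: where the tolerance is computed. A minimal sketch of coarse graining and both variants (with a plain SampEn routine taking an absolute tolerance; all parameter defaults here are generic assumptions, not the study's settings):

```python
import numpy as np

def sampen(y, m, tol):
    # Plain SampEn with an absolute tolerance (Chebyshev distance).
    def count(length):
        tpl = np.array([y[i:i + length] for i in range(len(y) - length)])
        d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        iu = np.triu_indices(len(tpl), k=1)
        return (d[iu] <= tol).sum()
    return -np.log(count(m + 1) / count(m))

def coarse_grain(x, tau):
    # Non-overlapping averages of length tau (standard MSE coarse graining).
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def mse(x, scales, m=2, r=0.2, varying_tolerance=False):
    # MSE_FT: tolerance fixed at r * std of the original series.
    # MSE_VT: tolerance re-set to r * std of each coarse-grained series.
    x = np.asarray(x, dtype=float)
    tol_fixed = r * x.std()
    curve = []
    for tau in scales:
        y = coarse_grain(x, tau)
        tol = r * y.std() if varying_tolerance else tol_fixed
        curve.append(sampen(y, m, tol))
    return np.array(curve)
```

Since coarse graining shrinks the standard deviation by an amount that depends on the series' autocorrelation, the two tolerance rules diverge most where correlations are strong, which is consistent with the heart-rate versus blood-pressure discrepancy reported above.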

Entropy doi: 10.3390/e19110588

Authors: Artemy Kolchinsky Brendan Tracey

Following the publication of our paper [1], we uncovered a mistake in the derivation of two formulas in the manuscript. [...]

Entropy doi: 10.3390/e19110589

Authors: Peter W. Egolf Kolumban Hutter

In the last few decades a series of experiments have revealed that turbulence is a cooperative and critical phenomenon showing a continuous phase change with the critical Reynolds number at its onset. However, the applications of phase transition models, such as the Mean Field Theory (MFT), the Heisenberg model, the XY model, etc., to turbulence have not been realized so far. Now, in this article, a successful analogy to magnetism is reported, and it is shown that a Mean Field Theory of Turbulence (MFTT) can be built that reveals new results. In analogy to compressibility in fluids and susceptibility in magnetic materials, the vorticibility (the authors of this article propose this new name in analogy to response functions, derived and given names in other fields) of a turbulent flowing fluid is revealed, which is identical to the relative turbulence intensity. By analogy to magnetism, in a natural manner, the Curie Law of Turbulence was discovered. It is clear that the MFTT is a theory describing equilibrium flow systems, whereas it has long been known that turbulence is a highly non-equilibrium phenomenon. Nonetheless, as a starting point for the development of thermodynamic models of turbulence, the presented MFTT is very useful for gaining physical insight, just as Kraichnan’s turbulent energy spectra of 2-D and 3-D turbulence are, which were developed with equilibrium Boltzmann-Gibbs thermodynamics and only recently have been generalized and adapted to non-equilibrium and intermittent turbulent flow fields.

Entropy doi: 10.3390/e19110587

Authors: Yancai Xiao Yi Hong Xiuhai Chen Weijia Chen

Misalignment is one of the common faults of the doubly-fed wind turbine (DFWT), and it greatly affects the normal operation of the unit. Because it is difficult to obtain a large number of misalignment fault samples from wind turbines in practice, in this paper ADAMS and MATLAB are used to simulate the various misalignment conditions of the wind turbine transmission system and obtain the corresponding stator currents. Then, the dual-tree complex wavelet transform is used to decompose and reconstruct the characteristic signal, and the dual-tree complex wavelet energy entropy is obtained from the reconstructed coefficients to form the feature vector for fault diagnosis. A support vector machine (SVM) is used as the classifier, and particle swarm optimization is used to optimize the relevant parameters of the SVM to improve its classification performance. The results show that the method proposed in this paper can effectively and accurately classify misalignment of the transmission system of the wind turbine and improve the reliability of the fault diagnosis.

Entropy doi: 10.3390/e19110585

Authors: Jinde Zheng Deyu Tu Haiyang Pan Xiaolei Hu Tao Liu Qingyun Liu

The vibration signals of rolling bearings are often nonlinear and non-stationary. Multiscale entropy (MSE) has been widely applied to measure the complexity of nonlinear mechanical vibration signals; however, at present many scholars use only single-channel vibration signals for fault diagnosis. In this paper, multiscale entropy in a multivariate framework, i.e., multivariate multiscale entropy (MMSE), is introduced to machinery fault diagnosis to improve the efficiency of fault identification as much as possible by using multi-channel vibration information. MMSE evaluates the multivariate complexity of synchronous multi-channel data and is an effective method for measuring complexity and mutual nonlinear dynamic relationships, but its statistical stability is poor. Refined composite multivariate multiscale fuzzy entropy (RCMMFE) was developed to overcome the problems of MMSE and was compared with MSE, multiscale fuzzy entropy, MMSE and multivariate multiscale fuzzy entropy by analyzing simulation data. Finally, a new fault diagnosis method for rolling bearings was proposed, based on RCMMFE for fault feature extraction, the Laplacian score, and particle swarm optimization support vector machine (PSO-SVM) for automatic fault mode identification. The proposed method was compared with existing methods by analyzing experimental data, and the results indicate its effectiveness and superiority.

Entropy doi: 10.3390/e19110584

Authors: Masa Tsuchiya Alessandro Giuliani Kenichi Yoshikawa

Our previous work on the temporal development of the genome-expression profile in single-cell early mouse embryo indicated that reprogramming occurs via a critical transition state, where the critical-regulation pattern of the zygote state disappears. In this report, we unveil the detailed mechanism of how the dynamic interaction of thermodynamic states (critical states) enables the genome system to pass through the critical transition state to achieve genome reprogramming right after the late 2-cell state. Self-organized criticality (SOC) control of overall expression provides a snapshot of self-organization and explains the coexistence of critical states at a certain experimental time point. The time-development of self-organization is dynamically modulated by changes in expression flux between critical states through the cell nucleus milieu, where sequential global perturbations involving activation-inhibition of multiple critical states occur from the middle 2-cell to the 4-cell state. Two cyclic fluxes act as feedback flow and generate critical-state coherent oscillatory dynamics. Dynamic perturbation of these cyclic flows due to vivid activation of the ensemble of low-variance expression (sub-critical state) genes allows the genome system to overcome a transition state during reprogramming. Our findings imply that a universal mechanism of long-term global RNA oscillation underlies autonomous SOC control, and the critical gene ensemble at a critical point (CP) drives genome reprogramming. Identification of the corresponding molecular players will be essential for understanding single-cell reprogramming.

Entropy doi: 10.3390/e19110586

Authors: Hyeji Kim Weihao Gao Sreeram Kannan Sewoong Oh Pramod Viswanath

Discovering a correlation from one variable to another variable is of fundamental scientific and practical interest. While existing correlation measures are suitable for discovering average correlation, they fail to discover hidden or potential correlations. To bridge this gap, (i) we postulate a set of natural axioms that we expect a measure of potential correlation to satisfy; (ii) we show that the rate of information bottleneck, i.e., the hypercontractivity coefficient, satisfies all the proposed axioms; (iii) we provide a novel estimator to estimate the hypercontractivity coefficient from samples; and (iv) we provide numerical experiments demonstrating that this proposed estimator discovers potential correlations among various indicators of WHO datasets, is robust in discovering gene interactions from gene expression time series data, and is statistically more powerful than the estimators for other correlation measures in binary hypothesis testing of canonical examples of potential correlations.

Entropy doi: 10.3390/e19110583

Authors: Marcin Blachnik

Building an accurate prediction model is challenging and requires appropriate model selection. This process is very time consuming, but it can be accelerated with meta-learning: automatic model recommendation by estimating the performance of given prediction models without training them. Meta-learning utilizes metadata extracted from the dataset to effectively estimate the accuracy of the model in question. To achieve that goal, metadata descriptors must be gathered efficiently and must be informative enough to allow precise estimation of prediction accuracy. In this paper, a new type of metadata descriptor is analyzed. These descriptors are based on the compression level obtained from instance selection methods at the data-preprocessing stage. To verify their suitability, two types of experiments on real-world datasets have been conducted. In the first, 11 instance selection methods were examined in order to validate the compression-accuracy relation for three classifiers: k-nearest neighbors (kNN), support vector machine (SVM), and random forest. From this analysis, two methods are recommended (instance-based learning type 2 (IB2) and edited nearest neighbor (ENN)), which are then compared with the state-of-the-art metadata descriptors. The obtained results confirm that the two suggested compression-based meta-features help to predict the accuracy of the base model much more accurately than the state-of-the-art solution.
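
The compression level that serves as a meta-feature is simply the fraction of training instances an instance selection method discards. A sketch for ENN (edited nearest neighbor), using scikit-learn's kNN as the inner classifier (a simplified single-pass variant, assumed rather than taken from the paper):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def enn_compression(X, y, k=3):
    # ENN: mark an instance for removal if its k nearest neighbors
    # (excluding itself) misclassify it; the compression level is the
    # fraction of the training set removed.
    X, y = np.asarray(X), np.asarray(y)
    n = len(X)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        mask = np.arange(n) != i
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])
        if clf.predict(X[i:i + 1])[0] != y[i]:
            keep[i] = False
    return 1.0 - keep.sum() / n  # e.g., 0.35 means 35% of instances removed
```

A dataset on which ENN removes many instances is, loosely, a noisy or heavily overlapped one, which is why the compression level carries signal about how accurately a base model can be expected to perform on it.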

Entropy doi: 10.3390/e19110582

Authors: Stefan Hagmair Martin Bachler Matthias Braunisch Georg Lorenz Christoph Schmaderer Anna-Lena Hasenau Lukas Stülpnagel Axel Bauer Kostantinos Rizas Siegfried Wassertheurer Christopher Mayer

Heart rate variability (HRV) analysis is a non-invasive tool for assessing cardiac health. Entropy measures quantify the chaotic properties of HRV, but they are sensitive to the choice of their required parameters. Previous studies therefore have performed parameter optimization, targeting solely their particular patient cohort. In contrast, this work aimed to challenge entropy measures with recently published parameter sets, without time-consuming optimization, for risk prediction in end-stage renal disease patients. Approximate entropy, sample entropy, fuzzy entropy, fuzzy measure entropy, and corrected approximate entropy were examined. In total, 265 hemodialysis patients from the ISAR (rISk strAtification in end-stage Renal disease) study were analyzed. Throughout a median follow-up time of 43 months, 70 patients died. Fuzzy entropy and corrected approximate entropy (CApEn) provided significant hazard ratios, which remained significant after adjustment for clinical risk factors from the literature if an entropy maximizing threshold parameter was chosen. Revealing results were seen in the subgroup of patients with heart disease (HD) when setting the radius to a multiple of the data’s standard deviation (r = 0.2·σ); all entropies, except CApEn, predicted mortality significantly and remained significant after adjustment. Therefore, these two parameter settings seem to reflect different cardiac properties. This work shows the potential of entropy measures for cardiovascular risk stratification in cohorts the parameters were not optimized for, and it provides additional insights into the parameter choice.

Entropy doi: 10.3390/e19110581

Authors: Demetris Koutsoyiannis

While the modern definition of entropy is genuinely probabilistic, in entropy production the classical thermodynamic definition, as in heat transfer, is typically used. Here we explore the concept of entropy production within stochastics and, particularly, two forms of entropy production in logarithmic time, unconditionally (EPLT) or conditionally on the past and present having been observed (CEPLT). We study the theoretical properties of both forms, in general and in application to a broad set of stochastic processes. A main question investigated, related to model identification and fitting from data, is how to estimate the entropy production from a time series. It turns out that there is a link of the EPLT with the climacogram, and of the CEPLT with two additional tools introduced here, namely the differenced climacogram and the climacospectrum. In particular, EPLT and CEPLT are related to slopes of log-log plots of these tools, with the asymptotic slopes at the tails being most important as they justify the emergence of scaling laws of second-order characteristics of stochastic processes. As a real-world application, we use an extraordinary long time series of turbulent velocity and show how a parsimonious stochastic model can be identified and fitted using the tools developed.
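
The climacogram that the EPLT links to is the variance of the time-averaged process as a function of the averaging scale; its log-log slope at large scales carries the asymptotic scaling behavior discussed above. A minimal empirical estimator (plain sample variance; any bias adjustments used in the paper are omitted):

```python
import numpy as np

def climacogram(x, scales):
    # Variance of the locally averaged series at each averaging scale k.
    x = np.asarray(x, dtype=float)
    gamma = []
    for k in scales:
        n = len(x) // k
        means = x[:n * k].reshape(n, k).mean(axis=1)
        gamma.append(means.var(ddof=1))
    return np.array(gamma)

# Asymptotic log-log slope, estimated over the largest scales:
# slope = np.polyfit(np.log(scales[-5:]),
#                    np.log(climacogram(x, scales)[-5:]), 1)[0]
```

For white noise the slope is -1 (variance of a mean of k independent samples falls as 1/k); persistent, Hurst-type behavior flattens it, which is the kind of asymptotic scaling law the tail slopes in the paper are designed to capture.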

Entropy doi: 10.3390/e19110579

Authors: Sergio Croquer Sébastien Poncet Zine Aidoun

This study presents a thermodynamic model for determining the entrainment ratio and double choke limiting pressure of supersonic ejectors within the context of heat driven refrigeration cycles, with and without droplet injection at the constant area section of the device. Input data include the inlet operating conditions and key geometry parameters (primary throat, mixing section and diffuser outlet diameter), whereas output information includes the ejector entrainment ratio, maximum double choke compression ratio, ejector efficiency, exergy efficiency and exergy destruction index. In single-phase operation, the ejector entrainment ratio and double choke limiting pressure are determined with a mean accuracy of 18% and 2.5%, respectively. In two-phase operation, the choked mass flow rate across convergent-divergent nozzles is estimated with a deviation of 10%. An analysis of the effect of droplet injection confirms the hypothesis that droplet injection reduces by 8% the pressure and Mach number jumps associated with the shock waves occurring at the end of the constant area section. Nonetheless, other factors such as the mixing of the droplets with the main flow are introduced, resulting in an overall reduction of 11% in the ejector efficiency and of 15% in the exergy efficiency.

Entropy doi: 10.3390/e19110580

Authors: Samuel Adesanya Hammed Ogunseye J. Falade R.S. Lebelo

This paper addresses entropy generation in the flow of an electrically-conducting couple stress nanofluid through a vertical porous channel subjected to constant heat flux. By using the Buongiorno model, equations for momentum, energy, and nanofluid concentration are modelled, solved using homotopy analysis and, furthermore, solved numerically. The variations of fluid velocity, temperature, nanofluid concentration, entropy generation, and irreversibility ratio with the significant fluid parameters are investigated, presented graphically, and discussed based on physical laws.

]]>Entropy doi: 10.3390/e19110487

Authors: Wei Ong Alan Tan V. Vengadasalam Cheah Tan Thean Ooi

Voice activity detection (VAD) is a vital process in voice communication systems to avoid unnecessary coding and transmission of noise. Most existing VAD algorithms continue to suffer from high false alarm rates and low sensitivity when the signal-to-noise ratio (SNR) is low, at 0 dB and below. Others are developed to operate in offline mode or are impractical for implementation in actual devices due to high computational complexity. This paper proposes the upper envelope weighted entropy (UEWE) measure as a means to enable high separation of speech and non-speech segments in voice communication. The asymmetric nonlinear filter (ANF) is employed in UEWE to extract the adaptive weight factor that is subsequently used to compensate for the noise effect. In addition, this paper also introduces a dual-rate adaptive nonlinear filter (DANF) with high adaptivity to rapidly time-varying noise for computation of the decision threshold. Performance comparison with standard and recent VADs shows that the proposed algorithm is superior, especially in real-time practical applications.

]]>Entropy doi: 10.3390/e19110537

Authors: Dominique Brun-Battistini Alfredo Sandoval-Villalbazo Ana Garcia-Perciante

Richard C. Tolman analyzed the relation between a temperature gradient and a gravitational field in an equilibrium situation. In 2012, Tolman’s law was generalized to a non-equilibrium situation for a simple dilute relativistic fluid. The result in that scenario, obtained by introducing the gravitational force through the molecular acceleration, couples the heat flux with the metric coefficients and the gradients of the state variables. In the present paper it is shown, by explicitly describing the single particle orbits as geodesics in Boltzmann’s equation, that a gravitational field drives a heat flux in this type of system. The calculation is devoted solely to the gravitational field contribution to this heat flux in which a Newtonian limit to the Schwarzschild metric is assumed. The corresponding transport coefficient, which is obtained within a relaxation approximation, corresponds to the dilute fluid in a weak gravitational field. The effect is negligible in the non-relativistic regime, as evidenced by the direct evaluation of the corresponding limit.

]]>Entropy doi: 10.3390/e19110578

Authors: Wenke Zang Weining Zhang Wenqian Zhang Xiyu Liu

MRI segmentation is critically important for clinical study and diagnosis. Existing methods based on soft clustering have several drawbacks, including low accuracy in the presence of image noise and artifacts, and high computational cost. In this paper, we introduce a new formulation of the MRI segmentation problem as a kernel-based intuitionistic fuzzy C-means (KIFCM) clustering problem and propose a new DNA-based genetic algorithm to obtain the optimal KIFCM clustering. While this algorithm searches the solution space for the optimal model parameters, it also obtains the optimal clustering, and therefore the optimal MRI segmentation. We perform an empirical study comparing our method with six state-of-the-art soft clustering methods using a set of UCI (University of California, Irvine) datasets and a set of synthetic and clinical MRI datasets. The preliminary results show that our method outperforms the other methods in both clustering metrics and computational efficiency.

]]>Entropy doi: 10.3390/e19110577

Authors: Daoming Dai Fengshan Si Jing Wang

This paper constructs a continuous dual-channel closed-loop supply chain (DCLSC) model with delayed decisions under government intervention. The existence conditions for the local stability of the equilibrium point are discussed. We analyze the influence of the delay parameters, the adjustment speed of the wholesale price, the recovery rate of waste products, the direct price, the carbon quota subsidy, and the carbon tax on the stability and complexity of the model by using bifurcation diagrams, entropy diagrams, attractors, and time series diagrams. In addition, the delay feedback control method is adopted to effectively control the unstable or chaotic system. The main conclusions of this paper show that the variables mentioned above must be kept within a reasonable range; otherwise, the model will lose stability or enter chaos. The government can effectively adjust manufacturers' profits through the carbon tax and carbon quota subsidy, and encourage manufacturers to reduce carbon emissions and increase the remanufacturing of waste products.

]]>Entropy doi: 10.3390/e19110576

Authors: Varinder Singh Ramandeep Johal

We study the optimal performance of Feynman’s ratchet and pawl, a paradigmatic model in nonequilibrium physics, using the ecological criterion as the objective function. The analysis is performed by two different methods: (i) a two-parameter optimization over internal energy scales; and (ii) a one-parameter optimization of the estimate for the objective function, after averaging over the prior probability distribution (Jeffreys’ prior) for one of the uncertain internal energy scales. We study the model in both engine and refrigerator modes. We derive expressions for the efficiency/coefficient of performance (COP) at maximum ecological function. These expressions from the two methods are found to agree closely near equilibrium. Furthermore, the expressions obtained by the second method (with estimation) agree with the expressions obtained in finite-time thermodynamic models.

]]>Entropy doi: 10.3390/e19110560

Authors: Jim Kay Robin Ince Benjamin Dering William Phillips

Information processing within neural systems often depends upon selective amplification of relevant signals and suppression of irrelevant signals. This has been shown many times by studies of contextual effects but there is as yet no consensus on how to interpret such studies. Some researchers interpret the effects of context as contributing to the selective receptive field (RF) input about which neurons transmit information. Others interpret context effects as affecting transmission of information about RF input without becoming part of the RF information transmitted. Here we use partial information decomposition (PID) and entropic information decomposition (EID) to study the properties of a form of modulation previously used in neurobiologically plausible neural nets. PID shows that this form of modulation can affect transmission of information in the RF input without the binary output transmitting any information unique to the modulator. EID produces similar decompositions, except that information unique to the modulator and the mechanistic shared component can be negative when modulating and modulated signals are correlated. Synergistic and source shared components were never negative in the conditions studied. Thus, both PID and EID show that modulatory inputs to a local processor can affect the transmission of information from other inputs. Contrary to what was previously assumed, this transmission can occur without the modulatory inputs becoming part of the information transmitted, as shown by the use of PID with the model we consider. Decompositions of psychophysical data from a visual contrast detection task with surrounding context suggest that a similar form of modulation may also occur in real neural systems.

]]>Entropy doi: 10.3390/e19110575

Authors: Ralf Hofmann

Based on a recent numerical simulation of the temporal evolution of a spherically perturbed BPS monopole, SU(2) Yang-Mills thermodynamics, Louis de Broglie’s deliberations on the disparate Lorentz transformations of the frequency of an internal “clock” on one hand and the associated quantum energy on the other hand, and postulating that the electron is represented by a figure-eight shaped, self-intersecting center vortex loop in SU(2) Quantum Yang-Mills theory, we estimate the spatial radius R0 of this self-intersection region in terms of the electron’s Compton wavelength λC. This region, which is immersed into the confining phase, constitutes a blob of deconfining phase of temperature T0 mildly above the critical temperature Tc carrying a frequently perturbed BPS monopole (with a magnetic-electric dual interpretation of its charge w.r.t. U(1)⊂SU(2)). We also establish a quantitative relation between the rest mass m0 of the electron and the SU(2) Yang-Mills scale Λ, which in turn is defined via Tc. Surprisingly, R0 turns out to be comparable to the Bohr radius, while the core size of the monopole matches λC, and the correction to the mass of the electron due to Coulomb energy is about 2%.

]]>Entropy doi: 10.3390/e19110572

Authors: Martin Ibl Jan Čapek

Complexity analysis of dynamic systems provides a better understanding of the internal behaviours that are associated with tension and efficiency, which in socio-technical systems may lead to innovation. One popular approach to the assessment of complexity is associated with self-similarity. The dynamic component of a dynamic system represents the relationships and interactions among its inner elements (and its surroundings) and fully describes its behaviour. The approach used in this work addresses complexity analysis in terms of system behaviour, i.e., the so-called behavioural analysis of complexity. The self-similarity of a system (structural or behavioural) can be determined, for example, using fractal geometry, whose toolbox provides a number of methods for the measurement of the so-called fractal dimension. Other instruments for measuring the self-similarity of a system include the Hurst exponent and the framework of complex system theory in general. The approach introduced in this work defines the complexity analysis of a socio-technical system under tension. The proposed procedure consists of modelling the key dynamic components of a discrete event dynamic system by any definition of Petri nets. From the stationary probabilities, one can then decide whether the system is self-similar using the abovementioned tools. In addition, the proposed approach allows for finding the critical values (phase transitions) of the analysed systems.

]]>Entropy doi: 10.3390/e19110574

Authors: Rongxing Zhou Zhengwei Pan Juliang Jin Chunhui Li Shaowei Ning

As a new development in evaluating regional water resources carrying capacity, forewarning regional water resources of their carrying capacities is an important adjustment and control measure for regional water security management. Up to now, most research on this issue has been qualitative, with a lack of quantitative study. For this reason, an index system for forewarning regional water resources of their carrying capacities, together with grade standards, has been established for Anhui Province, China, in this paper. Subjective weights of the forewarning indices can be calculated using a fuzzy analytic hierarchy process based on an accelerating genetic algorithm, while objective weights of the forewarning indices can be calculated using a projection pursuit method based on an accelerating genetic algorithm. These two kinds of weights can be combined into combination weights of the forewarning indices by using the minimum relative information entropy principle. Furthermore, a forewarning model of regional water resources carrying capacity based on entropy combination weights is put forward. The model can fully integrate subjective and objective information in the forewarning process. The results show that the calculation results of the model are reasonable and that the method has high adaptability. Therefore, this model is worth studying and popularizing.
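The combination step admits a simple closed form: minimizing the summed relative entropies of the combined weights to the subjective and objective weights, subject to normalization, gives weights proportional to the geometric mean of the two. The sketch below assumes equal treatment of the two weight sets; the numbers are made up.

```python
import numpy as np

def combine_weights(u, v):
    """Combination weights minimizing the summed relative entropies to u and v:
    w_j proportional to sqrt(u_j * v_j), normalized to sum to 1."""
    w = np.sqrt(np.asarray(u) * np.asarray(v))
    return w / w.sum()

u = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # hypothetical subjective weights
v = np.array([0.22, 0.28, 0.18, 0.12, 0.20])  # hypothetical objective weights
print(combine_weights(u, v))                  # combined forewarning-index weights
```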

]]>Entropy doi: 10.3390/e19110573

Authors: Victor Bakhtin Andrei Lebedev

This article presents a new definition of t-entropy that makes it more explicit and simplifies the process of its calculation.

]]>Entropy doi: 10.3390/e19110571

Authors: Chloe Gao David Limmer

We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green–Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.

]]>Entropy doi: 10.3390/e19100568

Authors: Bo Shi Yudong Zhang Chaochao Yuan Shuihua Wang Peng Li

Entropy measures have been extensively used to assess heart rate variability (HRV), a noninvasive marker of cardiovascular autonomic regulation. It is yet to be elucidated whether those entropy measures can sensitively respond to changes of autonomic balance and whether the responses, if there are any, are consistent across different entropy measures. Sixteen healthy subjects were enrolled in this study. Each subject undertook two 5-min ECG measurements, one in a resting seated position and another while walking on a treadmill at a regular speed of 5 km/h. For each subject, the two measurements were conducted in a randomized order and a 30-min rest was required between them. HRV time series were derived and were analyzed by eight entropy measures, i.e., approximate entropy (ApEn), corrected ApEn (cApEn), sample entropy (SampEn), fuzzy entropy without removing local trend (FuzzyEn-g), fuzzy entropy with local trend removal (FuzzyEn-l), permutation entropy (PermEn), conditional entropy (CE), and distribution entropy (DistEn). Compared to the resting seated position, regular walking led to significantly reduced CE and DistEn (both p ≤ 0.006; Cohen’s d = 0.9 for CE, d = 1.7 for DistEn), and increased PermEn (p < 0.0001; d = 1.9), while all these changes disappeared after performing a linear detrend or a wavelet detrend (<~0.03 Hz) on HRV. In addition, cApEn, SampEn, FuzzyEn-g, and FuzzyEn-l showed significant decreases during regular walking after linear detrending (all p < 0.006; 0.8 < d < 1), while a significantly increased ApEn (p < 0.0001; d = 1.9) and a significantly reduced cApEn (p = 0.0006; d = 0.8) were observed after wavelet detrending. To conclude, multiple entropy analyses should be performed to assess HRV in order to obtain objective results, and caution should be paid when drawing conclusions based on observations from a single measure. Moreover, results from different studies will not be comparable unless it is clearly stated whether the data have been detrended and the methods used for detrending have been specified.

]]>Entropy doi: 10.3390/e19100569

Authors: Andrea Murari Teddy Craciunescu Emmanuele Peluso Michela Gelfusa JET Contributors

Modern experiments in Magnetic Confinement Nuclear Fusion can produce Gigabytes of data, mainly in the form of time series. The acquired signals, composing massive databases, are typically affected by significant levels of noise. The interpretation of the time series can therefore become quite involved, particularly when tenuous causal relations have to be investigated. In recent years, synchronization experiments, aimed at controlling potentially dangerous instabilities, have become a subject of intensive research. Their interpretation requires quite delicate causality analysis. In this paper, the approach of Information Geometry is applied to the problem of assessing the effectiveness of synchronization experiments on JET (Joint European Torus). In particular, the use of the Geodesic Distance on Gaussian Manifolds is shown to improve the results of advanced techniques such as Recurrence Plots and Complex Networks when the noise level is not negligible. In cases affected by particularly high levels of noise, which compromise the traditional treatments, the use of the Geodesic Distance on Gaussian Manifolds allows deriving quite encouraging results. In addition to consolidating conclusions that were previously quite uncertain, it has been demonstrated that the proposed approach permits the successful analysis of signals from discharges which were otherwise unusable, thereby salvaging the interpretation of those experiments.

]]>Entropy doi: 10.3390/e19100567

Authors: Jose Blázquez-Salcedo

In the large coupling regime of the 5-dimensional Einstein–Maxwell–Chern–Simons theory, charged and rotating cohomogeneity-1 black holes form sequences of extremal and non-extremal radially excited configurations. These asymptotically global Anti-de Sitter (AdS5) black holes form a discrete set of solutions, characterised by the vanishing of the total angular momenta, or the horizon angular velocity. However, the solutions are not static. In this paper, we study the branch structure that contains these excited states, and its relation with the static Reissner–Nordström-AdS black hole. Thermodynamic properties of these solutions are considered, revealing that the branches with lower excitation number can become thermodynamically unstable beyond certain critical solutions that depend on the free parameters of the configuration.

]]>Entropy doi: 10.3390/e19100566

Authors: Kalyan Annamalai Arnab Nanda

The energy for sustaining life is released through the oxidation of glucose, fats, and proteins. A part of the energy released within each cell is stored as the chemical energy of Adenosine Tri-Phosphate molecules, which is essential for performing life-sustaining functions, while the remainder is released as heat in order to maintain the isothermal state of the body. Earlier literature introduced availability concepts from thermodynamics, related the specific irreversibility and entropy generation rates to the metabolic efficiency and energy release rate of organ k, computed the specific entropy generation rate of the whole body at any given age as a sum of the entropy generation within the four vital organs Brain, Heart, Kidney, Liver (BHKL), with the 5th organ being the rest of the organs (R5), and estimated the life span using an upper limit on the lifetime entropy generated per unit mass of body, σM,life. The organ entropy stress, expressed in terms of the lifetime specific entropy generated per unit mass of body organs (kJ/(K kg of organ k)), was used to rank organs; heart ranked highest while liver ranked lowest. The present work includes the effects of (1) two additional organs: adipose tissue (AT) and skeletal muscles (SM), which are of importance to athletes; (2) the proportions of nutrients oxidized, which affect blood temperature and metabolic efficiencies; (3) the conversion of the entropy stress from the organ/cellular level to the mitochondrial level; and (4) the use of these parameters as metabolism-based biomarkers for quantifying the biological aging process in reaching the limit of σM,life. Based on the 7-organ model and Elia constants for organ metabolic rates for a male of 84 kg steady mass, and using basic and derived allometric constants of organs, the lifetime energy expenditure is estimated to be 2725 MJ/kg body mass, while the lifetime entropy generated is 6050 kJ/(K kg body mass), with contributions of 190; 1835.0; 610; 290; 700; 1470 and 95 kJ/K contributed by AT-BHKL-SM-R7 to 1 kg body mass over the lifetime. The corresponding lifetime entropy stresses of the organs are: 1.2; 60.5; 110.5; 110.5; 50.5; 3.5; 3.0 MJ/K per kg organ mass. Thus, among the vital organs, the highest stress is for the heart and kidney and the lowest stress is for the liver. The 5-organ model (BHKL and R5) also shows a similar ranking. Based on mitochondrial volume and the 5-organ model, the entropy stresses of the organs expressed in kJ/(K cm3 of mitochondrial volume) are: 12,670; 5465; 2855; and 4730 for BHKL, indicating the brain to be highly stressed and the liver to be least stressed. Thus, the organ entropy stress ranking based on a unit volume of mitochondria within an organ (kJ/(K cm3 of Mito of organ k)) differs from the entropy stress based on a unit mass of organ. Based on metabolic loading, the brains of athletes, already under extreme mitochondrial stress and facing reduced metabolic efficiency under concussion, are subjected to even greater stress. In the absence of non-intrusive measurements for estimating organ-based metabolic rates, which could serve as metabolism-based biomarkers for the biological aging (BA) of the whole body, alternate methods are suggested for estimating the biological aging rate.

]]>Entropy doi: 10.3390/e19100565

Authors: Ishay Wohl Naomi Zurgil Yaron Hakuk Maria Sobolev Mordechai Deutsch

A simple, label-free cytometry technique is introduced. It is based on the analysis of the fluctuation of image Gray Level Information Entropy (GLIE), which is shown to reflect intracellular biophysical properties like generalized entropy. In this study, the analytical relations between cellular thermodynamic generalized entropy and diffusivity, and GLIE fluctuation measures, are explored for the first time. The standard deviation (SD) of GLIE is shown by experiments, simulation and theoretical analysis to be indifferent to microscope system “noise”. Then, the ability of GLIE fluctuation measures to reflect basic cellular entropy conditions of early death and malignancy is demonstrated in a cell model of human, healthy-donor lymphocytes, malignant Jurkat cells, as well as dead lymphocytes and Jurkat cells. Utilization of GLIE-based fluctuation measures seems to have the advantage of displaying biophysical characterization of the tested cells, like diffusivity and entropy, in a novel, unique, simple and illustrative way.

]]>Entropy doi: 10.3390/e19100563

Authors: Young-Sik Kim Hosung Park Jong-Seon No

Fractional repetition (FR) codes are a class of distributed storage codes that replicate and distribute information data over several nodes for easy repair, as well as efficient reconstruction. In this paper, we propose three new constructions of FR codes based on relative difference sets (RDSs) with λ = 1. Specifically, we propose new (q²−1, q, q) FR codes using cyclic RDSs with parameters (q+1, q−1, q, 1) constructed from q-ary m-sequences of period q²−1 for a prime power q; (p², p, p) FR codes using non-cyclic RDSs with parameters (p, p, p, 1) for an odd prime p or p = 4; and (4l, 2l, 2l) FR codes using non-cyclic RDSs with parameters (2l, 2l, 2l, 1) constructed from the Galois ring for a positive integer l. They are differentiated from the existing FR codes with respect to the constructable code parameters. It turns out that the proposed FR codes are (near) optimal for some parameters in terms of the FR capacity bound. In particular, the (8, 3, 3) and (9, 3, 3) FR codes are optimal, that is, they meet the FR capacity bound for all k. To support various code parameters, we modify the proposed (q²−1, q, q) FR codes using decimation by a factor of the code length q²−1, which also yields new good FR codes.

]]>Entropy doi: 10.3390/e19100562

Authors: Robert Jack Marcus Kaiser Johannes Zimmer

We describe some general results that constrain the dynamical fluctuations that can occur in non-equilibrium steady states, with a focus on molecular dynamics. That is, we consider Hamiltonian systems, coupled to external heat baths, and driven out of equilibrium by non-conservative forces. We focus on the probabilities of rare events (large deviations). First, we discuss a PT (parity-time) symmetry that appears in ensembles of trajectories where a current is constrained to have a large (non-typical) value. We analyse the heat flow in such ensembles, and compare it with non-equilibrium steady states. Second, we consider pathwise large deviations that are defined by considering many copies of a system. We show how the probability currents in such systems can be decomposed into orthogonal contributions that are related to convergence to equilibrium and to dissipation. We discuss the implications of these results for modelling non-equilibrium steady states.

]]>Entropy doi: 10.3390/e19100564

Authors: Michael Evans Irwin Guttman Peiying Li

Methods are developed for eliciting a Dirichlet prior based upon stating bounds on the individual probabilities that hold with high prior probability. This approach to selecting a prior is applied to a contingency table problem where it is demonstrated how to assess the prior with respect to the bias it induces as well as how to check for prior-data conflict. It is shown that the assessment of a hypothesis via relative belief can easily take into account what it means for the falsity of the hypothesis to correspond to a difference of practical importance and provide evidence in favor of a hypothesis.

]]>Entropy doi: 10.3390/e19100561

Authors: Robert Skeel Youhan Fang

Markov chain Monte Carlo sampling propagators, including numerical integrators for stochastic dynamics, are central to the calculation of thermodynamic quantities and determination of structure for molecular systems. Efficiency is paramount, and to a great extent, this is determined by the integrated autocorrelation time (IAcT). This quantity varies depending on the observable that is being estimated. It is suggested that it is the maximum of the IAcT over all observables that is the relevant metric. Reviewed here is a method for estimating this quantity. For reversible propagators (which are those that satisfy detailed balance), the maximum IAcT is determined by the spectral gap in the forward transfer operator, but for irreversible propagators, the maximum IAcT can be far less than or greater than what might be inferred from the spectral gap. This is consistent with recent theoretical results (not to mention past practical experience) suggesting that irreversible propagators generally perform better, if not much better, than reversible ones. Typical irreversible propagators have a parameter controlling the mix of ballistic and diffusive movement. To gain insight into the effect of the damping parameter for Langevin dynamics, its optimal value is obtained here for a multidimensional quadratic potential energy function.
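A minimal sketch of estimating the IAcT of a single observable from a sampled trajectory is given below; it uses the standard truncated-sum estimator τ = 1 + 2 Σ_k ρ_k rather than the specific method reviewed in the paper, and the AR(1) test chain is a textbook check with a known exact answer.

```python
import numpy as np

def iact(obs):
    """Estimate tau = 1 + 2 * sum_k rho_k, truncating the sum at the first
    negative autocorrelation (a crude but common windowing rule)."""
    x = np.asarray(obs, dtype=float)
    x -= x.mean()
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                      # FFT-based autocovariance
    acov = np.fft.irfft(f * np.conj(f))[:n] / n
    rho = acov / acov[0]
    neg = np.where(rho < 0)[0]
    cutoff = neg[0] if len(neg) else n // 2
    return 1.0 + 2.0 * rho[1:cutoff].sum()

# Check on an AR(1) chain, whose exact IAcT is (1 + phi) / (1 - phi).
rng = np.random.default_rng(1)
phi, x = 0.9, np.zeros(50_000)
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.normal()
print(f"estimated IAcT = {iact(x):.1f} (exact: {(1 + phi) / (1 - phi):.1f})")
```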

]]>Entropy doi: 10.3390/e19100559

Authors: Kevin Knuth Ben Placek Daniel Angerhausen Jennifer Carter Bryan D’Angelo Anthony Gai Bertrand Carado

The fields of astronomy and astrophysics are currently engaged in an unprecedented era of discovery as recent missions have revealed thousands of exoplanets orbiting other stars. While the Kepler Space Telescope mission has enabled most of these exoplanets to be detected by identifying transiting events, exoplanets often exhibit additional photometric effects that can be used to improve the characterization of exoplanets. The EXONEST Exoplanetary Explorer is a Bayesian exoplanet inference engine based on nested sampling and originally designed to analyze archived Kepler Space Telescope and CoRoT (Convection Rotation et Transits planétaires) exoplanet mission data. We discuss the EXONEST software package and describe how it accommodates plug-and-play models of exoplanet-associated photometric effects for the purpose of exoplanet detection, characterization and scientific hypothesis testing. The current suite of models allows for both circular and eccentric orbits in conjunction with photometric effects, such as the primary transit and secondary eclipse, reflected light, thermal emissions, ellipsoidal variations, Doppler beaming and superrotation. We discuss our new efforts to expand the capabilities of the software to include more subtle photometric effects involving reflected and refracted light. We discuss the EXONEST inference engine design and introduce our plans to port the current MATLAB-based EXONEST software package over to the next generation Exoplanetary Explorer, which will be a Python-based open source project with the capability to employ third-party plug-and-play models of exoplanet-related photometric effects.

]]>Entropy doi: 10.3390/e19100546

Authors: Dongyun Bai Peng Huang Hongxin Ma Tao Wang Guihua Zeng

We show that the successful use of a noiseless linear amplifier (NLA) can help increase the maximum transmission distance and the tolerable excess noise of plug-and-play dual-phase-modulated continuous-variable quantum key distribution. In particular, an equivalent entanglement-based scheme is proposed to analyze the security, and the secure bound is derived in the presence of a Gaussian noisy and lossy channel. The analysis shows that the performance of the NLA-based protocol can be further improved by adjusting the effective parameters.

]]>Entropy doi: 10.3390/e19100558

Authors: Yves Lecarpentier Victor Claes Xénophon Krokidis Jean-Louis Hébert Oumar Timbely François-Xavier Blanc Francine Michel Alexandre Vallée

A. Huxley’s equations were used to determine the mechanical properties of muscle myosin II (MII) at the molecular level, as well as the probability of the occurrence of the different stages in the actin–myosin cycle. It was then possible to use the formalism of statistical mechanics with the grand canonical ensemble to calculate numerous thermodynamic parameters such as entropy, internal energy, affinity, thermodynamic flow, thermodynamic force, and entropy production rate. This allows us to compare the thermodynamic parameters of a non-muscle contractile system, such as the normal human placenta, with those of different striated skeletal muscles (soleus and extensor digitorum longus) as well as the heart muscle and smooth muscles (trachea and uterus) in the rat. In the human placental tissues, it was observed that the kinetics of the actin–myosin crossbridges were considerably slow compared with those of smooth and striated muscular systems. The entropy production rate was also particularly low in the human placental tissues, as compared with that observed in smooth and striated muscular systems. This is partly due to the low thermodynamic flow found in the human placental tissues. However, the unitary force of non-muscle myosin (NMII) generated by each crossbridge cycle in the myofibroblasts of the human placental tissues was similar in magnitude to that of MII in the myocytes of both smooth and striated muscle cells. Statistical mechanics represents a powerful tool for studying the thermodynamics of all contractile muscle and non-muscle systems.

]]>Entropy doi: 10.3390/e19100552

Authors: Yi Tang Han Cui Qi Wang

Frequency prediction after a disturbance has received increasing research attention given its substantial value in providing a decision-making foundation for power system emergency control. With the advancing development of machine learning, analyzing power systems with machine-learning methods has become completely different from traditional approaches. In this paper, an ensemble algorithm using cross-entropy as a combination strategy is presented to address the trade-off between prediction accuracy and calculation speed. The prediction difficulty caused by inadequate numbers of severe disturbance samples is also overcome by the ensemble model. In the proposed ensemble algorithm, base learners are selected following the principle of diversity, which guarantees the ensemble algorithm’s accuracy. Cross-entropy is applied to evaluate the fitting performance of the base learners and to set the weight coefficients in the ensemble algorithm. Subsequently, an online prediction model based on the algorithm is established that integrates training, prediction and updating. In the Western System Coordinating Council 9-bus (WSCC 9) system and the Institute of Electrical and Electronics Engineers 39-bus (IEEE 39) system, the algorithm is shown to significantly improve the prediction accuracy in both sample-rich and sample-poor situations, verifying the effectiveness and superiority of the proposed ensemble algorithm.
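As a rough reading of the combination strategy, the sketch below weights each base learner by the inverse of its cross-entropy on held-out data, so that better-fitting learners dominate the combined prediction. The binary-outcome setting and all numbers are illustrative; the paper’s exact weighting scheme may differ.

```python
import numpy as np

def cross_entropy(y_true, p_pred, eps=1e-12):
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def ensemble_weights(y_val, base_preds):
    ce = np.array([cross_entropy(y_val, p) for p in base_preds])
    w = 1.0 / ce                    # lower cross-entropy -> higher weight
    return w / w.sum()

# Hypothetical validation labels and three base learners' predicted probabilities.
y = np.array([1, 0, 1, 1, 0, 1])
preds = [np.array([0.9, 0.2, 0.8, 0.7, 0.1, 0.9]),
         np.array([0.6, 0.4, 0.5, 0.6, 0.5, 0.6]),
         np.array([0.8, 0.3, 0.7, 0.9, 0.2, 0.7])]
w = ensemble_weights(y, preds)
combined = sum(wi * p for wi, p in zip(w, preds))
print(w, combined)
```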

]]>Entropy doi: 10.3390/e19100553

Authors: Hui-Chung Yeh Yen-Chang Chen Che-Hao Chang Cheng-Hsuan Ho Chiang Wei

In this study, a method combining radar and entropy was proposed to design a rainfall network. Owing to the shortage of rain gauges in mountain areas, weather radars are used to measure rainfall over catchments. The major advantage of radar is that it can observe rainfall over a wide area in a short time. However, the rainfall data obtained by radar do not necessarily correspond to those observed by ground-based rain gauges. The in-situ rainfall data from telemetering rain gauges were used to calibrate the radar system. Therefore, the rainfall intensity, as well as its distribution over the catchment, can be obtained using radar. Once the rainfall data of past years at the desired locations over the catchment were generated, entropy based on probability was applied to optimize the rainfall network. This method is applicable in remote and mountainous areas. Its most important utility is to construct an optimal rainfall network in an ungauged catchment. The design of a rainfall network in the catchment of the Feitsui Reservoir was used to illustrate the various steps, as well as the reliability, of the method.

]]>Entropy doi: 10.3390/e19100554

Authors: Jesús Gutiérrez-Gutiérrez Marta Zárraga-Rodríguez Xabier Insausti

In this paper, we present upper bounds for the rate distortion function (RDF) of finite-length data blocks of Gaussian wide sense stationary (WSS) sources and we propose coding strategies to achieve such bounds. In order to obtain those bounds, we previously derive new results on the discrete Fourier transform (DFT) of WSS processes.

]]>Entropy doi: 10.3390/e19100551

Authors: Orlando Luongo

Dark energy’s thermodynamics is here revised, giving particular attention to the role played by specific heats and entropy in a flat Friedmann-Robertson-Walker universe. Under the hypothesis of adiabatic heat exchanges, we rewrite the specific heats through cosmographic, model-independent quantities and we trace their evolution in terms of z. We demonstrate that dark energy may be modeled as a perfect gas only if the Mayer relation is preserved. In particular, we find that the Mayer relation holds if j − q > 1/2. This result turns out to be general, so that, even at the transition time, the jerk parameter j cannot violate the condition j_tr > 1/2. This outcome rules out those models which predict the opposite, whereas it turns out to be compatible with the concordance paradigm. We thus compare our bounds with the ΛCDM model, highlighting that a constant dark energy term seems to be compatible with the so-obtained specific heat thermodynamics after a precise redshift domain. In our treatment, we show the degeneracy between unified dark energy models with zero sound speed and the concordance paradigm. Under this scheme, we suggest that the cosmological constant may be viewed as an effective approach to dark energy at either small or high redshift domains. Last but not least, we discuss how to reconstruct dark energy’s entropy from specific heats, and we finally express both entropy and specific heats in terms of the luminosity distance d_L, in order to fix constraints over them through cosmic data.

]]>Entropy doi: 10.3390/e19100557

Authors: Jian Yu Junyi Cao Wei-Hsin Liao Yangquan Chen Jing Lin Rong Liu

The complexity quantification of human gait time series has received considerable interest for wearable healthcare. Symbolic entropy is one of the most prevalent algorithms used to measure the complexity of a time series, but it fails to account for the multiple time scales and multi-channel statistical dependence inherent in such time series. To overcome this problem, multivariate multiscale symbolic entropy is proposed in this paper to distinguish the complexity of human gait signals in health and disease. The embedding dimension, time delay and quantization levels are appropriately designed to construct the similarity of signals for calculating the complexity of human gait. The proposed method can accurately distinguish healthy and pathologic groups from realistic multivariate human gait time series on multiple scales. Its simplicity, robustness, and fast computation strongly support wearable healthcare applications.
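A simplified single-channel sketch of the multiscale symbolic entropy idea follows: coarse-grain the series at scale τ, quantize into q levels, form words of length m, and take the Shannon entropy of the word distribution. The multivariate extension and the paper’s specific design choices are omitted, and the gait data are synthetic.

```python
import numpy as np
from collections import Counter

def symbolic_entropy(x, tau=2, q=4, m=3):
    x = np.asarray(x, dtype=float)
    # 1) Coarse-grain: average non-overlapping windows of length tau.
    n = len(x) // tau
    y = x[:n * tau].reshape(n, tau).mean(axis=1)
    # 2) Quantize into q equally spaced levels.
    levels = (q * (y - y.min()) / (np.ptp(y) + 1e-12)).astype(int)
    sym = np.minimum(levels, q - 1)
    # 3) Shannon entropy of the distribution of words of length m.
    words = Counter(tuple(sym[i:i + m]) for i in range(len(sym) - m + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

gait = np.random.default_rng(2).normal(1.1, 0.05, 2000)  # synthetic stride times
print(f"symbolic entropy at scale 2: {symbolic_entropy(gait):.3f} bits")
```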

]]>Entropy doi: 10.3390/e19100556

Authors: Metod Saniga

It is demonstrated that the magic three-qubit Veldkamp line occurs naturally within the Veldkamp space of a combinatorial Grassmannian of type G2(7), V(G2(7)). The lines of the ambient symplectic polar space are those lines of V(G2(7)) whose cores feature an odd number of points of G2(7). After introducing the basic properties of three different types of points and seven distinct types of lines of V(G2(7)), we explicitly show the combinatorial Grassmannian composition of the magic Veldkamp line; we first give representatives of points and lines of its core generalized quadrangle GQ(2,2), and then additional points and lines of a specific elliptic quadric Q-(5,2), a hyperbolic quadric Q+(5,2), and a quadratic cone Q^(4,2) that are centered on the GQ(2,2). In particular, each point of Q+(5,2) is represented by a Pasch configuration and its complementary line, the (Schläfli) double-six of points in Q-(5,2) comprises six Cayley–Salmon configurations and six Desargues configurations with their complementary points, and the remaining Cayley–Salmon configuration stands for the vertex of Q^(4,2).

]]>Entropy doi: 10.3390/e19100555

Authors: Andrew Gelman Daniel Simpson Michael Betancourt

A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.

]]>Entropy doi: 10.3390/e19100550

Authors: Chang Hsu Sung-Yang Wei Han-Ping Huang Long Hsu Sien Chi Chung-Kang Peng

Healthy systems exhibit complex dynamics in the changing information embedded in physiologic signals on multiple time scales, which can be quantified by employing multiscale entropy (MSE) analysis. Here, we propose a measure of complexity, called entropy of entropy (EoE) analysis. The analysis combines the features of MSE and an alternate measure of information, called superinformation, useful for DNA sequences. In this work, we apply the hybrid analysis to the cardiac interbeat interval time series. We find that the EoE value is significantly higher for the healthy than for the pathologic groups. In particular, a short time series of 70 heartbeats is sufficient for EoE analysis with an accuracy of 81%, while a longer series of 500 beats yields an accuracy of 90%. In addition, the EoE versus Shannon entropy plot of heart rate time series exhibits an inverted U relationship, with the maximal EoE value appearing in the middle between extreme order and disorder.
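A rough sketch of the two-level construction as described is given below: compute the Shannon entropy of each short window of the series, then compute the Shannon entropy of those window entropies. The window length and bin counts are illustrative choices, not the paper’s calibrated values.

```python
import numpy as np

def shannon(values, bins, rng):
    hist, _ = np.histogram(values, bins=bins, range=rng)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log(p))

def entropy_of_entropy(x, win=5, s1=10, s2=10):
    x = np.asarray(x, dtype=float)
    full_range = (x.min(), x.max())
    n = len(x) // win
    # Level 1: Shannon entropy inside each non-overlapping window of `win` beats.
    e = np.array([shannon(x[i * win:(i + 1) * win], s1, full_range)
                  for i in range(n)])
    # Level 2: Shannon entropy of the distribution of window entropies.
    return shannon(e, s2, (e.min(), e.max()))

rr = np.random.default_rng(3).normal(0.85, 0.06, 500)  # synthetic RR intervals
print(f"EoE = {entropy_of_entropy(rr):.3f}")
```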

]]>Entropy doi: 10.3390/e19100549

Authors: Yuanpu Xia Ziming Xiong Xin Dong Hao Lu

The impact of uncertainty on risk assessment and decision-making is increasingly being prioritized, especially for large geotechnical projects such as tunnels, where uncertainty is often the main source of risk. Epistemic uncertainty, which can be reduced, is the focus of attention. In this study, the existing entropy-risk decision model is first discussed and analyzed, and its deficiencies are improved upon and overcome. This study then addresses the fact that existing studies only consider parameter uncertainty and ignore the influence of model uncertainty. Here, the focus is on model uncertainty and on differences in risk consciousness among decision-makers, and utility theory is introduced into the model. Finally, a risk decision model based on sensitivity analysis and tolerance cost is proposed, which can improve decision-making efficiency. This research can provide guidance and reference for the evaluation of, and decision-making on, complex systems engineering problems, and it indicates a direction for further research on risk assessment and decision-making issues.

]]>Entropy doi: 10.3390/e19100548

Authors: Nadia Mammone Simona De Salvo Cosimo Ieracitano Silvia Marino Angela Marra Francesco Corallo Francesco Morabito

In the study of neurological disorders, Electroencephalographic (EEG) signal processing can provide valuable information because abnormalities in the interaction between neuron circuits may be reflected in macroscopic abnormalities of the electrical potentials detected on the scalp. A Mild Cognitive Impairment (MCI) condition, when caused by a disorder degenerating into dementia, affects brain connectivity. Motivated by the promising results achieved through the recently developed descriptor of coupling strength between EEG signals, the Permutation Disalignment Index (PDI), the present paper introduces a novel PDI-based complex network model to evaluate the longitudinal variations in brain-electrical connectivity. A group of 33 amnestic MCI subjects was enrolled and followed up over four months. The results were compared to MoCA (Montreal Cognitive Assessment) tests, which score the cognitive abilities of the patient. A significant negative correlation could be observed between MoCA variation and the variation of the characteristic path length λ (r = −0.56, p = 0.0006), whereas a significant positive correlation could be observed between MoCA variation and the variation of the clustering coefficient (CC; r = 0.58, p = 0.0004), global efficiency (GE; r = 0.57, p = 0.0005) and small worldness (SW; r = 0.57, p = 0.0005). Cognitive decline thus seems to reflect an underlying cortical “disconnection” phenomenon: worsened subjects indeed showed an increased λ and decreased CC, GE and SW. The PDI-based connectivity model proposed in the present work could be a novel tool for the objective quantification of longitudinal brain-electrical connectivity changes in MCI subjects.
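For readers who want to reproduce the graph metrics, the sketch below computes λ, CC, GE, and SW with networkx from a thresholded coupling matrix. The PDI matrix itself is replaced by random placeholder values, and the threshold and the single random small-world reference are illustrative choices, not the paper’s pipeline.

```python
import networkx as nx
import numpy as np

def largest_cc(G):
    return G.subgraph(max(nx.connected_components(G), key=len)).copy()

rng = np.random.default_rng(4)
pdi = rng.uniform(0, 1, (19, 19))            # placeholder 19-channel coupling
pdi = (pdi + pdi.T) / 2                      # symmetrize
adj = (pdi > 0.6).astype(int)                # threshold into a binary graph
np.fill_diagonal(adj, 0)

G = largest_cc(nx.from_numpy_array(adj))
lam = nx.average_shortest_path_length(G)     # characteristic path length
cc = nx.average_clustering(G)                # clustering coefficient
ge = nx.global_efficiency(G)                 # global efficiency

# Small-worldness against a density-matched random reference (one realization).
R = largest_cc(nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0))
sw = (cc / nx.average_clustering(R)) / (lam / nx.average_shortest_path_length(R))
print(f"lambda={lam:.2f}, CC={cc:.2f}, GE={ge:.2f}, SW={sw:.2f}")
```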

]]>Entropy doi: 10.3390/e19100544

Authors: Shamik Gupta Stefano Ruffo

We investigate the stationary and dynamic properties of the celebrated Nosé–Hoover dynamics of many-body interacting Hamiltonian systems, with an emphasis on the effect of inter-particle interactions. To this end, we consider a model system with both short- and long-range interactions. The Nosé–Hoover dynamics aim to generate the canonical equilibrium distribution of a system at a desired temperature by employing a set of time-reversible, deterministic equations of motion. A signature of canonical equilibrium is a single-particle momentum distribution that is Gaussian. We find that the equilibrium properties of the system within the Nosé–Hoover dynamics coincide with those within the canonical ensemble. Moreover, starting from out-of-equilibrium initial conditions, the average kinetic energy of the system relaxes to its target value over a size-independent timescale. However, quite surprisingly, our results indicate that under the same conditions and with only long-range interactions present in the system, the momentum distribution relaxes to its Gaussian form in equilibrium over a timescale that diverges with the system size. On adding short-range interactions, the relaxation is found to occur over a timescale that has a much weaker dependence on system size. This system-size dependence of the timescale vanishes when only short-range interactions are present in the system. An implication of such an ultra-slow relaxation when only long-range interactions are present in the system is that macroscopic observables other than the average kinetic energy, when estimated in the Nosé–Hoover dynamics, may take an unusually long time to relax to their canonical equilibrium values. Our work underlines the crucial role that interactions play in deciding the equivalence between Nosé–Hoover and canonical equilibrium.

]]>Entropy doi: 10.3390/e19100545

Authors: Chung Chan

The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components in order to generate a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion by offering a more structured achieving scheme and some simpler conjectures to prove.

]]>Entropy doi: 10.3390/e19100543

Authors: Alejandro Chinea

In recent years, the interpretation of our observations of animal behaviour, in particular that of cetaceans, has captured a substantial amount of attention in the scientific community. The traditional view that supports a special intellectual status for this mammalian order has fallen under significant scrutiny, in large part due to problems of how to define and test the cognitive performance of animals. This paper presents evidence supporting complex cognition in cetaceans obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally and serves as the foundation from which intelligence can emerge. This theoretical framework explaining animal intelligence in neural computational terms is supported using a new mathematical model. Two pathways leading to higher levels of intelligence in animals are identified, each reflecting a trade-off either in energetic requirements or the number of neurons used. A description of the evolutionary pathway that led to increased cognitive capacities in cetacean brains is detailed and evidence supporting complex cognition in cetaceans is presented. This paper also provides an interpretation of the adaptive function of cetacean neuronal traits.

]]>Entropy doi: 10.3390/e19100540

Authors: Juan Diaz Diego Mateos Carina Boyallian

In clinical electrophysiological practice, reading and comparing electroencephalographic (EEG) recordings is sometimes insufficient and takes too much time. Tools from information theory and nonlinear systems theory, such as entropy and complexity, have been presented as an alternative to address this problem. In this work, we introduce a novel method: the permutation Lempel–Ziv complexity vs. permutation entropy map. We apply this method to the EEGs of two patients with specific diagnosed pathologies during the respective follow-up of pharmacological changes, in order to detect alterations that are not evident with the usual inspection method. The method allows comparison between different states of the patients’ treatment, and with a healthy control group, giving global information about the signal and supplementing the traditional method of visual inspection of the EEG.
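A sketch of the two coordinates of the map is given below: normalized permutation entropy and a Lempel–Ziv complexity computed on the same ordinal symbol sequence. For brevity, a simple LZ78-style phrase count stands in for the permutation Lempel–Ziv complexity used in the paper, and the signal is a random placeholder.

```python
import numpy as np
from collections import Counter
from math import log, factorial

def ordinal_symbols(x, d=3):
    """Map each length-d window to its ordinal (permutation) pattern."""
    return [tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)]

def permutation_entropy(x, d=3):
    counts = Counter(ordinal_symbols(x, d))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / log(factorial(d))  # normalized to [0, 1]

def lz_phrase_count(symbols):
    """Simple LZ78-style parse: count distinct phrases in the symbol stream."""
    phrases, phrase = set(), ()
    for s in symbols:
        phrase += (s,)
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ()
    return len(phrases) + (1 if phrase else 0)

eeg = np.random.default_rng(5).normal(size=2000)  # placeholder EEG channel
syms = ordinal_symbols(eeg, d=3)
print(f"map point: PE = {permutation_entropy(eeg):.3f}, "
      f"LZ phrases = {lz_phrase_count(syms)}")
```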

]]>Entropy doi: 10.3390/e19100542

Authors: Martin Gueuning Renaud Lambiotte Jean-Charles Delvenne

We consider the problem of diffusion on temporal networks, where the dynamics of each edge is modelled by an independent renewal process. Despite the apparent simplicity of the model, the trajectories of a random walker exhibit non-trivial properties. Here, we quantify the walker’s tendency to backtrack at each step (returning to where it came from), as well as the resulting effect on the mixing rate of the process. As we show through empirical data, non-Poisson dynamics may significantly slow down diffusion due to backtracking, by a mechanism intrinsically different from the standard bus paradox and related temporal mechanisms. We conclude by discussing the implications of our work for the interpretation of results generated by null models of temporal networks.

]]>Entropy doi: 10.3390/e19100541

Authors: Nibaldo Rodriguez Guillermo Cabrera Carolina Lagos Enrique Cabrera

The behavioural diagnostics of bearings play an essential role in the management of several rotating machine systems. However, current diagnostic methods do not deliver satisfactory results for failures under variable-speed rotation. In this paper, we consider the Shannon entropy as an important fault signature pattern. To compute the entropy, we propose combining the stationary wavelet transform and singular value decomposition. The resulting feature extraction method, which we call stationary wavelet singular entropy (SWSE), aims to improve the accuracy of bearing failure diagnostics by finding a small number of high-quality fault signature patterns. The features extracted by the SWSE are then passed on to a kernel extreme learning machine (KELM) classifier. The proposed SWSE-KELM algorithm is evaluated using two bearing vibration signal databases obtained from Case Western Reserve University. We compare our SWSE feature extraction method to other well-known methods in the literature, such as stationary wavelet packet singular entropy (SWPSE) and decimated wavelet packet singular entropy (DWPSE). The experimental results show that the SWSE-KELM consistently outperforms both the SWPSE-KELM and DWPSE-KELM methods. Further, our SWSE method requires fewer features than the other two evaluated methods, which makes our SWSE-KELM algorithm simpler and faster.
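A sketch of the SWSE feature as described: take the stationary wavelet transform of the vibration signal (here via PyWavelets), perform an SVD of the stacked coefficient matrix, and compute the Shannon entropy of the normalized singular values. The wavelet and decomposition level below are illustrative assumptions, as is the placeholder signal.

```python
import numpy as np
import pywt  # PyWavelets

def swse(signal, wavelet="db4", level=3):
    n = len(signal) - len(signal) % 2 ** level  # SWT needs len divisible by 2^level
    coeffs = pywt.swt(np.asarray(signal[:n], dtype=float), wavelet, level=level)
    # Stack approximation and detail coefficients into one matrix.
    mat = np.vstack([c for pair in coeffs for c in pair])
    s = np.linalg.svd(mat, compute_uv=False)
    p = s / s.sum()
    return -np.sum(p * np.log(p + 1e-12))    # Shannon entropy of singular values

vib = np.random.default_rng(6).normal(size=4096)  # placeholder bearing vibration
print(f"SWSE = {swse(vib):.3f}")
```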

]]>Entropy doi: 10.3390/e19100539

Authors: Carlos Plata Antonio Prados

We analyze the emergence of Kovacs-like memory effects in athermal systems within the linear response regime. This is done by starting from both the master equation for the probability distribution and the equations for the physically-relevant moments. The general results are applied to a general class of models with conserved momentum and non-conserved energy. Our theoretical predictions, obtained within the first Sonine approximation, show an excellent agreement with the numerical results. Furthermore, we prove that the observed non-monotonic relaxation is consistent with the monotonic decay of the non-equilibrium entropy.

]]>Entropy doi: 10.3390/e19100538

Authors: Guoqiang Xu Haochun Zhang Xiu Zhang Yan Jin

Active control of heat flux can be realized with transformation optics (TO) thermal metamaterials. Recently, a new class of metamaterial tunable cells has been proposed, aiming to significantly reduce the difficulty of fabrication and to flexibly switch functions by employing several cells assembled at related positions following the TO design. However, owing to the integration and rotation of materials in tunable cells, these cells might incur extra thermal losses compared with the previous continuum design. This paper focuses on investigating the thermodynamic properties of tunable cells under the related design parameters. The universal expression for the local entropy generation rate in such metamaterial systems is obtained, considering the influence of rotation. A series of contrast schemes are established to describe the thermodynamic process and thermal energy distributions from the viewpoint of entropy analysis. Moreover, the effects of the design parameters on thermal dissipation and system irreversibility are investigated. In conclusion, more thermal dissipation and stronger thermodynamic processes occur in a system with larger conductivity ratios and rotation angles. This paper presents a detailed description of the thermodynamic properties of metamaterial tunable cells and provides a reference for selecting appropriate design parameters at related positions to fabricate more efficient and energy-economical switchable TO devices.

]]>Entropy doi: 10.3390/e19100536

Authors: Francisco Vega Reyes Antonio Lasanta

We analyze the transport properties of a low density ensemble of identical macroscopic particles immersed in an active fluid. The particles are modeled as inelastic hard spheres (granular gas). The non-homogeneous active fluid is modeled by means of a non-uniform stochastic thermostat. The theoretical results are validated with a numerical solution of the corresponding kinetic equation (direct simulation Monte Carlo method). We show a steady flow in the system that is accurately described by Navier-Stokes (NS) hydrodynamics, even for high inelasticity. Surprisingly, we find that the deviations from NS hydrodynamics for this flow are stronger as the inelasticity decreases. The active fluid action is modeled here with a non-uniform fluctuating volume force. This is a relevant result given that the hydrodynamics of particles in complex environments, such as biological crowded environments, is still a question under intense debate.

]]>Entropy doi: 10.3390/e19100535

Authors: Alessandro Bravetti

We give a short survey on the concept of contact Hamiltonian dynamics and its use in several areas of physics, namely reversible and irreversible thermodynamics, statistical physics and classical mechanics. Some relevant examples are provided along the way. We conclude by giving insights into possible future directions.

]]>Entropy doi: 10.3390/e19100532

Authors: Mingtian Li Jihua Ma

We consider the sets of quasi-regular points in the countable symbolic space. We measure the sizes of the sets by Billingsley-Hausdorff dimension defined by Gibbs measures. It is shown that the dimensions of those sets, always bounded from below by the convergence exponent of the Gibbs measure, are given by a variational principle, which generalizes Li and Ma’s result and Bowen’s result.

]]>Entropy doi: 10.3390/e19100534

Authors: Chinmaya Panigrahy Angel Garcia-Pedrero Ayan Seal Dionisio Rodríguez-Esparragón Nihar Mahato Consuelo Gonzalo-Martín

The Fractal Dimension (FD) of an image defines its roughness using a real number that is highly associated with the human perception of surface roughness. It has been applied successfully in many computer vision applications, such as texture analysis, segmentation and classification. Several techniques can be found in the literature to estimate FD. One such technique is Differential Box Counting (DBC), whose performance is influenced by many parameters. In particular, the box height is directly related to the gray-level variations over the image grid, which adversely affects the performance of DBC. In this work, a new method for estimating the box height is proposed without changing the other parameters of DBC. The proposed box height has been determined empirically and depends only on the image size. All the experiments have been performed on a simulated Fractal Brownian Motion (FBM) database and the Brodatz database. It is shown experimentally that the proposed box height improves the performance of DBC, Shifting DBC, Improved DBC and Improved Triangle DBC, bringing their estimates closer to the actual FD values of the simulated FBM images.
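For orientation, the sketch below implements classical DBC; the paper’s contribution replaces the box height h (classically h = s·G/M for grid size s, G gray levels, and an M×M image) with an empirically derived height depending only on the image size, whose exact formula is not reproduced here. The input image is a random placeholder.

```python
import numpy as np

def dbc_fd(img, sizes=(2, 4, 8, 16, 32)):
    M = img.shape[0]                  # assume a square M x M grayscale image
    G = 256                           # number of gray levels
    log_n, log_inv_r = [], []
    for s in sizes:
        h = s * G / M                 # classical box height (the tunable parameter)
        n_r = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                lo, hi = float(block.min()), float(block.max())
                n_r += int(np.ceil((hi + 1) / h) - np.ceil((lo + 1) / h)) + 1
        log_n.append(np.log(n_r))
        log_inv_r.append(np.log(M / s))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)  # FD = slope of log N_r vs log(1/r)
    return slope

img = np.random.default_rng(7).integers(0, 256, (256, 256))  # placeholder texture
print(f"estimated FD = {dbc_fd(img):.2f}")
```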

]]>Entropy doi: 10.3390/e19100523

Authors: Berik Koichubekov Viktor Riklefs Marina Sorokina Ilya Korshukov Lyudmila Turgunova Yelena Laryushina Riszhan Bakirova Gulmira Muldaeva Ernur Bekov Makhabbat Kultenova

Lagged Poincaré plots have been successful in characterizing abnormal cardiac function. However, current research practice does not favour any specific lag for Poincaré plots, which complicates the comparison of results of different researchers in their analysis of the heart rate of healthy subjects and patients. We researched the informative nature of lagged Poincaré plots in different states of the autonomic nervous system. It was tested in three models: different age groups, groups with a different balance of autonomous regulation, and hypertensive patients. Correlation analysis shows that for lag l = 6, SD1/SD2 has a weak (r = 0.33) correlation with the linear parameters of heart rate variability (HRV). For l greater than 6, it displays even less correlation with the linear parameters, but the changes in SD1/SD2 become statistically insignificant. Secondly, surrogate data tests show that the real SD1/SD2 is statistically different from its surrogate value, and the conclusion can be drawn that the heart rhythm has nonlinear properties. Thirdly, the three models showed that for different functional states of the autonomic nervous system (ANS), the SD1/SD2 ratio varied only for lags l = 5 and 6. All of this allows us to cautiously recommend the use of SD1/SD2 with lags 5 and 6 as a nonlinear characteristic of HRV. These data could serve as a basis for further research into the standardisation of nonlinear analytic methods.
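The lagged descriptors are straightforward to compute: for lag l, plot RR[n] against RR[n+l] and measure the dispersion across (SD1) and along (SD2) the line of identity. The sketch below uses synthetic RR intervals, not the study’s data.

```python
import numpy as np

def sd1_sd2(rr, lag=6):
    x, y = rr[:-lag], rr[lag:]
    sd1 = np.std((y - x) / np.sqrt(2))  # dispersion across the line of identity
    sd2 = np.std((y + x) / np.sqrt(2))  # dispersion along the line of identity
    return sd1, sd2

rr = np.random.default_rng(8).normal(0.9, 0.05, 600)  # synthetic RR intervals (s)
for lag in (1, 5, 6):
    sd1, sd2 = sd1_sd2(rr, lag)
    print(f"lag {lag}: SD1/SD2 = {sd1 / sd2:.3f}")
```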

Entropy doi: 10.3390/e19100533

Authors: Rui Tang Simon Fong Nilanjan Dey Raymond Wong Sabah Mohammed

Recently, a new algorithm named dynamic group optimization (DGO) has been proposed, which lends itself strongly to both exploration and exploitation. Although DGO has demonstrated its efficacy in comparison with classical optimization algorithms, it has two computational drawbacks. The first concerns its two mutation operators, which may decrease the diversity of the population and thus limit the search ability. The second is the homogeneity of the updated population information, which is selected only from companions in the same group; this may result in premature convergence and deteriorate the mutation operators. To deal with these two problems, this paper proposes a new hybridized algorithm that combines dynamic group optimization with the cross-entropy method. The cross-entropy method samples the problem space by generating candidate solutions from a probability distribution and then updates the distribution based on the better candidates discovered. The cross-entropy operator not only enlarges the promising search area but also ensures that new solutions take the surrounding useful information into consideration. The proposed algorithm is tested on 23 up-to-date benchmark functions; the experimental results verify that it is more effective and efficient than other contemporary population-based swarm algorithms.
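
The cross-entropy ingredient is easy to state in isolation. The sketch below shows one Gaussian cross-entropy update of the kind the abstract describes (sample, select elites, refit); it is an illustrative fragment under our own naming, not the hybrid DGO-CE algorithm itself.

import numpy as np

def cross_entropy_step(objective, mu, sigma, n_samples=100, elite_frac=0.2):
    """One cross-entropy iteration: sample, keep elites, refit distribution."""
    samples = np.random.normal(mu, sigma, size=(n_samples, len(mu)))
    scores = np.array([objective(s) for s in samples])
    n_elite = max(1, int(n_samples * elite_frac))
    elite = samples[np.argsort(scores)[:n_elite]]    # best candidates (minimization)
    return elite.mean(axis=0), elite.std(axis=0)     # updated sampling distribution

Iterating this step concentrates the sampling distribution on promising regions of the search space, which is the behaviour the hybrid exploits.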

Entropy doi: 10.3390/e19100490

Authors: Muhammad Qasim Zafar Hayat Khan Ilyas Khan Qasem Al-Mdallal

The entropy generation due to heat transfer and fluid friction in the mixed convective peristaltic flow of a methanol–Al2O3 nanofluid is examined. Maxwell's thermal conductivity model is used in the analysis. The velocity and temperature profiles are utilized in the computation of the entropy generation number. The effects of the relevant physical parameters on the velocity, temperature, entropy generation number, and Bejan number are discussed and explained graphically.
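
The Bejan number used here has its standard meaning as the heat-transfer share of the total irreversibility:

Be = \frac{N_H}{N_H + N_F},

where N_H and N_F are the entropy generation numbers due to heat transfer and fluid friction, respectively; Be close to 1 indicates heat-transfer-dominated entropy generation.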

Entropy doi: 10.3390/e19100531

Authors: Ryan James James Crutchfield

Accurately determining dependency structure is critical to understanding a complex system’s organization. We recently showed that the transfer entropy fails in a key aspect of this—measuring information flow—due to its conflation of dyadic and polyadic relationships. We extend this observation to demonstrate that Shannon information measures (entropy and mutual information, in their conditional and multivariate forms) can fail to accurately ascertain multivariate dependencies due to their conflation of qualitatively different relations among variables. This has broad implications, particularly when employing information to express the organization and mechanisms embedded in complex systems, including the burgeoning efforts to combine complex network theory with information theory. Here, we do not suggest that any aspect of information theory is wrong. Rather, the vast majority of its informational measures are simply inadequate for determining the meaningful relationships among variables within joint probability distributions. We close by demonstrating that such distributions exist across an arbitrary set of variables.
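
A standard toy instance of the conflation at issue (our illustration, not the paper's construction): let X and Y be independent uniform bits and Z = X \oplus Y. Then I(X;Y) = I(X;Z) = I(Y;Z) = 0, yet I(X;Z \mid Y) = 1 bit, so the three variables are jointly dependent in a way no pairwise Shannon measure can distinguish from complete independence.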

Entropy doi: 10.3390/e19100530

Authors: Abdullah Makkeh Dirk Theis Raul Vicente

Bertschinger, Rauh, Olbrich, Jost, and Ay (Entropy, 2014) have proposed a definition of a decomposition of the mutual information MI(X : Y, Z) into shared, synergistic, and unique information by way of solving a convex optimization problem. In this paper, we discuss the solution of their convex program from theoretical and practical points of view.
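
For context, the convex program in question can be stated roughly as follows (our notation): the unique information of Y about X with respect to Z is

UI(X; Y \setminus Z) = \min_{Q \in \Delta_P} I_Q(X; Y \mid Z),

where \Delta_P is the polytope of joint distributions Q that agree with P on the (X, Y) and (X, Z) marginals; the shared and synergistic components are then obtained from the same optimizer.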

Entropy doi: 10.3390/e19100529

Authors: Xin Li Bin Dai Zheng Ma

The model of the broadcast channel with confidential messages (BC-CM) plays an important role in the physical-layer security of modern communication systems. In recent years, it has been shown that a noiseless feedback channel from the legitimate receiver to the transmitter increases the secrecy capacity region of the BC-CM. At present, however, the feedback coding scheme for the BC-CM focuses only on producing secret keys via the noiseless feedback, and other uses of the feedback remain to be explored. In this paper, we propose a new feedback coding scheme for the BC-CM in which the noiseless feedback is used not only to produce secret keys for the legitimate receiver and the transmitter but also to generate update information that allows both receivers (the legitimate receiver and the wiretapper) to improve their channel outputs. Through a binary example, we show that this fuller utilization of the noiseless feedback increases the secrecy level of the previous feedback scheme for the BC-CM.

Entropy doi: 10.3390/e19100528

Authors: Reinaldo Arellano-Valle Javier Contreras-Reyes Milan Stehlík

The problem of measuring the disparity of a particular probability density function from a normal one has been addressed in several recent studies. The most common technique has been to derive exact expressions for information measures over particular distributions. In this paper, we consider a class of asymmetric distributions with a normal kernel, called Generalized Skew-Normal (GSN) distributions. We measure the degree of disparity of these distributions from the normal distribution by using exact expressions for the GSN negentropy in terms of cumulants, focusing on the skew-normal and modified skew-normal distributions. We then express the Kullback–Leibler divergence between each GSN distribution and the normal one in terms of their negentropies in order to develop hypothesis tests for normality. Finally, we apply these results to condition factor time series of anchovies off northern Chile.
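
Negentropy here carries its usual meaning: for a random variable X with entropy H(X), J(X) = H(X_G) - H(X), where X_G is a Gaussian with the same mean and variance, so J(X) \ge 0 with equality exactly at normality. For such matched moments one also has the standard identity D_{KL}(X \,\|\, X_G) = J(X), which is what allows divergences from normality to be written in terms of negentropies.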

Entropy doi: 10.3390/e19100527

Authors: Johannes Rauh Pradeep Banerjee Eckehard Olbrich Jürgen Jost Nils Bertschinger David Wolpert

Suppose we have a pair of information channels, κ1 and κ2, with a common input. The Blackwell order is a partial order over channels that compares κ1 and κ2 by the maximal expected utility an agent can obtain when decisions are based on the channel outputs. Equivalently, κ1 is said to be Blackwell-inferior to κ2 if and only if κ1 can be constructed by garbling the output of κ2. A related partial order stipulates that κ2 is more capable than κ1 if the mutual information between the input and output is larger for κ2 than for κ1 for every distribution over inputs. A Blackwell-inferior channel is necessarily less capable. However, examples are known where κ1 is less capable than κ2 but not Blackwell-inferior. We show that this may even happen when κ1 is constructed by coarse-graining the inputs of κ2; such a coarse-graining is a special kind of "pre-garbling" of the channel inputs. This example directly establishes that the expected value of the shared utility function for the coarse-grained channel is larger than it is for the non-coarse-grained channel, contradicting the intuition that coarse-graining can only destroy information and lead to inferior channels. We also discuss our results in the context of information decompositions.
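
In symbols (a compact restatement in our notation): κ1 is a garbling of κ2 if κ1 = λ ∘ κ2 for some channel λ acting on the output of κ2, while κ2 is more capable than κ1 if I(S; Y2) ≥ I(S; Y1) for every distribution of the common input S, where Yi denotes the output of κi. The example above shows that the second relation does not imply the first even when κ1 arises from κ2 by coarse-graining its inputs.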
