Entropy doi: 10.3390/e19120683

Authors: Ke Tang Hong Xiao

The numerical study of continuum-rarefied gas flows is of considerable interest because it can provide fundamental knowledge regarding flow physics. Recently, the nonlinear coupled constitutive method (NCCM) has been derived from the Boltzmann equation and implemented to investigate continuum-rarefied gas flows. In this study, we first report the important and detailed issues in the use of the H theorem and positive entropy generation in the NCCM. Importantly, we demonstrate the unified nonlinear dissipation model and its relationship to the Rayleigh–Onsager function in the treatment of the collision term of the Boltzmann equation. In addition, we compare the Grad moment method, the Burnett equation, and the NCCM. Next, the differences between the NCCM equations and the Navier–Stokes equations are explained in detail. For validation, numerical studies of rarefied and continuum gas flows were conducted, including flows around a two-dimensional (2D) cavity, a 2D airfoil, a 2D cylinder, and a three-dimensional space shuttle. The NCCM results are in good agreement with those of the direct simulation Monte Carlo (DSMC) method in rarefied cases and with those of the Navier–Stokes equations in continuum cases. Finally, this study can be regarded as a theoretical basis of the NCCM for the development of a unified framework for solving continuum-rarefied gas flows.

Entropy doi: 10.3390/e19120682

Authors: Tatsuaki Tsuruyama

The field of information science has greatly developed, and applications in various fields have emerged. In this paper, we evaluated a coding system based on Tsallis entropy for the transmission of messages and aimed to formulate the channel capacity by maximizing the Tsallis entropy under a given code-length constraint. As a result, we obtained a simple relational expression between code length and code appearance probability and, additionally, a generalized formula for the channel capacity on the basis of Tsallis entropy statistics. This theoretical framework may contribute to data processing techniques and other applications.
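
For readers who want to experiment with the central quantity, the following minimal Python sketch computes the Tsallis entropy of a code-appearance probability distribution and its Shannon limit as q approaches 1; the function name and the example distribution are illustrative, not taken from the paper.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1),
    which reduces to the Shannon entropy (in nats) as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Example: appearance probabilities of a 4-symbol code alphabet.
p = [0.5, 0.25, 0.15, 0.10]
for q in (0.5, 1.0, 2.0):
    print(f"q = {q}: S_q = {tsallis_entropy(p, q):.4f}")
```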

Entropy doi: 10.3390/e19120680

Authors: Intan Low Po-Chih Kuo Yu-Hsiang Liu Cheng-Lin Tsai Hsiang-Tai Chao Jen-Chuen Hsieh Li-Fen Chen Yong-Sheng Chen

How chronic pain affects brain functions remains unclear. As a potential indicator, brain complexity estimated by entropy-based methods may be helpful for revealing the underlying neurophysiological mechanism of chronic pain. In this study, complexity features with multiple time scales and spectral features were extracted from resting-state magnetoencephalographic signals of 156 female participants with/without primary dysmenorrhea (PDM) during a pain-free state. As revealed by multiscale sample entropy (MSE), PDM patients (PDMs) exhibited loss of brain complexity in regions associated with sensory, affective, and evaluative components of pain, including the sensorimotor, limbic, and salience networks. Significant correlations between MSE values and psychological states (depression and anxiety) were found in PDMs, which may indicate specific nonlinear disturbances in limbic and default mode network circuits after long-term menstrual pain. These findings suggest that MSE is an important measure of brain complexity and is potentially applicable to the future diagnosis of chronic pain.
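
A minimal sketch of the multiscale sample entropy computation underlying such analyses is given below, in plain NumPy; parameter conventions vary across studies, so this compact implementation is illustrative rather than the authors' code.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """SampEn = -ln(A/B): B and A count pairs of length-m and length-(m+1)
    templates matching within tolerance r*std(x) (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        tpl = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        return sum(np.sum(np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1) <= tol)
                   for i in range(len(tpl) - 1))
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2, r=0.15):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1), m, r))
    return np.array(out)

# White noise: complexity should decay with increasing scale.
print(multiscale_entropy(np.random.default_rng(0).standard_normal(3000)))
```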

Entropy doi: 10.3390/e19120678

Authors: Qian Zeng Jin Wang

We explore the dynamics of two interacting information systems. We show that for Markovian marginal systems, the driving force for information dynamics is determined by both the information landscape and the information flux. While the information landscape can be used to construct the driving force that describes the equilibrium, time-reversible part of the information system dynamics, the information flux describes the nonequilibrium, time-irreversible behaviors of the information system dynamics. The information flux explicitly breaks detailed balance and is a direct measure of the degree of nonequilibrium or time-irreversibility. We further demonstrate that the mutual information rate between the two subsystems can be decomposed into equilibrium time-reversible and nonequilibrium time-irreversible parts. This decomposition of the Mutual Information Rate (MIR) corresponds explicitly to the information landscape-flux decomposition when the two subsystems behave as Markov chains. Finally, we uncover the intimate relationship between nonequilibrium thermodynamics, in terms of the entropy production rates, and the time-irreversible part of the mutual information rate. We find that this relationship and the MIR decomposition still hold for the more general stationary and ergodic cases. We demonstrate the above features with two examples of bivariate Markov chains.

Entropy doi: 10.3390/e19120679

Authors: Yan Jin

Second-law analysis (SLA) is an important concept in thermodynamics, which basically assesses energy by its value in terms of its convertibility from one form to another.[...]

Entropy doi: 10.3390/e19120677

Authors: Xingran Cui Emily Chang Wen-Hung Yang Bernard C. Jiang Albert C. Yang Chung-Kang Peng

Atrial fibrillation (AF) is an abnormal rhythm of the heart, which can increase heart-related complications. Paroxysmal AF episodes occur intermittently with varying duration. Human-based diagnosis of paroxysmal AF from a long-term electrocardiogram recording is time-consuming. Here we present a fully automated ensemble model for AF episode detection based on RR-interval time series, applying a novel approach of information-based similarity analysis and an ensemble scheme. By mapping RR-interval time series to binary symbolic sequences and comparing the rank-frequency patterns of m-bit words, the dissimilarity between AF and normal sinus rhythms (NSR) was quantified. To achieve high detection specificity and sensitivity, and low variance, a weighted variation of bagging with multiple AF and NSR templates was applied. By performing dissimilarity comparisons between unknown RR-interval time series and multiple templates, paroxysmal AF episodes were detected. Based on our results, the optimal AF detection parameters are a symbolic word length m = 9 and an observation window n = 150, achieving 97.04% sensitivity, 97.96% specificity, and 97.78% overall accuracy. Sensitivity, specificity, and overall accuracy vary little despite changes in the m and n parameters. This study provides quantitative information to enhance the categorization of AF and normal cardiac rhythms.
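
The symbolization and rank-frequency comparison can be sketched as follows; the rank-based dissimilarity shown is a simple illustrative variant (the paper's exact weighting and the template-ensemble machinery are omitted), and the synthetic RR series are ours.

```python
import numpy as np
from collections import Counter

def word_ranks(rr, m=9):
    """Binarize RR increments (1 if increasing), slide an m-bit window,
    and return {word: rank} with rank 0 for the most frequent word."""
    bits = (np.diff(rr) > 0).astype(int)
    words = ["".join(map(str, bits[i:i + m])) for i in range(len(bits) - m + 1)]
    freq = Counter(words)
    return {w: r for r, (w, _) in enumerate(freq.most_common())}

def dissimilarity(rr1, rr2, m=9):
    """Normalized mean absolute rank difference over the shared vocabulary
    (unseen words get the worst rank); 0 means identical rank ordering."""
    r1, r2 = word_ranks(rr1, m), word_ranks(rr2, m)
    vocab = set(r1) | set(r2)
    worst = len(vocab)
    return np.mean([abs(r1.get(w, worst) - r2.get(w, worst))
                    for w in vocab]) / worst

rng = np.random.default_rng(1)
nsr = 0.8 + 0.05 * np.sin(np.arange(400) / 10) + 0.01 * rng.standard_normal(400)
af  = 0.8 + 0.15 * rng.standard_normal(400)    # erratic, AF-like RR series
print(dissimilarity(nsr, af), dissimilarity(nsr, nsr))
```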

Entropy doi: 10.3390/e19120675

Authors: Inga Stolz Karsten Keller

It is popular to study a time-dependent nonlinear system by encoding outcomes of measurements into sequences of symbols following certain symbolization schemes. Mostly, symbolizations by threshold crossings, or variants of them, are applied, but the relatively new symbolic approach that goes back to the innovative works of Bandt and Pompe—ordinal symbolic dynamics—also plays an increasing role. In this paper, we discuss both approaches together, in a novel way, with respect to the theoretical determination of the Kolmogorov-Sinai entropy (KS entropy). For this purpose, we propose and investigate a unifying approach to formalize symbolizations. By doing so, we can emphasize the main advantage of the ordinal approach if no symbolization scheme can be found that characterizes KS entropy directly: the ordinal approach, as well as generalizations of it, provides, under very natural conditions, a direct route to KS entropy by default.

Entropy doi: 10.3390/e19120676

Authors: Camelia Stanciu Dorin Stanciu Adina-Teodora Gheorghian Elena-Beatrice Tănase Cătălina Dobre Marius Spiroiu

A solar-driven cooling system consisting of a single-effect H2O-LiBr absorption cooling module (ACS), a parabolic trough collector (PTC), and a storage tank (ST) module is analyzed during one full day of operation. Pressurized water is used to transfer heat from the PTC to the ST and to feed the ACS desorber. The system is constrained to operate at the maximum ACS exergetic efficiency, under a time-dependent cooling load computed on 15 July for a one-storey house located near Bucharest, Romania. To set up the solar assembly, two commercial PTCs were selected, namely the PT1-IST and the PTC 1800 Solitem, and a single-unit ST was initially considered. The mathematical model, relying on the energy balance equations, was coded in the Engineering Equation Solver (EES) environment. The solar data were obtained from the Meteonorm database. The numerical simulations proved that the system cannot cover the imposed cooling load all day long, due to the large variation of the water temperature inside the ST. By splitting the ST into two units, the results revealed that the PT1-IST collector drives the ACS only between 9 am and 4:30 pm, while the PTC 1800 covers the entire cooling period (9 am–6 pm) for optimum ST capacities of 90 kg/90 kg and 90 kg/140 kg, respectively.

Entropy doi: 10.3390/e19120674

Authors: Michael Bowler Colleen Kelly

Many species of plants are found in regions to which they are alien. Their global distributions are characterised by a family of exponential functions of the kind that arise in elementary statistical mechanics (an example in ecology is MacArthur’s broken stick). We show here that all these functions are quantitatively reproduced by a model containing a single parameter—some global resource partitioned at random on the two axes of species number and site number. A dynamical model generating this equilibrium is a two-fold stochastic process and suggests a curious and interesting biological interpretation in terms of niche structures fluctuating with time and productivity, with sites and species highly idiosyncratic. Idiosyncrasy implies that attempts to identify a priori those species likely to become naturalised are unlikely to be successful. Although this paper is primarily concerned with a particular problem in population biology, the two-fold stochastic process may be of more general interest.

Entropy doi: 10.3390/e19120673

Authors: Pinar Tosun Daniel Abásolo Gillian Stenson Raphaelle Winsky-Sommerer

Specific patterns of brain activity during sleep and waking are recorded in the electroencephalogram (EEG). Time-frequency analysis methods have been widely used to analyse the EEG and have identified characteristic oscillations for each vigilance state (VS), i.e., wakefulness, rapid-eye movement (REM) and non-rapid-eye movement (NREM) sleep. However, other aspects such as changes of patterns associated with brain dynamics may not be captured unless a non-linear-based analysis method is used. In this pilot study, Permutation Lempel–Ziv complexity (PLZC), a novel symbolic dynamics analysis method, was used to characterise the changes in the EEG in sleep and wakefulness during baseline and recovery from sleep deprivation (SD). The results obtained with PLZC were contrasted with a related non-linear method, Lempel–Ziv complexity (LZC). Both measure the emergence of new patterns. However, LZC is dependent on the absolute amplitude of the EEG, while PLZC is only dependent on the relative amplitude due to the symbolisation procedure and is thus more resistant to noise. We showed that PLZC discriminates activated brain states associated with wakefulness and REM sleep, which both displayed higher complexity, compared to NREM sleep. Additionally, significantly lower PLZC values were measured in NREM sleep during the recovery period following SD compared to baseline, suggesting a reduced emergence of new activity patterns in the EEG. These findings were validated using PLZC on surrogate data. By contrast, LZC merely reflected changes in the spectral composition of the EEG. Overall, this study implies that PLZC is a robust non-linear complexity measure, which is not dependent on amplitude variations in the signal, and which may be useful to further assess EEG alterations induced by environmental or pharmacological manipulations.
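
A compact sketch of the PLZC pipeline (ordinal symbolization followed by Lempel-Ziv 1976 parsing) is shown below; the normalization and parameter defaults are common conventions from the complexity literature, not necessarily the paper's exact choices.

```python
import numpy as np
from math import factorial, log
from itertools import permutations

def ordinal_symbols(x, d=3):
    """Replace each length-d window by the index of its ordinal pattern."""
    lut = {p: i for i, p in enumerate(permutations(range(d)))}
    return [lut[tuple(np.argsort(x[i:i + d]))] for i in range(len(x) - d + 1)]

def lz76(s):
    """Kaspar-Schuster implementation of the Lempel-Ziv (1976) phrase count."""
    n = len(s)
    i, k, l, c, k_max = 0, 1, 1, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k_max, k)
            i += 1
            if i == l:
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def plzc(x, d=3):
    """Normalized permutation Lempel-Ziv complexity."""
    sym = ordinal_symbols(x, d)
    n, alpha = len(sym), factorial(d)
    return lz76(sym) * log(n) / (n * log(alpha))

rng = np.random.default_rng(1)
print(plzc(rng.standard_normal(2000)))          # noise: high complexity
print(plzc(np.sin(np.arange(2000) / 10.0)))     # regular signal: low complexity
```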

Entropy doi: 10.3390/e19120671

Authors: Arpan Bhattacharyya Ling-Yan Hung Pak Lau Si-Nong Liu

In this paper, we would like to systematically explore the implications of non-perturbative effects on entanglement in a many body system. Instead of pursuing the usual path-integral method in a singular space, we attempt to study the wavefunctions in detail. We begin with a toy model of multiple particles whose interaction potential admits multiple minima. We study the entanglement of the true ground state after taking the tunneling effects into account and find some simple patterns. Notably, in the case of multiple particle interactions, entanglement entropy generically decreases with increasing number of minima. The knowledge of the subsystem actually increases with the number of minima. The reduced density matrix can also be seen to have close connections with graph spectra. In a more careful study of the two-well tunneling system, we also extract the exponentially-suppressed tail contribution, the analogue of instantons. To understand the effects of multiple minima in a field theory, we are inspired to inspect wavefunctions in a toy model of a bosonic field describing quasi-particles of two different condensates related by Bogoliubov transformations. We find that the area law is naturally preserved. This is probably a useful set of perspectives that promise wider applications.

Entropy doi: 10.3390/e19120672

Authors: Ryo Matsuoka Kohzoh Yoshino Eiichi Watanabe Ken Kiyono

Multiscale entropy (MSE) profiles of heart rate variability (HRV) in patients with atrial fibrillation (AFib) provide clinically useful information for ischemic stroke risk assessment, suggesting that the complex properties characterized by MSE profiles are associated with ischemic stroke risk. However, the meaning of HRV complexity in patients with AFib has not been clearly interpreted, and the physical and mathematical understanding of the relation between HRV dynamics and ischemic stroke risk is not well established. To gain a deeper insight into HRV dynamics in patients with AFib, and to improve ischemic stroke risk assessment using HRV analysis, we study the HRV characteristics related to MSE profiles, such as the long-range correlation and the probability density function. In this study, we analyze the HRV time series of 173 patients with permanent AFib. Our results show that, although HRV time series in patients with AFib exhibit long-range correlation (1/f fluctuations)—as observed in healthy subjects—in a range longer than 90 s, these autocorrelation properties have no significant predictive power for ischemic stroke occurrence. Further, the probability density function structure of the coarse-grained time series at scales greater than 2 s is dominantly associated with ischemic stroke risk. This observation could provide valuable information for improving ischemic stroke risk assessment using HRV analysis.

Entropy doi: 10.3390/e19120668

Authors: Jens Schulenborg Angelo Di Marco Joren Vanherck Maarten R. Wegewijs Janine Splettstoesser

Thermoelectric transport is traditionally analyzed using relations imposed by time-reversal symmetry, ranging from Onsager’s results to fluctuation relations in counting statistics. In this paper, we show that a recently discovered duality relation for fermionic systems—deriving from the fundamental fermion-parity superselection principle of quantum many-particle systems—provides new insights into thermoelectric transport. Using a master equation, we analyze the stationary charge and heat currents through a weakly coupled, but strongly interacting single-level quantum dot subject to electrical and thermal bias. In linear transport, the fermion-parity duality shows that features of thermoelectric response coefficients are actually dominated by the average and fluctuations of the charge in a dual quantum dot system, governed by attractive instead of repulsive electron-electron interaction. In the nonlinear regime, the duality furthermore relates most transport coefficients to much better understood equilibrium quantities. Finally, we naturally identify the fermion-parity as the part of the Coulomb interaction relevant for both the linear and nonlinear Fourier heat. Altogether, our findings hence reveal that next to time-reversal, the duality imposes equally important symmetry restrictions on thermoelectric transport. As such, it is also expected to simplify computations and clarify the physical understanding for more complex systems than the simplest relevant interacting nanostructure model studied here.

Entropy doi: 10.3390/e19120670

Authors: Grzegorz Wilk Zbigniew Włodarczyk

We discuss two examples of oscillations apparently hidden in some experimental results for high-energy multiparticle production processes: (i) the log-periodic oscillatory pattern decorating the power-like Tsallis distributions of transverse momenta; (ii) the oscillations of the modified combinants obtained from the measured multiplicity distributions. Our calculations are confronted with pp data from the Large Hadron Collider (LHC). We show that in both cases, these phenomena can provide new insight into the dynamics of these processes.

Entropy doi: 10.3390/e19120669

Authors: Yoshiharu Dogishi Shun Endo Woon Sohn Kenji Katayama

Photo-responsive double emulsions made of liquid crystal (LC) were prepared by a microfluidic device, and the light-induced processes were studied. The phase transition was induced from the center of the topological defect for an emulsion made of N-(4-methoxybenzylidene)-4-butylaniline (MBBA), and a strange texture change was observed for an emulsion made of 4-cyano-4′-pentylbiphenyl (5CB) doped with azobenzene. The results suggest that there are defect-involved processes in the phase change of LC double emulsions.

Entropy doi: 10.3390/e19120667

Authors: Paulo Rossi Renato Vicente

In this work, we propose a Bayesian online reconstruction algorithm for sparse signals based on Compressed Sensing and inspired by L1-regularization schemes. A previous work has introduced a mean-field approximation for the Bayesian online algorithm and has shown that it is possible to saturate the offline performance in the presence of Gaussian measurement noise when the signal generating distribution is known. Here, we build on these results and show that reconstruction is possible even if prior knowledge about the generation of the signal is limited, by introduction of a Laplace prior and of an extra Kullback–Leibler divergence minimization step for hyper-parameter learning.

Entropy doi: 10.3390/e19120666

Authors: Edurne Ibarrola-Ulzurrun Javier Marcello Consuelo Gonzalo-Martin

Hyperspectral imagery (HSI) integrates many continuous and narrow bands that cover different regions of the electromagnetic spectrum. However, the main challenge is the high dimensionality of HSI data, due to the 'Hughes' phenomenon. Thus, dimensionality reduction is necessary before applying classification algorithms to obtain accurate thematic maps. We focus the study on the following feature-extraction algorithms: Principal Component Analysis (PCA), Minimum Noise Fraction (MNF), and Independent Component Analysis (ICA). After a literature survey, we observed a lack of comparative studies on these techniques, as well as of accurate strategies to determine the number of components. Hence, the first objective was to compare traditional dimensionality reduction techniques (PCA, MNF, and ICA) on HSI from the Compact Airborne Spectrographic Imager (CASI) sensor and to evaluate different strategies for selecting the most suitable number of components in the transformed space. The second objective was to devise a new dimensionality reduction approach by dividing the CASI HSI according to the spectral regions of the electromagnetic spectrum. The components selected from the transformed spaces of the different spectral regions were stacked, and this stacked transformed space was evaluated to see whether the proposed approach improves the final classification.
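
The whole-spectrum versus region-wise stacking idea can be prototyped along these lines with scikit-learn; the random cube, the band grouping and the component counts are placeholder assumptions (MNF is not available in scikit-learn and is omitted here).

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Synthetic stand-in for a CASI cube: 100 x 100 pixels, 96 bands.
rng = np.random.default_rng(0)
cube = rng.random((100, 100, 96)).astype(np.float32)
X = cube.reshape(-1, cube.shape[-1])               # (pixels, bands)

# Whole-spectrum reduction: keep components explaining ~99% of the variance.
Z_full = PCA(n_components=0.99).fit_transform(X)

# Region-wise reduction: split bands into assumed spectral regions,
# reduce each region independently, then stack the selected components.
regions = [slice(0, 48), slice(48, 96)]            # e.g., VIS / NIR grouping
Z_stacked = np.hstack([PCA(n_components=5).fit_transform(X[:, r])
                       for r in regions])          # classifier input

# ICA works analogously on either the full cube or each region.
Z_ica = FastICA(n_components=10, random_state=0).fit_transform(X)
print(Z_full.shape, Z_stacked.shape, Z_ica.shape)
```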

Entropy doi: 10.3390/e19120633

Authors: Mingtao Ge Jie Wang Xiangyang Ren

This study proposes a novel fault diagnosis method based on the empirical wavelet transform (EWT) and a kernel density estimation classifier (KDEC), which can reliably diagnose the fault type of rolling element bearings. In the proposed fault diagnosis method, the vibration signal of a rolling element bearing is first decomposed into a series of F modes by the EWT, and the root mean square, kurtosis, and skewness of the F modes are computed and combined into a feature vector. Based on the characteristics of kernel density estimation, a classifier using kernel density estimation and mutual information is proposed. The feature vectors are then input into the KDEC for training and testing. The experimental results indicate that the proposed method can effectively identify three different operative conditions of rolling element bearings, and its accuracy rate is higher than that of a support vector machine (SVM) classifier and a back-propagation (BP) neural network classifier.
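
The feature extraction and a basic class-conditional KDE classifier can be sketched as follows; the EWT decomposition itself and the paper's mutual-information weighting are omitted, and the toy data are ours.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis, skew

def feature_vector(modes):
    """RMS, kurtosis and skewness of each decomposed mode, concatenated.
    (Any band-limited decomposition of the vibration signal can stand in
    for the EWT modes here.)"""
    feats = []
    for m in modes:
        feats += [np.sqrt(np.mean(m ** 2)), kurtosis(m), skew(m)]
    return np.array(feats)

class KDEClassifier:
    """One kernel density estimate per class; predict the class whose
    estimated density is highest at the test feature vector."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.kdes_ = {c: gaussian_kde(X[y == c].T) for c in self.classes_}
        return self
    def predict(self, X):
        scores = np.array([self.kdes_[c](X.T) for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=0)]

# Toy demo: two bearing conditions with different mode statistics.
rng = np.random.default_rng(2)
make = lambda s: feature_vector([s * rng.standard_normal(1024) for _ in range(3)])
X = np.array([make(1.0) for _ in range(30)] + [make(3.0) for _ in range(30)])
y = np.array([0] * 30 + [1] * 30)
print(KDEClassifier().fit(X, y).predict(X[:5]))
```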

Entropy doi: 10.3390/e19120662

Authors: Roberto Zivieri Nicola Pacini

Lactic fermentation and respiration are important metabolic pathways on which life is based. Here, the rate of entropy in a cell associated with fermentation and respiration processes in the glucose catabolism of living systems is calculated. This is done for both internal and external heat and matter transport, according to a thermodynamic approach based on Prigogine’s formalism. It is shown that the rate of entropy associated with irreversible reactions in fermentation processes is higher than the corresponding one in respiration processes. Instead, this behaviour is reversed for the diffusion of chemical species and for heat exchanges. The ratio between the rates of entropy associated with the two metabolic pathways has a space and time dependence for the diffusion of chemical species and is invariant for heat and irreversible reactions. In both fermentation and respiration processes studied separately, the total entropy rate tends towards a minimum value, fulfilling Prigogine’s minimum dissipation principle and in accordance with the second principle of thermodynamics. The applications of these results could be important for cancer detection and therapy.

Entropy doi: 10.3390/e19120663

Authors: Jianjun Su Dezheng Wang Yinong Zhang Fan Yang Yan Zhao Xiangkun Pang

Transfer entropy (TE) is a model-free approach based on information theory to capture causality between variables, which has been used for the modeling and monitoring of, and fault diagnosis in, complex industrial processes. It is able to detect the causality between variables without assuming any underlying model, but it is computationally burdensome. To overcome this limitation, a hybrid method of TE and the modified conditional mutual information (CMI) approach is proposed by using generated multi-valued alarm series. In order to obtain a process topology, TE can generate a causal map of all sub-processes and modified CMI can be used to distinguish the direct connectivity from the above-mentioned causal map by using multi-valued alarm series. The effectiveness and accuracy rate of the proposed method are validated by simulated and real industrial cases (the Tennessee-Eastman process) to capture process topology by using multi-valued alarm series.
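
For discrete (e.g., multi-valued alarm) series, a plug-in transfer entropy with history length 1 can be computed as below; this is a generic textbook estimator for illustration, not the authors' implementation, and the driven-series demo data are synthetic.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, base=2):
    """TE(X -> Y) = sum p(y+, y, x) * log[ p(y+ | y, x) / p(y+ | y) ]
    with plug-in (frequency) probabilities and one-step histories."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (yp, yc, xc), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yc, xc)]
        p_cond_self = pairs_yy[(yp, yc)] / singles[yc]
        te += p_joint * np.log(p_cond_full / p_cond_self)
    return te / np.log(base)

# x drives y with a one-step delay; TE(x->y) should exceed TE(y->x).
rng = np.random.default_rng(3)
x = rng.integers(0, 3, 5000)                  # 3-valued alarm series
y = np.roll(x, 1)                             # delayed copy of x
mask = rng.random(5000) < 0.2                 # corrupt 20% of the entries
y[mask] = rng.integers(0, 3, mask.sum())
print(transfer_entropy(x, y), transfer_entropy(y, x))
```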

Entropy doi: 10.3390/e19120665

Authors: Simone Fiori Ruben Di Filippo

The chaos-based optimization algorithm (COA) is a method to optimize possibly nonlinear complex functions of several variables by chaos search. The main innovation behind the chaos-based optimization algorithm is to generate chaotic trajectories by means of nonlinear, discrete-time dynamical systems to explore the search space while looking for the global minimum of a complex criterion function. The aim of the present research is to investigate the numerical properties of the COA, both on complex optimization test-functions from the literature and on a real-world problem, to contribute to the understanding of its global-search features. In addition, the present research suggests a refinement of the original COA algorithm to improve its optimization performance. In particular, the real-world optimization problem tackled within the paper is the estimation of six electro-mechanical parameters of a model of a direct-current (DC) electrical motor. A large number of test results prove that the algorithm achieves excellent numerical precision at little expense in computational complexity, which appears extremely limited compared to that of other benchmark optimization algorithms, namely the genetic algorithm and the simulated annealing algorithm.
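
A bare-bones chaos search of this kind, using the logistic map as the chaotic generator, might look as follows; the local carrier-wave refinement stage of full COA variants is omitted, and all names and the test function are illustrative.

```python
import numpy as np

def chaos_optimize(f, lo, hi, n_iter=20000, x0=0.7):
    """Chaos-based global search: iterate the fully chaotic logistic map
    x <- 4x(1-x) per coordinate, map it onto the search box, keep the best.
    (Avoid seeds on the map's periodic points, e.g., 0, 0.25, 0.5, 0.75, 1.)"""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = np.full(lo.shape, x0) + 1e-3 * np.arange(lo.size)  # distinct seeds
    best_p, best_v = None, np.inf
    for _ in range(n_iter):
        x = 4.0 * x * (1.0 - x)
        cand = lo + x * (hi - lo)
        v = f(cand)
        if v < best_v:
            best_p, best_v = cand.copy(), v
    return best_p, best_v

# Rastrigin test function: global minimum 0 at the origin.
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
p, v = chaos_optimize(rastrigin, lo=[-5.12] * 2, hi=[5.12] * 2)
print(p, v)
```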

Entropy doi: 10.3390/e19120664

Authors: Kevin Vanslette

We find that the standard relative entropy and the Umegaki entropy are designed for the purpose of inferentially updating probabilities and density matrices, respectively. From the same set of inferentially guided design criteria, both of the previously stated entropies are derived in parallel. This formulates a quantum maximum entropy method for the purpose of inferring density matrices in the absence of complete information.

Entropy doi: 10.3390/e19120661

Authors: Yi Sun Limin Wang Minghui Sun

Bayesian network classifiers (BNCs) have demonstrated competitive classification accuracy in a variety of real-world applications. However, BNCs are error-prone when discriminating among high-confidence labels. To address this issue, we propose the label-driven learning framework, which incorporates instance-based learning and ensemble learning. For each testing instance, high-confidence labels are first selected by a generalist classifier, e.g., the tree-augmented naive Bayes (TAN) classifier. Then, by focusing on these labels, conditional mutual information is redefined to more precisely measure the mutual dependence between attributes, thus leading to a refined generalist with a more reasonable network structure. To enable finer discrimination, an expert classifier is tailored for each high-confidence label. Finally, the predictions of the refined generalist and the experts are aggregated. We extend TAN to LTAN (Label-driven TAN) by applying the proposed framework. Extensive experimental results demonstrate that LTAN delivers superior classification accuracy to not only several state-of-the-art single-structure BNCs but also some established ensemble BNCs, at the expense of reasonable computational overhead.

Entropy doi: 10.3390/e19120658

Authors: Montserrat Vallverdú Aroa Ruiz-Muñoz Emma Roca Pere Caminal Ferran A. Rodríguez Alfredo Irurtia Alexandre Perera

The aim of the study was to analyze heart rate variability (HRV) response to high-intensity exercise during a 35-km mountain trail race and to ascertain whether fitness level could influence autonomic nervous system (ANS) modulation. Time-domain, frequency-domain, and multi-scale entropy (MSE) indexes were calculated for eleven mountain-trail runners who completed the race. Many changes were observed, mostly related to exercise load and fatigue. These changes were characterized by increased mean values and standard deviations of the normal-to-normal intervals associated with sympathetic activity, and by decreased differences between successive intervals related to parasympathetic activity. Normalized low frequency (LF) power suggested that ANS modulation varied greatly during the race and between individuals. Normalized high frequency (HF) power, associated with parasympathetic activity, varied considerably over the race, and tended to decrease at the final stages, whereas changes in the LF/HF ratio corresponded to intervals with varying exercise load. MSE indexes, related to system complexity, indicated the existence of many interactions between the heart and its neurological control mechanism. The time-domain, frequency-domain, and MSE indexes were also able to discriminate faster from slower runners, mainly in the more difficult and in the final stages of the race. These findings suggest the use of HRV analysis to study cardiac function mechanisms in endurance sports.
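
The standard time- and frequency-domain HRV indexes used in such analyses can be computed along these lines (a minimal sketch with the usual LF/HF band conventions; the MSE part and the synthetic RR series are not from the paper).

```python
import numpy as np
from scipy.signal import welch

def hrv_indices(rr_s, fs_interp=4.0):
    """SDNN, RMSSD and the LF/HF ratio from RR intervals (seconds).
    The spectrum is a Welch estimate of the evenly resampled RR series;
    LF: 0.04-0.15 Hz, HF: 0.15-0.4 Hz (common HRV conventions)."""
    rr = np.asarray(rr_s, float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    t = np.cumsum(rr)
    t_even = np.arange(t[0], t[-1], 1.0 / fs_interp)
    rr_even = np.interp(t_even, t, rr)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_interp, nperseg=256)
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df
    return {"SDNN": sdnn, "RMSSD": rmssd, "LF/HF": lf / hf}

rng = np.random.default_rng(4)
rr = 0.85 + 0.05 * rng.standard_normal(600)    # synthetic RR intervals (s)
print(hrv_indices(rr))
```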

Entropy doi: 10.3390/e19120655

Authors: Yubo Li Yongqiang Cheng Xiang Li Hongqiang Wang Xiaoqiang Hua Yuliang Qin

In this paper, Bayesian nonlinear filtering is considered from the viewpoint of information geometry and a novel filtering method is proposed based on information geometric optimization. Under the Bayesian filtering framework, we derive a relationship between the nonlinear characteristics of filtering and the metric tensor of the corresponding statistical manifold. Bayesian joint distributions are used to construct the statistical manifold. In this case, nonlinear filtering can be converted to an optimization problem on the statistical manifold and the adaptive natural gradient descent method is used to seek the optimal estimate. The proposed method provides a general filtering formulation and the Kalman filter, the Extended Kalman filter (EKF) and the Iterated Extended Kalman filter (IEKF) can be seen as special cases of this formulation. The performance of the proposed method is evaluated on a passive target tracking problem and the results demonstrate the superiority of the proposed method compared to various Kalman filter methods.

Entropy doi: 10.3390/e19120657

Authors: Rongxi Zhou Xiao Liu Mei Yu Kyle Huang

This paper systematically investigates the properties of six kinds of entropy-based risk measures: Information Entropy and Cumulative Residual Entropy in the probability space; Fuzzy Entropy, Credibility Entropy and Sine Entropy in the fuzzy space; and Hybrid Entropy in the hybridized uncertainty of both fuzziness and randomness. We discover that none of the risk measures satisfy all six of the following properties, which various scholars have associated with effective risk measures: Monotonicity, Translation Invariance, Sub-additivity, Positive Homogeneity, Consistency and Convexity. The measures based on Fuzzy Entropy, Credibility Entropy, and Sine Entropy all exhibit the same properties: Sub-additivity, Positive Homogeneity, Consistency, and Convexity. The measures based on Information Entropy and Hybrid Entropy, meanwhile, only exhibit Sub-additivity and Consistency. Cumulative Residual Entropy satisfies just Sub-additivity, Positive Homogeneity, and Convexity. After identifying these properties, we developed seven portfolio models based on different risk measures and made empirical comparisons using samples from both the Shenzhen Stock Exchange of China and the New York Stock Exchange of America. The comparisons show that the Mean Fuzzy Entropy Model performs the best among the seven models with respect to both daily returns and relative cumulative returns. Overall, these results could provide an important reference for both constructing effective risk measures and rationally selecting the appropriate risk measure under different portfolio selection conditions.

Entropy doi: 10.3390/e19120656

Authors: Chong Huang Peter Kairouz Xiao Chen Lalitha Sankar Ram Rajagopal

Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP’s performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.

Entropy doi: 10.3390/e19120654

Authors: Yu Zhang Vijay Singh Aaron Byrd

A flow duration curve (FDC) is widely used for predicting water supply, hydropower, environmental flow, sediment load, and pollutant load. Among different methods of constructing an FDC, the entropy-based method, developed recently, is appealing because of its several desirable characteristics, such as simplicity, flexibility, and statistical basis. This method contains a parameter, called entropy parameter M, which constitutes the basis for constructing the FDC. Since M is related to the ratio of the average streamflow to the maximum streamflow which, in turn, is related to the drainage area, it may be possible to determine M a priori and construct an FDC for ungauged basins. This paper, therefore, analyzed the characteristics of M in both space and time using streamflow data from 73 gauging stations in the Brazos River basin, Texas, USA. Results showed that the M values were impacted by reservoir operation and possibly climate change. The values were fluctuating, but relatively stable, after the operation of the reservoirs. Parameter M was found to change inversely with the ratio of average streamflow to the maximum streamflow. When there was an extreme event, there occurred a jump in the M value. Further, spatially, M had a larger value if the drainage area was small.

Entropy doi: 10.3390/e19120660

Authors: Liang Cheng Jun Niu Dehai Liao

The spatial and temporal variability of precipitation time series were investigated for the Hexi Corridor, in Northwest China, by analyzing the entropy information. The examinations were performed on monthly, seasonal, and annual timescales based on 29 meteorological stations for the period of 1961–2015. The apportionment entropy and intensity entropy were used to analyze the regional precipitation characteristics, including the intra-annual and decadal distribution of monthly and annual precipitation amounts, as well as the number of precipitation days within a year and a decade. Regions with high precipitation variability and low precipitation amounts are found in the western part of the Hexi Corridor and may have a high possibility of drought occurrence. The variability of the number of precipitation days decreases from the west to the east of the corridor. Higher variability, in terms of both precipitation amount and intensity during the crop-growing season, has been found in the most recent decade. In addition, the correlation between entropy-based precipitation variability and crop yield is also examined, and the crop yield in historical periods is found to be correlated with the precipitation intensity disorder index in the middle reaches of the Hexi Corridor.
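
The apportionment entropy that underlies such intra-annual variability measures is straightforward to compute; the following sketch and its example monthly totals are illustrative only.

```python
import numpy as np

def apportionment_entropy(monthly, base=2):
    """Apportionment entropy of the intra-annual precipitation distribution:
    p_i = monthly total / annual total, AE = -sum p_i log p_i. The maximum,
    log2(12) ~ 3.585 bits, corresponds to perfectly uniform apportionment."""
    p = np.asarray(monthly, float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

uniform = np.full(12, 50.0)   # mm per month, perfectly even distribution
concentrated = np.array([5, 5, 5, 5, 120, 200, 180, 60, 5, 5, 5, 5], float)
print(apportionment_entropy(uniform), apportionment_entropy(concentrated))
```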

Entropy doi: 10.3390/e19120659

Authors: Enrica Santucci

We propose a quantum version of the well-known minimum distance classification model called the Nearest Mean Classifier (NMC). In this regard, we presented our first results in two previous works. First, a quantum counterpart of the NMC for two-dimensional problems was introduced, named the Quantum Nearest Mean Classifier (QNMC), together with a possible generalization to any number of dimensions. Secondly, we studied the n-dimensional problem in detail and showed a new encoding for arbitrary n-feature vectors into density operators. In the present paper, another promising encoding is considered, suggested by recent debates on quantum machine learning. Further, we observe a significant property concerning the non-invariance of our quantum classifier under feature rescaling. This fact, which represents a meaningful difference between the NMC and its quantum version, allows us to introduce a free parameter whose variation provides, in some cases, better classification results for the QNMC. The experimental section is devoted to: (i) comparing the NMC and QNMC performance on different datasets; and (ii) studying the effects of non-invariance under uniform rescaling for the QNMC.

Entropy doi: 10.3390/e19120644

Authors: Shu-Nan Li Bing-Yang Cao

Most existing phenomenological heat conduction models are expressed by temperature and heat flux distributions, whose definitions might be debatable in heat conductions with strong non-equilibrium. The constitutive relations of Fourier and hyperbolic heat conductions are here rewritten by the entropy and entropy flux distributions in the frameworks of classical irreversible thermodynamics (CIT) and extended irreversible thermodynamics (EIT). The entropic constitutive relations are then generalized by Boltzmann–Gibbs–Shannon (BGS) statistical mechanics, which can avoid the debatable definitions of thermodynamic quantities relying on local equilibrium. It shows a possibility of modeling heat conduction through entropic constitutive relations. The applicability of the generalizations by BGS statistical mechanics is also discussed based on the relaxation time approximation, and it is found that the generalizations require a sufficiently small entropy production rate.

Entropy doi: 10.3390/e19120652

Authors: Theerasak Chanwimalueang Danilo Mandic

The nonparametric Sample Entropy (SE) estimator has become a standard for the quantification of the structural complexity of nonstationary time series, even in critical cases of unfavorable noise levels. The SE has proven very successful for signals that exhibit a certain degree of underlying structure but do not obey standard probability distributions, a typical case in real-world scenarios such as physiological signals. However, the SE estimates structural complexity based on uncertainty rather than on (self) correlation, so that, for reliable estimation, the SE requires long data segments, is sensitive to spikes and erratic peaks in data, and, owing to its amplitude dependence, exhibits a lack of precision for signals with long-term correlations. To this end, we propose a class of new entropy estimators based on the similarity of embedding vectors, evaluated through the angular distance, the Shannon entropy and the coarse-grained scale. Analysis of the effects of embedding dimension, sample size and tolerance shows that the so-introduced Cosine Similarity Entropy (CSE) and the enhanced Multiscale Cosine Similarity Entropy (MCSE) are amplitude-independent and therefore superior to the SE when applied to short time series. Unlike the SE, the CSE is shown to yield valid entropy values over a broad range of embedding dimensions. By evaluating the CSE and the MCSE over a variety of benchmark synthetic signals, as well as for real-world data (heart rate variability in three different cardiovascular pathologies), the proposed algorithms are demonstrated to be able to quantify degrees of structural complexity in the context of self-correlation over small to large temporal scales, thus offering physically meaningful interpretations and rigor in understanding the intrinsic properties of the structural complexity of a system, such as the number of its degrees of freedom.
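
The core CSE computation can be sketched as follows; the tolerance, embedding defaults and edge-case handling are assumptions for illustration rather than the paper's reference implementation.

```python
import numpy as np

def cosine_similarity_entropy(x, m=3, tau=1, r=0.1):
    """CSE sketch: embed the series, measure the angular distance between
    embedding vectors, take B = fraction of pairs closer than tolerance r,
    and return the binary Shannon entropy of B (amplitude-independent)."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    emb = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    cos = np.clip(emb @ emb.T, -1.0, 1.0)
    ang = np.arccos(cos) / np.pi                  # angular distance in [0, 1]
    iu = np.triu_indices(n, k=1)
    B = np.mean(ang[iu] < r)
    if B in (0.0, 1.0):
        return 0.0
    return -(B * np.log2(B) + (1 - B) * np.log2(1 - B))

rng = np.random.default_rng(5)
print(cosine_similarity_entropy(rng.standard_normal(500)))       # noise
print(cosine_similarity_entropy(np.sin(np.arange(500) / 5.0)))   # pure tone
```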

Entropy doi: 10.3390/e19120650

Authors: Zdzisław Jaworski Paulina Pianko-Oprych

The modeling of carbon deposition from C-H-O reformates has usually employed thermodynamic data for graphite, but has rarely employed such data for impure filamentous carbon. Therefore, electrochemical data from the literature on the chemical potential of two types of purified carbon nanotubes (CNTs) are included in the study. Parameter values determining the thermodynamic equilibrium of the deposition of either graphite or CNTs are computed for dry and wet reformates from natural gas and liquefied petroleum gas. The calculation results are presented as the atomic oxygen-to-carbon ratio (O/C) against temperature (200 to 1000 °C) for various pressures (1 to 30 bar). O/C regions of either carbon deposition or deposition-free operation are computed, indicating the critical O/C values below which deposition can occur. Only three types of deposited carbon were found under the studied equilibrium conditions: graphite, multi-walled CNTs, and single-walled CNTs in bundles. The temperature regions of the appearance of the thermodynamically stable forms of solid carbon are numerically determined to be independent of pressure and of the analyzed reactants. The modeling indicates a significant increase in the critical O/C for the deposition of CNTs relative to that for graphite. The highest rise in the critical O/C, of up to 290% at 30 bar, was found for the wet reforming process.

Entropy doi: 10.3390/e19120651

Authors: Zhiyi Duan Limin Wang

To maximize the benefit that can be derived from the information implicit in big data, ensemble methods generate multiple models with sufficient diversity through randomization or perturbation. A k-dependence Bayesian classifier (KDB) is a highly scalable learning algorithm with excellent time and space complexity, along with high expressivity. This paper introduces a new ensemble approach of KDBs, a k-dependence forest (KDF), which induces a specific attribute order and conditional dependencies between attributes for each subclassifier. We demonstrate that these subclassifiers are diverse and complementary. Our extensive experimental evaluation on 40 datasets reveals that this ensemble method achieves better classification performance than state-of-the-art out-of-core ensemble learners such as the AODE (averaged one-dependence estimator) and averaged tree-augmented naive Bayes (ATAN).

Entropy doi: 10.3390/e19120653

Authors: Allan Tameshtit

In a departure from most work in quantum information utilizing Gaussian states, we use a single such state to represent a qubit and model environmental noise with a class of quadratic dissipative equations. A benefit of this single Gaussian representation is that with one deconvolution, we can eliminate noise. In this deconvolution picture, a basis of squeezed states evolves to another basis of such states. One of the limitations of our approach is that noise is eliminated only at a privileged time. We suggest that this limitation may actually be used advantageously to send information securely: the privileged time is only known to the sender and the receiver, and any intruder accessing the information at any other time encounters noisy data.

Entropy doi: 10.3390/e19120646

Authors: Jenny Farmer Fareeha Kanwal Nikita Nikulsin Matthew Tsilimigras Donald Jacobs

Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα-atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include the Jensen-Shannon divergence, modifications of the Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between pairs of Cα-atoms. Several implementations for the quantitative comparison of a pair of MD trajectories are assessed, based on fluctuations of on-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins, as evident from extraordinarily low p-values.
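
A minimal recipe for the Jensen-Shannon comparison of two fluctuation samples is shown below; it uses plain histograms rather than the paper's maximum entropy density estimator, and the Gaussian stand-in samples are ours.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_divergence(samples_a, samples_b, bins=50):
    """Jensen-Shannon divergence (base 2) between histogram-estimated
    distributions of two fluctuation samples (e.g., per-residue positional
    fluctuations from two MD trajectories)."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    pa, _ = np.histogram(samples_a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(samples_b, bins=bins, range=(lo, hi), density=True)
    return jensenshannon(pa, pb, base=2) ** 2   # squared distance = divergence

rng = np.random.default_rng(6)
native = rng.normal(1.0, 0.2, 10000)   # stand-in fluctuation samples
mutant = rng.normal(1.1, 0.3, 10000)
print(js_divergence(native, mutant))
```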

Entropy doi: 10.3390/e19120649

Authors: Marco Monge Jorge Vidal Luis Villalba

In recent years, an important increase in the number and impact of Distributed Denial of Service (DDoS) threats has been reported by various information security organizations. They typically target the depletion of the computational resources of the victims, hence drastically harming their operational capabilities. Inspired by these methods, Economic Denial of Sustainability (EDoS) attacks pose a similar motivation, but adapted to Cloud computing environments, where the denial is achieved by damaging the economy of both suppliers and customers. Therefore, the most common EDoS approach is making the offered services unsustainable by exploiting their auto-scaling algorithms. In order to contribute to their mitigation, this paper introduces a novel EDoS detection method based on the study of entropy variations in metrics taken into account when deciding auto-scaling actuations. Through the prediction and definition of adaptive thresholds, unexpected behaviors capable of fraudulently triggering the hiring of new resources are distinguished. In order to demonstrate the effectiveness of the proposal, an experimental scenario adapted to the singularities of EDoS threats and the assumptions driven by their original definition is described in depth. The preliminary results showed high accuracy.
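
As a rough illustration of entropy-variation detection on an auto-scaling metric, the sketch below flags windows whose entropy deviates from a running threshold; this running mean/std rule is a simple stand-in for the paper's predictive adaptive thresholds, and the traffic model is synthetic.

```python
import numpy as np

def shannon_entropy(window, bins=10):
    """Shannon entropy (bits) of a histogram of one metric window."""
    p, _ = np.histogram(window, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def edos_alerts(metric, win=60, k=3.0):
    """Flag windows whose entropy deviates by more than k running standard
    deviations from the running mean of past window entropies."""
    ent = np.array([shannon_entropy(metric[i:i + win])
                    for i in range(0, len(metric) - win, win)])
    alerts = []
    for i in range(5, len(ent)):              # warm-up of 5 windows
        mu, sd = ent[:i].mean(), ent[:i].std() + 1e-9
        if abs(ent[i] - mu) > k * sd:
            alerts.append(i)
    return ent, alerts

rng = np.random.default_rng(7)
normal = rng.poisson(20, 6000).astype(float)               # request-rate metric
attack = rng.poisson(20, 1200) + np.linspace(0, 60, 1200)  # fraudulent ramp
ent, alerts = edos_alerts(np.concatenate([normal, attack]))
print("alert windows:", alerts)
```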

Entropy doi: 10.3390/e19120634

Authors: Turkay Baran Nilgun Harmancioglu Cem Cetinkaya Filiz Barbaros

This study attempts to extend the prevailing definition of informational entropy, where entropy relates to the amount of reduction of uncertainty or, indirectly, to the amount of information gained through measurements of a random variable. The approach adopted herein describes informational entropy not as an absolute measure of information, but as a measure of the variation of information. This makes it possible to obtain a single value for informational entropy, instead of several values that vary with the selection of the discretizing interval, when discrete probabilities of hydrological events are estimated through relative class frequencies and discretizing intervals. Furthermore, the present work introduces confidence limits for the informational entropy function, which facilitates a comparison between the uncertainties of various hydrological processes with different scales of magnitude and different probability structures. The work addresses hydrologists and environmental engineers more than it does mathematicians and statisticians. In particular, it is intended to help solve information-related problems in hydrological monitoring design and assessment. This paper first considers the selection of probability distributions of best fit to hydrological data, using generated synthetic time series. Next, it attempts to assess hydrometric monitoring duration in a network, this time using observed runoff data series. In both applications, it focuses, basically, on the theoretical background for the extended definition of informational entropy. The methodology is shown to give valid results in each case.

Entropy doi: 10.3390/e19120647

Authors: Matthias Sachs Benedict Leimkuhler Vincent Danos

Langevin dynamics is a versatile stochastic model used in biology, chemistry, engineering, physics and computer science. Traditionally, in thermal equilibrium, one assumes (i) the forces are given as the gradient of a potential and (ii) a fluctuation-dissipation relation holds between stochastic and dissipative forces; these assumptions ensure that the system samples a prescribed invariant Gibbs-Boltzmann distribution for a specified target temperature. In this article, we relax these assumptions, incorporating variable friction and temperature parameters and allowing nonconservative force fields, for which the form of the stationary state is typically not known a priori. We examine theoretical issues such as stability of the steady state and ergodic properties, as well as practical aspects such as the design of numerical methods for stochastic particle models. Applications to nonequilibrium systems with thermal gradients and active particles are discussed.
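 
A minimal BAOAB-type integrator with hooks for position-dependent friction and temperature is sketched below; note that with nonconservative forces or variable parameters the sampled stationary state is generally not a Gibbs-Boltzmann distribution, which is exactly the regime the paper analyzes. All names and the demo potential are ours.

```python
import numpy as np

def baoab(force, q0, v0, n_steps, h=0.01, mass=1.0,
          gamma=lambda q: 1.0, kT=lambda q: 1.0):
    """BAOAB splitting for 1D Langevin dynamics. The O step is the exact
    Ornstein-Uhlenbeck update, evaluated with gamma(q) and kT(q) at the
    current position (a sketch; constant-parameter, conservative forces
    recover the standard equilibrium sampler)."""
    rng = np.random.default_rng(0)
    q, v = float(q0), float(v0)
    traj = np.empty(n_steps)
    for i in range(n_steps):
        v += 0.5 * h * force(q) / mass           # B: half kick
        q += 0.5 * h * v                         # A: half drift
        c = np.exp(-gamma(q) * h)                # O: exact OU step
        v = c * v + np.sqrt((1 - c * c) * kT(q) / mass) * rng.standard_normal()
        q += 0.5 * h * v                         # A: half drift
        v += 0.5 * h * force(q) / mass           # B: half kick
        traj[i] = q
    return traj

# Harmonic well with a thermal gradient: hotter on the right.
traj = baoab(force=lambda q: -q, q0=0.0, v0=0.0, n_steps=200_000,
             kT=lambda q: 1.0 + 0.5 * np.tanh(q))
print(traj.mean(), traj.var())
```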

Entropy doi: 10.3390/e19120648

Authors: Bowen Hou Zhangming He Xuanying Zhou Haiyin Zhou Dong Li Jiongqi Wang

The α-jerk model is an effective tracking model for maneuvering targets, one of the most critical issues in target tracking. Non-Gaussian noise always exists in the tracking process, and it usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. First, the weighted least-squares solution is presented and the standard Kalman filter is deduced. Then, a novel Kalman filter based on a weighted least-squares formulation of the maximum correntropy criterion is derived. The robustness of the maximum correntropy criterion is analyzed with the influence function and compared with that of the Huber-based filter; moreover, since the kernel size of the Gaussian kernel plays an important role in the filter algorithm, a new adaptive kernel method is proposed to adjust this parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.
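
One common way to build a maximum-correntropy measurement update is to reweight the innovation with a Gaussian kernel, which down-weights outlier measurements; the generic variant below (fixed kernel size, no adaptive tuning) is for illustration and is not the paper's specific derivation.

```python
import numpy as np

def mcc_kf_step(x, P, y, F, H, Q, R, sigma=2.0):
    """One predict/update cycle of a maximum-correntropy-flavored Kalman
    filter: the gain is scaled by a Gaussian kernel of the normalized
    innovation (w = 1 recovers the standard Kalman filter)."""
    x = F @ x                                     # predict
    P = F @ P @ F.T + Q
    innov = y - H @ x                             # innovation
    S = H @ P @ H.T + R
    e2 = float(innov.T @ np.linalg.inv(S) @ innov)
    w = np.exp(-e2 / (2.0 * sigma ** 2))          # ~1 nominal, ->0 for outliers
    K = w * P @ H.T @ np.linalg.inv(w * H @ P @ H.T + R)
    x = x + K @ innov                             # weighted update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity demo with impulsive (non-Gaussian) measurement noise.
F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2); R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(8)
for t in range(50):
    pos = t * 0.5
    noise = 10.0 if rng.random() < 0.05 else rng.normal(0, 0.5)
    x, P = mcc_kf_step(x, P, np.array([pos + noise]), F, H, Q, R)
print(x)
```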

Entropy doi: 10.3390/e19120645

Authors: Robert Griffiths

This paper answers Bell’s question: What does quantum information refer to? It is about quantum properties represented by subspaces of the quantum Hilbert space, or their projectors, to which standard (Kolmogorov) probabilities can be assigned by using a projective decomposition of the identity (PDI or framework) as a quantum sample space. The single framework rule of consistent histories prevents paradoxes or contradictions. When only one framework is employed, classical (Shannon) information theory can be imported unchanged into the quantum domain. A particular case is the macroscopic world of classical physics whose quantum description needs only a single quasiclassical framework. Nontrivial issues unique to quantum information, those with no classical analog, arise when aspects of two or more incompatible frameworks are compared.

Entropy doi: 10.3390/e19120643

Authors: Hoo-Suk Oh Youngseog Lee Ho-Young Kwak

In this study, the diagnosis of a 300-MW combined cycle power plant under faulty conditions was performed using a thermoeconomic method called modified productive structure analysis. The malfunction and dysfunction, the unit cost of irreversibility and the lost cost flow rate for each component were calculated for the cases of pre-fixed malfunctions and for the reference conditions. A commercial simulation software, GateCycle™ (version 6.1.2), was used to estimate the thermodynamic properties under faulty conditions. The relative malfunction (RMF) and the relative difference in the lost cost flow rate between real operation and reference conditions (RDLC) were found to be effective indicators for the identification of faulty components. Simulation results revealed that a 0.5% degradation in the isentropic efficiency of the air compressor, 2% in the gas turbine, 2% in the steam turbine and a 2% degradation in energy loss in the heat exchangers can be identified. Multi-fault scenarios that can be detected by the indicators were also considered. The additional lost exergy due to these types of faulty components, which can be detected by RMF or RDLC, is less than 5% of the exergy lost in the components under normal conditions.

Entropy doi: 10.3390/e19120642

Authors: Luigi Gresele Matteo Marsili

Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data, that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

Entropy doi: 10.3390/e19120641

Authors: Vijay Singh Bellie Sivakumar Huijuan Cui

Water engineering is an amalgam of engineering (e.g., hydraulics, hydrology, irrigation, ecosystems, environment, water resources) and non-engineering (e.g., social, economic, political) aspects that are needed for planning, designing and managing water systems. These aspects and the associated issues have been dealt with in the literature using different techniques that are based on different concepts and assumptions. A fundamental question that still remains is: Can we develop a unifying theory for addressing these? The second law of thermodynamics permits us to develop a theory that helps address these in a unified manner. This theory can be referred to as the entropy theory. The thermodynamic entropy theory is analogous to the Shannon entropy or the information theory. Perhaps, the most popular generalization of the Shannon entropy is the Tsallis entropy. The Tsallis entropy has been applied to a wide spectrum of problems in water engineering. This paper provides an overview of Tsallis entropy theory in water engineering. After some basic description of entropy and Tsallis entropy, a review of its applications in water engineering is presented, based on three types of problems: (1) problems requiring entropy maximization; (2) problems requiring coupling Tsallis entropy theory with another theory; and (3) problems involving physical relations.

Entropy doi: 10.3390/e19120627

Authors: Peng Luo Jinye Peng

Semi-Nonnegative Matrix Factorization (Semi-NMF), as a variant of NMF, inherits the merit of the parts-based representation of NMF and possesses the ability to process mixed-sign data, which has attracted extensive attention. However, standard Semi-NMF still suffers from the following limitations. First of all, Semi-NMF fits data in a Euclidean space, which ignores the geometrical structure in the data. Moreover, Semi-NMF does not incorporate discriminative information in the learned subspace. Last but not least, the learned basis in Semi-NMF is not necessarily part-based, because there are no explicit constraints to ensure that the representation is part-based. To settle these issues, in this paper, we propose a novel Semi-NMF algorithm, called Group sparsity and Graph regularized Semi-Nonnegative Matrix Factorization with Discriminability (GGSemi-NMFD), to overcome the aforementioned problems. GGSemi-NMFD adds a graph regularization term to Semi-NMF, which can well preserve the local geometrical information of the data space. To obtain the discriminative information, approximate orthogonal constraints are added in the learned subspace. In addition, ℓ2,1-norm constraints are adopted for the basis matrix, which encourage the basis matrix to be row-sparse. Experimental results on six datasets demonstrate the effectiveness of the proposed algorithm.

Entropy doi: 10.3390/e19120639

Authors: Francisco Peña Alejandro González Alvaro Nunez Pedro Orellana René Rojas Patricio Vargas

We study the effect of the degeneracy factor in the energy levels of the well-known Landau problem for a magnetic engine. The scheme of the cycle is composed of two adiabatic processes and two isomagnetic processes, driven by a quasi-static modulation of external magnetic field intensity. We derive the analytical expression of the relation between the magnetic field and temperature along the adiabatic process and, in particular, reproduce the expression for the efficiency as a function of the compression ratio.

]]>Entropy doi: 10.3390/e19120640

Authors: Carlos Granero-Belinchon Stéphane Roux Patrice Abry Muriel Doret Nicolas Garnier

Intrapartum fetal heart rate (FHR) monitoring constitutes a reference tool in clinical practice to assess the baby’s health status and to detect fetal acidosis. It is usually analyzed by visual inspection grounded on FIGO criteria. Characterization of intrapartum fetal heart rate temporal dynamics remains a challenging task and continuously receives academic research efforts. Complexity measures, often implemented with tools referred to as approximate entropy (ApEn) or sample entropy (SampEn), have regularly been reported as significant features for intrapartum FHR analysis. We explore how information theory, and especially auto-mutual information (AMI), is connected to ApEn and SampEn and can be used to probe FHR dynamics. Applied to a large (1404 subjects) and documented database of FHR data collected in a French academic hospital, the analysis shows that (i) auto-mutual information outperforms ApEn and SampEn for acidosis detection in the first stage of labor and continues to yield the best performance in the second stage; (ii) Shannon entropy increases as labor progresses and is always much larger in the second stage; and (iii) babies suffering from fetal acidosis additionally show more structured temporal dynamics than healthy ones, and this progressive structuration can be used for early acidosis detection.
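
To make the complexity measures concrete, a naive O(N²) sketch of sample entropy under standard conventions (template length m, tolerance r times the standard deviation; not the paper's implementation):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Naive SampEn: -ln(A/B), with B (A) the number of template pairs of
    length m (m+1) whose Chebyshev distance stays within r * std(x).
    A simplified variant: the two counts use slightly different template sets."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d <= tol)
        return c
    A, B = count(m + 1), count(m)
    return -np.log(A / B)

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))  # high for white noise
```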

]]>Entropy doi: 10.3390/e19120638

Authors: Burak Eroğlu Ramazan Gençay M. Yazgan

We evaluate the performances of wavelet jump detection tests by using simulated high-frequency data, in which jumps and some other non-standard features are present. Wavelet-based jump detection tests have a clear advantage over the alternatives, as they are capable of stating the exact timing and number of jumps. The results indicate that, in addition to those advantages, these detection tests also preserve desirable power and size properties even in non-standard data environments, whereas their alternatives fail to sustain their desirable properties beyond standard data features.

]]>Entropy doi: 10.3390/e19120547

Authors: Christopher Stephens Victor Sánchez-Cordero Constantino González Salazar

The characterization and quantification of ecological interactions and the construction of species’ distributions and their associated ecological niches are of fundamental theoretical and practical importance. In this paper, we discuss a Bayesian inference framework, which, using spatial data, offers a general formalism within which ecological interactions may be characterized and quantified. Interactions are identified through deviations of the spatial distribution of co-occurrences of spatial variables relative to a benchmark for the non-interacting system and based on a statistical ensemble of spatial cells. The formalism allows for the integration of both biotic and abiotic factors of arbitrary resolution. We concentrate on the conceptual and mathematical underpinnings of the formalism, showing how, using the naive Bayes approximation, it can be used to not only compare and contrast the relative contribution from each variable, but also to construct species’ distributions and ecological niches based on an arbitrary variable type. We also show how non-linear interactions between distinct niche variables can be identified and the degree of confounding between variables accounted for.

]]>Entropy doi: 10.3390/e19120636

Authors: Sten Sootla Dirk Theis Raul Vicente

Information theory is often utilized to capture both linear and nonlinear relationships between any two parts of a dynamical complex system. Recently, an extension to classical information theory called partial information decomposition has been developed, which allows one to partition the information that two subsystems have about a third one into unique, redundant and synergistic contributions. Here, we apply a recent estimator of partial information decomposition to characterize the dynamics of two different complex systems. First, we analyze the distribution of information in triplets of spins in the 2D Ising model as a function of temperature. We find that while redundant information obtains a maximum at the critical point, synergistic information peaks in the disordered phase. Second, we characterize 1D elementary cellular automata rules based on the information distribution between neighboring cells. We describe several clusters of rules with similar partial information decomposition. These examples illustrate how the partial information decomposition provides a characterization of the emergent dynamics of complex systems in terms of the information distributed across their interacting units.

]]>Entropy doi: 10.3390/e19120635

Authors: Basak Guler Aylin Yener Prithwish Basu Ananthram Swami

We consider a two party network where each party wishes to compute a function of two correlated sources. Each source is observed by one of the parties. The true joint distribution of the sources is known to one party. The other party, on the other hand, assumes a distribution for which the set of source pairs that have a positive probability is only a subset of those that may appear in the true distribution. In that sense, this party has only partial information about the true distribution from which the sources are generated. We study the impact of this asymmetry on the worst-case message length for zero-error function computation, by identifying the conditions under which reconciling the missing information prior to communication is better than not reconciling it but instead using an interactive protocol that ensures zero-error communication without reconciliation. Accordingly, we provide upper and lower bounds on the minimum worst-case message length for the communication strategies with and without reconciliation. Through specializing the proposed model to certain distribution classes, we show that partially reconciling the true distribution by allowing a certain degree of ambiguity can perform better than the strategies with perfect reconciliation as well as strategies that do not start with an explicit reconciliation step. As such, our results demonstrate a tradeoff between the reconciliation and communication rates, and that the worst-case message length is a result of the interplay between the two factors.

]]>Entropy doi: 10.3390/e19110629

Authors: Naoki Kawamura Tatsuya Yokota Hidekata Hontani Muneyuki Sakata Yuichi Kimura

It is known that the process of reconstruction of a Positron Emission Tomography (PET) image from sinogram data is very sensitive to measurement noises; it is still an important research topic to reconstruct PET images with high signal-to-noise ratios. In this paper, we propose a new reconstruction method for a temporal series of PET images from a temporal series of sinogram data. In the proposed method, PET images are reconstructed by minimizing the Kullback–Leibler divergence between the observed sinogram data and sinogram data derived from a parametric model of PET images. The contributions of the proposition include the following: (1) regions of targets in images are explicitly expressed using a set of spatial bases in order to ignore the noises in the background; (2) a parametric time activity model of PET images is explicitly introduced as a constraint; and (3) an algorithm for solving the optimization problem is clearly described. To demonstrate the advantages of the proposed method, quantitative evaluations are performed using both synthetic and clinical data of human brains.

]]>Entropy doi: 10.3390/e19110632

Authors: Ricardo Paéz-Hernández Norma Sánchez-Salas Juan Chimal-Eguía Delfino Ladino-Luna

This paper presents an analysis of a Curzon and Ahlborn thermal engine model where both internal irreversibilities and non-instantaneous adiabatic branches are considered, operating under maximum ecological function and maximum power output regimes. Its thermodynamic properties are shown, and an analysis of its local dynamic stability is performed. The derived results are compared throughout the work with the results obtained previously for the case in which the adiabatic branches were assumed to be instantaneous. The results indicate better performance of the thermodynamic properties in the model with instantaneous adiabatic branches, whereas there is an improvement in robustness in the case where non-instantaneous adiabatic branches are considered.

]]>Entropy doi: 10.3390/e19110631

Authors: Tarald O. Kvålseth

Starting with a new formulation for the mutual information (MI) between a pair of events, this paper derives alternative upper bounds and extends those to the case of two discrete random variables. Normalized mutual information (NMI) measures are then obtained from those bounds, emphasizing the use of least upper bounds. Conditional NMI measures are also derived for three different events and three different random variables. Since the MI formulation for a pair of events is always nonnegative, it can properly be extended to include weighted MI and NMI measures for pairs of events or for random variables that are analogous to the well-known weighted entropy. This weighted MI is generalized to the case of continuous random variables. Such weighted measures have the advantage over previously proposed measures of always being nonnegative. A simple transformation is derived for the NMI, such that the transformed measures have the value-validity property necessary for making various appropriate comparisons between values of those measures. A numerical example is provided.
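
As a generic illustration of normalization (the paper derives sharper least upper bounds than the common ones), mutual information can be normalized by a simple upper bound such as min(H(X), H(Y)):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def normalized_mi(joint):
    """MI from a joint pmf (2D array), normalized by one common upper bound,
    min(H(X), H(Y)); the paper works with tighter least upper bounds."""
    joint = np.asarray(joint, dtype=float)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    hx, hy = entropy(px), entropy(py)
    mi = hx + hy - entropy(joint.ravel())
    return mi / min(hx, hy)

joint = np.array([[0.4, 0.1], [0.1, 0.4]])
print(normalized_mi(joint))
```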

]]>Entropy doi: 10.3390/e19110630

Authors: Piotr Bołtuć

The paper introduces the notion of “metacomputable” processes as those which are the product of computable processes. This notion is interesting in cases where metacomputable processes may not be computable themselves, but are produced by computable ones. The notion of computability used here relies on Turing computability. When we talk about something being non-computable, this can be viewed as computation that incorporates Turing’s oracle, maybe a true randomizer (perhaps a quantum one). The notion of “processes” is used broadly, so that it also covers “objects” under the functional description; for the sake of this paper, an object is seen as computable if processes that fully describe relevant aspects of its functioning are computable. The paper also introduces a distinction between phenomenal content and the epistemic subject which holds that content. The distinction provides an application of the notion of the metacomputable. In accordance with the functional definition of computable objects sketched out above, it is possible to think of objects, such as brains, as being computable. If we take the functionality of brains relevant for consideration to be their supposed ability to generate first-person consciousness, and if they were computable in this regard, it would mean that brains, as generators of consciousness, could be described, straightforwardly, by Turing-computable mathematical functions. If there were other, maybe artificial, generators of first-person consciousness, then we could hope to design those as Turing-computable machines as well. However, thinking of such generators of consciousness as computable does not preclude the stream of consciousness being non-computable. This is the main point of this article—computable processes, including functionally described machines, may be able to generate incomputable products. Those products, while not computable, are metacomputable—by the regulative definition introduced in this article. Another example of a metacomputable process that is not also computable would be a true randomizer, if we were able to build one. Presumably, it would be built according to a computable design, e.g., by a machine designed using AutoCAD, that could be programmed into an industrial robot. Yet, its product—a perfect randomizer—would be incomputable. The last point I need to make belongs to ontology in the theory of computability. The claim that computable objects, or processes, may produce incomputable ones does not commit us to what I call computational monism—the idea that non-computable processes may, strictly speaking, be transformed into computable ones. Metacomputable objects, or processes, may originate from computable systems (systems are understood here as complex, structured objects or processes) that have non-computable admixtures. Such processes are computable as long as those non-computable admixtures are latent, or otherwise irrelevant for a given functionality, and they are non-computable if the admixtures become active and relevant. An ontology in which computational processes, or objects, can produce non-computable processes, or objects, iff the former have non-computable components may be termed computational dualism. Such objects or processes may be computable despite containing non-computable elements, in particular if there is an on/off switch for those non-computable processes, and it is off. One kind of such switch is provided, in biology, by latent genes that become active only in specific environmental situations, or at a given age. Both ontologies, computational dualism and computational monism, are compatible with some non-computable processes being metacomputable.

]]>Entropy doi: 10.3390/e19110628

Authors: Jian Jiao Sha Wang Bowen Feng Shushi Gu Shaohua Wu Qinyu Zhang

In this paper, we propose rate-compatible (RC) parallel concatenated punctured polar (PCPP) codes for incremental redundancy hybrid automatic repeat request (IR-HARQ) transmission schemes, which can transmit multiple data blocks over a time-varying channel. The PCPP coding scheme can provide RC polar coding blocks in order to adapt to channel variations. First, we investigate an improved random puncturing (IRP) pattern for the PCPP coding scheme, motivated by the code-rate and block-length limitations of conventional polar codes. The proposed IRP algorithm selects puncturing bits only from the frozen-bit set and keeps the information bits unchanged during puncturing, which improves decoding performance by 0.2–1 dB over the existing random puncturing (RP) algorithm. Then, we develop an RC IR-HARQ transmission scheme based on PCPP codes. By analyzing the overhead of the previously successfully decoded PCPP coding block in our IR-HARQ scheme, the optimal initial code rate can be determined for each new PCPP coding block over time-varying channels. Simulation results show that the average number of transmissions is about 1.8 for each PCPP coding block in our RC IR-HARQ scheme with a 2-level PCPP encoding construction, which halves the average number of transmissions compared with existing RC polar coding schemes.

]]>Entropy doi: 10.3390/e19110625

Authors: Chunming Zhang Zhengjun Zhang

The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of “robust-Bregman divergence (BD)” estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally-efficient procedure for obtaining “robust-BD” estimators and establish the consistency and asymptotic normality of the “robust-BD” estimator of the parametric component βo. For inference procedures of βo in the GPLM, we show that the Wald-type test statistic Wn constructed from the “robust-BD” estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λn is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang 2005) between Wn and Λn in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed “robust-BD” estimators and robust Wald-type test in the presence of outlying observations.

]]>Entropy doi: 10.3390/e19110626

Authors: Carsten Hartmann Lorenz Richter Christof Schütte Wei Zhang

The article surveys and extends variational formulations of the thermodynamic free energy and discusses their information-theoretic content from the perspective of mathematical statistics. We revisit the well-known Jarzynski equality for nonequilibrium free energy sampling within the framework of importance sampling and Girsanov change-of-measure transformations. The implications of the different variational formulations for designing efficient stochastic optimization and nonequilibrium simulation algorithms for computing free energies are discussed and illustrated.
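
As a concrete anchor for the nonequilibrium sampling discussion, a minimal sketch of the Jarzynski free-energy estimator applied to synthetic work samples (our illustration, with the usual symbols W and kT; not code from the article):

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    """Jarzynski estimator: dF = -kT * ln < exp(-W / kT) >, averaged over
    nonequilibrium work samples W; written in log-sum-exp form for stability."""
    w = -np.asarray(work, dtype=float) / kT
    lse = np.max(w) + np.log(np.mean(np.exp(w - np.max(w))))
    return -kT * lse

rng = np.random.default_rng(1)
work = rng.normal(loc=2.0, scale=1.0, size=10_000)   # synthetic work values
# For Gaussian work, dF = mean - var / (2 kT) = 2.0 - 0.5 = 1.5
print(jarzynski_free_energy(work))
```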

]]>Entropy doi: 10.3390/e19110624

Authors: Xiaofei Zhu Xu Zhang Xiao Tang Xiaoping Gao Xiang Chen

The objective of this study is to re-evaluate the relation between surface electromyogram (EMG) and muscle contraction torque in biceps brachii (BB) muscles of healthy subjects using two different complexity measures. Ten healthy subjects were recruited and asked to complete a series of elbow flexion tasks at different isometric muscle contraction levels ranging from 10% to 80% of maximum voluntary contraction (MVC), in increments of 10%. Meanwhile, both the elbow flexion torque and surface EMG data from the muscle were recorded. The root mean square (RMS), sample entropy (SampEn) and fuzzy entropy (FuzzyEn) of the corresponding EMG data were analyzed for each contraction level, and the relation between EMG and muscle torque was accordingly quantified. The experimental results showed a nonlinear relation between the traditional RMS amplitude of EMG and the muscle torque. By contrast, the FuzzyEn of EMG exhibited a stronger linear correlation with the muscle torque than the RMS amplitude of EMG, which indicates its great value in estimating BB muscle strength in a simple and straightforward manner. In addition, the SampEn of EMG was found to be insensitive to the varying muscle torque, presenting an almost flat trend with increasing muscle force. Such a character of the SampEn implies its potential application as a promising surface EMG biomarker for examining neuromuscular changes while overcoming interference from muscle strength.

]]>Entropy doi: 10.3390/e19110623

Authors: Duo Hao Qiuming Li Chengwei Li

Cameras mounted on vehicles frequently suffer from image shake due to the vehicles’ motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which exhibit the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas the subtraction of the resulting sum from the GMV represents the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
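
One plausible reading of the RE step, sketched with histogram estimates of two decomposed motion modes (our own binning and smoothing assumptions, not the authors' code):

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Discrete KL divergence D(p || q) = sum_i p_i * log(p_i / q_i)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log(p / q))

def mode_re(mode_a, mode_b, bins=32):
    """RE between histogram estimates of two decomposed motion modes."""
    lo = min(mode_a.min(), mode_b.min())
    hi = max(mode_a.max(), mode_b.max())
    pa, _ = np.histogram(mode_a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(mode_b, bins=bins, range=(lo, hi))
    return relative_entropy(pa, pb)

rng = np.random.default_rng(6)
jitter = rng.standard_normal(500)                    # high-frequency mode
drift = np.cumsum(0.05 * rng.standard_normal(500))   # slow intentional mode
print(mode_re(jitter, drift))
```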

]]>Entropy doi: 10.3390/e19110622

Authors: H. van Erp Ronald Linger Pieter van Gelder

In this paper, we give the derivation of an inquiry calculus or, equivalently, a Bayesian information theory. From simple ordering follow lattices or, equivalently, algebras. Lattices admit a quantification or, equivalently, algebras may be extended to calculi. The general rules of quantification are the sum and chain rules. Probability theory follows from a quantification on the specific lattice of statements that has an upper context. Inquiry calculus follows from a quantification on the specific lattice of questions that has a lower context. We give a relevance measure and a product rule for relevances which, taken together with the sum rule of relevances, allow us to perform inquiry analyses in an algorithmic manner.

]]>Entropy doi: 10.3390/e19110612

Authors: Amanda Oliveira Adrião Dória Neto Allan Martins

Information Theory is a branch of mathematics, more specifically of probability theory, that studies the quantification of information. Recently, several studies have successfully used Information Theoretic Learning (ITL) as a new technique for unsupervised learning. In these works, information measures are used as criteria of optimality in learning. In this article, we analyze a still unexplored aspect of these information measures: their dynamic behavior. Autoregressive models (linear and non-linear) are used to represent the dynamics in information measures. As a source of dynamic information, videos with different characteristics, such as fading and monotonous sequences, are used.

]]>Entropy doi: 10.3390/e19110607

Authors: Min Lei Guang Meng Guangming Dong

Bearing vibration response studies are crucial for the condition monitoring of bearings and the quality inspection of rotating machinery systems. However, it is still very difficult to diagnose bearing faults, especially rolling element faults, due to the complex, high-dimensional and nonlinear characteristics of vibration signals as well as the strong background noise. A novel nonlinear analysis method—the symplectic entropy (SymEn) measure—is proposed to analyze the measured signals for fault monitoring of rolling bearings. The core technique of the SymEn approach is the entropy analysis based on the symplectic principal components. The dynamical characteristics of the rolling bearing data are analyzed using the SymEn method. Unlike other techniques consisting of high-dimensional features in the time-domain, frequency-domain and the empirical mode decomposition (EMD)/wavelet-domain, the SymEn approach constructs low-dimensional (i.e., two-dimensional) features based on the SymEn estimate. The vibration signals from our experiments and the Case Western Reserve University Bearing Data Center are applied to verify the effectiveness of the proposed method. Meanwhile, it is found that faulty bearings have a great influence on the other normal bearings. To sum up, the results indicate that the proposed method can be used to detect rolling bearing faults.

]]>Entropy doi: 10.3390/e19110620

Authors: Tiefeng Peng Qibin Li Longhua Xu Chao He Liqun Luo

Foam systems have been attracting extensive attention due to their importance in a variety of applications, e.g., in the cleaning industry and in bubble flotation. In the context of flotation chemistry, flotation performance is strongly affected by bubble coalescence, which in turn relies significantly on the surface forces acting on the liquid film between bubbles. The unusually strong short-range repulsive surface interactions observed for Newton black films (NBF) between two interfaces less than 5 nm apart cannot be accommodated by the classical Derjaguin, Landau, Verwey, and Overbeek (DLVO) theory. This non-DLVO interaction increases exponentially as the film thickness decreases and plays a crucial role in determining liquid film stability; however, its mechanism and origin are still unclear. In the present work, we investigate the surface interaction of free-standing sodium dodecyl-sulfate (SDS) nanoscale black films in terms of the disjoining pressure using the molecular simulation method. The disjoining pressure and film tension of the SDS-NBF, an aqueous nanoscale film consisting of a water core coated with SDS surfactants, were quantitatively determined as functions of film thickness by a post-processing technique derived from film thermodynamics.

]]>Entropy doi: 10.3390/e19110619

Authors: Viviana Meruane Matias Lasen Enrique López Droguett Alejandro Ortiz-Bernardin

Sandwich structures are very attractive due to their high strength at a minimum weight, and, therefore, there has been a rapid increase in their applications. Nevertheless, these structures may present imperfect bonding or debonding between the skins and core as a result of manufacturing defects or impact loads, degrading their mechanical properties. To improve both the safety and functionality of these systems, structural damage assessment methodologies can be implemented. This article presents a damage assessment algorithm to localize and quantify debonds in sandwich panels. The proposed algorithm uses damage indices derived from the modal strain energy method and a linear approximation with a maximum entropy algorithm. Full-field vibration measurements of the panels were acquired using a high-speed 3D digital image correlation (DIC) system. Since the number of damage indices per panel is too large to be used directly in a regression algorithm, reprocessing of the data using principal component analysis (PCA) and kernel PCA has been performed. The results demonstrate that the proposed methodology accurately identifies debonding in composite panels.

]]>Entropy doi: 10.3390/e19110621

Authors: Wassim M. Haddad

Thermodynamics is a physical branch of science that governs the thermal behavior of dynamical systems from those as simple as refrigerators to those as complex as our expanding universe. The laws of thermodynamics involving conservation of energy and nonconservation of entropy are, without a doubt, two of the most useful and general laws in all sciences. The first law of thermodynamics, according to which energy cannot be created or destroyed, merely transformed from one form to another, and the second law of thermodynamics, according to which the usable energy in an adiabatically isolated dynamical system is always diminishing in spite of the fact that energy is conserved, have had an impact far beyond science and engineering. In this paper, we trace the history of thermodynamics from its classical to its postmodern forms, and present a tutorial and didactic exposition of thermodynamics as it pertains to some of the deepest secrets of the universe.

]]>Entropy doi: 10.3390/e19110606

Authors: Yunna Wu Xiaokun Sun Hu Xu Chuanbo Xu Ruhang Xu

Traditional stochastic dominance rules are such strict and qualitative conditions that a stochastic dominance relation between two alternatives generally does not exist. To solve this problem, we first supplement the definitions of almost stochastic dominance (ASD). Then, we propose a new definition of stochastic dominance degree (SDD) that is based on the idea of ASD. The new definition takes both the objective mean and stakeholders’ subjective preference into account, and can measure both standard and almost stochastic dominance degree. The new definition contains four kinds of SDD corresponding to different stakeholders (rational investors, risk averters, risk seekers, and prospect investors). The operator in the definition can also be changed to fit different circumstances. On the basis of the new SDD definition, we present a method to solve stochastic multiple-criteria decision-making problems. The numerical experiment shows that the new method produces a more accurate result according to the utility situations of stakeholders. Moreover, even when it is difficult to elicit the group utility distribution of stakeholders, or when the group utility distribution is ambiguous, the method can still rank alternatives.

]]>Entropy doi: 10.3390/e19110570

Authors: Mengmeng Li Xiaoyan Zhang

As we move into the information age, the amount of data in various fields has increased dramatically, and data sources have become increasingly widely distributed. The corresponding phenomenon of missing data is increasingly common, and it leads to the generation of incomplete multi-source information systems. In this context, this paper aims to address the limitations of rough set theory. We study methods of multi-source fusion in incomplete multi-source systems. This paper presents a method for fusing incomplete multi-source systems based on information entropy; in particular, our fusion method is validated by comparison with another method. Furthermore, extensive experiments are conducted on six UCI data sets to verify the performance of the proposed method. The experimental results indicate that the multi-source information fusion approach significantly outperforms other fusion approaches.

]]>Entropy doi: 10.3390/e19110618

Authors: Takayuki Koyama Takeru Matsuda Fumiyasu Komaki

We develop priors for Bayes estimation of quantum states that provide minimax state estimation. The relative entropy from the true density operator to a predictive density operator is adopted as a loss function. The proposed prior maximizes the conditional Holevo mutual information, and it is a quantum version of the latent information prior in classical statistics. For one qubit system, we provide a class of measurements that is optimal from the viewpoint of minimax state estimation.

]]>Entropy doi: 10.3390/e19110609

Authors: Aijun Guo Jianxia Chang Yimin Wang Qiang Huang Zhihui Guo

Copula functions have been extensively used to describe the joint behaviors of extreme hydrological events and to analyze hydrological risk. Advanced marginal distribution inference, for example, the maximum entropy theory, is particularly beneficial for improving the performance of the copulas. The goal of this paper, therefore, is twofold: first, to develop a coupled maximum entropy-copula method for hydrological risk analysis through deriving the bivariate return periods, risk, reliability and bivariate design events; and second, to reveal the impact of marginal distribution selection uncertainty and sampling uncertainty on bivariate design event identification. Particularly, the uncertainties involved in the second goal have not yet received significant consideration. The designed framework for hydrological risk analysis related to flood and extreme precipitation events is applied, as an example, in two catchments of the Loess plateau, China. Results show that (1) the distribution derived by the maximum entropy principle outperforms the conventional distributions for the probabilistic modeling of flood and extreme precipitation events; (2) the bivariate return periods, risk, reliability and bivariate design events can be derived using the coupled entropy-copula method; (3) uncertainty analysis highlights that appropriate performance of the marginal distribution is closely related to bivariate design event identification. Most importantly, sampling uncertainty causes the confidence regions of bivariate design events with return periods of 30 years to be very large, overlapping with the values of flood and extreme precipitation that have return periods of 10 and 50 years, respectively. The large confidence regions of bivariate design events greatly challenge their application in practical engineering design.

]]>Entropy doi: 10.3390/e19110617

Authors: Po-Ling Loh

In recent years, tools from information theory have played an increasingly prevalent role in statistical machine learning. In addition to developing efficient, computationally feasible algorithms for analyzing complex datasets, it is of theoretical importance to determine whether such algorithms are “optimal” in the sense that no other algorithm can lead to smaller statistical error. This paper provides a survey of various techniques used to derive information-theoretic lower bounds for estimation and learning. We focus on the settings of parameter and function estimation, community recovery, and online learning for multi-armed bandits. A common theme is that lower bounds are established by relating the statistical learning problem to a channel decoding problem, for which lower bounds may be derived involving information-theoretic quantities such as the mutual information, total variation distance, and Kullback–Leibler divergence. We close by discussing the use of information-theoretic quantities to measure independence in machine learning applications ranging from causality to medical imaging, and mention techniques for estimating these quantities efficiently in a data-driven manner.
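
For orientation, the channel-decoding reduction mentioned here is classically summarized by Fano's inequality; a standard textbook statement (not specific to this survey) reads:

```latex
% Fano-type minimax lower bound: V uniform on a finite set of size M,
% observation Y, and any estimator \hat{V}(Y); the error probability obeys
P\bigl(\hat{V}(Y) \neq V\bigr) \;\ge\; 1 - \frac{I(V;Y) + \log 2}{\log M}.
```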

]]>Entropy doi: 10.3390/e19110615

Authors: Daryl DeFord Katherine Moore

Permutation entropy has become a standard tool for time series analysis that exploits the temporal and ordinal relationships within data. Motivated by a Kullback–Leibler divergence interpretation of permutation entropy as divergence from white noise, we extend pattern-based methods to the setting of random walk data. We analyze random walk null models for correlated time series and describe a method for determining the corresponding ordinal pattern distributions. These null models more accurately reflect the observed pattern distributions in some economic data. This leads us to define a measure of complexity using the deviation of a time series from an associated random walk null model. We demonstrate the applicability of our methods using empirical data drawn from a variety of fields, including stock market closing prices.
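
For readers new to ordinal methods, a minimal sketch of plain permutation entropy (the building block the paper extends; parameters here are illustrative defaults):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Shannon entropy of the distribution of ordinal patterns of length `order`,
    normalized by log(order!) so that white noise approaches 1."""
    x = np.asarray(x, dtype=float)
    n_patterns = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n_patterns):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))        # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n_patterns
    return -np.sum(p * np.log(p)) / log(factorial(order))

rng = np.random.default_rng(2)
print(permutation_entropy(rng.standard_normal(2000)))  # close to 1 for white noise
```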

]]>Entropy doi: 10.3390/e19110613

Authors: Jongho Keum Kurt Kornelsen James Leach Paulin Coulibaly

Having reliable water monitoring networks is an essential component of water resources and environmental management. A standardized process for the design of water monitoring networks does not exist with the exception of the World Meteorological Organization (WMO) general guidelines about the minimum network density. While one of the major challenges in the design of optimal hydrometric networks has been establishing design objectives, information theory has been successfully adopted to network design problems by providing measures of the information content that can be deliverable from a station or a network. This review firstly summarizes the common entropy terms that have been used in water monitoring network designs. Then, this paper deals with the recent applications of the entropy concept for water monitoring network designs, which are categorized into (1) precipitation; (2) streamflow and water level; (3) water quality; and (4) soil moisture and groundwater networks. The integrated design method for multivariate monitoring networks is also covered. Despite several issues, entropy theory has been well suited to water monitoring network design. However, further work is still required to provide design standards and guidelines for operational use.

]]>Entropy doi: 10.3390/e19110614

Authors: Tong Qiao Wei Shan Chang Zhou

Centrality is one of the most studied concepts in network analysis. Although an abundance of methods for measuring centrality in social networks has been proposed, each approach exclusively characterizes limited parts of what it implies for an actor to be “vital” to the network. In this paper, a novel mechanism is proposed to quantitatively measure centrality using a re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis of the entropy of neighbor nodes. By design, the re-defined entropy centrality, which describes associations among node pairs and captures the process of influence propagation, can be interpreted as a measure of actor potential for communication activity. We evaluate the efficiency of the proposed model using four real-world datasets with varied sizes and densities and three artificial networks constructed by models including Barabasi-Albert, Erdos-Renyi and Watts-Strogatz. The four datasets are Zachary’s karate club, USAir97, the Collaboration network, and the Email network URV, respectively. Extensive experimental results prove the effectiveness of the proposed method.
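
To convey the flavor of neighbor-entropy scoring, here is a generic stand-in (the paper's re-defined model differs in its subgraph decomposition; this sketch only computes the entropy of each node's neighbor-degree distribution):

```python
import math
import networkx as nx

def neighbor_entropy_centrality(G):
    """Shannon entropy of the degree distribution over each node's neighbors:
    nodes whose neighbors spread influence evenly score higher."""
    scores = {}
    for v in G:
        degs = [G.degree(u) for u in G.neighbors(v)]
        total = sum(degs)
        if total == 0:                    # isolated node
            scores[v] = 0.0
            continue
        probs = [d / total for d in degs]
        scores[v] = -sum(p * math.log(p) for p in probs)
    return scores

G = nx.karate_club_graph()                # one of the datasets named above
top5 = sorted(neighbor_entropy_centrality(G).items(), key=lambda kv: -kv[1])[:5]
print(top5)
```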

]]>Entropy doi: 10.3390/e19110616

Authors: Zhiliang Pan Ping Li Jinxing Li Yanping Li

Endwall fillet and bulb structures are proposed in this research to improve the temperature uniformity of pin-finned microchannels. The periodic laminar flow and heat transfer performances are investigated under different Reynolds numbers and radii of the fillet and bulb. The results show that at a low Reynolds number, both the fillet and the bulb structures strengthen the span-wise and the normal secondary flow in the channel, eliminate the high temperature area in the pin-fin, improve the heat transfer performance of the rear of the cylinder, and enhance the thermal uniformity of the pin-fin surface and the outside wall. Compared to traditional pin-finned microchannels, the flow resistance coefficient f of the pin-finned microchannels with a fillet, as well as a bulb with a 2 μm or 5 μm radius, does not increase significantly, while f of the pin-finned microchannels with a 10 μm or 15 μm bulb increases notably. Moreover, Nu has a maximum increase of 16.93% for those with a fillet and 20.65% for those with a bulb, and the synthetic thermal performance coefficient TP increases by 16.22% at most for those with a fillet and 15.67% at most for those with a bulb. Finally, as the Reynolds number increases, the heat transfer improvement of the fillet and bulb decreases.

]]>Entropy doi: 10.3390/e19110611

Authors: Ahmed Abotabl Aria Nosratinia

Decode-Compress-Forward (DCF) is a generalization of Decode-Forward (DF) and Compress-Forward (CF). This paper investigates conditions under which DCF offers gains over DF and CF, addresses the problem of coded modulation for DCF, and evaluates the performance of DCF coded modulation implemented via low-density parity-check (LDPC) codes and polar codes. We begin by revisiting the achievable rate of DCF in discrete memoryless channels under backward decoding. We then study coded modulation for decode-compress-forward via multilevel coding. We show that the proposed multilevel coding approaches the known achievable rates of DCF. The proposed multilevel coding is implemented (and its performance verified) via a combination of standard DVB-S2 LDPC codes and polar codes whose design follows the method of Blasco-Serrano.

]]>Entropy doi: 10.3390/e19110610

Authors: Shirin Saeedi Bidokhti Gerhard Kramer Shlomo Shamai

The downlink of symmetric Cloud Radio Access Networks (C-RANs) with multiple relays and a single receiver is studied. Lower and upper bounds are derived on the capacity. The lower bound is achieved by Marton’s coding, which facilitates dependence among the multiple-access channel inputs. The upper bound uses Ozarow’s technique to augment the system with an auxiliary random variable. The bounds are studied over scalar Gaussian C-RANs and are shown to meet and characterize the capacity for interesting regimes of operation.

]]>Entropy doi: 10.3390/e19110608

Authors: Takayuki Kawashima Hironori Fujisawa

In high-dimensional data, many sparse regression methods have been proposed. However, they may not be robust against outliers. Recently, the use of the density power weight has been studied for robust parameter estimation, and the corresponding divergences have been discussed. One such divergence is the γ-divergence, and the robust estimator based on it is known for its strong robustness. In this paper, we extend the γ-divergence to the regression problem, consider robust and sparse regression based on the γ-divergence, and show that it has strong robustness under heavy contamination even when outliers are heterogeneous. The loss function is constructed from an empirical estimate of the γ-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of the loss function. To obtain the robust and sparse estimate, we propose an efficient update algorithm with a monotone decreasing property of the loss function. In particular, we discuss a linear regression problem with L1 regularization in detail. In numerical experiments and real data analyses, we see that the proposed method outperforms past robust and sparse methods.

]]>Entropy doi: 10.3390/e19110605

Authors: Petr Jizba Jan Korbel

The aim of this paper is to show that the Tsallis-type (q-additive) entropic chain rule allows for a wider class of entropic functionals than previously thought. In particular, we point out that the ensuing entropy solutions (e.g., Tsallis entropy) can be determined uniquely only when one fixes the prescription for handling conditional entropies. By using the concept of Kolmogorov–Nagumo quasi-linear means, we prove this with the help of Daróczy’s mapping theorem. Our point is further illustrated with a number of explicit examples. Other salient issues, such as connections of conditional entropies with the de Finetti–Kolmogorov theorem for escort distributions and with Landsberg’s classification of non-extensive thermodynamic systems, are also briefly discussed.

]]>Entropy doi: 10.3390/e19110604

Authors: Jerry Gibson

Although Shannon introduced the concept of a rate distortion function in 1948, only in the last decade has the methodology for developing rate distortion function lower bounds for real-world sources been established. However, these recent results have not been fully exploited due to some confusion about how these new rate distortion bounds, once they are obtained, should be interpreted and should be used in source codec performance analysis and design. We present the relevant rate distortion theory and show how this theory can be used for practical codec design and performance prediction and evaluation. Examples for speech and video indicate exactly how the new rate distortion functions can be calculated, interpreted, and extended. These examples illustrate the interplay between source models for rate distortion theoretic studies and the source models underlying video and speech codec design. Key concepts include the development of composite source models per source realization and the application of conditional rate distortion theory.

]]>Entropy doi: 10.3390/e19110603

Authors: Robert Swendsen

The proper definition of thermodynamics and the thermodynamic entropy is discussed in the light of recent developments. The postulates for thermodynamics are examined critically, and some modifications are suggested to allow for the inclusion of long-range forces (within a system), inhomogeneous systems with non-extensive entropy, and systems that can have negative temperatures. Only the thermodynamics of finite systems are considered, with the condition that the system is large enough for the fluctuations to be smaller than the experimental resolution. The statistical basis for thermodynamics is discussed, along with four different forms of the (classical and quantum) entropy. The strengths and weaknesses of each are evaluated in relation to the requirements of thermodynamics. Effects of order 1/N, where N is the number of particles, are included in the discussion because they have played a significant role in the literature, even if they are too small to have a measurable effect in an experiment. The discussion includes the role of discreteness, the non-zero width of the energy and particle number distributions, the extensivity of models with non-interacting particles, and the concavity of the entropy with respect to energy. The results demonstrate the validity of negative temperatures.

]]>Entropy doi: 10.3390/e19110602

Authors: Wei-Ting Lee Che-Ming Li

A new measure based on the tripartite information diagram is proposed for identifying quantum discord in tripartite systems. The proposed measure generalizes the mutual information underlying discord from bipartite to tripartite systems, and utilizes both one-particle and two-particle projective measurements to reveal the characteristics of the tripartite quantum discord. The feasibility of the proposed measure is demonstrated by evaluating the tripartite quantum discord for systems with states close to Greenberger–Horne–Zeilinger, W, and biseparable states. In addition, the connections between tripartite quantum discord and two other quantum correlations—namely genuine tripartite entanglement and genuine tripartite Einstein–Podolsky–Rosen steering—are briefly discussed. The present study considers the case of quantum discord in tripartite systems. However, the proposed framework can be readily extended to general N-partite systems.

]]>Entropy doi: 10.3390/e19110601

Authors: Johannes Rauh

Secret sharing is a cryptographic discipline in which the goal is to distribute information about a secret over a set of participants in such a way that only specific authorized combinations of participants together can reconstruct the secret. Thus, secret sharing schemes are systems of variables in which it is very clearly specified which subsets have information about the secret. As such, they provide perfect model systems for information decompositions. However, following this intuition too far leads to an information decomposition with negative partial information terms, which are difficult to interpret. One possible explanation is that the partial information lattice proposed by Williams and Beer is incomplete and has to be extended to incorporate terms corresponding to higher-order redundancy. These results put bounds on information decompositions that follow the partial information framework, and they hint at where the partial information lattice needs to be improved.

]]>Entropy doi: 10.3390/e19110600

Authors: Yanguang Chen Jiejing Wang Jian Feng

The spatial patterns and processes of cities can be described with various entropy functions. However, spatial entropy always depends on the scale of measurement, and it is difficult to find a characteristic value for it. In contrast, fractal parameters can be employed to characterize scale-free phenomena and reflect the local features of random multi-scaling structure. This paper is devoted to exploring the similarities and differences between spatial entropy and fractal dimension in urban description. Drawing an analogy between cities and growing fractals, we illustrate the definitions of fractal dimension based on different entropy concepts. Three representative fractal dimensions in the multifractal dimension set, capacity dimension, information dimension, and correlation dimension, are utilized to make empirical analyses of the urban form of two Chinese cities, Beijing and Hangzhou. The results show that the entropy values vary with the measurement scale, but the fractal dimension value is stable if the method and study area are fixed; if the linear size of the boxes is small enough (e.g., <1/25), the linear correlation between entropy and fractal dimension is significant (at the 99% confidence level). Further empirical analysis indicates that fractal dimension is close to the characteristic values of spatial entropy. This suggests that the physical meaning of fractal dimension can be interpreted by the ideas from entropy and scaling; this conclusion is revealing for future spatial analyses of cities.
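
As a generic illustration of one of the three dimensions discussed, the information dimension can be estimated by box counting (our sketch on synthetic points, not the authors' procedure or data):

```python
import numpy as np

def information_dimension(points, box_sizes):
    """Estimate the information dimension D1: the information I(eps) over
    occupied boxes of linear size eps scales as I(eps) ~ -D1 * log(eps)."""
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)  # rescale to the unit square
    log_eps, info = [], []
    for eps in box_sizes:
        idx = np.floor(pts / eps).astype(int)            # box index of each point
        _, counts = np.unique(idx, axis=0, return_counts=True)
        p = counts / counts.sum()
        info.append(-np.sum(p * np.log(p)))
        log_eps.append(np.log(eps))
    slope, _ = np.polyfit(log_eps, info, 1)
    return -slope

pts = np.random.default_rng(3).random((20_000, 2))       # plane-filling set: D1 close to 2
print(information_dimension(pts, [1/4, 1/8, 1/16, 1/32]))
```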

]]>Entropy doi: 10.3390/e19110598

Authors: Yun Lu Mingjiang Wang Rongchao Peng Qiquan Zhang

In the diagnosis of neurological diseases and assessment of brain function, entropy measures for quantifying electroencephalogram (EEG) signals are attracting ever-increasing attention worldwide. However, some entropy measures, such as approximate entropy (ApEn), sample entropy (SpEn) and multiscale entropy, imply high computational costs because their computations are based on hundreds of data points. In this paper, we propose an effective and practical method to accelerate the computation of these entropy measures by exploiting vectors with dissimilarity (VDS). By means of the VDS decision, distance calculations for most dissimilar vectors can be avoided during computation. The experimental results show that, compared with the conventional method, the proposed VDS method reduces the average computation time of SpEn in random signals and EEG signals by 78.5% and 78.9%, respectively. The computation times are consistently reduced by about 80.1–82.8% for five kinds of EEG signals of different lengths. The experiments further demonstrate the use of the VDS method not only to accelerate the computation of SpEn in electromyography and electrocardiogram signals but also to accelerate the computations of time-shift multiscale entropy and ApEn in EEG signals. All results indicate that the VDS method is a powerful strategy for accelerating the computation of entropy measures and has promising application potential in the field of biomedical informatics.

]]>Entropy doi: 10.3390/e19110599

Authors: Yingxin Zhao Zhiyang Liu Yuanyuan Wang Hong Wu Shuxue Ding

Compressive sensing theory has attracted widespread attention in recent years, and sparse signal reconstruction has been widely used in signal processing and communication. This paper addresses the problem of sparse signal recovery, especially with non-Gaussian noise. The main contribution of this paper is the proposal of an algorithm in which negentropy and reweighted schemes represent the core of the approach to the solution of the problem. The signal reconstruction problem is formalized as a constrained minimization problem, where the objective function is the sum of a term measuring the statistical characteristics of the error, the negentropy, and a sparse regularization term, the ℓp-norm, for 0 < p < 1. The ℓp-norm, however, leads to a non-convex optimization problem which is difficult to solve efficiently. Herein we treat the ℓp-norm as a series of weighted ℓ1-norms so that the sub-problems become convex. We propose an optimized algorithm based on forward-backward splitting. The algorithm is fast and succeeds in exactly recovering sparse signals with Gaussian and non-Gaussian noise. Several numerical experiments and comparisons demonstrate the superiority of the proposed algorithm.
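
To illustrate the reweighting idea in isolation, here is a minimal sketch of the standard iteratively reweighted ℓ1 loop with a plain least-squares data term (the paper replaces this term with a negentropy-based one; the matrix A, signal y, and all parameters below are synthetic):

```python
import numpy as np
from sklearn.linear_model import Lasso

def reweighted_l1(A, y, lam=0.01, iters=5, eps=1e-3):
    """Iteratively reweighted l1: solve a weighted lasso repeatedly with weights
    w_i = 1 / (|x_i| + eps), approximating the nonconvex lp (0 < p < 1) penalty
    by a sequence of convex sub-problems."""
    n = A.shape[1]
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        As = A / w                                   # column rescaling absorbs the weights
        z = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000).fit(As, y).coef_
        x = z / w                                    # undo the rescaling
        w = 1.0 / (np.abs(x) + eps)                  # small coefficients get large weights
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = reweighted_l1(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))           # should recover {3, 50, 120}
```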

]]>Entropy doi: 10.3390/e19110595

Authors: Erik Aurell

This paper revisits the classical problem of representing a thermal bath interacting with a system as a large collection of harmonic oscillators initially in thermal equilibrium. As is well known, the system then obeys an equation, which in the bulk and in the suitable limit tends to the Kramers–Langevin equation of physical kinetics. I consider time-dependent system-bath coupling and show that this leads to an additional harmonic force acting on the system. When the coupling is switched on and switched off rapidly, the force has delta-function support at the initial and final time. I further show that the work and heat functionals as recently defined in stochastic thermodynamics at strong coupling contain additional terms depending on the time derivative of the system-bath coupling. I discuss these terms and show that while they can be very large if the system-bath coupling changes quickly, they only give a finite contribution to the work that enters in Jarzynski’s equality. I also discuss that these corrections to standard work and heat functionals provide an explanation for non-standard terms in the change of the von Neumann entropy of a quantum bath interacting with a quantum system found in an earlier contribution (Aurell and Eichhorn, 2015).

]]>Entropy doi: 10.3390/e19110596

Authors: Shanli Xiao Yujia Wang Hui Yu Shankun Nie

In order to improve product disassembly efficiency, the disassembly line balancing problem (DLBP) is transformed into a problem of searching for the optimum path in a directed and weighted graph by constructing a disassembly hierarchy information graph (DHIG). Then, combining the characteristics of the disassembly sequence, an entropy-based adaptive hybrid particle swarm optimization algorithm (AHPSO) is presented. In this algorithm, entropy is introduced to measure the changing tendency of population diversity, and dimension learning, crossover and mutation operators are used to increase the probability of producing feasible disassembly solutions (FDS). The performance of the proposed methodology is tested on the primary problem instances available in the literature, and the results are compared with those of other evolutionary algorithms. The results show that the proposed algorithm is efficient for solving the complex DLBP.
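
One simple way such an entropy-based diversity signal can be formed is sketched below (a generic proxy; the paper's exact diversity measure may differ):

```python
import numpy as np

def population_entropy(positions, bins=10):
    """Shannon entropy of the swarm's positions, histogrammed per dimension and
    summed: a common proxy for population diversity that falls as the swarm
    converges, which an adaptive PSO can monitor to trigger diversification."""
    h = 0.0
    for d in range(positions.shape[1]):
        counts, _ = np.histogram(positions[:, d], bins=bins)
        p = counts[counts > 0] / counts.sum()
        h += -np.sum(p * np.log(p))
    return h

swarm = np.random.default_rng(7).random((30, 5))   # 30 particles, 5 dimensions
print(population_entropy(swarm))
```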

]]>Entropy doi: 10.3390/e19110594

Authors: Enrico Sciubba Federico Zullo

The paper discusses how the two thermodynamic properties, energy (U) and exergy (E), can be used to solve the problem of quantifying the entropy of non-equilibrium systems. Both energy and exergy are a priori concepts, and their formal dependence on thermodynamic state variables at equilibrium is known. Exploiting the results of a previous study, we first show that the non-equilibrium exergy En-eq can be calculated for an arbitrary temperature distribution across a macroscopic body with an accuracy that depends only on the available information about the initial distribution: the analytical results confirm that En-eq exponentially relaxes to its equilibrium value. Using the Gyftopoulos-Beretta formalism, a non-equilibrium entropy Sn-eq(x,t) is then derived from En-eq(x,t) and U(x,t). It is finally shown that the non-equilibrium entropy generation between two states is always larger than its equilibrium (herein referred to as “classical”) counterpart. We conclude that every iso-energetic non-equilibrium state corresponds to an infinite set of non-equivalent states that can be ranked in terms of increasing entropy. Each point of the Gibbs plane therefore corresponds to a set of possible initial distributions: the non-equilibrium entropy is a multi-valued function that depends on the initial mass and energy distribution within the body. Though the concept cannot be directly extended to microscopic systems, it is argued that the present formulation is compatible with a possible reinterpretation of the existing non-equilibrium formulations, namely those of Tsallis and Grmela, and answers at least in part one of the objections set forth by Lieb and Yngvason. A systematic application of this paradigm is very convenient from a theoretical point of view and may be beneficial for meaningful future applications in the fields of nano-engineering and biological sciences.

]]>Entropy doi: 10.3390/e19110597

Authors: Zhenghong Zhou Juanli Ju Xiaoling Su Vijay Singh Gengxi Zhang

Monthly streamflow has elements of stochasticity, seasonality, and periodicity. Spectral analysis and time series analysis can, respectively, be employed to characterize the periodical pattern and the stochastic pattern. Both Burg entropy spectral analysis (BESA) and configurational entropy spectral analysis (CESA) combine spectral analysis and time series analysis. This study compared the predictive performances of BESA and CESA for monthly streamflow forecasting in six basins in Northwest China. Four criteria were selected to evaluate the performances of these two entropy spectral analyses: relative error (RE), root mean square error (RMSE), coefficient of determination (R2), and Nash–Sutcliffe efficiency coefficient (NSE). It was found that, in Northwest China, both BESA and CESA forecast monthly streamflow well for series with strong correlation, with the forecast accuracy of BESA higher than that of CESA. For streamflow with weak correlation, the conclusion is the opposite.

]]>Entropy doi: 10.3390/e19110593

Authors: Mohammad Abdollahzadeh Jamalabadi Payam Hooshmand Navid Bagheri HamidReza KhakRah Majid Dousti

The authors wish to make the following correction to this paper [...]

]]>Entropy doi: 10.3390/e19110592

Authors: Lina Hao Xiaoling Su Vijay Singh Olusola Ayantobo

An integrated optimization model was developed for the spatial distribution of agricultural crops in order to efficiently utilize agricultural water and land resources simultaneously. The model is based on the spatial distribution of crop suitability, the spatial distribution of population density, and agricultural land use data. Multi-source remote sensing data are combined with constraints on the optimal crop areas, which are obtained from an agricultural cropping pattern optimization model. Using the middle reaches of the Heihe River basin as an example, the spatial distributions of maize and wheat were optimized by minimizing the cross-entropy between the crop distribution probabilities and desired but unknown distribution probabilities. Results showed that the area of maize should increase and the area of wheat should decrease in the study area compared with the situation in 2013. The comprehensive suitable area distribution of maize is approximately in accordance with the present distribution; however, the comprehensive suitable area distribution of wheat is not consistent with the present distribution. Through optimization, the areas with high proportions of maize and wheat became more concentrated than before. The maize area with more than 80% allocation is concentrated in the south of the study area, and the wheat area with more than 30% allocation is concentrated in the central part of the study area. The outcome of this study provides a scientific basis for farmers to select crops that are suitable for a particular area.

]]>Entropy doi: 10.3390/e19110591

Authors: Zhengwei Pan Juliang Jin Chunhui Li Shaowei Ning Rongxing Zhou

This paper establishes a water resources vulnerability framework based on sensitivity, natural resilience, and artificial adaptation, through analyses of the four states of the water system and the transformation processes accompanying them. Furthermore, it proposes an analysis method for water resources vulnerability based on connection entropy, which extends the concept of contact entropy. The method is applied to water resources vulnerability in Anhui Province, China; the analysis illustrates that, overall, vulnerability levels fluctuated but showed a clear improving trend from 2001 to 2015. Some suggestions are also provided for improving the level of water resources vulnerability in Anhui Province from the viewpoint of the vulnerability index.

]]>Entropy doi: 10.3390/e19110590

Authors: Paolo Castiglioni Paolo Coruzzi Matteo Bini Gianfranco Parati Andrea Faini

Multiscale entropy (MSE) quantifies cardiovascular complexity by evaluating Sample Entropy (SampEn) on coarse-grained series at increasing scales τ. Two approaches exist: one uses a fixed tolerance r at all scales (MSEFT), the other a varying tolerance r(τ) adjusted to follow the standard-deviation changes after coarse graining (MSEVT). The aim of this study is to clarify how the choice between MSEFT and MSEVT influences the quantification and interpretation of cardiovascular MSE, and whether it affects some signals more than others. To achieve this aim, we considered 2-h long beat-by-beat recordings of inter-beat intervals and of systolic and diastolic blood pressures in male (N = 42) and female (N = 42) healthy volunteers. We compared MSE estimated with fixed and varying tolerances, and evaluated whether the choice between the MSEFT and MSEVT estimators influences the quantification and interpretation of sex-related differences. We found substantial discrepancies between MSEFT and MSEVT results, related to the degree of correlation among samples and more pronounced for heart rate than for blood pressure; moreover, the choice between MSEFT and MSEVT may influence the interpretation of gender differences in the MSE of heart rate. We conclude that studies on cardiovascular complexity should carefully choose between fixed- and varying-tolerance estimators, particularly when evaluating the MSE of heart rate.
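
To make the MSEFT/MSEVT distinction concrete, here is a naive Python sketch (an O(N²) SampEn, for illustration only; the defaults m = 2 and a 0.2·SD tolerance factor are conventional choices, not necessarily those of the paper):

    import numpy as np

    def coarse_grain(x, tau):
        """Non-overlapping averages of length tau."""
        n = len(x) // tau
        return np.asarray(x, float)[:n * tau].reshape(n, tau).mean(axis=1)

    def sampen(x, m=2, r=0.2):
        """Naive O(N^2) Sample Entropy with Chebyshev distance."""
        x = np.asarray(x, float)
        def matches(mm):
            emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            return (np.sum(d <= r) - len(emb)) / 2.0  # exclude self-matches
        return -np.log(matches(m + 1) / matches(m))

    def mse(x, scales=range(1, 21), m=2, factor=0.2, varying=True):
        """MSEVT (varying=True) re-scales r to each coarse-grained SD;
        MSEFT (varying=False) keeps r fixed at factor * SD of the raw series."""
        sd0 = np.std(x)
        return [sampen(y, m, factor * (np.std(y) if varying else sd0))
                for y in (coarse_grain(x, tau) for tau in scales)]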

]]>Entropy doi: 10.3390/e19110588

Authors: Artemy Kolchinsky Brendan Tracey

Following the publication of our paper [1], we uncovered a mistake in the derivation of two formulas in the manuscript.[...]

]]>Entropy doi: 10.3390/e19110589

Authors: Peter W. Egolf Kolumban Hutter

In the last few decades, a series of experiments have revealed that turbulence is a cooperative and critical phenomenon showing a continuous phase change with the critical Reynolds number at its onset. However, applications of phase transition models, such as the Mean Field Theory (MFT), the Heisenberg model, the XY model, etc., to turbulence have not been realized so far. Now, in this article, a successful analogy to magnetism is reported, and it is shown that a Mean Field Theory of Turbulence (MFTT) can be built that reveals new results. In analogy to compressibility in fluids and susceptibility in magnetic materials, the vorticibility (a name the authors propose by analogy with response functions derived and named in other fields) of a turbulent flowing fluid is revealed, which is identical to the relative turbulence intensity. By analogy to magnetism, a Curie Law of Turbulence emerges in a natural manner. It is clear that the MFTT is a theory describing equilibrium flow systems, whereas it has long been known that turbulence is a highly non-equilibrium phenomenon. Nonetheless, as a starting point for the development of thermodynamic models of turbulence, the presented MFTT is very useful for gaining physical insight, just as Kraichnan’s turbulent energy spectra of 2-D and 3-D turbulence are, which were developed with equilibrium Boltzmann-Gibbs thermodynamics and only recently have been generalized and adapted to non-equilibrium and intermittent turbulent flow fields.
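
For orientation, the magnetic relations the analogy builds on, in LaTeX, together with the standard definition of relative turbulence intensity; the precise correspondence used in the paper may differ from this sketch:

    \chi = \frac{\partial M}{\partial H}, \qquad \chi = \frac{C}{T} \quad \text{(Curie law)},

while the response function identified with the vorticibility is the relative turbulence intensity,

    I \;=\; \frac{\sqrt{\overline{u'^{\,2}}}}{\bar{u}},

the ratio of the root-mean-square velocity fluctuation to the mean velocity.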

]]>Entropy doi: 10.3390/e19110587

Authors: Yancai Xiao Yi Hong Xiuhai Chen Weijia Chen

Misalignment is one of the common faults of the doubly-fed wind turbine (DFWT), and the normal operation of the unit is greatly affected in this state. Because it is difficult to obtain large numbers of misalignment fault samples from wind turbines in practice, in this paper ADAMS and MATLAB are used to simulate various misalignment conditions of the wind turbine transmission system and to obtain the corresponding stator currents. Then, the dual-tree complex wavelet transform is used to decompose and reconstruct the characteristic signal, and the dual-tree complex wavelet energy entropy is computed from the reconstructed coefficients to form the feature vector for fault diagnosis. A support vector machine (SVM) is used as the classifier, and particle swarm optimization (PSO) is used to optimize the relevant SVM parameters to improve its classification performance. The results show that the proposed method can effectively and accurately classify misalignment of the wind turbine transmission system and improve the reliability of fault diagnosis.
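
A minimal sketch of the energy-entropy feature described above, assuming the sub-band signals reconstructed from the dual-tree complex wavelet coefficients are already available (the decomposition itself and the exact feature-vector layout follow the paper and are not reproduced here):

    import numpy as np

    def energy_entropy(subbands):
        """Shannon entropy of the normalized sub-band energies."""
        energies = np.array([np.sum(np.abs(c) ** 2) for c in subbands])
        p = energies / energies.sum()
        return float(-np.sum(p * np.log(p + 1e-12)))

Features of this kind would then feed the SVM classifier, whose penalty and kernel parameters are tuned by PSO as described above.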

]]>Entropy doi: 10.3390/e19110585

Authors: Jinde Zheng Deyu Tu Haiyang Pan Xiaolei Hu Tao Liu Qingyun Liu

The vibration signals of rolling bearings are often nonlinear and non-stationary. Multiscale entropy (MSE) has been widely applied to measure the complexity of nonlinear mechanical vibration signals; however, at present many scholars use only single-channel vibration signals for fault diagnosis. In this paper, multiscale entropy in a multivariate framework, i.e., multivariate multiscale entropy (MMSE), is introduced to machinery fault diagnosis to improve the efficiency of fault identification as much as possible by using multi-channel vibration information. MMSE evaluates the multivariate complexity of synchronous multi-channel data and is an effective method for measuring complexity and mutual nonlinear dynamic relationships, but its statistical stability is poor. Refined composite multivariate multiscale fuzzy entropy (RCMMFE) was developed to overcome the problems existing in MMSE and was compared with MSE, multiscale fuzzy entropy, MMSE, and multivariate multiscale fuzzy entropy by analyzing simulation data. Finally, a new fault diagnosis method for rolling bearings was proposed, based on RCMMFE for fault feature extraction, the Laplacian score, and a particle swarm optimization support vector machine (PSO-SVM) for automatic fault mode identification. The proposed method was compared with existing methods by analyzing experimental data, and the results indicate its effectiveness and superiority.
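
To illustrate the "refined composite" idea that distinguishes RCMMFE from plain MMSE, here is a hedged Python sketch of the shifted coarse-graining step; note that the full algorithm averages the fuzzy-entropy match counts across the shifted series (not the entropies themselves, as this simplification does) and operates on multivariate embeddings:

    import numpy as np

    def shifted_coarse_grainings(x, tau):
        """All tau coarse-grained series obtained from shifted starting points."""
        x = np.asarray(x, float)
        out = []
        for k in range(tau):
            seg = x[k:]
            n = len(seg) // tau
            out.append(seg[:n * tau].reshape(n, tau).mean(axis=1))
        return out

    def refined_composite_entropy(x, tau, entropy_fn):
        """Simplified refined-composite estimate at scale tau: average an
        entropy measure over the tau shifted coarse-grainings to reduce the
        variance of the single-grid estimate."""
        return float(np.mean([entropy_fn(y) for y in shifted_coarse_grainings(x, tau)]))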

]]>Entropy doi: 10.3390/e19110584

Authors: Masa Tsuchiya Alessandro Giuliani Kenichi Yoshikawa

Our previous work on the temporal development of the genome-expression profile in the single-cell early mouse embryo indicated that reprogramming occurs via a critical transition state, where the critical-regulation pattern of the zygote state disappears. In this report, we unveil the detailed mechanism of how the dynamic interaction of thermodynamic states (critical states) enables the genome system to pass through the critical transition state to achieve genome reprogramming right after the late 2-cell state. Self-organized criticality (SOC) control of overall expression provides a snapshot of self-organization and explains the coexistence of critical states at a certain experimental time point. The time-development of self-organization is dynamically modulated by changes in expression flux between critical states through the cell nucleus milieu, where sequential global perturbations involving activation-inhibition of multiple critical states occur from the middle 2-cell to the 4-cell state. Two cyclic fluxes act as feedback flow and generate critical-state coherent oscillatory dynamics. Dynamic perturbation of these cyclic flows due to vivid activation of the ensemble of low-variance expression (sub-critical state) genes allows the genome system to overcome a transition state during reprogramming. Our findings imply that a universal mechanism of long-term global RNA oscillation underlies autonomous SOC control, and the critical gene ensemble at a critical point (CP) drives genome reprogramming. Identification of the corresponding molecular players will be essential for understanding single-cell reprogramming.

]]>Entropy doi: 10.3390/e19110586

Authors: Hyeji Kim Weihao Gao Sreeram Kannan Sewoong Oh Pramod Viswanath

Discovering a correlation from one variable to another is of fundamental scientific and practical interest. While existing correlation measures are suitable for discovering average correlation, they fail to discover hidden or potential correlations. To bridge this gap, (i) we postulate a set of natural axioms that we expect a measure of potential correlation to satisfy; (ii) we show that the rate of information bottleneck, i.e., the hypercontractivity coefficient, satisfies all the proposed axioms; (iii) we provide a novel estimator to estimate the hypercontractivity coefficient from samples; and (iv) we provide numerical experiments demonstrating that the proposed estimator discovers potential correlations among various indicators of WHO datasets, is robust in discovering gene interactions from gene-expression time series data, and is statistically more powerful than the estimators for other correlation measures in binary hypothesis testing of canonical examples of potential correlations.
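
For reference, the hypercontractivity coefficient admits the following standard information-theoretic characterization in LaTeX (our transcription of the usual definition, not necessarily the paper's notation):

    s^{*}(X;Y) \;=\; \sup_{U:\,U - X - Y} \frac{I(U;Y)}{I(U;X)},

where the supremum runs over all random variables U forming the Markov chain U – X – Y. Intuitively, a large s* indicates that some, possibly rare, aspect of X is strongly informative about Y even when the average correlation between X and Y is weak.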

]]>