Table of Contents

Entropy, Volume 19, Issue 12 (December 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Displaying articles 1-51

Editorial


Open Access Editorial: Second-Law Analysis: A Powerful Tool for Analyzing Computational Fluid Dynamics (CFD) Results
Entropy 2017, 19(12), 679; doi:10.3390/e19120679
Received: 1 December 2017 / Revised: 1 December 2017 / Accepted: 8 December 2017 / Published: 11 December 2017
PDF Full-text (176 KB) | HTML Full-text | XML Full-text
Abstract
Second-law analysis (SLA) is an important concept in thermodynamics, which basically assesses energy by its value in terms of its convertibility from one form to another. [...] Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)

Research


Open Access Article: Bayesian Inference of Ecological Interactions from Spatial Data
Entropy 2017, 19(12), 547; doi:10.3390/e19120547
Received: 4 September 2017 / Revised: 6 October 2017 / Accepted: 12 October 2017 / Published: 25 November 2017
PDF Full-text (2049 KB) | HTML Full-text | XML Full-text
Abstract
The characterization and quantification of ecological interactions and the construction of species’ distributions and their associated ecological niches are of fundamental theoretical and practical importance. In this paper, we discuss a Bayesian inference framework, which, using spatial data, offers a general formalism within which ecological interactions may be characterized and quantified. Interactions are identified through deviations of the spatial distribution of co-occurrences of spatial variables relative to a benchmark for the non-interacting system and based on a statistical ensemble of spatial cells. The formalism allows for the integration of both biotic and abiotic factors of arbitrary resolution. We concentrate on the conceptual and mathematical underpinnings of the formalism, showing how, using the naive Bayes approximation, it can be used to not only compare and contrast the relative contribution from each variable, but also to construct species’ distributions and ecological niches based on an arbitrary variable type. We also show how non-linear interactions between distinct niche variables can be identified and the degree of confounding between variables accounted for. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
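
As a rough illustration of the idea described in this abstract (scoring species co-occurrences against a benchmark for a non-interacting system over an ensemble of spatial cells), here is a minimal sketch; the binomial benchmark and the toy presence/absence data are illustrative assumptions, not the authors' exact formalism.

```python
import numpy as np

def cooccurrence_deviation(pres_x, pres_y):
    """Deviation of the observed co-occurrence count of two binary spatial
    variables from its expectation under independence, in units of the
    binomial standard deviation (the 'non-interacting' benchmark)."""
    pres_x = np.asarray(pres_x, dtype=bool)
    pres_y = np.asarray(pres_y, dtype=bool)
    n_cells = pres_x.size
    n_xy = np.sum(pres_x & pres_y)              # observed co-occurrences
    p_indep = pres_x.mean() * pres_y.mean()     # expected fraction if independent
    expected = n_cells * p_indep
    sd = np.sqrt(n_cells * p_indep * (1.0 - p_indep))
    return (n_xy - expected) / sd if sd > 0 else 0.0

# Toy example: species y tends to co-occur with species x over 1000 grid cells
rng = np.random.default_rng(0)
x = rng.random(1000) < 0.3
y = x & (rng.random(1000) < 0.8)
print(cooccurrence_deviation(x, y))   # strongly positive: suggests association
```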

Open Access Article: Group Sparsity and Graph Regularized Semi-Nonnegative Matrix Factorization with Discriminability for Data Representation
Entropy 2017, 19(12), 627; doi:10.3390/e19120627
Received: 14 September 2017 / Revised: 6 November 2017 / Accepted: 13 November 2017 / Published: 27 November 2017
PDF Full-text (829 KB) | HTML Full-text | XML Full-text
Abstract
Semi-Nonnegative Matrix Factorization (Semi-NMF), as a variant of NMF, inherits the merit of parts-based representation of NMF and possesses the ability to process mixed-sign data, which has attracted extensive attention. However, standard Semi-NMF still suffers from the following limitations. First of all, Semi-NMF fits data in a Euclidean space, which ignores the geometrical structure in the data. Moreover, Semi-NMF does not incorporate discriminative information in the learned subspace. Last but not least, the learned basis in Semi-NMF is not necessarily parts-based, because there are no explicit constraints to ensure that the representation is parts-based. To settle these issues, in this paper, we propose a novel Semi-NMF algorithm, called Group sparsity and Graph regularized Semi-Nonnegative Matrix Factorization with Discriminability (GGSemi-NMFD), to overcome the aforementioned problems. GGSemi-NMFD adds a graph regularization term to Semi-NMF, which can well preserve the local geometrical information of the data space. To obtain discriminative information, approximate orthogonality constraints are added in the learned subspace. In addition, ℓ2,1-norm constraints are adopted for the basis matrix, which encourage the basis matrix to be row sparse. Experimental results on six datasets demonstrate the effectiveness of the proposed algorithm. Full article
(This article belongs to the Section Information Theory)
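
The abstract describes the new objective only in words. As a rough illustration, a Semi-NMF objective of the kind sketched there (graph regularization on the representation, an ℓ2,1 row-sparsity penalty on the basis, and an approximate orthogonality term) could be written as below; the trade-off weights and the exact placement of the constraints are assumptions for illustration, not the authors' formulation.

```latex
\min_{\mathbf{F},\;\mathbf{G}\ge 0}\;
  \left\| \mathbf{X}-\mathbf{F}\mathbf{G}^{\top} \right\|_F^2
  + \lambda\,\operatorname{tr}\!\left(\mathbf{G}^{\top}\mathbf{L}\mathbf{G}\right)
  + \mu\,\left\| \mathbf{F} \right\|_{2,1}
  + \gamma\,\left\| \mathbf{G}^{\top}\mathbf{G}-\mathbf{I} \right\|_F^2
```

Here X is the mixed-sign data matrix, F the unconstrained basis, G ≥ 0 the representation, and L the graph Laplacian of a nearest-neighbor graph over the samples.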

Open Access Article: Fault Diagnosis of Rolling Bearings Based on EWT and KDEC
Entropy 2017, 19(12), 633; doi:10.3390/e19120633
Received: 7 September 2017 / Revised: 6 November 2017 / Accepted: 20 November 2017 / Published: 5 December 2017
PDF Full-text (3839 KB) | HTML Full-text | XML Full-text
Abstract
This study proposes a novel fault diagnosis method based on the empirical wavelet transform (EWT) and a kernel density estimation classifier (KDEC), which can effectively diagnose the fault type of rolling element bearings. With the proposed fault diagnosis method, the vibration signal of a rolling element bearing was first decomposed into a series of F modes by EWT, and the root mean square, kurtosis, and skewness of the F modes were computed and combined into the feature vector. According to the characteristics of kernel density estimation, a classifier based on kernel density estimation and mutual information was proposed. Then, the feature vectors were input into the KDEC for training and testing. The experimental results indicated that the proposed method can effectively identify three different operative conditions of rolling element bearings, and that the accuracy rate was higher than that of the support vector machine (SVM) classifier and the back-propagation (BP) neural network classifier. Full article
(This article belongs to the Section Information Theory)
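
The abstract spells out a concrete pipeline: decompose the vibration signal, compute the RMS, kurtosis and skewness of each mode, and classify the resulting feature vectors with a kernel-density-estimation classifier. A minimal sketch of the last two stages follows; the EWT decomposition itself is assumed to have been done already (third-party packages exist for it), and the mutual-information weighting used in the paper's KDEC is omitted.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.neighbors import KernelDensity

def mode_features(modes):
    """RMS, kurtosis and skewness of each decomposed mode, concatenated into
    one feature vector (the features named in the abstract)."""
    feats = []
    for m in modes:
        feats += [np.sqrt(np.mean(m ** 2)), kurtosis(m), skew(m)]
    return np.array(feats)

class ToyKDEClassifier:
    """Fit one kernel density estimate per fault class and assign a test
    vector to the class with the highest log-density."""
    def __init__(self, bandwidth=0.5):
        self.bandwidth = bandwidth
        self.models = {}

    def fit(self, X, y):
        # X: (n_samples, n_features) array, y: class labels
        for label in np.unique(y):
            self.models[label] = KernelDensity(bandwidth=self.bandwidth).fit(X[y == label])
        return self

    def predict(self, X):
        labels = list(self.models)
        scores = np.column_stack([self.models[l].score_samples(X) for l in labels])
        return np.array(labels)[np.argmax(scores, axis=1)]
```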

Open Access Article: An Extension to the Revised Approach in the Assessment of Informational Entropy
Entropy 2017, 19(12), 634; doi:10.3390/e19120634
Received: 29 September 2017 / Revised: 12 November 2017 / Accepted: 20 November 2017 / Published: 29 November 2017
PDF Full-text (3560 KB) | HTML Full-text | XML Full-text
Abstract
This study attempts to extend the prevailing definition of informational entropy, where entropy relates to the amount of reduction of uncertainty or, indirectly, to the amount of information gained through measurements of a random variable. The approach adopted herein describes informational entropy not as an absolute measure of information, but as a measure of the variation of information. This makes it possible to obtain a single value for informational entropy, instead of several values that vary with the selection of the discretizing interval, when discrete probabilities of hydrological events are estimated through relative class frequencies and discretizing intervals. Furthermore, the present work introduces confidence limits for the informational entropy function, which facilitates a comparison between the uncertainties of various hydrological processes with different scales of magnitude and different probability structures. The work addresses hydrologists and environmental engineers more than it does mathematicians and statisticians. In particular, it is intended to help solve information-related problems in hydrological monitoring design and assessment. This paper first considers the selection of probability distributions of best fit to hydrological data, using generated synthetic time series. Next, it attempts to assess hydrometric monitoring duration in a network, this time using observed runoff data series. In both applications, it focuses, basically, on the theoretical background for the extended definition of informational entropy. The methodology is shown to give valid results in each case. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)

Open Access Feature Paper Article: Two-Party Zero-Error Function Computation with Asymmetric Priors
Entropy 2017, 19(12), 635; doi:10.3390/e19120635
Received: 11 July 2017 / Revised: 25 October 2017 / Accepted: 13 November 2017 / Published: 23 November 2017
PDF Full-text (876 KB) | HTML Full-text | XML Full-text
Abstract
We consider a two party network where each party wishes to compute a function of two correlated sources. Each source is observed by one of the parties. The true joint distribution of the sources is known to one party. The other party, on the other hand, assumes a distribution for which the set of source pairs that have a positive probability is only a subset of those that may appear in the true distribution. In that sense, this party has only partial information about the true distribution from which the sources are generated. We study the impact of this asymmetry on the worst-case message length for zero-error function computation, by identifying the conditions under which reconciling the missing information prior to communication is better than not reconciling it but instead using an interactive protocol that ensures zero-error communication without reconciliation. Accordingly, we provide upper and lower bounds on the minimum worst-case message length for the communication strategies with and without reconciliation. Through specializing the proposed model to certain distribution classes, we show that partially reconciling the true distribution by allowing a certain degree of ambiguity can perform better than the strategies with perfect reconciliation as well as strategies that do not start with an explicit reconciliation step. As such, our results demonstrate a tradeoff between the reconciliation and communication rates, and that the worst-case message length is a result of the interplay between the two factors. Full article
(This article belongs to the Special Issue Network Information Theory)

Open Access Article: Analyzing Information Distribution in Complex Systems
Entropy 2017, 19(12), 636; doi:10.3390/e19120636
Received: 21 July 2017 / Revised: 27 September 2017 / Accepted: 1 November 2017 / Published: 24 November 2017
PDF Full-text (1379 KB) | HTML Full-text | XML Full-text
Abstract
Information theory is often utilized to capture both linear as well as nonlinear relationships between any two parts of a dynamical complex system. Recently, an extension to classical information theory called partial information decomposition has been developed, which allows one to partition the information that two subsystems have about a third one into unique, redundant and synergistic contributions. Here, we apply a recent estimator of partial information decomposition to characterize the dynamics of two different complex systems. First, we analyze the distribution of information in triplets of spins in the 2D Ising model as a function of temperature. We find that while redundant information obtains a maximum at the critical point, synergistic information peaks in the disorder phase. Secondly, we characterize 1D elementary cellular automata rules based on the information distribution between neighboring cells. We describe several clusters of rules with similar partial information decomposition. These examples illustrate how the partial information decomposition provides a characterization of the emergent dynamics of complex systems in terms of the information distributed across their interacting units. Full article

Open Access Article: How Successful Are Wavelets in Detecting Jumps?
Entropy 2017, 19(12), 638; doi:10.3390/e19120638
Received: 30 October 2017 / Revised: 22 November 2017 / Accepted: 22 November 2017 / Published: 25 November 2017
PDF Full-text (284 KB) | HTML Full-text | XML Full-text
Abstract
We evaluate the performances of wavelet jump detection tests by using simulated high-frequency data, in which jumps and some other non-standard features are present. Wavelet-based jump detection tests have a clear advantage over the alternatives, as they are capable of stating the exact timing and number of jumps. The results indicate that, in addition to those advantages, these detection tests also preserve desirable power and size properties even in non-standard data environments, whereas their alternatives fail to sustain their desirable properties beyond standard data features. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Open Access Article: Magnetic Engine for the Single-Particle Landau Problem
Entropy 2017, 19(12), 639; doi:10.3390/e19120639
Received: 30 September 2017 / Revised: 21 November 2017 / Accepted: 22 November 2017 / Published: 25 November 2017
PDF Full-text (593 KB) | HTML Full-text | XML Full-text
Abstract
We study the effect of the degeneracy factor in the energy levels of the well-known Landau problem for a magnetic engine. The scheme of the cycle is composed of two adiabatic processes and two isomagnetic processes, driven by a quasi-static modulation of external magnetic field intensity. We derive the analytical expression of the relation between the magnetic field and temperature along the adiabatic process and, in particular, reproduce the expression for the efficiency as a function of the compression ratio. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)

Open Access Article: Information Theory to Probe Intrapartum Fetal Heart Rate Dynamics
Entropy 2017, 19(12), 640; doi:10.3390/e19120640
Received: 18 October 2017 / Revised: 21 November 2017 / Accepted: 21 November 2017 / Published: 25 November 2017
PDF Full-text (680 KB) | HTML Full-text | XML Full-text
Abstract
Intrapartum fetal heart rate (FHR) monitoring constitutes a reference tool in clinical practice to assess the baby’s health status and to detect fetal acidosis. It is usually analyzed by visual inspection grounded on FIGO criteria. Characterization of intrapartum fetal heart rate temporal dynamics remains a challenging task and continuously receives academic research efforts. Complexity measures, often implemented with tools referred to as approximate entropy (ApEn) or sample entropy (SampEn), have regularly been reported as significant features for intrapartum FHR analysis. We explore how information theory, and especially auto-mutual information (AMI), is connected to ApEn and SampEn and can be used to probe FHR dynamics. Applied to a large (1404 subjects) and documented database of FHR data, collected in a French academic hospital, it is shown that (i) auto-mutual information outperforms ApEn and SampEn for acidosis detection in the first stage of labor and continues to yield the best performance in the second stage; (ii) Shannon entropy increases as labor progresses and is always much larger in the second stage; (iii) babies suffering from fetal acidosis additionally show more structured temporal dynamics than healthy ones and that this progressive structuration can be used for early acidosis detection. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics II)
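
Since auto-mutual information (AMI) is the central quantity here, a minimal histogram-based AMI estimator is sketched below; the bin count, the lag and the synthetic RR-interval series are illustrative assumptions, not the estimator used in the study.

```python
import numpy as np

def auto_mutual_information(x, lag, bins=16):
    """Histogram estimate (bits) of the mutual information between x(t) and
    x(t + lag); a crude way to probe temporal structure in an RR series."""
    a, b = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Example: AMI at a one-beat lag for a synthetic, slowly drifting RR series
rng = np.random.default_rng(1)
rr = 0.8 + 0.0005 * np.cumsum(rng.standard_normal(5000))
print(auto_mutual_information(rr, lag=1))
```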

Open Access Feature Paper Article: On Maximum Entropy and Inference
Entropy 2017, 19(12), 642; doi:10.3390/e19120642
Received: 9 October 2017 / Revised: 18 November 2017 / Accepted: 20 November 2017 / Published: 28 November 2017
PDF Full-text (439 KB) | HTML Full-text | XML Full-text
Abstract
Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data, that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Open Access Article: Diagnosis of Combined Cycle Power Plant Based on Thermoeconomic Analysis: A Computer Simulation Study
Entropy 2017, 19(12), 643; doi:10.3390/e19120643
Received: 12 September 2017 / Revised: 16 November 2017 / Accepted: 25 November 2017 / Published: 28 November 2017
PDF Full-text (3958 KB) | HTML Full-text | XML Full-text
Abstract
In this study, diagnosis of a 300-MW combined cycle power plant under faulty conditions was performed using a thermoeconomic method called modified productive structure analysis. The malfunction and dysfunction, unit cost of irreversibility and lost cost flow rate for each component were calculated for the cases of pre-fixed malfunction and the reference conditions. A commercial simulating software, GateCycleTM (version 6.1.2), was used to estimate the thermodynamic properties under faulty conditions. The relative malfunction (RMF) and the relative difference in the lost cost flow rate between real operation and reference conditions (RDLC) were found to be effective indicators for the identification of faulty components. Simulation results revealed that 0.5% degradation in the isentropic efficiency of air compressor, 2% in gas turbine, 2% in steam turbine and 2% degradation in energy loss in heat exchangers can be identified. Multi-fault scenarios that can be detected by the indicators were also considered. Additional lost exergy due to these types of faulty components, that can be detected by RMF or RDLC, is less than 5% of the exergy lost in the components in the normal condition. Full article
(This article belongs to the Section Thermodynamics)

Open Access Article: Entropic Constitutive Relation and Modeling for Fourier and Hyperbolic Heat Conductions
Entropy 2017, 19(12), 644; doi:10.3390/e19120644
Received: 16 September 2017 / Revised: 23 November 2017 / Accepted: 27 November 2017 / Published: 1 December 2017
PDF Full-text (279 KB) | HTML Full-text | XML Full-text
Abstract
Most existing phenomenological heat conduction models are expressed by temperature and heat flux distributions, whose definitions might be debatable in heat conductions with strong non-equilibrium. The constitutive relations of Fourier and hyperbolic heat conductions are here rewritten by the entropy and entropy flux distributions in the frameworks of classical irreversible thermodynamics (CIT) and extended irreversible thermodynamics (EIT). The entropic constitutive relations are then generalized by Boltzmann–Gibbs–Shannon (BGS) statistical mechanics, which can avoid the debatable definitions of thermodynamic quantities relying on local equilibrium. It shows a possibility of modeling heat conduction through entropic constitutive relations. The applicability of the generalizations by BGS statistical mechanics is also discussed based on the relaxation time approximation, and it is found that the generalizations require a sufficiently small entropy production rate. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
Open Access Article: Quantum Information: What Is It All About?
Entropy 2017, 19(12), 645; doi:10.3390/e19120645
Received: 23 October 2017 / Revised: 16 November 2017 / Accepted: 22 November 2017 / Published: 29 November 2017
PDF Full-text (211 KB) | HTML Full-text | XML Full-text
Abstract
This paper answers Bell’s question: What does quantum information refer to? It is about quantum properties represented by subspaces of the quantum Hilbert space, or their projectors, to which standard (Kolmogorov) probabilities can be assigned by using a projective decomposition of the identity (PDI or framework) as a quantum sample space. The single framework rule of consistent histories prevents paradoxes or contradictions. When only one framework is employed, classical (Shannon) information theory can be imported unchanged into the quantum domain. A particular case is the macroscopic world of classical physics whose quantum description needs only a single quasiclassical framework. Nontrivial issues unique to quantum information, those with no classical analog, arise when aspects of two or more incompatible frameworks are compared. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open Access Article: Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories
Entropy 2017, 19(12), 646; doi:10.3390/e19120646
Received: 10 October 2017 / Revised: 20 November 2017 / Accepted: 23 November 2017 / Published: 29 November 2017
PDF Full-text (3738 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα-atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include Jensen-Shannon, modifications of Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate by using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between Cα-atom pairs. Results from several implementations for quantitative comparison of a pair of MD trajectories are made based on fluctuations for on-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins as evident from extraordinarily low p-values. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)
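
As a small illustration of the kind of comparison described (a distributional distance plus a Kolmogorov-Smirnov test between fluctuation samples from two trajectories), the sketch below uses an ordinary histogram rather than the nonparametric maximum entropy density estimate employed in the paper.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

def compare_fluctuations(flucts_a, flucts_b, bins=50):
    """Compare two samples of fluctuations (e.g. per-residue values from two
    MD trajectories) with a histogram-based Jensen-Shannon distance and a
    two-sample Kolmogorov-Smirnov test."""
    lo = min(flucts_a.min(), flucts_b.min())
    hi = max(flucts_a.max(), flucts_b.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(flucts_a, bins=edges, density=True)
    q, _ = np.histogram(flucts_b, bins=edges, density=True)
    js = jensenshannon(p, q)              # square root of the JS divergence
    ks_stat, p_value = ks_2samp(flucts_a, flucts_b)
    return js, ks_stat, p_value
```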

Open Access Article: Langevin Dynamics with Variable Coefficients and Nonconservative Forces: From Stationary States to Numerical Methods
Entropy 2017, 19(12), 647; doi:10.3390/e19120647
Received: 27 September 2017 / Revised: 20 November 2017 / Accepted: 22 November 2017 / Published: 29 November 2017
PDF Full-text (702 KB) | HTML Full-text | XML Full-text
Abstract
Langevin dynamics is a versatile stochastic model used in biology, chemistry, engineering, physics and computer science. Traditionally, in thermal equilibrium, one assumes (i) the forces are given as the gradient of a potential and (ii) a fluctuation-dissipation relation holds between stochastic and dissipative forces; these assumptions ensure that the system samples a prescribed invariant Gibbs-Boltzmann distribution for a specified target temperature. In this article, we relax these assumptions, incorporating variable friction and temperature parameters and allowing nonconservative force fields, for which the form of the stationary state is typically not known a priori. We examine theoretical issues such as stability of the steady state and ergodic properties, as well as practical aspects such as the design of numerical methods for stochastic particle models. Applications to nonequilibrium systems with thermal gradients and active particles are discussed. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)

Open Access Article: Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise
Entropy 2017, 19(12), 648; doi:10.3390/e19120648
Received: 15 November 2017 / Revised: 25 November 2017 / Accepted: 26 November 2017 / Published: 29 November 2017
PDF Full-text (1847 KB) | HTML Full-text | XML Full-text
Abstract
As one of the most critical issues in target tracking, the α-jerk model is an effective maneuvering-target tracking model. Non-Gaussian noises always exist in the tracking process, which usually lead to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter with weighted least squares based on the maximum correntropy criterion is then deduced. The robustness of the maximum correntropy criterion is also analyzed with the influence function and compared with the Huber-based filter; moreover, the kernel size of the Gaussian kernel plays an important role in the filter algorithm. A new adaptive kernel method is proposed in this paper to adjust the parameter in real time. Finally, simulation results indicate the validity and the efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article: Entropy-Based Economic Denial of Sustainability Detection
Entropy 2017, 19(12), 649; doi:10.3390/e19120649
Received: 11 November 2017 / Revised: 23 November 2017 / Accepted: 27 November 2017 / Published: 29 November 2017
PDF Full-text (1242 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, an important increase in the amount and impact of Distributed Denial of Service (DDoS) threats has been reported by the different information security organizations. They typically target the depletion of the computational resources of the victims, hence drastically harming their operational capabilities. Inspired by these methods, Economic Denial of Sustainability (EDoS) attacks pose a similar motivation, but adapted to Cloud computing environments, where the denial is achieved by damaging the economy of both suppliers and customers. Therefore, the most common EDoS approach is making the offered services unsustainable by exploiting their auto-scaling algorithms. In order to contribute to their mitigation, this paper introduces a novel EDoS detection method based on the study of entropy variations related to metrics taken into account when deciding auto-scaling actuations. Through the prediction and definition of adaptive thresholds, unexpected behaviors capable of fraudulently demanding the hiring of new resources are distinguished. To demonstrate the effectiveness of the proposal, an experimental scenario adapted to the singularities of EDoS threats and the assumptions driven by their original definition is described in depth. The preliminary results showed high accuracy. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
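
A minimal sketch of the general idea (the Shannon entropy of a monitored metric per observation window, checked against a simple adaptive threshold) is given below; the running-statistics threshold and the window representation are illustrative assumptions, not the detector proposed in the paper.

```python
import numpy as np

def window_entropy(values, bins=10):
    """Shannon entropy (bits) of the empirical distribution of a metric
    (e.g. requests per client) inside one observation window."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def entropy_alerts(windows, k=3.0, warmup=5):
    """Flag windows whose entropy deviates from the running mean of the
    previous windows by more than k running standard deviations."""
    ent = np.array([window_entropy(w) for w in windows])
    flagged = []
    for i in range(warmup, len(ent)):
        mu, sd = ent[:i].mean(), ent[:i].std()
        if sd > 0 and abs(ent[i] - mu) > k * sd:
            flagged.append(i)
    return ent, flagged
```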

Open Access Article: On the Deposition Equilibrium of Carbon Nanotubes or Graphite in the Reforming Processes of Lower Hydrocarbon Fuels
Entropy 2017, 19(12), 650; doi:10.3390/e19120650
Received: 9 October 2017 / Revised: 25 November 2017 / Accepted: 27 November 2017 / Published: 30 November 2017
PDF Full-text (2598 KB) | HTML Full-text | XML Full-text
Abstract
The modeling of carbon deposition from C-H-O reformates has usually employed thermodynamic data for graphite, but has rarely employed such data for impure filamentous carbon. Therefore, electrochemical data from the literature on the chemical potential of two types of purified carbon nanotubes (CNTs) are included in the study. Parameter values determining the thermodynamic equilibrium of the deposition of either graphite or CNTs are computed for dry and wet reformates from natural gas and liquefied petroleum gas. The calculation results are presented as the atomic oxygen-to-carbon ratio (O/C) against temperature (200 to 100 °C) for various pressures (1 to 30 bar). Areas of O/C corresponding to either carbon deposition or deposition-free conditions are computed, and indicate the critical O/C values below which the deposition can occur. Only three types of deposited carbon were found in the studied equilibrium conditions: graphite, multi-walled CNTs, and single-walled CNTs in bundles. The temperature regions of the appearance of the thermodynamically stable forms of solid carbon are numerically determined as being independent of pressure and the analyzed reactants. The modeling indicates a significant increase in the critical O/C for the deposition of CNTs against that for graphite. The highest rise in the critical O/C, of up to 290% at 30 bar, was found for the wet reforming process. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)

Open Access Article: K-Dependence Bayesian Classifier Ensemble
Entropy 2017, 19(12), 651; doi:10.3390/e19120651
Received: 6 September 2017 / Revised: 23 November 2017 / Accepted: 27 November 2017 / Published: 30 November 2017
PDF Full-text (4203 KB) | HTML Full-text | XML Full-text
Abstract
To maximize the benefit that can be derived from the information implicit in big data, ensemble methods generate multiple models with sufficient diversity through randomization or perturbation. A k-dependence Bayesian classifier (KDB) is a highly scalable learning algorithm with excellent time and space complexity, along with high expressivity. This paper introduces a new ensemble approach of KDBs, a k-dependence forest (KDF), which induces a specific attribute order and conditional dependencies between attributes for each subclassifier. We demonstrate that these subclassifiers are diverse and complementary. Our extensive experimental evaluation on 40 datasets reveals that this ensemble method achieves better classification performance than state-of-the-art out-of-core ensemble learners such as the AODE (averaged one-dependence estimator) and averaged tree-augmented naive Bayes (ATAN). Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)

Open Access Feature Paper Article: Cosine Similarity Entropy: Self-Correlation-Based Complexity Analysis of Dynamical Systems
Entropy 2017, 19(12), 652; doi:10.3390/e19120652
Received: 24 October 2017 / Revised: 20 November 2017 / Accepted: 27 November 2017 / Published: 30 November 2017
PDF Full-text (2018 KB) | HTML Full-text | XML Full-text
Abstract
The nonparametric Sample Entropy (SE) estimator has become a standard for the quantification of structural complexity of nonstationary time series, even in critical cases of unfavorable noise levels. The SE has proven very successful for signals that exhibit a certain degree of underlying structure, but do not obey standard probability distributions, a typical case in real-world scenarios such as with physiological signals. However, the SE estimates structural complexity based on uncertainty rather than on (self) correlation, so that, for reliable estimation, the SE requires long data segments, is sensitive to spikes and erratic peaks in data, and, owing to its amplitude dependence, it exhibits a lack of precision for signals with long-term correlations. To this end, we propose a class of new entropy estimators based on the similarity of embedding vectors, evaluated through the angular distance, the Shannon entropy and the coarse-grained scale. Analysis of the effects of embedding dimension, sample size and tolerance shows that the so-introduced Cosine Similarity Entropy (CSE) and the enhanced Multiscale Cosine Similarity Entropy (MCSE) are amplitude-independent and therefore superior to the SE when applied to short time series. Unlike the SE, the CSE is shown to yield valid entropy values over a broad range of embedding dimensions. By evaluating the CSE and the MCSE over a variety of benchmark synthetic signals as well as for real-world data (heart rate variability in three different cardiovascular pathologies), the proposed algorithms are demonstrated to be able to quantify degrees of structural complexity in the context of self-correlation over small to large temporal scales, thus offering physically meaningful interpretations and rigor in understanding the intrinsic properties of the structural complexity of a system, such as the number of its degrees of freedom. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
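
A rough sketch of the angular-distance idea behind CSE is given below; the preprocessing, normalizations and the multiscale (coarse-graining) step of the published algorithm are not reproduced, so this should be read only as an illustration of the concept.

```python
import numpy as np

def cosine_similarity_entropy_sketch(x, m=3, r=0.1):
    """Embed the series in dimension m, measure the angular distance between
    embedding vectors, estimate the probability that a pair is 'similar'
    (angular distance < r), and return the binary Shannon entropy of that
    probability."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    emb = np.array([x[i:i + m] for i in range(n)])
    emb = emb - emb.mean(axis=1, keepdims=True)       # one possible offset removal
    unit = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    ang = np.arccos(np.clip(unit @ unit.T, -1.0, 1.0)) / np.pi   # angular distance in [0, 1]
    iu = np.triu_indices(n, k=1)
    p = np.mean(ang[iu] < r)                          # fraction of similar pairs
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))
```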

Open Access Article: Coherent Processing of a Qubit Using One Squeezed State
Entropy 2017, 19(12), 653; doi:10.3390/e19120653
Received: 5 October 2017 / Revised: 19 November 2017 / Accepted: 22 November 2017 / Published: 30 November 2017
PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
In a departure from most work in quantum information utilizing Gaussian states, we use a single such state to represent a qubit and model environmental noise with a class of quadratic dissipative equations. A benefit of this single Gaussian representation is that with one deconvolution, we can eliminate noise. In this deconvolution picture, a basis of squeezed states evolves to another basis of such states. One of the limitations of our approach is that noise is eliminated only at a privileged time. We suggest that this limitation may actually be used advantageously to send information securely: the privileged time is only known to the sender and the receiver, and any intruder accessing the information at any other time encounters noisy data. Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)
Open Access Article: Entropy Parameter M in Modeling a Flow Duration Curve
Entropy 2017, 19(12), 654; doi:10.3390/e19120654
Received: 20 September 2017 / Revised: 28 November 2017 / Accepted: 30 November 2017 / Published: 1 December 2017
PDF Full-text (4323 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
A flow duration curve (FDC) is widely used for predicting water supply, hydropower, environmental flow, sediment load, and pollutant load. Among different methods of constructing an FDC, the entropy-based method, developed recently, is appealing because of its several desirable characteristics, such as simplicity, flexibility, and statistical basis. This method contains a parameter, called entropy parameter M, which constitutes the basis for constructing the FDC. Since M is related to the ratio of the average streamflow to the maximum streamflow which, in turn, is related to the drainage area, it may be possible to determine M a priori and construct an FDC for ungauged basins. This paper, therefore, analyzed the characteristics of M in both space and time using streamflow data from 73 gauging stations in the Brazos River basin, Texas, USA. Results showed that the M values were impacted by reservoir operation and possibly climate change. The values were fluctuating, but relatively stable, after the operation of the reservoirs. Parameter M was found to change inversely with the ratio of average streamflow to the maximum streamflow. When there was an extreme event, there occurred a jump in the M value. Further, spatially, M had a larger value if the drainage area was small. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)

Open Access Article: Bayesian Nonlinear Filtering via Information Geometric Optimization
Entropy 2017, 19(12), 655; doi:10.3390/e19120655
Received: 17 October 2017 / Revised: 16 November 2017 / Accepted: 29 November 2017 / Published: 1 December 2017
PDF Full-text (1101 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, Bayesian nonlinear filtering is considered from the viewpoint of information geometry and a novel filtering method is proposed based on information geometric optimization. Under the Bayesian filtering framework, we derive a relationship between the nonlinear characteristics of filtering and the metric tensor of the corresponding statistical manifold. Bayesian joint distributions are used to construct the statistical manifold. In this case, nonlinear filtering can be converted to an optimization problem on the statistical manifold and the adaptive natural gradient descent method is used to seek the optimal estimate. The proposed method provides a general filtering formulation and the Kalman filter, the Extended Kalman filter (EKF) and the Iterated Extended Kalman filter (IEKF) can be seen as special cases of this formulation. The performance of the proposed method is evaluated on a passive target tracking problem and the results demonstrate the superiority of the proposed method compared to various Kalman filter methods. Full article
(This article belongs to the Special Issue Radar and Information Theory)

Open Access Article: Context-Aware Generative Adversarial Privacy
Entropy 2017, 19(12), 656; doi:10.3390/e19120656
Received: 12 October 2017 / Revised: 21 November 2017 / Accepted: 22 November 2017 / Published: 1 December 2017
PDF Full-text (2308 KB) | HTML Full-text | XML Full-text
Abstract
Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP’s performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)

Open Access Article: Properties of Risk Measures of Generalized Entropy in Portfolio Selection
Entropy 2017, 19(12), 657; doi:10.3390/e19120657
Received: 21 September 2017 / Revised: 20 November 2017 / Accepted: 30 November 2017 / Published: 1 December 2017
PDF Full-text (2352 KB) | HTML Full-text | XML Full-text
Abstract
This paper systematically investigates the properties of six kinds of entropy-based risk measures: Information Entropy and Cumulative Residual Entropy in the probability space, Fuzzy Entropy, Credibility Entropy and Sine Entropy in the fuzzy space, and Hybrid Entropy in the hybridized uncertainty of both fuzziness and randomness. We discover that none of the risk measures satisfy all six of the following properties, which various scholars have associated with effective risk measures: Monotonicity, Translation Invariance, Sub-additivity, Positive Homogeneity, Consistency and Convexity. Measures based on Fuzzy Entropy, Credibility Entropy, and Sine Entropy all exhibit the same properties: Sub-additivity, Positive Homogeneity, Consistency, and Convexity. These measures based on Information Entropy and Hybrid Entropy, meanwhile, only exhibit Sub-additivity and Consistency. Cumulative Residual Entropy satisfies just Sub-additivity, Positive Homogeneity, and Convexity. After identifying these properties, we develop seven portfolio models based on different risk measures and made empirical comparisons using samples from both the Shenzhen Stock Exchange of China and the New York Stock Exchange of America. The comparisons show that the Mean Fuzzy Entropy Model performs the best among the seven models with respect to both daily returns and relative cumulative returns. Overall, these results could provide an important reference for both constructing effective risk measures and rationally selecting the appropriate risk measure under different portfolio selection conditions. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)

Open Access Article: Assessment of Heart Rate Variability during an Endurance Mountain Trail Race by Multi-Scale Entropy Analysis
Entropy 2017, 19(12), 658; doi:10.3390/e19120658
Received: 20 October 2017 / Revised: 17 November 2017 / Accepted: 27 November 2017 / Published: 1 December 2017
PDF Full-text (2152 KB) | HTML Full-text | XML Full-text
Abstract
The aim of the study was to analyze heart rate variability (HRV) response to high-intensity exercise during a 35-km mountain trail race and to ascertain whether fitness level could influence autonomic nervous system (ANS) modulation. Time-domain, frequency-domain, and multi-scale entropy (MSE) indexes were calculated for eleven mountain-trail runners who completed the race. Many changes were observed, mostly related to exercise load and fatigue. These changes were characterized by increased mean values and standard deviations of the normal-to-normal intervals associated with sympathetic activity, and by decreased differences between successive intervals related to parasympathetic activity. Normalized low frequency (LF) power suggested that ANS modulation varied greatly during the race and between individuals. Normalized high frequency (HF) power, associated with parasympathetic activity, varied considerably over the race, and tended to decrease at the final stages, whereas changes in the LF/HF ratio corresponded to intervals with varying exercise load. MSE indexes, related to system complexity, indicated the existence of many interactions between the heart and its neurological control mechanism. The time-domain, frequency-domain, and MSE indexes were also able to discriminate faster from slower runners, mainly in the more difficult and in the final stages of the race. These findings suggest the use of HRV analysis to study cardiac function mechanisms in endurance sports. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
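
For readers unfamiliar with multi-scale entropy (MSE), a compact, unoptimized sketch of coarse-graining followed by sample entropy is shown below; the parameter choices (m = 2, r = 0.2 of the standard deviation) are common defaults and are assumptions here, not necessarily those used in the study. The pairwise distance matrix makes this quadratic in memory, so it is only suitable for short segments.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Plain SampEn(m, r) estimate; r is a fraction of the series' standard deviation."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(dim):
        emb = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)  # Chebyshev distance
        iu = np.triu_indices(len(emb), k=1)
        return np.sum(d[iu] <= r)

    b, a = match_count(m), match_count(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, max_scale=5):
    """Coarse-grain by non-overlapping averaging and compute SampEn at each scale."""
    out = []
    for tau in range(1, max_scale + 1):
        n = (len(x) // tau) * tau
        coarse = np.asarray(x[:n], dtype=float).reshape(-1, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out
```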

Open Access Article: Quantum Minimum Distance Classifier
Entropy 2017, 19(12), 659; doi:10.3390/e19120659
Received: 15 October 2017 / Revised: 21 November 2017 / Accepted: 29 November 2017 / Published: 1 December 2017
PDF Full-text (1220 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We propose a quantum version of the well-known minimum distance classification model called the Nearest Mean Classifier (NMC). In this regard, we presented our first results in two previous works. First, a quantum counterpart of the NMC for two-dimensional problems was introduced, named the Quantum Nearest Mean Classifier (QNMC), together with a possible generalization to any number of dimensions. Secondly, we studied the n-dimensional problem in detail and showed a new encoding for arbitrary n-feature vectors into density operators. In the present paper, another promising encoding is considered, suggested by recent debates on quantum machine learning. Further, we observe a significant property concerning the non-invariance by feature rescaling of our quantum classifier. This fact, which represents a meaningful difference between the NMC and the respective quantum version, allows us to introduce a free parameter whose variation provides, in some cases, better classification results for the QNMC. The experimental section is devoted: (i) to compare the NMC and QNMC performance on different datasets; and (ii) to study the effects of the non-invariance under uniform rescaling for the QNMC. Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)

Open Access Article: Entropy-Based Investigation on the Precipitation Variability over the Hexi Corridor in China
Entropy 2017, 19(12), 660; doi:10.3390/e19120660
Received: 30 September 2017 / Revised: 23 November 2017 / Accepted: 27 November 2017 / Published: 1 December 2017
PDF Full-text (9534 KB) | HTML Full-text | XML Full-text
Abstract
The spatial and temporal variability of precipitation time series were investigated for the Hexi Corridor, in Northwest China, by analyzing the entropy information. The examinations were performed on monthly, seasonal, and annual timescales based on 29 meteorological stations for the period of 1961–2015. The apportionment entropy and intensity entropy were used to analyze the regional precipitation characteristics, including the intra-annual and decadal distribution of monthly and annual precipitation amounts, as well as the number of precipitation days within a year and a decade. The regions with high precipitation variability are found in the western part of the Hexi Corridor, which receives less precipitation and may therefore have a high possibility of drought occurrence. The variability of the number of precipitation days decreased from the west to the east of the corridor. Higher variability, in terms of both precipitation amount and intensity during the crop-growing season, has been found in the recent decade. In addition, the correlation between entropy-based precipitation variability and crop yield is also examined, and the crop yield in historical periods is found to be correlated with the precipitation intensity disorder index in the middle reaches of the Hexi Corridor. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
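
The apportionment entropy used here has a simple closed form; a minimal sketch, assuming monthly precipitation totals in arbitrary units, is shown below.

```python
import numpy as np

def apportionment_entropy(monthly_precip):
    """Apportionment entropy (bits) of how an annual total is spread over the
    twelve months: log2(12) for a perfectly uniform year, 0 if all the rain
    falls in a single month."""
    r = np.asarray(monthly_precip, dtype=float)
    p = r / r.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A strongly seasonal year versus a uniform year
print(apportionment_entropy([0, 0, 1, 5, 20, 60, 80, 50, 10, 2, 0, 0]))
print(apportionment_entropy([19] * 12))   # = log2(12) ≈ 3.585
```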

Open Access Article: Label-Driven Learning Framework: Towards More Accurate Bayesian Network Classifiers through Discrimination of High-Confidence Labels
Entropy 2017, 19(12), 661; doi:10.3390/e19120661
Received: 31 October 2017 / Revised: 26 November 2017 / Accepted: 30 November 2017 / Published: 3 December 2017
PDF Full-text (4016 KB) | HTML Full-text | XML Full-text
Abstract
Bayesian network classifiers (BNCs) have demonstrated competitive classification accuracy in a variety of real-world applications. However, it is error-prone for BNCs to discriminate among high-confidence labels. To address this issue, we propose the label-driven learning framework, which incorporates instance-based learning and ensemble learning. For each testing instance, high-confidence labels are first selected by a generalist classifier, e.g., the tree-augmented naive Bayes (TAN) classifier. Then, by focusing on these labels, conditional mutual information is redefined to more precisely measure mutual dependence between attributes, thus leading to a refined generalist with a more reasonable network structure. To enable finer discrimination, an expert classifier is tailored for each high-confidence label. Finally, the predictions of the refined generalist and the experts are aggregated. We extend TAN to LTAN (Label-driven TAN) by applying the proposed framework. Extensive experimental results demonstrate that LTAN delivers superior classification accuracy to not only several state-of-the-art single-structure BNCs but also some established ensemble BNCs at the expense of reasonable computation overhead. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
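The refinement step described above hinges on a conditional mutual information between attributes, evaluated only over the high-confidence labels. A minimal plug-in estimate from counts is sketched below; the masking-by-label-set approach and the variable names are illustrative assumptions rather than the paper's exact redefinition.

    import numpy as np
    from collections import Counter

    def conditional_mi(x, y, c, keep_labels):
        """I(X;Y | C) estimated from samples, restricted to instances whose
        class label lies in keep_labels (the 'high-confidence' labels)."""
        data = [(xi, yi, ci) for xi, yi, ci in zip(x, y, c) if ci in keep_labels]
        n = len(data)
        n_xyc = Counter(data)
        n_xc = Counter((xi, ci) for xi, _, ci in data)
        n_yc = Counter((yi, ci) for _, yi, ci in data)
        n_c = Counter(ci for _, _, ci in data)
        cmi = 0.0
        for (xi, yi, ci), cnt in n_xyc.items():
            p_xyc = cnt / n
            p_xy_given_c = cnt / n_c[ci]
            p_x_given_c = n_xc[(xi, ci)] / n_c[ci]
            p_y_given_c = n_yc[(yi, ci)] / n_c[ci]
            cmi += p_xyc * np.log2(p_xy_given_c / (p_x_given_c * p_y_given_c))
        return cmi

    # Toy example with two binary attributes and class labels 'a', 'b'.
    x = [0, 0, 1, 1, 0, 1, 1, 0]
    y = [0, 1, 1, 1, 0, 0, 1, 0]
    c = ['a', 'a', 'a', 'b', 'b', 'b', 'a', 'a']
    print(conditional_mi(x, y, c, keep_labels={'a'}))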
Open AccessArticle Is an Entropy-Based Approach Suitable for an Understanding of the Metabolic Pathways of Fermentation and Respiration?
Entropy 2017, 19(12), 662; doi:10.3390/e19120662
Received: 15 November 2017 / Revised: 30 November 2017 / Accepted: 1 December 2017 / Published: 4 December 2017
PDF Full-text (5200 KB) | HTML Full-text | XML Full-text
Abstract
Lactic fermentation and respiration are important metabolic pathways on which life is based. Here, the rate of entropy in a cell associated with the fermentation and respiration processes of glucose catabolism in living systems is calculated. This is done for both internal and external heat and matter transport, according to a thermodynamic approach based on Prigogine’s formalism. It is shown that the rate of entropy associated with irreversible reactions in fermentation processes is higher than that in respiration processes, whereas this behaviour is reversed for the diffusion of chemical species and for heat exchanges. The ratio between the rates of entropy associated with the two metabolic pathways depends on space and time for the diffusion of chemical species and is invariant for heat and irreversible reactions. In both fermentation and respiration processes, studied separately, the total entropy rate tends towards a minimum value, fulfilling Prigogine’s minimum dissipation principle and in accordance with the second law of thermodynamics. These results could have important applications in cancer detection and therapy. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
Open AccessArticle Capturing Causality for Fault Diagnosis Based on Multi-Valued Alarm Series Using Transfer Entropy
Entropy 2017, 19(12), 663; doi:10.3390/e19120663
Received: 1 November 2017 / Revised: 24 November 2017 / Accepted: 29 November 2017 / Published: 4 December 2017
PDF Full-text (6342 KB) | HTML Full-text | XML Full-text
Abstract
Transfer entropy (TE) is a model-free, information-theoretic approach to capturing causality between variables, and it has been used for the modeling, monitoring, and fault diagnosis of complex industrial processes. It can detect causality between variables without assuming any underlying model, but it is computationally burdensome. To overcome this limitation, a hybrid method combining TE with a modified conditional mutual information (CMI) approach is proposed, operating on generated multi-valued alarm series. To obtain the process topology, TE generates a causal map of all sub-processes, and the modified CMI is then used to distinguish direct connectivity within this causal map using the multi-valued alarm series. The effectiveness and accuracy of the proposed method in capturing process topology from multi-valued alarm series are validated on simulated and real industrial cases (the Tennessee-Eastman process). Full article
(This article belongs to the Special Issue Entropy and Complexity of Data)
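For discrete multi-valued alarm series, transfer entropy can be estimated with a simple plug-in estimator over joint symbol counts. The sketch below uses a history length of one and base-2 logarithms as illustrative choices; it is not the authors' implementation and omits their modified-CMI pruning step.

    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y):
        """Plug-in estimate of TE(X -> Y) for discrete series, history length 1:
        sum p(y_t+1, y_t, x_t) log2[ p(y_t+1 | y_t, x_t) / p(y_t+1 | y_t) ]."""
        triples = list(zip(y[1:], y[:-1], x[:-1]))
        n = len(triples)
        n_yyx = Counter(triples)
        n_yx = Counter((yt, xt) for _, yt, xt in triples)
        n_yy = Counter((y1, yt) for y1, yt, _ in triples)
        n_y = Counter(yt for _, yt, _ in triples)
        te = 0.0
        for (y1, yt, xt), cnt in n_yyx.items():
            p = cnt / n
            p_cond_full = cnt / n_yx[(yt, xt)]
            p_cond_hist = n_yy[(y1, yt)] / n_y[yt]
            te += p * np.log2(p_cond_full / p_cond_hist)
        return te

    # Toy example: y copies x with a one-step delay, so TE(x -> y) is large
    # (about log2(3) bits) while TE(y -> x) is close to zero.
    rng = np.random.default_rng(0)
    x = list(rng.integers(0, 3, size=5000))
    y = [0] + x[:-1]
    print(transfer_entropy(x, y), transfer_entropy(y, x))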
Open AccessArticle Entropic Updating of Probabilities and Density Matrices
Entropy 2017, 19(12), 664; doi:10.3390/e19120664
Received: 2 November 2017 / Revised: 1 December 2017 / Accepted: 2 December 2017 / Published: 4 December 2017
PDF Full-text (801 KB) | HTML Full-text | XML Full-text
Abstract
We find that the standard relative entropy and the Umegaki entropy are designed for the purpose of inferentially updating probabilities and density matrices, respectively. From the same set of inferentially guided design criteria, both of the previously stated entropies are derived in parallel. This formulates a quantum maximum entropy method for the purpose of inferring density matrices in the absence of complete information. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
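The Umegaki relative entropy referred to above is S(rho||sigma) = Tr[rho (log rho - log sigma)], the density-matrix analogue of the standard relative entropy. A minimal numerical sketch for full-rank density matrices (an illustration of the quantity itself, not of the design-criteria derivation in the paper) is:

    import numpy as np
    from scipy.linalg import logm

    def umegaki_relative_entropy(rho, sigma):
        """S(rho || sigma) = Tr[rho (log rho - log sigma)], in nats.
        Assumes both density matrices are full rank so the matrix logs exist."""
        return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

    # Example: a slightly polarised qubit relative to the maximally mixed state.
    rho = np.array([[0.7, 0.0], [0.0, 0.3]])
    sigma = np.eye(2) / 2.0
    print(umegaki_relative_entropy(rho, sigma))   # > 0, and 0 iff rho == sigma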
Open AccessArticle An Improved Chaotic Optimization Algorithm Applied to a DC Electrical Motor Modeling
Entropy 2017, 19(12), 665; doi:10.3390/e19120665
Received: 1 October 2017 / Revised: 16 November 2017 / Accepted: 24 November 2017 / Published: 4 December 2017
PDF Full-text (648 KB) | HTML Full-text | XML Full-text
Abstract
The chaos-based optimization algorithm (COA) is a method to optimize possibly nonlinear, complex functions of several variables by chaos search. The main innovation behind the chaos-based optimization algorithm is to generate chaotic trajectories by means of nonlinear, discrete-time dynamical systems in order to explore the search space while looking for the global minimum of a complex criterion function. The aim of the present research is to investigate the numerical properties of the COA, both on complex optimization test functions from the literature and on a real-world problem, so as to contribute to the understanding of its global-search features. In addition, the present research suggests a refinement of the original COA algorithm to improve its optimization performance. In particular, the real-world optimization problem tackled within the paper is the estimation of six electro-mechanical parameters of a model of a direct-current (DC) electrical motor. A large number of test results show that the algorithm achieves excellent numerical precision at little computational expense, which is extremely limited compared with that of other benchmark optimization algorithms, namely the genetic algorithm and the simulated annealing algorithm. Full article
(This article belongs to the Special Issue Probabilistic Methods for Inverse Problems)
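The basic mechanism of chaos search is to let a chaotic map generate trial points that cover the search box and to keep the best candidate found. The sketch below uses the logistic map in its fully chaotic regime and a plain global sweep; both are illustrative assumptions and do not reproduce the refined COA proposed in the paper.

    import numpy as np

    def chaos_search(f, lower, upper, n_iter=5000, rng_seed=1):
        """Coarse chaos-based search: logistic-map iterates are mapped into the
        search box and the best point encountered is kept."""
        lower = np.asarray(lower, float)
        upper = np.asarray(upper, float)
        rng = np.random.default_rng(rng_seed)
        z = rng.uniform(0.05, 0.95, size=lower.shape)   # initial chaotic state
        best_x, best_f = None, np.inf
        for _ in range(n_iter):
            z = 4.0 * z * (1.0 - z)                     # logistic map, fully chaotic regime
            x = lower + z * (upper - lower)             # carrier mapped into the box
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        return best_x, best_f

    # Example: 2D sphere function; the returned point should approach the origin.
    print(chaos_search(lambda x: float(np.sum(x**2)), [-5.0, -5.0], [5.0, 5.0]))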
Open AccessArticle Assessment of Component Selection Strategies in Hyperspectral Imagery
Entropy 2017, 19(12), 666; doi:10.3390/e19120666
Received: 23 October 2017 / Revised: 30 November 2017 / Accepted: 1 December 2017 / Published: 5 December 2017
PDF Full-text (8300 KB) | HTML Full-text | XML Full-text
Abstract
Hyperspectral imagery (HSI) integrates many continuous and narrow bands that cover different regions of the electromagnetic spectrum. However, the main challenge is the high dimensionality of HSI data, due to the ‘Hughes’ phenomenon. Thus, dimensionality reduction is necessary before applying classification algorithms to obtain accurate thematic maps. We focus the study on the following feature-extraction algorithms: Principal Component Analysis (PCA), Minimum Noise Fraction (MNF), and Independent Component Analysis (ICA). A literature survey revealed a lack both of comparative studies of these techniques and of reliable strategies for determining the number of components. Hence, the first objective was to compare traditional dimensionality reduction techniques (PCA, MNF, and ICA) on HSI from the Compact Airborne Spectrographic Imager (CASI) sensor and to evaluate different strategies for selecting the most suitable number of components in the transformed space. The second objective was to develop a new dimensionality reduction approach in which the CASI HSI is divided according to the spectral regions of the electromagnetic spectrum it covers. The components selected from the transformed spaces of the different spectral regions were then stacked, and this stacked transformed space was evaluated to determine whether the proposed approach improves the final classification. Full article
(This article belongs to the Special Issue Selected Papers from IWOBI—Entropy-Based Applied Signal Processing)
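A common component-selection strategy of the kind evaluated here is to retain the leading transform components up to a cumulative explained-variance threshold. The sketch below applies this rule to PCA on a pixels-by-bands matrix; the 99% threshold and the synthetic data are illustrative assumptions, and MNF or ICA components would be ranked by their own criteria.

    import numpy as np

    def pca_components_by_variance(X, threshold=0.99):
        """X: (n_pixels, n_bands) matrix. Returns the scores of the leading
        principal components explaining `threshold` of the total variance."""
        Xc = X - X.mean(axis=0)
        # SVD of the centred data; squared singular values give component variances
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        var_ratio = (s**2) / np.sum(s**2)
        k = int(np.searchsorted(np.cumsum(var_ratio), threshold) + 1)
        return Xc @ Vt[:k].T, k   # component scores and number retained

    # Example with synthetic data standing in for CASI pixels x bands.
    X = np.random.default_rng(1).normal(size=(1000, 48))
    scores, k = pca_components_by_variance(X)
    print(k, scores.shape)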
Open AccessArticle L1-Minimization Algorithm for Bayesian Online Compressed Sensing
Entropy 2017, 19(12), 667; doi:10.3390/e19120667
Received: 3 November 2017 / Revised: 25 November 2017 / Accepted: 1 December 2017 / Published: 5 December 2017
PDF Full-text (340 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we propose a Bayesian online reconstruction algorithm for sparse signals based on Compressed Sensing and inspired by L1-regularization schemes. A previous work introduced a mean-field approximation for the Bayesian online algorithm and showed that it is possible to saturate the offline performance in the presence of Gaussian measurement noise when the signal-generating distribution is known. Here, we build on these results and show that reconstruction is possible even if prior knowledge about the generation of the signal is limited, by introducing a Laplace prior and an extra Kullback–Leibler divergence minimization step for hyper-parameter learning. Full article
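As a concrete point of comparison for the L1-inspired scheme, the corresponding offline problem is the LASSO, which can be solved by iterative soft-thresholding. The sketch below shows only that offline baseline with an illustrative regularization weight and step size; it is not the Bayesian online algorithm or the mean-field update derived in the paper.

    import numpy as np

    def ista_lasso(A, y, lam=0.1, n_iter=500):
        """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    # Example: recover a sparse signal from noisy random projections.
    rng = np.random.default_rng(0)
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
    A = rng.normal(size=(40, 100)) / np.sqrt(40)
    y = A @ x_true + 0.01 * rng.normal(size=40)
    print(np.round(ista_lasso(A, y, lam=0.05), 2).nonzero()[0])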
Open AccessArticle Thermoelectrics of Interacting Nanosystems—Exploiting Superselection Instead of Time-Reversal Symmetry
Entropy 2017, 19(12), 668; doi:10.3390/e19120668
Received: 3 November 2017 / Revised: 28 November 2017 / Accepted: 29 November 2017 / Published: 6 December 2017
PDF Full-text (987 KB) | HTML Full-text | XML Full-text
Abstract
Thermoelectric transport is traditionally analyzed using relations imposed by time-reversal symmetry, ranging from Onsager’s results to fluctuation relations in counting statistics. In this paper, we show that a recently discovered duality relation for fermionic systems—deriving from the fundamental fermion-parity superselection principle of quantum many-particle systems—provides new insights into thermoelectric transport. Using a master equation, we analyze the stationary charge and heat currents through a weakly coupled, but strongly interacting single-level quantum dot subject to electrical and thermal bias. In linear transport, the fermion-parity duality shows that features of thermoelectric response coefficients are actually dominated by the average and fluctuations of the charge in a dual quantum dot system, governed by attractive instead of repulsive electron-electron interaction. In the nonlinear regime, the duality furthermore relates most transport coefficients to much better understood equilibrium quantities. Finally, we naturally identify the fermion-parity as the part of the Coulomb interaction relevant for both the linear and nonlinear Fourier heat. Altogether, our findings hence reveal that next to time-reversal, the duality imposes equally important symmetry restrictions on thermoelectric transport. As such, it is also expected to simplify computations and clarify the physical understanding for more complex systems than the simplest relevant interacting nanostructure model studied here. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)
Open AccessArticle Formation of Photo-Responsive Liquid Crystalline Emulsion by Using Microfluidics Device
Entropy 2017, 19(12), 669; doi:10.3390/e19120669
Received: 8 November 2017 / Revised: 30 November 2017 / Accepted: 4 December 2017 / Published: 6 December 2017
PDF Full-text (1734 KB) | HTML Full-text | XML Full-text
Abstract
Photo-responsive double emulsions made of liquid crystal (LC) were prepared with a microfluidic device, and the light-induced processes were studied. The phase transition was induced from the center of the topological defect for an emulsion made of N-(4-methoxybenzylidene)-4-butylaniline (MBBA), and an unusual texture change was observed for an emulsion made of 4-cyano-4′-pentylbiphenyl (5CB) doped with azobenzene. The results suggest that defect-involved processes occur in the phase change of LC double emulsions. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics of Interfaces)
Open AccessArticle Oscillations in Multiparticle Production Processes
Entropy 2017, 19(12), 670; doi:10.3390/e19120670
Received: 22 November 2017 / Revised: 30 November 2017 / Accepted: 4 December 2017 / Published: 6 December 2017
PDF Full-text (415 KB) | HTML Full-text | XML Full-text
Abstract
We discuss two examples of oscillations apparently hidden in some experimental results for high-energy multiparticle production processes: (i) the log-periodic oscillatory pattern decorating the power-like Tsallis distributions of transverse momenta; (ii) the oscillations of the modified combinants obtained from the measured multiplicity distributions. Our calculations are confronted with pp data from the Large Hadron Collider (LHC). We show that in both cases, these phenomena can provide new insight into the dynamics of these processes. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
Open AccessArticle Inspecting Non-Perturbative Contributions to the Entanglement Entropy via Wavefunctions
Entropy 2017, 19(12), 671; doi:10.3390/e19120671
Received: 14 November 2017 / Revised: 4 December 2017 / Accepted: 5 December 2017 / Published: 7 December 2017
PDF Full-text (407 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we would like to systematically explore the implications of non-perturbative effects on entanglement in a many body system. Instead of pursuing the usual path-integral method in a singular space, we attempt to study the wavefunctions in detail. We begin with a toy model of multiple particles whose interaction potential admits multiple minima. We study the entanglement of the true ground state after taking the tunneling effects into account and find some simple patterns. Notably, in the case of multiple particle interactions, entanglement entropy generically decreases with increasing number of minima. The knowledge of the subsystem actually increases with the number of minima. The reduced density matrix can also be seen to have close connections with graph spectra. In a more careful study of the two-well tunneling system, we also extract the exponentially-suppressed tail contribution, the analogue of instantons. To understand the effects of multiple minima in a field theory, we are inspired to inspect wavefunctions in a toy model of a bosonic field describing quasi-particles of two different condensates related by Bogoliubov transformations. We find that the area law is naturally preserved. This is probably a useful set of perspectives that promise wider applications. Full article
Open AccessArticle Association between Multiscale Entropy Characteristics of Heart Rate Variability and Ischemic Stroke Risk in Patients with Permanent Atrial Fibrillation
Entropy 2017, 19(12), 672; doi:10.3390/e19120672
Received: 6 November 2017 / Revised: 1 December 2017 / Accepted: 6 December 2017 / Published: 7 December 2017
PDF Full-text (3539 KB) | HTML Full-text | XML Full-text
Abstract
Multiscale entropy (MSE) profiles of heart rate variability (HRV) in patients with atrial fibrillation (AFib) provide clinically useful information for ischemic stroke risk assessment, suggesting that the complex properties characterized by MSE profiles are associated with ischemic stroke risk. However, the meaning of HRV complexity in patients with AFib has not been clearly interpreted, and the physical and mathematical understanding of the relation between HRV dynamics and ischemic stroke risk is not well established. To gain deeper insight into HRV dynamics in patients with AFib, and to improve ischemic stroke risk assessment using HRV analysis, we study the HRV characteristics related to MSE profiles, such as the long-range correlation and the probability density function. In this study, we analyze the HRV time series of 173 patients with permanent AFib. Our results show that, although HRV time series in patients with AFib exhibit long-range correlation (1/f fluctuations)—as observed in healthy subjects—over a range longer than 90 s, these autocorrelation properties have no significant predictive power for ischemic stroke occurrence. Further, the probability density function structure of the coarse-grained time series at scales greater than 2 s is dominantly associated with ischemic stroke risk. This observation could provide valuable information for improving ischemic stroke risk assessment using HRV analysis. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
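Multiscale entropy profiles of the kind analyzed here are built by coarse-graining the interval series at successive scales and computing sample entropy at each scale. The sketch below uses the common defaults m = 2 and a tolerance of 0.2 times the standard deviation of each coarse-grained series; these defaults and the tolerance convention are illustrative assumptions, not necessarily the settings of this study.

    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """SampEn(m, r): r is a fraction of the series' standard deviation."""
        x = np.asarray(x, float)
        tol = r * x.std()

        def pair_count(mm):
            # all overlapping template vectors of length mm
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            # Chebyshev distances between all pairs of templates
            d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
            return (np.sum(d <= tol) - len(templ)) / 2.0   # exclude self-pairs

        return -np.log(pair_count(m + 1) / pair_count(m))

    def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
        """Coarse-grain by non-overlapping averaging, then compute SampEn."""
        x = np.asarray(x, float)
        profile = []
        for s in scales:
            cg = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
            profile.append(sample_entropy(cg, m, r))
        return np.array(profile)

    # For white noise the profile decreases with scale; correlated signals decay
    # more slowly, which is what MSE profiles of RR intervals probe.
    print(multiscale_entropy(np.random.default_rng(0).normal(size=1000)))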
Open AccessArticle Characterisation of the Effects of Sleep Deprivation on the Electroencephalogram Using Permutation Lempel–Ziv Complexity, a Non-Linear Analysis Tool
Entropy 2017, 19(12), 673; doi:10.3390/e19120673
Received: 30 October 2017 / Revised: 2 December 2017 / Accepted: 5 December 2017 / Published: 8 December 2017
PDF Full-text (6479 KB) | HTML Full-text | XML Full-text
Abstract
Specific patterns of brain activity during sleep and waking are recorded in the electroencephalogram (EEG). Time-frequency analysis methods have been widely used to analyse the EEG and have identified characteristic oscillations for each vigilance state (VS), i.e., wakefulness, rapid-eye movement (REM) and non-rapid-eye movement (NREM) sleep. However, other aspects, such as changes of patterns associated with brain dynamics, may not be captured unless a non-linear-based analysis method is used. In this pilot study, Permutation Lempel–Ziv complexity (PLZC), a novel symbolic dynamics analysis method, was used to characterise the changes in the EEG in sleep and wakefulness during baseline and recovery from sleep deprivation (SD). The results obtained with PLZC were contrasted with a related non-linear method, Lempel–Ziv complexity (LZC). Both measure the emergence of new patterns. However, LZC depends on the absolute amplitude of the EEG, whereas PLZC depends only on the relative amplitude, owing to the symbolisation procedure, and is thus more resistant to noise. We showed that PLZC discriminates activated brain states associated with wakefulness and REM sleep, which both displayed higher complexity, compared to NREM sleep. Additionally, significantly lower PLZC values were measured in NREM sleep during the recovery period following SD compared to baseline, suggesting a reduced emergence of new activity patterns in the EEG. These findings were validated using PLZC on surrogate data. By contrast, LZC merely reflected changes in the spectral composition of the EEG. Overall, this study implies that PLZC is a robust non-linear complexity measure, which is not dependent on amplitude variations in the signal, and which may be useful for further assessing EEG alterations induced by environmental or pharmacological manipulations. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
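PLZC first replaces each short window of samples by its ordinal (rank-order) pattern and then measures how often a Lempel–Ziv parse of the resulting symbol sequence encounters new phrases. The sketch below uses order-3 ordinal patterns and a simple LZ78-style dictionary parse as a stand-in for the LZ76 complexity normally used; it illustrates the idea and is not the study's implementation.

    import numpy as np
    from itertools import permutations

    def ordinal_symbols(x, order=3):
        """Map each window of `order` consecutive samples to its permutation index."""
        lookup = {p: i for i, p in enumerate(permutations(range(order)))}
        return [lookup[tuple(np.argsort(x[i:i + order]))]
                for i in range(len(x) - order + 1)]

    def lz78_phrase_count(symbols):
        """Number of phrases in an incremental (LZ78-style) dictionary parse."""
        phrases, current, count = set(), (), 0
        for s in symbols:
            current += (s,)
            if current not in phrases:
                phrases.add(current)
                count += 1
                current = ()
        return count + (1 if current else 0)

    # White noise should yield many more novel phrases than a regular oscillation.
    rng = np.random.default_rng(0)
    noise = rng.normal(size=2000)
    sine = np.sin(np.linspace(0, 60 * np.pi, 2000))
    print(lz78_phrase_count(ordinal_symbols(noise)),
          lz78_phrase_count(ordinal_symbols(sine)))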
Open AccessArticle On the Statistical Mechanics of Alien Species Distribution
Entropy 2017, 19(12), 674; doi:10.3390/e19120674
Received: 20 November 2017 / Revised: 6 December 2017 / Accepted: 8 December 2017 / Published: 9 December 2017
PDF Full-text (854 KB) | HTML Full-text | XML Full-text
Abstract
Many species of plants are found in regions to which they are alien. Their global distributions are characterised by a family of exponential functions of the kind that arise in elementary statistical mechanics (an example in ecology is MacArthur’s broken stick). We show here that all these functions are quantitatively reproduced by a model containing a single parameter—some global resource partitioned at random on the two axes of species number and site number. A dynamical model generating this equilibrium is a two-fold stochastic process and suggests a curious and interesting biological interpretation in terms of niche structures fluctuating with time and productivity, with sites and species highly idiosyncratic. Idiosyncrasy implies that attempts to identify a priori those species likely to become naturalised are unlikely to be successful. Although this paper is primarily concerned with a particular problem in population biology, the two-fold stochastic process may be of more general interest. Full article
(This article belongs to the Special Issue Applications of Information Theory in the Geosciences II)
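MacArthur’s broken stick, mentioned above, arises from splitting a resource at uniformly random break points. The sketch below draws one such random partition of a global resource among S categories; it is a one-axis illustration only, whereas the paper’s two-fold stochastic process partitions the resource on both the species and the site axes.

    import numpy as np

    def broken_stick(total, n_parts, rng=None):
        """Partition `total` into n_parts pieces using n_parts-1 uniform break points."""
        rng = rng or np.random.default_rng()
        cuts = np.sort(rng.uniform(0.0, total, size=n_parts - 1))
        edges = np.concatenate(([0.0], cuts, [total]))
        return np.diff(edges)

    # Ranked piece sizes, averaged over many partitions, follow the classic
    # broken-stick expectation (total/S) * sum_{k=i}^{S} 1/k for the i-th largest.
    pieces = np.sort(broken_stick(1.0, 10, np.random.default_rng(2)))[::-1]
    print(pieces, pieces.sum())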
Open AccessArticle A General Symbolic Approach to Kolmogorov-Sinai Entropy
Entropy 2017, 19(12), 675; doi:10.3390/e19120675
Received: 31 October 2017 / Revised: 4 December 2017 / Accepted: 5 December 2017 / Published: 9 December 2017
PDF Full-text (389 KB) | HTML Full-text | XML Full-text
Abstract
It is popular to study a time-dependent nonlinear system by encoding outcomes of measurements into sequences of symbols following certain symbolization schemes. Mostly, symbolizations by threshold crossings, or variants thereof, are applied; however, the relatively new symbolic approach that goes back to the innovative works of Bandt and Pompe—ordinal symbolic dynamics—also plays an increasing role. In this paper, we discuss both approaches together, in a novel way, with respect to the theoretical determination of the Kolmogorov-Sinai entropy (KS entropy). For this purpose, we propose and investigate a unifying approach to formalize symbolizations. By doing so, we can emphasize the main advantage of the ordinal approach if no symbolization scheme can be found that characterizes KS entropy directly: the ordinal approach, as well as generalizations of it, provides, under very natural conditions, a direct route to KS entropy by default. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
Open AccessArticle Maximum Exergetic Efficiency Operation of a Solar Powered H2O-LiBr Absorption Cooling System
Entropy 2017, 19(12), 676; doi:10.3390/e19120676
Received: 22 October 2017 / Revised: 3 December 2017 / Accepted: 6 December 2017 / Published: 9 December 2017
PDF Full-text (13286 KB) | HTML Full-text | XML Full-text
Abstract
A solar-driven cooling system consisting of a single-effect H2O-LiBr absorption cooling module (ACS), a parabolic trough collector (PTC), and a storage tank (ST) module is analyzed over one full day of operation. Pressurized water is used to transfer heat from the PTC to the ST and to feed the ACS desorber. The system is constrained to operate at the maximum ACS exergetic efficiency, under a time-dependent cooling load computed for 15 July for a one-storey house located near Bucharest, Romania. To set up the solar assembly, two commercial PTCs were selected, namely the PT1-IST and the PTC 1800 Solitem, and a single-unit ST was initially considered. The mathematical model, relying on the energy balance equations, was coded in the Engineering Equation Solver (EES) environment. The solar data were obtained from the Meteonorm database. The numerical simulations showed that the system cannot cover the imposed cooling load all day long, due to the large variation of the water temperature inside the ST. By splitting the ST into two units, the results revealed that the PT1-IST collector drives the ACS only between 9 am and 4:30 pm, while the PTC 1800 covers the entire cooling period (9 am–6 pm), for optimum ST capacities of 90 kg/90 kg and 90 kg/140 kg, respectively. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle Automated Detection of Paroxysmal Atrial Fibrillation Using an Information-Based Similarity Approach
Entropy 2017, 19(12), 677; doi:10.3390/e19120677
Received: 10 October 2017 / Revised: 20 November 2017 / Accepted: 8 December 2017 / Published: 10 December 2017
PDF Full-text (4601 KB) | HTML Full-text | XML Full-text
Abstract
Atrial fibrillation (AF) is an abnormal rhythm of the heart, which can increase the risk of heart-related complications. Paroxysmal AF episodes occur intermittently with varying duration. Human-based diagnosis of paroxysmal AF from a longer-term electrocardiogram recording is time-consuming. Here we present a fully automated ensemble model for AF episode detection based on RR-interval time series, applying a novel approach of information-based similarity analysis and an ensemble scheme. By mapping RR-interval time series to binary symbolic sequences and comparing the rank-frequency patterns of m-bit words, the dissimilarity between AF and normal sinus rhythm (NSR) was quantified. To achieve high detection specificity and sensitivity and low variance, a weighted variation of bagging with multiple AF and NSR templates was applied. By performing dissimilarity comparisons between unknown RR-interval time series and multiple templates, paroxysmal AF episodes were detected. Based on our results, the optimal AF detection parameters are a symbolic word length m = 9 and an observation window n = 150, achieving 97.04% sensitivity, 97.96% specificity, and 97.78% overall accuracy. Sensitivity, specificity, and overall accuracy vary little despite changes in the m and n parameters. This study provides quantitative information to enhance the categorization of AF and normal cardiac rhythms. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
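The information-based similarity step can be illustrated by binarizing the RR-interval series (for instance, 1 when an interval shortens), collecting overlapping m-bit words, and comparing the rank orderings of word frequencies between two recordings. The sketch below uses a simplified rank-difference dissimilarity; the binarization rule and the unweighted rank difference are assumptions, not the exact weighted measure or ensemble scheme of the paper.

    import numpy as np
    from collections import Counter

    def binary_words(rr, m=9):
        """Binarize RR intervals (1 if the interval shortens) and collect
        overlapping m-bit words."""
        bits = (np.diff(np.asarray(rr, float)) < 0).astype(int)
        return [tuple(bits[i:i + m]) for i in range(len(bits) - m + 1)]

    def rank_dissimilarity(rr1, rr2, m=9):
        """Mean absolute difference of word ranks between two series, normalised
        by the number of distinct words (a simplified information-based-similarity
        style measure)."""
        c1, c2 = Counter(binary_words(rr1, m)), Counter(binary_words(rr2, m))
        words = sorted(set(c1) | set(c2))
        def ranks(counts):
            ordered = sorted(words, key=lambda w: -counts[w])
            return {w: r for r, w in enumerate(ordered)}
        r1, r2 = ranks(c1), ranks(c2)
        denom = max(len(words) - 1, 1)
        return float(np.mean([abs(r1[w] - r2[w]) for w in words])) / denom

    # Synthetic illustration: an irregular (AF-like) series versus a regular one.
    rng = np.random.default_rng(3)
    irregular = 0.8 + 0.3 * rng.random(600)
    regular = 0.8 + 0.05 * np.sin(np.arange(600) / 4.0)
    print(rank_dissimilarity(irregular, regular))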
Open AccessArticle Information Landscape and Flux, Mutual Information Rate Decomposition and Connections to Entropy Production
Entropy 2017, 19(12), 678; doi:10.3390/e19120678
Received: 29 September 2017 / Revised: 27 November 2017 / Accepted: 6 December 2017 / Published: 11 December 2017
PDF Full-text (768 KB) | HTML Full-text | XML Full-text
Abstract
We explore the dynamics of two interacting information systems. We show that, for Markovian marginal systems, the driving force for information dynamics is determined by both the information landscape and the information flux. While the information landscape can be used to construct the driving force describing the equilibrium, time-reversible dynamics of the information system, the information flux describes the nonequilibrium, time-irreversible behaviors of the information system dynamics. The information flux explicitly breaks detailed balance and is a direct measure of the degree of nonequilibrium or time-irreversibility. We further demonstrate that the mutual information rate between the two subsystems can be decomposed into equilibrium time-reversible and nonequilibrium time-irreversible parts. This decomposition of the Mutual Information Rate (MIR) corresponds explicitly to the information landscape-flux decomposition when the two subsystems behave as Markov chains. Finally, we uncover the intimate relationship between nonequilibrium thermodynamics, in terms of the entropy production rates, and the time-irreversible part of the mutual information rate. We find that this relationship and the MIR decomposition still hold for the more general stationary and ergodic cases. We demonstrate the above features with two examples of bivariate Markov chains. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
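For a stationary Markov chain, the time-irreversible part discussed above is captured by the entropy production rate, which vanishes exactly when detailed balance holds. The sketch below evaluates it for two illustrative transition matrices (one reversible, one with a cyclic bias); the examples are arbitrary and are not the bivariate chains studied in the paper.

    import numpy as np

    def entropy_production_rate(P):
        """e_p = sum_ij pi_i P_ij ln( pi_i P_ij / (pi_j P_ji) ) for an
        irreducible transition matrix with all entries strictly positive."""
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        pi = pi / pi.sum()                   # stationary distribution
        flux = pi[:, None] * P               # probability flux pi_i P_ij
        return float(np.sum(flux * np.log(flux / flux.T)))

    # Detailed balance (symmetric flux) gives zero; a cyclic bias gives e_p > 0.
    P_rev = np.array([[0.5, 0.25, 0.25],
                      [0.25, 0.5, 0.25],
                      [0.25, 0.25, 0.5]])
    P_irr = np.array([[0.1, 0.8, 0.1],
                      [0.1, 0.1, 0.8],
                      [0.8, 0.1, 0.1]])
    print(entropy_production_rate(P_rev), entropy_production_rate(P_irr))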
Open AccessArticle Altered Brain Complexity in Women with Primary Dysmenorrhea: A Resting-State Magneto-Encephalography Study Using Multiscale Entropy Analysis
Entropy 2017, 19(12), 680; doi:10.3390/e19120680
Received: 30 September 2017 / Revised: 20 November 2017 / Accepted: 6 December 2017 / Published: 11 December 2017
PDF Full-text (848 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
How chronic pain affects brain functions remains unclear. As a potential indicator, brain complexity estimated by entropy-based methods may be helpful for revealing the underlying neurophysiological mechanism of chronic pain. In this study, complexity features at multiple time scales and spectral features were extracted from resting-state magnetoencephalographic signals of 156 female participants with/without primary dysmenorrhea (PDM) during the pain-free state. As revealed by multiscale sample entropy (MSE), PDM patients (PDMs) exhibited a loss of brain complexity in regions associated with the sensory, affective, and evaluative components of pain, including the sensorimotor, limbic, and salience networks. Significant correlations between MSE values and psychological states (depression and anxiety) were found in PDMs, which may indicate specific nonlinear disturbances in limbic and default mode network circuits after long-term menstrual pain. These findings suggest that MSE is an important measure of brain complexity and is potentially applicable to the future diagnosis of chronic pain. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
Open AccessArticle Channel Capacity of Coding System on Tsallis Entropy and q-Statistics
Entropy 2017, 19(12), 682; doi:10.3390/e19120682
Received: 12 August 2017 / Revised: 4 December 2017 / Accepted: 8 December 2017 / Published: 12 December 2017
PDF Full-text (193 KB) | HTML Full-text | XML Full-text
Abstract
The field of information science has developed greatly, and applications in various fields have emerged. In this paper, we evaluate a coding system for the transmission of messages within the theory of Tsallis entropy, and we formulate the channel capacity by maximizing the Tsallis entropy under a given code-length condition. As a result, we obtain a simple relational expression between code length and code appearance probability and, in addition, a generalized formula for the channel capacity on the basis of Tsallis entropy statistics. This theoretical framework may contribute to data processing techniques and other applications. Full article
(This article belongs to the Section Information Theory)
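The quantity being maximized above is the Tsallis entropy S_q = (1 - sum_i p_i^q)/(q - 1), which reduces to the Shannon entropy as q tends to 1. The sketch below evaluates it for an illustrative set of code-appearance probabilities; the maximization under a code-length constraint carried out in the paper is not reproduced here.

    import numpy as np

    def tsallis_entropy(p, q):
        """S_q = (1 - sum_i p_i^q) / (q - 1); reduces to the Shannon entropy
        (in nats) in the limit q -> 1."""
        p = np.asarray(p, float)
        p = p[p > 0]
        if np.isclose(q, 1.0):
            return float(-np.sum(p * np.log(p)))
        return float((1.0 - np.sum(p ** q)) / (q - 1.0))

    # Illustrative code-appearance probabilities for a four-symbol source.
    p = np.array([0.5, 0.25, 0.125, 0.125])
    for q in (0.5, 0.999, 1.0, 2.0):
        print(q, tsallis_entropy(p, q))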
Open AccessArticle Entropy Conditions Involved in the Nonlinear Coupled Constitutive Method for Solving Continuum and Rarefied Gas Flows
Entropy 2017, 19(12), 683; doi:10.3390/e19120683
Received: 5 September 2017 / Revised: 19 November 2017 / Accepted: 8 December 2017 / Published: 12 December 2017
PDF Full-text (4404 KB) | HTML Full-text | XML Full-text
Abstract
The numerical study of continuum-rarefied gas flows is of considerable interest because it can provide fundamental knowledge regarding flow physics. Recently, the nonlinear coupled constitutive method (NCCM) has been derived from the Boltzmann equation and implemented to investigate continuum-rarefied gas flows. In this study, we first report the important and detailed issues in the use of the H theorem and positive entropy generation in the NCCM. Importantly, the unified nonlinear dissipation model and its relationships to the Rayleigh–Onsager function were demonstrated in the treatment of the collision term of the Boltzmann equation. In addition, we compare the Grad moment method, the Burnett equation, and the NCCM. Next, differences between the NCCM equations and the Navier–Stokes equations are explained in detail. For validation, numerical studies of rarefied and continuum gas flows were conducted. These studies include rarefied and/or continuum gas flows around a two-dimensional (2D) cavity, a 2D airfoil, a 2D cylinder, and a three-dimensional space shuttle. It was observed that the present results of the NCCM are in good agreement with those of the Direct Simulation Monte Carlo (DSMC) method in rarefied cases and are in good agreement with those of the Navier–Stokes equations in continuum cases. Finally, this study can be regarded as a theoretical basis of the NCCM for the development of a unified framework for solving continuum-rarefied gas flows. Full article
Review

Jump to: Editorial, Research

Open AccessReview Tsallis Entropy Theory for Modeling in Water Engineering: A Review
Entropy 2017, 19(12), 641; doi:10.3390/e19120641
Received: 15 September 2017 / Revised: 15 November 2017 / Accepted: 23 November 2017 / Published: 27 November 2017
PDF Full-text (1511 KB) | HTML Full-text | XML Full-text
Abstract
Water engineering is an amalgam of engineering (e.g., hydraulics, hydrology, irrigation, ecosystems, environment, water resources) and non-engineering (e.g., social, economic, political) aspects that are needed for planning, designing and managing water systems. These aspects and the associated issues have been dealt with in the literature using different techniques that are based on different concepts and assumptions. A fundamental question that still remains is: can we develop a unifying theory for addressing these aspects and issues? The second law of thermodynamics permits us to develop a theory that helps address them in a unified manner. This theory can be referred to as the entropy theory. The thermodynamic entropy theory is analogous to the Shannon entropy or the information theory. Perhaps the most popular generalization of the Shannon entropy is the Tsallis entropy. The Tsallis entropy has been applied to a wide spectrum of problems in water engineering. This paper provides an overview of Tsallis entropy theory in water engineering. After a basic description of entropy and Tsallis entropy, a review of its applications in water engineering is presented, based on three types of problems: (1) problems requiring entropy maximization; (2) problems requiring coupling Tsallis entropy theory with another theory; and (3) problems involving physical relations. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)