Editor's Choice Articles

Research

Open Access · Feature Paper · Editor’s Choice · Article
Entropic Steering Criteria: Applications to Bipartite and Tripartite Systems
Entropy 2018, 20(10), 763; https://doi.org/10.3390/e20100763 - 05 Oct 2018
Cited by 8
Abstract
The effect of quantum steering describes a possible action at a distance via local measurements. Although many attempts at characterizing steerability have been pursued, answering the question as to whether a given state is steerable or not remains a difficult task. Here, we investigate the applicability of a recently proposed method for building steering criteria from generalized entropic uncertainty relations. This method works for any entropy that satisfies the properties of (i) (pseudo-) additivity for independent distributions; (ii) a state-independent entropic uncertainty relation (EUR); and (iii) joint convexity of a corresponding relative entropy. Our study extends the previous analysis to Tsallis and Rényi entropies on bipartite and tripartite systems. As examples, we investigate the steerability of the three-qubit GHZ and W states. Full article
(This article belongs to the Special Issue Quantum Nonlocality) Printed Edition available
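For reference, the two families of generalized entropies named in the abstract have the following textbook definitions (standard forms, not quoted from the paper):

```latex
% Rényi and Tsallis entropies of a probability distribution P = (p_1, ..., p_n);
% both reduce to the Shannon entropy in the limit \alpha, q \to 1.
H^{R}_{\alpha}(P) = \frac{1}{1-\alpha}\,\ln\Big(\sum_{i} p_i^{\alpha}\Big), \qquad
S^{T}_{q}(P) = \frac{1}{q-1}\Big(1 - \sum_{i} p_i^{q}\Big), \qquad \alpha, q > 0,\ \alpha, q \neq 1.
```

The Tsallis entropy is pseudo-additive for independent distributions, which is property (i) above; the Rényi entropy is additive.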
Open Access · Editor’s Choice · Article
Harmonic Sierpinski Gasket and Applications
Entropy 2018, 20(9), 714; https://doi.org/10.3390/e20090714 - 17 Sep 2018
Cited by 40
Abstract
The aim of this paper is to investigate the generalization of the Sierpinski gasket through the harmonic metric. In particular, this work presents an antenna based on such a generalization. In fact, the harmonic Sierpinski gasket is used as a geometric configuration of small antennas. As with fractal antennas and Rényi entropy, the performance of these antennas is characterized by the associated entropy, which is studied and discussed here. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
Open Access · Editor’s Choice · Article
Symmetry, Outer Bounds, and Code Constructions: A Computer-Aided Investigation on the Fundamental Limits of Caching
Entropy 2018, 20(8), 603; https://doi.org/10.3390/e20080603 - 13 Aug 2018
Cited by 13
Abstract
We illustrate how computer-aided methods can be used to investigate the fundamental limits of caching systems, which are significantly different from the conventional analytical approach usually seen in the information theory literature. The linear programming (LP) outer bound of the entropy space serves as the starting point of this approach; however, our effort goes significantly beyond using it to prove information inequalities. We first identify and formalize the symmetry structure in the problem, which enables us to show the existence of optimal symmetric solutions. A symmetry-reduced linear program is then used to identify the boundary of the memory-transmission-rate tradeoff for several small cases, for which we obtain a set of tight outer bounds. General hypotheses on the optimal tradeoff region are formed from these computed data, which are then analytically proven. This leads to a complete characterization of the optimal tradeoff for systems with only two users, and a partial characterization for systems with only two files. Next, we show that by carefully analyzing the joint entropy structure of the outer bounds for certain cases, a novel code construction can be reverse-engineered, which eventually leads to a general class of codes. Finally, we show that outer bounds can be computed by strategically relaxing the LP in different ways, which can be used to explore the problem computationally. This allows us, firstly, to deduce generic characteristics of the converse proof, and secondly, to compute outer bounds for larger problem cases, despite the seemingly impossible computation scale. Full article
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)
Open Access · Editor’s Choice · Article
Conditional Gaussian Systems for Multiscale Nonlinear Stochastic Systems: Prediction, State Estimation and Uncertainty Quantification
Entropy 2018, 20(7), 509; https://doi.org/10.3390/e20070509 - 04 Jul 2018
Cited by 9
Abstract
A conditional Gaussian framework for understanding and predicting complex multiscale nonlinear stochastic systems is developed. Despite the conditional Gaussianity, such systems are nevertheless highly nonlinear and are able to capture the non-Gaussian features of nature. The special structure of the system allows closed analytical formulae for solving the conditional statistics and is thus computationally efficient. A rich gallery of examples of conditional Gaussian systems is presented here, which includes data-driven physics-constrained nonlinear stochastic models, stochastically coupled reaction–diffusion models in neuroscience and ecology, and large-scale dynamical models in turbulence, fluids and geophysical flows. Making use of the conditional Gaussian structure, efficient statistically accurate algorithms involving a novel hybrid strategy for different subspaces, a judicious block decomposition and statistical symmetry are developed for solving the Fokker–Planck equation in large dimensions. The conditional Gaussian framework is also applied to develop extremely cheap multiscale data assimilation schemes, such as stochastic superparameterization, which use particle filters to capture the non-Gaussian statistics of the large-scale part, whose dimension is small, whereas the statistics of the small-scale part are conditionally Gaussian given the large-scale part. Other topics of the conditional Gaussian systems studied here include designing new parameter estimation schemes and understanding model errors. Full article
(This article belongs to the Special Issue Information Theory and Stochastics for Multiscale Nonlinear Systems)
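As a reminder of the structure being exploited, a conditional Gaussian system is commonly written in the form below, with an observed state u_I and a hidden state u_II that enters the dynamics linearly (the notation is a widely used convention assumed here for illustration, not copied from the article):

```latex
% Observed variables u_I, hidden variables u_II entering linearly:
d\mathbf{u}_{I}  = \big[\mathbf{A}_0(t,\mathbf{u}_I) + \mathbf{A}_1(t,\mathbf{u}_I)\,\mathbf{u}_{II}\big]\,dt + \boldsymbol{\Sigma}_{I}(t,\mathbf{u}_I)\,d\mathbf{W}_{I}(t),
d\mathbf{u}_{II} = \big[\mathbf{a}_0(t,\mathbf{u}_I) + \mathbf{a}_1(t,\mathbf{u}_I)\,\mathbf{u}_{II}\big]\,dt + \boldsymbol{\Sigma}_{II}(t,\mathbf{u}_I)\,d\mathbf{W}_{II}(t).
```

Conditioned on a realization of u_I up to time t, the distribution of u_II(t) is Gaussian, and its conditional mean and covariance obey closed-form filtering equations; this is the closed analytical structure the abstract refers to.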
Open Access · Editor’s Choice · Article
The Gibbs Paradox: Early History and Solutions
Entropy 2018, 20(6), 443; https://doi.org/10.3390/e20060443 - 06 Jun 2018
Cited by 5
Abstract
This article is a detailed history of the Gibbs paradox, with philosophical morals. It purports to explain the origins of the paradox, to describe and criticize solutions of the paradox from the early times to the present, to use the history of statistical mechanics as a reservoir of ideas for clarifying foundations and removing prejudices, and to relate the paradox to broad misunderstandings of the nature of physical theory. Full article
(This article belongs to the Special Issue Gibbs Paradox 2018)
Open Access · Editor’s Choice · Article
Criterion of Existence of Power-Law Memory for Economic Processes
Entropy 2018, 20(6), 414; https://doi.org/10.3390/e20060414 - 29 May 2018
Cited by 14
Abstract
In this paper, we propose criteria for the existence of power-law type (PLT) memory in economic processes. We give a criterion for the existence of power-law long-range dependence in time by using an analogy with the concept of the long-range alpha-interaction. We also suggest a criterion for the existence of PLT memory in the frequency domain by using the concept of non-integer dimensions. For an economic process, for which it is known that an endogenous variable depends on an exogenous variable, the proposed criteria make it possible to identify the presence of PLT memory. The suggested criteria are illustrated with various examples. The use of the proposed criteria allows us to apply fractional calculus to construct dynamic models of economic processes. These criteria can also be used to identify the linear integro-differential operators that can be considered as fractional derivatives and integrals of non-integer orders. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
Open Access · Editor’s Choice · Article
Quantum Trajectories: Real or Surreal?
Entropy 2018, 20(5), 353; https://doi.org/10.3390/e20050353 - 08 May 2018
Cited by 5
Abstract
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We review the arguments that have been assumed to establish that a trajectory has no meaning in the context of quantum mechanics. We show that the conclusion that the Bohm trajectories should be called “surreal” because they are at “variance with the actual observed track” of a particle is wrong, as it is based on a false argument. We also present the results of a numerical investigation of a double Stern-Gerlach experiment which clearly shows the role of the spin within the Bohm formalism, and we discuss situations where the appearance of the quantum potential is open to direct experimental exploration. Full article
Open Access · Editor’s Choice · Article
Password Security as a Game of Entropies
Entropy 2018, 20(5), 312; https://doi.org/10.3390/e20050312 - 25 Apr 2018
Cited by 4
Abstract
We consider a formal model of password security, in which two actors engage in a competition of optimal password choice against potential attacks. The proposed model is a multi-objective two-person game. Player 1 seeks an optimal password choice policy, optimizing the memorability of the password (measured by Shannon entropy), as opposed to the difficulty for player 2 of guessing it (measured by min-entropy), and the cognitive effort of player 1 tied to changing the password (measured by relative entropy, i.e., Kullback–Leibler divergence). The model and contribution are thus twofold: (i) it applies multi-objective game theory to the password security problem; and (ii) it introduces different concepts of entropy to measure the quality of a password choice process from different angles (and not of a given password itself, since this cannot be quality-assessed in terms of entropy). We illustrate our approach with an example from everyday life, namely we analyze the password choices of employees. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
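To make the three objectives concrete, the sketch below computes the Shannon entropy, min-entropy, and Kullback–Leibler divergence of discrete password-choice distributions; the toy distributions and function names are illustrative, not taken from the paper.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits (player 1's memorability objective)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def min_entropy(p):
    """Min-entropy in bits (worst-case guessing difficulty faced by player 2)."""
    return float(-np.log2(np.max(p)))

def kl_divergence(p, q):
    """Relative entropy D(p || q) in bits (cognitive cost of moving from policy q to p)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Toy password-choice policies over four candidate passwords.
old_policy = np.array([0.70, 0.10, 0.10, 0.10])      # skewed: memorable but guessable
new_policy = np.array([0.25, 0.25, 0.25, 0.25])      # uniform: harder to guess

print(shannon_entropy(new_policy))                   # 2.0 bits
print(min_entropy(old_policy), min_entropy(new_policy))
print(kl_divergence(new_policy, old_policy))         # cost of switching policies
```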
Open Access · Editor’s Choice · Article
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
Entropy 2018, 20(4), 297; https://doi.org/10.3390/e20040297 - 18 Apr 2018
Cited by 17
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and the ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity, enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. Full article
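A minimal sketch of the pointwise quantities underlying the decomposition, for a toy joint distribution; the specificity and ambiguity lattices themselves are not reproduced, and the sketch only assumes the reading that the specificity is h(s) = -log p(s) and the ambiguity is h(s|t) = -log p(s|t), so that i(s;t) = h(s) - h(s|t).

```python
import numpy as np

# Toy joint distribution p(s, t) over a binary source S and binary target T.
p_st = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_s = p_st.sum(axis=1)            # marginal p(s)
p_t = p_st.sum(axis=0)            # marginal p(t)

def specificity(s):
    """h(s) = -log2 p(s): how informative the source realisation is in itself."""
    return -np.log2(p_s[s])

def ambiguity(s, t):
    """h(s|t) = -log2 p(s|t): the uncertainty about s that remains given t."""
    return -np.log2(p_st[s, t] / p_t[t])

def pointwise_mi(s, t):
    """i(s;t) = h(s) - h(s|t); can be negative, unlike its two unsigned components."""
    return specificity(s) - ambiguity(s, t)

print(specificity(0), ambiguity(0, 0), pointwise_mi(0, 0))
```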
Open Access · Editor’s Choice · Article
Polynomial-Time Algorithm for Learning Optimal BFS-Consistent Dynamic Bayesian Networks
Entropy 2018, 20(4), 274; https://doi.org/10.3390/e20040274 - 12 Apr 2018
Cited by 1
Abstract
Dynamic Bayesian networks (DBNs) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. Therefore, the community has searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm for learning optimal DBNs consistent with a breadth-first search (BFS) order, named bcDBN. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach increases exponentially, in the number of variables, the search space of the state-of-the-art tDBN algorithm. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well throughout different experiments. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open Access · Editor’s Choice · Article
Distance Entropy Cartography Characterises Centrality in Complex Networks
Entropy 2018, 20(4), 268; https://doi.org/10.3390/e20040268 - 11 Apr 2018
Cited by 11
Abstract
We introduce distance entropy as a measure of homogeneity in the distribution of path lengths between a given node and its neighbours in a complex network. Distance entropy defines a new centrality measure whose properties are investigated for a variety of synthetic network models. By coupling distance entropy information with closeness centrality, we introduce a network cartography which allows one to reduce the degeneracy of ranking based on closeness alone. We apply this methodology to the empirical multiplex lexical network encoding the linguistic relationships known to English-speaking toddlers. We show that the distance entropy cartography better predicts how children learn words compared to closeness centrality. Our results highlight the importance of distance entropy for gaining insights from distance patterns in complex networks. Full article
(This article belongs to the Special Issue Graph and Network Entropies)
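A minimal sketch of one plausible reading of distance entropy, taken here as the Shannon entropy of the distribution of shortest-path distances from a node to the other reachable nodes (the paper's exact definition and normalisation may differ; the helper name and the random test graph are illustrative):

```python
import networkx as nx
import numpy as np
from collections import Counter

def distance_entropy(G, node):
    """Shannon entropy (bits) of the shortest-path-length distribution from `node`.
    Assumes `node` can reach at least one other node."""
    lengths = nx.single_source_shortest_path_length(G, node)
    counts = Counter(d for n, d in lengths.items() if n != node)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))

G = nx.erdos_renyi_graph(50, 0.1, seed=1)
scores = {v: distance_entropy(G, v) for v in G}
print(sorted(scores, key=scores.get, reverse=True)[:5])  # most homogeneous distance profiles
```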
Open Access · Editor’s Choice · Article
Transductive Feature Selection Using Clustering-Based Sample Entropy for Temperature Prediction in Weather Forecasting
Entropy 2018, 20(4), 264; https://doi.org/10.3390/e20040264 - 10 Apr 2018
Cited by 7
Abstract
Entropy measures have long been of major interest to researchers as a way to measure the information content of a dynamical system. One of the well-known methodologies is sample entropy, which is a model-free approach that can be deployed to measure the information transfer in time series. Sample entropy is based on the conditional entropy, where a major concern is the number of past delays in the conditional term. In this study, we deploy a lag-specific conditional entropy to identify the informative past values. Moreover, considering the seasonality structure of the data, we propose a clustering-based sample entropy to exploit the temporal information. Clustering-based sample entropy is based on the sample entropy definition while considering the clustering information of the training data and the membership of the test point to the clusters. In this study, we utilize the proposed method for transductive feature selection in black-box weather forecasting and conduct experiments on minimum and maximum temperature prediction in Brussels for 1–6 days ahead. The results reveal that considering the local structure of the data can improve the feature selection performance. In addition, despite the large reduction in the number of features, the performance is competitive with the case of using all features. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access · Feature Paper · Editor’s Choice · Article
Leggett-Garg Inequalities for Quantum Fluctuating Work
Entropy 2018, 20(3), 200; https://doi.org/10.3390/e20030200 - 16 Mar 2018
Cited by 5
Abstract
The Leggett-Garg inequalities serve to test whether or not quantum correlations in time can be explained within a classical macrorealistic framework. We apply this test to thermodynamics and derive a set of Leggett-Garg inequalities for the statistics of fluctuating work done on a quantum system unitarily driven in time. It is shown that these inequalities can be violated in a driven two-level system, thereby demonstrating that there exists no general macrorealistic description of quantum work. These violations are shown to emerge within the standard Two-Projective-Measurement scheme as well as for alternative definitions of fluctuating work that are based on weak measurement. Our results elucidate the influences of temporal correlations on work extraction in the quantum regime and highlight a key difference between quantum and classical thermodynamics. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)
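For context, the simplest Leggett-Garg inequality for a dichotomic observable measured at three times reads as follows (this is the textbook three-time form; the inequalities for fluctuating work derived in the paper generalize this setting):

```latex
% Dichotomic variable Q(t_i) = \pm 1, two-time correlators C_{ij} = \langle Q(t_i) Q(t_j) \rangle:
K_3 \equiv C_{21} + C_{32} - C_{31} \le 1 .
```

Macrorealism bounds K_3 by 1, while quantum mechanics allows values up to 3/2 for a two-level system, so a measured violation rules out a macrorealistic description.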
Open Access · Editor’s Choice · Article
Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
Entropy 2018, 20(3), 173; https://doi.org/10.3390/e20030173 - 06 Mar 2018
Cited by 11
Abstract
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost for exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that, if a measure of Φ satisfies a mathematical property, submodularity, the MIP can be found in a polynomial order by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating the accuracy of the algorithm in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience) Printed Edition available
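To make the combinatorics concrete, here is the exponential-cost baseline the abstract mentions: an exhaustive search over bipartitions, with Gaussian mutual information across the cut used as a toy stand-in for Φ (the actual IIT measures, and the polynomial-time submodular optimization discussed in the paper, are not reproduced here):

```python
from itertools import combinations
import numpy as np

def toy_phi(cov, part_a, part_b):
    """Stand-in 'integration' across a cut: Gaussian mutual information between the parts."""
    ia, ib = list(part_a), list(part_b)
    det = np.linalg.det
    return 0.5 * np.log(det(cov[np.ix_(ia, ia)]) * det(cov[np.ix_(ib, ib)]) / det(cov))

def exhaustive_mip(cov):
    """Brute-force Minimum Information Partition over bipartitions (exponential in size)."""
    n = cov.shape[0]
    nodes = set(range(n))
    best = None
    for k in range(1, n // 2 + 1):
        for part_a in combinations(range(n), k):
            phi = toy_phi(cov, part_a, nodes - set(part_a))
            if best is None or phi < best[0]:
                best = (phi, set(part_a), nodes - set(part_a))
    return best

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
cov = A @ A.T + 6 * np.eye(6)     # positive-definite toy covariance of a 6-node system
print(exhaustive_mip(cov))        # (phi at the MIP, the two parts of the cut)
```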
Open Access · Editor’s Choice · Article
A Variational Formulation of Nonequilibrium Thermodynamics for Discrete Open Systems with Mass and Heat Transfer
Entropy 2018, 20(3), 163; https://doi.org/10.3390/e20030163 - 04 Mar 2018
Cited by 6
Abstract
We propose a variational formulation for the nonequilibrium thermodynamics of discrete open systems, i.e., discrete systems which can exchange mass and heat with the exterior. Our approach is based on a general variational formulation for systems with time-dependent nonlinear nonholonomic constraints and time-dependent Lagrangian. For discrete open systems, the time-dependent nonlinear constraint is associated with the rate of internal entropy production of the system. We show that this constraint on the solution curve systematically yields a constraint on the variations to be used in the action functional. The proposed variational formulation is intrinsic and provides the same structure for a wide class of discrete open systems. We illustrate our theory by presenting examples of open systems experiencing mechanical interactions, as well as internal diffusion, internal heat transfer, and their cross-effects. Our approach yields a systematic way to derive the complete evolution equations for the open systems, including the expression of the internal entropy production of the system, independently of its complexity. It might be especially useful for the study of the nonequilibrium thermodynamics of biophysical systems. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
Open Access · Editor’s Choice · Article
Minimising the Kullback–Leibler Divergence for Model Selection in Distributed Nonlinear Systems
Entropy 2018, 20(2), 51; https://doi.org/10.3390/e20020051 - 23 Jan 2018
Cited by 11
Abstract
The Kullback–Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. More specifically, these measures are applicable when selecting a candidate model for a distributed system, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on differential topology. We approach the problem by using reconstruction theorems to derive an analytical expression for the KL divergence of a candidate DAG from the observed dataset. Using this result, we present a scoring function based on transfer entropy to be used as a subroutine in a structure learning algorithm. We then demonstrate its use in recovering the structure of coupled Lorenz and Rössler systems. Full article
Open Access · Editor’s Choice · Article
Low Computational Cost for Sample Entropy
Entropy 2018, 20(1), 61; https://doi.org/10.3390/e20010061 - 13 Jan 2018
Cited by 17
Abstract
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used on long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the k-d trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to present even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the two last suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail at the similarity check. The number of avoided comparisons is proved to be very large, resulting in an analogously large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy. Full article
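For reference, the straightforward implementation that the three faster algorithms are benchmarked against can be sketched directly from the definition, using the maximum norm and the usual parameters m and r (the test signal below is a stand-in; the paper uses cardiac RR series):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample Entropy computed directly from its definition (maximum norm, O(N^2) checks)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    N = len(x)

    def count_matches(dim):
        # N - m templates of length `dim`, so the m and m+1 counts stay comparable.
        templates = np.array([x[i:i + dim] for i in range(N - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev (maximum-norm) distance from template i to every later template.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    B = count_matches(m)        # similar template pairs of length m
    A = count_matches(m + 1)    # similar template pairs of length m + 1
    return -np.log(A / B)

rng = np.random.default_rng(0)
signal = rng.normal(size=2000)                       # stand-in for an RR interval series
print(sample_entropy(signal, m=2, r=0.2 * np.std(signal)))
```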
Open Access · Editor’s Choice · Article
Transfer Entropy as a Tool for Hydrodynamic Model Validation
Entropy 2018, 20(1), 58; https://doi.org/10.3390/e20010058 - 12 Jan 2018
Cited by 7
Abstract
The validation of numerical models is an important component of modeling to ensure reliability of model outputs under prescribed conditions. In river deltas, robust validation of models is paramount given that models are used to forecast land change and to track water, solid, and solute transport through the deltaic network. We propose using transfer entropy (TE) to validate model results. TE quantifies the information transferred between variables in terms of strength, timescale, and direction. Using water level data collected in the distributary channels and inter-channel islands of Wax Lake Delta, Louisiana, USA, along with modeled water level data generated for the same locations using Delft3D, we assess how well couplings between external drivers (river discharge, tides, wind) and modeled water levels reproduce the observed data couplings. We perform this operation through time using ten-day windows. Modeled and observed couplings compare well; their differences reflect the spatial parameterization of wind and roughness in the model, which prevents the model from capturing high frequency fluctuations of water level. The model captures couplings better in channels than on islands, suggesting that mechanisms of channel-island connectivity are not fully represented in the model. Overall, TE serves as an additional validation tool to quantify the couplings of the system of interest at multiple spatial and temporal scales. Full article
(This article belongs to the Special Issue Transfer Entropy II)
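A minimal plug-in estimate of transfer entropy for two discretised series with history length one, TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]; the binning, lag and synthetic series are illustrative choices, not the estimator settings used in the study:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Histogram (plug-in) transfer entropy TE(X -> Y) in bits, history length 1."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = list(zip(yd[1:], yd[:-1], xd[:-1]))        # (y_{t+1}, y_t, x_t)
    n = len(triples)
    p_yyx = Counter(triples)
    p_yx = Counter((yt, xt) for _, yt, xt in triples)
    p_yy = Counter((ynext, yt) for ynext, yt, _ in triples)
    p_y = Counter(yt for _, yt, _ in triples)

    te = 0.0
    for (ynext, yt, xt), c in p_yyx.items():
        cond_full = c / p_yx[(yt, xt)]                   # p(y_{t+1} | y_t, x_t)
        cond_self = p_yy[(ynext, yt)] / p_y[yt]          # p(y_{t+1} | y_t)
        te += (c / n) * np.log2(cond_full / cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=5000)                                # e.g., an external driver
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)          # responds to x with a one-step lag
print(transfer_entropy(x, y), transfer_entropy(y, x))    # expect TE(x->y) >> TE(y->x)
```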
Open Access · Editor’s Choice · Article
Information Entropy Suggests Stronger Nonlinear Associations between Hydro-Meteorological Variables and ENSO
Entropy 2018, 20(1), 38; https://doi.org/10.3390/e20010038 - 09 Jan 2018
Cited by 6
Abstract
Understanding the teleconnections between hydro-meteorological data and the El Niño–Southern Oscillation cycle (ENSO) is an important step towards developing flood early warning systems. In this study, the concept of mutual information (MI) was applied using marginal and joint information entropy to quantify the linear and non-linear relationship between annual streamflow, extreme precipitation indices over the Mekong river basin, and ENSO. We primarily used Pearson correlation as a linear association metric for comparison with mutual information. The analysis was performed at four hydro-meteorological stations located on the mainstream of the Mekong river basin. It was observed that the nonlinear correlation between the large-scale climate index and local hydro-meteorological data is comparatively higher than the traditional linear correlation. The spatial analysis was carried out using all the grid points in the river basin, which suggests a spatial dependence structure between precipitation extremes and ENSO. Overall, this study suggests that the mutual information approach can detect more meaningful connections between large-scale climate indices and hydro-meteorological variables at different spatio-temporal scales. Application of the nonlinear mutual information metric can be an efficient tool to better understand the dynamics of hydro-climatic variables, resulting in improved climate-informed adaptation strategies. Full article
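The linear-versus-nonlinear comparison can be reproduced on any pair of series with a few lines; the histogram MI estimator and the synthetic stand-ins for an ENSO index and a streamflow series below are illustrative only:

```python
import numpy as np
from scipy.stats import pearsonr

def mutual_information(x, y, bins=8):
    """Plug-in mutual information (bits) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
enso = rng.normal(size=500)                     # stand-in climate index
flow = enso**2 + 0.3 * rng.normal(size=500)     # purely nonlinear response

r, _ = pearsonr(enso, flow)
print(r)                                        # near zero: the linear metric misses the link
print(mutual_information(enso, flow))           # clearly positive: MI detects it
```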
Open Access · Editor’s Choice · Article
Exact Renormalization Groups As a Form of Entropic Dynamics
Entropy 2018, 20(1), 25; https://doi.org/10.3390/e20010025 - 04 Jan 2018
Cited by 4
Abstract
The Renormalization Group (RG) is a set of methods that have been instrumental in tackling problems involving an infinite number of degrees of freedom, such as, for example, in quantum field theory and critical phenomena. What all these methods have in common—which is what explains their success—is that they allow a systematic search for those degrees of freedom that happen to be relevant to the phenomena in question. In the standard approaches the RG transformations are implemented by either coarse graining or through a change of variables. When these transformations are infinitesimal, the formalism can be described as a continuous dynamical flow in a fictitious time parameter. It is generally the case that these exact RG equations are functional diffusion equations. In this paper we show that the exact RG equations can be derived using entropic methods. The RG flow is then described as a form of entropic dynamics of field configurations. Although equivalent to other versions of the RG, in this approach the RG transformations receive a purely inferential interpretation that establishes a clear link to information theory. Full article

Review

Open Access · Editor’s Choice · Review
Levitated Nanoparticles for Microscopic Thermodynamics—A Review
Entropy 2018, 20(5), 326; https://doi.org/10.3390/e20050326 - 28 Apr 2018
Cited by 21
Abstract
Levitated nanoparticles have received much attention for their potential to perform quantum mechanical experiments even at room temperature. However, even in the regime where the particle dynamics are purely classical, there is a lot of interesting physics that can be explored. Here we review the application of levitated nanoparticles as a new experimental platform to explore stochastic thermodynamics in small systems. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Open Access · Feature Paper · Editor’s Choice · Review
Information Theoretic Approaches for Motor-Imagery BCI Systems: Review and Experimental Comparison
Entropy 2018, 20(1), 7; https://doi.org/10.3390/e20010007 - 02 Jan 2018
Cited by 7
Abstract
Brain computer interfaces (BCIs) have been attracting great interest in recent years. The common spatial patterns (CSP) technique is a well-established approach to the spatial filtering of electroencephalogram (EEG) data in BCI applications. Even though CSP was originally proposed from a heuristic viewpoint, it can also be built on very strong foundations using information theory. This paper reviews the relationship between CSP and several information-theoretic approaches, including the Kullback–Leibler divergence, the Beta divergence and the Alpha-Beta log-det (AB-LD) divergence. We also review other approaches based on the idea of selecting those features that are maximally informative about the class labels. The performance of all the methods is also compared via experiments. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
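A minimal sketch of the classical CSP computation that the review starts from, posed as a generalized eigenvalue problem on the two class covariance matrices (array shapes, trial counts and the synthetic data are assumptions for illustration):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial patterns for two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns 2 * n_pairs spatial filters, one per row.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)   # channel covariance per class

    C_a, C_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem  C_a w = lambda (C_a + C_b) w.
    eigvals, eigvecs = eigh(C_a, C_a + C_b)
    order = np.argsort(eigvals)                                # extreme eigenvalues discriminate best
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return eigvecs[:, picks].T

rng = np.random.default_rng(0)
trials_a = rng.normal(size=(30, 8, 250))
trials_b = rng.normal(size=(30, 8, 250)) * np.linspace(0.5, 2.0, 8)[None, :, None]
W = csp_filters(trials_a, trials_b)
features = np.log(np.var(W @ trials_a[0], axis=1))             # usual log-variance CSP features
print(W.shape, features.shape)
```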