Table of Contents

Entropy, Volume 17, Issue 5 (May 2015), Pages 2556-3517

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Displaying articles 1-49
Open Access Article: Information Decomposition and Synergy
Entropy 2015, 17(5), 3501-3517; https://doi.org/10.3390/e17053501
Received: 26 March 2015 / Revised: 12 May 2015 / Accepted: 19 May 2015 / Published: 22 May 2015
Cited by 23 | Viewed by 2668 | PDF Full-text (259 KB) | HTML Full-text | XML Full-text
Abstract
Recently, a series of papers addressed the problem of decomposing the information of two random variables into shared information, unique information and synergistic information. Several measures were proposed, although no consensus has yet been reached. Here, we compare these proposals with an older approach to defining synergistic information, based on projections onto exponential families containing only up-to-k-th-order interactions. We show that these measures are not compatible with a decomposition into unique, shared and synergistic information if one requires that all terms are always non-negative (local positivity). We illustrate the difference between the two measures for multivariate Gaussians. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
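The notion of synergy the paper addresses can be made concrete with the classic XOR example: neither input alone carries information about the output, yet together they determine it completely. The following sketch is a whole-minus-sum illustration only, not any of the specific decomposition measures compared in the paper:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(A;B) in bits from a list of equiprobable (a, b) samples."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# XOR: each input alone is independent of the output, but jointly
# they determine it -- the 1 bit of I((X1,X2);Y) is pure synergy.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
i_joint = mutual_information([((x1, x2), y) for x1, x2, y in samples])
i_1 = mutual_information([(x1, y) for x1, _, y in samples])
i_2 = mutual_information([(x2, y) for _, x2, y in samples])
synergy = i_joint - i_1 - i_2  # whole minus sum
```

For XOR this yields one bit of joint information with zero bits in each single input, so a decomposition satisfying local positivity must attribute the whole bit to the synergistic term.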
Open Access Article: Operational Reliability Assessment of Compressor Gearboxes with Normalized Lifting Wavelet Entropy from Condition Monitoring Information
Entropy 2015, 17(5), 3479-3500; https://doi.org/10.3390/e17053479
Received: 12 April 2015 / Accepted: 14 May 2015 / Published: 20 May 2015
Viewed by 2058 | PDF Full-text (1431 KB) | HTML Full-text | XML Full-text
Abstract
Classical reliability assessment methods rely predominantly on probability and statistical theories, which are insufficient for assessing the operational reliability of individual mechanical equipment with time-varying characteristics. A new approach is proposed to assess machinery operational reliability with normalized lifting wavelet entropy extracted from condition monitoring information. The machinery vibration signals with time-varying operational characteristics are first decomposed and reconstructed by means of a lifting wavelet packet transform. The relative energy of every reconstructed signal is computed as the percentage of that signal's energy in the whole signal energy. A normalized lifting wavelet entropy is then defined from the relative energies to reveal the machinery operational uncertainty. Finally, the operational reliability degree is defined as a quantitative value, obtained from the normalized lifting wavelet entropy, in the range [0, 1]. The proposed method is applied to the operational reliability assessment of the gearbox in an oxy-generator compressor to validate its effectiveness. Full article
(This article belongs to the Special Issue Wavelet Entropy: Computation and Applications)
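The entropy-based reliability degree described above can be sketched in a few lines. Assuming (hypothetically) that the lifting wavelet packet transform has already produced per-band energies, the normalized entropy of the relative energies, and its complement as a reliability degree, might look like:

```python
import math

def normalized_wavelet_entropy(band_energies):
    """Shannon entropy of the relative band energies, normalized to [0, 1]."""
    total = sum(band_energies)
    p = [e / total for e in band_energies if e > 0]
    return -sum(pi * math.log(pi) for pi in p) / math.log(len(band_energies))

def reliability_degree(band_energies):
    # Assumption: the reliability degree is taken as the complement of
    # the normalized entropy, so low operational uncertainty maps near 1.
    return 1.0 - normalized_wavelet_entropy(band_energies)

# Energy concentrated in one band -> low uncertainty, high reliability
healthy = [100.0, 1.0, 1.0, 1.0]
# Energy spread evenly across bands -> high uncertainty, low reliability
degraded = [25.0, 26.0, 24.0, 25.0]
```

How the paper actually maps entropy to a reliability degree may differ; this only illustrates the normalization into [0, 1].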
Open Access Article: Nonparametric Denoising Methods Based on Contourlet Transform with Sharp Frequency Localization: Application to Low Exposure Time Electron Microscopy Images
Entropy 2015, 17(5), 3461-3478; https://doi.org/10.3390/e17053461
Received: 24 February 2015 / Accepted: 29 April 2015 / Published: 20 May 2015
Cited by 5 | Viewed by 2116 | PDF Full-text (4085 KB) | HTML Full-text | XML Full-text
Abstract
Image denoising is a very important step in cryo-transmission electron microscopy (cryo-TEM) and energy-filtering TEM before 3D tomography reconstruction, as it addresses the high noise levels in these images, which lead to a loss of the contained information. High noise levels contribute in particular to difficulties in the alignment required for 3D tomography reconstruction. This paper investigates the denoising of TEM images acquired with a very low exposure time, with the primary objectives of enhancing the quality of these low-exposure-time TEM images and improving the alignment process. We propose denoising structures that combine multiple noisy copies of the TEM images. The structures are based on Bayesian estimation in transform domains instead of the spatial domain, building novel feature-preserving image denoising structures in the wavelet domain, the contourlet transform domain, and the contourlet transform with sharp frequency localization. Numerical image denoising experiments demonstrate the performance of the Bayesian approach in the contourlet transform domain in terms of improving the signal-to-noise ratio (SNR) and recovering fine details that may be hidden in the data. The SNR and the visual quality of the denoised images are considerably enhanced using these denoising structures that combine multiple noisy copies. The proposed methods also enable a reduction in the exposure time. Full article
Open Access Editorial: Maximum Entropy Applied to Inductive Logic and Reasoning
Entropy 2015, 17(5), 3458-3460; https://doi.org/10.3390/e17053458
Received: 8 May 2015 / Accepted: 13 May 2015 / Published: 18 May 2015
Viewed by 1531 | PDF Full-text (70 KB) | HTML Full-text | XML Full-text
Abstract
This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers. Full article
(This article belongs to the Special Issue Maximum Entropy Applied to Inductive Logic and Reasoning)
Open Access Article: Heat Transfer and Pressure Drop Characteristics in Straight Microchannel of Printed Circuit Heat Exchangers
Entropy 2015, 17(5), 3438-3457; https://doi.org/10.3390/e17053438
Received: 11 January 2015 / Revised: 11 May 2015 / Accepted: 13 May 2015 / Published: 18 May 2015
Cited by 12 | Viewed by 3335 | PDF Full-text (4133 KB) | HTML Full-text | XML Full-text
Abstract
Performance tests were carried out for a microchannel printed circuit heat exchanger (PCHE), which was fabricated with micro photo-etching and diffusion bonding technologies. The microchannel PCHE was tested for Reynolds numbers in the range of 100‒850, varying the hot-side inlet temperature between 40 °C and 50 °C while keeping the cold-side temperature fixed at 20 °C. It was found that the average heat transfer rate and heat transfer performance of the countercurrent configuration were 6.8% and 10%‒15% higher, respectively, than those of the parallel flow. The average heat transfer rate, heat transfer performance and pressure drop increased with increasing Reynolds number in all experiments. Increasing the inlet temperature did not affect the heat transfer performance, while it slightly decreased the pressure drop in the experimental range considered. Empirical correlations have been developed for the heat transfer coefficient and pressure drop factor as functions of the Reynolds number. Full article
Open Access Article: Minimum Error Entropy Algorithms with Sparsity Penalty Constraints
Entropy 2015, 17(5), 3419-3437; https://doi.org/10.3390/e17053419
Received: 30 January 2015 / Revised: 28 April 2015 / Accepted: 5 May 2015 / Published: 18 May 2015
Cited by 7 | Viewed by 1738 | PDF Full-text (915 KB) | HTML Full-text | XML Full-text
Abstract
Recently, sparse adaptive learning algorithms have been developed to exploit system sparsity as well as to mitigate various noise disturbances in many applications. In particular, in sparse channel estimation, the parameter vector with sparsity characteristics can be well estimated from noisy measurements through a sparse adaptive filter. Most previous works use a mean square error (MSE) based cost to develop sparse filters, which is rational under the assumption of Gaussian distributions. However, the Gaussian assumption does not always hold in real-world environments. To address this issue, we incorporate in this work an l1-norm or a reweighted l1-norm into the minimum error entropy (MEE) criterion to develop new sparse adaptive filters, which may perform much better than the MSE-based methods, especially in heavy-tailed non-Gaussian situations, since the error entropy can capture higher-order statistics of the errors. In addition, a new approximator of the l0-norm, based on the correntropy induced metric (CIM), is also used as a sparsity penalty term (SPT). We analyze the mean square convergence of the proposed new sparse adaptive filters. An energy conservation relation is derived and a sufficient condition that ensures mean square convergence is obtained. Simulation results confirm the superior performance of the new algorithms. Full article
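As a rough illustration of the idea (not the authors' exact algorithms), one sliding-window update of an MEE adaptive filter with a Gaussian kernel and an l1 zero-attractor can be sketched as follows; the step size mu, kernel width sigma and penalty weight rho are hypothetical choices:

```python
import math

def mee_l1_update(w, X, d, mu=0.05, sigma=1.0, rho=1e-3):
    """One sliding-window update of a minimum error entropy (MEE)
    adaptive filter with an l1 zero-attractor sparsity penalty.
    w: weights, X: window of input vectors, d: desired outputs."""
    N = len(X)
    e = [d[i] - sum(wi * xi for wi, xi in zip(w, X[i])) for i in range(N)]
    grad = [0.0] * len(w)
    # Gradient of the Gaussian-kernel information potential
    # V = (1/N^2) sum_ij exp(-(e_i - e_j)^2 / (2 sigma^2));
    # maximizing V minimizes the (quadratic Renyi) error entropy.
    for i in range(N):
        for j in range(N):
            diff = e[i] - e[j]
            g = math.exp(-diff * diff / (2 * sigma ** 2)) * diff / sigma ** 2
            for k in range(len(w)):
                grad[k] += g * (X[i][k] - X[j][k])
    sign = lambda v: (v > 0) - (v < 0)
    # Ascend the information potential, then shrink weights toward zero
    return [wi + mu * gk / (N * N) - rho * sign(wi)
            for wi, gk in zip(w, grad)]
```

Iterating this update on data generated by a sparse system drives the active weights toward their true values while the sign term attracts inactive taps to zero; the CIM-based l0 approximator in the paper would replace that sign term.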
Open Access Article: Entropy Approximation in Lossy Source Coding Problem
Entropy 2015, 17(5), 3400-3418; https://doi.org/10.3390/e17053400
Received: 26 March 2015 / Revised: 11 May 2015 / Accepted: 12 May 2015 / Published: 18 May 2015
Cited by 4 | Viewed by 1527 | PDF Full-text (359 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we investigate a lossy source coding problem in which an upper limit on the permitted distortion is defined for every dataset element. It can be seen as an alternative to rate distortion theory, where a bound on the allowed average error is specified instead. In order to find the entropy, which gives the statistical length of a source code compatible with a fixed distortion bound, a corresponding optimization problem has to be solved. First, we show how to simplify this general optimization by reducing the number of coding partitions that are irrelevant for the entropy calculation. In our main result, we present a fast, easy-to-implement greedy algorithm that approximates the entropy within an additive error term of log2 e. The proof is based on the minimum entropy set cover problem, for which a similar bound was obtained. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
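The flavor of such a greedy scheme can be sketched on the underlying minimum entropy set cover problem: repeatedly assign the largest number of still-uncovered elements to one set, then take the entropy of the resulting partition. This toy version (representation and names are illustrative, not the paper's algorithm) is the greedy rule for which an additive log2 e bound is known:

```python
import math

def greedy_partition_entropy(points, cover_sets):
    """Greedy minimum entropy set cover: assign elements to sets chosen
    greedily by maximum new coverage, then return the entropy (in bits)
    of the induced partition.  Assumes cover_sets jointly covers points."""
    uncovered = set(points)
    part_sizes = []
    while uncovered:
        best = max(cover_sets, key=lambda s: len(s & uncovered))
        gained = best & uncovered
        part_sizes.append(len(gained))
        uncovered -= gained
    n = len(points)
    return -sum((k / n) * math.log2(k / n) for k in part_sizes)
```

With a single set covering everything the entropy is 0 bits; two disjoint halves give exactly 1 bit.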
Open Access Article: Non-Abelian Topological Approach to Non-Locality of a Hypergraph State
Entropy 2015, 17(5), 3376-3399; https://doi.org/10.3390/e17053376
Received: 16 February 2015 / Revised: 16 April 2015 / Accepted: 8 May 2015 / Published: 15 May 2015
Cited by 4 | Viewed by 1540 | PDF Full-text (3729 KB) | HTML Full-text | XML Full-text
Abstract
We present a theoretical study of new families of stochastic complex information modules encoded in hypergraph states which are defined by the fractional entropic descriptor. The essential connection between the Lyapunov exponents and the d-regular hypergraph fractal set is elucidated. To further resolve the divergence in the complexity of classical and quantum representations of a hypergraph, we have investigated the notion of non-amenability and its relation to the combinatorics of dynamical self-organization for the case of a fractal system of a free group on finitely many generators. The exact relation between the notion of hypergraph non-locality and quantum encoding through system sets of specified non-Abelian fractal geometric structures is presented. The results obtained give important impetus toward the design of approximation algorithms for chip-imprinted circuits in scalable quantum information systems. Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
Open Access Article: Nonlinear Stochastic Control and Information Theoretic Dualities: Connections, Interdependencies and Thermodynamic Interpretations
Entropy 2015, 17(5), 3352-3375; https://doi.org/10.3390/e17053352
Received: 2 February 2015 / Revised: 21 April 2015 / Accepted: 29 April 2015 / Published: 15 May 2015
Cited by 6 | Viewed by 2550 | PDF Full-text (748 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present connections between recent developments on the linearly-solvable stochastic optimal control framework and early work in control theory based on the fundamental dualities between free energy and relative entropy. We extend these connections to nonlinear stochastic systems with non-affine controls by using the generalized version of the Feynman–Kac lemma. We present alternative formulations of the linearly-solvable stochastic optimal control framework and discuss information theoretic and thermodynamic interpretations. On the algorithmic side, we present iterative stochastic optimal control algorithms and applications to nonlinear stochastic systems. We conclude with an overview of the frameworks presented and discuss limitations, differences and future directions. Full article
Open Access Feature Paper Article: An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro
Entropy 2015, 17(5), 3332-3351; https://doi.org/10.3390/e17053332
Received: 13 March 2015 / Accepted: 11 May 2015 / Published: 14 May 2015
Cited by 2 | Viewed by 2930 | PDF Full-text (8499 KB) | HTML Full-text | XML Full-text
Abstract
An information-theoretic perspective on coarse-graining is presented. It starts with an information characterization of configurations at the micro-level using a local information quantity that has a spatial average equal to a microscopic entropy. With a reversible micro dynamics, this entropy is conserved. In the micro-macro transition, it is shown how this local information quantity is transformed into a macroscopic entropy, as the local states are aggregated into macroscopic concentration variables. The information loss in this transition is identified, and the connection to the irreversibility of the macro dynamics and the second law of thermodynamics is discussed. This is then connected to a process of further coarse-graining towards higher characteristic length scales in the context of chemical reaction-diffusion dynamics capable of pattern formation. On these higher levels of coarse-graining, information flows across length scales and across space are defined. These flows obey a continuity equation for information, and they are connected to the thermodynamic constraints of the system, via an outflow of information from macroscopic to microscopic levels in the form of entropy production, as well as an inflow of information, from an external free energy source, if a spatial chemical pattern is to be maintained. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
Open Access Article: A Mean-Variance Hybrid-Entropy Model for Portfolio Selection with Fuzzy Returns
Entropy 2015, 17(5), 3319-3331; https://doi.org/10.3390/e17053319
Received: 4 February 2015 / Accepted: 20 April 2015 / Published: 14 May 2015
Cited by 6 | Viewed by 1689 | PDF Full-text (791 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we define the portfolio return as fuzzy average yield and risk as hybrid-entropy and variance to deal with the portfolio selection problem with both random uncertainty and fuzzy uncertainty, and propose a mean-variance hybrid-entropy model (MVHEM). A multi-objective genetic algorithm named Non-dominated Sorting Genetic Algorithm II (NSGA-II) is introduced to solve the model. We make empirical comparisons by using the data from the Shanghai and Shenzhen stock exchanges in China. The results show that the MVHEM generally performs better than the traditional portfolio selection models. Full article
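NSGA-II, used above to solve the model, ranks candidate solutions by non-dominated sorting. A minimal sketch of its first step, extracting the Pareto front for two objectives to be minimized (say, risk and negative return; the sample numbers are made up):

```python
def pareto_front(points):
    """Return the non-dominated points for two minimization objectives.
    A point is dominated if some other, different point is no worse in
    both objectives."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# (risk, -return) pairs for four hypothetical portfolios
candidates = [(1, 5), (2, 2), (5, 1), (3, 3)]
front = pareto_front(candidates)  # (3, 3) is dominated by (2, 2)
```

The full algorithm repeats this ranking on successive fronts and adds crowding-distance selection; this sketch covers only the dominance test.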
Open Access Article: The Homological Nature of Entropy
Entropy 2015, 17(5), 3253-3318; https://doi.org/10.3390/e17053253
Received: 31 January 2015 / Revised: 3 May 2015 / Accepted: 5 May 2015 / Published: 13 May 2015
Cited by 5 | Viewed by 3961 | PDF Full-text (510 KB) | HTML Full-text | XML Full-text
Abstract
We propose that entropy is a universal co-homological class in a theory associated to a family of observable quantities and a family of probability distributions. Three cases are presented: (1) classical probabilities and random variables; (2) quantum probabilities and observable operators; (3) dynamic probabilities and observation trees. This gives rise to a new kind of topology for information processes, which accounts for the main information functions (entropy, mutual information at all orders, and the Kullback–Leibler divergence) and generalizes them in several ways. The article is divided into two parts, which can be read independently. In the first part, the introduction, we provide an overview of the results, some open questions, future results and lines of research, and briefly discuss the application to complex data. In the second part we give the complete definitions and proofs of theorems A, C and E of the introduction, which show why entropy is the first homological invariant of a structure of information in four contexts: static classical or quantum probability, and dynamics of classical or quantum strategies of observation of a finite system. Full article
Open Access Article: Generalized Stochastic Fokker-Planck Equations
Entropy 2015, 17(5), 3205-3252; https://doi.org/10.3390/e17053205
Received: 2 March 2015 / Revised: 23 April 2015 / Accepted: 27 April 2015 / Published: 13 May 2015
Cited by 6 | Viewed by 2234 | PDF Full-text (382 KB) | HTML Full-text | XML Full-text
Abstract
We consider a system of Brownian particles with long-range interactions. We go beyond the mean field approximation and take fluctuations into account. We introduce a new class of stochastic Fokker-Planck equations associated with a generalized thermodynamical formalism. Generalized thermodynamics arises in the case of complex systems experiencing small-scale constraints. In the limit of short-range interactions, we obtain a generalized class of stochastic Cahn-Hilliard equations. Our formalism applies to several systems of physical interest, including self-gravitating Brownian particles, colloid particles at a fluid interface, type-II superconductors, nucleation, the chemotaxis of bacterial populations, and two-dimensional turbulence. We also introduce a new type of generalized entropy taking into account anomalous diffusion and exclusion or inclusion constraints. Full article
(This article belongs to the Special Issue Entropic Aspects in Statistical Physics of Complex Systems)
Open Access Article: Quantum Data Locking for Secure Communication against an Eavesdropper with Time-Limited Storage
Entropy 2015, 17(5), 3194-3204; https://doi.org/10.3390/e17053194
Received: 6 April 2015 / Revised: 6 May 2015 / Accepted: 7 May 2015 / Published: 13 May 2015
Cited by 2 | Viewed by 1512 | PDF Full-text (94 KB) | HTML Full-text | XML Full-text
Abstract
Quantum cryptography allows for unconditionally secure communication against an eavesdropper endowed with unlimited computational power and perfect technologies, who is only constrained by the laws of physics. We review recent results showing that, under the assumption that the eavesdropper can store quantum information only for a limited time, it is possible to enhance the performance of quantum key distribution in both a quantitative and qualitative fashion. We consider quantum data locking as a cryptographic primitive and discuss secure communication and key distribution protocols. For the case of a lossy optical channel, this yields the theoretical possibility of generating secret key at a constant rate of 1 bit per mode at arbitrarily long communication distances. Full article
(This article belongs to the Special Issue Quantum Cryptography)
Open Access Article: Exact Solutions of Non-Linear Lattice Equations by an Improved Exp-Function Method
Entropy 2015, 17(5), 3182-3193; https://doi.org/10.3390/e17053182
Received: 9 April 2015 / Revised: 29 April 2015 / Accepted: 30 April 2015 / Published: 13 May 2015
Cited by 5 | Viewed by 1619 | PDF Full-text (233 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the exp-function method is improved to construct exact solutions of non-linear lattice equations by modifying its exponential function ansatz. The improved method has two advantages. One is that it can solve non-linear lattice equations with variable coefficients, and the other is that it is not necessary to balance the highest order derivative with the highest order nonlinear term when determining the exponential function ansatz. To show the advantages of this improved method, a variable-coefficient mKdV lattice equation is considered. As a result, new exact solutions, which include kink-type solutions and bell-kink-type solutions, are obtained. Full article
(This article belongs to the Special Issue Non-Linear Lattice; a Printed Edition is available)
Open Access Article: Existence of Ulam Stability for Iterative Fractional Differential Equations Based on Fractional Entropy
Entropy 2015, 17(5), 3172-3181; https://doi.org/10.3390/e17053172
Received: 12 March 2015 / Revised: 27 April 2015 / Accepted: 11 May 2015 / Published: 13 May 2015
Cited by 15 | Viewed by 1475 | PDF Full-text (202 KB) | HTML Full-text | XML Full-text
Abstract
In this study, we introduce conditions for the existence of solutions for an iterative functional differential equation of fractional order. We prove that the solutions of the above class of fractional differential equations are bounded by Tsallis entropy. The method depends on the concept of Hyers-Ulam stability. The arbitrary order is suggested in the sense of Riemann-Liouville calculus. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
Open Access Article: Effect of Heterogeneity in Initial Geographic Distribution on Opinions’ Competitiveness
Entropy 2015, 17(5), 3160-3171; https://doi.org/10.3390/e17053160
Received: 16 February 2015 / Revised: 7 May 2015 / Accepted: 11 May 2015 / Published: 13 May 2015
Cited by 1 | Viewed by 1715 | PDF Full-text (821 KB) | HTML Full-text | XML Full-text
Abstract
Spin dynamics on networks allows us to understand how a global consensus emerges out of individual opinions. Here, we are interested in the effect of heterogeneity in the initial geographic distribution of a competing opinion on that opinion's competitiveness. Accordingly, in this work, we studied the effect of spatial heterogeneity on majority rule dynamics using a three-state spin model, in which one state is neutral. Monte Carlo simulations were performed on square lattices divided into square blocks (cells). One competing opinion was distributed uniformly among cells, whereas the spatial distribution of the rival opinion was varied from uniform to heterogeneous, with the median-to-mean ratio in the range from 1 to 0. When the size of the discussion group is odd, the uncommitted agents disappear completely after 3.30 ± 0.05 update cycles, and then the system evolves in a two-state regime with complementary spatial distributions of the two competing opinions. Even so, the initial heterogeneity in the spatial distribution of one of the competing opinions causes a decrease in that opinion's competitiveness. That is, the opinion with an initially heterogeneous spatial distribution is less likely to win than the opinion with an initially uniform spatial distribution, even when the initial concentrations of both opinions are equal. We found that, although the time to consensus […], the opinion's recession rate is determined during the first 3.3 update cycles. On the other hand, we found that the initial heterogeneity of the opinion's spatial distribution assists the formation of quasi-stable regions in which this opinion is dominant. The results of the Monte Carlo simulations are discussed with regard to the electoral competition of political parties. Full article
(This article belongs to the Section Complexity)
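The majority rule step driving these simulations can be sketched for a single discussion group. The handling of ties and neutral agents below is a guess at one common convention, since the abstract does not spell it out:

```python
def majority_update(group):
    """Majority rule for one discussion group of opinions
    (+1, -1, or 0 for neutral): everyone adopts the majority
    non-neutral opinion; a tie leaves the group unchanged."""
    plus, minus = group.count(1), group.count(-1)
    if plus > minus:
        return [1] * len(group)
    if minus > plus:
        return [-1] * len(group)
    return list(group)
```

With an odd group size, the two committed counts can tie only while neutral agents remain, which is consistent with the neutral state dying out after the first few update cycles.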
Open Access Review: Continuous-Variable Entanglement Swapping
Entropy 2015, 17(5), 3152-3159; https://doi.org/10.3390/e17053152
Received: 26 March 2015 / Revised: 1 May 2015 / Accepted: 4 May 2015 / Published: 13 May 2015
Cited by 1 | Viewed by 1545 | PDF Full-text (217 KB) | HTML Full-text | XML Full-text
Abstract
We present a very brief overview of entanglement swapping as it relates to continuous-variable quantum information. The technical background required is discussed, and the natural link to quantum teleportation is established before discussing the nature of Gaussian entanglement swapping. The limitations of Gaussian swapping are introduced, along with the general applications of swapping in the context of quantum communication and entanglement distribution. In light of this, we briefly summarize a collection of entanglement swapping schemes that incorporate a non-Gaussian ingredient, and the benefits of such schemes are noted. Finally, we motivate the need to further study and develop such schemes by highlighting the requirements of a continuous-variable repeater. Full article
(This article belongs to the Special Issue Quantum Cryptography)
Open Access Article: 2D Temperature Analysis of Energy and Exergy Characteristics of Laminar Steady Flow across a Square Cylinder under Strong Blockage
Entropy 2015, 17(5), 3124-3151; https://doi.org/10.3390/e17053124
Received: 10 March 2015 / Revised: 30 April 2015 / Accepted: 7 May 2015 / Published: 12 May 2015
Cited by 2 | Viewed by 1723 | PDF Full-text (2601 KB) | HTML Full-text | XML Full-text
Abstract
Energy and exergy characteristics of a square cylinder (SC) in confined flow are investigated computationally by numerically handling the steady-state continuity, Navier-Stokes and energy equations in the Reynolds number range of Re = 10–50, where the blockage ratio (β = B/H) is kept constant at the high level of β = 0.8. Computations indicated for the upstream region that the mean non-dimensional streamwise (u/Uo) and spanwise (v/Uo) velocities attain the values of u/Uo = 0.840→0.879 and v/Uo = 0.236→0.386 (Re = 10→50) on the front-surface of the SC, implying that the Reynolds number and blockage have a stronger impact on the spanwise momentum activity. It is determined that flows with high Reynolds number interact with the front-surface of the SC, developing thinner thermal boundary layers and greater temperature gradients, which promotes the thermal entropy generation values as well. The strict guidance of the throat not only resulted in a fully developed flow character, but also imposed additional cooling, such that the analysis pointed out the drop of the duct wall (y = 0.025 m) non-dimensional temperature values (ζ) from ζ = 0.387→0.926 (Re = 10→50) at xth = 0 mm to ζ = 0.002→0.266 at xth = 40 mm. In the downstream region, spanwise thermal disturbances are most evident in the vortex-driven region, where the temperature values show decreasing trends in the spanwise direction. In the corresponding domain, exergy destruction is determined to grow with Reynolds number and to decrease in the streamwise direction (xds = 0→10 mm). Besides, asymmetric entropy distributions were recorded as well, due to the comprehensive mixing caused by the vortex system. Full article
(This article belongs to the Special Issue Exergy: Analysis and Applications)
Open Access Review The Multiscale Entropy Algorithm and Its Variants: A Review
Entropy 2015, 17(5), 3110-3123; https://doi.org/10.3390/e17053110
Received: 18 March 2015 / Accepted: 8 May 2015 / Published: 12 May 2015
Cited by 70 | Viewed by 3166 | PDF Full-text (197 KB) | HTML Full-text | XML Full-text
Abstract
Multiscale entropy (MSE) analysis was introduced in 2002 to evaluate the complexity of a time series by quantifying its entropy over a range of temporal scales. The algorithm has been successfully applied in different research fields. Since its introduction, a number of modifications and refinements have been proposed, some aimed at increasing the accuracy of the entropy estimates, others at exploring alternative coarse-graining procedures. In this review, we first describe the original MSE algorithm. Then, we review algorithms that have been introduced to improve the estimation of MSE. We also report a recent generalization of the method to higher moments. Full article
(This article belongs to the Special Issue Multiscale Entropy and Its Applications in Medicine and Biology)
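The coarse-graining procedure and sample-entropy estimate at the core of the original MSE algorithm can be sketched as follows (a minimal illustration, not the authors' reference implementation; the embedding dimension m = 2 and tolerance r = 0.2, held fixed across scales from the original series' standard deviation, follow common practice):

```python
import math
import statistics

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def _template_matches(x, m, tol):
    """Count pairs of length-m templates whose Chebyshev distance is <= tol."""
    n = len(x)
    count = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= tol:
                count += 1
    return count

def sample_entropy(x, m, tol):
    """SampEn = -log(A/B), with A, B the (m+1)- and m-template match counts."""
    b = _template_matches(x, m, tol)
    a = _template_matches(x, m + 1, tol)
    if a == 0 or b == 0:
        return math.inf  # too few matches to estimate
    return -math.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3, 4), m=2, r=0.2):
    tol = r * statistics.pstdev(x)  # tolerance fixed from the original series
    return [sample_entropy(coarse_grain(x, s), m, tol) for s in scales]
```

For a regular signal the entropy stays low across scales, while uncorrelated noise loses entropy as the scale grows — the contrast the original MSE paper exploits.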
Open Access Article Exponential Outer Synchronization between Two Uncertain Time-Varying Complex Networks with Nonlinear Coupling
Entropy 2015, 17(5), 3097-3109; https://doi.org/10.3390/e17053097
Received: 5 March 2015 / Revised: 27 April 2015 / Accepted: 5 May 2015 / Published: 11 May 2015
Cited by 13 | Viewed by 1741 | PDF Full-text (289 KB) | HTML Full-text | XML Full-text
Abstract
This paper studies the problem of exponential outer synchronization between two uncertain nonlinearly coupled complex networks with time delays. In order to synchronize uncertain complex networks, an adaptive control scheme is designed based on the Lyapunov stability theorem. Simultaneously, the unknown system parameters of uncertain complex networks are identified when exponential outer synchronization occurs. Finally, numerical examples are provided to demonstrate the feasibility and effectiveness of the theoretical results. Full article
(This article belongs to the Special Issue Recent Advances in Chaos Theory and Complex Networks)
Open Access Article Predicting Community Evolution in Social Networks
Entropy 2015, 17(5), 3053-3096; https://doi.org/10.3390/e17053053
Received: 28 February 2015 / Revised: 4 May 2015 / Accepted: 5 May 2015 / Published: 11 May 2015
Cited by 14 | Viewed by 2528 | PDF Full-text (8691 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays, sustained development of different social media can be observed worldwide. One of the relevant research domains intensively explored recently is the analysis of social communities existing in social media, as well as the prediction of their future evolution based on collected historical evolution chains. The evolution chains proposed in the paper contain group states in the previous time frames and their historical transitions, identified using one of two methods: Stable Group Changes Identification (SGCI) and Group Evolution Discovery (GED). Based on the observed evolution chains of various lengths, structural network features are extracted, validated, selected and used to learn classification models. The experimental studies were performed on three real datasets with different profiles: DBLP, Facebook and the Polish blogosphere. The process of group prediction was analysed with respect to different classifiers as well as various descriptive feature sets extracted from evolution chains of different lengths. The results revealed that, in general, the longer the evolution chains, the better the predictive abilities of the classification models. However, chains of length 3 to 7 enabled the GED-based method to almost reach its maximum possible prediction quality; for SGCI, this level was reached with the last 3–5 periods. Full article
(This article belongs to the Section Complexity)
Open Access Communication Dimensional Upgrade Approach for Spatial-Temporal Fusion of Trend Series in Subsidence Evaluation
Entropy 2015, 17(5), 3035-3052; https://doi.org/10.3390/e17053035
Received: 16 September 2014 / Revised: 15 April 2015 / Accepted: 29 April 2015 / Published: 11 May 2015
Cited by 2 | Viewed by 1987 | PDF Full-text (2419 KB) | HTML Full-text | XML Full-text
Abstract
Physical models and grey system models (GSMs) are commonly used to evaluate and predict physical behavior. A physical model avoids the incorrect trend series of a GSM, whereas a GSM avoids the assumptions and uncertainty of a physical model. A technique that combines the results of physical models and GSMs would make prediction more reasonable and reliable. This study proposes a fusion method that combines two trend series, each calculated with a one-dimensional model, using a slope criterion and a distance weighting factor in the temporal and spatial domains. The independent one-dimensional evaluations are thus upgraded to a spatially and temporally connected two-dimensional distribution. The proposed technique was applied to a subsidence problem in the Jhuoshuei River Alluvial Fan, Taiwan. The fusion results show dramatic decreases in subsidence quantity and rate compared to those estimated by the GSM. The subsidence behavior estimated using the proposed method is physically reasonable due to a convergent trend of subsidence under the assumption of constant groundwater discharge. The technique proposed in this study can be used in fields that require a combination of two trend series from physical and nonphysical models. Full article
(This article belongs to the Special Issue Entropy and Space-Time Analysis in Environment and Health)
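The spatial side of such a fusion — inverse-distance weighting of two co-located trend series, with a slope criterion to flag physically implausible rates — can be sketched in a deliberately simplified form (the exact weighting form and threshold used in the paper may differ; the function names here are illustrative):

```python
def fuse_trends(series_a, series_b, dist_a, dist_b):
    """Inverse-distance weighted fusion of two trend series sampled at the
    same time steps. dist_a/dist_b: distances from the evaluation point to
    the locations where each one-dimensional model was calibrated."""
    wa, wb = 1.0 / dist_a, 1.0 / dist_b
    total = wa + wb
    return [(wa * a + wb * b) / total for a, b in zip(series_a, series_b)]

def slope_check(series, max_slope):
    """Slope criterion: flag steps whose rate of change exceeds a bound."""
    return [abs(b - a) <= max_slope for a, b in zip(series, series[1:])]
```

Evaluating the fused series on a grid of points, each weighted toward its nearer model, is what upgrades the independent one-dimensional evaluations to a connected two-dimensional distribution.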
Open Access Review Log-Determinant Divergences Revisited: Alpha-Beta and Gamma Log-Det Divergences
Entropy 2015, 17(5), 2988-3034; https://doi.org/10.3390/e17052988
Received: 19 December 2014 / Revised: 18 March 2015 / Accepted: 5 May 2015 / Published: 8 May 2015
Cited by 11 | Viewed by 2151 | PDF Full-text (759 KB) | HTML Full-text | XML Full-text
Abstract
This work reviews and extends a family of log-determinant (log-det) divergences for symmetric positive definite (SPD) matrices and discusses their fundamental properties. We show how to use parameterized Alpha-Beta (AB) and Gamma log-det divergences to generate many well-known divergences; in particular, we consider Stein's loss, the S-divergence, also called Jensen-Bregman LogDet (JBLD) divergence, the Logdet Zero (Bhattacharyya) divergence, the Affine Invariant Riemannian Metric (AIRM), and other divergences. Moreover, we establish links and correspondences between log-det divergences and visualise them on an alpha-beta plane for various sets of parameters. We use this unifying framework to interpret and extend existing similarity measures for semidefinite covariance matrices in finite-dimensional Reproducing Kernel Hilbert Spaces (RKHS). This paper also shows how the Alpha-Beta family of log-det divergences relates to the divergences of multivariate and multilinear normal distributions. Closed form formulas are derived for Gamma divergences of two multivariate Gaussian densities; the special cases of the Kullback-Leibler, Bhattacharyya, Rényi, and Cauchy-Schwarz divergences are discussed. Symmetrized versions of log-det divergences are also considered and briefly reviewed. Finally, a class of divergences is extended to multiway divergences for separable covariance (or precision) matrices. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
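Two members of this family have compact closed forms that are easy to check numerically: Stein's loss D(A, B) = tr(AB⁻¹) − log det(AB⁻¹) − n, and the S-divergence (JBLD) S(A, B) = log det((A + B)/2) − ½ log det(AB). A quick sketch for SPD matrices (our own minimal check, not code from the paper):

```python
import numpy as np

def steins_loss(A, B):
    """Stein's loss between SPD matrices: tr(AB^-1) - log det(AB^-1) - n."""
    n = A.shape[0]
    M = A @ np.linalg.inv(B)
    return float(np.trace(M) - np.log(np.linalg.det(M)) - n)

def jbld(A, B):
    """Jensen-Bregman LogDet (S-)divergence between SPD matrices."""
    return float(np.log(np.linalg.det((A + B) / 2))
                 - 0.5 * np.log(np.linalg.det(A @ B)))
```

Both vanish if and only if A = B and are positive otherwise, which is the basic sanity property shared across the log-det family.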
Open Access Article Kolmogorov Complexity Based Information Measures Applied to the Analysis of Different River Flow Regimes
Entropy 2015, 17(5), 2973-2987; https://doi.org/10.3390/e17052973
Received: 14 January 2015 / Revised: 26 March 2015 / Accepted: 6 May 2015 / Published: 8 May 2015
Cited by 5 | Viewed by 1786 | PDF Full-text (4703 KB) | HTML Full-text | XML Full-text
Abstract
We have used the Kolmogorov complexities and the Kolmogorov complexity spectrum to quantify the degree of randomness in river flow time series of seven rivers with different regimes in Bosnia and Herzegovina, representing their different types of courses, for the period 1965–1986. In particular, we have examined: (i) the Neretva, Bosnia and the Drina (mountain and lowland parts), (ii) the Miljacka and the Una (mountain part) and the Vrbas and the Ukrina (lowland part) and then calculated the Kolmogorov complexity (KC) based on the Lempel–Ziv Algorithm (LZA) (lower—KCL and upper—KCU), Kolmogorov complexity spectrum highest value (KCM) and overall Kolmogorov complexity (KCO) values for each time series. The results indicate that the KCL, KCU, KCM and KCO values in the seven rivers show some similarities regardless of the amplitude differences in their monthly flow rates. The KCL, KCU and KCM complexities as information measures do not “see” a difference between time series which have different amplitude variations but similar random components. However, it seems that the KCO information measure better takes into account both the amplitude and the place of the components in a time series. Full article
(This article belongs to the Special Issue Entropy in Hydrology)
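The KCL/KCU measures above build on the Lempel–Ziv (LZ76) phrase count of the flow series binarized around a threshold. A minimal sketch of that pipeline (median binarization and the standard n/log₂(n) normalization; an illustration of the LZA idea, not the authors' code):

```python
import math
import statistics

def lz_complexity(s):
    """LZ76 complexity: number of distinct phrases found scanning `s`
    left to right (Kaspar-Schuster counting scheme)."""
    i, c, l = 0, 1, 1
    k, k_max = 1, 1
    n = len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:          # no earlier match: new phrase starts here
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def kolmogorov_complexity(x):
    """Binarize a series around its median, then normalize the LZ76
    phrase count by n / log2(n)."""
    med = statistics.median(x)
    s = ''.join('1' if v >= med else '0' for v in x)
    n = len(s)
    return lz_complexity(s) * math.log2(n) / n
```

A regular (e.g. strongly seasonal) flow series yields values near 0, while a random-like series approaches 1 — the contrast the KC-based measures exploit across river regimes.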
Open Access Article Maximum Entropy Method for Operational Loads Feedback Using Concrete Dam Displacement
Entropy 2015, 17(5), 2958-2972; https://doi.org/10.3390/e17052958
Received: 7 February 2015 / Revised: 28 April 2015 / Accepted: 29 April 2015 / Published: 8 May 2015
Cited by 1 | Viewed by 1697 | PDF Full-text (404 KB) | HTML Full-text | XML Full-text
Abstract
Safety control of concrete dams is required due to the potentially great loss of life and property in case of dam failure. The purpose of this paper is to feed back the operational control loads for concrete dam displacement using the maximum entropy method. The proposed method is not aimed at a judgement about the safety conditions of the dam. When a strong trend-line effect is evident, the method should be applied carefully: in such cases, the hydrostatic and temperature effects are added to the irreversible displacements, and the maximum operational loads should be reduced accordingly. The probability density function for the extreme load effect component of dam displacement can be selected by employing the principle of maximum entropy, which is effective for constructing the least subjective probability density distribution given only the moment information from the observed data. The critical load effect component in the warning criterion can be determined through the corresponding cumulative distribution function obtained by the maximum entropy method. The control loads feedback of concrete dam displacement is then realized through the proposed warning criterion. The proposed method is applied to a concrete dam. A comparison of the results shows that the maximum entropy method can feed back rational control loads for the dam displacement. The resulting control loads diagram can be a straightforward and visual tool for the operation and management department of the concrete dam. The result from the proposed method is recommended due to its minimal subjectivity. Full article
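The principle invoked here — choosing the least-biased distribution consistent with given moment information — can be illustrated on a discrete support with a single mean constraint, where the maximum-entropy solution is the exponential family p_i ∝ exp(λx_i) and λ is found by root-finding (a simplified one-moment sketch; the paper fits higher moments of the measured displacements):

```python
import math

def maxent_mean(support, target_mean, lo=-20.0, hi=20.0, iters=200):
    """Maximum-entropy distribution on `support` with a prescribed mean.
    The solution is p_i ~ exp(lam * x_i); lam is found by bisection,
    since the implied mean is strictly increasing in lam."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]
```

With the mean at the centre of a symmetric support, the least-biased answer is the uniform distribution; shifting the target mean tilts the weights exponentially, which is exactly the "minimal subjectivity" the abstract appeals to.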
Open Access Article Oxygen Saturation and RR Intervals Feature Selection for Sleep Apnea Detection
Entropy 2015, 17(5), 2932-2957; https://doi.org/10.3390/e17052932
Received: 3 December 2014 / Revised: 30 April 2015 / Accepted: 4 May 2015 / Published: 7 May 2015
Cited by 9 | Viewed by 2814 | PDF Full-text (1857 KB) | HTML Full-text | XML Full-text
Abstract
A diagnostic system for sleep apnea based on oxygen saturation and RR intervals obtained from the EKG (electrocardiogram) is proposed, with the goal of detecting and quantifying minute-long segments of sleep with breathing pauses. We measured the discriminative capacity of combinations of features obtained from the RR series and oximetry to evaluate improvements in performance compared to oximetry-based features alone. Time and frequency domain variables derived from oxygen saturation (SpO2), as well as linear and non-linear variables describing the RR series, have been explored in recordings from 70 patients with suspected sleep apnea. We applied forward feature selection in order to select a minimal set of variables able to locate patterns indicating respiratory pauses. Linear discriminant analysis (LDA) was used to classify the presence of apnea during specific segments. The system finally provides a global score indicating the presence of clinically significant apnea by integrating the segment-based apnea detection. LDA results in an accuracy of 87%, sensitivity of 76% and specificity of 91% (AUC = 0.90), with a global classification rate of 97%, when only oxygen saturation is used. When features from the RR series are additionally included, the system performance improves to an accuracy of 87%, sensitivity of 73% and specificity of 92% (AUC = 0.92), with a global classification rate of 100%. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics)
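The LDA step — projecting each segment's feature vector onto w = Σ⁻¹(μ₁ − μ₀) with a midpoint threshold — can be sketched on toy data (a generic two-class LDA with pooled covariance, not the study's trained model; the feature values below are synthetic):

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class linear discriminant: w = Sigma_pooled^-1 (mu1 - mu0),
    with the decision threshold at the midpoint of the projected means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    S = (np.cov(X0, rowvar=False) * (n0 - 1)
         + np.cov(X1, rowvar=False) * (n1 - 1)) / (n0 + n1 - 2)
    w = np.linalg.solve(S, mu1 - mu0)
    b = -0.5 * (w @ mu0 + w @ mu1)
    return w, b

def predict(X, w, b):
    """1 = apnea-like segment, 0 = normal, in this toy labelling."""
    return (X @ w + b > 0).astype(int)
```

In the study's setting, each row of X would hold the SpO2 and RR-series features selected by forward selection for one minute-long segment.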
Open Access Article Three-Stage Quantum Cryptography Protocol under Collective-Rotation Noise
Entropy 2015, 17(5), 2919-2931; https://doi.org/10.3390/e17052919
Received: 31 March 2015 / Revised: 30 April 2015 / Accepted: 4 May 2015 / Published: 7 May 2015
Cited by 4 | Viewed by 1861 | PDF Full-text (413 KB) | HTML Full-text | XML Full-text
Abstract
Information security is increasingly important as society migrates to the information age. Classical cryptography widely used nowadays is based on computational complexity, which means that it assumes that solving certain mathematical problems is hard on a classical computer. With the development of supercomputers and, potentially, quantum computers, classical cryptography faces growing risks. Quantum cryptography provides a solution based on the Heisenberg uncertainty principle and the no-cloning theorem. While BB84-based quantum protocols are only secure when a single photon is used in communication, the three-stage quantum protocol is multi-photon tolerant. However, existing analyses assume perfect noiseless channels. In this paper, a multi-photon analysis is performed for the three-stage quantum protocol under the collective-rotation noise model. The analysis provides insights into the impact of the noise level on a three-stage quantum cryptography system. Full article
(This article belongs to the Special Issue Quantum Cryptography)
Open Access Article AIM for Allostery: Using the Ising Model to Understand Information Processing and Transmission in Allosteric Biomolecular Systems
Entropy 2015, 17(5), 2895-2918; https://doi.org/10.3390/e17052895
Received: 5 March 2015 / Revised: 16 April 2015 / Accepted: 30 April 2015 / Published: 7 May 2015
Cited by 3 | Viewed by 3093 | PDF Full-text (2124 KB) | HTML Full-text | XML Full-text
Abstract
In performing their biological functions, molecular machines must process and transmit information with high fidelity. Information transmission requires dynamic coupling between the conformations of discrete structural components within the protein positioned far from one another on the molecular scale. This type of biomolecular “action at a distance” is termed allostery. Although allostery is ubiquitous in biological regulation and signal transduction, its treatment in theoretical models has mostly eschewed quantitative descriptions involving the system’s underlying structural components and their interactions. Here, we show how Ising models can be used to formulate an approach to allostery in a structural context of interactions between the constitutive components by building simple allosteric constructs we termed Allosteric Ising Models (AIMs). We introduce the use of AIMs in analytical and numerical calculations that relate thermodynamic descriptions of allostery to the structural context, and then show that many fundamental properties of allostery, such as the multiplicative property of parallel allosteric channels, are revealed from the analysis of such models. The power of exploring mechanistic structural models of allosteric function in more complex systems by using AIMs is demonstrated by building a model of allosteric signaling for an experimentally well-characterized asymmetric homodimer of the dopamine D2 receptor. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
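A minimal AIM in the paper's spirit can be built from just two coupled spins: a field at site A (mimicking ligand binding) shifts the conformational bias of distant site B only through the coupling J. This two-site toy construction is our own, for illustration:

```python
import math

def site_b_activity(h_a, h_b, J, beta=1.0):
    """Exact <s_B> for a two-spin Ising system with fields h_a, h_b and
    coupling J: E = -J*sA*sB - h_a*sA - h_b*sB, spins in {-1, +1}."""
    states = [(sa, sb) for sa in (-1, 1) for sb in (-1, 1)]
    weights = [math.exp(beta * (J * sa * sb + h_a * sa + h_b * sb))
               for sa, sb in states]
    z = sum(weights)
    return sum(sb * w for (sa, sb), w in zip(states, weights)) / z

# "Ligand binding" at site A (field h_a) changes the bias of distant
# site B only when the allosteric coupling J is nonzero.
coupled = site_b_activity(h_a=1.0, h_b=0.0, J=1.0) - site_b_activity(0.0, 0.0, 1.0)
uncoupled = site_b_activity(h_a=1.0, h_b=0.0, J=0.0) - site_b_activity(0.0, 0.0, 0.0)
```

Chaining such couplings in series or in parallel is where properties like the multiplicative behaviour of parallel allosteric channels emerge in the full AIM framework.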
Open Access Review Properties of Nonnegative Hermitian Matrices and New Entropic Inequalities for Noncomposite Quantum Systems
Entropy 2015, 17(5), 2876-2894; https://doi.org/10.3390/e17052876
Received: 30 December 2014 / Revised: 28 April 2015 / Accepted: 4 May 2015 / Published: 6 May 2015
Cited by 25 | Viewed by 1815 | PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
We consider the probability distributions, spin (qudit)-state tomograms and density matrices of quantum states, and their information characteristics, such as Shannon and von Neumann entropies and q-entropies, from the viewpoints of both well-known purely mathematical features of nonnegative numbers and nonnegative matrices and their physical characteristics, such as entanglement and other quantum correlation phenomena. We review entropic inequalities such as the Araki–Lieb inequality and the subadditivity and strong subadditivity conditions known for bipartite and tripartite systems, and recently obtained for single qudit states. We present explicit matrix forms of the known and some new entropic inequalities associated with quantum states of composite and noncomposite systems. We discuss the tomographic probability distributions of qudit states and demonstrate the inequalities for tomographic entropies of the qudit states. In addition, we mention the possibility of using the discussed information properties of single qudit states in quantum technologies based on multilevel atoms and quantum circuits built from Josephson junctions. Full article
(This article belongs to the Special Issue Entanglement Entropy)
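The subadditivity and Araki–Lieb inequalities reviewed here are straightforward to verify numerically for a bipartite example (a two-qubit Werner-type state; this check is our own illustration, not the paper's construction):

```python
import numpy as np

def von_neumann(rho):
    """S(rho) = -tr(rho ln rho), computed from the eigenvalues."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]  # 0 ln 0 = 0 convention
    return float(-np.sum(vals * np.log(vals)))

def partial_traces(rho):
    """Reduced single-qubit states of a two-qubit (4x4) density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices: a, b, a', b'
    rho_a = np.trace(r, axis1=1, axis2=3)
    rho_b = np.trace(r, axis1=0, axis2=2)
    return rho_a, rho_b

# Werner-type state: mixture of a Bell state and the maximally mixed state.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = 0.5 * np.outer(phi, phi) + 0.5 * np.eye(4) / 4
rho_a, rho_b = partial_traces(rho)
s_ab, s_a, s_b = von_neumann(rho), von_neumann(rho_a), von_neumann(rho_b)
```

Subadditivity requires S(AB) ≤ S(A) + S(B) and Araki–Lieb requires |S(A) − S(B)| ≤ S(AB); the single-qudit analogues discussed in the paper take the same matrix form with the reduced states replaced by suitable submatrices.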