Table of Contents

Entropy, Volume 17, Issue 5 (May 2015), Pages 2556-3517

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-49

Editorial

Jump to: Research, Review

Open Access Editorial: Maximum Entropy Applied to Inductive Logic and Reasoning
Entropy 2015, 17(5), 3458-3460; doi:10.3390/e17053458
Received: 8 May 2015 / Accepted: 13 May 2015 / Published: 18 May 2015
PDF Full-text (70 KB) | HTML Full-text | XML Full-text
Abstract
This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers. Full article
(This article belongs to the Special Issue Maximum Entropy Applied to Inductive Logic and Reasoning)

Research

Jump to: Editorial, Review

Open Access Article: Optimum Accelerated Degradation Tests for the Gamma Degradation Process Case under the Constraint of Total Cost
Entropy 2015, 17(5), 2556-2572; doi:10.3390/e17052556
Received: 25 February 2015 / Revised: 14 April 2015 / Accepted: 20 April 2015 / Published: 23 April 2015
Cited by 5 | PDF Full-text (762 KB) | HTML Full-text | XML Full-text
Abstract
An accelerated degradation test (ADT) is regarded as an effective alternative to an accelerated life test in the sense that an ADT can provide more accurate information on product reliability, even when few or no failures may be expected before the end of a practical test period. In this paper, statistical methods for the optimal design of ADT plans are developed assuming that the degradation characteristic follows a gamma process (GP). The GP-based approach has the advantage that it can deal with the frequently encountered situation in which the degradation must be nonnegative and strictly increasing over time. The optimal ADT plan is developed under a total experimental cost constraint by determining the optimal settings of variables such as the number of measurements, the measurement times, the test stress levels and the number of units allocated to each stress level, such that the asymptotic variance of the maximum likelihood estimator of the q-th quantile of the lifetime distribution at the use condition is minimized. In addition, compromise plans are developed to provide a means of checking the adequacy of the assumed acceleration model. Finally, sensitivity analysis procedures for assessing the effects of uncertainties in the pre-estimates of unknown parameters are illustrated with an example. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
Open Access Article: Source Localization by Entropic Inference and Backward Renormalization Group Priors
Entropy 2015, 17(5), 2573-2589; doi:10.3390/e17052573
Received: 3 February 2015 / Revised: 13 April 2015 / Accepted: 16 April 2015 / Published: 23 April 2015
Cited by 1 | PDF Full-text (4719 KB) | HTML Full-text | XML Full-text
Abstract
A systematic method of transferring information from coarser to finer resolution based on renormalization group (RG) transformations is introduced. It permits building informative priors at finer scales from posteriors at coarser scales since, under some conditions, RG transformations in the space of hyperparameters can be inverted. These priors are updated into posteriors using renormalized data by Maximum Entropy. The resulting inference method, backward RG (BRG) priors, is tested by simulating a functional magnetic resonance imaging (fMRI) experiment, and its results are compared with a Bayesian approach working at the finest available resolution. Using BRG priors, sources can be partially identified even at signal-to-noise ratios as low as ~−25 dB, improving vastly on the single-step Bayesian approach; for low levels of noise, the BRG prior is not an improvement over the single-scale Bayesian method. Analysis of the histograms of hyperparameters shows how to distinguish whether the method is failing due to very high noise levels, or whether identification of the sources is at least partially possible. Full article
(This article belongs to the Section Information Theory)
Open Access Article: Entropy and Recurrence Measures of a Financial Dynamic System by an Interacting Voter System
Entropy 2015, 17(5), 2590-2605; doi:10.3390/e17052590
Received: 25 January 2015 / Revised: 13 April 2015 / Accepted: 20 April 2015 / Published: 23 April 2015
Cited by 4 | PDF Full-text (6649 KB) | HTML Full-text | XML Full-text
Abstract
A financial time series agent-based model is reproduced and investigated using a statistical physics system, the finite-range interacting voter system. The voter system originally describes the collective behavior of voters who constantly update their positions on a particular topic, and is a continuous-time Markov process. In the proposed model, the fluctuations of stock price changes are attributed to the market information interaction amongst the traders and certain similarities of investors' behaviors. Further, the complexity of the return series of the financial model is studied in comparison with two real stock indexes, the Shanghai Stock Exchange Composite Index and the Hang Seng Index, by composite multiscale entropy analysis and recurrence analysis. The empirical research shows that the simulated data of the proposed model capture some natural features of actual markets to some extent. Full article
Open Access Article: Uncovering Discrete Non-Linear Dependence with Information Theory
Entropy 2015, 17(5), 2606-2623; doi:10.3390/e17052606
Received: 27 February 2015 / Revised: 21 April 2015 / Accepted: 22 April 2015 / Published: 23 April 2015
Cited by 2 | PDF Full-text (1874 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we model discrete time series as discrete Markov processes of arbitrary order and derive the approximate distribution of the Kullback-Leibler divergence between a known transition probability matrix and its sample estimate. We introduce two new information-theoretic measurements: information memory loss and information codependence structure. The former measures the memory content within a Markov process and determines its optimal order; the latter assesses the codependence among Markov processes. Both measurements are evaluated on toy examples and applied to high-frequency foreign exchange data, focusing on the 2008 financial crisis and the 2010/2011 Euro crisis. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
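The divergence at the core of this abstract, between a known transition matrix and its sample estimate, can be sketched in a few lines. This is an illustrative reconstruction under assumed conventions (a first-order two-state chain, rows weighted by the stationary distribution of the true matrix), not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(P, n, rng):
    """Simulate n steps of a first-order Markov chain with transition matrix P."""
    k = P.shape[0]
    states = np.empty(n, dtype=int)
    states[0] = 0
    for t in range(1, n):
        states[t] = rng.choice(k, p=P[states[t - 1]])
    return states

def empirical_transition_matrix(states, k):
    """Row-normalized transition counts: the sample estimate of P."""
    C = np.zeros((k, k))
    for a, b in zip(states[:-1], states[1:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

def kl_transition(P, Q, eps=1e-12):
    """KL divergence between transition matrices, with rows weighted by
    the stationary distribution of P (an assumed weighting convention)."""
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    return float(np.sum(pi[:, None] * P * np.log((P + eps) / (Q + eps))))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
states = simulate_chain(P, 20000, rng)
Q = empirical_transition_matrix(states, 2)
d = kl_transition(P, Q)
assert 0.0 <= d < 0.01  # shrinks toward 0 as the sample grows
```

For long samples the divergence concentrates near zero; the paper's contribution is the approximate distribution of this statistic, which the sketch does not attempt to reproduce.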
Open Access Article: Recurrence Plot Based Damage Detection Method by Integrating Control Chart
Entropy 2015, 17(5), 2624-2641; doi:10.3390/e17052624
Received: 1 April 2015 / Revised: 13 April 2015 / Accepted: 22 April 2015 / Published: 24 April 2015
Cited by 3 | PDF Full-text (2852 KB) | HTML Full-text | XML Full-text
Abstract
Because of the importance of damage detection in manufacturing systems and other areas, many fault detection methods based on a vibration signal have been developed. Little work, however, has been reported in the literature on using a recurrence plot method to analyze the vibration signal for damage detection. In this paper, we develop a recurrence plot based fault detection method by integrating a statistical process control technique. The recurrence plots of the vibration signals are derived using the recurrence plot (RP) method, and five types of features are extracted from them to quantify the vibration signals' characteristics. The control chart, a multivariate statistical process control technique, is then used to monitor these features. The control chart technique, however, assumes that all the data follow a normal distribution, so an RP-based bootstrap control chart is proposed to estimate the control chart parameters. The performance of the proposed RP-based bootstrap control chart is evaluated in a simulation study and compared with other univariate bootstrap control charts based on recurrence plot features. A real case study of rolling element bearing fault detection demonstrates that the proposed fault detection method achieves very good performance. Full article
Open Access Article: Stochastic Processes via the Pathway Model
Entropy 2015, 17(5), 2642-2654; doi:10.3390/e17052642
Received: 11 March 2015 / Revised: 22 April 2015 / Accepted: 23 April 2015 / Published: 24 April 2015
Cited by 1 | PDF Full-text (319 KB) | HTML Full-text | XML Full-text
Abstract
After collecting data from observations or experiments, the next step is to analyze the data and build an appropriate mathematical or stochastic model that describes them, so that further studies can be done with the help of the model. In this article, the input-output type mechanism is considered first, into which reaction, diffusion, reaction-diffusion, and production-destruction type physical situations can fit. Techniques are then described to produce thicker or thinner tails (power-law behavior) in stochastic models. Finally, the pathway idea is described, whereby one can switch among different functional forms of the probability density function through a parameter called the pathway parameter. The paper is a continuation of related solar neutrino research published previously in this journal. Full article
(This article belongs to the Section Statistical Mechanics)
Open Access Article: State Feedback with Memory for Constrained Switched Positive Linear Systems
Entropy 2015, 17(5), 2655-2676; doi:10.3390/e17052655
Received: 4 February 2015 / Revised: 19 April 2015 / Accepted: 23 April 2015 / Published: 27 April 2015
Cited by 2 | PDF Full-text (367 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the stabilization problem for switched linear systems with time-varying delay under constrained state and control is investigated. The synthesis of bounded state-feedback controllers with memory ensures that the closed-loop state is positive and stable. Firstly, synthesis with a sign-restricted (nonnegative or negative) control is considered for general switched systems; then, the stabilization issue under bounded controls, including asymmetrically bounded controls and state constraints, is addressed. In addition, the results are extended to systems with interval and polytopic uncertainties. All the proposed conditions are solvable in terms of linear programming, and numerical examples illustrate the applicability of the results. Full article
(This article belongs to the Section Complexity)
Open Access Article: Projective Synchronization of Chaotic Discrete Dynamical Systems via Linear State Error Feedback Control
Entropy 2015, 17(5), 2677-2687; doi:10.3390/e17052677
Received: 27 February 2015 / Revised: 7 April 2015 / Accepted: 22 April 2015 / Published: 27 April 2015
Cited by 6 | PDF Full-text (787 KB) | HTML Full-text | XML Full-text
Abstract
A projective synchronization scheme for a class of n-dimensional discrete dynamical systems is proposed by means of a linear feedback control technique. The scheme consists of master and slave discrete dynamical systems coupled by linear state error variables. A novel 3-D chaotic discrete system is constructed, to which the test for chaos is applied. By using the stability principles of an upper or lower triangular matrix, two controllers for achieving projective synchronization are designed and illustrated with the novel systems. Lastly, some numerical simulations are employed to validate the effectiveness of the proposed projective synchronization scheme. Full article
(This article belongs to the Section Complexity)
Open Access Article: A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations
Entropy 2015, 17(5), 2688-2705; doi:10.3390/e17052688
Received: 30 November 2014 / Revised: 10 April 2015 / Accepted: 21 April 2015 / Published: 27 April 2015
Cited by 3 | PDF Full-text (1286 KB) | HTML Full-text | XML Full-text
Abstract
Passenger flow modeling and station dwelling time estimation are significant elements of railway mass transit planning, but system operators usually have limited information with which to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied to estimate the elements of the origin-destination matrix and the dwelling times of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers' preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro, and the dwelling times obtained from the simulation are compared to real measurements to validate the approach. Full article
(This article belongs to the Special Issue Entropy in Bioinspired Intelligence)

Open Access Article: Identifying the Most Relevant Lag with Runs
Entropy 2015, 17(5), 2706-2722; doi:10.3390/e17052706
Received: 19 February 2015 / Revised: 19 April 2015 / Accepted: 23 April 2015 / Published: 28 April 2015
PDF Full-text (1018 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a nonparametric statistical tool to identify the most relevant lag in the model description of a time series; it is also shown that the tool can be used for model identification. The statistic is based on the number of runs obtained when the time series is symbolized according to its empirical quantiles. With a Monte Carlo simulation, we show the size and power performance of our new test statistic under linear and nonlinear data-generating processes. From the theoretical point of view, this is the first time that symbolic analysis and runs have been proposed to identify characteristic lags and to help identify univariate time series models. From a more applied point of view, the results show the power and competitiveness of the proposed tool with respect to other techniques, without presuming or specifying a model. Full article
(This article belongs to the Special Issue Inductive Statistical Methods)
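As background, the intuition behind a runs statistic on a quantile-symbolized series can be sketched as follows. This is a minimal illustration assuming a two-symbol median split; the paper's actual symbolization and lag construction may differ:

```python
import numpy as np

def symbolize(x, q=0.5):
    """Map each value to 0/1 depending on whether it lies above the
    empirical q-quantile of the series (median split by default)."""
    return (x > np.quantile(x, q)).astype(int)

def count_runs(s):
    """Number of runs: maximal blocks of identical consecutive symbols."""
    return 1 + int(np.sum(s[1:] != s[:-1]))

rng = np.random.default_rng(1)
n = 1000

iid = rng.normal(size=n)          # independent series

ar = np.empty(n)                  # strongly autocorrelated AR(1) series
ar[0] = rng.normal()
for t in range(1, n):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()

r_iid = count_runs(symbolize(iid))
r_ar = count_runs(symbolize(ar))

# positive dependence at lag 1 keeps the series on one side of the
# median for long stretches, producing far fewer runs than independence
assert r_ar < r_iid
```

Under independence with a median split, the expected number of runs is roughly n/2; a marked deficit (or excess) of runs at a given lag signals dependence at that lag, which is the kind of evidence the proposed test formalizes.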
Open Access Article: Finite Key Size Analysis of Two-Way Quantum Cryptography
Entropy 2015, 17(5), 2723-2740; doi:10.3390/e17052723
Received: 6 March 2015 / Revised: 13 April 2015 / Accepted: 23 April 2015 / Published: 30 April 2015
Cited by 1 | PDF Full-text (557 KB) | HTML Full-text | XML Full-text
Abstract
Quantum cryptographic protocols solve the longstanding problem of distributing a shared secret string to two distant users, typically by making use of a one-way quantum channel. However, alternative protocols exploiting a two-way quantum channel have been proposed for the same goal, with potential advantages. Here, we review a security proof for two-way quantum key distribution protocols, against the most general eavesdropping attack, that utilizes an entropic uncertainty relation. Then, by resorting to the "smooth" version of the entropies involved, we extend the proof to the case of finite key size. The results are compared to those available for one-way protocols, showing some advantages. Full article
(This article belongs to the Special Issue Quantum Cryptography)
Open Access Article: Synthesis and Surface Thermodynamic Functions of CaMoO4 Nanocakes
Entropy 2015, 17(5), 2741-2748; doi:10.3390/e17052741
Received: 25 January 2015 / Revised: 13 April 2015 / Accepted: 16 April 2015 / Published: 30 April 2015
Cited by 9 | PDF Full-text (475 KB) | HTML Full-text | XML Full-text
Abstract
CaMoO4 nanocakes with uniform size and morphology were prepared on a large scale via a room temperature reverse-microemulsion method. The products were characterized in detail by X-ray powder diffraction, field-emission scanning electron microscopy, transmission electron microscopy, and high-resolution transmission electron microscopy. By establishing the relations between the thermodynamic functions of nano-CaMoO4 and bulk-CaMoO4 reaction systems, the equations for calculating the surface thermodynamic functions of nano-CaMoO4 were derived. Then, combined with in-situ microcalorimetry, the molar surface enthalpy, molar surface Gibbs free energy, and molar surface entropy of the prepared CaMoO4 nanocakes at 298.15 K were successfully obtained as (19.674 ± 0.017) kJ·mol−1, (619.704 ± 0.016) J·mol−1, and (63.908 ± 0.057) J·mol−1·K−1, respectively. Full article
(This article belongs to the Special Issue Nanothermodynamics)
Open Access Article: Detection of Changes in Ground-Level Ozone Concentrations via Entropy
Entropy 2015, 17(5), 2749-2763; doi:10.3390/e17052749
Received: 4 March 2015 / Revised: 30 March 2015 / Accepted: 28 April 2015 / Published: 30 April 2015
PDF Full-text (2195 KB) | HTML Full-text | XML Full-text
Abstract
Ground-level ozone concentration is a key indicator of air quality. There may exist sudden changes in ozone concentration data over a long time horizon, which may be caused by the implementation of government regulations and policies, such as establishing exhaust emission limits for on-road vehicles. To monitor and assess the efficacy of these policies, we propose a methodology for detecting changes in ground-level ozone concentrations, which consists of three major steps: data transformation, simultaneous autoregressive modelling and change-point detection on the estimated entropy. To show its effectiveness, the methodology is applied to detect changes in ground-level ozone concentration data collected in the Toronto region of Canada between June and September of each year from 1988 to 2009. The proposed methodology is also applicable to other climate data. Full article
(This article belongs to the Special Issue Entropy and Space-Time Analysis in Environment and Health)
Open Access Article: Estimating the Lower Limit of the Impact of Amines on Nucleation in the Earth’s Atmosphere
Entropy 2015, 17(5), 2764-2780; doi:10.3390/e17052764
Received: 27 January 2015 / Revised: 16 April 2015 / Accepted: 20 April 2015 / Published: 30 April 2015
Cited by 11 | PDF Full-text (523 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Amines, organic derivatives of NH3, are important common trace atmospheric species that can enhance new particle formation in the Earth’s atmosphere under favorable conditions. While methylamine (MA), dimethylamine (DMA) and trimethylamine (TMA) all efficiently enhance binary nucleation, MA may represent the lower limit of the enhancing effect of amines on atmospheric nucleation. In the present paper, we report new thermochemical data concerning MA-enhanced nucleation, obtained using the DFT PW91PW91/6-311++G (3df, 3pd) method, and investigate the enhancement in production of stable pre-nucleation clusters due to the MA. We found that MA ternary nucleation begins to dominate over ternary nucleation of sulfuric acid, water and ammonia at [MA]/[NH3] > ~10−3. This means that under real atmospheric conditions ([MA] ~ 1 ppt, [NH3] ~ 1 ppb) the lower limit of the enhancement due to methylamines is either close to or higher than the typical effect of NH3. A very strong impact of the MA is observed at low RH; however, it decreases quickly as the RH grows. Low RH and low ambient temperatures were found to be particularly favorable for the enhancement in production of stable sulfuric acid-water clusters due to the MA. Full article
Open Access Article: The Grading Entropy-based Criteria for Structural Stability of Granular Materials and Filters
Entropy 2015, 17(5), 2781-2811; doi:10.3390/e17052781
Received: 23 October 2014 / Revised: 27 April 2015 / Accepted: 27 April 2015 / Published: 4 May 2015
Cited by 1 | PDF Full-text (936 KB) | HTML Full-text | XML Full-text
Abstract
This paper deals with three grading entropy-based rules that describe different soil structure stability phenomena: an internal stability rule, a filtering rule and a segregation rule. These rules are elaborated on the basis of a large amount of laboratory testing and existing knowledge in the field. The theory of grading entropy is used to condense all of the information of the grading curve into a pair of entropy-based parameters that allow soils with common behaviours to be grouped into domains on an entropy diagram. Applications of the derived entropy-based rules are presented by examining the cause of a dam failure, by testing against existing filter rules from the literature, and by giving some examples of the design of non-segregating grading curves (discrete particle size distributions by dry weight). A physical basis for the internal stability rule is established, wherein the higher values of base entropy required for granular stability are shown to reflect the closeness between the mean and maximum grain diameters, which explains how there are sufficient coarser grains to achieve a stable grain skeleton. Full article
Open Access Article: On the κ-Deformed Cyclic Functions and the Generalized Fourier Series in the Framework of the κ-Algebra
Entropy 2015, 17(5), 2812-2833; doi:10.3390/e17052812
Received: 30 March 2015 / Revised: 26 April 2015 / Accepted: 27 April 2015 / Published: 4 May 2015
Cited by 2 | PDF Full-text (349 KB) | HTML Full-text | XML Full-text
Abstract
We explore two possible generalizations of the Euler formula for the complex \(\kappa\)-exponential, which give two different sets of \(\kappa\)-deformed cyclic functions endowed with different analytical properties. In one case, the \(\kappa\)-sine and \(\kappa\)-cosine functions take real values on \(\Re\) and are characterized by an asymptotic log-periodic behavior. In the other case, the \(\kappa\)-cyclic functions take real values only in the region \(|x|\leq1/|\kappa|\), while, for \(|x|>1/|\kappa|\), they assume purely imaginary values with an increasing modulus. Nevertheless, the main mathematical properties of the standard cyclic functions, suitably reformulated in the formalism of \(\kappa\)-mathematics, are fulfilled by both sets of \(\kappa\)-trigonometric functions. In both cases, we study the orthogonality and completeness relations and introduce their respective generalized Fourier series for square integrable functions. Full article
(This article belongs to the Special Issue Entropic Aspects in Statistical Physics of Complex Systems)
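For readers unfamiliar with the \(\kappa\)-formalism, the real Kaniadakis \(\kappa\)-exponential underlying these constructions, \(\exp_\kappa(x)=\big(\kappa x+\sqrt{1+\kappa^2x^2}\big)^{1/\kappa}\), can be sketched numerically; this shows only the standard background function, not the paper's two deformed cyclic-function sets:

```python
import math

def kappa_exp(x, kappa):
    """Kaniadakis kappa-exponential: (kappa*x + sqrt(1 + kappa^2 x^2))^(1/kappa).
    Reduces to the ordinary exponential as kappa -> 0."""
    if kappa == 0:
        return math.exp(x)
    return (kappa * x + math.sqrt(1.0 + (kappa * x) ** 2)) ** (1.0 / kappa)

# kappa -> 0 recovers exp(x)
assert abs(kappa_exp(2.0, 1e-6) - math.exp(2.0)) < 1e-4

# unlike exp(x), the kappa-exponential has power-law tails:
# exp_kappa(x) ~ (2*kappa*x)**(1/kappa) for large x
x = 1e6
assert abs(kappa_exp(x, 0.5) / (2 * 0.5 * x) ** (1 / 0.5) - 1) < 1e-6
```

The power-law (rather than exponential) tail behavior is what makes the \(\kappa\)-deformed functions useful in the statistical physics of complex systems, and it is inherited in different forms by the two sets of cyclic functions the paper studies.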
Open Access Article: A Novel Risk Metric for Staff Turnover in a Software Project Based on Information Entropy
Entropy 2015, 17(5), 2834-2852; doi:10.3390/e17052834
Received: 12 March 2015 / Revised: 20 April 2015 / Accepted: 27 April 2015 / Published: 4 May 2015
Cited by 1 | PDF Full-text (807 KB) | HTML Full-text | XML Full-text
Abstract
Staff turnover in a software project is a significant risk that can result in project failure. Despite the urgency of this issue, however, relevant studies are limited and mostly qualitative; quantitative studies are extremely rare. This paper proposes a novel risk metric for staff turnover in a software project based on information entropy theory. To address the gaps in existing studies, five aspects are considered, namely, staff turnover probability, turnover type, staff level, software project complexity, and staff order degree. The paper develops a method of calculating staff turnover risk probability in a software project based on the field, equity, and goal congruence theories. The proposed method avoids subjective estimation of the probability, making it more objective and comprehensive than existing approaches. This paper not only presents a detailed, operable model, but also theoretically demonstrates the soundness and rationality of the research. The case study performed in this work indicates that the approach is reasonable, effective, and feasible. Full article
(This article belongs to the Section Information Theory)
Open Access Article: Conceptual Inadequacy of the Shore and Johnson Axioms for Wide Classes of Complex Systems
Entropy 2015, 17(5), 2853-2861; doi:10.3390/e17052853
Received: 9 April 2015 / Accepted: 4 May 2015 / Published: 5 May 2015
Cited by 4 | PDF Full-text (248 KB) | HTML Full-text | XML Full-text
Abstract
It is by now well known that the Boltzmann-Gibbs-von Neumann-Shannon logarithmic entropic functional (\(S_{BG}\)) is inadequate for wide classes of strongly correlated systems: see for instance Brukner and Zeilinger's 2001 {\it Conceptual inadequacy of the Shannon information in quantum measurements}, among many other examples of systems exhibiting various forms of complexity. On the other hand, the Shannon and Khinchin axioms uniquely mandate the BG form \(S_{BG}=-k\sum_i p_i \ln p_i\); the Shore and Johnson axioms follow the same path. Many natural, artificial and social systems have been satisfactorily approached with nonadditive entropies such as \(S_q=k \frac{1-\sum_i p_i^q}{q-1}\) (\(q \in {\cal R}; \,S_1=S_{BG}\)), the basis of nonextensive statistical mechanics. Consistently, the Shannon 1948 and Khinchin 1953 uniqueness theorems have already been generalized in the literature, by Santos 1997 and Abe 2000 respectively, in order to uniquely mandate \(S_q\). We argue here that the same remains to be done with the Shore and Johnson 1980 axioms. We arrive at this conclusion by analyzing specific classes of strongly correlated complex systems that await such generalization. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
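The abstract gives the nonadditive entropy explicitly, \(S_q = k\,\frac{1-\sum_i p_i^q}{q-1}\) with \(S_1 = S_{BG}\), so the functional and its \(q \to 1\) limit can be checked directly in a few lines (function name and the chosen test distribution are illustrative):

```python
import math

def tsallis_entropy(p, q, k=1.0):
    """S_q = k * (1 - sum_i p_i^q) / (q - 1); continuously recovers the
    Boltzmann-Gibbs-Shannon entropy S_BG = -k * sum_i p_i ln p_i as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -k * sum(pi * math.log(pi) for pi in p if pi > 0)
    return k * (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]

# q = 2: S_2 = 1 - (0.25 + 0.0625 + 0.0625) = 0.625
assert abs(tsallis_entropy(p, 2.0) - 0.625) < 1e-12

# S_q -> S_BG continuously as q -> 1
s_bg = tsallis_entropy(p, 1.0)
assert abs(tsallis_entropy(p, 1.0001) - s_bg) < 1e-3
```

Unlike \(S_{BG}\), \(S_q\) is nonadditive for \(q \neq 1\) (for independent subsystems \(S_q(A{+}B) = S_q(A) + S_q(B) + (1-q)S_q(A)S_q(B)/k\)), which is precisely the property at stake in the axiomatic discussion above.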
Open Access Article: Stabilization Effects of Dichotomous Noise on the Lifetime of the Superconducting State in a Long Josephson Junction
Entropy 2015, 17(5), 2862-2875; doi:10.3390/e17052862
Received: 6 March 2015 / Revised: 22 April 2015 / Accepted: 30 April 2015 / Published: 6 May 2015
Cited by 7 | PDF Full-text (883 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the superconducting lifetime of a long overdamped current-biased Josephson junction, in the presence of telegraph noise sources. The analysis is performed by randomly choosing the initial condition for the noise source. However, in order to investigate how the initial value of the dichotomous noise affects the phase dynamics, we extend our analysis using two different fixed initial values for the source of random fluctuations. In our study, the phase dynamics of the Josephson junction is analyzed as a function of the noise signal intensity, for different values of the parameters of the system and external driving currents. We find that the mean lifetime of the superconductive metastable state as a function of the noise intensity is characterized by nonmonotonic behavior, strongly related to the soliton dynamics during the switching towards the resistive state. The role of the correlation time of the noise source is also taken into account. Noise-enhanced stability is observed in the investigated system. Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
Open AccessArticle AIM for Allostery: Using the Ising Model to Understand Information Processing and Transmission in Allosteric Biomolecular Systems
Entropy 2015, 17(5), 2895-2918; doi:10.3390/e17052895
Received: 5 March 2015 / Revised: 16 April 2015 / Accepted: 30 April 2015 / Published: 7 May 2015
Cited by 3 | PDF Full-text (2124 KB) | HTML Full-text | XML Full-text
Abstract
In performing their biological functions, molecular machines must process and transmit information with high fidelity. Information transmission requires dynamic coupling between the conformations of discrete structural components within the protein positioned far from one another on the molecular scale. This type of biomolecular “action at a distance” is termed allostery. Although allostery is ubiquitous in biological regulation and signal transduction, its treatment in theoretical models has mostly eschewed quantitative descriptions involving the system’s underlying structural components and their interactions. Here, we show how Ising models can be used to formulate an approach to allostery in a structural context of interactions between the constitutive components by building simple allosteric constructs we termed Allosteric Ising Models (AIMs). We introduce the use of AIMs in analytical and numerical calculations that relate thermodynamic descriptions of allostery to the structural context, and then show that many fundamental properties of allostery, such as the multiplicative property of parallel allosteric channels, are revealed from the analysis of such models. The power of exploring mechanistic structural models of allosteric function in more complex systems by using AIMs is demonstrated by building a model of allosteric signaling for an experimentally well-characterized asymmetric homodimer of the dopamine D2 receptor. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
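The Allosteric Ising Models described above are built from coupled two-state components. As a generic, hedged illustration of the underlying machinery (exact enumeration of a tiny Ising system with hypothetical parameters, not the authors' AIM code), one can compute the partition function of a short open Ising chain and the free-energy change induced by the coupling:

```python
import math
from itertools import product

def partition_function(n, J, beta=1.0):
    """Exact enumeration of an open Ising chain: E = -J * sum_i s_i * s_{i+1}."""
    Z = 0.0
    for spins in product((-1, 1), repeat=n):
        energy = -J * sum(spins[i] * spins[i + 1] for i in range(n - 1))
        Z += math.exp(-beta * energy)
    return Z

def coupling_free_energy(n, J, beta=1.0):
    """Free-energy difference between the coupled (J != 0) and uncoupled chain."""
    return -(1.0 / beta) * math.log(
        partition_function(n, J, beta) / partition_function(n, 0.0, beta)
    )

Z2 = partition_function(2, 1.0)  # analytically 2*e + 2/e for two spins
dF = coupling_free_energy(2, 1.0)  # negative: coupling lowers the free energy
```

Thermodynamic couplings of this kind, between distant structural components, are what the AIM framework relates to allosteric signaling.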
Open AccessArticle Three-Stage Quantum Cryptography Protocol under Collective-Rotation Noise
Entropy 2015, 17(5), 2919-2931; doi:10.3390/e17052919
Received: 31 March 2015 / Revised: 30 April 2015 / Accepted: 4 May 2015 / Published: 7 May 2015
Cited by 2 | PDF Full-text (413 KB) | HTML Full-text | XML Full-text
Abstract
Information security is increasingly important as society migrates to the information age. Classical cryptography, widely used nowadays, is based on computational complexity: it assumes that solving certain mathematical problems is hard on a classical computer. With the development of supercomputers and, potentially, quantum computers, classical cryptography faces growing risks. Quantum cryptography provides a solution based on the Heisenberg uncertainty principle and the no-cloning theorem. While BB84-based quantum protocols are only secure when a single photon is used in communication, the three-stage quantum protocol is multi-photon tolerant. However, existing analyses assume perfect noiseless channels. In this paper, a multi-photon analysis is performed for the three-stage quantum protocol under the collective-rotation noise model. The analysis provides insights into the impact of the noise level on a three-stage quantum cryptography system. Full article
(This article belongs to the Special Issue Quantum Cryptography)
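The core idea of the three-stage protocol is that each party applies, and later removes, its own secret transformation, which works because the transformations commute. A classical toy sketch with planar rotations (angles arbitrary, not a quantum simulation and not the paper's noise analysis):

```python
import math

def rotate(v, theta):
    """Rotate a 2D vector, standing in here for a polarization state."""
    x, y = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

msg = (1.0, 0.0)   # state encoding one bit
a, b = 0.7, 1.9    # Alice's and Bob's secret angles (hypothetical values)

stage1 = rotate(msg, a)         # Alice -> Bob: locked with a
stage2 = rotate(stage1, b)      # Bob -> Alice: locked with both a and b
stage3 = rotate(stage2, -a)     # Alice removes her lock, sends again
recovered = rotate(stage3, -b)  # Bob removes his lock and reads the message
```

At no point does the channel carry the unprotected state, yet Bob recovers the message exactly; collective-rotation noise, the subject of the paper, perturbs each of these rotations.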
Open AccessArticle Oxygen Saturation and RR Intervals Feature Selection for Sleep Apnea Detection
Entropy 2015, 17(5), 2932-2957; doi:10.3390/e17052932
Received: 3 December 2014 / Revised: 30 April 2015 / Accepted: 4 May 2015 / Published: 7 May 2015
Cited by 4 | PDF Full-text (1857 KB) | HTML Full-text | XML Full-text
Abstract
A diagnostic system for sleep apnea based on oxygen saturation and RR intervals obtained from the EKG (electrocardiogram) is proposed with the goal of detecting and quantifying minute-long segments of sleep with breathing pauses. We measured the discriminative capacity of combinations of features obtained from RR series and oximetry to evaluate improvements in performance compared to oximetry-based features alone. Time and frequency domain variables derived from oxygen saturation (SpO2) as well as linear and non-linear variables describing the RR series have been explored in recordings from 70 patients with suspected sleep apnea. We applied forward feature selection in order to select a minimal set of variables that are able to locate patterns indicating respiratory pauses. Linear discriminant analysis (LDA) was used to classify the presence of apnea during specific segments. The system finally provides a global score indicating the presence of clinically significant apnea by integrating the segment-based apnea detection. LDA results in an accuracy of 87%, sensitivity of 76% and specificity of 91% (AUC = 0.90), with a global classification rate of 97% when only oxygen saturation is used. When features from the RR series are additionally included, the system performance improves to an accuracy of 87%, sensitivity of 73% and specificity of 92% (AUC = 0.92), with a global classification rate of 100%. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics)
Open AccessArticle Maximum Entropy Method for Operational Loads Feedback Using Concrete Dam Displacement
Entropy 2015, 17(5), 2958-2972; doi:10.3390/e17052958
Received: 7 February 2015 / Revised: 28 April 2015 / Accepted: 29 April 2015 / Published: 8 May 2015
Cited by 1 | PDF Full-text (404 KB) | HTML Full-text | XML Full-text
Abstract
Safety control of concrete dams is required due to the potential great loss of life and property in case of dam failure. The purpose of this paper is to feed back the operational control loads for concrete dam displacement using the maximum entropy method. The proposed method does not aim to judge the safety condition of the dam. When a strong trend-line effect is evident, the method should be applied carefully: in such cases, the hydrostatic and temperature effects are added to the irreversible displacements, so the maximum operational loads should be reduced accordingly. The probability density function for the extreme load effect component of dam displacement can be selected by employing the principle of maximum entropy, which is effective for constructing the least subjective probability density distribution given only the moment information from the observed data. The critical load effect component in the warning criterion can be determined through the corresponding cumulative distribution function obtained by the maximum entropy method. The control loads feedback of concrete dam displacement is then realized by the proposed warning criterion. The proposed method is applied to a concrete dam. A comparison of the results shows that the maximum entropy method can feed back rational control loads for the dam displacement. The resulting control loads diagram can serve as a straightforward and visual tool for the operation and management department of the dam. The result from the proposed method is recommended for use due to its minimal subjectivity. Full article
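When only the first two moments are constrained, the maximum entropy density on the real line is the normal distribution, so the critical (warning) value at a chosen exceedance probability follows directly from its inverse CDF. A minimal stdlib sketch with made-up moments; the paper's method can use higher moments, which requires solving for the Lagrange multipliers numerically:

```python
from statistics import NormalDist

# Hypothetical sample moments of the extreme load effect component (mm)
mean, std = 12.4, 2.1

# With only mean and variance given, the maximum entropy pdf is N(mean, std^2)
maxent = NormalDist(mu=mean, sigma=std)

# Critical displacement component at a 1% exceedance probability
critical = maxent.inv_cdf(0.99)
```

Displacements whose extreme load effect component exceeds `critical` would trigger the warning criterion in this simplified two-moment setting.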
Open AccessArticle Kolmogorov Complexity Based Information Measures Applied to the Analysis of Different River Flow Regimes
Entropy 2015, 17(5), 2973-2987; doi:10.3390/e17052973
Received: 14 January 2015 / Revised: 26 March 2015 / Accepted: 6 May 2015 / Published: 8 May 2015
Cited by 4 | PDF Full-text (4703 KB) | HTML Full-text | XML Full-text
Abstract
We have used the Kolmogorov complexities and the Kolmogorov complexity spectrum to quantify the degree of randomness in river flow time series of seven rivers with different regimes in Bosnia and Herzegovina, representing their different types of courses, for the period 1965–1986. In particular, we have examined: (i) the Neretva, the Bosnia and the Drina (mountain and lowland parts), (ii) the Miljacka and the Una (mountain part) and (iii) the Vrbas and the Ukrina (lowland part), and then calculated the Kolmogorov complexity (KC) based on the Lempel–Ziv Algorithm (LZA) (lower—KCL and upper—KCU), the Kolmogorov complexity spectrum highest value (KCM) and the overall Kolmogorov complexity (KCO) for each time series. The results indicate that the KCL, KCU, KCM and KCO values of the seven rivers show some similarities regardless of the amplitude differences in their monthly flow rates. The KCL, KCU and KCM complexities as information measures do not “see” a difference between time series which have different amplitude variations but similar random components. However, it seems that the KCO information measure better takes into account both the amplitude and the position of the components in a time series. Full article
(This article belongs to the Special Issue Entropy in Hydrology)
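The usual first step in such analyses is binarizing the series at its mean and counting Lempel–Ziv phrases. The sketch below uses a greedy LZ78-style parse as a stand-in; the authors' LZA-based lower and upper measures (KCL/KCU) differ in detail, so this is illustrative only:

```python
def lz_phrase_count(s):
    """Count distinct phrases in a greedy LZ78-style parse of a symbol string."""
    phrases, w, c = set(), "", 0
    for ch in s:
        w += ch
        if w not in phrases:
            phrases.add(w)
            c += 1
            w = ""
    return c + (1 if w else 0)  # count a trailing, already-seen phrase once

def binarize(series):
    """Threshold a numeric series at its mean, the usual first step."""
    m = sum(series) / len(series)
    return "".join("1" if x >= m else "0" for x in series)

regular = lz_phrase_count("0" * 10)        # a constant series parses cheaply
irregular = lz_phrase_count("0110100110")  # an irregular one needs more phrases
```

A more random series requires more phrases, which, after normalization by the series length, yields a complexity measure in the spirit of KC.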
Open AccessCommunication Dimensional Upgrade Approach for Spatial-Temporal Fusion of Trend Series in Subsidence Evaluation
Entropy 2015, 17(5), 3035-3052; doi:10.3390/e17053035
Received: 16 September 2014 / Revised: 15 April 2015 / Accepted: 29 April 2015 / Published: 11 May 2015
Cited by 2 | PDF Full-text (2419 KB) | HTML Full-text | XML Full-text
Abstract
Physical models and grey system models (GSMs) are commonly used to evaluate and predict physical behavior. A physical model avoids the incorrect trend series of a GSM, whereas a GSM avoids the assumptions and uncertainty of a physical model. A technique that combines the results of physical models and GSMs would make prediction more reasonable and reliable. This study proposes a fusion method for combining two trend series, each calculated using a one-dimensional model, that uses a slope criterion and a distance weighting factor in the temporal and spatial domains. The independent one-dimensional evaluations are upgraded to a spatially and temporally connected two-dimensional distribution. The proposed technique was applied to a subsidence problem in the Jhuoshuei River Alluvial Fan, Taiwan. The fusion results show dramatic decreases in subsidence quantity and rate compared to those estimated by the GSM. The subsidence behavior estimated using the proposed method is physically reasonable due to a convergent trend of subsidence under the assumption of constant discharge of groundwater. The technique proposed in this study can be used in fields that require a combination of two trend series from physical and nonphysical models. Full article
(This article belongs to the Special Issue Entropy and Space-Time Analysis in Environment and Health)
Open AccessArticle Predicting Community Evolution in Social Networks
Entropy 2015, 17(5), 3053-3096; doi:10.3390/e17053053
Received: 28 February 2015 / Revised: 4 May 2015 / Accepted: 5 May 2015 / Published: 11 May 2015
Cited by 9 | PDF Full-text (8691 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays, sustained development of various social media can be observed worldwide. One relevant research domain intensively explored recently is the analysis of social communities existing in social media, as well as the prediction of their future evolution based on collected historical evolution chains. The evolution chains proposed in the paper contain group states in previous time frames and their historical transitions, identified using one of two methods: Stable Group Changes Identification (SGCI) and Group Evolution Discovery (GED). Based on the observed evolution chains of various lengths, structural network features are extracted, validated, selected and used to learn classification models. The experimental studies were performed on three real datasets with different profiles: DBLP, Facebook and the Polish blogosphere. The process of group prediction was analysed with respect to different classifiers as well as various descriptive feature sets extracted from evolution chains of different lengths. The results revealed that, in general, the longer the evolution chains, the better the predictive abilities of the classification models. However, chains of length 3 to 7 already allowed the GED-based method to almost reach its maximum possible prediction quality; for SGCI, the corresponding range was the last 3–5 periods. Full article
(This article belongs to the Section Complexity)
Open AccessArticle Exponential Outer Synchronization between Two Uncertain Time-Varying Complex Networks with Nonlinear Coupling
Entropy 2015, 17(5), 3097-3109; doi:10.3390/e17053097
Received: 5 March 2015 / Revised: 27 April 2015 / Accepted: 5 May 2015 / Published: 11 May 2015
Cited by 9 | PDF Full-text (289 KB) | HTML Full-text | XML Full-text
Abstract
This paper studies the problem of exponential outer synchronization between two uncertain nonlinearly coupled complex networks with time delays. In order to synchronize uncertain complex networks, an adaptive control scheme is designed based on the Lyapunov stability theorem. Simultaneously, the unknown system parameters of uncertain complex networks are identified when exponential outer synchronization occurs. Finally, numerical examples are provided to demonstrate the feasibility and effectiveness of the theoretical results. Full article
(This article belongs to the Special Issue Recent Advances in Chaos Theory and Complex Networks)
Open AccessArticle 2D Temperature Analysis of Energy and Exergy Characteristics of Laminar Steady Flow across a Square Cylinder under Strong Blockage
Entropy 2015, 17(5), 3124-3151; doi:10.3390/e17053124
Received: 10 March 2015 / Revised: 30 April 2015 / Accepted: 7 May 2015 / Published: 12 May 2015
Cited by 1 | PDF Full-text (2601 KB) | HTML Full-text | XML Full-text
Abstract
Energy and exergy characteristics of a square cylinder (SC) in confined flow are investigated computationally by numerically handling the steady-state continuity, Navier-Stokes and energy equations in the Reynolds number range of Re = 10–50, where the blockage ratio (β = B/H) is kept constant at the high level of β = 0.8. Computations indicated for the upstream region that the mean non-dimensional streamwise (u/Uo) and spanwise (v/Uo) velocities attain the values of u/Uo = 0.840→0.879 and v/Uo = 0.236→0.386 (Re = 10→50) on the front-surface of the SC, implying that the Reynolds number and blockage have a stronger impact on the spanwise momentum activity. It is determined that flows with high Reynolds number interact with the front-surface of the SC, developing thinner thermal boundary layers and greater temperature gradients, which promotes the thermal entropy generation values as well. The strict guidance of the throat not only resulted in a fully developed flow character, but also imposed additional cooling, such that the analysis pointed out the drop of the duct wall (y = 0.025 m) non-dimensional temperature values (ζ) from ζ = 0.387→0.926 (Re = 10→50) at xth = 0 mm to ζ = 0.002→0.266 at xth = 40 mm. In the downstream region, spanwise thermal disturbances are most evident in the vortex-driven region, where the temperature values show decreasing trends in the spanwise direction. In the corresponding domain, exergy destruction is determined to grow with Reynolds number and decrease in the streamwise direction (xds = 0→10 mm). In addition, asymmetric entropy distributions were recorded due to the comprehensive mixing caused by the vortex system. Full article
(This article belongs to the Special Issue Exergy: Analysis and Applications)
Open AccessArticle Effect of Heterogeneity in Initial Geographic Distribution on Opinions’ Competitiveness
Entropy 2015, 17(5), 3160-3171; doi:10.3390/e17053160
Received: 16 February 2015 / Revised: 7 May 2015 / Accepted: 11 May 2015 / Published: 13 May 2015
Cited by 1 | PDF Full-text (821 KB) | HTML Full-text | XML Full-text
Abstract
Spin dynamics on networks allows us to understand how a global consensus emerges out of individual opinions. Here, we are interested in the effect of heterogeneity in the initial geographic distribution of a competing opinion on that opinion's competitiveness. Accordingly, in this work, we studied the effect of spatial heterogeneity on the majority rule dynamics using a three-state spin model, in which one state is neutral. Monte Carlo simulations were performed on square lattices divided into square blocks (cells). One competing opinion was distributed uniformly among cells, whereas the spatial distribution of the rival opinion was varied from uniform to heterogeneous, with the median-to-mean ratio in the range from 1 to 0. When the size of the discussion group is odd, the uncommitted agents disappear completely after 3.30 ± 0.05 update cycles, and the system then evolves in a two-state regime with complementary spatial distributions of the two competing opinions. Even so, the initial heterogeneity in the spatial distribution of one of the competing opinions decreases that opinion's competitiveness. That is, the opinion with an initially heterogeneous spatial distribution is less likely to win than the opinion with an initially uniform spatial distribution, even when the initial concentrations of both opinions are equal. We found that, although the time to consensus varies, the opinion's recession rate is determined during the first 3.3 update cycles. On the other hand, we found that the initial heterogeneity of the opinion's spatial distribution assists the formation of quasi-stable regions in which this opinion is dominant. The results of the Monte Carlo simulations are discussed with regard to the electoral competition of political parties. Full article
(This article belongs to the Section Complexity)
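A minimal, hedged sketch of one plausible variant of such an update (not the authors' exact lattice/block scheme): groups of agents are drawn at random, neutrals do not vote, and the group adopts the majority committed opinion. Under this rule an agent can leave the neutral state but never re-enter it, so the neutral population is non-increasing:

```python
import random

def sweep(spins, group_size=3, rng=random):
    """One update cycle: random groups adopt the majority committed opinion."""
    n = len(spins)
    for _ in range(n):
        group = rng.sample(range(n), group_size)
        votes = [spins[i] for i in group if spins[i] != 0]  # neutrals do not vote
        if votes:
            plus, minus = votes.count(1), votes.count(-1)
            if plus != minus:  # ties leave the group unchanged
                winner = 1 if plus > minus else -1
                for i in group:
                    spins[i] = winner
    return spins

rng = random.Random(0)  # fixed seed for reproducibility
spins = [1] * 30 + [-1] * 30 + [0] * 40
initial_neutral = spins.count(0)
for _ in range(10):
    sweep(spins, rng=rng)
final_neutral = spins.count(0)
```

In repeated sweeps the neutral fraction decays, after which the dynamics reduces to a two-state competition, mirroring the two-regime behavior described in the abstract.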
Open AccessArticle Existence of Ulam Stability for Iterative Fractional Differential Equations Based on Fractional Entropy
Entropy 2015, 17(5), 3172-3181; doi:10.3390/e17053172
Received: 12 March 2015 / Revised: 27 April 2015 / Accepted: 11 May 2015 / Published: 13 May 2015
Cited by 8 | PDF Full-text (202 KB) | HTML Full-text | XML Full-text
Abstract
In this study, we introduce conditions for the existence of solutions of an iterative functional differential equation of fractional order. We prove that the solutions of the above class of fractional differential equations are bounded by the Tsallis entropy. The method depends on the concept of Hyers-Ulam stability. The fractional order is taken in the sense of the Riemann-Liouville calculus. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
Open AccessArticle Exact Solutions of Non-Linear Lattice Equations by an Improved Exp-Function Method
Entropy 2015, 17(5), 3182-3193; doi:10.3390/e17053182
Received: 9 April 2015 / Revised: 29 April 2015 / Accepted: 30 April 2015 / Published: 13 May 2015
Cited by 3 | PDF Full-text (233 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the exp-function method is improved to construct exact solutions of non-linear lattice equations by modifying its exponential function ansatz. The improved method has two advantages. One is that it can solve non-linear lattice equations with variable coefficients, and the other is that it is not necessary to balance the highest order derivative with the highest order nonlinear term in the procedure of determining the exponential function ansatz. To show the advantages of this improved method, a variable-coefficient mKdV lattice equation is considered. As a result, new exact solutions, which include kink-type solutions and bell-kink-type solutions, are obtained. Full article
(This article belongs to the Special Issue Non-Linear Lattice) Printed Edition available
Open AccessArticle Quantum Data Locking for Secure Communication against an Eavesdropper with Time-Limited Storage
Entropy 2015, 17(5), 3194-3204; doi:10.3390/e17053194
Received: 6 April 2015 / Revised: 6 May 2015 / Accepted: 7 May 2015 / Published: 13 May 2015
Cited by 2 | PDF Full-text (94 KB) | HTML Full-text | XML Full-text
Abstract
Quantum cryptography allows for unconditionally secure communication against an eavesdropper endowed with unlimited computational power and perfect technologies, who is only constrained by the laws of physics. We review recent results showing that, under the assumption that the eavesdropper can store quantum information only for a limited time, it is possible to enhance the performance of quantum key distribution in both a quantitative and qualitative fashion. We consider quantum data locking as a cryptographic primitive and discuss secure communication and key distribution protocols. For the case of a lossy optical channel, this yields the theoretical possibility of generating secret key at a constant rate of 1 bit per mode at arbitrarily long communication distances. Full article
(This article belongs to the Special Issue Quantum Cryptography)
Open AccessArticle Generalized Stochastic Fokker-Planck Equations
Entropy 2015, 17(5), 3205-3252; doi:10.3390/e17053205
Received: 2 March 2015 / Revised: 23 April 2015 / Accepted: 27 April 2015 / Published: 13 May 2015
Cited by 4 | PDF Full-text (382 KB) | HTML Full-text | XML Full-text
Abstract
We consider a system of Brownian particles with long-range interactions. We go beyond the mean field approximation and take fluctuations into account. We introduce a new class of stochastic Fokker-Planck equations associated with a generalized thermodynamical formalism. Generalized thermodynamics arises in the case of complex systems experiencing small-scale constraints. In the limit of short-range interactions, we obtain a generalized class of stochastic Cahn-Hilliard equations. Our formalism has applications to several systems of physical interest, including self-gravitating Brownian particles, colloidal particles at a fluid interface, type-II superconductors, nucleation, the chemotaxis of bacterial populations, and two-dimensional turbulence. We also introduce a new type of generalized entropy taking into account anomalous diffusion and exclusion or inclusion constraints. Full article
(This article belongs to the Special Issue Entropic Aspects in Statistical Physics of Complex Systems)
Open AccessArticle The Homological Nature of Entropy
Entropy 2015, 17(5), 3253-3318; doi:10.3390/e17053253
Received: 31 January 2015 / Revised: 3 May 2015 / Accepted: 5 May 2015 / Published: 13 May 2015
Cited by 3 | PDF Full-text (510 KB) | HTML Full-text | XML Full-text
Abstract
We propose that entropy is a universal co-homological class in a theory associated with a family of observable quantities and a family of probability distributions. Three cases are presented: (1) classical probabilities and random variables; (2) quantum probabilities and observable operators; (3) dynamic probabilities and observation trees. This gives rise to a new kind of topology for information processes that accounts for the main information functions: entropy, mutual information at all orders, and the Kullback–Leibler divergence, and generalizes them in several ways. The article is divided into two parts that can be read independently. In the first part, the introduction, we provide an overview of the results, some open questions, future results and lines of research, and briefly discuss the application to complex data. In the second part we give the complete definitions and proofs of theorems A, C and E of the introduction, which show why entropy is the first homological invariant of a structure of information in four contexts: static classical or quantum probability, and dynamics of classical or quantum strategies of observation of a finite system. Full article
Open AccessArticle A Mean-Variance Hybrid-Entropy Model for Portfolio Selection with Fuzzy Returns
Entropy 2015, 17(5), 3319-3331; doi:10.3390/e17053319
Received: 4 February 2015 / Accepted: 20 April 2015 / Published: 14 May 2015
Cited by 3 | PDF Full-text (791 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we define the portfolio return as the fuzzy average yield and the risk as hybrid entropy and variance, to deal with the portfolio selection problem under both random and fuzzy uncertainty, and propose a mean-variance hybrid-entropy model (MVHEM). A multi-objective genetic algorithm, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), is introduced to solve the model. We make empirical comparisons using data from the Shanghai and Shenzhen stock exchanges in China. The results show that the MVHEM generally performs better than traditional portfolio selection models. Full article
Open AccessFeature PaperArticle An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro
Entropy 2015, 17(5), 3332-3351; doi:10.3390/e17053332
Received: 13 March 2015 / Accepted: 11 May 2015 / Published: 14 May 2015
Cited by 1 | PDF Full-text (8499 KB) | HTML Full-text | XML Full-text
Abstract
An information-theoretic perspective on coarse-graining is presented. It starts with an information characterization of configurations at the micro-level using a local information quantity that has a spatial average equal to a microscopic entropy. With a reversible micro dynamics, this entropy is conserved. In the micro-macro transition, it is shown how this local information quantity is transformed into a macroscopic entropy, as the local states are aggregated into macroscopic concentration variables. The information loss in this transition is identified, and the connection to the irreversibility of the macro dynamics and the second law of thermodynamics is discussed. This is then connected to a process of further coarse-graining towards higher characteristic length scales in the context of chemical reaction-diffusion dynamics capable of pattern formation. On these higher levels of coarse-graining, information flows across length scales and across space are defined. These flows obey a continuity equation for information, and they are connected to the thermodynamic constraints of the system, via an outflow of information from macroscopic to microscopic levels in the form of entropy production, as well as an inflow of information, from an external free energy source, if a spatial chemical pattern is to be maintained. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
Open AccessArticle Nonlinear Stochastic Control and Information Theoretic Dualities: Connections, Interdependencies and Thermodynamic Interpretations
Entropy 2015, 17(5), 3352-3375; doi:10.3390/e17053352
Received: 2 February 2015 / Revised: 21 April 2015 / Accepted: 29 April 2015 / Published: 15 May 2015
Cited by 5 | PDF Full-text (748 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present connections between recent developments on the linearly-solvable stochastic optimal control framework and early work in control theory based on the fundamental dualities between free energy and relative entropy. We extend these connections to nonlinear stochastic systems with non-affine
[...] Read more.
In this paper, we present connections between recent developments on the linearly-solvable stochastic optimal control framework and early work in control theory based on the fundamental dualities between free energy and relative entropy. We extend these connections to nonlinear stochastic systems with non-affine controls by using the generalized version of the Feynman–Kac lemma. We present alternative formulations of the linearly-solvable stochastic optimal control framework and discuss information theoretic and thermodynamic interpretations. On the algorithmic side, we present iterative stochastic optimal control algorithms and applications to nonlinear stochastic systems. We conclude with an overview of the frameworks presented and discuss limitations, differences and future directions. Full article
Open AccessArticle Non-Abelian Topological Approach to Non-Locality of a Hypergraph State
Entropy 2015, 17(5), 3376-3399; doi:10.3390/e17053376
Received: 16 February 2015 / Revised: 16 April 2015 / Accepted: 8 May 2015 / Published: 15 May 2015
Cited by 3 | PDF Full-text (3729 KB) | HTML Full-text | XML Full-text
Abstract
We present a theoretical study of new families of stochastic complex information modules encoded in the hypergraph states which are defined by the fractional entropic descriptor. The essential connection between the Lyapunov exponents and d-regular hypergraph fractal set is elucidated. To further
[...] Read more.
We present a theoretical study of new families of stochastic complex information modules encoded in the hypergraph states which are defined by the fractional entropic descriptor. The essential connection between the Lyapunov exponents and the d-regular hypergraph fractal set is elucidated. To further resolve the divergence in the complexity of classical and quantum representations of a hypergraph, we have investigated the notion of non-amenability and its relation to the combinatorics of dynamical self-organization for the case of a fractal system of a free group on finitely many generators. The exact relation between the notion of hypergraph non-locality and quantum encoding through system sets of specified non-Abelian fractal geometric structures is presented. The obtained results give important impetus toward the design of approximation algorithms for chip-imprinted circuits in scalable quantum information systems. Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
Open AccessArticle Entropy Approximation in Lossy Source Coding Problem
Entropy 2015, 17(5), 3400-3418; doi:10.3390/e17053400
Received: 26 March 2015 / Revised: 11 May 2015 / Accepted: 12 May 2015 / Published: 18 May 2015
Cited by 3 | PDF Full-text (359 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we investigate a lossy source coding problem, where an upper limit on the permitted distortion is defined for every dataset element. It can be seen as an alternative approach to rate distortion theory where a bound on the allowed average
[...] Read more.
In this paper, we investigate a lossy source coding problem, where an upper limit on the permitted distortion is defined for every dataset element. It can be seen as an alternative approach to rate distortion theory, where a bound on the allowed average error is specified. In order to find the entropy, which gives the statistical length of a source code compatible with a fixed distortion bound, a corresponding optimization problem has to be solved. First, we show how to simplify this general optimization by discarding the coding partitions that are irrelevant for the entropy calculation. In our main result, we present a fast, easily implementable greedy algorithm which approximates the entropy to within an additive error of log2 e. The proof is based on the minimum entropy set cover problem, for which a similar bound was obtained. Full article
(This article belongs to the Section Information Theory)
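The additive log2 e bound mentioned in the abstract above comes from the classical greedy analysis of the minimum entropy set cover problem. The following is a minimal illustrative sketch of that greedy idea, not the authors' algorithm: repeatedly assign uncovered elements to the set that covers the most of them, then take the empirical entropy of the resulting partition sizes.

```python
import math

def greedy_min_entropy_cover(universe, sets):
    # Greedily assign each element to the set covering the most
    # still-uncovered elements; the empirical entropy of the resulting
    # partition is within log2(e) bits of the optimal cover entropy
    # (the minimum entropy set cover bound referenced in the abstract).
    uncovered = set(universe)
    part_sizes = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        gain = best & uncovered
        part_sizes.append(len(gain))
        uncovered -= gain
    n = sum(part_sizes)
    # Shannon entropy (in bits) of the partition induced by the cover.
    return -sum((k / n) * math.log2(k / n) for k in part_sizes)

h = greedy_min_entropy_cover({1, 2, 3, 4, 5, 6},
                             [{1, 2, 3}, {4, 5}, {6}, {1, 4}])
```

This assumes the given sets jointly cover the universe; the partition entropy `h` then upper-bounds the optimum by at most log2 e bits.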
Open AccessArticle Minimum Error Entropy Algorithms with Sparsity Penalty Constraints
Entropy 2015, 17(5), 3419-3437; doi:10.3390/e17053419
Received: 30 January 2015 / Revised: 28 April 2015 / Accepted: 5 May 2015 / Published: 18 May 2015
Cited by 5 | PDF Full-text (915 KB) | HTML Full-text | XML Full-text
Abstract
Recently, sparse adaptive learning algorithms have been developed to exploit system sparsity as well as to mitigate various noise disturbances in many applications. In particular, in sparse channel estimation, the parameter vector with sparsity characteristic can be well estimated from noisy measurements through
[...] Read more.
Recently, sparse adaptive learning algorithms have been developed to exploit system sparsity as well as to mitigate various noise disturbances in many applications. In particular, in sparse channel estimation, a parameter vector with sparsity characteristics can be well estimated from noisy measurements through a sparse adaptive filter. Most previous works use a mean square error (MSE) based cost to develop sparse filters, which is reasonable under the assumption of Gaussian distributions. However, the Gaussian assumption does not always hold in real-world environments. To address this issue, we incorporate in this work an l1-norm or a reweighted l1-norm into the minimum error entropy (MEE) criterion to develop new sparse adaptive filters, which may perform much better than the MSE-based methods, especially in heavy-tailed non-Gaussian situations, since the error entropy can capture higher-order statistics of the errors. In addition, a new approximator of the l0-norm, based on the correntropy induced metric (CIM), is also used as a sparsity penalty term (SPT). We analyze the mean square convergence of the proposed sparse adaptive filters: an energy conservation relation is derived and a sufficient condition for mean square convergence is obtained. Simulation results confirm the superior performance of the new algorithms. Full article
Open AccessArticle Heat Transfer and Pressure Drop Characteristics in Straight Microchannel of Printed Circuit Heat Exchangers
Entropy 2015, 17(5), 3438-3457; doi:10.3390/e17053438
Received: 11 January 2015 / Revised: 11 May 2015 / Accepted: 13 May 2015 / Published: 18 May 2015
Cited by 9 | PDF Full-text (4133 KB) | HTML Full-text | XML Full-text
Abstract
Performance tests were carried out for a microchannel printed circuit heat exchanger (PCHE), which was fabricated with micro photo-etching and diffusion bonding technologies. The microchannel PCHE was tested for Reynolds numbers in the range of 100‒850 varying the hot-side inlet temperature between 40
[...] Read more.
Performance tests were carried out for a microchannel printed circuit heat exchanger (PCHE), which was fabricated with micro photo-etching and diffusion bonding technologies. The microchannel PCHE was tested for Reynolds numbers in the range of 100‒850, varying the hot-side inlet temperature between 40 °C and 50 °C while keeping the cold-side temperature fixed at 20 °C. It was found that the average heat transfer rate and heat transfer performance of the countercurrent configuration were 6.8% and 10%‒15% higher, respectively, than those of the parallel-flow configuration. The average heat transfer rate, heat transfer performance and pressure drop increased with increasing Reynolds number in all experiments. Increasing the inlet temperature did not affect the heat transfer performance, while it slightly decreased the pressure drop in the experimental range considered. Empirical correlations have been developed for the heat transfer coefficient and the pressure drop factor as functions of the Reynolds number. Full article
Open AccessArticle Nonparametric Denoising Methods Based on Contourlet Transform with Sharp Frequency Localization: Application to Low Exposure Time Electron Microscopy Images
Entropy 2015, 17(5), 3461-3478; doi:10.3390/e17053461
Received: 24 February 2015 / Accepted: 29 April 2015 / Published: 20 May 2015
Cited by 3 | PDF Full-text (4085 KB) | HTML Full-text | XML Full-text
Abstract
Image denoising is a very important step in cryo-transmission electron microscopy (cryo-TEM) and energy-filtered TEM imaging before 3D tomographic reconstruction, as it addresses the problem of high noise in these images, which leads to a loss of the information they contain.
[...] Read more.
Image denoising is a very important step in cryo-transmission electron microscopy (cryo-TEM) and energy-filtered TEM imaging before 3D tomographic reconstruction, as it addresses the problem of high noise in these images, which leads to a loss of the information they contain. High noise levels contribute in particular to difficulties in the alignment required for 3D tomography reconstruction. This paper investigates the denoising of TEM images that are acquired with a very low exposure time, with the primary objectives of enhancing the quality of these low-exposure-time TEM images and improving the alignment process. We propose denoising structures that combine multiple noisy copies of the TEM images. The structures are based on Bayesian estimation in transform domains instead of the spatial domain, yielding novel feature-preserving image denoising structures in three domains: the wavelet domain, the contourlet transform domain, and the contourlet transform with sharp frequency localization. Numerical image denoising experiments demonstrate the performance of the Bayesian approach in the contourlet transform domain in terms of improving the signal-to-noise ratio (SNR) and recovering fine details that may be hidden in the data. The SNR and the visual quality of the denoised images are considerably enhanced using these denoising structures that combine multiple noisy copies. The proposed methods also enable a reduction in the exposure time. Full article
Open AccessArticle Operational Reliability Assessment of Compressor Gearboxes with Normalized Lifting Wavelet Entropy from Condition Monitoring Information
Entropy 2015, 17(5), 3479-3500; doi:10.3390/e17053479
Received: 12 April 2015 / Accepted: 14 May 2015 / Published: 20 May 2015
PDF Full-text (1431 KB) | HTML Full-text | XML Full-text
Abstract
Classical reliability assessment methods have predominantly focused on probability and statistical theories, which are insufficient in assessing the operational reliability of individual mechanical equipment with time-varying characteristics. A new approach to assess machinery operational reliability with normalized lifting wavelet entropy from condition monitoring
[...] Read more.
Classical reliability assessment methods have predominantly relied on probability and statistical theories, which are insufficient for assessing the operational reliability of individual mechanical equipment with time-varying characteristics. A new approach to assessing machinery operational reliability from condition monitoring information, based on a normalized lifting wavelet entropy, is proposed, which differs from classical reliability assessment methods that depend on probability and statistics analysis. The machinery vibration signals with time-varying operational characteristics are first decomposed and reconstructed by means of a lifting wavelet packet transform. The relative energy of every reconstructed signal is computed as the percentage of the whole signal energy contained in that reconstructed signal. A normalized lifting wavelet entropy is then defined from these relative energies to reveal the machinery operational uncertainty. Finally, an operational reliability degree is defined as the quantitative value, in the range [0, 1], obtained from the normalized lifting wavelet entropy. The proposed method is applied to the operational reliability assessment of the gearbox in an oxy-generator compressor to validate its effectiveness. Full article
(This article belongs to the Special Issue Wavelet Entropy: Computation and Applications)
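The pipeline in the abstract above (decompose, compute relative sub-band energies, form a normalized entropy) can be illustrated with a minimal sketch. This is not the paper's method: it assumes a single-level Haar lifting step in place of the full lifting wavelet packet transform, and normalizes the Shannon entropy by the log of the number of sub-bands so it lies in [0, 1].

```python
import math

def haar_lifting(x):
    # One lifting step of the Haar wavelet: split into even/odd samples,
    # predict the detail coefficients, then update the approximation.
    even, odd = x[::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def normalized_wavelet_entropy(x):
    # Relative energy of each sub-band, then Shannon entropy of those
    # relative energies, normalized by log(number of sub-bands).
    bands = haar_lifting(x)
    energies = [sum(c * c for c in band) for band in bands]
    total = sum(energies)  # assumes a non-zero signal
    probs = [e / total for e in energies if e > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(bands))
```

A steady signal concentrates all energy in the approximation band and gives entropy 0, while a signal spread across bands gives a value closer to 1; the paper maps such a value to an operational reliability degree.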
Open AccessArticle Information Decomposition and Synergy
Entropy 2015, 17(5), 3501-3517; doi:10.3390/e17053501
Received: 26 March 2015 / Revised: 12 May 2015 / Accepted: 19 May 2015 / Published: 22 May 2015
Cited by 13 | PDF Full-text (259 KB) | HTML Full-text | XML Full-text
Abstract
Recently, a series of papers addressed the problem of decomposing the information of two random variables into shared information, unique information and synergistic information. Several measures were proposed, although still no consensus has been reached. Here, we compare these proposals with an older
[...] Read more.
Recently, a series of papers addressed the problem of decomposing the information of two random variables into shared information, unique information and synergistic information. Several measures were proposed, although no consensus has yet been reached. Here, we compare these proposals with an older approach to defining synergistic information based on projections onto exponential families containing only up to k-th order interactions. We show that these measures are not compatible with a decomposition into unique, shared and synergistic information if one requires that all terms are always non-negative (local positivity). We illustrate the difference between the two measures for multivariate Gaussians. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)

Review

Jump to: Editorial, Research

Open AccessReview Properties of Nonnegative Hermitian Matrices and New Entropic Inequalities for Noncomposite Quantum Systems
Entropy 2015, 17(5), 2876-2894; doi:10.3390/e17052876
Received: 30 December 2014 / Revised: 28 April 2015 / Accepted: 4 May 2015 / Published: 6 May 2015
Cited by 19 | PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
We consider the probability distributions, spin (qudit)-state tomograms and density matrices of quantum states, and their information characteristics, such as Shannon and von Neumann entropies and q-entropies, from the viewpoints of both well-known purely mathematical features of nonnegative numbers and nonnegative matrices and
[...] Read more.
We consider the probability distributions, spin (qudit)-state tomograms and density matrices of quantum states, and their information characteristics, such as Shannon and von Neumann entropies and q-entropies, from the viewpoints of both the well-known purely mathematical features of nonnegative numbers and nonnegative matrices and their physical characteristics, such as entanglement and other quantum correlation phenomena. We review entropic inequalities such as the Araki–Lieb inequality and the subadditivity and strong subadditivity conditions, known for bipartite and tripartite systems and recently obtained for single qudit states. We present explicit matrix forms of the known and some new entropic inequalities associated with quantum states of composite and noncomposite systems. We discuss the tomographic probability distributions of qudit states and demonstrate the inequalities for tomographic entropies of the qudit states. In addition, we mention the possibility of using the discussed information properties of single qudit states in quantum technologies based on multilevel atoms and quantum circuits made of Josephson junctions. Full article
(This article belongs to the Special Issue Entanglement Entropy)
Open AccessReview Log-Determinant Divergences Revisited: Alpha-Beta and Gamma Log-Det Divergences
Entropy 2015, 17(5), 2988-3034; doi:10.3390/e17052988
Received: 19 December 2014 / Revised: 18 March 2015 / Accepted: 5 May 2015 / Published: 8 May 2015
Cited by 6 | PDF Full-text (759 KB) | HTML Full-text | XML Full-text
Abstract
This work reviews and extends a family of log-determinant (log-det) divergences for symmetric positive definite (SPD) matrices and discusses their fundamental properties. We show how to use parameterized Alpha-Beta (AB) and Gamma log-det divergences to generate many well-known divergences; in particular, we consider
[...] Read more.
This work reviews and extends a family of log-determinant (log-det) divergences for symmetric positive definite (SPD) matrices and discusses their fundamental properties. We show how to use parameterized Alpha-Beta (AB) and Gamma log-det divergences to generate many well-known divergences; in particular, we consider Stein’s loss, the S-divergence (also called the Jensen-Bregman LogDet (JBLD) divergence), the Logdet Zero (Bhattacharyya) divergence, the Affine Invariant Riemannian Metric (AIRM), and other divergences. Moreover, we establish links and correspondences between log-det divergences and visualise them on an alpha-beta plane for various sets of parameters. We use this unifying framework to interpret and extend existing similarity measures for semidefinite covariance matrices in finite-dimensional Reproducing Kernel Hilbert Spaces (RKHS). This paper also shows how the Alpha-Beta family of log-det divergences relates to the divergences of multivariate and multilinear normal distributions. Closed-form formulas are derived for Gamma divergences of two multivariate Gaussian densities; the special cases of the Kullback-Leibler, Bhattacharyya, Rényi, and Cauchy–Schwarz divergences are discussed. Symmetrized versions of log-det divergences are also considered and briefly reviewed. Finally, a class of divergences is extended to multiway divergences for separable covariance (or precision) matrices. Full article
(This article belongs to the Section Information Theory)
Open AccessReview The Multiscale Entropy Algorithm and Its Variants: A Review
Entropy 2015, 17(5), 3110-3123; doi:10.3390/e17053110
Received: 18 March 2015 / Accepted: 8 May 2015 / Published: 12 May 2015
Cited by 36 | PDF Full-text (197 KB) | HTML Full-text | XML Full-text
Abstract
Multiscale entropy (MSE) analysis was introduced in 2002 to evaluate the complexity of a time series by quantifying its entropy over a range of temporal scales. The algorithm has been successfully applied in different research fields. Since its introduction, a number of
[...] Read more.
Multiscale entropy (MSE) analysis was introduced in 2002 to evaluate the complexity of a time series by quantifying its entropy over a range of temporal scales. The algorithm has been successfully applied in different research fields. Since its introduction, a number of modifications and refinements have been proposed, some aimed at increasing the accuracy of the entropy estimates, others at exploring alternative coarse-graining procedures. In this review, we first describe the original MSE algorithm. Then, we review algorithms that have been introduced to improve the estimation of MSE. We also report a recent generalization of the method to higher moments. Full article
(This article belongs to the Special Issue Multiscale Entropy and Its Applications in Medicine and Biology)
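The original MSE algorithm reviewed in the abstract above has two steps: coarse-grain the series by averaging non-overlapping windows of length tau, then compute the sample entropy of each coarse-grained series. A minimal pure-Python sketch, assuming fixed tolerance `r` and embedding dimension `m` (in practice `r` is usually a fraction of the series' standard deviation):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    # SampEn: negative log of the conditional probability that two
    # sequences matching for m points (within tolerance r, Chebyshev
    # distance) also match for m + 1 points.
    def match_count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = match_count(m), match_count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales, m=2, r=0.2):
    # Coarse-grain by averaging non-overlapping windows of length tau,
    # then compute sample entropy of each coarse-grained series.
    result = []
    for tau in scales:
        y = [sum(x[i:i + tau]) / tau for i in range(0, len(x) - tau + 1, tau)]
        result.append(sample_entropy(y, m, r))
    return result
```

The variants surveyed in the review modify the coarse-graining step (e.g. moving averages or other filters) or the entropy estimator itself while keeping this overall structure.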
Open AccessReview Continuous-Variable Entanglement Swapping
Entropy 2015, 17(5), 3152-3159; doi:10.3390/e17053152
Received: 26 March 2015 / Revised: 1 May 2015 / Accepted: 4 May 2015 / Published: 13 May 2015
PDF Full-text (217 KB) | HTML Full-text | XML Full-text
Abstract
We present a very brief overview of entanglement swapping as it relates to continuous-variable quantum information. The technical background required is discussed and the natural link to quantum teleportation is established before discussing the nature of Gaussian entanglement swapping. The limitations of Gaussian
[...] Read more.
We present a very brief overview of entanglement swapping as it relates to continuous-variable quantum information. The technical background required is discussed and the natural link to quantum teleportation is established before discussing the nature of Gaussian entanglement swapping. The limitations of Gaussian swapping are introduced, along with the general applications of swapping in the context of quantum communication and entanglement distribution. In light of this, we briefly summarize a collection of entanglement swapping schemes which incorporate a non-Gaussian ingredient, and the benefits of such schemes are noted. Finally, we motivate the need to further study and develop such schemes by highlighting the requirements of a continuous-variable repeater. Full article
(This article belongs to the Special Issue Quantum Cryptography)