Table of Contents

Entropy, Volume 19, Issue 6 (June 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Estimation of flood magnitude for a given recurrence interval T (T-year flood) at a specific location [...]

Research


Open Access Article Automatic Epileptic Seizure Detection in EEG Signals Using Multi-Domain Feature Extraction and Nonlinear Analysis
Entropy 2017, 19(6), 222; doi:10.3390/e19060222
Received: 14 March 2017 / Revised: 5 May 2017 / Accepted: 9 May 2017 / Published: 27 May 2017
PDF Full-text (889 KB) | HTML Full-text | XML Full-text
Abstract
Epileptic seizure detection is commonly performed by expert clinicians through visual observation of electroencephalography (EEG) signals, which tends to be time consuming and sensitive to bias. Epileptic seizure detection in most previous research also suffers from low detection power and unsuitability for processing large datasets. Therefore, a computerized epileptic seizure detection method is highly desirable to address these problems, expedite epilepsy research and aid medical professionals. In this work, we propose an automatic epilepsy diagnosis framework based on the combination of multi-domain feature extraction and nonlinear analysis of EEG signals. Firstly, EEG signals are pre-processed using the wavelet threshold method to remove artifacts. We then extract representative features in the time domain, frequency domain and time-frequency domain, together with nonlinear analysis features based on information theory. These features are further extracted in five frequency sub-bands of clinical interest, and the dimension of the original feature space is then reduced by using both principal component analysis and analysis of variance. Furthermore, the optimal combination of the extracted features is identified and evaluated via different classifiers for the epileptic seizure detection of EEG signals. Finally, the performance of the proposed method is investigated using a public EEG database from the University Hospital Bonn, Germany. Experimental results demonstrate that the proposed epileptic seizure detection method achieves a high average accuracy of 99.25%, indicating a powerful method for the detection and classification of epileptic seizures. The proposed seizure detection scheme is thus expected to relieve the burden on expert clinicians who visually inspect large amounts of data and to speed up epilepsy diagnosis. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
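
As an illustration of the kind of band-limited spectral and entropy features described above, the following minimal sketch computes clinical sub-band powers and a spectral (Shannon) entropy for one EEG segment. It is not the authors' pipeline: the band edges, the Welch power-spectrum estimate and the 173.61 Hz sampling rate (the value usually quoted for the Bonn database) are assumptions made for illustration only.

```python
import numpy as np
from scipy.signal import welch

# Assumed clinical sub-bands (Hz); the paper's exact band definitions may differ.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_features(eeg_segment, fs=173.61):
    """Return sub-band powers and the spectral (Shannon) entropy of one EEG segment."""
    f, pxx = welch(eeg_segment, fs=fs, nperseg=min(1024, len(eeg_segment)))
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (f >= lo) & (f < hi)
        feats[f"power_{name}"] = np.trapz(pxx[mask], f[mask])
    p = pxx / pxx.sum()                                   # normalise the spectrum
    feats["spectral_entropy"] = -np.sum(p * np.log2(p + 1e-12))
    return feats

# Example on surrogate data (white noise standing in for an EEG recording).
rng = np.random.default_rng(0)
print(band_features(rng.standard_normal(4096)))
```

In a full pipeline of the kind the abstract outlines, such features would be computed per channel after wavelet denoising, pooled with time-domain and nonlinear features, reduced, and passed to a classifier.
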

Open Access Article An Entropy-Based Generalized Gamma Distribution for Flood Frequency Analysis
Entropy 2017, 19(6), 239; doi:10.3390/e19060239
Received: 17 April 2017 / Revised: 17 May 2017 / Accepted: 19 May 2017 / Published: 2 June 2017
PDF Full-text (3256 KB) | HTML Full-text | XML Full-text
Abstract
Flood frequency analysis (FFA) is needed for the design of water engineering and hydraulic structures. The choice of an appropriate frequency distribution is one of the most important issues in FFA. A key problem in FFA is that no single distribution has been accepted as a global standard. The common practice is to try several candidate distributions and select the one that best fits the data, based on a goodness-of-fit criterion. However, this practice entails much calculation. Sometimes generalized distributions, which can specialize into several simpler distributions, are fitted, for they may provide a better fit to the data. Therefore, the generalized gamma (GG) distribution was employed for FFA in this study. The principle of maximum entropy (POME) was used to estimate the GG parameters. Monte Carlo simulation was carried out to evaluate the performance of the GG distribution and to compare it with widely used distributions. Finally, the T-year design flood was calculated using the GG distribution and compared with those obtained from other distributions. Results show that the GG distribution is either superior or comparable to the other distributions. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
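
For readers who want to experiment with a generalized gamma fit to a T-year flood, the sketch below uses SciPy's built-in maximum-likelihood fit of its `gengamma` distribution (not the POME estimation used in the paper) on a synthetic annual-maximum series; the return period T is converted to the non-exceedance probability 1 - 1/T before inverting the fitted distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic annual maximum flows (m^3/s); a real study would use gauged records.
annual_peaks = stats.gengamma.rvs(a=2.0, c=1.5, scale=300.0, size=60, random_state=rng)

# Maximum-likelihood fit; POME, as used in the paper, would give different estimates.
a, c, loc, scale = stats.gengamma.fit(annual_peaks, floc=0)

def t_year_flood(T):
    """Design flood with return period T: the (1 - 1/T) quantile of the fitted model."""
    return stats.gengamma.ppf(1.0 - 1.0 / T, a, c, loc=loc, scale=scale)

for T in (10, 50, 100):
    print(f"{T}-year flood estimate: {t_year_flood(T):.0f} m^3/s")
```
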

Open Access Article Axiomatic Characterization of the Quantum Relative Entropy and Free Energy
Entropy 2017, 19(6), 241; doi:10.3390/e19060241
Received: 6 April 2017 / Revised: 3 May 2017 / Accepted: 16 May 2017 / Published: 23 May 2017
PDF Full-text (249 KB) | HTML Full-text | XML Full-text
Abstract
Building upon work by Matsumoto, we show that the quantum relative entropy with full-rank second argument is determined by four simple axioms: (i) Continuity in the first argument; (ii) the validity of the data-processing inequality; (iii) additivity under tensor products; and (iv) super-additivity. This observation has immediate implications for quantum thermodynamics, which we discuss. Specifically, we demonstrate that, under reasonable restrictions, the free energy is singled out as a measure of athermality. In particular, we consider an extended class of Gibbs-preserving maps as free operations in a resource-theoretic framework, in which a catalyst is allowed to build up correlations with the system at hand. The free energy is the only extensive and continuous function that is monotonic under such free operations. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
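
For reference, the quantities this abstract refers to have the following standard forms (textbook definitions with the natural logarithm, not reproduced from the paper); in particular, the relative entropy to the Gibbs state γ is, up to the factor k_B T, exactly the free-energy difference that serves as the athermality measure.

```latex
\[
D(\rho\|\sigma) = \operatorname{Tr}\bigl[\rho(\log\rho - \log\sigma)\bigr],
\qquad
D\bigl(\mathcal{E}(\rho)\,\big\|\,\mathcal{E}(\sigma)\bigr) \le D(\rho\|\sigma)
\quad\text{(data processing)},
\]
\[
F(\rho) = \operatorname{Tr}[\rho H] - k_B T\, S(\rho),
\qquad
F(\rho) - F(\gamma) = k_B T\, D(\rho\|\gamma),
\qquad
\gamma = \frac{e^{-H/k_B T}}{\operatorname{Tr} e^{-H/k_B T}} .
\]
```

Additivity and super-additivity in the abstract refer to D(ρ1 ⊗ ρ2 ‖ σ1 ⊗ σ2) = D(ρ1‖σ1) + D(ρ2‖σ2) and D(ρ12 ‖ σ1 ⊗ σ2) ≥ D(ρ1‖σ1) + D(ρ2‖σ2), respectively.
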
Open Access Article A Framework for Designing the Architectures of Deep Convolutional Neural Networks
Entropy 2017, 19(6), 242; doi:10.3390/e19060242
Received: 11 April 2017 / Revised: 17 May 2017 / Accepted: 18 May 2017 / Published: 24 May 2017
PDF Full-text (2754 KB) | HTML Full-text | XML Full-text
Abstract
Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture that fits a given problem. Hand-crafting an architecture is a challenging, time-consuming process that requires expert knowledge and effort, due to the large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances the performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework has the ability to explore a CNN architecture's numerous design choices in an efficient way and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably against the state-of-the-art results on the CIFAR-10 and CIFAR-100 datasets. Our approach plays a significant role in increasing the depth, reducing the stride sizes, and constraining some convolutional layers to not be followed by pooling layers, in order to find a CNN architecture that produces high recognition performance. Full article
(This article belongs to the Section Information Theory)
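
The role of the Nelder-Mead Method in such a framework can be illustrated schematically. In the sketch below, `surrogate_score` is a hypothetical stand-in for the expensive step of training a candidate CNN and combining its error rate with the deconvnet-based feature-map term; the two optimization variables, network depth and filters per layer, are likewise illustrative choices rather than the paper's actual hyperparameter set.

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_score(x):
    """Hypothetical objective standing in for 'train CNN, measure error rate + deconvnet term'.
    x[0] ~ number of convolutional layers, x[1] ~ filters per layer (continuous relaxation)."""
    depth = int(round(np.clip(x[0], 1, 12)))
    filters = int(round(np.clip(x[1], 8, 256)))
    # Toy landscape with an interior optimum; a real evaluation would be far more expensive.
    return (depth - 6) ** 2 / 36 + (filters - 96) ** 2 / 9216

result = minimize(surrogate_score, x0=np.array([3.0, 32.0]), method="Nelder-Mead")
best_depth = int(round(result.x[0]))
best_filters = int(round(result.x[1]))
print(f"suggested architecture: {best_depth} conv layers, {best_filters} filters each")
```
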

Open Access Article On the Energy-Distortion Tradeoff of Gaussian Broadcast Channels with Feedback
Entropy 2017, 19(6), 243; doi:10.3390/e19060243
Received: 25 April 2017 / Revised: 17 May 2017 / Accepted: 19 May 2017 / Published: 24 May 2017
PDF Full-text (1375 KB) | HTML Full-text | XML Full-text
Abstract
This work studies the relationship between the energy allocated for transmitting a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel with noiseless channel output feedback (GBCF) and the resulting distortion at the receivers. Our goal is to characterize the minimum transmission energy required for broadcasting a pair of source samples, such that each source can be reconstructed at its respective receiver to within a target distortion, when the source-channel bandwidth ratio is not restricted. This minimum transmission energy is defined as the energy-distortion tradeoff (EDT). We derive a lower bound and three upper bounds on the optimal EDT. For the upper bounds, we analyze the EDT of three transmission schemes: two schemes are based on separate source-channel coding and apply encoding over multiple samples of source pairs, and the third scheme is a joint source-channel coding scheme that applies uncoded linear transmission on a single source-sample pair and is obtained by extending the Ozarow–Leung (OL) scheme. Numerical simulations show that the EDT of the OL-based scheme is close to that of the better of the two separation-based schemes, which makes the OL scheme attractive for energy-efficient, low-latency and low-complexity source transmission over GBCFs. Full article
(This article belongs to the Special Issue Network Information Theory)

Open Access Article The Tale of Two Financial Crises: An Entropic Perspective
Entropy 2017, 19(6), 244; doi:10.3390/e19060244
Received: 18 April 2017 / Revised: 12 May 2017 / Accepted: 19 May 2017 / Published: 24 May 2017
PDF Full-text (351 KB) | HTML Full-text | XML Full-text
Abstract
This paper provides a comparative analysis of stock market dynamics of the 1987 and 2008 financial crises and discusses the extent to which risk management measures based on entropy can be successful in predicting aggregate market expectations. We find that the Tsallis entropy is more appropriate for the short and sudden market crash of 1987, while the approximate entropy is the dominant predictor of the prolonged, fundamental crisis of 2008. We conclude by suggesting the use of entropy as a market sentiment indicator in technical analysis. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)
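
As a small illustration of one of the entropy measures compared here, the sketch below evaluates the Tsallis entropy S_q = (1 - Σ p_i^q)/(q - 1) of an empirical return distribution. The synthetic return series, the bin count and the choice q = 1.5 are assumptions made for illustration; the paper's estimation details may differ.

```python
import numpy as np

def tsallis_entropy(returns, q=1.5, bins=50):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1) of the empirical return distribution."""
    counts, _ = np.histogram(returns, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

rng = np.random.default_rng(1987)
calm = rng.normal(0.0, 0.01, size=250)          # synthetic "calm market" daily returns
crash = rng.standard_t(df=3, size=250) * 0.02   # heavy-tailed "crisis" returns
print("S_q calm :", tsallis_entropy(calm))
print("S_q crash:", tsallis_entropy(crash))
```
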

Open Access Article Entropy Analysis of Monetary Unions
Entropy 2017, 19(6), 245; doi:10.3390/e19060245
Received: 4 March 2017 / Revised: 13 May 2017 / Accepted: 17 May 2017 / Published: 24 May 2017
PDF Full-text (6813 KB) | HTML Full-text | XML Full-text
Abstract
This paper is an exercise of quantitative dynamic analysis applied to bilateral relationships among partners within two monetary union case studies: the Portuguese escudo zone monetary union (EZMU) and the European euro zone monetary union (EMU). Real world data are tackled and measures that are usual in complex system analysis, such as entropy, mutual information, Canberra distance, and Jensen–Shannon divergence, are adopted. The emerging relationships are visualized by means of the multidimensional scaling and hierarchical clustering computational techniques. Results bring evidence on long-run stochastic dynamics that lead to asymmetric indebtedness mechanisms among the partners of a monetary union and sustainability difficulties. The consequences of unsustainability and disruption of monetary unions have high importance for the discussion on optimal currency areas from a geopolitical perspective. Full article
(This article belongs to the Section Complexity)

Open Access Article Glassy States of Aging Social Networks
Entropy 2017, 19(6), 246; doi:10.3390/e19060246
Received: 11 April 2017 / Revised: 12 May 2017 / Accepted: 14 May 2017 / Published: 30 May 2017
PDF Full-text (559 KB) | HTML Full-text | XML Full-text
Abstract
Individuals often develop a reluctance to change their social relations, called “secondary homebody”, even though their interactions with their environment evolve with time. A memory effect is loosely present that discourages changes; in other words, in the presence of memory, relations do not change easily. In order to investigate such a history or memory effect on social networks, we introduce a temporal kernel function into the conventional Heider balance theory, allowing the “quality” of past relations to contribute to the evolution of the system. This memory effect is shown to lead to the emergence of aged networks, thereby describing, and what is more, measuring, the aging process of links (“social relations”). It is shown that such a memory does not change the dynamical attractors of the system, but does prolong the time necessary to reach the “balanced states”. The general trend goes toward obtaining either global (“paradise” or “bipolar”) or local (“jammed”) balanced states, but is profoundly affected by aged relations. The resistance of older links against changes decelerates the evolution of the system and traps it into so-called glassy states. In contrast to balance configurations, which live on stable states, such long-lived glassy states can survive in unstable states. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)

Open Access Article Improving the Naive Bayes Classifier via a Quick Variable Selection Method Using Maximum of Entropy
Entropy 2017, 19(6), 247; doi:10.3390/e19060247
Received: 24 March 2017 / Revised: 29 April 2017 / Accepted: 19 May 2017 / Published: 25 May 2017
PDF Full-text (277 KB) | HTML Full-text | XML Full-text
Abstract
Variable selection methods play an important role in the field of attribute mining. The Naive Bayes (NB) classifier is a very simple and popular classification method that yields good results in a short processing time. Hence, it is a very appropriate classifier for very large datasets. The method has a high dependence on the relationships between the variables. The Info-Gain (IG) measure, which is based on general entropy, can be used as a quick variable selection method. This measure ranks the importance of the attribute variables on a variable under study via the information obtained from a dataset. The main drawback is that it is always non-negative and it requires setting the information threshold to select the set of most important variables for each dataset. We introduce here a new quick variable selection method that generalizes the method based on the Info-Gain measure. It uses imprecise probabilities and the maximum entropy measure to select the most informative variables without setting a threshold. This new variable selection method, combined with the Naive Bayes classifier, improves the original method and provides a valuable tool for handling datasets with a very large number of features and a huge amount of data, where more complex methods are not computationally feasible. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
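
The Info-Gain ranking that this paper generalizes can be written as IG(C; X) = H(C) - H(C|X). The minimal sketch below implements that baseline ranking with plain empirical entropies on a toy categorical dataset; the imprecise-probability, maximum-entropy variant proposed in the paper is not implemented here.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Empirical Shannon entropy H of a sequence of discrete labels (in bits)."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(feature_values, class_labels):
    """IG(C; X) = H(C) - H(C|X) for one categorical feature X and class variable C."""
    h_c = entropy(class_labels)
    n = len(class_labels)
    h_c_given_x = 0.0
    for v in set(feature_values):
        idx = [i for i, x in enumerate(feature_values) if x == v]
        h_c_given_x += len(idx) / n * entropy([class_labels[i] for i in idx])
    return h_c - h_c_given_x

# Toy data: feature x1 is informative about y, x2 is noise.
y  = ["a", "a", "b", "b", "a", "b"]
x1 = [ 0,   0,   1,   1,   0,   1 ]
x2 = [ 0,   1,   0,   1,   0,   1 ]
print("IG(x1) =", info_gain(x1, y), " IG(x2) =", info_gain(x2, y))
```
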

Open Access Article On the Configurational Entropy of Nanoscale Solutions for More Accurate Surface and Bulk Nano-Thermodynamic Calculations
Entropy 2017, 19(6), 248; doi:10.3390/e19060248
Received: 10 April 2017 / Revised: 10 May 2017 / Accepted: 11 May 2017 / Published: 27 May 2017
PDF Full-text (1270 KB) | HTML Full-text | XML Full-text
Abstract
The configurational entropy of nanoscale solutions is discussed in this paper. As follows from the comparison of the exact Boltzmann equation and its Stirling approximation (widely used for both macroscale and nanoscale solutions today), the latter significantly overestimates the former for nano-phases and surface regions. On the other hand, the exact Boltzmann equation cannot be used for practical calculations, as it requires the calculation of the factorial of the number of atoms in a phase, and those factorials are such large numbers that they cannot be handled by commonly used computer codes. Therefore, a correction term is introduced in this paper to replace the Stirling approximation by the so-called “de Moivre approximation”. This new approximation is a continuous function of the number of atoms/molecules and of the composition of the nano-solution. The correction becomes negligible for phases larger than 15 nm in diameter. However, the correction term does not cause mathematical difficulties, even if it is used for macro-phases. Using this correction, future nano-thermodynamic calculations will become more precise. Equations are worked out for both integral and partial configurational entropies of multi-component nano-solutions. The equations are correct only for nano-solutions that contain at least a single atom of each component (below this concentration, there is no sense in making any calculations). Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)
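
The numerical point made in this abstract, namely that ln N! is tractable even when N! itself overflows, can be checked directly. The sketch below (a generic illustration, not the paper's de Moivre correction) evaluates the exact ln N! through the log-gamma function and compares it with the first-order Stirling approximation N ln N - N.

```python
import math

def ln_factorial_exact(n):
    """Exact ln(n!) via the log-gamma function; never forms the (astronomically large) factorial."""
    return math.lgamma(n + 1)

def ln_factorial_stirling(n):
    """First-order Stirling approximation ln(n!) ~ n ln n - n."""
    return n * math.log(n) - n

for n in (10, 100, 10_000, 10**6):
    exact = ln_factorial_exact(n)
    stirling = ln_factorial_stirling(n)
    print(f"N={n:>8}:  exact={exact:.3f}  Stirling={stirling:.3f}  "
          f"relative error={(stirling - exact) / exact:+.2e}")
```

The relative error of the Stirling term shrinks as N grows, which is consistent with the abstract's statement that the correction matters for nano-phases but becomes negligible for large phases.
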

Open Access Article Horton Ratios Link Self-Similarity with Maximum Entropy of Eco-Geomorphological Properties in Stream Networks
Entropy 2017, 19(6), 249; doi:10.3390/e19060249
Received: 17 March 2017 / Revised: 25 May 2017 / Accepted: 27 May 2017 / Published: 30 May 2017
PDF Full-text (1674 KB) | HTML Full-text | XML Full-text
Abstract
Stream networks are branched structures wherein water and energy move between land and atmosphere, modulated by evapotranspiration and its interaction with the gravitational dissipation of potential energy as runoff. These actions vary among climates characterized by Budyko theory, yet have not been integrated with Horton scaling, the ubiquitous pattern of eco-hydrological variation among Strahler streams that populate river basins. From Budyko theory, we reveal optimum entropy coincident with high biodiversity. Basins on either side of optimum respond in opposite ways to precipitation, which we evaluated for the classic Hubbard Brook experiment in New Hampshire and for the Whitewater River basin in Kansas. We demonstrate that Horton ratios are equivalent to Lagrange multipliers used in the extremum function leading to Shannon information entropy being maximal, subject to constraints. Properties of stream networks vary with constraints and inter-annual variation in water balance that challenge vegetation to match expected resource supply throughout the network. The entropy-Horton framework informs questions of biodiversity, resilience to perturbations in water supply, changes in potential evapotranspiration, and land use changes that move ecosystems away from optimal entropy with concomitant loss of productivity and biodiversity. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology)

Open Access Article Deriving Proper Uniform Priors for Regression Coefficients, Parts I, II, and III
Entropy 2017, 19(6), 250; doi:10.3390/e19060250
Received: 24 February 2017 / Revised: 7 April 2017 / Accepted: 27 April 2017 / Published: 30 May 2017
PDF Full-text (12969 KB) | HTML Full-text | XML Full-text
Abstract
It is a relatively well-known fact that in problems of Bayesian model selection, improper priors should, in general, be avoided. In this paper we derive and discuss a collection of four proper uniform priors which lie on an ascending scale of informativeness. It turns out that these priors lead us to evidences that are closely associated with the implied evidence of the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC). All the discussed evidences are then used in two small Monte Carlo studies, wherein for different sample sizes and noise levels the evidences are used to select between competing C-spline regression models. An outline of how to construct simple trivariate C-spline regression models is also given for illustrative purposes. As regards the length of this paper, only one half consists of theory and derivations; the other half consists of graphs and outputs of the two Monte Carlo studies. Full article
(This article belongs to the Special Issue Selected Papers from MaxEnt 2016)

Open Access Article Multiscale Entropy Analysis of the Differential RR Interval Time Series Signal and Its Application in Detecting Congestive Heart Failure
Entropy 2017, 19(6), 251; doi:10.3390/e19060251
Received: 21 March 2017 / Revised: 19 May 2017 / Accepted: 25 May 2017 / Published: 31 May 2017
PDF Full-text (2451 KB) | HTML Full-text | XML Full-text
Abstract
Cardiovascular systems essentially have multiscale control mechanisms. Multiscale entropy (MSE) analysis permits the dynamic characterization of cardiovascular time series for both short-term and long-term processes, and thus can be more illuminating. The traditional MSE analysis for heart rate variability (HRV) is performed on the original RR interval time series (named MSE_RR). In this study, we proposed an MSE analysis for the differential RR interval time series signal, named MSE_dRR. The motivation for using the differential RR interval time series signal is that this signal has a direct link with the inherent non-linear property of the electrical rhythm of the heart. The effectiveness of MSE_RR and MSE_dRR was tested and compared on the long-term MIT-Boston's Beth Israel Hospital (MIT-BIH) 54 normal sinus rhythm (NSR) and 29 congestive heart failure (CHF) RR interval recordings, aiming to explore which one is better for distinguishing the CHF patients from the NSR subjects. Four RR interval lengths were used for analysis (N = 500, N = 1000, N = 2000 and N = 5000). The results showed that MSE_RR did not report significant differences between the NSR and CHF groups at several scales for each RR segment length type (Scales 7, 8 and 10 for N = 500, Scales 3 and 10 for N = 1000, Scales 2 and 3 for both N = 2000 and N = 5000). However, the new MSE_dRR gave significant separation of the two groups for all RR segment length types except N = 500 at Scales 9 and 10. The area under curve (AUC) values from the receiver operating characteristic (ROC) curve were used to further quantify the performances. The mean AUC values of the new MSE_dRR over Scales 1–10 are 79.5%, 83.1%, 83.5% and 83.1% for N = 500, N = 1000, N = 2000 and N = 5000, respectively, whereas the mean AUC values of MSE_RR are only 68.6%, 69.8%, 69.6% and 67.1%, respectively. The five-fold cross-validation support vector machine (SVM) classifier reports the classification accuracy (Acc) of MSE_RR as 73.5%, 75.9% and 74.6% for N = 1000, N = 2000 and N = 5000, respectively, while for the new MSE_dRR analysis the accuracy was 85.5%, 85.6% and 85.6%. Different biosignal editing methods (direct deletion and interpolation) did not change the analytical results. In summary, this study demonstrated that, compared with MSE_RR, MSE_dRR shows better statistical stability and better discrimination ability for the NSR and CHF groups. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics II)
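
A minimal sketch of the coarse-grained sample-entropy computation underlying an MSE curve is given below, applied both to a synthetic RR series and to its differenced version diff(RR), as the abstract describes. The parameter choices m = 2 and r = 0.15 SD, and the synthetic RR series itself, are conventional assumptions rather than values taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series (naive O(n^2) implementation)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)
    n_templates = len(x) - m            # same template count for lengths m and m + 1

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(n_templates):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1      # exclude the self-match
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def mse_curve(x, max_scale=10, m=2, r=None):
    """Multiscale entropy: SampEn of the coarse-grained series at scales 1..max_scale."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)            # keep the tolerance fixed across scales, as is conventional
    values = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        values.append(sample_entropy(coarse, m=m, r=r))
    return values

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(2000)      # synthetic RR intervals (seconds)
print("MSE of RR      :", np.round(mse_curve(rr, max_scale=5), 3))
print("MSE of diff(RR):", np.round(mse_curve(np.diff(rr), max_scale=5), 3))
```
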

Open Access Article Off-Line Handwritten Signature Recognition by Wavelet Entropy and Neural Network
Entropy 2017, 19(6), 252; doi:10.3390/e19060252
Received: 7 March 2017 / Revised: 7 May 2017 / Accepted: 10 May 2017 / Published: 31 May 2017
PDF Full-text (2358 KB) | HTML Full-text | XML Full-text
Abstract
Handwritten signatures are widely utilized as a form of personal recognition. However, they have the unfortunate shortcoming of being easily abused by those who would fake the identification or intent of an individual which might be very harmful. Therefore, the need for an automatic signature recognition system is crucial. In this paper, a signature recognition approach based on a probabilistic neural network (PNN) and wavelet transform average framing entropy (AFE) is proposed. The system was tested with a wavelet packet (WP) entropy denoted as a WP entropy neural network system (WPENN) and with a discrete wavelet transform (DWT) entropy denoted as a DWT entropy neural network system (DWENN). Our investigation was conducted over several wavelet families and different entropy types. Identification tasks, as well as verification tasks, were investigated for a comprehensive signature system study. Several other methods used in the literature were considered for comparison. Two databases were used for algorithm testing. The best recognition rate result was achieved by WPENN whereby the threshold entropy reached 92%. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)

Open Access Article Ruling out Higher-Order Interference from Purity Principles
Entropy 2017, 19(6), 253; doi:10.3390/e19060253
Received: 21 April 2017 / Revised: 21 May 2017 / Accepted: 22 May 2017 / Published: 1 June 2017
PDF Full-text (383 KB) | HTML Full-text | XML Full-text
Abstract
As first noted by Rafael Sorkin, there is a limit to quantum interference. The interference pattern formed in a multi-slit experiment is a function of the interference patterns formed between pairs of slits; there are no genuinely new features resulting from considering three slits instead of two. Sorkin has introduced a hierarchy of mathematically conceivable higher-order interference behaviours, where classical theory lies at the first level of this hierarchy and quantum theory at the second. Informally, the order in this hierarchy corresponds to the number of slits on which the interference pattern has an irreducible dependence. Many authors have wondered why quantum interference is limited to the second level of this hierarchy. Does the existence of higher-order interference violate some natural physical principle that we believe should be fundamental? In the current work we show that such principles can be found which limit interference behaviour to second-order, or “quantum-like”, interference, but that do not restrict us to the entire quantum formalism. We work within the operational framework of generalised probabilistic theories, and prove that any theory satisfying Causality, Purity Preservation, Pure Sharpness, and Purification—four principles that formalise the fundamental character of purity in nature—exhibits at most second-order interference. Hence these theories are, at least conceptually, very “close” to quantum theory. Along the way we show that systems in such theories correspond to Euclidean Jordan algebras. Hence, they are self-dual and, moreover, multi-slit experiments in such theories are described by pure projectors. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
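
The second level of Sorkin's hierarchy mentioned above is usually expressed through the third-order interference term, which vanishes in quantum theory; writing P_S for the detection probability with the set S of slits open (standard notation, not taken from this paper),

```latex
\[
I_3 \;=\; P_{123} - P_{12} - P_{13} - P_{23} + P_{1} + P_{2} + P_{3} \;=\; 0,
\qquad\text{whereas classically already}\quad
I_2 \;=\; P_{12} - P_{1} - P_{2} \;=\; 0 .
\]
```

Higher-order terms I_4, I_5, ... are defined analogously by inclusion-exclusion, and "at most second-order interference" means that all I_k with k >= 3 vanish.
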
Open Access Article Generalized Beta Distribution of the Second Kind for Flood Frequency Analysis
Entropy 2017, 19(6), 254; doi:10.3390/e19060254
Received: 18 April 2017 / Revised: 18 May 2017 / Accepted: 26 May 2017 / Published: 12 June 2017
PDF Full-text (2984 KB) | HTML Full-text | XML Full-text
Abstract
Estimation of flood magnitude for a given recurrence interval T (T-year flood) at a specific location is needed for the design of hydraulic and civil infrastructure facilities. A key step in the estimation, or flood frequency analysis (FFA), is the selection of a suitable distribution. More than one distribution is often found to be adequate for FFA on a given watershed, and choosing the best one is often less than objective. In this study, the generalized beta distribution of the second kind (GB2) was introduced for FFA. The principle of maximum entropy (POME) method was proposed to estimate the GB2 parameters. The performance of the GB2 distribution was evaluated using flood data from gauging stations on the Colorado River, USA. Frequency estimates from the GB2 distribution were also compared with those of commonly used distributions. Also, the evolution of the frequency distribution along the stream from upstream to downstream was investigated. It is concluded that the GB2 is appealing for FFA, since it has four parameters and includes some well-known distributions as special cases. Results of the case study demonstrate that the parameters estimated by the POME method are reasonable. According to the RMSD and AIC values, the performance of the GB2 distribution is better than that of the distributions widely used in hydrology. When different distributions are used for FFA, significantly different design flood values are obtained. For a given return period, the design flood values of the downstream gauging stations are larger than that of the upstream gauging station. In addition, there is an evolution of the distribution: along the Yampa River, the distribution for FFA changes from the four-parameter GB2 distribution to the three-parameter Burr XII distribution. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
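
For reference, in the common McDonald parameterization (an assumption for illustration; the paper's notation may differ) the GB2 density is

```latex
\[
f(x;\,a,b,p,q) \;=\; \frac{|a|\,x^{ap-1}}{b^{ap}\,B(p,q)\,\bigl[1+(x/b)^{a}\bigr]^{p+q}},
\qquad x>0,
\]
```

where B(p, q) is the beta function. Setting p = 1 recovers the three-parameter Burr XII (Singh-Maddala) distribution, which is the nesting referred to at the end of the abstract.
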

Open Access Article Application of Entropy Generation to Improve Heat Transfer of Heat Sinks in Electric Machines
Entropy 2017, 19(6), 255; doi:10.3390/e19060255
Received: 15 April 2017 / Revised: 24 May 2017 / Accepted: 26 May 2017 / Published: 2 June 2017
PDF Full-text (9622 KB) | HTML Full-text | XML Full-text
Abstract
To intensify heat transfer within the complex three-dimensional flow field found in technical devices, all relevant transport phenomena have to be taken into account. In this work, a generic procedure based on a detailed analysis of entropy generation is developed to improve heat sinks found in electric machines. It enables a simultaneous consideration of temperature and velocity distributions, lumped into a single, scalar value, which can be used to directly identify regions with a high potential for heat transfer improvement. By analyzing the resulting entropy fields, it is demonstrated that the improved design obtained by this procedure is noticeably better, compared to those obtained with a classical analysis considering separately temperature and velocity distributions. This opens the door for an efficient, computer-based optimization of heat transfer in real applications. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
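
The single scalar field referred to in the abstract is typically the local volumetric entropy generation rate, whose standard single-phase form (a textbook expression, not taken from the paper) combines a heat-conduction and a viscous-dissipation contribution:

```latex
\[
\dot{S}_{\mathrm{gen}}'''
  \;=\;
  \frac{\lambda}{T^{2}}\,\bigl(\nabla T\bigr)^{2}
  \;+\;
  \frac{\mu}{T}\,\Phi ,
\]
```

with λ the thermal conductivity, μ the dynamic viscosity and Φ the viscous dissipation function; integrating this field over the heat-sink volume yields the single scalar value that the design procedure seeks to reduce.
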

Open Access Article The Exergy Loss Distribution and the Heat Transfer Capability in Subcritical Organic Rankine Cycle
Entropy 2017, 19(6), 256; doi:10.3390/e19060256
Received: 28 March 2017 / Revised: 18 May 2017 / Accepted: 31 May 2017 / Published: 3 June 2017
PDF Full-text (4171 KB) | HTML Full-text | XML Full-text
Abstract
Taking net power output as the optimization objective, the exergy loss distribution of the subcritical Organic Rankine Cycle (ORC) system by using R245fa as the working fluid was calculated under the optimal conditions. The influences of heat source temperature, the evaporator pinch point temperature difference, the expander isentropic efficiency and the cooling water temperature rise on the exergy loss distribution of subcritical ORC system are comprehensively discussed. It is found that there exists a critical value of expander isentropic efficiency and cooling water temperature rise, respectively, under certain conditions. The magnitude of critical value will affect the relative distribution of exergy loss in the expander, the evaporator and the condenser. The research results will help to better understand the characteristics of the exergy loss distribution in an ORC system. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article Time-Shift Multiscale Entropy Analysis of Physiological Signals
Entropy 2017, 19(6), 257; doi:10.3390/e19060257
Received: 17 April 2017 / Revised: 1 June 2017 / Accepted: 2 June 2017 / Published: 5 June 2017
PDF Full-text (878 KB) | HTML Full-text | XML Full-text
Abstract
Measures of predictability in physiological signals using entropy measures have been widely applied in many areas of research. Multiscale entropy expresses different levels of either approximate entropy or sample entropy by means of multiple scale factors for generating multiple time series, enabling the capture of more useful information than a scalar value produced by the two entropy methods. This paper presents the use of different time shifts on various intervals of a time series to discover different entropy patterns of the time series. Examples and experimental results using white noise, 1/f noise, photoplethysmography, and electromyography signals suggest the validity and better performance of the proposed time-shift multiscale entropy analysis of physiological signals compared with conventional multiscale entropy. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article Self-Organized Patterns Induced by Neimark-Sacker, Flip and Turing Bifurcations in a Discrete Predator-Prey Model with Leslie-Gower Functional Response
Entropy 2017, 19(6), 258; doi:10.3390/e19060258
Received: 19 April 2017 / Revised: 22 May 2017 / Accepted: 1 June 2017 / Published: 7 June 2017
PDF Full-text (3323 KB) | HTML Full-text | XML Full-text
Abstract
The formation of self-organized patterns in predator-prey models has been a very hot topic recently. The dynamics of these models, their bifurcations and pattern formation are so complex that further studies are urgently needed. In this research, we transformed a continuous predator-prey model with Leslie-Gower functional response into a discrete model. Fixed points and their stability were studied. Around the stable fixed point, bifurcation analyses, including flip, Neimark-Sacker and Turing bifurcations, were carried out and bifurcation conditions were obtained. Based on these bifurcation conditions, parameter values were selected to carry out numerical simulations of pattern formation. The simulation results showed that Neimark-Sacker bifurcation induced spots, spirals and transitional patterns from spots to spirals. Turing bifurcation induced labyrinth patterns and spirals coupled with mosaic patterns, while flip bifurcation induced many irregular complex patterns. Compared with former studies on the continuous predator-prey model with Leslie-Gower functional response, our research on the discrete model demonstrated more complex dynamics and varieties of self-organized patterns. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)

Open Access Article Structural Correlations in the Italian Overnight Money Market: An Analysis Based on Network Configuration Models
Entropy 2017, 19(6), 259; doi:10.3390/e19060259
Received: 21 March 2017 / Revised: 27 May 2017 / Accepted: 29 May 2017 / Published: 6 June 2017
PDF Full-text (3276 KB) | HTML Full-text | XML Full-text
Abstract
We study the structural correlations in the Italian overnight money market over the period 1999–2010. We show that the structural correlations vary across different versions of the network. Moreover, we employ different configuration models and examine whether higher-level characteristics of the observed network can be statistically reconstructed by maximizing the entropy of a randomized ensemble of networks restricted only by the lower-order features of the observed network. We find that often many of the high order correlations in the observed network can be considered emergent from the information embedded in the degree sequence in the binary version and in both the degree and strength sequences in the weighted version. However, this information is not enough to allow the models to account for all the patterns in the observed higher order structural correlations. In particular, one of the main features of the observed network that remains unexplained is the abnormally high level of weighted clustering in the years preceding the crisis, i.e., the huge increase in various indirect exposures generated via more intensive interbank credit links. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)

Open Access Article Information Distances versus Entropy Metric
Entropy 2017, 19(6), 260; doi:10.3390/e19060260
Received: 18 April 2017 / Revised: 2 June 2017 / Accepted: 2 June 2017 / Published: 7 June 2017
PDF Full-text (222 KB) | HTML Full-text | XML Full-text
Abstract
Information distance has become an important tool in a wide variety of applications. Various types of information distance have been proposed over the years. These information distance measures are different from the entropy metric, as the former are based on Kolmogorov complexity and the latter on Shannon entropy. However, for any computable probability distribution, up to a constant, the expected value of Kolmogorov complexity equals the Shannon entropy. We study the analogous relationship between entropy and information distance. We also study the relationship between entropy and the normalized versions of information distances. Full article
(This article belongs to the Section Information Theory)
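
The relationship alluded to here is the standard link between expected Kolmogorov complexity and Shannon entropy, and the best-known normalized information distance has the max form shown below (standard definitions, not reproduced from the paper):

```latex
\[
0 \;\le\; \sum_x P(x)\,K(x) \;-\; H(P) \;\le\; c_P
\quad\text{for any computable } P,
\qquad
\mathrm{NID}(x,y) \;=\; \frac{\max\{K(x\mid y),\,K(y\mid x)\}}{\max\{K(x),\,K(y)\}},
\]
```

where the constant c_P depends only on (the complexity of) the distribution P, not on the individual outcomes x.
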
Open Access Article Spurious Results of Fluctuation Analysis Techniques in Magnitude and Sign Correlations
Entropy 2017, 19(6), 261; doi:10.3390/e19060261
Received: 10 May 2017 / Revised: 31 May 2017 / Accepted: 2 June 2017 / Published: 7 June 2017
PDF Full-text (385 KB) | HTML Full-text | XML Full-text
Abstract
Fluctuation Analysis (FA) and specially Detrended Fluctuation Analysis (DFA) are techniques commonly used to quantify correlations and scaling properties of complex time series such as the observable outputs of great variety of dynamical systems, from Economics to Physiology. Often, such correlated time series are analyzed using the magnitude and sign decomposition, i.e., by using FA or DFA to study separately the sign and the magnitude series obtained from the original signal. This approach allows for distinguishing between systems with the same linear correlations but different dynamical properties. However, here we present analytical and numerical evidence showing that FA and DFA can lead to spurious results when applied to sign and magnitude series obtained from power-law correlated time series of fractional Gaussian noise (fGn) type. Specifically, we show that: (i) the autocorrelation functions of the sign and magnitude series obtained from fGns are always power-laws; However, (ii) when the sign series presents power-law anticorrelations, FA and DFA wrongly interpret the sign series as purely uncorrelated; Similarly, (iii) when analyzing power-law correlated magnitude (or volatility) series, FA and DFA fail to retrieve the real scaling properties, and identify the magnitude series as purely uncorrelated noise; Finally, (iv) using the relationship between FA and DFA and the autocorrelation function of the time series, we explain analytically the reason for the FA and DFA spurious results, which turns out to be an intrinsic property of both techniques when applied to sign and magnitude series. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
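
A compact way to reproduce the kind of experiment described above is to generate a correlated series, split its increments into sign and magnitude components, and run a basic DFA on each. The sketch below implements plain first-order DFA (a generic implementation; the paper's exact fractional Gaussian noise generation and parameter choices are not reproduced, and the AR(1) surrogate is only a stand-in for fGn).

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """First-order detrended fluctuation analysis; returns the scaling exponent alpha."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                    # integrated, mean-subtracted series
    log_n, log_f = [], []
    for n in scales:
        n_windows = len(profile) // n
        rms = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrending per window
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        log_n.append(np.log(n))
        log_f.append(np.log(np.mean(rms)))
    return np.polyfit(log_n, log_f, 1)[0]

rng = np.random.default_rng(7)
# AR(1) surrogate for a correlated signal (stand-in for fractional Gaussian noise).
eps = rng.standard_normal(2**15)
x = np.empty_like(eps)
x[0] = eps[0]
for i in range(1, len(eps)):
    x[i] = 0.7 * x[i - 1] + eps[i]

increments = np.diff(x)
sign_series = np.sign(increments)
magnitude_series = np.abs(increments)
for name, series in [("signal", x), ("sign", sign_series), ("magnitude", magnitude_series)]:
    print(f"alpha_{name} = {dfa_exponent(series):.2f}")
```
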

Open Access Article Projection to Mixture Families and Rate-Distortion Bounds with Power Distortion Measures
Entropy 2017, 19(6), 262; doi:10.3390/e19060262
Received: 4 May 2017 / Revised: 22 May 2017 / Accepted: 5 June 2017 / Published: 7 June 2017
PDF Full-text (298 KB) | HTML Full-text | XML Full-text
Abstract
The explicit form of the rate-distortion function has rarely been obtained, except for a few cases where the Shannon lower bound coincides with the rate-distortion function for the entire range of positive rates. From an information geometrical point of view, the evaluation of the rate-distortion function is achieved by a projection to the mixture family defined by the distortion measure. In this paper, we consider the β-th power distortion measure, and prove that the β-generalized Gaussian distribution is the only source that can make the Shannon lower bound tight at the minimum distortion level at zero rate. We demonstrate that the tightness of the Shannon lower bound for β = 1 (Laplacian source) and β = 2 (Gaussian source) yields upper bounds to the rate-distortion function of power distortion measures with a different power. These bounds evaluate from above the projection of the source distribution to the mixture family of the generalized Gaussian models. Applying similar arguments to ϵ-insensitive distortion measures, we consider the tightness of the Shannon lower bound and derive an upper bound to the distortion-rate function which is accurate at low rates. Full article
(This article belongs to the Special Issue Information Geometry II)
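
For orientation, the Shannon lower bound mentioned in the abstract takes the following well-known forms in the two special cases highlighted there (standard results, stated in nats):

```latex
\[
\beta = 2\ \text{(squared-error distortion)}:\quad
R(D) \;\ge\; h(X) - \tfrac{1}{2}\log\bigl(2\pi e D\bigr),
\qquad
\beta = 1\ \text{(absolute-error distortion)}:\quad
R(D) \;\ge\; h(X) - \log\bigl(2 e D\bigr),
\]
```

with equality over the whole positive-rate range when the source is itself Gaussian (β = 2) or Laplacian (β = 1); this is the tightness property that the paper exploits to build upper bounds for mismatched powers.
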

Open Access Article Exergy Dynamics of Systems in Thermal or Concentration Non-Equilibrium
Entropy 2017, 19(6), 263; doi:10.3390/e19060263
Received: 23 March 2017 / Revised: 29 May 2017 / Accepted: 2 June 2017 / Published: 8 June 2017
PDF Full-text (3288 KB) | HTML Full-text | XML Full-text
Abstract
The paper addresses the problem of the existence and quantification of the exergy of non-equilibrium systems. Assuming that both energy and exergy are a priori concepts, the Gibbs “available energy” A is calculated for arbitrary temperature or concentration distributions across the body, with an accuracy that depends only on the information one has of the initial distribution. It is shown that A exponentially relaxes to its equilibrium value, and it is then demonstrated that its value is different from that of the non-equilibrium exergy, the difference depending on the imposed boundary conditions on the system and thus the two quantities are shown to be incommensurable. It is finally argued that all iso-energetic non-equilibrium states can be ranked in terms of their non-equilibrium exergy content, and that each point of the Gibbs plane corresponds therefore to a set of possible initial distributions, each one with its own exergy-decay history. The non-equilibrium exergy is always larger than its equilibrium counterpart and constitutes the “real” total exergy content of the system, i.e., the real maximum work extractable from the initial system. A systematic application of this paradigm may be beneficial for meaningful future applications in the fields of engineering and natural science. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article Joint Characteristic Timescales and Entropy Production Analyses for Model Reduction of Combustion Systems
Entropy 2017, 19(6), 264; doi:10.3390/e19060264
Received: 15 April 2017 / Revised: 4 June 2017 / Accepted: 6 June 2017 / Published: 9 June 2017
PDF Full-text (2722 KB) | HTML Full-text | XML Full-text
Abstract
The reduction of chemical kinetics describing combustion processes remains one of the major topics in combustion theory and its applications. Problems concerning the estimation of a reaction mechanism's real dimension remain unsolved, this being a critical point in the development of reduction models. In this study, we suggest a combination of local timescale and entropy production analyses to cope with this problem. In particular, the skeletal mechanism framework is the focus of the study, as a practical and most straightforward implementation strategy for reduced mechanisms. Hydrogen and methane/dimethyl ether reaction mechanisms are considered for illustration and validation purposes. Two skeletal mechanism versions were obtained for the methane/dimethyl ether combustion system by varying the tolerance used to identify important reactions in the characteristic timescale analysis of the system. Comparisons of ignition delay times and species profiles calculated with the detailed and the reduced models are presented. The results of the application clearly show the potential of the suggested approach to be implemented automatically for the reduction of large chemical kinetic models. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Open Access Article Modeling Multi-Event Non-Point Source Pollution in a Data-Scarce Catchment Using ANN and Entropy Analysis
Entropy 2017, 19(6), 265; doi:10.3390/e19060265
Received: 5 May 2017 / Revised: 7 June 2017 / Accepted: 7 June 2017 / Published: 19 June 2017
PDF Full-text (1708 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Event-based runoff–pollutant relationships have been the key for water quality management, but the scarcity of measured data results in poor model performance, especially for multiple rainfall events. In this study, a new framework was proposed for event-based non-point source (NPS) prediction and evaluation. The artificial neural network (ANN) was used to extend the runoff–pollutant relationship from complete data events to other data-scarce events. The interpolation method was then used to solve the problem of tail deviation in the simulated pollutographs. In addition, the entropy method was utilized to train the ANN for comprehensive evaluations. A case study was performed in the Three Gorges Reservoir Region, China. Results showed that the ANN performed well in the NPS simulation, especially for light rainfall events, and the phosphorus predictions were always more accurate than the nitrogen predictions under scarce data conditions. In addition, peak pollutant data scarcity had a significant impact on the model performance. Furthermore, these traditional indicators would lead to certain information loss during the model evaluation, but the entropy weighting method could provide a more accurate model evaluation. These results would be valuable for monitoring schemes and the quantitation of event-based NPS pollution, especially in data-poor catchments. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)

Open Access Article Model-Based Approaches to Active Perception and Control
Entropy 2017, 19(6), 266; doi:10.3390/e19060266
Received: 3 May 2017 / Revised: 6 June 2017 / Accepted: 7 June 2017 / Published: 9 June 2017
PDF Full-text (268 KB) | HTML Full-text | XML Full-text
Abstract
There is an on-going debate in cognitive (neuro) science and philosophy between classical cognitive theory and embodied, embedded, extended, and enactive (“4-Es”) views of cognition—a family of theories that emphasize the role of the body in cognition and the importance of brain-body-environment interaction over and above internal representation. This debate touches foundational issues, such as whether the brain internally represents the external environment, and “infers” or “computes” something. Here we focus on two (4-Es-based) criticisms to traditional cognitive theories—to the notions of passive perception and of serial information processing—and discuss alternative ways to address them, by appealing to frameworks that use, or do not use, notions of internal modelling and inference. Our analysis illustrates that: an explicitly inferential framework can capture some key aspects of embodied and enactive theories of cognition; some claims of computational and dynamical theories can be reconciled rather than seen as alternative explanations of cognitive phenomena; and some aspects of cognitive processing (e.g., detached cognitive operations, such as planning and imagination) that are sometimes puzzling to explain from enactive and non-representational perspectives can, instead, be captured nicely from the perspective that internal generative models and predictive processing mediate adaptive control loops. Full article
(This article belongs to the Special Issue Information-Processing and Embodied, Embedded, Enactive Cognition)
Open AccessArticle Kullback–Leibler Divergence and Mutual Information of Partitions in Product MV Algebras
Entropy 2017, 19(6), 267; doi:10.3390/e19060267
Received: 11 May 2017 / Revised: 5 June 2017 / Accepted: 7 June 2017 / Published: 10 June 2017
PDF Full-text (325 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of the paper is to introduce, using the known results concerning the entropy in product MV algebras, the concepts of mutual information and Kullback–Leibler divergence for the case of product MV algebras and examine algebraic properties of the proposed measures. In
[...] Read more.
The purpose of the paper is to introduce, using the known results concerning the entropy in product MV algebras, the concepts of mutual information and Kullback–Leibler divergence for the case of product MV algebras and examine algebraic properties of the proposed measures. In particular, a convexity of Kullback–Leibler divergence with respect to states in product MV algebras is proved, and chain rules for mutual information and Kullback–Leibler divergence are established. In addition, the data processing inequality for conditionally independent partitions in product MV algebras is proved. Full article
(This article belongs to the Section Information Theory)
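For orientation only, the classical probabilistic counterparts of the quantities studied here are recalled below in standard notation; the MV-algebraic definitions of the paper generalize these and are not reproduced.

```latex
D(P\|Q) = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)},
\qquad
I(X;Y) = D\bigl(P_{XY} \,\big\|\, P_{X}\otimes P_{Y}\bigr),
\\[4pt]
I(X;Y,Z) = I(X;Y) + I(X;Z\mid Y),
\qquad
D(P_{XY}\|Q_{XY}) = D(P_{X}\|Q_{X}) + D\bigl(P_{Y\mid X}\,\big\|\,Q_{Y\mid X}\mid P_{X}\bigr).
```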
Open AccessArticle Information Geometry of Non-Equilibrium Processes in a Bistable System with a Cubic Damping
Entropy 2017, 19(6), 268; doi:10.3390/e19060268
Received: 29 April 2017 / Revised: 2 June 2017 / Accepted: 6 June 2017 / Published: 11 June 2017
PDF Full-text (487 KB) | HTML Full-text | XML Full-text
Abstract
A probabilistic description is essential for understanding the dynamics of stochastic systems far from equilibrium, given uncertainty inherent in the systems. To compare different Probability Density Functions (PDFs), it is extremely useful to quantify the difference among different PDFs by assigning an appropriate
[...] Read more.
A probabilistic description is essential for understanding the dynamics of stochastic systems far from equilibrium, given uncertainty inherent in the systems. To compare different Probability Density Functions (PDFs), it is extremely useful to quantify the difference among different PDFs by assigning an appropriate metric to probability such that the distance increases with the difference between the two PDFs. This metric structure then provides a key link between stochastic systems and information geometry. For a non-equilibrium process, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory of the system quantifies the total number of different states that the system undergoes in time and is called the information length. By using this concept, we investigate the information geometry of non-equilibrium processes involved in disorder-order transitions between the critical and subcritical states in a bistable system. Specifically, we compute time-dependent PDFs, information length, the rate of change in information length, entropy change and Fisher information in disorder-to-order and order-to-disorder transitions and discuss similarities and disparities between the two transitions. In particular, we show that the total information length in order-to-disorder transition is much larger than that in disorder-to-order transition and elucidate the link to the drastically different evolution of entropy in both transitions. We also provide the comparison of the results with those in the case of the transition between the subcritical and supercritical states and discuss implications for fitness. Full article
(This article belongs to the Special Issue Information Geometry II)
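A commonly used definition of the information length of a time-dependent PDF p(x, t), consistent with the description in the abstract (the paper's normalization may differ slightly), is:

```latex
\mathcal{E}(t) = \int \frac{1}{p(x,t)}\left[\frac{\partial p(x,t)}{\partial t}\right]^{2} dx,
\qquad
\mathcal{L}(t) = \int_{0}^{t}\sqrt{\mathcal{E}(t_{1})}\;dt_{1},
```

where the square root of E(t) acts as an information velocity and L(t) counts the number of statistically distinguishable states the system passes through up to time t.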
Figures

Figure 1

Open AccessArticle A Novel Distance Metric: Generalized Relative Entropy
Entropy 2017, 19(6), 269; doi:10.3390/e19060269
Received: 26 May 2017 / Revised: 7 June 2017 / Accepted: 7 June 2017 / Published: 13 June 2017
PDF Full-text (1133 KB) | HTML Full-text | XML Full-text
Abstract
Information entropy and its extension, which are important generalizations of entropy, are currently applied to many research domains. In this paper, a novel generalized relative entropy is constructed to avoid some defects of traditional relative entropy. We present the structure of generalized relative
[...] Read more.
Information entropy and its extension, which are important generalizations of entropy, are currently applied to many research domains. In this paper, a novel generalized relative entropy is constructed to avoid some defects of traditional relative entropy. We present the structure of generalized relative entropy after discussing the defects of relative entropy. Moreover, some properties of the proposed generalized relative entropy are presented and proved: it has a finite range and is a finite distance metric. Finally, we predict nucleosome positioning in fly and yeast based on generalized relative entropy and relative entropy, respectively. The experimental results show that the properties of generalized relative entropy are better than those of relative entropy. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
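The defects alluded to are easiest to see in the classical relative entropy, recalled below; the paper's generalized form itself is not reproduced here.

```latex
D(P\|Q) = \sum_{i} p_{i}\,\log\frac{p_{i}}{q_{i}} \;\ge\; 0,
```

which is not symmetric, violates the triangle inequality, and becomes infinite whenever some q_i = 0 while p_i > 0, so it has neither a finite range nor the properties of a distance metric.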
Figures

Figure 1

Open AccessArticle An Information-Spectrum Approach to the Capacity Region of the Interference Channel
Entropy 2017, 19(6), 270; doi:10.3390/e19060270
Received: 8 April 2017 / Revised: 2 June 2017 / Accepted: 10 June 2017 / Published: 13 June 2017
PDF Full-text (607 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a general formula for the capacity region of a general interference channel with two pairs of users is derived, which reveals that the capacity region is the union of a family of rectangles. In the region, each rectangle is determined
[...] Read more.
In this paper, a general formula for the capacity region of a general interference channel with two pairs of users is derived, which reveals that the capacity region is the union of a family of rectangles. In the region, each rectangle is determined by a pair of spectral inf-mutual information rates. The presented formula provides useful insights into interference channels in spite of the difficulty of computing it. Specifically, when the inputs are discrete, ergodic Markov processes and the channel is stationary memoryless, the formula can be evaluated by the BCJR (Bahl-Cocke-Jelinek-Raviv) algorithm. The formula also suggests that taking the structure of the interference processes into account yields tighter inner bounds than the simplest one (obtained by treating the interference as noise). This is verified numerically by calculating the mutual information rates for Gaussian interference channels with embedded convolutional codes. Moreover, we present a coding scheme to approach the theoretically achievable rate pairs. Numerical results show that decoding gains can be achieved by considering the structure of the interference. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
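Schematically, and using only what the abstract states, the derived region has the union-of-rectangles form below; the precise definition of the spectral inf-mutual information rate and the class of admissible input processes are given in the paper.

```latex
\mathcal{C} \;=\; \bigcup_{(\mathbf{X}_{1},\mathbf{X}_{2})}
\Bigl\{(R_{1},R_{2}) : \;
0 \le R_{1} \le \underline{I}(\mathbf{X}_{1};\mathbf{Y}_{1}),\;\;
0 \le R_{2} \le \underline{I}(\mathbf{X}_{2};\mathbf{Y}_{2}) \Bigr\},
```

where each pair of input processes contributes one rectangle bounded by its pair of spectral inf-mutual information rates.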
Figures

Figure 1

Open AccessArticle Peierls–Bogolyubov’s Inequality for Deformed Exponentials
Entropy 2017, 19(6), 271; doi:10.3390/e19060271
Received: 14 April 2017 / Revised: 2 June 2017 / Accepted: 5 June 2017 / Published: 12 June 2017
PDF Full-text (243 KB) | HTML Full-text | XML Full-text
Abstract
We study the convexity or concavity of certain trace functions for the deformed logarithmic and exponential functions, and in this way obtain new trace inequalities for deformed exponentials that may be considered as generalizations of Peierls–Bogolyubov’s inequality. We use these results to improve
[...] Read more.
We study the convexity or concavity of certain trace functions for the deformed logarithmic and exponential functions, and in this way obtain new trace inequalities for deformed exponentials that may be considered as generalizations of Peierls–Bogolyubov’s inequality. We use these results to improve previously-known lower bounds for the Tsallis relative entropy. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
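For context, the classical Peierls–Bogolyubov inequality and one common definition of the deformed (q-)exponential are shown below; the trace inequalities of the paper generalize the former, and the deformation convention used there may differ.

```latex
\log\frac{\operatorname{Tr} e^{A+B}}{\operatorname{Tr} e^{A}}
\;\ge\; \frac{\operatorname{Tr}\!\bigl(B\,e^{A}\bigr)}{\operatorname{Tr} e^{A}}
\quad\text{for self-adjoint } A,B,
\qquad
\exp_{q}(x) = \bigl[\,1+(1-q)\,x\,\bigr]_{+}^{1/(1-q)} \;\longrightarrow\; e^{x} \quad (q\to 1).
```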
Open AccessArticle Game-Theoretic Optimization of Bilateral Contract Transaction for Generation Companies and Large Consumers with Incomplete Information
Entropy 2017, 19(6), 272; doi:10.3390/e19060272
Received: 1 April 2017 / Revised: 26 May 2017 / Accepted: 10 June 2017 / Published: 13 June 2017
PDF Full-text (478 KB) | HTML Full-text | XML Full-text
Abstract
Bilateral contract transaction among generation companies and large consumers is attracting much attention in the electricity market. A large consumer can purchase energy from generation companies directly under a bilateral contract, which can guarantee the economic interests of both sides. However, in pursuit
[...] Read more.
Bilateral contract transaction among generation companies and large consumers is attracting much attention in the electricity market. A large consumer can purchase energy from generation companies directly under a bilateral contract, which can guarantee the economic interests of both sides. However, in pursuit of greater profit, competition in the transaction arises not only between the company side and the consumer side, but also among the generation companies themselves. In order to maximize its profit, each company needs to optimize its bidding price to attract large consumers. In this paper, a master–slave game is proposed to describe the competition between generation companies and large consumers. Furthermore, a Bayesian game approach is formulated to describe the competition among generation companies under incomplete information. In the model, the goal of each company is to determine its optimal bidding price via the Bayesian game; based on the bidding prices provided by the companies and the predicted spot price, large consumers then decide their own purchase strategies to minimize their costs. Simulation results show that each participant in the transaction can benefit from the proposed game. Full article
(This article belongs to the Section Information Theory)
Figures

Figure 1

Open AccessArticle Multiscale Information Theory and the Marginal Utility of Information
Entropy 2017, 19(6), 273; doi:10.3390/e19060273
Received: 28 February 2017 / Revised: 26 May 2017 / Accepted: 9 June 2017 / Published: 13 June 2017
PDF Full-text (709 KB) | HTML Full-text | XML Full-text
Abstract
Complex systems display behavior at a range of scales. Large-scale behaviors can emerge from the correlated or dependent behavior of individual small-scale components. To capture this observation in a rigorous and general way, we introduce a formalism for multiscale information theory. Dependent behavior
[...] Read more.
Complex systems display behavior at a range of scales. Large-scale behaviors can emerge from the correlated or dependent behavior of individual small-scale components. To capture this observation in a rigorous and general way, we introduce a formalism for multiscale information theory. Dependent behavior among system components results in overlapping or shared information. A system’s structure is revealed in the sharing of information across the system’s dependencies, each of which has an associated scale. Counting information according to its scale yields the quantity of scale-weighted information, which is conserved when a system is reorganized. In the interest of flexibility we allow information to be quantified using any function that satisfies two basic axioms. Shannon information and vector space dimension are examples. We discuss two quantitative indices that summarize system structure: an existing index, the complexity profile, and a new index, the marginal utility of information. Using simple examples, we show how these indices capture the multiscale structure of complex systems in a quantitative way. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
Figures

Figure 1

Open AccessArticle Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint
Entropy 2017, 19(6), 274; doi:10.3390/e19060274
Received: 2 May 2017 / Revised: 3 June 2017 / Accepted: 9 June 2017 / Published: 13 June 2017
PDF Full-text (1954 KB) | HTML Full-text | XML Full-text
Abstract
This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy,
[...] Read more.
This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using this prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Figures

Figure 1

Open AccessArticle The Entropy of Words—Learnability and Expressivity across More than 1000 Languages
Entropy 2017, 19(6), 275; doi:10.3390/e19060275
Received: 26 April 2017 / Revised: 2 June 2017 / Accepted: 6 June 2017 / Published: 14 June 2017
PDF Full-text (6804 KB) | HTML Full-text | XML Full-text
Abstract
The choice associated with words is a fundamental property of natural languages. It lies at the heart of quantitative linguistics, computational linguistics and language sciences more generally. Information theory gives us tools at hand to measure precisely the average amount of choice associated
[...] Read more.
The choice associated with words is a fundamental property of natural languages. It lies at the heart of quantitative linguistics, computational linguistics and language sciences more generally. Information theory gives us tools at hand to measure precisely the average amount of choice associated with words: the word entropy. Here, we use three parallel corpora, encompassing ca. 450 million words in 1916 texts and 1259 languages, to tackle some of the major conceptual and practical problems of word entropy estimation: dependence on text size, register, style and estimation method, as well as non-independence of words in co-text. We present two main findings: Firstly, word entropies display relatively narrow, unimodal distributions. There is no language in our sample with a unigram entropy of less than six bits/word. We argue that this is in line with information-theoretic models of communication. Languages are held in a narrow range by two fundamental pressures: word learnability and word expressivity, with a potential bias towards expressivity. Secondly, there is a strong linear relationship between unigram entropies and entropy rates. The entropy difference between words with and without co-textual information is narrowly distributed around ca. three bits/word. In other words, knowing the preceding text reduces the uncertainty of words by roughly the same amount across languages of the world. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
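A minimal plug-in (maximum-likelihood) estimate of the unigram word entropy in bits per word is sketched below; the estimators used in the paper additionally correct for the text-size dependence that this toy version ignores.

```python
import math
from collections import Counter

def unigram_entropy_bits(tokens):
    """Plug-in estimate of the unigram word entropy H = -sum_w p(w) log2 p(w)."""
    counts = Counter(tokens)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = "the cat sat on the mat and the dog sat on the rug"
print(round(unigram_entropy_bits(text.split()), 3))
```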
Figures

Open AccessArticle Noise Enhancement for Weighted Sum of Type I and II Error Probabilities with Constraints
Entropy 2017, 19(6), 276; doi:10.3390/e19060276
Received: 12 April 2017 / Revised: 8 June 2017 / Accepted: 12 June 2017 / Published: 14 June 2017
PDF Full-text (1937 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the noise-enhanced detection problem is investigated for the binary hypothesis-testing. The optimal additive noise is determined according to a criterion proposed by DeGroot and Schervish (2011), which aims to minimize the weighted sum of type I and II error probabilities
[...] Read more.
In this paper, the noise-enhanced detection problem is investigated for the binary hypothesis-testing. The optimal additive noise is determined according to a criterion proposed by DeGroot and Schervish (2011), which aims to minimize the weighted sum of type I and II error probabilities under constraints on type I and II error probabilities. Based on a generic composite hypothesis-testing formulation, the optimal additive noise is obtained. Sufficient conditions are also derived to determine whether the use of additive noise can improve the detectability of a given detector. In addition, further results are obtained by exploiting the specific structure of the binary hypothesis test, and an algorithm is developed for finding the corresponding optimal noise. Finally, numerical examples are given to verify the theoretical results, and proofs of the main theorems are presented in the Appendix. Full article
(This article belongs to the Section Information Theory)
Figures

Figure 1

Open AccessArticle Weak Fault Diagnosis of Wind Turbine Gearboxes Based on MED-LMD
Entropy 2017, 19(6), 277; doi:10.3390/e19060277
Received: 12 April 2017 / Revised: 2 June 2017 / Accepted: 7 June 2017 / Published: 15 June 2017
PDF Full-text (2861 KB) | HTML Full-text | XML Full-text
Abstract
In view of the problem that the fault signal of the rolling bearing is weak and the fault feature is difficult to extract in a strong noise environment, a method based on minimum entropy deconvolution (MED) and local mean decomposition (LMD) is proposed
[...] Read more.
In view of the problem that the fault signal of the rolling bearing is weak and the fault feature is difficult to extract in a strong noise environment, a method based on minimum entropy deconvolution (MED) and local mean decomposition (LMD) is proposed to extract the weak fault features of the rolling bearing. Through the analysis of a simulation signal, we find that LMD alone has many limitations for the feature extraction of weak signals under strong background noise. In order to eliminate the noise interference and extract the characteristics of the weak fault, MED is employed as a pre-filter to remove noise. This method is applied to the weak fault feature extraction of rolling bearings: MED is used to denoise the signals from the wind turbine gearbox test bench under strong background noise, the LMD method is then used to decompose the denoised signals into several product functions (PFs), and finally the PF components with strong correlation are analyzed by a cyclic autocorrelation function. The finding is that the failure of the wind power gearbox arises from the micro-bending of the high-speed shaft and the pitting of the #10 bearing outer race at the output end of the high-speed shaft. The method is compared with LMD alone, which shows its effectiveness. This paper provides a new method for the extraction of multiple faults and weak features in strong background noise. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Figures

Figure 1

Open AccessArticle Entropy Generation Rates through the Dissipation of Ordered Regions in Helium Boundary-Layer Flows
Entropy 2017, 19(6), 278; doi:10.3390/e19060278
Received: 23 March 2017 / Revised: 9 June 2017 / Accepted: 12 June 2017 / Published: 15 June 2017
PDF Full-text (5791 KB) | HTML Full-text | XML Full-text
Abstract
The results of the computation of entropy generation rates through the dissipation of ordered regions within selected helium boundary layer flows are presented. Entropy generation rates in helium boundary layer flows for five cases of increasing temperature and pressure are considered. The basic
[...] Read more.
The results of the computation of entropy generation rates through the dissipation of ordered regions within selected helium boundary layer flows are presented. Entropy generation rates in helium boundary layer flows for five cases of increasing temperature and pressure are considered. The basic format of a turbulent spot is used as the flow model. Statistical processing of the time-dependent series solutions of the nonlinear, coupled Lorenz-type differential equations for the spectral velocity wave components in the three-dimensional boundary layer configuration yields the local volumetric entropy generation rates. Extension of the computational method to the transition from laminar to fully turbulent flow is discussed. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
Figures

Figure 1

Open AccessArticle Approximation of Stochastic Quasi-Periodic Responses of Limit Cycles in Non-Equilibrium Systems under Periodic Excitations and Weak Fluctuations
Entropy 2017, 19(6), 280; doi:10.3390/e19060280
Received: 17 May 2017 / Revised: 14 June 2017 / Accepted: 14 June 2017 / Published: 15 June 2017
PDF Full-text (5968 KB) | HTML Full-text | XML Full-text
Abstract
A semi-analytical method is proposed to calculate stochastic quasi-periodic responses of limit cycles in non-equilibrium dynamical systems excited by periodic forces and weak random fluctuations, approximately. First, a kind of 1/N-stroboscopic map is introduced to discretize the quasi-periodic torus into closed
[...] Read more.
A semi-analytical method is proposed to calculate stochastic quasi-periodic responses of limit cycles in non-equilibrium dynamical systems excited by periodic forces and weak random fluctuations, approximately. First, a kind of 1/N-stroboscopic map is introduced to discretize the quasi-periodic torus into closed curves, which are then approximated by periodic points. Using a stochastic sensitivity function of discrete time systems, the transverse dispersion of these circles can be quantified. Furthermore, combined with the longitudinal distribution of the circles, the probability density function of these closed curves in stroboscopic sections can be determined. The validity of this approach is shown through a van der Pol oscillator and Brusselator. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Figures

Figure 1

Open AccessArticle An Enhanced Set-Membership PNLMS Algorithm with a Correntropy Induced Metric Constraint for Acoustic Channel Estimation
Entropy 2017, 19(6), 281; doi:10.3390/e19060281
Received: 30 April 2017 / Revised: 11 June 2017 / Accepted: 13 June 2017 / Published: 15 June 2017
PDF Full-text (2740 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used for constructing a new cost function within the kernel
[...] Read more.
In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used for constructing a new cost function within the kernel framework. The proposed CIM penalized SM-PNLMS (CIMSM-PNLMS) algorithm is derived and analyzed in detail. A desired zero attraction term is put forward in the updating equation of the proposed CIMSM-PNLMS algorithm to force the inactive coefficients to zero. The performance of the proposed CIMSM-PNLMS algorithm is investigated for estimating an underwater communication channel and an echo channel. The obtained results demonstrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
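The correntropy induced metric used as the sparsity penalty is commonly defined with a Gaussian kernel as below; the kernel width and normalization constants are implementation choices and may differ from those in the paper.

```latex
\operatorname{CIM}(\mathbf{a},\mathbf{b})
= \left[\kappa_{\sigma}(0) - \frac{1}{N}\sum_{i=1}^{N}\kappa_{\sigma}(a_{i}-b_{i})\right]^{1/2},
\qquad
\kappa_{\sigma}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right),
```

so that CIM(w, 0) behaves as a smooth approximation of the l0 norm and, used as a penalty, attracts the inactive filter coefficients toward zero.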
Figures

Figure 1

Open AccessFeature PaperArticle Correntropy-Based Pulse Rate Variability Analysis in Children with Sleep Disordered Breathing
Entropy 2017, 19(6), 282; doi:10.3390/e19060282
Received: 6 May 2017 / Revised: 7 June 2017 / Accepted: 9 June 2017 / Published: 16 June 2017
PDF Full-text (460 KB) | HTML Full-text | XML Full-text
Abstract
Pulse rate variability (PRV), an alternative measure of heart rate variability (HRV), is altered during obstructive sleep apnea. Correntropy spectral density (CSD) is a novel spectral analysis that includes nonlinear information. We recruited 160 children and recorded SpO2 and photoplethysmography (PPG), alongside
[...] Read more.
Pulse rate variability (PRV), an alternative measure of heart rate variability (HRV), is altered during obstructive sleep apnea. Correntropy spectral density (CSD) is a novel spectral analysis that includes nonlinear information. We recruited 160 children and recorded SpO2 and photoplethysmography (PPG), alongside standard polysomnography. PPG signals were divided into 1-min epochs and apnea/hypopnea (A/H) epochs were labeled. CSD was applied to the pulse-to-pulse interval time series (PPIs) and five features were extracted: the total spectral power (TP: 0.01–0.6 Hz), the power in the very low frequency band (VLF: 0.01–0.04 Hz), the normalized power in the low and high frequency bands (LFn: 0.04–0.15 Hz, HFn: 0.15–0.6 Hz), and the LF/HF ratio. Nonlinearity was assessed with the surrogate data technique. Multivariate logistic regression models were developed for CSD and power spectral density (PSD) analysis to detect epochs with A/H events. The CSD-based features and model identified epochs with and without A/H events more accurately than the PSD-based analysis (area under the curve (AUC) 0.72 vs. 0.67) due to the nonlinearity of the data. In conclusion, CSD-based PRV analysis provided enhanced performance in detecting A/H epochs; however, a combination with overnight SpO2 analysis is suggested for optimal results. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders)
Figures

Figure 1

Open AccessArticle LSTM-CRF for Drug-Named Entity Recognition
Entropy 2017, 19(6), 283; doi:10.3390/e19060283
Received: 1 April 2017 / Revised: 9 June 2017 / Accepted: 9 June 2017 / Published: 17 June 2017
PDF Full-text (279 KB) | HTML Full-text | XML Full-text
Abstract
Drug-Named Entity Recognition (DNER) for biomedical literature is a fundamental facilitator of Information Extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenge introduced one task aiming at recognition of drug names. State-of-the-art DNER approaches heavily rely on hand-engineered features and domain-specific
[...] Read more.
Drug-Named Entity Recognition (DNER) for biomedical literature is a fundamental facilitator of Information Extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenge introduced one task aiming at recognition of drug names. State-of-the-art DNER approaches heavily rely on hand-engineered features and domain-specific knowledge, which are difficult to collect and define. Therefore, we offer an approach that automatically explores word- and character-level features: a recurrent neural network using bidirectional long short-term memory (LSTM) with Conditional Random Fields decoding (LSTM-CRF). Two kinds of word representations are used in this work: word embeddings, which are trained from a large amount of text, and character-based representations, which can capture the orthographic features of words. Experimental results on the DDI2011 and DDI2013 datasets show the effectiveness of the proposed LSTM-CRF method. Our method outperforms the best system in the DDI2013 challenge. Full article
(This article belongs to the Section Information Theory)
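A minimal emission-scoring backbone of such a tagger is sketched below in PyTorch; the vocabulary size, tag set, and hyperparameters are placeholders, and the character-level encoder, CRF transition scores, and Viterbi decoding that complete an LSTM-CRF are omitted.

```python
import torch
import torch.nn as nn

class BiLSTMEmitter(nn.Module):
    """Word embeddings -> bidirectional LSTM -> per-token tag scores (CRF emissions)."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))
        return self.proj(h)                  # (batch, seq_len, num_tags)

# Toy usage with made-up sizes: a 5000-word vocabulary and BIO tags for drug mentions
model = BiLSTMEmitter(vocab_size=5000, num_tags=3)
emissions = model(torch.randint(1, 5000, (2, 12)))
print(emissions.shape)  # torch.Size([2, 12, 3])
```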
Figures

Figure 1

Open AccessArticle Multiscale Entropy Analysis of Unattended Oximetric Recordings to Assist in the Screening of Paediatric Sleep Apnoea at Home
Entropy 2017, 19(6), 284; doi:10.3390/e19060284
Received: 6 May 2017 / Revised: 12 June 2017 / Accepted: 14 June 2017 / Published: 17 June 2017
PDF Full-text (2045 KB) | HTML Full-text | XML Full-text
Abstract
Untreated paediatric obstructive sleep apnoea syndrome (OSAS) can severely affect the development and quality of life of children. In-hospital polysomnography (PSG) is the gold standard for a definitive diagnosis though it is relatively unavailable and particularly intrusive. Nocturnal portable oximetry has emerged as
[...] Read more.
Untreated paediatric obstructive sleep apnoea syndrome (OSAS) can severely affect the development and quality of life of children. In-hospital polysomnography (PSG) is the gold standard for a definitive diagnosis though it is relatively unavailable and particularly intrusive. Nocturnal portable oximetry has emerged as a reliable technique for OSAS screening. Nevertheless, additional evidence is demanded. Our study is aimed at assessing the usefulness of multiscale entropy (MSE) to characterise oximetric recordings. We hypothesise that MSE could provide relevant information on blood oxygen saturation (SpO2) dynamics in the detection of childhood OSAS. In order to achieve this goal, a dataset composed of unattended SpO2 recordings from 50 children showing clinical suspicion of OSAS was analysed. SpO2 was parameterised by means of MSE and conventional oximetric indices. An optimum feature subset composed of five MSE-derived features and four conventional clinical indices was obtained using automated bidirectional stepwise feature selection. Logistic regression (LR) was used for classification. Our optimum LR model reached 83.5% accuracy (84.5% sensitivity and 83.0% specificity). Our results suggest that MSE provides relevant information from oximetry that is complementary to conventional approaches. Therefore, MSE may be useful to improve the diagnostic ability of unattended oximetry as a simplified screening test for childhood OSAS. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders)
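A compact sketch of multiscale entropy (coarse-graining followed by sample entropy) is given below; the embedding dimension, tolerance, and scales actually applied to the SpO2 series are choices made in the paper and are only assumed here.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -log of the conditional probability that sequences matching
    for m points also match for m + 1 points (tolerance r times the std)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()   # here recomputed per coarse-grained series; conventions differ

    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        total = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += np.sum(dist <= tol)
        return total

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5), m=2, r=0.2):
    """Average the series over non-overlapping windows of each scale, then SampEn."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = (len(x) // s) * s
        coarse = x[:n].reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r))
    return out

print(multiscale_entropy(np.random.default_rng(0).normal(size=600)))
```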
Figures

Figure 1

Open AccessArticle On the Simplification of Statistical Mechanics for Space Plasmas
Entropy 2017, 19(6), 285; doi:10.3390/e19060285
Received: 17 May 2017 / Revised: 15 June 2017 / Accepted: 16 June 2017 / Published: 18 June 2017
PDF Full-text (1195 KB) | HTML Full-text | XML Full-text
Abstract
Space plasmas are frequently described by kappa distributions. Non-extensive statistical mechanics involves the maximization of the Tsallis entropic form under the constraints of canonical ensemble, considering also a dyadic formalism between the ordinary and escort probability distributions. This paper addresses the statistical origin
[...] Read more.
Space plasmas are frequently described by kappa distributions. Non-extensive statistical mechanics involves the maximization of the Tsallis entropic form under the constraints of canonical ensemble, considering also a dyadic formalism between the ordinary and escort probability distributions. This paper addresses the statistical origin of kappa distributions, and shows that they can be connected with non-extensive statistical mechanics without considering the dyadic formalism of ordinary/escort distributions. While this concept significantly simplifies the usage of the theory, it comes at the cost of having to define a dyadic entropic formulation in order to preserve the consistency between statistical mechanics and thermodynamics. Therefore, the simplification of the theory by means of avoiding the dyadic formalism is impossible within the framework of non-extensive statistical mechanics. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
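For reference, the Tsallis entropic form and one frequently quoted convention for the resulting kappa distribution of particle speeds are given below; exponent and temperature conventions (for example, the index shift relating kappa to 1/(q - 1)) vary across the literature, so this is only indicative.

```latex
S_{q} = k_{B}\,\frac{1-\sum_{i} p_{i}^{\,q}}{q-1},
\qquad
f(\mathbf{v}) \;\propto\; \left(1+\frac{1}{\kappa}\,\frac{m v^{2}}{2 k_{B} T}\right)^{-\kappa-1}.
```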
Figures

Figure 1

Open AccessArticle Assessing Probabilistic Inference by Comparing the Generalized Mean of the Model and Source Probabilities
Entropy 2017, 19(6), 286; doi:10.3390/e19060286
Received: 31 May 2017 / Revised: 10 June 2017 / Accepted: 12 June 2017 / Published: 19 June 2017
PDF Full-text (2284 KB) | HTML Full-text | XML Full-text
Abstract
An approach to the assessment of probabilistic inference is described which quantifies the performance on the probability scale. From both information and Bayesian theory, the central tendency of an inference is proven to be the geometric mean of the probabilities reported for the
[...] Read more.
An approach to the assessment of probabilistic inference is described which quantifies the performance on the probability scale. From both information and Bayesian theory, the central tendency of an inference is proven to be the geometric mean of the probabilities reported for the actual outcome and is referred to as the “Accuracy”. Upper and lower error bars on the accuracy are provided by the arithmetic mean and the −2/3 mean. The arithmetic mean is called the “Decisiveness” due to its similarity to the cost of a decision, and the −2/3 mean is called the “Robustness” due to its sensitivity to outlier errors. Visualization of inference performance is facilitated by plotting the reported model probabilities versus the histogram-calculated source probabilities. The visualization of the calibration between model and source is summarized on both axes by the arithmetic, geometric, and −2/3 means. From information theory, the performance of the inference is related to the cross-entropy between the model and source distribution. Just as the cross-entropy is the sum of the entropy and the divergence, the accuracy of a model can be decomposed into a component due to the source uncertainty and one due to the divergence between the source and the model. Translated to the probability domain, these quantities are plotted as the average model probability versus the average source probability. The divergence probability is the average model probability divided by the average source probability. When an inference is over/under-confident, the arithmetic mean of the model increases/decreases, while the −2/3 mean decreases/increases, respectively. Full article
(This article belongs to the Section Information Theory)
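A small sketch of the three summary statistics described in the abstract, computed over the probabilities a model assigned to the outcomes that actually occurred; the probability values are made up and the naming follows the abstract.

```python
import numpy as np

def power_mean(p, r):
    """Generalized (power) mean of order r; the limit r -> 0 is the geometric mean."""
    p = np.asarray(p, dtype=float)
    if r == 0:
        return float(np.exp(np.mean(np.log(p))))
    return float(np.mean(p ** r) ** (1.0 / r))

# Probabilities a hypothetical model reported for the outcomes that occurred
reported = [0.9, 0.6, 0.75, 0.3, 0.85]

accuracy     = power_mean(reported, 0)        # central tendency (geometric mean)
decisiveness = power_mean(reported, 1)        # upper bar (arithmetic mean)
robustness   = power_mean(reported, -2 / 3)   # lower bar, sensitive to outlier errors
print(accuracy, decisiveness, robustness)
```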
Figures

Figure 1

Open AccessArticle Information Technology Project Portfolio Implementation Process Optimization Based on Complex Network Theory and Entropy
Entropy 2017, 19(6), 287; doi:10.3390/e19060287
Received: 4 April 2017 / Revised: 12 June 2017 / Accepted: 13 June 2017 / Published: 19 June 2017
PDF Full-text (6965 KB) | HTML Full-text | XML Full-text
Abstract
In traditional information technology project portfolio management (ITPPM), managers often pay more attention to the optimization of portfolio selection in the initial stage. In fact, during the portfolio implementation process, there are still issues to be optimized. Organizing cooperation will enhance the efficiency,
[...] Read more.
In traditional information technology project portfolio management (ITPPM), managers often pay more attention to the optimization of portfolio selection in the initial stage. In fact, during the portfolio implementation process, there are still issues to be optimized. Organizing cooperation will enhance the efficiency, although it brings more immediate risk due to the complex variety of links between projects. In order to balance the efficiency and risk, an optimization method is presented based on complex network theory and entropy, which will assist portfolio managers in recognizing the structure of the portfolio and determining the cooperation range. Firstly, a complex network model for an IT project portfolio is constructed, in which each project is simulated as an artificial life agent. At the same time, the portfolio is viewed as a small-scale society. Following this, social network analysis is used to detect and divide communities in order to estimate the roles of projects between different portfolios. Based on these, the efficiency and the risk are measured using entropy and are balanced by searching for adequate hierarchical community divisions. Thus, the activities of cooperation in organizations, risk management, and so on, which are usually viewed as an important art, can be discussed and conducted based on quantitative calculations. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
Figures

Open AccessArticle Inconsistency of Template Estimation by Minimizing of the Variance/Pre-Variance in the Quotient Space
Entropy 2017, 19(6), 288; doi:10.3390/e19060288
Received: 27 April 2017 / Revised: 7 June 2017 / Accepted: 17 June 2017 / Published: 20 June 2017
PDF Full-text (340 KB) | HTML Full-text | XML Full-text
Abstract
We tackle the problem of template estimation when data have been randomly deformed under a group action in the presence of noise. In order to estimate the template, one often minimizes the variance when the influence of the transformations has been removed (computation
[...] Read more.
We tackle the problem of template estimation when data have been randomly deformed under a group action in the presence of noise. In order to estimate the template, one often minimizes the variance when the influence of the transformations has been removed (computation of the Fréchet mean in the quotient space). The consistency bias is defined as the distance (possibly zero) between the orbit of the template and the orbit of one element which minimizes the variance. In the first part, we restrict ourselves to isometric group actions, in which case the Hilbertian distance is invariant under the group action. We establish an asymptotic behavior of the consistency bias which is linear with respect to the noise level. As a result, the inconsistency is unavoidable as soon as the noise level is large enough. In practice, template estimation with a finite sample is often done with an algorithm called “max-max”. In the second part, again for a finite group acting isometrically, we show the convergence of this algorithm to an empirical Karcher mean. Our numerical experiments show that the bias observed in practice cannot be attributed to the small sample size or to a convergence problem, but is indeed due to the previously studied inconsistency. In the third part, we also present some insights into the case of a distance that is not invariant with respect to the group action. We will see that the inconsistency still holds as soon as the noise level is large enough. Moreover, we prove the inconsistency even when a regularization term is added. Full article
(This article belongs to the Special Issue Information Geometry II)
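In the notation usually adopted for this problem (a schematic restatement of the abstract only; the paper's precise assumptions on the group and the noise are omitted), the template is estimated by minimizing the variance in the quotient space,

```latex
F(m) \;=\; \mathbb{E}\!\left[\;\min_{g\in G}\,\bigl\|\,g\cdot Y - m\,\bigr\|^{2}\right],
\qquad
\text{consistency bias} \;=\; d_{Q}\bigl([t_{0}],[m^{\star}]\bigr),
```

where Y is the observed (deformed and noisy) datum, [t0] the orbit of the true template, m* a minimizer of F, and d_Q the distance between orbits in the quotient space.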
Figures

Figure 1

Open AccessArticle The Mehler-Fock Transform in Signal Processing
Entropy 2017, 19(6), 289; doi:10.3390/e19060289
Received: 25 April 2017 / Revised: 13 June 2017 / Accepted: 15 June 2017 / Published: 20 June 2017
PDF Full-text (943 KB) | HTML Full-text | XML Full-text
Abstract
Many signals can be described as functions on the unit disk (ball). In the framework of group representations it is well-known how to construct Hilbert-spaces containing these functions that have the groups SU(1,N) as their symmetry groups. One illustration of this construction is
[...] Read more.
Many signals can be described as functions on the unit disk (ball). In the framework of group representations it is well-known how to construct Hilbert-spaces containing these functions that have the groups SU(1,N) as their symmetry groups. One illustration of this construction is three-dimensional color spaces in which chroma properties are described by points on the unit disk. A combination of principal component analysis and the Perron-Frobenius theorem can be used to show that perspective projections map positive signals (i.e., functions with positive values) to a product of the positive half-axis and the unit ball. The representation theory (harmonic analysis) of the group SU(1,1) leads to an integral transform, the Mehler-Fock-transform (MFT), that decomposes functions, depending on the radial coordinate only, into combinations of associated Legendre functions. This transformation is applied to kernel density estimators of probability distributions on the unit disk. It is shown that the transform separates the influence of the data and the measured data. The application of the transform is illustrated by studying the statistical distribution of RGB vectors obtained from a common set of object points under different illuminants. Full article
(This article belongs to the Special Issue Information Geometry II)
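One standard normalization of the Mehler-Fock transform pair, built from the conical Legendre functions P_{-1/2+i tau} and acting on functions of the radial variable x >= 1, is shown below; the paper may adopt a different normalization.

```latex
\widehat{f}(\tau) = \int_{1}^{\infty} f(x)\,P_{-\frac{1}{2}+i\tau}(x)\,dx,
\qquad
f(x) = \int_{0}^{\infty} \tau\,\tanh(\pi\tau)\,\widehat{f}(\tau)\,P_{-\frac{1}{2}+i\tau}(x)\,d\tau.
```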
Figures

Figure 1

Open AccessArticle Enthalpy of Mixing in Al–Tb Liquid
Entropy 2017, 19(6), 290; doi:10.3390/e19060290
Received: 10 April 2017 / Revised: 14 June 2017 / Accepted: 15 June 2017 / Published: 21 June 2017
PDF Full-text (5166 KB) | HTML Full-text | XML Full-text
Abstract
The liquid-phase enthalpy of mixing for Al–Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. Mixing enthalpy
[...] Read more.
The liquid-phase enthalpy of mixing for Al–Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. Mixing enthalpy is determined relative to the unmixed pure (Al and Tb) components. The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Based on our measurements, three different semi-empirical solution models are offered for the excess free energy of the liquid, including regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of mixing enthalpy. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)
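The regular and subregular solution models mentioned are typically written in Redlich-Kister form; the generic expressions are given below, with interaction parameters L0 and L1 fitted to the measured enthalpies (the fitted values are not reproduced here).

```latex
\Delta H_{\mathrm{mix}}^{\mathrm{regular}} = L_{0}\,x_{\mathrm{Al}}\,x_{\mathrm{Tb}},
\qquad
\Delta H_{\mathrm{mix}}^{\mathrm{subregular}} = x_{\mathrm{Al}}\,x_{\mathrm{Tb}}
\bigl[L_{0} + L_{1}\,(x_{\mathrm{Al}} - x_{\mathrm{Tb}})\bigr].
```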
Figures

Figure 1

Open AccessArticle The Entropic Linkage between Equity and Bond Market Dynamics
Entropy 2017, 19(6), 292; doi:10.3390/e19060292
Received: 29 April 2017 / Revised: 17 June 2017 / Accepted: 17 June 2017 / Published: 21 June 2017
PDF Full-text (2440 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
An alternative derivation of the yield curve based on entropy or the loss of information as it is communicated through time is introduced. Given this focus on entropy growth in communication the Shannon entropy will be utilized. Additionally, Shannon entropy’s close relationship to
[...] Read more.
An alternative derivation of the yield curve based on entropy or the loss of information as it is communicated through time is introduced. Given this focus on entropy growth in communication the Shannon entropy will be utilized. Additionally, Shannon entropy’s close relationship to the Kullback–Leibler divergence is used to provide a more precise understanding of this new yield curve. The derivation of the entropic yield curve is completed with the use of the Burnashev reliability function which serves as a weighting between the true and error distributions. The deep connections between the entropic yield curve and the popular Nelson–Siegel specification are also examined. Finally, this entropically derived yield curve is used to provide an estimate of the economy’s implied information processing ratio. This information theoretic ratio offers a new causal link between bond and equity markets, and is a valuable new tool for the modeling and prediction of stock market behavior. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)
Figures

Figure 1

Review

Jump to: Research

Open AccessReview Maxwell’s Demon—A Historical Review
Entropy 2017, 19(6), 240; doi:10.3390/e19060240
Received: 20 April 2017 / Revised: 12 May 2017 / Accepted: 19 May 2017 / Published: 23 May 2017
PDF Full-text (1018 KB) | HTML Full-text | XML Full-text
Abstract
For more than 140 years Maxwell’s demon has intrigued, enlightened, mystified, frustrated, and challenged physicists in unique and interesting ways. Maxwell’s original conception was brilliant and insightful, but over the years numerous different versions of Maxwell’s demon have been presented. Most versions have
[...] Read more.
For more than 140 years Maxwell’s demon has intrigued, enlightened, mystified, frustrated, and challenged physicists in unique and interesting ways. Maxwell’s original conception was brilliant and insightful, but over the years numerous different versions of Maxwell’s demon have been presented. Most versions have been answered with reasonable physical arguments, with each of these answers (apparently) keeping the second law of thermodynamics intact. Though the laws of physics did not change in this process of questioning and answering, we have learned a lot along the way about statistical mechanics and thermodynamics. This paper will review a selected history and discuss some of the interesting historical characters who have participated. Full article
(This article belongs to the Special Issue Limits to the Second Law of Thermodynamics: Experiment and Theory)
Figures

Figure 1

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18