Table of Contents

Entropy, Volume 19, Issue 6 (June 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Estimation of flood magnitude for a given recurrence interval T (T-year flood) at a specific [...]
Displaying articles 1-53
Open Access Article
The Entropic Linkage between Equity and Bond Market Dynamics
Entropy 2017, 19(6), 292; https://doi.org/10.3390/e19060292
Received: 29 April 2017 / Revised: 17 June 2017 / Accepted: 17 June 2017 / Published: 21 June 2017
Cited by 8 | Viewed by 3445 | PDF Full-text (2440 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
An alternative derivation of the yield curve, based on entropy or the loss of information as it is communicated through time, is introduced. Given this focus on entropy growth in communication, the Shannon entropy is utilized. Additionally, Shannon entropy’s close relationship to the Kullback–Leibler divergence is used to provide a more precise understanding of this new yield curve. The derivation of the entropic yield curve is completed with the use of the Burnashev reliability function, which serves as a weighting between the true and error distributions. The deep connections between the entropic yield curve and the popular Nelson–Siegel specification are also examined. Finally, this entropically derived yield curve is used to provide an estimate of the economy’s implied information processing ratio. This information theoretic ratio offers a new causal link between bond and equity markets, and is a valuable new tool for the modeling and prediction of stock market behavior. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)
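The two quantities this abstract builds on, Shannon entropy and the Kullback–Leibler divergence, can be computed directly from discrete distributions. A minimal sketch (the distributions below are illustrative, not taken from the paper):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum p_i * log2(p_i), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

true_dist = [0.7, 0.2, 0.1]   # hypothetical "true" distribution
error_dist = [0.5, 0.3, 0.2]  # hypothetical "error" distribution

H = shannon_entropy(true_dist)
D = kl_divergence(true_dist, error_dist)
```

In the paper's setup, a reliability function then weights the true and error distributions; that weighting is not reproduced here.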

Open Access Article
Enthalpy of Mixing in Al–Tb Liquid
Entropy 2017, 19(6), 290; https://doi.org/10.3390/e19060290
Received: 10 April 2017 / Revised: 14 June 2017 / Accepted: 15 June 2017 / Published: 21 June 2017
Cited by 1 | Viewed by 1638 | PDF Full-text (5166 KB) | HTML Full-text | XML Full-text
Abstract
The liquid-phase enthalpy of mixing for Al–Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. Mixing enthalpy is determined relative to the unmixed pure (Al and Tb) components. The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Based on our measurements, three different semi-empirical solution models are offered for the excess free energy of the liquid, including regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of mixing enthalpy. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)

Open Access Article
The Mehler-Fock Transform in Signal Processing
Entropy 2017, 19(6), 289; https://doi.org/10.3390/e19060289
Received: 25 April 2017 / Revised: 13 June 2017 / Accepted: 15 June 2017 / Published: 20 June 2017
Viewed by 1375 | PDF Full-text (943 KB) | HTML Full-text | XML Full-text
Abstract
Many signals can be described as functions on the unit disk (ball). In the framework of group representations, it is well known how to construct Hilbert spaces containing these functions that have the groups SU(1,N) as their symmetry groups. One illustration of this construction is three-dimensional color spaces in which chroma properties are described by points on the unit disk. A combination of principal component analysis and the Perron-Frobenius theorem can be used to show that perspective projections map positive signals (i.e., functions with positive values) to a product of the positive half-axis and the unit ball. The representation theory (harmonic analysis) of the group SU(1,1) leads to an integral transform, the Mehler-Fock transform (MFT), that decomposes functions depending on the radial coordinate only into combinations of associated Legendre functions. This transformation is applied to kernel density estimators of probability distributions on the unit disk. It is shown that the transform separates the influence of the data and the measured data. The application of the transform is illustrated by studying the statistical distribution of RGB vectors obtained from a common set of object points under different illuminants. Full article
(This article belongs to the Special Issue Information Geometry II)

Open Access Article
Inconsistency of Template Estimation by Minimizing of the Variance/Pre-Variance in the Quotient Space
Entropy 2017, 19(6), 288; https://doi.org/10.3390/e19060288
Received: 27 April 2017 / Revised: 7 June 2017 / Accepted: 17 June 2017 / Published: 20 June 2017
Cited by 1 | Viewed by 1326 | PDF Full-text (340 KB) | HTML Full-text | XML Full-text
Abstract
We tackle the problem of template estimation when data have been randomly deformed under a group action in the presence of noise. In order to estimate the template, one often minimizes the variance when the influence of the transformations has been removed (computation of the Fréchet mean in the quotient space). The consistency bias is defined as the distance (possibly zero) between the orbit of the template and the orbit of one element which minimizes the variance. In the first part, we restrict ourselves to isometric group actions, in which case the Hilbertian distance is invariant under the group action. We establish an asymptotic behavior of the consistency bias which is linear with respect to the noise level. As a result, the inconsistency is unavoidable as soon as the noise is large enough. In practice, template estimation with a finite sample is often done with an algorithm called “max-max”. In the second part, also in the case of a finite isometric group, we show the convergence of this algorithm to an empirical Karcher mean. Our numerical experiments show that the bias observed in practice cannot be attributed to the small sample size or to a convergence problem, but is indeed due to the previously studied inconsistency. In the third part, we present some insights into the case of a distance that is not invariant with respect to the group action. We will see that the inconsistency still holds as soon as the noise level is large enough. Moreover, we prove the inconsistency even when a regularization term is added. Full article
(This article belongs to the Special Issue Information Geometry II)
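The "max-max" algorithm mentioned in the abstract alternates between registering each observation to the current template estimate (the "max" step over group elements) and averaging the registered data. A minimal sketch for the finite isometric group of cyclic shifts acting on a discretized signal; all names and parameters are illustrative, and the paper's point is precisely that the resulting estimate is biased when the noise is large:

```python
import random

def shift(x, k):
    """Cyclic shift of a list by k positions (an isometric group action)."""
    return x[-k:] + x[:-k]

def best_alignment(x, template):
    """Group element minimizing squared distance to the template."""
    return min(range(len(x)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(shift(x, k), template)))

def max_max(data, n_iter=20):
    """Alternate alignment (max step) and averaging (mean step)."""
    template = data[0][:]
    for _ in range(n_iter):
        aligned = [shift(x, best_alignment(x, template)) for x in data]
        template = [sum(col) / len(aligned) for col in zip(*aligned)]
    return template

# Noisy, randomly shifted copies of a spike-shaped template
random.seed(0)
true_template = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
data = [shift([t + random.gauss(0, 0.3) for t in true_template], random.randrange(6))
        for _ in range(200)]
estimate = max_max(data)
```

The estimate converges to an empirical Karcher mean of the aligned sample, not to the true template: the inconsistency the paper studies survives even with many observations.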

Open Access Article
Information Technology Project Portfolio Implementation Process Optimization Based on Complex Network Theory and Entropy
Entropy 2017, 19(6), 287; https://doi.org/10.3390/e19060287
Received: 4 April 2017 / Revised: 12 June 2017 / Accepted: 13 June 2017 / Published: 19 June 2017
Cited by 2 | Viewed by 1903 | PDF Full-text (6965 KB) | HTML Full-text | XML Full-text
Abstract
In traditional information technology project portfolio management (ITPPM), managers often pay more attention to the optimization of portfolio selection in the initial stage. In fact, during the portfolio implementation process, there are still issues to be optimized. Organizing cooperation will enhance efficiency, although it brings more immediate risk due to the complex variety of links between projects. In order to balance efficiency and risk, an optimization method is presented based on complex network theory and entropy, which will assist portfolio managers in recognizing the structure of the portfolio and determining the cooperation range. Firstly, a complex network model for an IT project portfolio is constructed, in which each project is simulated as an artificial life agent. At the same time, the portfolio is viewed as a small-scale society. Following this, social network analysis is used to detect and divide communities in order to estimate the roles of projects between different portfolios. Based on these, the efficiency and the risk are measured using entropy and are balanced through searching for adequate hierarchical community divisions. Thus, the activities of cooperation in organizations, risk management, and so on—which are usually viewed as an important art—can be discussed and conducted based on quantitative calculations. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)

Open Access Article
Assessing Probabilistic Inference by Comparing the Generalized Mean of the Model and Source Probabilities
Entropy 2017, 19(6), 286; https://doi.org/10.3390/e19060286
Received: 31 May 2017 / Revised: 10 June 2017 / Accepted: 12 June 2017 / Published: 19 June 2017
Cited by 1 | Viewed by 1797 | PDF Full-text (2284 KB) | HTML Full-text | XML Full-text
Abstract
An approach to the assessment of probabilistic inference is described which quantifies performance on the probability scale. From both information theory and Bayesian theory, the central tendency of an inference is proven to be the geometric mean of the probabilities reported for the actual outcome, and is referred to as the “Accuracy”. Upper and lower error bars on the accuracy are provided by the arithmetic mean and the −2/3 mean. The arithmetic mean is called the “Decisiveness” due to its similarity to the cost of a decision, and the −2/3 mean is called the “Robustness” due to its sensitivity to outlier errors. Visualization of inference performance is facilitated by plotting the reported model probabilities versus the histogram-calculated source probabilities. The visualization of the calibration between model and source is summarized on both axes by the arithmetic, geometric, and −2/3 means. From information theory, the performance of the inference is related to the cross-entropy between the model and source distributions. Just as cross-entropy is the sum of the entropy and the divergence, the accuracy of a model can be decomposed into a component due to the source uncertainty and the divergence between the source and model. Translated to the probability domain, these quantities are plotted as the average model probability versus the average source probability. The divergence probability is the average model probability divided by the average source probability. When an inference is over/under-confident, the arithmetic mean of the model increases/decreases, while the −2/3 mean decreases/increases, respectively. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
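The three summaries named in the abstract are all instances of the power mean M_r(p) = ((1/n) Σ p_i^r)^(1/r), with the geometric mean as the r → 0 limit. A small sketch (the probabilities below are illustrative values a model might have assigned to the outcomes that actually occurred):

```python
def power_mean(probs, r):
    """Generalized (power) mean of probabilities; r = 0 gives the geometric mean."""
    n = len(probs)
    if r == 0:
        prod = 1.0
        for p in probs:
            prod *= p
        return prod ** (1.0 / n)
    return (sum(p ** r for p in probs) / n) ** (1.0 / r)

reported = [0.9, 0.8, 0.6, 0.3]  # illustrative reported probabilities

accuracy = power_mean(reported, 0)          # geometric mean: central tendency
decisiveness = power_mean(reported, 1)      # arithmetic mean: upper error bar
robustness = power_mean(reported, -2 / 3)   # -2/3 mean: lower, outlier-sensitive bar
```

The power mean inequality guarantees robustness ≤ accuracy ≤ decisiveness, which is why the two outer means serve as error bars around the geometric mean.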

Open Access Article
Modeling Multi-Event Non-Point Source Pollution in a Data-Scarce Catchment Using ANN and Entropy Analysis
Entropy 2017, 19(6), 265; https://doi.org/10.3390/e19060265
Received: 5 May 2017 / Revised: 7 June 2017 / Accepted: 7 June 2017 / Published: 19 June 2017
Cited by 7 | Viewed by 1747 | PDF Full-text (1708 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Event-based runoff–pollutant relationships have been the key for water quality management, but the scarcity of measured data results in poor model performance, especially for multiple rainfall events. In this study, a new framework was proposed for event-based non-point source (NPS) prediction and evaluation. The artificial neural network (ANN) was used to extend the runoff–pollutant relationship from complete data events to other data-scarce events. The interpolation method was then used to solve the problem of tail deviation in the simulated pollutographs. In addition, the entropy method was utilized to train the ANN for comprehensive evaluations. A case study was performed in the Three Gorges Reservoir Region, China. Results showed that the ANN performed well in the NPS simulation, especially for light rainfall events, and the phosphorus predictions were always more accurate than the nitrogen predictions under scarce data conditions. In addition, peak pollutant data scarcity had a significant impact on the model performance. Furthermore, these traditional indicators would lead to certain information loss during the model evaluation, but the entropy weighting method could provide a more accurate model evaluation. These results would be valuable for monitoring schemes and the quantitation of event-based NPS pollution, especially in data-poor catchments. Full article

Open Access Article
On the Simplification of Statistical Mechanics for Space Plasmas
Entropy 2017, 19(6), 285; https://doi.org/10.3390/e19060285
Received: 17 May 2017 / Revised: 15 June 2017 / Accepted: 16 June 2017 / Published: 18 June 2017
Cited by 4 | Viewed by 1662 | PDF Full-text (1195 KB) | HTML Full-text | XML Full-text
Abstract
Space plasmas are frequently described by kappa distributions. Non-extensive statistical mechanics involves the maximization of the Tsallis entropic form under the constraints of canonical ensemble, considering also a dyadic formalism between the ordinary and escort probability distributions. This paper addresses the statistical origin of kappa distributions, and shows that they can be connected with non-extensive statistical mechanics without considering the dyadic formalism of ordinary/escort distributions. While this concept does significantly simplify the usage of the theory, it costs the definition of a dyadic entropic formulation, in order to preserve the consistency between statistical mechanics and thermodynamics. Therefore, the simplification of the theory by means of avoiding dyadic formalism is impossible within the framework of non-extensive statistical mechanics. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)

Open Access Article
Multiscale Entropy Analysis of Unattended Oximetric Recordings to Assist in the Screening of Paediatric Sleep Apnoea at Home
Entropy 2017, 19(6), 284; https://doi.org/10.3390/e19060284
Received: 6 May 2017 / Revised: 12 June 2017 / Accepted: 14 June 2017 / Published: 17 June 2017
Cited by 6 | Viewed by 1274 | PDF Full-text (2045 KB) | HTML Full-text | XML Full-text
Abstract
Untreated paediatric obstructive sleep apnoea syndrome (OSAS) can severely affect the development and quality of life of children. In-hospital polysomnography (PSG) is the gold standard for a definitive diagnosis, though it is relatively unavailable and particularly intrusive. Nocturnal portable oximetry has emerged as a reliable technique for OSAS screening. Nevertheless, additional evidence is needed. Our study is aimed at assessing the usefulness of multiscale entropy (MSE) to characterise oximetric recordings. We hypothesise that MSE could provide relevant information on blood oxygen saturation (SpO2) dynamics in the detection of childhood OSAS. In order to achieve this goal, a dataset composed of unattended SpO2 recordings from 50 children showing clinical suspicion of OSAS was analysed. SpO2 was parameterised by means of MSE and conventional oximetric indices. An optimum feature subset composed of five MSE-derived features and four conventional clinical indices was obtained using automated bidirectional stepwise feature selection. Logistic regression (LR) was used for classification. Our optimum LR model reached 83.5% accuracy (84.5% sensitivity and 83.0% specificity). Our results suggest that MSE provides relevant information from oximetry that is complementary to conventional approaches. Therefore, MSE may be useful in improving the diagnostic ability of unattended oximetry as a simplified screening test for childhood OSAS. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders)
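Multiscale entropy coarse-grains a signal at increasing scales and computes sample entropy at each scale. A self-contained sketch; the tolerance r is conventionally set to 0.2 times the signal's standard deviation, and the white-noise input below is only a stand-in for an SpO2 series:

```python
import math
import random

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln of the ratio of (m+1)- to m-length template matches."""
    def count(mm):
        templ = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templ)):
            for j in range(i + 1, len(templ)):
                if max(abs(a - b) for a, b in zip(templ[i], templ[j])) <= r:
                    c += 1
        return c
    a, b = count(m + 1), count(m)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]

random.seed(1)
sig = [random.gauss(0, 1) for _ in range(600)]
sd = (sum(v * v for v in sig) / len(sig)) ** 0.5
mse = multiscale_entropy(sig, scales=(1, 2, 3), m=2, r=0.2 * sd)
```

The resulting curve of entropy versus scale is what serves as a feature vector in studies like this one; the sketch omits the clinical feature selection and classification stages.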

Open Access Article
LSTM-CRF for Drug-Named Entity Recognition
Entropy 2017, 19(6), 283; https://doi.org/10.3390/e19060283
Received: 1 April 2017 / Revised: 9 June 2017 / Accepted: 9 June 2017 / Published: 17 June 2017
Cited by 9 | Viewed by 4013 | PDF Full-text (279 KB) | HTML Full-text | XML Full-text
Abstract
Drug-Named Entity Recognition (DNER) for biomedical literature is a fundamental facilitator of Information Extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenges introduced a task aimed at the recognition of drug names. State-of-the-art DNER approaches heavily rely on hand-engineered features and domain-specific knowledge, which are difficult to collect and define. Therefore, we offer an approach that automatically explores word-level and character-level features: a recurrent neural network using bidirectional long short-term memory (LSTM) with Conditional Random Fields decoding (LSTM-CRF). Two kinds of word representations are used in this work: word embeddings, which are trained from a large amount of text, and character-based representations, which can capture orthographic features of words. Experimental results on the DDI2011 and DDI2013 datasets show the effectiveness of the proposed LSTM-CRF method. Our method outperforms the best system in the DDI2013 challenge. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open Access Feature Paper Article
Correntropy-Based Pulse Rate Variability Analysis in Children with Sleep Disordered Breathing
Entropy 2017, 19(6), 282; https://doi.org/10.3390/e19060282
Received: 6 May 2017 / Revised: 7 June 2017 / Accepted: 9 June 2017 / Published: 16 June 2017
Cited by 2 | Viewed by 1503 | PDF Full-text (460 KB) | HTML Full-text | XML Full-text
Abstract
Pulse rate variability (PRV), an alternative measure of heart rate variability (HRV), is altered during obstructive sleep apnea. Correntropy spectral density (CSD) is a novel spectral analysis that includes nonlinear information. We recruited 160 children and recorded SpO2 and photoplethysmography (PPG), alongside standard polysomnography. PPG signals were divided into 1-min epochs, and apnea/hypopnea (A/H) epochs were labeled. CSD was applied to the pulse-to-pulse interval time series (PPIs) and five features were extracted: the total spectral power (TP: 0.01–0.6 Hz), the power in the very low frequency band (VLF: 0.01–0.04 Hz), the normalized power in the low and high frequency bands (LFn: 0.04–0.15 Hz, HFn: 0.15–0.6 Hz), and the LF/HF ratio. Nonlinearity was assessed with the surrogate data technique. Multivariate logistic regression models were developed for CSD and power spectral density (PSD) analysis to detect epochs with A/H events. The CSD-based features and model identified epochs with and without A/H events more accurately relative to PSD-based analysis (area under the curve (AUC) 0.72 vs. 0.67) due to the nonlinearity of the data. In conclusion, CSD-based PRV analysis provided enhanced performance in detecting A/H epochs; however, a combination with overnight SpO2 analysis is suggested for optimal results. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders)
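Correntropy generalizes the autocorrelation by averaging a Gaussian-kernel similarity between samples a given lag apart; the CSD is then the Fourier transform of the centered correntropy sequence. A minimal sketch of the time-domain quantity (the kernel width sigma is a free parameter, and the sine-wave input is only illustrative):

```python
import math

def correntropy(x, lag, sigma=1.0):
    """V(lag): average Gaussian-kernel similarity of samples `lag` apart."""
    pairs = [(x[i], x[i + lag]) for i in range(len(x) - lag)]
    return sum(math.exp(-((a - b) ** 2) / (2 * sigma ** 2)) for a, b in pairs) / len(pairs)

def centered_correntropy(x, max_lag, sigma=1.0):
    """Correntropy minus its mean over all sample pairs (removes the DC bias)."""
    n = len(x)
    mean_v = sum(math.exp(-((x[i] - x[j]) ** 2) / (2 * sigma ** 2))
                 for i in range(n) for j in range(n)) / (n * n)
    return [correntropy(x, k, sigma) - mean_v for k in range(max_lag + 1)]

x = [math.sin(0.3 * i) for i in range(200)]
v = centered_correntropy(x, max_lag=10, sigma=0.5)
```

Because the kernel is bounded, correntropy captures higher-order (nonlinear) statistics that the ordinary autocorrelation misses, which is the property the CSD features above exploit.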

Open Access Article
An Enhanced Set-Membership PNLMS Algorithm with a Correntropy Induced Metric Constraint for Acoustic Channel Estimation
Entropy 2017, 19(6), 281; https://doi.org/10.3390/e19060281
Received: 30 April 2017 / Revised: 11 June 2017 / Accepted: 13 June 2017 / Published: 15 June 2017
Cited by 2 | Viewed by 1321 | PDF Full-text (2740 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used for constructing a new cost function within the kernel framework. The proposed CIM penalized SM-PNLMS (CIMSM-PNLMS) algorithm is derived and analyzed in detail. A desired zero attraction term is put forward in the updating equation of the proposed CIMSM-PNLMS algorithm to force the inactive coefficients to zero. The performance of the proposed CIMSM-PNLMS algorithm is investigated for estimating an underwater communication channel and an echo channel. The obtained results demonstrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Open Access Article
Approximation of Stochastic Quasi-Periodic Responses of Limit Cycles in Non-Equilibrium Systems under Periodic Excitations and Weak Fluctuations
Entropy 2017, 19(6), 280; https://doi.org/10.3390/e19060280
Received: 17 May 2017 / Revised: 14 June 2017 / Accepted: 14 June 2017 / Published: 15 June 2017
Cited by 2 | Viewed by 1546 | PDF Full-text (5968 KB) | HTML Full-text | XML Full-text
Abstract
A semi-analytical method is proposed to approximate the stochastic quasi-periodic responses of limit cycles in non-equilibrium dynamical systems excited by periodic forces and weak random fluctuations. First, a kind of 1/N-stroboscopic map is introduced to discretize the quasi-periodic torus into closed curves, which are then approximated by periodic points. Using a stochastic sensitivity function for discrete-time systems, the transverse dispersion of these closed curves can be quantified. Furthermore, combined with the longitudinal distribution of the curves, the probability density function of these closed curves in stroboscopic sections can be determined. The validity of this approach is shown through a van der Pol oscillator and a Brusselator. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
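The stroboscopic idea can be illustrated by integrating a periodically forced van der Pol oscillator with weak noise (Euler-Maruyama) and sampling the state once per forcing period. This is only the sampling step, not the paper's stochastic sensitivity analysis, and all parameter values are illustrative:

```python
import math
import random

def van_der_pol_strobe(mu=1.0, amp=0.5, omega=1.1, eps=0.01,
                       n_periods=200, steps_per_period=1000, seed=0):
    """Euler-Maruyama integration of a periodically forced van der Pol
    oscillator with weak additive noise; returns the stroboscopic section
    (the state sampled once per forcing period)."""
    rng = random.Random(seed)
    T = 2 * math.pi / omega        # forcing period
    dt = T / steps_per_period
    x, v, t = 0.5, 0.0, 0.0
    section = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            ax = v
            av = mu * (1 - x * x) * v - x + amp * math.cos(omega * t)
            x += ax * dt
            v += av * dt + eps * math.sqrt(dt) * rng.gauss(0, 1)
            t += dt
        section.append((x, v))
    return section

section = van_der_pol_strobe()
```

Under weak noise the stroboscopic points scatter in a thin band around the deterministic closed curve; quantifying that transverse scatter is what the stochastic sensitivity function does in the paper.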

Open Access Article
Entropy Generation Rates through the Dissipation of Ordered Regions in Helium Boundary-Layer Flows
Entropy 2017, 19(6), 278; https://doi.org/10.3390/e19060278
Received: 23 March 2017 / Revised: 9 June 2017 / Accepted: 12 June 2017 / Published: 15 June 2017
Cited by 2 | Viewed by 1501 | PDF Full-text (5791 KB) | HTML Full-text | XML Full-text
Abstract
The results of the computation of entropy generation rates through the dissipation of ordered regions within selected helium boundary layer flows are presented. Entropy generation rates in helium boundary layer flows for five cases of increasing temperature and pressure are considered. The basic format of a turbulent spot is used as the flow model. Statistical processing of the time-dependent series solutions of the nonlinear, coupled Lorenz-type differential equations for the spectral velocity wave components in the three-dimensional boundary layer configuration yields the local volumetric entropy generation rates. Extension of the computational method to the transition from laminar to fully turbulent flow is discussed. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)

Open Access Article
Weak Fault Diagnosis of Wind Turbine Gearboxes Based on MED-LMD
Entropy 2017, 19(6), 277; https://doi.org/10.3390/e19060277
Received: 12 April 2017 / Revised: 2 June 2017 / Accepted: 7 June 2017 / Published: 15 June 2017
Cited by 13 | Viewed by 2203 | PDF Full-text (2861 KB) | HTML Full-text | XML Full-text
Abstract
In view of the problem that the fault signal of a rolling bearing is weak and the fault features are difficult to extract in a strong noise environment, a method based on minimum entropy deconvolution (MED) and local mean decomposition (LMD) is proposed to extract the weak fault features of the rolling bearing. Through the analysis of a simulated signal, we find that LMD alone has many limitations for the feature extraction of weak signals under strong background noise. In order to eliminate the noise interference and extract the characteristics of the weak fault, MED is employed as a pre-filter to remove noise. This method is applied to the weak fault feature extraction of rolling bearings: MED is first used to denoise signals from the wind turbine gearbox test bench under strong background noise, the LMD method then decomposes the denoised signals into several product functions (PFs), and finally the PF components with strong correlation are analyzed by a cyclic autocorrelation function. The finding is that the failure of the wind power gearbox arises from the micro-bending of the high-speed shaft and the pitting of the #10 bearing outer race at the output end of the high-speed shaft. Comparison with LMD alone shows the effectiveness of the proposed method. This paper provides a new method for the extraction of multiple faults and weak features under strong background noise. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)

Open Access Article
Noise Enhancement for Weighted Sum of Type I and II Error Probabilities with Constraints
Entropy 2017, 19(6), 276; https://doi.org/10.3390/e19060276
Received: 12 April 2017 / Revised: 8 June 2017 / Accepted: 12 June 2017 / Published: 14 June 2017
Viewed by 1226 | PDF Full-text (1937 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the noise-enhanced detection problem is investigated for binary hypothesis-testing. The optimal additive noise is determined according to a criterion proposed by DeGroot and Schervish (2011), which aims to minimize the weighted sum of type I and II error probabilities under constraints on the type I and II error probabilities. Based on a generic composite hypothesis-testing formulation, the optimal additive noise is obtained. Sufficient conditions are also deduced to verify whether the usage of the additive noise can or cannot improve the detectability of a given detector. In addition, some additional results are obtained according to the specific structure of the binary hypothesis test, and an algorithm is developed for finding the corresponding optimal noise. Finally, numerical examples are given to verify the theoretical results, and proofs of the main theorems are presented in the Appendix. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
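For a concrete instance of the weighted criterion, consider a scalar Gaussian mean-shift test with a threshold detector: the weighted sum w·α + (1−w)·β has a closed form and can be minimized over the threshold. This sketch illustrates the criterion only, not the paper's additive-noise optimization; all parameter values are illustrative:

```python
import math

def Q(z):
    """Gaussian tail probability P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def weighted_error(tau, w=0.5, mu0=0.0, mu1=1.0, sigma=1.0):
    """w * (type I error) + (1 - w) * (type II error) for threshold tau."""
    alpha = Q((tau - mu0) / sigma)      # P(decide H1 | H0 true)
    beta = 1 - Q((tau - mu1) / sigma)   # P(decide H0 | H1 true)
    return w * alpha + (1 - w) * beta

# Grid search for the threshold minimizing the weighted error sum
taus = [i / 100 for i in range(-200, 300)]
best_tau = min(taus, key=weighted_error)
```

With equal weights and unit-variance Gaussians, the minimizer is the midpoint between the two means, matching the likelihood-ratio threshold; adding noise to the detector input can only help when the detector itself is suboptimal, which is the situation the paper analyzes.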

Open Access Article
The Entropy of Words—Learnability and Expressivity across More than 1000 Languages
Entropy 2017, 19(6), 275; https://doi.org/10.3390/e19060275
Received: 26 April 2017 / Revised: 2 June 2017 / Accepted: 6 June 2017 / Published: 14 June 2017
Cited by 9 | Viewed by 2734 | PDF Full-text (6804 KB) | HTML Full-text | XML Full-text
Abstract
The choice associated with words is a fundamental property of natural languages. It lies at the heart of quantitative linguistics, computational linguistics and language sciences more generally. Information theory provides the tools to measure precisely the average amount of choice associated with words: the word entropy. Here, we use three parallel corpora, encompassing ca. 450 million words in 1916 texts and 1259 languages, to tackle some of the major conceptual and practical problems of word entropy estimation: dependence on text size, register, style and estimation method, as well as non-independence of words in co-text. We present two main findings: Firstly, word entropies display relatively narrow, unimodal distributions. There is no language in our sample with a unigram entropy of less than six bits/word. We argue that this is in line with information-theoretic models of communication. Languages are held in a narrow range by two fundamental pressures: word learnability and word expressivity, with a potential bias towards expressivity. Secondly, there is a strong linear relationship between unigram entropies and entropy rates. The entropy difference between words with and without co-textual information is narrowly distributed around ca. three bits/word. In other words, knowing the preceding text reduces the uncertainty of words by roughly the same amount across the languages of the world. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
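The unigram word entropy discussed in the abstract above can be sketched with the plug-in (maximum-likelihood) estimator. This is not the authors' estimator, which corrects for the text-size dependence they discuss; the toy corpus is an assumption for illustration.

```python
# Minimal sketch (not the authors' method): plug-in estimate of unigram
# word entropy, H = -sum_w p(w) * log2 p(w), in bits/word. Real estimators
# must correct for text-size bias; this toy corpus is an assumption.
import math
from collections import Counter

def unigram_entropy(words):
    """Maximum-likelihood (plug-in) estimate of word entropy in bits/word."""
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Four equiprobable word types carry exactly 2 bits/word.
corpus = ["the", "cat", "sat", "down"] * 25
```

On real text the distribution is highly skewed, which is why the plug-in estimate underestimates entropy on short samples.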
Open AccessArticle
Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint
Entropy 2017, 19(6), 274; https://doi.org/10.3390/e19060274
Received: 2 May 2017 / Revised: 3 June 2017 / Accepted: 9 June 2017 / Published: 13 June 2017
Viewed by 1844 | PDF Full-text (1954 KB) | HTML Full-text | XML Full-text
Abstract
This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with a stochastic (or uncertain) constraint on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as the log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust to heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with a stochastic constraint are intractable, the Bayesian method is pursued using a Markov chain Monte Carlo (MCMC) sampling-based approach. The paper introduces a two-stage maximum entropy (MaxEnt) prior that elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using this prior. The paper also proposes an MCMC method for Bayesian inference of the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Open AccessArticle
Multiscale Information Theory and the Marginal Utility of Information
Entropy 2017, 19(6), 273; https://doi.org/10.3390/e19060273
Received: 28 February 2017 / Revised: 26 May 2017 / Accepted: 9 June 2017 / Published: 13 June 2017
Cited by 6 | Viewed by 5015 | PDF Full-text (709 KB) | HTML Full-text | XML Full-text
Abstract
Complex systems display behavior at a range of scales. Large-scale behaviors can emerge from the correlated or dependent behavior of individual small-scale components. To capture this observation in a rigorous and general way, we introduce a formalism for multiscale information theory. Dependent behavior among system components results in overlapping or shared information. A system’s structure is revealed in the sharing of information across the system’s dependencies, each of which has an associated scale. Counting information according to its scale yields the quantity of scale-weighted information, which is conserved when a system is reorganized. In the interest of flexibility we allow information to be quantified using any function that satisfies two basic axioms. Shannon information and vector space dimension are examples. We discuss two quantitative indices that summarize system structure: an existing index, the complexity profile, and a new index, the marginal utility of information. Using simple examples, we show how these indices capture the multiscale structure of complex systems in a quantitative way. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³)) Printed Edition available
Open AccessArticle
Game-Theoretic Optimization of Bilateral Contract Transaction for Generation Companies and Large Consumers with Incomplete Information
Entropy 2017, 19(6), 272; https://doi.org/10.3390/e19060272
Received: 1 April 2017 / Revised: 26 May 2017 / Accepted: 10 June 2017 / Published: 13 June 2017
Cited by 2 | Viewed by 1785 | PDF Full-text (478 KB) | HTML Full-text | XML Full-text
Abstract
Bilateral contract transactions between generation companies and large consumers are attracting much attention in the electricity market. A large consumer can purchase energy from generation companies directly under a bilateral contract, which can guarantee the economic interests of both sides. However, in pursuit of more profit, competition in the transaction exists not only between the company side and the consumer side, but also among the generation companies. To maximize its profit, each company needs to optimize its bidding price to attract large consumers. In this paper, a master–slave game is proposed to describe the competition between generation companies and large consumers. Furthermore, a Bayesian game approach is formulated to describe the competition among generation companies under incomplete information. In the model, each company determines its optimal bidding price via the Bayesian game; based on the bidding prices offered by the companies and the predicted spot price, large consumers then decide their own purchase strategies to minimize their costs. Simulation results show that each participant in the transaction can benefit from the proposed game. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open AccessArticle
An Information-Spectrum Approach to the Capacity Region of the Interference Channel
Entropy 2017, 19(6), 270; https://doi.org/10.3390/e19060270
Received: 8 April 2017 / Revised: 2 June 2017 / Accepted: 10 June 2017 / Published: 13 June 2017
Cited by 3 | Viewed by 1273 | PDF Full-text (607 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a general formula for the capacity region of a general interference channel with two pairs of users is derived, which reveals that the capacity region is the union of a family of rectangles. In the region, each rectangle is determined by a pair of spectral inf-mutual information rates. The presented formula provides useful insights into interference channels despite the difficulty of computing it. In particular, when the inputs are discrete, ergodic Markov processes and the channel is stationary memoryless, the formula can be evaluated by the BCJR (Bahl–Cocke–Jelinek–Raviv) algorithm. The formula also suggests that taking the structure of the interference processes into account yields tighter inner bounds than the simplest one (obtained by treating the interference as noise). This is verified numerically by calculating the mutual information rates for Gaussian interference channels with embedded convolutional codes. Moreover, we present a coding scheme to approach the theoretical achievable rate pairs. Numerical results show that decoding gains can be achieved by considering the structure of the interference. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
Open AccessArticle
A Novel Distance Metric: Generalized Relative Entropy
Entropy 2017, 19(6), 269; https://doi.org/10.3390/e19060269
Received: 26 May 2017 / Revised: 7 June 2017 / Accepted: 7 June 2017 / Published: 13 June 2017
Cited by 27 | Viewed by 2065 | PDF Full-text (1133 KB) | HTML Full-text | XML Full-text
Abstract
Information entropy and its extensions are important generalizations of entropy and are currently applied in many research domains. In this paper, a novel generalized relative entropy is constructed to avoid some defects of traditional relative entropy. We present the structure of the generalized relative entropy after a discussion of the defects of relative entropy. Moreover, some properties of the proposed generalized relative entropy are presented and proved; in particular, it is proved to have a finite range and to be a finite distance metric. Finally, we predict nucleosome positioning in fly and yeast based on the generalized relative entropy and relative entropy, respectively. The experimental results show that the properties of the generalized relative entropy are better than those of relative entropy. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
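The defects of traditional relative entropy that the abstract above refers to can be seen in a short sketch of the classical discrete Kullback–Leibler divergence: it is asymmetric and becomes unbounded when q(i) → 0 while p(i) > 0. The distributions below are assumptions for illustration; the paper's generalized form itself is not reproduced here.

```python
# Sketch of the classical discrete relative entropy (KL divergence), whose
# defects (asymmetry, unboundedness) motivate generalized variants. The
# example distributions are assumptions; the paper's generalized form is
# not shown here.
import math

def kl(p, q):
    """D(p || q) = sum_i p_i * log2(p_i / q_i); infinite if q_i = 0 < p_i."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue  # 0 * log(0/q) = 0 by convention
        if qi == 0:
            return math.inf  # absolute-continuity violated: unbounded
        total += pi * math.log2(pi / qi)
    return total

p = [0.5, 0.5, 0.0]
q = [0.9, 0.05, 0.05]
```

Here `kl(p, q)` is finite while `kl(q, p)` is infinite, showing in one example both the asymmetry and the unboundedness that a generalized relative entropy with a finite range avoids.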
Open AccessArticle
Peierls–Bogolyubov’s Inequality for Deformed Exponentials
Entropy 2017, 19(6), 271; https://doi.org/10.3390/e19060271
Received: 14 April 2017 / Revised: 2 June 2017 / Accepted: 5 June 2017 / Published: 12 June 2017
Viewed by 1263 | PDF Full-text (243 KB) | HTML Full-text | XML Full-text
Abstract
We study the convexity or concavity of certain trace functions for the deformed logarithmic and exponential functions, and in this way obtain new trace inequalities for deformed exponentials that may be considered as generalizations of Peierls–Bogolyubov’s inequality. We use these results to improve previously-known lower bounds for the Tsallis relative entropy. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
Open AccessArticle
Generalized Beta Distribution of the Second Kind for Flood Frequency Analysis
Entropy 2017, 19(6), 254; https://doi.org/10.3390/e19060254
Received: 18 April 2017 / Revised: 18 May 2017 / Accepted: 26 May 2017 / Published: 12 June 2017
Cited by 10 | Viewed by 2092 | PDF Full-text (2984 KB) | HTML Full-text | XML Full-text
Abstract
Estimation of the flood magnitude for a given recurrence interval T (the T-year flood) at a specific location is needed for the design of hydraulic and civil infrastructure. A key step in this estimation, or flood frequency analysis (FFA), is the selection of a suitable distribution. More than one distribution is often found to be adequate for FFA on a given watershed, and choosing the best one is often less than objective. In this study, the generalized beta distribution of the second kind (GB2) was introduced for FFA, and the principle of maximum entropy (POME) method was proposed to estimate the GB2 parameters. The performance of the GB2 distribution was evaluated using flood data from gauging stations on the Colorado River, USA. Frequency estimates from the GB2 distribution were also compared with those of commonly used distributions, and the evolution of the frequency distribution along the stream from upstream to downstream was investigated. The study concludes that the GB2 is appealing for FFA, since it has four parameters and includes several well-known distributions as special cases. Results of the case study demonstrate that the parameters estimated by the POME method are reasonable. According to the RMSD and AIC values, the performance of the GB2 distribution is better than that of the distributions widely used in hydrology. Using different distributions for FFA yields significantly different design flood values, and for a given return period the design flood value at the downstream gauging stations is larger than that at the upstream gauging station. In addition, the distribution evolves along the stream: along the Yampa River, the distribution for FFA changes from the four-parameter GB2 distribution to the three-parameter Burr XII distribution. Full article
Open AccessArticle
Information Geometry of Non-Equilibrium Processes in a Bistable System with a Cubic Damping
Entropy 2017, 19(6), 268; https://doi.org/10.3390/e19060268
Received: 29 April 2017 / Revised: 2 June 2017 / Accepted: 6 June 2017 / Published: 11 June 2017
Cited by 12 | Viewed by 1703 | PDF Full-text (487 KB) | HTML Full-text | XML Full-text
Abstract
A probabilistic description is essential for understanding the dynamics of stochastic systems far from equilibrium, given uncertainty inherent in the systems. To compare different Probability Density Functions (PDFs), it is extremely useful to quantify the difference among different PDFs by assigning an appropriate metric to probability such that the distance increases with the difference between the two PDFs. This metric structure then provides a key link between stochastic systems and information geometry. For a non-equilibrium process, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory of the system quantifies the total number of different states that the system undergoes in time and is called the information length. By using this concept, we investigate the information geometry of non-equilibrium processes involved in disorder-order transitions between the critical and subcritical states in a bistable system. Specifically, we compute time-dependent PDFs, information length, the rate of change in information length, entropy change and Fisher information in disorder-to-order and order-to-disorder transitions and discuss similarities and disparities between the two transitions. In particular, we show that the total information length in order-to-disorder transition is much larger than that in disorder-to-order transition and elucidate the link to the drastically different evolution of entropy in both transitions. We also provide the comparison of the results with those in the case of the transition between the subcritical and supercritical states and discuss implications for fitness. Full article
(This article belongs to the Special Issue Information Geometry II)
Open AccessArticle
Kullback–Leibler Divergence and Mutual Information of Partitions in Product MV Algebras
Entropy 2017, 19(6), 267; https://doi.org/10.3390/e19060267
Received: 11 May 2017 / Revised: 5 June 2017 / Accepted: 7 June 2017 / Published: 10 June 2017
Cited by 7 | Viewed by 1503 | PDF Full-text (325 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of the paper is to introduce, using the known results concerning the entropy in product MV algebras, the concepts of mutual information and Kullback–Leibler divergence for the case of product MV algebras and examine algebraic properties of the proposed measures. In particular, a convexity of Kullback–Leibler divergence with respect to states in product MV algebras is proved, and chain rules for mutual information and Kullback–Leibler divergence are established. In addition, the data processing inequality for conditionally independent partitions in product MV algebras is proved. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open AccessArticle
Model-Based Approaches to Active Perception and Control
Entropy 2017, 19(6), 266; https://doi.org/10.3390/e19060266
Received: 3 May 2017 / Revised: 6 June 2017 / Accepted: 7 June 2017 / Published: 9 June 2017
Cited by 7 | Viewed by 1834 | PDF Full-text (268 KB) | HTML Full-text | XML Full-text
Abstract
There is an on-going debate in cognitive (neuro) science and philosophy between classical cognitive theory and embodied, embedded, extended, and enactive (“4-Es”) views of cognition—a family of theories that emphasize the role of the body in cognition and the importance of brain-body-environment interaction over and above internal representation. This debate touches foundational issues, such as whether the brain internally represents the external environment, and “infers” or “computes” something. Here we focus on two (4-Es-based) criticisms to traditional cognitive theories—to the notions of passive perception and of serial information processing—and discuss alternative ways to address them, by appealing to frameworks that use, or do not use, notions of internal modelling and inference. Our analysis illustrates that: an explicitly inferential framework can capture some key aspects of embodied and enactive theories of cognition; some claims of computational and dynamical theories can be reconciled rather than seen as alternative explanations of cognitive phenomena; and some aspects of cognitive processing (e.g., detached cognitive operations, such as planning and imagination) that are sometimes puzzling to explain from enactive and non-representational perspectives can, instead, be captured nicely from the perspective that internal generative models and predictive processing mediate adaptive control loops. Full article
Open AccessArticle
Joint Characteristic Timescales and Entropy Production Analyses for Model Reduction of Combustion Systems
Entropy 2017, 19(6), 264; https://doi.org/10.3390/e19060264
Received: 15 April 2017 / Revised: 4 June 2017 / Accepted: 6 June 2017 / Published: 9 June 2017
Cited by 3 | Viewed by 1436 | PDF Full-text (2722 KB) | HTML Full-text | XML Full-text
Abstract
The reduction of chemical kinetics describing combustion processes remains one of the major topics in combustion theory and its applications. Problems concerning the estimation of the real dimension of reaction mechanisms remain unsolved, this being a critical point in the development of reduction models. In this study, we suggest a combination of local timescale and entropy production analyses to cope with this problem. In particular, the framework of skeletal mechanisms is the focus of the study as a practical and most straightforward implementation strategy for reduced mechanisms. Hydrogen and methane/dimethyl ether reaction mechanisms are considered for illustration and validation purposes. Two skeletal mechanism versions were obtained for the methane/dimethyl ether combustion system by varying the tolerance used to identify important reactions in the characteristic timescale analysis of the system. Comparisons of ignition delay times and species profiles calculated with the detailed and the reduced models are presented. The results show clearly the potential of the suggested approach to be implemented automatically for the reduction of large chemical kinetic models. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
Open AccessArticle
Exergy Dynamics of Systems in Thermal or Concentration Non-Equilibrium
Entropy 2017, 19(6), 263; https://doi.org/10.3390/e19060263
Received: 23 March 2017 / Revised: 29 May 2017 / Accepted: 2 June 2017 / Published: 8 June 2017
Cited by 6 | Viewed by 1494 | PDF Full-text (3288 KB) | HTML Full-text | XML Full-text
Abstract
The paper addresses the problem of the existence and quantification of the exergy of non-equilibrium systems. Assuming that both energy and exergy are a priori concepts, the Gibbs “available energy” A is calculated for arbitrary temperature or concentration distributions across the body, with an accuracy that depends only on the information one has of the initial distribution. It is shown that A relaxes exponentially to its equilibrium value, and it is then demonstrated that its value differs from that of the non-equilibrium exergy, the difference depending on the boundary conditions imposed on the system; thus, the two quantities are shown to be incommensurable. It is finally argued that all iso-energetic non-equilibrium states can be ranked in terms of their non-equilibrium exergy content, and that each point of the Gibbs plane therefore corresponds to a set of possible initial distributions, each one with its own exergy-decay history. The non-equilibrium exergy is always larger than its equilibrium counterpart and constitutes the “real” total exergy content of the system, i.e., the real maximum work extractable from the initial system. A systematic application of this paradigm may be beneficial for meaningful future applications in the fields of engineering and natural science. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle
Projection to Mixture Families and Rate-Distortion Bounds with Power Distortion Measures
Entropy 2017, 19(6), 262; https://doi.org/10.3390/e19060262
Received: 4 May 2017 / Revised: 22 May 2017 / Accepted: 5 June 2017 / Published: 7 June 2017
Cited by 1 | Viewed by 1676 | PDF Full-text (298 KB) | HTML Full-text | XML Full-text
Abstract
The explicit form of the rate-distortion function has rarely been obtained, except for few cases where the Shannon lower bound coincides with the rate-distortion function for the entire range of the positive rate. From an information geometrical point of view, the evaluation of the rate-distortion function is achieved by a projection to the mixture family defined by the distortion measure. In this paper, we consider the β -th power distortion measure, and prove that β -generalized Gaussian distribution is the only source that can make the Shannon lower bound tight at the minimum distortion level at zero rate. We demonstrate that the tightness of the Shannon lower bound for β = 1 (Laplacian source) and β = 2 (Gaussian source) yields upper bounds to the rate-distortion function of power distortion measures with a different power. These bounds evaluate from above the projection of the source distribution to the mixture family of the generalized Gaussian models. Applying similar arguments to ϵ -insensitive distortion measures, we consider the tightness of the Shannon lower bound and derive an upper bound to the distortion-rate function which is accurate at low rates. Full article
(This article belongs to the Special Issue Information Geometry II)
Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland RSS E-Mail Table of Contents Alert
Back to Top