Table of Contents

Entropy, Volume 20, Issue 3 (March 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The Integrated Information Theory (IIT) proposes that in order to quantify information integration [...]
Displaying articles 1-65
Open Access Article: Some Inequalities Combining Rough and Random Information
Entropy 2018, 20(3), 211; https://doi.org/10.3390/e20030211
Received: 1 February 2018 / Revised: 18 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
Viewed by 802 | PDF Full-text (744 KB) | HTML Full-text | XML Full-text
Abstract
Rough random theory, generally applied to statistics, decision-making, and related fields, is an extension of rough set theory and probability theory, in which a rough random variable is described as a random variable taking “rough variable” values. To extend and enrich the research area of rough random theory, in this paper the well-known probabilistic inequalities (the Markov, Chebyshev, Hölder, Minkowski and Jensen inequalities) are proven for rough random variables, which gives firm theoretical support to the further development of rough random theory. Moreover, considering that critical values act as a vital tool in engineering, science and other application fields, some significant properties of the critical values of rough random variables, involving continuity and monotonicity, are investigated in depth to provide a novel analytical approach for dealing with rough random optimization problems. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
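For reference, the classical forms of these inequalities for an ordinary random variable are shown below; the paper's contribution is proving rough random analogues of these, which this display does not attempt to reproduce:

```latex
\begin{align*}
&\text{Markov:}    && P(|X|\ge t) \le \frac{E[|X|]}{t}, \quad t>0\\
&\text{Chebyshev:} && P\big(|X-E[X]|\ge t\big) \le \frac{\mathrm{Var}(X)}{t^2}\\
&\text{H\"older:}  && E[|XY|] \le \big(E[|X|^p]\big)^{1/p}\big(E[|Y|^q]\big)^{1/q}, \quad \tfrac1p+\tfrac1q=1\\
&\text{Minkowski:} && \big(E[|X+Y|^p]\big)^{1/p} \le \big(E[|X|^p]\big)^{1/p}+\big(E[|Y|^p]\big)^{1/p}\\
&\text{Jensen:}    && f\big(E[X]\big) \le E\big[f(X)\big] \ \text{for convex } f
\end{align*}
```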
Open Access Article: Amplitude- and Fluctuation-Based Dispersion Entropy
Entropy 2018, 20(3), 210; https://doi.org/10.3390/e20030210
Received: 2 November 2017 / Revised: 5 February 2018 / Accepted: 13 March 2018 / Published: 20 March 2018
Cited by 1 | Viewed by 972 | PDF Full-text (956 KB) | HTML Full-text | XML Full-text
Abstract
Dispersion entropy (DispEn) is a recently introduced entropy metric to quantify the uncertainty of time series. It is fast and, so far, it has demonstrated very good performance in the characterisation of time series. It includes a mapping step, but the effect of different mappings has not been studied yet. Here, we investigate the effect of linear and nonlinear mapping approaches in DispEn. We also inspect the sensitivity of different parameters of DispEn to noise. Moreover, we develop fluctuation-based DispEn (FDispEn) as a measure to deal with only the fluctuations of time series. Furthermore, the original and fluctuation-based forbidden dispersion patterns are introduced to discriminate deterministic from stochastic time series. Finally, we compare the performance of DispEn, FDispEn, permutation entropy, sample entropy, and Lempel–Ziv complexity on two physiological datasets. The results show that DispEn is the most consistent technique to distinguish various dynamics of the biomedical signals. Due to their advantages over existing entropy methods, DispEn and FDispEn are expected to be broadly used for the characterization of a wide variety of real-world time series. The MATLAB codes used in this paper are freely available at http://dx.doi.org/10.7488/ds/2326. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article: Prior and Posterior Linear Pooling for Combining Expert Opinions: Uses and Impact on Bayesian Networks—The Case of the Wayfinding Model
Entropy 2018, 20(3), 209; https://doi.org/10.3390/e20030209
Received: 15 December 2017 / Revised: 17 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
Viewed by 1307 | PDF Full-text (1924 KB) | HTML Full-text | XML Full-text
Abstract
The use of expert knowledge to quantify a Bayesian Network (BN) is necessary when data is not available. This, however, raises questions regarding how opinions from multiple experts can be used in a BN. Linear pooling is a popular method for combining probability assessments from multiple experts. In particular, Prior Linear Pooling (PrLP), which pools opinions and then places them into the BN, is a common method. This paper considers this approach and an alternative pooling method, Posterior Linear Pooling (PoLP). The PoLP method constructs a BN for each expert, and then pools the resulting probabilities at the nodes of interest. The advantages and disadvantages of these two methods are identified and compared, and the methods are applied to an existing BN, the Wayfinding Bayesian Network Model, to investigate the behavior of different groups of people and how the different methods may capture such differences. The paper focusses on six nodes (Human Factors, Environmental Factors, Wayfinding, Communication, Visual Elements of Communication and Navigation Pathway) and three subgroups (Gender: Female, Male; Travel Experience: Experienced, Inexperienced; Travel Purpose: Business, Personal), and finds that different behaviors can indeed be captured by the different methods. Full article
(This article belongs to the Special Issue Foundations of Statistics)
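The difference between the two pooling orders can be illustrated on a toy two-node network A → B. The experts, weights and probabilities below are hypothetical and are not taken from the Wayfinding model:

```python
def linear_pool(dists, weights):
    """Weighted linear opinion pool of probability vectors."""
    return [sum(w * d[i] for w, d in zip(weights, dists))
            for i in range(len(dists[0]))]

def propagate(p_a, p_b_given_a):
    """P(B) = sum_a P(B|A=a) P(A=a) for a two-node chain A -> B."""
    return [sum(p_b_given_a[a][b] * p_a[a] for a in range(len(p_a)))
            for b in range(len(p_b_given_a[0]))]

# Two hypothetical experts assessing binary nodes A -> B.
experts = [
    {"p_a": [0.7, 0.3], "p_b_a": [[0.9, 0.1], [0.2, 0.8]]},
    {"p_a": [0.4, 0.6], "p_b_a": [[0.6, 0.4], [0.5, 0.5]]},
]
w = [0.5, 0.5]

# PrLP: pool the experts' inputs first, then propagate one BN.
pooled_a = linear_pool([e["p_a"] for e in experts], w)
pooled_b_a = [linear_pool([e["p_b_a"][a] for e in experts], w)
              for a in range(2)]
prlp = propagate(pooled_a, pooled_b_a)

# PoLP: propagate each expert's own BN, then pool the outputs.
polp = linear_pool([propagate(e["p_a"], e["p_b_a"]) for e in experts], w)
```

With these numbers PrLP gives P(B = b0) = 0.57 while PoLP gives 0.615, showing that the pooling order genuinely changes the answer.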

Open Access Article: Deconstructing Cross-Entropy for Probabilistic Binary Classifiers
Entropy 2018, 20(3), 208; https://doi.org/10.3390/e20030208
Received: 22 February 2018 / Revised: 16 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
Viewed by 1416 | PDF Full-text (1789 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we analyze the cross-entropy function, widely used in classifiers both as a performance measure and as an optimization objective. We contextualize cross-entropy in the light of Bayesian decision theory, the formal probabilistic framework for making decisions, and we thoroughly analyze its motivation, meaning and interpretation from an information-theoretical point of view. In this sense, this article presents several contributions: First, we explicitly analyze the contribution to cross-entropy of (i) prior knowledge; and (ii) the value of the features in the form of a likelihood ratio. Second, we introduce a decomposition of cross-entropy into two components: discrimination and calibration. This decomposition enables the measurement of different performance aspects of a classifier in a more precise way; and justifies previously reported strategies to obtain reliable probabilities by means of the calibration of the output of a discriminating classifier. Third, we give different information-theoretical interpretations of cross-entropy, which can be useful in different application scenarios, and which are related to the concept of reference probabilities. Fourth, we present an analysis tool, the Empirical Cross-Entropy (ECE) plot, a compact representation of cross-entropy and its aforementioned decomposition. We show the power of ECE plots, as compared to other classical performance representations, in two diverse experimental examples: a speaker verification system, and a forensic case where some glass findings are present. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
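The basic quantity under analysis, the cross-entropy of a binary classifier whose score is a log-likelihood ratio combined with a prior, can be sketched as follows. This is an illustrative sketch of the standard definition; the function and parameter names are ours, and it is not the authors' ECE tooling:

```python
import math

def cross_entropy(llrs, labels, prior=0.5):
    """Empirical cross-entropy (in bits) of a binary classifier whose
    per-trial output is a log-likelihood ratio (LLR), under a given prior.

    Posterior log-odds = LLR + log(prior / (1 - prior)); the cross-entropy
    is the average -log2 of the posterior assigned to the true class.
    """
    prior_lo = math.log(prior / (1 - prior))
    total = 0.0
    for llr, y in zip(llrs, labels):
        p1 = 1.0 / (1.0 + math.exp(-(llr + prior_lo)))  # posterior P(class 1)
        p = p1 if y == 1 else 1.0 - p1
        total += -math.log2(p)
    return total / len(llrs)
```

An uninformative classifier (all LLRs zero) scores exactly 1 bit at a 0.5 prior, the reference value against which ECE plots are read.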

Open Access Article: Application of Entropy Ensemble Filter in Neural Network Forecasts of Tropical Pacific Sea Surface Temperatures
Entropy 2018, 20(3), 207; https://doi.org/10.3390/e20030207
Received: 1 February 2018 / Revised: 13 March 2018 / Accepted: 15 March 2018 / Published: 20 March 2018
Cited by 1 | Viewed by 1014 | PDF Full-text (5543 KB) | HTML Full-text | XML Full-text
Abstract
Recently, the Entropy Ensemble Filter (EEF) method was proposed to mitigate the computational cost of the Bootstrap AGGregatING (bagging) method. This method uses the most informative training data sets in the model ensemble rather than all the ensemble members created by conventional bagging. In this study, we evaluate, for the first time, the application of the EEF method in Neural Network (NN) modeling of the El Niño-Southern Oscillation. Specifically, we forecast the first five principal components (PCs) of sea surface temperature monthly anomaly fields over the tropical Pacific, at different lead times (from 3 to 15 months, with a three-month increment) for the period 1979–2017. We apply the EEF method to a multiple-linear regression (MLR) model and two NN models, one using Bayesian regularization and one using the Levenberg-Marquardt algorithm for training, and evaluate their performance and computational efficiency relative to the same models with conventional bagging. All models perform equally well at lead times of 3 and 6 months, while at longer lead times the MLR model’s skill deteriorates faster than that of the nonlinear models. The neural network models with both bagging methods produce equally successful forecasts with the same computational efficiency. It remains to be shown whether this finding is sensitive to the dataset size. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
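The EEF selection idea, as we understand it from the abstract, can be sketched roughly as follows. This is a hedged sketch only: the histogram-entropy criterion, parameter names and defaults here are our illustrative choices, not the authors' exact formulation:

```python
import math
import random

def entropy_ensemble_filter(data, n_boot=20, keep=5, bins=10, seed=0):
    """Sketch of the EEF idea: draw bootstrap resamples, estimate each
    resample's Shannon entropy from a histogram, and keep only the most
    informative (highest-entropy) resamples for ensemble training."""
    rng = random.Random(seed)
    lo, hi = min(data), max(data)

    def hist_entropy(sample):
        counts = [0] * bins
        for v in sample:
            idx = min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins))
            counts[idx] += 1
        probs = [c / len(sample) for c in counts if c]
        return -sum(p * math.log(p) for p in probs)

    resamples = [[rng.choice(data) for _ in data] for _ in range(n_boot)]
    resamples.sort(key=hist_entropy, reverse=True)
    return resamples[:keep]
```

Training one model per kept resample, instead of per bootstrap draw, is what reduces the cost relative to plain bagging.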

Open Access Article: Irreversibility and Action of the Heat Conduction Process
Entropy 2018, 20(3), 206; https://doi.org/10.3390/e20030206
Received: 24 January 2018 / Revised: 21 February 2018 / Accepted: 14 March 2018 / Published: 20 March 2018
Cited by 4 | Viewed by 1042 | PDF Full-text (604 KB) | HTML Full-text | XML Full-text
Abstract
Irreversibility (that is, the “one-sidedness” of time) of a physical process can be characterized by using Lyapunov functions in the modern theory of stability. In this theoretical framework, entropy and its production rate have generally been regarded as Lyapunov functions in order to measure the irreversibility of various physical processes. In fact, the Lyapunov function is not always unique. In the present work, a rigorous proof is given that the entransy and its dissipation rate can also serve as Lyapunov functions associated with the irreversibility of the heat conduction process without the conversion between heat and work. In addition, the variation of the entransy dissipation rate can lead to Fourier’s heat conduction law, while the entropy production rate cannot. This shows that the entransy dissipation rate, rather than the entropy production rate, is the unique action for the heat conduction process, and can be used to establish the finite element method for the approximate solution of heat conduction problems and the optimization of heat transfer processes. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)

Open Access Article: Generalized Lagrangian Path Approach to Manifestly-Covariant Quantum Gravity Theory
Entropy 2018, 20(3), 205; https://doi.org/10.3390/e20030205
Received: 10 January 2018 / Revised: 25 February 2018 / Accepted: 8 March 2018 / Published: 19 March 2018
Cited by 1 | Viewed by 1396 | PDF Full-text (410 KB) | HTML Full-text | XML Full-text
Abstract
A trajectory-based representation for the quantum theory of the gravitational field is formulated. This is achieved in terms of a covariant Generalized Lagrangian-Path (GLP) approach which relies on a suitable statistical representation of Bohmian Lagrangian trajectories, referred to here as GLP-representation. The result is established in the framework of the manifestly-covariant quantum gravity theory (CQG-theory) proposed recently and the related CQG-wave equation advancing in proper-time the quantum state associated with massive gravitons. Generally non-stationary analytical solutions for the CQG-wave equation with non-vanishing cosmological constant are determined in such a framework, which exhibit Gaussian-like probability densities that are non-dispersive in proper-time. As a remarkable outcome of the theory achieved by implementing these analytical solutions, the existence of an emergent gravity phenomenon is proven to hold. Accordingly, it is shown that a mean-field background space-time metric tensor can be expressed in terms of a suitable statistical average of stochastic fluctuations of the quantum gravitational field whose quantum-wave dynamics is described by GLP trajectories. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open Access Article: Output-Feedback Control for Discrete-Time Spreading Models in Complex Networks
Entropy 2018, 20(3), 204; https://doi.org/10.3390/e20030204
Received: 3 February 2018 / Revised: 6 March 2018 / Accepted: 7 March 2018 / Published: 19 March 2018
Cited by 1 | Viewed by 975 | PDF Full-text (820 KB) | HTML Full-text | XML Full-text
Abstract
The problem of stabilizing a spreading process to a prescribed probability distribution over a complex network is considered, where the dynamics of the nodes in the network are given by discrete-time Markov-chain processes. Conditions for the positioning and identification of actuators and sensors are provided, and sufficient conditions for the exponential stability of the desired distribution are derived. Simulation results for a network of N = 10^6 nodes corroborate our theoretical findings. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)

Open Access Feature Paper Article: Computational Information Geometry for Binary Classification of High-Dimensional Random Tensors
Entropy 2018, 20(3), 203; https://doi.org/10.3390/e20030203
Received: 25 January 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 17 March 2018
Viewed by 746 | PDF Full-text (520 KB) | HTML Full-text | XML Full-text
Abstract
Evaluating the performance of Bayesian classification in a high-dimensional random tensor is a fundamental problem, usually difficult and under-studied. In this work, we consider two Signal to Noise Ratio (SNR)-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero SNR, the observed signals are either a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size N_q × R, 1 ≤ q ≤ Q, where R and N_q grow large with R^{1/q}/N_q converging towards a finite constant, or a noisy tensor admitting a Tucker Decomposition (TKD) of multilinear (M_1, ..., M_Q)-rank with large factors of size N_q × M_q, 1 ≤ q ≤ Q, where N_q and M_q grow large with M_q/N_q converging towards a finite constant. The classification of the random entries (coefficients) of the core tensor in the CPD/TKD is hard to study since the exact derivation of the minimal Bayes’ error probability is mathematically intractable. To circumvent this difficulty, the Chernoff Upper Bound (CUB) for larger SNR and the Fisher information at low SNR are derived and studied, based on information geometry theory. The tightest CUB is reached for the value minimizing the error exponent, denoted by s*. In general, due to the asymmetry of the s-divergence, the Bhattacharyya Upper Bound (BUB) (that is, the Chernoff information calculated at s = 1/2) cannot solve this problem effectively. As a consequence, we rely on a costly numerical optimization strategy to find s*. However, thanks to powerful random matrix theory tools, a simple analytical expression of s* is provided with respect to the SNR in the two schemes considered. This work shows that the BUB is the tightest bound at low SNRs; however, for higher SNRs this property no longer holds. Full article

Open Access Article: Global Reliability Sensitivity Analysis Based on Maximum Entropy and 2-Layer Polynomial Chaos Expansion
Entropy 2018, 20(3), 202; https://doi.org/10.3390/e20030202
Received: 5 February 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
Viewed by 927 | PDF Full-text (3431 KB) | HTML Full-text | XML Full-text
Abstract
To quantify the contributions of uncertain input variables to a statistical parameter of a given model, e.g., reliability, global reliability sensitivity analysis (GRSA) provides an appropriate tool. However, global reliability sensitivity indices may be more difficult to calculate than the traditional global sensitivity indices of the model output, because statistical parameters are harder to obtain; Monte Carlo simulation (MCS)-related methods seem to be the only option for GRSA, but they are usually computationally demanding. This paper presents a new non-MCS calculation to evaluate global reliability sensitivity indices. The method proposes: (i) a 2-layer polynomial chaos expansion (PCE) framework to solve for the global reliability sensitivity indices; and (ii) an efficient way to build a surrogate model of the statistical parameter using the maximum entropy (ME) method with the moments provided by the PCE. This method has a dramatically reduced computational cost compared with traditional approaches. Two examples are introduced to demonstrate the efficiency and accuracy of the proposed method. The results also suggest that the importance ranking for the model output and that for the associated failure probability may differ, which could help improve the understanding of the given model in further optimization design. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Open Access Article: Global Optimization Employing Gaussian Process-Based Bayesian Surrogates
Entropy 2018, 20(3), 201; https://doi.org/10.3390/e20030201
Received: 22 December 2017 / Revised: 8 March 2018 / Accepted: 13 March 2018 / Published: 16 March 2018
Cited by 1 | Viewed by 1336 | PDF Full-text (639 KB) | HTML Full-text | XML Full-text
Abstract
The simulation of complex physics models may lead to enormous computer running times. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. If an output data set has been acquired for a few input parameter settings, one may take these data as a basis for finding an extremum, and possibly an input parameter set for further computer simulations to determine it, a task which belongs to the realm of global optimization. Within the Bayesian framework, we utilize Gaussian processes to create a surrogate model function, adjusted self-consistently via hyperparameters to represent the data. Although the probability distribution of the hyperparameters may be widely spread over phase space, we assume that using only their expectation values is sufficient. While this shortcut facilitates a quickly accessible surrogate, it is somewhat justified by the fact that we are not interested in a full representation of the model by the surrogate, but in revealing its maximum. To accomplish this, the surrogate is fed to a utility function whose extremum determines the new parameter set for the next data point to obtain. Moreover, we propose to alternate between two utility functions, expected improvement and maximum variance, in order to avoid the drawbacks of each. Subsequent data points are drawn from the model function until the procedure either settles on the points already found or the surrogate model no longer changes with the iteration. The procedure is applied to mock data in one and two dimensions in order to demonstrate proof of principle of the proposed approach. Full article

Open Access Feature Paper Article: Leggett-Garg Inequalities for Quantum Fluctuating Work
Entropy 2018, 20(3), 200; https://doi.org/10.3390/e20030200
Received: 1 February 2018 / Revised: 28 February 2018 / Accepted: 8 March 2018 / Published: 16 March 2018
Viewed by 924 | PDF Full-text (317 KB) | HTML Full-text | XML Full-text
Abstract
The Leggett-Garg inequalities serve to test whether or not quantum correlations in time can be explained within a classical macrorealistic framework. We apply this test to thermodynamics and derive a set of Leggett-Garg inequalities for the statistics of fluctuating work done on a quantum system unitarily driven in time. It is shown that these inequalities can be violated in a driven two-level system, thereby demonstrating that there exists no general macrorealistic description of quantum work. These violations are shown to emerge within the standard Two-Projective-Measurement scheme as well as for alternative definitions of fluctuating work that are based on weak measurement. Our results elucidate the influences of temporal correlations on work extraction in the quantum regime and highlight a key difference between quantum and classical thermodynamics. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)

Open Access Article: Criticality Analysis of the Lower Ionosphere Perturbations Prior to the 2016 Kumamoto (Japan) Earthquakes as Based on VLF Electromagnetic Wave Propagation Data Observed at Multiple Stations
Entropy 2018, 20(3), 199; https://doi.org/10.3390/e20030199
Received: 17 February 2018 / Revised: 12 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
Cited by 3 | Viewed by 1125 | PDF Full-text (3502 KB) | HTML Full-text | XML Full-text
Abstract
The perturbations of the ionosphere which are observed prior to significant earthquakes (EQs) have long been investigated and could be considered promising for short-term EQ prediction. One way to monitor ionospheric perturbations is by studying VLF/LF electromagnetic wave propagation through the lower ionosphere between specific transmitters and receivers. For this purpose, a network of eight receivers has been deployed throughout Japan which receive subionospheric signals from different transmitters located both in the same and other countries. In this study, we analyze, in terms of the recently proposed natural time analysis, the data recorded by the above-mentioned network prior to the catastrophic 2016 Kumamoto fault-type EQs, which were comparable in size to the 1995 Kobe EQ. These EQs occurred within a two-day period (14 April: M_W = 6.2 and M_W = 6.0; 15 April: M_W = 7.0) at shallow depths (~10 km), while their epicenters were adjacent. Our results show that lower ionospheric perturbations present critical dynamics from two weeks up to two days before the main shock occurrence. The results are compared to those of the conventional nighttime fluctuation method obtained for the same dataset and exhibit consistency. Finally, the temporal evolutions of criticality in the ionospheric parameters and those in the lithosphere, as seen from the ULF electromagnetic emissions, are discussed in the context of lithosphere-atmosphere-ionosphere coupling. Full article

Open Access Article: Modulation Signal Recognition Based on Information Entropy and Ensemble Learning
Entropy 2018, 20(3), 198; https://doi.org/10.3390/e20030198
Received: 30 January 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
Cited by 2 | Viewed by 951 | PDF Full-text (1772 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, signal recognition theory and algorithms based on information entropy and ensemble learning are proposed. We extract 16 kinds of entropy features from 9 types of modulated signals. The types of information entropy used are numerous, including Rényi entropy and energy entropy based on the S Transform and the Generalized S Transform. We use three feature selection algorithms, sequential forward selection (SFS), sequential forward floating selection (SFFS) and RELIEF-F, to select the optimal feature subset from the 16 entropy features. We use five classifiers, k-nearest neighbor (KNN), support vector machine (SVM), AdaBoost, Gradient Boosting Decision Tree (GBDT) and eXtreme Gradient Boosting (XGBoost), to classify the original feature set and the feature subsets selected by the different feature selection algorithms. The simulation results show that the feature subsets selected by the SFS and SFFS algorithms are the best, with a 48% increase in recognition rate over the original feature set when using the KNN classifier and a 34% increase when using the SVM classifier. For the other three classifiers, the original feature set achieves the best recognition performance. The XGBoost classifier has the best recognition performance overall: the recognition rate is 97.74%, and it reaches 82% when the signal-to-noise ratio (SNR) is −10 dB. Full article
(This article belongs to the Special Issue Radar and Information Theory)
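Of the feature selection algorithms mentioned, sequential forward selection is the simplest to sketch. This is a generic greedy implementation; the score function and names are illustrative and not the paper's code:

```python
def sequential_forward_selection(features, score, k):
    """Greedy SFS sketch: `features` is a list of feature names and
    `score(subset)` returns a quality estimate (e.g. a cross-validated
    recognition rate). Features are added one at a time, each time the one
    that maximises the score of the enlarged subset, until k are chosen."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

SFFS extends this with a backward step that drops a previously selected feature whenever doing so improves the score.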

Open Access Article: Low Probability of Intercept-Based Radar Waveform Design for Spectral Coexistence of Distributed Multiple-Radar and Wireless Communication Systems in Clutter
Entropy 2018, 20(3), 197; https://doi.org/10.3390/e20030197
Received: 17 December 2017 / Revised: 21 February 2018 / Accepted: 23 February 2018 / Published: 16 March 2018
Viewed by 1201 | PDF Full-text (1067 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the problem of low probability of intercept (LPI)-based radar waveform design for a distributed multiple-radar system (DMRS) is studied, in which multiple radars coexist with a wireless communication system in the same frequency band. The primary objective of the multiple-radar system is to minimize the total transmitted energy by optimizing the transmission waveform of each radar, with the communication signals acting as interference to the radar system, while meeting a desired target detection/characterization performance. Firstly, the signal-to-clutter-plus-noise ratio (SCNR) and mutual information (MI) are used as practical metrics to evaluate target detection and characterization performance, respectively. Then, the SCNR- and MI-based optimal radar waveform optimization methods are formulated. The resulting waveform optimization problems are solved through the well-known bisection search technique. Simulation results demonstrate, using various examples and scenarios, that the proposed radar waveform design schemes can evidently improve the LPI performance of the DMRS without interfering with friendly communications. Full article
(This article belongs to the Special Issue Radar and Information Theory)

Open Access Article: Real-Time ECG-Based Detection of Fatigue Driving Using Sample Entropy
Entropy 2018, 20(3), 196; https://doi.org/10.3390/e20030196
Received: 3 February 2018 / Revised: 3 March 2018 / Accepted: 13 March 2018 / Published: 15 March 2018
Cited by 3 | Viewed by 1172 | PDF Full-text (2564 KB) | HTML Full-text | XML Full-text
Abstract
In the present work, heart rate variability (HRV) characteristics, calculated by sample entropy (SampEn), were used to analyze the driving fatigue state at successive driving stages. Combined with the relative power spectrum ratio β/(θ + α), a subjective questionnaire, and brain network parameters of electroencephalogram (EEG) signals, the relationships between the different characteristics of driving fatigue were discussed. It can thus be concluded that the HRV characteristics (RR SampEn and R-peak SampEn), as well as the relative power spectrum ratio β/(θ + α) of the channels (C3, C4, P3, P4), the subjective questionnaire, and the brain network parameters, can effectively detect driving fatigue at various driving stages. In addition, the method for collecting ECG signals from the palm does not require patch electrodes, is convenient, and will be practical to use in actual driving situations in the future. Full article
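Sample entropy itself is a standard, well-documented statistic; a minimal reference implementation (plain Python, not the authors' code) is:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates within
    tolerance r (Chebyshev distance), A the same for length m + 1;
    self-matches are excluded, per the standard definition."""
    n = len(series)

    def matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

In practice the tolerance r is commonly set to 0.2 times the standard deviation of the series; lower SampEn indicates a more regular signal, e.g. a less variable RR-interval sequence.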
Open AccessArticle Trustworthiness Measurement Algorithm for TWfMS Based on Software Behaviour Entropy
Entropy 2018, 20(3), 195; https://doi.org/10.3390/e20030195
Received: 3 February 2018 / Revised: 10 March 2018 / Accepted: 10 March 2018 / Published: 14 March 2018
Cited by 2 | Viewed by 1441 | PDF Full-text (783 KB) | HTML Full-text | XML Full-text
Abstract
As the virtual mirror of the complex real-time business processes of organisations’ underlying information systems, the workflow management system (WfMS) has emerged in recent decades as a new self-autonomous paradigm in open, dynamic, distributed computing environments. To construct a trustworthy workflow management system (TWfMS), designing a software behaviour trustworthiness measurement algorithm is an urgent task for researchers. Alongside the trustworthiness mechanism, a measurement algorithm that can cope with the uncertain software-behaviour trustworthiness information of the WfMS must be provided as part of the infrastructure. Based on the framework presented in our prior research, we first introduce a formal model for WfMS trustworthiness measurement, with the main property reasoning based on calculus operators. Second, this paper proposes a novel measurement algorithm derived from the software behaviour entropy of the calculus operators through the principle of maximum entropy (POME) and data mining methods. Third, the trustworthiness measurement algorithm for incomplete software behaviour tests and runtime information is discussed and compared by means of a detailed explanation. Finally, we provide conclusions and discuss future research areas of the TWfMS. Full article
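The principle of maximum entropy (POME) step can be illustrated in miniature: among distributions on a finite set with a prescribed mean, the entropy maximizer has the Gibbs form p_i ∝ exp(−λ·v_i), with λ fixed by the mean constraint. A generic sketch (unrelated to the paper's calculus operators; λ is found by one-dimensional bisection, since the mean is monotonically decreasing in λ):

```python
import math

def maxent_mean(values, mu, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy distribution on a finite set with prescribed mean mu:
    p_i ~ exp(-lam * v_i), with lam found by bisection on the mean."""
    def weights(lam):
        a = [-lam * v for v in values]
        m = max(a)                        # shift exponents for stability
        return [math.exp(x - m) for x in a]

    def mean(lam):
        w = weights(lam)
        return sum(v * wi for v, wi in zip(values, w)) / sum(w)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > mu:                # mean too high: raise lam
            lo = mid
        else:
            hi = mid
    w = weights(0.5 * (lo + hi))
    z = sum(w)
    return [wi / z for wi in w]
```

For values 1..6 with mean 3.5 the constraint is uninformative, so POME returns the uniform distribution, as expected.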
Open AccessArticle Estimating Multivariate Discrete Distributions Using Bernstein Copulas
Entropy 2018, 20(3), 194; https://doi.org/10.3390/e20030194
Received: 18 December 2017 / Revised: 1 March 2018 / Accepted: 7 March 2018 / Published: 14 March 2018
Cited by 1 | Viewed by 1118 | PDF Full-text (3054 KB) | HTML Full-text | XML Full-text
Abstract
Measuring the dependence between random variables is one of the most fundamental problems in statistics, and therefore, determining the joint distribution of the relevant variables is crucial. Copulas have recently become an important tool for properly inferring the joint distribution of the variables of interest. Although many studies have addressed the case of continuous variables, few have focused on discrete variables. This paper presents a nonparametric approach to the estimation of joint discrete distributions with bounded support using copulas and Bernstein polynomials. We present an application to real obsessive-compulsive disorder data. Full article
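The Bernstein smoothing idea can be shown on a one-dimensional CDF (a sketch with hypothetical helper names; the paper's multivariate copula construction is more involved):

```python
from math import comb

def empirical_cdf(data):
    """Step-function empirical CDF of a sample."""
    s, n = sorted(data), len(data)
    return lambda x: sum(v <= x for v in s) / n

def bernstein_smooth(F, K):
    """Degree-K Bernstein-polynomial smoothing of a CDF F on [0, 1]:
    B(u) = sum_k F(k/K) * C(K, k) * u^k * (1 - u)^(K - k)."""
    def B(u):
        return sum(F(k / K) * comb(K, k) * u ** k * (1 - u) ** (K - k)
                   for k in range(K + 1))
    return B
```

Bernstein polynomials reproduce linear functions exactly, so smoothing the uniform CDF F(u) = u returns it unchanged; applied to an empirical CDF, the same operator yields a smooth polynomial estimate.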
Open AccessArticle An Information-Theoretic Perspective on the Quantum Bit Commitment Impossibility Theorem
Entropy 2018, 20(3), 193; https://doi.org/10.3390/e20030193
Received: 16 January 2018 / Revised: 17 February 2018 / Accepted: 12 March 2018 / Published: 13 March 2018
Viewed by 842 | PDF Full-text (394 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a different approach to pinpointing the causes for which an unconditionally secure quantum bit commitment protocol cannot be realized, beyond the technical details on which the proof of Mayers’ no-go theorem is constructed. We adopt the tools of quantum entropy analysis to investigate the conditions under which the security properties of quantum bit commitment can be circumvented. Our study reveals that cheating the binding property requires both the use of entanglement and that the quantum system acting as the safe harbor the same amount of uncertainty with respect to each observer (Alice and Bob). Our analysis also suggests that the ability of either participant to cheat one of the two fundamental properties of bit commitment depends on how much information is leaked from one side of the system to the other and how much remains hidden from the other participant. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness)
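The entropic statement above — that an entangled "safe" looks maximally uncertain to each observer separately — is the standard von Neumann entropy computation for a Bell pair; a NumPy sketch (illustrative, not the paper's derivation):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lam_i log2(lam_i) over the nonzero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

# Two-qubit Bell state |phi+> = (|00> + |11>) / sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

# Reduced states: reshape to indices (a, b, a', b') and trace out one qubit
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.trace(rho4, axis1=1, axis2=3)   # Alice's view (B traced out)
rho_B = np.trace(rho4, axis1=0, axis2=2)   # Bob's view (A traced out)
```

The joint state is pure (zero entropy), while each reduced state is maximally mixed (one bit): both observers face the same, maximal uncertainty about the safe.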
Open AccessArticle Thermodynamic Properties of a Regular Black Hole in Gravity Coupling to Nonlinear Electrodynamics
Entropy 2018, 20(3), 192; https://doi.org/10.3390/e20030192
Received: 29 September 2017 / Revised: 25 November 2017 / Accepted: 22 January 2018 / Published: 13 March 2018
Cited by 3 | Viewed by 892 | PDF Full-text (263 KB) | HTML Full-text | XML Full-text
Abstract
We first calculate the heat capacities of the nonlinear electrodynamics (NED) black hole for fixed mass and electric charge, and the electric capacitances for fixed mass and entropy. Then, we study the properties of the Ruppeiner thermodynamic geometry of the NED black hole. Lastly, some discussions on the thermal stability of the NED black hole and the implication to the flatness of its Ruppeiner thermodynamic geometry are given. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics II)
Open AccessArticle Gaussian Processes and Polynomial Chaos Expansion for Regression Problem: Linkage via the RKHS and Comparison via the KL Divergence
Entropy 2018, 20(3), 191; https://doi.org/10.3390/e20030191
Received: 21 January 2018 / Revised: 6 March 2018 / Accepted: 12 March 2018 / Published: 12 March 2018
Viewed by 1213 | PDF Full-text (1199 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we examine two widely-used approaches, the polynomial chaos expansion (PCE) and Gaussian process (GP) regression, for the development of surrogate models. The theoretical differences between the PCE and GP approximations are discussed. A state-of-the-art PCE approach is constructed based on high-precision quadrature points; however, the need for truncation may result in potential precision loss. The GP approach performs well on small datasets and allows a fine and precise trade-off between fitting the data and smoothing, but its overall performance depends largely on the training dataset. The reproducing kernel Hilbert space (RKHS) and Mercer’s theorem are introduced to form a linkage between the two methods: the theorem shows that the two surrogates can be embedded in two isomorphic RKHSs, by which we propose a novel method named Gaussian process on polynomial chaos basis (GPCB) that incorporates the PCE and GP. A theoretical comparison is made between the PCE and GPCB with the help of the Kullback–Leibler divergence. We show that the GPCB is as stable and accurate as the PCE method. Furthermore, the GPCB is a one-step Bayesian method that chooses the best subset of the RKHS in which the true function should lie, while the PCE method requires an adaptive procedure. Simulations of 1D and 2D benchmark functions show that GPCB outperforms both the PCE and classical GP methods. In order to solve high-dimensional problems, a random sample scheme with a constructive design (i.e., a tensor product of quadrature points) is proposed to generate a valid training dataset for the GPCB method. This approach exploits the high numerical accuracy of the quadrature points while ensuring computational feasibility. Finally, the experimental results show that our sampling strategy achieves higher accuracy than classical experimental designs and is suitable for solving high-dimensional problems. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
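The Kullback–Leibler divergence used for such surrogate comparisons has a closed form whenever both predictive distributions are Gaussian; a sketch of that standard formula (not the paper's full GPCB comparison):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL(N(mu0, S0) || N(mu1, S1)) in nats, via the closed form
    0.5 * [tr(S1^-1 S0) + (mu1 - mu0)^T S1^-1 (mu1 - mu0)
           - k + ln(det S1 / det S0)]."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = np.asarray(mu1) - np.asarray(mu0)
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

As a sanity check, identical Gaussians give zero divergence, and two unit-variance 1-D Gaussians a unit mean apart give KL = 0.5 nats.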
Open AccessArticle Game Theoretic Approach for Systematic Feature Selection: Application in False Alarm Detection in Intensive Care Units
Entropy 2018, 20(3), 190; https://doi.org/10.3390/e20030190
Received: 10 January 2018 / Revised: 27 February 2018 / Accepted: 5 March 2018 / Published: 12 March 2018
Cited by 1 | Viewed by 1039 | PDF Full-text (472 KB) | HTML Full-text | XML Full-text
Abstract
Intensive Care Units (ICUs) are equipped with many sophisticated sensors and monitoring devices to provide the highest quality of care for critically ill patients. However, these devices might generate false alarms that reduce the standard of care and result in desensitization of caregivers to alarms. Therefore, reducing the number of false alarms is of great importance. Many approaches, such as signal processing, machine learning, and the design of more accurate sensors, have been developed for this purpose. However, the significant intrinsic correlation among the features extracted from different sensors has been mostly overlooked. A majority of current data mining techniques fail to capture such correlation among the collected signals from different sensors, which limits their alarm recognition capabilities. Here, we propose a novel information-theoretic predictive modeling technique based on coalition game theory to enhance the accuracy of false alarm detection in ICUs by accounting for the synergistic power of signal attributes in the feature selection stage. This approach brings together techniques from information theory and game theory to account for inter-feature mutual information in determining the predictors most correlated with false alarms, by calculating the Banzhaf power of each feature. The numerical results show that the proposed method can enhance classification accuracy and improve the area under the ROC (receiver operating characteristic) curve compared to other feature selection techniques, when integrated into classifiers such as Bayes-Net that consider inter-feature dependencies. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
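The Banzhaf power index itself is standard; for a small weighted voting game it can be computed by brute-force enumeration of coalitions (an illustrative sketch — the paper applies the same idea to feature subsets with information-theoretic payoffs, not voting weights):

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf power: for each player, count the coalitions of
    the other players that lose without the player but win once it joins."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                w = sum(weights[p] for p in coalition)
                if w < quota <= w + weights[i]:   # i is a swing voter here
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings] if total else swings
```

For example, weights [4, 3, 2, 1] with quota 6 give indices [5/12, 1/4, 1/4, 1/12]: the second and third players are equally powerful despite different weights, which is exactly the kind of synergy a raw weight (or a univariate relevance score) misses.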
Open AccessArticle An Investigation into the Relationship among Psychiatric, Demographic and Socio-Economic Variables with Bayesian Network Modeling
Entropy 2018, 20(3), 189; https://doi.org/10.3390/e20030189
Received: 29 January 2018 / Revised: 6 March 2018 / Accepted: 9 March 2018 / Published: 12 March 2018
Viewed by 998 | PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this paper is to investigate the factors influencing the Beck Depression Inventory score, the Beck Hopelessness Scale score and the Rosenberg Self-Esteem score, and the relationships among the psychiatric, demographic and socio-economic variables, with Bayesian network modeling. The data, from 823 university students, consist of 21 continuous and discrete relevant psychiatric, demographic and socio-economic variables. After discretization of the continuous variables by two approaches, two Bayesian network models are constructed using the bnlearn package in R, and the results are presented via figures and probabilities. One of the most significant results is that, in the first Bayesian network model, the gender of the students influences the level of depression, with female students being more depressive. In the second model, social activity directly influences the level of depression. In each model, depression influences both the level of hopelessness and the level of self-esteem in students; additionally, as the level of depression increases, the level of hopelessness increases, but the level of self-esteem drops. Full article
(This article belongs to the Special Issue Foundations of Statistics)
Open AccessArticle Some Iterative Properties of (F1, F2)-Chaos in Non-Autonomous Discrete Systems
Entropy 2018, 20(3), 188; https://doi.org/10.3390/e20030188
Received: 24 January 2018 / Revised: 3 March 2018 / Accepted: 7 March 2018 / Published: 12 March 2018
Cited by 2 | Viewed by 716 | PDF Full-text (249 KB) | HTML Full-text | XML Full-text
Abstract
This paper is concerned with the invariance of (F1, F2)-scrambled sets under iterations. The main results are an extension of the compound invariance of Li–Yorke chaos and distributional chaos. New definitions of (F1, F2)-scrambled sets in non-autonomous discrete systems are given. For a positive integer k, the properties P(k) and Q(k) of Furstenberg families are introduced. It is shown that, for any positive integer k and any s ∈ [0, 1], the Furstenberg family M̄(s) has properties P(k) and Q(k), where M̄(s) denotes the family of all infinite subsets of Z+ whose upper density is not less than s. Then, the following conclusion is obtained: D is an (M̄(s), M̄(t))-scrambled set of (X, f_{1,∞}) if and only if D is an (M̄(s), M̄(t))-scrambled set of (X, f_{1,∞}^{[m]}). Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
Open AccessDiscussion On the Possibility of Calculating Entropy, Free Energy, and Enthalpy of Vitreous Substances
Entropy 2018, 20(3), 187; https://doi.org/10.3390/e20030187
Received: 26 January 2018 / Revised: 23 February 2018 / Accepted: 8 March 2018 / Published: 11 March 2018
Cited by 2 | Viewed by 1088 | PDF Full-text (557 KB) | HTML Full-text | XML Full-text
Abstract
A critical analysis of the arguments in support of, and against, the traditional approach to the thermodynamics of the vitreous state is provided. In this approach one presumes a continuous variation of the entropy in the glass-liquid transition temperature range, a “continuous entropy approach” towards 0 K, which produces a positive value of the entropy at T → 0 K. I find that the arguments given against this traditional approach rest on a different understanding of the thermodynamics of the glass transition on cooling a liquid, because they suggest a discontinuity, an “entropy loss approach”, in the variation of entropy in the glass-liquid transition range. That view is based on: (1) an unjustifiable use of classical Boltzmann statistics for interpreting the value of entropy at absolute zero; (2) the rejection of thermodynamic analysis of systems with broken ergodicity, even though the possibility of such analysis was proposed already by Gibbs; (3) the possibility of a finite change in entropy of a system without absorption or release of heat; and (4) describing the thermodynamic properties of glasses in terms of functions instead of functionals. The last point matters because for glasses the entropy and enthalpy are not functions of the state, but functionals, as defined in Gibbs’ classification. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)
Open AccessArticle Conformal Flattening for Deformed Information Geometries on the Probability Simplex
Entropy 2018, 20(3), 186; https://doi.org/10.3390/e20030186
Received: 20 February 2018 / Revised: 8 March 2018 / Accepted: 8 March 2018 / Published: 10 March 2018
Cited by 1 | Viewed by 796 | PDF Full-text (253 KB) | HTML Full-text | XML Full-text
Abstract
Recent progress of theories and applications regarding statistical models with generalized exponential functions in statistical science is giving an impact on the movement to deform the standard structure of information geometry. For this purpose, various representing functions are playing central roles. In this paper, we consider two important notions in information geometry, i.e., invariance and dual flatness, from a viewpoint of representing functions. We first characterize a pair of representing functions that realizes the invariant geometry by solving a system of ordinary differential equations. Next, by proposing a new transformation technique, i.e., conformal flattening, we construct dually flat geometries from a certain class of non-flat geometries. Finally, we apply the results to demonstrate several properties of gradient flows on the probability simplex. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
Open AccessArticle A Lower Bound on the Differential Entropy of Log-Concave Random Vectors with Applications
Entropy 2018, 20(3), 185; https://doi.org/10.3390/e20030185
Received: 18 January 2018 / Revised: 6 March 2018 / Accepted: 6 March 2018 / Published: 9 March 2018
Cited by 2 | Viewed by 980 | PDF Full-text (496 KB) | HTML Full-text | XML Full-text
Abstract
We derive a lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and the distortion measure d(x, x̂) = |x − x̂|^r, with r ≥ 1, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most log √(πe) ≈ 1.5 bits, independently of r and the target distortion d. For mean-square error distortion, the difference is at most log √(πe/2) ≈ 1 bit, regardless of d. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most log √(πe/2) ≈ 1 bit. Our results generalize to the case of a random vector X with possibly dependent coordinates. Our proof technique leverages tools from convex geometry. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
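The stated constants are easy to check numerically, assuming base-2 logarithms (the bounds are quoted in bits) and the constants √(πe) and √(πe/2):

```python
import math

# Gap between rate-distortion function and Shannon lower bound: log sqrt(pi*e)
gap_rd = 0.5 * math.log2(math.pi * math.e)
# Mean-square error / Gaussian-capacity gap: log sqrt(pi*e / 2)
gap_mse = 0.5 * math.log2(math.pi * math.e / 2)
```

Numerically, gap_rd ≈ 1.55 and gap_mse ≈ 1.05 bits, matching the rounded "1.5 bits" and "1 bit" figures quoted above.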
Open AccessArticle Fisher Information Based Meteorological Factors Introduction and Features Selection for Short-Term Load Forecasting
Entropy 2018, 20(3), 184; https://doi.org/10.3390/e20030184
Received: 21 January 2018 / Revised: 26 February 2018 / Accepted: 7 March 2018 / Published: 9 March 2018
Viewed by 889 | PDF Full-text (945 KB) | HTML Full-text | XML Full-text
Abstract
Weather information is an important factor in short-term load forecasting (STLF). However, for a long time, more importance has been attached to forecasting models than to other steps of the process, such as the introduction of weather factors or feature selection for STLF. The main aim of this paper is to develop a novel methodology based on Fisher information for introducing meteorological variables and selecting features in STLF. Fisher information computation for one-dimensional and multidimensional weather variables is first described, and then the introduction of meteorological factors and variable selection for STLF models are discussed in detail. On this basis, different forecasting models with the proposed methodology are established. The proposed methodology is implemented on real data obtained from the Electric Power Utility of Zhenjiang, Jiangsu Province, in southeast China. The results show the advantages of the proposed methodology over traditional ones in terms of prediction accuracy, and it has clear practical value. Therefore, it can be used as a unified method for introducing weather variables into STLF models and selecting their features. Full article
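Fisher information of a variable can be estimated by Monte Carlo as the mean squared score; a sketch for the Gaussian location family, where the exact answer 1/σ² is known (illustrative only — the paper's estimator for weather variables is its own construction):

```python
import math
import random

def fisher_location_mc(sigma, n=200000, eps=1e-4, seed=0):
    """Monte-Carlo estimate of the Fisher information of the Gaussian
    location family N(theta, sigma^2) at theta = 0, using a central
    finite difference of the log-density; the exact value is 1/sigma^2."""
    rng = random.Random(seed)

    def logpdf(x, theta):
        return (-0.5 * ((x - theta) / sigma) ** 2
                - math.log(sigma * math.sqrt(2 * math.pi)))

    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        score = (logpdf(x, eps) - logpdf(x, -eps)) / (2 * eps)
        total += score ** 2
    return total / n
```

Higher Fisher information means the variable's distribution is more sensitive to the underlying parameter — the property the paper exploits to rank candidate weather features.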
Open AccessFeature PaperArticle Equilibrium States in Two-Temperature Systems
Entropy 2018, 20(3), 183; https://doi.org/10.3390/e20030183
Received: 24 January 2018 / Revised: 24 February 2018 / Accepted: 24 February 2018 / Published: 9 March 2018
Cited by 2 | Viewed by 747 | PDF Full-text (2922 KB) | HTML Full-text | XML Full-text
Abstract
Systems characterized by more than one temperature usually appear in nonequilibrium statistical mechanics. In some cases, e.g., glasses, there is a temperature at which fast variables become thermalized, and another associated with modes that evolve towards an equilibrium state in a very slow way. Recently, it was shown that a system of vortices interacting repulsively, considered as an appropriate model for type-II superconductors, presents an equilibrium state characterized by two temperatures. The main novelty concerns the fact that, apart from the usual temperature T, related to fluctuations in particle velocities, an additional temperature θ was introduced, associated with fluctuations in particle positions. Since they present physically distinct characteristics, the system may reach an equilibrium state characterized by finite and different values of these temperatures. In the application to type-II superconductors, it was shown that θ ≫ T, so that thermal effects could be neglected, leading to a consistent thermodynamic framework based solely on the temperature θ. In the present work, a more general situation, concerning a system characterized by two distinct temperatures θ₁ and θ₂, which may be of the same order of magnitude, is discussed. These temperatures appear as coefficients of different diffusion contributions of a nonlinear Fokker-Planck equation. An H-theorem is proven, relating such a Fokker-Planck equation to a sum of two entropic forms, each of them associated with a given diffusion term; as a consequence, the corresponding stationary state may be considered as an equilibrium state, characterized by two temperatures. One of the conditions for such a state to occur is that the temperature parameters θ₁ and θ₂ should be thermodynamically conjugated to distinct entropic forms, S₁ and S₂, respectively. A functional Λ[P] ≡ Λ(S₁[P], S₂[P]) is introduced, which presents properties characteristic of an entropic form; moreover, a thermodynamically conjugated temperature parameter γ ≡ γ(θ₁, θ₂) can be consistently defined, so that an alternative physical description is proposed in terms of these pairs of variables. The physical consequences, and particularly the fact that the equilibrium-state distribution obtained from the Fokker-Planck equation should coincide with the one from entropy extremization, are discussed. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
Open AccessArticle Gaussian Optimality for Derivatives of Differential Entropy Using Linear Matrix Inequalities
Entropy 2018, 20(3), 182; https://doi.org/10.3390/e20030182
Received: 23 January 2018 / Revised: 22 February 2018 / Accepted: 5 March 2018 / Published: 9 March 2018
Cited by 1 | Viewed by 970 | PDF Full-text (792 KB) | HTML Full-text | XML Full-text
Abstract
Let Z be a standard Gaussian random variable, X be independent of Z, and t be a strictly positive scalar. For the derivatives in t of the differential entropy of X + √t Z, McKean noticed that Gaussian X achieves the extreme for the first and second derivatives, among distributions with a fixed variance, and he conjectured that this holds for general orders of derivatives. This conjecture implies that the signs of the derivatives alternate. Recently, Cheng and Geng proved that this alternation holds for the first four orders. In this work, we employ the technique of linear matrix inequalities to show that: firstly, Cheng and Geng’s method may not generalize to higher orders; secondly, when the probability density function of X + √t Z is log-concave, McKean’s conjecture holds for orders up to at least five. As a corollary, we also recover Toscani’s result on the sign of the third derivative of the entropy power of X + √t Z, using a much simpler argument. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
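For Gaussian X the alternation of signs is explicit in closed form, since h(X + √t Z) = ½ ln(2πe(σ² + t)); a quick check of the first five orders (not the paper's LMI argument):

```python
import math

def entropy_derivative(n, t, sigma2=1.0):
    """n-th t-derivative of h(X + sqrt(t) Z) when X ~ N(0, sigma2):
    h(t) = 0.5 * ln(2*pi*e*(sigma2 + t)), so the n-th derivative is
    0.5 * (-1)**(n - 1) * (n - 1)! / (sigma2 + t)**n."""
    return (0.5 * (-1) ** (n - 1) * math.factorial(n - 1)
            / (sigma2 + t) ** n)

# Signs of the first five derivatives at t = 0.5
signs = [math.copysign(1.0, entropy_derivative(n, 0.5)) for n in range(1, 6)]
```

The signs come out +, −, +, −, +, alternating exactly as the conjecture predicts for the Gaussian extremal case.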