
Table of Contents

Entropy, Volume 21, Issue 7 (July 2019) – 95 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: This perspective discusses parallels between high entropy at individual sequence positions of [...]
Open Access Article
Multi-Party Quantum Summation Based on Quantum Teleportation
Entropy 2019, 21(7), 719; https://doi.org/10.3390/e21070719 - 23 Jul 2019
Cited by 2 | Viewed by 1111
Abstract
We present a secure multi-party quantum summation protocol based on quantum teleportation, in which a malicious, but non-collusive, third party (TP) helps compute the summation. In our protocol, TP is in charge of entanglement distribution, and Bell states are shared between participants. Users encode the qubits in their hands according to their private bits and perform Bell-state measurements. After obtaining participants’ measurement results, TP can figure out the summation. The participants do not need to send their encoded states to others, so the protocol is inherently free from Trojan horse attacks. In addition, our protocol can be made secure against loss errors, because the entanglement distribution occurs only once, at the beginning of the protocol. We show that our protocol is secure against attacks by the participants as well as by outsiders. Full article
(This article belongs to the collection Quantum Information)
Open Access Feature Paper Article
On the Rarefied Gas Experiments
Entropy 2019, 21(7), 718; https://doi.org/10.3390/e21070718 - 23 Jul 2019
Cited by 5 | Viewed by 870
Abstract
Classical constitutive laws such as Fourier’s law and the Navier–Stokes equations have limits of validity. Phenomena beyond those limits were observed experimentally many decades ago. However, it is still not clear which theory is appropriate for modeling the various non-classical phenomena under different conditions, whether at low temperatures or in composite material structures. In this paper, a modeling problem of rarefied gases is addressed. The discussion covers the mass density dependence of material parameters, the scaling properties of different theories, and aspects of how to model an experiment. Two frameworks and their properties are presented: the kinetic-theory-based Rational Extended Thermodynamics, and non-equilibrium thermodynamics with internal variables and current multipliers. In order to compare these theories, an experiment on sound speed in rarefied gases at high frequencies, performed by Rhodes, is analyzed in detail. It is shown that the density dependence of material parameters can have a severe impact on modeling capabilities and influences the scaling properties. Full article
Open Access Article
Efficiency Bounds for Minimally Nonlinear Irreversible Heat Engines with Broken Time-Reversal Symmetry
Entropy 2019, 21(7), 717; https://doi.org/10.3390/e21070717 - 23 Jul 2019
Cited by 1 | Viewed by 888
Abstract
We study minimally nonlinear irreversible heat engines in which the time-reversal symmetry of the system may be broken. The expressions for the power and the efficiency are derived, including the effects of the nonlinear dissipation terms. We show that, as in the linear-response regime, the minimally nonlinear irreversible heat engines can attain the Carnot efficiency at positive power. We also find that the Curzon-Ahlborn limit on the efficiency at maximum power can be overcome if the time-reversal symmetry is broken. Full article
(This article belongs to the Special Issue Thermodynamic Approaches in Modern Engineering Systems)
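For orientation, the two classical bounds named in the abstract are easy to evaluate numerically; this minimal sketch (function names are ours, not the authors') compares the Carnot efficiency with the Curzon-Ahlborn efficiency at maximum power:

```python
def carnot_efficiency(t_cold, t_hot):
    # Carnot bound: eta_C = 1 - Tc/Th (reservoir temperatures in kelvin)
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_cold, t_hot):
    # Curzon-Ahlborn efficiency at maximum power: eta_CA = 1 - sqrt(Tc/Th)
    return 1.0 - (t_cold / t_hot) ** 0.5

# For Tc = 300 K, Th = 600 K: eta_C = 0.5, eta_CA ≈ 0.293
```

The paper's claim is that broken time-reversal symmetry allows the efficiency at maximum power to exceed eta_CA, which otherwise bounds it.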
Open Access Communication
A Note on the Entropy Force in Kinetic Theory and Black Holes
Entropy 2019, 21(7), 716; https://doi.org/10.3390/e21070716 - 23 Jul 2019
Cited by 1 | Viewed by 981
Abstract
The entropy force is the collective effect of inhomogeneity in disorder in a statistical many-particle system. We demonstrate its presumable effect on one particular astrophysical object, the black hole. We then derive the kinetic equations of a large system of particles including the entropy force. It adds a collective, and therefore integral, term to the Klimontovich equation for the evolution of the one-particle distribution function. Its integral character transforms the basic one-particle kinetic equation into an integro-differential equation already at the elementary level, showing that not only the microscopic forces but the whole system reacts to the evolution of its probability distribution in a holistic way. It also causes a collisionless dissipative term, which, however, scales with the inverse particle number and is thus negligible. It does, however, contribute an entropic collisional dissipation term. The latter is defined via the particle correlations but lacks any singularities and is thus large scale. It also allows for the derivation of a kinetic equation for the entropy density in phase space, which turns out to have the same structure as the equation for the phase-space density. The entropy density determines itself holistically via the integral entropy force, thus providing a self-controlled evolution of entropy in phase space. Full article
Open Access Editor’s Choice Article
Dynamic Maximum Entropy Reduction
Entropy 2019, 21(7), 715; https://doi.org/10.3390/e21070715 - 22 Jul 2019
Cited by 7 | Viewed by 1365
Abstract
Any physical system can be regarded on different levels of description varying by how detailed the description is. We propose a method called Dynamic MaxEnt (DynMaxEnt) that provides a passage from the more detailed evolution equations to equations for the less detailed state variables. The method is based on explicit recognition of the state and conjugate variables, which can relax towards the respective quasi-equilibria in different ways. Detailed state variables are reduced using the usual principle of maximum entropy (MaxEnt), whereas relaxation of conjugate variables guarantees that the reduced equations are closed. Moreover, an infinite chain of consecutive DynMaxEnt approximations can be constructed. The method is demonstrated on a particle with friction, complex fluids (equipped with conformation and Reynolds stress tensors), hyperbolic heat conduction and magnetohydrodynamics. Full article
Open Access Article
Efficacy of Quantitative Muscle Ultrasound Using Texture-Feature Parametric Imaging in Detecting Pompe Disease in Children
Entropy 2019, 21(7), 714; https://doi.org/10.3390/e21070714 - 22 Jul 2019
Cited by 1 | Viewed by 1051
Abstract
Pompe disease is a hereditary neuromuscular disorder attributed to acid α-glucosidase deficiency, and accurately identifying this disease is essential. Our aim was to discriminate normal muscles from neuropathic muscles in children affected by Pompe disease using a texture-feature parametric imaging method that simultaneously considers microstructure and macrostructure. The study included 22 children aged 0.02–54 months with Pompe disease and six healthy children aged 2–12 months with normal muscles. For each subject, transverse ultrasound images of the bilateral rectus femoris and sartorius muscles were obtained. Gray-level co-occurrence matrix-based Haralick’s features were used for constructing parametric images and identifying neuropathic muscles: autocorrelation (AUT), contrast, energy (ENE), entropy (ENT), maximum probability (MAXP), variance (VAR), and cluster prominence (CPR). Stepwise regression was used in feature selection. The Fisher linear discriminant analysis was used for combination of the selected features to distinguish between normal and pathological muscles. The VAR and CPR were the optimal feature set for classifying normal and pathological rectus femoris muscles, whereas the ENE, VAR, and CPR were the optimal feature set for distinguishing between normal and pathological sartorius muscles. The two feature sets were combined to discriminate between children with and without neuropathic muscles affected by Pompe disease, achieving an accuracy of 94.6%, a specificity of 100%, a sensitivity of 93.2%, and an area under the receiver operating characteristic curve of 0.98 ± 0.02. The CPR for the rectus femoris muscles and the AUT, ENT, MAXP, and VAR for the sartorius muscles exhibited statistically significant differences in distinguishing between the infantile-onset Pompe disease and late-onset Pompe disease groups (p < 0.05). Texture-feature parametric imaging can be used to quantify and map tissue structures in skeletal muscles and distinguish between pathological and normal muscles in children or newborns. Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
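As a rough sketch of the kind of texture features involved (not the study's actual pipeline; the quantization depth and pixel offset are illustrative assumptions), a gray-level co-occurrence matrix and a few Haralick-style statistics can be computed as:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # quantize to `levels` gray levels, then count co-occurring pixel pairs
    # at the offset (dx, dy); normalize the counts to probabilities
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(m):
    # contrast, energy, and entropy of a normalized co-occurrence matrix
    i, j = np.indices(m.shape)
    nz = m[m > 0]
    return {
        "contrast": float((m * (i - j) ** 2).sum()),
        "energy": float((m ** 2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
    }
```

A homogeneous region yields zero contrast and entropy; heterogeneous muscle texture raises both, which is what makes these statistics usable as classification features.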
Open Access Article
Surrogate Data Preserving All the Properties of Ordinal Patterns up to a Certain Length
Entropy 2019, 21(7), 713; https://doi.org/10.3390/e21070713 - 22 Jul 2019
Viewed by 857
Abstract
We propose a method for generating surrogate data that preserves all the properties of ordinal patterns up to a certain length, such as the numbers of allowed/forbidden ordinal patterns and the transition likelihoods between ordinal patterns. The null hypothesis is that the details of the underlying dynamics do not matter beyond the refinements of ordinal patterns finer than a predefined length. The proposed surrogate data help construct a test of determinism that is free from the common linearity assumption for the null hypothesis. Full article
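To make the objects concrete (a sketch under our own naming, not the authors' code): the ordinal pattern of a length-m window is the permutation that sorts its values, and the forbidden patterns of a series are the order-m permutations that never occur:

```python
import numpy as np
from itertools import permutations

def ordinal_patterns(x, m=3):
    # ordinal pattern of each window = argsort permutation of its values
    return [tuple(int(v) for v in np.argsort(x[i:i + m]))
            for i in range(len(x) - m + 1)]

def forbidden_patterns(x, m=3):
    # order-m permutations that appear nowhere in the series
    seen = set(ordinal_patterns(x, m))
    return [p for p in permutations(range(m)) if p not in seen]

# A strictly increasing series realizes only (0, 1, 2);
# the other five order-3 patterns are forbidden.
```

A surrogate preserving these properties would realize exactly the same set of patterns (and pattern transitions) as the original series.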
Open Access Editor’s Choice Communication
Derivations of the Core Functions of the Maximum Entropy Theory of Ecology
Entropy 2019, 21(7), 712; https://doi.org/10.3390/e21070712 - 21 Jul 2019
Cited by 7 | Viewed by 1460
Abstract
The Maximum Entropy Theory of Ecology (METE) is a theoretical framework of macroecology that makes a variety of realistic ecological predictions about how species richness, abundance of species, metabolic rate distributions, and spatial aggregation of species interrelate in a given region. In the METE framework, “ecological state variables” (representing total area, total species richness, total abundance, and total metabolic energy) describe macroecological properties of an ecosystem. METE incorporates these state variables into constraints on underlying probability distributions. The method of Lagrange multipliers and maximization of information entropy (MaxEnt) lead to predicted functional forms of distributions of interest. We demonstrate how information entropy is maximized for the general case of a distribution in which empirical information provides constraints on the overall predictions. We then show how METE’s two core functions are derived. These functions, called the “Spatial Structure Function” and the “Ecosystem Structure Function”, are the core pieces of the theory, from which all the predictions of METE follow (including the Species Area Relationship, the Species Abundance Distribution, and various metabolic distributions). Primarily, we consider the discrete distributions predicted by METE. We also explore the parameter space defined by METE’s state variables and Lagrange multipliers. We aim to provide a comprehensive resource for ecologists who want to understand the derivations and assumptions of the basic mathematical structure of METE. Full article
(This article belongs to the Special Issue Information Theory Applications in Biology)
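To illustrate the MaxEnt machinery in the simplest setting (a toy sketch, not a METE derivation; the function name and bisection bracket are our assumptions), one can find the maximum entropy distribution on abundances {1, ..., N} subject to a mean-abundance constraint: the Lagrange multiplier condition gives p_n ∝ exp(-λn), and λ follows from the constraint by bisection:

```python
import numpy as np

def maxent_abundance(n_max, mean_abundance):
    # MaxEnt distribution on {1..n_max} with a fixed mean:
    # p_n ∝ exp(-lam * n); lam found by bisection (mean decreases in lam)
    n = np.arange(1, n_max + 1)

    def mean(lam):
        w = np.exp(-lam * (n - 1))  # shifted exponent for numerical stability
        p = w / w.sum()
        return float((n * p).sum())

    lo, hi = -1.0, 1.0  # bracket chosen so mean(lo) > target > mean(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) > mean_abundance:
            lo = mid
        else:
            hi = mid
    w = np.exp(-0.5 * (lo + hi) * (n - 1))
    return w / w.sum()
```

METE's core functions arise the same way, only with joint distributions and several simultaneous constraints (area, richness, abundance, energy) instead of a single mean.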
Open Access Article
Thermal Optimization of a Dual Pressure Goswami Cycle for Low Grade Thermal Sources
Entropy 2019, 21(7), 711; https://doi.org/10.3390/e21070711 - 20 Jul 2019
Cited by 2 | Viewed by 1086
Abstract
This paper presents a theoretical investigation of a new configuration of the combined power and cooling cycle known as the Goswami cycle. The new configuration consists of two turbines operating at two different working pressures with a low-heat source temperature, below 150 °C. A comprehensive analysis was conducted to determine the effect of key operation parameters, such as the ammonia mass fraction at the absorber outlet and boiler-rectifier, on the power output, cooling capacity, effective first law efficiency, and effective exergy efficiency, while the performance of the dual-pressure configuration was compared with the original single pressure cycle. In addition, a Pareto optimization with a genetic algorithm was conducted to obtain the best power and cooling output combinations to maximize effective first law efficiency. Results showed that the new dual-pressure configuration generated more power than the single pressure cycle, producing up to 327.8 kW, while the single pressure cycle produced up to 110.8 kW at a 150 °C boiler temperature. However, the results also showed that it reduced the cooling output, as there was less mass flow rate in the refrigeration unit. Optimization results showed that the optimum effective first law efficiency ranged between 9.1% and 13.7%. The maximum effective first law efficiency was obtained at the lowest net power (32 kW) and cooling (0.38 kW) outputs. On the other hand, the cycle presented 13.6% effective first law efficiency when the net power output was 100 kW and the cooling capacity was 0.38 kW. Full article
(This article belongs to the Special Issue Thermodynamic Optimization)
Open Access Article
On MV-Algebraic Versions of the Strong Law of Large Numbers
Entropy 2019, 21(7), 710; https://doi.org/10.3390/e21070710 - 19 Jul 2019
Viewed by 858
Abstract
Many-valued (MV; the many-valued logics considered by Łukasiewicz)-algebras are algebraic systems that generalize Boolean algebras. The MV-algebraic probability theory involves the notions of the state and observable, which abstract the probability measure and the random variable, both considered in the Kolmogorov probability theory. Within the MV-algebraic probability theory, many important theorems (such as various versions of the central limit theorem or the individual ergodic theorem) have recently been studied and proven. In particular, the counterpart of the Kolmogorov strong law of large numbers (SLLN) for sequences of independent observables has been considered. In this paper, we prove generalized MV-algebraic versions of the SLLN, i.e., counterparts of the Marcinkiewicz–Zygmund and Brunk–Prokhorov SLLN for independent observables, as well as the Korchevsky SLLN, where the independence of observables is not assumed. To this end, we apply classical probability theory and some measure-theoretic methods. We also analyze examples of applications of the proven theorems. Our results open new directions of development of the MV-algebraic probability theory. They can also be applied to the problem of entropy estimation. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Article
A Peak Traffic Congestion Prediction Method Based on Bus Driving Time
Entropy 2019, 21(7), 709; https://doi.org/10.3390/e21070709 - 19 Jul 2019
Cited by 5 | Viewed by 1170
Abstract
Road traffic congestion has a large impact on travel. The accurate prediction of traffic congestion has become a hot topic in intelligent transportation systems (ITS). Recently, a variety of traffic congestion prediction methods have been proposed. However, most approaches focus on floating car data, and the prediction accuracy is often unstable due to large fluctuations in floating speed. Targeting these challenges, we propose a method of traffic congestion prediction based on bus driving time (TCP-DT) using long short-term memory (LSTM) technology. Firstly, we collected a total of 66,228 bus driving records from 50 buses for 66 working days in Guangzhou, China. Secondly, the actual and standard bus driving times were calculated by processing the buses’ GPS trajectories and bus station data. Congestion time is defined as the difference between the actual and standard driving times. Thirdly, congestion time prediction based on LSTM (T-LSTM) was adopted to predict future bus congestion times. Finally, the congestion index and classification (CI-C) model was used to calculate the congestion indices and classify the level of congestion into five categories according to three classification methods. Our experimental results show that the T-LSTM model can effectively predict the congestion time of six road sections at different time periods, with an average mean absolute percentage error (MAPE) of 11.25% and an average root mean square error (RMSE) of 14.91 in the morning peak, and 12.3% and 14.57 in the evening peak, respectively. The TCP-DT method can effectively predict traffic congestion status and provide a driving route with the least congestion time for vehicles. Full article
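The two error metrics reported above are standard; as a minimal reference implementation (ours, not the paper's code):

```python
import numpy as np

def mape(y_true, y_pred):
    # mean absolute percentage error, in percent
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * float(np.mean(np.abs((y_true - y_pred) / y_true)))

def rmse(y_true, y_pred):
    # root mean square error, in the units of the target
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

Note that MAPE is dimensionless while RMSE carries the units of the congestion time, which is why the abstract quotes one as a percentage and the other as a bare number.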
Open Access Article
Gaussian Belief Propagation for Solving Network Utility Maximization with Delivery Contracts
Entropy 2019, 21(7), 708; https://doi.org/10.3390/e21070708 - 19 Jul 2019
Viewed by 818
Abstract
Classical network utility maximization (NUM) models fail to capture network dynamics, which are of increasing importance for modeling network behaviors. In this paper, we consider the NUM with delivery contracts, which are constraints added to the classical model to describe network dynamics. This paper investigates a method to solve the given problem distributively. We first transform the problem into an equivalent model of linear equations by dual decomposition theory, and then use the Gaussian belief propagation algorithm to solve the equivalent problem distributively. The proposed algorithm has faster convergence speed than the existing first-order methods and the distributed Newton method. Experimental results have demonstrated the effectiveness of our proposed approach. Full article
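A compact sketch of the solver ingredient (standard Gaussian belief propagation for Ax = b on a symmetric, diagonally dominant matrix; this is our illustration, not the paper's implementation):

```python
import numpy as np

def gabp_solve(A, b, iters=200):
    # Gaussian belief propagation for A x = b.
    # P[i, j] / U[i, j]: precision / precision-weighted-mean message i -> j.
    n = len(b)
    P = np.zeros((n, n))
    U = np.zeros((n, n))
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i != j and A[i, j] != 0.0:
                    # aggregate all messages into i except the one from j
                    p = A[i, i] + P[:, i].sum() - P[j, i]
                    u = b[i] + U[:, i].sum() - U[j, i]
                    P[i, j] = -A[i, j] ** 2 / p
                    U[i, j] = -A[i, j] * u / p
    # marginal precision and mean at each node
    precision = A.diagonal() + P.sum(axis=0)
    return (b + U.sum(axis=0)) / precision
```

When GaBP converges, the estimated means are exact, and each update uses only a node's neighbors, which is what makes it attractive as a distributed solver for the dual system of equations.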
Open Access Article
Electricity Consumption Forecasting using Support Vector Regression with the Mixture Maximum Correntropy Criterion
Entropy 2019, 21(7), 707; https://doi.org/10.3390/e21070707 - 19 Jul 2019
Viewed by 891
Abstract
The electricity consumption forecasting (ECF) technology plays a crucial role in the electricity market. The support vector regression (SVR) is a nonlinear prediction model that can be used for ECF. Electricity consumption (EC) data are usually nonlinear, non-Gaussian, and contain outliers. The traditional SVR with the mean-square error (MSE), however, is sensitive to outliers and cannot correctly represent the statistical information of errors in non-Gaussian situations. To address this problem, a novel robust forecasting method is developed in this work by using the mixture maximum correntropy criterion (MMCC). The MMCC, as a novel information theoretic cost function, can be used for non-Gaussian signal processing; therefore, in the original SVR, the MSE is replaced by the MMCC to develop a novel robust SVR method (called MMCCSVR) for ECF. Besides, the factors influencing users’ EC are investigated by a data statistical analysis method. We find that the historical temperature and historical EC are the main factors affecting future EC, and thus these two factors are used as the input in the proposed model. Finally, real EC data from a shopping mall in Guangzhou, China, are utilized to test the proposed ECF method. The forecasting results show that the proposed method can effectively improve the accuracy of ECF compared with the traditional SVR and other forecasting algorithms. Full article
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)
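For intuition, the criterion replaces the squared error with a mixture of Gaussian kernels of the error; a minimal sketch (the kernel widths and mixture weights below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def mixture_correntropy(errors, sigmas=(0.5, 2.0), weights=(0.5, 0.5)):
    # MMCC objective: weighted sum of Gaussian kernels of the error,
    # averaged over samples; it is maximized (not minimized) in training
    errors = np.asarray(errors, float)
    value = np.zeros_like(errors)
    for a, s in zip(weights, sigmas):
        value += a * np.exp(-errors ** 2 / (2.0 * s ** 2))
    return float(value.mean())
```

Unlike the MSE, a large outlier error saturates the kernel and contributes almost nothing to the objective, which is the source of the robustness.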
Open Access Article
Refined Multiscale Entropy Using Fuzzy Metrics: Validation and Application to Nociception Assessment
Entropy 2019, 21(7), 706; https://doi.org/10.3390/e21070706 - 18 Jul 2019
Cited by 1 | Viewed by 1178
Abstract
The refined multiscale entropy (RMSE) approach is commonly applied to assess complexity as a function of the time scale. RMSE is normally based on the computation of sample entropy (SampEn) estimating complexity as conditional entropy. However, SampEn is dependent on the length and standard deviation of the data. Recently, fuzzy entropy (FuzEn) has been proposed, including several refinements, as an alternative to counteract these limitations. In this work, FuzEn, translated FuzEn (TFuzEn), translated-reflected FuzEn (TRFuzEn), inherent FuzEn (IFuzEn), and inherent translated FuzEn (ITFuzEn) were exploited as entropy-based measures in the computation of RMSE, and their performance was compared to that of SampEn. FuzEn metrics were applied to synthetic time series of different lengths to evaluate the consistency of the different approaches. In addition, electroencephalograms of patients under a sedation-analgesia procedure were analyzed based on the patient’s response after the application of painful stimulation, such as nail bed compression or endoscopy tube insertion. Significant differences in FuzEn metrics were observed over simulations and real data as a function of the data length and the pain responses. Findings indicated that FuzEn, when exploited in RMSE applications, showed similar behavior to SampEn in long series, but its consistency was better than that of SampEn in short series, both over simulations and real data. Conversely, its variants should be utilized with more caution, especially when processes exhibit an important deterministic component and/or in nociception prediction at long scales. Full article
(This article belongs to the Special Issue Information Dynamics in Brain and Physiological Networks)
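As background for the comparison (a plain SampEn sketch under common conventions, m = 2 and tolerance r = 0.2 times the standard deviation; not the study's code):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn: negative log of the conditional probability that windows
    # matching for m points (Chebyshev distance <= r*std) match for m+1
    x = np.asarray(x, float)
    tol = r * x.std()
    N = len(x)

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(N - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += int(np.sum(dist <= tol)) - 1  # exclude self-match
        return count

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B)
```

Regular signals score near zero while noise scores high; FuzEn replaces the hard threshold `dist <= tol` with a smooth membership function, which is what reduces the dependence on data length.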
Open Access Article
Quantum Features of Macroscopic Fields: Entropy and Dynamics
Entropy 2019, 21(7), 705; https://doi.org/10.3390/e21070705 - 18 Jul 2019
Viewed by 959
Abstract
Macroscopic fields such as electromagnetic, magnetohydrodynamic, acoustic or gravitational waves are usually described by classical wave equations with possible additional damping terms and coherent sources. The aim of this paper is to develop a complete macroscopic formalism including random/thermal sources, dissipation, and random scattering of waves by the environment. The proposed reduced state of the field combines the averaged field with the two-point correlation function, called the single-particle density matrix. The evolution equation for the reduced state of the field is obtained by reduction of the generalized quasi-free dynamical semigroups describing the irreversible evolution of a bosonic quantum field, and the definition of entropy for the reduced state of the field follows from the von Neumann entropy of quantum field states. The presented formalism can be applied, for example, to superradiance phenomena, and allows unifying the Mueller and Jones calculi in polarization optics. Full article
(This article belongs to the Special Issue Quantum Entropies and Complexity)
Open Access Article
Thermodynamics and Stability of Non-Equilibrium Steady States in Open Systems
Entropy 2019, 21(7), 704; https://doi.org/10.3390/e21070704 - 18 Jul 2019
Cited by 7 | Viewed by 1089
Abstract
Thermodynamical arguments are known to be useful in the construction of physically motivated Lyapunov functionals for nonlinear stability analysis of spatially homogeneous equilibrium states in thermodynamically isolated systems. Unfortunately, the limitation to isolated systems is essential, and standard arguments are not applicable even for some very simple thermodynamically open systems. On the other hand, the nonlinear stability of thermodynamically open systems is usually investigated using the so-called energy method. The mathematical quantity that is referred to as the “energy” is, however, in most cases not linked to the energy in the physical sense of the word. Consequently, it would seem that genuine thermodynamical concepts are of no use in the nonlinear stability analysis of thermodynamically open systems. We show that this is not the case. In particular, we propose a construction that in the case of a simple heat conduction problem leads to a physically well-motivated Lyapunov-type functional, which effectively replaces the artificial Lyapunov functional used in the standard energy method. The proposed construction seems to be general enough to be applied in complex thermomechanical settings. Full article
(This article belongs to the Special Issue Thermodynamic Approaches in Modern Engineering Systems)
Open Access Article
Information Geometrical Characterization of Quantum Statistical Models in Quantum Estimation Theory
Entropy 2019, 21(7), 703; https://doi.org/10.3390/e21070703 - 18 Jul 2019
Cited by 14 | Viewed by 965
Abstract
In this paper, we classify quantum statistical models based on their information geometric properties and the estimation error bound, known as the Holevo bound, into four different classes: classical, quasi-classical, D-invariant, and asymptotically classical models. We then characterize each model by several equivalent conditions and discuss their properties. This result enables us to explore the relationships among these four models and reveals a geometrical understanding of quantum statistical models. In particular, we show that each class of model can be identified by comparing quantum Fisher metrics and the properties of the tangent spaces of the quantum statistical model. Full article
(This article belongs to the Section Quantum Information)
Open Access Article
Model Description of Similarity-Based Recommendation Systems
Entropy 2019, 21(7), 702; https://doi.org/10.3390/e21070702 - 17 Jul 2019
Viewed by 965
Abstract
The quality of online services highly depends on the accuracy of the recommendations they can provide to users. Researchers have proposed various similarity measures, based on the assumption that similar people like or dislike similar items or people, in order to improve the accuracy of their services. Additionally, statistical models, such as the stochastic block models, have been used to understand network structures. In this paper, we discuss the relationship between similarity-based methods and statistical models using the Bernoulli mixture models and the expectation-maximization (EM) algorithm. The Bernoulli mixture model naturally leads to a completely positive matrix as the similarity matrix. We prove that most of the commonly used similarity measures yield completely positive matrices as the similarity matrix. Based on this relationship, we propose an algorithm to transform the similarity matrix to the Bernoulli mixture model. Such a correspondence provides a statistical interpretation to similarity-based methods. Using this algorithm, we conduct numerical experiments using synthetic data and real-world data provided by an online dating site, and report the efficiency of the recommendation system based on the Bernoulli mixture models. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open AccessArticle
A Security Enhanced Encryption Scheme and Evaluation of Its Cryptographic Security
Entropy 2019, 21(7), 701; https://doi.org/10.3390/e21070701 - 17 Jul 2019
Viewed by 1090
Abstract
An approach for the security enhancement of a class of encryption schemes is pointed out and its security is analyzed. The approach is based on certain results of coding and information theory regarding communication channels with erasure and deletion errors. In the security-enhanced encryption scheme, the wiretapper faces a cryptanalysis problem after a communication channel with bit deletions, while a legitimate party faces a decryption problem after a channel with bit erasures. This paper proposes an encryption-decryption paradigm for the security enhancement of lightweight block ciphers based on dedicated error-correction coding and a simulator of the deletion channel controlled by the secret key. The security enhancement is analyzed in terms of the related probabilities, equivocation, mutual information, and channel capacity. The cryptographic evaluation of the enhanced encryption employs certain recent results on the upper bounds on the capacity of channels with deletion errors. It is shown that the probability of correct classification, which determines the cryptographic security, depends on the deletion channel capacity, i.e., the equivocation after this channel, and on the number of codewords in the employed error-correction coding scheme. Consequently, assuming that the basic encryption scheme has a certain security level, it is shown that the security enhancement factor is a function of the deletion rate and of the dimension of the vectors subject to error-correction encoding, i.e., the dimension of the encryption block. Full article
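The deletion/erasure asymmetry at the heart of the scheme can be illustrated with a toy simulator (a sketch under our own naming, not the paper's construction): a PRNG seeded with the secret key selects the positions to drop, so the wiretapper observes only a shortened sequence (deletions), while a key-holding party can mark the same positions as erasures and hand them to an erasure-correcting decoder.

```python
import random

def deletion_channel(bits, key, rate):
    """Key-controlled deletion channel (toy sketch): a PRNG seeded with the
    secret key marks positions to drop. The wiretapper sees the shortened
    sequence (deletions); a party knowing the key can instead mark the same
    positions as erasures (None)."""
    rng = random.Random(key)
    keep = [rng.random() >= rate for _ in bits]
    wiretap_view = [b for b, kp in zip(bits, keep) if kp]               # deletions
    legitimate_view = [b if kp else None for b, kp in zip(bits, keep)]  # erasures
    return wiretap_view, legitimate_view
```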
(This article belongs to the Special Issue Information-Theoretic Security II)

Open AccessEditor’s ChoiceArticle
Rateless Codes-Based Secure Communication Employing Transmit Antenna Selection and Harvest-To-Jam under Joint Effect of Interference and Hardware Impairments
Entropy 2019, 21(7), 700; https://doi.org/10.3390/e21070700 - 16 Jul 2019
Cited by 4 | Viewed by 1125
Abstract
In this paper, we propose a rateless-codes-based communication protocol to provide security for wireless systems. In the proposed protocol, a source uses the transmit antenna selection (TAS) technique to transmit Fountain-encoded packets to a destination in the presence of an eavesdropper. Moreover, a cooperative jammer node harvests energy from the radio frequency (RF) signals of the source and the interference sources to generate jamming noise at the eavesdropper. The data transmission terminates as soon as the destination has received enough encoded packets to decode the original data of the source. To obtain secure communication, the destination must receive sufficient encoded packets before the eavesdropper does. The combination of the TAS and harvest-to-jam techniques achieves both security and energy efficiency by reducing the number of data transmissions, increasing the quality of the data channel, decreasing the quality of the eavesdropping channel, and supplying energy to the jammer. The main contribution of this paper is to derive exact closed-form expressions for the outage probability (OP), the probability of successful and secure communication (SS), the intercept probability (IP), and the average number of time slots used by the source over Rayleigh fading channels under the joint impact of co-channel interference and hardware impairments. Monte Carlo simulations are then presented to verify the theoretical results. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open AccessArticle
Multivariate Pointwise Information-Driven Data Sampling and Visualization
Entropy 2019, 21(7), 699; https://doi.org/10.3390/e21070699 - 16 Jul 2019
Cited by 1 | Viewed by 1210
Abstract
With the increasing computing capabilities of modern supercomputers, the size of the data generated by scientific simulations is growing rapidly. As a result, application scientists need effective data summarization techniques that can reduce large-scale multivariate spatiotemporal data sets while preserving the important data properties, so that the reduced data can answer domain-specific queries involving multiple variables with sufficient accuracy. While analyzing complex scientific events, domain experts often analyze and visualize two or more variables together to obtain a better understanding of the characteristics of the data features. Therefore, data summarization techniques are required to analyze multi-variable relationships in detail and then perform data reduction such that the important features involving multiple variables are preserved in the reduced data. To achieve this, we propose a data sub-sampling algorithm for statistical data summarization that leverages pointwise information-theoretic measures to quantify the statistical association of data points across multiple variables, and generates a sub-sampled data set that preserves this statistical association. Using such reduced sampled data, we show that multivariate feature query and analysis can be done effectively. The efficacy of the proposed multivariate-association-driven sampling algorithm is demonstrated by applying it to several scientific data sets. Full article
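The pointwise association measure driving the sampling can be illustrated in the two-variable case with pointwise mutual information (PMI). This is a simplified sketch with hypothetical names, not the paper's multivariate algorithm: values are binned, and the points whose (x, y) bins co-occur far more often than independence would predict are kept.

```python
import math
from collections import Counter

def pmi_sample(xs, ys, bins=4, frac=0.5):
    """Keep the fraction `frac` of points whose binned (x, y) pairs have the
    highest pointwise mutual information, i.e. the strongest pairwise
    statistical association; returns the selected indices, sorted."""
    def bin_of(v, lo, hi):
        return min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
    lox, hix, loy, hiy = min(xs), max(xs), min(ys), max(ys)
    bx = [bin_of(v, lox, hix) for v in xs]
    by = [bin_of(v, loy, hiy) for v in ys]
    n = len(xs)
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    def pmi(i):
        # log of the observed joint frequency over the independence prediction
        return math.log(pxy[(bx[i], by[i])] * n / (px[bx[i]] * py[by[i]]))
    order = sorted(range(n), key=pmi, reverse=True)
    return sorted(order[: int(n * frac)])
```

Points that break the dominant association pattern receive negative PMI and are dropped first.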
(This article belongs to the Special Issue Information Theory Application in Visualization)

Open AccessArticle
Increased Sample Entropy in EEGs During the Functional Rehabilitation of an Injured Brain
Entropy 2019, 21(7), 698; https://doi.org/10.3390/e21070698 - 16 Jul 2019
Cited by 5 | Viewed by 920
Abstract
Complex nerve remodeling occurs in the injured brain area during functional rehabilitation after a brain injury; however, its mechanism has not been thoroughly elucidated. Neural remodeling can lead to changes in electrophysiological activity, which can be detected in an electroencephalogram (EEG). In this paper, we used EEG band energy, approximate entropy (ApEn), sample entropy (SampEn), and Lempel–Ziv complexity (LZC) features to characterize the intrinsic rehabilitation dynamics of the injured brain area, thus providing a means of detecting and exploring the mechanism of neurological remodeling during the recovery process after brain injury. Bilateral symmetrical EEGs were recorded from awake model rats in the injury group (n = 12) and the sham group (n = 12) on days 1, 4, and 7 after a unilateral brain injury. Open field test (OFT) experiments were performed in three groups: an injury group, a sham group, and a control group (n = 10). An analysis of the EEG data using the energy, ApEn, SampEn, and LZC features demonstrated that the increase in SampEn was associated with functional recovery. After the brain injury, the energy values of the delta1 bands on day 4; the delta2 bands on days 4 and 7; the theta, alpha, and beta bands; and the values of ApEn, SampEn, and LZC of the cortical EEG signal on days 1, 4, and 7 were significantly lower in the injured brain area than in the non-injured area. During the recovery of the injured brain area, the values of the beta bands, ApEn, and SampEn of the injury group increased significantly and gradually became equal to the values of the sham group. The improvement in the motor function of the model rats correlated significantly with the increase in SampEn. This study provides a method based on EEG nonlinear features for measuring neural remodeling in injured brain areas during brain function recovery. The results may aid in the study of neural remodeling mechanisms. Full article
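For reference, sample entropy, the feature this study found to track recovery, admits a compact definition: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates that match within tolerance r and A counts the same pairs extended to length m + 1. A minimal pure-Python sketch (our naming, not the authors' implementation):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D series: -ln(A/B), with B the number of pairs of
    length-m templates within Chebyshev distance r and A the number of those
    pairs still matching at length m + 1. Larger values mean a less regular,
    less predictable signal."""
    n = len(x)
    def matches(length):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + t] - x[j + t]) for t in range(length)) <= r:
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A strictly periodic signal scores near zero (every match extends), while noise scores higher.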

Open AccessArticle
Entropy and Semi-Entropies of LR Fuzzy Numbers’ Linear Function with Applications to Fuzzy Programming
Entropy 2019, 21(7), 697; https://doi.org/10.3390/e21070697 - 16 Jul 2019
Viewed by 943
Abstract
As a crucial concept of characterizing uncertainty, entropy has been widely used in fuzzy programming problems, while involving complicated calculations. To simplify the operations so as to broaden its applicable areas, this paper investigates the entropy within the framework of credibility theory and derives the formulas for calculating the entropy of regular LR fuzzy numbers by virtue of the inverse credibility distribution. By verifying the favorable property of this operator, a calculation formula of a linear function’s entropy is also proposed. Furthermore, considering the strength of semi-entropy in measuring one-side uncertainty, the lower and upper semi-entropies, as well as the corresponding formulas are suggested to handle return-oriented and cost-oriented problems, respectively. Finally, utilizing entropy and semi-entropies as risk measures, two types of entropy optimization models and their equivalent formulations derived from the proposed formulas are given according to different decision criteria, providing an effective modeling method for fuzzy programming from the perspective of entropy. The numerical examples demonstrate the high efficiency and good performance of the proposed methods in decision making. Full article
(This article belongs to the Section Multidisciplinary Applications)

Open AccessReview
Beyond Boltzmann–Gibbs–Shannon in Physics and Elsewhere
Entropy 2019, 21(7), 696; https://doi.org/10.3390/e21070696 - 15 Jul 2019
Cited by 5 | Viewed by 1490
Abstract
The pillars of contemporary theoretical physics are classical mechanics, Maxwell electromagnetism, relativity, quantum mechanics, and Boltzmann–Gibbs (BG) statistical mechanics, including its connection with thermodynamics. The BG theory describes amazingly well the thermal equilibrium of a plethora of so-called simple systems. However, BG statistical mechanics and its basic additive entropy S_BG started, in recent decades, to exhibit failures or inadequacies in an increasing number of complex systems. The emergence of such intriguing features became apparent in quantum systems as well, such as black holes and other area-law-like scenarios for the von Neumann entropy. In a different arena, the efficiency of the Shannon entropy—as the BG functional is currently called in engineering and communication theory—started to be perceived as not necessarily optimal in the processing of images (e.g., medical ones) and time series (e.g., economic ones). Such is the case in the presence of generic long-range space correlations, long memory, sub-exponential sensitivity to the initial conditions (hence vanishing largest Lyapunov exponents), and similar features. Finally, we witnessed, during the last two decades, an explosion of asymptotically scale-free complex networks. This wide range of important systems eventually gave support, since 1988, to the generalization of the BG theory. Nonadditive entropies generalizing the BG one and their consequences have been introduced and intensively studied worldwide. The present review focuses on these concepts and their predictions, verifications, and applications in physics and elsewhere. Some selected examples (in quantum information, high- and low-energy physics, low-dimensional nonlinear dynamical systems, earthquakes, turbulence, long-range interacting systems, and scale-free networks) illustrate successful applications. The grounding thermodynamical framework is briefly described as well. Full article
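The nonadditive entropies reviewed here are exemplified by the Tsallis functional S_q = (1 - Σ p_i^q)/(q - 1), which recovers the BG entropy as q → 1. A small numeric check of its defining nonadditivity rule (illustrative code, units with k_B = 1):

```python
import math

def tsallis_entropy(p, q):
    """Nonadditive Tsallis entropy S_q = (1 - sum p_i^q)/(q - 1), with k_B = 1;
    it recovers the Boltzmann–Gibbs–Shannon entropy -sum p_i ln p_i as q -> 1."""
    if q == 1.0:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# For independent subsystems A and B, S_q is nonadditive:
#   S_q(A+B) = S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B)
# so the BG additivity S(A+B) = S(A) + S(B) is recovered only at q = 1.
```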
(This article belongs to the Section Entropy Reviews)

Open AccessArticle
A Method for Improving Controlling Factors Based on Information Fusion for Debris Flow Susceptibility Mapping: A Case Study in Jilin Province, China
Entropy 2019, 21(7), 695; https://doi.org/10.3390/e21070695 - 15 Jul 2019
Cited by 1 | Viewed by 935
Abstract
Debris flow is one of the most frequently occurring geological disasters in Jilin province, China, and such disasters often result in the loss of human life and property. The objective of this study is to propose and verify an information fusion (IF) method in order to improve the factors controlling debris flow as well as the accuracy of the debris flow susceptibility map. Nine layers of factors controlling debris flow (i.e., topography, elevation, annual precipitation, distance to water system, slope angle, slope aspect, population density, lithology and vegetation coverage) were taken as the predictors. The controlling factors were improved by using the IF method. Based on the original controlling factors and the improved controlling factors, debris flow susceptibility maps were developed while using the statistical index (SI) model, the analytic hierarchy process (AHP) model, the random forest (RF) model, and their four integrated models. The results were compared using receiver operating characteristic (ROC) curve, and the spatial consistency of the debris flow susceptibility maps was analyzed while using Spearman’s rank correlation coefficients. The results show that the IF method that was used to improve the controlling factors can effectively enhance the performance of the debris flow susceptibility maps, with the IF-SI-RF model exhibiting the best performance in terms of debris flow susceptibility mapping. Full article
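Among the models compared, the statistical index (SI) model has a particularly compact form: the weight of a controlling-factor class is the logarithm of the debris-flow density within the class relative to the overall density. A sketch under our own naming (the paper's exact formulation may differ):

```python
import math

def statistical_index(events_in_class, cells_in_class, total_events, total_cells):
    """SI weight of a controlling-factor class: the log of the debris-flow
    density within the class over the overall density in the study area.
    Positive values mark classes more susceptible than average."""
    class_density = events_in_class / cells_in_class
    overall_density = total_events / total_cells
    if class_density <= 0:
        return float("-inf")  # no recorded events in this class
    return math.log(class_density / overall_density)
```

Summing these weights over all factor layers at each map cell gives a susceptibility score.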
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering II)

Open AccessReview
Twenty Years of Entropy Research: A Bibliometric Overview
Entropy 2019, 21(7), 694; https://doi.org/10.3390/e21070694 - 15 Jul 2019
Cited by 4 | Viewed by 1351
Abstract
Entropy, founded in 1999, is an emerging international journal in the field of entropy and information studies. In 2018, the journal celebrated its 20th anniversary, making it a fitting moment for a retrospective. In accordance with Entropy’s distinctive name and research area, this paper provides a bibliometric analysis that both looks back at the development of the entire entropy topic and traces the journal’s growth and influence during this process. Based on 123,063 records extracted from the Web of Science, the work analyzes, in sequence, publication outputs, highly cited literature, and reference co-citation networks, for the topic and the journal, respectively. The results indicate that the topic has become a tremendous research domain that is still expanding rapidly and is widely researched by many different disciplines. The most significant hotspots so far are the theoretical and practical innovations of graph entropy, permutation entropy, and pseudo-additive entropy. Furthermore, with its rapid growth in recent years, Entropy has attracted many of the topic’s leading authors and exhibits a distinctive geographical publication distribution. More importantly, the journal has made substantial contributions to major research areas, particularly serving as a spearhead in the studies of multiscale entropy and permutation entropy. Full article
(This article belongs to the Section Entropy Reviews)

Open AccessArticle
A Feature Extraction Method of Ship-Radiated Noise Based on Fluctuation-Based Dispersion Entropy and Intrinsic Time-Scale Decomposition
Entropy 2019, 21(7), 693; https://doi.org/10.3390/e21070693 - 15 Jul 2019
Cited by 4 | Viewed by 1118
Abstract
To improve the feature extraction of ship-radiated noise in a complex ocean environment, fluctuation-based dispersion entropy is used to extract the features of ten types of ship-radiated noise. Since fluctuation-based dispersion entropy analyzes the ship-radiated noise signal only at a single scale and cannot distinguish different types of ship-radiated noise effectively, a new method of ship-radiated noise feature extraction is proposed based on fluctuation-based dispersion entropy (FDispEn) and intrinsic time-scale decomposition (ITD). Firstly, ten types of ship-radiated noise signals are decomposed into a series of proper rotation components (PRCs) by ITD, and the FDispEn of each PRC is calculated. Then, the correlation between each PRC and the original signal is calculated, and the FDispEn of each PRC is analyzed to select the Max-relative PRC fluctuation-based dispersion entropy as the feature parameter. Finally, by comparing the Max-relative PRC fluctuation-based dispersion entropy of a certain number of the above ten types of ship-radiated noise signals with FDispEn, it is found that the Max-relative PRC fluctuation-based dispersion entropy is at the same level for similar ship-radiated noise but is distinct for different types of ship-radiated noise. The Max-relative PRC fluctuation-based dispersion entropy is fed as the feature vector into a support vector machine (SVM) classifier to classify and recognize the ten types of ship-radiated noise. The experimental results demonstrate that the recognition rate of the proposed method reaches 95.8763%. Consequently, the proposed method can effectively achieve the classification of ship-radiated noise. Full article
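A simplified, single-scale sketch conveys the idea behind fluctuation-based dispersion entropy (this is our illustrative reduction, not the paper's exact FDispEn): samples are mapped to c classes through the normal CDF, and the Shannon entropy of the class-difference ("fluctuation") patterns of length m is normalized by the log of the number of possible patterns.

```python
import math
from collections import Counter

def fdispen(x, m=3, c=5):
    """Simplified fluctuation-based dispersion entropy: normal-CDF class
    mapping, fluctuation patterns of adjacent class differences, normalized
    Shannon entropy over the (2c - 1)^(m - 1) possible patterns."""
    mu = sum(x) / len(x)
    sd = (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5 or 1.0
    # map each sample to a class 1..c via the normal CDF
    z = [min(max(int(c * 0.5 * (1 + math.erf((v - mu) / (sd * 2 ** 0.5)))) + 1, 1), c)
         for v in x]
    # fluctuation patterns: differences between adjacent classes in each embedding
    pats = Counter(tuple(z[i + t + 1] - z[i + t] for t in range(m - 1))
                   for i in range(len(x) - m + 1))
    n = sum(pats.values())
    h = -sum((v / n) * math.log(v / n) for v in pats.values())
    return h / math.log((2 * c - 1) ** (m - 1))
```

A constant signal yields a single pattern and entropy 0; noise spreads probability over many patterns and scores closer to 1.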
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics)

Open AccessArticle
Why the Tsirelson Bound? Bub’s Question and Fuchs’ Desideratum
Entropy 2019, 21(7), 692; https://doi.org/10.3390/e21070692 - 15 Jul 2019
Cited by 4 | Viewed by 1040
Abstract
To answer Wheeler’s question “Why the quantum?” via quantum information theory according to Bub, one must explain both why the world is quantum rather than classical and why the world is quantum rather than superquantum, i.e., “Why the Tsirelson bound?” We show that the quantum correlations and quantum states corresponding to the Bell basis states, which uniquely produce the Tsirelson bound for the Clauser–Horne–Shimony–Holt (CHSH) quantity, can be derived from conservation per no preferred reference frame (NPRF). A reference frame in this context is defined by a measurement configuration, just as with the light postulate of special relativity. We therefore argue that the Tsirelson bound is ultimately based on NPRF, just as are the postulates of special relativity. This constraint-based/principle answer to Bub’s question addresses Fuchs’ desideratum that we “take the structure of quantum theory and change it from this very overt mathematical speak ... into something like [special relativity].” Thus, the answer to Bub’s question per Fuchs’ desideratum is, “the Tsirelson bound obtains due to conservation per NPRF”. Full article
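The two bounds under discussion can be checked numerically: any local deterministic strategy caps the CHSH quantity at 2, while the singlet-state correlation E(a, b) = -cos(a - b), a Bell basis state, reaches the Tsirelson bound 2√2 (illustrative code, standard textbook settings):

```python
import math

def chsh(E, a, a2, b, b2):
    """CHSH quantity |E(a,b) - E(a,b') + E(a',b) + E(a',b')| for a correlation
    function E and measurement settings a, a', b, b'."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Quantum correlation for the spin singlet (a Bell basis state)
def singlet(x, y):
    return -math.cos(x - y)

# Any local deterministic strategy A(a), B(b) in {+1, -1} is bounded by 2
classical_max = max(abs(A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1)
                    for A0 in (1, -1) for A1 in (1, -1)
                    for B0 in (1, -1) for B1 in (1, -1))
```

With settings 0 and π/2 on one side and π/4 and 3π/4 on the other, chsh(singlet, ...) evaluates to 2√2 ≈ 2.828, while classical_max is 2.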
(This article belongs to the Section Quantum Information)

Open AccessArticle
An Information Entropy-Based Modeling Method for the Measurement System
Entropy 2019, 21(7), 691; https://doi.org/10.3390/e21070691 - 15 Jul 2019
Cited by 2 | Viewed by 916
Abstract
Measurement is a key method to obtain information from the real world and is widely used in human life. A unified model of measurement systems is critical to the design and optimization of measurement systems. However, the existing models of measurement systems are too abstract. To a certain extent, this makes it difficult to have a clear overall understanding of measurement systems and of how information acquisition is implemented. This also leads to limitations in the application of these models. Information entropy is a measure of the information or uncertainty of a random variable and has strong representation ability. In this paper, an information entropy-based modeling method for measurement systems is proposed. First, a modeling idea based on the viewpoint of information and uncertainty is described. Second, an entropy balance equation based on the chain rule for entropy is proposed for system modeling. Then, the entropy balance equation is used to establish the information entropy-based model of the measurement system. Finally, three cases of typical measurement units or processes are analyzed using the proposed method. Compared with the existing modeling approaches, the proposed method considers the modeling problem from the perspective of information and uncertainty. It focuses on the information loss of the measurand in the transmission process and the characterization of the specific role of the measurement unit. The proposed model can intuitively describe the processing and changes of information in the measurement system. It does not conflict with the existing models of the measurement system, but complements them, thus further enriching the existing measurement theory. Full article
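The chain rule underlying such an entropy balance, H(X, Y) = H(X) + H(Y|X), is easy to verify on empirical data (illustrative sketch with our own helper names; the paper's balance equation itself is not reproduced here):

```python
import math
from collections import Counter

def H(samples):
    """Shannon entropy (in bits) of the empirical distribution of `samples`."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def entropy_balance(joint):
    """Check the chain rule H(X, Y) = H(X) + H(Y|X) on joint samples (x, y):
    the total uncertainty splits into the input's uncertainty plus the
    uncertainty remaining about the output once the input is known."""
    hxy = H(joint)
    hx = H([x for x, _ in joint])
    hy_given_x = hxy - hx  # chain rule
    return hxy, hx, hy_given_x
```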
(This article belongs to the Section Information Theory, Probability and Statistics)

Open AccessArticle
A Sequence-Based Damage Identification Method for Composite Rotors by Applying the Kullback–Leibler Divergence, a Two-Sample Kolmogorov–Smirnov Test and a Statistical Hidden Markov Model
Entropy 2019, 21(7), 690; https://doi.org/10.3390/e21070690 - 15 Jul 2019
Cited by 1 | Viewed by 1024
Abstract
Composite structures undergo a gradual damage evolution from initial inter-fibre cracks to extended damage up to failure. However, most composites could remain in service despite the existence of damage. Prerequisite for a service extension is a reliable and component-specific damage identification. Therefore, a vibration-based damage identification method is presented that takes into consideration the gradual damage behaviour and the resulting changes of the structural dynamic behaviour of composite rotors. These changes are transformed into a sequence of distinct states and used as an input database for three diagnostic models, based on the Kullback–Leibler divergence, the two-sample Kolmogorov–Smirnov test and a statistical hidden Markov model. To identify the present damage state based on the damage-dependent modal properties, a sequence-based diagnostic system has been developed, which estimates the similarity between the present unclassified sequence and obtained sequences of damage-dependent vibration responses. The diagnostic performance evaluation delivers promising results for the further development of the proposed diagnostic method. Full article
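Two of the three diagnostic ingredients, the Kullback–Leibler divergence and the two-sample Kolmogorov–Smirnov statistic, have compact definitions; a minimal sketch (our naming, not the authors' implementation):

```python
import math
from bisect import bisect_right

def kl_divergence(p, q):
    """Discrete Kullback–Leibler divergence D(P || Q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def ks_statistic(a, b):
    """Two-sample Kolmogorov–Smirnov statistic: the largest gap between the
    empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    return max(abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
               for x in grid)
```

Both measures compare the distribution of a present vibration response with stored damage-state references; small values indicate a similar state.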
(This article belongs to the Section Multidisciplinary Applications)
