
Table of Contents

Entropy, Volume 19, Issue 2 (February 2017) – 40 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: It is argued that one should make a clear distinction between Shannon’s Measure of Information [...]
Open AccessArticle
Quantifying Synergistic Information Using Intermediate Stochastic Variables
Entropy 2017, 19(2), 85; https://doi.org/10.3390/e19020085 - 22 Feb 2017
Cited by 11 | Viewed by 2991
Abstract
Quantifying synergy among stochastic variables is an important open problem in information theory. Information synergy occurs when multiple sources together predict an outcome variable better than the sum of single-source predictions. It is an essential phenomenon in biological systems such as neuronal networks and cellular regulatory processes, where different information flows integrate to produce a single response, and it also appears in social cooperation processes and in statistical inference tasks in machine learning. Here we propose measures of synergistic entropy and synergistic information derived from first principles. The proposed measure relies on so-called synergistic random variables (SRVs), which are constructed to have zero mutual information about individual source variables but non-zero mutual information about the complete set of source variables. We prove several basic and desired properties of our measure, including bounds and additivity properties. In addition, we prove several important consequences of our measure, including the fact that different types of synergistic information may co-exist between the same sets of variables. A numerical implementation is provided, which we use to demonstrate that synergy is associated with resilience to noise. Our measure may be a marked step forward in the study of multivariate information theory and its numerous applications. Full article
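The canonical example of purely synergistic information is the XOR relation: neither input alone carries any information about the output, yet both together determine it completely. A minimal sketch of this phenomenon (an illustration only, not the authors' SRV construction):

```python
import itertools
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Empirical mutual information (in bits) from a list of (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

# Uniform binary inputs, output Y = X1 XOR X2
samples = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]

i1 = mutual_information([(x1, y) for x1, _, y in samples])          # I(X1; Y)
i2 = mutual_information([(x2, y) for _, x2, y in samples])          # I(X2; Y)
i12 = mutual_information([((x1, x2), y) for x1, x2, y in samples])  # I(X1, X2; Y)
print(i1, i2, i12)  # 0.0 0.0 1.0 -- the full bit is synergistic
```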
(This article belongs to the Section Complexity)

Open AccessArticle
The More You Know, the More You Can Grow: An Information Theoretic Approach to Growth in the Information Age
Entropy 2017, 19(2), 82; https://doi.org/10.3390/e19020082 - 22 Feb 2017
Cited by 6 | Viewed by 3146
Abstract
In our information age, information alone has become a driver of social growth. Information is the fuel of “big data” companies and the decision-making compass of policy makers. Can we quantify how much information leads to how much social growth potential? Information theory is used to show that information (in bits) is effectively a quantifiable ingredient of growth. The article presents a single equation that makes it possible both to describe hands-off natural selection of evolving populations and to optimize population fitness in uncertain environments through intervention. The setup analyzes the communication channel between the growing population and its uncertain environment. The role of information in population growth can be thought of as the optimization of information flow over this (more or less) noisy channel. Optimized growth implies that the population absorbs all communicated environmental structure during evolutionary updating (measured by their mutual information). This is achieved by endogenously adjusting the population structure to the exogenous environmental pattern (through bet-hedging/portfolio management). The setup can be applied to decompose the growth of any discrete population in stationary, stochastic environments (economic, cultural, or biological). Two empirical examples from the information economy reveal inherent trade-offs among the involved information quantities during growth optimization. Full article
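The identity behind this setup is Kelly's classic result: under fair odds, access to an environmental cue raises the achievable log-growth rate by exactly the mutual information between environment and cue. A small numerical check of that identity under an illustrative joint distribution (the numbers are hypothetical, not from the paper):

```python
from math import log2

# Hypothetical joint distribution p(x, y): environment state x, observed cue y
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x = {x: sum(v for (xx, _), v in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(v for (_, yy), v in p_xy.items() if yy == y) for y in (0, 1)}
odds = 2.0  # fair odds for a two-state environment

# Log-growth rate when allocating proportionally to p(x), i.e., without the cue
growth_blind = sum(p_x[x] * log2(odds * p_x[x]) for x in p_x)

# Log-growth rate when allocating proportionally to p(x | y), i.e., using the cue
growth_informed = sum(v * log2(odds * v / p_y[y]) for (x, y), v in p_xy.items())

# Mutual information I(X; Y) between environment and cue
mi = sum(v * log2(v / (p_x[x] * p_y[y])) for (x, y), v in p_xy.items())

print(growth_informed - growth_blind, mi)  # the two quantities coincide
```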
(This article belongs to the Section Information Theory, Probability and Statistics)

Open AccessArticle
Using k-Mix-Neighborhood Subdigraphs to Compute Canonical Labelings of Digraphs
Entropy 2017, 19(2), 79; https://doi.org/10.3390/e19020079 - 22 Feb 2017
Cited by 1 | Viewed by 3271
Abstract
This paper presents a novel theory and method to calculate the canonical labelings of digraphs, whose definition is entirely different from the traditional definition of Nauty. It establishes the mutual relationships between the canonical labeling of a digraph and the canonical labeling of its complement graph. It systematically examines the link between computing the canonical labeling of a digraph and the k-neighborhood and k-mix-neighborhood subdigraphs. To facilitate the presentation, it introduces several concepts, including the mix diffusion outdegree sequence and the entire mix diffusion outdegree sequence. For each node in a digraph G, it assigns an attribute m_NearestNode to enhance the accuracy of calculating the canonical labeling. Four theorems proved here demonstrate how to determine the first nodes added into MaxQ(G), and two further theorems deal with identifying the second nodes added into MaxQ(G). When computing C_max(G), if MaxQ(G) already contains the first i vertices u_1, u_2, ..., u_i, the Diffusion Theorem provides a guideline on how to choose the subsequent node of MaxQ(G). In addition, the Mix Diffusion Theorem shows that the (i+1)-th vertex of MaxQ(G) for computing C_max(G) is selected from the open mix-neighborhood subdigraph N⁺⁺(Q) of the node set Q = {u_1, u_2, ..., u_i}. Two further theorems calculate C_max(G) for disconnected digraphs. The four algorithms implemented here illustrate how to calculate MaxQ(G) of a digraph. Through software testing, the correctness of our algorithms is preliminarily verified. Our method can be utilized to mine frequent subdigraphs. We also conjecture that if there exists a vertex v ∈ S⁺(G) satisfying C_max(G − v) ≥ C_max(G − w) for each w ∈ S⁺(G) with w ≠ v, then u_1 = v for MaxQ(G). Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open AccessArticle
Sequential Batch Design for Gaussian Processes Employing Marginalization †
Entropy 2017, 19(2), 84; https://doi.org/10.3390/e19020084 - 21 Feb 2017
Cited by 1 | Viewed by 1867
Abstract
Within the Bayesian framework, we utilize Gaussian processes for parametric studies of long-running computer codes. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. Employing the sum over variances, which indicates the quality of the fit, as the utility function, we establish an optimized and automated sequential parameter selection procedure. However, it is also often desirable to utilize the parallel running capabilities of present computer technology and abandon the sequential parameter selection for a faster overall turn-around time (wall-clock time). This paper proposes to achieve this by marginalizing over the expected outcomes at optimized test points in order to set up a pool of starting values for batch execution. For a one-dimensional test case, the numerical results are validated with the analytical solution. Eventually, a systematic convergence study demonstrates the advantage of the optimized approach over randomly chosen parameter settings. Full article
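The sequential step described above can be sketched in a few lines: condition a GP on the points evaluated so far and propose the candidate with the largest posterior variance. This is a generic illustration with an assumed squared-exponential kernel and illustrative length scale, not the paper's actual utility function or code:

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def next_design_point(x_train, candidates, noise=1e-6):
    """Candidate with maximal GP posterior variance (depends only on input locations)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(candidates, x_train)
    L = np.linalg.cholesky(K)
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v ** 2, axis=0)  # prior variance is rbf(x, x) = 1
    return candidates[np.argmax(var)]

# Toy run: three simulations done so far, choose the next parameter on a grid
x_train = np.array([0.1, 0.5, 0.9])
grid = np.linspace(0.0, 1.0, 101)
x_next = next_design_point(x_train, grid)
print(x_next)  # lands far from every already-evaluated point
```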
(This article belongs to the Special Issue Selected Papers from MaxEnt 2016)

Open AccessArticle
Breakdown Point of Robust Support Vector Machines
Entropy 2017, 19(2), 83; https://doi.org/10.3390/e19020083 - 21 Feb 2017
Cited by 1 | Viewed by 2496
Abstract
Support vector machine (SVM) is one of the most successful learning methods for solving classification problems. Despite its popularity, SVM has the serious drawback that it is sensitive to outliers in training samples. The penalty on misclassification is defined by a convex loss called the hinge loss, and the unboundedness of the convex loss causes the sensitivity to outliers. To deal with outliers, robust SVMs have been proposed by replacing the convex loss with a non-convex bounded loss called the ramp loss. In this paper, we study the breakdown point of robust SVMs. The breakdown point is a robustness measure that is the largest amount of contamination such that the estimated classifier still gives information about the non-contaminated data. The main contribution of this paper is to show an exact evaluation of the breakdown point of robust SVMs. For learning parameters such as the regularization parameter, we derive a simple formula that guarantees the robustness of the classifier. When the learning parameters are determined with a grid search using cross-validation, our formula works to reduce the number of candidate search points. Furthermore, the theoretical findings are confirmed in numerical experiments. We show that the statistical properties of robust SVMs are well explained by a theoretical analysis of the breakdown point. Full article
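The difference between the two losses is easy to state concretely: the hinge loss grows linearly without bound as the margin becomes more negative, while the ramp loss is clipped. A minimal sketch (the clipping level 1 − s with s = −1 follows a common ramp-loss convention and may differ from the paper's parametrization):

```python
def hinge_loss(margin):
    """Convex hinge loss: unbounded, so one extreme outlier can dominate the fit."""
    return max(0.0, 1.0 - margin)

def ramp_loss(margin, s=-1.0):
    """Ramp loss: the hinge loss clipped at 1 - s, so each outlier contributes at most 2."""
    return min(max(0.0, 1.0 - margin), 1.0 - s)

for m in (2.0, 0.0, -1.0, -100.0):  # margin = y * f(x)
    print(m, hinge_loss(m), ramp_loss(m))
```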

Open AccessArticle
Towards Operational Definition of Postictal Stage: Spectral Entropy as a Marker of Seizure Ending
Entropy 2017, 19(2), 81; https://doi.org/10.3390/e19020081 - 21 Feb 2017
Cited by 4 | Viewed by 2334
Abstract
The postictal period is characterized by several neurological alterations, but its exact limits are clinically, or even electroencephalographically, hard to determine in most cases. We aim to provide quantitative functions or conditions with a clearly distinguishable behavior during the ictal-postictal transition. Spectral methods were used to analyze foramen ovale electrode (FOE) recordings during the ictal/postictal transition in 31 seizures of 15 patients with strictly unilateral drug-resistant temporal lobe epilepsy. In particular, density of links, spectral entropy, and relative spectral power were analyzed. Simple partial seizures are accompanied by an ipsilateral increase in relative Delta power and a decrease in synchronization in 66% and 91% of the cases, respectively, after seizure offset. Complex partial seizures showed a decrease in the spectral entropy (SE) in 94% of cases, on both ipsilateral and contralateral sides (100% and 73%, respectively), mainly due to an increase in relative Delta activity. Seizure offset is defined as the moment at which the “seizure termination mechanisms” actually end, which is quantified by the spectral entropy value. We propose as a definition for the postictal start the time when the ipsilateral SE reaches its first global minimum. Full article
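Spectral entropy here is the Shannon entropy of the normalized power spectrum: it is low when power concentrates in a narrow band (as with increased Delta activity) and high for broadband activity. A minimal sketch of one plausible implementation (the sampling rate and normalization are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def spectral_entropy(x):
    """Normalized spectral entropy: near 0 for a single tone, near 1 for a flat spectrum."""
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    psd = psd[1:]                        # drop the DC bin
    p = psd / psd.sum()                  # power spectrum as a probability distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(psd)))

fs = 256                                 # hypothetical sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)

delta_like = np.sin(2 * np.pi * 3 * t)   # narrowband 3 Hz oscillation
broadband = rng.standard_normal(t.size)  # noise-like activity

print(spectral_entropy(delta_like), spectral_entropy(broadband))  # low vs. high
```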
(This article belongs to the Special Issue Entropy and Electroencephalography II)

Open AccessArticle
A Risk-Free Protection Index Model for Portfolio Selection with Entropy Constraint under an Uncertainty Framework
Entropy 2017, 19(2), 80; https://doi.org/10.3390/e19020080 - 21 Feb 2017
Cited by 2 | Viewed by 1423
Abstract
This paper aims to develop a risk-free protection index model for portfolio selection based on uncertainty theory. First, the returns of risk assets are assumed to be uncertain variables subject to reputable experts’ evaluations. Second, under this assumption, combining this with the risk-free interest rate, we define a risk-free protection index (RFPI), which measures the degree of protection when risk assets incur losses. Third, since proportion entropy serves as a complementary means of reducing risk through a preset diversification requirement, we put forward a risk-free protection index model with an entropy constraint under an uncertainty framework by applying the RFPI, Huang’s risk index model (RIM), and the mean-variance-entropy model (MVEM). Furthermore, to solve our portfolio model, an algorithm is given to estimate the uncertain expected return and standard deviation of different risk assets by applying the Delphi method. Finally, an example is provided to show that the risk-free protection index model performs better than the traditional MVEM and RIM. Full article
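The proportion entropy used as the diversification constraint is simply the Shannon entropy of the portfolio weights, which is maximized by equal weighting. A minimal sketch (the threshold mentioned in the comment is illustrative, not the paper's value):

```python
from math import log

def proportion_entropy(weights):
    """Shannon entropy of portfolio weights; maximal (ln n) for equal weighting."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return -sum(w * log(w) for w in weights if w > 0)

concentrated = [0.85, 0.05, 0.05, 0.05]
diversified = [0.25, 0.25, 0.25, 0.25]

print(proportion_entropy(concentrated), proportion_entropy(diversified))
# a constraint such as proportion_entropy(x) >= 1.0 rules out the concentrated portfolio
```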

Open AccessArticle
Admitting Spontaneous Violations of the Second Law in Continuum Thermomechanics
Entropy 2017, 19(2), 78; https://doi.org/10.3390/e19020078 - 21 Feb 2017
Cited by 2 | Viewed by 1799
Abstract
We survey new extensions of continuum mechanics incorporating spontaneous violations of the Second Law (SL), which involve the viscous flow and heat conduction. First, following an account of the Fluctuation Theorem (FT) of statistical mechanics that generalizes the SL, the irreversible entropy is shown to evolve as a submartingale. Next, a stochastic thermomechanics is formulated consistent with the FT, which, according to a revision of classical axioms of continuum mechanics, must be set up on random fields. This development leads to a reformulation of thermoviscous fluids and inelastic solids. These two unconventional constitutive behaviors may jointly occur in nano-poromechanics. Full article
(This article belongs to the Special Issue Limits to the Second Law of Thermodynamics: Experiment and Theory)
Open AccessArticle
User-Centric Key Entropy: Study of Biometric Key Derivation Subject to Spoofing Attacks
Entropy 2017, 19(2), 70; https://doi.org/10.3390/e19020070 - 21 Feb 2017
Cited by 4 | Viewed by 2307
Abstract
Biometric data can be used as input for PKI key pair generation. The concept of not saving the private key is very appealing, but the implementation of such a system should not be rushed, because it might prove less secure than the current PKI infrastructure. A single biometric characteristic can be easily spoofed, so it was believed that multi-modal biometrics would offer more security, because spoofing two or more biometrics would be very hard. This notion of increased security of multi-modal biometric systems was disproved for authentication and matching: studies showed that multi-modal biometric systems are not only no more secure, but also introduce additional vulnerabilities. This paper is a study on the implications of spoofing biometric data for retrieving the derived key. We demonstrate that spoofed biometrics can yield the same key, which in turn allows an attacker to obtain the private key. A practical implementation is proposed using fingerprint and iris as biometrics and a fuzzy extractor for biometric key extraction. Our experiments show what happens when the biometric data is spoofed for both uni-modal and multi-modal systems. In the case of the multi-modal system, tests were performed when spoofing one biometric or both. We provide a detailed analysis of every scenario with regard to successful tests and overall key entropy. Our paper defines a biometric PKI scenario and an in-depth security analysis for it. The analysis can be viewed as a blueprint for implementations of future similar systems, because it highlights the main security vulnerabilities of bioPKI. The analysis is not constrained to the biometric part of the system, but covers CA security, sensor security, communication interception, RSA encryption vulnerabilities regarding key entropy, and much more. Full article

Open AccessArticle
Energy Transfer between Colloids via Critical Interactions
Entropy 2017, 19(2), 77; https://doi.org/10.3390/e19020077 - 17 Feb 2017
Cited by 9 | Viewed by 2077
Abstract
We report the observation of a temperature-controlled synchronization of two Brownian particles in a binary mixture close to the critical point of the demixing transition. The two beads are trapped by two optical tweezers whose distance is periodically modulated. Motion synchronization of the two beads appears as the critical temperature is approached. In contrast, when the fluid is far from its critical temperature, the displacements of the two beads are uncorrelated. Small changes in temperature can radically change the global dynamics of the system. We show that the synchronization is induced by the critical Casimir forces. Finally, we present measurements of the energy transfers inside the system produced by the critical interaction. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)

Open AccessArticle
A Comparison of Postural Stability during Upright Standing between Normal and Flatfooted Individuals, Based on COP-Based Measures
Entropy 2017, 19(2), 76; https://doi.org/10.3390/e19020076 - 16 Feb 2017
Cited by 3 | Viewed by 2062
Abstract
Aging causes foot arches to collapse, possibly leading to foot deformities and falls. This paper proposes a set of measures involving an entropy-based method used for two groups of young adults with dissimilar foot arches to explore and quantify postural stability on a force plate in an upright position. Fifty-four healthy young adults aged 18–30 years participated in this study. These were categorized into two groups: normal (37 participants) and flatfooted (17 participants). We collected the center of pressure (COP) displacement trajectories of participants during upright standing, on a force plate, in a static position, with eyes open (EO) or eyes closed (EC). These nonstationary time-series signals were quantified using entropy-based measures and traditional measures used to assess postural stability, and the results obtained from these measures were compared. The appropriate combinations of entropy-based measures revealed that, with respect to postural stability, the two groups differed significantly (p < 0.05) under both EO and EC conditions. The traditional, commonly used COP-based measures only revealed differences under EO conditions. Entropy-based measures are thus suitable for examining differences in postural stability for flatfooted people, and may be used by clinicians after further refinement. Full article
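Sample entropy is a typical member of the entropy-based family applied to COP trajectories: it measures irregularity as the negative log of the probability that patterns similar for m samples remain similar for m + 1. This generic sketch illustrates that measure, not the authors' exact combination of measures:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): -log of the conditional probability that templates matching
    for m points also match at m + 1 points (higher = more irregular)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n_templates = len(x) - m  # same template count for both lengths

    def match_pairs(length):
        t = np.array([x[i:i + length] for i in range(n_templates)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)  # Chebyshev distance
        return (np.count_nonzero(d <= r) - n_templates) / 2        # exclude self-matches

    return float(-np.log(match_pairs(m + 1) / match_pairs(m)))

rng = np.random.default_rng(1)
t = np.linspace(0, 6 * np.pi, 300)
regular = np.sin(t)                   # predictable trajectory -> low SampEn
irregular = rng.standard_normal(300)  # noise-like trajectory -> high SampEn
print(sample_entropy(regular), sample_entropy(irregular))
```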
(This article belongs to the Special Issue Multivariate Entropy Measures and Their Applications)

Open AccessArticle
Information Loss in Binomial Data Due to Data Compression
Entropy 2017, 19(2), 75; https://doi.org/10.3390/e19020075 - 16 Feb 2017
Cited by 2 | Viewed by 1814
Abstract
This paper explores the idea of information loss through data compression, as occurs in the course of any data analysis, illustrated via detailed consideration of the Binomial distribution. We examine situations where the full sequence of binomial outcomes is retained, situations where only the total number of successes is retained, and in-between situations. We show that a familiar decomposition of the Shannon entropy H can be rewritten as a decomposition into H_total, H_lost, and H_comp, or the total, lost and compressed (remaining) components, respectively. We relate this new decomposition to Landauer’s principle, and we discuss some implications for the “information-dynamic” theory being developed in connection with our broader program to develop a measure of statistical evidence on a properly calibrated scale. Full article
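The decomposition can be verified numerically. Since the success count T is a function of the full sequence X, H(X) = H(T) + H(X | T), and for i.i.d. Bernoulli trials every sequence with k successes is equally likely given T = k, so H(X | T = k) = log2 C(n, k). A short check of H_total = H_comp + H_lost under these assumptions:

```python
from math import comb, log2

n, p = 10, 0.3

# H_total: entropy of the full Bernoulli sequence = n * h(p)
h_total = n * (-p * log2(p) - (1 - p) * log2(1 - p))

# H_comp: entropy of the retained statistic T = number of successes ~ Binomial(n, p)
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
h_comp = -sum(q * log2(q) for q in pmf if q > 0)

# H_lost: entropy of the sequence given T, averaged over T
h_lost = sum(pmf[k] * log2(comb(n, k)) for k in range(n + 1))

print(h_total, h_comp + h_lost)  # equal: compression loses exactly h_lost bits
```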
(This article belongs to the Section Information Theory, Probability and Statistics)

Open AccessArticle
An Approach to Data Analysis in 5G Networks
Entropy 2017, 19(2), 74; https://doi.org/10.3390/e19020074 - 16 Feb 2017
Cited by 7 | Viewed by 2220
Abstract
5G networks are expected to provide significant advances in network management compared to traditional mobile infrastructures by leveraging intelligence capabilities such as data analysis, prediction, pattern recognition and artificial intelligence. The key idea behind these actions is to facilitate the decision-making process in order to solve or mitigate common network problems in a dynamic and proactive way. In this context, this paper presents the design of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) Analyzer Module, whose main objective is to identify suspicious or unexpected situations based on metrics provided by different network components and sensors. The SELFNET Analyzer Module provides a modular architecture driven by use cases where analytic functions can be easily extended. This paper also proposes the data specification that defines the data inputs to be taken into account in the diagnosis process. This data specification has been implemented with different use cases within the SELFNET project, proving its effectiveness. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)

Open AccessArticle
Identifying Critical States through the Relevance Index
Entropy 2017, 19(2), 73; https://doi.org/10.3390/e19020073 - 16 Feb 2017
Cited by 8 | Viewed by 1996
Abstract
The identification of critical states is a major task in complex systems, and the availability of measures to detect such conditions is of utmost importance. In general, criticality refers to the existence of two qualitatively different behaviors that the same system can exhibit, depending on the values of some parameters. In this paper, we show that the relevance index may be effectively used to identify critical states in complex systems. The relevance index was originally developed to identify relevant sets of variables in dynamical systems, but in this paper, we show that it is also able to capture features of criticality. The index is applied to two prominent examples showing slightly different meanings of criticality, namely the Ising model and random Boolean networks. Results show that this index is maximized at critical states and is robust with respect to system size and sampling effort. It can therefore be used to detect criticality. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Open AccessArticle
Classification of Normal and Pre-Ictal EEG Signals Using Permutation Entropies and a Generalized Linear Model as a Classifier
Entropy 2017, 19(2), 72; https://doi.org/10.3390/e19020072 - 16 Feb 2017
Cited by 14 | Viewed by 2318
Abstract
In this contribution, a comparison between different permutation entropies as classifiers of electroencephalogram (EEG) records corresponding to normal and pre-ictal states is made. A discrete probability distribution function derived from symbolization techniques applied to the EEG signal is used to calculate the Tsallis entropy, Shannon entropy, Renyi entropy, and Min entropy, and they are used separately as the only independent variable in a logistic regression model in order to evaluate its capacity as a classification variable in an inferential manner. The area under the Receiver Operating Characteristic (ROC) curve, along with the accuracy, sensitivity, and specificity, are used to compare the models. All the permutation entropies are excellent classifiers, with an accuracy greater than 94.5% in every case, and a sensitivity greater than 97%. The symbolization technique that accounts for amplitude retains more information about the signal than its counterparts, and it could be a good candidate for automatic classification of EEG signals. Full article
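All the entropies compared in the paper are computed from the same ingredient: the distribution of ordinal patterns (permutations) of short windows of the signal. A minimal sketch of that distribution and its Shannon entropy (the embedding dimension m = 3 is an illustrative choice, and this omits the amplitude-aware variant the paper favors):

```python
import random
from collections import Counter
from math import log2

def ordinal_patterns(x, m=3):
    """Rank tuple (ordinal pattern) of every length-m window of the series x."""
    return [tuple(sorted(range(m), key=lambda j: x[i + j]))
            for i in range(len(x) - m + 1)]

def permutation_entropy(x, m=3):
    """Shannon entropy (bits) of the ordinal-pattern distribution; max is log2(m!)."""
    counts = Counter(ordinal_patterns(x, m))
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

random.seed(0)
monotone = list(range(100))                     # a single ordinal pattern occurs
noisy = [random.random() for _ in range(1000)]  # all 3! = 6 patterns, near-uniform

print(permutation_entropy(monotone), permutation_entropy(noisy))  # 0 vs. near log2(6)
```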
(This article belongs to the Special Issue Entropy and Electroencephalography II)

Open AccessArticle
Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss
Entropy 2017, 19(2), 71; https://doi.org/10.3390/e19020071 - 16 Feb 2017
Cited by 17 | Viewed by 2672
Abstract
Williams and Beer (2010) proposed a nonnegative mutual information decomposition, based on the construction of information gain lattices, which allows separating the information that a set of variables contains about another variable into components, interpretable as the unique information of one variable, or redundant and synergy components. In this work, we extend this framework focusing on the lattices that underpin the decomposition. We generalize the type of constructible lattices and examine the relations between different lattices, for example, relating bivariate and trivariate decompositions. We point out that, in information gain lattices, redundancy components are invariant across decompositions, but unique and synergy components are decomposition-dependent. Exploiting the connection between different lattices, we propose a procedure to construct, in the general multivariate case, information gain decompositions from measures of synergy or unique information. We then introduce an alternative type of lattices, information loss lattices, with the role and invariance properties of redundancy and synergy components reversed with respect to gain lattices, and which provide an alternative procedure to build multivariate decompositions. We finally show how information gain and information loss dual lattices lead to a self-consistent unique decomposition, which allows a deeper understanding of the origin and meaning of synergy and redundancy. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Open AccessArticle
Two Thermoeconomic Diagnosis Methods Applied to Representative Operating Data of a Commercial Transcritical Refrigeration Plant
Entropy 2017, 19(2), 69; https://doi.org/10.3390/e19020069 - 15 Feb 2017
Cited by 5 | Viewed by 1625
Abstract
In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the quality of malfunction identification. Both methods were applicable to locate and categorise the malfunctions when using steady-state data without measurement uncertainties. With the introduction of measurement uncertainty, the categorisation of malfunctions became increasingly difficult, depending on the magnitude of the uncertainties. Two different uncertainty scenarios were evaluated, as the use of repeated measurements yields a lower magnitude of uncertainty. The two methods show similar performance in the presented study for both of the considered measurement uncertainty scenarios. However, only in the low measurement uncertainty scenario are both methods applicable to locate the causes of the malfunctions. For both scenarios, an outlier limit was found, which determines whether a high relative indicator can be rejected on the basis of measurement uncertainty. For high uncertainties, the threshold value of the relative indicator was 35, whereas for low uncertainties one of the methods resulted in a threshold at 8. Additionally, the contribution of different measuring instruments to the relative indicator in two central components was analysed, showing that the contribution was component-dependent. Full article
(This article belongs to the Special Issue Thermoeconomics for Energy Efficiency)
Open Access Article
Kinetic Theory of a Confined Quasi-Two-Dimensional Gas of Hard Spheres
Entropy 2017, 19(2), 68; https://doi.org/10.3390/e19020068 - 14 Feb 2017
Cited by 4 | Viewed by 2130
Abstract
The dynamics of a system of hard spheres enclosed between two parallel plates separated by a distance smaller than two particle diameters is described at the level of kinetic theory. The interest focuses on the behavior of the quasi-two-dimensional fluid seen when looking at the system from above or below. In the first part, a collisional model for the effective two-dimensional dynamics is analyzed. Although the model describes quite well the homogeneous evolution observed in the experiments, it is shown to fail to predict the existence of non-equilibrium phase transitions, and in particular the bimodal regime exhibited by the real system. A critical analysis of the model is presented, and as a starting point toward a more accurate description, the Boltzmann equation for the quasi-two-dimensional gas is derived. In the elastic case, the solutions of the equation verify an H-theorem, implying a monotonic tendency to a non-uniform steady state. As an example of application of the kinetic equation, the evolution equations for the vertical and horizontal temperatures of the system are derived in the homogeneous approximation, and the results are compared with molecular dynamics simulations. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Open Access Article
An Android Malicious Code Detection Method Based on Improved DCA Algorithm
Entropy 2017, 19(2), 65; https://doi.org/10.3390/e19020065 - 11 Feb 2017
Cited by 4 | Viewed by 2553
Abstract
Recently, Android malicious code has increased dramatically, and reinforcement technology has become increasingly powerful. Owing to the development of code obfuscation and polymorphic deformation techniques, current Android static detection methods that select features from the semantics of the application source code cannot completely extract a malware's code features, while static methods whose features are obtained only from the AndroidManifest.xml file are easily affected by useless permissions. Current Android static detection methods therefore have notable limitations. Dynamic detection algorithms, in turn, mostly require a customized system or system root permissions. Based on the Dendritic Cell Algorithm (DCA), this paper proposes an Android malware detection algorithm that achieves a higher detection rate, does not need to modify the system, and reduces the impact of code obfuscation to a certain degree. The algorithm is applied to a detection method based on the Dalvik disassembly sequence and the application programming interface (API) calling sequence. The designed experiments verify the effectiveness of this method for the detection of Android malware. Full article
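The Dendritic Cell Algorithm classifies by fusing several input signals (classically PAMP, danger, and safe signals) into a costimulation value and a mature versus semi-mature context, then scoring each monitored item by the fraction of mature presentations, the MCAV. A minimal, illustrative sketch of that fusion step follows; the weights and the simple threshold rule are hypothetical placeholders, not the improved DCA of the paper:

```python
def dc_context(pamp, danger, safe, w=(2.0, 1.0, 2.0)):
    # one dendritic-cell "signal fusion" step: costimulation (csm) plus a
    # mature vs. semi-mature context decision (weights are illustrative)
    csm = w[0] * pamp + w[1] * danger - w[2] * safe
    mature = w[0] * pamp + w[1] * danger
    semimature = w[2] * safe
    return csm, 1 if mature > semimature else 0

def mcav(samples):
    # mature-context antigen value: fraction of samples presented as mature;
    # a high MCAV would flag the monitored app as likely malicious
    contexts = [dc_context(*s)[1] for s in samples]
    return sum(contexts) / len(contexts)
```

In the full DCA, a population of cells with heterogeneous thresholds accumulates these signals over time; the single-step rule above only shows the scoring idea.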
Open Access Article
Investigation into Multi-Temporal Scale Complexity of Streamflows and Water Levels in the Poyang Lake Basin, China
Entropy 2017, 19(2), 67; https://doi.org/10.3390/e19020067 - 10 Feb 2017
Cited by 7 | Viewed by 1856
Abstract
The streamflow and water level complexity of the Poyang Lake basin has been investigated over multiple time-scales using daily observations of the water level and streamflow spanning from 1954 through 2013. The composite multiscale sample entropy was applied to measure the complexity, and the Mann-Kendall algorithm was applied to detect temporal changes in the complexity. The results show that the streamflow and water level complexity increases as the time-scale increases: the sample entropy of the streamflow increases as the time-scale grows from a daily to a seasonal scale, and the sample entropy of the water level increases as the time-scale grows from a daily to a monthly scale. The water outflows of Poyang Lake, which are impacted mainly by the inflow processes, lake regulation, and the streamflow processes of the Yangtze River, are more complex than the water inflows. Over most of the time-scales between the daily and monthly scales, the streamflow and water level complexity is dominated by an increasing trend, indicating enhanced randomness, disorderliness, and irregularity of the streamflows and water levels. This investigation can provide a better understanding of the hydrological features of large freshwater lakes. Ongoing research will analyze the mechanisms behind the changes in streamflow and water level complexity within the context of climate change and anthropogenic activities. Full article
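Composite multiscale sample entropy averages the sample entropy of the coarse-grained series over every possible starting offset at each scale, which stabilizes the estimate on short records. A rough Python sketch, assuming the standard SampEn definition with the Chebyshev distance and tolerance r times the series standard deviation (the parameter values here are illustrative, not those of the study):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r): -ln(A/B), where B counts template pairs of length m and
    # A those of length m+1 within tolerance r*std under the Chebyshev distance
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def matches(length):
        emb = np.array([x[i:i + length] for i in range(n - length + 1)])
        total = 0
        for i in range(len(emb) - 1):
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            total += int(np.sum(d <= tol))
        return total

    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def cmse(x, scale, m=2, r=0.2):
    # composite multiscale sample entropy: average SampEn over all `scale`
    # coarse-grained series obtained by shifting the averaging-window start
    x = np.asarray(x, dtype=float)
    vals = []
    for k in range(scale):
        nseg = (len(x) - k) // scale
        grained = x[k:k + nseg * scale].reshape(nseg, scale).mean(axis=1)
        vals.append(sample_entropy(grained, m, r))
    return float(np.mean(vals))
```

A regular signal (e.g. a sine wave) yields a much lower SampEn than white noise, which is the sense in which the measure quantifies "complexity".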
Open Access Concept Paper
Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies
Entropy 2017, 19(2), 66; https://doi.org/10.3390/e19020066 - 10 Feb 2017
Cited by 6 | Viewed by 2990
Abstract
The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning, and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered a potential safeguard for the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies that incorporate rarity factors associated with the Gini-Simpson and Shannon measures, respectively. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions for the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using the Shannon weighted entropy and a square-root utility function, corresponding to risk-averse behavior associated with the precautionary principle of safeguarding landscape diversity as an anchor for the provision of ecosystem services and other externalities. Further developments are suggested, mainly regarding the hypothesis that the decision models outlined here could be used to revisit the stability-complexity debate in ecological studies. Full article
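Weighted entropies of this kind attach a weight (for instance, a rarity factor) to each land-cover class before aggregating its diversity contribution. A minimal sketch of weighted Shannon and weighted Gini-Simpson measures, with the caveat that the weight vectors used below are illustrative choices, not the paper's rarity factors:

```python
import math

def weighted_shannon(p, w):
    # Guiasu-style weighted Shannon entropy: -sum_i w_i * p_i * ln(p_i)
    return -sum(wi * pi * math.log(pi) for pi, wi in zip(p, w) if pi > 0)

def weighted_gini_simpson(p, w):
    # weighted Gini-Simpson diversity: sum_i w_i * p_i * (1 - p_i)
    return sum(wi * pi * (1 - pi) for pi, wi in zip(p, w))
```

With unit weights both reduce to the classical Shannon and Gini-Simpson indices; up-weighting a class raises its contribution to the landscape diversity score.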
(This article belongs to the Special Issue Entropy in Landscape Ecology)
Open Access Article
Bullwhip Entropy Analysis and Chaos Control in the Supply Chain with Sales Game and Consumer Returns
Entropy 2017, 19(2), 64; https://doi.org/10.3390/e19020064 - 10 Feb 2017
Cited by 10 | Viewed by 2023
Abstract
In this paper, we study a supply chain system consisting of one manufacturer and two retailers: a traditional retailer and an online retailer. To gain a larger market share, the retailers often take sales as a decision-making variable in the competition game. We analyze the bullwhip effect in the supply chain with a sales game and consumer returns via the theory of entropy and complexity, and apply the delayed feedback control method to control the system's chaotic state. The impact of a statutory 7-day no-reason return policy for online retailers is also investigated. Bounded rational expectations are adopted to forecast future demand in the sales game system with weak noise. Our results show that high return rates hurt the profits of both retailers, and that the adjustment speed of the bounded rational sales expectation has an important impact on the bullwhip effect. There is a stable region for the retailers in which the bullwhip effect does not appear, while the supply chain system suffers a severe bullwhip effect in the quasi-periodic and quasi-chaotic states. Using the delayed feedback control method, chaos control of the sales game can be achieved and the bullwhip effect effectively mitigated. Full article
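Delayed feedback (Pyragas-type) control adds a term proportional to the difference between a delayed state and the current state, so the control input vanishes once the targeted orbit is stabilized. The paper applies it to its sales-game model; purely as an illustration of the technique on a generic chaotic system, here is a sketch on the logistic map, with the gain and parameters chosen by hand:

```python
import numpy as np

def logistic_dfc(r=3.8, K=-0.6, x0=0.72, x1=0.74, steps=300):
    # x_{n+1} = r*x_n*(1 - x_n) + K*(x_{n-1} - x_n); the feedback term is
    # zero on the period-1 orbit, so the map's fixed point is unchanged
    xs = [x0, x1]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]) + K * (xs[-2] - xs[-1]))
    return np.array(xs)
```

With K = 0 the map at r = 3.8 is chaotic; with K = -0.6 (inside the stability range given by linearizing about the fixed point) the trajectory settles on x* = 1 - 1/r, and the control signal decays to zero.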
(This article belongs to the Section Complexity)
Open Access Article
Response Surface Methodology Control Rod Position Optimization of a Pressurized Water Reactor Core Considering Both High Safety and Low Energy Dissipation
Entropy 2017, 19(2), 63; https://doi.org/10.3390/e19020063 - 10 Feb 2017
Cited by 3 | Viewed by 2192
Abstract
Response Surface Methodology (RSM) is introduced to optimize the control rod positions in a pressurized water reactor (PWR) core. The widely used 3D-IAEA benchmark problem is selected as the typical PWR core, and the neutron flux field is solved. In addition, some thermal parameters are assumed in order to obtain the temperature distribution. The total and local entropy production is then calculated to evaluate the energy dissipation. Using RSM, three optimization objectives are pursued: minimizing the power peak factor Pmax, the peak temperature Tmax, and the total entropy production Stot. These parameters reflect the safety and energy dissipation in the core. Finally, an optimization scheme is obtained that reduces Pmax, Tmax and Stot by 23%, 8.7% and 16%, respectively. The optimization results are satisfactory. Full article
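RSM fits a low-order (typically quadratic) polynomial surface to a set of sampled responses and then optimizes on that cheap surrogate instead of the expensive physics model. A generic sketch of the two steps, fitting a quadratic surface by least squares and locating its stationary point; the test function used below is synthetic, not the reactor model:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    # least-squares fit of y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(c):
    # grad = 0 gives the linear system [[2*c3, c5], [c5, 2*c4]] p = -[c1, c2]
    H = np.array([[2.0 * c[3], c[5]], [c[5], 2.0 * c[4]]])
    return np.linalg.solve(H, -np.array([c[1], c[2]]))
```

In a multi-objective setting like the paper's, one such surface would be fitted per response (Pmax, Tmax, Stot) and the candidate optima compared.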
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)
Open Access Editorial
Complex and Fractional Dynamics
Entropy 2017, 19(2), 62; https://doi.org/10.3390/e19020062 - 08 Feb 2017
Cited by 4 | Viewed by 1547
Abstract
Complex systems (CS) are pervasive in many areas, namely financial markets; highway transportation; telecommunication networks; world and country economies; social networks; immunological systems; living organisms; computational systems; and electrical and mechanical structures. CS are often composed of a large number of interconnected and interacting entities exhibiting much richer global scale dynamics than could be inferred from the properties and behavior of individual elements. [...]
Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
Open Access Editorial
Computational Complexity
Entropy 2017, 19(2), 61; https://doi.org/10.3390/e19020061 - 07 Feb 2017
Viewed by 1591
Abstract
Complex systems (CS) involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...]
Full article
(This article belongs to the Special Issue Computational Complexity)
Open Access Article
Nonlinear Wave Equations Related to Nonextensive Thermostatistics
Entropy 2017, 19(2), 60; https://doi.org/10.3390/e19020060 - 07 Feb 2017
Cited by 9 | Viewed by 2263
Abstract
We advance two nonlinear wave equations related to the nonextensive thermostatistical formalism based upon the power-law nonadditive Sq entropies. Our present contribution is in line with recent developments in which nonlinear extensions inspired by the q-thermostatistical formalism have been proposed for the Schroedinger, Klein–Gordon, and Dirac wave equations. These previously introduced equations share the interesting feature of admitting q-plane wave solutions. In contrast with these recent developments, one of the nonlinear wave equations that we propose exhibits real q-Gaussian solutions, and the other admits exponential plane wave solutions modulated by a q-Gaussian. These q-Gaussians are q-exponentials whose arguments are quadratic functions of the space and time variables, and they are at the heart of nonextensive thermostatistics. The wave equations analyzed in this work illustrate new possible dynamical scenarios leading to time-dependent q-Gaussians. One of the nonlinear wave equations considered here is endowed with a nonlinear potential term and can be regarded as a nonlinear Klein–Gordon equation; the other is a nonlinear Schroedinger-like equation. Full article
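The q-exponential, and the q-Gaussian built from it, are the central objects here. A small numerical sketch of both, assuming the usual Tsallis definitions with the cutoff convention (the function returns 0 where the bracket is non-positive):

```python
import numpy as np

def q_exp(u, q):
    # Tsallis q-exponential [1 + (1-q)*u]_+^(1/(1-q)); reduces to exp(u) as q -> 1
    u = np.asarray(u, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    base = 1.0 + (1.0 - q) * u
    safe = np.where(base > 0.0, base, 1.0)  # avoid warnings outside the support
    return np.where(base > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

def q_gaussian(x, q, beta=1.0):
    # (unnormalised) q-Gaussian: a q-exponential with a quadratic argument
    return q_exp(-beta * np.asarray(x, dtype=float) ** 2, q)
```

For q > 1 the q-Gaussian develops power-law tails; for q < 1 it has compact support, which is why these profiles appear so widely in nonextensive thermostatistics.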
Open Access Article
On the Binary Input Gaussian Wiretap Channel with/without Output Quantization
Entropy 2017, 19(2), 59; https://doi.org/10.3390/e19020059 - 04 Feb 2017
Cited by 1 | Viewed by 1700
Abstract
In this paper, we investigate the effect of output quantization on the secrecy capacity of the binary-input Gaussian wiretap channel. A closed-form expression, involving an infinite summation, is derived for the secrecy capacity in the case when both the legitimate receiver and the eavesdropper have unquantized outputs; in particular, computable tight upper and lower bounds on the secrecy capacity are obtained. Theoretically, we prove that the secrecy capacity when the legitimate receiver has unquantized outputs while the eavesdropper has binary quantized outputs is larger than that when both the legitimate receiver and the eavesdropper have unquantized outputs, or when both have binary quantized outputs. Furthermore, numerical results show that in the low signal-to-noise ratio (SNR) region of the main channel, the secrecy capacity when both the legitimate receiver and the eavesdropper have unquantized outputs is larger than when both have binary quantized outputs; as the SNR increases, the secrecy capacity with binary quantized outputs at both receivers tends to overtake it. Full article
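The quantities being compared are mutual informations of a binary (BPSK) input over an AWGN channel, with the secrecy rate given by the gap between the main and eavesdropper channels. The paper works with closed-form expressions; as a rough numerical sketch one can instead estimate these terms by Monte Carlo (the function names and the unit-energy normalization below are our own assumptions):

```python
import numpy as np

def bpsk_awgn_mi(snr_db, n=200_000, seed=0):
    # Monte Carlo estimate of I(X;Y) in bits for equiprobable X in {-1, +1}
    # over Y = X + N, with noise power 1/SNR (unit-energy symbols)
    rng = np.random.default_rng(seed)
    sigma = 10.0 ** (-snr_db / 20.0)
    x = rng.choice([-1.0, 1.0], size=n)
    y = x + sigma * rng.normal(size=n)

    def lik(s):  # unnormalised Gaussian likelihood p(y | X = s)
        return np.exp(-(y - s) ** 2 / (2.0 * sigma ** 2))

    p_x = np.where(x > 0, lik(1.0), lik(-1.0))
    p_y = 0.5 * (lik(1.0) + lik(-1.0))
    return float(np.mean(np.log2(p_x / p_y)))

def secrecy_rate(snr_main_db, snr_eve_db):
    # [I_main - I_eve]^+ as a proxy for the secrecy capacity of the
    # (degraded) Gaussian wiretap channel with unquantized outputs
    return max(bpsk_awgn_mi(snr_main_db) - bpsk_awgn_mi(snr_eve_db), 0.0)
```

The estimate saturates at 1 bit for high SNR and vanishes as the SNR drops, matching the qualitative behavior the abstract describes.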
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Article
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†
Entropy 2017, 19(2), 58; https://doi.org/10.3390/e19020058 - 02 Feb 2017
Cited by 4 | Viewed by 2240
Abstract
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second- or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints therefore has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method, with soft probabilistic constraints, is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation. Full article
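For a Gaussian prior and a Gaussian likelihood the Bayesian update has a closed form, and, per the abstract, MaxEnt with soft prior constraints reproduces the same posterior mean. A minimal sketch of the conjugate Bayesian side of that comparison, for a single scalar quantity (say, one flow rate) with one noisy measurement; the numbers in the usage note are illustrative:

```python
def gaussian_posterior(mu0, var0, y, var_noise):
    # conjugate update: prior N(mu0, var0) times likelihood y ~ N(x, var_noise)
    prec = 1.0 / var0 + 1.0 / var_noise          # posterior precision
    mean = (mu0 / var0 + y / var_noise) / prec   # precision-weighted mean
    return mean, 1.0 / prec
```

With equal prior and noise variances the posterior mean sits exactly halfway between the prior mean and the measurement, and the posterior variance is halved, e.g. `gaussian_posterior(0.0, 1.0, 2.0, 1.0)` gives mean 1.0 and variance 0.5.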
Open Access Article
The Second Law: From Carnot to Thomson-Clausius, to the Theory of Exergy, and to the Entropy-Growth Potential Principle
Entropy 2017, 19(2), 57; https://doi.org/10.3390/e19020057 - 28 Jan 2017
Cited by 2 | Viewed by 2408
Abstract
At its origins, thermodynamics was the study of heat and engines. Carnot transformed it into a scientific discipline by explaining engine power in terms of the transfer of “caloric”. That idea became the second law of thermodynamics when Thomson and Clausius reconciled Carnot’s theory with Joule’s conflicting thesis that power derived from the consumption of heat, which was determined to be a form of energy. Eventually, Clausius formulated the second law as the universal entropy growth principle: the synthesis of transfer versus consumption led to what became known as the mechanical theory of heat (MTH). However, by making universal interconvertibility the cornerstone of MTH, their synthesis was defective, and it precluded MTH from developing the full expression of the second law. This paper reiterates that universal interconvertibility is demonstrably false, as many others have argued, by clarifying the true meaning of the mechanical equivalent of heat, and presents a two-part formulation of the second law: the universal entropy growth principle, together with a new principle that no change in Nature happens without entropy growth potential. With the new principle as its cornerstone in place of universal interconvertibility, thermodynamics transcends the defective MTH and becomes a coherent conceptual system. Full article
(This article belongs to the Section Thermodynamics)
Open Access Article
Bateman–Feshbach Tikochinsky and Caldirola–Kanai Oscillators with New Fractional Differentiation
Entropy 2017, 19(2), 55; https://doi.org/10.3390/e19020055 - 28 Jan 2017
Cited by 50 | Viewed by 2124
Abstract
In this work, the study of the fractional behavior of the Bateman–Feshbach–Tikochinsky and Caldirola–Kanai oscillators by using different fractional derivatives is presented. We obtained the Euler–Lagrange and the Hamiltonian formalisms in order to represent the dynamic models based on the Liouville–Caputo, Caputo–Fabrizio–Caputo and the new fractional derivative based on the Mittag–Leffler kernel with arbitrary order α. Simulation results are presented in order to show the fractional behavior of the oscillators, and the classical behavior is recovered when α is equal to 1. Full article
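For the Liouville–Caputo derivative of order 0 < α < 1, the standard L1 finite-difference scheme gives a simple numerical handle on such fractional dynamics (the Caputo–Fabrizio and Mittag–Leffler-kernel derivatives used in the paper require different discretizations). A sketch, which can be checked against the exact Caputo derivative of t, namely t^(1-α)/Γ(2-α):

```python
import math

def caputo_l1(f, t, alpha, h=1e-3):
    # L1 scheme for the Caputo fractional derivative of order 0 < alpha < 1:
    # D^alpha f(t) ~ sum of weighted first differences of f on a uniform grid
    n = int(round(t / h))
    acc = 0.0
    for j in range(n):
        b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)  # L1 weights
        acc += b * (f((n - j) * h) - f((n - j - 1) * h))
    return acc / (h ** alpha * math.gamma(2 - alpha))
```

At α = 1 the classical derivative would be recovered, mirroring the abstract's remark that the classical oscillator behavior reappears for α equal to 1.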
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)