Table of Contents

Entropy, Volume 19, Issue 2 (February 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Cover Story: It is argued that one should make a clear distinction between Shannon’s Measure of Information [...]

Editorial

Open Access Editorial: Computational Complexity
Entropy 2017, 19(2), 61; doi:10.3390/e19020061
Received: 26 January 2017 / Revised: 5 February 2017 / Accepted: 5 February 2017 / Published: 7 February 2017
PDF Full-text (627 KB) | HTML Full-text | XML Full-text
Abstract
Complex systems (CS) involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...]
Full article
(This article belongs to the Special Issue Computational Complexity)
Open Access Editorial: Complex and Fractional Dynamics
Entropy 2017, 19(2), 62; doi:10.3390/e19020062
Received: 26 January 2017 / Revised: 6 February 2017 / Accepted: 6 February 2017 / Published: 8 February 2017
Cited by 1 | PDF Full-text (163 KB) | HTML Full-text | XML Full-text
Abstract
Complex systems (CS) are pervasive in many areas, namely financial markets; highway transportation; telecommunication networks; world and country economies; social networks; immunological systems; living organisms; computational systems; and electrical and mechanical structures. CS are often composed of a large number of interconnected and interacting entities exhibiting much richer global scale dynamics than could be inferred from the properties and behavior of individual elements. [...]
Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)

Research

Open Access Article: A Risk-Free Protection Index Model for Portfolio Selection with Entropy Constraint under an Uncertainty Framework
Entropy 2017, 19(2), 80; doi:10.3390/e19020080
Received: 21 December 2016 / Revised: 14 February 2017 / Accepted: 15 February 2017 / Published: 21 February 2017
PDF Full-text (1042 KB) | HTML Full-text | XML Full-text
Abstract
This paper aims to develop a risk-free protection index model for portfolio selection based on uncertainty theory. First, the returns of risk assets are assumed to be uncertain variables subject to reputable experts’ evaluations. Second, under this assumption and combining with the risk-free interest rate, we define a risk-free protection index (RFPI), which can measure the degree of protection when a loss on risk assets happens. Third, noting that proportion entropy serves as a complementary means to reduce risk through a preset diversification requirement, we put forward a risk-free protection index model with an entropy constraint under an uncertainty framework by applying the RFPI, Huang’s risk index model (RIM), and the mean-variance-entropy model (MVEM). Furthermore, to solve our portfolio model, an algorithm is given to estimate the uncertain expected return and standard deviation of different risk assets by applying the Delphi method. Finally, an example is provided to show that the risk-free protection index model performs better than the traditional MVEM and RIM. Full article
Open Access Article: Multiplicity of Homoclinic Solutions for Fractional Hamiltonian Systems with Subquadratic Potential
Entropy 2017, 19(2), 50; doi:10.3390/e19020050
Received: 24 November 2016 / Revised: 11 January 2017 / Accepted: 19 January 2017 / Published: 24 January 2017
PDF Full-text (326 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we study the existence of homoclinic solutions for the fractional Hamiltonian systems with left and right Liouville–Weyl derivatives. We establish some new results concerning the existence and multiplicity of homoclinic solutions for the given system by using Clark’s theorem from critical point theory and fountain theorem. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
Open Access Article: User-Centric Key Entropy: Study of Biometric Key Derivation Subject to Spoofing Attacks
Entropy 2017, 19(2), 70; doi:10.3390/e19020070
Received: 30 November 2016 / Revised: 22 January 2017 / Accepted: 9 February 2017 / Published: 21 February 2017
PDF Full-text (2098 KB) | HTML Full-text | XML Full-text
Abstract
Biometric data can be used as input for PKI key pair generation. The concept of not saving the private key is very appealing, but the implementation of such a system should not be rushed, because it might prove less secure than the current PKI infrastructure. A single biometric characteristic can be easily spoofed, so it was believed that multi-modal biometrics would offer more security, because spoofing two or more biometrics would be very hard. This notion of increased security in multi-modal biometric systems was disproved for authentication and matching: studies showed that not only are multi-modal biometric systems not more secure, but they introduce additional vulnerabilities. This paper is a study of the implications of spoofing biometric data for retrieving the derived key. We demonstrate that spoofed biometrics can yield the same key, which in turn allows an attacker to obtain the private key. A practical implementation is proposed using fingerprint and iris as biometrics and a fuzzy extractor for biometric key extraction. Our experiments show what happens when the biometric data are spoofed for both uni-modal and multi-modal systems. In the case of the multi-modal system, tests were performed when spoofing one biometric or both. We provide a detailed analysis of every scenario with regard to successful tests and overall key entropy. Our paper defines a biometric PKI scenario and an in-depth security analysis for it. The analysis can be viewed as a blueprint for implementations of future similar systems, because it highlights the main security vulnerabilities of bioPKI. The analysis is not constrained to the biometric part of the system, but covers CA security, sensor security, communication interception, RSA encryption vulnerabilities regarding key entropy, and much more. Full article
Open Access Article: Nonlinear Wave Equations Related to Nonextensive Thermostatistics
Entropy 2017, 19(2), 60; doi:10.3390/e19020060
Received: 25 December 2016 / Revised: 31 January 2017 / Accepted: 4 February 2017 / Published: 7 February 2017
Cited by 1 | PDF Full-text (299 KB) | HTML Full-text | XML Full-text
Abstract
We advance two nonlinear wave equations related to the nonextensive thermostatistical formalism based upon the power-law nonadditive Sq entropies. Our present contribution is in line with recent developments, where nonlinear extensions inspired on the q-thermostatistical formalism have been proposed for the Schroedinger, Klein–Gordon, and Dirac wave equations. These previously introduced equations share the interesting feature of admitting q-plane wave solutions. In contrast with these recent developments, one of the nonlinear wave equations that we propose exhibits real q-Gaussian solutions, and the other one admits exponential plane wave solutions modulated by a q-Gaussian. These q-Gaussians are q-exponentials whose arguments are quadratic functions of the space and time variables. The q-Gaussians are at the heart of nonextensive thermostatistics. The wave equations that we analyze in this work illustrate new possible dynamical scenarios leading to time-dependent q-Gaussians. One of the nonlinear wave equations considered here is a wave equation endowed with a nonlinear potential term, and can be regarded as a nonlinear Klein–Gordon equation. The other equation we study is a nonlinear Schroedinger-like equation. Full article
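As background for the notation above (standard definitions from nonextensive statistics, recalled here for the reader rather than taken from the article): the q-exponential and the q-Gaussian it generates are

$$ e_q(x) = \bigl[\,1 + (1-q)\,x\,\bigr]_{+}^{1/(1-q)}, \qquad G_q(x) \propto e_q\!\left(-\beta x^{2}\right), $$

both reducing to the ordinary exponential and Gaussian in the limit q → 1.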
Open Access Article: Towards Operational Definition of Postictal Stage: Spectral Entropy as a Marker of Seizure Ending
Entropy 2017, 19(2), 81; doi:10.3390/e19020081
Received: 29 November 2016 / Revised: 15 February 2017 / Accepted: 16 February 2017 / Published: 21 February 2017
PDF Full-text (2703 KB) | HTML Full-text | XML Full-text
Abstract
The postictal period is characterized by several neurological alterations, but its exact limits are hard to determine, clinically or even electroencephalographically, in most cases. We aim to provide quantitative functions or conditions with a clearly distinguishable behavior during the ictal-postictal transition. Spectral methods were used to analyze foramen ovale electrode (FOE) recordings during the ictal/postictal transition in 31 seizures of 15 patients with strictly unilateral drug-resistant temporal lobe epilepsy. In particular, density of links, spectral entropy, and relative spectral power were analyzed. Simple partial seizures are accompanied by an ipsilateral increase in relative delta power and a decrease in synchronization in 66% and 91% of the cases, respectively, after seizure offset. Complex partial seizures showed a decrease in spectral entropy in 94% of cases on both the ipsilateral and contralateral sides (100% and 73%, respectively), mainly due to an increase in relative delta activity. Seizure offset is defined as the moment at which the “seizure termination mechanisms” actually end, which is quantified in the spectral entropy value. We propose defining the start of the postictal period as the time when the ipsilateral spectral entropy (SE) reaches its first global minimum. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
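As a rough illustration of the spectral entropy (SE) used above, here is a minimal sketch based on Welch's PSD estimate; the function name, the toy signals, and the preprocessing are illustrative assumptions and do not reproduce the authors' FOE analysis pipeline:

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs, normalize=True):
    """Shannon entropy of the normalized power spectral density."""
    freqs, psd = welch(x, fs=fs)
    p = psd / psd.sum()                      # treat the PSD as a probability distribution
    h = -np.sum(p * np.log2(p + 1e-12))
    return h / np.log2(len(p)) if normalize else h

# Toy usage: entropy drops when a single rhythm (e.g., delta) dominates the spectrum.
fs = 256
t = np.arange(0, 10, 1 / fs)
broadband = np.random.randn(t.size)
delta_like = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(t.size)
print(spectral_entropy(broadband, fs), spectral_entropy(delta_like, fs))
```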
Open Access Article: Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss
Entropy 2017, 19(2), 71; doi:10.3390/e19020071
Received: 30 December 2016 / Revised: 12 February 2017 / Accepted: 13 February 2017 / Published: 16 February 2017
Cited by 1 | PDF Full-text (2240 KB) | HTML Full-text | XML Full-text
Abstract
Williams and Beer (2010) proposed a nonnegative mutual information decomposition, based on the construction of information gain lattices, which allows separating the information that a set of variables contains about another variable into components interpretable as the unique information of one variable, or as redundancy and synergy components. In this work, we extend this framework, focusing on the lattices that underpin the decomposition. We generalize the type of constructible lattices and examine the relations between different lattices, for example, relating bivariate and trivariate decompositions. We point out that, in information gain lattices, redundancy components are invariant across decompositions, but unique and synergy components are decomposition-dependent. Exploiting the connection between different lattices, we propose a procedure to construct, in the general multivariate case, information gain decompositions from measures of synergy or unique information. We then introduce an alternative type of lattices, information loss lattices, with the role and invariance properties of redundancy and synergy components reversed with respect to gain lattices, and which provide an alternative procedure to build multivariate decompositions. We finally show how information gain and information loss dual lattices lead to a self-consistent unique decomposition, which allows a deeper understanding of the origin and meaning of synergy and redundancy. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
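For orientation, the bivariate information gain decomposition of Williams and Beer splits the joint mutual information into redundant, unique and synergistic parts (standard form, given as background):

$$ I(X_1, X_2; Y) = \mathrm{Red}(X_1, X_2; Y) + \mathrm{Unq}(X_1; Y) + \mathrm{Unq}(X_2; Y) + \mathrm{Syn}(X_1, X_2; Y), $$
$$ I(X_i; Y) = \mathrm{Red}(X_1, X_2; Y) + \mathrm{Unq}(X_i; Y), \quad i = 1, 2, $$

which is the kind of lattice-based decomposition that the article generalizes and dualizes.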
Open Access Article: Entropies of the Chinese Land Use/Cover Change from 1990 to 2010 at a County Level
Entropy 2017, 19(2), 51; doi:10.3390/e19020051
Received: 30 October 2016 / Revised: 12 January 2017 / Accepted: 20 January 2017 / Published: 25 January 2017
Cited by 1 | PDF Full-text (2435 KB) | HTML Full-text | XML Full-text
Abstract
Land Use/Cover Change (LUCC) has gradually become an important direction in the research of global change. LUCC is a complex system, and entropy is a measure of the degree of disorder of a system. Using land use information entropy, this paper analyzes changes in land use from the perspective of the system. Research on the entropy of LUCC structures has a certain “guiding role” for the optimization and adjustment of regional land use structure. Based on five periods of LUCC data from 1990 to 2010, this paper focuses on analyzing three types of LUCC entropies among counties in China, namely the Shannon, Renyi, and Tsallis entropies. The findings suggest that: (1) Shannon entropy can reflect the volatility of the LUCC; Renyi and Tsallis entropies also have this function when their parameter takes a positive value, and they can reflect the extreme cases of the LUCC when their parameter takes a negative value; (2) the entropy of China’s LUCC is uneven in its time and space distributions; there is a large trend during 1990–2010, and the central region generally has high entropy in space. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology)
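For a land-use share distribution p_1, …, p_n, the three entropies compared in the paper are (textbook definitions; q is the parameter referred to in the abstract):

$$ H = -\sum_i p_i \ln p_i, \qquad H_q^{R} = \frac{1}{1-q} \ln \sum_i p_i^{q}, \qquad S_q^{T} = \frac{1}{q-1}\Bigl(1 - \sum_i p_i^{q}\Bigr), $$

with the Renyi and Tsallis forms both recovering the Shannon entropy in the limit q → 1.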
Open Access Article: Classification of Normal and Pre-Ictal EEG Signals Using Permutation Entropies and a Generalized Linear Model as a Classifier
Entropy 2017, 19(2), 72; doi:10.3390/e19020072
Received: 17 November 2016 / Revised: 26 January 2017 / Accepted: 10 February 2017 / Published: 16 February 2017
PDF Full-text (475 KB) | HTML Full-text | XML Full-text
Abstract
In this contribution, a comparison between different permutation entropies as classifiers of electroencephalogram (EEG) records corresponding to normal and pre-ictal states is made. A discrete probability distribution function derived from symbolization techniques applied to the EEG signal is used to calculate the Tsallis entropy, Shannon entropy, Renyi entropy, and min-entropy, and each is used separately as the only independent variable in a logistic regression model in order to evaluate its capacity as a classification variable in an inferential manner. The area under the Receiver Operating Characteristic (ROC) curve, along with the accuracy, sensitivity, and specificity, is used to compare the models. All the permutation entropies are excellent classifiers, with an accuracy greater than 94.5% in every case and a sensitivity greater than 97%. Accounting for the amplitude in the symbolization technique retains more information about the signal than its counterparts, and it could be a good candidate for automatic classification of EEG signals. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
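A minimal sketch of the ordinal-pattern (Bandt–Pompe) permutation entropy underlying the classifiers discussed above; the amplitude-aware variant and the logistic-regression stage are not shown, and the function below is an illustrative implementation rather than the authors' code:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Shannon entropy of the ordinal-pattern distribution of a 1-D signal."""
    x = np.asarray(x)
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))      # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    h = -np.sum(p * np.log2(p))
    return h / np.log2(factorial(order)) if normalize else h

rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=2000)))              # close to 1 for white noise
print(permutation_entropy(np.sin(np.linspace(0, 60, 2000))))   # lower for a regular signal
```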
Open Access Article: The More You Know, the More You Can Grow: An Information Theoretic Approach to Growth in the Information Age
Entropy 2017, 19(2), 82; doi:10.3390/e19020082
Received: 13 December 2016 / Revised: 10 February 2017 / Accepted: 13 February 2017 / Published: 22 February 2017
PDF Full-text (2428 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In our information age, information alone has become a driver of social growth. Information is the fuel of “big data” companies, and the decision-making compass of policy makers. Can we quantify how much information leads to how much social growth potential? Information theory is used to show that information (in bits) is effectively a quantifiable ingredient of growth. The article presents a single equation that allows both to describe hands-off natural selection of evolving populations and to optimize population fitness in uncertain environments through intervention. The setup analyzes the communication channel between the growing population and its uncertain environment. The role of information in population growth can be thought of as the optimization of information flow over this (more or less) noisy channel. Optimized growth implies that the population absorbs all communicated environmental structure during evolutionary updating (measured by their mutual information). This is achieved by endogenously adjusting the population structure to the exogenous environmental pattern (through bet-hedging/portfolio management). The setup can be applied to decompose the growth of any discrete population in stationary, stochastic environments (economic, cultural, or biological). Two empirical examples from the information economy reveal inherent trade-offs among the involved information quantities during growth optimization. Full article
(This article belongs to the Section Information Theory)
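The bits-to-growth link invoked in this abstract is classically expressed, for proportional (Kelly) betting on an environment X with side information Y, as the increase in the optimal doubling rate (stated here as background; the article's single equation generalizes this idea):

$$ \Delta W = W^{*}(X \mid Y) - W^{*}(X) = I(X; Y). $$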
Open Access Article: Research and Application of a Novel Hybrid Model Based on Data Selection and Artificial Intelligence Algorithm for Short Term Load Forecasting
Entropy 2017, 19(2), 52; doi:10.3390/e19020052
Received: 5 November 2016 / Revised: 18 January 2017 / Accepted: 22 January 2017 / Published: 25 January 2017
PDF Full-text (8342 KB) | HTML Full-text | XML Full-text
Abstract
Machine learning plays a vital role in several modern economic and industrial fields, and selecting an optimized machine learning method to improve time series’ forecasting accuracy is challenging. Advanced machine learning methods, e.g., the support vector regression (SVR) model, are widely employed in forecasting fields, but the individual SVR pays no attention to the significance of data selection, signal processing and optimization, which cannot always satisfy the requirements of time series forecasting. By preprocessing and analyzing the original time series, in this paper, a hybrid SVR model is developed, considering periodicity, trend and randomness, and combined with data selection, signal processing and an optimization algorithm for short-term load forecasting. Case studies of electricity power data from New South Wales and Singapore are regarded as exemplifications to estimate the performance of the developed novel model. The experimental results demonstrate that the proposed hybrid method is not only robust but also capable of achieving significant improvement compared with the traditional single models and can be an effective and efficient tool for power load forecasting. Full article
Open Access Article: Response Surface Methodology Control Rod Position Optimization of a Pressurized Water Reactor Core Considering Both High Safety and Low Energy Dissipation
Entropy 2017, 19(2), 63; doi:10.3390/e19020063
Received: 30 November 2016 / Revised: 15 January 2017 / Accepted: 6 February 2017 / Published: 10 February 2017
PDF Full-text (12388 KB) | HTML Full-text | XML Full-text
Abstract
Response Surface Methodology (RSM) is introduced to optimize the control rod positions in a pressurized water reactor (PWR) core. The widely used 3D-IAEA benchmark problem is selected as the typical PWR core and the neutron flux field is solved. Besides, some additional thermal parameters are assumed to obtain the temperature distribution. Then the total and local entropy production is calculated to evaluate the energy dissipation. Using RSM, three directions of optimization are taken, which aim to determine the minimum of power peak factor Pmax, peak temperature Tmax and total entropy production Stot. These parameters reflect the safety and energy dissipation in the core. Finally, an optimization scheme was obtained, which reduced Pmax, Tmax and Stot by 23%, 8.7% and 16%, respectively. The optimization results are satisfactory. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)
Open Access Article: A Mixed Geographically and Temporally Weighted Regression: Exploring Spatial-Temporal Variations from Global and Local Perspectives
Entropy 2017, 19(2), 53; doi:10.3390/e19020053
Received: 16 December 2016 / Revised: 16 January 2017 / Accepted: 23 January 2017 / Published: 26 January 2017
PDF Full-text (3385 KB) | HTML Full-text | XML Full-text
Abstract
To capture both global stationarity and spatiotemporal non-stationarity, a novel mixed geographically and temporally weighted regression (MGTWR) model accounting for global and local effects in both space and time is presented. Since the constant and spatial-temporal varying coefficients could not be estimated in one step, a two-stage least squares estimation is introduced to calibrate the model. Both simulations and real-world datasets are used to test and verify the performance of the proposed MGTWR model. Additionally, an Akaike Information Criterion (AIC) is adopted as a key model fitting diagnostic. The experiments demonstrate that the MGTWR model yields more accurate results than do traditional spatially weighted regression models. For instance, the MGTWR model decreased the AIC value by 2.7066, 36.368 and 112.812 with respect to those of the mixed geographically weighted regression (MGWR) model and by 45.5628, −38.774 and 35.656 with respect to those of the geographical and temporal weighted regression (GTWR) model for the three simulation datasets. Moreover, compared to the MGWR and GTWR models, the MGTWR model obtained the lowest AIC value and mean square error (MSE) and the highest coefficient of determination (R²) and adjusted coefficient of determination (R²adj). In addition, our experiments proved the existence of both global stationarity and spatiotemporal non-stationarity, as well as the practical ability of the proposed method. Full article
(This article belongs to the Section Information Theory)
Open Access Article: Identifying Critical States through the Relevance Index
Entropy 2017, 19(2), 73; doi:10.3390/e19020073
Received: 7 January 2017 / Revised: 11 February 2017 / Accepted: 13 February 2017 / Published: 16 February 2017
PDF Full-text (731 KB) | HTML Full-text | XML Full-text
Abstract
The identification of critical states is a major task in complex systems, and the availability of measures to detect such conditions is of utmost importance. In general, criticality refers to the existence of two qualitatively different behaviors that the same system can exhibit, depending on the values of some parameters. In this paper, we show that the relevance index may be effectively used to identify critical states in complex systems. The relevance index was originally developed to identify relevant sets of variables in dynamical systems, but in this paper, we show that it is also able to capture features of criticality. The index is applied to two prominent examples showing slightly different meanings of criticality, namely the Ising model and random Boolean networks. Results show that this index is maximized at critical states and is robust with respect to system size and sampling effort. It can therefore be used to detect criticality. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
Open Access Article: Breakdown Point of Robust Support Vector Machines
Entropy 2017, 19(2), 83; doi:10.3390/e19020083
Received: 15 January 2017 / Accepted: 16 February 2017 / Published: 21 February 2017
Cited by 1 | PDF Full-text (785 KB) | HTML Full-text | XML Full-text
Abstract
Support vector machine (SVM) is one of the most successful learning methods for solving classification problems. Despite its popularity, SVM has the serious drawback that it is sensitive to outliers in training samples. The penalty on misclassification is defined by a convex loss called the hinge loss, and the unboundedness of the convex loss causes the sensitivity to outliers. To deal with outliers, robust SVMs have been proposed by replacing the convex loss with a non-convex bounded loss called the ramp loss. In this paper, we study the breakdown point of robust SVMs. The breakdown point is a robustness measure that is the largest amount of contamination such that the estimated classifier still gives information about the non-contaminated data. The main contribution of this paper is to show an exact evaluation of the breakdown point of robust SVMs. For learning parameters such as the regularization parameter, we derive a simple formula that guarantees the robustness of the classifier. When the learning parameters are determined with a grid search using cross-validation, our formula works to reduce the number of candidate search points. Furthermore, the theoretical findings are confirmed in numerical experiments. We show that the statistical properties of robust SVMs are well explained by a theoretical analysis of the breakdown point. Full article
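For reference, the two losses contrasted above are usually written as follows, with z = y f(x) the classification margin (standard definitions, not specific to this paper):

$$ \ell_{\mathrm{hinge}}(z) = \max(0,\, 1 - z), \qquad \ell_{\mathrm{ramp}}(z) = \min\bigl(1,\, \max(0,\, 1 - z)\bigr); $$

the ramp loss is a bounded truncation of the hinge loss, which is what caps the influence of outliers and makes a nontrivial breakdown point possible.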
Open Access Article: Information Geometric Approach to Recursive Update in Nonlinear Filtering
Entropy 2017, 19(2), 54; doi:10.3390/e19020054
Received: 25 November 2016 / Revised: 14 January 2017 / Accepted: 20 January 2017 / Published: 26 January 2017
PDF Full-text (931 KB) | HTML Full-text | XML Full-text
Abstract
The measurement update stage in nonlinear filtering is considered from the viewpoint of information geometry: the filtered state is treated as an optimization estimate in parameter space that corresponds to an iteration on the statistical manifold, and a recursive method is proposed in this paper. The method is derived from natural gradient descent on the statistical manifold constructed by the posterior probability density function (PDF) of the state conditional on the measurement. The derivation proceeds from the geometric viewpoint and gives a geometric interpretation of the iterative update. Moreover, the proposed method can be seen as an extension of the Kalman filter and its variants: a single step of the proposed method is identical to the Extended Kalman Filter (EKF) in the nonlinear case and to the traditional Kalman filter in the linear case. Benefiting from the natural gradient descent used in the update stage, the proposed method performs better than existing methods, as shown in the numerical experiments. Full article
(This article belongs to the Special Issue Information Geometry II)
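The natural gradient step referred to above has the generic form (shown for orientation; the paper's recursion specializes it to the posterior PDF of the state given the measurement):

$$ \theta_{k+1} = \theta_k - \eta\, G(\theta_k)^{-1} \nabla_{\theta} J(\theta_k), $$

where G(θ) is the Fisher information matrix of the parametrized density and J is the objective being minimized; replacing G by the identity recovers ordinary gradient descent.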
Open Access Article: Bullwhip Entropy Analysis and Chaos Control in the Supply Chain with Sales Game and Consumer Returns
Entropy 2017, 19(2), 64; doi:10.3390/e19020064
Received: 29 November 2016 / Accepted: 3 February 2017 / Published: 10 February 2017
Cited by 1 | PDF Full-text (4069 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we study a supply chain system which consists of one manufacturer and two retailers, a traditional retailer and an online retailer. In order to gain a larger market share, the retailers often take the sales as a decision-making variable in the competition game. We analyze the bullwhip effect in the supply chain with sales game and consumer returns via the theory of entropy and complexity, and apply the delayed feedback control method to control the system’s chaotic state. The impact of a statutory 7-day no-reason return policy for online retailers is also investigated. Bounded rational expectations are adopted to forecast the future demand in the sales game system with weak noise. Our results show that high return rates will hurt the profits of both retailers and that the adjustment speed of the bounded rational sales expectation has an important impact on the bullwhip effect. There is a stable area for retailers where the bullwhip effect does not appear. The supply chain system suffers a large bullwhip effect in the quasi-periodic state and the quasi-chaotic state. Chaos control of the sales game can be achieved, and the bullwhip effect effectively mitigated, by using the delayed feedback control method. Full article
(This article belongs to the Section Complexity)
Open Access Article: Sequential Batch Design for Gaussian Processes Employing Marginalization †
Entropy 2017, 19(2), 84; doi:10.3390/e19020084
Received: 30 November 2016 / Revised: 16 February 2017 / Accepted: 17 February 2017 / Published: 21 February 2017
PDF Full-text (463 KB) | HTML Full-text | XML Full-text
Abstract
Within the Bayesian framework, we utilize Gaussian processes for parametric studies of long running computer codes. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. Employing the sum over variances —being indicators for the quality of the fit—as the utility function, we establish an optimized and automated sequential parameter selection procedure. However, it is also often desirable to utilize the parallel running capabilities of present computer technology and abandon the sequential parameter selection for a faster overall turn-around time (wall-clock time). This paper proposes to achieve this by marginalizing over the expected outcomes at optimized test points in order to set up a pool of starting values for batch execution. For a one-dimensional test case, the numerical results are validated with the analytical solution. Eventually, a systematic convergence study demonstrates the advantage of the optimized approach over randomly chosen parameter settings. Full article
(This article belongs to the Special Issue Selected Papers from MaxEnt 2016)
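A toy sketch of variance-driven sequential design with a Gaussian process, in the spirit of the utility function described above; it uses scikit-learn, a hypothetical simulate() stand-in for the long-running code, and omits the batch/marginalization step that is the paper's actual contribution:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulate(x):
    """Hypothetical cheap stand-in for an expensive computer code."""
    return np.sin(3 * x) + 0.1 * x ** 2

candidates = np.linspace(0, 5, 200).reshape(-1, 1)   # parameter grid to choose from
X = np.array([[0.5], [4.5]])                         # small initial design
y = simulate(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
for _ in range(8):                                   # sequential (non-batch) selection
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]              # point of largest predictive variance
    X = np.vstack([X, [x_next]])
    y = np.append(y, simulate(x_next))
print(np.sort(X.ravel()))
```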
Open Access Article: An Approach to Data Analysis in 5G Networks
Entropy 2017, 19(2), 74; doi:10.3390/e19020074
Received: 16 January 2017 / Revised: 13 February 2017 / Accepted: 14 February 2017 / Published: 16 February 2017
Cited by 2 | PDF Full-text (1495 KB) | HTML Full-text | XML Full-text
Abstract
5G networks are expected to provide significant advances in network management compared to traditional mobile infrastructures by leveraging intelligence capabilities such as data analysis, prediction, pattern recognition and artificial intelligence. The key idea behind these actions is to facilitate the decision-making process in order to solve or mitigate common network problems in a dynamic and proactive way. In this context, this paper presents the design of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) Analyzer Module, whose main objective is to identify suspicious or unexpected situations based on metrics provided by different network components and sensors. The SELFNET Analyzer Module provides a modular architecture driven by use cases where analytic functions can be easily extended. This paper also proposes the data specification that defines the data inputs to be taken into account in the diagnosis process. This data specification has been implemented with different use cases within the SELFNET Project, proving its effectiveness. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
Open Access Article: Bateman–Feshbach Tikochinsky and Caldirola–Kanai Oscillators with New Fractional Differentiation
Entropy 2017, 19(2), 55; doi:10.3390/e19020055
Received: 23 November 2016 / Revised: 23 January 2017 / Accepted: 24 January 2017 / Published: 28 January 2017
Cited by 1 | PDF Full-text (1849 KB) | HTML Full-text | XML Full-text
Abstract
In this work, the study of the fractional behavior of the Bateman–Feshbach–Tikochinsky and Caldirola–Kanai oscillators by using different fractional derivatives is presented. We obtained the Euler–Lagrange and the Hamiltonian formalisms in order to represent the dynamic models based on the Liouville–Caputo, Caputo–Fabrizio–Caputo and the new fractional derivative based on the Mittag–Leffler kernel with arbitrary order α. Simulation results are presented in order to show the fractional behavior of the oscillators, and the classical behavior is recovered when α is equal to 1. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)
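As background for the derivatives named above, the Liouville–Caputo derivative of order 0 < α < 1 is (standard definition; the Mittag-Leffler-kernel derivative replaces the power-law kernel below with the function E_α):

$$ {}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau, $$

which recovers the ordinary first derivative as α → 1, consistent with the classical behavior reported in the abstract.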
Open Access Article: Information Loss in Binomial Data Due to Data Compression
Entropy 2017, 19(2), 75; doi:10.3390/e19020075
Received: 1 December 2016 / Revised: 7 February 2017 / Accepted: 12 February 2017 / Published: 16 February 2017
PDF Full-text (462 KB) | HTML Full-text | XML Full-text
Abstract
This paper explores the idea of information loss through data compression, as occurs in the course of any data analysis, illustrated via detailed consideration of the Binomial distribution. We examine situations where the full sequence of binomial outcomes is retained, situations where only the total number of successes is retained, and in-between situations. We show that a familiar decomposition of the Shannon entropy H can be rewritten as a decomposition into Htotal, Hlost, and Hcomp, or the total, lost and compressed (remaining) components, respectively. We relate this new decomposition to Landauer’s principle, and we discuss some implications for the “information-dynamic” theory being developed in connection with our broader program to develop a measure of statistical evidence on a properly calibrated scale. Full article
(This article belongs to the Section Information Theory)
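A small numerical illustration of the decomposition named above, for a sequence of n Bernoulli trials compressed to its number of successes; the variable names follow the abstract's Htotal/Hcomp/Hlost, but the example itself is illustrative rather than taken from the paper:

```python
import numpy as np
from scipy.stats import binom, entropy
from scipy.special import comb

n, p = 10, 0.3
h_bit = entropy([p, 1 - p], base=2)            # entropy of a single Bernoulli trial
H_total = n * h_bit                            # full sequence of n outcomes
pmf = binom.pmf(np.arange(n + 1), n, p)
H_comp = entropy(pmf, base=2)                  # keep only the total number of successes
H_lost = H_total - H_comp                      # information discarded by the compression
# H_lost equals E[log2 C(n, K)]: given K successes, all orderings are equally likely.
check = np.sum(pmf * np.log2(comb(n, np.arange(n + 1))))
print(H_total, H_comp, H_lost, check)
```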
Open Access Article: An Android Malicious Code Detection Method Based on Improved DCA Algorithm
Entropy 2017, 19(2), 65; doi:10.3390/e19020065
Received: 27 October 2016 / Revised: 29 January 2017 / Accepted: 30 January 2017 / Published: 11 February 2017
PDF Full-text (1492 KB) | HTML Full-text | XML Full-text
Abstract
Recently, Android malicious code has increased dramatically, and reinforcement technology has become increasingly powerful. Due to the development of code obfuscation and polymorphic deformation technology, current static detection methods for Android malicious code, whose selected features are the semantics of the application source code, cannot completely extract a malware’s code features. Static detection methods whose features are obtained only from the AndroidManifest.xml file are easily affected by useless permissions. Therefore, there are some limitations in current Android malware static detection methods. Current Android malware dynamic detection algorithms mostly require a customized system or system root permissions. Based on the Dendritic Cell Algorithm (DCA), this paper proposes an Android malware detection algorithm that has a higher detection rate, does not need to modify the system, and reduces the impact of code obfuscation to a certain degree. This algorithm is applied to an Android malware detection method based on the oriented Dalvik disassembly sequence and the application programming interface (API) calling sequence. Through the designed experiments, the effectiveness of this method is verified for the detection of Android malware. Full article
Open Access Article: Quantifying Synergistic Information Using Intermediate Stochastic Variables
Entropy 2017, 19(2), 85; doi:10.3390/e19020085
Received: 1 November 2016 / Revised: 16 February 2017 / Accepted: 19 February 2017 / Published: 22 February 2017
PDF Full-text (1538 KB) | HTML Full-text | XML Full-text
Abstract
Quantifying synergy among stochastic variables is an important open problem in information theory. Information synergy occurs when multiple sources together predict an outcome variable better than the sum of single-source predictions. It is an essential phenomenon in biology such as in neuronal networks and cellular regulatory processes, where different information flows integrate to produce a single response, but also in social cooperation processes as well as in statistical inference tasks in machine learning. Here we propose a metric of synergistic entropy and synergistic information from first principles. The proposed measure relies on so-called synergistic random variables (SRVs) which are constructed to have zero mutual information about individual source variables but non-zero mutual information about the complete set of source variables. We prove several basic and desired properties of our measure, including bounds and additivity properties. In addition, we prove several important consequences of our measure, including the fact that different types of synergistic information may co-exist between the same sets of variables. A numerical implementation is provided, which we use to demonstrate that synergy is associated with resilience to noise. Our measure may be a marked step forward in the study of multivariate information theory and its numerous applications. Full article
(This article belongs to the Section Complexity)
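The kind of synergy described above is easiest to see in the classic XOR example, where neither input alone carries information about the output but the pair does; the following self-contained computation illustrates the concept and is not the SRV construction of the paper:

```python
import numpy as np
from itertools import product

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table joint[a, b]."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

# Y = X1 XOR X2 with independent uniform bits: a purely synergistic relationship.
joint_x1_y = np.zeros((2, 2))
joint_x2_y = np.zeros((2, 2))
joint_x12_y = np.zeros((4, 2))
for x1, x2 in product([0, 1], repeat=2):
    y = x1 ^ x2
    joint_x1_y[x1, y] += 0.25
    joint_x2_y[x2, y] += 0.25
    joint_x12_y[2 * x1 + x2, y] += 0.25

print(mutual_information(joint_x1_y))    # 0.0 bits: X1 alone says nothing about Y
print(mutual_information(joint_x2_y))    # 0.0 bits: X2 alone says nothing about Y
print(mutual_information(joint_x12_y))   # 1.0 bit: only the pair is informative
```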
Open Access Article: Topological Entropy Dimension and Directional Entropy Dimension for ℤ²-Subshifts
Entropy 2017, 19(2), 46; doi:10.3390/e19020046
Received: 29 November 2016 / Revised: 11 January 2017 / Accepted: 12 January 2017 / Published: 24 January 2017
PDF Full-text (284 KB) | HTML Full-text | XML Full-text
Abstract
The notion of topological entropy dimension for a ℤ-action has been introduced to measure the subexponential complexity of zero entropy systems. Given a ℤ²-action, along with a ℤ²-entropy dimension, we also consider a finer notion of directional entropy dimension arising from its subactions. The entropy dimension of a ℤ²-action and the directional entropy dimensions of its subactions satisfy certain inequalities. We present several constructions of strictly ergodic ℤ²-subshifts of positive entropy dimension with diverse properties of their subgroup actions. In particular, we show that there is a ℤ²-subshift of full dimension in which every direction has entropy 0. Full article
(This article belongs to the Special Issue Entropic Properties of Dynamical Systems)
Open Access Article: Scaling Relations of Lognormal Type Growth Process with an Extremal Principle of Entropy
Entropy 2017, 19(2), 56; doi:10.3390/e19020056
Received: 3 November 2016 / Accepted: 23 January 2017 / Published: 27 January 2017
PDF Full-text (335 KB) | HTML Full-text | XML Full-text
Abstract
The scale, inflexion point and maximum point are important scaling parameters for studying growth phenomena with a size following the lognormal function. The width of the size function and its entropy depend on the scale parameter (or the standard deviation) and measure the relative importance of production and dissipation involved in the growth process. The Shannon entropy increases monotonically with the scale parameter, but the slope has a minimum at √6/6. This value has been used previously to study the spreading of sprays and epidemic cases. In this paper, this approach of minimizing the entropy slope is discussed in a broader sense and applied to obtain the relationship between the inflexion point and the maximum point. It is shown that this relationship is determined by the base of the natural logarithm, e ≈ 2.718, and exhibits some geometrical similarity to the minimal surface energy principle. Known data from a number of problems, including the swirling rate of the bathtub vortex, data on droplet splashing, population growth, the distribution of strokes in Chinese characters and the velocity profile of a turbulent jet, are used to assess to what extent the approach of minimizing the entropy slope can be regarded as useful. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)
Open Access Article: A Comparison of Postural Stability during Upright Standing between Normal and Flatfooted Individuals, Based on COP-Based Measures
Entropy 2017, 19(2), 76; doi:10.3390/e19020076
Received: 31 October 2016 / Accepted: 14 February 2017 / Published: 16 February 2017
PDF Full-text (1126 KB) | HTML Full-text | XML Full-text
Abstract
Aging causes foot arches to collapse, possibly leading to foot deformities and falls. This paper proposes a set of measures involving an entropy-based method used for two groups of young adults with dissimilar foot arches to explore and quantify postural stability on a force plate in an upright position. Fifty-four healthy young adults aged 18–30 years participated in this study. These were categorized into two groups: normal (37 participants) and flatfooted (17 participants). We collected the center of pressure (COP) displacement trajectories of participants during upright standing on a force plate, in a static position, with eyes open (EO) or eyes closed (EC). These nonstationary time-series signals were quantified using entropy-based measures and traditional measures used to assess postural stability, and the results obtained from these measures were compared. The appropriate combinations of entropy-based measures revealed that, with respect to postural stability, the two groups differed significantly (p < 0.05) under both EO and EC conditions. The traditional, commonly used COP-based measures only revealed differences under EO conditions. Entropy-based measures are thus suitable for examining differences in postural stability for flatfooted people, and may be used by clinicians after further refinement. Full article
(This article belongs to the Special Issue Multivariate Entropy Measures and Their Applications)
Open Access Article: Energy Transfer between Colloids via Critical Interactions
Entropy 2017, 19(2), 77; doi:10.3390/e19020077
Received: 22 January 2017 / Revised: 12 February 2017 / Accepted: 14 February 2017 / Published: 17 February 2017
PDF Full-text (530 KB) | HTML Full-text | XML Full-text
Abstract
We report the observation of a temperature-controlled synchronization of two Brownian particles in a binary mixture close to the critical point of the demixing transition. The two beads are trapped by two optical tweezers whose distance is periodically modulated. We notice that the motion synchronization of the two beads appears when the critical temperature is approached. In contrast, when the fluid is far from its critical temperature, the displacements of the two beads are uncorrelated. Small changes in temperature can radically change the global dynamics of the system. We show that the synchronization is induced by the critical Casimir forces. Finally, we present measurements of the energy transfers inside the system produced by the critical interaction. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Open Access Article: On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests
Entropy 2017, 19(2), 47; doi:10.3390/e19020047
Received: 28 May 2016 / Accepted: 26 December 2016 / Published: 26 January 2017
PDF Full-text (420 KB) | HTML Full-text | XML Full-text
Abstract
Nonparametric two-sample or homogeneity testing is a decision theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests, including multivariate methods involving energy statistics and kernel-based maximum mean discrepancy, and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem to have not been noticed thus far. Given nonparametric two-sample testing’s classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
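As a concrete univariate instance of the statistics surveyed above, the empirical Wasserstein-1 distance can be used as a two-sample statistic with a permutation null; this is a simple SciPy-based sketch and does not reproduce the survey's entropic smoothing or multivariate connections:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_permutation_test(x, y, n_perm=2000, seed=0):
    """Permutation p-value for the 1-D Wasserstein-1 two-sample statistic."""
    rng = np.random.default_rng(seed)
    observed = wasserstein_distance(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += wasserstein_distance(perm[:len(x)], perm[len(x):]) >= observed
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=200)
y = rng.normal(0.3, 1.0, size=200)    # mean-shifted alternative
print(wasserstein_permutation_test(x, y))
```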
Open Access Article: Investigation into Multi-Temporal Scale Complexity of Streamflows and Water Levels in the Poyang Lake Basin, China
Entropy 2017, 19(2), 67; doi:10.3390/e19020067
Received: 24 December 2016 / Accepted: 9 February 2017 / Published: 10 February 2017
Cited by 1 | PDF Full-text (3717 KB) | HTML Full-text | XML Full-text
Abstract
The streamflow and water level complexity of the Poyang Lake basin has been investigated over multiple time-scales using daily observations of the water level and streamflow spanning from 1954 through 2013. The composite multiscale sample entropy was applied to measure the complexity, and the Mann-Kendall algorithm was applied to detect temporal changes in the complexity. The results show that the streamflow and water level complexity increases as the time-scale increases. The sample entropy of the streamflow increases when the time-scale increases from a daily to a seasonal scale, and the sample entropy of the water level increases when the time-scale increases from a daily to a monthly scale. The water outflow of Poyang Lake, which is impacted mainly by the inflow processes, lake regulation, and the streamflow processes of the Yangtze River, is more complex than the water inflow. The streamflow and water level complexity over most of the time-scales, between the daily and monthly scales, is dominated by an increasing trend. This indicates the enhanced randomness, disorderliness, and irregularity of the streamflows and water levels. This investigation can help provide a better understanding of the hydrological features of large freshwater lakes. Ongoing research will analyze the mechanisms of the streamflow and water level complexity changes within the context of climate change and anthropogenic activities. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
Open Access Article: The Second Law: From Carnot to Thomson-Clausius, to the Theory of Exergy, and to the Entropy-Growth Potential Principle
Entropy 2017, 19(2), 57; doi:10.3390/e19020057
Received: 28 November 2016 / Revised: 20 January 2017 / Accepted: 23 January 2017 / Published: 28 January 2017
PDF Full-text (1215 KB) | HTML Full-text | XML Full-text
Abstract
At its origins, thermodynamics was the study of heat and engines. Carnot transformed it into a scientific discipline by explaining engine power in terms of the transfer of “caloric”. That idea became the second law of thermodynamics when Thomson and Clausius reconciled Carnot’s theory with Joule’s conflicting thesis that power was derived from the consumption of heat, which was determined to be a form of energy. Eventually, Clausius formulated the second law as the universal entropy growth principle: the synthesis of transfer vs. consumption led to what became known as the mechanical theory of heat (MTH). However, by making universal interconvertibility the cornerstone of MTH, their synthesis project was a defective one, which precluded MTH from developing the full expression of the second law. This paper reiterates that universal interconvertibility is demonstrably false, as has been argued by many others, by clarifying the true meaning of the mechanical equivalent of heat. It then presents a two-part formulation of the second law: the universal entropy growth principle, together with a new principle that no change in Nature happens without entropy growth potential. With the new principle as its cornerstone, replacing universal interconvertibility, thermodynamics transcends the defective MTH and becomes a coherent conceptual system. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle Admitting Spontaneous Violations of the Second Law in Continuum Thermomechanics
Entropy 2017, 19(2), 78; doi:10.3390/e19020078
Received: 13 December 2016 / Revised: 15 February 2017 / Accepted: 16 February 2017 / Published: 21 February 2017
PDF Full-text (254 KB) | HTML Full-text | XML Full-text
Abstract
We survey new extensions of continuum mechanics incorporating spontaneous violations of the Second Law (SL), which involve the viscous flow and heat conduction. First, following an account of the Fluctuation Theorem (FT) of statistical mechanics that generalizes the SL, the irreversible entropy is
[...] Read more.
We survey new extensions of continuum mechanics incorporating spontaneous violations of the Second Law (SL), involving viscous flow and heat conduction. First, following an account of the Fluctuation Theorem (FT) of statistical mechanics, which generalizes the SL, the irreversible entropy is shown to evolve as a submartingale. Next, a stochastic thermomechanics consistent with the FT is formulated; according to a revision of the classical axioms of continuum mechanics, it must be set up on random fields. This development leads to a reformulation of thermoviscous fluids and inelastic solids. These two unconventional constitutive behaviors may jointly occur in nano-poromechanics. Full article
(This article belongs to the Special Issue Limits to the Second Law of Thermodynamics: Experiment and Theory)
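For readers unfamiliar with the Fluctuation Theorem invoked above, one common statement of it is sketched below in standard notation (with the entropy produced over a time interval measured in units of k_B); this is a sketch of the general idea, not necessarily the exact form or the submartingale construction developed in the paper.

```latex
% One standard (Evans-Searles-type) statement of the Fluctuation Theorem
% for the entropy Sigma_t produced over a time interval t:
\[
  \frac{P(\Sigma_t = +A)}{P(\Sigma_t = -A)} \;=\; e^{A},
\]
% so negative entropy production ("violations" of the Second Law) occurs
% with exponentially small but nonzero probability, while on average
\[
  \langle \Sigma_t \rangle \;\ge\; 0,
\]
% which is consistent with the irreversible entropy evolving as a
% submartingale: its conditional expectation never decreases.
```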
Open AccessArticle Entropy, Shannon’s Measure of Information and Boltzmann’s H-Theorem
Entropy 2017, 19(2), 48; doi:10.3390/e19020048
Received: 23 November 2016 / Revised: 17 January 2017 / Accepted: 21 January 2017 / Published: 24 January 2017
Cited by 1 | PDF Full-text (1588 KB) | HTML Full-text | XML Full-text
Abstract
We start with a clear distinction between Shannon’s Measure of Information (SMI) and the Thermodynamic Entropy. The first is defined on any probability distribution; and therefore it is a very general concept. On the other hand Entropy is defined on a very special
[...] Read more.
We start with a clear distinction between Shannon’s Measure of Information (SMI) and the thermodynamic entropy. The first is defined on any probability distribution and is therefore a very general concept, whereas entropy is defined only on a very special set of distributions. Next, we show that the SMI provides a solid and quantitative basis for the interpretation of the thermodynamic entropy: the entropy measures the uncertainty in the distribution of the locations and momenta of all the particles, together with two corrections due to the uncertainty principle and the indistinguishability of the particles. Finally, we show that the H-function as defined by Boltzmann is an SMI but not an entropy. Therefore, much of what has been written on the H-theorem is irrelevant to entropy and the Second Law of Thermodynamics. Full article
(This article belongs to the Special Issue Selected Papers from MaxEnt 2016)
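As a reminder of the standard definitions behind the distinction drawn above (a sketch only; the correction terms and the precise limiting procedure are the subject of the paper):

```latex
% Shannon's Measure of Information (SMI), defined for any distribution p:
\[
  H(p_1,\dots,p_n) \;=\; -\sum_{i=1}^{n} p_i \log_2 p_i .
\]
% Boltzmann's H-function, an SMI-like functional of the one-particle
% velocity distribution f(v,t) (up to sign and normalization):
\[
  H(t) \;=\; \int f(\mathbf{v},t)\,\ln f(\mathbf{v},t)\,d\mathbf{v} .
\]
% Thermodynamic entropy, by contrast, is obtained (up to the factor
% k_B ln 2) only from the SMI of the equilibrium distribution of
% locations and momenta, with corrections for quantum uncertainty and
% particle indistinguishability.
```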
Open AccessArticle Kinetic Theory of a Confined Quasi-Two-Dimensional Gas of Hard Spheres
Entropy 2017, 19(2), 68; doi:10.3390/e19020068
Received: 25 October 2016 / Revised: 14 December 2016 / Accepted: 10 February 2017 / Published: 14 February 2017
PDF Full-text (598 KB) | HTML Full-text | XML Full-text
Abstract
The dynamics of a system of hard spheres enclosed between two parallel plates separated a distance smaller than two particle diameters is described at the level of kinetic theory. The interest focuses on the behavior of the quasi-two-dimensional fluid seen when looking at
[...] Read more.
The dynamics of a system of hard spheres enclosed between two parallel plates, separated by a distance smaller than two particle diameters, is described at the level of kinetic theory. The interest focuses on the behavior of the quasi-two-dimensional fluid seen when looking at the system from above or below. In the first part, a collisional model for the effective two-dimensional dynamics is analyzed. Although it describes quite well the homogeneous evolution observed in the experiments, it is shown to fail to predict the existence of non-equilibrium phase transitions, and in particular the bimodal regime exhibited by the real system. A critical analysis of the model is presented, and, as a starting point for a more accurate description, the Boltzmann equation for the quasi-two-dimensional gas is derived. In the elastic case, the solutions of the equation verify an H-theorem, implying a monotonic tendency towards a non-uniform steady state. As an example of application of the kinetic equation, the evolution equations for the vertical and horizontal temperatures of the system are derived in the homogeneous approximation, and the results are compared with molecular dynamics simulations. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
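A sketch of one common convention for the "vertical" and "horizontal" temperatures mentioned above, defined from the second moments of the velocity distribution (with k_B = 1 and zero mean flow); the paper's precise definitions and evolution equations may differ in normalization.

```latex
% Horizontal (in-plane) and vertical partial temperatures of N particles
% of mass m, one common convention with k_B = 1 and zero mean velocity:
\[
  T_{\parallel} \;=\; \frac{m}{2N}\sum_{i=1}^{N}\bigl(v_{i,x}^{2}+v_{i,y}^{2}\bigr),
  \qquad
  T_{z} \;=\; \frac{m}{N}\sum_{i=1}^{N} v_{i,z}^{2},
\]
% so that each counts the kinetic energy per degree of freedom; in the
% elastic case, collisions exchange energy between the two and drive
% them toward a common equilibrium value.
```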
Open AccessArticle Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†
Entropy 2017, 19(2), 58; doi:10.3390/e19020058
Received: 22 December 2016 / Accepted: 22 January 2017 / Published: 2 February 2017
PDF Full-text (289 KB) | HTML Full-text | XML Full-text
Abstract
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables,
[...] Read more.
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables when there is insufficient information to obtain a deterministic solution, and they also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second- or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints therefore has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method, with soft probabilistic constraints, is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation. Full article
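The Gaussian case discussed above can be illustrated with a generic conditioning step: a Gaussian prior over flow rates updated by a noisy linear observation. The network, matrices, and numbers below are invented purely for illustration and are not the flow-network formulation of the paper; per the abstract, the MaxEnt method with soft prior constraints would reproduce the same posterior mean but not the same covariance structure.

```python
import numpy as np

# Hypothetical 3-edge network: Gaussian prior on flow rates q,
# one noisy linear measurement y = H q + e, with e ~ N(0, R).
mu0 = np.array([1.0, 2.0, 3.0])            # prior means of the flows
P0 = np.diag([0.5, 0.5, 0.5])              # prior covariance
H = np.array([[1.0, -1.0, 0.0]])           # e.g. a flow-balance observation
y = np.array([0.2])                         # measured value
R = np.array([[0.1]])                       # measurement noise covariance

# Standard Bayesian (linear-Gaussian) update
S = H @ P0 @ H.T + R                        # innovation covariance
K = P0 @ H.T @ np.linalg.inv(S)             # gain
mu_post = mu0 + K @ (y - H @ mu0)           # posterior mean
P_post = P0 - K @ H @ P0                    # posterior covariance

print(mu_post)
print(P_post)
```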
Open AccessArticle Using k-Mix-Neighborhood Subdigraphs to Compute Canonical Labelings of Digraphs
Entropy 2017, 19(2), 79; doi:10.3390/e19020079
Received: 18 October 2016 / Revised: 25 December 2016 / Accepted: 15 February 2017 / Published: 22 February 2017
PDF Full-text (2787 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a novel theory and method to calculate the canonical labelings of digraphs whose definition is entirely different from the traditional definition of Nauty. It indicates the mutual relationships that exist between the canonical labeling of a digraph and the
[...] Read more.
This paper presents a novel theory and method to calculate the canonical labelings of digraphs whose definition is entirely different from the traditional definition of Nauty. It indicates the mutual relationships that exist between the canonical labeling of a digraph and the canonical labeling of its complement graph. It systematically examines the link between computing the canonical labeling of a digraph and the k-neighborhood and k-mix-neighborhood subdigraphs. To facilitate the presentation, it introduces several concepts, including the mix diffusion outdegree sequence and entire mix diffusion outdegree sequences. For each node in a digraph G, it assigns an attribute m_NearestNode to enhance the accuracy of calculating the canonical labeling. Four theorems proved here demonstrate how to determine the first nodes added into MaxQ(G), and two further theorems deal with identifying the second nodes added into MaxQ(G). When computing Cmax(G), if MaxQ(G) already contains the first i vertices u1, u2, …, ui, the Diffusion Theorem provides a guideline on how to choose the subsequent node of MaxQ(G). Besides, the Mix Diffusion Theorem shows that the (i+1)-th vertex of MaxQ(G) for computing Cmax(G) is selected from the open mix-neighborhood subdigraph N++(Q) of the node set Q = {u1, u2, …, ui}. It also offers two theorems to calculate Cmax(G) for disconnected digraphs. The four algorithms implemented here illustrate how to calculate MaxQ(G) of a digraph. Through software testing, the correctness of our algorithms is preliminarily verified. Our method can be utilized to mine frequent subdigraphs. We also conjecture that if there exists a vertex v ∈ S+(G) satisfying Cmax(Gv) ≥ Cmax(Gw) for each w ∈ S+(G), w ≠ v, then u1 = v for MaxQ(G). Full article
(This article belongs to the Section Information Theory)
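The neighborhood-subdigraph machinery above can be hard to picture. The sketch below only shows, with networkx, how one might extract the subdigraph induced by the open out-neighborhood of a node set Q and read off its out-degree sequence; the helper names and the toy digraph are invented for illustration, and the paper's mix-neighborhood N++(Q) and its diffusion outdegree sequences are defined differently and more carefully.

```python
import networkx as nx

def open_out_neighborhood_subdigraph(G, Q):
    """Subdigraph induced by the out-neighbors of Q, excluding Q itself
    (a simplified stand-in for the paper's mix-neighborhood N++(Q))."""
    nbrs = set()
    for u in Q:
        nbrs.update(G.successors(u))
    return G.subgraph(nbrs - set(Q))

def out_degree_sequence(H):
    """Non-increasing out-degree sequence of a digraph."""
    return sorted((d for _, d in H.out_degree()), reverse=True)

# Toy digraph for illustration only
G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 4), (4, 3), (4, 5)])
NQ = open_out_neighborhood_subdigraph(G, {1})
print(sorted(NQ.nodes()), out_degree_sequence(NQ))
```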
Open AccessArticle On the Binary Input Gaussian Wiretap Channel with/without Output Quantization
Entropy 2017, 19(2), 59; doi:10.3390/e19020059
Received: 8 November 2016 / Revised: 7 January 2017 / Accepted: 30 January 2017 / Published: 4 February 2017
PDF Full-text (472 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we investigate the effect of output quantization on the secrecy capacity of the binary-input Gaussian wiretap channel. As a result, a closed-form expression with infinite summation terms of the secrecy capacity of the binary-input Gaussian wiretap channel is derived for
[...] Read more.
In this paper, we investigate the effect of output quantization on the secrecy capacity of the binary-input Gaussian wiretap channel. A closed-form expression, with infinitely many summation terms, is derived for the secrecy capacity of the binary-input Gaussian wiretap channel for the case when both the legitimate receiver and the eavesdropper have unquantized outputs. In particular, computable tight upper and lower bounds on the secrecy capacity are obtained. Theoretically, we prove that when the legitimate receiver has unquantized outputs while the eavesdropper has binary quantized outputs, the secrecy capacity is larger than when both the legitimate receiver and the eavesdropper have unquantized outputs or both have binary quantized outputs. Further, numerical results show that in the low signal-to-noise ratio (SNR) region of the main channel, the secrecy capacity of the binary-input Gaussian wiretap channel when both the legitimate receiver and the eavesdropper have unquantized outputs is larger than the capacity when both have binary quantized outputs; as the SNR increases, the secrecy capacity when both have binary quantized outputs tends to overtake it. Full article
(This article belongs to the Section Information Theory)
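As a rough numerical companion to the unquantized case above, the sketch below computes the mutual information of an equiprobable binary (BPSK) input over a real AWGN channel by numerical integration and takes the difference between the main and eavesdropper channels. The SNR values are arbitrary, and this difference is only the familiar textbook quantity for a degraded setup, not the paper's closed-form expression or its bounds.

```python
import numpy as np

def bpsk_awgn_mi(snr):
    """I(X;Y) in bits per channel use for X uniform on {-1, +1},
    Y = X + N with N ~ N(0, 1/snr), via numerical integration."""
    sigma = 1.0 / np.sqrt(snr)
    y = np.linspace(-1.0 - 10.0 * sigma, 1.0 + 10.0 * sigma, 20001)
    dy = y[1] - y[0]
    gauss = lambda t: np.exp(-t**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    p_y = 0.5 * (gauss(y - 1.0) + gauss(y + 1.0))               # output density
    h_y = -np.sum(p_y * np.log2(p_y + 1e-300)) * dy             # h(Y)
    h_y_given_x = 0.5 * np.log2(2.0 * np.pi * np.e * sigma**2)  # h(Y|X) = h(N)
    return h_y - h_y_given_x

snr_main, snr_eve = 4.0, 1.0   # hypothetical SNRs; main channel is better
secrecy_rate = max(bpsk_awgn_mi(snr_main) - bpsk_awgn_mi(snr_eve), 0.0)
print(round(secrecy_rate, 4))
```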
Open AccessArticle Two Thermoeconomic Diagnosis Methods Applied to Representative Operating Data of a Commercial Transcritical Refrigeration Plant
Entropy 2017, 19(2), 69; doi:10.3390/e19020069
Received: 20 December 2016 / Revised: 1 February 2017 / Accepted: 8 February 2017 / Published: 15 February 2017
PDF Full-text (832 KB) | HTML Full-text | XML Full-text
Abstract
In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the quality of
[...] Read more.
In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the quality of malfunction identification. Both methods were able to locate and categorise the malfunctions when using steady-state data without measurement uncertainties. With the introduction of measurement uncertainty, the categorisation of malfunctions became increasingly difficult, depending on the magnitude of the uncertainties. Two different uncertainty scenarios were evaluated, since the use of repeated measurements yields a lower magnitude of uncertainty. The two methods show similar performance in the presented study for both of the considered measurement uncertainty scenarios. However, only in the low measurement-uncertainty scenario are both methods able to locate the causes of the malfunctions. For both scenarios, an outlier limit was found that determines whether a high relative indicator can be rejected as being caused by measurement uncertainty. For high uncertainties, the threshold value of the relative indicator was 35, whereas for low uncertainties one of the methods resulted in a threshold of 8. Additionally, the contribution of different measuring instruments to the relative indicator was analysed for two central components, showing that the contribution is component-dependent. Full article
(This article belongs to the Special Issue Thermoeconomics for Energy Efficiency)
Open AccessArticle Entropy-Based Method for Evaluating Contact Strain-Energy Distribution for Assembly Accuracy Prediction
Entropy 2017, 19(2), 49; doi:10.3390/e19020049
Received: 6 December 2016 / Revised: 12 January 2017 / Accepted: 19 January 2017 / Published: 24 January 2017
PDF Full-text (9163 KB) | HTML Full-text | XML Full-text
Abstract
Assembly accuracy significantly affects the performance of precision mechanical systems. In this study, an entropy-based evaluation method for contact strain-energy distribution is proposed to predict the assembly accuracy. Strain energy is utilized to characterize the effects of the combination of form errors and
[...] Read more.
Assembly accuracy significantly affects the performance of precision mechanical systems. In this study, an entropy-based evaluation method for the contact strain-energy distribution is proposed to predict assembly accuracy. Strain energy is utilized to characterize the effects of the combination of form errors and contact deformations on the formation of assembly errors. To obtain the strain energy, the contact state is analyzed by applying the finite element method (FEM) to 3D solid models of real parts containing form errors. Entropy is employed to evaluate the uniformity of the contact strain-energy distribution. An evaluation model, in which the uniformity of the contact strain-energy distribution is evaluated at three levels based on entropy, is developed to predict the assembly accuracy, and a comprehensive index is proposed. Assembly experiments for five sets of two rotating parts are conducted, and the coaxiality between the surfaces of the two parts with assembly accuracy requirements is selected as the verification index to verify the effectiveness of the evaluation method. The results are in good agreement with the verification index, indicating that the method presented in this study is reliable and effective in predicting assembly accuracy. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
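The basic entropy-as-uniformity idea behind the evaluation above can be sketched in a few lines: normalize the elementwise contact strain energies into a distribution and score its Shannon entropy against the maximum ln n. Everything here, including the sample numbers, is illustrative; the paper's three-level evaluation model and comprehensive index are more elaborate.

```python
import numpy as np

def strain_energy_uniformity(element_energies):
    """Normalized Shannon entropy of the contact strain-energy distribution.
    Returns a value in [0, 1]; values near 1 mean the strain energy is
    spread uniformly over the contact elements."""
    e = np.asarray(element_energies, dtype=float)
    p = e / e.sum()
    p = p[p > 0]
    H = -np.sum(p * np.log(p))
    return float(H / np.log(len(e)))

# Hypothetical element strain energies taken from an FEM contact analysis
energies = [0.8, 1.1, 0.9, 3.5, 1.0, 0.95]
print(round(strain_energy_uniformity(energies), 3))
```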
Other

Jump to: Editorial, Research

Open AccessConcept Paper Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies
Entropy 2017, 19(2), 66; doi:10.3390/e19020066
Received: 27 September 2016 / Revised: 26 January 2017 / Accepted: 6 February 2017 / Published: 10 February 2017
PDF Full-text (246 KB) | HTML Full-text | XML Full-text
Abstract
The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered as a potential safeguard to the provision
[...] Read more.
The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning, and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter considered a potential safeguard for the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies, respectively incorporating rarity factors associated with the Gini-Simpson and Shannon measures. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions for the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using the Shannon weighted entropy and a square-root utility function, corresponding to risk-averse behavior associated with the precautionary principle of safeguarding landscape diversity as an anchor for the provision of ecosystem services and other externalities. Further developments are suggested, mainly concerning the hypothesis that the decision models outlined here could be used to revisit the stability-complexity debate in the field of ecological studies. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology)
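To fix ideas, the toy sketch below combines a square-root (risk-averse) valuation of economic value with a rarity-weighted Shannon term for a vector of land-cover shares. The weighting, the trade-off parameter, and all the numbers are placeholders; the paper's weighted entropies and decision models are defined differently.

```python
import numpy as np

def weighted_shannon(p, w):
    """Rarity-weighted Shannon term: sum_i w_i * p_i * ln(1/p_i).
    The weights w_i stand in for the paper's rarity factors."""
    p, w = np.asarray(p, float), np.asarray(w, float)
    mask = p > 0
    return float(np.sum(w[mask] * p[mask] * np.log(1.0 / p[mask])))

def scenario_score(shares, values, weights, alpha=0.5):
    """Toy objective: concave (square-root) utility of economic value
    plus a diversity term, traded off by a parameter alpha."""
    economic = float(np.dot(shares, values))
    return np.sqrt(economic) + alpha * weighted_shannon(shares, weights)

# Hypothetical land-cover composition, per-cover values, rarity proxies
shares = np.array([0.5, 0.3, 0.2])
values = np.array([100.0, 60.0, 30.0])     # economic value per unit share
rarity = 1.0 - shares                       # a simple rarity proxy only
print(round(scenario_score(shares, values, rarity), 3))
```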
